Podcasts about Python

  • 2,206 podcasts
  • 7,618 episodes
  • 45m average duration
  • 3 daily new episodes
  • Latest episode: Nov 29, 2021

Popularity trend, 2011–2021


Best podcasts about Python

Show all podcasts related to Python

Latest podcast episodes about Python

Teaching Python
Episode 80: Reaching for the Stars with Dr. Becky Smethurst

Nov 29, 2021 (45:46)


This week Sean and Kelly are joined by Dr. Becky Smethurst from Oxford to talk about code and science. Dr. Becky is an astrophysicist, author, and science communicator. Each week, she publishes a video on her YouTube channel explaining a bit about space. Special Guest: Becky Smethurst.

ICOPOD
Episode #199: WWF RAW 6/17/1996: Can Goldust Handle Jake's Python?

Nov 29, 2021 (95:08)


It's the last RAW before King of the Ring 1996, and we still have tournament matches on TV! Steve Austin continues his rivalry with Savio Vega in a tournament match, and Marc Mero squares off against Owen Hart. We find out who the special referee will be for the WWF World Championship match at KOTR, a new WWF signing holds a press conference, and there's so much more!

Holiness Preaching Online
Rev. Zack Coffman - "Spirit of the Python"

Nov 28, 2021 (26:35)


Dryden Road Pentecostal fellowship meeting, 2017.

Adafruit Industries
Python on Hardware weekly video 158

Nov 26, 2021 (3:16)


The wonderful world of Python on hardware! Episode 158 (November 24, 2021). This is our weekly Python video-newsletter-podcast: Ladyada and PT review the Python on hardware news and highlights of the week. The news comes from the Python community, Discord, the Adafruit communities, and more, and it's part of the comprehensive newsletter we do each week.
Video playlist of episodes: http://adafru.it/pohepisodes
Sign up for the Python on Microcontrollers weekly email newsletter: https://www.adafruitdaily.com/
Read the newsletters, past and present: https://www.adafruitdaily.com/category/circuitpython/
Learn all about CircuitPython: https://www.circuitpython.org/ and https://adafruit.com/circuitpython/
Join us on Discord: https://adafru.it/discord/
Visit the Adafruit shop online (we're open for business): http://www.adafruit.com
Adafruit on Instagram: https://www.instagram.com/adafruit
Subscribe to Adafruit on YouTube: http://adafru.it/subscribe
New tutorials on the Adafruit Learning System: http://learn.adafruit.com/
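For context, the kind of code the newsletter and videos cover is CircuitPython running directly on a microcontroller. A minimal blink sketch (assuming a board that exposes its onboard LED as board.LED; some boards name the pin board.D13 instead) looks like this:

import time
import board
import digitalio

# Configure the onboard LED pin as a digital output.
led = digitalio.DigitalInOut(board.LED)  # use board.D13 on boards without a board.LED alias
led.direction = digitalio.Direction.OUTPUT

# Blink forever; this loop is the whole program.
while True:
    led.value = True
    time.sleep(0.5)
    led.value = False
    time.sleep(0.5)

Saved as code.py on the board's CIRCUITPY drive, this runs automatically when the board powers up or whenever the file is saved.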

Stuff You Should Know
Python-a-palooza!

Nov 25, 2021 (50:49)


Pythons are big snakes. Really big. But there's more to them than their size. Learn all about these big daddies in today's episode. Learn more about your ad choices at https://www.iheartpodcastnetwork.com

Get Attitude Podcast with Glenn Bill
S2 #37 - Amy Siewe, python hunter

Nov 24, 2021 (53:48)


8:11 - What prompted you to leave real estate? Swamp People TV show. Catching pythons in the Everglades. Viral video: https://www.youtube.com/watch?v=3PGLKTSqoMQ
14:11 - How many pythons are there in the Everglades? https://www.PythonHuntress.com
18:15 - Toledo Zoo? Being stalked by Nile crocodiles. Getting bitten by a python. What do snakes eat?
23:48 - National Parks Alliance for donations
25:25 - What's it like to be a python hunter for 8 hours a day? Jungle busting. Skinning a python. Python skins. How do you kill a python?
31:10 - PETA and Ron DeSantis. Pricing a snake skin. How can somebody become a python hunter? What are the attitudes of other python hunters?
39:14 - Knowledge through the decades. What is the attitude lesson at birth or of new life? Fighting mad and taking on the world.
43:05 - What is the attitude lesson at the age of 20? University of Toledo. Jack Canfield. Being OK with who you are. Being your authentic self.
45:20 - What is the attitude lesson at the age of 30? Keller Williams. Create the business mindset. Have a mentor and copy them.
47:47 - What is the attitude lesson at the age of 40? Marco Island. Flip-flops in December.
50:16 - Show close and message of hope. Figure out how you respond to fear. Making excuses.
SUBSCRIBE / RATE / REVIEW

Coder Radio
441: Dependency Derby

Nov 24, 2021 (45:01)


Are Linux devs getting upset with the Python community? We weigh in on a nuanced issue. Plus the mass-mod resignation over at Rust, and Mike's thoughts on setting up a dev environment on Windows 11.

All Jupiter Broadcasting Shows
Dependency Derby | Coder Radio 441

Nov 24, 2021


Are Linux devs getting upset with the Python community? We weigh in on a nuanced issue. Plus the mass-mod resignation over at Rust, and Mike's thoughts on setting up a dev environment on Windows 11.

Coder Radio Video
Dependency Derby | Coder Radio 441

Nov 24, 2021


Are Linux devs getting upset with the Python community? We weigh in on a nuanced issue. Plus the mass-mod resignation over at Rust, and Mike's thoughts on setting up a dev environment on Windows 11.

Screaming in the Cloud
Breaking the Tech Mold with Stephanie Wong

Nov 24, 2021 (45:02)


About StephanieStephanie Wong is an award-winning speaker, engineer, pageant queen, and hip hop medalist. She is a leader at Google with a mission to blend storytelling and technology to create remarkable developer content. At Google, she's created over 400 videos, blogs, courses, and podcasts that have helped developers globally. You might recognize her as the host of the GCP Podcast. Stephanie is active in her community, fiercely supporting women in tech and mentoring students.Links: Personal Website: https://stephrwong.com Twitter: https://twitter.com/stephr_wong TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by our friends at Vultr. Spelled V-U-L-T-R because they're all about helping save money, including on things like, you know, vowels. So, what they do is they are a cloud provider that provides surprisingly high performance cloud compute at a price that—while sure they claim its better than AWS pricing—and when they say that they mean it is less money. Sure, I don't dispute that but what I find interesting is that it's predictable. They tell you in advance on a monthly basis what it's going to going to cost. They have a bunch of advanced networking features. They have nineteen global locations and scale things elastically. Not to be confused with openly, because apparently elastic and open can mean the same thing sometimes. They have had over a million users. Deployments take less that sixty seconds across twelve pre-selected operating systems. Or, if you're one of those nutters like me, you can bring your own ISO and install basically any operating system you want. Starting with pricing as low as $2.50 a month for Vultr cloud compute they have plans for developers and businesses of all sizes, except maybe Amazon, who stubbornly insists on having something to scale all on their own. Try Vultr today for free by visiting: vultr.com/screaming, and you'll receive a $100 in credit. Thats v-u-l-t-r.com slash screaming.Corey: This episode is sponsored by our friends at Oracle Cloud. Counting the pennies, but still dreaming of deploying apps instead of "Hello, World" demos? Allow me to introduce you to Oracle's Always Free tier. It provides over 20 free services and infrastructure, networking, databases, observability, management, and security. And—let me be clear here—it's actually free. There's no surprise billing until you intentionally and proactively upgrade your account. This means you can provision a virtual machine instance or spin up an autonomous database that manages itself all while gaining the networking load, balancing and storage resources that somehow never quite make it into most free tiers needed to support the application that you want to build. With Always Free, you can do things like run small scale applications or do proof-of-concept testing without spending a dime. You know that I always like to put asterisks next to the word free. This is actually free, no asterisk. Start now. Visit snark.cloud/oci-free that's snark.cloud/oci-free.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. 
One of the things that makes me a little weird in the universe is that I do an awful lot of… let's just call it technology explanation slash exploration in public, and turning it into a bit of a brand-style engagement play. What makes this a little on the weird side is that I don't work for a big company, which grants me a tremendous latitude. I have a whole lot of freedom that lets me be all kinds of different things, and I can't get fired, which is something I'm really good at.Inversely, my guest today is doing something remarkably similar, except she does work for a big company and could theoretically be fired if they were foolish enough to do so. But I don't believe that they are. Stephanie Wong is the head of developer engagement at Google. Stephanie, thank you for volunteering to suffer my slings and arrows about all of this.Stephanie: [laugh]. Thanks so much for having me today, Corey.Corey: So, at a very high level, you're the head of developer engagement, which is a term that I haven't seen a whole lot of. Where does that start and where does that stop?Stephanie: Yeah, so I will say that it's a self-proclaimed title a bit because of the nuance of what I do. I would say at its heart, I am still a part of developer relations. If you've heard of developer advocacy or developer evangelist, I would say this slight difference in shade of what I do is that I focus on scalable content creation and becoming a central figure for our developer audiences to engage and enlighten them with content that, frankly, is remarkable, and that they'd want to share and learn about our technology.Corey: Your bio is fascinating in that it doesn't start with the professional things that most people do with, “This is my title and this is my company,” is usually the first sentence people put in. Yours is, “Stephanie Wong is an award-winning speaker, engineer, pageant queen, and hip hop medalist.” Which is both surprising and more than a little bit refreshing because when I read a bio like that my immediate instinctive reaction is, “Oh, thank God. It's a real person for a change.” I like the idea of bringing the other aspects of what you are other than, “This is what goes on in an IDE, the end,” to your audience.Stephanie: That is exactly the goal that I had when creating that bio because I truly believe in bringing more interdisciplinary and varied backgrounds to technology. I, myself have gone through a very unconventional path to get to where I am today and I think in large part, my background has had a lot to do with my successes, my failures, and really just who I am in tech as an uninhibited and honest, credible person today.Corey: I think that there's a lack of understanding, broadly, in our industry about just how important credibility and authenticity are and even the source of where they come from. There are a lot of folks who are in the DevRel space—devrelopers, as I insist upon calling them, over their protests—where, on some level, the argument is, what is developer relations? “Oh, you work in marketing, but they're scared to tell you,” has been my gag on that one for a while. But they speak from a position of, “I know what's what because I have been in the trenches, working on these large-scale environments as an engineer for the last”—fill in the blank, however long it may have been—“And therefore because I have done things, I am going to tell you how it is.” You explicitly call out that you don't come from the traditional, purely technical background. Where did you come from? 
It's unlikely that you've sprung fully-formed from the forehead of some god, but again, I'm not entirely sure how Google finds and creates the folks that it winds up advancing, so maybe you did.Stephanie: Well, to tell you the truth. We've all come from divine creatures. And that's where Google sources all employees. So. You know. But—[laugh].Corey: Oh, absolutely. “We climbed to the top of Olympus and then steal fire from the gods.” “It's like, isn't that the origin story of Prometheus?” “Yeah, possibly.” But what is your background? Where did you come from?Stephanie: So, I have grown up, actually, in Silicon Valley, which is a little bit ironic because I didn't go to school for computer science or really had the interest in becoming an engineer in school. I really had no idea.Corey: Even been more ironic than that because most of Silicon Valley appears to never have grown up at all.Stephanie: [laugh]. So, true. Maybe there's a little bit of that with me, too. Everybody has a bit of Peter Pan syndrome here, right? Yeah, I had no idea what I wanted to do in school and I just knew that I had an interest in communicating with one another, and I ended up majoring in communication studies.I thought I wanted to go into the entertainment industry and go into production, which is very different and ended up doing internships at Warner Brothers Records, a YouTube channel for dance—I'm a dancer—and I ended up finding a minor in digital humanities, which is sort of this interdisciplinary minor that combines technology and the humanities space, including literature, history, et cetera. So, that's where I got my start in technology, getting an introduction to information systems and doing analytics, studying social media for certain events around the world. And it wasn't until after school that I realized that I could work in enterprise technology when I got an offer to be a sales engineer. Now, that being said, I had no idea what sales engineering was. I just knew it had something to do with enterprise technology and communications, and I thought it was a good fit for my background.Corey: The thing that I find so interesting about that is that it breaks the mold of what people expect, when, “If someone's going to talk to me about technology—especially coming from a”—it's weird; it's one of the biggest companies on the planet, and people still on some level equate Google with the startup-y mentality of being built in someone's garage. That's an awfully big garage these days, if that's even slightly close to true, which it isn't. But there's this idea of, “Oh, you have to go to Stanford. You have to get a degree in computer science. And then you have to go and do this, this, this, this, and this.”And it's easy to look dismissively at what you're doing. “Communications? Well, all that would teach you to do is communicate to people clearly and effectively. What possible good is that in tech?” As we look around the landscape and figure out exactly why that is so necessary in tech, and also so lacking?Stephanie: Exactly. I do think it's an underrated skill in tech. Maybe it's not so much anymore, but I definitely think that it has been in the past. 
And even for developers, engineers, data scientists, other technical practitioner, especially as a person in DevRel, I think it's such a valuable skill to be able to communicate complex topics simply and understandably to a wide variety of audiences.Corey: The big question that I have for you because I've talked to an awful lot of folks who are very concerned about the way that they approach developer relations, where—they'll have ratios, for example—where I know someone and he insists that he give one deeply technical talk for every four talks that are not deeply technical, just because he feels the need to re-establish and shore up his technical bona fides. Now, if there's one thing that people on the internet love, it is correcting people on things that are small trivia aspect, or trying to pull out the card that, “Oh, I've worked on this system for longer than you've worked on this system, therefore, you should defer to me.” Do you find that you face headwinds for not having the quote-unquote, “Traditional” engineering technical background?Stephanie: I will say that I do a bit. And I did, I would say when I first joined DevRel, and I don't know if it was much more so that it was being imposed on me or if it was being self-imposed, something that I felt like I needed to prove to gain credibility, not just in my organization, but in the industry at large. And it wasn't until two or three years into it, that I realized that I had a niche myself. It was to create stories with my content that could communicate these concepts to developers just as effectively. And yes, I can still prove that I can go into an hour-long or a 45-minute-long tech talk or a webinar about a topic, but I can also easily create a five to ten-minute video that communicates concepts and inspires audiences just the same, and more importantly, be able to point to resources, code labs, tutorials, GitHub repos, that can allow the audience to be hands-on themselves, too. So really, I think that it was over time that I gained more experience and realized that my skill sets are valuable in a different way, and it's okay to have a different background as long as you bring something to the table.Corey: And I think that it's indisputable that you do. The concept of yours that I've encountered from time to time has always been insightful, it is always been extremely illuminating, and—you wouldn't think of this as worthy of occasion and comment, but I feel it needs to be said anyway—at no point in any of your content did I feel like I was being approached in a condescending way, where at every point it was always about uplifting people to a level of understanding, rather than doing the, “Well, I'm smarter than you and you couldn't possibly understand the things that I've been to.” It is relatable, it is engaging, and you add a very human face to what is admittedly an area of industry that is lacking in a fair bit of human element.Stephanie: Yeah, and I think that's the thing that many folks DevRel continue to underline is the idea of empathy, empathizing with your audiences, empathizing with the developers, the engineers, the data engineers, whoever it is that you're creating content for, it's being in their shoes. But for me, I may not have been in those shoes for years, like many other folks historically have been in for DevRel, but I want to at least go through the journey of learning a new piece of technology. 
For example, if I'm learning a new platform on Google Cloud, going through the steps of creating a demo, or walking through a tutorial, and then candidly explaining that experience to my audience, or creating a video about it. I really just reject the idea of having ego in tech and I would love to broaden the opportunity for folks who came from a different background like myself. I really want to just represent the new world of technology where it wasn't full of people who may have had the privilege to start coding at a very early age, in their garages.Corey: Yeah, privilege of, in many respects, also that privilege means, “Yes, I had the privilege of not having to have friends and deal with learning to interact with other human beings, which is what empowered me to build this company and have no social skills whatsoever.” It's not the aspirational narrative that we sometimes are asked to believe. You are similar in some respects to a number of things that I do—by which I mean, you do it professionally and well and I do it as basically performance shitpost art—but you're on Twitter, you make videos, you do podcasts, you write long-form and short-form as well. You are sort of all across the content creation spectrum. Which of those things do you prefer to do? Which ones of those are things you find a little bit more… “Well, I have to do it, but it's not my favorite?” Or do you just tend to view it as content is content; you just look at different media to tell your story?Stephanie: Well, I will say any form of content is queen—I'm not going to say king, but—[laugh] content is king, content is queen, it doesn't matter.Corey: Content is a baroness as it turns out.Stephanie: [laugh]. There we go. I have to say, so given my background, I mentioned I was into production and entertainment before, so I've always had a gravitation towards video content. I love tinkering with cameras. Actually, as I got started out at Google Cloud, I was creating scrappy content using webcams and my own audio equipment, and doing my own research, and finding lounges and game rooms to do that, and we would just upload it to our own YouTube channel, which probably wasn't allowed at the time, but hey, we got by with it.And eventually, I got approached by DevRel to start doing it officially on the channel and I was given budget to do it in-studio. And so that was sort of my stepping stone to doing this full-time eventually, which I never foresaw for myself. And so yeah, I have this huge interest in—I'm really engaged with video content, but once I started expanding and realizing that I could repurpose that content for podcasting, I could repurpose it for blogs, then you start to realize that you can shard content and expand your reach exponentially with this. So, that's when I really started to become more active on social media and leverage it to build not just content for Google Cloud, but build my own brand in tech.Corey: That is the inescapable truth of DevRel done right is that as you continue doing it, in time, in your slice of the industry, it is extremely likely that your personal brand eclipses the brand of the company that you represent. And it's in many ways a test of corporate character—if it makes sense—as do how they react to that. 
I've worked in roles before I started this place where I was starting to dabble with speaking a lot, and there was always a lot of insecurity that I picked up of, “Well, it feels like you're building your personal brand, not advancing the company here, and we as a company do not see the value in you doing that.” Direct quote from the last boss I had. And, well, that partially explains why I'm here, I suppose.But there's insecurity there. I'd see the exact opposite coming out of Google, especially in recent times. There's something almost seems to be a renaissance in Google Cloud, and I'm not sure where it came from. But if I look at it across the board, and you had taken all the labels off of everything, and you had given me a bunch of characteristics about different companies, I would never have guessed that you were describing Google when you're talking about Google Cloud. And perhaps that's unfair, but perceptions shape reality.Stephanie: Yeah, I find that interesting because I think traditionally in DevRel, we've also hired folks for their domain expertise and their brand, depending on what you're representing, whether it's in the Kubernetes space or Python client library that you're supporting. But it seems like, yes, in my case, I've organically started to build my brand while at Google, and Google has been just so spectacular in supporting that for me. But yeah, it's a fine line that I think many people have to walk. It's like, do you want to continue to build your own brand and have that carry forth no matter what company you stay at, or if you decide to leave? Or can you do it hand-in-hand with the company that you're at? For me, I think I can do it hand-in-hand with Google Cloud.Corey: It's taken me a long time to wrap my head around what appears to be a contradiction when I look at Google Cloud, and I think I've mostly figured it out. In the industry, there is a perception that Google as an entity is condescending and sneering toward every other company out there because, “You're Google, you know how to do all these great, amazing things that are global-spanning, and over here at Twitter for Pets, we suck doing these things.” So, Google is always way smarter and way better at this than we could ever hope to be. But that is completely opposed to my personal experiences talking with Google employees. Across the board, I would say that you all are self-effacing to a fault.And I mean that in the sense of having such a limited ego, in some cases, that it's, “Well, I don't want to go out there and do a whole video on this. It's not about me, it's about the technology,” are things that I've had people who work at Google say to me. And I appreciate the sentiment; it's great, but that also feels like it's an aloofness. It also fails to humanize what it is that you're doing. And you are a, I've got to say, a breath of fresh air when it comes to a lot of that because your stories are not just, “Here's how you do a thing. It's awesome. And this is all the intricacies of the API.”And yeah, you get there, but you also contextualize that in a, “Here's why it matters. Here's the problem that solves. Here is the type of customer's problem that this is great for,” rather than starting with YAML and working your way up. It's going the other way, of, “We want to sell some underpants,” or whatever it is the customer is trying to do today. 
And that is the way that I think is one of the best ways to drive adoption of what's going on because if you get people interested and excited about something—at least in my experience—they're going to figure out how the API works. Badly in many cases, but works. But if you start on the API stuff, it becomes a solution looking for a problem. I like your approach to this.Stephanie: Thank you. Yeah, I appreciate that. I think also something that I've continued to focus on is to tell stories across products, and it doesn't necessarily mean within just Google Cloud's ecosystem, but across the industry as well. I think we need to, even at Google, tell a better story across our product space and tie in what developers are currently using. And I think the other thing that I'm trying to work on, too, is contextualizing our products and our launches not just across the industry, but within our product strategy. Where does this tie in? Why does it matter? What is our forward-looking strategy from here? When we're talking about our new data cloud products or analytics, [unintelligible 00:17:21], how does this tie into our API strategy?Corey: And that's the biggest challenge, I think, in the AI space. My argument has been for a while—in fact, I wrote a blog post on it earlier this year—that AI and machine learning is a marvelously executed scam because it's being pushed by cloud providers and the things that you definitely need to do a machine learning experiment are a bunch of compute and a whole bunch of data that has to be stored on something, and wouldn't you know it, y'all sell that by the pound. So, it feels, from a cynical perspective, which I excel at espousing, that approach becomes one of you're effectively selling digital pickaxes into a gold rush. Because I see a lot of stories about machine learning how to do very interesting things that are either highly, highly use-case-specific, which great, that would work well, for me too, if I ever wind up with, you know, a petabyte of people's transaction logs from purchasing coffee at my national chain across the country. Okay, that works for one company, but how many companies look like that?And on the other side of it, “It's oh, here's how we can do a whole bunch of things,” and you peel back the covers a bit, and it looks like, “Oh, but you really taught me here is bias laundering?” And, okay. I think that there's a definite lack around AI and machine learning of telling stories about how this actually matters, what sorts of things people can do with it that aren't incredibly—how do I put this?—niche or a problem in search of a solution?Stephanie: Yeah, I find that there are a couple approaches to creating content around AI and other technologies, too, but one of them being inspirational content, right? Do you want to create something that tells the story of how I created a model that can predict what kind of bakery item this is? And we're going to do it by actually showcasing us creating the outcome. So, that's one that's more like, okay. I don't know how relatable or how appropriate it is for an enterprise use case, but it's inspirational for new developers or next gen developers in the AI space, and I think that can really help a company's brand, too.The other being highly niche for the financial services industry, detecting financial fraud, for example, and that's more industry-focused. I found that they both do well, in different contexts. It really depends on the channel that you're going to display it on. Do you want it to be viral? 
It really depends on what you're measuring your content for. I'm curious from you, Corey, what you've seen across, as a consumer of content?Corey: What's interesting, at least in my world, is that there seems to be, given that what I'm focusing on first and foremost is the AWS ecosystem, it's not that I know it the best—I do—but at this point, it's basically Stockholm Syndrome where it's… with any technology platform when you've worked with it long enough, you effectively have the most valuable of skill sets around it, which is not knowing how it works, but knowing how it doesn't, knowing what the failure mode is going to look like and how you can work around that and detect it is incredibly helpful. Whereas when you're trying something new, you have to wait until it breaks to find the sharp edges on it. So, there's almost a lock-in through, “We failed you enough times,” story past a certain point. But paying attention to that ecosystem, I find it very disjointed. I find that there are still events that happen and I only find out when the event is starting because someone tweets about it, and for someone who follows 40 different official AWS RSS feeds, to be surprised by something like that tells me, okay, there's not a whole lot of cohesive content strategy here, that is at least making it easy for folks to consume the things that they want, especially in my case where even the very niche nature of what I do, my interest is everything.I have a whole bunch of different filters that look for various keywords and the rest, and of course, I have helpful folks who email me things constantly—please keep it up; I'm a big fan—worst case, I'd rather read something twice than nothing. So, it's helpful to see all of that and understand the different marketing channels, different personas, and the way that content approaches, but I still find things that slip through the cracks every time. The thing that I've learned—and it felt really weird when I started doing it—was, I will tell the same stories repeatedly in different forums, or even the same forum. I could basically read you a Twitter thread from a year ago, word-for-word, and it would blow up bigger than it did the first time. Just because no one reads everything.Stephanie: Exactly.Corey: And I've already told my origin story. You're always new to someone. I've given talks internally at Amazon at various times, and I'm sort of loud and obnoxious, but the first question I love to ask is, “Raise your hand if you've never heard of me until today.” And invariably, over three-quarters of the room raises their hand every single time, which okay, great. I think that's awesome, but it teaches me that I cannot ever expect someone to have, quote-unquote, “Done the reading.”Stephanie: I think the same can be said about the content that I create for the company. You can't assume that people, A) have seen my tweets already or, B) understand this product, even if I've talked about it five times in the past. But yes, I agree. I think that you definitely need to have a content strategy and how you format your content to be more problem-solution-oriented.And so the way that I create content is that I let them fall into three general buckets. One being that it could be termed definition: talking about the basics, laying the foundation of a product, defining terms around a topic. Like, what is App Engine, or Kubeflow 101, or talking about Pub/Sub 101.The second being best practices. 
So, outlining and explaining the best practices around a topic, how do you design your infrastructure for scale and reliability.And the third being diagnosis: investigating; exploring potential issues, as you said; using scripts; Stackdriver logging, et cetera. And so I just kind of start from there as a starting point. And then I generally follow a very, very effective model. I'm sure you're aware of it, but it's called the five point argument model, where you are essentially telling a story to create a compelling narrative for your audience, regardless of the topic or what bucket that topic falls into.So, you're introducing the problem, you're sort of rising into a point where the climax is the solution. And that's all to build trust with your audience. And as it falls back down, you're giving the results in the conclusion, and that's to inspire action from your audience. So, regardless of what you end up talking about this problem-solution model—I've found at least—has been highly effective. And then in terms of sharing it out, over and over again, over the span of two months, that's how you get the views that you want.Corey: This episode is sponsored in part by something new. Cloud Academy is a training platform built on two primary goals. Having the highest quality content in tech and cloud skills, and building a good community the is rich and full of IT and engineering professionals. You wouldn't think those things go together, but sometimes they do. Its both useful for individuals and large enterprises, but here's what makes it new. I don't use that term lightly. Cloud Academy invites you to showcase just how good your AWS skills are. For the next four weeks you'll have a chance to prove yourself. Compete in four unique lab challenges, where they'll be awarding more than $2000 in cash and prizes. I'm not kidding, first place is a thousand bucks. Pre-register for the first challenge now, one that I picked out myself on Amazon SNS image resizing, by visiting cloudacademy.com/corey. C-O-R-E-Y. That's cloudacademy.com/corey. We're gonna have some fun with this one!Corey: See, that's a key difference right there. I don't do anything regular in terms of video as part of my content. And I do it from time to time, but you know, getting gussied up and whatnot is easier than just talking into a microphone. As I record this, it's Friday, I'm wearing a Hawaiian shirt, and I look exactly like the middle-aged dad that I am. And for me at least, a big breakthrough moment was realizing that my audience and I are not always the same.Weird confession for someone in my position: I don't generally listen to podcasts. And the reason behind that is I read very quickly, and even if I speed up a podcast, I'm not going to be able to consume the information nearly as quickly as I could by reading it. That, amongst other reasons, is one of the reasons that every episode of this show has a full transcript attached to it. But I'm not my audience. Other people prefer to learn by listening and there's certainly nothing wrong with that.My other podcast, the AWS Morning Brief, is the spoken word version of the stuff that I put out in my newsletter every week. And that is—it's just a different area for people to consume the content because that's what works for them. I'm not one to judge. The hard part for me was getting over that hump of assuming the audience was like me.Stephanie: Yeah. And I think the other key part of is just mainly consistency. 
It's putting out the content consistently in different formats because everybody—like you said—has a different learning style. I myself do. I enjoy visual styles.I also enjoy listening to podcasts at 2x speed. [laugh]. So, that's my style. But yeah, consistency is one of the key things in building content, and building an audience, and making sure that you are valuable to your audience. I mean, social media, at the end of the day is about the people that follow you.It's not about yourself. It should never be about yourself. It's about the value that you provide. Especially as somebody who's in DevRel in this position for a larger company, it's really about providing value.Corey: What are the breakthrough moments that I had relatively early in my speaking career—and I think it's clear just from what you've already said that you've had a similar revelation at times—I gave a talk, that was really one of my first talks that went semi-big called, “Terrible Ideas in Git.” It was basically, learn how to use Git via anti-pattern. What it secretly was, was under the hood, I felt it was time I learned Git a bit better than I did, so I pitched it and I got a talk accepted. So well, that's what we call a forcing function. By the time I give that talk, I'd better be [laugh] able to have built a talk that do this intelligently, and we're going to hope for the best.It worked, but the first version of that talk I gave was super deep into the plumbing of Git. And I'm sure that if any of the Git maintainers were in the audience, they would have found it great, but there aren't that many folks out there. I redid the talk and instead approached it from a position of, “You have no idea what Git is. Maybe you've heard of it, but that's as far as it goes.” And then it gets a little deeper there.And I found that making the subject more accessible as opposed to deeper into the weeds of it is almost always the right decision from a content perspective. Because at some level, when you are deep enough into the weeds, the only way you're going to wind up fixing something or having a problem that you run into get resolved, isn't by listening to a podcast or a conference talk; it's by talking to the people who built the thing because at that level, those are the only people who can hang at that level of depth. That stops being fodder for conference talks unless you turn it into an after-action report of here's this really weird thing I learned.Stephanie: Yeah. And you know, to be honest, the one of the most successful pieces of content I've created was about data center security. I visited a data center and I essentially unveiled what our security protocols were. And that wasn't a deeply technical video, but it was fun and engaging and easily understood by the masses. And that's what actually ended up resulting in the highest number of views.On top of that, I'm now creating a video about our subsea fiber optic cables. Finding that having to interview experts from a number of different teams across engineering and our strategic negotiators, it was like a monolith of information that I had to take in. And trying to format that into a five-minute story, I realized that bringing it up a layer of abstraction to help folks understand this at a wider level was actually beneficial. And I think it'll turn into a great piece of content. I'm still working on it now. So, [laugh] we'll see how it turns out.Corey: I'm a big fan of watching people learn and helping them get started. 
The thing that I think gets lost a lot is it's easy to assume that if I look back in time at myself when I was first starting my professional career two decades ago, that I was exactly like I am now, only slightly more athletic and can walk up a staircase without getting winded. That's never true. It never has been true. I've learned a lot about not just technology but people as I go, and looking at folks are entering the workforce today through the same lens of, “Well, that's not how I would handle that situation.” Yeah, no kidding. I have two decades of battering my head against the sharp edges and leaving dents in things to inform that opinion.No, when I was that age, I would have handled it way worse than whatever it is I'm critiquing at the time. But it's important to me that we wind up building those pathways and building those bridges so that people coming into the space, first, have a clear path to get here, and secondly, have a better time than I ever did. Where does the next generation of talent come from has been a recurring question and a recurring theme on the show.Stephanie: Yeah. And that's exactly why I've been such a fierce supporter of women in tech, and also, again, encouraging a broader community to become a part of technology. Because, as I said, I think we're in the midst of a new era of technology, of people from all these different backgrounds in places that historically have had more remote access to technology, now having the ability to become developers at an early age. So, with my content, that's what I'm hoping to drive to make this information more easily accessible. Even if you don't want to become a Google Cloud engineer, that's totally fine, but if I can help you understand some of the foundational concepts of cloud, then I've done my job well.And then, even with women who are already trying to break into technology or wanting to become a part of it, then I want to be a mentor for them, with my experience not having a technical background and saying yes to opportunities that challenged me and continuing to build my own luck between hard work and new opportunities.Corey: I can't wait to see how this winds up manifesting as we see understandings of what we're offering to customers in different areas in different ways—both in terms of content and terms of technology—how that starts to evolve and shift. I feel like we're at a bit of an inflection point now, where today if I graduate from school and I want to start a business, I have to either find a technical co-founder or I have to go to a boot camp and learn how to code in order to build something. I think that if we can remove that from the equation and move up the stack, sure, you're not going to be able to build the next Google or Pinterest or whatnot from effectively Visual Basic for Interfaces, but you can build an MVP and you can then continue to iterate forward and turn it into something larger down the road. 
The other part of it, too, is that moving up the stack into more polished solutions rather than here's a bunch of building blocks for platforms, “So, if you want a service to tell you whether there's a picture of a hot dog or not, here's a service that does exactly that.” As opposed to, “Oh, here are the 15 different services, you can bolt together and pay for each one of them and tie it together to something that might possibly work, and if it breaks, you have no idea where to start looking, but here you go.” A packaged solution that solves business problems.Things move up the stack; they do constantly. The fact is that I started my career working in data centers and now I don't go to them at all because—spoiler—Google, and Amazon, and people who are not IBM Cloud can absolutely run those things better than I can. And there's no differentiated value for me in solving those global problems locally. I'd rather let the experts handle stuff like that while I focus on interesting problems that actually affect my business outcome. There's a reason that instead of running all the nonsense for lastweekinaws.com myself because I've worked in large-scale WordPress hosting companies, instead I pay WP Engine to handle it for me, and they, in turn, hosted on top of Google Cloud, but it doesn't matter to me because it's all just a managed service that I pay for. Because me running the website itself adds no value, compared to the shitpost I put on the website, which is where the value derives from. For certain odd values of value.Stephanie: [laugh]. Well, two things there is that I think we actually had a demo created on Google Cloud that did detect hot dogs or not hot dogs using our Vision API, years in the past. So, thanks for reminding me of that one.Corey: Of course.Stephanie: But yeah, I mean, I completely agree with that. I mean, this is constantly a topic in conversation with my team members, and with clients. It's about higher level of abstractions. I just did a video series with our fellow, Eric Brewer, who helped build cloud infrastructure here at Google over the past ten decades. And I asked him what he thought the future of cloud would be in the next ten years, and he mentioned, “It's going to be these higher levels of abstraction, building platforms on top of platforms like Kubernetes, and having more services like Cloud run serverless technologies, et cetera.”But at the same time, I think the value of cloud will continue to be providing optionality for developers to have more opinionated services, services like GKE Autopilot, et cetera, that essentially take away the management of infrastructure or nodes that people don't really want to deal with at the end of the day because it's not going to be a competitive differentiator for developers. They want to focus on building software and focusing on keeping their services up and running. And so yeah, I think the future is going to be that, giving developers flexibility and freedom, and still delivering the best-of-breed technology. If it's covering something like security, that's something that should be baked in as much as possible.Corey: You're absolutely right, first off. 
I'm also looking beyond it where I want to be able to build a website that is effectively Twitter, only for pets—because that is just a harebrained enough idea to probably raise a $20 million seed round these days—and I just want to be able to have the barks—those are like tweets, only surprisingly less offensive and racist—and have them just be stored somewhere, ideally presumably under the hood somewhere, it's going to be on computers, but whether it's in containers, or whether it's serverless, or however is working is the sort of thing that, “Wow, that seems like an awful lot of nonsense that is not central nor core to my business succeeding or failing.” I would say failing, obviously, except you can lose money at scale with the magic of things like SoftBank. Here we are.And as that continues to grow and scale, sure, at some point I'm going to have bespoke enough needs and a large enough scale where I do have to think about those things, but building the MVP just so I can swindle some VCs is not the sort of thing where I should have to go to that depth. There really should be a golden-path guardrail-style thing that I can effectively drag and drop my way into the next big scam. And that is, I think, the missing piece. And I think that we're not quite ready technologically to get there yet, but I can't shake the feeling and the hope that's where technology is going.Stephanie: Yeah. I think it's where technology is heading, but I think part of the equation is the adoption by our industry, right? Industry adoption of cloud services and whether they're ready to adopt services that are that drag-and-drop, as you say. One thing that I've also been talking a lot about is this idea of service-oriented networking where if you have a service or API-driven environment and you simply want to bring it to cloud—almost a plug-and-play there—you don't really want to deal with a lot of the networking infrastructure, and it'd be great to do something like PrivateLink on AWS, or Private Service Connect on Google Cloud.While those conversations are happening with customers, I'm finding that it's like trying to cross the Grand Canyon. Many enterprise customers are like, “That sounds great, but we have a really complex network topology that we've been sitting on for the past 25 years. Do you really expect that we're going to transition over to something like that?” So, I think it's about providing stepping stones for our customers until they can be ready to adopt a new model.Corey: Yeah. And of course, the part that never gets said out loud but is nonetheless true and at least as big of a deal, “And we have a whole team of people who've built their entire identity around that network because that is what they work on, and they have been ignoring cloud forever, and if we just uplift everything into a cloud where you folks handle that, sure, it's better for the business outcome, but where does that leave them?” So, they've been here for 25 years, and they will spend every scrap of political capital they've managed to accumulate to torpedo a cloud migration. So, any FUD they can find, any horse-trading they can do, anything they can do to obstruct the success of a cloud initiative, they're going to do because people are people, and there is no real plan to mitigate that. There's also the fact that unless there's a clear business value story about a feature velocity increase or opening up new markets, there's also not an incentive to do things to save money. 
That is never going to be the number one priority in almost any case short of financial disaster at a company because everything they're doing is building out increasing revenue, rather than optimizing what they're already doing.So, there's a whole bunch of political challenges. Honestly, moving the computer stuff from on-premises data centers into a cloud provider is the easiest part of a cloud migration compared to all of the people that are involved.Stephanie: Yeah. Yeah, we talked about serverless and all the nice benefits of it, but unless you are more a digitally-born, next-gen developer, it may be a higher burden for you to undertake that migration. That's why we always [laugh] are talking about encouraging people to start with newer surfaces.Corey: Oh, yeah. And that's the trick, too, is if you're trying to learn a new cloud platform these days—first, if you're trying to pick one, I'd be hard-pressed to suggest anything other than Google Cloud, with the possible exception of DigitalOcean, just because the new user experience is so spectacularly good. That was my first real, I guess, part of paying attention to Google Cloud a few years ago, where I was, “All right, I'm going to kick the tires on this and see how terrible this interface is because it's a Google product.” And it was breathtakingly good, which I did not expect. And getting out of the way to empower someone who's new to the platform to do something relatively quickly and straightforwardly is huge. And sure, there's always room to prove, but that is the right area to focus on. It's clear that the right energy was spent in the right places.Stephanie: Yeah. I will say a story that we don't tell quite as well as we should is the One Google story. And I'm not talking about just between Workspace and Google Cloud, but our identity access management and knowing your Google account, which everybody knows. It's not like Microsoft, where you're forced to make an account, or it's not like AWS where you had a billion accounts and you hate them all.Corey: Oh, my God, I dread logging into the AWS console every time because it is such a pain in the ass. I go to cloud.google.com sometimes to check something, it's like, “Oh, right. I have to dig out my credentials.” And, “Where's my YubiKey?” And get it. Like, “Oh. I'm already log—oh. Oh, right. That's right. Google knows how identity works, and they don't actively hate their customers. Okay.” And it's always a breath of fresh air. Though I will say that by far and away, the worst login experience I've seen yet is, of course, Azure.Stephanie: [laugh]. That's exactly right. It's Google account. It's yours. It's personal. It's like an Apple iCloud account. It's one click, you're in, and you have access to all the applications. You know, so it's the same underlying identity structure with Workspace and Gmail, and it's the same org structure, too, across Workspace and Google Cloud. So, it's not just this disingenuous financial bundle between GCP and Workspace; it's really strategic. And it's kind of like the idea of low code or no code. And it looks like that's what the future of cloud will be. It's not just by VMs from us.Corey: Yeah. And there are customers who want to buy VMs and that's great. Speed up what they're doing; don't get in the way of people giving you their money, but if you're starting something net-new, there's probably better ways to do it. 
So, I want to thank you for taking as much time as you have to wind up going through how you think about, well, the art of storytelling in the world of engineering. If people want to learn more about who you are, what you're up to, and how you approach things, where can they find you?Stephanie: Yeah, so you can head to stephrwong.com where you can see my work and also get in touch with me if you want to collaborate on any content. I'm always, always, always open to that. And my Twitter is @stephr_wong.Corey: And we will, of course, put links to that in the [show notes 00:40:03]. Thank you so much for taking the time to speak with me.Stephanie: Thanks so much.Corey: Stephanie Wong, head of developer engagement at Google Cloud. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment telling me that the only way to get into tech these days is, in fact, to graduate with a degree from Stanford, and I can take it from you because you work in their admissions office.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.

USU Career Studio
Bonus Friday Face-to-Face with Sixto Cabrera: Electrical Engineering

Nov 24, 2021 (21:52)


This week, Marissa met with Sixto Cabrera, USU alum and Senior Electrical Engineer at Vivint. Join them as they discuss Sixto's unique educational timeline, a day in the life of an electrical engineer, setting healthy work boundaries in a high-demand job, and examples of Sixto's most meaningful work. Sixto is an experienced electronics engineer with professional experience in hardware design for a wide range of applications, including embedded, IoT, RF, digital/analog circuits, and power electronics. He also has skills in programming and scripting languages such as C, C++, Python, R, MATLAB, and Verilog. He is currently working as a Senior Electrical Engineer at Vivint. Learn more about and connect with Sixto by visiting his LinkedIn profile.

Greater Than Code
260: Fixing Broken Tech Interviews with Ian Douglas

Nov 24, 2021 (64:32)


01:01 - Ian's Superpower: Curiosity & Life-Long Learning * Discovering Computers * Sharing Knowledge 06:27 - Streaming and Mentorship: Becoming “The Career Development Guy” * The Turing School of Software and Design (https://turing.edu/) * techinterview.guide (https://techinterview.guide/) * twitch.tv/iandouglas736 (https://www.twitch.tv/iandouglas736) 12:01 - Tech Interviews (Are Broken) * techinterview.guide (https://techinterview.guide/) * Daily Email Series (https://techinterview.guide/daily-email-series/) * Tech vs Behavior Questions 16:43 - How do I even get a first job in the tech industry? * Tech Careers = Like Choose Your Own Adventure Book * Highlight What You Have: YOU ARE * Apply Anyway 24:25 - Interview Processes Don't Align with Skills Needed * FAANG Company (https://en.wikipedia.org/wiki/Big_Tech) Influence * LeetCode-Style Interviews (https://leetcode.com/explore/interview/card/top-interview-questions-medium/) * Dynamic Programing Problems (https://medium.com/techie-delight/top-10-dynamic-programming-problems-5da486eeb360) * People Can Learn 35:06 - Fixing Tech Interviews: Overhauling the Process * Idea: “Open Source Hiring Manifesto” Initiative * Analyzing Interviewing Experiences; Collect Antipatterns * Community/Candidate Input * Company Feedback (Stop Ghosting! Build Trust!) * Language Mapping Reflections: Mandy: Peoples' tech journeys are like a Choose Your Own Adventure book. Keep acquiring skills over life-long learning. Arty: The importance of 1-on-1 genuine connections. Real change happens in the context of a relationship. Ian: Having these discussions, collaborating, and saying, “what if?” This episode was brought to you by @therubyrep (https://twitter.com/therubyrep) of DevReps, LLC (http://www.devreps.com/). To pledge your support and to join our awesome Slack community, visit patreon.com/greaterthancode (https://www.patreon.com/greaterthancode) To make a one-time donation so that we can continue to bring you more content and transcripts like this, please do so at paypal.me/devreps (https://www.paypal.me/devreps). You will also get an invitation to our Slack community this way as well. Transcript: ARTY: Hi, everyone. Welcome to Episode 260 of Greater Than Code. I am Arty Starr and I'm here with my fabulous co-host, Mandy Moore. MANDY: Thank you, Arty. And I'm here with our guest today, Ian Douglas. Ian has been in the tech industry for over 25 years and suggested we cue the Jurassic Park theme song for his introduction. Much of his career has been spent in early startups planning out architecture and helping everywhere and anywhere like a “Swiss army knife” engineer. He's currently livestreaming twice a week around the topic of tech industry interview preparation, and loves being involved in developer education. Welcome to the show, Ian. IAN: Thanks for having me. It's great to be here. MANDY: Awesome. So we like to start the show with our famous question: what is your superpower and how did you acquire it? IAN: Probably curiosity. I've always been kind of a very curious mindset of wanting to know how things work. Even as a little kid, I would tear things apart just to see how something worked. My parents would be like, “Okay, great. Put it back together.” I'm like, “I don't know how to put it back together.” So [chuckles] they would come home and I would just have stuff disassembled all over the house and yeah, we threw a lot of stuff out that way. 
But it was just a curiosity of how things work around me and that led into computer programming, learning how computers worked and that just made the light bulb go off in my mind as a little kid of, I get to tell this computer how to do something, it's always going to do it. And that just led of course, into the tech industry where you sign up for a career in the tech industry, you're signing up for lifelong learning and there's no shortage of trying to satiate that curiosity. I think it's just a never-ending journey, which is fantastic. ARTY: When did you first discover computers? What was that experience like for you? IAN: I was 8 years old. I think it was summer, or fall of 1982. I believe my dad came home with a Commodore 64. My dad was always kind of a gadget nut. Anything new and interesting on the market, he would find an excuse to buy and so he, brought home this Commodore 64 thinking family computer, but once he plunked it down in front of me, it sort of became mine. I didn't want to share. I grew up in Northern Canada way, way up in the Northwest territories and in the wintertime, we had two things to do. We could go play hockey, or we'd stay indoors and not freeze. So I spent a lot of time indoors when I wasn't playing hockey—played a lot of hockey as a kid. But when I was home, I was basically on this Commodore 64 all the time, playing games and learning how the computer itself worked and learning how the programming language of it worked. Thankfully, the computer was something I had never took apart. Otherwise, it would have been a pile of junk, but just spending a lot of time just learning all the ins and outs. Back then, the idea was you could load the software and then you type a run command and it would actually execute the program. But if you type a list, it would actually show you all the source code of the program as well and that raised my curiosity, like what is all this symbols and what all these words mean? In the back of the Commodore 64 book, it had several chapters about the basic programming language. So I started picking apart all these games and trying to learn how they worked and then well, what would happen if I change this instruction to that and started learning how to sort of hack my games, usually break the game completely. But trying to hack it a little bit; what if I got like an extra ship, an extra level, or what if I change the health of my character, or something along those lines? And it kind of snowballed from there, honestly. It was just this fascination of, oh, cool, I get to look at this thing. I get to change it. I get to apply it. And then of course, back in the day, you would go to a bookstore and you'd have these magazines with just pages and pages and pages of source code and you'd go home and you type it all in expecting something really cool. At the end of it, you run it and it's something bland like, oh, you just made a spreadsheet application. It's like, “Oh, I wanted a game.” Like, “Shucks.” [laughter] But as a little kid, that kind of thing wasn't very enticing, but I'm sure as an adult, it's like, oh cool, now I have a spreadsheet to track budgeting, or whatever at home. It was this whole notion of open source and just sharing knowledge and that really stuck with me, too and so, as I would try to satiate this innate curiosity in myself and learn something, I would go teach it to a friend and it's like, “Hey, hey, let me show you what I just did. 
I learned how to play this thing on the piano,” or “I learned how to sing this song,” or “I learned how to use a magnifying glass to cook an ant on the sidewalk.” [chuckles] Whatever I learned, I always wanted to turn around and teach it to somebody else. I would get sometimes more excitement and joy out of watching somebody else do it because I taught them than the fact that I was able to learn that and do it myself. And so, after a while it was working on the computer became kind of a, oh yeah, okay, I can work on the computer, I can do the thing. But if I could turn around and show somebody else how to do that and then watch them explore and you watch that light bulb go off over their head, then it's like, oh, they're going to go do something cool with that. Just the anticipation of how are they going to go use that knowledge, that really stuck with me my whole life. In high school doing little bits of tutoring here and there. I was a paid tutor in college. Once I got out of college and got into the workplace, again, just learning on my own and then turning around and teaching others led into running my own web development business where I was teaching some friends how to do web development because I was taking on so much work that I had to subcontract it the somebody where I wasn't going to meet deadlines and so, I subcontracted them. That meant that I got to pay my friends to help me work this business. And so, that kind of kicked off and then I started learning well, how to servers work and how does the internet work and how do I run an email server on all this stuff? So just never-ending stream of knowledge going on in the internet and then just turning around and sharing that knowledge and keeping that community side of things building up over time. MANDY: Very cool. So in your bio, it said you're streaming now so I'm guessing that's a big part of what you do today with the streaming. So what are you streaming? IAN: So let's see, back in 2014, I started getting involved in mentorship with a local code school here in Denver called The Turing School of Software and Design. It's the 7-month code program and they were looking for someone that could help just mentor students. They were teaching Ruby on Rails at the time. So I got involved with them. I was working in Ruby at SendGrid at the time where I was working, who was later acquired by Twilio. And I'm like, “Yeah, I got some extra time. I can help some people out.” I like giving back and I like the idea of tutoring and teaching. I started that mentorship and it quickly turned into hey, do any of our mentors know anything about resumes and the hiring and interviewing and things like that. And by that point, I had been the lead engineer. I had done hiring. I hired several dozen engineers at SendGrid, or helped hire several dozen people at SendGrid. And I'm like, “Yeah, I've looked at hundreds and thousands of resumes.” Like, “What can I help with?” So I quickly became the career development guy to help them out and over time, the school started developing their career development curriculum and I like to think I had a hand in developing some of that. 3 years later, they're like, “You just want a job here? Like you're helping so many students, you just want to come on staff?” And so, I joined them as an instructor, taught the backend program, had a blast, did that for almost 4 full years. 
And then when I left Turing in June of 2021, I thought, “Well, I still want to be able to share this knowledge,” and so, I took all these notes that I had been writing and I basically put it all onto a website called techinterview.guide. When I finished teaching, I'm like, “Well, I still miss sharing that knowledge with people,” and I thought, “How else can I get that knowledge out there in a way that is scalable and manageable by one human being?” And I thought, “Well, I'll just kind of see what other people are doing.” Fumbled around on YouTube, watched some YouTube videos, watched people doing livestreaming on LinkedIn, livestreaming on Facebook, livestreaming on YouTube and trying to think could I do that? Nah, I don't know if I could do that. A friend of mine named Jonan Scheffler, he currently works at New Relic, he does a live stream. So I was hanging out on his stream one night and it was just so much fun seeing people interact and chat and how they engage the people in the chat and answering questions for them. I'm like, “I wonder if I could do that.” The curiosity took over from there and you can imagine where that went; went way down some rabbit holes on how to set up a streaming computer. Started streaming and found out that I wasn't very good at audio routing, [chuckles] recording things, and marketing, all that kind of stuff. But I kind of fumbled my way through it and Jonan was very generous with his time to help me straighten some things out and it kind of took off from there. So I thought, “Well, now I've got a platform where I can share this career development advice having been in the industry now for 25 years. Now, I've been director of engineering. I'm currently the director of engineering learning at a company. I've got an education background now as an instructor for several years. I've been doing tons of mentoring.” I love to give back and I love to help other people learn a thing that's going to help improve their life. I think of it like a ripple effect, like I'm not going to go out and change the world, but I can change your world and that ripple effect is going to change somebody else's world and that's going to change somebody else's world. So that's how I see my part in all of this play out. I'm not looking to be the biggest name in anything. I'm just one person with a voice and I'm happy to share my ideas and my perspectives, but I'm also happy to have people on my stream that can share their ideas and perspectives as well. I think it's important to hear a lot of perspectives, especially when it comes to things like job hunt, interview prep, and how to build a resume. You're going to see so much conflicting advice out there like, “This is the way you should do it,” and someone else will be like, “No, this is the way you should do it.” Meanwhile, I'm on the sidelines going, “You can do it all of that way.” Just listen to everybody's advice and figure out how you want to build your resume and then that's your resume. It doesn't have to look like the way I want it, or the way that someone else wants it; it can look how you want it to look. This is just our advice kind of collectively. So the livestream took off from there and I've got only a couple of hundred followers, or so on Twitch, but it's been a lot of fun just engaging with chat and people are submitting questions to me all the time. So I do a lot of Q&A sessions, like ask me anything sessions and it's just been a ton of fun. ARTY: That's awesome. 
I love the idea of focusing on one person and how you can make a difference in that one person's life and how those differences can ripple outward. That one-on-one connection, I feel like if we try and just broadcast and forget about the individuals, it's easy for the message and stuff to just get lost in ether waves and not actually make that connection with one person. Ultimately, it's all those ones that add up to the many. IAN: Definitely. Yeah. ARTY: So can you tell us a little bit more about the Tech Interview Guide and what your philosophy is regarding tech interviews? IAN: The tech interview process in – well, I mean, just the interview process in general in the tech industry is pretty broken. It lends itself very well to people who come from position and privilege that they can afford expensive universities and have oodles and oodles of free time to go study algorithms for months and months and months to go jump through a whole bunch of hoops for companies that want four, or five, six rounds of interviews to try to determine whether you're the right fit for the company and it's super broken. There are a lot of companies out there that are trying to change things a little bit and I applaud them. It's going to be a tough journey, for sure. Trying to convince companies like hey, this is not working out well for us as candidates trying to apply for jobs. As a company, though I understand because I've been a hiring manager that you need to be able to trust the people that you're hiring. You need to trust that they can actually do the job. Unfortunately, a lot of the tech interview process does not adequately mimic what the day-to-day responsibility of that job is going to be. So the whole philosophy of me doing the Tech Interview Guide is just an education of, “Hey, here's my perspective on what you're likely to face as a technical interview. These are the different stages that you'll typically see.” I have a lot of notes on there about how to build a resume, how to build a cover letter, thoughts on building a really big resume and then how to trim it down to one page to go apply for a particular job. How to write a cover letter that's customized to the business to really position yourself as the best candidate for that role. And then some chapters that I have yet to write are going to be things like how do you negotiate once you get an offer, like what are some negotiation tips. I've shared some of them live on the stream and I've shared a growing amount of information as I learn from other people as well, then I'll turn around and I'll share that on the stream. The content that's actually on the website right now is probably 3, 4 years old, some of it at least and so, I'm constantly going back in and I'm trying to revamp that material a little bit to kind of be as modern as possible. I used to want to go a self-publish route where I actually made a book. Several of my friends have actually gone through the process of actually making a book and getting it published. I'm like, “Oh, I want to do that, too. My friends are doing that. I could do that, too,” and I got looking into it. It's like, okay, it's an expensive, really time-consuming process and by the time I get that book on a shelf somewhere, a lot of the information is going to be out of date because a lot of things in the tech industry change all the time. 
So I decided I would just self-publish an online book where I can just go in and I can just constantly refresh the information and people can go find whatever my current perspective is by going to the website. And then as part of the website, I also have a daily email series that people can sign up for. I'm about to split it into four mailing lists. But right now, it's a single mailing list where I'm presenting technical questions and behavioral questions that you're likely to get asked as a web developer getting into the business. But I don't spend time in the email telling you how to answer the question; what I do instead is I share from the interviewer's perspective. This is why I'm asking you this question. This is what I hope to hear. This is what's important for me to hear in your answer. Because there's so many resources out there already that are trying to tell you how to craft the perfect answer, where I'm trying to explain this is why this question is important to us in the first place. So I'm taking a little bit different perspective on how I present that information and to date I've sent out, I don't know, something like 80,000 emails over a couple of years to folks that have signed up for that, which has been really tremendous to see. I get a lot of good feedback from that. But again, that information it doesn't always age well and interview processes change. I'm actually going through the process right now in the month of November to rewrite a lot of that information, but then also break out into multiple lists and so, where right now it's kind of a combination of a little bit of technical questions, a little bit of behavioral questions, a little bit of procedural, like what is an interview and so on. Now I'm actually going to break them out into separate lists of this list is all just technical questions and this list is all just behavioral questions and this list is going to be general process and then the process of going through the interview and how to do research and so on. And then the last one is just general questions and answers and a lot of that is stemmed from the questions that people have submitted to me that I answered on the live stream. So it all kind of packages up together. MANDY: That's really cool. I'd like to get into some of the meat of the material that you're putting out here. IAN: Yeah. MANDY: So as far as what are some of the biggest questions that you get on your street? IAN: Probably the most popular question I get—because a lot of the people that come by the stream and find the daily email list are new in the industry and they're trying to find that first job. And so by far, the number one question is, how do I even get a job in the industry right now? I have no experience. I've got some amount of education, whether it's an actual CS degree, or something similar to a CS degree, or they've gone through a bootcamp of some kind. How do I even get that first job? How do I position myself? How do I differentiate myself? How do I even get a phone call from a company? That's a lot of what's broken in the industry. Everybody in the industry right now wants people with experience, or they're saying like, “Oh, this is a “entry-level role,” but you must have 3 to 4 years' experience.” It's like, well, it's not entry level if you're asking for experience; it can't be both. All they're really doing is they're calling it an entry-level role so they don't have to pay you as much. 
But if they want 3-, or 4-years' experience, then you should be paying somebody who has 3-, or 4-years' experience. So the people writing these job posts are off their rocker a little bit, but that's by far, the number one question I get is how do I even get that first job. Once you get that first job and you get a year, year and a half, 2 years' experience, it's much easier to get that second job, or third job. It's not like oh, I'm going to quit my job today and have a new job tomorrow. But the time to get that next job is usually much, much shorter than getting this first job. I know people that have gone months and months, or nearly a year just constantly trying to apply, getting ghosted, like not getting any contact whatsoever from companies where they're sending in resumes and trying to apply for these jobs. Again, it's just a big indication of what's really broken in our industry that I think could be improved. I think that there's a lot of room for improvement there. MANDY: So what do you tell them? What's your answer for that? How do they get their first job? How do you get your first job? IAN: That's a [chuckles] good question. And I hate to fall back on the it depends answer. It really does depend on the kind of career that you want to have. I tell people often in my coaching that the tech industry is really a choose your own adventure kind of book. Like, once you get that job a little bit better, what you want your next job to be and so, you get to choose. If you get your first job as a QA developer, or you get that first job as a technical writer, or you get that first job doing software development, or you get that first job in dev ops and then decide, you don't want to do that anymore, that's fine. You can position yourself to go get a job doing some other kind of technical job that doesn't have to be what your previous job was. Now, once you have that experience, though recruiters are going to be calling you and saying, “Hey, you had a QA role. I've also got a QA role,” and you just have to stand firm and say, “No, that's not the direction I'm taking my career anymore. I want to head in this direction. So I'm going to apply for a company where they're looking for people with that kind of direction.” It really comes down to how do you show the company what you bring to the company and how you're going to make the company better, how are you going to make the team better, what skill, experience, and background are you bringing to that job. A lot of people, when they apply for the job, they talk about what they don't have. Like, “Oh, I'm an entry level developer,” or “I only went to a bootcamp,” or “I don't know very much about some aspect of development like I don't know, test driven development,” or “I don't really understand object-oriented programming,” or “I don't know anything about Docker, but I want to apply for this job.” Well, now you're highlighting what you don't have and to get that first job, you have to highlight what you do have. So I often tell people on your resume, on your LinkedIn, don't call yourself a junior developer. Don't call yourself an entry level. Don't say you're aspiring to be. You are. You are a developer. If you have studied software development, you can write software, you're a software developer. Make that your own title and let the company figure out what level you are. So just call yourself a developer and start applying for those jobs. 
The other advice that I tend to give people is you don't have to feel like you meet a 100% of the requirements in any job posts. As a hiring manager, when I read those job posts often, it's like, this is my birthday wish list. I hope I can find this mythical unicorn that has all of these traits [laughter] and skills and characteristics and that person doesn't exist. In fact, if I ever got a resume where they claim to have all that stuff, I would immediately probably throw the resume in the bin because they're probably lying, because either they have all those skills and they're about to hit me up for double the salary, or they're just straight up lying that they really don't have all those skills. As a hiring manager, those are things that we have to discern over time as we're evaluating people and talking with them and so on. But I would say if you meet like 30 to 40% of those skills, you could probably still apply. The challenge then is when you get that phone call, how do you convince them that you're worth taking a shot, that you're worth them taking the risk of hiring you, helping train you up in the skills that you don't have. But on those calls, you still need to present this is what I do bring to the company. I'm bringing energy, I'm bringing passion, and I'm bringing other experience and background and perspectives on things, hopefully from – just increasing the diversity in tech, just as an example. You're coming from a background, or a walk of life that maybe we don't currently have on the team and that's great for us and great for our team because you're going to open our eyes to things that we might not have thought of. So I think apply anyway. If they're asking for a couple of years' experience and you don't have it, apply anyway. If they're asking for programming languages you don't know, apply anyway. The languages you do know, a lot of that skill is going to transfer into a new language anyway. And I think a lot of companies are really missing out on the malleability and how they can shape an entry-level developer into the kind of developer and kind of engineer that they want to have on the team. Now you use that person as an example and say, “Now we've trained them with the process that we want, with the language and the tools that we want. They know the company goals.” We've trained them. We've built them up. We've invested in them and now everybody else we hire, we're going to hold to that standard and say, “If we're going to hire from outside, this is what we want,” and if we hire someone who doesn't have that level of skill, we're going to bring them up to that skill. I think a lot of companies are missing out on that whole aspect of hiring, that is they can take a chance on somebody who's got the people skills and the collaboration skills and that background and the experiences of life and not necessarily the technical skills and just train them on the technical skills. I went on a rant on this on LinkedIn the other day, where I was saying the return on investment. If a company is spending months and months and months trying to hire somebody, that's expensive. You're paying a recruiter, you're paying engineers, you're paying managers to screen all these people, interview all these people, and you're not quite finding that 100% skill match. Well, what if you just hired somebody months ago, spend $5,000 training them on the skills they didn't have, and now you're months ahead of the game. You could have saved yourself so much money so much time. 
You would have had an engineer on the team now. And I think a lot of companies are kind of missing that point. Sorry, I know I get very soapbox-y on some of the stuff. ARTY: I think it's important just highlighting these dynamics and stuff that are broken in our industry and all of the hoops and challenges that come with trying to get a job. You mentioned a couple of things on the other side of one, is that the interview processes themselves don't align to what it is we actually need skill-wise day-to-day. What are the things that you think are driving the creation of interviews that don't align with the day-to-day stuff? Like what factors are bringing those things so far out of alignment? IAN: That's a great question. I would say I have my suspicions. So don't take this as gospel truth, but from my own perspective, this is what I think. The big, big tech companies out there, like the big FAANG companies, they have a very specific target in mind of the kind of engineers that they want on their team. They have studied very deep data structures and algorithms, the systems thinking and the system design, and all this stuff. Like, they've got that knowledge, they've got that background because those big companies need that level of knowledge for things like scaling to billions of users, highly performant, and resilient systems. Where the typical startup and typical small and mid-sized company, they don't typically need that. But those kinds of companies look at FAANG companies and go, “We want to be like them. Therefore, we must interview like them and we must ask the same questions that they ask.” I think this has this cascading effect where when FAANG companies do interviews in a particular way, we see that again, with this ripple effect idea and we see that ripple down in the industry. Back in the early 2000s, mid 2000s—well, I guess right around the time when Google was getting started—they were asking a lot of really oddball kinds of questions. Like how many golf balls fit in a school bus and those were their interview challenges. It's like, how do you actually go through the calculation of how many golf balls would fit in a school bus and after a while, I think by 2009, they published an article saying, “Yeah, we're going to stop asking those questions. We weren't getting good signals. Everybody's breaking down those problems the same way and it wasn't really helpful.” Well, leading up to that point, everyone else was like, “Oh, those are cool questions. We're going to ask those questions, too,” and then when Google published that paper, everyone else was like, “Yeah, those questions are dumb. We're not going to ask those questions either.” And then they started getting into what we now see as like the LeetCode, HackerRank type of technical challenges being asked within interviews. I think that there's a time and place for some of that, but I think that the types of challenges that they're asking candidates to do should still be aligned with what the company does. One criticism that I've got. For example, I was looking at a technical challenge from one particular company that they asked this one particular problem and it was using a data structure called Heap. It was, find a quantity of location points closest to a target. So you're given a list of latitude, longitude values, and you have to find the five latitude and longitude points that are closest to a target. It's like, okay and so, I'm thinking through the challenge, how would I solve that if I had to solve it? 
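For readers who have not seen this style of interview question, a minimal sketch of the “find the k closest points” exercise Ian describes might look like the Python below. This is purely illustrative: the function name, the sample coordinates, and the use of simple planar distance (rather than a proper geospatial formula such as haversine) are assumptions made for the sketch, not any company's actual challenge.

```python
import heapq
import math

def k_closest(points, target, k=5):
    """Return the k (lat, lon) points closest to target.

    Plain 2D distance is used for simplicity; a real geospatial
    answer would use haversine or similar great-circle math.
    """
    def dist(point):
        return math.hypot(point[0] - target[0], point[1] - target[1])

    # heapq.nsmallest keeps only the k best candidates on a heap
    # while scanning, so it avoids fully sorting the input list.
    return heapq.nsmallest(k, points, key=dist)

# Hypothetical usage with made-up coordinates:
points = [(40.7, -74.0), (34.0, -118.2), (41.9, -87.6),
          (51.5, -0.1), (48.9, 2.4), (35.7, 139.7)]
print(k_closest(points, target=(40.0, -75.0), k=5))
```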
But then I got thinking that company has nothing to do with latitude and longitude. That company has nothing to do with geospatial work of any kind. Why are they even asking that problem? Like, it's so completely misaligned that anybody they interview, that's the first thing that's going to go through their mind as a candidate is like, “Why are they asking me this kind of question?” Like, “This has nothing to do with the job. It had nothing to do with the role. I don't study global positioning and things like that. I know what latitude and longitude are, but I've never done any kind of math to try to figure out what those things would be and how you would detect differences between them.” Like, I could kind of guess with simple math, but unless you've studied that stuff, it's not going to be this, “Oh yeah, sure, no problem. It's this formula, whatever.” We shouldn't have to expect that candidates coming to a business are going to have that a, formula memorized, especially when that's not what your company does. And a lot of companies are like, “Oh, we're got to interview somebody. Quick, go to LeetCode and find a problem to ask them.” All you're going to do is you're going to bias your interview process towards people that have studied those problems on LeetCode and you're not actually going to find people that can actually solve your day-to-day challenges that your company is actually facing. ARTY: And instead, you're selecting for people that are really good at things that you don't even need. [chuckles] It's like, all right! It totally skews who you end up hiring toward people that aren't even necessarily competent in the skills that they actually need day-to-day. Like you mentioned FAANG companies need these particular skills. I don't even think that for resilience, to be able to build these sort of systems, and even on super hardcore systems, it's very seldom that you end up writing algorithmic type code. Usually, most of the things that you deal with in scaling and working with other humans and stuff, it's a function of design and being able to organize things in conceptual ways that make sense so that you can deconstruct a complex, fuzzy problem into little pieces that make sense and can fit together like a jigsaw puzzle. I have a very visual geometric way of thinking, which I find actually is a core ability that makes me good at code because I can imagine it visually laid out and think about the dependencies between things as like tensors between geographically located little code bubbles, if you will. IAN: Sure. ARTY: Being able to think that way, it's fundamentally different than solving algorithm stuff. But that deconstruction capability of just problem breakdown, being able to break down problems, being able to organize things in ways that make sense, being able to communicate those concepts and come up with abstractions that are easy enough for other people on your team to understand, ideally, those are the kinds of engineers we want on the teams. Our interview processes ought to select for those day-to-day skills of things that are the common bread and butter. [chuckles] IAN: I agree. ARTY: What we need to succeed on a day-to-day basis. IAN: Yeah. We need the people skills more than we need the hard technical skills sometimes. 
I think if our interview process could somehow tap into that and focus more on how do you collaborate, how do you do code reviews, how do you evaluate someone else's code for quality, how do you make the tradeoff between readability and optimization—because those are typically very polarized, opposite ends of the scale—how do you function on a team, or do you prefer to go heads down and just kind of be by yourself and just tackle tasks on your own? I believe that there's a time and place for that, too and there are personality types where you prefer to go heads down and just have peace and quiet and just get your work done and there's nothing wrong with that. But I think if we can somehow tap into the collaborative process as part of the interview, I think it's going to open a lot of companies up to like, “Oh, this person's actually going to be a really great team member. They don't quite have this level of knowledge in database systems that we hope they'd have, but that's fine. We'll just send them on this one-week database training class that happens in a week, or two and now they'll be trained.” [overtalk] MANDY: Do they want to learn? IAN: Right. Do they want to learn? Are they eager to learn? Because if they don't want to learn, then that's a whole other thing, too. But again, that's something that you can screen for. Like, “Tell me what you're learning on the side, or “What kinds of concepts do you want to learn?” Or “In this role, we need you to learn this thing. Is that even of interest to you?” Of course, everyone's going to lie and say, “Yeah,” because they want the paycheck. But I think you can still narrow it down a little bit more what area of training does this person need. So we can just hire good people on the team and now our team is full of good people and collaborative, team-based folks that are willing to work together to solve problems together and then worry about the technical skills as a secondary thing. MANDY: Yeah. I firmly believe anybody can learn anything, if they want to. I mean, that's how I've gotten here. IAN: Yeah, for sure. Same with me. I'm mostly self-taught. I studied computer engineering in college, so I can tell you how all the little microchips in your computer work. I did that for the first 4 years of my career and then I threw all that out the window and I taught myself web development and taught myself how the internet works. And then every job I had, that innate curiosity in me is like, “Oh, I wonder how e-commerce works.” Well, I went and got an e-commerce job, it's like, okay, well now I wonder how education works and I got into the education sector. Now, I wonder how you know this, or that works and so, I got into financial systems and I got into whatever and it just kind of blew my mind. I was like, “Wow, this is how all these things kind of talk to each other,” and that for me was just fascinating, and then turning around and sharing that knowledge with other people. But some people are just very fixed mindset and they want to learn one thing, they want to do that thing, and that's all they know. But I think, like we kind of talked about early in the podcast, you sign up for a career in this industry and you're signing up for lifelong learning. There's no shortage to things that you can go learn, but you have to be willing to do it. MID-ROLL: Rarely does a day pass where a ransomware attack, data breach, or state sponsored espionage hits the news. It's hard to keep up with all this and also to know if you're protected. 
Don't worry, Kaspersky's got you covered. Each week their team looks at the latest news, stories, and topics you might have missed during the week on the Transatlantic Cable Podcast. Mixing in-depth discussion, expert guests from around the world, a pinch of humor, and all with an easy to consume style - be sure you check them out today. ARTY: What kind of things could we do to potentially influence the way hiring is done and these practices with unicorn skilled searches and just the dysfunctional aspects on the hiring side? Because you're teaching all these tech interview skills for what to expect in the system and how to navigate that and succeed, even though it's broken. But what can we do to influence the broken itself and help improve these things? IAN: That's a great question. Breaking it from the inside out is a good start. I think if we can collectively get enough people together within these, especially the bigger companies and say like, “Hey, collectively, as an industry, we need to do interviewing differently.” And then again, see that ripple effect of oh, well, the FAANG companies are doing it that way so we're going to do it that way, too. But I don't think that's going to be a fast change by any stretch. I think there are always going to be some types of roles where you do have to have a very dedicated, very deep knowledge of system internals and how to optimize things, and pure algorithmic types of thinking. I think those kinds of jobs are always going to be out there and so, there's no fully getting away from something like a LeetCode challenge style interview. But I think that for a lot of small, mid-sized, even some large-sized companies, they don't have to do interviewing that way. But I think we can all stand on our soapbox and yell and scream, “Do it differently, do it differently,” and it's not going to make any impact at all because those companies are watching other companies for how they're doing it. So I think gradually, over time, we can just start to do things differently within our own company. And I think for example, if the company that I was working at, if we completely overhauled our interview process that even if we don't hire somebody, if someone can walk away from that going, “Wow, that was a cool interview experience. I've got to tell my friends about this.” That's the experience that we want when you walk away from the company if we don't end up hiring. If we hire you, it's great. But even if we don't hire you, I want to make sure that you've still got a really cool interview experience that you enjoyed the process, that it didn't just feel like another, “Okay, well, I could have just grind on LeetCode for three months to get through that interview.” I don't ever want my interviews to feel like that. So I think as more of us come to this understanding of it's okay to do it differently and then collectively start talking about how could we do it differently—and there are companies out there that are doing it differently, by the way. I'm not saying everyone in the industry is doing all these LeetCode style interviews. There are definitely companies out there that are doing things differently and I applaud them for doing that. 
And I think as awful as it was to have the pandemic shut everything down to early 2020, where no hiring happened, or not a lot of hiring happened over the summer, it did give a lot of companies pause and go, “Well, hey, since we're not hiring, since we got nobody in the backlog, let's examine this whole interview process and let's see if this is really what we want as a company.” And some companies did. They took the time, they took several months and they were like, “You know what, let's burn this whole thing down and start over” as far as their interview process goes. Some of them completely reinvented what their interview process was and turned it into a really great process for candidates to go through. So even if they don't get the job, they still walk away going, “Wow, that was neat.” I think if enough of us start doing that to where candidates then can say, “You know what, I would really prefer not to go through five, or six rounds of interviews” because that's tiring and knowing that what you're kind of what you're in for, with all the LeetCode problems and panel after panel after panel. Like, nobody wants to sit through that. I think if enough candidates stand up for themselves and say, “You know what, I'm looking for a company that has an easier process. So I'm not even going to bother applying.” I think there are enough companies out there that are desperately trying to hire that if they start getting the feedback of like you know what, people don't want to interview with us because our process is lousy. They're going to change the process, but it's going to take time. Unfortunately, it's going to drag out because companies can be stubborn and candidates are also going to be stubborn and it's not going to change quickly. But I think as companies take the step to change their process and enough candidates also step up to say, “Nah, you know what, I was going to apply there,” or “Maybe I got through the first couple of rounds, but you're telling me there's like three more rounds to go through? Nah, I'm not going to bother.” Companies are now starting to see candidates ghost them and walk away from the interview process because they just don't want to be bothered. I think that's a good signal for a company to take a step back and go, “Okay, we need to change our process to make it better so the people do want to apply and enjoy that interview process as they come through.” But it's going to take a while to get there. ARTY: Makes me think about we were talking early on about open source and the power of open source. I wonder with this particular challenge, if you set up a open source hiring manifesto, perhaps of we're going to collaborate on figuring out how to make hiring better. Well, what does that mean? What is it we're aiming for? We took some time to actually clarify these are the things we ought to be aiming for with our hiring process and those are hard problems to figure out. How do we create this alignment between what it is we need to be able to do to be successful day-to-day versus what it is we're selecting for with our interview process? Those things are totally out of whack. 
I think we're at a point, at least in our industry, where it's generally accepted that how we do interviewing and hiring in these broken things—I think it's generally accepted that it's broken—so that perhaps it's actually a good opportunity right now to start an initiative like that, where we can start collaborating and putting our knowledge together on how we ought to go about doing things better. Even just by starting something, building a community around it, getting some companies together that are working on trying to improve their own hiring processes and learning together and willing to share their knowledge about things that are working better, such that everybody in the industry ultimately benefits from us getting better at these kinds of things. As you said, being able to have an interview process that even if you don't get the job, it's not a miserable experience for everyone involved. [chuckles] Like there's no reason for that. IAN: Yeah. MANDY: That's how we – I mean, what you just explained, Arty isn't that how we got code of conducts? Everybody's sitting down and being like, “Okay, this is broken. Conferences are broken. What are we going to all do together?” So now why don't we just do the same thing? I really like that idea of starting an open source initiative on interviewing. Like have these big FAANG companies be like, “I had a really great interview with such and such company.” Well, then it all spirals from there. I think that's super, super exciting. ARTY: Yeah. And what is it that made this experience great? You could just have people analyze their interview experiences that they did have, describe well, what are the things that made this great, that made this work and likewise, you could collect anti-patterns. Some of the things that you talked about of like, are we interviewing for geolocation skills when that actually has absolutely nothing to do with our business? We could collect these things as these funky anti-patterns of things so that people could recognize those things easier in there because it's always hard to see yourself. It's hard to see yourself swinging. IAN: An interesting idea along those lines is what if companies said like, “Hey, we want the community to help us fix our interview process. This is who we are, this is what our business does. What kinds of questions do you think we should be asking?” And I think that the community would definitely rally behind that and go, “Oh, well, you're an e-commerce platform so you should be asking people about shopping cart implementations and data security around credit cards and have the interview process be about what the company actually does.” I think that that would be an interesting thing to ask the community like, “What do you think we should be asking in these interviews?” Not that you're going to turn around and go, “Okay, that's exactly what we're going to do,” but I think it'll give a lot of companies ideas on yeah, okay, maybe we could do a take-home assignment where you build a little shopping cart and you submit that to us. We'll evaluate how you did, or what you changed, or we're going to give you some code to start with and we're going to ask you to fix a bug in it, or something like, I think that there's a bigger movement now, especially here in Canada, in the US of doing take-home assignments. But I think at the same time, there are pros and cons of doing take-home assignments versus the on-site technical challenges. 
But what if we gave the candidate a choice as part of that interview process, too and say, “Hey, cool. We want to interview you. Let's get through the phone screen and now that you've done the phone screen, we want to give you the option of, do you want to do a small take-home assignment and then do a couple of on-site technical challenges? Do you want to do a larger take-home and maybe fewer on-site technical challenges?” I think there's always going to be some level of “Okay, we need to see you code in front of us to really make sure that you're the one that wrote that code.” I got burned on that back in 2012 where I thought somebody wrote some code and they didn't. They had a friend write it as their take-home assignment and so, I brought them in for the interview and I'm like, “Cool, I want you to fix this bug,” and they had no idea what to do. They hadn't even looked at the code that their friend wrote for them. It's like, why would you do that? So I think that there's always going to be some amount of risk and trust that needs to take place between the candidates and the companies. But then on the flip side of that, if it doesn't work out, I really wish companies would be better about giving feedback to people instead of just ghosting them, or like, “Oh, you didn't pass that round. So we're just not even going to call you back and tell you no. We're just not ever going to call.” The whole ghosting thing is, by far, the number one complaint in the tech industry right now. It's like, “I applied and I didn't even get a thanks for your resume. I got nothing,” or maybe you get some automated reply going, “We'll keep you in mind if you're a match for something.” But again, those applicant tracking systems are biased because the developers building them and the people reading the resumes are going to have their own inherent bias in the search terms and the things that they're looking for and so on. So there's bias all over the place that's going to be really hard to get rid of. But I think if companies were to take a first step and say like, “Okay, we're going to talk to the community about what they would like to see the interview process be,” and start having more of those conversations. And then I think as we see companies step up and make those changes, those are going to be the kinds of companies where people are going to rally behind them and go, “I really want to work there because that interview process is pretty cool.” And that means the company is – well, it doesn't guarantee the company's going to be cool, but it shows that they care about the people that are going to work there. If people know that the company is going to care about you as an employee, you're far more likely to want to work there. You're far more likely to be loyal and stay there for a long term as opposed to like oh, I just need to collect a paycheck for a year to get a little bit of experience and then job hop and go get a better title, better pay. So I think it can come down to company loyalty and stuff, too. MANDY: Yeah. Word of mouth travels fast in this industry. IAN: Absolutely. MANDY: And to bring up the code of conduct thing, now people are saying, “If straight up this conference doesn't have a code of conduct, I'm not going.” IAN: Yeah. I agree.
It'll be interesting to see how something like this tech interview overhaul open source idea could pick up momentum and what kinds of companies would get behind it and go, “Hey, we think our interview process is pretty good already, but we're still going to be a part of this and watch other companies step up to.” When I talked earlier about that ripple effect where Google, for example, stopped asking how many golf balls fit in a school bus kind of thing and everyone else is like, “Yeah, those questions are dumb.” We actually saw this summer, Facebook and Amazon publicly say, “We're no longer going to ask dynamic programming problems in our interviews.” It's going to be interesting to see how long that takes to ripple out into the industry and go, “Yeah, we're not going to ask DP problems either,” because again, people want to be those big companies. They want to be billion- and trillion-dollar companies, too and so, they think they have to do everything the same way and that's not always the case. But there's also something broken in the system, too with hiring. It's not just the interview process itself, but it's also just the lack of training. I've been guilty of this myself, where I've got an interview with somebody and I've got back-to-back meetings. So I just pull someone on my team and be like, “Hey, Arty, can you come interview this person?” And you're like, “I've never interviewed before. I guess, I'll go to LeetCode and find a problem to give them.” You're walking in there just as nervous as the candidate is and you're just throwing some technical challenge at them, or you're giving them the technical challenge that you've done most recently, because you know the answer to it and you're like, “Okay, well, I guess they did all right on it. They passed,” or “I think they didn't do well.” But then companies aren't giving that feedback to people either. There's this thinking in the industry of oh, if we give them feedback, they're going to sue us and they're going to say it's discriminatory and they're going to sue us. Aline Lerner from interviewing.io did some research with her team and literally nobody in recent memory has been sued for giving feedback to candidates. If anything, I think that it would build trust between companies and the candidates to say, “Hey, this is what you did. Well, this is what we thought you did okay on. We weren't happy with the performance of the code that you wrote so we're not moving forward,” and now you know exactly what to go improve. I was talking to somebody who was interviewing at Amazon lately and they said, “Yeah, the recruiter at Amazon said that I would go through all these steps,” and they had like five, or six interviews, or something to go through. And they're like, “Yeah, and they told me at the end of it, we're not going to give you any feedback, but we will give you a yes, or no.” It's like so if I get a no, I don't even find out what I didn't do well. I don't know anything about how to improve to want to go apply there in the future. You're just going to tell me no and not tell me why? Why would I want to reapply there in the future if you're not going to tell me how I'm going to get better? I'm just going to do the same thing again and again. I'm going to be that little toy that just bangs into the wall and doesn't learn to steer away from the wall and go in a different direction. 
If you're not going to give me any feedback, I'm just going to keep banging my head against this wall of trying to apply for a job and you're not telling me why I'm not getting it. It's not helpful to the candidate and that's not helpful to the industry either. It starts affecting mental health and it starts affecting other things and I think it erodes a lot of trust between companies and candidates as well. ARTY: Yeah. The experience of just going through trying to get a job and going through the rejection, it's an emotional experience, an emotionally challenging experience. Of all things that affect our feels a lot, it's like that feeling of social rejection. So being able to have just healthier relationships and figuring out how to see another person as a human, help figure out how you can help guide and support them continuing on their journey so that the experience of the interview doesn't hurt so much even when the relationship doesn't work out, if we could get better at those kinds of things. There's all these things that if we got better at, it would help everybody. IAN: I agree. ARTY: And I think that's why a open source initiative kind of thing maybe make sense because this is one of those areas that if we got better at this as an industry, it would help everybody. It's worth putting time in to learn and figure out how we can do better and if we all get better at it and stuff, there's just so many benefits and stuff from getting better at doing this. Another thing I was thinking about. You were mentioning the language thing of how easy it is to map skills that we learned from one language over to another language, such that even if you don't know the language that they're coding in at a particular job, you should apply anyway. [chuckles] I wonder if we had some data around how long it takes somebody to ramp up on a new language when they already know similar-ish languages. If we had data points on those sort of things that we're like, “Okay, well, how long did it actually take you?” Because of the absence of that information, people just assume well, the only way we can move forward is if we have the unicorn skills. Maybe if it became common knowledge, that it really only takes say, a couple months to become relatively proficient so that you can be productive on the team in another language that you've never worked in before. Maybe if that was a common knowledge thing, that people wouldn't worry about it so much, that you wouldn't see these unicorn recruiting efforts and stuff. People would be more inclined to look for more multipurpose general software engineering kinds of skills that map to whatever language that you're are doing. That people will feel more comfortable applying to jobs and going, “Oh, cool. I get the opportunity to learn a new language! So I know that I may be struggling a bit for a couple months with this, but I know I'll get it and then I can feel confident knowing that it's okay to learn my way through those things.” I feel if maybe we just started collecting some data points around ramp up time on those kind of things, put a database together to collect people's experiences around certain kind of things, that maybe those kinds of things would help everyone to just make better decisions that weren't so goofy and out of alignment with reality. IAN: Yeah, and there are lots of cheat sheets out there like, I'm trying to remember the name of it. I used to have it bookmarked. 
But you could literally pull up two programming languages side by side in the same browser window and see oh, if this is how you do it in JavaScript, this is how you do it in Python, or if this is how you write this code in C++, here's how you do it in Java. It gives you a one-to-one correlation for dozens, or hundreds of different kinds of blocks of code. That's really all you need to get started and like you said, it will take time to become proficient to where you don't have to have that thing up on your screen all the time. But at the same time, I think the company could invest and say, “You know what, take a week and just pour everything you've got into learning C Sharp because that's the skill we want you to have for this job.” It's like, okay, if you are telling me you trust me and you're making me the job offer and you're going to pay me this salary and I get to work in tech, but I don't happen to have that skill, but you're willing to train me in that skill, why would I not take that job? You're going to help me learn and grow. You're offering me that job with a salary. Those are all great signals to send. Again, I think that a lot of companies are missing out and they're like, “No, we're not going to hire that person. We're just going to hold out until we find the next person that's a little bit better.” I think that that's where some things really drop off in the process, for sure: companies hold out too long and next thing they know, months have gone by and they've wasted tons of money when they could have just hired somebody a long time ago and just trained them. I think the idea of an open source collective on something like this is pretty interesting. At the same time, it would be a little subjective on “how quickly could someone ramp up on, or onboard onto, a particular technology.” Because everybody has different learning styles and unless you're finding somebody to curate – like if you're a Ruby programmer and you're trying to learn Python, this is the de facto resource that you need to look at. I think it could be a little bit subjective, but I think that there's still some opportunity there to get community input on what should the interview process be? How long should it really be? How many rounds of interviews should there be, from both the candidate's experience as well as the company's experience, and say, as a business, this is why we have you doing these kinds of things. That's really what I've been trying to teach as part of the Tech Interview Guide and the daily email series: from my perspective in the business, this is why. This is why I have you do a certain number of rounds, or this is why I give you this kind of technical challenge, or this is why I'm asking you this kind of question. Because I'm trying to find these signals about you that tell me that you're someone that I can trust to bring on my team. It's a tough system when not many people are willing to talk about it because I think a lot of people are worried that others are going to try to game the system and go, “Oh, well, now that I know everything about your interview process, I know how to cheat my way through it and now you're going to give me that job and I really don't know what I'm doing.” But I think that at the same time, companies can also have the hire slow, fire fast mentality of like, “All right, you're not cutting it.” Like you're out right away and just rehire for that position. Again, if you're willing to trust and willing to extend that offer to begin with.
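As a small illustration of the side-by-side cheat sheets Ian mentions here, the sketch below shows a few one-to-one mappings: runnable Python, with a rough JavaScript equivalent in the comment above each line. The specific snippets are assumptions chosen for illustration, not taken from any particular cheat sheet.

```python
# Rough JavaScript equivalents shown as comments above each Python line.

# JavaScript: const names = ["Ada", "Grace", "Katherine"];
names = ["Ada", "Grace", "Katherine"]

# JavaScript: const upper = names.map(n => n.toUpperCase());
upper = [n.upper() for n in names]

# JavaScript: const short = names.filter(n => n.length <= 4);
short = [n for n in names if len(n) <= 4]

# JavaScript: names.forEach(n => console.log(n));
for n in names:
    print(n)
```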
If it doesn't work out, it doesn't work out. It's a business decision; it's not a personal thing. But it's still devastating to the person when they don't get the job, or if they get fired right away because they're not pulling their weight, but if they're cheating their way through it, then they get what they deserve, too.

MANDY: Awesome. Well, I think that's a great place to put a pin in this discussion. It is definitely not a great place to end it. I think we should head over to our reflection segment. For me, there were so many things I wrote down. I loved that you said that people's tech journey is like a choose your own adventure. You can learn one thing and then find yourself over here, and then the next thing you know, you find yourself over here. But you've picked up all these skills along the way, and that's the most important thing: as you go along this journey, you keep acquiring these skills that ultimately will make you the best programmer that you can be. Also, I really like that you said something about it being lifelong learning. Tech is lifelong learning, and not just the technical skills. It's the people skills. It's the behavioral skills. Those are the important skills. Those skills are what it ultimately comes down to in this industry: do you have the desire to learn? Do you have the desire to grow? I think that should be one of the most important things that companies are aware of when they are talking to candidates, that it's not about whether this person can do a Fibonacci sequence. It's can they learn, are they a capable person? Are they going to show up? Are they going to be a good person to have in the office? Are they going to be a light? Are they going to be supportive? Are they going to be caring? That's the ultimate. That right there for me is the ultimate, and thank you for all that insight.

ARTY: Well, I really, really loved your story, Ian, at the very beginning, of just curiosity and how you started your journey, getting into programming and then ended up finding ways to give back, and getting really excited about seeing people's light bulbs go off and how much joy you got from those experiences, connecting with another individual and making that happen. I know we've gotten on this long tangent of pretty abstract, big topics of just like, here's the brokenness in the industry and what are some strategies we can use to solve these large-scale problems. But I think you said some really important things about the importance of these one-on-one connections and how the real change happens in the context of a relationship. Although we're thinking about these big things, to actually make those changes, to actually make that difference, it happens in our local context. It happens in our companies. It happens with the people that we interact with on a one-on-one basis and have a genuine relationship with. If we want to create change, it happens with those little ripples. It happens with affecting that one relationship and that person going and having their own ripple effects. We all have the power to influence these things through the relationships with the individuals around us.

IAN: I think my big takeaway here is we have been chatting for an hour and just how easy it is to have a conversation about, hey, what if we did this? How quickly it can just turn into, hey, as a community, what if? And just the willingness of people being in the community, wanting to make the community better,

The Cloud Pod
144: Oh the Places You'll Go at re:Invent 2021

The Cloud Pod

Play Episode Listen Later Nov 24, 2021 61:35


The Cloud Pod: Oh the Places You'll Go at re:Invent 2021 — Episode 144 On The Cloud Pod this week, as a birthday present to Ryan, the team didn't discuss his advanced age, and focused instead on their AWS re:Invent predictions. Also, the Google Cybersecurity Action Team launches a product, and Microsoft announces a new VM series in Azure. A big thanks to this week's sponsors: Foghorn Consulting, which provides full-stack cloud solutions with a focus on strategy, planning and execution for enterprises seeking to take advantage of the transformative capabilities of AWS, Google Cloud and Azure. JumpCloud, which offers a complete platform for identity, access, and device management — no matter where your users and devices are located.  This week's highlights

BIMThoughts
E2140 – Dalton Goodwin

BIMThoughts

Play Episode Listen Later Nov 23, 2021


In this episode, we chat with Dalton Goodwin, of The BIM Coordinator YouTube channel fame. We talk about his love for all things AEC and data related, like Dynamo, Revit, Python, Knime, Postman, Codewars, Bimbeats and more!

Paul's Security Weekly
Max Headroom - ASW #175

Paul's Security Weekly

Play Episode Listen Later Nov 23, 2021 69:32


This week, we welcome Liam Randall, CEO at Cosmonic, to talk about wasmCloud - Distributed Computing With WebAssembly! CNCF wasmCloud helps developers to build distributed microservices in WebAssembly that they can run across clouds, browsers, and everywhere securely! In the AppSec News: What would CVEs for CSPs look like, clever C2 in malicious Python packages, diversity in bounty programs, shared responsibility and secure defaults, breach costs to influence AppSec programs!   Show Notes: https://securityweekly.com/asw175 Segment Resources: https://webassembly.org/ https://wasmcloud.com/   Visit https://www.securityweekly.com/asw for all the latest episodes! Follow us on Twitter: https://www.twitter.com/securityweekly Like us on Facebook: https://www.facebook.com/secweekly

Paul's Security Weekly TV
CVEs 4 CSPs, Malicious PyPi, Bounty Programs, Shared Responsibility, & Breach Costs - ASW #175

Paul's Security Weekly TV

Play Episode Listen Later Nov 23, 2021 35:16


This week in the AppSec News: What would CVEs for CSPs look like, clever C2 in malicious Python packages, diversity in bounty programs, shared responsibility and secure defaults, breach costs to influence AppSec programs!   Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw175

Outspoken with Shana Cosgrove
I'm Speaking: Lauren Lockwood, Founder & Principal, Bloom Works LLC, and Emily Wright-Moore, Principal at Bloom Works.

Outspoken with Shana Cosgrove

Play Episode Listen Later Nov 23, 2021 58:04


Remote Collaboration, Tech Efficiency, and Career Journeys. In this episode of The Outspoken Podcast, host Shana Cosgrove talks to Lauren Lockwood, Founder & Principal, Bloom Works LLC, and Emily Wright-Moore, Principal at Bloom Works LLC. Lauren and Emily give their insight on the very real effects of outdated vital government services and what can be done to mitigate those effects. They also discuss their very different paths that led them to starting Bloom and why following their passion has been so powerful. Lauren and Emily also reveal their favorite projects with Bloom and how those projects reflect Bloom's mission. Finally, we hear what they wanted to be when they grew up, how they combat ageism and sexism within tech, and their advice to their younger selves! QUOTES "One of my favorite moments in a meeting was early on, working with the state. Me, Emily, and this other woman show up for our first meeting with their team to redesign some of their systems. The people from this agency walk in and the Commissioner is a woman, her deputy is a woman... they sit down with us, and they were so relieved that we weren't just a team of dudes walking into the room." - Lauren Lockwood [50:10] "I feel like we default to this specialist mindset. And I think being a generalist is pretty, pretty great. I like knowing a lot of things and I like pulling in a lot of past jobs and past skills. So I feel like I would just stop worrying about that. Like, it's fine" - Emily Wright-Moore [52:33] "Switching careers is not a failure, I don't think. I think it is like you learned a lot and it was good for that time, but not necessarily forever. Think of this time as an exploratory time, rather than a first step on a journey." - Lauren Lockwood [53:14]   TIMESTAMPS  [00:04] Intro [01:43] Meet Lauren Lockwood and Emily Wright-Moore [02:11] How Lauren and Emily met [03:04] What is Digital Transformation? [04:57] The Cost of Inefficiencies in Vital Services [05:26] Updating Outdated Technology [08:28] What is Bloom Works? 
[10:04] Bloom's Tech Stack [11:30] Bloom's Ideal Sample Size of Users [12:51] User Groups [15:45] Lauren's Favorite Bloom Projects [18:30] Emily's Favorite Bloom Projects [19:57] Sharing Bloom's Work with the World [21:00] Starting Bloom Works [21:58] Lauren's Experience after Harvard [26:14] Lauren's Unique Career Path [29:09] Lauren's Work with the Mayor of Boston [30:53] Redesigning Boston.gov [31:30] What Lauren Wanted to be as a Kid [33:31] What Emily Wanted to be as a Kid [36:39] Emily's Experience at Parsons School of Design [39:46] Correcting Misperceptions in the Industry [41:35] What is United States Digital Service [43:56] Emily's Journey after her Tour [44:27] Lauren's Work Background [46:31] Combating Ageism within Tech [50:04] Sexism in Tech [52:17] Advice to their Younger Selves [54:01] Lauren's Book Club [54:37] Final Thoughts [57:44] Outro   RESOURCES https://www.va.gov/ (US Department of Veterans Affairs) https://www.cnbc.com/2020/07/09/coronavirus-unemployment-benefits-claims-are-the-worst-in-history.html (Issues with Unemployment during COVID) https://www.microsoft.com/en-us/microsoft-365/outlook/email-and-calendar-software-microsoft-outlook (Microsoft Outlook) https://maps.google.com/ (Google Maps) https://reactjs.org/ (React) https://dotnet.microsoft.com/ (Microsoft .net) https://www.microfocus.com/en-us/what-is/cobol (COBOL) https://www.figma.com/ (Figma) https://www.nsf.gov/news/special_reports/i-corps/ (I-Corps) https://usds.gov/ (United States Digital Service) https://www.thinkof-us.org/ (Think Of Us) https://www.thinkof-us.org/about/who-we-are/ceo (Sixto Cancel) https://www.hbs.edu/ (Harvard Business School) https://www.morganstanley.com/ (Morgan Stanley) https://www.boston.gov/departments/mayors-office/martin-j-walsh (Marty Walsh) https://www.python.org/ (Python) https://www.defense.gov/ (US Department of Defense)...

Talk Python To Me - Python conversations for passionate developers
#342: Python in Architecture (as in actual buildings)

Talk Python To Me - Python conversations for passionate developers

Play Episode Listen Later Nov 23, 2021 61:28


At PyCon 2017, Jake Vanderplas gave a great keynote where he said, "Python is a mosaic." He described how Python is stronger and growing because it's being adopted and used by people with diverse technical backgrounds. In this episode, we're adding to that mosaic by diving into how Python is being used in the architecture, engineering, and construction industry. Our guest, Gui Talarico, has worked as an architect who helped automate that world by bringing Python to solve problems others were tackling with point-and-click tooling. I think you'll enjoy this look into that world. We also touch on his project pyairtable near the end as well. Links from the show Pyninsula Python in Architecture Talk: youtube.com Using technology to scale building design processes at WeWork talk: youtube.com Revit software: autodesk.com Creating a command in pyRevit: notion.so IronPython: ironpython.net Python.NET: github.com revitpythonwrapper: readthedocs.io aec.works site: aec.works Speckle: speckle.systems Ladybug Tools: ladybug.tools Airtable: airtable.com PyAirtable: pyairtable.readthedocs.io PyAirtable ORM: pyairtable.readthedocs.io Revitron: github.com WeWork: wework.com Article: Using Airtable as a Content Backend: medium.com Python is a Mosaic Talk: youtube.com Watch this episode on YouTube: youtube.com Episode transcripts: talkpython.fm ---------- Stay in touch with us ---------- Subscribe on YouTube (for live streams): youtube.com Follow Talk Python on Twitter: @talkpython Follow Michael on Twitter: @mkennedy Sponsors Shortcut Linode AssemblyAI Talk Python Training
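Since pyairtable comes up at the end of the episode, here is a minimal sketch of what basic usage of that wrapper looked like around its 1.x releases, based on its documented interface. The API key, base ID, and table name below are placeholders rather than anything from the show, so treat this as an illustration, not the episode's code.

    from pyairtable import Table

    # Placeholders -- substitute your own Airtable API key, base ID, and table name.
    table = Table("keyXXXXXXXXXXXXXX", "appXXXXXXXXXXXXXX", "Projects")

    # Create one record, then list what is currently in the table.
    table.create({"Name": "Lobby renovation", "Status": "In progress"})
    for record in table.all():
        print(record["id"], record["fields"].get("Name"))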

Python Bytes
#260 It's brutally simple: made just from pickle and zip

Python Bytes

Play Episode Listen Later Nov 23, 2021 48:49


Watch the live stream: Watch on YouTube About the show Sponsored by Shortcut - Get started at shortcut.com/pythonbytes Special guest: Chris Patti

Brian #1: Using cog to update --help in a Markdown README file Simon Willison I've wanted to have a use case for Ned Batchelder's cog. Cog is a utility that looks for special blocks: [[[cog some code ]]] and [[[end]]]. These blocks can live in comments (HTML comments, in the case of Markdown). When you run cog on a file, it runs the "some code" and puts the output after the middle ]]] and before the [[[end]]]. Simon has come up with an excellent use: running --help and capturing the output for a README.md file for a CLI project (a rough sketch of this pattern follows these notes). He even wrote a test, pytest of course, to check if the README.md needs updating.

Michael #2: An oral history of Bank Python Bank Python implementations are effectively proprietary forks of the entire Python ecosystem which are in use at many (but not all) of the biggest investment banks. The first thing to know about Minerva is that it is built on a global database of Python objects. Barbara is a simple key value store with a hierarchical key space. It's brutally simple: made just from pickle and zip. Applications also commonly store their internal state in Barbara - writing dataclasses straight in and out with only very simple locking and transactions (if any). There is no filesystem available to Minerva scripts and the little bits of data that scripts pick up have to be put into Barbara. Barbara also has some "overlay" features:
# connect to multiple rings: keys are 'overlaid' in order of
# the provided ring names
db = barbara.open("middleoffice;ficc;default")
# get /Etc/Something from the 'middleoffice' ring if it exists there,
# otherwise try 'ficc' and finally the default ring
some_obj = db["/Etc/Something"]
Lots of info about modeling with classes (instruments, books, etc.). If you understand Excel, you will start to recognize similarities. In Excel, spreadsheet cells are also updated based on their dependencies, also as a directed acyclic graph. Dagger allows people to put their Excel-style modelling calculations into Python, write tests for them, and control their versioning without having to mess around with files like CDS-OF-CDS EURO DESK 20180103 Final (final) (2).xlsx. Dagger is a key technology to get financial models out of Excel, into a programming language, and under tests and version control. Time to drop a bit of a bombshell: the source code is in Barbara too, not on disk. Remain composed. It's kept in a special Barbara ring called sourcecode. Interesting table structures, like Pandas, but closer to a DB (MnTable). Over time the divergence between Bank Python and Open Source Python grows. Technology churns on both sides, much faster outside than in of course, but they do not get closer. Minerva has its own IDE - no other IDEs work if you keep your source files in a giant global database. What I can't understand is why it contains its own web framework. Investment banks have a one-way approach to open source software: (some of) it can come in, but none of it can go out. BTW, I "read" this with the naturalreaders app.

Chris #3: Pyxel Pyxel is a 'retro gaming console' written in Python! This might seem old and un-shiny, but the restrictions imposed by the environment gift simplicity. Vastly decreased learning time and effort compared to something like Unity or even Pygame. Straightforward, simple commands, just like it was for micro-computers in the 80s: cls(), line(), rect(), circ(), etc.
Pyxel is somewhat more Python and less console than others like PICO-8 or TIC-80, but this is a feature! Use your regular development environment to build.

Brian #4: How to Ditch Codecov for Python Projects Hynek Schlawack Codecov is a third-party service that checks your coverage output and fails a build if coverage dropped. It's not without issues. Hynek is using coverage.py's --fail-under flag in place of this in GitHub Actions. It's not as simple as just adding a flag if you are using --parallel to combine coverage for multiple test runs into one report. Hynek is utilizing the coverage output as an artifact for each test run, then pulling them all together in a coverage stage to combine and check coverage. He provides the GH Actions snippet, and even links to a working workflow file using this process. Nice!

Michael #5: tiptop (like glances) via Zach Villers tiptop is a command-line system monitoring tool in the spirit of top. It displays various interesting system stats, graphs them, and works on all operating systems. Really nice visualization for your servers. Good candidate for pipx install tiptop.

Chris #6: pyc64 A Commodore 64 emulator written in pure Python! Not 100% complete - screen drawing is PETSCII character mode only. This still allows for a lot of interesting apps & exploration. Actual machine emulation using py65 - a pure Python 6502 chip emulator! You can pop to a Python REPL from inside the emulator and examine data structures like memory, registers, etc! An incredible example of what Python is capable of. 0.6 MHz with CPython and over 2 MHz with PyPy!

Extras Michael: Michael's FlaskCon 2021 HTMX Talk Chris: Amazon OpsTech IT is hiring! (If deemed appropriate :)

Joke: I hate how the screens get bright so early this time of year
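For the cog item above, a rough sketch of the README pattern Simon describes might look like the block below. The [[[cog ]]] / [[[end]]] markers are cog's documented syntax; the CLI name mycli is a placeholder, and this is only an approximation of the idea, not Simon's actual code. Running cog -r README.md (the cog command comes with the cogapp package) executes the embedded Python in place and rewrites everything between the two comment markers with the freshly captured --help output.

    <!-- [[[cog
    import subprocess
    import cog

    # Capture the CLI's --help text; "mycli" is a hypothetical command name.
    help_text = subprocess.run(
        ["mycli", "--help"], capture_output=True, text=True
    ).stdout
    # Emit a fenced block that will appear in the rendered README.
    cog.out("\n```\n" + help_text + "```\n")
    ]]] -->
    <!-- [[[end]]] -->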

Daily Growth Podcast
Cyber Security News. Nov 22, 2021

Daily Growth Podcast

Play Episode Listen Later Nov 22, 2021 16:32


Cyber Security Articles
Insurance companies cutting cyber coverage amid surge in ransomware attacks
Nearly 600,000 records stolen from Utah medical services provider
Police and banks tell shoppers to be vigilant for Black Friday scams
Facebook delays Messenger and Instagram End-to-End Encryption even further
Researchers discovered 11 malicious Python packages in the PyPI repository that can steal Discord access tokens and passwords, and conduct attacks.
--- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/neomoses/support

Test & Code - Python Testing & Development
171: How and why I use pytest's xfail - Paul Ganssle

Test & Code - Python Testing & Development

Play Episode Listen Later Nov 22, 2021 38:26


Paul Ganssle is a software developer at Google, a core Python dev, and an open source maintainer for many projects, and he has some thoughts about pytest's xfail. He was an early skeptic of using xfail and is now a proponent of the feature. In this episode, we talk about some open source workflows that are possible because of xfail. Special Guest: Paul Ganssle.
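For anyone who has not used the feature the episode centers on, here is a tiny, self-contained sketch of pytest's xfail marker. The parse_int function is a made-up stand-in for real project code; the marker itself, including strict=True (which turns an unexpected pass into a failure so you notice the moment the bug actually gets fixed), is standard pytest.

    import pytest

    def parse_int(text):
        # Known limitation: thousands separators are not handled yet.
        return int(text)

    @pytest.mark.xfail(reason="thousands separators not supported yet", strict=True)
    def test_parse_int_with_commas():
        assert parse_int("1,000") == 1000

    def test_parse_int_plain():
        assert parse_int("1000") == 1000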

The History of Computing
Perl, Larry Wall, and Camels

The History of Computing

Play Episode Listen Later Nov 21, 2021 15:00


Perl was started by Larry Wall in 1987. Unisys had just released the 2200 series and only a few years stopped using the name UNIVAC for any of their mainframes. They merged with Burroughs the year before to form Unisys. The 2200 was a continuation of the 36-bit UNIVAC 1107, which went all the way back to 1962. Wall was one of the 100,000 employees that helped bring in over 10 and a half billion in revenues, making Unisys the second largest computing company in the world at the time. They merged just in time for the mainframe market to start contracting. Wall had grown up in LA and Washington and went to grad school at the University of California at Berkeley. He went to the Jet Propulsion Laboratory after Grad School and then landed at System Development Corporation, which had spun out of the SAGE missile air defense system in 1955 and merged into Burroughs in 1986, becoming Unisys Defense Systems. The Cold War had been good to Burroughs after SDC built the timesharing components of the AN/FSQ-32 and the JOVIAL programming language. But changes were coming. Unix System V had been released in 1983 and by 1986 there was a rivalry with BSD, which had been spun out of UC - Berkeley where Wall went to school. And by then AT&T had built up the Unix System Development Laboratory, so Unix was no longer just a language for academics. Wall had some complicated text manipulation to program on these new Unix system and as many of us have run into, when we exceed a certain amount of code, awk becomes unwieldy - both from a sheer amount of impossible to read code and from a runtime perspective. Others were running into the same thing and so he got started on a new language he named Practical Extraction And Report Language, or Perl for short. Or maybe it stands for Pathologically Eclectic Rubbish Lister. Only Wall could know. The rise of personal computers gave way to the rise of newsgroups, and NNTP went to the IETF to become an Internet Draft in RFC 977. People were posting tools to this new medium and Wall posted his little Perl project to comp.sources.unix in 1988, quickly iterating to Perl 2 where he added the languages form of regular expressions. This is when Perl became one of the best programming languages for text processing and regular expressions available at the time. Another quick iteration came when more and more people were trying to write arbitrary data into objects with the rise of byte-oriented binary streams. This allowed us to not only read data from text streams, terminated by newline characters, but to read and write with any old characters we wanted to. And so the era of socket-based client-server technologies was upon us. And yet, Perl would become even more influential in the next wave of technology as it matured alongside the web. In the meantime, adoption was increasing and the only real resource to learn Perl was a the manual, or man, page. So Wall worked with Randal Schwartz to write Programming Perl for O'Reilly press in 1991. O'Reilly has always put animals on the front of their books and this one came with a Camel on it. And so it became known as “the pink camel” due to the fact that the art was pink and later the art was blue and so became just “the Camel book”. The book became the primary reference for Perl programmers and by then the web was on the rise. Yet perl was still more of a programming language for text manipulation. And yet most of what we did as programmers at the time was text manipulation. Linux came around in 1991 as well. 
Those working on these projects probably had no clue what kind of storm was coming with the web, written in 1990, Linux, written in 1991, php in 1994, and mysql written in 1995. It was an era of new languages to support new ways of programming. But this is about Perl - whose fate is somewhat intertwined. Perl 4 came in 1993. It was modular, so you could pull in external libraries of code. And so CPAN came along that year as well. It's a repository of modules written in Perl and then dropped into a location on a file system that was set at the time perl was compiled, like /usr/lib/perl5. CPAN covers far more libraries than just perl, but there are now over a quarter million packages available, with mirrors on every continent except Antarctica. That second edition coincided with the release of Perl 5 and was published in 1996. The changes to the language had slowed down for a bit, but Perl 5 saw the addition of packages, objects, and references, and the authors added Tom Christiansen to help with the ever-growing camel book. Perl 5 also brought the extension system we think of today - somewhat based off the module system in Linux. That meant we could load the base perl into memory and call those extensions. Meanwhile, the web had been on the rise and one aspect of the power of the web was that while there were front-ends that were stateless, cookies had come along to maintain a user state. Given the variety of systems HTML was able to talk to, mod_perl came along in 1996, as Gisle Aas and others started working on ways to embed perl into pages. Ken Coar chaired a working group in 1997 to formalize the concept of the Common Gateway Interface. Here, we'd have a common way to call external programs from web servers. The era of web interactivity was upon us. Pages that were constructed on the fly could call scripts. And much of what was being done was text manipulation. One of the powerful aspects of Perl was that you didn't have to compile. It was interpreted and yet dynamic. This meant a source control system could push changes to a site without uploading a new jar - as had to be done with a language like Java. And yet, object-oriented programming is weird in perl. We bless an object and then invoke it with arrow syntax, which is how Perl locates subroutines. That got fixed in Perl 6, but maybe 20 years too late to use a dot notation as is the case in Java and Python. Perl 5.6 was released in 2000 and the team rewrote the camel book from the ground up in the 3rd edition, adding Jon Orwant to the team. This is also when they began the design process for Perl 6. By then the web was huge and those mod_perl servlets or CGI scripts were, along with PHP and other ways of developing interactive sites, becoming common. And because of CGI, we didn't have to give the web server daemons access to too many local resources and could swap languages in and out. There are more modern ways now, but nearly every site needed CGI enabled back then. Perl wasn't just used in web programming. I've piped a lot of shell scripts out to perl over the years and used perl to do complicated regular expressions. Linux, Mac OS X, and other variants that followed Unix System V supported using perl in scripting and as an interpreter for stand-alone scripts. But I do that less and less these days as well. The rapid rise of the web meant that a lot of languages slowed in their development. 
There was too much going on, too much code being developed, too few developers to work on the open source or open standards for a project like Perl. Or is it that Python came along and represented a different approach, with modules in Python created to do much of what Perl had done before? Perl saw small, slow changes. Python moved much more quickly. More modules came faster, and object-oriented programming techniques hadn't been retrofitted into the language. As the 2010s came to a close, machine learning was on the rise and many more modules were being developed for Python than for Perl. Either way, the fourth edition of the Camel Book came in 2012, when Unicode and multi-threading were added to Perl. Now with Brian Foy as a co-author. And yet, Perl 6 sat in an "it's coming so soon," or "it's right around the corner," or "it's imminent" state for over a decade. Then 2019 saw Perl 6 finally released. It was renamed to Raku - given how big a change was involved. They'd opened up requests for comments all the way back in 2000. The aim was to remove what they considered historical warts, that the rest of us might call technical debt. Rather than a camel, they gave it a mascot called Camelia, the Raku Bug. Thing is, Perl had a solid 10% market share for languages around 20 years ago. It was a niche language maybe, but that popularity has slowly fizzled out and appears to be on a short resurgence with the introduction of 6 - but one that might just be temporary. One aspect I've always loved about programming is the second we're done with anything, we think of it as technical debt. Maybe the language or server matures. Maybe the business logic matures. Maybe it's just our own skills. This means we're always rebuilding little pieces of our code - constantly refining as we go. If we're looking at Perl 6 today we have to look at whether we want to try and do something in Python 3 or another language - or try and just update Perl. If Perl isn't being used in very many micro-services, then given the compliance requirements to use each tool in our stack, it becomes somewhat costly to think of improving our craft with Perl rather than looking to use solutions that are possibly more expensive at runtime, but less expensive to maintain. I hope Perl 6 grows and thrives and is everything we wanted it to be back in the early 2000s. It helped so much in an era and we owe the team that built it and all those modules so much. I'll certainly be watching adoption with fingers crossed that it doesn't fade away. Especially since I still have a few perl-based lambda functions out there that I'd have to rewrite. And I'd like to keep using Perl for them!

Financial Investing Radio
FIR 135: Interview - Can AI See Better Than Humans ??

Financial Investing Radio

Play Episode Listen Later Nov 20, 2021 20:12


In this episode we look at can AI help me see better in a cost effective way! Grant Everybody, welcome to another episode of click AI radio. Okay, I have in the house today with me, someone I've been very excited to talk with. He and his organization reached out to me and I was quite surprised when I saw the cool AI solution that they have been bringing to the market. And Carlos has been giving me a little background on this. And I think you'll be excited to hear what it is he's putting together. But first and foremost, welcome, Carlos Anchia. You got Yeah. All right. There you go. Carlos, please, welcome and introduce yourself. Carlos Hey, Grant. Thanks a lot for having us on. Like you said, my name is Carlos Anchia. I'm the CEO of Plainsight AI. And we're bringing to market an end to end computer vision AI platform. I'm really, really happy to be here love talking about AI, computer vision, and how we can get more people to use it. Grant So okay, so tell me a little bit about what got you going down here. As you and I were just chatting a moment ago, there's so many components to AI, or it's such a broad range of technologies there. What got you thinking about the CV or the computer vision space? What problem? What How did you get started there? Carlos Yeah, that's a really good question. So like you said, AI, the breadth of AI is huge, you have deep learning, you have machine learning, forecasting, prediction, computer vision. And these are just a few. There's a lot of different applications for AI and places you can go down and succeed in. From our respect, we really, we really focus in on computer vision, specifically how to apply that to imagery and video. Today, there's a huge amounts of data going throughout the internet and in enterprise storage classes, where you can't really extract the value of that data unless you actually perform some sort of computer vision machine learning on that type of data. So we're really extracting the value of the picture or the video. So it can be understood by machines. So think of a dog and a cat in a in a picture, right? Those cases, the machine doesn't know it's a dog and a cat, you have to train it. And that's where computer vision comes in. And really, we got into it because we were pulled in by customers, customers of ours wanted to start doing more computer vision and leveraging our platform that we had around high throughput, ingestion, and event driven pipelines. So these customers came to us and hey, you know, this is great, we'd love to really use this for computer vision. And the more and more that kept happening, we kept retooling around the platform. And finally, the platform from end to end is purpose built to do computer vision technology. And it really allows us to focus in on on what we're good at today. Right? And that's really delivering value within the computer vision space. Grant So I remember the first time I wrote some of the OpenCV framework code, right. And that was my first introduced introduction to it. This is a number of years ago. And I started thinking, Oh, this is so cool. So I'm writing all this Python code, right, building this stuff out. And then I'm thinking, how many people you know, are actually leveraging this platform and look at even though open CV is cool, and it's got a lot of capability, it still takes a lot, you know, to get everything out of there. So can you talk about how you relate to that open CV? And what is it that you're doing relative to that? And how much easier do you guys make this? 
Carlos Yeah, so I mean, you hit the nail on the head there, right? So from a developer perspective, it's really around, I need to learn open CV, I need to learn Python, I need to learn containerization I need to learn deployments. There's a variety of different companies that, you know, they're all great in their own right, right. Every one of those companies that we just talked about organizations are contributing tremendously to AI. But from a developer's perspective, you really have to learn a little bit of everything to be able to orchestrate a solution. And finally, when you get to, hey, I use AI. Let's pretend we're looking at strawberries. Hey, look, I built a model that the Texas strawberry that is your over the moon excited, but the very next thing is around, okay, how do I take that and deploy it 1000 times over in a field across the world and understand how to make that in an operational fashion where you know, it can be supported, maintained update, and that's really where we have this this crux of an organization where it's really different building something on a on a desk for a one time use. And and there's a lot of wins through that process. But then taking that and operationalizing for business driving revenues driving corporate goals around, why would that feature is being implemented, that's really where we come in, we want to be able to take off that that single single path of workflow where it's a little bit of everything to orchestrate a solution, and provide a centralized place where other people, including developers can go and help build that workflow in a meaningful way where it's complete. Grant So operationalizing, those models, I find, that's one of the biggest, or the most challenging aspects to this, it's one thing as you know, to, to build out enough to sort of prove something out and get some of the initial feedback, but to actually get it into production. I think I saw MIT not long ago, maybe this a year ago, now, they had come out with this report, it was through the Boston Consulting Group as well, they'd mentioned something about, hey, you know, 10% of organizations doing AI are getting return on their investment. And, and, of course, when you look at all of the investment of the takes for the business to really stand up all the data scientists and all the ML work. And you can see why the numbers translate that way. So to me, it feels like not only doing this in the area of CV, but the problem you're really trying to solve it feels like is you're attacking that ROI problem, which is you could take this kind of capability into business say you don't have to stand up all of these deep technical capabilities. Rather, you can achieve ROI sooner than rather than laters. Is that Is that accurate? Carlos That's correct. And I think it's really through the adoption of technology and you hit a you hit a really strong point for us there around the the difference between it works and it's operational. That's that's really the path of your your, you're less there in the CV world and more there with the DevOps ml ops portion of it, getting machines running consistently, with the right versions of deployment strategy, that latter half of it, just as important as the model building pieces of it. But even after you get to that piece, you need a way to improve, and improvement in the model is very costly if it's not automated. 
Because I mean, you can just look at the the loss for a simple detector, like a strawberry, where, you know, if the model starts to perform poorly, you're not pulling as many strawberries out of the field. So you need a way to be able to update that model, quickly, get training data into a platform and push a new model back out. And it's really around how fast you can go end to end with that workflow. Again, and again. And again. And again, this is that continuous improvement that we have born into us from previous software development life, but really in in machine learning and computer vision, your ability to train, retrain, and redeploy. That is where you really get the benefit out of your workflow. Grant Well, that really confirms my experience with AI will. Typically I'll refer to the term that we've been using, as we call it a SmartStep. It's that notion that I need to be able to refactor my models and take in consideration that changed context around me, whether it comes in from the world or from the customer, or whatever that means, some level of adjustments taken place that begins to invalidate my previous AI model. And I need to be able to quickly make those adjustments. That's fascinating. How long typically does it take for you to do that kind of refactoring of your models? Is that Is it a day? Is it a year, a month? Or the answer is? Well, it depends. Carlos So it's twofold, right? So it's, it's hours to do that. But it really depends on the complexity of the model, and how long you have to train. But in an automated workflow, you're you're continuously adding data to your training set, that are lower quality predictions, where you can retrain automatically when you hit a certain threshold, and then validate the model and push that back out into your production alized environment. So it's it when you go to develop these sort of workflows. You really have to start with whatever I build, I know I have to improve on later. So that improvement cycle ends up costing a lot if it's not part of the initial discussion around how do we count strawberries, right? So it's evident you and I can nerd out on this. Let me shift the focus a little bit and ask about it from say your your customer Write from their perspective, what is it that they need to be able to do to be successful with your solution? What skills or capabilities do they have to bring to the table? Yeah, and I think it's, I think I have this conversation a lot with our clients. And it's really less about them having technology around data science and building model, and more around a collaborative environment, where organizations, you know, they have a culture of success. But that culture of success is really borne by holding hands through the fire, it's, it's being able to commit and lean in when the organization sees something that's really important to them, either either from a technical perspective or revenue perspective. And it's these companies and these types of people that get to rally around a centralized platform where they can build and collaborate with machine learning computer vision applications. And, you know, it's it's a, it's really interesting to see the companies that succeed here, because it's really based on a culture of winning, right, where the wind doesn't have to be the hardest, most technical, logically difficult problem, because complexity really drives timelines. 
And if you're looking to change from an organization's perspective, start getting the little wins, get the little wins, start having some adoption within the company around, wow, computer vision is working. We've identified these problems in a few hours, we have a solution deployed, you start building this sense of confidence in the organization where you can take on those larger tasks. But you have to start with a build up, you can't just go right to the highest ROI problem. No one starts at human genome sequencing. Grant Have you, or do we got a problem? Yeah. Back up, back up. So So all right. So it means to me it sounds like as an organization to succeed with this getting my problem definition, understood or crisply put together first, what would be an obvious thing to do? But how long does it take for me to iterate? Before I know that I've got value, that I've pursued the right level of the problem? You You made an interesting comment a minute ago, you're like, oh, within a couple hours, I could potentially retrain the model and have that back operationally. That means if I can fail fast, right, if I can pick my problem space, get something out there operation, try it fail fast, and then continue to iterate with AI as my helper that that's really, really quite powerful is that the model that the your person? Carlos It is the model? That's exactly right. And it's not just hours to retrain, it takes hours to start, right. And just to highlight you kind of started with, we have to define that problem set first. So even after we define that problem set, a lot of times we have to go back and redefine that problem set, and really the piece around failing fast. It's it's experimentation. And do we have the right cameras? Do we have the right vantage point is the model correct, you want to be able to cycle as fast as you can through that experimentation phase. And sometimes you have to go back and redefine that problem set. Because you're learning more as you go, right? And you're evolving into okay, I now understand the corner cases a bit better. And with the platform, you really can cycle that quickly. I mean, machine learning at scale is really how fast you can iterate through improvement. Grant It's quite, I think it's quite a testament to how the AI just world in general is improving. I know that you and I were talking earlier, years ago, you know, when I first started writing some TensorFlow code and Keras code, you know, the, the time it took to fail was much longer, right. And then the cycles were huge and, and getting this down to a matter of hours or even a few days, you know, for an enterprise's is massive. What's the, from what you've seen terms of different industries? Are there certain industries that tend to be leaning into this and adopting it or the is there no pattern yet? Carlos No, there's a there's a definite pattern. And in 2022, we'll all see kind of what that what that's looking like. And it's really an industry that has traditionally not been able to go through that digital transformation. So think of think of think of a piece that's a very manual piece, right, like physical inspection, where humans would look at something, they'd write down their notes on a piece of paper, then that that item would go through either pass or fail or some criteria for rework. That's all possible. Now with computer vision years ago, that was impossible the accuracy wasn't high enough, plain and simple. It just took it wasn't, it wasn't better than a human. 
Now we have models that are better than a human for visual inspection. And these industries are digitizing their workflow. So it's not only the feature of computer vision, but it's also now I have a digital record of all the transactions, I have extracted video information, it makes auditing easy for those sectors that have a lot of regulatory compliance, those that require proactive compliance to audit requirements, as well as visibility. Visibility has always been an it's funny when we talk about visibility, but it's computer vision, but like a lot of human processes, that there's zero visibility in it there, it's really difficult to audit, you know, why is this working better or not? So having that, that digitization of the flow with the feature of computer vision allows us to extract the value. And industries like agriculture are going I mean, agriculture has been a leader in technology for a while, but now you're really seeing adoption at livestock and row crop with drone technology. It's a very rich image environment, medical space, medical space for computer vision in 2022 to 2028, is estimated to be billions of dollars just with medical imaging. And that's not that's not the the total addressable market for the hardware, it's just the imaging piece. So we see we see a lot of growth in sectors that are going through this digital transformation that are adopting technologies that are now getting to the point where they can get pushed down into the masses instead of just the top five companies in the world. Grant Excellent, it seems to you part of your comment earlier made me think about process optimization for organizations, and the ability to extract processes, you're familiar with process mining, right? The ability to extract, you know, out of logs of these organizations and doing something like that where you can produce this visual representation of that, and then building models against that, to optimize your process might be an interesting use case. Yeah, that's fascinating. Carlos That's a really good point, right? Because that's, that's a different portion of AI that can be applied to like just log analysis, that then would allow you to go back and Okay, now that we have the process mind, where can we improve along the process? Grant Yeah, yeah. Amazing. So many uses and use cases around around this CV area, for sure. So let's say that someone listening to this wanted to learn more about it, where would they go? How would they? How would they find more about your organization? Carlos You can find us everywhere, right? We have a website, plainsight.ai. We're all over LinkedIn, we have Twitter, we're on Reddit, we have a Medium blog, there's a Slack channel where we geek out around computer vision use cases and how we can improve the world through computer vision. We're really we're really out there and feel free if you have questions come reach out to us. We have amazing staff that are looking to empower people in AI. So if it's through just just a question around how does this thing work, we'd love to talk to you if it's Hey, we're kind of stuck in our journey. We need some help reach out to us we can help you. Grant That's awesome. Carlos I can't thank you enough for reaching out to me and for a listening to click AI radio, but also for reaching out and sharing what it is you are you and your organization are bringing to the market think you're solving some awesome problems. Carlos Thanks a lot, Grant. Appreciate it. 
Always appreciate talking about computer vision and AI, and thank you to you and your listeners, and really appreciate what you're doing for the AI space. Grant Alright, thanks again, Carlos. And again, everybody. Thanks for joining and until next time, go get some computer vision from Plainsight.
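Grant mentions his first brush with OpenCV in Python earlier in the conversation. As a hedged illustration of what that kind of starter computer vision code looks like (the file names are placeholders, and this is a generic OpenCV sketch, not anything from Plainsight's platform), a minimal edge-detection example:

    import cv2

    # "frame.jpg" is a placeholder path; cv2.imread returns None if the file is missing.
    image = cv2.imread("frame.jpg")
    if image is None:
        raise SystemExit("Could not read frame.jpg")

    # Convert to grayscale and run Canny edge detection.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, threshold1=100, threshold2=200)
    cv2.imwrite("frame_edges.jpg", edges)
    print("Edge pixels found:", int((edges > 0).sum()))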

The CyberWire
Using bidirectionality override characters to obscure code. [Research Saturday]

The CyberWire

Play Episode Listen Later Nov 20, 2021 26:25


Guests Nicholas Boucher and Ross Anderson from the University of Cambridge join Dave Bittner to discuss their research, "Trojan Source: Invisible Vulnerabilities." The researchers present a new type of attack in which source code is maliciously encoded so that it appears different to a compiler and to the human eye. This attack exploits subtleties in text-encoding standards such as Unicode to produce source code whose tokens are logically encoded in a different order from the one in which they are displayed, leading to vulnerabilities that cannot be perceived directly by human code reviewers. ‘Trojan Source' attacks, as they call them, pose an immediate threat both to first-party software and of supply-chain compromise across the industry. They present working examples of Trojan-Source attacks in C, C++, C#, JavaScript, Java, Rust, Go, and Python. They propose definitive compiler-level defenses, and describe other mitigating controls that can be deployed in editors, repositories, and build pipelines while compilers are upgraded to block this attack. The project website and research can be found here: Trojan Source: Invisible Source Code Vulnerabilities project website Trojan Source: Invisible Vulnerabilities research paper
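The paper's proposed defenses include checks that editors, repositories, and build pipelines can run while compilers catch up. As a rough sketch of what such a mitigating control might look like (an illustration only, not code from the researchers), here is a scan for the Unicode bidirectional override and isolate characters this class of attack relies on:

    import pathlib
    import sys

    # Bidirectional control characters abused by Trojan Source style attacks:
    # LRE, RLE, PDF, LRO, RLO and LRI, RLI, FSI, PDI.
    BIDI_CONTROLS = {
        "\u202a", "\u202b", "\u202c", "\u202d", "\u202e",
        "\u2066", "\u2067", "\u2068", "\u2069",
    }

    def scan(path):
        """Yield (line, column) positions of bidi control characters in a file."""
        text = pathlib.Path(path).read_text(encoding="utf-8", errors="replace")
        for lineno, line in enumerate(text.splitlines(), start=1):
            for col, ch in enumerate(line, start=1):
                if ch in BIDI_CONTROLS:
                    yield lineno, col

    if __name__ == "__main__":
        for filename in sys.argv[1:]:
            for lineno, col in scan(filename):
                print(f"{filename}:{lineno}:{col}: bidirectional control character")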

Research Saturday
Using bidirectionality override characters to obscure code.

Research Saturday

Play Episode Listen Later Nov 20, 2021 26:25


Guests Nicholas Boucher and Ross Anderson from the University of Cambridge join Dave Bittner to discuss their research, "Trojan Source: Invisible Vulnerabilities." The researchers present a new type of attack in which source code is maliciously encoded so that it appears different to a compiler and to the human eye. This attack exploits subtleties in text-encoding standards such as Unicode to produce source code whose tokens are logically encoded in a different order from the one in which they are displayed, leading to vulnerabilities that cannot be perceived directly by human code reviewers. ‘Trojan Source' attacks, as they call them, pose an immediate threat both to first-party software and of supply-chain compromise across the industry. They present working examples of Trojan-Source attacks in C, C++, C#, JavaScript, Java, Rust, Go, and Python. They propose definitive compiler-level defenses, and describe other mitigating controls that can be deployed in editors, repositories, and build pipelines while compilers are upgraded to block this attack. The project website and research can be found here: Trojan Source: Invisible Source Code Vulnerabilities project website Trojan Source: Invisible Vulnerabilities research paper

The Real Python Podcast
Building a Content Aggregator and Working With RSS in Python

The Real Python Podcast

Play Episode Listen Later Nov 19, 2021 57:55


Have you wanted to work with RSS feeds in Python? Maybe you're looking for a new project to build for your portfolio that uses Django, unit tests, and custom commands. This week on the show, we have Real Python author Ricky White to talk about his recent step-by-step project titled, "Build a Content Aggregator in Python."
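If RSS in Python is new to you, feedparser is the usual library for this kind of parsing; whether it is exactly what Ricky's content aggregator project uses, the episode and tutorial will confirm. A minimal sketch, with the feed URL as a placeholder:

    import feedparser

    # Placeholder URL -- point this at any real RSS or Atom feed.
    feed = feedparser.parse("https://example.com/podcast/feed.xml")

    print(feed.feed.get("title", "untitled feed"))
    for entry in feed.entries[:5]:
        # Each entry exposes the usual RSS fields (title, link, published, ...).
        print(entry.get("published", "no date"), "-", entry.get("title", "untitled"))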

Paul's Security Weekly TV
Building Vulnerable Docker Containers (On Purpose) - PSW #719

Paul's Security Weekly TV

Play Episode Listen Later Nov 19, 2021 50:34


I needed to create some vulnerable targets for testing exploits and my default password finder I wrote in Python (featured in previous episodes). I found a few useful projects, including Vulhub, that made the task of building an insecure lab environment pretty easy. I've made several additions and improvements to the available code, which I will run through in this segment.   Visit https://www.securityweekly.com/psw for all the latest episodes! Show Notes: https://securityweekly.com/psw719

Software Defined Talk
Episode 330: The marketing became the technology

Software Defined Talk

Play Episode Listen Later Nov 19, 2021 79:21


This week we discuss Splunk's CEO Transition, Crypto.com renames the Staples Center and Netlify's attempt to realize Git Push Nirvana. Plus, when do house shoes become just shoes. Rundown Splunk stock plunges as CEO Doug Merritt steps down (https://www.cnbc.com/2021/11/15/splunk-stock-plunges-as-ceo-doug-merritt-steps-down.html?utm_source=newsletter&utm_medium=email&utm_campaign=newsletter_axioslogin&stream=top) Git Push Nirvana Heroku itself isn't what people want. What we actually want is (https://twitter.com/bryanl/status/1460286401199718401?s=21) git push heroku main (https://twitter.com/bryanl/status/1460286401199718401?s=21) Netlify Raises $105 Million to Transform Development for the Modern Web (https://www.netlify.com/press/netlify-raises-usd105-million-to-transform-development-for-the-modern-web) Goodbye, Staples Center. Hello, Crypto.com Arena (https://www.latimes.com/business/story/2021-11-16/crypto-staples) Relevant to your interests ManualsLib - Makes it easy to find manuals online! (https://www.manualslib.com/) Spotify expands into audiobooks with acquisition of Findaway (https://techcrunch.com/2021/11/11/spotify-expands-into-audiobooks-with-acquisition-of-findaway/) Apple's unexciting 2021 Mac and iPhone software prove it should take a break from annual OS updates (https://www.theverge.com/22771079/apple-macos-monteray-ios-15-mac-iphone-software-operating-system-updates-space-out-features) Red Hat 8.5 released with SQL Server and .NET 6 (https://www.theregister.com/2021/11/11/red_hat_8_5/) Business Essentials (https://www.apple.com/business/essentials/) FBI system hacked to email 'urgent' warning about fake cyberattacks (https://www.bleepingcomputer.com/news/security/fbi-system-hacked-to-email-urgent-warning-about-fake-cyberattacks/) Citrix Systems Inc. Layoffs - TheLayoff.com (https://www.thelayoff.com/citrix-systems) if you use the "Cost Explorer" to check the details of your bill, they will charge you $0.01 (yes, 1 cent) per each request? 
(https://twitter.com/preraksanghvi/status/1458126728900001796?s=21) Vizio's profit on ads, subscriptions, and data is double the money it makes selling TVs (https://www.theverge.com/2021/11/10/22773073/vizio-acr-advertising-inscape-data-privacy-q3-2021) Seven years after last venture investment, Mixpanel scores $200M Series C – TechCrunch (https://techcrunch.com/2021/11/15/seven-years-after-last-venture-investment-mixpanel-scores-200m-series-c/) Twitter acquires Threader, an app that compiles and shares threads – TechCrunch (https://techcrunch.com/2021/11/15/twitter-acquires-threader-an-app-that-compiles-and-shares-threads/) PlanetScale raises $50M Series C as its enterprise database service hits general availability – TechCrunch (https://techcrunch.com/2021/11/16/planetscale-raises-50m-series-c-as-its-enterprise-database-service-hits-general-availability) Frontpage -- Terms of Service; Didn't Read (https://tosdr.org/) Snowflake adds Python option for developers (https://www.theregister.com/2021/11/16/snowflake_python_support/) 5 ways to improve mental health for software developers (https://techcrunch.com/2021/11/11/5-ways-to-improve-mental-health-for-software-developers/) Here's why identity software firm Okta plans to open a retail location in New York City (https://www.cnbc.com/2021/11/17/heres-why-identity-software-firm-okta-plans-to-open-a-retail-location-in-new-york-city.html) Akamai Exec Rick McConnell to Succeed John Van Siclen as Dynatrace CEO - GovCon Wire (https://www.govconwire.com/2021/11/akamai-exec-rick-mcconnell-to-succeed-john-van-siclen-as-dynatrace-ceo/#:~:text=Rick%20McConnell%2C%20president%20and%20general,2022%2C%20the%20company%20said%20Monday) Twilio CEO touts company's long-term growth outlook after recent stock plunge (https://www.cnbc.com/2021/11/15/twilio-ceo-touts-companys-growth-outlook-after-recent-stock-plunge.html) AWS Channel Chief Doug Yeum Stepping Down (https://www.crn.com/news/cloud/aws-channel-chief-doug-yeum-stepping-down) The future of work (https://pbs.twimg.com/media/FEV0DS0UUAUZtl8.jpg) Apple announces Self Service Repair (https://www.apple.com/newsroom/2021/11/apple-announces-self-service-repair/) Nonsense Introducing the Icelandverse (https://www.youtube.com/watch?v=enMwwQy_noI) MoviePass is coming back, under original founder Stacy Spikes (https://www.protocol.com/bulletins/moviepass-is-coming-back) Dude, he's back. (https://twitter.com/DellTech/status/1460344411863322632) Mattress company Casper to be sold in private equity deal (https://www.axios.com/casper-sells-private-equity-dc40779f-57af-402a-9f62-40285b1434fe.html) Corey Quinn explains AppConfig (https://twitter.com/quinnypig/status/1458667547818016769?s=21) Sponsors strongDM — Manage and audit remote access to infrastructure. Start your free 14-day trial today at strongdm.com/SDT (http://strongdm.com/SDT) CBT Nuggets — Training available for IT Pros anytime, anywhere. 
Start your 7-day Free Trial today at cbtnuggets.com/sdt (https://cbtnuggets.com/sdt) Conferences THAT Conference comes to Texas January 17-20, 2022 (https://that.us/events/tx/2022/) — Now with the right link Listener Feedback Tim wants you to work as a Principal Architect, Commercial and Medical IT (https://jobs.smartrecruiters.com/Biogen/743999777039406-principal-architect-commercial-and-medical-it) in Warsaw, Boston or RTP Jeffrey wants to work at Blizzard as Reliability Engineer (https://careers.blizzard.com/global/en/job/R010526/Software-Engineer-Reliability) SDT news & hype Join us in Slack (http://www.softwaredefinedtalk.com/slack). Send your postal address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and we will send you free laptop stickers! Follow us on Twitch (https://www.twitch.tv/sdtpodcast), Twitter (https://twitter.com/softwaredeftalk), Instagram (https://www.instagram.com/softwaredefinedtalk/), LinkedIn (https://www.linkedin.com/company/software-defined-talk/) and YouTube (https://www.youtube.com/channel/UCi3OJPV6h9tp-hbsGBLGsDQ/featured). Brandon built the Quick Concall iPhone App (https://itunes.apple.com/us/app/quick-concall/id1399948033?mt=823) and he wants you to buy it for $0.99. Use the code SDT to get $20 off Coté's book, (https://leanpub.com/digitalwtf/c/sdt) Digital WTF (https://leanpub.com/digitalwtf/c/sdt), so $5 total. Become a sponsor of Software Defined Talk (https://www.softwaredefinedtalk.com/ads)! Recommendations Brandon: InterStellar BBQ (https://www.theinterstellarbbq.com) Coté: Travels with My Aunt (https://www.goodreads.com/book/show/48858.Travels_with_My_Aunt). Photo Credits Banner Art (https://unsplash.com/photos/OT1D53cUbnI) Cover Art (https://unsplash.com/photos/UT8LMo-wlyk)

Teaching Python
Episode 79: Working with Student Data

Teaching Python

Play Episode Listen Later Nov 18, 2021 39:58


This episode is all about working with the data we generate for students, whether it's in the classroom, at your school, or across your district. Special guest star Rusti Gregory joins us to talk about his transition from the classroom to the data manager role. Special Guest: Rusti Gregory.

The Cloud Pod
143: It's Chaos in the Cloud Pod Studio

The Cloud Pod

Play Episode Listen Later Nov 18, 2021 46:36


On The Cloud Pod this week, the pod squad is down to the OG three while Ryan is away. Also, AWS announces serverless pipelines, GCP releases Spot Pods, and Azure introduces Chaos Studio. A big thanks to this week's sponsors: Foghorn Consulting, which provides full-stack cloud solutions with a focus on strategy, planning and execution for enterprises seeking to take advantage of the transformative capabilities of AWS, Google Cloud and Azure. JumpCloud, which offers a complete platform for identity, access, and device management — no matter where your users and devices are located. This week's highlights

Adafruit Industries
Python on Hardware weekly video 157

Adafruit Industries

Play Episode Listen Later Nov 18, 2021 5:11


The wonderful world of Python on hardware! Episode 157 (November 17, 2021). This is our weekly Python video-newsletter-podcast! Ladyada and PT review the Python on hardware news & highlights of the week. The news comes from the Python community, Discord, Adafruit communities and more. It's part of the comprehensive newsletter we do each week. The video playlist of episodes is here: http://adafru.it/pohepisodes Sign up for the Python on Microcontrollers weekly email newsletter here: https://www.adafruitdaily.com/ Read the newsletters past and present at https://www.adafruitdaily.com/category/circuitpython/ Learn all about CircuitPython here: https://www.circuitpython.org/ https://adafruit.com/circuitpython/ Join us on Discord! https://adafru.it/discord/ Visit the Adafruit shop online, we're open for business - http://www.adafruit.com Adafruit on Instagram: https://www.instagram.com/adafruit Subscribe to Adafruit on YouTube: http://adafru.it/subscribe New tutorials on the Adafruit Learning System: http://learn.adafruit.com/

Talk Python To Me - Python conversations for passionate developers
#341: 25 Pandas Functions You Didn't Know Existed

Talk Python To Me - Python conversations for passionate developers

Play Episode Listen Later Nov 17, 2021 59:16


Do you do anything with Jupyter notebooks? If you do, there is a very good chance you're working with the pandas library. This is one of THE primary tools of anyone doing computational work or data exploration with Python. Yet, this library is massive and knowing the idiomatic way to use it can be hard to discover. That's why I've invited Bex Tuychiev to be our guest. He wrote an excellent article highlighting 25 idiomatic Pandas functions and properties we should all keep in our data toolkit. I'm sure there is something here for all of us to take away and use pandas that much better. Links from the show Bex Tuychiev: linkedin.com Bex's Medium profile: ibexorigin.medium.com Numpy 25 functions article: towardsdatascience.com missingno package: coderzcolumn.com Watch this episode on YouTube: youtube.com Episode transcripts: talkpython.fm ---------- Stay in touch with us ---------- Subscribe on YouTube (for live streams): youtube.com Follow Talk Python on Twitter: @talkpython Follow Michael on Twitter: @mkennedy Sponsors Shortcut Linode AssemblyAI Talk Python Training
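For a flavor of the kind of idiomatic calls the article covers, here is a minimal sketch of a few easy-to-miss built-in pandas functions; the sample data and the particular functions are illustrative picks, not necessarily from Bex's list of 25.

```python
import pandas as pd

# Small illustrative frame; not data from the article.
df = pd.DataFrame({
    "city": ["Austin", "Oslo", "Kyoto", "Lima"],
    "temp_c": [31, 12, 24, 19],
    "pop_m": [0.96, 0.70, 1.46, 9.75],
})

# Top-n rows by a column without sorting the whole frame.
print(df.nlargest(2, "pop_m"))

# Readable row filtering with a query string.
print(df.query("temp_c > 20"))

# Derive a new column without mutating the original frame.
print(df.assign(temp_f=df["temp_c"] * 9 / 5 + 32))
```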

Python Bytes
#259 That argument is a little late-bound

Python Bytes

Play Episode Listen Later Nov 17, 2021 47:24


Watch the live stream: Watch on YouTube About the show Sponsored by Shortcut - Get started at shortcut.com/pythonbytes Special guest: Renee Teate Michael #1: pypi-changes via Brian Skinn, created by Bernát Gábor Visually show you which dependencies in an environment are out of date. See the age of everything you depend upon. Also, shoutout again to pipdeptree Brian #2: Late-bound argument defaults for Python Default values for arguments to functions are evaluated at function definition time. If a value is a short expression that uses a variable, that variable is in the scope of the function definition. The expression cannot use other arguments. Example of what you cannot do: def foo(a, b = None, c = len(a)): ... There's a proposal by Chris Angelico to add a =: operator for late default evaluation. Syntax still up in the air. => and ?= also discussed However, it's non-trivial to add syntax to an established language, and this article notes: At first blush, Angelico's idea to fix this "wart" in Python seems fairly straightforward, but the discussion has shown that there are multiple facets to consider. It is not quite as simple as "let's add a way to evaluate default arguments when the function is called"—likely how it was seen at the outset. That is often the case when looking at new features for an established language like Python; there is a huge body of code that needs to stay working, but there are also, sometimes conflicting, aspirations for features that could be added. It is a tricky balancing act. Renee #3: pandas.read_sql Since I wrote my book SQL for Data Scientists, I've gotten several questions about how I use SQL in my Python scripts. It's really simple: You can save your SQL as a text file and then import the dataset into a pandas dataframe to do the rest of my data cleaning, feature engineering, etc. Pandas has a built-in way to use SQL as a data source. You set up a connection to your database using another package like SQLAlchemy, then send the SQL string and the connection to the pandas.read_sql function. It returns a dataframe with the results of your query. Michael #4: pyjion by Anthony Shaw Pyjion is a JIT for Python based upon CoreCLR Check out live.trypyjion.com to see it in action. Requires Python 3.10, .NET Core 6 Enable with just a couple of lines: >>> import pyjion >>> pyjion.enable() Brian #5: Tips for debugging with print() Adam Johnson 7 tips altogether, but I'll highlight a few I loved reading about Debug variables with f-strings and = print(f"{myvar=}") Saves typing over print(f"myvar={myvar}") with the same result Make output “pop” with emoji (Brilliant!) print("
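To make the definition-time default gotcha and the f-string "=" debugging tip concrete, here is a small self-contained sketch. The proposed late-bound syntax is not valid Python today, so the code shows the current behavior and the usual sentinel workaround instead; the names are illustrative, not from the episode.

```python
# Defaults are evaluated once, at function definition time.
def append_item(item, bucket=[]):      # every call shares the same list!
    bucket.append(item)
    return bucket

print(append_item(1))  # [1]
print(append_item(2))  # [1, 2]  <- surprising if you expected a fresh list

# The usual workaround today: a sentinel default, resolved at call time.
def append_item_fixed(item, bucket=None):
    if bucket is None:
        bucket = []                    # evaluated on every call
    bucket.append(item)
    return bucket

print(append_item_fixed(1))  # [1]
print(append_item_fixed(2))  # [2]

# Debugging with f-strings: a trailing "=" prints both the name and the value.
myvar = 42
print(f"myvar={myvar}")  # myvar=42
print(f"{myvar=}")       # myvar=42, with less typing
```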

Google Cloud Platform Podcast
Managing ML Lifecycles with Vertex AI with Erwin Huizenga

Google Cloud Platform Podcast

Play Episode Listen Later Nov 17, 2021 38:41


We’re learning all about Vertex AI this week as Carter Morgan and Jay Jenkins host guest Erwin Huizenga. He helps us understand what is meant by Asia Pacific and how Machine Learning is growing there. APAC’s Machine Learning scene is exciting for its enterprise companies leveraging ML for innovative projects at scale. The ML journey of many of these customers revealed challenges with things like efficiency that Vertex AI was built to solve. The Vertex AI platform boasts tools that help with everything from the beginning stages of data collection to analysis, validation, transformation, model training, evaluation, serving the model, and metadata tracking. Erwin offers detailed examples of this pipeline process and describes how Feature Store helps clients manage their projects. Using Vertex AI not only simplifies the initial development process but streamlines the iteration process as the model is adjusted over time. Pipelines offers automation options that help with this, Erwin explains. ML Operations are also built into Vertex AI to ensure everything is done in compliance with industry standards, even at scale. Using customer recommendations as an example, Erwin walks us through how Vertex AI can employ embedding to enhance customer experiences through ML. By using Vertex AI in combination with other Google offerings like AutoML, companies can effectively build working ML projects without data science experience. We talk about the Vertex AI user interface and the other tools and APIS that are available there. Erwin tells us how Digits Financial uses Vertex AI and Pipeline to bring models to production in days rather than months, and how others can get started with Vertex AI, too. Erwin Huizenga Erwin Huizenga is a Data Scientist at Google specializing in TensorFLow, Python, and ML. Cool things of the week Announcing Spot Pods for GKE Autopilot—save on fault tolerant workloads blog Indosat Ooredoo and Google Launch Strategic Partnership to Accelerate Digitalization Across SMBs and Enterprises in Indonesia site Indosat Ooredoo dan Google Luncurkan Kemitraan Strategis untuk Percepatan Digitalisasi UMKM dan Perusahaan di Indonesia site Interview Vertex AI site Google Cloud in Asia Pacific blog Introduction to Vertex AI docs What Is a Machine Learning Pipeline? site TensorFlow site PyTorch site Vertex AI Feature Store docs AutoML site BigQuery ML site Vertex AI Matching Engine docs ScaNN site Announcing ScaNN: Efficient Vector Similarity Search blog Vertex AI Workbench site Vertex Pipeline Case Study: Digits Financial site Intro to Vertex Pipelines Codelab site Vertex AI: Training and serving a custom model Codelab site Vertex AI Workbench: Build an image classification model with transfer learning and the notebook executor Codelab site APAC Best of Next 2021 site TFX: A TensorFlow-Based Production-Scale Machine Learning Platform site Rules of Machine Learning site Google Cloud Skills Boost: Build and Deploy Machine Learning Solutions on Vertex AI site Monitoring feature attributions: How Google saved one of the largest ML services in trouble blog What’s something cool you’re working on? Jay is working on APAC Best of Next and will be doing a session on sustainability! Carter is working on transitioning the GCP Podcast to a video format!

SuperDataScience
SDS 523: Open-Source Analytical Computing (pandas, Apache Arrow)

SuperDataScience

Play Episode Listen Later Nov 16, 2021 87:35


Wes McKinney joins us to discuss the history and philosophy of pandas and Apache Arrow as well as his continued work in open source tools. In this episode you will learn: • History of pandas [7:29] • The trends of R and Python [23:33] • Python for Data Analysis [25:58] • pandas updates and community [30:10] • Apache Arrow [41:50] • Voltron Data [55:10] • Origin of Wes's project names [1:08:14] • Wes's favorite tools [1:09:46] • Audience Q&A [1:15:34] Additional materials: www.superdatascience.com/523

The Social-Engineer Podcast
Ep. 158 - Security Awareness Series - Don't Act Old And Other Advice with Paul Asadoorian

The Social-Engineer Podcast

Play Episode Listen Later Nov 15, 2021 53:59


This month, Chris Hadnagy and Ryan MacDougall are joined by Paul Asadoorian.  Paul is the founder of Security Weekly, a security podcast network. Paul spends time “in the trenches” coding in Python, testing security products and evaluating and implementing open-source software. Paul's career began by implementing security programs for a lottery company and then a large university. As Product Evangelist for Tenable Network Security, Paul also built a library of materials on the topic of vulnerability management. When not hacking IoT devices, web applications or Linux, Paul can be found researching his next set of headphones, devices for smoking meat, and e-bikes. November 15, 2021.  00:00 – Intro  Social-Engineer.com Managed Voice Phishing  Managed Email Phishing  Adversarial Simulations   Social-Engineer channel on SLACK  CLUTCH  innocentlivesfoundation.org  Human Behavior Conference  03:34 – Paul Asadoorian Intro  05:08 – How did you get started in infosec?  13:19 – When did you decide you were going to start a podcast?  24:26 – What have you learned from the guests you've had on your podcasts over all of these years?  27:00 – What is your perspective on the shifting of hacking culture in the community?  34:53 – What are the best qualities someone could have to be attractive to a potential employer in this industry?  37:14 – How do we get the younger generation to have the qualities we are not seeing?  41:38 – Who is your greatest mentor?  Laurie Baker  Stephen Northcutt @ SANS  Ed Skoudis @ SANS  46:00 – Book Recommendations  Code Girls The Phoenix Project The Unicorn Project Countdown to Zero Day The Cuckoo's Egg Cyberpunk 51:00 – Guest Wrap Up  https://securityweekly.com    www.twitter.com/securityweekly  53:31 – Outro  innocentlivesfoundation.org

Datacenter Technical Deep Dives
Getting Started With Python Visualizations with Jessica Garson

Datacenter Technical Deep Dives

Play Episode Listen Later Nov 12, 2021 63:16


Jessica Garson presents "Getting Started With Python Visualizations" Jessica Garson is a senior developer advocate for Twitter as well as a musician who creates her sound with Python! In this episode she live codes how to quickly stand up a data visualization using Python & the Twitter API! Resources: https://twitter.com/jessicagarson https://developer.twitter.com/en/docs/getting-started https://dev.to/twitterdev/getting-started-with-data-visualization-with-dash-and-recent-search-counts-22jn https://getting-started-with-dash.herokuapp.com/ https://twittercommunity.com/
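As a rough idea of what standing up a visualization with Dash looks like, here is a minimal hedged sketch that plots made-up counts instead of live Twitter API results; the linked tutorials add the recent-search API calls and credentials, and the import layout assumes Dash 2.x.

```python
# pip install dash plotly pandas
import pandas as pd
import plotly.express as px
from dash import Dash, dcc, html

# Stand-in data; the tutorial pulls these counts from the Twitter recent-search API.
df = pd.DataFrame({"day": ["Mon", "Tue", "Wed", "Thu"],
                   "tweet_count": [12, 30, 21, 17]})

app = Dash(__name__)
app.layout = html.Div([
    html.H2("Tweet counts (sample data)"),
    dcc.Graph(figure=px.bar(df, x="day", y="tweet_count")),
])

if __name__ == "__main__":
    app.run_server(debug=True)  # serves the dashboard locally
```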

Embedded
286: Twenty Cans of Gas (Repeat)

Embedded

Play Episode Listen Later Nov 12, 2021 60:13


Colin O'Flynn (@colinoflynn) spoke with us about security research, power analysis, and hotdogs. Colin's company is NewAE and you can see his Introduction to Side-Channel Power Analysis video as an intro to his training course. Or you can buy your own ChipWhisperer and go through his extensive tutorials on the wiki pages. ChipWhisperer on Hackaday ColinOFlynn.com Some FPGA resources mentioned: Fpga4fun.com TinyFpga.com MyHdl.org (Python!)

Screaming in the Cloud
The Future of Google Cloud with Richard Seroter

Screaming in the Cloud

Play Episode Listen Later Nov 11, 2021 40:47


About RichardHe's also an instructor at Pluralsight, a frequent public speaker, and the author of multiple books on software design and development. Richard maintains a regularly updated blog (seroter.com) on topics of architecture and solution design and can be found on Twitter as @rseroter. Links: Twitter: https://twitter.com/rseroter LinkedIn: https://www.linkedin.com/in/seroter Seroter.com: https://seroter.com TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by our friends at Vultr. Spelled V-U-L-T-R because they're all about helping save money, including on things like, you know, vowels. So, what they do is they are a cloud provider that provides surprisingly high performance cloud compute at a price that—while sure they claim its better than AWS pricing—and when they say that they mean it is less money. Sure, I don't dispute that but what I find interesting is that it's predictable. They tell you in advance on a monthly basis what it's going to going to cost. They have a bunch of advanced networking features. They have nineteen global locations and scale things elastically. Not to be confused with openly, because apparently elastic and open can mean the same thing sometimes. They have had over a million users. Deployments take less that sixty seconds across twelve pre-selected operating systems. Or, if you're one of those nutters like me, you can bring your own ISO and install basically any operating system you want. Starting with pricing as low as $2.50 a month for Vultr cloud compute they have plans for developers and businesses of all sizes, except maybe Amazon, who stubbornly insists on having something to scale all on their own. Try Vultr today for free by visiting: vultr.com/screaming, and you'll receive a $100 in credit. Thats v-u-l-t-r.com slash screaming.Corey: You know how git works right?Announcer: Sorta, kinda, not really Please ask someone else!Corey: Thats all of us. Git is how we build things, and Netlify is one of the best way I've found to build those things quickly for the web. Netlify's git based workflows mean you don't have to play slap and tickle with integrating arcane non-sense and web hooks, which are themselves about as well understood as git. Give them a try and see what folks ranging from my fake Twitter for pets startup, to global fortune 2000 companies are raving about. If you end up talking to them, because you don't have to, they get why self service is important—but if you do, be sure to tell them that I sent you and watch all of the blood drain from their faces instantly. You can find them in the AWS marketplace or at www.netlify.com. N-E-T-L-I-F-Y.comCorey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Once upon a time back in the days of VH1, which was like MTV except it played music videos, would have a show that was, “Where are they now?” Looking at former celebrities. I will not use the term washed up because that's going to be insulting to my guest.Richard Seroter is a returning guest here on Screaming in the Cloud. We spoke to him a year ago when he was brand new in his role at Google as director of outbound product management. 
At that point, he basically had stars in his eyes and was aspirational around everything he wanted to achieve. And now it's a year later and he has clearly failed because it's Google. So, outbound products are clearly the things that they are going to be deprecating, and in the past year, I am unaware of a single Google Cloud product that has been outright deprecated. Richard, thank you for joining me, and what do you have to say for yourself?Richard: Yeah, “Where are they now?” I feel like I'm the Leif Garrett of cloud here, joining you. So yes, I'm still here, I'm still alive. A little grayer after twelve months in, but happy to be here chatting cloud, chatting whatever else with you.Corey: I joke a little bit about, “Oh, Google winds up killing things.” And let's be clear, your consumer division which, you know, Google is prone to that. And understanding a company's org chart is a challenge. A year or two ago, I was of the opinion that I didn't need to know anything about Google Cloud because it would probably be deprecated before I really had to know about it. My opinion has evolved considerably based upon a number of things I'm seeing from Google.Let's be clear here, I'm not saying this to shine you on or anything like that; it's instead that I've seen some interesting things coming out of Google that I consider to be the right moves. One example of that is publicly signing multiple ten-year deals with very large, serious institutions like Deutsche Bank, and others. Okay, you don't generally sign contracts with companies of that scale and intend not to live up to them. You're hiring Forrest Brazeal as your head of content for Google Cloud, which is not something you should do lightly, and not something that is a short-term play in any respect. And the customer experience has continued to improve; Google Cloud products have not gotten worse, and I'm seeing in my own customer conversations that discussions about Google Cloud have become significantly less dismissive than they were over the past year. Please go ahead and claim credit for all of that.Richard: Yeah. I mean, the changes a year ago when I joined. So, Thomas Kurian has made a huge impact on some of that. You saw us launch the enterprise APIs thing a while back, which was, “Hey, here's, for the most part, every one of our products that has a fixed API. We're not going to deprecate it without a year's notice, whatever it is. We're not going to make certain types of changes.” Maybe that feels like, “Well, you should have had that before.” All right, all we can do is improve things moving forward. So, I think that was a good change.Corey: Oh, I agree. I think that was a great thing to do. You had something like 80-some-odd percent coverage of Google Cloud services, and great, that's going to only increase with time, I can imagine. But I got a little pushback from a few Googlers for not being more congratulatory towards them for doing this, and look, it's a great thing. Don't get me wrong, but you don't exactly get a whole lot of bonus points and kudos and positive press coverage—not that I'm press—for doing the thing you should have been doing [laugh] all along.It's, “This is great. 
This is necessary.” And it demonstrates a clear awareness that there was—rightly or wrongly—a perception issue around the platform's longevity and that you've gone significantly out of your way to wind up addressing that in ways that go far beyond just yelling at people on Twitter they don't understand the true philosophy of Google Cloud, which is the right thing to do.Richard: Yeah, I mean, as you mentioned, look, the consumer side is very experimental in a lot of cases. I still mourn Google Reader. Like, those things don't matter—Corey: As do we all.Richard: Of course. So, I get that. Google Cloud—and of course we have the same cultural thing, but at the same time, there's a lifecycle management that's different in Google Cloud. We do not deprecate products that much. You know, enterprises make decade-long bets. I can't be swap—changing databases or just turning off messaging things. Instead, we're building a core set of things and making them better.So, I like the fact that we have a pretty stable portfolio that keeps getting a little bit bigger. Not crazy bigger; I like that we're not just throwing everything out there saying, “Rock on.” We have some opinions. But I think that's been a positive trend, customers seem to like that we're making these long-term bets. We're not going anywhere for a long time and our earnings quarter after quarter shows it—boy, this will actually be a profitable business pretty soon.Corey: Oh, yeah. People love to make hay, and by people, I stretch the term slightly and talk about, “Investment analysts say that Google Cloud is terrible because at your last annual report you're losing something like $5 billion a year on Google Cloud.” And everyone looked at me strangely, when I said, “No, this is terrific. What that means is that they're investing in the platform.” Because let's be clear, folks at Google tend to be intelligent, by and large, or at least intelligent enough that they're not going to start selling cloud services for less than it costs to run them.So yeah, it is clearly an investment in the platform and growth of it. The only way it should be turning a profit at this point is if there's no more room to invest that money back into growing the platform, given your market position. I think that's a terrific thing, and I'm not worried at all about it losing money. I don't think anyone should be.Richard: Yeah, I mean, strategically, look, this doesn't have to be the same type of moneymaker that even some other clouds have to be to their portfolio. Look, this is an important part, but you look at those ten-year deals that we've been signing: when you look at Univision, that's a YouTube partnership; you look at Ford that had to do with Android Auto; you look at these others, this is where us being also a consumer and enterprise SaaS company is interesting because this isn't just who's cranking out the best IaaS. I mean, that can be boring stuff over time. It's like, who's actually doing the stuff that maybe makes a traditional company more interesting because they partner on some of those SaaS services. So, those are the sorts of deals and those sorts of arrangements where cloud needs to be awesome, and successful, and make money, doesn't need to be the biggest revenue generator for Google.Corey: So, when we first started talking, you were newly minted as a director of outbound product management. And now, you are not the only one, there are apparently 60 of you there, and I'm no closer to understanding what the role encompasses. What is your remit? 
Where do you start? Where do you stop?Richard: Yeah, that's a good question. So, there's outbound product management teams, mostly associated with the portfolio area. So network, storage, AI, analytics, database, compute, application modernization-y sort of stuff—which is what I cover—containers, dev tools, serverless. Basically, I am helping make sure the market understands the product and the product understands the market. And not to be totally glib, but a lot of that is, we are amplification.I'm amplifying product out to market, analysts, field people, partners: “Do you understand this thing? Can I help you put this in context?” But then really importantly, I'm trying to help make sure we're also amplifying the market back to our product teams. You're getting real customer feedback: “Do you know what that analyst thinks? Have you heard what happened in the competitive space?”And so sometimes companies seem to miss that, and PMs poke their head up when I'm about to plan a product or I'm about to launch a product because I need some feedback. But keeping that constant pulse on the market, on customers, on what's going on, I think that can be a secret weapon. I'm not sure everybody does that.Corey: Spending as much time as I do on bills, admittedly AWS bills, but this is a pattern that tends to unfold across every provider I've seen. The keynotes are chock-full of awesome managed service announcements, things that are effectively turnkey at further up the stack levels, but the bills invariably look a lot more like, yeah, we spend a bit of money on that and then we run 10,000 virtual instances in a particular environment and we just treat it like it's an extension of our data center. And that's not exciting; that's not fun, quote-unquote, but it's absolutely what customers are doing and I'm not going to sit here and tell them that they're wrong for doing it. That is the hallmark of a terrible consultant of, “I don't understand why you're doing what you're doing, so it must be foolish.” How about you stop and gain some context into why customers do the things that they do?Richard: No, I send around a goofy newsletter every week to a thousand or two people, just on things I'm learning from the field, from customers, trying to make sure we're just thinking bigger. A couple of weeks ago, I wrote an idea about modernization is awesome, and I love when people upgrade their software. By the way, most people migration is a heck of a lot easier than if I can just get this into your cloud, yeah love that; that's not the most interesting thing, to move VMs around, but most people in their budget, don't have time to rewrite every Java app to go. Everybody's not changing .NET framework to .NET core.Like, who do I think everybody is? No, I just need to try to get some incremental value first. Yes, then hopefully I'll swap out my self-managed SQL database for a Spanner or a managed service. Of course, I want all of that, but this idea that I can turn my line of business loan processing app into a thousand functions overnight is goofy. So, how are we instead thinking more pragmatically about migration, and then modernizing some of it? But even that sort of mindset, look, Google thinks about innovation modernization first. So, also just trying to help us take a step back and go, “Gosh, what is the normal path? 
Well, it's a lot of migration first, some modernization, and then there's some steady-state work there.”Corey: One of the things that surprised me the most about Google Cloud in the market, across the board, has been the enthusiastic uptake for enterprise workloads. And by enterprise workloads, I'm talking about things like SAP HANA is doing a whole bunch of deployments there; we're talking Big Iron-style enterprise-y things that, let's be honest, countervene most of the philosophy that Google has always held and espoused publicly, at least on conference stages, about how software should be built. And I thought that would cut against them and make it very difficult for you folks to gain headway in that market and I could not have been more wrong. I'm talking to large enterprises who are enthusiastically talking about Google Cloud. I've got a level with you, compared to a year or two ago, I don't recognize the place.Richard: Mmm. I mean, some of that, honestly, in the conversations I have, and whatever I do a handful of customer calls every week, I think folks still want something familiar, but you're looking for maybe a further step on some of it. And that means, like, yes, is everybody going to offer VMs? Yeah, of course. Is everyone going to have MySQL? Obviously.But if I'm an enterprise and I'm doing these generational bets, can I cheat a little bit, and maybe if I partner with a more of an innovation partner versus maybe just the easy next step, am I buying some more relevance for the long-term? So, am I getting into environment that has some really cool native zero-trust stuff? Am I getting into environment with global backend services and I'm not just stitching together a bunch of regional stuff? How can I cheat by using a more innovation vendor versus just lifting and shifting to what feels like hosted software in another cloud? I'm seeing more of that because these migrations are tough; nobody should be just randomly switching clouds. That's insane.So, can I make, maybe, one of these big bets with somebody who feels like they might actually even improve my business as a whole because I can work with Google Pay and improve how I do mobile payments, or I could do something here with Android? Or, heck, all my developers are using Angular and Flutter; aren't I going to get some benefit from working with Google? So, we're seeing that, kind of, add-on effect of, “Maybe this is a place not just to host my VMs, but to take a generational leap.”Corey: And I think that you're positioning yourselves in a way to do it. Again, talk about things that you wouldn't have expected to come out of Google of all places, but your console experience has been first-rate and has been for a while. The developer experience is awesome; I don't need to learn the intricacies of 12 different services for what I'm trying to do just in order to get something basic up and running. I can stop all the random little billing things in my experimental project with a single click, which that admittedly has a confirm, which you kind of want. But it lets you reason about these things.It lets you get started building something, and there's a consistency and cohesiveness to the console that, again, I am not a graphic designer, by any stretch of the imagination. My most commonly used user interface is a green-screen shell prompt, and then I'm using Vim to wind up writing something horrifying, ideally in Python, but more often in YAML. 
And that has been my experience, but just clicking around the console, it's clear that there was significant thought put into the design, the user experience, and the way of approaching folks who are starting to look very different, from a user persona perspective.Richard: I can—I mean, I love our user research team; they're actually fun to hang out with and watch what they do, but you have to remember, Google as a company, I don't know, cloud is the first thing we had to sell. Did have to sell Gmail. I remember 15 years ago, people were waiting for invites. And who buys Maps or who buys YouTube? For the most part, we've had to build things that were naturally interesting and easy-to-use because otherwise, you would just switch to anything else because everything was free.So, some of that does infuse Google Cloud, “Let's just make this really easy to use. And let's just make sure that, maybe, you don't hate yourself when you're done jumping into a shell from the middle of the console.” It's like, that should be really easy to do—or upgrade a database, or make changes to things. So, I think some of the things we've learned from the consumer good side, have made their way to how we think of UX and design because maybe this stuff shouldn't be terrible.Corey: There's a trope going around, where I wound up talking about the next million cloud customers. And I'm going to have to write a sequel to it because it turns out that I've made a fundamental error, in that I've accepted the narrative that all of the large cloud vendors are pushing, to the point where I heard from so many folks I just accepted it unthinkingly and uncritically, and that's not what I should be doing. And we'll get to what I was wrong about in a minute, but the thinking goes that the next big growth area is large enterprises, specifically around corporate IT. And those are folks who are used to managing things in a GUI environment—which is fine—and clicking around in web apps. Now, it's easy to sit here on our high horse and say, “Oh, you should learn to write code,” or YAML, which is basically code. Cool.As an individual, I agree, someone should because as soon as they do that, they are now able to go out and take that skill to a more lucrative role. The company then has to backfill someone into the role that they just got promoted out of, and the company still has that dependency. And you cannot succeed in that market with a philosophy of, “Oh, you built something in the console. Now, throw it away and do it right.” Because that is maddening to that user persona. Rightfully so.I'm not that user persona and I find it maddening when I have to keep tripping over that particular thing. How did that come to be, from your perspective? First, do you think that is where the next million cloud customers come from? And have I adequately captured that user persona, or am I completely often the weeds somewhere?Richard: I mean, I shared your post internally when that one came out because that resonated with me of how we were thinking about it. Again, it's easy to think about the cloud-native operators, it's Spotify doing something amazing, or this team at Twitter doing something, or whatever. And it's not even to be disparaging. Like, look, I spent five years in enterprise IT and I was surrounded by operators who had to run dozen different systems; they weren't dedicated to just this thing or that. 
So, what are the tools that make my life easy?A lot of software just comes with UIs for quick install and upgrades, and how does that logic translate to this cloud world? I think that stuff does matter. How are you meeting these people a little better where they are? I think the hard part that we will always have in every cloud provider is—I think you've said this in different forums, but how do I not sometimes rub the data center on my cloud or vice versa? I also don't want to change the experience so much where I degrade it over the long term, I've actually somehow done something worse.So, can I meet those people where they are? Can we pull some of those experiences in, but not accidentally do something that kind of messes up the cloud experience? I mean, that's a fine line to walk. Does that make sense to you? Do you see where there's a… I don't know, you could accidentally cater to a certain audience too much, and change the experience for the worse?Corey: Yes, and no. My philosophy on it is that you have to meet customers where they are, but only to a point. At some point, what they're asking for becomes actively harmful or disadvantageous to wind up providing for them. “I want you to run my data center for me,” is on some level what some cloud environments look like, and I'm not going to sit here and tell people they're inherently wrong for that. Their big reason for moving to the cloud was because they keep screwing up replacing failed hard drives in their data center, so we're going to put it in the cloud.Is it more expensive that way? Well, sure in terms of actual cash outlay, it almost certainly is, but they're also not going down every month when a drive fails, so once the value of that? It's a capability story. That becomes interesting to me, and I think that trying to sit here in isolation, and say that, “Oh, this application is not how we would build it at Google.” And it's, “Yeah, you're Google. They are insert an entire universe of different industries that look nothing whatsoever like Google.” The constraints are different, the resources are different, and—Richard: Sure.Corey: —their approach to problem-solving are different. When you built out Google, and even when you're building out Google Cloud, look at some of the oldest craftiest stuff you have in your entire all of Google environment, and then remember that there are companies out there that are hundreds of years old. It's a different order of magnitude as far as era, as far as understanding of what's in the environment, and that's okay. It's a very broad and very diverse world.Richard: Yeah. I mean, that's, again, why I've been thinking more about migration than even some of the modernization piece. Should you bring your network architecture from on-prem to the cloud? I mean, I think most cases, no. But I understand sometimes that edge firewall, internal trust model you had on-prem, okay, trying to replicate that.So, yeah, like you say, I want to meet people where they are. Can we at least find some strategic leverage points to upgrade aspects of things as you get to a cloud, to save you from yourself in some places because all of a sudden, you have ten regions and you only had one data center before. So, many more rooms for mistakes. Where are the right guardrails? We're probably more opinionated than others at Google Cloud.I don't really apologize for that completely, but I understand. 
I mean, I think we've loosened up a lot more than maybe people [laugh] would have thought a few years ago, from being hyper-opinionated on how you run software.Corey: I will actually push back a bit on the idea that you should not replicate your on-premises data center in your cloud environment. Sure, are there more optimal ways to do it that are arguably more secure? Absolutely. But a common failure mode in moving from data center to cloud is, “All right, we're going to start embracing this entirely new cloud networking paradigm.” And it is confusing, and your team that knows how the data center network works really well are suddenly in way over their heads, and they're inadvertently exposing things they don't intend to or causing issues.The hard part is always people, not technology. So, when I glance at an environment and see things like that, perfect example, are there more optimal ways to do it? Oh, from a technology perspective, absolutely. How many engineers are working on that? What's their skill set? What's their position on all this? What else are they working on? Because you're never going to find a team of folks who are world-class experts in every cloud? It doesn't work that way.Richard: No doubt. No doubt, you're right. There's areas where we have to at least have something that's going to look similar, let you replicate aspects of it. I think it's—it'll just be interesting to watch, and I have enough conversations with customers who do ask, “Hey, where are the places we should make certain changes as we evolve?” And maybe they are tactical, and they're not going to be the big strategic redesign their entire thing. But it is good to see people not just trying to shovel everything from one place to the next.Corey: This episode is sponsored in part by something new. Cloud Academy is a training platform built on two primary goals. Having the highest quality content in tech and cloud skills, and building a good community the is rich and full of IT and engineering professionals. You wouldn't think those things go together, but sometimes they do. Its both useful for individuals and large enterprises, but here's what makes it new. I don't use that term lightly. Cloud Academy invites you to showcase just how good your AWS skills are. For the next four weeks you'll have a chance to prove yourself. Compete in four unique lab challenges, where they'll be awarding more than $2000 in cash and prizes. I'm not kidding, first place is a thousand bucks. Pre-register for the first challenge now, one that I picked out myself on Amazon SNS image resizing, by visiting cloudacademy.com/corey. C-O-R-E-Y. That's cloudacademy.com/corey. We're gonna have some fun with this one!Corey: Now, to follow up on what I was saying earlier, what I think I've gotten wrong by accepting the industry talking points on is that the next million cloud customers are big enterprises moving from data centers into the cloud. There's money there, don't get me wrong, but there is a larger opportunity in empowering the creation of companies in your environment. And this is what certain large competitors of yours get very wrong, where it's we're going to launch a whole bunch of different services that you get to build yourself from popsicle sticks. Great. That is not useful.But companies that are trying to do interesting things, or people who want to found companies to do interesting things, want something that looks a lot more turnkey. 
If you are going to be building cloud offerings, that for example, are terrific building blocks for SaaS companies, then it behooves you to do actual investments, rather than just a generic credit offer, into spurring the creation of those types of companies. If you want to build a company that does payroll systems, in a SaaS, cloud way, “Partner with us. Do it here. We will give you a bunch of credits. We will introduce you to your first ten prospective customers.”And effectively actually invest in a company success, as opposed to pitch-deck invest, which is, “Yeah, we'll give you some discounting and some credits, and that's our quote-unquote, ‘investment.'” actually be there with them as a partner. And that's going to take years for folks to wrap their heads around, but I feel like that is the opportunity that is significantly larger, even than the embedded existing IT space because rather than fighting each other for slices of the pie, I'm much more interested in expanding that pie overall. One of my favorite questions to get asked because I think it is so profoundly missing the point is, “Do you think it's possible for Google to go from number three to number two,” or whatever the number happens to be at some point, and my honest, considered answer is, “Who gives a shit?” Because number three, or number five, or number twelve—it doesn't matter to me—is still how many hundreds of billions of dollars in the fullness of time. Let's be real for a minute here; the total addressable market is expanding faster than any cloud or clouds are going to be able to capture all of.Richard: Yeah. Hey, look, whoever who'll be more profitable solving user problems, I really don't care about the final revenue number. I can be the number one cloud tomorrow by making Google Cloud free. What's the point? That's not a sustainable business. So, if you're just going for who can deploy the most VCPUs or who can deploy the most whatever, there's ways to game that. I want to make sure we are just uniquely solving problems better than anybody else.Corey: Sorry, forgive me. I just sort of zoned out for a second there because I'm just so taken aback and shocked by the idea of someone working at a large cloud provider who expresses a philosophy that isn't lying awake at night fretting over the possibility of someone who isn't them as making money somewhere.Richard: [laugh]. I mean, your idea there, it'll be interesting to watch, kind of, the maker's approach of are you enabling that next round of startups, the next round of people who want to take—I mean, honestly, I like the things we're doing building block-wise, even with our AI: we're not just handing you a vision API, we're giving you a loan processing AI that can process certain types of docs, that more packaged version of AI. Same with healthcare, same with whatever. I can imagine certain startups or a company idea going, “Hey, maybe I could disrupt or serve a new market.”I always love what Square did. They've disrupted emerging markets, small merchants here in North America, wherever, where I didn't need a big expensive point of sale system. You just gave me the nice, right building blocks to disrupt and run my business. Maybe Google Cloud can continue to provide better building blocks, but I do like your idea of actually investment zones, getting part of this. 
Maybe the next million users are founders and it's not just getting into some of these companies with, frankly, 10, 20, 30,000 people in IT.I think there's still plenty of room in these big enterprises to unlock many more of those companies, much more of their business. But to your point, there's a giant market here that we're not all grabbing yet. For crying out loud, there's tons of opportunity out here. This is not zero-sum.Corey: Take it a step further beyond that, and today, if you have someone who's enterprising, early on in their career, maybe they just got out of school, maybe they have just left their job and are ready to snap, or they have some severance money that they want to throw into something. Great. What do they want to do if they have an idea for a company? Well today, that answer looks a lot like, well, time to go to a boot camp and learn to code for six months so you can build a badly done MVP well enough to get off the ground and get some outside investment, and then go from there. Well, what if we cut that part out entirely?What if there were building blocks of I don't need to know or care that there's a database behind it, or what a database looks like. Picture Visual Basic in a web browser for building apps, and just take this bit of information I give you and store it and give it back to me later. Sure, you're going to have some significant challenges in the architecture or something like that as it goes from this thing that I'm talking about as an MVP to something planet-scale—like a Spotify for example—but that's not most businesses, and that's okay. Get out of the way and let people innovate and iterate on what it is they're doing more rapidly, and make it more accessible to teach people. That becomes huge; that gets the infrastructure bits that cloud providers excel at out of the way, and all it really takes is packaging those things into a golden path of what a given company of a particular profile should be doing, if—unless they have reason to deviate from it—and instead of having this giant paradox of choice issue, it's, “Oh, okay, I'll drag-drop, build things accordingly.”And under the hood, it's doing all the configuration of services and that's great. But suddenly, you've made being a founder of a software company—fundamentally—accessible to people who are not themselves software engineers. And I know that's anathema to some people, and I don't even slightly care because I am done with gatekeeping.Richard: Yeah. No, it's exciting if that can pull off. I mean, it's not the years ago where, how much capital was required to find the rack and do all sorts of things with tech, and hire some developers. And it's an amazing time to be software creators, now. The more we can enable that—yeah, I'm along for that journey, sign me up.Corey: I'm looking forward to seeing how it winds up shaking out. So, I want to talk a little bit about the paradox of choice problem that I just mentioned. If you take a look at the various compute services that every cloud provider offers, there are an awful lot of different choices as far as what you can run. There's the VM model, there's containers—if you're in AWS, you have 17 ways to run those—and you wind up—any of the serverless function story, and other things here and there, and managed services, I mean and honestly, Google has a lot of them, nowhere near as many as you do failed messaging products, but still, an awful lot of compute options. How do customers decide?What is the decision criteria that you see? 
Because the worst answer you can give someone who doesn't really know what they're doing is, “It depends,” because people don't know how to make that decision. It's, “What factors should I consider then, while making that decision?” And the answer has to be something somewhat authoritative because otherwise, they're going to go on the internet and get yelled at by everyone because no one is ever going to agree on this, except that everyone else is wrong.Richard: Mm-hm. Yeah, I mean, on one hand, look, I like that we intentionally have fewer choices than others because I don't think you need 17 ways to run a container. I think that's excessive. I think more than five is probably excessive because as a customer, what is the trade-off? Now, I would argue first off, I don't care if you have a lot of options as a vendor, but boy, the backends of those better be consistent.Meaning if I have a CI/CD tool in my portfolio and it only writes to two of them, shame on me. Then I should make sure that at least CI/CD, identity management, log management, monitoring, arguably your compute runtime should be a late-binding choice. And maybe that's blasphemous because somebody says, “I want to start up front knowing it's a function,” or, “I want to start it's a VM.” How about, as a developer, I couldn't care less. How about I just build cool software and maybe even at deploy time, I say, “This better fits in running in Kubernetes.” “This is better in a virtual machine.”And my cost of changing that later is meaningless because, hey, if it is in the container, I can switch it between three or four different runtimes, the identity management the same, it logs the exact same way, I can deploy CI/CD the same way. So, first off, if those things aren't the same, then the vendor is messing up. So, the customer shouldn't have to pay the cost of that. And then there gets to be other actual criteria. Look, I think you are looking at the workload itself, the team who makes it, and the strategy to figure out the runtime.It's easy for us. Google Compute Engine for VMs, containers go in GKE, managed services that need some containers, there are some apps around them, are Cloud Functions and Cloud Run. Like, it's fairly straightforward and it's going to be an OR situation—or an AND situation not an OR, which is great. But we're at least saying the premium way to run containers in Google Cloud for systems is GKE. There you go. If you do have a bunch of managed services in your architecture and you're stitching them together, then you want more serverless things like Cloud Run and Cloud Functions. And if you want to just really move some existing workload, GCE is your best choice. I like that that's fairly straightforward. There's still going to be some it depends, but it feels better than nine ways to run Kubernetes engines.Corey: I'm sure we'll see them in the fullness of time.Richard: [laugh].Corey: So, talk about Anthos a bit. That was a thing that was announced a while back and it was extraordinarily unclear what it was. And then I looked at the pricing and it was $10,000 a month with a one-year minimum commitment, and is like, “Oh, it's not for me. That's why I don't get it.” And I haven't really looked back at it since. But it is something else now. It almost feels like a wrapper brand, in some respects. How's it going? [unintelligible 00:29:26]?Richard: Yeah. 
Consumption, we'll talk more upcoming months on some of the adoption, but we're finally getting the hockey stick, which always comes delayed with platforms because nobody adopts platforms quickly. They buy the platform and a year later they start to actually build new development, migrate the things they have. So, we're starting to see the sort of growth. But back to your first point. And I even think I poorly tried to explain it a year ago with you. Basically, look, Anthos is the ability to manage fleets of GKE clusters, wherever they are. I don't care if they're on-prem, I don't care if they're in Google Cloud, I don't care if they're Amazon. We have one customer who only uses Anthos on AWS. Awesome, rock on.So, how do I put GKE clusters everywhere, but then do fleet management because look, some people are doing an app per cluster. They don't want to jam 50 apps in the cluster from different teams because they don't like the idea that this app requires root access; now you can screw around with mine. Or, you didn't update; that broke the cluster. I don't want any of that. So, you're going to see companies more, doing even app per cluster, app per developer per cluster.So, now I have a fleet problem. How do I keep it in sync? How do I make sure policy is consistent? Those sorts of things. So, Anthos is kind of solving the fleet management challenge and replacing people's first-gen app platform.Seeing a lot of those use cases, “Hey, we're retiring our first version of Docker Enterprise, Mesos, Cloud Foundry, even OpenShift,” saying, “All right, now's the time for our next version of our app platform. How about GKE, plus Cloud Run on top of it, plus other stuff?” Sounds good. So, going well is a, sort of—as you mentioned, there's a brand story here, mainly because we've also done two things that probably matter to you. A, we changed the price a lot.No minimum commit, remarkably at 20% of the cost it was when we launched, on purpose because we've gotten better at this. So, much cheaper, no minimum commit, pay as you go. Be on-premises, on bare metal with GKE. Pay by the hour, I don't care; sounds great. So, you can do that sort of stuff.But then more importantly, if you're a GKE customer and you just want config management, service mesh, things like that, now you can buy all of those independently as well. And Anthos is really the brand for fleet management of GKE. And if you're on Google Cloud only, it adds value. If you're off Google Cloud, if you're multi-cloud, I don't care. But I want to manage fleets of compute clusters and create them. We're going to keep doubling down on that.Corey: The big problem historically for understanding a lot of the adoption paradigm of Kubernetes has been that it was, to some extent, a reimagining of how Google ran and built software internally. And I thought at the time, the idea was—from a cynical perspective—that, “All right, well, your crappy apps don't run well on Google-style infrastructure so we're going to teach the entire world how to write software the way that we do.” And then you end up with people running their blog on top of Kubernetes, where it's one of those, like, the first blog post is, like, “How I spent the last 18 months building Kubernetes.” And, okay, that is certainly a philosophy and an approach, but it's almost approaching Windows 95 launch level of hype, where people who didn't own computers were buying copies of it, on some level. 
And I see the term come up in conversations in places where it absolutely has no place being brought up. “How do I run a Kubernetes cluster inside of my laptop?” And, “It's what you got going on in there, buddy?”Richard: [laugh].Corey: “What do you think you're trying to do here because you just said something that means something that I think is radically different to me than it is to you.” And again, I'm not here to judge other people's workflows; they're all terrible, except for mine, which is an opinion held by everyone about their own workflow. But understanding where people are, figuring out how to get there, how to meet customers where they are and empower them. And despite how heavily Google has been into the Kubernetes universe since its inception, you're very welcoming to companies—and loud-mouth individuals on Twitter—who have no use for Kubernetes. And working through various products you offer, I don't ever feel like a second-class citizen. There's really something impressive about that, of not letting the hype dictate the product and marketing decisions of it.Richard: Yeah, look, I think I tweeted it recently, I think the future of software is managed services with containers in the gap, for the most part. Whereas—if you can use managed services, please do. Use them wherever you can. And if you have to sling some code, maybe put it in a really portable thing that's really easy to run in lots of places. So, I think that's smart.But for us, look, I think we have the best container workflow from dev tools, and build tools, and artifact registries, and runtimes, but plenty of people are running containers, and you shouldn't be running Kubernetes all over the place. That makes sense for the workload, I think it's better than a VM at the retail edge. Can I run a small cluster, instead of a weird point-of-sale Windows app? Maybe. Maybe it makes sense to have a lightweight Kubernetes cluster there for consistency purposes.So, for me, I think it's a great medium for a subset of software. Google Cloud is going to take whatever you got, which is great. I think containers are great, but at the same time, I'm happily going to let you deploy a function that responds to you adding a storage item to a bucket, where at the same time give you a SaaS service that replaces the need for any code. All of those are terrific. So yeah, we love Kubernetes. We think it's great. We're going to be the best version to run it. But that's not going to be your whole universe.Corey: No, and I would argue it absolutely shouldn't be.Richard: [laugh]. Right. Agreed. Now again, for some companies, it's a great replacement for this giant fleet of VMs that all runs at eight percent utilization. Can I stick this into a bunch of high-density clusters? Absolutely you should. You're going to save an absolute fortune doing that and probably pick up some resilience and functionality benefits.But to your point, “Do I want to run a WordPress site in there?” I don't know, probably not. “Do I need to run my own MySQL?” I'd prefer you not do that. So, in a lot of cases, don't use it unless you have to. That should go for all compute nowadays. Use managed services.Corey: I'm a big believer in going down that approach just because it is so much easier than trying to build it yourself from popsicle sticks because you theoretically might have to move it someday in the future, even though you're not.Richard: [laugh]. 
Right. Corey: And it lets me feel better about a thing that isn't going to be used by anything that I'm doing in the near future. I just don't pretend to get it. Richard: No, I don't install a general purpose electric charger in my garage for any electric car I may get in the future; I charge for the one I have now. I just want it to work for my car; I don't want to plan for some mythical future. So yeah, premature optimization over architecture, or death in IT, especially nowadays where speed matters, don't waste your time building something that can run in nine clouds. Corey: Richard, I want to thank you for coming on again a year later to suffer my slings, arrows, and other various implements of misfortune. If people want to learn more about what you're doing, how you're doing it, possibly to pull a Forrest Brazeal and go work with you, where can they find you? Richard: Yeah, we're a fun place to work. So, you can find me on Twitter at @rseroter—R-S-E-R-O-T-E-R—hang out on LinkedIn, annoy me on my blog seroter.com as I try to at least explore our tech from time to time and mess around with it. But this is a fun place to work. There's a lot of good stuff going on here, and if you work somewhere else, too, we can still be friends. Corey: Thank you so much for your time today. Richard Seroter, director of outbound product management at Google. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment into which you have somehow managed to shove a running container. Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started. Announcer: This has been a HumblePod production. Stay humble.

Python Bytes
#258 Python built us an anime dog!

Python Bytes

Play Episode Listen Later Nov 11, 2021 43:09


Watch the live stream: Watch on YouTube About the show Sponsored by Shortcut - Get started at shortcut.com/pythonbytes Special guest: Karen Dalton Brian #1: stale : github bot to “Close Stale Issues and PRs” Was one response to a question by Will McGugan Something like “An issue filed on an open source project, I've asked a followup question about the issue, and filer doesn't respond. Is there an easy way to close the issue after a set time period of inactivity.” Just trying to get a reference to Will out of the way early in the episode. stale does this: Warns and then closes issues and PRs that have had no activity for a specified amount of time. The configuration must be on the default branch and the default values will: Add a label "Stale" on issues and pull requests after 60 days of inactivity and comment on them Close the stale issues and pull requests after 7 days of inactivity If an update/comment occurs on stale issues or pull requests, the stale label will be removed and the timer will restart If defaults seem too short or harsh, everything is configurable Michael #2: jut - JUpyter notebook Terminal viewer via kidpixo The command line tool to view the IPython/Jupyter notebook in the terminal. Even works against remote ipynb files (via http) Karen #3: JupyterLite via Marcel Milcent @MarcelMilcent JupyterLite is a JupyterLab distribution that runs entirely in the browser and is interactive Built using JupyterLab components and extensions Being developed by core Jupyter developers, but the project is still unofficial Example: https://jupyterlite.readthedocs.io/en/latest/_static/lab/index.html Offers JupyterLab or RetroLab (a.k.a. JupyterLab Classic) look No application server required, cacheable Try "import this"! Brian #4: Feature comparison of ack, ag, git-grep, GNU grep and ripgrep ack now, supplies are limited! Tangent for those unfamiliar with grep: grep is an essential tool for many developers that prints lines that match a pattern grep foo *.py - list all lines containing “foo” in this directory grep -l foo **/*.py | grep -v venv - here **/*.py recursively finds all Python files in this directory and all subdirectories, -l prints just the name of the file if it contains a “foo” in it, and | grep -v venv excludes virtual environments, because there's a lot of “foo” in there. (There's gotta be a better way to do this, someone suggest a better way, please). Article compares ack, ag “The Silver Searcher”, git-grep, grep, and rg “ripgrep” Language, Licence, and regex versions Features like parallelism, config, etc. Fine grain feature comparisons searching capability regular expression style search output file presentation file finding inclusion, exclusion file type specification random other features This is on the ack website, and kinda makes me want to try ripgrep. Michael #5: Python Client for Airtable: pyairtable by Gui Talarico What is Airtable? Hmm kind of like: Excel Trello boards CI Pipelines A big player in the nocode/lowcode community Check out the quickstart to see how it works (a rough usage sketch follows these notes). Karen #6: Black can now format notebooks via Marco Gorelli gh: MarcoGorelli (creator of nbQA [isort, pyupgrade, mypy, pylint, flake8, and more on Jupyter Notebooks]) pip install black[jupyter] black mynotebook.ipynb “…it should be significantly more robust than the current third-party tools” Extras Michael Trying a new password manager (sorta): Bitwarden The PSF is looking for an Executive Director Want a person in anime form?
Python 3.11.0a2 is out (via PyCoders) Karen Volunteer in your local Python community (or volunteer to speak) Joke:
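Following up on the pyairtable item above, here is a rough usage sketch of the client. The API key, base ID, table name, and field names below are placeholders, not real values, and the exact method names should be confirmed against the pyairtable quickstart:

```python
# Hypothetical sketch of reading and writing Airtable records with pyairtable.
# "keyXXXXXXXXXXXXXX", "appXXXXXXXXXXXXXX", and "Tasks" are placeholders.
from pyairtable import Table

table = Table("keyXXXXXXXXXXXXXX", "appXXXXXXXXXXXXXX", "Tasks")

# Fetch every record in the table (the client handles pagination for you).
for record in table.all():
    print(record["id"], record["fields"].get("Name"))

# Create a new record, then update one of its fields.
new_record = table.create({"Name": "Review show notes", "Done": False})
table.update(new_record["id"], {"Done": True})
```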

Adafruit Industries
Python on Hardware weekly video 156

Adafruit Industries

Play Episode Listen Later Nov 11, 2021 5:02


The wonderful world of Python on hardware! Episode 156 (November 10, 2021). This is our weekly Python video-newsletter-podcast! Ladyada and PT review the Python on hardware news & highlights of the week. The news comes from the Python community, Discord, Adafruit communities and more. It's part of the comprehensive newsletter we do each week. The video playlist of episodes is here: http://adafru.it/pohepisodes Sign up for the Python on Microcontrollers weekly email newsletter here: https://www.adafruitdaily.com/ Read the newsletters past and present at https://www.adafruitdaily.com/category/circuitpython/ Learn all about CircuitPython here: https://www.circuitpython.org/ https://adafruit.com/circuitpython/ Join us on Discord! https://adafru.it/discord/ Visit the Adafruit shop online, we're open for business - http://www.adafruit.com Adafruit on Instagram: https://www.instagram.com/adafruit Subscribe to Adafruit on YouTube: http://adafru.it/subscribe New tutorials on the Adafruit Learning System: http://learn.adafruit.com/

MōKuest Studios
Ep. 81: 12 Monkeys (How Does It Make Sense?)

MōKuest Studios

Play Episode Listen Later Nov 10, 2021 65:30


Cinekuest Video Presents 12 Monkeys AKA The One Where We Wonder How It Makes Sense Time travel, questioning reality, eco-terrorist actions, looooooove, and Bruce Willis' beautiful backside with a backdrop of Brad Pitt's pre-Fight Club nuttiness are the driving force of 1996's 12 Monkeys directed by Python alum who also directed a previous entry in …

Coder Radio
439: Github NoPilot

Coder Radio

Play Episode Listen Later Nov 10, 2021 59:13


Microsoft has a bunch of new goodies for developers, but Mike is becoming more and more concerned about an insidious new feature.

Ken's Nearest Neighbors
Can You Learn Data Science From a Book? (Tyler Richards) - KNN Ep. 73

Ken's Nearest Neighbors

Play Episode Listen Later Nov 10, 2021 56:06


We are actually giving away 4 copies of Tyler's book! Comment on the YouTube video with why you want to learn streamlit for a chance to win one of the copies! You can also win by commenting on the twitter post, my instagram post, or my linkedin post related to this podcast! Tyler is a data scientist at Facebook who recently published a book on the Python library Streamlit called 'Getting Started with Streamlit for Data Science'. He graduated from the University of Florida in 2018, and worked on election integrity problems for nonprofits and research labs while there.
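The notes above don't show any Streamlit code, but for context, a minimal Streamlit script in the spirit of what the book covers might look roughly like the sketch below. The data and column names are made up for illustration and are not taken from the book; the script is assumed to be run with `streamlit run app.py`:

```python
# app.py - a tiny illustrative Streamlit app; run with: streamlit run app.py
import numpy as np
import pandas as pd
import streamlit as st

st.title("Flipper length explorer")  # made-up example, not from the book

# Fake data standing in for a real dataset.
df = pd.DataFrame({
    "species": np.random.choice(["Adelie", "Gentoo", "Chinstrap"], 200),
    "flipper_length_mm": np.random.normal(200, 15, 200),
})

species = st.selectbox("Species", sorted(df["species"].unique()))
subset = df[df["species"] == species]["flipper_length_mm"]

st.write(subset.describe())
# Bucket to the nearest 10 mm and show counts as a simple bar chart.
st.bar_chart(subset.round(-1).value_counts().sort_index())
```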

Talk Python To Me - Python conversations for passionate developers
#340: Time to JIT your Python with Pyjion?

Talk Python To Me - Python conversations for passionate developers

Play Episode Listen Later Nov 10, 2021 73:38


Is Python slow? We touched on that question with Guido and Mark last episode. This time we welcome back friend of the show, Anthony Shaw. He's here to share the massive amount of work he's been doing to answer that question and speed things up where the answer is yes. He's just released version 1.0 of the Pyjion project. Pyjion is a drop-in JIT compiler for Python 3.10. Pyjion uses the power of the .NET 6 cross-platform JIT compiler to optimize Python code on the fly, with NO changes to your source code required. It runs on Linux, macOS, and Windows, x64 and ARM64. Links from the show Anthony on Twitter: @anthonypjshaw Pyjion: github.com Restarting Pyjion Presentation: youtube.com Hathi: SQL host scanner and dictionary attack tool: github.com Try Pyjion online: trypyjion.com Pyjion optimizations: readthedocs.io Pyjion docs: readthedocs.io .NET: dotnet.microsoft.com PEP 523: python.org Pydantic validation decorator: helpmanual.io Tortoise ORM: github.com pypy: pypy.org Numba: numba.pydata.org NGen AOT Compiler: microsoft.com Watch this episode on YouTube: youtube.com Episode transcripts: talkpython.fm ---------- Stay in touch with us ---------- Subscribe on YouTube (for live streams): youtube.com Follow Talk Python on Twitter: @talkpython Follow Michael on Twitter: @mkennedy Sponsors Shortcut Linode AssemblyAI Talk Python Training
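As a rough sketch of the "drop-in" workflow described above, the enable/disable calls below reflect how Pyjion is typically used; it assumes CPython 3.10 with the .NET 6 runtime installed, and the exact API should be verified against the Pyjion docs:

```python
# Minimal Pyjion sketch: turn the JIT on, run ordinary Python, turn it off.
# Assumes `pip install pyjion` on CPython 3.10 with .NET 6 available.
import timeit

import pyjion

pyjion.enable()  # from here on, frames are compiled by the .NET 6 JIT

def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(25))  # plain Python, no source changes required
print(timeit.timeit("fib(20)", globals=globals(), number=1000), "s")

pyjion.disable()  # back to the stock CPython interpreter
```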

The Bike Shed
315: Emotions Are A Pendulum

The Bike Shed

Play Episode Listen Later Nov 9, 2021 41:23


Steph talks about starting a new project and identifying "focused" tests while Chris shares his latest strategy for managing flaky tests. They also ponder the squishy "it depends" side of software and respond to a listener question about testing all commits in a pull request. This episode is brought to you by ScoutAPM (https://scoutapm.com/bikeshed). Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy. rspec-retry (https://github.com/NoRedInk/rspec-retry) Cassidy Williams - It Depends - GitHub Universe 2021 (https://www.youtube.com/watch?v=aMWh2uLO9OM) Say No To More Process (https://thoughtbot.com/blog/say-no-to-more-process-say-yes-to-trust) StandardRB (https://github.com/testdouble/standard) Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: CHRIS: My new computer is due on the fourth. I'm so close. STEPH: On the fourth? CHRIS: On the fourth. STEPH: That's so exciting. CHRIS: And I'm very excited. But no, I don't want to upgrade any software on this computer anymore. Never again shall I update a piece of software on this computer. STEPH: [laughs] CHRIS: This is its final state. And then I will take its soul and move it into the new computer, and we'll go from there. [chuckles] STEPH: Take its soul. [laughs] CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. STEPH: And I'm Steph Viccari. CHRIS: And together, we're here to share a bit of what we learn along the way. So, Steph, what's new in your world? STEPH: Hey, Chris. Let's see. It's been kind of a busy week. It's been a busy family week. Utah, my dog, hasn't been feeling well as you know because you and I have chatted off-mic about that a bit. So he is still recovering from something, I don't know what. He's still on most days his normal captain chaos self, but then other days, he's not feeling well. So I'm just keeping a close eye on him. And then I also got some other family illnesses going on. So it has been a busy family week for sure. On the more technical project side, I am wrapping up my current project. So I have one more week, and then I will shift into a new project, which I'm very excited about. And you and I have chatted about this several times. So there's always just that interesting phase where you're trying to wrap up and hand things off and then accomplish last-minute wishlist items for a project before then you start with a new one. So I am currently in that phase. CHRIS: How long were you on this project for? STEPH: It'll be a total of I think eight months. CHRIS: Eight months, that's healthy. That's a bunch. It's always interesting to be on a project for that long but then not longer. There were plenty of three and four-month projects that I did. And you can definitely get a large body of work done. You can look back at it and proudly stare at the code that you have written. But that length of time is always interesting to me because you end up really...for me, when I've had projects that went that long but then not longer, I always found that to be an interesting breaking point. How are you feeling moving on from it? Are you ready for something new? Are you sad to be moving on? Do you feel attached to things? STEPH: It's always a mix. I'm definitely attached to the team, and then there are always lots of things that I'd still love to work on with that team. But then, I am also excited to start something new. 
That's why I love this role of consulting because then I get to hop around and see new projects and challenges and work with new people. I'm thinking seven to eight months might be a sweet spot for me in terms of the length of a project. Because I find that first month with a project, I'm really still ramping up, I'm getting comfortable, I'm getting in the groove, and I'm contributing within a short amount of time. But I still feel like that first month; I'm getting really comfortable with this new environment that I'm in. And so then I have that first month. And then, at six months, I have more of heads-down time. And I get to really focus and work with a team. And then there's that transition period, and it's nice to know when that's coming up for several weeks, so then I have a couple of weeks to then start working on that transition phase. So eight months might be perfect because then it's like a month for onboarding, ramping up, getting comfortable. And then six months of focus, and then another month of just focusing on what needs to be transitioned so then I can transition off the team. CHRIS: All right. Well, now we've defined it - eight months is the perfect length of a project. STEPH: That's one of the things I like about the Boost team is because we typically have longer engagements. So that was one of the reasons when we were splitting up the teams in thoughtbot that I chose the Boost team because I was like, yeah, I like the six-month-plus project. Speaking of that wishlist, there are little things that I've wanted to make improvements on but haven't really had time to do. There's one that's currently on my mind that I figured I'd share with you in case you have thoughts on it. But I am a big proponent of using the RSpec focus filter for when running tests. So that way, I can just prefix a context it block or describe block with F, and then RSpec I can just run all the tests. But RSpec will only run the tests that I've prefixed with that F focus command., and I love it. But we are running into some challenges with it because right now, there's nothing that catches that in a pull request. So if you commit that focus filter on some of your tests, and then that gets pushed up, if someone doesn't notice it while reviewing your pull request, then that gets merged into main. And all of the tests are still green, but it's only a subset of the tests that are actually running. And so it's been on my mind that I'd love something that's going to notice that, that's going to catch it, something that is not just us humans doing our best but something that's automated that's going to notice it for us. And I have some thoughts. But I'm curious, have you run into something like this? Do you have a way that you avoid things like that from sneaking into the main branch? CHRIS: Interestingly, I have not run into this particular problem with RSpec, and that's because of the way that I run RSpec tests. I almost never use the focus functionality where you actually change the code file to say, instead of it, it is now fit to focus that it. I tend to lean into the functionality where RSpec you can pass it the line number just say, file: and then line number. And RSpec will automatically figure out which either spec or context block or entire file. And also, I have Vim stuff that allows me to do that very easily from the file. It's very rare that I would want to run more than one file. So basically, with that, I have all of the flexibility I need. And it doesn't require any changes to the file. 
So that's almost always how I'm working in that mode. I really love that. And it makes me so sad when I go to JavaScript test runners because they don't have that. That said, I've definitely felt a very similar thing with ESLint and ESLint yelling at me for having a console.log. And I'm like, ESLint, I'm working here. I got to debug some stuff, so if you could just calm down for a minute. And what I would like is a differentiation between these are checks that should only run in CI but definitely need to run in CI. And so I think an equivalent would be there's probably a RuboCop rule that says disallow fit or disallow any of the focus versions for RSpec. But I only want those to run in CI. And this has been a pain point that I felt a bunch of times. And it's never been painful enough that I put in the effort to fix it. But I really dislike particularly that version of I'm in my editor, and I almost always want there to be no warnings within the editor. I love that TypeScript or ESLint, or other things can run within the editor and tell me what's going on. But I want them to be contextually aware. And that's the dream I've yet to get there. STEPH: I like the idea of ESLint having a work mode where you're like, back off, I am in work mode right now. [chuckles] I understand that I won't commit this. CHRIS: I'm working here. [laughter] STEPH: And I like the idea of a RuboCop. So that's where my mind went initially is like, well, maybe there's a custom cop, or maybe there's an existing one, and I just haven't noticed it yet. But so I'm adding a rule that says, hey, if you do see an fcontext, fdescribe, ffit, something like that, please fail. Please let us know, so we don't merge this in. So that's on my wishlist, not my to-don't list. That one is on my to-do list. CHRIS: I'm also intrigued, though, because the particular failure mode that you're describing is you take what is an entire spec suite, and instead, you focus down to one context block within a given file. So previously, there were 700 specs that ran, and now there are 12. And that's actually something that I would love for Circle or whatever platform you're running your tests on to be like, hey, just as a note, you had been slowly creeping up and had hit a high watermark of roughly 700 specs. And then today, we're down to 12. So either you did some aggressive grooming, or something's wrong. But a heuristic analysis of like, I know sometimes people delete specs, and that's a thing that's okay but probably not this many. So maybe something went wrong there. STEPH: I feel like we're turning CI into this friend at the bar that's like, "Hey, you've had a couple of drinks. I just wanted to check in with you to make sure that you're good." [laughs] CHRIS: Yes. STEPH: "You've had 100 tests that were running and now only 50. Hey, friend, how are you? What's going on?" CHRIS: "This doesn't sound like you. You're normally a little more level-headed." [laughs] And that's the CI that is my friend that keeps me honest. It's like, "Wait, you promised never to overspend anymore, and yet you're overspending." I'm like, "Thank you, CI. You're right; I did say I want the test to pass." STEPH: [laughs] I love it. I'll keep you posted if I figure something out; if I either turn CI into that friend, that lets me know when my behavior has changed in a concerning way, and an intervention is needed. Or, more likely, I will see if there's a RuboCop or some other process that I can apply that will check for this, which I imagine will be fast. 
I mean, we're very mindful about ensuring our test suite doesn't slow down as we're running it. But I'm just thinking about this out loud. If we add that additional cop, I imagine that will be fast. So I don't think that's too much of an overhead to add to our CI process. CHRIS: If you've already got RuboCop in there, I'm guessing the incremental cost of one additional cop is very small. But yeah, it is interesting. That general thing of I want CI to go fast; I definitely feel that feel. And we're slowly creeping up on the project I'm working on. I think we're at about somewhere between five to six minutes, but we've gotten there pretty quickly where not that long ago; it was only three minutes. We're adding a lot of features specs, and so they are definitely accruing slowdowns in our CI. And they're worth it; I think, because they're so valuable. And they test the whole integration of everything, but it's a thing that I'm very closely watching. And I have a long list of things that I might pursue when I decide it's time for CI to get a haircut, as it were. STEPH: I have a very hot tip for a way to speed up your test, and that is to check if any of your tests have a very long sleep in them. That came up recently [chuckles] this week where someone was working in a test and found some relic that had been added a while back that then wasn't caught. And I think it was a sleep 30. And they were like, "Hey, I just sped up our test by 30 seconds." I was like, ooh, we should grep now to see if there's anything else like that. [laughs] CHRIS: Oh, I love the sentence we should grep now. [laughter] The correct response to this is to grep immediately. I thought you were going to go with the pro tip of you can just focus down to one context block. And then the specs will run so much faster because you're ignoring most of them, but we don't want to do that. The sleep, though, that's a pro tip. And that does feel like a thing that there could be a cop for, like, never sleep more than...frankly, let's try not to sleep at all but also, add a sleep in our specs. We can sleep in life; it's important, but anyway. [chuckles] STEPH: [laughs] That was the second hot tip, and you got it. CHRIS: Lots of hot tips. Well, I'm going to put this in the category of good idea, terrible idea. I won't call it a hot tip. It's a thing we're trying. So much as we have tried to build a spec suite that is consistent and deterministic and tells us only the truth, feature specs, even in our best efforts, still end up flaking from time to time. We'll have feature specs that fail, and then eventually, on a subsequent rerun, they will pass. And I am of the mindset that A, we should try and look into those and see if there is a real cause to it. But sometimes, just the machinery of feature specs, there's so much going on there. We've got the additional overhead of we're running it within a JavaScript context. There's just so much there that...let me say what I did, and then we can talk more about the context. So there's a gem called RSpec::Retry. It comes from the wonderful folks over at NoRedInk, a well-known Elm shop for anyone out there in the Elm world. But RSpec::Retry does basically what it says in the name. If the spec fails, you can annotate specs. In our case, we've only enabled this for the feature specs. And you can tell it to retry, and you can say, "Retry up to this many times," and et cetera, et cetera. So I have enabled this for our feature specs. And I've only enabled it on CI. That's an important distinction. 
This does not run locally. So if you run a feature spec and it fails locally, that's a good chance for us to intervene and look at whether or not there's some flakiness there. But on CI, I particularly don't want the case where we have a pull request, everything's great, and we merge that pull request, and then the subsequent rebuild, which again, as a note, I would rather that Circle not rebuild it because we've already built that one. But that is another topic that I have talked about in the past, and we'll probably talk about it again in the future. But setting that aside, Circle will rebuild on the main branch when we merge in, and sometimes we'll see failures there. And that's where it's most painful. Like, this is now the deploy queue. This is trying to get this out into whatever environment we're deploying to. And it is very sad when that fails. And I have to go in and manually say, hey, rebuild. I know that this works because it just worked in the pull request, and it's the same commit hash. So I know deterministically for reasons that this should work. And then it does work on a rebuild. So we introduced RSpec::Retry. We have wrapped it around our feature specs. And so now I believe we have three possible retries. So if it fails once, it'll try it again, and then it'll try it a third time. So far, we've seen each time that it has had to step in; it will pass on the subsequent run. But I don't know; there was some very gentle pushback or concerns; let's call them when I introduced this pull request from another developer on the team, saying, "I don't know, though, I feel like this is something that we should solve at the root layer. The failures are a symptom of flaky tests, or inconsistency or et cetera, and so I'd rather not do this." And I said, "Yeah, I know. But I'm going to merge it," and then I merged it. We had a better conversation about that. I didn't just broadly overrule. But I said, "I get it, but I don't see the obvious place to shore this up. I don't see where we're doing weird inconsistent things in our code. This is just, I think, inherent complexity of feature specs." So I did it, but yeah, good idea, terrible idea. What do you think, Steph? Maybe terrible is too strong of a word. Good idea, mediocre idea. STEPH: I like the original branding. I like the good idea, terrible idea. Although you're right, that terrible is a very strong branding. So I am biased right now, so I'm going to lead in answering your question by stating that because our current project has that problem as well where we have these flaky tests. And it's one of those that, yes, we need to look at them. And we have fixed a large number of them, but there are still more of them. And it becomes a question of are we actually doing something wrong here that then we need to fix? Or, like you said, is it just the nature of these features-specs? Some of them are going to occasionally fail. What reasonable improvements can we make to address this at the root cause? I'm interested enough that I haven't heard of RSpec::Retry that I want to check it out because when you add that, you annotate a test. When a test fails, does it run the entire build, or will it rerun just that test? Do you happen to know? CHRIS: Just the test. So it's configured as in a round block on the feature specs. And so you tell it like, for any feature spec, it's like config.include for feature specs RSpec::Retry or whatever. So it's just going to rerun the one feature spec that failed when and if that happens. 
So it's very, very precise as well in that sense where when we have a failure merging into the main branch, I have to rebuild the whole thing. So that's five or six minutes plus whatever latency for me to notice it, et cetera, whereas this is two more seconds in our CI runtime. So that's great. But again, the question is, am I hiding? Am I dealing with the symptoms and not the root cause, et cetera? STEPH: Is there a report that's provided at the end that does show these are the tests that failed and we had to rerun them? CHRIS: I believe no-ish. You can configure it to output, but it's just going to be outputting to standard out, I believe. So along with the sea of green dots, you'll see had to retry this one. So it is visible, but it's not aggregated. And the particular thing is there's the JUnit reporter that we're using. So the XML common format for this is how long our tests took to run, and these ones passed and failed. So Circle, as a particular example, has platform-level insights for that kind of stuff. And they can tell you these are your tests that fail most commonly. These are the tests that take the longest run, et cetera. I would love to get it integrated into that such that retried and then surface this to Circle. Circle could then surface it to us. But right now, I don't believe that's happening. So it is truly I will not see it unless I actively go search for it. To be truly honest, I'm probably not doing that. STEPH: Yeah, that's a good, fair, honest answer. You mentioned earlier that if you want a test to retry, you have to annotate the test. Does that mean that you get to highlight specific tests that you're marking those to say, "Hey, I know that these are flaky. I'm okay with that. Please retry them." Or does it apply to all of them? CHRIS: I think there are different ways that you can configure it. You could go the granular route of we know this is a flaky spec, so we're going to only put the retry logic around it. And that would be a normal RSpec annotation sort of tagging the spec, I think, is the terminology there. But we've configured it globally for all feature specs. So in a spec support file, we just say config.include Rspec::Retry where type is a feature. And so every feature spec now has the possibility to retry. If they pass on the first pass, which is the hope most of the time, then they will not be tried. But if they don't, if they fail, then they'll be retried up to three times or up to two additional times, I think is the total. STEPH: Okay, cool. That's helpful. So then I think I have my answer. I really think it's a good idea to automate retrying tests that we have identified that are flaky. We've tried to address the root, and our resolution was this is fine. This happens sometimes. We don't have a great way to improve this, and we want to keep the test. So we're going to highlight that this test we want to retry. And then I'm going to say it's not a great idea to turn it on for all of them just because then I have that same fear about you're now hiding any flaky tests that get introduced into the system. And nobody reasonably is going to go and read through to see which tests are going to get retried, so that part makes me nervous. CHRIS: I like it. I think it's a balanced and reasonable set of good and terrible idea. Ooh, it's perfect. I don't think we've had a balanced answer on that yet. STEPH: I don't think so. CHRIS: This is a new outcome for this segment. I agree. 
Ideally, in my mind, it would be getting into that XML format, the output from the tests, so that we now have this artifact, we can see which ones are flaky and eventually apply effort there. What you're saying feels totally right of we should be more particular and granular. But at the same time, the failure mode and the thing that I'm trying, I want to keep deploys going. And I only want to stop deploys if something's really broken. And if a spec retries, then I'm fine with it is where I've landed, particularly because we haven't had any real solutions where there was anything weird in our code. Like, there's just flakiness sometimes. As I say it, I feel like I'm just giving up. [laughs] And I can hear this tone of stuff's just hard sometimes, and so I've taken the easy way out. And I guess that's where I'm at right now. But I think what you're saying is a good, balanced answer here. I like it. I don't know if I'm going to do anything about it, but...[laughter] STEPH: Well, going back to when I was saying that I'm biased, our team is feeling this pain because we have flaky tests. And we're creating tickets, and we're trying to do all the right things. We create a ticket. We have that. So it's public. So people know it's been acknowledged. If someone's working on it, we let the team know; hey, I'm working on this. So we're not duplicating efforts. And so, we are trying to address all of them. But then some of them don't feel like a great investment of our time trying to improve. So that's what I really do like about the RSpec::Retry is then you can still have a resolution. Because it's either right now your resolution is to fix it or to change the code, so then maybe you can test it in a different way. There's not really a good medium step there. And so the retry feels like an additional good outcome to add to your tool bag to say, hey, I've triaged this, and this feels reasonable that we want to retry this. But then there's also that concern of we don't want to hide all of these flaky tests from ourselves in case we have done it and there is an opportunity for us to improve it. So I think that's what I do really like about it because right now, for us, when a test fails, we have to rerun the entire build, and that's painful. So if tests are taking about 20 minutes right now, then one spec fails, and then you have to wait another 20 minutes. CHRIS: I would have turned this on years ago with a 20-minute build time. [chuckles] STEPH: [laughs] Yeah, you're not wrong. But also, I didn't actually know about RSpec::Retry until today. So that may be something that we introduce into our application or something that I bring up to the team to see if it's something that we want to add. But it is interesting that initial sort of ooh kind of feeling that the team will give you introducing because it feels bad. It feels wrong to be like, hey, we're just going to let these flaky tests live on, and we're going to automate retrying them to at least speed us up. And it's just a very interesting conversation around where we want to invest our time and between the risk and pay off. And I had a similar experience this week where I had that conversation, but this one was more with myself where I was working through a particular issue where we have a state in the application where something weird was done in the past that led us to a weird state. 
And so someone raised a very good question where it's like, well, if what you're saying is technically an impossible state, we should make it impossible, like at the database layer. And I love that phrase. And yet, there was a part of me that was like, yes, but also doing that is not a trivial investment. And we're here because of a very weird thing that happened before. It felt one of those interesting, like, do we want to pursue the more aggressive, like, let's make this impossible for the future? Or do we want to address it for now and see if it comes back up, and then we can invest more time in it? And I had a hard time walking myself through that because my initial response was, well, yeah, totally, we should make it impossible. But then I walked through all the steps that it would take to make that happen, and it was not very trivial. And so it was one of those; it felt like the change that we ended up with was still an improvement. It was going to prevent users from seeing an error. It was still going to communicate that this state is an odd state for the application to be in. But it didn't go as far as to then add in all of the safety measures. And I felt good about it. But I had to convince myself to feel good about it. CHRIS: What you're describing there, the whole thought sequence, really feels like the encapsulation of it depends. And that being part of the journey of learning how to do software development and what it means. And you actually shared a wonderful video with me yesterday, and it was Cassidy Williams at GitHub Universe. And it was her talking to her younger self, and just it depends, and it was so true. So we will include a link to that in the show note because that was a wonderful thing for you to share. And it really does encapsulate this thing. And from the outside, before I started doing software development, I'm like, it's cool. I'm going to learn how to sling code and fix the stuff and hack, and it'll be great, and obvious, and correct, and knowable. And now I'm like, oh man, squishy nonsense. That's all it is. STEPH: [laughs] CHRIS: Fun squishy, and I like it. It's so good. But it depends. Exactly that one where you're like, I know that there's a way to get to correctness here but is it worth the effort? And looping back to...I'm surprised at the stance that I've taken where I'm just like, yeah, I'm putting in RSpec::Retry. This feels like the right thing. I feel good about this decision. And so I've tried to poke at it a tiny bit. And I think what matters to me deeply in a list of priorities is number one correctness. I care deeply that our system behaves correctly as intended and that we are able to verify that. I want to know if the system is not behaving correctly. And that's what we've talked about, like, if the test suite is green, I want to be able to deploy. I want to feel confident in that. Flaky specs exist in this interesting space where if there is a real underlying issue, if we've architected our system in a way that causes this flakiness and that a user may ever experience that, then that is a broken system. That is an incorrect system, and I want to resolve that. But that's not the case with what we're experiencing. We're happy with the architecture of our system. And when we're resolving it, we're not even really resolving them. We're just rerunning manually at this point. We're just like, oh, that spec flaked. And there's nothing to do here because sometimes that just happens. So we're re-running manually. 
And so my belief is if I see all green, if the specs all pass, I know that I can deploy to production. And so if occasionally a spec is going to flake and retrying it will make it pass (and I know that pass doesn't mean oh, this time it happened to pass; it's that is the correct outcome) and we have a false negative before, then I'm happy to instrument the system in a way that hides that from me because, at this point, it does feel like noise. I'm not doing anything else with the failures when we were looking at them more pointedly. I'm not resolving those flaky specs. There are no changes that we've made to the underlying system. And they don't represent a failure mode or an incorrectness that an end-user might see. So I honestly want to paper over and hide it from myself. And that's why I've chosen this. But you can see I need to defend my actions here because I feel weird. I feel a little off about this. But as I talk through it, that is the hierarchy. I care about correctness. And then, the next thing I care about is maintaining the deployment pipeline. I want that to be as quick and as efficient as possible. And I've talked a bunch about explorations into the world of observability and trying to figure out how to do continuous deployment because I think that really encourages overall better engineering outcomes. And so first is correctness. Second is velocity. And flaky specs impact velocity heavily, but they don't actually impact correctness in the particular mode that we're experiencing them here. They definitely can. But in this case, as I look at the code, I'm like, nah, that was just noise in the system. That was just too much complexity stacked up in trying to run a feature spec that simulates a browser and a user clicking in JavaScript and all this stuff and the things. But again, [laughs] here I am. I am very defensive about this apparently. STEPH: Well, I can certainly relate because I was defending my answer to myself earlier. And it is really interesting what you're pointing out. I like how you appreciate correctness and then velocity, that those are the two things that you're going after. And flaky tests often don't highlight an incorrect system. It is highlighting that maybe our code or our tests are not as performant as we would like them to be, but the behavior is correct. So I think that's a really important thing to recognize. The part where I get squishy is where we have encountered on this project some flaky tests that did highlight that we had incorrect behavior, and there's only been maybe one or two. It was rare that it happened, but it at least has happened once or twice where it highlighted something to us that when tests were run...I think there's a whole lot of context. I won't get into it. But essentially, when tests were being run in a particular way that made them look like a flaky test, it was actually telling us something truthful about the system, that something was behaving in a way that we didn't want it to behave. So that's why I still like that triage that you have to go through. But I also agree that if you're trying to get out at a deploy, you don't want to have to deal with flaky tests. There's a time to eat your vegetables, and I don't know if it's when you've got a deploy that needs to go out. That might not be the right time to be like, oh, we've got a flaky test. We should really address this. It's like, yes; you should note to yourself, hey, have a couple of vegetables tomorrow, make a ticket, and address that flaky test but not right now. 
That's not the time. So I think you've struck a good balance. But I also do like the idea of annotating specific tests instead of just retrying all of them, so you don't hide anything from yourself. CHRIS: Yeah. And now that I'm saying it and now that I'm circling back around, what I'm saying is true of everything we've done so far. But it is possible that now this new mode that the system behaves in where it will essentially hide flaky specs on CI means that any new flaky regressions, as it were, will be hidden from us. And thus far, almost all or I think all of the flakiness that we've seen has basically been related to timeouts. So a different way to solve this would potentially be to up the Capybara wait time. So there are occasionally times where the system's churning through, and the various layers of the feature specs just take a little bit longer. And so they miss...I forget what it is, but it's like two seconds right now or something like that. And I can just bump that up and say it's 10 seconds. And that's a mode that if eventually, the system ends in the state that we want, I'm happy to wait a little longer to see that, and that's fine. But there are...to name some of the ways that flaky tests can actually highlight truly incorrect things; race conditions are a pretty common one where this behaves fine most of the time. But if the background job happens to succeed before the subsequent request happens, then you'll go to the page. That's a thing that a real user may experience, and in fact, it might even be more likely in production because production has differential performance characteristics on your background jobs versus your actual application. And so that's the sort of thing that would definitely be worth keeping in mind. Additionally, if there are order issues within your spec suite if the randomize...I think actually RSpec::Retry wouldn't fix this, though, because it's going to retry within the same order. So that's a case that I think would be still highlighted. It would fail three times and then move on. But those we should definitely deal with. That's a test-related thing. But the first one, race conditions, that's totally a thing. They come up all the time. And I think I've potentially hidden that from myself now. And so, I might need to lock back what I said earlier because I feel like it's been true thus far that that has not been the failure mode, but it could be moving forward. And so I really want to find out if we got flaky specs. I don't know; I feel like I've said enough about this. So I'm going to stop saying anything new. [laughs] Do you have any other thoughts on this topic? STEPH: Our emotions are a pendulum. We swing hard one way, and then we have to wait till we come back and settle in the middle. But there's that initial passion play where you're really frustrated by something, and then you swing, and you settle back towards something that's a little more neutral. CHRIS: I don't trust anyone who pretends like their opinions never change. It doesn't feel like a good way to be. STEPH: Oh, I hope that...Do people say that? I hope that's not true. I hope we are all changing our opinions as we get more information. CHRIS: Me too. Mid-roll Ad And now a quick break to hear from today's sponsor, Scout APM. Scout APM is leading-edge application performance monitoring that's designed to help Rails developers quickly find and fix performance issues without having to deal with the headache or overhead of enterprise platform feature bloat. 
With a developer-centric UI and tracing logic that ties bottlenecks to source code, you can quickly pinpoint and resolve those performance abnormalities like N+1 queries, slow database queries, memory bloat, and much more. Scout's real-time alerting and weekly digest emails let you rest easy knowing Scout's on watch and resolving performance issues before your customers ever see them. Scout has also launched its new error monitoring feature add-on for Python applications. Now you can connect your error reporting and application monitoring data on one platform. See for yourself why developers call Scout their best friend and try our error monitoring and APM free for 14 days; no credit card needed. And as an added-on bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. Learn more at scoutapm.com/bikeshed. That's scoutapm.com/bikeshed. CHRIS: Well, shifting only ever so slightly because it turns out it's a very related question, but we have a listener question. As always, thank you so much to everyone who sends in listener questions. We really appreciate them. And today's question comes from Mikhail, and he writes in, "Regarding the discussion in Episode 311 on requiring commits merged to be tested, I have a question on how you view multi commit PRs. Do you think all the commits in a PR should be tested or only the last one? If you test all commits in a PR, do you have any good tips on setups for that? Would you want all commits to pass all tests? For one, it helps a lot when using Git bisect. It is also a question of keeping the history clean and understandable. As a background on the project I currently work on, we have the opinion that all commits should be tested and working. We have now decided on single commit PRs only since this is the only way that we can currently get the setup reasonably on our CI. I would like to sometimes make PRs with more than one commit since I want to make commits as small as possible. In order to do that, we would have to find a way to make sure all commits in the PR are tested. There seems to be some hacky ways to accomplish this, but there is not much talk about it. Also, we are strict in requiring a linear history in all our projects. Kind regards, Mikhail." So, Steph, what do you think? STEPH: I remember reading this question when it came in. And I have an experience this week that is relevant to this mainly because I had seen this question, and I was thinking about it. And off the cuff, I haven't really thought about this. I haven't been very concerned about ensuring every single commit passes because I want to ensure that, ultimately, the final commit that I have is going in. But I also rarely have more than one commit in a PR. So that's often my default mode. There are a couple of times that I'll have two, maybe three commits, but I think that's pretty rare for me. I'll typically have just one commit. So I haven't thought about this heavily. And it's not something that frankly I've been concerned about or that I've run into issues with. From their perspective about using Git bisect, I could see how that could be troublesome, like if you're looking at a commit and you realize there's a particular commit that's already merged and that fails. The other area that I could think of where this could be problematic is if you're trying to roll back to a specific commit. 
And if you accidentally roll back to a commit that is technically broken, but you didn't know that because it was not the final commit as was getting tested on CI, that could happen. I haven't seen that happen. I haven't experienced it. So while that does seem like a legitimate concern, it's also one that I frankly just haven't had. But because I read this question from this person earlier this week, I actually thought about it when I was crafting a PR that had several commits in it, which is kind of unusual for me since I'm usually one or two commits in a PR. But for this one, I had several because we use standard RB in our project to handle all the formatting. And right now, we have one of those standard to-do files because we added it to the project. But there are still a number of manual fixes that need to be applied. So we just have this list of files that still need to be formatted. And as someone touches that file, we will format it, and then we'll take it out of that to-do list. So then standard RB will include it as it's linting all of our files. And I decided to do that for all of our spec files. Because I was like, well, this was the safest chunk of files to format that will require the least amount of review from folks. So I just want to address all of them in one go. But I separated the more interesting changes into different commits just to make others aware of, like, hey, this is something standard RB wants. And it was interesting enough that I thought I would point it out. So my first commit removed all the files from that to-do list, but then my other commits are the ones that made actual changes to some of those files that needed to be corrected. So technically, one or two of my middle commits didn't pass the standard RB linting. But because CI was only running that final commit, it didn't notice that. And I thought about this question, and so I intentionally went back and made sure each of those commits were correct at that point in time. And I feel good about that. But I still don't feel the need to add more process around ensuring each commit is going to be green. I think I would lean more in favor of let's keep our PR small to one or two commits. But I don't know; it's something I haven't really run into. It's an interesting question. How about you? What are your experiences, or what are your thoughts on this, Chris? CHRIS: When this question came through, I thought it was such an interesting example of considering the cost of process changes. And to once again reference one of our favorite blog posts by German Velasco, the Say No to More Process post, which we will, of course, link in the show notes. This is such a great example of there was likely a small amount of pain that was felt at one point where someone tried to run git bisect. They ran into a troublesome commit, and they were like, oh no, this happened. We need to add processes, add automation, add control to make sure this never happens again. Personally, I run git bisect very rarely. When I do, it's always a heroic moment just to get it started and to even know which is the good and which is the bad. It's always a thing anyway. So it would be sad if I ran into one of these commits. But I think this is a pretty rare outcome. I think in the particular case that you're talking about, there's probably a way to actually tease that apart. I think it sounds like you fixed those commits knowing this, maybe because you just put it in your head. 
But the idea that the process that this team is working on has been changed such that they only now allow single commit PRs feels like too much process in my mind. I think I'm probably 80%, maybe 90% of the time; it's only a single commit in a PR for me. But occasionally, I really value having the ability to break it out into discrete steps, like these are all logically grouped in one changeset that I want to send through. But they're discrete steps that I want to break apart so that the team can more easily review it so that we have granular separation, and I can highlight this as a reference. That's often something that I'll do is I want this commit to standalone because I want it to be referenced later on. I don't want to just fold it into the broader context in which it happened, but it's pretty rare. And so to say that we can't do that feels like we're adding process where it may not be worth it, where the cost of that process change is too high relative to the value that we're getting, which is speculatively being able to run git bisect and not hit something problematic in the future. There's also the more purist, dogmatic view of well, all commits should be passing, of course. Yeah, I totally agree with that. But what's it worth to you? How much are you willing to spend to achieve that goal? I care deeply about the correctness of my system but only the current correctness. I don't care about historical correctness as much, some. I think I'm diminishing this more than I mean to. But really back to that core question of yes, this thing has value, but is it worth the cost that we have to pay in terms of process, in terms of automation and maintenance of that automation over time, et cetera or whatever the outcome is? Is it worth that cost? And in this case, for me, this would not be worth the cost. And I would not want to adopt a workflow that says we can only ever have single commit PRs, or all commits must be run on CI or any of those variants. STEPH: This is an interesting situation where I very much agree with everything you're saying. But I actually feel like what Mikhail wants in this world; I want it too. I think it's correct in the way that I do want all the commits to pass, and I do want to know that. And I think since I do fall into the default, like you mentioned, 80%, 90% of my PRs are one commit. I just already have that. And the fact that they're enforcing that with their team is interesting. And I'm trying to think through why that feels cumbersome to enforce that. And I'm with you where I'll maybe have a refactor commit or something that goes before. And it's like, well, what's wrong with splitting that out into a separate PR? What's the pain point of that? And I think the pain point is the fact that one, you have two PRs that are stacked on each other. So you have the first one that you need to get reviewed, and then the second one; there's that bit of having to hop between the two if there's some shared context that someone can't just easily review in one pull request. But then there's also, as we just mentioned, there's CI that has to run. And so now it's running on both of them, even though maybe that's a good thing because it's running on both commits. I like the idea that every commit is tested, and every commit is green. But I actually feel like it's some of our other processes that make it cumbersome and hard to get there. And if CI did run on every commit, I think it would be ideal, but then we are increasing our CI time by running it on every commit. 
And then it comes down to essentially what you said, what's the risk? So if we do merge in a commit that doesn't work or has something that's failing about it but then the next commit after that fixes it, what's the risk that we're going to roll back to that one specific commit that was broken? If that's a high risk for you and your team, then adding this process is probably the really wise thing to do because you want to make sure the app doesn't go down for users. That's incredibly important. If that's not a high risk for your team, then I wouldn't add the process. CHRIS: Yeah, I totally agree. And to clarify my stances, for me, this change, this process change would not be worth the trade-off. I love the idea. I love the goal of it. But it is not worth the process change, and that's partly because I haven't particularly felt the pain. CI is not an inexhaustible resource I have learned. I'm actually somewhat proud our very small team that is working on the project that we're working on; we just recently ran out of our CI budget, and Circle was like, "Hey, we got to charge you more." And I was like, "Cool, do that." But it was like, there is cost both in terms of the time, clock time, and each PR running and all of those. We have to consider all of these different things. And hopefully, we did a useful job of framing the conversation, because as always, it depends, but it depends on what. And in this case, there's a good outcome that we want to get to, but there's an associated cost. And for any individual team, how you weigh the positive of the outcome versus how you weigh the cost will alter the decision that you make. But that's I think, critically, the thing that we have to consider. I've also noticed I've seen this conversation play out within teams where one individual may acutely feel the pain, and therefore they're anchored in that side. And the cost is irrelevant to them because they're like, I feel this pain so acutely, but other people on the team aren't working in that part of the codebase or aren't dealing with bug triage in the same way that that other developer is. And so, even within a team, there may be different levels of how you measure that. And being able to have meaningful conversations around that and productively come to a group decision and own that and move forward with that is the hard work but the important work that we have to do. STEPH: Yeah. I think that's a great summary; it depends. On that note, shall we wrap up? CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. All: Byeeeeeeeeee! Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

The Stack Overflow Podcast
Web3 won't save us

The Stack Overflow Podcast

Play Episode Listen Later Nov 5, 2021 37:45


What is Web3? The Decentralized Internet of the Future, with Cassidy, Ceora, Ryan, and Ben. Thanks to our lifeboat badge winner of the week, Tadeck, for showing us how to design a function for factorial in Python.
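The linked answer itself isn't reproduced here, but a minimal sketch of a factorial function in Python, written iteratively, might look like this:

def factorial(n: int) -> int:
    """Return n! for a non-negative integer n."""
    if n < 0:
        raise ValueError("factorial is not defined for negative numbers")
    result = 1
    for i in range(2, n + 1):  # empty range for n in (0, 1), which leaves result at 1
        result *= i
    return result

print(factorial(5))  # prints 120

In practice, math.factorial from the standard library already covers the common case; a hand-rolled version like this is mostly useful for learning or for customizing error handling.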

Talk Python To Me - Python conversations for passionate developers
#339: Making Python Faster with Guido and Mark

Talk Python To Me - Python conversations for passionate developers

Play Episode Listen Later Nov 4, 2021 61:02


There has been a bunch of renewed interest in making Python faster. While for some of us Python is already plenty fast, for others, such as those in data science, scientific computing, and even the large tech companies, making Python even a little faster would be a big deal. This episode is the first of several that dive into some of the active efforts to increase the speed of Python while maintaining compatibility with existing code and packages. Who better to help kick this off than Guido van Rossum and Mark Shannon? They both join us to share their project to make Python faster. I'm sure you'll love hearing what they are up to.
Links from the show:
Guido van Rossum: @gvanrossum
Mark Shannon: linkedin.com
Faster Python Plan: github.com/faster-cpython
The "Shannon Plan": github.com/markshannon
Sam Gross's nogil work: docs.google.com
Watch this episode on YouTube: youtube.com
---------- Stay in touch with us ----------
Subscribe on YouTube (for live streams): youtube.com
Follow Talk Python on Twitter: @talkpython
Follow Michael on Twitter: @mkennedy
Sponsors: Shortcut, Linode, AssemblyAI, Talk Python Training

Python Bytes
#257 Python Launcher - Launching Python Everywhere

Python Bytes

Play Episode Listen Later Nov 4, 2021 40:25


Watch the live stream: Watch on YouTube
About the show: Sponsored by Shortcut. Special guest: Morleh So-kargbo

Michael #1: Django 4.0 beta 1 released
Django 4.0 beta 1 is now available, and Django 4.0 has an abundance of new features:
- The new *expressions positional argument of UniqueConstraint() enables creating functional unique constraints on expressions and database functions (a hedged sketch of this appears after these notes).
- The new scrypt password hasher is more secure and recommended over PBKDF2.
- The new django.core.cache.backends.redis.RedisCache cache backend provides built-in support for caching with Redis.
- To enhance customization of Forms, Formsets, and ErrorList, they are now rendered using the template engine.

Brian #2: py - The Python launcher
py has been bundled with Python for Windows only since Python 3.3, as py.exe (see Python Launcher for Windows). I've mostly ignored it since I use Python on Windows, macOS, and Linux and don't want to have different workflows on different systems. But now Brett Cannon has developed python-launcher, which brings py to macOS and various other Unix-y systems, or any OS which supports Rust. Now py is everywhere I need it to be, and I've switched my workflow to use it.
Usage:
- py : Run the latest Python version on your system
- py -3 : Run the latest Python 3 version
- py -3.9 : Run the latest 3.9 version
- py -2.7 : Even run 2.x versions
- py --list : List all versions (with python-launcher, it also lists paths)
- py --list-paths : py.exe only - list all versions with paths
Why is this cool?
- I never have to care where Python is installed or where it is in my search path.
- I can always run any version of Python installed without setting up symbolic links.
- The same workflow works on Windows, macOS, and Linux.
Old workflow: Make sure the latest Python is found first in the search path, then call python3 -m venv venv. For a specific version, make sure python3.8, for example, or python38 or something is in my path. If not, create it somewhere.
New workflow: py -m venv venv creates a virtual environment with the latest Python installed. After activation, everything happens in the virtual env. Create a specific venv to test something on an older version: py -3.8 -m venv venv --prompt '3.8'. Or even just run a script with an old version: py -3.8 script_name.py. Of course, you can also run it with the latest version: py script_name.py.
Note: if you use py within a virtual environment, the default version is the one from the virtual env, not the latest.

Morleh #3: Transformers As General-Purpose Architecture
The Attention Is All You Need paper first proposed Transformers in June 2017. The Hugging Face (
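As a follow-up to the Django 4.0 item above, here is a minimal, hypothetical sketch of what a functional unique constraint might look like; the Tag model and the constraint name are invented for illustration and are not from the show notes:

from django.db import models
from django.db.models import UniqueConstraint
from django.db.models.functions import Lower

class Tag(models.Model):
    name = models.CharField(max_length=64)

    class Meta:
        constraints = [
            # Case-insensitive uniqueness enforced on an expression,
            # using the new *expressions argument added in Django 4.0
            UniqueConstraint(Lower("name"), name="tag_name_case_insensitive_unique"),
        ]

Before Django 4.0, getting the same guarantee typically meant hand-written SQL in a migration or validation in Python; constraining the expression keeps the rule in the database itself.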