In this news episode, the trio discuss the Spark connector for Fabric warehouse, a better alternative to the Azure Update page, Windows Recall (again), and the conundrum of Azure DevOps and GitHub Enterprise! Show notes hosted on Acast. See acast.com/privacy for more information.
Episode 72: In this episode of Critical Thinking - Bug Bounty Podcast, Justin and Joel discuss some hot research from the past couple months. This includes ways to smuggle payloads in phone numbers and IPv6 Addresses, the NextJS SSRF, the PDF.JS PoC drop, and a GitHub Enterprise Indirect Method Information bug. Also, we have an attack vector featured from Monke!
Follow us on twitter at: @ctbbpodcast
Shoutout to YTCracker for the awesome intro music!
------ Links ------
Follow your hosts Rhynorater & Teknogeek on twitter.
------ Ways to Support CTBBPodcast ------
Hop on the CTBB Discord at https://ctbb.show/discord!
We also do Discord subs at $25, $10, and $5 - premium subscribers get access to private masterclasses, exploits, tools, scripts, un-redacted bug reports, etc.
Resources:
- PDF.JS Bypass to XSS: https://github.com/advisories/GHSA-wgrm-67xf-hhpq and https://codeanlabs.com/blog/research/cve-2024-4367-arbitrary-js-execution-in-pdf-js/
- PDFium
- NextJS SSRF by AssetNote
- Better Bounty Transparency for hackers
- Slonser IPV6 Research
- Smuggling payloads in phone numbers
- Automatic Plugin SQLi
- DomPurify Bypass
- Bug Bounty JP Podcast
- Github Enterprise send() bug: https://x.com/creastery/status/1787327890943873055 and https://x.com/Rhynorater/status/1788598984572813549
Timestamps:
(00:00:09) Introduction
(00:03:20) PDF.JS XSS and NextJS SSRF
(00:12:52) Better Bounty Transparency
(00:20:01) IPV6 Research and Phone Number Payloads
(00:28:20) Community Highlight and Automatic Plugin CVE-2024-27956
(00:33:26) DomPurify Bypass and Github Enterprise send() bug
(00:46:12) Caido cookie and header extension updates
In today's episode, we explore a critical GitHub Enterprise Server vulnerability (CVE-2024-4985) that allows authentication bypass and the necessary updates for protection (https://thehackernews.com/2024/05/critical-github-enterprise-server-flaw.html), the EPA's enforcement actions against water utilities lacking cybersecurity measures (https://www.cybersecuritydive.com/news/epa-enforcement-water-utilities-cyber/716719/), and newly discovered security flaws in the Python package llama_cpp_python (CVE-2024-34359) and Firefox's PDF.js library (CVE-2024-4367), highlighting potential risks and the importance of vigilant security practices (https://thehackernews.com/2024/05/researchers-uncover-flaws-in-python.html).
00:00 Cybersecurity Threats to US Water Utilities
01:02 Deep Dive into Water Utility Cybersecurity Flaws
03:26 Strategies for Enhancing Cybersecurity in Water Utilities
04:49 EPA's Enforcement Actions and the Importance of Cybersecurity
06:38 GitHub Enterprise Server's Critical Security Flaw
08:00 Emerging Cybersecurity Threats and Updates
Tags: GitHub, Enterprise Server, CVE, SAML SSO, cybersecurity, vulnerability, GitHub updates, EPA, cyberattacks, water utilities, vulnerabilities, security enforcement, Checkmarx, Llama Drama, Mozilla, PDF.js
Search Phrases: GitHub Enterprise Server CVE-2024-4985 vulnerability; SAML SSO security breach in GitHub; How to secure GitHub Enterprise Server; EPA cyberattack vulnerabilities in water utilities; Steps to mitigate water utility cyber threats; Llama Drama security flaw in llama_cpp_python; High-severity vulnerability in Mozilla PDF.js; Protecting systems from PDF.js exploits; Checkmarx reports on Llama Drama; Latest cybersecurity vulnerabilities December 2023
May 22
The EPA has announced that over 70% of US water utilities inspected are vulnerable to cyberattacks due to outdated security measures like default passwords and single log-ins. What specific vulnerabilities put major water utilities at risk, and how is the EPA planning to address them? A high-severity vulnerability in Mozilla's PDF.js has been uncovered, allowing threat actors to execute arbitrary code and compromise millions of systems globally. What methods can users implement to help protect their systems from these vulnerabilities? And finally, an alarming GitHub Enterprise Server vulnerability now threatens unauthorized administrative access through SAML single sign-on, prompting crucial updates from GitHub to prevent exploitation. How can organizations secure their GitHub Enterprise Server instances against this vulnerability? You're listening to The Daily Decrypt. The Environmental Protection Agency, or EPA, announced that the majority of US water utilities it inspected are vulnerable to cyberattacks due to using default passwords and single log-ins. To get a little more specific, over 70% of water utilities inspected since September of last year failed to comply with the Safe Drinking Water Act, commonly by using single log-ins for multiple employees and not revoking access for former employees. As a cybersecurity professional, it's really hard for me to even imagine using the same login as somebody else. This is such a terrible idea for many reasons, some of which are obvious and some of which might not be. First of all, multiple people know your password. If that password were kept locked down, that wouldn't be a huge issue, but if sharing logins is the practice, it's not being kept locked down.
So what if one of the people who's using that login already has that password memorized and decides to use it on a different site, maybe even with that same email address, and that site gets breached? The email address is probably water-company related, so any attacker who comes across these credentials will instantly have access to the water utility's infrastructure. And say someone does get into the water utility's infrastructure using those credentials: it will be impossible to go back through the logs and see where the error was, because it could be any of many different people. So they're not even able to identify the root cause of the breach. Logging is essential; you want to know exactly who is doing what actions on which computer, and sharing credentials makes that impossible. You can also lock down different permissions for each user account and then monitor activities based on those permissions. If you see an account trying to do something it shouldn't be doing, that's an indicator of compromise. But how do I know what this account that's being shared across multiple people should be doing? Can it be logged in in multiple places at once? Is one of the people using that account in Nigeria? Who knows, right? So this is just terrible. And then the second issue is that former employees' credentials are not being revoked; they're not being closed down. That means that if anybody comes across the username and password of a former employee, they can access the system. That includes the former employee. What if they got fired? What if they have malicious intent against their boss? They can log in after being terminated or leaving the job and mess things up for the company. Now, I understand that these two things take resources to fix. It's going to take a bigger IT team. It's going to take some automation tools. But I cannot stress this enough: a compromise will cost more than the tools used to prevent it. So if you're maintaining one of these infrastructures, please talk to your boss every day. Schedule an email. Talk to your investors, talk to the board, and make sure they understand that if this place gets compromised, it's going to cost them way more than hiring another IT person or buying a tool that can automate this process. And if you're feeling ambitious, one of the other things you can do with former employees' accounts is to create a decoy account, which is essentially a honeypot. Say someone does come upon these credentials and tries to log in: you have already set up alerting, because no one should be logging in with these credentials. And if an attacker is in the environment and finds these credentials, they will see a history of usage, which makes those credentials more enticing; that's something you can't get with a brand-new account turned into a decoy. So it's recommended to repurpose every former employee's account as a decoy and set up an alert. Nobody should be logging in, nobody should be touching these credentials or even attempting to log in with them, and if they are, you've been breached. It's one of the easiest ways to detect a breach.
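To make that decoy idea concrete, here is a minimal sketch in Python of the kind of alerting the host describes: watch an auth log for any attempt to use accounts you have designated as decoys. Everything named here is a placeholder assumption (the decoy usernames, the log path, the webhook URL), and a real deployment would feed your SIEM instead.

```python
import re
import json
import urllib.request

# Hypothetical values: replace with your own former-employee decoy accounts,
# log location, and alerting endpoint (e.g., a SIEM or chat webhook).
DECOY_ACCOUNTS = {"jsmith_old", "contractor_2021"}
AUTH_LOG = "/var/log/auth.log"
ALERT_WEBHOOK = "https://example.internal/alerts"  # placeholder

# Matches failed and successful SSH password attempts in a typical auth.log.
LOGIN_RE = re.compile(
    r"(Failed|Accepted) password for (invalid user )?(?P<user>\S+) from (?P<ip>\S+)"
)

def scan_once():
    """Return an alert record for every line that touches a decoy account."""
    alerts = []
    with open(AUTH_LOG, errors="ignore") as log:
        for line in log:
            match = LOGIN_RE.search(line)
            if match and match.group("user") in DECOY_ACCOUNTS:
                # Any touch on a decoy account is treated as a breach indicator.
                alerts.append({
                    "user": match.group("user"),
                    "ip": match.group("ip"),
                    "line": line.strip(),
                })
    return alerts

def send_alert(alert):
    """Fire-and-forget POST of the alert as JSON to the placeholder webhook."""
    request = urllib.request.Request(
        ALERT_WEBHOOK,
        data=json.dumps(alert).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)

if __name__ == "__main__":
    for alert in scan_once():
        send_alert(alert)
```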
Alright, lecture aside. Let's finish up this news. The EPA has taken more than 100 enforcement actions against community water systems since 2020 and plans to increase future inspections. Criminal enforcement may occur if there's imminent danger, so you can be prosecuted as a criminal for neglecting to secure your network if you work for a water plant or a water agency. Because imminent danger is upon us if we don't secure these networks, right? What are the consequences of a compromise at the source of our water? Well, we don't get water, and what do we need to live? Water. In fact, in recent months Iran, China, and Russia, as well as criminal ransomware gangs, have targeted US and UK water treatment facilities. And they will continue to target these facilities because they are critical infrastructure for the United States. The president needs water, Congress needs water, the police force needs water, the military needs water; everyone needs water. So it's going to be a top target, and we don't have the funding to secure it. According to CISA, 95% of the 150,000 water utilities in the US do not have a cybersecurity professional on staff. That sounds like a staggering amount, but it's pretty expensive to have a cybersecurity professional on staff; we get paid a lot of money. What I'd like to know is whether any of these water treatment facilities are contracting out to cybersecurity professionals. There are companies out there that will provide advice for a fee, so you don't have to have someone on your staff. There are also companies that will monitor your networks for a fee, so you don't have to build out your own security operations center. If you'd like recommendations on either of these services, or to be pointed in the right direction, feel free to shoot us a DM on Instagram or YouTube and we will get back to you. All right. There is a new maximum-severity flaw in GitHub Enterprise Server that could allow attackers to bypass authentication protections. This flaw scores a perfect 10 out of 10 on the CVSS scale, which indicates it's extremely critical. As mentioned, the vulnerability allows unauthorized access by forging a SAML response to provision, or gain access to, a user with admin privileges, but only in instances using SAML single sign-on with optional encrypted assertions. The issue affects all GHES versions prior to 3.13.0. GitHub has released patches in some versions of 3.9, 3.10, 3.11, and 3.12, so if you're using these versions or earlier, please go update. Instances without SAML SSO, or those using SAML SSO without encrypted assertions, are not affected by this flaw; if your setup doesn't involve encrypted assertions, you're in the clear. Encrypted assertions improve security by encrypting messages from the SAML identity provider during authentication; however, this feature led to the discovered vulnerability when not properly updated. So just keep your crap up to date. I know it's tough. And finally, researchers have uncovered a severe security flaw in the llama_cpp_python package, tracked as CVE-2024-34359 with a CVSS score of 9.7. So, pretty dang critical. This vulnerability is named Llama Drama, and it can enable threat actors to execute arbitrary code, potentially compromising data and operations. The vulnerability stems from the misuse of the Jinja2 template engine, leading to server-side template injection. The flaw has been patched in version 0.2.72, and if you're using this package, you should update immediately.
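To illustrate why that Jinja2 misuse is so dangerous, here is a small, self-contained Python sketch of the general server-side template injection pattern (not the library's actual code): a plain Jinja2 Environment lets an untrusted template reach into Python internals, while Jinja2's SandboxedEnvironment, the standard mitigation for this class of bug, refuses the same access.

```python
from jinja2 import Environment
from jinja2.exceptions import SecurityError
from jinja2.sandbox import SandboxedEnvironment

# An untrusted "template" of the kind that could ride along with external data:
# instead of formatting text, it walks Python object internals.
UNTRUSTED_TEMPLATE = "{{ ''.__class__.__mro__[1].__subclasses__() }}"

# Plain Environment: the template happily exposes interpreter internals,
# which is the first step of a classic server-side template injection chain.
leaked = Environment().from_string(UNTRUSTED_TEMPLATE).render()
print("unsandboxed leak, first 80 chars:", leaked[:80])

# SandboxedEnvironment: the same dunder access is rejected outright.
try:
    SandboxedEnvironment().from_string(UNTRUSTED_TEMPLATE).render()
except SecurityError as exc:
    print("sandboxed: blocked ->", exc)
```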
Additionally, Mozilla discovered a high-severity flaw in the PDF.js JavaScript library used by Firefox. This flaw allows arbitrary JavaScript execution when a maliciously crafted PDF document is opened inside Firefox. The issue has been resolved in Firefox 126 and Firefox ESR 115.11, so make sure to update your browser, as well as any related software, to the latest versions as soon as possible. This has been The Daily Decrypt. If you found your key to unlocking the digital domain, show your support with a rating on Spotify or Apple Podcasts. It truly helps us stand at the frontier of cyber news. Don't forget to connect on Instagram or catch our episodes on YouTube. Until next time, keep your data safe and your curiosity alive.
Today, we'll talk about GitHub Enterprise. This is based on a recent adventure in getting it to work for us, and the lessons learned are the essence of this episode. What are the plans? What do you need to configure? What got us confused? Also, Jussi asks Tobi an unexpected question.
(00:00) - Intro and catching up.
(04:06) - Show content starts.
Show links:
- GitHub Enterprise Trial
- Jussi's Whoop 4.0 (Jussi's referral code, if you want 1 month free and Jussi gets 1 month free: link)
- Give us feedback!
Freelancing, 29 February 2024, Jochen: An unusually high proportion of the hosts of this podcast
The Cloud Pod recaps all of the positives and negatives of Amazon re:Invent 2022, the annual conference in Las Vegas bringing together 50,000 cloud computing professionals. This year's keynote speakers included Adam Selipsky, CEO of Amazon Web Services; Swami Sivasubramanian, Vice President of Data and Machine Learning at AWS; and Werner Vogels, Amazon's CTO. Attendees and web viewers were treated to new features and products, such as AWS Lambda SnapStart for Java functions, new QuickSight capabilities, and quality-of-life improvements to hundreds of services. Justin, Jonathan, Ryan, Peter, and special guest Joe Daly from the FinOps Foundation talk about the show and the announcements. Thank you to our sponsor, Foghorn Consulting, which provides top-notch cloud and DevOps engineers to the world's most innovative companies. Initiatives stalled because you're having trouble hiring? Foghorn can be burning down your DevOps and cloud backlogs as soon as next week.
Episode Highlights
⏰ AWS Pricing Calculator now supports modernization cost estimates for Microsoft workloads.
⏰ AWS re:Invent 2022 announcements and keynote updates.
Top Quote
PlanetScale, what is it? This has been something that's been trending lately, and we've been really excited about it. Is it GitHub for databases or stateless storage in the cloud? We think it's definitely the next great leap for databases, and to talk about it, we have invited Lilli Seyther-Besecke and Johannes Nicolai from PlanetScale.
Related content:
- Transform your business with GitHub Enterprise: https://hubs.li/Q01n__fz0
- From Git to zero-day delivery in one go: https://hubs.li/Q01p01Jn0
- Blog: War of the CI servers – GitLab vs. GitHub vs. Jenkins: https://hubs.li/Q01n__XJ0
- Lilli Seyther-Besecke on LinkedIn: https://www.linkedin.com/in/lilli-seyther-besecke/
- Johannes Nicolai on LinkedIn: https://www.linkedin.com/in/johannes-nicolai-b508208/
Marko Klemetti, CTO of Eficode, is joined by Mike McQuaid, Staff Engineer at GitHub and the Homebrew project leader. Marko and Mike discuss how Homebrew came to be, how Homebrew built an approach to contributions from the community, and what companies could learn from these experiences when they decide on their approach to open source contributions.
- How did Homebrew come to be?
- How can you combine employment and open source work?
- How can companies enable open source contributions at work?
- What is the right investment level for open source in a company?
- What is the role of DevOps in open source projects?
Register for The DEVOPS Conference for free - online, March 8th and 9th, 2022: https://hubs.li/Q014j4xT0
Related content:
- Open source projects: https://hubs.li/Q014j4Y30
- DEVOPS 2020 talk: Survival of the most open - Microsoft's open source journey by Sasha Rosenbaum, GitHub: https://hubs.li/Q014j5gK0
- The DEVOPS Conference 2021 talk: The top 5 InnerSource myths, Martin Woodward, GitHub: https://hubs.li/Q014j5Xq0
- On-demand webinar: GitHub Enterprise on Azure as a managed service: https://hubs.li/Q014j6Z50
- From Git to zero-day delivery in one go: https://hubs.li/Q014j79C0
Marko Klemetti on LinkedIn: https://www.linkedin.com/in/mrako/
Mike McQuaid on LinkedIn: https://www.linkedin.com/in/mkmcqd/
About AB
AB Periasamy is the co-founder and CEO of MinIO, an open source provider of high performance, object storage software. In addition to this role, AB is an active investor and advisor to a wide range of technology companies, from H2O.ai and Manetu where he serves on the board to advisor or investor roles with Humio, Isovalent, Starburst, Yugabyte, Tetrate, Postman, Storj, Procurify, and Helpshift. Successful exits include Gitter.im (GitLab), Treasure Data (ARM) and Fastor (SMART). AB co-founded Gluster in 2005 to commoditize scalable storage systems. As CTO, he was the primary architect and strategist for the development of the Gluster file system, a pioneer in software defined storage. After the company was acquired by Red Hat in 2011, AB joined Red Hat's Office of the CTO. Prior to Gluster, AB was CTO of California Digital Corporation, where his work led to scaling of the commodity cluster computing to supercomputing class performance. His work there resulted in the development of Lawrence Livermore Laboratory's "Thunder" code, which, at the time, was the second fastest in the world. AB holds a Computer Science Engineering degree from Annamalai University, Tamil Nadu, India. AB is one of the leading proponents and thinkers on the subject of open source software - articulating the difference between the philosophy and business model. An active contributor to a number of open source projects, he is a board member of India's Free Software Foundation.
Links:
MinIO: https://min.io/
Twitter: https://twitter.com/abperiasamy
MinIO Slack channel: https://minio.slack.com/join/shared_invite/zt-11qsphhj7-HpmNOaIh14LHGrmndrhocA
LinkedIn: https://www.linkedin.com/in/abperiasamy/
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud. Corey: This episode is sponsored in part by our friends at Sysdig. Sysdig is the solution for securing DevOps. They have a blog post that went up recently about how an insecure AWS Lambda function could be used as a pivot point to get access into your environment. They've also gone deep in-depth with a bunch of other approaches to how DevOps and security are inextricably linked. To learn more, visit sysdig.com and tell them I sent you. That's S-Y-S-D-I-G dot com. My thanks to them for their continued support of this ridiculous nonsense. Corey: This episode is sponsored in part by our friends at Rising Cloud, which I hadn't heard of before, but they're doing something vaguely interesting here. They are using AI, which is usually where my eyes glaze over and I lose attention, but they're using it to help developers be more efficient by reducing repetitive tasks. So, the idea being that you can run stateless things without having to worry about scaling, placement, et cetera, and the rest. They claim significant cost savings, and they're able to wind up taking what you're running as it is, in AWS, with no changes, and run it inside of their data centers that span multiple regions.
I'm somewhat skeptical, but their customers seem to really like them, so that's one of those areas where I really have a hard time being too snarky about it because when you solve a customer's problem, and they get out there in public and say, "We're solving a problem," it's very hard to snark about that. Multus Medical, Construx.ai, and Stax have seen significant results by using them, and it's worth exploring. So, if you're looking for a smarter, faster, cheaper alternative to EC2, Lambda, or batch, consider checking them out. Visit risingcloud.com/benefits. That's risingcloud.com/benefits, and be sure to tell them that I sent you because watching people wince when you mention my name is one of the guilty pleasures of listening to this podcast. Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. I'm joined this week by someone who's doing something a bit off the beaten path when we talk about cloud. I've often said that S3 is sort of a modern wonder of the world. It was the first AWS service brought into general availability. Today's promoted guest is the co-founder and CEO of MinIO, Anand Babu Periasamy, or AB as he often goes, depending upon who's talking to him. Thank you so much for taking the time to speak with me today. AB: It's wonderful to be here, Corey. Thank you for having me. Corey: So, I want to start with the obvious thing, where you take a look at what is the cloud and you can talk about AWS's ridiculous high-level managed services, like Amazon Chime. Great, we all see how that plays out. And those are the higher-level offerings, ideally aimed at problems customers have, but then they also have the baseline building blocks services, and it's hard to think of a more baseline building block than an object store. That's something every cloud provider has, regardless of how many scare quotes there are around the word cloud; everyone offers the object store. And your solution is to look at this and say, "Ah, that's a market ripe for disruption. We're going to build through an open-source community software that emulates an object store." I would be sitting here, more or less poking fun at the idea except for the fact that you're a billion-dollar company now. AB: Yeah. Corey: How did you get here? AB: So, when we started, right, we did not actually think about cloud that way, right? "Cloud, it's a hot trend, and let's go disrupt is like that. It will lead to a lot of opportunity." Certainly, it's true, it led to the M&A, right, but that's not how we looked at it, right? It's a bad idea to build startups for M&A. When we looked at the problem, when we got back into this—my previous background, some may not know that it's actually a distributed file system background in the open-source space. Corey: Yeah, you were one of the co-founders of Gluster— AB: Yeah. Corey: —which I have only begrudgingly forgiven you. But please continue. AB: [laugh]. And back then we got the idea right, but the timing was wrong. And I had—while the data was beginning to grow at a crazy rate, end of the day, GlusterFS has to still look like an FS, it has to look like a file system like NetApp or EMC, and it was hugely limiting what we can do with it. The biggest problem for me was legacy systems. I have to build a modern system that is compatible with a legacy architecture, you cannot innovate. And that is where when Amazon introduced S3, back then, like, when S3 came, cloud was not big at all, right?
When I look at it, the most important message of the cloud was Amazon basically threw everything that is legacy. It's not [iSCSI 00:03:21] as a Service; it's not even FTP as a Service, right? They came up with a simple, RESTful API to store your blobs, whether it's JavaScript, Android, iOS, or [AAML 00:03:30] application, or even Snowflake-type application.Corey: Oh, we spent ten years rewriting our apps to speak object store, and then they released EFS, which is NFS in the cloud. It's—AB: Yeah.Corey: —I didn't realize I could have just been stubborn and waited, and the whole problem would solve itself. But here we are. You're quite right.AB: Yeah. And even EFS and EBS are more for legacy stock can come in, buy some time, but that's not how you should stay on AWS, right? When Amazon did that, for me, that was the opportunity. I saw that… while world is going to continue to produce lots and lots of data, if I built a brand around that, I'm not going to go wrong.The problem is data at scale. And what do I do there? The opportunity I saw was, Amazon solved one of the largest problems for a long time. All the legacy systems, legacy protocols, they convinced the industry, throw them away and then start all over from scratch with the new API. While it's not compatible, it's not standard, it is ridiculously simple compared to anything else.No fstabs, no [unintelligible 00:04:27], no [root 00:04:28], nothing, right? From any application anywhere you can access was a big deal. When I saw that, I was like, “Thank you Amazon.” And I also knew Amazon would convince the industry that rewriting their application is going to be better and faster and cheaper than retrofitting legacy applications.Corey: I wonder how much that's retconned because talking to some of the people involved in the early days, they were not at all convinced they [laugh] would be able to convince the industry to do this.AB: Actually, if you talk to the analyst reporters, the IDC's, Gartner's of the world to the enterprise IT, the VMware community, they would say, “Hell no.” But if you talk to the actual application developers, data infrastructure, data architects, the actual consumers of data, for them, it was so obvious. They actually did not know how to write an fstab. The iSCSI and NFS, you can't even access across the internet, and the modern applications, they ran across the globe, in JavaScript, and all kinds of apps on the device. From [Snap 00:05:21] to Snowflake, today is built on object store. It was more natural for the applications team, but not from the infrastructure team. So, who you asked that mattered.But nevertheless, Amazon convinced the rest of the world, and our bet was that if this is going to be the future, then this is also our opportunity. S3 is going to be limited because it only runs inside AWS. Bulk of the world's data is produced everywhere and only a tiny fraction will go to AWS. And where will the rest of the data go? Not SAN, NAS, HDFS, or other blob store, Azure Blob, or GCS; it's not going to be fragmented. And if we built a better object store, lightweight, faster, simpler, but fully compatible with S3 API, we can sweep and consolidate the market. And that's what happened.Corey: And there is a lot of validity to that. We take a look across the industry, when we look at various standards—I mean, one of the big problems with multi-cloud in many respects is the APIs are not quite similar enough. 
And worse, the failure patterns are very different, of I don't just need to know how the load balancer works, I need to know how it breaks so I can detect and plan for that. And then you've got the whole identity problem as well, where you're trying to manage across different frames of reference as you go between providers, and leads to a bit of a mess. What is it that makes MinIO something that has been not just something that has endured since it was created, but clearly been thriving?AB: The real reason, actually is not the multi-cloud compatibility, all that, right? Like, while today, it is a big deal for the users because the deployments have grown into 10-plus petabytes, and now the infrastructure team is taking it over and consolidating across the enterprise, so now they are talking about which key management server for storing the encrypted keys, which key management server should I talk to? Look at AWS, Google, or Azure, everyone has their own proprietary API. Outside they, have [YAML2 00:07:18], HashiCorp Vault, and, like, there is no standard here. It is supposed to be a [KMIP 00:07:23] standard, but in reality, it is not. Even different versions of Vault, there are incompatibilities for us.That is where—like from Key Management Server, Identity Management Server, right, like, everything that you speak around, how do you talk to different ecosystem? That, actually, MinIO provides connectors; having the large ecosystem support and large community, we are able to address all that. Once you bring MinIO into your application stack like you would bring Elasticsearch or MongoDB or anything else as a container, your application stack is just a Kubernetes YAML file, and you roll it out on any cloud, it becomes easier for them, they're able to go to any cloud they want. But the real reason why it succeeded was not that. They actually wrote their applications as containers on Minikube, then they will push it on a CI/CD environment.They never wrote code on EC2 or ECS writing objects on S3, and they don't like the idea of [past 00:08:15], where someone is telling you just—like you saw Google App Engine never took off, right? They liked the idea, here are my building blocks. And then I would stitch them together and build my application. We were part of their application development since early days, and when the application matured, it was hard to remove. It is very much like Microsoft Windows when it grew, even though the desktop was Microsoft Windows Server was NetWare, NetWare lost the game, right?We got the ecosystem, and it was actually developer productivity, convenience, that really helped. The simplicity of MinIO, today, they are arguing that deploying MinIO inside AWS is easier through their YAML and containers than going to AWS Console and figuring out how to do it.Corey: As you take a look at how customers are adopting this, it's clear that there is some shift in this because I could see the story for something like MinIO making an awful lot of sense in a data center environment because otherwise, it's, “Great. I need to make this app work with my SAN as well as an object store.” And that's sort of a non-starter for obvious reasons. But now you're available through cloud marketplaces directly.AB: Yeah.Corey: How are you seeing adoption patterns and interactions from customers changing as the industry continues to evolve?AB: Yeah, actually, that is how my thinking was when I started. If you are inside AWS, I would myself tell them that why don't use AWS S3? 
And it made a lot of sense if it's on a colo or your own infrastructure, then there is an object store. It even made a lot of sense if you are deploying on Google Cloud, Azure, Alibaba Cloud, Oracle Cloud, it made a lot of sense because you wanted an S3 compatible object store. Inside AWS, why would you do it, if there is AWS S3?Nowadays, I hear funny arguments, too. They like, “Oh, I didn't know that I could use S3. Is S3 MinIO compatible?” Because they will be like, “It came along with the GitLab or GitHub Enterprise, a part of the application stack.” They didn't even know that they could actually switch it over.And otherwise, most of the time, they developed it on MinIO, now they are too lazy to switch over. That also happens. But the real reason that why it became serious for me—I ignored that the public cloud commercialization; I encouraged the community adoption. And it grew to more than a million instances, like across the cloud, like small and large, but when they start talking about paying us serious dollars, then I took it seriously. And then when I start asking them, why would you guys do it, then I got to know the real reason why they wanted to do was they want to be detached from the cloud infrastructure provider.They want to look at cloud as CPU network and drive as a service. And running their own enterprise IT was more expensive than adopting public cloud, it was productivity for them, reducing the infrastructure, people cost was a lot. It made economic sense.Corey: Oh, people always cost more the infrastructure itself does.AB: Exactly right. 70, 80%, like, goes into people, right? And enterprise IT is too slow. They cannot innovate fast, and all of those problems. But what I found was for us, while we actually build the community and customers, if you're on AWS, if you're running MinIO on EBS, EBS is three times more expensive than S3.Corey: Or a single copy of it, too, where if you're trying to go multi-AZ and you have the replication traffic, and not to mention you have to over-provision it, which is a bit of a different story as well. So, like, it winds up being something on the order of 30 times more expensive, in many cases, to do it right. So, I'm looking at this going, the economics of running this purely by itself in AWS don't make sense to me—long experience teaches me the next question of, “What am I missing?” Not, “That's ridiculous and you're doing it wrong.” There's clearly something I'm not getting. What am I missing?AB: I was telling them until we made some changes, right—because we saw a couple of things happen. I was initially like, [unintelligible 00:12:00] does not make 30 copies. It makes, like, 1.4x, 1.6x.But still, the underlying block storage is not only three times more expensive than S3, it's also slow. It's a network storage. Trying to put an object store on top of it, another, like, software-defined SAN, like EBS made no sense to me. Smaller deployments, it's okay, but you should never scale that on EBS. So, it did not make economic sense. I would never take it seriously because it would never help them grow to scale.But what changed in recent times? Amazon saw that this was not only a problem for MinIO-type players. Every database out there today, every modern database, even the message queues like Kafka, they all have gone scale-out. And they all depend on local block store and putting a scale-out distributed database, data processing engines on top of EBS would not scale. And Amazon introduced storage optimized instances. 
Essentially, that reduced to bet—the data infrastructure guy, data engineer, or application developer asking IT, “I want a SuperMicro, or Dell server, or even virtual machines.” That's too slow, too inefficient.They can provision these storage machines on demand, and then I can do it through Kubernetes. These two changes, all the public cloud players now adopted Kubernetes as the standard, and they have to stick to the Kubernetes API standard. If they are incompatible, they won't get adopted. And storage optimized that is local drives, these are machines, like, [I3 EN 00:13:23], like, 24 drives, they have SSDs, and fast network—like, 25-gigabit 200-gigabit type network—availability of these machines, like, what typically would run any database, HDFS cluster, MinIO, all of them, those machines are now available just like any other EC2 instance.They are efficient. You can actually put MinIO side by side to S3 and still be price competitive. And Amazon wants to—like, just like their retail marketplace, they want to compete and be open. They have enabled it. In that sense, Amazon is actually helping us. And it turned out that now I can help customers build multiple petabyte infrastructure on Amazon and still stay efficient, still stay price competitive.Corey: I would have said for a long time that if you were to ask me to build out the lingua franca of all the different cloud providers into a common API, the S3 API would be one of them. Now, you are building this out, multi-cloud, you're in all three of the major cloud marketplaces, and the way that you do that and do those deployments seems like it is the modern multi-cloud API of Kubernetes. When you first started building this, Kubernetes was very early on. What was the evolution of getting there? Or were you one of the first early-adoption customers in a Kubernetes space?AB: So, when we started, there was no Kubernetes. But we saw the problem was very clear. And there was containers, and then came Docker Compose and Swarm. Then there was Mesos, Cloud Foundry, you name it, right? Like, there was many solutions all the way up to even VMware trying to get into that space.And what did we do? Early on, I couldn't choose. I couldn't—it's not in our hands, right, who is going to be the winner, so we just simply embrace everybody. It was also tiring that to allow implement native connectors to all of them different orchestration, like Pivotal Cloud Foundry alone, they have their own standard open service broker that's only popular inside their system. Go outside elsewhere, everybody was incompatible.And outside that, even, Chef Ansible Puppet scripts, too. We just simply embraced everybody until the dust settle down. When it settled down, clearly a declarative model of Kubernetes became easier. Also Kubernetes developers understood the community well. And coming from Borg, I think they understood the right architecture. And also written in Go, unlike Java, right?It actually matters, these minute new details resonating with the infrastructure community. It took off, and then that helped us immensely. Now, it's not only Kubernetes is popular, it has become the standard, from VMware to OpenShift to all the public cloud providers, GKS, AKS, EKS, whatever, right—GKE. All of them now are basically Kubernetes standard. It made not only our life easier, it made every other [ISV 00:16:11], other open-source project, everybody now can finally write one code that can be operated portably.It is a big shift. 
It is not because we chose; we just watched all this, we were riding along the way. And then because we resonated with the infrastructure community, modern infrastructure is dominated by open-source. We were also the leading open-source object store, and as Kubernetes community adopted us, we were naturally embraced by the community.Corey: Back when AWS first launched with S3 as its first offering, there were a bunch of folks who were super excited, but object stores didn't make a lot of sense to them intrinsically, so they looked into this and, “Ah, I can build a file system and users base on top of S3.” And the reaction was, “Holy God don't do that.” And the way that AWS decided to discourage that behavior is a per request charge, which for most workloads is fine, whatever, but there are some that causes a significant burden. With running something like MinIO in a self-hosted way, suddenly that costing doesn't exist in the same way. Does that open the door again to so now I can use it as a file system again, in which case that just seems like using the local file system, only with extra steps?AB: Yeah.Corey: Do you see patterns that are emerging with customers' use of MinIO that you would not see with the quote-unquote, “Provider's” quote-unquote, “Native” object storage option, or do the patterns mostly look the same?AB: Yeah, if you took an application that ran on file and block and brought it over to object storage, that makes sense. But something that is competing with object store or a layer below object store, that is—end of the day that drives our block devices, you have a block interface, right—trying to bring SAN or NAS on top of object store is actually a step backwards. They completely missed the message that Amazon told that if you brought a file system interface on top of object store, you missed the point, that you are now bringing the legacy things that Amazon intentionally removed from the infrastructure. Trying to bring them on top doesn't make it any better. If you are arguing from a compatibility some legacy applications, sure, but writing a file system on top of object store will never be better than NetApp, EMC, like EMC Isilon, or anything else. Or even GlusterFS, right?But if you want a file system, I always tell the community, they ask us, “Why don't you add an FS option and do a multi-protocol system?” I tell them that the whole point of S3 is to remove all those legacy APIs. If I added POSIX, then I'll be a mediocre object storage and a terrible file system. I would never do that. But why not write a FUSE file system, right? Like, S3Fs is there.In fact, initially, for legacy compatibility, we wrote MinFS and I had to hide it. We actually archived the repository because immediately people started using it. Even simple things like end of the day, can I use Unix [Coreutils 00:19:03] like [cp, ls 00:19:04], like, all these tools I'm familiar with? If it's not file system object storage that S3 [CMD 00:19:08] or AWS CLI is, like, to bloatware. And it's not really Unix-like feeling.Then what I told them, “I'll give you a BusyBox like a single static binary, and it will give you all the Unix tools that works for local filesystem as well as object store.” That's where the [MC tool 00:19:23] came; it gives you all the Unix-like programmability, all the core tool that's object storage compatible, speaks native object store. 
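A quick aside on the S3 compatibility AB keeps coming back to: from the SDK side, it just means pointing a standard S3 client at a different endpoint. A minimal Python sketch with boto3, using a hypothetical local MinIO endpoint and placeholder credentials:

```python
import boto3

# The same SDK calls work against AWS S3 or a MinIO deployment; only the
# endpoint and credentials change. These values are placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",   # a MinIO server, not AWS
    aws_access_key_id="minioadmin",          # placeholder credentials
    aws_secret_access_key="minioadmin",
)

s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello object store")

# The familiar listing call behaves the same way it would against AWS S3.
for obj in s3.list_objects_v2(Bucket="demo-bucket").get("Contents", []):
    print(obj["Key"], obj["Size"])
```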
But if I have to make object store look like a file system so UNIX tools would run, it would not only be inefficient, Unix tools never scaled for this kind of capacity.So, it would be a bad idea to take step backwards and bring legacy stuff back inside. For some very small case, if there are simple POSIX calls using [ObjectiveFs 00:19:49], S3Fs, and few, for legacy compatibility reasons makes sense, but in general, I would tell the community don't bring file and block. If you want file and block, leave those on virtual machines and leave that infrastructure in a silo and gradually phase them out.Corey: This episode is sponsored in part by our friends at Vultr. Spelled V-U-L-T-R because they're all about helping save money, including on things like, you know, vowels. So, what they do is they are a cloud provider that provides surprisingly high performance cloud compute at a price that—while sure they claim its better than AWS pricing—and when they say that they mean it is less money. Sure, I don't dispute that but what I find interesting is that it's predictable. They tell you in advance on a monthly basis what it's going to going to cost. They have a bunch of advanced networking features. They have nineteen global locations and scale things elastically. Not to be confused with openly, because apparently elastic and open can mean the same thing sometimes. They have had over a million users. Deployments take less that sixty seconds across twelve pre-selected operating systems. Or, if you're one of those nutters like me, you can bring your own ISO and install basically any operating system you want. Starting with pricing as low as $2.50 a month for Vultr cloud compute they have plans for developers and businesses of all sizes, except maybe Amazon, who stubbornly insists on having something to scale all on their own. Try Vultr today for free by visiting: vultr.com/screaming, and you'll receive a $100 in credit. Thats v-u-l-t-r.com slash screaming.Corey: So, my big problem, when I look at what S3 has done is in it's name because of course, naming is hard. It's, “Simple Storage Service.” The problem I have is with the word simple because over time, S3 has gotten more and more complex under the hood. It automatically tiers data the way that customers want. And integrated with things like Athena, you can now query it directly, whenever of an object appears, you can wind up automatically firing off Lambda functions and the rest.And this is increasingly looking a lot less like a place to just dump my unstructured data, and increasingly, a lot like this is sort of a database, in some respects. Now, understand my favorite database is Route 53; I have a long and storied history of misusing services as databases. Is this one of those scenarios, or is there some legitimacy to the idea of turning this into a database?AB: Actually, there is now S3 Select API that if you're storing unstructured data like CSV, JSON, Parquet, without downloading even a compressed CSV, you can actually send a SQL query into the system. IN MinIO particularly the S3 Select is [CMD 00:21:16] optimized. We can load, like, every 64k worth of CSV lines into registers and do CMD operations. It's the fastest SQL filter out there. Now, bringing these kinds of capabilities, we are just a little bit away from a database; should we do database? I would tell definitely no.The very strength of S3 API is to actually limit all the mutations, right? 
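The S3 Select API AB describes looks roughly like this from the client's side: the SQL filter runs next to the data and only matching rows stream back. A sketch using boto3's select_object_content, with a hypothetical bucket and gzip-compressed CSV object; the same call can target a MinIO deployment by adding endpoint_url as in the earlier snippet.

```python
import boto3

s3 = boto3.client("s3")  # add endpoint_url=... to aim this at a MinIO server

# Hypothetical object: a gzip-compressed CSV of events with a 'status' column.
response = s3.select_object_content(
    Bucket="demo-bucket",
    Key="events.csv.gz",
    ExpressionType="SQL",
    # The filter is evaluated server-side; only matching rows are returned.
    Expression="SELECT s.* FROM s3object s WHERE s.status = 'error'",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}, "CompressionType": "GZIP"},
    OutputSerialization={"CSV": {}},
)

# The result arrives as an event stream; print the record payloads as they come.
for event in response["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode(), end="")
```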
Particularly if you look at database, they're dealing with metadata, and querying; the biggest value they bring is indexing the metadata. But if I'm dealing with that, then I'm dealing with really small block lots of mutations, the separation of objects storage should be dealing with persistence and not mutations. Mutations are [AWS 00:21:57] problem. Separation of database work function and persistence function is where object storage got the storage right.Otherwise, it will, they will make the mistake of doing POSIX-like behavior, and then not only bringing back all those capabilities, doing IOPS intensive workloads across the HTTP, it wouldn't make sense, right? So, object storage got the API right. But now should it be a database? So, it definitely should not be a database. In fact, I actually hate the idea of Amazon yielding to the file system developers and giving a [file three 00:22:29] hierarchical namespace so they can write nice file managers.That was a terrible idea. Writing a hierarchical namespace that's also sorted, now puts tax on how the metadata is indexed and organized. The Amazon should have left the core API very simple and told them to solve these problems outside the object store. Many application developers don't need. Amazon was trying to satisfy everybody's need. Saying no to some of these file system-type, file manager-type users, what should have been the right way.But nevertheless, adding those capabilities, eventually, now you can see, S3 is no longer simple. And we had to keep that compatibility, and I hate that part. I actually don't mind compatibility, but then doing all the wrong things that Amazon is adding, now I have to add because it's compatible. I kind of hate that, right?But now going to a database would be pushing it to the whole new level. Here is the simple reason why that's a bad idea. The right way to do database—in fact, the database industry is already going in the right direction. Unstructured data, the key-value or graph, different types of data, you cannot possibly solve all that even in a single database. They are trying to be multimodal database; even they are struggling with it.You can never be a Redis, Cassandra, like, a SQL all-in-one. They tried to say that but in reality, that you will never be better than any one of those focused database solutions out there. Trying to bring that into object store will be a mistake. Instead, let the databases focus on query language implementation and query computation, and leave the persistence to object store. So, object store can still focus on storing your database segments, the table segments, but the index is still in the memory of the database.Even the index can be snapshotted once in a while to object store, but use objects store for persistence and database for query is the right architecture. And almost all the modern databases now, from Elasticsearch to [unintelligible 00:24:21] to even Kafka, like, message queue. They all have gone that route. Even Microsoft SQL Server, Teradata, Vertica, name it, Splunk, they all have gone object storage route, too. Snowflake itself is a prime example, BigQuery and all of them.That's the right way. Databases can never be consolidated. There will be many different kinds of databases. Let them specialize on GraphQL or Graph API, or key-value, or SQL. Let them handle the indexing and persistence, they cannot handle petabytes of data. 
That [unintelligible 00:24:51] to object store is how the industry is shaping up, and it is going in the right direction.Corey: One of the ways I learned the most about various services is by talking to customers. Every time I think I've seen something, this is amazing. This service is something I completely understand. All I have to do is talk to one more customer. And when I was doing a bill analysis project a couple of years ago, I looked into a customer's account and saw a bucket with okay, that has 280 billion objects in it—and wait was that billion with a B?And I asked them, “So, what's going on over there?” And there's, “Well, we built our own columnar database on top of S3. This may not have been the best approach.” It's, “I'm going to stop you there. With no further context, it was not, but please continue.”It's the sort of thing that would never have occurred to me to even try, do you tend to see similar—I would say they're anti-patterns, except somehow they're made to work—in some of your customer environments, as they are using the service in ways that are very different than ways encouraged or even allowed by the native object store options?AB: Yeah, when I first started seeing the database-type workloads coming on to MinIO, I was surprised, too. That was exactly my reaction. In fact, they were storing these 256k, sometimes 64k table segments because they need to index it, right, and the table segments were anywhere between 64k to 2MB. And when they started writing table segments, it was more often [IOPS-type 00:26:22] I/O pattern, then a throughput-type pattern. Throughput is an easier problem to solve, and MinIO always saturated these 100-gigabyte NVMe-type drives, they were I/O intensive, throughput optimized.When I started seeing the database workloads, I had to optimize for small-object workloads, too. We actually did all that because eventually I got convinced the right way to build a database was to actually leave the persistence out of database; they made actually a compelling argument. If historically, I thought metadata and data, data to be very big and coming to object store make sense. Metadata should be stored in a database, and that's only index page. Take any book, the index pages are only few, database can continue to run adjacent to object store, it's a clean architecture.But why would you put database itself on object store? When I saw a transactional database like MySQL, changing the [InnoDB 00:27:14] to [RocksDB 00:27:15], and making changes at that layer to write the SS tables [unintelligible 00:27:19] to MinIO, and then I was like, where do you store the memory, the journal? They said, “That will go to Kafka.” And I was like—I thought that was insane when it started. But it continued to grow and grow.Nowadays, I see most of the databases have gone to object store, but their argument is, the databases also saw explosive growth in data. And they couldn't scale the persistence part. That is where they realized that they still got very good at the indexing part that object storage would never give. There is no API to do sophisticated query of the data. You cannot peek inside the data, you can just do streaming read and write.And that is where the databases were still necessary. But databases were also growing in data. One thing that triggered this was the use case moved from data that was generated by people to now data generated by machines. Machines means applications, all kinds of devices. 
Now, it's like between seven billion people to a trillion devices is how the industry is changing. And this led to lots of machine-generated, semi-structured, structured data at giant scale, coming into database. The databases need to handle scale. There was no other way to solve this problem other than leaving the—[unintelligible 00:28:31] if you looking at columnar data, most of them are machine-generated data, where else would you store? If they tried to build their own object storage embedded into the database, it would make database mentally complicated. Let them focus on what they are good at: Indexing and mutations. Pull the data table segments which are immutable, mutate in memory, and then commit them back give the right mix. What you saw what's the fastest step that happened, we saw that consistently across. Now, it is actually the standard.Corey: So, you started working on this in 2014, and here we are—what is it—eight years later now, and you've just announced a Series B of $100 million dollars on a billion-dollar valuation. So, it turns out this is not just one of those things people are using for test labs; there is significant momentum behind using this. How did you get there from—because everything you're saying makes an awful lot of sense, but it feels, at least from where I sit, to be a little bit of a niche. It's a bit of an edge case that is not the common case. Obviously, I missing something because your investors are not the types of sophisticated investors who see something ridiculous and, “Yep. That's the thing we're going to go for.” There right more than they're not.AB: Yeah. The reason for that was the saw what we were set to do. In fact, these are—if you see the lead investor, Intel, they watched us grow. They came into Series A and they saw, everyday, how we operated and grew. They believed in our message.And it was actually not about object store, right? Object storage was a means for us to get into the market. When we started, our idea was, ten years from now, what will be a big problem? A lot of times, it's hard to see the future, but if you zoom out, it's hidden in plain sight.These are simple trends. Every major trend pointed to world producing more data. No one would argue with that. If I solved one important problem that everybody is suffering, I won't go wrong. And when you solve the problem, it's about building a product with fine craftsmanship, attention to details, connecting with the user, all of that standard stuff.But I picked object storage as the problem because the industry was fragmented across many different data stores, and I knew that won't be the case ten years from now. Applications are not going to adopt different APIs across different clouds, S3 to GCS to Azure Blob to HDFS to everything is incompatible. I saw that if I built a data store for persistence, industry will consolidate around S3 API. Amazon S3, when we started, it looked like they were the giant, there was only one cloud industry, it believed mono-cloud. Almost everyone was talking to me like AWS will be the world's data center.I certainly see that possibility, Amazon is capable of doing it, but my bet was the other way, that AWS S3 will be one of many solutions, but not—if it's all incompatible, it's not going to work, industry will consolidate. Our bet was, if world is producing so much data, if you build an object store that is S3 compatible, but ended up as the leading data store of the world and owned the application ecosystem, you cannot go wrong. 
We kept our heads low and focused on the first six years on massive adoption, build the ecosystem to a scale where we can say now our ecosystem is equal or larger than Amazon, then we are in business. We didn't focus on commercialization; we focused on convincing the industry that this is the right technology for them to use. Once they are convinced, once you solve business problems, making money is not hard because they are already sold, they are in love with the product, then convincing them to pay is not a big deal because data is so critical, central part of their business.We didn't worry about commercialization, we worried about adoption. And once we got the adoption, now customers are coming to us and they're like, “I don't want open-source license violation. I don't want data breach or data loss.” They are trying to sell to me, and it's an easy relationship game. And it's about long-term partnership with customers.And so the business started growing, accelerating. That was the reason that now is the time to fill up the gas tank and investors were quite excited about the commercial traction as well. And all the intangible, right, how big we grew in the last few years.Corey: It really is an interesting segment, that has always been something that I've mostly ignored, like, “Oh, you want to run your own? Okay, great.” I get it; some people want to cosplay as cloud providers themselves. Awesome. There's clearly a lot more to it than that, and I'm really interested to see what the future holds for you folks.AB: Yeah, I'm excited. I think end of the day, if I solve real problems, every organization is moving from compute technology-centric to data-centric, and they're all looking at data warehouse, data lake, and whatever name they give data infrastructure. Data is now the centerpiece. Software is a commodity. That's how they are looking at it. And it is translating to each of these large organizations—actually, even the mid, even startups nowadays have petabytes of data—and I see a huge potential here. The timing is perfect for us.Corey: I'm really excited to see this continue to grow. And I want to thank you for taking so much time to speak with me today. If people want to learn more, where can they find you?AB: I'm always on the community, right. Twitter and, like, I think the Slack channel, it's quite easy to reach out to me. LinkedIn. I'm always excited to talk to our users or community.Corey: And we will of course put links to this in the [show notes 00:33:58]. Thank you so much for your time. I really appreciate it.AB: Again, wonderful to be here, Corey.Corey: Anand Babu Periasamy, CEO and co-founder of MinIO. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with what starts out as an angry comment but eventually turns into you, in your position on the S3 product team, writing a thank you note to MinIO for helping validate your market.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.
About Micheal BenedictMicheal Benedict leads Engineering Productivity at Pinterest. He and his team focus on developer experience, building tools and platforms for over a thousand engineers to effectively code, build, deploy and operate workloads on the cloud. Mr. Benedict has also built Infrastructure and Cloud Governance programs at Pinterest and previously, at Twitter -- focused on managing cloud vendor relationships, infrastructure budget management, cloud migration, capacity forecasting and planning and cloud cost attribution (chargeback). Links: Pinterest: https://www.pinterest.com Twitter: https://twitter.com/micheal LinkedIn: https://www.linkedin.com/in/michealb/ TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: You know how Git works, right?Announcer: Sorta, kinda, not really. Please ask someone else!Corey: That's all of us. Git is how we build things, and Netlify is one of the best ways I've found to build those things quickly for the web. Netlify's Git-based workflows mean you don't have to play slap and tickle with integrating arcane nonsense and webhooks, which are themselves about as well understood as Git. Give them a try and see what folks ranging from my fake Twitter for pets startup, to global Fortune 2000 companies are raving about. If you end up talking to them, because you don't have to, they get why self service is important—but if you do, be sure to tell them that I sent you and watch all of the blood drain from their faces instantly. You can find them in the AWS marketplace or at www.netlify.com. N-E-T-L-I-F-Y.comCorey: This episode is sponsored in part by our friends at Vultr. Spelled V-U-L-T-R because they're all about helping save money, including on things like, you know, vowels. So, what they do is they are a cloud provider that provides surprisingly high performance cloud compute at a price that—while sure they claim it's better than AWS pricing—and when they say that they mean it is less money. Sure, I don't dispute that, but what I find interesting is that it's predictable. They tell you in advance on a monthly basis what it's going to cost. They have a bunch of advanced networking features. They have nineteen global locations and scale things elastically. Not to be confused with openly, because apparently elastic and open can mean the same thing sometimes. They have had over a million users. Deployments take less than sixty seconds across twelve pre-selected operating systems. Or, if you're one of those nutters like me, you can bring your own ISO and install basically any operating system you want. Starting with pricing as low as $2.50 a month for Vultr cloud compute, they have plans for developers and businesses of all sizes, except maybe Amazon, who stubbornly insists on having something to scale all on their own. Try Vultr today for free by visiting: vultr.com/screaming, and you'll receive $100 in credit. That's v-u-l-t-r.com slash screaming.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Sometimes when I have conversations with guests here, we run long. Really long. 
And then we wind up deciding it was such a good conversation, and there's still so much more to say that we schedule a follow-up, and that's what happened today. Please welcome back Micheal Benedict, who is, as of the last time we spoke and presumably still now, the head of engineering productivity at Pinterest. Micheal, how are you?Micheal: I'm doing great, and thanks for that introduction, Corey. Thankfully, yes, I am still the head of engineering productivity; I'm really glad to speak more about it today.Corey: The last time that we spoke, we went up one side and down the other of large-scale environments running on AWS and billing aspects thereof, et cetera, et cetera. I want to stay away from that this time and instead focus on the rest of engineering productivity, which is always an interesting and possibly loaded term. So, what is productivity engineering? It sounds almost like it's an internal dev tools team, or is it something more?Micheal: Well, thanks for asking because I get this question asked a lot of times. So, for one, our primary job is to enable every developer, at least at our company, to do their best work. And we want to do this by providing them a fast, safe, and reliable path to take any idea into production without ever worrying about the infrastructure. As you clearly know, learning anything about how AWS works—or any public cloud provider works—is a ton of investment, and we do want our product engineers, our mobile engineers, and all the other folks to be focused on delivering amazing experiences to our Pinners. So, we could be doing some of the hard work in providing those abstractions for them in such a way, and taking away the pain of managing infrastructure.Corey: The challenge, of course, that I've seen is that a lot of companies take the approach of, "Ah. We're going to make AWS available to all of our engineers in its raw, unfiltered form." And that lasts until the first bill shows up. And then it's, "Okay. We're going to start building some guardrails around that." Which makes a lot of sense. There then tends to be a move towards internal platforms that effectively wrap cloud services.And for a while now, I've been generally down on the concept and publicly so in the general sense. That said, what I say that applies as a best practice or something that most people should consider does tend to fall apart when we talk about specific use cases. You folks are an extremely large environment; how do you view it? First off, do you do internal platforms like that? And secondly, would you recommend that other companies do the same thing?Micheal: I think that's such a great question because every company evolves with its own pace of development. And I wouldn't say Pinterest by itself had a developer productivity or an engineering productivity organization from the get-go. I think this happens when you start realizing that your core engineers who are working on product are now spending a certain fraction of time—which starts ballooning pretty fast—in managing the underlying systems and the infrastructure. And at that point in time, it's probably a good question to ask, how can I reduce the friction in those people's lives such that they could be focused more on the product. 
And, kind of, centralize or provide some sort of common abstractions through a central team which can take away all that pain.So, that is generally a good guiding principle to think about when your engineers are spending at least 30% of their time on operating the systems rather than building capabilities, that's probably a good time to revisit and see whether a central team would make sense to take away some of that. And just simple examples, right? This includes upgrading OS on your EC2 machines, or just trying to make sure you're patching all the right versions on your next big Kubernetes cluster you're running for serving x number of users. The moment you start seeing that, you want to start thinking about, if there is a central team who could take away that pain, what are the things they could be investing on to help up-level every other engineer within your organization. And I think that's one of the best ways to be thinking about it.And it was also a guiding principle for us within Pinterest to view what investments we could make in these central teams which can up-level each and every different type of engineer in the company as well. And just an example on that could be your mobile engineer would have very different expectations from your backend engineer who was working on certain aspects of code in your product. And it is truly important to understand where you want to centralize capabilities, which both these types of engineers could use, or you want to divest and have unique capabilities where it's going to make them productive. There's no one-size-fits-all solution for this, but I'm happy to talk about what we have at Pinterest, which has been reasonably working well. But I do think there's a lot more improvements we could be doing.Corey: Yeah, but let's also be clear that, as you've mentioned, you are heavily biased towards EC2 instances for a lot of what you do. If we look at the AWS console and we see hundreds of different services now, and it's easy to sit here and say, “Oh, internal platforms are terrible because all of those services are going to be enhanced in various ways and you're never going to be able to keep up with feature parity.” Yeah, but if you can wrap something like EC2 in an internal platform wrapper, that begins to be a different story because sure, someone's going to go and try something new with a different AWS service, they're going to need direct access. But the EC2 product across the board generally does not evolve in leaps and bounds with transformative changes overnight. Let's also not forget that at a company with the scale that Pinterest operates at, “Hey, AWS just dusted off a new feature and docs are still rolling out, and it's not in CloudFormation yet, but we're going to roll it out to production,” probably seems like the wrong direction to go in, I would assume.Micheal: And yes, I think that brings one of the key guardrails, I think, which these groups provide. So, when we start thinking about what teams, centralized teams like engineering productivity, developer tools, developer platforms actually do is they help with a couple of things. The top three are: they can help pave a path for the most common use cases. Like to your point, provisioning EC2 does take a set of steps, all the time. 
If you're going to have a thousand people doing that every time they're building a new service or trying to expand capacity playing with their launch templates, those are things you can start streamlining and making it simple by some wrapper because you want to address those 80% use cases which are usually common, and you can have a wrapper or could just automate that. And that's one of the key things: can you provide a paved path for those use cases?The second thing is, can you do that by having the right guardrails in place? How often have you heard the story that, “I just clicked a button and that now spun up, like, a thousand-plus instances.” And now you have to juggle between trying to stop them or do something about it.Corey: Back in 2013, you folks were still focusing on this fair bit. I remember because Jeremy Carroll, who I believe was your first SRE there once upon a time, wound up doing a whole series of talks around how Pinterest approached doing an AMI Factory. And back in those days, the challenges were, “Okay. We have the baseline AMI, and that's great, but we also want to do deployments of things and we don't really want to do a new deploy of an entire fleet of EC2 instances for a single line of config change, so how do we wind up weighing off of when you bake a new AMI versus when you just change something that has—in what is deployed to them?” And it was really a complicated problem back then.I'm not convinced it's not still a complicated problem, but the answers are a lot more cohesive. And making sure that every team—when you're talking about a company as large as Pinterest with that many teams—is doing things in the same way, seems like it's critically important otherwise you wind up with a whole bunch of unique-looking instances that each have to be managed by hand as opposed to something that can be reasoned around collectively.Micheal: Yep. And that last part you mentioned is extremely crucial as well because like I said, our audience or our customers are just not the engineers; we do work with our product managers and business partners as well because at times, we have to tie or change our architecture based on certain cost optimizations which would make sense, like you just articulated. We don't want to have all the instance types. It does not add much value to a developer unless they're explicitly seeking a high-memory instance or a [GP-based instance in a 00:10:25] certain way. So, we can then work with our business partners to make sure that we're committing to only a certain type of instances, and how we can abstract our tools to only give you that. For example, our deployment system, Teletraan which is an open-source system, actually condenses down all these instance types to a couple of categories like high-compute, high-memory—and you've probably seen that in many of the new cloud providers as well—so people don't have to learn or know the underlying instance type.When we moved from c3 to c5, it was just called as a high-compute system, so the next time someone provisioned a new service or deployed it using our system, they would just select high-compute as the de facto instance type and we would just automatically provision a C5 for them. So, that just reduces the extra complexity or the cognitive overhead individuals would have to go through in learning each instance type, what is the base AMI that comes on it, what are the different configurations that need to go in terms of setting up your AZ-scaling properties. 
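As an illustration of the abstraction being described here, the deployment wrapper only has to expose coarse workload categories and resolve them to whatever the currently blessed instance type is; a c3-to-c5 style migration then becomes a one-line change behind the scenes. This is a hypothetical sketch, not Teletraan's actual API:

```python
# Hypothetical sketch of a category-to-instance-type mapping; not Teletraan's real API.
BLESSED_INSTANCE_TYPES = {
    "high-compute": "c5.4xlarge",   # was a c3.* type before the fleet-wide migration
    "high-memory": "r5.4xlarge",
    "general": "m5.2xlarge",
}

def resolve_instance_type(category: str) -> str:
    """Translate a coarse workload category into the currently blessed EC2 type."""
    try:
        return BLESSED_INSTANCE_TYPES[category]
    except KeyError:
        valid = ", ".join(sorted(BLESSED_INSTANCE_TYPES))
        raise ValueError(f"Unknown category {category!r}; expected one of: {valid}")

# A deploy tool would then call something like:
#   launch(instance_type=resolve_instance_type("high-compute"), ami=blessed_ami())
```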
We give them a good reasonable set of defaults to get started with, and then they can then work on optimizing or making changes to it.Corey: Ignoring entirely your mispronunciation of AMI, which is, of course, three syllables—and that is a petty hill upon which I will die—it occurs to me the more I work with AWS in various ways, the easier it gets. And I used to think in some respects, it was because the platform was so—it was improving so dramatically around me. But no, in many cases, it's because the first time you write some CloudFormation by hand, it's a nightmare and you keep smacking into weird issues. But the second or third time, it's super easy because you just copy the thing you've already built and change the relevant bits around. And that was the learning curve that I went through playing around with a lot of these things.When you start looking at this from a large-scale environment where it's not just about upskilling the people that you have to understand how these things integrate in AWS land, but also the consistent onboarding of engineers at a fairly progressive clip is, great, you effectively have to start doing trainings on all these things, and there's a lot of knobs and dials that can blow up and hurt people. At some point, building the guardrails or building the environment in which you are getting all the stuff abstracted away from where the application engineers have to think about this at all, it eventually reaches a tipping point where it starts to feel like it's no longer optional if you want to continue growing as a company because you don't have the luxury of spending six months of onboarding before you let someone touch the thing they were hired to build.Micheal: And you will see that many companies very often have very similar programming practices like you just described. Even I learned that the same way: you have a base template, you just copy-paste it and start from there on. And no one goes through the bootstrapping process manually anymore; you want to—I think we call it cargo-culting, but in general, just get something to bootstrap and start from there. But one of the things we learned in sort of the hard way is that can also lead to, kind of, you pushing, you know, not great practices because people don't know what is a blessed version of a good template or what actually would make sense. So, some of those things, we have been working on.And this is where centralized teams like engineering productivity are really helpful is we provide you with the blessed or the canonical way to do certain things. Case in point example is a CI/CD pipeline or delivery of software services. We have invested enough in experimenting on what works with some of the more nuanced use cases at Pinterest, in helping generate, sort of, a canonical version which would cover 80% of the use cases. Someone could just go and try to build a service and they could just use the same canonical pipeline without learning much or making changes to it. This also reduces that cargo-culting nature which I called, rather than copying it from unknown sources and trying to like—again, it may cause havoc to our systems, so we can avoid a lot of that because of these practices.Corey: So, let's step a little bit beyond AWS—I know I hate doing it, too—but I'm going to assume that your remit is broader than, oh, AWS whisperer-slash-Wrangler. 
So, tell me a little bit more about what it is that your day-to-day looks like if there is anything that could be said not to focus purely around AWS whispering.Micheal: So, one of the challenges—and I want to talk about this a bit more—is our environments have become extremely complex over time. And it's the nature of, like, rising entropy. Like, we've just noticed that there's two things: we have a diverse set of customer base, and these include everyone trying to do different workloads or work service types. What that essentially translates into is that we realized that our solution may not fit all of them. For example, what works for a machine-learning engineer in terms of iterating on building a model and delivering a model is not the same as someone working on a long-running service and trying to deploy that. The same would apply for someone trying to operate a Kafka system.And that has made, I think, definitely our job a bit challenging in trying to assess where do you actually draw the line on the abstraction? What is the right layer of abstraction across your local development experience, across when you move over to staging your code in a PR model and getting feedback and subsequently actually releasing it to production? Because this changes dramatically based on what is the workload type you're working on. And we feel like that has been one of the biggest challenges where I know I spent my day-to-day and my team does too, in trying to help provide some of the right solutions for these individuals. There's—very often we'll also get asked from individuals trying to do a very nuanced thing.Of late, we have been talking about thinking about how you operate functions, like provide Functions as a Service within the company? It just put us in a difficult spot at times because we have to ask the hard question, “Is this required?” I know the industry is doing it; it's definitely there. I personally believe, yes, it could be a future, but is that absolutely important? Is that going to benefit Pinterest in any formal way if we invest on some core abstractions?And those are difficult conversations to have because we have exciting engineers coming in trying to do amazing things; it puts us in a hard spot, as well, as to sometimes saying graciously, no. I know many companies deal with it when they have these centralized teams, but I think it's part of that job. Like when you say it's day-to-day, I would say I'm probably saying no a couple of times in that day.Corey: Let's pretend for the sake of argument that I am, tomorrow morning, starting another company—Twitter for Pets—and over the next ten years, it grows to be larger than Pinterest in terms of infrastructure, probably not revenue because it turns out pets are not the lucrative source of ad revenue that I was hoping it would be but, you know, directionally the same thing. It seems to me that building out this sort of function with this sort of approach to things is dramatically early as far as optimizations go when it's just me puttering around on something. I'm always cognizant of the wrong people taking the wrong message when we're talking about things that happen like this at scale. When does having an engineering productivity group begin to make sense?Micheal: I mentioned this earlier; like, yeah, there is definitely not a right answer, but we can start small. For example, this group actually started more as a delivery team. 
You know, when we started, we realized that we had different ways of deploying services or software at Pinterest, so we first gathered together to figure out, okay, what are the different ways and can we start simplifying that part? And that's where it started expanding. Okay, we are doing button-based deployments; right now we have a thousand-plus microservices, and we are seeing more incidents than we wanted to because anything where there's a human involved means there's a potential gap for error. I myself was involved in a SEV 0 incident, and I will be honest; we ended up deploying a Hello World application in one of our production fleets. Not the thing I wanted to be associated with my name, but, you know—Corey: And you were suddenly saying hello to the world, in fact—Micheal: [laugh].Corey: —and oops-a-doozy.Micheal: Yeah. So—and that really prompted us to rethink how we need to enable guardrails to do safe production rollouts. And that's how those conversations start ballooning out.Corey: And in the healthy, correct way. We've all broken production in various ways, and it's—you correctly are identifying, I believe, the direction you're heading in where this is a process problem and a tooling problem; it is not that you are secretly crap and should never have been allowed near anything in production. I mean, that's my excuse for me, but in your case, this is a common thing where it's, if someone can unintentionally cause issues like that, there needs to be better processes and procedures as the organization matures.Micheal: Yep. And that's kind of like always the route or the starting point for these discussions. And it starts growing from there on because, okay, you've helped improve the deploy process but now we're seeing an insane amount of slowness, say on the build processes, or even post-deploy, there's, like, issues on how we monitor and look into data.And that I think forces these conversations, okay, where do we have these bespoke tools available? What are people doing today? And you have to ask those hard questions, like what can we actually remove from here? The goal is not to introduce yet another new system. Many a time, to be honest, Bash just gets the job done. [laugh].Personally, I'm okay with that as long as it's consistent and people, you know, are able to contribute to it and you have good practices in validating it; if it works, we should go for it rather than introducing yet another YAML [laugh] and some of the other aspects of doing that work. And that's what we encourage as well. That's how I think a lot of this starts connecting together in terms of, okay, now this is becoming a productivity group; they're focused on certain challenges where investing probably one person here may up-level a few other engineers who don't have to do that on a day-to-day basis. And I think that's one of the key items for, especially, folks who are running mid-sized companies to realize and start investing in these types of teams to really up-level, sort of, the rest of the engineering.Corey: You've been doing this for a fair while. If you were to go back and start over again on day one—which is always a terrifying question, on some level—what would you have done differently about building out this function as Pinterest continued to scale out?Micheal: Well, first, I must acknowledge that this was not just me, and there's, like, a ton of people involved in helping make this happen.Corey: No, that's fair. We'll blame them for the missteps; that is—Micheal: [laugh].Corey: —just fine with me. 
I kid. I kid.Micheal: I think, definitely the nuances. If I look back at all the decisions that were made at that point in time, there was a decision made to move to Phabricator, which was, back then, a great open-source code management system given the information we had at that point in time. And I'm not—I think it's very hard to always look back and say, "Oh, we could have chosen x at one point in time." And I think in reality, that's how engineering organizations always evolve, that you have to make do with the information you have right now to make a decision that works for you over a couple of years.And I'll give you a small example of this. There was a time when Pinterest was actually on GitHub Enterprise—this was like circa 2013, I would say—and it really served us well for, like, five-plus years. Only at a certain point did we realize that it's hard to hire PHP engineers to support a tool like that, and we had to rethink what is the ROI on the investments we've made here? Can we ever map up or match back to one of the offerings in the industry today? And that's when you make decisions that, okay, at this point in time, it's clear that business continuity talks, you know, and it's hard to operate a system which is, at this moment, not supported, and then you make a call about making a shift or moving.And I think that's the key item. I don't think there's anything dramatically I would have changed since the start. Perhaps investing a bit more in individuals for the group and going from there. But that said, I'm really, sort of, at least proud of the fact that usually these teams are extremely lean and small, and they always have an outsized impact, especially when they're working with other engineers, other [opinionated 00:22:13] engineers for what it's worth.This episode is sponsored by our friends at Oracle Cloud. Counting the pennies, but still dreaming of deploying apps instead of "Hello, World" demos? Allow me to introduce you to Oracle's Always Free tier. It provides over 20 free services and infrastructure, networking, databases, observability, management, and security.And - let me be clear here - it's actually free. There's no surprise billing until you intentionally and proactively upgrade your account. This means you can provision a virtual machine instance or spin up an autonomous database that manages itself, all while gaining the networking, load balancing, and storage resources that somehow never quite make it into most free tiers needed to support the application that you want to build.With Always Free you can do things like run small-scale applications, or do proof-of-concept testing without spending a dime. You know that I always like to put asterisks next to the word free. This is actually free. No asterisk. Start now. Visit https://snark.cloud/oci-free that's https://snark.cloud/oci-free.Corey: Most folks show up intending to do good today, and you make the best decision at the time with the context and constraints that you have, but my question I think is less around, "Well, what were the biggest mistakes you made?" But more to do with the idea of, based upon what you've learned and as you have shown—as you've shined light on these dark areas, as you have been exploring it, has anything jumped out at you that is, "Oh, yeah. 
Now that I know—if I had known then what I know now, I would definitely have made this other decision." Ideally, something that applies a little more globally than specific within Pinterest, just because the whole idea, aspirationally, is that people might learn something from our conversation. At least I will, if nothing else.Micheal: No, I think that's a great question. And I think there are three things that jump to me, top of mind. I think technology is a means to an end unless it gives you a competitive edge. And it's really hard to figure out at what point in time which technology, and why we adopted it, is going to make the biggest difference. Humans always tend to have a bias towards aligning with where we want to go. So, that's the first one in my mind.The second one is, and we spoke about this last time, embrace your cloud provider as much as possible. You'd want to avoid taking on operational burden which is not going to add value to the business. If there is something you see you're operating which can be offloaded—because your provider can, trust me, do a way better job than you or your team of few can ever do—embrace that as soon as possible. It's better that way because then it frees up your time to focus on the most important thing, which I've realized over time is—I really think teams like ours are actually—we're probably the most valuable as a glue to all the different experiences a software engineer would go through as part of their SDLC.If we can simplify someone's life by giving them a clear view as to where their commit or the work is in this grand scheme of rolling out and giving them the right amount of data to take action when something goes wrong, trust me, they will love you for what you're doing because you're saving them a ton of time. Many times, we don't realize that when we publish 11 different ways for you to go and check to just get your basic validation of work done. We tend to focus so much on the technological aspect of what the tool does, rather than the experience of it, and I've realized, if you can bridge the experience, especially for teams like ours, people really don't even need to know whether you're running Kubernetes or any of those solutions behind the scenes. And I think that's one of the biggest takeaways I have.Corey: I want to double down on something you said about the fact that you are not going to be able to run these services as effectively as your provider can. And relatively recently—in fact, since the first time we spoke—AWS has released an investment report in Virginia. And from 2011 through 2020, they have invested in building AWS data centers there, $35 billion. I promise almost no company that employs people listening to this that are not themselves a cloud provider is going to make that kind of investment in running these things themselves.Now, do cloud providers have sharp edges? Yes, absolutely. That is what my entire career is about, unfortunately. But you're not going to do a better job of running things more sustainably, more reliably, et cetera, et cetera. But there are other problems with this—and that's what I want to start exploring here—where in the olden days, when I ran things in data centers and they went down a lot more as a result, sometimes when there were outages, I would have the CEO of the company just standing there nervously worrying over my shoulder as I frantically typed to fix things.Spoiler: my typing accuracy did not improve by having someone looming over me. 
Now, when there's an outage that your cloud provider takes, in many cases the thing that you are doing to fix it is reloading the status page and waiting for an update because it is completely out of your hands. Is that something that you've had to encounter? Because you can push buttons and turn dials when things are broken and you control it, but in an AWS—or other cloud provider—outage, all you can really do is wait unless you have a DR plan that is large-scale and effective enough that you won't feel foolish or have wasted a huge amount of time and energy migrating off and then—because then it gets repaired in ten minutes. How do you approach that, from your perspective? I guess, the expectation management piece?Micheal: It's definitely, I know, something which keeps a lot of folks within infrastructure up at night because, like you just said, at times we can feel extremely powerless when we obviously don't have direct control—or visibility at times, as well—on what's happening. One of the things we have realized over time as part of running on our cloud provider for over a decade now is that it forces us to rethink a bit on our priority workflows, what we want our Pinners to always have access to, what they need to see, what is not important or critical. Because it puts into perspective, even for the infrastructure teams, what is the most important thing we should always have available and running, what is okay to be in a degraded state, and until what time, right? So, it actually forces us to define SLOs and availability criteria within the team where we can broadcast that to the larger audience including the executives. So, none of this comes as a surprise at that point.I mean, it's not the answer, probably, you're looking for because there's nothing we can do except set expectations clearly on what we can do and how we think about the business when these things do happen. So, I know people may have a different view on this; I'm definitely curious to hear it as well, but I know at Pinterest at least we have converged on our priority workflows. When something goes out, how do we jump in to provide a degraded experience? We have very clear run books to do that, and especially when it's a SEV 0, we do have clear processes in place on how often we need to update our entire company on where things are. And especially this is where your partnership with the cloud provider is going to be a big, big boon because you really want to know or have visibility, at the minimum some predictability, on when things can get resolved, and how you want to work with them on some creative solutions. This is outside the DR strategy, obviously; you should still be focused on a DR strategy, but these are just simple things we've learned over time on how to just make it predictable for individuals within the company, so not everyone is freaking out.Corey: Yeah, from my perspective, I think the big things that I found that have worked, in my experience—mostly by getting them wrong the first time—are explaining that when someone else is running the infrastructure and they take an outage, there's not much we can do. And no, it's not the sort of thing where picking up the phone and screaming at someone is going to help us; that is the sort of thing that is best to communicate to executive stakeholders when things are running well, not in the middle of that incident.Then when things break, it's one of those, "Great, you're an exec. You know what your job is? 
Literally anything other than standing in the middle of the engineering floor, making everyone freak out even more. We'll have a discussion later about what the contributing factors were when you demand that we fire someone because of an outage. Then we're going to have a long and hard talk about what kind of culture you're trying to build here again?" But there are no perfect answers here.It's easy to sit here in the silver light of day with things working correctly and say, "Oh, yeah. This is how outages should be handled." But then when it goes down, we're all basically an inch away at best from running around with our hair on fire, screaming, "Fix it, fix it, fix it, fix it, now." And I am empathetic to that. There's a reason that I fix AWS bills for a living, and one of those big reasons is that it's a strictly business-hours problem and I don't have to run production infrastructure that faces anything that people care about, which is kind of amazing and freeing for someone who spent too many years on call.Micheal: Absolutely. And one of the things is that this is not only with the cloud provider, I think in today's nature of how our businesses are set up, there's probably tons of other APIs you are using or you're working with that you may not be aware of. And we ended up finding that the hard way as well. There was a certain set of APIs or services we were using in the critical path which we were not aware of. When these outages happen, that's when you find that out.So, you're not only beholden to your provider at that point in time; you have to have those SLO expectations set with your other SaaS providers as well, other folks you're working with. Because I don't think that's going to change; it's probably only going to get complicated with all the different types of tools you're using. And then that's a trade-off you need to really think about. An example here is just like—you know, like I said, we moved in the past from GitHub to Phabricator—I didn't close the loop on that because we're moving back to GitHub right now [laugh] and that's one of the key projects I'm working on. Yeah, it's the circle of life.But the thing is, we did a very strong evaluation here because we felt like, "Okay, there's a probability that GitHub can go down and that means people will not be productive for those couple of hours. What do we do then?" And we had to put a plan together for how we can mitigate that part and really build that confidence with the engineering teams, internally. And it's not the best solution out there; the other solution was to just run our own, but how is that going to make any difference, because we do have libraries being pulled from GitHub and so many other aspects of our systems which are unknowingly dependent on it anyways. So, you have to still mitigate those issues at some point in your entire SDLC process.So, that was just one example I shared, but it's not always on the cloud provider; I think there are just many aspects of—at least today how businesses are run, you're dependent; you have critical dependencies, probably, on some SaaS provider you haven't really vetted or evaluated. You will find out when they go down.Corey: So, I don't think I've told this story before, but before I started this place, I was doing a fair bit of consulting work for other companies. And I was doing a project at Pinterest years ago. 
And this was one of the best things I've ever experienced at a company site, let alone a client site, where I was there early in the morning, eight o'clock or so, and you know, engineers love to show up at the crack of 11:30, so I was working a little early; it was great. And suddenly my SSH session that I was using to remote into something or other hung.And it's tap up, tap enter a couple of times, tap it a couple more. It was hung hard. "What's the—" and then someone gently taps me on the shoulder. So, I take the headphones off. It was someone from corporate IT coming around saying, "Hey, there's a slight problem with our corporate firewall that we're fixing. Here's a MiFi device just for you that you can tether to get back online and get work done until the firewall comes back."And it was incredible, just the level of being on top of things, and the focus on keeping the people who were building things and doing expensive engineering work that was awesome—and also me—productive during that time frame was just something I hadn't really seen before. It really made me think about the value of where do you remove bottlenecks from people getting their jobs done? It was—it remains one of the most impressive things I've seen.Micheal: That is great. And as you were telling me that, I did look up our [laugh] internal system to see whether a user called Corey Quinn existed, and I should confirm this with you. I do see entries over here, a couple of commits, but this was 2015. Was that the time you were around, or is this before that even?Corey: That would have been around then, yes. I didn't start this place until late 2016.Micheal: I do see your commits, like, from 2015, and I—Corey: And they're probably terrible, I have no doubt. There's a reason I don't read code for a living anymore.Micheal: Okay, I do see a lot of GIFs—and I hope it's pronounced as GIF—okay, this is cool. We should definitely have a chat about this separately, Corey?Corey: Oh, yeah. "Would you explain this code?" "Absolutely not. I wrote it. Of course, I have no idea what it does. That's the rule. That's the way code always works."Micheal: Oh, you are an honorary Pinterest engineer at this point, and you have—yes—contributed to our API service and a couple of Puppet profiles I see over here.Corey: Oh, yes—Micheal: [Amazing 00:36:11]. [laugh].Corey: You don't wind up thinking that's a risk factor that should be disclosed. I kid. I kid. It's, I made a joke about this when VMware acquired SaltStack and I did some analytics and found that 60-some-odd lines of code I had written way back when were still in the current version of what was being shipped. And they thought, "Wait, is this actually a risk?"And no, I am making a joke. The joke is, my code is bad. Fortunately, there are smart people around me who review these things. This is why code review is so important. But there was a lot to admire when I was there doing various things at Pinterest. It was a fun environment to work in, the level of professionalism was phenomenal, and I was just a big fan of a lot of the automation stuff.Phabricator was great. I loved working with it, and, "Great, I'm going to use this at the next place I go." And I did and then it was—I looked at what it took to get it up and running, and oh, yeah, I can see why GitHub is so popular these days. But it was neat. It was interesting seeing that type of environment up close.Micheal: That is great to hear. You know, this is what I enjoy, like, hearing some of these war stories. 
I am surprised; you seem to have committed way more than I've ever done in my [laugh] duration here at Pinterest. I do managing for a living, but then again—Corey, the good news is your code is still running on production. And we—Corey: Oh dear.Micheal: —haven't—[laugh]. We haven't removed or made any changes to it, so that's pretty amazing. And thank you for all your contributions.Corey: Oh, please, you don't have to thank me. I was paid, it was fine. That's the value of—Micheal: [laugh].Corey: —[work 00:37:38] for hire. It's kind of amazing. And the best part about consultants is, when we're done with a project, we get the hell out and everyone's happy about it.Even more happy when it's me that's leaving because of obvious personality-related reasons. But it was just an interesting company from start to finish. I remember one other time, I wound up opening a ticket about having a slight challenge with a flickering on my then Apple-branded display that everyone was using before they discontinued those. And I expected there to be, "Oh, okay. You're a consultant. Great. How did we not put you in the closet with a printer next to that thing, breathing the toner?" Like most consulting clients tend to do, and sure enough, three minutes later, I'm getting that tap on the shoulder again; they have a whole replacement monitor. "Can you go grab a cup of coffee? We'll run the cable for it. It'll just be about five minutes." I started to feel actively bad about requesting things because I did a lot of consulting work for a lot of different companies, and not to be unkind, but treating consultants and contractors super well is not something that a lot of companies optimize for. I can't necessarily blame them for that. It just really stood out.Micheal: Yep, I do hope we are keeping up with that right now because I know our team definitely has a lot of consultants working with us as well. And it's always amazing to see; we do want to treat them as FTs. It doesn't even matter at that point because we're all individuals and we're trying to work towards common goals. Like you just said, I think I personally have learned a few items as well from some of these folks. Which, again, I think speaks to how we want to work and create a culture of, like, we're all engineers; we want to be solving problems together, and as you were doing it, we want to do it in such a way that it's still fun, and we're not having the restrictions of titles or roles and other pieces. But I think I digressed. It was really fun to see your commits, though; I do want to track this at some point before we move completely over to GitHub, at least keep this as a record, for what it's worth.Corey: Yeah, basically, look at this graffiti in the codebase of, "A shit-poster was here," and here I am. And that tends to be, on some level, the mark we leave on the universe. What's always terrifying is looking at things I did 15 years ago in my first Linux admin job. Can I still ping the thing that I built there? Yes, I can. And how is that even possible? That should not have outlived me; honestly, it should never have seen the light of day in production, but here we are. And you never know how long that temporary kluge you put together is going to last.Micheal: You know, one of the things I was recalling, I was talking to someone in my team about this topic as well. We always talk about 10x engineers. I don't know what your thoughts are on that, but the fact that you just mentioned you built something and it still pings. 
And there's a bunch of things, in my mind, when you are writing code or you're working on some projects, the fact that it can outlast you and live on, I think that's a big, big contribution. And secondly, if your code can actually help up-level, like, ten other people, I think you've really made the mark of a 10x engineer at that point.Corey: Yeah, the idea of the superhuman engineer has always been a strange and dangerous one. If for nothing else, from where I sit, excellence is inherently situational. Like we just talked about, someone at Pinterest is potentially going to be able to have that kind of impact specifically because—to my worldview—there's enough process and things around there that empower them to succeed. Then if you were to take that engineer and drop them into a five-person startup where none of those things exist, they might very well flounder. It's why I'm always a little suspicious of "this is a startup founded by engineers from Google or Facebook," or wherever it is.It's, yeah, and what aspects of that culture do you think are one-to-one matches with the small scrappy startup in the garage? Right, I'm predicting some challenges here. Excellence is always situational. An amazing employee at one company can get fired at a second one for lack of performance, and that does not mean that there's anything wrong with them and it does not mean that they are a fraud. It means that what they needed to be successful was present in one of those shops, but not the other.Micheal: This is so true. And I really appreciate you bringing this up because whenever we discuss any form of performance management, that is a—in my view personally—I think that's an incorrect term to be using. It is really that, at that point in time, either you have outlived the environment you are in, or the environment is going in a different direction where I think your current skill set probably could be best used in an environment where it's going to work. And I know it's very fuzzy at that point, but like you said, yes, excellence really means you don't want to tie it to the number of commits you have pushed out, or any specific aspect of your deliverables or how you work.Corey: There are no easy answers to any of these things, and it's always situational. It's why I think people are sometimes surprised when I will make comments about the general case of how things should be, then I talk to a specific environment where they do the exact opposite, and I don't yell at them for it. It's there—in a general sense, I have some guidance, but there are usually reasons things are the way they are, and I'm interested in hearing them out. Everything's situational, the worst consultant in the world is the one that shows up, has no idea what's going on, and then asks, "What moron set this up?" Invariably, to said, quote-unquote, "moron." And the engagement doesn't go super well from there. It's, "Okay, why is this the way that it is? What constraints shaped it? What was the context behind the problem you were trying to solve?" And, "Well, why didn't you use this AWS service?" "Because it didn't exist for another three years when we were building that thing," is a—Micheal: Yes.Corey: —common answer.Micheal: Yes, you should definitely appreciate that of all the decisions that have been made in the past. People tend to always forget why they were made. You're absolutely right; what worked back then will probably not work now, or vice versa, and it's always situational. 
So, I think I can go on about this for hours, but I think you hit that to the point, Corey.Corey: Yeah, I do my best. I want to thank you for taking another block of time out of your day to wind up talking with me about various aspects of what it takes to effectively achieve better levels of engineering productivity at large companies, with many teams, working on shared codebases. If people want to learn more about what you're up to, where can they find you?Micheal: I'm definitely on Twitter. So, please note that I'm spelled M-I-C-H-E-A-L on Twitter. So, you can definitely read on to my tweets there. But otherwise, you can always reach out to me on LinkedIn, too.Corey: Fantastic and we will, of course, include a link to that in the [show notes 00:44:02]. Thanks once again for your time. I appreciate it.Micheal: Thanks a lot, Corey.Corey: Micheal Benedict, head of engineering productivity at Pinterest. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with a comment telling me that you work at Pinterest, have looked at the codebase, and would very much like a refund and an apology.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.
2021-11-09 Weekly News - Episode 125Watch the video version on YouTube at https://youtu.be/XkpNcuDzhhw Hosts: Gavin Pickin - Senior Developer for Ortus Solutions Eric Peterson - Senior Developer for Ortus Solutions Thanks to our Sponsor - Ortus SolutionsThe makers of ColdBox, CommandBox, ForgeBox, TestBox and almost every other Box out there. A few ways to say thanks back to Ortus Solutions: Like and subscribe to our videos on YouTube. Subscribe to our Podcast on your Podcast Apps and leave us a review Sign up for a free or paid account on CFCasts, which is releasing new content every week Buy Ortus's new Book - 102 ColdBox HMVC Quick Tips and Tricks on GumRoad (http://gum.co/coldbox-tips) Patreon SupportWe have 37 patrons providing 93% of the funding for our Modernize or Die Podcasts via our Patreon site: https://www.patreon.com/ortussolutions. Now offering Annual Memberships, pay for the year and save 10% - great for businesses.News and EventsColdBox Mail Services 2.0 Released - Fluent Mail For AllWe are so excited to bring you a major release of our cbmailservices module. This module has been around since our initial versions of ColdBox and it has now matured into a modern and fluent library for sending mail.https://www.ortussolutions.com/blog/coldbox-mail-services-20-fluent-mail-for-all https://www.forgebox.io/view/cbmailservices FORGEBOX 6 has landed!After several months of work, we are proud to announce the release of FORGEBOX 6. This has been a major undertaking spanning several months' worth of work, a complete UI revamp for registered users, many bug fixes, multi-key API, and much more. We have also introduced our new Business Accounts (https://forgebox.io/plans) with the ability for organizations to have a simple and human way of managing their final package releases and their teams.https://www.ortussolutions.com/blog/forgebox-6-has-landed Tonight!!! - Mid Michigan CFUG Meeting - Using AI and machine learning along with ColdFusion to build a smarter call center with Nick KwiatkowskiTuesday 11/9/21 at 7 pm easternUsing AI and machine learning along with ColdFusion to build a smarter call center at the next Mid-Michigan CFUG meeting Tuesday 11/9/21 at 7 pm eastern. Michigan State University's Nick Kwiatkowski will be showing how to create voice and text-based chat bots that you can deploy to your contact centers (and help desks!) to help automate frequently asked questions.Meeting URL: https://bit.ly/3w9LZ7D Adobe 1 Day Workshop - Adobe ColdFusion Workshop with Damien BruyndonckxWed, November 10, 202109:00 - 17:00 CEST EUROPEANJoin the Adobe ColdFusion Workshop to learn how you and your agency can leverage ColdFusion to create amazing web content. This one-day training will cover all facets of Adobe ColdFusion that developers need to build applications that can run across multiple cloud providers or on-premise.https://coldfusion-workshop.meetus.adobeevents.com/ Ortus Webinar for November - Javier Quintero - FORGEBOX Business Plan: Introducing Organizations and TeamsNovember 19th at 11:00 AM Central Time (US and Canada)In this webinar, Javier Quintero, lead developer of FORGEBOX, will present the new features and the improved UI that is now available on FORGEBOX 6. Moreover, he'll explore in depth the Business Plan that is directed towards organizations and teams so they can collaborate and support their software building needs. 
He will show us how to create a new organization, how you can add members to it with specific roles, and how you can control teams, members, packages and publish access.with Javier Quinterohttps://us02web.zoom.us/meeting/register/tZclfuGopjkiG9TIMoC93YbKIcLM1ok_KKlwOnline CF Meetup - "Avoiding Server-Side Request Forgery (SSRF) Vulns in CFML", with Brian ReillyThursday, November 11, 2021 - 9:00 AM to 10:00 AM PSTServer-Side Request Forgery (SSRF) vulnerabilities allow an attacker to make arbitrary web requests (and in some cases, other protocols too) from the application environment. Exploiting these flaws can lead to leaking sensitive data, accessing internal resources, and under certain circumstances, remote command execution.Several ColdFusion/CFML tags and functions can process URLs as file path arguments -- including some tags and functions that you might not expect. If these tags and functions process unvalidated user-controlled input, this can lead to SSRF vulnerabilities in your applications. In addition to providing a list of affected tags and functions, I'll cover some approaches for identifying and remediating vulnerable code. My goal for this talk is to raise awareness about what may be a security blindspot for some ColdFusion/CFML developers.https://www.meetup.com/coldfusionmeetup/events/281850930/ ICYMI - Online CF Meetup - "Migrating apps to ColdFusion 2021 from earlier versions", with Charlie ArehartThursday, November 4, 20219:00 AM to 10:00 AM PDTWhile CF2021 has been out now for a year (released in Nov 2020), many orgs may only now be considering moving to it, whether from CF2018 or perhaps CF2016, CF11, CF10, or even earlier. How have the versions changed, in ways that some older code may not run on CF2021? And if you're skipping some CF version/s, what might have tripped you up in those, though not really "new" in CF2021 itself? And what can you do to mitigate such challenges?In this session, CF troubleshooter Charlie Arehart will share from his experience helping folks make such migrations the past year (and for years with previous CF versions), whether in his role as an independent consultant or providing assistance to the CF community. He'll cover things you can consider in advance of the migration as well as things that might help during or after the migration. Most importantly, this talk will focus on the differences between CF2021 and various earlier CF versions. (Note that he has previously given a talk on migrating CF admin settings, and he plans a future talk on some other aspects of migration.)https://www.meetup.com/coldfusionmeetup/events/281800384/ Recording: https://www.youtube.com/watch?v=QQBHnQExFqc CFCasts Content Updateshttps://www.cfcasts.com Just ReleasedYouth Trainings - Universidad Don BoscoControl de Versiones Coming this week Youth Trainings - Universidad Don Bosco SoapBox Video Podcast A new series of ForgeBox coming very soonSend your suggestions at https://cfcasts.com/supportConferences and TrainingDeploy by Digital OceanTHE VIRTUAL CONFERENCE FOR GLOBAL DEVELOPMENT TEAMSNovember 16-17, 2021 https://deploy.digitalocean.com/homeAWS re:InventNOV. 29 – DEC. 3, 2021 | LAS VEGAS, NVCELEBRATING 10 YEARS OF RE:INVENTVirtual: FreeIn Person: $1799https://reinvent.awsevents.com/ Postgres BuildOnline - FreeNov 30-Dec 1 2021https://www.postgresbuild.com/ ITB Latam 2021December 2-3, 2021Into the Box LATAM is back and better than ever! 
Our virtual conference will include speakers from El Salvador and all over the world, who'll present on the latest web and mobile technologies in Latin America.Registration is completely free so don't miss out!ITB Latam Schedule Postedhttps://latam.intothebox.org/ Adobe ColdFusion Summit 2021December 7th and 8th - VirtualAgenda is out!!!@Adobe @coldfusion #CFSummit2021 keynote we will be featuring @ashleymcnamara! Her talk will focus on the history & future of DevRel how we got here & where we're going.2 tracks - 1 all CFML - the other a mix of CFML and semi-related topicsRegister for Free - https://cfsummit.vconfex.com/site/adobe-cold-fusion-summit-2021/1290Blog - https://coldfusion.adobe.com/2021/09/adobe-coldfusion-summit-2021-registrations-open/ jConf.devNow a free virtual eventDecember 9th starting at 8:30 am CDT/2:30 pm UTC.https://2021.jconf.dev/?mc_cid=b62adc151d&mc_eid=8293d6fdb0 VueJS Nation ConferenceOnline Live EventJanuary 26th & 27th 2022Register for FreeCall for Speakers is openhttps://vuejsnation.com/ More conferencesNeed more conferences, this site has a huge list of conferences for almost any language/community.https://confs.tech/Blogs, Tweets and Videos of the WeekBlog - Ben Nadel - Writing To The Standard Out / Console Using WriteDump() In Adobe ColdFusion 2021As I'm starting to modernize my ColdFusion blogging platform, one thing that I am missing terribly from Lucee CFML is the ability to write to the standard out (stdout) and standard error (stderr) streams. In a Docker / containerized context, writing to the output streams is a powerful debugging tool (not to mention a log aggregation technique). A few months ago, I looked at porting the systemOutput() function from Lucee CFML to Adobe ColdFusion; but, I just recently discovered that the CFDump tag and the writeDump() function in Adobe ColdFusion can write directly to the "console" (Standard Out) instead of to the browser. This isn't as seamless as systemOutput(); but, it may just be good enough!https://www.bennadel.com/blog/4150-writing-to-the-standard-out-console-using-writedump-in-adobe-coldfusion-2021.htm Blog - Ben Nadel - ColdFusion Component Setters / Accessors Are Chainable For Easy Dependency-InjectionThis is primarily a note-to-self; but the other day, I stumbled upon / remembered that the auto-generated accessors in a ColdFusion component are chainable. At work, I never think about this because we use a dependency-injection framework which performs all the setter-injection for us. However, in my blogging platform, all the components are wired-up manually in my onApplicationStart() event-handler. As such, the fact that I can chain my setter accessors leads to a lovely, fluent API.https://www.bennadel.com/blog/4149-coldfusion-component-setters-accessors-are-chainable-for-easy-dependency-injection.htm Blog - Ben Nadel - Considering An isError() Decision Function In ColdFusionAs I mentioned earlier today, I'm looking to use Rollbar's Java SDK in my Adobe ColdFusion 2021 app (namely, this blog). The Rollbar SDK exposes a fairly simple API. However, that simple API uses a data-type that I almost never think about in my code: java.lang.Throwable. To be clear, I deal with error objects all the time in ColdFusion; but, I'm usually serializing them to the "Standard Error" stream (where they get slurped-up into our log aggregator) - I'm never worrying about the actual data-type and what impact it may have on Java method signatures. 
It got me thinking about decision functions; and, why there is no isError() built-in function (BIF).
https://www.bennadel.com/blog/4148-considering-an-iserror-decision-function-in-coldfusion.htm

Blog - Javier Quintero - Ortus Solutions - FORGEBOX 6 has landed!
After several months of work, we are proud to announce the release of FORGEBOX 6. This has been a major undertaking spanning several months' worth of work, a complete UI revamp for registered users, many bug fixes, multi-key API, and much more. We have also introduced our new Business Accounts (https://forgebox.io/plans) with the ability for organizations to have a simple and human way of managing their final package releases and their teams.
https://www.ortussolutions.com/blog/forgebox-6-has-landed

Blog - Adam Cameron - A question about the overhead of OOP in CFML
A question cropped up on the CFML Slack channel the other day. My answer was fairly long-winded so I decided to post it here as well. I asked the original questioner, and they are OK with me reproducing their question.
"Again, I have a question to experienced OOP CFML coders. From the clean code concept I know I should break code into smaller (or even its smallest) pieces. Is there any possible reason to stop doing that at a certain level in CFML? E.g. for performance reasons? E.g. let's assume I have a component named Car.cfc. Should I always break a Car.cfc component into Wheel.cfc, Engine.cfc, CarBody.cfc accordingly? Does createObject behave like include files that would come with a certain overhead because of the physical file request? What about when I also break Engine.cfc into many little pieces (and Wheel.cfc also)?" - Andreas @ CFML Slack Channel
Here's my answer. I've tidied up the English in some places, but have not changed any detail of what I said. This is interesting, as Eric is battling this in qb/quick and has made some amazing strides lately.
https://blog.adamcameron.me/2021/11/a-question-about-overhead-of-oop-in-cfml.html

Blog - Ben Nadel - Getting Rollbar's Java SDK 1.7.10 Working In Adobe ColdFusion 2021
As I mentioned the other day, I'm preparing to pour some love into my ColdFusion blogging platform. One area in much need of love is my error logging. If you can even imagine, this blog still uses email as the primary means to report errors! *Ring ring ring* - Hello. What's that? The 1990's called and they want their error handling back? As a step towards modernization, I thought I would try out Rollbar - they have both a client-side JavaScript SDK and a server-side Java SDK. And, I think they have a cool name. Getting Rollbar's Java SDK 1.7.10 working with Adobe ColdFusion 2021 turned out to be a bit of a battle.
https://www.bennadel.com/blog/4147-getting-rollbars-java-sdk-1-7-10-working-in-adobe-coldfusion-2021.htm

CFML Jobs
Several positions available on https://www.getcfmljobs.com/
Listing over 227 ColdFusion positions from 102 companies across 123 locations in 5 countries.
1 new job listed: Full-Time - ColdFusion Developer at Gold Coast QLD - Australia - Posted Nov 03
https://www.getcfmljobs.com/jobs/index.cfm/australia/ColdFusion-Developer-at-Gold-Coast-QLD/11375

ForgeBox Module of the Week
ColdBox Mail Services 2.0 by Luis Majano and Ortus Solutions
We are so excited to bring you a major release of our cbmailservices module.
This module has been around since our initial versions of ColdBox and it has now matured into a modern and fluent library for sending mail.
https://www.ortussolutions.com/blog/coldbox-mail-services-20-fluent-mail-for-all
https://www.forgebox.io/view/cbmailservices

VS Code Hints, Tips and Tricks of the Week
New Relic CodeStream: GitHub, GitLab, Bitbucket PRs and Code Review
New Relic CodeStream is a developer collaboration platform that integrates essential dev tools into VS Code. Eliminate context-switching and simplify code discussion and code review by putting collaboration tools in your IDE.
Integrations:
Code Hosts: Bitbucket, Bitbucket Server, GitHub, GitHub Enterprise, GitLab, GitLab Self-Managed
Issue Trackers: Asana, Azure DevOps, Bitbucket, Clubhouse, GitHub, GitHub Enterprise, GitLab, GitLab Self-Managed, Jira, Linear, Trello, YouTrack
Observability: New Relic One, Pixie
Messaging Services: Slack, Microsoft Teams
CodeStream is now part of New Relic - this must be very recent.
https://marketplace.visualstudio.com/items?itemName=CodeStream.codestream

Thank you to all of our Patreon Supporters
These individuals are personally supporting our open source initiatives to ensure that great tooling like CommandBox, ForgeBox, ColdBox, ContentBox, TestBox and all the other boxes keep getting the continuous development they need, and they fund the cloud infrastructure that our community relies on, like ForgeBox for our package management with CommandBox. You can support us on Patreon here: https://www.patreon.com/ortussolutions
Now offering Annual Memberships: pay for the year and save 10% - great for businesses.
Bronze Packages and up now get ForgeBox Pro and CFCasts subscriptions as a perk of their Patreon subscription.
All Patreon supporters have a Profile badge on the Community Website.
All Patreon supporters have their own private forum access on the Community Website.
Patreons: John Wilson - Synaptrix, Eric Hoffman, Gary Knight, Mario Rodrigues, Giancarlo Gomez, David Belanger, Jonathan Perret, Jeffry McGee - Sunstar Media, Dean Maunder, Joseph Lamoree, Don Bellamy, Jan Jannek, Laksma Tirtohadi, Carl Von Stetten, Dan Card, Jeremy Adams, Jordan Clark, Matthew Clemente, Daniel Garcia, Scott Steinbeck - Agri Tracking Systems, Ben Nadel, Mingo Hagen, Brett DeLine, Kai Koenig, Charlie Arehart, Jonas Eriksson, Jason Daiger, Jeff McClain, Shawn Oden, Matthew Darby, Ross Phillips, Edgardo Cabezas, Patrick Flynn, Stephany Monge, Kevin Wright, Steven Klotz
You can see an up to date list of all sponsors on Ortus Solutions' website: https://ortussolutions.com/about-us/sponsors
★ Support this podcast on Patreon ★
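As promised in the SSRF meetup blurb above, here is a minimal sketch of the allowlist-style remediation that talk describes: validating user-controlled URLs before any URL-capable function touches them. It is written in Python purely for illustration (the talk itself is about CFML tags and functions), and the ALLOWED_HOSTS set and fetch_report helper are hypothetical names, not anything from the talk.

```python
# Minimal, hypothetical sketch of allowlist-based URL validation to mitigate SSRF.
# The same idea applies to CFML tags/functions that accept URLs where a file
# path is expected: validate the destination before the request is ever made.
from urllib.parse import urlparse
import urllib.request

ALLOWED_SCHEMES = {"https"}                 # no http, file, gopher, etc.
ALLOWED_HOSTS = {"reports.example.com"}     # only hosts the app is meant to call

def fetch_report(user_supplied_url: str) -> bytes:
    parsed = urlparse(user_supplied_url)
    if parsed.scheme not in ALLOWED_SCHEMES:
        raise ValueError(f"Blocked scheme: {parsed.scheme!r}")
    if parsed.hostname not in ALLOWED_HOSTS:
        raise ValueError(f"Blocked host: {parsed.hostname!r}")
    # Only fetch once both the scheme and the host pass the allowlist.
    with urllib.request.urlopen(user_supplied_url, timeout=5) as resp:
        return resp.read()

if __name__ == "__main__":
    try:
        # A classic SSRF target (cloud metadata endpoint) is rejected up front.
        fetch_report("http://169.254.169.254/latest/meta-data/")
    except ValueError as err:
        print(err)
```

Note that allowlisting alone does not cover redirects or DNS rebinding; a full remediation also needs to handle those, which is the kind of detail the talk's "identifying and remediating vulnerable code" section is aimed at.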
Two years of simmering discord came to a head last week as the .NET OSS maintainers openly revolted against the .NET Foundation for years of non-communication, the Executive Director resigned, and newly elected board members are left to pick up the pieces. It was a wild week.

First, there was some discord due to the .NET Foundation saying a board member left "for personal reasons" when in reality they left due to the nature of the .NET Foundation itself.

Second, during this brouhaha, and upon finding out the Executive Director merged a PR without communicating, the .NET community learned that their projects were moved to the Foundation's GitHub Enterprise account without their consent, that the DNFAdmin service account was basically a trojan horse (an actual Trojan Horse, not the virus variety), and that even if they signed the "contributor model" contracts, they may not own their own projects.

As I said, it was a wild week.

So, the Executive Director apologized, not for the lack of communication, or moving the projects to the .NET Foundation's GitHub Enterprise account, or misstating why Rodney Littles II left the board, or for the fact that the foundation has not been up front with what it means to have a project join the .NET Foundation, but for… forcing through a PR on a project that the foundation ostensibly owned.

Naturally, members of the community asked for the Executive Director's resignation, and they got it. And we sit, a few days later, watching more communication from a single member of the board than we had from entire previous Boards of Directors, particularly around most of the pain points the community mentioned previously. One of the board members spoke up during the incident but said nothing of consequence, except to say, "Likewise, I think that the community and projects may have not understood what they were agreeing to when they were brought under the .NET Foundation umbrella." That's what we in the biz like to call an understatement. I'm also not the only person to call this entire thing a brouhaha.

And since I'm writing this newsletter, I get to have my say. I don't think Claire Novotny should have resigned as the Executive Director of the .NET Foundation. I believe her to be a scapegoat for the structural issues the .NET Foundation has, as I've written about and spoken about previously. We've had entire Boards of Directors come and go from the .NET Foundation with nary a peep from them in public about their work, no after-action review or postmortem, nothing outside of their initial interview to become a member of the Board of Directors.

I believe if anyone should resign, it should be the Boards of Directors. They ultimately are responsible for what the Executive Director and the .NET Foundation do, and while half the board is fresher than a prince from Bel-Air, the other half aren't, and in some form of irony, it's only the new people who are speaking out. I think they're Good People, but they either have no idea what they're doing or they haven't seen and felt the issue simmering for the last few years, in which case they most assuredly shouldn't be representing the community in the .NET Foundation.

It really all comes back to a single question: What does the .NET Foundation do? Or, taken further: Why does the .NET Foundation exist?
We haven't really gotten an answer to that question yet, especially given the vague "commercially friendly" mission statement. I'm willing to bet the Board of Directors haven't been taking minutes for their daily meetings over the past week, even though the bylaws require them to, and so I've taken to asking that the bylaws be amended to require that the minutes be shared for review by the membership of the foundation.

If the .NET Foundation is going to exist, then it needs a vision and a purpose. If you care about .NET and the future of .NET, you should be right there, holding their feet to the fire. Otherwise we're going to get what we've always got: a mono-culture that seeks to fulfill Microsoft's whims about .NET, not what the actual OSS community wants or needs of .NET.

With that bit of news in the can, let's see what else happened Last Week in .NET:
While every developer loves a good story about discovering and fixing a gnarly bug, not everyone enjoys the work of finding those bugs. Most folks would prefer to be writing business logic and solving new problems. But those input validation errors and resource leaks won't solve themselves. Or will they?

AWS Bug Bust is a global competition launched with the goal of finding and fixing one million bugs in codebases around the world. It takes the traditional bug bash and turns it into a competition that anyone can enter. Got a repo or two that you've been meaning to clean up? Enter the Bug Bust and start squashing. This competition awards points to organizations, as well as individuals within an organization, for every bug that they fix in their own repos. A little friendly competition can motivate developers to fix more bugs in order to move up the leaderboards. How do you think we built Stack Overflow? Fake internet points are very important around here. With the Bug Bust competition, it's not just fake internet points and personal glory; top bug squashers—overall and within top organizations—can win all-expense-paid trips to re:Invent 2021.

In a traditional bug bust, someone has to find the bugs, file tickets on all of them, then collect them for squashing. In the Bug Bust, Amazon has managed to automate that part of the process. That's because the Bug Bust is built on their AI-powered code review and profiling tool, CodeGuru. CodeGuru uses static analysis and machine learning, with some additional automated reasoning, to find bugs in code: everything from best-practice violations to concurrency issues, resource leaks, security problems, and more. AI isn't here to take your jobs; it's here to automate away the tedious stuff. Developers get to harness the power of artificial intelligence in their everyday lives.

Concurrency and resource leak issues tend to drain the soul out of developers. You could spend all day trying to optimize and close those. CodeGuru includes a function profiler that looks for a codebase's most expensive calls. It's a lightweight agent actively running and looking for ways to reduce the cost of the running application. These bugs, along with security issues and AWS API calls, are the ones that earn the most points. But all bugs earn their bashers points; CodeGuru also spots code inefficiencies, duplication, general code quality issues, and missing input validation. The model behind this is pretrained on years of Amazon bug hunting experience. The system does learn from your feedback about what counts as a good bug in your codebase, but it's not training on your code. It's your feedback that makes CodeGuru a better bug hunter.

If you have Java and Python code in a GitHub, GitHub Enterprise, Bitbucket, or AWS CodeCommit repository, you can jump into the competition. Sign up with your email and you get 30 days to run as many Bug Busts as you want for free. The top ten individual bug busters get VIP treatment at the 2021 re:Invent conference (and an all-expense-paid trip there), which is being held in person this year. Top participating organizations get a ticket to give to one of their developers as well. For those bashers outside of the top ten, you can still earn some sweet swag by passing some point milestones. The contest to win the trip to re:Invent 2021 runs through September, but you can still automate your bug bashes and get swag anytime. Want to get started? Head over to the AWS Bug Bust site now.
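To make the "resource leak" category concrete, here is a small, hypothetical Python example of the kind of bug an automated reviewer like CodeGuru is described as flagging, together with the idiomatic fix. The function names are invented for illustration; this is not actual CodeGuru output.

```python
# Hypothetical illustration of a resource leak and its fix: the sort of finding
# the article says earns points in a Bug Bust.

def read_config_leaky(path: str) -> str:
    # Bug: if read() raises, the file handle is never explicitly closed, and
    # even on success its closure depends on garbage collection timing.
    f = open(path)
    return f.read()

def read_config_fixed(path: str) -> str:
    # Fix: a context manager guarantees the handle is closed,
    # even when an exception is raised mid-read.
    with open(path) as f:
        return f.read()

if __name__ == "__main__":
    import os
    import tempfile

    fd, tmp = tempfile.mkstemp()
    os.write(fd, b"debug=false\n")
    os.close(fd)
    print(read_config_fixed(tmp))
    os.remove(tmp)
```

The fix is purely mechanical, which is exactly why this class of bug suits automated detection: the tool only needs to spot a resource acquired outside a `with` block (or try/finally) and suggest the managed form.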
THE NEWS FROM REDMOND
Announcing Experimental Mobile Blazor Bindings February update
.NET Interactive is here! .NET Notebooks Preview 2
.NET Framework February 2020 Security and Quality Rollup
Making our Unity Analyzers Open-Source
Introducing Scalar: Git at scale for everyone
Windows Terminal Preview v0.9 Release
AndroidX NuGet Packages are Stable!
VS Code January 2020 (version 1.42)
Accessibility Improvements in Visual Studio 2019 for Mac
Using .NET for Apache Spark to Analyze Log Data
Decompilation of C# code made easy with Visual Studio
February 2020 release of Azure Data Studio is now available
GitHub Enterprise is now free through Microsoft for Startups

AROUND THE WORLD
Rider 2019.3.2 is Available!
ReSharper Ultimate 2019.3.2 is Out!
AWS SDK for .NET v3.5 Preview
JetBrains .NET Day Online 2020 – Call for Speakers
Announcing PostSharp 6.5 RC
Rider 2020.1 Roadmap

PROJECTS OF THE WEEK
NetLearner - Shahed Chowdhuri
NetLearner is an ASP .NET Core web app to allow any user to consolidate multiple learning resources all under one umbrella. The codebase itself is a way for new/existing .NET developers to learn ASP .NET Core, while a deployed instance of NetLearner can be used as a curated link-sharing web application. Also, be sure and check out the Project of the Week archives!

SHOUT-OUTS / PLUGS
.NET Bytes on Twitter
Matt Groves is: Tweeting on Twitter, Live Streaming on Twitch
Calvin Allen is: Tweeting on Twitter, Live Streaming on Twitch
Learn how Cox Automotive started its journey with GitHub Enterprise. Hear how the company improved its processes around managing GitHub Enterprise on AWS and its plans to streamline operations even further in the future. Millions of developers and thousands of businesses rely on GitHub to collaborate on code and build better software faster. GitHub Enterprise is the self-hosted solution for businesses that you can deploy and manage in your own secure environment, and what better place to do that than on AWS. This session is brought to you by AWS partner, GitHub.
AWS Lambda has emerged as a powerful and cost-effective way for enterprises to quickly deploy services without the need to provision and manage virtual servers. This session includes a hands-on demo of how to use GitHub as the core of a DevOps toolchain. Learn how to leverage AWS integrations with Jenkins, the AWS CLI, and open source software to build, test, and deploy a service to AWS Lambda. We also explore key product updates to GitHub and GitHub Enterprise that are designed to make serverless development easier and more efficient. Session sponsored by GitHub, Inc.
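For readers who want a feel for what the "deploy a service to AWS Lambda" step of such a toolchain can look like, here is a minimal sketch using boto3. It is an illustration only, not the session's demo: the function name, zip path, and region are hypothetical placeholders, and a real pipeline would typically run something like this from Jenkins or a GitHub-triggered job after the build and test stages pass.

```python
# Minimal sketch of a Lambda deploy step (hypothetical names and paths).
# Assumes AWS credentials are available in the environment, e.g. on a
# Jenkins agent or CI runner, and that the Lambda function already exists.
import boto3

def deploy(function_name: str, zip_path: str, region: str = "us-east-1") -> str:
    lambda_client = boto3.client("lambda", region_name=region)
    with open(zip_path, "rb") as artifact:
        response = lambda_client.update_function_code(
            FunctionName=function_name,
            ZipFile=artifact.read(),   # the packaged build artifact
            Publish=True,              # publish a new version on each deploy
        )
    return response["Version"]

if __name__ == "__main__":
    # Placeholder values for illustration only.
    print(deploy("my-service", "build/my-service.zip"))
```

The same step can also be expressed with the AWS CLI (`aws lambda update-function-code`), which is closer to how a shell-based Jenkins job would call it.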
In this episode we talk to Volker Hilsheimer, VP of Engineering, Global Scale at Telenor Digital, about GitHub Enterprise and how it will help us develop software-based services across teams, nations, and other organisational barriers.
I had a chance to speak with Jean Louis Vignaud of IBM about the recently announced GitHub Enterprise as a Service on IBM Bluemix. This offering is for enterprises that want to use the very popular GitHub service but, for whatever reason, cannot use the public cloud version. Organizations can now run their own "private label" GitHub Enterprise in their own dedicated environment on Bluemix. You can find out more about this exciting new offering at:
1. Blog: Introducing the first-ever GitHub Enterprise as a hosted service: https://developer.ibm.com/bluemix/2016/06/16/github-enterprise-hosted-service-on-bluemix/
2. Video: Benefits of GitHub Enterprise with IBM Bluemix Dedicated: https://www.youtube.com/watch?v=AxGTFZzZ7vU
3. SlideShare: IBM Bluemix Dedicated – GitHub Enterprise: http://www.slideshare.net/IBMDevOps/ibm-bluemix-dedicated-github-enterprise
Brian talks with Matt Colyer (@mcolyer; Product Manager at @github | Founder of Flagr and Easel) about the evolution of "software is eating the world": how application development is evolving, how companies are changing their organizations and processes, how IT organizations are dealing with open source software, and how SaaS applications like GitHub have to evolve.
Show Links: Get a free book from O'Reilly Media, or use promo code PCBW for a discount - 40% off print books and 50% off eBooks and videos; GitHub Homepage; GitHub for Business
Show Notes:
Topic 1 - Welcome to the show. Let's talk about your background, not only at GitHub, but also as a developer and entrepreneur prior to GitHub.
Topic 2 - We all know the famous "software is eating the world" quote from Marc Andreessen, but this means that more companies must be building their own software. Let's talk about what GitHub sees from that perspective - establishing software development as a core business competency ("insourcing").
Topic 3 - It's been interesting to watch more companies, not just vendors, actively participate in open source communities - not just using the software, but actively contributing to existing projects and starting their own projects.
Topic 4 - The public cloud has awesome services available, but not every company feels like they can use the public cloud, so we're seeing more "on-premises" offerings from public cloud companies. Can you talk about the benefits and challenges of these offerings?
Topic 5 - What's a trend that you're seeing from smaller companies that you don't think enough larger businesses follow today, but could be easily adopted to make them better at software development or just more agile as a business?
Feedback? Email: show at thecloudcast dot net / Twitter: @thecloudcastnet / YouTube: Cloudcast Channel
Gource — open source visualization tool, example Haydle visualization
Ember 1.8.0 — the move to HTMLBars
React.js: How does it fit in with everything else?
GitHub Enterprise on AWS
At AWS re:Invent this week – AWS Lambda – cloud computing functionally – oh, and there's support for Docker via containers
Rob Eisenberg leaves Angular team
Khan ... Read More
The post DevNews #93 – Angular 2.0 news, Ember reaches 1.8.0, and Minecraft to learn programming? appeared first on Chariot Solutions.
This is the sixteenth episode of Hack To Start. Your hosts, Franco Varriano (on Twitter @FrancoVarriano) and Tyler Copeland (on Twitter @TylerCopeland), speak with Zach Holman (on Twitter @Holman), the ninth employee at GitHub and the founder of speaking.io. He speaks with us about open source projects, building useful products, and how to speak effectively in public. Zach initially worked on what would become GitHub Enterprise and now mostly speaks on the subjects of building products, growing startups, and how to give great talks.
With Naoya Ito as our guest, we talked about the iPad Air 2, Kindle Voyage, Google Computing Live, AWS re:Invent, Aurora, GitHub Enterprise, and more. Show Notes Please welcome Skype for Web (Beta) - Skype Blogs Apple - iPad - Compare iPad models the dankogai method AnandTech | Apple A8X's GPU - GXA6850, Even Better Than I Thought Nexus 9 vs. iPad Air 2: A (Mostly) Subjective Comparison Kindle Voyage A Voyage to 2009 - Marco.org New Nintendo 3DS Google Container Engine - Google Cloud Platform GoogleCloudPlatform/kubernetes Managed VMs - Google App Engine - Google Cloud Platform Amazon EC2 Container Service (ECS) - Container Management for the AWS Cloud Amazon Aurora - New Cost-Effective MySQL-Compatible Database Engine for Amazon RDS Aurora lets you simulate failures using SQL The Netflix Tech Blog: Introducing Dynomite - Making Non-Distributed Databases, Distributed AWS Lambda - Run Code in the Cloud New AWS Tools for Code Management and Deployment .NET Core is Open Source - .NET Blog Mobile App Development & App Creation Software - Xamarin .NET Foundation Welcome, New Emacs Developers | Random Thoughts GNU Emacs to migrate its project source code management tool from Bazaar to Git Go team member here. I've used five different code review tools, and Github is ... | Hacker News Reviewable - GitHub Code Reviews Done Right GitHub Enterprise - The best way to build and ship software Octocat Ad
With Masayoshi Sekimura as our guest, we talked about English for engineers, GitHub Enterprise, Phabricator, Kibana, and more. Show Notes Android KitKat KitKat mocks Apple with Android 4.4 Parody Video Have a break. #KitKat Slow OEM step aside: Google is defragging Android Founder's Accents Founder's Accents (Japanese translation) Why knowing English is important for every software developer English has been my pain for 15 years English has been my pain for 15 years (Japanese translation) Phabricator Pivotal Tracker cookpad/kage Kibana 3 Interview with the Github Elasticsearch team Splunk A comparison of Solr and ElasticSearch Iwata Asks: StreetPass Relay Stations Amazon CloudSearch Karma
This episode is hosted by Daniel, with guests SaitoWu and Dingding Ye. Wu Xin (Saito) is one of the core developers of GitLab, the well-known self-hosted Git repository open source project, and was a speaker at RubyConfChina 2012. In this episode SaitoWu continues the topic from episode six and chats with us about the story of GitLab, including GitLab's project lead Randx, how Wu Xin became one of GitLab's core developers, what the rest of us can learn from the influence this open source project has had on him, and his outlook on GitLab's future development. At the end of the show, Wu Xin answers the questions listeners care about most. GitLab topic question collection thread How Gitlab Works How Gitlab Works PPT Randx Gitlab core team members Gitlab.com Randx join Gitlab.com as a co-founder IRC Gitlab Googlegroup grit_ext Skyrim (The Elder Scrolls V: Skyrim) Github Enterprise Building Facebook (打造Facebook) by Wang Huai Smart HTTP Support Scott Chacon Gitosis Gitolite grack Git Transfer Protocols GitLab 5.0 Gitlab Shell Github Hooks Gitlab Hooks Project Management Tools Gitlab-CI Travis-CI Jenkins-CI Gitolite Mirror Gitlab-grit Gitlab-recipes gitlab-vagrant-vm Feedly gitlab_meta Special Guest: saitowu.
This episode is hosted by Daniel, with guests SaitoWu and Dingding Ye. Wu Xin (Saito) is one of the core developers of GitLab, the well-known self-hosted Git repository open source project, and was a speaker at RubyConfChina 2012, where he gave an excellent introduction to how GitLab is implemented. In this episode we are honored to have SaitoWu chat with us about his career and the topics he is interested in, including Git, GitHub, and GitLab. Why Git is better than X Why is Git so good? Unlocking the Secrets of Git Git scaling at GitHub Chatops at Github Sinatra How gitlab works SVN VPN git-svn Blu-ray fans (蓝光党) Ruby Tuesday http://clojure.org/ Haskell Rich Hickey Seven Languages in Seven Weeks JDK8 Github Enterprise authorized_keys Gitosis Gitolite Twisted GitHub is Moving to Rackspace! Engine Yard And GitHub Transition Zach Holman Github Boxen libgit2 Hubot Play - Company's DJ Redcarpet Unicorn html-pipeline Resque Gitcafe Redmine Use pull request The Elements of Scrum Component Hexo Special Guest: saitowu.