Open-source web server and reverse proxy server
In this engaging conversation, Bill Kennedy interviews Peter Kelly, VP of Engineering at Tigera, exploring his journey from early experiences with technology to his current role in the tech industry. They discuss the impact of education, sports, and family background on Peter's career path, as well as the challenges faced by young people today in navigating their futures. The conversation also delves into hiring practices and the importance of personal connections in the recruitment process.
00:00 Introduction
01:00 What is Peter Doing Today?
04:20 First Memory of a Computer
09:30 Family Background
12:00 Secondary School
19:00 Passion for Soccer
24:00 Interviewing and Hiring
31:00 Entering University
40:30 Work Experience
54:00 AI Tooling
01:07:00 First Go Experience
01:14:00 Beginning of Tigera
01:37:30 Contact Info
Connect with Peter: LinkedIn: https://ie.linkedin.com/in/peterkellyonline
Mentioned in this Episode: Tigera: https://www.tigera.io/
Want more from Ardan Labs? You can learn Go, Kubernetes, Docker & more through our video training, live events, or through our blog!
Online Courses: https://ardanlabs.com/education/
Live Events: https://www.ardanlabs.com/live-training-events/
Blog: https://www.ardanlabs.com/blog
Github: https://github.com/ardanlabs
Installing Oracle GoldenGate 23ai is more than just running a setup file—it's about preparing your system for efficient, reliable data replication. In this episode, Lois Houston and Nikita welcome back Nick Wagner to break down system requirements, storage considerations, and best practices for installing GoldenGate. You'll learn how to optimize disk space, manage trail files, and configure network settings to ensure a smooth installation. Oracle GoldenGate 23ai: Fundamentals: https://mylearn.oracle.com/ou/course/oracle-goldengate-23ai-fundamentals/145884/237273 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Nikita: Hello and welcome to Oracle University Podcast! I'm Nikita Abraham, Team Lead of Editorial Services with Oracle University, and I'm joined by Lois Houston, Director of Innovation Programs. Lois: Hi there! Last week, we took a close look at the security strategies of Oracle GoldenGate 23ai. In this episode, we'll discuss all aspects of installing GoldenGate. 00:48 Nikita: That's right, Lois. And back with us today is Nick Wagner, Senior Director of Product Management for GoldenGate at Oracle. Hi Nick! I'm going to get straight into it. What are the system requirements for a typical GoldenGate installation? Nick: As far as system requirements, we're going to split that into two sections. We've got operating system requirements and storage requirements. So with memory and disk, and I know that this isn't the answer you want, but the answer is that it varies. With GoldenGate, the amount of CPU usage that is required depends on the number of extracts and replicats. It also depends on the number of threads that you're going to be using for those replicats. Same thing with RAM and disk usage. That's going to vary with the transaction sizes and the number of long-running transactions. 01:35 Lois: And how does the recovery process in GoldenGate impact system resources? Nick: You've got two things that help the extract recovery. You've got Bounded Recovery, which will store transactions over a certain length of time to disk. It also has a cache manager setting that determines what gets written to disk as part of open transactions. It's not just the simple answer as, oh, it needs this much space. GoldenGate also needs to store trail files for the data that it's moving across. So if there's network latency, or if you expect a certain network outage, or you have certain SLAs for the target database that may not be met, you need to make sure that GoldenGate has enough room to store its trail files as it's writing them. The good news about all this is that you can track it. You can use parameters to set them. And we do have some metrics that we'll provide to you on how to size these environments. So a couple of things on the disk usage. The actual installation of GoldenGate is about 1 to 1.5 gig in size, depending on which version of GoldenGate you're going to be using and what database. 
The trail files themselves, they default to 500 megabytes apiece. A lot of customers keep them on disk longer than necessary, so there are all sorts of purging options available in GoldenGate. You can set up purge rules to say, hey, I want to get rid of my trail files as soon as they're not needed anymore. But you can also say, you know what? I want to keep my trail files around for x number of days, even if they're not needed. That way they can be rebuilt. I can restore from any previous point in time. 03:15 Nikita: Let's talk a bit more about trail files. How do these files grow and what settings can users adjust to manage their storage efficiently? Nick: The trail files grow at about 30% to 35% of the generated redo log data. So if I'm generating 100 gigabytes of redo an hour, then you can expect the trail files to be anywhere from 30 to 35 gigabytes an hour of generated data. And this is if you're replicating everything. Again, GoldenGate's got so many different options. There's so many different ways to use it. In some cases, if you're going to a distributed applications and analytics environment, like a Databricks or a Snowflake, you might want to write more information to the trail file than what's necessary. Maybe I want additional information, such as when this change happened, who the user was that made that change. I can add specific token data. You can also tell GoldenGate to log additional records or additional columns to the trail file that may not have been changed. So I can always say, hey, GoldenGate, replicate and store the entire before and after image of every single row change to your trail file, even if those columns didn't change. And so there's a lot of different ways to do it there. But generally speaking, with the default settings, you're looking at about 30% to 35% of the generated redo log value. System swap can fill up quickly. You do want this as a dedicated disk as well. System swap is often used just for handling the changes, as GoldenGate flushes data from memory down to disk. These are controlled by a couple of parameters. So because GoldenGate is only writing committed activity to the trail file, the log reader inside the database is actually giving GoldenGate not only committed activity but uncommitted activity, too. And this is so it can stay very high speed and very low latency. 05:17 Lois: So, what are the parameters? Nick: There's a cache manager overall feature, and there's a cache directory. That directory controls where that data is actually stored, so you can specify the location of the uncommitted transactions. You can also specify the cache size. And there's not only memory settings here, but there's also disk settings. So you can say, hey, once a cache size exceeds a certain memory usage, then start flushing to disk, which is going to be slower. This is for systems that maybe have less memory but more high-speed disk. You can optimize these parameters as necessary. 05:53 Nikita: And how does GoldenGate adjust these parameters? Nick: For most environments, you're just going to leave them alone. They're automatically configured to look at the system memory available on that system and not use it all. And then as soon as necessary, it'll overflow to disk. There are also intelligent settings built within these parameters and within the cache manager itself that, if it starts seeing a lull in activity or your traditional OLTP-type responses, will actually free up the memory that it has allocated. 
Or if it starts seeing more activity around data warehousing type things where you're doing large transactions, it'll actually hold on to memory a little bit longer. So it kinda learns as it goes through your environment and starts replicating data. 06:37 Lois: Is there anything else you think we should talk about before we move on to installing GoldenGate? Nick: There are a couple of additional things you need to think about with the network as well. So when you're deploying GoldenGate, you definitely want it to use the fastest network. GoldenGate can also use a reverse proxy, especially important with microservices. For the reverse proxy, we typically recommend Nginx. It allows you to access any of the GoldenGate microservices using a single port. GoldenGate also needs either host names or IP addresses to do its communication and to ensure the system is available. It does a lot of communication through TCP and IP as well as WSS. And then it also handles firewalls. So you want to make sure that the firewalls are open for ingress and egress for GoldenGate, too. There are a couple of different privileges that GoldenGate needs when you go to install it. You'll want to make sure that GoldenGate has the ability to write to the home where you're installing it. That's kind of obvious, but we need to say it anyways. There's a utility called oggca.sh. That's the GoldenGate Configuration Assistant that allows you to set up your first deployments and manage additional deployments. That needs permissions to write to the directories where you're going to be creating the new deployments. The extract process needs connection and permissions to read the transaction logs or backups. This is not important for Oracle, but for non-Oracle it is. And then we also recommend a dedicated database user for the extract and replicat connections. 08:15 Are you keen to stay ahead in today's fast-paced world? We've got your back! Each quarter, Oracle rolls out game-changing updates to its Fusion Cloud Applications. And to make sure you're always in the know, we offer New Features courses that give you an insider's look at all of the latest advancements. Don't miss out! Head over to mylearn.oracle.com to get started. 08:41 Nikita: Welcome back! So Nick, how do we get started with the installation? Nick: So when we go to the install, the first thing you're going to do is go ahead and go to Oracle's website and download the software. Because of the way that GoldenGate works, there are only a couple of moving parts. You saw the microservices. There's five or six of them. You have your extract, your replicat, your distribution service, trail files. There's not a lot of moving components. So if something does go wrong, usually it affects multiple customers. And so it's very important that when you go to install GoldenGate, you're using the most recent bundle patch. And you can find this within My Oracle Support. It's not always available directly from OTN or from the Oracle e-delivery website. You can still get them there, but we really want people going to My Oracle Support to download the most recent version. There are a couple of environment variables and certificates that you'll set up as well. And then you'll run the Configuration Assistant to create your deployments. 09:44 Lois: Thanks, Nick, for taking us through the installation of GoldenGate. Because these are highly UI-driven topics, we recommend that you take a deep dive into the GoldenGate 23ai Fundamentals course, available on mylearn.oracle.com. 
Nikita: In our next episode, we'll talk about the Extract process. Until then, this is Nikita Abraham… Lois: And Lois Houston signing off! 10:08 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
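Nick's 30-to-35% rule of thumb lends itself to a quick back-of-the-envelope sizing check. Below is a minimal sketch in Go using the episode's example redo rate; the retention window and function name are illustrative assumptions, not an official Oracle sizing tool.

```go
package main

import "fmt"

// estimateTrailDiskGB returns a low/high estimate of the disk needed for
// GoldenGate trail files, using the ~30-35% of redo volume rule of thumb
// mentioned in the episode. Inputs are assumptions you must measure for
// your own system.
func estimateTrailDiskGB(redoGBPerHour, retentionHours float64) (low, high float64) {
	low = redoGBPerHour * 0.30 * retentionHours
	high = redoGBPerHour * 0.35 * retentionHours
	return low, high
}

func main() {
	// Example from the episode: 100 GB of redo per hour.
	// A 24-hour retention window is a made-up value for illustration.
	low, high := estimateTrailDiskGB(100, 24)
	fmt.Printf("Trail file disk estimate: %.0f-%.0f GB (plus ~1-1.5 GB for the install itself)\n", low, high)
}
```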
In this episode, recorded live at DevWorld 2025 in Amsterdam, we sit down with Dave McAllister, Senior Open Source Technologist at NGINX, for a fast-paced, thought-provoking—and surprisingly funny—conversation about observability, statistics, and Kubernetes traffic management. Dave takes us on a journey through the real meaning behind metrics like mean, median, and mode, and explains why so many DevOps teams misread their alerts and dashboards. Using eye-opening anecdotes (yes, including one about beer sales and marriage licenses), he breaks down the danger of acting on misleading correlations and why using the wrong statistical model can lead to chaos. We also dive deep into the future of Ingress versus the Gateway API, the evolution of NGINX's role in Kubernetes environments, and what makes some tools “just good enough” while others aim for performance and reliability at scale. Expect insights on everything from Poisson distributions to eBPF, all wrapped in Dave's sharp storytelling style and decades of open source experience. Send us a message. Support the show. Like and subscribe! It helps out a lot. You can also find us on: De Nederlandse Kubernetes Podcast - YouTube | Nederlandse Kubernetes Podcast (@k8spodcast.nl) on TikTok | De Nederlandse Kubernetes Podcast. Where can you meet us: Events. This podcast is powered by: ACC ICT - IT continuity for business-critical applications | ACC ICT
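Dave's point about mean, median, and mode is easy to see on a skewed latency sample. Here is a small sketch in Go (the latency values are invented for illustration) showing how a handful of slow requests drag the mean far away from the median, which is one way dashboards end up misread.

```go
package main

import (
	"fmt"
	"sort"
)

// mean returns the arithmetic average of the sample.
func mean(xs []float64) float64 {
	sum := 0.0
	for _, x := range xs {
		sum += x
	}
	return sum / float64(len(xs))
}

// median returns the middle value of the sorted sample.
func median(xs []float64) float64 {
	s := append([]float64(nil), xs...)
	sort.Float64s(s)
	n := len(s)
	if n%2 == 1 {
		return s[n/2]
	}
	return (s[n/2-1] + s[n/2]) / 2
}

func main() {
	// Mostly fast requests plus a few slow outliers (milliseconds, invented).
	latencies := []float64{12, 11, 13, 12, 14, 11, 12, 13, 950, 1200}
	fmt.Printf("mean:   %.1f ms\n", mean(latencies))   // pulled up by the outliers
	fmt.Printf("median: %.1f ms\n", median(latencies)) // what a typical request actually saw
}
```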
From a login for #traefik to blocking access by IP to your self-hosted services, plus five more recommendations to get the most out of your reverse proxy. Although I have been using Traefik as a reverse proxy for several years now, having even survived the transition from 1.7 to 2.X, the truth is that I keep discovering new features and options for squeezing more out of the proxy. In general, most of the recommendations I am going to talk about apply to any proxy, while others are more specific, or at least easier to apply with Traefik. Either way, they are ideas or concepts that can be carried over more or less easily to other proxies such as Caddy or Nginx. The point here is simply to review these recommendations and apply them depending on the solution you have. More information and links in the episode notes.
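One of those recommendations, blocking access by IP to self-hosted services, carries over to any proxy. Here is a rough sketch of the idea in Go rather than in Traefik's own middleware configuration; the allowlist entries, ports, and upstream address are placeholders, not values from the episode.

```go
package main

import (
	"log"
	"net"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// allowOnly wraps a handler and rejects clients whose IP is not in the allowlist.
func allowOnly(allowed map[string]bool, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ip, _, err := net.SplitHostPort(r.RemoteAddr)
		if err != nil || !allowed[ip] {
			http.Error(w, "forbidden", http.StatusForbidden)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	// Placeholder upstream: a self-hosted service on the LAN.
	upstream, err := url.Parse("http://192.168.1.50:8080")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(upstream)

	// Placeholder allowlist: only these client IPs may reach the service.
	allowed := map[string]bool{"127.0.0.1": true, "192.168.1.10": true}

	log.Fatal(http.ListenAndServe(":9090", allowOnly(allowed, proxy)))
}
```

Traefik, Caddy, and Nginx all offer a declarative equivalent of this check; the sketch just makes the mechanism explicit.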
SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
Apple Updates: Apple released updates for iOS, iPadOS, macOS, and visionOS. The updates fix two vulnerabilities which had already been exploited against iOS. https://isc.sans.edu/diary/Apple%20Patches%20Exploited%20Vulnerability/31866
Oracle Updates: Oracle released its quarterly Critical Patch Update. The update addresses 378 security vulnerabilities. Many of the critical updates are for already-known vulnerabilities in open-source software like Apache and NGINX ingress. https://www.oracle.com/security-alerts/cpuapr2025.html
Oracle Breach Guidance: CISA released guidance for users affected by the recent Oracle cloud breach. The guidance focuses on the likely loss of passwords. https://www.cisa.gov/news-events/alerts/2025/04/16/cisa-releases-guidance-credential-risks-associated-potential-legacy-oracle-cloud-compromise
Google Chrome Update: A Google Chrome update released today fixes two security vulnerabilities. One of the vulnerabilities is rated as critical. https://chromereleases.googleblog.com/2025/04/stable-channel-update-for-desktop_15.html
CVE Updates: CISA extended MITRE's funding to operate the CVE numbering scheme. However, a number of other organizations announced that they may start alternative vulnerability registries. https://euvd.enisa.europa.eu/ https://gcve.eu/ https://www.thecvefoundation.org/
In this episode of Linux Out Loud, we explore the latest in Linux hardware experiments, distro discoveries, and creative workflows. Wendy walks through her setup with Fedora 41 and DaVinci Resolve, Matt dives into Windows frustrations, and Nate teases his excitement about trying out Bazzite, even before getting hands-on with the OneXPlayer. We chat about eGPU setups, Wayland oddities, 3D printer troubleshooting with Mainsail and NGINX, and highlight the unique challenges (and wins) of gaming on Linux—like Dark Envoy. It's a lively mix of tech insights, problem-solving, and distro excitement—all wrapped in open-source goodness. Find the rest of the show notes at https://tuxdigital.com/podcasts/linux-out-loud/lol-109/ Contact info Matt (Twitter @MattTDN (https://twitter.com/MattTDN)) Wendy (Mastodon @WendyDLN (https://mastodon.online/@WendyDLN)) Nate (Website CubicleNate.com (https://cubiclenate.com/)) Bill (Discord: ctlinux, Mastodon @ctlinux)
Critical Remote Code Execution vulnerabilities affect Kubernetes controllers. Senior Trump administration officials allegedly use unsecured platforms for national security discussions. Even experts like Troy Hunt get phished. Google acknowledges user data loss but doesn't explain it. Chinese hackers spent four years inside an Asian telecom firm. SnakeKeylogger is a stealthy, multi-stage credential-stealing malware. A cybercrime crackdown results in over 300 arrests across seven African countries. Ben Yelin, Caveat co-host and Program Director, Public Policy & External Affairs at the University of Maryland Center for Health and Homeland Security, joins to discuss the Signal national security leak. Pew Research Center figures out how its online polling got slightly forked. Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign-up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn. CyberWire Guest We are joined by Ben Yelin, Caveat co-host and Program Director, Public Policy & External Affairs at the University of Maryland Center for Health and Homeland Security, on the Signal national security leak. Selected Reading IngressNightmare: critical Kubernetes vulnerabilities in ingress NGINX controller (Beyond Machines) Remote Code Execution Vulnerabilities in Ingress NGINX (Wiz) Ingress-nginx CVE-2025-1974: What You Need to Know (Kubernetes) Trump administration is reviewing how its national security team sent military plans to a magazine editor (NBC News) The Trump Administration Accidentally Texted Me Its War Plans (The Atlantic) How Russian Hackers Are Exploiting Signal 'Linked Devices' Feature for Real-Time Spying (SecurityWeek) Troy Hunt: A Sneaky Phish Just Grabbed my Mailchimp Mailing List (Troy Hunt) 'Technical issue' at Google deletes some customer data (The Register) Chinese hackers spent four years inside Asian telco's networks (The Record) Multistage Info Stealer SnakeKeylogger Attacking Individuals and Businesses to Steal Logins (Cyber Security News) Over 300 arrested in international crackdown on cyber scams (The Record) How a glitch in an online survey replaced the word ‘yes' with ‘forks' (Pew Research) Share your feedback. We want to ensure that you are getting the most out of the podcast. Please take a few minutes to share your thoughts with us by completing our brief listener survey as we continually work to improve the show. Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here's our media kit. Contact us at cyberwire@n2k.com to request more info. The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
In Short #66 we analyze the financial race among the cloud giants AWS, Microsoft, and Google. We discuss the new GPT-4o mini model for GitHub Copilot and the breakthrough features in OpenTofu 1.9. We also take a look at Microsoft's first quantum chip based on Majorana particles. Cloudflare is moving to OpenTelemetry, which significantly simplifies its monitoring infrastructure. Adidas is migrating its Ingress Controllers in Kubernetes, choosing the standard Nginx solution. We also discuss security.txt and an amusing mishap with an AI agent that deleted Coinbase code. If you use OpenTofu instead of Terraform, try the new for_each loop for providers. If you are deploying security.txt on your site, check out Cloudflare's service for dynamically injecting the file. And when you work with AI agents, remember to git commit before issuing the command "delete everything"! Now, no slacking off!
Sponsored by SEC Playground --- Support this podcast: https://podcasters.spotify.com/pod/show/chillchillsecurity/support
Summary: In this episode of De Nederlandse Kubernetes Podcast, Ronald Kers (CNCF Ambassador) and Jan Stomphorst (Solutions Architect at ACC ICT) dive into the world of Kubernetes security and AI with Bart Salaets of F5 Networks. Bart shares his unique perspective as CTO of a leading company in application delivery and security. The episode starts with an introduction to Bart's background in the telco industry and how he eventually ended up at F5. He explains how F5 has evolved from a traditional load balancer in 1996 into a versatile platform for application delivery and security that supports applications in both on-premises and cloud environments.
Highlights:
F5 and Kubernetes: F5's Ingress controllers are designed to support modern Kubernetes environments with a smaller footprint thanks to integration with NGINX technology. Security functionality, such as web application firewalls and API protection, is integrated seamlessly into Kubernetes.
DevOps and Security: Bart discusses the challenge of rapid application development by DevOps teams versus the need for strict security. F5 introduces solutions such as Git-based security policies to bridge the gap between DevOps and security teams.
AI and the Future of Kubernetes: AI workloads are increasingly hosted in Kubernetes. Bart explains how F5 is working on solutions such as Data Processing Units (DPUs) to make AI processes more efficient and sustainable. The impact of AI on security is also discussed, including new risks such as prompt injection and data leaks.
Open Source and NGINX: F5 continues to invest in open source and supports the large NGINX community. Bart hints at new open source functionality in the pipeline, while NGINX Plus offers additional commercial features.
Takeaways: This episode offers insight into the complex balance between innovation, security, and scalability in Kubernetes environments. Bart emphasizes the importance of collaboration between open source and commercial technologies to meet the growing challenges of AI, cloud, and security.
Listen now to episode 73 of De Nederlandse Kubernetes Podcast and discover how F5 Networks is contributing to the future of Kubernetes! Send us a message. Like and subscribe! It helps out a lot. You can also find us on: De Nederlandse Kubernetes Podcast - YouTube | Nederlandse Kubernetes Podcast (@k8spodcast.nl) on TikTok | De Nederlandse Kubernetes Podcast. Where can you meet us: Events. This podcast is powered by: ACC ICT - IT continuity for business-critical applications | ACC ICT
Microsoft has revamped its AI-powered Recall feature, shifting from automatic to opt-in use with enhanced security measures like full encryption and Windows Hello authentication, addressing privacy concerns. California Governor Gavin Newsom vetoed SB-1047, which aimed to regulate AI models, citing concerns that smaller models outside the regulation's scope might pose greater risks. Meanwhile, OpenAI is facing a leadership shakeup as CTO Mira Murati and other key executives depart, coinciding with a shift in the company's focus toward a more traditional startup model under Sam Altman's leadership. This and more on the Gestalt IT Rundown. Time Stamps: 0:00 - Welcome to the Rundown 0:58 - Rackspace Gets Attacked, Logically 4:09 - Hurricane Helene Disrupts Critical Supply Chain 8:34 - Tech Giants and Government Support Spark a New Nuclear Renaissance? 14:11 - F5 Launches NGINX One 17:34 - NIST Gives Tips on Passwords 22:20 - AI Safety Roundup 40:39 - The Weeks Ahead 42:37 - Thanks for Watching Hosts: Tom Hollingsworth: https://www.linkedin.com/in/networkingnerd/ Jack Poller: https://www.linkedin.com/in/jackpoller/ Follow Gestalt IT Website: https://www.GestaltIT.com/ Twitter: https://www.twitter.com/GestaltIT LinkedIn: https://www.linkedin.com/company/Gestalt-IT #Rundown, #Cybersecurity, #AISafety, #AI, @GestaltIT, @TheFuturumGroup, @TechFieldDay, @NetworkingNerd, @Poller, @Rackspace, @NGINX, @NIST, @OpenAI, @Microsoft,
This is a recap of the top 10 posts on Hacker News on September 6th, 2024. This podcast was generated by wondercraft.ai.
(00:39): Did Sandia use a thermonuclear secondary in a product logo? Original post: https://news.ycombinator.com/item?id=41463809&utm_source=wondercraft_ai
(01:54): 2M users but no money in the bank. Original post: https://news.ycombinator.com/item?id=41463734&utm_source=wondercraft_ai
(02:58): Swift is a more convenient Rust. Original post: https://news.ycombinator.com/item?id=41464371&utm_source=wondercraft_ai
(04:06): Nginx has moved to GitHub. Original post: https://news.ycombinator.com/item?id=41466963&utm_source=wondercraft_ai
(05:13): Effects of Gen AI on High Skilled Work: Experiments with Software Developers. Original post: https://news.ycombinator.com/item?id=41465081&utm_source=wondercraft_ai
(06:23): Study: Playing D&D helps autistic players in social interactions. Original post: https://news.ycombinator.com/item?id=41464347&utm_source=wondercraft_ai
(07:41): Hardware Acceleration of LLMs: A comprehensive survey and comparison. Original post: https://news.ycombinator.com/item?id=41470074&utm_source=wondercraft_ai
(08:55): Godot founders had desperately hoped Unity wouldn't 'blow up'. Original post: https://news.ycombinator.com/item?id=41468667&utm_source=wondercraft_ai
(10:06): Inertia.js – Build React, Vue, or Svelte apps with server-side routing. Original post: https://news.ycombinator.com/item?id=41465900&utm_source=wondercraft_ai
(11:11): Parkinson's may begin in the gut, study says, adding to growing evidence. Original post: https://news.ycombinator.com/item?id=41466724&utm_source=wondercraft_ai
This is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai
Scott and Wes serve up an episode on reverse proxy servers. They discuss popular options like CF Tunnels, Caddy, Nginx, Apache, and more, explaining why you might need one for load balancing, SSL certificates, security, and managing multiple servers. Show Notes 00:00 Welcome to Syntax! 01:30 Brought to you by Sentry.io. 02:25 What is reverse proxy? 03:16 Some examples of reverse proxies. 05:04 Why do you need a reverse proxy? 05:09 Combining multiple servers. 06:51 Load balancing. 07:23 SSL certificates. 10:30 Security. 10:37 Conceal your true IP. 11:24 Access management. 12:31 Routing static assets. 13:31 CDN / local. 15:55 Caddy × websocket support. Hit us up on Socials! Syntax: X Instagram Tiktok LinkedIn Threads Wes: X Instagram Tiktok LinkedIn Threads Scott: X Instagram Tiktok LinkedIn Threads Randy: X Instagram YouTube Threads
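To make the "combining multiple servers" and "load balancing" points concrete, here is a minimal round-robin reverse proxy sketch in Go. The backend addresses are placeholders, and a real setup would add the health checks, TLS, and caching concerns the episode walks through.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

func main() {
	// Placeholder backends; in practice these are your app servers.
	backendAddrs := []string{"http://127.0.0.1:3001", "http://127.0.0.1:3002"}

	var proxies []*httputil.ReverseProxy
	for _, addr := range backendAddrs {
		u, err := url.Parse(addr)
		if err != nil {
			log.Fatal(err)
		}
		proxies = append(proxies, httputil.NewSingleHostReverseProxy(u))
	}

	var counter uint64
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Round-robin: pick the next backend for each incoming request.
		i := atomic.AddUint64(&counter, 1)
		proxies[int(i)%len(proxies)].ServeHTTP(w, r)
	})

	// The reverse proxy is the single public entry point for all backends.
	log.Fatal(http.ListenAndServe(":8080", handler))
}
```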
It's only been one week, but it's really been 14 years, and Paul is calling: Microsoft and Qualcomm have finally made Windows 11 on Arm both viable and desirable. Nothing is perfect, but this platform is pretty incredible. Some notes from the past week: Mission Accomplished As a reminder, Paul finally got the Yoga Slim 7x last week and updated on app compatibility, hardware compatibility, gaming, and in-box AI experiences in time for WW - only found one non-working app, Google Drive. Then... More app and game compatibility testing. Since then, played a lot more DOOM (2016), ran into one issue that's surely WOA-related (note beta graphics driver, though) Video encoding performance: Snapdragon X vs Snapdragon X vs Core Ultra 9 H-series vs MacBook Air M3 Initial thoughts on battery life and then More thoughts on battery life. The Yoga Slim 7x and Surface Laptop both get about 10 hours of real-world battery life (so far), compared to 15 hours for the MacBook Air 15-inch M3. Hardware compatibility update: Only one of my devices doesn't work, the Focusrite. Surface Laptop 7 first impressions and second impressions HP Elitebook Ultra first impressions Windows 11 After skipping Week D last Tuesday, Microsoft belatedly delivers a Week D preview update for Windows 11 version 24H2 No new features, so next Patch Tuesday will be light for 24H2 22H2 and 23H2 got a big Week D update last Tuesday, so Patch Tuesday will be meaningful As of July's Patch Tuesday, 22H2, 23H2, and 24H2 will all provide the same basic feature set Canary, Dev, Beta (last Friday): nothing exciting, a few small features or changes AI & Microsoft 365 European Commission "shifts" investigation of Microsoft/OpenAI partnership. Is Ken Starr in charge of this thing? Microsoft highlights new Copilot features coming to Microsoft 365 in July Copilot in OneNote can now recognize handwritten text. It's 2002 all over again! Pixel 9 family will promote unique "Google AI" features Brave introduces a BYOM plan for its web browser Thanks to AI, Google Translate now supports 110 new languages Xbox Xbox Cloud Gaming Fire Sticks it to Amazon Another two weeks of Xbox Game Pass Forza Horizon 4 (from 2018) to be delisted December 15. Why? Tips and Picks Tip of the week: Get $10 off Tony Redmond's Office 365 for IT Pros 11th Edition App pick of the week: Docs in Proton Drive RunAs Radio this week: NGINX as a Service with Buu Lam Brown liquor pick of the week: Jack Daniels Old No. 7 Hosts: Leo Laporte, Paul Thurrott, and Richard Campbell Download or subscribe to this show at https://twit.tv/shows/windows-weekly Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Check out Paul's blog at thurrott.com The Windows Weekly theme music is courtesy of Carl Franklin. Sponsors: canary.tools/twit - use code: TWIT cachefly.com/twit
More application platform pieces make your life better! While at Build in Seattle, Richard sat down with Buu Lam of F5 to discuss F5's latest offering, NGINX as a Service in Azure. Buu discussed how F5's products have evolved to run in the cloud, not just on their hardware. While you could run them as virtual machines or containers, providing them as services in Azure is better. You purchase the service in the marketplace and as part of your Azure billing. The conversation digs into the advantages of the services model in terms of updating and instrumentation, as well as reducing the complexity of your infrastructure as code. Links: NGINX | Kubernetes | BIG-IP Next | F5 Distributed Cloud | NGINX as a Service on Azure | DevCentral at F5. Recorded May 21, 2024
In this episode we welcome Yuchen back to keep talking about Cloudflare and about Pingora, the network framework he led and open-sourced. Pingora is a framework written in Rust that lets developers build custom servers on top of it. Its development grew out of Cloudflare's years of experience and requirements: they found that their proxies needed large amounts of business-logic code rather than configuration, and that writing that logic in Lua or as configuration was not ideal. We also discuss the technical decisions and challenges involved in developing Pingora, as well as Cloudflare's culture and hiring.
Guest: Yuchen Wu. Hosts: laixintao, NadeshikoManju, laike9m.
Chapters:
00:03 The story and reasons behind Cloudflare's Pingora project
04:53 OpenResty, a powerful programming tool built on Lua embedded in Nginx
08:47 The characteristics and limitations of Lua
13:03 The difficulty of C development in Nginx and of maintaining Lua, and issues on ARM
16:10 Problems with the Nginx architecture and the challenges that needed solving
22:25 The decision process behind rewriting in Rust
24:47 Experience developing in Rust and practices within the company
27:07 Rust development and API design
30:32 Traffic migration and evaluating the cutover
32:53 Discussion of development-speed improvements and handling issues
37:15 The story of open-sourcing the Pingora framework, its API design, and its extensibility
40:36 The open-source discussion and decision process, the advantages of Rust, and the reasons for concern
44:22 Nginx's history and its changing relationship with F5
46:06 The Pingora open-source project and its fairy-tale growth story
50:18 Cloudflare's culture and hiring
53:40 Cloudflare: an overwhelming, unmatched presence in tech
Links: Pingora Nginx OpenResty Lua F5 Completes Acquisition of NGINX
Software Engineering Radio - The Podcast for Professional Software Developers
Infrastructure engineer and Kubernetes ingress-Nginx maintainer James Strong joins host Robert Blumen to discuss the Kubernetes networking layer. The discussion draws on content from Strong's book on the topic and covers a lot of ground, including: the Kubernetes network's use of different IP ranges than the host network; overlay network with its own IP ranges compared to using expanded portions of the host network ranges; adding routes with kernel extension points; programming kernel extension points with IP tables compared to eBPF; how routes are updated as the host network gains or loses nodes, the use of the Linux network namespace to isolate each pod; routing between pods on the same host; routing between pods across the host network; the container-network interface (CNI); the CNI ecosystem; differences between CNIs; choosing a CNI when running on a public cloud service; the Kubernetes service abstraction with a cluster-wide IP address; monitoring and telemetry of the Kubernetes network; and troubleshooting the Kubernetes network. Brought to you by IEEE Software magazine and IEEE Computer Society.
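To make the Service abstraction with a cluster-wide IP address concrete, here is a small client-go sketch in Go that lists every Service and its ClusterIP. It assumes a reachable cluster and a kubeconfig in the default location, and it is an illustration rather than code from the episode or the book.

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes ~/.kube/config points at a reachable cluster.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// List Services in all namespaces; each gets a stable, cluster-wide virtual IP.
	svcs, err := clientset.CoreV1().Services("").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, svc := range svcs.Items {
		fmt.Printf("%s/%s -> ClusterIP %s\n", svc.Namespace, svc.Name, svc.Spec.ClusterIP)
	}
}
```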
Join us at our first in-person conference on June 25 all about AI Quality: https://www.aiqualityconference.com/ Peter Guagenti is an accomplished business builder and entrepreneur with expertise in strategy, product development, marketing, sales, and operations. Peter has helped build multiple successful start-ups to exits, fueling high growth in each company along the way. He brings a broad perspective, deep problem-solving skills, the ability to drive innovation amongst teams, and a proven ability to convert strategy into action -- all backed up by a history of delivering results. Huge thank you to AWS for sponsoring this episode. AWS - https://aws.amazon.com/ MLOps podcast #222 with Peter Guagenti, President & CMO of Tabnine - What Business Stakeholders Want to See from the ML Teams.
// Abstract Peter Guagenti shares his expertise in the tech industry, discussing topics from managing large-scale tech legacy applications and data experimentation to the evolution of the Internet. He revisits his history of building and transforming businesses, such as his work in the early 90s for People magazine's website and his current involvement in AI development for software companies. Guagenti discusses the use of predictive modeling in customer management and emphasizes the importance of re-architecting solutions to fit customer needs. He also delves into the effectiveness of AI tools in software development and the value of maintaining privacy. Guagenti sees a bright future in AI democratization and shares his company's development of AI coding assistants. Discussing successful entrepreneurship, Guagenti highlights balancing technology and go-to-market strategies and the value of failing fast.
// Bio Peter Guagenti is the President and Chief Marketing Officer at Tabnine. Guagenti is an accomplished business leader and entrepreneur with expertise in strategy, product development, marketing, sales, and operations. He most recently served as chief marketing officer at Cockroach Labs, and he previously held leadership positions at SingleStore, NGINX (acquired by F5 Networks), and Acquia (acquired by Vista Equity Partners). Guagenti also serves as an advisor to a number of visionary AI and data companies including DragonflyDB, Memgraph, and Treeverse.
// MLOps Jobs board https://mlops.pallet.xyz/jobs // MLOps Swag/Merch https://mlops-community.myshopify.com/
// Related Links AI Quality in Person Conference: https://www.aiqualityconference.com/
Measuring the impact of GitHub Copilot Survey: https://resources.github.com/learn/pathways/copilot/essentials/measuring-the-impact-of-github-copilot/
AWS Trainium and Inferentia: https://aws.amazon.com/machine-learning/trainium/ https://aws.amazon.com/machine-learning/inferentia/
AI coding assistants: 8 features enterprises should seek: https://www.infoworld.com/article/3694900/ai-coding-assistants-8-features-enterprises-should-seek.html
Careers at Tabnine: https://www.tabnine.com/careers
--------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Peter on LinkedIn: https://www.linkedin.com/in/peterguagenti/
This episode covers Netflix's Java experience report, jQuery, open source governance, Elon Musk, Kubernetes, Mistral (winner?), attacks on LLMs, career development, and Attention Deficit Disorder with or without Hyperactivity. Recorded on March 15, 2024. Episode download: LesCastCodeurs-Episode-308.mp3
News
The cast codeurs want to try something new, and the survey shows you do too. So we are launching an Ask Me Anything section: ask us a question at https://lescastcodeurs.com/ama and we will pick some questions and answer them. Go for it, we think this section could be fun :)
Languages
Netflix's experience report on ZGC https://netflixtechblog.com/bending-pause-times-to-your-will-with-generational-zgc-256629c9386b Less tail latency, which means less load on the system (fewer retries). It is also easier to find the real latency problems (no longer hidden inside GC pauses), with no extra CPU consumption for the same performance despite ZGC's different barriers. No explicit tuning on their part (well, almost). Even though pointers are not compressed, the GC's efficiency compensates.
Libraries
Spock 2.4-M2 released https://spockframework.org/spock/docs/2.4-M2/release_notes.html Support for several mocking libraries. Better support in IDEs. And plenty of other small improvements.
jQuery 4 is out! jQuery is back! https://www.infoq.com/news/2024/03/jquery-4-beta-release-note/ We regularly talk about the latest fashionable JavaScript framework, but jQuery is still around. First major release in 8 years. Removal of many features that were deprecated and are now often provided by default by browsers' JavaScript engines. jQuery keeps being downloaded more and more over time, perhaps because it benefits from the success of projects that use it, such as Cypress, WordPress, or Drupal.
Quarkus releases its second LTS https://quarkus.io/blog/quarkus-3-8-released/ Explains the important changes since the 3.2 LTS.
Infrastructure
Linkerd, or rather the company behind it, will start charging for access to the project's stable builds.
This is sparking conversations within the CNCF https://www.techtarget.com/searchitoperations/news/366571035/Linkerd-paywall-prompts-online-debate-CNCF-TOC-review Deploying Envoy is harder. Buoyant is the main contributor behind Linkerd, and they decided to put the stable distributions behind a paywall for companies with more than 50 employees ($2,000 per cluster). People feel cheated: they helped the project succeed and then found themselves trapped. The license remains ASL, but the stable version is behind a paywall, like Red Hat Enterprise Linux recently. Another example of an open source project going commercial. It raises questions about open source governance; the CNCF will investigate and may tighten its graduation criteria. Weaveworks (Flux) also shut down in recent weeks.
Cloudflare rewrote an HTTP proxy in Rust https://blog.cloudflare.com/how-we-built-pingora-the-proxy-that-connects-cloudflare-to-the-internet/ They used Nginx for a long time, but the single-worker model did not allow certain optimizations and they have specific needs. In short, they rewrote it in Rust, multi-threaded with work stealing, and they are happy with it.
The hater's guide to Kubernetes https://paulbutler.org/2024/the-haters-guide-to-kubernetes/ The author regularly complains about Kubernetes for its great complexity but acknowledges that it is still a great piece of technology. To be used above all when you need to: run several processes/servers/scheduled tasks; run them redundantly and spread the load between them; configure them, and the relationships between them, as code. The author then lists the features he uses, those he is careful with, and those he prefers to avoid. Uses: deployments, services, cron jobs, config maps, and secrets. Careful with: stateful sets, persistent volumes, and RBAC. Avoids: hand-written YAML, operators and custom resources, Helm, anything mesh, ingress resources, and trying to replicate the full K8s stack locally on his machine.
Data and Artificial Intelligence
Mistral AI and Microsoft strike a deal on Mistral's most powerful model, and some people are not happy https://www.latribune.fr/technos-medias/informatique/l-alliance-entre-mistral-et-micr[…]usion-de-l-independance-technologique-europeenne-991558.html Mistral championed its open source approach, but its most powerful model is not open source. They have an exclusive partnership with Microsoft to distribute it, and Microsoft is taking a stake in the company. Goodbye to the independence of European AI, goodbye to large open source models. This runs counter to its lobbying and its positioning with the European Commission; it makes Brussels grind its teeth, having lightened the constraints on foundation models at Mistral's request, after Mistral threatened that it would have to ally with Microsoft otherwise.
Mistral was a spearhead of open source models as a way to avoid bias; they will keep some, but not the specialized or optimized models. It remains a good economic decision for Mistral.
Infinispan 15 is out https://infinispan.org/blog/2024/03/13/infinispan-15 JDK 17. Redis hot replacement, giving: multi-threading, clustering, cross-site replication, different persistent disk stores, and the ability to have different caches in different namespaces with rules applied to each use case. Vector search and embedding storage. Integration with LangChain (Python), LangChain4j, and Quarkus LangChain. Improvements to search, cross-site replication, the console, tracing, the Kubernetes Operator… Protobuf 3 support with the Protostream 5 release and a better API.
Tooling
Don't sign your commits cryptographically? https://blog.glyph.im/2024/01/unsigned-commits.html The article cites as the only advantage getting the little green "verified" badge on GitHub. Unknown and potentially unlimited future liability for the consequences of executing code in a commit you signed. Implicit reinforcement of GitHub as a centralized trust authority in the open source world. Introduction of unknown reliability problems into infrastructure that relies on commit signatures. A temporary compromise of your GitHub credentials now has potentially permanent consequences if someone manages to introduce a new trusted key. A new kind of ongoing process overhead: commit-signing keys become new permanent infrastructure to manage, with new questions such as "what do I do with expired keys" and "how often should I rotate them", etc. You can prevent unsigned commits from being pushed.
Security
Models with backdoors uploaded to Hugging Face, undetected https://arstechnica.com/security/2024/03/hugging-face-the-github-of-ai-hosted-code-that-backdoored-user-devices/ Found by JFrog researchers. About a hundred detected, of which 10 were malicious. Mostly researchers' tests, but one opened a reverse SSH connection. Relies on Python's pickle serialization format, which is popular but known to be dangerous.
A first side-channel attack on LLMs https://arstechnica.com/security/2024/03/hackers-can-read-private-ai-assistant-chats-even-though-theyre-encrypted/ It relies on the size of the encrypted packets sent and their timing to detect token lengths. A specialized LLM then reconstructs the most likely sequence of words given those lengths. This is due to the UX streaming tokens as they are generated. It is easily fixable by making packets a fixed size and adding random delays when sending.
A first side-channel attack on LLMs https://arstechnica.com/security/2024/03/hackers-can-read-private-ai-assistant-chats-even-though-theyre-encrypted/
It relies on the size of the encrypted packets sent, and on their timing, to infer the length of each token. A specialized LLM then reconstructs the most likely sequence of words given those lengths.
It is caused by the UX, which streams tokens as they are produced.
It is easily fixable by making the packets a fixed size and adding some randomness to the send delay.
Still, it is funny how LLMs can amplify side-channel attacks.
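As a rough sketch of the mitigation described above (fixed-size packets plus random send delays), and purely illustrative rather than what any vendor actually shipped: the streaming side can pad every token chunk to a constant block size and add jitter before sending. The block size, function names and 50 ms jitter bound below are arbitrary choices for the example.

```python
import os
import random
import time

BLOCK = 256  # pad every streamed chunk to this fixed size (arbitrary value)

def pad_chunk(token_text: str) -> bytes:
    """Pad a streamed token to a constant-size block so ciphertext length
    no longer reveals the token's length."""
    raw = token_text.encode("utf-8")
    if len(raw) > BLOCK - 2:
        raise ValueError("token larger than block size")
    # 2-byte length prefix + payload + random filler up to BLOCK bytes
    filler = os.urandom(BLOCK - len(raw) - 2)
    return len(raw).to_bytes(2, "big") + raw + filler

def send_with_jitter(send, token_text: str, max_jitter_s: float = 0.05) -> None:
    """Send a padded chunk after a small random delay to blur timing."""
    time.sleep(random.uniform(0, max_jitter_s))
    send(pad_chunk(token_text))

# Example: collect padded chunks instead of writing them to a socket.
out = []
for tok in ["Hello", ",", " world"]:
    send_with_jitter(out.append, tok)
print([len(c) for c in out])  # every chunk is the same size: [256, 256, 256]
```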
Architecture
Netflix and Java https://www.infoq.com/presentations/netflix-java/
Netflix is a Java shop. The "Netflix stack" the public knows has evolved a lot. Lots of microservices.
Gen 1: Groovy in the gateway, front end for backend, RxJava and Hystrix.
Gen 2: GraphQL and federated GraphQL; no more reactive in the gateway.
Java 17: 2,800 Java apps use the Azul JDK; they had Java 8 on Guice and custom app frameworks; they use G1; Java 17 = -20% CPU, and Shenandoah for the Zuul gateway.
Plans for Java 21 (ZGC, virtual threads); beyond that is speculative.
They standardized on Spring Boot not that long ago.
A long article on microservices https://mcorbin.fr/posts/2024-02-12-microservice/
Yet another one, you will say. Yes, but if you are in the middle of an existential crisis with your team, it is good material.
It goes over the important points such as synchronous vs asynchronous, communication patterns, copying data, how to test the "monolith" (or rather how not to), etc.
It is a bit long, but it puts things back in perspective.
Methodologies
Opinion: can you become a developer after 40? https://www.codemotion.com/magazine/dev-life/can-you-become-a-programmer-after-40/?utm_source=ActiveCampaign&utm_medium=email&utm_content=5+Frontend+Trends+we+Didn+t+See+Coming+in+2024&utm_campaign=NL_EN_240215+%28Copy%29&vgo_ee=sFCRn4bbw8NuvJwlxj4PgXiVS4eICnA1ZPdkH4DGKyhNNwh6NQ%3D%3D%3Au3g96%2Fz3Uf7kZHAF7tezy9Y0ZJ6paAsE
A CSS programmer at 40, I don't know :stuck_out_tongue_winking_eye:
The author regrets the ads promising you can easily become a dev at 40. Being a developer takes a lot of knowledge and work, and it has to be a choice, not a default or easy one.
He describes some biases: a 20-year-old with no experience is forgiven more easily than a 40-year-old, the time you can devote to it is different, etc. This can be offset by visible signs of motivation (GitHub, open source contributions, meetups, and so on), but there is no real shortcut for the time it takes to learn from your own mistakes.
In short, a training course is good but not sufficient.
Navigate your own way https://www.infoq.com/presentations/lessons-opportunities-carrier/
An IBMer for 21 years and a Java Champion.
Thinking about your career in time boxes, chasing the next promotion? You can decide your own path.
The momentum of the pandemic made her reflect a lot on her life and where she was, at the moment she was leaving IBM for Red Hat (heart breaks).
Essentials for taking your own path: know yourself and recognize how you differ from others; know your values: what matters to you, what motivates you, what demotivates you.
Write things down. Set goals with the help of others. Push your limits on subjects you think are out of reach for you. Participate actively, and surround yourself well.
A very personal and inspiring talk.
An article on ADHD in the adult developer https://rlemaitre.com/fr/posts/2023/11/hacker-le-tdah-strat%C3%A9gies-pour-le-d%C3%A9veloppeur-moderne/
Diagnosed at 44.
A pattern of inattention and hyperactivity/impulsivity that interferes with functioning; it affects social, school, or professional life.
Undiagnosed, it can lead to burnout, anxiety, or depression. It often goes undiagnosed until one's own children are diagnosed.
But it also brings positives: hyperfocus, creative problem solving, and fast adaptation to change, which are a blessing.
The downsides are time management, organization, and instability.
The article then discusses what happens in the brain, and gives techniques to use and traps to avoid.
You surely have colleagues with ADHD, or you may have it yourself.
Conferences
The list of conferences comes from Developers Conferences Agenda/List by Aurélie Vache and contributors:
14-15 March 2024: pgDayParis - Paris (France)
17-18 March 2024: Cloud Native Rejekts EU 2024 - Paris (France)
19 March 2024: AppDeveloperCon - Paris (France)
19 March 2024: ArgoCon - Paris (France)
19 March 2024: BackstageCon - Paris (France)
19 March 2024: Cilium + eBPF Day - Paris (France)
19 March 2024: Cloud Native AI Day Europe - Paris (France)
19 March 2024: Cloud Native StartupFest Europe - Paris (France)
19 March 2024: Cloud Native Wasm Day Europe - Paris (France)
19 March 2024: Data on Kubernetes Day - Paris (France)
19 March 2024: Istio Day Europe - Paris (France)
19 March 2024: Kubeflow Summit Europe - Paris (France)
19 March 2024: Kubernetes on Edge Day Europe - Paris (France)
19 March 2024: Multi-Tenancy Con - Paris (France)
19 March 2024: Observability Day Europe - Paris (France)
19 March 2024: OpenTofu Day Europe - Paris (France)
19 March 2024: Platform Engineering Day - Paris (France)
19 March 2024: ThanosCon Europe - Paris (France)
19 March 2024: PaaS Forward by OVHcloud | Rancher by SUSE - Paris (France)
19-21 March 2024: CloudNativeHacks - Paris (France)
19-21 March 2024: IT & Cybersecurity Meetings - Paris (France)
19-22 March 2024: KubeCon + CloudNativeCon Europe 2024 - Paris (France)
21 March 2024: IA & Data Day Strasbourg - Strasbourg (France)
22-23 March 2024: Agile Games France - Valence (France)
26-28 March 2024: Forum INCYBER Europe - Lille (France)
27 March 2024: La Conf Data | IA - Paris (France)
28-29 March 2024: SymfonyLive Paris 2024 - Paris (France)
28-30 March 2024: DrupalCamp Roazhon - Rennes (France)
4 April 2024: SoCraTes Rennes 2024 - Rennes (France)
4-6 April 2024: Toulouse Hacking Convention - Toulouse (France)
8 April 2024: Lyon Craft - Lyon (France)
9 April 2024: Unconf HackYourJob - Lyon (France)
11 April 2024: CI/CDay - Paris (France)
17-19 April 2024: Devoxx France - Paris (France)
18-20 April 2024: Devoxx Greece - Athens (Greece)
22 April 2024: React Connection 2024 - Paris (France)
23 April 2024: React Native Connection 2024 - Paris (France)
25-26 April 2024: MiXiT - Lyon (France)
25-26 April 2024: Android Makers - Paris (France)
3-4 May 2024: Faiseuses Du Web 3 - Dinan (France)
8-10 May 2024: Devoxx UK - London (UK)
16-17 May 2024: Newcrafts Paris - Paris (France)
22-25 May 2024: Viva Tech - Paris (France)
24 May 2024: AFUP Day Nancy - Nancy (France)
24 May 2024: AFUP Day Poitiers - Poitiers (France)
24 May 2024: AFUP Day Lille - Lille (France)
24 May 2024: AFUP Day Lyon - Lyon (France)
28-29 May 2024: Symfony Live Paris - Paris (France)
1 June 2024: PolyCloud - Montpellier (France)
6-7 June 2024: DevFest Lille - Lille (France)
6-7 June 2024: Alpes Craft - Grenoble (France)
7 June 2024: Fork it! Community - Rouen (France)
11-12 June 2024: OW2con - Paris (France)
12-14 June 2024: Rencontres R - Vannes (France)
13-14 June 2024: Agile Tour Toulouse - Toulouse (France)
14 June 2024: DevQuest - Niort (France)
18 June 2024: Tech & Wine 2024 - Lyon (France)
19-20 June 2024: AI_dev: Open Source GenAI & ML Summit Europe - Paris (France)
19-21 June 2024: Devoxx Poland - Krakow (Poland)
27 June 2024: DotJS - Paris (France)
27-28 June 2024: Agi Lille - Lille (France)
4-5 July 2024: Sunny Tech - Montpellier (France)
8-10 July 2024: Riviera DEV - Sophia Antipolis (France)
6 September 2024: JUG Summer Camp - La Rochelle (France)
19-20 September 2024: API Platform Conference - Lille (France) & Online
2-4 October 2024: Devoxx Morocco - Marrakech (Morocco)
7-11 October 2024: Devoxx Belgium - Antwerp (Belgium)
10 October 2024: Cloud Nord - Lille (France)
10-11 October 2024: Volcamp - Clermont-Ferrand (France)
10-11 October 2024: Forum PHP - Marne-la-Vallée (France)
16 October 2024: DotPy - Paris (France)
17-18 October 2024: DevFest Nantes - Nantes (France)
17-18 October 2024: DotAI - Paris (France)
6 November 2024: Master Dev De France - Paris (France)
7 November 2024: DevFest Toulouse - Toulouse (France)
8 November 2024: BDX I/O - Bordeaux (France)
Contact us
To react to this episode, come and discuss it on the Google group https://groups.google.com/group/lescastcodeurs
Contact us on Twitter https://twitter.com/lescastcodeurs
Send a crowdcast or a crowdquestion
Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs
All the episodes and all the info at https://lescastcodeurs.com/
Get my backend course https://backend.win Cloudflare has announced they are open sourcing Pingora as a networking framework! Big news, let us discuss 0:00 Intro 0:30 Reasons why Cloudflare built Pingora? 3:00 It is a framework! 7:30 What's in Pingora? 11:50 Security in Pingora 13:45 Multi-threading in Pingora 21:00 Customization vs Configuration 25:00 Summary https://blog.cloudflare.com/pingora-open-source/?utm_campaign=cf_blog&utm_content=20240228&utm_medium=organic_social&utm_source=twitter
In this episode of the Laravel Podcast, we are diving into the highlights of Laracon EU including the unveiling of Laravel 11 and the introduction of Laravel Reverb. Taylor Otwell shares insights on the streamlined application structure and new features in Laravel 11. We also discuss the launch of Laravel Herd for Windows and Herd Pro, offering power user features for local development, and provide some exciting updates about the upcoming Laracon US.Taylor Otwell's Twitter - https://twitter.com/taylorotwellMatt Stauffer's Twitter - https://twitter.com/stauffermattLaravel Twitter - https://twitter.com/laravelphpLaravel Website - https://laravel.com/Tighten Website - https://tighten.com/Laracon EU Photo Gallery Tweet - https://x.com/LaraconEU/status/1755957896209113444?s=20Laravel Reverb - https://laravel.com/docs/master/reverbLaravel 11 - https://laravel.com/docs/master/releasesThierry Laverdure's Project - https://github.com/tlaverdure/laravel-echo-serverPusher - https://pusher.com/Ably - https://ably.com/Laravel Herd - https://herd.laravel.com/Adam Wathan Twitter - https://twitter.com/adamwathanJess Archer Twitter - https://twitter.com/jessarchercodesLuke Downing Twitter - https://twitter.com/lukedowning19Daniel Coulbourne Twitter - https://twitter.com/DCoulbourneJoe Dixon Twitter - https://twitter.com/_joedixonPhilo Hermans Twitter - https://twitter.com/Philo01?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5EauthorLaracon US - https://laracon.us/Laracon CFP Talk Submission Form - https://docs.google.com/forms/d/e/1FAIpQLSdlyTDvqeKNB3r-wVNmDBlE23oHKEL4m8lzL5nci0YPH_5WYA/viewform-----Editing and transcription sponsored by Tighten.
We chat about VMware's rug pull with Bret, aka Raid Owl, and then get into Unraid's big changes and more. Special Guest: Raid Owl.
This week, we discuss open source forks, what's going on at OpenAI and checkin on the IRS Direct File initiative. Plus, plenty of thoughts on taking your annual Code of Conduct Training. Watch the YouTube Live Recording of Episode (https://www.youtube.com/watch?v=PAwXvnb53iY) 455 (https://www.youtube.com/watch?v=PAwXvnb53iY) Runner-up Titles I live my life one iCal screen at a time We always have sparklers Meta-parenting Everyone is always tired Cheaper version of Red Hat This week in “Do we need to be angry?” All we get is wingdings. I'm in a Socialist mood this week Pies shot out of my eyes and stuff Those dingalings bought my boat Dingalings of the mind Rundown CIQ Offers Long-Term Support for Rocky Linux 8.6, 8.8 and 9.2 Images Through AWS Marketplace (https://ciq.com/press-release/ciq-offers-long-term-support-for-rocky-linux-8-6-8-8-and-9-2-images-through-aws-marketplace/) Will CIQ's new support program alienate the community (https://medium.com/@gordon.messmer/will-ciqs-new-support-program-alienate-the-community-it-built-on-an-objection-to-subscriber-only-fb58ea6a810e) NGINX fork (https://narrativ.es/@janl/111935559549855751)? freenginx.org (http://freenginx.org/en/) Struggling database company MariaDB could be taken private in $37M deal (https://techcrunch.com/2024/02/19/struggling-database-company-mariadb-could-be-taken-private-in-a-37m-deal/) Tofu (https://opentofu.org) So Where's That New OpenAI Board? (https://www.theinformation.com/articles/so-wheres-that-new-openai-board?utm_source=ti_app&rc=giqjaz) The IRS has all our tax data. Why doesn't its new website use it? (https://www.washingtonpost.com/business/2024/02/04/direct-file-irs-taxes/) Relevant to your Interests Apple on course to break all Web Apps in EU within 20 days - Open Web Advocacy (https://open-web-advocacy.org/blog/apple-on-course-to-break-all-web-apps-in-eu-within-20-days/) Bringing Competition to Walled Gardens - Open Web Advocacy (https://open-web-advocacy.org/walled-gardens-report/#apple-has-effectively-banned-all-third-party-browsers) Introducing the Column Explorer: a bird's-eye view of your data (https://motherduck.com/blog/introducing-column-explorer/?utm_medium=email&_hsmi=294232392&_hsenc=p2ANqtz-8vobC3nom9chsGc_Y8KM9pO75KKvrGTtL7uS-sfcNQ1sNd8ThaMnP5KsfbSUWCWW2KOjlPpa3AwC4ToYbaCmYOAMva0rvKIZ2jkB461YKJX2TLQtg&utm_content=294233055&utm_source=hs_email) Apple TV+ Became HBO Before HBO Could Become Netflix (https://spyglass.org/its-not-tv-its-apple-tv-plus/?utm_source=substack&utm_medium=email) Sora: Creating video from text (https://openai.com/sora) Sustainability, a surprisingly successful KPI: GreenOps survey results - ClimateAction.Tech (https://climateaction.tech/blog/sustainability-kpi-greenops-survey-results/) Slack AI has arrived (https://slack.com/intl/en-gb/blog/news/slack-ai-has-arrived) What's new and cool? 
- Adam Jacob (https://youtu.be/gAYMg6LNEMs?si=9PRiK1BBHaBGSypy) Apple is reportedly working on AI updates to Spotlight and Xcode (https://www.theverge.com/2024/2/15/24074455/apple-generative-ai-xcode-spotlight-testing) Apple Readies AI Tool to Rival Microsoft's GitHub Copilot (https://www.bloomberg.com/news/articles/2024-02-15/apple-s-ai-plans-github-copilot-rival-for-developers-tool-for-testing-apps) VMs on Kubernetes with Kubevirt session at Kubecon (https://kccnceu2024.sched.com/event/1YhIE/sponsored-keynote-a-cloud-native-overture-to-enterprise-end-user-adoption-fabian-deutsch-senior-engineering-manager-red-hat-michael-hanulec-vice-president-and-technology-fellow-goldman-sachs) Air Canada must honor refund policy invented by airline's chatbot (https://arstechnica.com/tech-policy/2024/02/air-canada-must-honor-refund-policy-invented-by-airlines-chatbot/?comments=1&comments-page=1) Microsoft 'retires' Azure IoT Central in platform rethink (https://www.theregister.com/2024/02/15/microsoft_retires_azure_iot_central/) The big design freak-out: A generation of design leaders grapple with their future (https://www.fastcompany.com/91027996/the-big-design-freak-out-a-generation-of-design-leaders-grapple-with-their-future) Most of the contents of the Xerox PARC team's work were tossed into a dumpster (https://x.com/DynamicWebPaige/status/1759071289401368635?s=20) 1Password expands its endpoint security offerings with Kolide acquisition (https://techcrunch.com/2024/02/20/1password-expands-its-endpoint-security-offerings-with-kolide-acquisition/) Microsoft Will Use Intel to Manufacture Home-Grown Processor (https://www.bloomberg.com/news/articles/2024-02-21/microsoft-will-use-intel-to-manufacture-home-grown-processor) In a First, Apple Captures Top 7 Spots in Global List of Top 10 Best-selling Smartphones - Counterpoint (https://www.counterpointresearch.com/insights/apple-captures-top-7-spots-in-global-top-10-best-selling-smartphones/) Google Is Giving Away Some of the A.I. That Powers Chatbots (https://www.nytimes.com/2024/02/21/technology/google-open-source-ai.html) Apple Shuffles Leadership of Team Responsible for Audio Products (https://www.bloomberg.com/news/articles/2024-02-20/apple-shuffles-leadership-of-team-responsible-for-audio-products?srnd=premium) Signal now lets you keep your phone number private with the launch of usernames (https://techcrunch.com/2024/02/20/signal-now-lets-you-keep-your-phone-number-private-with-the-launch-of-usernames/) How Google is killing independent sites like ours (https://housefresh.com/david-vs-digital-goliaths/) VMware takes a swing at Nutanix, Red Hat with VM converter (https://www.theregister.com/2024/02/21/vmware_kvm_converter/) Nonsense An ordinary squirt of canned air achieves supersonic speeds - engineer spots telltale shock diamonds (https://www.tomshardware.com/desktops/pc-building/an-ordinary-squirt-of-canned-air-achieves-supersonic-speeds-engineer-spots-telltale-shock-diamonds) Conferences SCaLE 21x/DevOpsDays LA, March 14th (https://www.socallinuxexpo.org/scale/21x)– (https://www.socallinuxexpo.org/scale/21x)17th, 2024 (https://www.socallinuxexpo.org/scale/21x) — Coté speaking (https://www.socallinuxexpo.org/scale/21x/presentations/we-fear-change), sponsorship slots available.
KubeCon EU Paris, March 19 (https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/)– (https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/)22 (https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/) — Coté on the wait list for the platform side conference. Get 20% off with the discount code KCEU24VMWBC20. DevOpsDays Birmingham, April 17–18, 2024 (https://talks.devopsdays.org/devopsdays-birmingham-al-2024/cfp) Executive dinner in Dallas that Coté's hosting on March 13th, 2024 (https://ismg.events/roundtable-event/dallas-robust-security-java-applications/?utm_source=cote&utm_campaign=devrel&utm_medium=newsletter&utm_content=newsletterUpcoming). If you're an “executive” who might want to buy stuff from Tanzu to get better at your apps, then register. There is also a Tanzu exec event coming up in the next few months, email Coté (mailto:cote@broadcom.com) if you want to hear more about it. SDT news & hype Join us in Slack (http://www.softwaredefinedtalk.com/slack). Get a SDT Sticker! Send your postal address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and we will send you free laptop stickers! Follow us: Twitch (https://www.twitch.tv/sdtpodcast), Twitter (https://twitter.com/softwaredeftalk), Instagram (https://www.instagram.com/softwaredefinedtalk/), Mastodon (https://hachyderm.io/@softwaredefinedtalk), BlueSky (https://bsky.app/profile/softwaredefinedtalk.com), LinkedIn (https://www.linkedin.com/company/software-defined-talk/), TikTok (https://www.tiktok.com/@softwaredefinedtalk), Threads (https://www.threads.net/@softwaredefinedtalk) and YouTube (https://www.youtube.com/channel/UCi3OJPV6h9tp-hbsGBLGsDQ/featured). Use the code SDT to get $20 off Coté's book, Digital WTF (https://leanpub.com/digitalwtf/c/sdt), so $5 total. Become a sponsor of Software Defined Talk (https://www.softwaredefinedtalk.com/ads)! Recommendations Brandon: Fair Play (https://www.netflix.com/title/81674326) on Netflix (https://www.netflix.com/title/81674326) Matt: Julia Evans: Popular Git Config Options (https://jvns.ca/blog/2024/02/16/popular-git-config-options/) Coté: Anker USB C Charger (Nano II 65W) Pod 3-Port PPS Fast Charger (https://www.amazon.de/dp/B09LLRNGSD?psc=1&ref=ppx_yo2ov_dt_b_product_details). Photo Credits Header (https://unsplash.com/photos/a-couple-of-large-sculptures-sitting-on-top-of-a-cement-floor-g4xIcepnx6I) Google Gemini
The US Department of Justice is at it again with a new team for Operation Dying Ember. Sounds spooky, right? This time it was to undertake a secret court order to remove malware from Ubiquiti devices infected by Fancy Bear. The devices in question had default administration passwords as well as remote admin access on the public Internet. The DOJ reinfected the routers with the original malware used to compromise them in the first place and then used that compromise to remove remote access and clean up the secondary payload that had been installed to turn them into a potential botnet. The DOJ said it would then notify users to do a factory reset and install the latest firmware as well as changing their admin password. There's a lot to unpack here! This and more on the Gestalt IT Rundown hosted by Tom Hollingsworth and guest Max Mortillaro. Hosts: Tom Hollingsworth: https://www.linkedin.com/in/networkingnerd/ Max Mortillaro: https://www.linkedin.com/in/maxmortillaro/ Follow Gestalt IT Website: https://www.GestaltIT.com/ Twitter: https://www.twitter.com/GestaltIT LinkedIn: https://www.linkedin.com/company/Gestalt-IT Tags: #Rundown, #Security, #AI, #DataCenters, #GenAI, #Data, @NGINX, @LockbitTeam, @GestaltIT, @NetworkingNerd, @MaxMortillaro
DSL is back, but it's bigger! There's a CUDA implementation for AMD, The Linux Topology code is getting cleaned up, and there's a bit of a tussle over who's the first to ship KDE 6. Nginx forks over a CVE, AMD has new chips, and Asahi is beating Apple on OpenGL. For tips there's zypper for package management, cmp for comparing files, UFW for firewall simplicity, and a quick primer on how Wine handles serial ports! Catch the show notes at https://bit.ly/49z3PDs and enjoy the show! Host: Jonathan Bennett Co-Hosts: Rob Campbell, Ken McDonald, and Jeff Massie Want access to the video version and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
Nginx is forked, Broadcom/VMware kills ESXi, dedup is finally fixed in ZFS, using multiple network interfaces on a NAS, and more. Plugs Support us on patreon and get an ad-free RSS feed with early episodes sometimes News announcing freenginx.org Broadcom-owned VMware kills the free version of ESXi virtualization software OpenZFS Native Encryption Use […]
Scott and Wes talk about the benefits of owning your own PaaS (platform as a service), the main alternatives in the space, and ways to make passion projects more financially viable. Show Notes 00:00 Welcome to Syntax! 01:12 Brought to you by Sentry.io. 01:56 What is a PaaS? NGINX 04:21 Challenges with payment structures. Render 07:02 What is Kubernetes? Kubernetes 07:51 What are the differences between Kubernetes and Docker? Docker Swarm 09:15 Reasons to own your own PaaS. Netlify Bluehost 15:05 “Pokémon or Web Service” Original 150 Pokémon Characters 16:49 The players and their pros and cons. 18:51 Where can you host these services? 19:47 Kubero. Kubero 21:50 Coolify. Coolify Coolify pricing 28:15 Caprover. Caprover 29:03 Dokku. Dokku Shokku Ledokku Atlas Nixpacks 32:53 Piku. Piku 33:24 Cuber. Cuber 34:13 Acorn. Acorn Coolify creator, Andras Bacsai on X 36:44 The challenges of hosting your own PaaS. 38:46 Jekyll ran on a PC under a desk. Jekyll 39:36 Sometimes less is, in fact, more. 40:09 Final thoughts. 45:03 Scott got Bun to work on Coolify. 51:01 Sick Picks + Shameless Plugs. Sick Picks Wes: GripStic Chip Bag Sealer Amazon, GripStic Chip Bag Sealer Aliexpress Scott: Caseta Diva Smart Dimmer Shameless Plugs Wes: Syntax YouTube Hit us up on Socials! Syntax: X Instagram Tiktok LinkedIn Threads Wes: X Instagram Tiktok LinkedIn Threads Scott: X Instagram Tiktok LinkedIn Threads Randy: X Instagram YouTube Threads
Deploying Nextcloud the Nix way promises a paradise of reproducibility and simplicity. But is it just a painful trek through configuration hell? We built the dream Nextcloud using Nix and faced reality. Special Guest: Alex Kretzschmar.
In this New Stack Makers podcast, Mike Stefaniak, senior product manager at NGINX, and Kate Osborn, a software engineer at NGINX, discuss challenges associated with network ingress in Kubernetes clusters and introduce the Kubernetes Gateway API as a solution. Stefaniak highlights the issues that arise when multiple teams work on the same ingress, leading to friction and incidents. NGINX has also introduced the NGINX Gateway Fabric, implementing the Kubernetes Gateway API as an alternative to network ingress. The Kubernetes Gateway API, proposed four years ago and recently made generally available, offers advantages such as extensibility. It allows referencing policies with custom resource definitions for better validation, avoiding the need for annotations. Each resource has an associated role, enabling clean application of role-based access control policies for enhanced security. While network ingress is prevalent and mature, the Kubernetes Gateway API is expected to find adoption in greenfield projects initially. It has the potential to unite North-South and East-West traffic, offering a role-oriented API for comprehensive control over cluster traffic. The episode encourages exploring the Kubernetes Gateway API and engaging with the community to contribute to its development. Learn more from The New Stack about NGINX and the open source Kubernetes Gateway API: Kubernetes API Gateway 1.0 Goes Live, as Maintainers Plan for The Future API Gateway, Ingress Controller or Service Mesh: When to Use What and Why Ingress Controllers or the Kubernetes Gateway API? Which is Right for You? Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
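To give a feel for the role-oriented split the Gateway API encourages, here is a minimal, hypothetical sketch using the official Kubernetes Python client: a platform team owns the Gateway, while an application team only creates an HTTPRoute that attaches to it. The gateway name `shared-gateway`, the service `api-svc`, and the namespaces are made up for the example; since HTTPRoute is a custom resource in the `gateway.networking.k8s.io/v1` group, it goes through the CustomObjectsApi.

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
api = client.CustomObjectsApi()

# An application team's HTTPRoute: it only references the shared Gateway
# owned by the platform team, so RBAC can be scoped per resource kind.
http_route = {
    "apiVersion": "gateway.networking.k8s.io/v1",
    "kind": "HTTPRoute",
    "metadata": {"name": "api-route", "namespace": "team-a"},
    "spec": {
        "parentRefs": [{"name": "shared-gateway", "namespace": "infra"}],
        "rules": [
            {
                "matches": [{"path": {"type": "PathPrefix", "value": "/api"}}],
                "backendRefs": [{"name": "api-svc", "port": 80}],
            }
        ],
    },
}

api.create_namespaced_custom_object(
    group="gateway.networking.k8s.io",
    version="v1",
    namespace="team-a",
    plural="httproutes",
    body=http_route,
)
```

Compared with annotation-heavy Ingress objects, the route's fields are validated against the CRD schema, which is the extensibility point the episode highlights.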
In episode 40 of The Kubelist Podcast, Marc and Benjie speak with open source pioneer Dave McAllister. Dave shares stories and lessons from his 40-year career in tech including working for DEC, NASA, Adobe, Red Hat, Splunk, and NGINX. Additionally, they discuss Linux's rise to popularity in the early days of open source, SGI's contribution to modern cinematic effects, predictions around AI, and the overlap of open source and LLMs.
Mike Perham is the creator of Sidekiq, a background job processor for Ruby. He's also the creator of Faktory a similar product for multiple language environments. We talk about the RubyConf keynote and Ruby's limitations, supporting products as a solo developer, and some ideas for funding open source like a public utility. Recorded at RubyConf 2023 in San Diego. -- A few topics covered: Sidekiq (Ruby) vs Faktory (Polyglot) Why background job solutions are so common in Ruby Global Interpreter Lock (GIL) Ractors (Actor concurrency) Downsides of Multiprocess applications When to use other languages Getting people to pay for Sidekiq Keeping a solo business Being selective about customers Ways to keep support needs low Open source as a public utility Mike Mike's blog mastodon Sidekiq faktory From Employment to Independence Ruby Ractor The Practical Effects of the GVL on Scaling in Ruby Transcript You can help correct transcripts on GitHub. Introduction [00:00:00] Jeremy: I'm here at RubyConf San Diego with Mike Perham. He's the creator of Sidekiq and Faktory. [00:00:07] Mike: Thank you, Jeremy, for having me here. It's a pleasure. Sidekiq [00:00:11] Jeremy: So for people who aren't familiar with, I guess we'll start with Sidekiq because I think that's what you're most known for. If people don't know what it is, maybe you can give like a small little explanation. [00:00:22] Mike: Ruby apps generally have two major pieces of infrastructure powering them. You've got your app server, which serves your webpages and the browser. And then you generally have something off on the side that... It processes, you know, data for a million different reasons, and that's generally called a background job framework, and that's what Sidekiq is. [00:00:41] It, Rails is usually the thing that, that handles your web stuff, and then Sidekiq is the Sidekiq to Rails, so to speak. [00:00:50] Jeremy: And so this would fit the same role as, I think in Python, there's celery. and then in the Ruby world, I guess there is, uh, Resque is another kind of job. [00:01:02] Mike: Yeah, background job frameworks are quite prolific in Ruby. the Ruby community's kind of settled on that as the, the standard pattern for application development. So yeah, we've got, a half a dozen to a dozen different, different examples throughout history, but the major ones today are, Sidekiq, Resque, DelayedJob, GoodJob, and, and, and others down the line, yeah. Why background jobs are so common in Ruby [00:01:25] Jeremy: I think working in other languages, you mentioned how in Ruby, there's this very clear, preference to use these job scheduling systems, these job queuing systems, and I'm not. I'm not sure if that's as true in, say, if somebody's working in Java, or C sharp, or whatnot. And I wonder if there's something specific about Ruby that makes people kind of gravitate towards this as the default thing they would use. [00:01:52] Mike: That's a good question. What makes Ruby... The one that so needs a background job system. I think Ruby, has historically been very single threaded. And so, every Ruby process can only do so much work. And so Ruby oftentimes does, uh, spin up a lot of different processes, and so having processes that are more focused on one thing is, is, is more standard. [00:02:24] So you'll have your application server processes, which focus on just serving HTTP responses. And then you have some other sort of focused process and that just became background job processes. but yeah, I haven't really thought of it all that much. 
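[Editor's note] The GVL constraint Mike describes is the same one Python's GIL imposes, which is why the thread-versus-process trade-off shows up there too. As a rough, editorial illustration (not from the conversation): CPU-bound work gains nothing from threads under a global interpreter lock, but does scale across processes, at the cost of extra memory per process.

```python
import time
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def burn(n: int = 5_000_000) -> int:
    # CPU-bound work: under a global interpreter lock, threads running this
    # take turns on one core instead of running in parallel.
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(executor_cls) -> float:
    start = time.perf_counter()
    with executor_cls(max_workers=4) as pool:
        list(pool.map(burn, [5_000_000] * 4))
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"threads:   {timed(ThreadPoolExecutor):.2f}s")   # roughly 4x a single run: serialized by the lock
    print(f"processes: {timed(ProcessPoolExecutor):.2f}s")  # roughly 1x a single run, but ~4x the memory
```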
But, uh, you know, something like Java, for instance, heavily multi threaded. [00:02:45] And so, and extremely heavyweight in terms of memory and startup time. So it's much more frequent in Java that you just start up one process and that's it. Right, you just do everything in that one process. And so you may have dozens and dozens of threads, both serving HTTP and doing work on the side too. Um, whereas in Ruby that just kind of naturally, there was a natural split there. Global Interpreter Lock [00:03:10] Jeremy: So that's actually a really good insight, because... in the keynote at RubyConf, Mats, the creator of Ruby, you know, he mentioned the, how the fact that there is this global, interpreter lock, [00:03:23] or, or global VM lock in Ruby, and so you can't, really do multiple things in parallel and make use of all the different cores. And so it makes a lot of sense why you would say like, okay, I need to spin up separate processes so that I can actually take advantage of, of my, system. [00:03:43] Mike: Right. Yeah. And the, um, the GVL. is the acronym we use in the Ruby community, or GIL. Uh, that global lock really kind of is a forcing function for much of the application architecture in Ruby. Ruby, uh, applications because it does limit how much processing a single Ruby process can do. So, uh, even though Sidekiq is heavily multi threaded, you can only have so many threads executing. [00:04:14] Because they all have to share one core because of that global lock. So unfortunately, that's, that's been, um, one of the limiter, limiting factors to Sidekiq scalability is that, that lock and boy, I would pay a lot of money to just have that lock go away, but. You know, Python is going through a very long term experiment about trying to remove that lock and I'm very curious to see how well that goes because I would love to see Ruby do the same and we'll see what happens in the future, but, it's always frustrating when I come to another RubyConf and I hear another Matt's keynote where he's asked about the GIL and he continues to say, well, the GIL is going to be around, as long as I can tell. [00:04:57] so it's a little bit frustrating, but. It's, it's just what you have to deal with. Ractors [00:05:02] Jeremy: I'm not too familiar with them, but they, they did mention during the keynote I think there Ractors or something like that. There, there, there's some way of being able to get around the GIL but there are these constraints on them. And in the context of Sidekiq and, and maybe Ruby in general, how do you feel about those options or those solutions? [00:05:22] Mike: Yeah, so, I think it was Ruby 3. 2 that introduced this concept of what they call a Ractor, which is like a thread, except it does not have the global lock. It can run independent to the global lock. The problem is, is because it doesn't use the global lock, it has pretty severe constraints on what it can do. [00:05:47] And the, and more specifically, the data it can access. So, Ruby apps and Rails apps throughout history have traditionally accessed a lot of global data, a lot of class level data, and accessed all this data in a, in a read only fashion. so there's no race conditions because no one's changing any of it, but it's still, lots of threads all accessing the same variables. [00:06:19] Well, Ractors can't do that at all. The only data Ractors can access is data that they own. And so that is completely foreign to Ruby application, traditional Ruby applications. 
So essentially, Ractors aren't compatible with the vast majority of existing Ruby code. So I, I, I toyed with the idea of prototyping Sidekiq and Ractors, and within about a minute or two, I just ran into these, these, uh... [00:06:51] These very severe constraints, and so that's why you don't see a lot of people using Ractors, even still, even though they've been out for a year or two now, you just don't see a lot of people using them, because they're, they're really limited, limited in what they can do. But, on the other hand, they're unlimited in how well they can scale. [00:07:12] So, we'll see, we'll see. Hopefully in the future, they'll make a lot of improvements and, uh, maybe they'll become more usable over time. Downsides of multiprocess (Memory usage) [00:07:19] Jeremy: And with the existence of a job queue or job scheduler like Sidekiq, you're able to create additional processes to get around that global lock, I suppose. What are the... downsides of doing so versus another language like we mentioned Java earlier, which is capable of having true parallelism in the same process. [00:07:47] Mike: Yeah, so you can start up multiple Ruby processes to process things truly in parallel. The issue is that you do get some duplication in terms of memory. So your Ruby app maybe take a gigabyte per process. And, you can do copy on write forking. You can fork and get some memory sharing with copy on write semantics on Unix operating systems. [00:08:21] But you may only get, let's say, 30 percent memory savings. So, there's still a significant memory overhead to forking, you know, let's say, eight processes versus having eight threads. You know, you, you, you may have, uh, eight threads can operate in a gigabyte process, but if you want to have eight processes, that may take, let's say, four gigabytes of RAM. [00:08:48] So you, you still, it's not going to cost you eight gigabytes of RAM, you know, it's not like just one times eight, but, there's still a overhead of having those separate processes. [00:08:58] Jeremy: would you say it's more of a cost restriction, like it costs you more to run these applications, or are there actual problems that you can't solve because of this restriction. [00:09:13] Mike: Help me understand, what do you mean by restriction? Do you mean just the GVL in general, or the fact that forking processes still costs memory? [00:09:22] Jeremy: I think, well, it would be both, right? So you're, you have two restrictions right now. You have the, the GVL, which means you can't have parallelism within the same process. And then your other option is to spin up a bunch of processes, which you have said is the downside there is that you're using a lot more RAM. [00:09:43] I suppose my question is that Does that actually stop you from doing anything? Like, if you throw more money at the problem, you go like, we're going to have more instances, I'll pay for the RAM, it's fine, can that basically get you out of these situations or are these limitations actually stopping you from, from doing things you could do in other languages? [00:10:04] Mike: Well, you certainly have to manage the multiple processes, right? So you've gotta, you know, if one child process crashes, you've gotta have a parent or supervisor process watching all that and monitoring and restarting the process. I don't think it restricts you. Necessarily, it just, it adds complexity to your deployment. [00:10:24] and, and it's just a question of efficiency, right? 
Instead of being able to deploy on a, on a one gigabyte droplet, I've got to deploy to a four gigabyte droplet, right? Because I just, I need the RAM to run the eight processes. So it, it, it's more of just a purely a function of how much money am I going to have to throw at this problem. [00:10:45] And what's it going to cost me in operational costs to operate this application in production? When to use other languages? [00:10:53] Jeremy: So during the. Keynote, uh, Matz had mentioned that Rails, is really suitable as this one person framework, like you can have a very small team or maybe even yourself and, and build this product. And so I guess from... Your perspective, once you cross a certain threshold, is like, what Ruby and what Sidekiq provides not enough, and that's why you need to start looking into other languages? [00:11:24] Or like, where's the, turning point, or the, if you [00:11:29] Mike: Right, right. The, it's all about the problem you're trying to solve, right? At the end of the day, uh, the, the question is just what are we trying to solve and how are we trying to solve it? So at a higher level, you got to think about the architecture. if the problem you're trying to solve, if the service you're trying to build, if the app you're trying to operate. [00:11:51] If that doesn't really fall into the traditional Ruby application architecture, then you, you might look at it in another language or another ecosystem. something like Go, for instance, can compile down to a single binary, which makes deployment really easy. It makes shipping up a product. on to a user's machine, much simpler than deploying a Ruby application onto a user's desktop machine, for instance, right? [00:12:22] Um, Ruby does have this, this problem of how do you package everything together and deploy it somewhere? Whereas Go, when you can just compile to a single binary, now you've just got a single thing. And it's just... Drop it on the file system and execute it. It's easy. So, um, different, different ecosystems have different application architectures, which empower different ways of solving the same problems. [00:12:48] But, you know, Rails as a, as a one man framework, or sorry, one person framework, It, it, I don't, I don't necessarily, that's a, that's sort of a catchy marketing slogan, but I just think of Rails as the most productive framework you can use. So you, as a single person, you can maximize what you ship and the, the, the value that you can create because Rails is so productive. [00:13:13] Jeremy: So it, seems like it's maybe the, the domain or the type of application you're making. Like you mentioned the command line application, because you want to be able to deliver it to your user easily. Just give them a binary, something like Go or perhaps Rust makes a lot more sense. and then I could see people saying that if you're doing something with machine learning, like the community behind Python, it's, they're just, they're all there. [00:13:41] So Room for more domains in Ruby [00:13:41] Mike: That was exactly the example I was going to use also. Yeah, if you're doing something with data or AI, Python is going to be a more, a more traditional, natural choice. that doesn't mean Ruby can't do it. That doesn't mean, you wouldn't be able to solve the problem with Ruby. And, and there's, that just also means that there's more space for someone who wants to come in and make an impact in the Ruby community. 
[00:14:03] Find a problem that Ruby's not really well suited to solving right now and build the tooling out there to, to try and solve it. You know, I, I saw a talk, from the fellow who makes the Glimmer gem, which is a native UI toolkit. Uh, a gem for building native UIs in Ruby, which Ruby traditionally can't do, but he's, he's done an amazing job at sort of surfacing APIs to build these, um, these native, uh, native applications, which I think is great. [00:14:32] It's awesome. It's, it's so invigorating to see Ruby in a new space like that. Um, I talked to someone else who's doing the Polars gem, which is focused on data processing. So it kind of takes, um, Python and Pandas and brings that to Ruby, which is, is awesome because if you're a Ruby developer, now you've got all these additional tools which can allow you to solve new sets of problems out there. [00:14:57] So that's, that's kind of what's exciting in the Ruby community right now is just bring it into new spaces. Faktory [00:15:03] Jeremy: In addition to Sidekiq, you have, uh, another product called Faktory, I believe. And so does that serve a, a similar purpose? Is that another job scheduling, job queueing system? [00:15:16] Mike: It is, yes. And it's, it's, it's similar in a way to Sidekiq. It looks similar. It's got similar concepts at the core of it. At the end of the day, Sidekiq is limited to Ruby. Because Sidekiq executes in a Ruby VM, it executes the jobs, and the jobs are, have to be written in Ruby because you're running in the Ruby VM. [00:15:38] Faktory was my attempt to bring, Sidekiq functionality to every other language. I wanted, I wanted Sidekiq for JavaScript. I wanted Sidekiq for Go. I wanted Sidekiq for Python because A, a lot of these other languages also could use a system, a background job system. And the problem though is that. [00:16:04] As a single man, I can't port Sidekiq to every other language. I don't know all the languages, right? So, Faktory kind of changes the architecture and, um, allows you to execute jobs in any language. it, it replaces Redis and provides a server where you just fetch jobs, and you can use it from it. [00:16:26] You can use that protocol from any language to, to build your own worker processes that execute jobs in whatever language you want. [00:16:35] Jeremy: When you say it replaces Redis, so it doesn't use Redis, um, internally, it has its own. [00:16:41] Mike: It does use Redis under the covers. Yeah, it starts Redis as a child process and, connects to it over a Unix socket. And so it's really stable. It's really fast. from the outside, the, the worker processes, they just talk to Faktory. They don't know anything about Redis at all. [00:16:59] Jeremy: I see. And for someone who, like we mentioned earlier in the Python community, for example, there is, um, Celery. For someone who is using a task scheduler like that, what's the incentive to switch or use something different? [00:17:17] Mike: Well, I, I always say if you're using something right now, I'm not going to try and convince you to switch necessarily. It's when you have pain that you want to switch and move away. 
Maybe you have Maybe there's capabilities in the newer system that you really need that the old system doesn't provide, but Celery is such a widely known system that I'm not necessarily going to try and convince people to move away from it, but if people are looking for a new system, one of the things that Celery does that Faktory does not do is Celery provides like data adapters for using store, lots of different storage systems, right? [00:17:55] Faktory doesn't do that. Faktory is more, has more of the Rails mantra of, you know, Omakase where we choose, I choose to use Redis and that's it. You don't, you don't have a choice for what to use because who cares, you know, at the end of the day, let Faktory deal with it. it's, it's not something that, You should even necessarily be concerned about it. [00:18:17] Just, just try Faktory out and see how it works for you. Um, so I, I try to take those operational concerns off the table and just have the user focus on, you know, usability, performance, and that sort of thing. but it is, it's, it's another background job system out there for people to try out and see if they like that. [00:18:36] And, and if they want to, um, if they know Celery and they want to use Celery, more power to Faktory them. Sidekiq (Ruby) or Faktory (Polyglot) [00:18:43] Jeremy: And Sidekiq and Faktory, they serve a very similar purpose. For someone who they have a new project, they haven't chosen a job. scheduling system, if they were using Ruby, would it ever make sense for them to use Faktory versus use Sidekiq? [00:19:05] Mike: Uh Faktory is excellent in a polyglot situation. So if you're using multiple languages, if you're creating jobs in Ruby, but you're executing them in Python, for instance, um, you know, if you've, I have people who are, Creating jobs in PHP and executing them in Python, for instance. That kind of polyglot scenario, Sidekiq can't do that at all. [00:19:31] So, Faktory is useful there. In terms of Ruby, Ruby is just another language to Faktory. So, there is a Ruby API for using Faktory, and you can create and execute Ruby jobs with Faktory. But, you'll find that in the Ruby community, Sidekiq is much widely... much more widely used and understood and known. So if you're just using Ruby, I think, I think Sidekiq is the right choice. [00:19:59] I wouldn't look at Faktory. But if you do need, find yourself needing that polyglot tool, then Faktory is there. Temporal [00:20:07] Jeremy: And this is maybe one, maybe one layer of abstraction higher, but there's a product called Temporal that has some of this job scheduling, but also this workflow component. I wonder if you've tried that out and how you think about that product? [00:20:25] Mike: I've heard of them. I don't know a lot about the product. I do have a workflow API, the Sidekiq batches, which allow you to fan out jobs and then, and then execute callbacks when all the jobs in that, in that batch are done. But I don't, provide sort of a, a high level. Graphical Workflow Editor or anything like that. [00:20:50] Those to me are more marketing tools that you use to sell the tool for six figures. And I don't think they're usable. And I don't think they're actually used day to day. I provide an API for developers to use. And developers don't like moving blocks of code around in a GUI. They want to write code. And, um, so yeah, temporal, I, like I said, I don't know much about them. [00:21:19] I also, are they a venture capital backed startup? 
[00:21:22] Jeremy: They are, is my understanding, [00:21:24] Mike: Yeah, that, uh, any, any sort of venture capital backed startup, um, who's building technical infrastructure. I, I would look long and hard at, I'm, I think open source is the right core to build on. Of course I sell commercial software, but. I'm bootstrapped. I'm profitable. [00:21:46] I'm going to be around forever. A VC backed startup, they tend to go bankrupt, because they either get big or they go out of business. So that would be my only comment is, is, be a little bit leery about relying on commercial venture capital based infrastructure for, for companies, uh, long term. Getting people to pay for Sidekiq [00:22:05] Jeremy: So I think that's a really interesting part about your business is that I think a lot of open source maintainers have a really big challenge figuring out how to make it as a living. The, there are so many projects that they all have a very permissive license and you can use them freely one example I can think of is, I, I talked with, uh, David Kramer, who's the CTO at Sentry, and he, I don't think they use it anymore, but they, they were using Nginx, right? [00:22:39] And he's like, well, Nginx, they have a paid product, like Nginx. Plus that or something. I don't know what the name is, but he was like, but I'm not going to pay for it. Right. I'm just going to use the free one. Why would I, you know, pay for the, um, the paid thing? So I, I, I'm kind of curious from your perspective when you were coming up with Sidekiq both as an open source product, but also as a commercial one, how did you make that determination of like to make a product where it's going to be useful in its open source form? [00:23:15] I can still convince people to pay money for it. [00:23:19] Mike: Yeah, the, I was terrified, to be blunt, when I first started out. when I started the Sidekiq project, I knew it was going to take a lot of time. I knew if it was successful, I was going to be doing it for the next decade. Right? So I started in 2012, and here I am in 2023, over a decade, and I'm still doing it. [00:23:38] So my expectation was met in that regard. And I knew I was not going to be able to last that long. If I was making zero dollars, right? You just, you burn out. Nobody can last that long. Well, I guess there are a few exceptions to that rule, but yeah, money, I tend to think makes things a little more sustainable for sure. [00:23:58] Especially if you can turn it into a full time job solving and supporting a project that you, you love and, and is, is, you know, your, your, your baby, your child, so to speak, your software, uh, uh, creation that you've given to the world. but I was terrified. but one thing I did was at the time I was blogging a lot. [00:24:22] And so I was telling people about Sidekiq. I was telling people what was to come. I was talking about ideas and. The one thing that I blogged about was financial experiments. I said bluntly to the, to, to the Ruby community, I'm going to be experimenting with financial stability and sustainability with this project. [00:24:42] So not only did I create this open source project, but I was also publicly saying I I need to figure out how to make this work for the next decade. And so eventually that led to Sidekiq Pro. And I had to figure out how to build a closed source Ruby gem, which, uh, There's not a lot of, so I was kind of in the wild there. [00:25:11] But, you know, thankfully all the pieces came together and it was actually possible. 
I couldn't have done it if it wasn't possible. Like, we would not be talking if I couldn't make a private gem. So, um, but it happened to work out. Uh, and it allowed me to, to gate features behind a paywall effectively. And, and yeah, you're right. [00:25:33] It can be tough to make people pay for software. but I'm a developer who's selling to other developers, not, not just developers, open source developers, and they know that they have this financial problem, right? They know that there's this sustainability problem. And I was blunt in saying, this is my solution to my sustainability. [00:25:56] So, I charge what I think is a very fair price. It's only a thousand dollars a year to a hobbyist. That may seem like a lot of money to a business. It's a drop in the bucket. So it was easy for developers to say, Hey, listen, we want to buy this tool for a thousand bucks. It'll ensure our infrastructure is maintained for the next decade. [00:26:18] And it's, and it's. And it's relatively cheap. It's way less than, uh, you know, a salary or even a laptop. So, so that's, that's what I did. And, um, it's, it worked out great. People, people really understood. Even today, I talk to people and they say, we, we signed up for Sidekiq Pro to support you. So it's, it's, it's really, um, invigorating to hear people, uh, thank me and, and they're, they're actively happy that they're paying me and our customers. [00:26:49] Jeremy: it's sort of, uh, maybe a not super common story, right, in terms of what you went through. Because when I think of open core businesses, I think of companies like, uh, GitLab, which are venture funded, uh, very different scenario there. I wonder, like, in your case, so you started in 2012, and there were probably no venture backed competitors, right? [00:27:19] People saying that we're going to make this job scheduling system and some VC is going to give me five million dollars and build a team to work on this. It was probably at the time, maybe it was Rescue, which was... [00:27:35] Mike: There was a venture backed system called IronMQ, [00:27:40] Jeremy: Hmm. [00:27:41] Mike: And I'm not sure if they're still around or not, but they... They took, uh, one or more funding rounds. I'm not sure exactly, but they were VC backed. They were doing, background jobs, scheduled jobs, uh, you know, running container, running container jobs. They, they eventually, I think, wound up sort of settling on Docker containers. [00:28:06] They'll basically spin up a Docker container. And that container can do whatever it wants. It can execute for a second and then shut down, or it can run for, for however long, but they would, um, yeah, I, yeah, I'll, I'll stop there because I don't know the actual details of exactly their system, but I'm not sure if they're still around, but that's the only one that I remember offhand that was around, you know, years ago. [00:28:32] Yeah, it's, it's mostly, you know, low level open source infrastructure. And so, anytime you have funded startups, they're generally using that open source infrastructure to build their own SaaS. And so SaaS's are the vast majority of where you see sort of, uh, commercial software. [00:28:51] Jeremy: so I guess in that way it, it, it gave you this, this window or this area where you could come in and there wasn't, other than that iron, product, there wasn't this big money that you were fighting against. It was sort of, it was you telling people openly, I'm, I'm working on this thing. 
[00:29:11] I need to make money so that I can sustain it. And, if you, yeah. like the work I do, then, you know, basically support me. Right. And, and so I think that, I'm wondering how we can reproduce that more often because when you see new products, a lot of times it is VC backed, right? [00:29:35] Because people say, I need to work on this. I need to be paid. and I can't ask a team to do this. For nothing, right? So [00:29:44] Mike: Yeah. It's. It's a wicked problem. Uh, it's a really, really hard problem to solve if you take vc you there, that that really kind of means that you need to be making tens if not hundreds of millions of dollars in sales. If you are building a small or relatively small. You know, put small in quotes there because I don't really know what that means, but if you have a small open source project, you can't charge huge amounts for it, right? [00:30:18] I mean, Sidekiq is a, I would call a medium sized open source project, and I'm charging a thousand bucks for it. So if you're building, you know, I don't know, I don't even want to necessarily give example, but if you're building some open source project, and It's one of 300 libraries that people's applications will depend on. [00:30:40] You can't necessarily charge a thousand dollars for that library. depending on the size and the capabilities, maybe you can, maybe you can't. But there's going to be a long tail of open source projects that just, they can't, they can't charge much, if anything, for them. So, unfortunately, we have, you know, these You kind of have two pathways. [00:31:07] Venture capital, where you've got to sell a ton, or free. And I've kind of walked that fine line where I'm a small business, I can charge a small amount because I'm bootstrapped. And, and I don't need huge amounts of money, and I, and I have a project that is of the right size to where I can charge a decent amount of money. [00:31:32] That means that I can survive with 500 or a thousand customers. I don't need to have a hundred million dollars worth of customers. Because I, you know, when I started the business, one of the constraints I said is I don't want to hire anybody. I'm just going to be solo. And part of the, part of my ability to keep a low price and, and keep running sustainably, even with just You know, only a few hundred customers is because I'm solo. [00:32:03] I don't have the overhead of investors. I don't have the overhead of other employees. I don't have an office space. You know, my overhead is very small. So that is, um, you know, I just kind of have a unique business in that way, I guess you might say. Keeping the business solo [00:32:21] Jeremy: I think that's that's interesting about your business as well But the fact that you've kept it you've kept it solo which I would imagine in most businesses, they need support people. they need, developers outside of maybe just one. Um, there's all sorts of other, I don't think overhead is the right word, but you just need more people, right? [00:32:45] And, and what do you think it is about Sidekiq that's made it possible for it to just be a one person operation? [00:32:52] Mike: There's so much administrative overhead in a business. I explicitly create business policies so that I can run solo. you know, my support policy is officially you get one email ticket or issue per quarter. And, and anything more than that, I can bounce back and say, well, you're, you're requiring too much support. [00:33:23] In reality, I don't enforce that at all. 
And people email me all the time, but, but things like. Things like dealing with accounting and bookkeeping and taxes and legal stuff, licensing, all that is, yeah, a little bit of overhead, but I've kept it as minimal as I can. And part of that is I don't want to hire another employee because then that increases the administrative overhead that I have. [00:33:53] And Sidekiq is so tied to me and my knowledge that if I hire somebody, they're probably not going to know Ruby and threading and all the intricate technical detail necessary to build and maintain and support the system. And so really you'll kind of regress a little bit. We won't be able to give as good support because I'm busy helping that other employee. Being selective about customers [00:34:23] Mike: So, yeah, it's, it's a tightrope act where you've got to really figure out how can I scale myself as far as possible without overwhelming myself. The, the overwhelming thing that I have that I've never been able to solve. It's just dealing with billing inquiries, customers, companies, emailing me saying, how do we buy this thing? [00:34:46] Can I get an invoice? Every company out there, it seems wants an invoice. And the problem with invoicing is it takes a lot more. manual labor and administrative overhead to issue that invoice to collect payment on the invoice. So that's one of the reasons why I have a very strict policy about credit card only for, for the vast majority of my customers. [00:35:11] And I demand that companies pay a lot more. You have to have a pretty big enterprise license if you want an invoice. And if the company, if the company comes back and complains and says, well, you know, that's ridiculous. We don't, we don't want to pay that much. We don't need it that much. Uh, you know, I, I say, okay, well then you have two, two things, two, uh, two things. [00:35:36] You can either pay with a credit card or you can not use Sidekiq. Like, that's, that's it. I'm, I don't need your money. I don't want the administrative overhead of dealing with your accounting department. I just want to support my, my customers and build my software. And, and so, yeah, I don't want to turn into a billing clerk. [00:35:55] So sometimes, sometimes the, the, the best thing in business that you can do is just say no. [00:36:01] Jeremy: That's very interesting because I think being a solo... Person is what probably makes that possible, right? Because if you had the additional staff, then you might say like, Well, I need to pay my staff, so we should be getting, you know, as much business as [00:36:19] Mike: Yeah. Chasing every customer you can, right. But yeah. [00:36:22] Every customer is different. I mean, I have some customers that just, they never contact me. They pay their bill really fast or right on time. And they're paying me, you know, five figures, 20, a year. And they just, it's a, God bless them because those are, are the. [00:36:40] Best customers to have and the worst customers are the ones who are paying 99 bucks a month and everything that they don't understand or whatever is a complaint. So sometimes, sometimes you, you want to, vet your customers from that perspective and say, which one of these customers are going to be good? [00:36:58] Which ones are going to be problematic? [00:37:01] Jeremy: And you're only only person... And I'm not sure how many customers you have, but [00:37:08] Mike: I have 2000 [00:37:09] Jeremy: 2000 customers. [00:37:10] Okay. [00:37:11] Mike: Yeah. 
[00:37:11] Jeremy: And has that been relatively stable or has there been growth [00:37:16] Mike: It's been relatively stable the last couple of years. Ruby has, has sort of plateaued. Um, it's, you don't see a lot of growth. I'm getting probably, um, 15, 20 percent growth maybe. Uh, so I'm not growing like a weed, like, you know, venture capital would want to see, but steady incremental growth is, is, uh, wonderful, especially since I do very little. [00:37:42] Sales and marketing. you know, I come to RubyConf I, I I tweet out, you know, or I, I toot out funny Mastodon Toots occasionally and, and, um, and, and put out new releases of the software. And, and that's, that's essentially my, my marketing. My marketing is just staying in front of developers and, and, and being a presence in the Ruby community. [00:38:06] But yeah, it, it's, uh. I, I, I see not a, not a huge amount of churn, but I see enough sales to, to, to stay up and keep my head above water and to keep growing, um, slowly but surely. Support needs haven't grown [00:38:20] Jeremy: And as you've had that steady growth, has the support burden not grown with it? [00:38:27] Mike: Not as much because once customers are on Sidekiq and they've got it working, then by and large, you don't hear from them all that much. There's always GitHub issues, you know, customers open GitHub issues. I love that. but yeah, by and large, the community finds bugs. and opens up issues. And so things remain relatively stable. [00:38:51] I don't get a lot of the complete newbie who has no idea what they're doing and wants me to, to tell them how to use Sidekiq that I just don't see much of that at all. Um, I have seen it before, but in that case, generally, I, I, I politely tell that person that, listen, I'm not here to educate you on the product. [00:39:14] It's there's documentation in the wiki. Uh, and there's tons of, of more Ruby, generic Ruby, uh, educational material out there. That's just not, not what I do. So, so yeah, by and large, the support burden is, is not too bad because once people are, are up and running, it's stable and, and they don't, they don't need to contact me. [00:39:36] Jeremy: I wonder too, if that's perhaps a function of the price, because if you're a. new developer or someone who's not too familiar with how to do job processing or what they want to do when you, there is the open source product, of course. but then the next step up, I believe is about a hundred dollars a month. [00:39:58] And if you're somebody who is kind of just getting started and learning how things work, you're probably not going to pay that, is my guess. And so you'll never hear from them. [00:40:11] Mike: Right, yeah, that's a good point too, is the open source version, which is what people inevitably are going to use and integrate into their app at first. Because it's open source, you're not going to email me directly, um, and when people do email me directly, Sidekiq support questions, I do, I reply literally, I'm sorry I don't respond to private email, unless you're a customer. [00:40:35] Please open a GitHub issue and, um, that I try to educate both my open source users and my commercial customers to try and stay in GitHub issues because private email is a silo, right? Private email doesn't help anybody else but them. If I can get people to go into GitHub issues, then that's a public record. [00:40:58] that people can search. Because if one person has that problem, there's probably a dozen other people that have that same problem. 
And then that other, those other 11 people can search and find the solution to their problem at four in the morning when I'm asleep. Right? So that's, that's what I'm trying to do is, is keep, uh, keep everything out in the open so that people can self service as much as possible. Sidekiq open source [00:41:24] Jeremy: And on the open source side, are you still primarily the main contributor? Or do you have other people that are [00:41:35] Mike: I mean, I'd say I do 90 percent of the work, which is why I don't feel guilty about keeping 100 percent of the money. A lot of open source projects, when they look for financial sustainability, they also look for how can we split this money amongst the team. And that's, that's a completely different topic that I've. [00:41:55] is another reason why I've stayed solo is if I hire an employee and I pay them 200, 000 a year as a developer, I'm meanwhile keeping all the rest of the profits of the company. And so that almost seems a little bit unfair. because we're both still working 40 hours a week, right? Why am I the one making the vast majority of the, of the profit and the money? [00:42:19] Um, so, uh, I've always, uh, that's another reason why I've stayed solo, but, but yeah, having a team of people working on something, I do get, regular commits, regular pull requests from people, fixing a bug that they found or just making a tweak that. that they saw, that they thought they could improve. [00:42:42] A little more rarely I get a significant improvement or feature, as a pull request. but Sidekiq is so stable these days that it really doesn't need a team of people maintaining it. The volume of changes necessary, I can easily keep up with that. So, I'm still doing 90 95 percent of the work. Are there other Sidekiq-like opportunities out there? [00:43:07] Jeremy: Yeah, so I think Sidekiq has sort of a unique positioning where it's the code base itself is small enough where you can maintain it yourself and you have some help, but primarily you're the main maintainer. And then you have enough customers who are willing to, to pay for the benefit it gives them on top of what the open source product provides. [00:43:36] cause it's, it's, you were talking about how. Every project people work on, they have, they could have hundreds of dependencies, right? And to ask somebody to, to pay for each of them is, is probably not ever going to happen. And so it's interesting to think about how you have things like, say, you know, OpenSSL, you know, it's a library that a whole bunch of people rely on, but nobody is going to pay a monthly fee to use it. [00:44:06] You have things like, uh, recently there was HashiCorp with Terraform, right? They, they decided to change their license because they, they wanted to get, you know, some of that value back, some of the money back, and the community basically revolted. Right? And did a fork. And so I'm kind of curious, like, yeah, where people can find these sweet spots like, like Sidekiq, where they can find this space where it's just small enough where you can work on it on your own and still get people to pay for it. [00:44:43] It's, I'm trying to picture, like, where are the spaces? Open source as a public utility [00:44:48] Mike: We need to look at other forms of financing beyond pure capitalism. If this is truly public infrastructure that needs to be maintained for the long term, then why are we, why is it that we depend on capitalism to do that? Our roads, our water, our sewer, those are not Capitalist, right? 
Those are utilities, that's public infrastructure that we maintain, that the government helps us maintain. [00:45:27] And in a sense, tech infrastructure is similar or could be thought of in a similar fashion. So things like Open Collective, things like, uh, there's a, there's a organization in Europe called NLNet, I think, out of the Netherlands. And they do a lot of grants to various open source projects to help them improve the state of digital infrastructure. [00:45:57] They support, for instance, Mastodon as a open source project that doesn't have any sort of corporate backing. They see that as necessary social media infrastructure, uh, for the long term. And, and I, and I think that's wonderful. I like to see those new directions being explored where you don't have to turn everything into a product, right? [00:46:27] And, and try and market and sale, um, and, and run ads and, and do all this stuff. If you can just make the case that, hey, this is, this is useful public infrastructure that so many different, um, Technical, uh, you know, applications and businesses could rely on, much like FedEx and DHL use our roads to the benefit of their own, their own corporate profits. [00:46:53] Um, why, why, why shouldn't we think of tech infrastructure sort of in a similar way? So, yeah, I would like to see us explore more. in that direction. I understand that in America that may not happen for quite a while because we are very, capitalist focused, but it's encouraging to see, um, places like Europe, uh, a little more open to, to trialing things like, cooperatives and, and grants and large long term grants to, to projects to see if they can, uh, provide sustainability in, in, you know, in a new way. [00:47:29] Jeremy: Yeah, that's a good point because I think right now, a lot of the open source infrastructure that we all rely on, either it's being paid for by large companies and at the whim of those large companies, if Google decides we don't want to pay for you to work on this project anymore, where does the money come from? [00:47:53] Right? And on the other hand, there's the thousands, tens of thousands of people who are doing it. just for free out of the, you know, the goodness of their, their heart. And that's where a lot of the burnout comes from. Right. So I think what you're saying is that perhaps a lot of these pieces that we all rely on, that our, our governments, you know, here in the United States, but also around the world should perhaps recognize as this is, like you said, this is infrastructure, and we should be. [00:48:29] Paying these people to keep the equivalent of the roads and, and, uh, all that working. [00:48:37] Mike: Yeah, I mean, I'm not, I'm not claiming that it's a perfect analogy. There's, there's, there's lots of questions that are unanswered in that, right? How do you, how do you ensure that a project is well maintained? What does that even look like? What does that mean? you know, you can look at a road and say, is it full of potholes or is it smooth as glass, right? [00:48:59] It's just perfectly obvious, but to a, to a digital project, it's, it's not as clear. So, yeah, but, but, but exploring those new ways because turning everybody into a businessman so that they can, they can keep their project going, it, it, it itself is not sustainable, right? so yeah, and that's why everything turns into a SaaS because a SaaS is easy to control. [00:49:24] It's easy to gatekeep behind a paywall and it's easy to charge for, whereas a library on GitHub. Yeah. 
You know, what do you do there? You know, obviously GitHub has sponsors, the sponsors feature. You've got Patreon, you've got Open Collective, you've got Tidelift. There's, there's other, you know, experiments that have been run, but nothing has risen to the top yet. [00:49:47] and it's still, it's still a bit of a grind. but yeah, we'll see, we'll see what happens, but hopefully people will keep experimenting and, and maybe, maybe governments will start. Thinking in the direction of, you know, what does it mean to have a budget for digital infrastructure maintenance? [00:50:04] Jeremy: Yeah, it's interesting because we, we started thinking about like, okay, where can we find spaces for other Sidekiqs? But it sounds like maybe, maybe that's just not realistic, right? Like maybe we need more of a... Yeah, a rethinking of, I guess the, the structure of how people get funded. Yeah. [00:50:23] Mike: Yeah, sometimes the best way to solve a problem is to think at a higher level. You know, we, the, the sustainability problem in American Silicon Valley based open source developers is naturally going to tend toward venture capital and, and capitalism. And I, you know, I think, I think that's, uh, extremely problematic on a, on a lot of different, in a lot of different ways. [00:50:47] And, and so sometimes you need to step back and say, well, maybe we're, maybe we just don't have the right tool set to solve this problem. But, you know, I, I. More than that, I'm not going to speculate on because it is a wicked problem to solve. [00:51:04] Jeremy: Is there anything else you wanted to, to mention or thought we should have talked about? [00:51:08] Mike: No, I, I, I loved the talk, of sustainability and, and open source. And I, it's, it's a, it's a topic really dear to my heart, obviously. So I, I am happy to talk about it at length with anybody, anytime. So thank you for having me. [00:51:25] Jeremy: All right. Thank you very much, Mike.
Liam Crilly, Senior Director of Product Management at NGINX, discussed the potential of WebAssembly (Wasm) during this recording at the Open Source Summit in Bilbao, Spain. With over three decades of experience, Crilly highlighted WebAssembly's promise of universal portability, allowing developers to build once and run anywhere across a network of devices. While Wasm is more mature on the client side in browsers, its deployment on the server side is less developed, lacking sufficient runtimes and toolchains. Crilly noted that WebAssembly acts as a powerful compiler target, enabling the generation of well-optimized instruction set code. Despite the need for a virtual machine, WebAssembly's abstraction layer eliminates hardware-specific concerns, providing near-native compute performance through additional layers of optimization. Learn more from The New Stack about WebAssembly and NGINX: WebAssembly Overview, News and Trends; Why WebAssembly Will Disrupt the Operating System; True Portability Is the Killer Use Case for WebAssembly; 4 Factors of a WebAssembly Native World
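To make the server-side runtime point concrete, here is a hedged sketch of loading a Wasm module and calling an exported function from a host program. It assumes the wasmtime Python bindings (whose exact API can vary between releases), and the inline module with its add export is an illustrative stand-in for a real compiled artifact rather than anything from the episode.

```python
# Minimal sketch of server-side Wasm: load a module and call an exported
# function from Python via a host runtime. Assumes the wasmtime bindings
# (pip install wasmtime); API details may differ between wasmtime-py versions.
from wasmtime import Store, Module, Instance

store = Store()

# In practice you would load a compiled artifact, e.g.
# Module.from_file(store.engine, "filter.wasm"); inline WAT keeps this runnable.
module = Module(store.engine, """
  (module
    (func (export "add") (param i32 i32) (result i32)
      local.get 0
      local.get 1
      i32.add))
""")

instance = Instance(store, module, [])
add = instance.exports(store)["add"]
print(add(store, 2, 3))  # prints 5, executed inside the Wasm VM, portable across hosts
```

The same compiled module could, in principle, run in a browser, on a server runtime, or at a proxy, which is the "build once, run anywhere" property discussed above.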
Today I am speaking with Kevin Jones, Developer Relations Engineer at Edge & Node, a core development team actively contributing to The Graph. Although Kevin is a fresh addition to the Edge & Node team, his presence has already been felt significantly through his engagement in hackathons and leadership of key initiatives. In this engaging conversation, Kevin unveils his captivating career journey. He takes us from his early studies in graphic design through a stint in retail, working in sales at Best Buy – a chapter that included an adventure living in Hawaii. We then explore his trajectory to becoming a well-regarded thought leader within the Ethereum ecosystem, a journey that included some time at the industry-leading NGINX. Kevin also shares the pivotal moments when he encountered The Graph and made the transition to full-time work in web3. He provides insights into his role at Edge & Node and dives into some of the transformative initiatives he is driving. Throughout, Kevin offers valuable insights into open source projects, Scaffold-ETH, BuildersDAO – a new DAO within The Graph ecosystem – and the sources of his drive and determination. Show Notes and Transcripts: The GRTiQ Podcast takes listeners inside web3 and The Graph (GRT) by interviewing members of the ecosystem. Please help support this project and build the community by subscribing and leaving a review. Twitter: GRT_iQ. www.GRTiQ.com
4 Months of BSD, Self Hosted Calendar and address Book, Ban scanners IPs from OpenSMTP logs, Self-hosted git page, Bastille template example, Restrict nginx Access by Geographical Location on FreeBSD, and more. NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) and the BSDNow Patreon (https://www.patreon.com/bsdnow) Headlines 4 Months of BSD (https://danterobinson.dev/BSD/4MonthsofBSD) Self Hosted Calendar and address Book (https://www.tumfatig.net/2023/self-hosted-calendar-and-addressbook-services-on-openbsd/) News Roundup Ban scanners IPs from OpenSMTP logs (https://dataswamp.org/~solene/2023-06-22-opensmtpd-block-attempts.html) Self-hosted git page with stagit (featuring ed, the standard editor) (https://sebastiano.tronto.net/blog/2022-11-23-git-host/) Bastille template example (https://bastillebsd.org/blog/2022/01/03/bastille-template-examples-adguardhome/) Nginx: How to Restrict Access by Geographical Location on FreeBSD (https://herrbischoff.com/2021/05/nginx-how-to-restrict-access-by-geographical-location-on-freebsd/) Beastie Bits Tarsnap This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. Feedback/Questions Chris - ARM (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/520/feedback/Chris%20-%20arm.md) Matthew - Groups (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/520/feedback/matthew%20-%20groups.md) Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv) ***
Linux and FreeBSD Firewalls Part 1, Why Netflix Chose NGINX as the Heart of Its CDN, Protect your web servers against PHP shells and malwares, Installing and running Gitlab howto, and more NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) and the BSDNow Patreon (https://www.patreon.com/bsdnow) Headlines Linux vs. FreeBSD : Linux and FreeBSD Firewalls – The Ultimate Guide : Part 1 (https://klarasystems.com/articles/freebsd-linux-and-freebsd-firewalls/) Why Netflix Chose NGINX as the Heart of Its CDN (https://www.nginx.com/blog/why-netflix-chose-nginx-as-the-heart-of-its-cdn/) News Roundup FreeBSD: Protect your web servers against PHP shells and malwares (https://ozgurkazancci.com/freebsd-protect-your-web-server-against-php-shells-and-malwares/) HowTo: Installing and running Gitlab (https://forums.FreeBSD.org/threads/howto-installing-and-running-gitlab.89436/) Beastie Bits • [World built in 36 hours on a Pentium 4!](https://www.reddit.com/r/freebsd/comments/13undl9/world_built_in_36_hours_on_a_pentium_4/) • [Fart init](https://x61.sh/log/2023/05/23052023153621-fart-init.html) • [Organized Freebies](https://mwl.io/archives/22832) • [OpenSMTPD 7.3.0p0 released](http://undeadly.org/cgi?action=article;sid=20230617111340) • [shutdown/reboot now require membership of group _shutdown](http://undeadly.org/cgi?action=article;sid=20230620064255) • [Where does my computer get the time from?](https://dotat.at/@/2023-05-26-whence-time.html) Tarsnap This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. *** Feedback/Questions sam - fav episodes (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/515/feedback/sam%20-%20fav%20episodes.md) Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv) ***
In this episode of Critical Thinking - Bug Bounty Podcast, we're back with Joel, fresh (haha) off of back-to-back live hack events in London and Seoul. We start with his recap of the events, and the different vibes of each LHE, then we dive into the technical thick of it, and talk web browsers, XSS vectors, new tools, CVSS 4, and much more than we can fit in this character limit. Just trust us when we say you don't want to miss it! Follow us on twitter at: @ctbbpodcast. We're new to this podcasting thing, so feel free to send us any feedback here: info@criticalthinkingpodcast.io. Shoutout to YTCracker for the awesome intro music! ------ Links ------ Follow your hosts Rhynorater & Teknogeek on twitter: https://twitter.com/0xteknogeek https://twitter.com/rhynorater ______ Episode 26 links: https://linke.to/Episode26Notes ______ Timestamps: (00:00:00) Introduction (00:04:10) LHE Vibes (00:07:45) "Hunting for NGINX alias traversals in the wild" (00:12:30) Various payouts in bug bounty programs (00:16:05) New XSS vectors and popovers (00:24:15) The "magical math element" in Firefox (00:27:15) LiveOverflow's research on HTML parsing quirks (00:32:10) Mr. Tux Racer, Woocommerce, and WordPress (00:40:00) Changes in the CVSS 4 draft spec (00:45:00) TomNomNom's new tool jsluice (00:51:15) JavaScript's import function (00:55:30) Gareth Heyes' book "JavaScript for Hackers" (01:02:24) Injecting JavaScript variables (01:09:15) Prototype pollution (01:13:15) DOM clobbering (01:18:10) Exploiting HTML injection using meta and base tags (01:25:00) CSS Games (01:28:00) Base tags
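As background for the alias-traversal segment listed above, here is a hedged sketch (not the methodology from the research discussed in the episode) of how the classic NGINX off-by-slash alias misconfiguration is usually probed. The target host and path prefixes are hypothetical, and it assumes the Python requests library and permission to test the host.

```python
# Minimal sketch of probing for the NGINX "alias" off-by-slash traversal.
# Vulnerable pattern:   location /assets { alias /var/www/static/; }
# (no trailing slash on the location), so /assets../secret.txt can resolve
# to /var/www/secret.txt. Hypothetical target and prefixes; requires
# `pip install requests` and authorization to test the host.
import requests

TARGET = "https://example.test"              # placeholder host
PREFIXES = ["/assets", "/static", "/media"]  # common aliased locations to try

for prefix in PREFIXES:
    # "assets.." is a single path segment, so clients do not normalize it away.
    url = f"{TARGET}{prefix}../"
    resp = requests.get(url, allow_redirects=False, timeout=10)
    # A 200 or 403 where a 404 is expected is a signal to investigate by hand.
    print(f"{url} -> {resp.status_code} ({len(resp.content)} bytes)")
```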
2023-06-27 Weekly News - Episode 199Watch the video version on YouTube at https://youtube.com/live/YhGqAVLYZk4?feature=shareHosts: Gavin Pickin - Senior Developer at Ortus Solutions Brad Wood - Senior Developer at Ortus Solutions Thanks to our Sponsor - Ortus SolutionsThe makers of ColdBox, CommandBox, ForgeBox, TestBox and all your favorite box-es out there. A few ways to say thanks back to Ortus Solutions: Like and subscribe to our videos on YouTube. Help ORTUS reach for the Stars - Star and Fork our Repos Star all of your Github Box Dependencies from CommandBox with https://www.forgebox.io/view/commandbox-github Subscribe to our Podcast on your Podcast Apps and leave us a review Sign up for a free or paid account on CFCasts, which is releasing new content every week BOXLife store: https://www.ortussolutions.com/about-us/shop Buy Ortus's Books 102 ColdBox HMVC Quick Tips and Tricks on GumRoad (http://gum.co/coldbox-tips) Learn Modern ColdFusion (CFML) in 100+ Minutes - Free online https://modern-cfml.ortusbooks.com/ or buy an EBook or Paper copy https://www.ortussolutions.com/learn/books/coldfusion-in-100-minutes Patreon Support ()We have 40 patreons: https://www.patreon.com/ortussolutions. News and AnnouncementsCFCamp was a blastBrad said: Back on US soil again, but still smiling from the wonderful experience at CFCamp. It was so good to be back in Germany and see my EU friends again in person. I'd say the first time back since Covid was a smashing success!Alex Well said: Back at home from my trip to 2023‘s #CFCamp
In a wonderful blog post, Kyle explores the pains he faced managing a Postgres instance for a startup he works for, and how enabling partitioning created significant LockManager wait events, causing the backend, and subsequently NGINX, to throw 500 errors. We discuss this in this video/podcast: https://www.kylehailey.com/post/postgres-partition-pains-lockmanager-waits
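As a rough illustration of how you might observe the symptom described above (a hedged sketch, not Kyle's setup), the query below samples pg_stat_activity for backends stuck on lock manager waits. It assumes the psycopg2 driver and a placeholder connection string; recent PostgreSQL releases report the wait event as 'LockManager', while some older ones spell it 'lock_manager'.

```python
# Minimal sketch: count backends currently waiting on the lock manager, the
# wait event discussed in the blog post above. Assumes psycopg2 and a
# hypothetical DSN.
import psycopg2

conn = psycopg2.connect("dbname=app user=postgres")  # placeholder connection string

with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT wait_event, count(*)
        FROM pg_stat_activity
        WHERE wait_event_type = 'LWLock'
          AND wait_event IN ('LockManager', 'lock_manager')
        GROUP BY wait_event
        """
    )
    for event, n in cur.fetchall():
        print(f"{event}: {n} backends waiting")

conn.close()
```

A steadily climbing count here while request latency rises is the kind of signal that, in the scenario above, surfaced downstream as 500s at the NGINX layer.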
2023-06-13 Weekly News - Episode 198Watch the video version on YouTube at https://youtube.com/live/r1L8Aec5-mk?feature=share Hosts: Gavin Pickin - Senior Developer at Ortus Solutions Grant Copley - Senior Developer at Ortus Solutions Thanks to our Sponsor - Ortus SolutionsThe makers of ColdBox, CommandBox, ForgeBox, TestBox and all your favorite box-es out there. A few ways to say thanks back to Ortus Solutions: Like and subscribe to our videos on YouTube. Help ORTUS reach for the Stars - Star and Fork our ReposStar all of your Github Box Dependencies from CommandBox with https://www.forgebox.io/view/commandbox-github Subscribe to our Podcast on your Podcast Apps and leave us a review Sign up for a free or paid account on CFCasts, which is releasing new content every week BOXLife store: https://www.ortussolutions.com/about-us/shop Buy Ortus's Books 102 ColdBox HMVC Quick Tips and Tricks on GumRoad (http://gum.co/coldbox-tips) Learn Modern ColdFusion (CFML) in 100+ Minutes - Free online https://modern-cfml.ortusbooks.com/ or buy an EBook or Paper copy https://www.ortussolutions.com/learn/books/coldfusion-in-100-minutes Patreon Support ()We have 40 patreons: https://www.patreon.com/ortussolutions. News and AnnouncementsOrtus Training - ColdBox Zero to HeroOctober 4th and 5thVenue Confirmation in Progress - will be less than 2 miles from the Mirage.Registration will be open soon!CF Camp Pre Conference Workshop DiscountWe can offer a 30% discount by using the code "OrtusPre30".Thank you for your ongoing support!https://www.eventbrite.com/e/cfcamp-pre-conference-workshops-by-ortus-solutions-tickets-641489421127 New Releases and UpdatesColdBox 6.9.0 ReleasedWe are excited to announce the release of ColdBox 6.9.0 LTS, packed with new features, improvements, and bug fixes. In this version, we focused on enhancing the debugging capabilities, improving the ScheduledTasks module, and fixing an important issue related to RestHandler. Let's dive into the details of these updates.https://www.ortussolutions.com/blog/coldbox-690-released Lucee 6 Beta 2Following a long last few weeks of final development, testing and bug fixing, the Lucee team is really proud to present Lucee 6 BETA 2https://dev.lucee.org/t/lucee-6-0-451-beta-2/12673 Lucee 6.0 Launch at @cf_camp
Jean Yang, CEO of Akita Software, joins Corey on Screaming in the Cloud to discuss how she went from academia to tech founder, and what her company is doing to improve monitoring and observability. Jean explains why Akita is different from other observability & monitoring solutions, and how it bridges the gap from what people know they should be doing and what they actually do in practice. Corey and Jean explore why the monitoring and observability space has been so broken, and why it's important for people to see monitoring as a chore and not a hobby. Jean also reveals how she took a leap from being an academic professor to founding a tech start-up. About JeanJean Yang is the founder and CEO of Akita Software, providing the fastest time-to-value for API monitoring. Jean was previously a tenure-track professor in Computer Science at Carnegie Mellon University.Links Referenced: Akita Software: https://www.akitasoftware.com/ Aki the dog chatbot: https://www.akitasoftware.com/blog-posts/we-built-an-exceedingly-polite-ai-dog-that-answers-questions-about-your-apis Twitter: https://twitter.com/jeanqasaur TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. My guest today is someone whose company has… well, let's just say that it has piqued my interest. Jean Yang is the CEO of Akita Software and not only is it named after a breed of dog, which frankly, Amazon service namers could take a lot of lessons from, but it also tends to approach observability slash monitoring from a perspective of solving the problem rather than preaching a new orthodoxy. Jean, thank you for joining me.Jean: Thank you for having me. Very excited.Corey: In the world that we tend to operate in, there are so many different observability tools, and as best I can determine observability is hipster monitoring. Well, if we call it monitoring, we can't charge you quite as much money for it. And whenever you go into any environment of significant scale, we pretty quickly discover that, “What monitoring tool are you using?” The answer is, “Here are the 15 that we use.” Then you talk to other monitoring and observability companies and ask them which ones of those they've replace, and the answer becomes, “We're number 16.” Which is less compelling of a pitch than you might expect. What does Akita do? Where do you folks start and stop?Jean: We want to be—at Akita—your first stop for monitoring and we want to be all of the monitoring, you need up to a certain level. And here's the motivation. So, we've talked with hundreds, if not thousands, of software teams over the last few years and what we found is there is such a gap between best practice, what people think everybody else is doing, what people are talking about at conferences, and what's actually happening in software teams. And so, what software teams have told me over and over again, is, hey, we either don't actually use very many tools at all, or we use 15 tools in name, but it's you know, one [laugh] one person on the team set this one up, it's monitoring one of our endpoints, we don't even know which one sometimes. Who knows what the thresholds are really supposed to be. 
We got too many alerts one day, we turned it off.But there's very much a gap between what people are saying they're supposed to do, what people in their heads say they're going to do next quarter or the quarter after that and what's really happening in practice. And what we saw was teams are falling more and more into monitoring debt. And so effectively, their customers are becoming their monitoring and it's getting harder to catch up. And so, what Akita does is we're the fastest, easiest way for teams to quickly see what endpoints you have in your system—so that's API endpoints—what's slow and what's throwing errors. And you might wonder, okay, wait, wait, wait, Jean. Monitoring is usually about, like, logs, metrics, and traces. I'm not used to hearing about API—like, what do APIs have to do with any of it?And my view is, look, we want the most simple form of what might be wrong with your system, we want a developer to be able to get started without having to change any code, make any annotations, drop in any libraries. APIs are something you can watch from the outside of a system. And when it comes to which alerts actually matter, where do you want errors to be alerts, where do you want thresholds to really matter, my view is, look, the places where your system interfaces with another system are probably where you want to start if you've really gotten nothing. And so, Akita view is, we're going to start from the outside in on this monitoring. We're turning a lot of the views on monitoring and observability on its head and we just want to be the tool that you reach for if you've got nothing, it's middle of the night, you have alerts on some endpoint, and you don't want to spend a few hours or weeks setting up some other tool. And we also want to be able to grow with you up until you need that power tool that many of the existing solutions out there are today.Corey: It feels like monitoring is very often one of those reactive things. I come from the infrastructure world, so you start off with, “What do you use for monitoring?” “Oh, we wait till the help desk calls us and users are reporting a problem.” Okay, that gets you somewhere. And then it becomes oh, well, what was wrong that time? The drive filled up. Okay, so we're going to build checks in that tell us when the drives are filling up.And you wind up trying to enumerate all of the different badness. And as a result, if you leave that to its logical conclusion, one of the stories that I heard out of MySpace once upon a time—which dates me somewhat—is that you would have a shift, so there were three shifts working around the clock, and each one would open about 5000 tickets, give or take, for the monitoring alerts that wound up firing off throughout their infrastructure. At that point, it's almost, why bother? Because no one is going to be around to triage these things; no one is going to see any of the signal buried and all of that noise. When you talk about doing this for an API perspective, are you running synthetics against those APIs? Are you shimming them in order to see what's passing through them? What's the implementation side look like?Jean: Yeah, that's a great question. So, we're using a technology called BPF, Berkeley Packet Filter. The more trendy, buzzy term is EBPF—Corey: The EBPF. Oh yes.Jean: Yeah, Extended Berkeley Packet Filter. But here's the secret, we only use the BPF part. It's actually a little easier for users to install. The E part is, you know, fancy and often finicky. 
But um—Corey: SEBPF then: Shortened Extended BPF. Why not?Jean: [laugh]. Yeah. And what BPF allows us to do is passively watch traffic from the outside of a system. So, think of it as you're sending API calls across the network. We're just watching that network. We're not in the path of that traffic. So, we're not intercepting the traffic in any way, we're not creating any additional overhead for the traffic, we're not slowing it down in any way. We're just sitting on the side, we're watching all of it, and then we're taking that and shipping an obfuscated version off to our cloud, and then we're giving you analytics on that.Corey: One of the things that strikes me as being… I guess, a common trope is there are a bunch of observability solutions out there that offer this sort of insight into what's going on within an environment, but it's, “Step one: instrument with some SDK or some agent across everything. Do an entire deploy across your fleet.” Which yeah, people are not generally going to be in a hurry to sign up for. And further, you also said a minute ago that the idea being that someone could start using this in the middle of the night in the middle of an outage, which tells me that it's not, “Step one: get the infrastructure sparkling. Step two: do a global deploy to everything.” How do you go about doing that? What is the level of embeddedness into the environment?Jean: Yeah, that's a great question. So, the reason we chose BPF is I wanted a completely black-box solution. So, no SDKs, no code annotations. I wanted people to be able to change a config file and have our solution apply to anything that's on the system. So, you could add routes, you could do all kinds of things. I wanted there to be no additional work on the part of the developer when that happened.And so, we're not the only solution that uses BPF or EBPF. There's many other solutions that say, “Hey, just drop us in. We'll let you do anything you want.” The big difference is what happens with the traffic once it gets processed. So, what EBPF or BPF gives you is it watches everything about your system. And so, you can imagine that's a lot of different events. That's a lot of things.If you're trying to fix an incident in the middle of the night and someone just dumps on you 1000 pages of logs, like, what are you going to do with that? And so, our view is, the more interesting and important and valuable thing to do here is not make it so that you just have the ability to watch everything about your system but to make it so that developers don't have to sift through thousands of events just to figure out what went wrong. So, we've spent years building algorithms to automatically analyze these API events to figure out, first of all, what are your endpoints? Because it's one thing to turn on something like Wireshark and just say, okay, here are the thousand API calls, I saw—ten thousand—but it's another thing to say, “Hey, 500 of those were actually the same endpoint and 300 of those had errors.” That's quite a hard problem.And before us, it turns out that there was no other solution that even did that to the level of being able to compile together, “Here are all the slow calls to an endpoint,” or, “Here are all of the erroneous calls to an endpoint.” That was blood, sweat, and tears of developers in the night before. And so, that's the first major thing we do. And then metrics on top of that. So, today we have what's slow, what's throwing errors. 
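As a toy illustration of the two ideas just described (passive capture behind a BPF filter, plus rolling raw calls up into endpoints), the sketch below is not Akita's implementation. It assumes the scapy library, packet-capture privileges, and plain-text HTTP on port 80, and the numeric-segment normalization is a deliberately crude stand-in for real endpoint inference.

```python
# Illustrative toy: passively observe HTTP requests with a kernel-side BPF
# filter (nothing sits in the request path) and roll the raw calls up into
# endpoint-level counts by collapsing numeric path segments.
# Assumes scapy (pip install scapy) and capture privileges; encrypted traffic
# is not visible this way.
import re
from collections import Counter

from scapy.all import load_layer, sniff
from scapy.layers.http import HTTPRequest

load_layer("http")            # enable scapy's HTTP request dissector
endpoint_counts = Counter()

def normalize(path: str) -> str:
    # /users/123/orders/456?page=2 -> /users/{id}/orders/{id}
    return re.sub(r"/\d+", "/{id}", path.split("?", 1)[0])

def on_packet(pkt) -> None:
    if pkt.haslayer(HTTPRequest):
        req = pkt[HTTPRequest]
        key = f"{req.Method.decode()} {normalize(req.Path.decode())}"
        endpoint_counts[key] += 1

# "tcp port 80" is a BPF filter string, applied in the kernel before packets
# reach this script; traffic is observed, never modified or delayed.
sniff(filter="tcp port 80", prn=on_packet, store=False, count=200)

for endpoint, n in endpoint_counts.most_common():
    print(f"{n:5d}  {endpoint}")
```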
People have asked us for other things like show me what happened after I deployed. Show me what's going on this week versus last week. But now that we have this data set, you can imagine there's all kinds of questions we can now start answering much more quickly on top of it.Corey: One thing that strikes me about your site is that when I go to akitasoftware.com, you've got a shout-out section at the top. And because I've been doing this long enough where I find that, yeah, you work at a company; you're going to say all kinds of wonderful, amazing aspirational things about it, and basically because I have deep-seated personality disorders, I will make fun of those things as my default reflexive reaction. But something that AWS, for example, does very well is when they announce something ridiculous on stage at re:Invent, I make fun of it, as is normal, but then they have a customer come up and say, “And here's the expensive, painful problem that they solved for us.”And that's where I shut up and start listening. Because it's a very different story to get someone else, who is presumably not being paid, to get on stage and say, “Yeah, this solved a sophisticated, painful problem.” Your shout-outs page has not just a laundry list of people saying great things about it, but there are former folks who have been on the show here, people I know and trust: Scott Johnson over at Docker, Gergely Orosz over at The Pragmatic Engineer, and other folks who have been luminaries in the space for a while. These are not the sort of people that are going to say, “Oh, sure. Why not? Oh, you're going to send me a $50 gift card in a Twitter DM? Sure I'll say nice things,” like it's one of those respond to a viral tweet spamming something nonsense. These are people who have gravitas. It's clear that there's something you're building that is resonating.Jean: Yeah. And for that, they found us. Everyone that I've tried to bribe to say good things about us actually [laugh] refused.Corey: Oh, yeah. As it turns out that it's one of those things where people are more expensive than you might think. It's like, “What, you want me to sell my credibility down the road?” Doesn't work super well. But there's something like the unsolicited testimonials that come out of, this is amazing, once people start kicking the tires on it.You're currently in open beta. So, I guess my big question for you is, whenever you see a product that says, “Oh, yeah, we solve everything cloud, on-prem, on physical instances, on virtual machines, on Docker, on serverless, everything across the board. It's awesome.” I have some skepticism on that. What is your ideal application architecture that Akita works best on? And what sort of things are you a complete nonstarter for?Jean: Yeah, I'll start with a couple of things we work well on. So, container platforms. We work relatively well. So, that's your Fargate, that's your Azure Web Apps. But that, you know, things running, we call them container platforms. Kubernetes is also something that a lot of our users have picked us up and had success with us on. I will say our Kubernetes deploy is not as smooth as we would like. We say, you know, you can install us—Corey: Well, that is Kubernetes, yes.Jean: [laugh]. Yeah.Corey: Nothing in Kubernetes is as smooth as we would like.Jean: Yeah, so we're actually rolling out Kubernetes injection support in the next couple of weeks. So, those are the two that people have had the most success on. 
If you're running on bare metal or on a VM, we work, but I will say that you have to know your way around a little bit to get that to work. What we don't work on is any Platform as a Service. So, like, a Heroku, a Lambda, a Render at the moment. So those, we haven't found a way to passively listen to the network traffic in a good way right now.And we also work best for unencrypted HTTP REST traffic. So, if you have encrypted traffic, it's not a non-starter, but you need to fall into a couple of categories. You either need to be using Kubernetes, you can run Akita as a sidecar, or you're using Nginx. And so, that's something we're still expanding support on. And we do not support GraphQL or GRPC at the moment.Corey: That's okay. Neither do I. It does seem these days that unencrypted HTTP API calls are increasingly becoming something of a relic, where folks are treating those as anti-patterns to be stamped out ruthlessly. Are you still seeing significant deployments of unencrypted APIs?Jean: Yeah. [laugh]. So, Corey—Corey: That is the reality, yes.Jean: That's a really good question, Corey, because in the beginning, we weren't sure what we wanted to focus on. And I'm not saying the whole deployment is unencrypted HTTP, but there is a place to install Akita to watch where it's unencrypted HTTP. And so, this is what I mean by if you have encrypted traffic, but you can install Akita as a Kubernetes sidecar, we can still watch that. But there was a big question when we started: should this be GraphQL, GRPC, or should it be REST? And I read the “State of the API Report” from Postman for you know, five years, and I still keep up with it.And every year, it seemed that not only was REST, remaining dominant, it was actually growing. So, [laugh] this was shocking to me as well because people said, well, “We have this more structured stuff, now. There's GRPC, there's GraphQL.” But it seems that for the added complexity, people weren't necessarily seeing the value and so, REST continues to dominate. And I've actually even seen a decline in GraphQL since we first started doing this. So, I'm fully on board the REST wagon. And in terms of encrypted versus unencrypted, I would also like to see more encryption as well. That's why we're working on burning down the long tail of support for that.Corey: Yeah, it's one of those challenges. Whenever you're deploying something relatively new, there's this idea that it should be forward-looking and you, on some level, want to modernize your architecture and infrastructure to keep up with it. An AWS integration story I see that's like that these days is, “Oh, yeah, generate an IAM credential set and just upload those into our system.” Yeah, the modern way of doing that is role assumption: to find a role and here's how to configure it so that it can do what we need to do. So, whenever you start seeing things that are, “Oh, yeah, just turn the security clock back in time a little bit,” that's always a little bit of an eyebrow raise.I can also definitely empathize with the joys of dealing with anything that even touches networking in a Lambda context. Building the Lambda extension for Tailscale was one of the last big dives I made into that area and I still have nightmares as a result. It does a lot of interesting things right up until you step off the golden path. 
And then suddenly, everything becomes yaks all the way down, in desperate need of shaving.Jean: Yeah, Lambda does something we want to handle on our roadmap, but I… believe we need a bigger team before [laugh] we are ready to tackle that.Corey: Yeah, we're going to need a bigger boat is very often [laugh] the story people have when they start looking at entire new architectural paradigms. So, you end up talking about working in containerized environments. Do you find that most of your deployments are living in cloud environments, in private data centers, some people call them private cloud. Where does the bulk of your user applications tend to live these days?Jean: The bulk of our user applications are in the cloud. So, we're targeting small to medium businesses to start. The reason being, we want to give our users a magical deployment experience. So, right now, a lot of our users are deploying in under 30 minutes. That's in no small part due to automations that we've built.And so, we initially made the strategic decision to focus on places where we get the most visibility. And so—where one, we get the most visibility, and two, we are ready for that level of scale. So, we found that, you know, for a large business, we've run inside some of their production environments and there are API calls that we don't yet handle well or it's just such a large number of calls, we're not doing the inference as well and our algorithms don't work as well. And so, we've made the decision to start small, build our way up, and start in places where we can just aggressively iterate because we can see everything that's going on. And so, we've stayed away, for instance, from any on-prem deployments for that reason because then we can't see everything that's going on. And so, smaller companies that are okay with us watching pretty much everything they're doing has been where we started. And now we're moving up into the medium-sized businesses.Corey: The challenge that I guess I'm still trying to wrap my head around is, I think that it takes someone with a particularly rosy set of glasses on to look at the current state of monitoring and observability and say that it's not profoundly broken in a whole bunch of ways. Now, where it all falls apart, Tower of Babelesque, is that there doesn't seem to be consensus on where exactly it's broken. Where do you see, I guess, this coming apart at the seams?Jean: I agree, it's broken. And so, if I tap into my background, which is I was a programming languages person in my very recently, previous life, programming languages people like to say the problem and the solution is all lies in abstraction. And so, computing is all about building abstractions on top of what you have now so that you don't have to deal with so many details and you got to think at a higher level; you're free of the shackles of so many low-level details. What I see is that today, monitoring and observability is a sort of abstraction nightmare. People have just taken it as gospel that you need to live at the lowest level of abstraction possible the same way that people truly believe that assembly code was the way everybody was going to program forevermore back, you know, 50 years ago.So today, what's happening is that when people think monitoring, they think logs, not what's wrong with my system, what do I need to pay attention to? 
They think, “I have to log everything, I have to consume all those logs, we're just operating at the level of logs.” And that's not wrong because there haven't been any tools that have given people any help above the level of logs. Although that's not entirely correct, you know? There's also events and there's also traces, but I wouldn't say that's actually lifting the level of [laugh] abstraction very much either.And so, people today are thinking about monitoring and observability as this full control, like, I'm driving my, like, race car, completely manual transmission, I want to feel everything. And not everyone wants to or needs to do that to get to where they need to go. And so, my question is, how far are can we lift the level of abstraction for monitoring and observability? I don't believe that other people are really asking this question because most of the other players in the space, they're asking what else can we monitor? Where else can we monitor it? How much faster can we do it? Or how much more detail can we give the people who really want the power tools?But the people entering the buyer's market with needs, they're not people—you don't have, like, you know, hordes of people who need more powerful tools. You have people who don't know about the systems are dealing with and they want easier. They want to figure out if there's anything wrong with our system so they can get off work and do other things with their lives.Corey: That, I think, is probably the thing that gets overlooked the most. It's people don't tend to log into their monitoring systems very often. They don't want to. When they do, it's always out of hours, middle of the night, and they're confronted with a whole bunch of upsell dialogs of, “Hey, it's been a while. You want to go on a tour of the new interface?”Meanwhile, anything with half a brain can see there's a giant spike on the graph or telemetry stop coming in.Jean: Yeah.Corey: It's way outside of normal business hours where this person is and maybe they're not going to be in the best mood to engage with your brand.Jean: Yeah. Right now, I think a lot of the problem is, you're either working with monitoring because you're desperate, you're in the middle of an active incident, or you're a monitoring fanatic. And there isn't a lot in between. So, there's a tweet that someone in my network tweeted me that I really liked which is, “Monitoring should be a chore, not a hobby.” And right now, it's either a hobby or an urgent necessity [laugh].And when it gets to the point—so you know, if we think about doing dishes this way, it would be as if, like, only, like, the dish fanatics did dishes, or, like, you will just have piles of dishes, like, all over the place and raccoons and no dishes left, and then you're, like, “Ah, time to do a thing.” But there should be something in between where there's a defined set of things that people can do on a regular basis to keep up with what they're doing. It should be accessible to everyone on the team, not just a couple of people who are true fanatics. 
No offense to the people out there, I love you guys, you're the ones who are really helping us build our tool the most, but you know, there's got to be a world in which more people are able to do the things you do.Corey: That's part of the challenge is bringing a lot of the fire down from Mount Olympus to the rest of humanity, where at some level, Prometheus was a great name from that—Jean: Yep [laugh].Corey: Just from that perspective because you basically need to be at that level of insight. I think Kubernetes suffers from the same overall problem where it is not reasonably responsible to run a Kubernetes production cluster without some people who really know what's going on. That's rapidly changing, which is for the better, because most companies are not going to be able to afford a multimillion-dollar team of operators who know the ins and outs of these incredibly complex systems. It has to become more accessible and simpler. And we have an entire near century at this point of watching abstractions get more and more and more complex and then collapsing down in this particular field. And I think that we're overdue for that correction in a lot of the modern infrastructure, tooling, and approaches that we take.Jean: I agree. It hasn't happened yet in monitoring and observability. It's happened in coding, it's happened in infrastructure, it's happened in APIs, but all of that has made it so that it's easier to get into monitoring debt. And it just hasn't happened yet for anything that's more reactive and more about understanding what the system is that you have.Corey: You mentioned specifically that your background was in programming languages. That's understating it slightly. You were a tenure-track professor of computer science at Carnegie Mellon before entering industry. How tied to what your area of academic speciality was, is what you're now at Akita?Jean: That's a great question and there are two answers to that. The first is very not tied. If it were tied, I would have stayed in my very cushy, highly [laugh] competitive job that I worked for years to get, to do stuff there. And so like, what we're doing now is comes out of thousands of conversations with developers and desire to build on the ground tools that I'm—there's some technically interesting parts to it, for sure. I think that our technical innovation is our moat, but is it at the level of publishable papers? Publishable papers are a very narrow thing; I wouldn't be able to say yes to that question.On the other hand, everything that I was trained to do was about identifying a problem and coming up with an out-of-the-box solution for it. And especially in programming languages research, it's really about abstractions. It's really about, you know, taking a set of patterns that you see of problems people have, coming up with the right abstractions to solve that problem, evaluating your solution, and then, you know, prototyping that out and building on top of it. And so, in that case, you know, we identified, hey, people have a huge gap when it comes to monitoring and observability. I framed it as an abstraction problem, how can we lift it up?We saw APIs as this is a great level to build a new level of solution. And our solution, it's innovative, but it also solves the problem. And to me, that's the most important thing. Our solution didn't need to be innovative. If you're operating in an academic setting, it's really about… producing a new idea. 
It doesn't actually [laugh]—I like to believe that all endeavors really have one main goal, and in academia, the main goal is producing something new. And to me, building a product is about solving a problem and our main endeavor was really to solve a real problem here.Corey: I think that it is, in many cases, useful when we start seeing a lot of, I guess, overflow back and forth between academia and industry, in both directions. I think that it is doing academia a disservice when you start looking at it purely as pure theory, and oh yeah, they don't deal with any of the vocational stuff. Conversely, I think the idea that industry doesn't have anything to learn from academia is dramatically misunderstanding the way the world works. The idea of watching some of that ebb and flow and crossover between them is neat to see.Jean: Yeah, I agree. I think there's a lot of academics I super respect and admire who have done great things that are useful in industry. And it's really about, I think, what you want your main goal to be at the time. Is it, do you want to be optimizing for new ideas or contributing, like, a full solution to a problem at the time? But it's there's a lot of overlap in the skills you need.Corey: One last topic I'd like to dive into before we call it an episode is that there's an awful lot of hype around a variety of different things. And right now in this moment, AI seems to be one of those areas that is getting an awful lot of attention. It's clear too there's something of value there—unlike blockchain, which has struggled to identify anything that was not fraud as a value proposition for the last decade-and-a-half—but it's clear that AI is offering value already. You have recently, as of this recording, released an AI chatbot, which, okay, great. But what piques my interest is one, it's a dog, which… germane to my interest, by all means, and two, it is marketed as, and I quote, “Exceedingly polite.”Jean: [laugh].Corey: Manners are important. Tell me about this pupper.Jean: Yeah, this dog came really out of four or five days of one of our engineers experimenting with ChatGPT. So, for a little bit of background, I'll just say that I have been excited about the this latest wave of AI since the beginning. So, I think at the very beginning, a lot of dev tools people were skeptical of GitHub Copilot; there was a lot of controversy around GitHub Copilot. I was very early. And I think all the Copilot people retweeted me because I was just their earlies—like, one of their earliest fans. I was like, “This is the coolest thing I've seen.”I've actually spent the decade before making fun of AI-based [laugh] programming. But there were two things about GitHub Copilot that made my jaw drop. And that's related to your question. So, for a little bit of background, I did my PhD in a group focused on program synthesis. So, it was really about, how can we automatically generate programs from a variety of means? From constraints—Corey: Like copying and pasting off a Stack Overflow, or—Jean: Well, the—I mean, that actually one of the projects that my group was literally applying machine-learning to terabytes of other example programs to generate new programs. So, it was very similar to GitHub Copilot before GitHub Copilot. It was synthesizing API calls from analyzing terabytes of other API calls. And the thing that I had always been uncomfortable with these machine-learning approaches in my group was, they were in the compiler loop. 
So, it was, you know, you wrote some code, the compiler did some AI, and then it spit back out some code that, you know, like you just ran.And so, that never sat well with me. I always said, “Well, I don't really see how this is going to be practical,” because people can't just run random code that you basically got off the internet. And so, what really excited me about GitHub Copilot was the fact that it was in the editor loop. I was like, “Oh, my God.”Corey: It had the context. It was right there. You didn't have to go tabbing to something else.Jean: Exactly.Corey: Oh, yeah. I'm in the same boat. I think it is basically—I've seen the future unfolding before my eyes.Jean: Yeah. It was the autocomplete thing. And to me, that was the missing piece. Because in your editor, you always read your code before you go off and—you know, like, you read your code, whoever code reviews your code reads your code. There's always at least, you know, two pairs of eyes, at least theoretically, reading your code.So, that was one thing that was jaw-dropping to me. That was the revelation of Copilot. And then the other thing was that it was marketed not as, “We write your code for you,” but the whole Copilot marketing was that, you know, it kind of helps you with boilerplate. And to me, I had been obsessed with this idea of how can you help developers write less boilerplate for years. And so, this AI-supported boilerplate copiloting was very exciting to me.And I saw that as very much the beginning of a new era, where, yes, there's tons of data on how we should be programming. I mean, all of Akita is based on the fact that we should be mining all the data we have about how your system and your code is operating to help you do stuff better. And so, to me, you know, Copilot is very much in that same philosophy. But our AI chatbot is, you know, just a next step along this progression. Because for us, you know, we collect all this data about your API behavior; we have been using non-AI methods to analyze this data and show it to you.And what ChatGPT allowed us to do in less than a week was analyze this data using very powerful large-language models and have this conversational interface that both gives you the opportunity to check over and follow up on the question so that what you're spitting out—so what we're spitting out as Aki the dog doesn't have to be a hundred percent correct. But to me, the fact that Aki is exceedingly polite and kind of goofy—he, you know, randomly woofs and says a lot of things about how he's a dog—it's the right level of seriousness so that it's not messaging, hey, this is the end all, be all, the way, you know, the compiler loop never sat well with me because I just felt deeply uncomfortable that an AI was having that level of authority in a system, but a friendly dog that shows up and tells you some things that you can ask some additional questions to, no one's going to take him that seriously. But if he says something useful, you're going to listen. And so, I was really excited about the way this was set up. Because I mean, I believe that AI should be a collaborator and it should be a collaborator that you never take with full authority. And so, the chat and the politeness covered those two parts for me both.Corey: Yeah, on some level, I can't shake the feeling that it's still very early days there for Chat-Gipity—yes, that's how I pronounce it—and its brethren as far as redefining, on some level, what's possible. 
I think that it's in many cases being overhyped, but it's solving an awful lot of the… the boilerplate, the stuff that is challenging. A question I have, though, is that, as a former professor, a concern that I have is when students are using this, it's less to do with the fact that they're not—they're taking shortcuts that weren't available to me and wanting to make them suffer, but rather, it's, on some level, if you use it to write your English papers, for example. Okay, great, it gets the boring essay you don't want to write out of the way, but the reason you write those things is it teaches you to form a story, to tell a narrative, to structure an argument, and I think that letting the computer do those things, on some level, has the potential to weaken us across the board. Where do you stand on it, given that you see both sides of that particular snake?Jean: So, here's a devil's advocate sort of response to it, is that maybe the writing [laugh] was never the important part. And it's, as you say, telling the story that was the important part. And so, what better way to distill that out than the prompt engineering piece of it? Because if you knew that you could always get someone to flesh out your story for you, then it really comes down to, you know, I want to tell a story with these five main points. And in some way, you could see this as a playing field leveler.You know, I think that as a—English is actually not my first language. I spent a lot of time editing my parents' writing for their work when I was a kid. And something I always felt really strongly about was not discriminating against people because they can't form sentences or they don't have the right idioms. And I actually spent a lot of time proofreading my friends' emails when I was in grad school for the non-native English speakers. And so, one way you could see this is, look, people who are not insiders now are on the same playing field. They just have to be clear thinkers.Corey: That is a fascinating take. I think I'm going to have to—I'm going to have to ruminate on that one. I really want to thank you for taking the time to speak with me today about what you're up to. If people want to learn more, where's the best place for them to find you?Jean: Well, I'm always on Twitter, still [laugh]. I'm @jeanqasaur—J-E-A-N-Q-A-S-A-U-R. And there's a chat dialog on akitasoftware.com. I [laugh] personally oversee a lot of that chat, so if you ever want to find me, that is a place, you know, where all messages will get back to me somehow.Corey: And we will, of course, put a link to that into the [show notes 00:35:01]. Thank you so much for your time. I appreciate it.Jean: Thank you, Corey.Corey: Jean Yang, CEO at Akita Software. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry, insulting comment that you will then, of course, proceed to copy to the other 17 podcast tools that you use, just like you do your observability monitoring suite.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
AB Periasamy, Co-Founder and CEO of MinIO, joins Corey on Screaming in the Cloud to discuss what it means to be truly open source and the current and future state of multi-cloud. AB explains how MinIO was born from the idea that the world was going to produce a massive amount of data, and what it's been like to see that come true and continue to be the future outlook. AB and Corey explore why some companies are hesitant to move to cloud, and AB describes why he feels the move is inevitable regardless of cost. AB also reveals how he has helped create truly free open-source software, and how his partnership with Amazon has been beneficial. About ABAB Periasamy is the co-founder and CEO of MinIO, an open source provider of high-performance object storage software. In addition to this role, AB is an active investor and advisor to a wide range of technology companies, from H2O.ai and Manetu, where he serves on the board, to advisor or investor roles with Humio, Isovalent, Starburst, Yugabyte, Tetrate, Postman, Storj, Procurify, and Helpshift. Successful exits include Gitter.im (GitLab), Treasure Data (ARM) and Fastor (SMART).AB co-founded Gluster in 2005 to commoditize scalable storage systems. As CTO, he was the primary architect and strategist for the development of the Gluster file system, a pioneer in software defined storage. After the company was acquired by Red Hat in 2011, AB joined Red Hat's Office of the CTO. Prior to Gluster, AB was CTO of California Digital Corporation, where his work led to scaling of commodity cluster computing to supercomputing-class performance. His work there resulted in the development of Lawrence Livermore Laboratory's “Thunder” code, which, at the time, was the second fastest in the world. AB holds a Computer Science Engineering degree from Annamalai University, Tamil Nadu, India.AB is one of the leading proponents and thinkers on the subject of open source software - articulating the difference between the philosophy and business model. An active contributor to a number of open source projects, he is a board member of India's Free Software Foundation.Links Referenced: MinIO: https://min.io/ Twitter: https://twitter.com/abperiasamy LinkedIn: https://www.linkedin.com/in/abperiasamy/ Email: mailto:ab@min.io TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by our friends at Chronosphere. When it costs more money and time to observe your environment than it does to build it, there's a problem. With Chronosphere, you can shape and transform observability data based on need, context and utility. Learn how to only store the useful data you need to see in order to reduce costs and improve performance at chronosphere.io/corey-quinn. That's chronosphere.io/corey-quinn. And my thanks to them for sponsoring my ridiculous nonsense. Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn, and I have taken a somewhat strong stance over the years on the relative merits of multi-cloud, and when it makes sense and when it doesn't. And it's time for me to start modifying some of those. 
To have that conversation and several others as well, with me today on this promoted guest episode is AB Periasamy, CEO and co-founder of MinIO. AB, it's great to have you back.AB: Yes, it's wonderful to be here again, Corey.Corey: So, one thing that I want to start with is defining terms. Because when we talk about multi-cloud, there are—to my mind at least—smart ways to do it and ways that are frankly ignorant. The thing that I've never quite seen is, it's greenfield, day one. Time to build something. Let's make sure we can build and deploy it to every cloud provider we might ever want to use.And that is usually not the right path. Whereas different workloads in different providers, that starts to make a lot more sense. When you do mergers and acquisitions, as big companies tend to do in lieu of doing anything interesting, it seems like they find it oh, we're suddenly in multiple cloud providers, should we move this acquisition to a new cloud? No. No, you should not.One of the challenges, of course, is that there's a lot of differentiation between the baseline offerings that cloud providers have. MinIO is interesting in that it starts and stops with an object store that is mostly S3 API compatible. Have I nailed the basic premise of what it is you folks do?AB: Yeah, it's basically an object store. Amazon S3 versus us, it's actually—that's the comparable, right? Amazon S3 is a hosted cloud storage as a service, but underneath the underlying technology is called object-store. MinIO is a software and it's also open-source and it's the software that you can deploy on the cloud, deploy on the edge, deploy anywhere, and both Amazon S3 and MinIO are exactly S3 API compatible. It's a drop-in replacement. You can write applications on MinIO and take it to AWS S3, and do the reverse. Amazon made S3 API a standard inside AWS, we made S3 API standard across the whole cloud, all the cloud edge, everywhere, rest of the world.Corey: I want to clarify two points because otherwise I know I'm going to get nibbled to death by ducks on the internet. When you say open-source, it is actually open-source; you're AGPL, not source available, or, “We've decided now we're going to change our model for licensing because oh, some people are using this without paying us money,” as so many companies seem to fall into that trap. You are actually open-source and no one reasonable is going to be able to disagree with that definition.The other pedantic part of it is when something says that it's S3 compatible on an API basis, like, the question is always does that include the weird bugs that we wish it wouldn't have, or some of the more esoteric stuff that seems to be a constant source of innovation? To be clear, I don't think that you need to be particularly compatible with those very corner and vertex cases. For me, it's always been the basic CRUD operations: can you store an object? Can you give it back to me? Can you delete the thing? And maybe an update, although generally object stores tend to be atomic. How far do you go down that path of being, I guess, a faithful implementation of what the S3 API does, and at which point you decide that something is just, honestly, lunacy and you feel no need to wind up supporting that?AB: Yeah, the unfortunate part of it is we have to be very, very deep. It only takes one API to break. And it's not even, like, one API we did not implement; one API under a particular circumstance, right? 
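What that drop-in compatibility looks like at the level of the basic CRUD calls Corey mentions is easy to sketch. This is a minimal, hedged example, assuming the MinIO Go client (minio-go/v7); the endpoint, credentials, bucket, and object names are placeholders, and the bucket is assumed to already exist:

```go
package main

import (
	"bytes"
	"context"
	"io"
	"log"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	ctx := context.Background()

	// Point this at a MinIO deployment or at s3.amazonaws.com; the calls below
	// are plain S3 API operations either way. Endpoint and keys are placeholders.
	client, err := minio.New("play.min.io", &minio.Options{
		Creds:  credentials.NewStaticV4("ACCESS-KEY", "SECRET-KEY", ""),
		Secure: true,
	})
	if err != nil {
		log.Fatal(err)
	}

	bucket, object := "demo-bucket", "hello.txt" // assumed to exist / hypothetical
	payload := []byte("hello object storage")

	// Create (PUT) an object.
	if _, err := client.PutObject(ctx, bucket, object,
		bytes.NewReader(payload), int64(len(payload)),
		minio.PutObjectOptions{ContentType: "text/plain"}); err != nil {
		log.Fatal(err)
	}

	// Read (GET) it back.
	obj, err := client.GetObject(ctx, bucket, object, minio.GetObjectOptions{})
	if err != nil {
		log.Fatal(err)
	}
	data, err := io.ReadAll(obj)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("read %d bytes back", len(data))

	// Delete it.
	if err := client.RemoveObject(ctx, bucket, object, minio.RemoveObjectOptions{}); err != nil {
		log.Fatal(err)
	}
}
```

Swapping the endpoint string for an AWS S3 endpoint (or any other S3-compatible store) without touching the rest of the code is the whole point of the compatibility being discussed here.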
Like even if you see, like, AWS SDK is, right, Java SDK, different versions of Java SDK will interpret the same API differently. And AWS S3 is an API, it's not a standard.And Amazon has published the REST specifications, API specs, but they are more like religious text. You can interpret it in many ways. Amazon's own SDK has interpreted, like, this in several ways, right? The only way to get it right is, like, you have to have a massive ecosystem around your application. And if one thing breaks—today, if I commit a code and it introduced a regression, I will immediately hear from a whole bunch of community what I broke.There's no certification process here. There is no industry consortium to control the standard, but then there is an accepted standard. Like, if the application works, they need works. And one way to get it right is, like, Amazon SDKs, all of those language SDKs, to be cleaner, simpler, but applications can even use MinIO SDK to talk to Amazon and Amazon SDK to talk to MinIO. Now, there is a clear, cooperative model.And I actually have tremendous respect for Amazon engineers. They have only been kind and meaningful, like, reasonable partnership. Like, if our community reports a bug that Amazon rolled out a new update in one of the region and the S3 API broke, they will actually go fix it. They will never argue, “Why are you using MinIO SDK?” Their engineers, they do everything by reason. That's the reason why they gained credibility.Corey: I think, on some level, that we can trust that the API is not going to meaningfully shift, just because so much has been built on top of it over the last 15, almost 16 years now that even slight changes require massive coordination. I remember there was a little bit of a kerfuffle when they announced that they were going to be disabling the BitTorrent endpoint in S3 and it was no longer going to be supported in new regions, and eventually they were turning it off. There were still people pushing back on that. I'm still annoyed by some of the documentation around the API that says that it may not return a legitimate error code when it errors with certain XML interpretations. It's… it's kind of become very much its own thing.AB: [unintelligible 00:06:22] a problem, like, we have seen, like, even stupid errors similar to that, right? Like, HTTP headers are supposed to be case insensitive, but then there are some language SDKs will send us in certain type of casing and they expect the case to be—the response to be same way. And that's not HTTP standard. If we have to accept that bug and respond in the same way, then we are asking a whole bunch of community to go fix that application. And Amazon's problem are our problems too. We have to carry that baggage.But some places where we actually take a hard stance is, like, Amazon introduced that initially, the bucket policies, like access control list, then finally came IAM, then we actually, for us, like, the best way to teach the community is make best practices the standard. The only way to do it. We have been, like, educating them that we actually implemented ACLs, but we removed it. So, the customers will no longer use it. The scale at which we are growing, if I keep it, then I can never force them to remove.So, we have been pedantic about, like, how, like, certain things that if it's a good advice, force them to do it. That approach has paid off, but the problem is still quite real. Amazon also admits that S3 API is no longer simple, but at least it's not like POSIX, right? 
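The header-casing problem AB describes is easy to see in miniature with Go's standard net/http package; this sketch is not tied to any particular SDK. Lookups through the Header methods are case-insensitive, as the HTTP spec expects, but raw map access with non-canonical casing is not, which is roughly the class of client behavior a server ends up having to accommodate:

```go
package main

import (
	"fmt"
	"net/http"
	"net/textproto"
)

func main() {
	h := http.Header{}

	// Header names are case-insensitive per the HTTP spec. Go canonicalizes
	// "x-amz-request-id" to "X-Amz-Request-Id" on Set and Get.
	h.Set("x-amz-request-id", "abc123")
	fmt.Println(h.Get("X-AMZ-REQUEST-ID")) // "abc123": Get canonicalizes the key

	// A client that indexes the underlying map with its own casing sees nothing,
	// even though the header is present.
	fmt.Println(h["x-amz-request-id"])                                   // [] : raw map access is case-sensitive
	fmt.Println(h[textproto.CanonicalMIMEHeaderKey("x-amz-request-id")]) // [abc123]
}
```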
POSIX is a rich set of APIs, but doesn't do useful things that we need to do. So, Amazon's APIs are built on top of simple primitive foundations that got the storage architecture correct, and then doing sophisticated functionalities on top of the simple primitives, these atomic RESTful APIs, you can finally do it right and you can take it to great lengths and still not break the storage system.So, I'm not so concerned. I think it's time for both of us to slow down and then make sure that the ease of operation and adoption is the goal, rather than trying to create an API Bible.Corey: Well, one differentiation that you have that frankly I wish S3 would wind up implementing is this idea of bucket quotas. I would give a lot in certain circumstances to be able to say that this S3 bucket should be able to hold five gigabytes of storage and no more. Like, you could fix a lot of free tier problems, for example, by doing something like that. But there's also the problem that you'll see in data centers where, okay, we've now filled up whatever storage system we're using. We need to either expand it at significant cost and it's going to take a while or it's time to go and maybe delete some of the stuff we don't necessarily need to keep in perpetuity.There is no moment of reckoning in traditional S3 in that sense because, oh, you can just always add one more gigabyte at 2.3 or however many cents it happens to be, and you wind up with an unbounded growth problem that you're never really forced to wrestle with. Because it's infinite storage. They can add drives faster than you can fill them in most cases. So, it just feels like there's an economic story, if nothing else, just from a governance control and make sure this doesn't run away from me, and alert me before we get into the multi-petabyte style of storage for my Hello World WordPress website.AB: Mm-hm. Yeah, so I always thought that Amazon did not do this—it's not just Amazon, the cloud players, right—they did not do this because they want—it is good for their business; they want all the customers' data, like unrestricted growth of data. Certainly it is beneficial for their business, but there is an operational challenge. When you set quota—this is why we grudgingly introduced this feature. We did not have quotas and we didn't want to because Amazon S3 API doesn't talk about quota, but the enterprise community wanted this so badly.And eventually we [unintelligible 00:09:54] it and we gave. But there is one issue to be aware of, right? The problem with quota is that you as an object storage administrator, you set a quota, let's say this bucket, this application, I don't expect to see more than 20TB; I'm going to set 100TB quota. And then you forget it. And then you think in six months, they will reach 20TB. The reality is, in six months they reach 100TB.And then when nobody expected—everybody has forgotten that there was a quota in a certain place—suddenly applications start failing. And when it fails, it doesn't—even though the S3 API responds back saying that there's insufficient space, but then the application doesn't really pass that error all the way up. When applications fail, they fail in unpredictable ways. By the time the application developer realizes that it's actually the object storage that ran out of space, they've lost time and it's a downtime. So, as long as they have proper observability—because, I mean, I would also ask for observability that can alert you that you are going to run out of space soon. If you have those systems in place, then go for quota. 
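A minimal sketch of the "alert before you actually run out" check AB is describing, assuming the MinIO Go client (minio-go/v7); the endpoint, bucket name, soft-quota figure, and thresholds are all hypothetical, and a real deployment would read usage from server metrics rather than listing every object:

```go
package main

import (
	"context"
	"log"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

// bucketUsage sums object sizes by listing the bucket. Fine for a sketch;
// large buckets would pull usage from the server's metrics instead.
func bucketUsage(ctx context.Context, c *minio.Client, bucket string) (int64, error) {
	var total int64
	for obj := range c.ListObjects(ctx, bucket, minio.ListObjectsOptions{Recursive: true}) {
		if obj.Err != nil {
			return 0, obj.Err
		}
		total += obj.Size
	}
	return total, nil
}

func main() {
	ctx := context.Background()
	client, err := minio.New("minio.internal.example:9000", &minio.Options{ // placeholder endpoint
		Creds:  credentials.NewStaticV4("ACCESS-KEY", "SECRET-KEY", ""),
		Secure: true,
	})
	if err != nil {
		log.Fatal(err)
	}

	const (
		bucket    = "analytics-data" // hypothetical bucket
		softQuota = 100 << 40        // 100 TiB "soft" ceiling, hypothetical
	)

	used, err := bucketUsage(ctx, client, bucket)
	if err != nil {
		log.Fatal(err)
	}

	// Warn early, page late; the thresholds are arbitrary placeholders.
	switch pct := float64(used) / float64(softQuota); {
	case pct >= 0.9:
		log.Printf("PAGE: %s at %.0f%% of soft quota", bucket, pct*100)
	case pct >= 0.7:
		log.Printf("WARN: %s at %.0f%% of soft quota", bucket, pct*100)
	default:
		log.Printf("%s at %.0f%% of soft quota", bucket, pct*100)
	}
}
```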
If not, I would agree with the S3 API standard that is not about cost. It's about operational, unexpected accidents.Corey: Yeah, on some level, we wound up having to deal with the exact same problem with disk volumes, where my default for most things was, at 70%, I want to start getting pings on it and at 90%, I want to be woken up for it. So, for small volumes, you wind up with a runaway log or whatnot, you have a chance to catch it and whatnot, and for the giant multi-petabyte things, okay, well, why would you alert at 70% on that? Well, because procurement takes a while when we're talking about buying that much disk for that much money. It was a roughly good baseline for these things. The problem, of course, is when you have none of that, and, well, it got full, so oops-a-doozy.On some level, I wonder if there's a story around soft quotas that just scream at you, but let you keep adding to it. But that turns into implementation details, and you can build something like that on top of any existing object store if you don't need the hard limit aspect.AB: Actually, that is the right way to do it. That's what I would recommend customers to do. Even though there is a hard quota, I will tell you, don't use it, but use soft quota. And the soft quota, instead of even soft quota, you monitor them. On the cloud, at least you have some kind of restriction that the more you use, the more you pay; eventually the month end bills, it shows up.On MinIO, when it's deployed in these large data centers, where it's unrestricted access, quickly you can use a lot of space, no one knows what data to delete, and no one will tell you what data to delete. The way to do this is there has to be some kind of accountability. The way to do it is—actually [unintelligible 00:12:27] have some chargeback mechanism based on the bucket growth. And the business units have to pay for it, right? That IT doesn't run for free, right? IT has to have a budget and it has to be sponsored by the applications team.And you measure it; instead of setting a hard limit, you actually charge them: based on the usage of your bucket, you're going to pay for it. And this is an observability problem. And you can call it soft quotas, but it has to trigger an alert in observability. It's an observability problem. But it actually is interesting to hear that as soft quotas, which makes a lot of sense.Corey: It's one of those problems that I think people only figure out after they've experienced it once. And then they look like wizards from the future who say, “Oh, yeah, you're going to run into a quota storage problem.” Yeah, we all find that out the first time we smack into something and live to regret it. Now, we can talk a lot about the nuances and implementation and low level detail of this stuff, but let's zoom out of it. What are you folks up to these days? What is the bigger picture that you're seeing of object storage and the ecosystem?AB: Yeah. So, when we started, right, our idea was that the world is going to produce an incredible amount of data. In ten years from now, we are going to drown in data. We've been saying that today and it will be true. Every year, you say ten years from now and it will still be valid, right?That was the reason for us to play this game. And we saw that every one of these cloud players was incompatible with each other. It's like early Unix days, right? Like a bunch of operating systems, everything was incompatible and applications were beginning to adopt this new standard, but they were stuck. 
And then the cloud storage players, whatever they had, like, GCS can only run inside Google Cloud, S3 can only run inside AWS, and the cloud players' game was to bring all the world's data into the cloud.And that actually requires an enormous amount of bandwidth. And moving data into the cloud at that scale, if you look at the amount of data the world is producing, if the data is produced inside the cloud, it's a different game, but the data is produced everywhere else. MinIO's idea was that instead of introducing yet another API standard, Amazon got the architecture right and that's the right way to build large-scale infrastructure. If we stick to Amazon S3 API instead of introducing yet another standard, [unintelligible 00:14:40] API, and then go after the world's data. When we started in 2014 November—it's really 2015 when we started—it was laughable. People thought that there wouldn't be a need for MinIO because the whole world will basically go to AWS S3 and they will be the world's data store. Amazon is capable of doing that; the race is not over, right?Corey: And it still couldn't be done now. The thing is that they would need to fundamentally rethink their, frankly, usurious data egress charges. The problem is not that it's expensive to store data in AWS; it's that it's expensive to store data and then move it anywhere else for analysis or use on something else. So, there are entire classes of workload that people should not consider the big three cloud providers as the place where that data should live because you're never getting it back.AB: Spot on, right? Even if network is free, right, Amazon makes, like, okay, zero egress-ingress charge, the data we're talking about, like, most of MinIO deployments, they start at petabytes. Like, one to ten petabyte, feels like 100 terabyte. Even if network is free, try moving a ten-petabyte infrastructure into the cloud. How are you going to move it?Even with FedEx and UPS giving you a lot of bandwidth in their trucks, it is not possible, right? I think the data will continue to be produced everywhere else. So, our bet was there we will be [unintelligible 00:15:56]—instead of you moving the data, you can run MinIO where there is data, and then the whole world will look like AWS's S3 compatible object store. We took a very different path. But now, when I say the same story that we started with day one, it is no longer laughable, right?People believe that yes, MinIO is there because our market footprint is now larger than Amazon S3. And as it goes to production, customers are now realizing it's basically growing inside a shadow IT and eventually businesses realize the bulk of their business-critical data is sitting on MinIO and that's how it's surfacing up. So now, what we are seeing, this year particularly, all of these customers are hugely concerned about cost optimization. And as part of the journey, there is also multi-cloud and hybrid-cloud initiatives. They want to make sure that their application can run on any cloud, or the same software can run on their colos like Equinix, or, like, a bunch of, like, Digital Realty, anywhere.And MinIO's software, this is what we set out to do. MinIO can run anywhere inside the cloud, all the way to the edge, even on Raspberry Pi. 
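The "how are you going to move it?" question above is easy to put rough numbers on. A back-of-the-envelope sketch, assuming a dedicated 10 Gbps link running flat out, which is already generous:

```go
package main

import "fmt"

func main() {
	const (
		petabyte    = 1e15 * 8 // bits in one petabyte (decimal bytes times 8)
		linkBitsSec = 10e9     // a dedicated 10 Gbps link, fully utilized (assumption)
	)
	dataBits := 10 * petabyte // ten petabytes, as in the example above
	seconds := dataBits / linkBitsSec
	fmt.Printf("~%.0f days of continuous transfer\n", seconds/86400)
}
```

That works out to roughly three months of saturating the link before egress fees even enter the picture, which is why trucks full of disks remain a serious option.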
It's now—whatever we started with has now become reality; the timing is perfect for us.Corey: One of the challenges I've always had with the idea of building an application with the idea of running it anywhere is you can make explicit technology choices around that, and for example, object store is a great example because most places you go now will or can have an object store available for your use. But there seem to be implementation details that get lost. And for example, even load balancers wind up being implemented in different ways with different scaling times and whatnot in various environments. And past a certain point, it's okay, we're just going to have to run it ourselves on top of HAProxy or Nginx, or something like it, running in containers themselves; you're reinventing the wheel. Where is that boundary between, we're going to build this in a way that we can run anywhere and the reality that I keep running into, which is we tried to do that but we implicitly, without realizing it, built in a lot of assumptions that everything would look just like this environment that we started off in.AB: The good part is that if you look at the S3 API, every request has the site name, the endpoint, bucket name, the path, and the object name. Every request is completely self-contained. It's literally an HTTP call away. And this means that whether your application is running on Android, iOS, inside a browser, JavaScript engine, anywhere across the world, they don't really care whether the bucket is served from EU or us-east or us-west. It doesn't matter at all, so it actually allows you, by API, to build a globally unified data infrastructure, some buckets here, some buckets there.That's actually not the problem. The problem comes when you have multiple clouds. Different teams, like, part of M&A, the part—like they—even if you don't do M&A, different teams, no two data engineers would agree on the same software stack. Then they will all end up with different cloud players and some are still running on old legacy environments.When you combine them, the problem is, like, let's take just the cloud, right? How do I even apply a policy, that access control policy, how do I establish unified identity? Because I want to know this application is the only one who is allowed to access this bucket. Can I have that same policy on Google Cloud or Azure, even though they are different teams? Like if that employee, that project, or that admin, if he or she leaves the job, how do I make sure that that's all protected?You want unified identity, you want unified access control policies. Where is the encryption key store? And then the load balancer itself, the load, its—load balancer is not the problem. But then unless you adopt S3 API as your standard, the definition of what a bucket is is different from Microsoft to Google to Amazon.Corey: Yeah, the idea of the PUTs and retrieving of actual data is one thing, but then you have: how do you manage the control plane layer of the object store and how do you rationalize that? What are the naming conventions? How do you address it? I even ran into something similar somewhat recently when I was doing an experiment with one of the Amazon Snowball edge devices to move some data into S3 on a lark. 
And the thing shows up and presents itself on the local network as an S3 endpoint, but none of their tooling can accept a different endpoint built into the configuration files; you have to explicitly use it as an environment variable or as a parameter on every invocation of something that talks to it, which is incredibly annoying.I would give a lot just to be able to say, oh, when you're talking in this profile, that's always going to be your S3 endpoint. Go. But no, of course not. Because that would make it easier to use something that wasn't them, so why would they ever be incentivized to bake that in?AB: Yeah. Snowball is an important element to move data, right? That's the UPS and FedEx way of moving data, but what I find customers doing is they actually use the tools that we built for MinIO because the Snowball appliance also looks like an S3 API-compatible object store. And in fact, like, I've been told that, like, when you want to ship multiple Snowball appliances, they actually put MinIO on them to make it look like one unit because MinIO can erasure-code objects across multiple Snowball appliances. And the MC tool, unlike AWS CLI, which is really meant for developers, like low-level calls, MC gives you unique [scoring 00:21:08] tools, like ls, cp, rsync-like tools, and it's easy to move and copy and migrate data. Actually, that's how people deal with it.Corey: Oh, God. I hadn't even considered the problem of having a fleet of Snowball edges here that you're trying to do a mass data migration on, which is basically how you move petabyte-scale data, is a whole bunch of parallelism. But having to figure that out on a case-by-case basis would be nightmarish. That's right, there is no good way to wind up doing that natively.AB: Yeah. In fact, Western Digital and a few other players, too, now Western Digital created a Snowball-like appliance and they put MinIO on it. And they are actually working with some system integrators to help customers move lots of data. But Snowball-like functionality is important and more and more customers need it.Corey: This episode is sponsored in part by Honeycomb. I'm not going to dance around the problem. Your. Engineers. Are. Burned. Out. They're tired from pagers waking them up at 2 am for something that could have waited until after their morning coffee. Ring Ring, Who's There? It's Nagios, the original call of duty! They're fed up with relying on two or three different “monitoring tools” that still require them to manually trudge through logs to decipher what might be wrong. Simply put, there's a better way. Observability tools like Honeycomb (and very little else because they do admittedly set the bar) show you the patterns and outliers of how users experience your code in complex and unpredictable environments so you can spend less time firefighting and more time innovating. It's great for your business, great for your engineers, and, most importantly, great for your customers. Try FREE today at honeycomb.io/screaminginthecloud. That's honeycomb.io/screaminginthecloud.Corey: Increasingly, it felt like, back in the on-prem days, that you'd have a file server somewhere that was either a SAN or it was going to be a NAS. The question was only whether it presented it to various things as a volume or as a file share. And then in cloud, the default storage mechanism, unquestionably, was object store. And now we're starting to see it come back again. 
So, it started to increasingly feel, in a lot of ways, like Cloud is no longer so much a place that is somewhere else, but instead much more of an operating model for how you wind up addressing things.I'm wondering when the generation of prosumer networking equipment, for example, is going to say, “Oh, and send these logs over to what object store?” Because right now, it's still write a file and SFTP it somewhere else, at least the good ones; some of the crap ones still want old unencrypted FTP, which is neither here nor there. But I feel like it's coming back around again. Like, when do even home users wind up, instead of “where do you save this file to,” having the cloud abstraction, which hopefully, you'll never have to deal with an S3-style endpoint, but that can underpin an awful lot of things. It feels like it's coming back and that cloud is the de facto way of thinking about things. Is that what you're seeing? Does that align with your belief on this?AB: I actually, fundamentally believe in the long run, right, applications will go SaaS, right? Like, if you remember the days that you used to install QuickBooks and ACT and stuff, like, on your data center, you used to run your own Exchange servers, like, those days are gone. I think these applications will become SaaS. But then the infrastructure building blocks for these SaaS, whether they are cloud or their own colo, I think that in the long run, it will be multi-cloud and colo all combined and all of them will look alike.But what I find from the customer's journey, the Old World and the New World is incompatible. When they shifted from bare metal to virtualization, they didn't have to rewrite their application. But this time, you have—it is a tectonic shift. Every single application, you have to rewrite. If you retrofit your application into the cloud, bad idea, right? It's going to cost you more and I would rather not do it.Even though cloud players are trying to make, like, the file and block, like, file system services [unintelligible 00:24:01] and stuff, they make it available ten times more expensive than object, but it's just to [integrate 00:24:07] some legacy applications, but it's still a bad idea to just move legacy applications there. But what I'm finding is that the cost, if you still run your infrastructure with an enterprise IT mindset, you're out of luck. It's going to be super expensive and you're going to be left out of modern infrastructure, because of the scale, it has to be treated as code. You have to run infrastructure with software engineers. And this cultural shift has to happen.And that's why cloud, in the long run, everyone will look like AWS and we always said that and it's now becoming true. Like, Kubernetes and MinIO basically are leveling the ground everywhere. It's giving ECS and S3-like infrastructure inside AWS or outside AWS, everywhere. But what I find is that the challenging part is the cultural mindset. If they still have the old cultural mindset and if they want to adopt cloud, it's not going to work.You have to change the DNA, the culture, the mindset, everything. The best way to do it is to go cloud-first. Adopt it, modernize your application, learn how to run and manage infrastructure, then ask the economics question, the unit economics. Then you will find the answers yourself.
And, well, “we should go and refactor this because, I don't know, a couple of folks on a podcast said we should” isn't the most compelling business case for doing a lot of it. It feels like these things sort of sit there until there is more upside than just cost-cutting to changing the way these things are built and run. That's the reason that people have been talking about getting off of the mainframe since the '90s in some companies, and the mainframe is very much still there. It is so ingrained in the way that they do business, they have to rethink a lot of the architectural things that have sprung up around it.I'm not trying to shame anyone for the [laugh] state that their environment is in. I've never yet met a company that was super proud of its internal infrastructure. Everyone's always apologizing because it's a fire. But they think someone else has figured this out somewhere and it all runs perfectly. I don't think it exists.AB: What I am finding is that if you are running it enterprise IT style, you are the one telling the application developers, here you go, you have this many VMs and then you have, like, a VMware license and, like, JBoss, like WebLogic, and like a SQL Server license, now you go build your application, you won't be able to do it. Because application developers talk about Kafka and Redis and like Kubernetes, they don't speak the same language. And that's when these developers go to the cloud and then finish their application, take it live from zero lines of code before IT can procure infrastructure and provision it to these guys. The change that has to happen is, how can you give what the developers want? Now that reverse journey is also starting. In the long run, everything will look alike, but what I'm finding is if you're running enterprise IT infrastructure, traditional infrastructure, they are ashamed of talking about it.But then you go to the cloud and then at scale, some parts of it, you want to move for—now you really know why you want to move. For economic reasons, like, particularly the data-intensive workloads become very expensive. And at that point, they go to a colo, but leave the applications on the cloud. So, it's the multi-cloud model, I think, is inevitable. The expensive pieces that where you can—if you are looking at yourself as a hyperscaler and if your data is growing, if your business focus is data-centric business, parts of the data and data analytics, ML workloads will actually go out, if you're looking at unit economics. If all you are focused on is productivity, stick to the cloud and you're still better off.Corey: I think that's a divide that gets lost sometimes. When people say, “Oh, we're going to move to the cloud to save money,” it's, “No, you're not.” At a five-year time horizon, I would be astonished if that juice were worth the squeeze in almost any scenario. The reason you go, therefore, is for a capability story when it's right for you.That also means that steady-state workloads that are well understood can often be run more economically in a place that is not the cloud. Everyone thinks for some reason that I tend to be “it's cloud or it's trash.” No, I'm a big fan of doing things that are sensible and cloud is not the right answer for every workload under the sun. Conversely, when someone says, “Oh, I'm building a new e-commerce store,” or whatnot, “And I've decided cloud is not for me.” It's, “Ehh, you sure about that?”That sounds like you are smack-dab in the middle of the cloud use case. 
But all these things wind up acting as constraints and strategic objectives. And technology and single-vendor answers are rarely going to be a panacea the way that their sales teams say that they will.AB: Yeah. And I find, like, organizations that have SREs, DevOps, and software engineers running the infrastructure, they actually are ready to go multi-cloud or go to colo because they have the—exactly know. They have the containers and Kubernetes microservices expertise. If you are still on a traditional SAN, NAS, and VM architecture, go to cloud, rewrite your application.Corey: I think there's a misunderstanding in the ecosystem around what cloud repatriation actually looks like. Everyone claims it doesn't exist because there's basically no companies out there worth mentioning that are, “Yep, we've decided the cloud is terrible, we're taking everything out and we are going to data centers. The end.” In practice, it's individual workloads that do not make sense in the cloud. Sometimes just the back-of-the-envelope analysis means it's not going to work out, other times during proof of concepts, and other times, as things have hit a certain point of scale, wherein an individual workload being pulled back makes an awful lot of sense. But everything else is probably going to stay in the cloud and these companies don't want to wind up antagonizing the cloud providers by talking about it in public. But that model is very real.AB: Absolutely. Actually, what we are finding with the application side, like, parts of their overall ecosystem, right, within the company, they run on the cloud, but the data side, some of the examples, like, these are in the range of 100 to 500 petabytes. The 500-petabyte customer actually started at 500 petabytes and their plan is to go to exascale. And they are actually doing repatriation because for them, their customers, it's consumer-facing and it's extremely price sensitive, but when you're consumer-facing, every dollar you spend counts. And if you don't do it at scale, it matters a lot, right? It will kill the business.Particularly the last two years, the cost part became an important element in their infrastructure, they knew exactly what they want. They are thinking of themselves as hyperscalers. They get commodity—the same hardware, right, just a server with a bunch of [unintelligible 00:30:35] and network and put it on colo or even lease these boxes, they know what their demand is. Even at ten petabytes, the economics start impacting. If you're processing it, the data side, we have several customers now moving to colo from cloud and this is the range we are talking about.They don't talk about it publicly because sometimes, like, you don't want to be anti-cloud, but I think for them, they're also not anti-cloud. They don't want to leave the cloud. Completely leaving the cloud, it's a different story. That's not the case. Applications stay there. Data lakes, data infrastructure, object store, particularly if it goes to a colo.Now, your applications from all the clouds can access this centralized—centralized, meaning that one object store you run on colo and the colos themselves have worldwide data centers. So, you can keep the data infrastructure in a colo, but applications can run on any cloud, some of them, surprisingly, have a global customer base. And not all of them are cloud. Sometimes like some applications itself, if you ask what type of edge devices they are running, edge data centers, they said, it's a mix of everything. 
What really matters is not the infrastructure. Infrastructure in the end is CPU, network, and drive. It's a commodity. It's really the software stack, you want to make sure that it's containerized and easy to deploy, roll out updates, you have to learn the Facebook-Google style of running a SaaS business. That change is coming.Corey: It's a matter of time and it's a matter of inevitability. Now, nothing ever stays the same. Everything always inherently changes in the full sweep of things, but I'm pretty happy with where I see the industry going these days. I want to start seeing a little bit less centralization around one or two big companies, but I am confident that we're starting to see an awareness of doing these things for the right reason more broadly permeating.AB: Right. Like, the competition is always great for customers. They get to benefit from it. So, the decentralization is a path to bringing—like, commoditizing the infrastructure. I think the bigger picture for me, what I'm particularly happy about is, for a long time we carried industry baggage in the infrastructure space.If no one wants to change, no one wants to rewrite applications. As part of the equation, we carried the, like, POSIX baggage, like SAN and NAS. You can't even do [unintelligible 00:32:48] as a Service, NFS as a Service. It's too much of a baggage. All of that is getting thrown out. Like, the cloud players helped the customers start with a clean slate. I think to me, that's the biggest advantage. And now that we have a clean slate, we can go on a whole new evolution of the stack, keeping it simpler and everyone can benefit from this change.Corey: Before we wind up calling this an episode, I do have one last question for you. As I mentioned at the start, you're very much open-source, as in legitimate open-source, which means that anyone who wants to can grab an implementation and start running it. How do you, I guess, make peace with the fact that the majority of your user base is not paying you? And I guess how do you get people to decide, “You know what? We like the cut of his jib. Let's give him some money.”AB: Mm-hm. Yeah, if I looked at it that way, right, I have both the [unintelligible 00:33:38], right, on the open-source side as well as the business. But I don't see them as conflicting. If I run as a charity, right, like, I take donations. If you love the product, here is the donation box, then that doesn't work at all, right?I shouldn't take investor money and I shouldn't have a team because I have a job to pay their bills, too. But I actually find open-source to be incredibly beneficial. For me, it's about delivering value to the customer. If you pay me $5, I ought to make you feel $50 worth of value. The same software you would buy from a proprietary vendor, why would—if I'm a customer, same software equal in functionality, if it's proprietary, I would actually prefer open-source and pay even more.But why are, really, customers paying me now and what's our view on open-source? I'm actually the free software guy. Free software and open-source are actually not exactly equal, right? We are the purest of the open-source community and we have strong views on what open-source means, right? That's why we call it free software. And free here means freedom, right? Free does not mean gratis, that is, free of cost. It's actually about freedom and I deeply care about it.For me it's a philosophy and it's a way of life. 
That's why I don't believe in open core and other models like that—holding back, giving crippleware is not open-source, right? I give you some freedom but not all, right, like, it breaks the spirit. So, MinIO is a hundred percent open-source, but it's open-source for the open-source community. We did not take some community-developed code and then add commercial support on top.We built the product, we believed in open-source, we still believe and we will always believe. Because of that, we open-sourced our work. And it's open-source for the open-source community. And as you build applications that—like the AGPL license on the derivative works, they have to be compatible with AGPL because we are the creator. If you cannot open-source your application and derivative works, you can buy a commercial license from us. We are the creator, we can give you a dual license. That's how the business model works.That way, the open-source community completely benefits. And it's about the software freedom. There are customers, for them, open-source is a good thing and they want to pay because it's open-source. There are some customers that want to pay because they can't open-source their application and derivative works, so they pay. It's a happy medium; that way I actually find open-source to be incredibly beneficial.Open-source gave us that trust, like, more than adoption rate. It's not like free to download and use. More than that, the customers that matter, the community that matters because they can see the code and they can see everything we did, it's not because I said so, marketing and sales, you believe them, whatever they say. You download the product, experience it and fall in love with it, and then when it becomes an important part of your business, that's when they engage with us because they talk about license compatibility and data loss or a data breach, all that becomes important. Open-source isn't—I don't see that as conflicting for business. It actually is incredibly helpful. And customers see that value in the end.Corey: I really want to thank you for being so generous with your time. If people want to learn more, where should they go?AB: I was on Twitter and now I think I'm spending more time on, maybe, LinkedIn. I think if they—they can send me a request and then we can chat. And I'm always, like, spending time with other entrepreneurs, architects, and engineers, sharing what I learned, what I know, and learning from them. There is also a [community open channel 00:37:04]. And just send me a mail at ab@min.io and I'm always interested in talking to our user base.Corey: And we will, of course, put links to that in the [show notes 00:37:12]. Thank you so much for your time. I appreciate it.AB: It's wonderful to be here.Corey: AB Periasamy, CEO and co-founder of MinIO. I'm Cloud Economist Corey Quinn and this has been a promoted guest episode of Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice that presumably will also include an angry, loud comment that we can access from anywhere because of shared APIs.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. 
We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
Chris' sticky upgrade situation, and we chat with the developer behind an impressive mesh VPN with new tricks. Special Guest: Ryan Huber.