With NixOS 25.05 around the corner, we sit down with a release manager to unpack what's new, what's changing, and what's finally getting easier. Spoiler: it's not just the tooling.

Sponsored By:
Tailscale: Tailscale is programmable networking software that is private and secure by default - get it free on up to 100 devices!
1Password Extended Access Management: 1Password Extended Access Management is a device trust solution for companies with Okta, and it ensures that if a device isn't trusted and secure, it can't log into your cloud apps.

Support LINUX Unplugged
Links:
A comprehensive introduction to APIs, detailing how they facilitate data exchanges between different systems using protocols like HTTP and formats like JSON. It covers the principles of RESTful API design and discusses how APIs can be developed and refined iteratively, including the challenges and strategies involved in managing different versions of an API.

Agenda:
• What is an API?
• Data exchange between systems – frontend and backend
• What are HTTP and JSON?
• RESTful API basics
• API versioning options and challenges

Maciej Gowin: Developer, practitioner, easily accessible knowledge-sharing enthusiast. Head of Java Development at Ryanair Labs. Former programming teacher and PhD student at Jagiellonian University. Currently a lecturer and co-creator of the postgraduate program "Programming in Java" at WSB Merito University in Wrocław and Warsaw, Poland. Promoter of the idea of learning based on embedding subjects in the context of their actual application.

Find us here: www.agileleanireland.org
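The core ideas from the agenda above — HTTP verbs, JSON payloads, and versioned resource paths — can be sketched in a few lines of Python. This is a minimal, self-contained illustration, not code from the talk; the `/v1/users` resource and its fields are hypothetical.

```python
import json

# Minimal in-memory sketch of RESTful routing: HTTP verbs map to CRUD
# operations on a hypothetical "users" collection, and the "/v1" path
# prefix shows one common API-versioning option.
users = {}
next_id = 1

def handle(method, path, body=None):
    """Dispatch (method, path) pairs the way a RESTful backend would."""
    global next_id
    parts = path.strip("/").split("/")          # e.g. ["v1", "users", "1"]
    if parts[:2] != ["v1", "users"]:
        return 404, {"error": "not found"}
    if method == "POST" and len(parts) == 2:    # create: POST /v1/users
        user = {"id": next_id, **json.loads(body)}
        users[next_id] = user
        next_id += 1
        return 201, user
    if method == "GET" and len(parts) == 3:     # read: GET /v1/users/{id}
        user = users.get(int(parts[2]))
        return (200, user) if user else (404, {"error": "not found"})
    return 405, {"error": "method not allowed"}

status, body = handle("POST", "/v1/users", json.dumps({"name": "Ada"}))
print(status, body["id"])   # 201 1
status, body = handle("GET", "/v1/users/1")
print(body["name"])         # Ada
```

A breaking change to the payload shape would go behind a new `/v2` prefix, leaving `/v1` consumers untouched — one of the versioning strategies the session discusses.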
Steve McDougall (aka JustSteveKing) is known as the "API guy" on Twitter. In today's episode we start with the question, "what if the best option is just a single-page app with a good, RESTful API?"

Links:
HAL - Hypertext Application Language
JSON:API Spec
Laravel Sanctum
API Versioning Blog Post
Steve on Twitter (follow for updates on upcoming course)
Topics covered in this episode:
PostgREST
How Python Asyncio Works: Recreating it from Scratch
Bend
The Smartest Way to Learn Python Regular Expressions
Extras
Joke

Watch on YouTube

About the show
Sponsored by Mailtrap: pythonbytes.fm/mailtrap
Connect with the hosts
Michael: @mkennedy@fosstodon.org
Brian: @brianokken@fosstodon.org
Show: @pythonbytes@fosstodon.org
Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Tuesdays at 10am PT. Older video versions available there too. Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list; we'll never share it.

Michael #1: PostgREST
PostgREST serves a fully RESTful API from any existing PostgreSQL database. It provides a cleaner, more standards-compliant, faster API than you are likely to write from scratch.
Speedy: First, the server is written in Haskell using the Warp HTTP server (aka a compiled language with lightweight threads). Next, it delegates as much calculation as possible to the database. Finally, it uses the database efficiently with the Hasql library.
PostgREST handles authentication (via JSON Web Tokens) and delegates authorization to the role information defined in the database. This ensures there is a single declarative source of truth for security.

Brian #2: How Python Asyncio Works: Recreating it from Scratch
Jacob Padilla
Cool tutorial walking through how async works, including: Generators, The Event Loop, Sleeping, Yield to Await, and Await with AsyncIO.
Another great async resource: Build Your Own Async, a David Beazley talk from 2019.

Michael #3: Bend
A massively parallel, high-level programming language. With Bend you can write parallel code for multi-core CPUs/GPUs without being a C/CUDA expert with 10 years of experience. It feels just like Python! No need to deal with the complexity of concurrent programming: locks, mutexes, atomics... any work that can be done in parallel will be done in parallel.

Brian #4: The Smartest Way to Learn Python Regular Expressions
Christian Mayer, Zohaib Riaz, and Lukas Rieger
Self-published ebook on Python regex that combines book-form readings, links to video course sections, and puzzle challenges to complete online. It's a paid resource, but the minimum price is free.

Extras
Brian: Replay - a graphic memoir by Prince of Persia creator Jordan Mechner, recounting his own family story of war, exile, and new beginnings.
Michael: PyCon 2026

Joke: Shell Scripts
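In the spirit of the "Recreating it from Scratch" tutorial mentioned above, the heart of asyncio can be rebuilt from plain generators, where `yield` plays the role of `await`. This is a toy sketch of the idea, not Jacob Padilla's actual code.

```python
import time
from collections import deque

# A toy cooperative scheduler built on plain generators: each `yield`
# hands control back to the loop so other tasks can run.
def sleep(duration):
    """Non-blocking sleep: yields until the deadline passes."""
    wake_at = time.monotonic() + duration
    while time.monotonic() < wake_at:
        yield                      # not ready yet; let other tasks run

def run(*tasks):
    """The event loop: round-robin until every generator finishes."""
    ready = deque(tasks)
    results = []
    while ready:
        task = ready.popleft()
        try:
            next(task)             # advance the generator one step
            ready.append(task)     # still running: requeue it
        except StopIteration as stop:
            results.append(stop.value)  # `return x` in a generator
    return results

def worker(name, delay):
    yield from sleep(delay)
    return name

# The shorter sleep finishes first, even though it was scheduled second.
print(run(worker("a", 0.02), worker("b", 0.01)))  # ['b', 'a']
```

Real asyncio replaces the busy-wait in `sleep` with a selector and a heap of timers, but the control flow — generators parked and resumed by a loop — is the same.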
This week on the show, we discuss a variety of web and web-adjacent topics. Adam is feeling dubious about recommending a career in web development to his children (is it still worth it?). Ben legitimately wants to understand why we - the web development community - don't approach testing with a YAGNI (You Ain't Gonna Need It) mindset. And Tim wants to consider different ways to handle errors in a RESTful API.

Follow the show and be sure to join the discussion on Discord! Our website is workingcode.dev and we're @WorkingCodePod on Twitter and Instagram. New episodes drop weekly on Wednesday. And, if you're feeling the love, support us on Patreon. With audio editing and engineering by ZCross Media.
Software Engineering Radio - The Podcast for Professional Software Developers
Bastian Gruber, author of the book Rust Web Development, speaks with host Philip Winston about creating server-based web applications with Rust. They explore Rust language features, tooling, and web libraries such as Warp (a web framework) and Tokio (an async runtime). From there, they examine the steps to build a simple web server and a RESTful API, as well as modules, logging and tracing, and other aspects of web development with Rust.
The digital banking industry has evolved significantly over the past few years. A major part of this transformation has involved banking institutions and fintechs working together to provide services to their customers. This collaboration has been possible through APIs and governing standards. However, adoption of APIs with a proper understanding of compliance can be a major challenge in this multi-stakeholder setup. In this episode of the All About APIs podcast, we speak with David Biesack, Chief API Officer at Apiture, a company focused on enabling financial institutions to give the best digital banking experience to their customers. As Chief API Officer, David is responsible for the digital enablement of financial institutions through APIs, working with different stakeholders to drive API education internally and enable adoption externally.

David Biesack is Chief API Officer at Apiture and has served as Lead API Architect since the company's founding in 2017. David applies his passion for building great APIs and a compelling developer experience to the design and architecture of Apiture's open banking API ecosystem. David has over thirty years of industry experience creating reusable, robust, and elegant enterprise software systems. He strives to make software better, stronger, faster at solving complex customer problems. Prior to joining Apiture, David worked at SAS for 28 years, most recently leading SAS' API Center of Excellence and SAS' adoption of RESTful API design. David has a BS in Computer Science from Purdue University and an MS in CS from Rensselaer Polytechnic Institute. David is a contributor to Financial Data Exchange. He has presented at API and finance conferences and has authored numerous API blogs and articles. David's secret ambition is to narrate BBC nature documentaries.

Visit David on social media: LinkedIn, Mastodon, GitHub, Twitter
About Allen
Allen is a cloud architect at Tyler Technologies. He helps modernize government software by creating secure, highly scalable, and fault-tolerant serverless applications. Allen publishes content regularly about serverless concepts and design on his blog - Ready, Set Cloud!

Links Referenced:
Ready, Set, Cloud blog: https://readysetcloud.io
Tyler Technologies: https://www.tylertech.com/
Twitter: https://twitter.com/allenheltondev
LinkedIn: https://www.linkedin.com/in/allenheltondev/

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: This episode is sponsored in part by our friends at AWS AppConfig. Engineers love to solve, and occasionally create, problems. But not when it's an on-call fire-drill at 4 in the morning. Software problems should drive innovation and collaboration, NOT stress, and sleeplessness, and threats of violence. That's why so many developers are realizing the value of AWS AppConfig Feature Flags. Feature Flags let developers push code to production, but hide that feature from customers so that the developers can release their feature when it's ready. This practice allows for safe, fast, and convenient software development. You can seamlessly incorporate AppConfig Feature Flags into your AWS or cloud environment and ship your features with excitement, not trepidation and fear. To get started, go to snark.cloud/appconfig. That's snark.cloud/appconfig.

Corey: I come bearing ill tidings. Developers are responsible for more than ever these days. Not just the code that they write, but also the containers and the cloud infrastructure that their apps run on.
Because serverless means it's still somebody's problem. And a big part of that responsibility is app security from code to cloud. And that's where our friend Snyk comes in. Snyk is a frictionless security platform that meets developers where they are - finding and fixing vulnerabilities right from the CLI, IDEs, repos, and pipelines. Snyk integrates seamlessly with AWS offerings like CodePipeline, EKS, ECR, and more! As well as things you're actually likely to be using. Deploy on AWS, secure with Snyk. Learn more at snyk.co/scream. That's S-N-Y-K.co/scream.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Every once in a while I wind up stumbling into corners of the internet that I previously had not traveled. Somewhat recently, I wound up having that delightful experience again by discovering readysetcloud.io, which has a whole series of, I guess some people might call it thought leadership, I'm going to call it instead how I view it, which is just amazing opinion pieces on the context of serverless, mixed with APIs, mixed with some prognostications about the future. Allen Helton by day is a cloud architect at Tyler Technologies, but that's not how I encountered you. First off, Allen, thank you for joining me.

Allen: Thank you, Corey. Happy to be here.

Corey: I was originally pointed towards your work by folks in the AWS Community Builder program, of which we both participate from time to time, and it's one of those, "Oh, wow, this is amazing. I really wish I'd discovered some of this sooner." And every time I look through your back catalog, and I click on a new post, I see things that either I really agree with, or I can't stand this opinion, I want to fight about it. But more often than not, it's one of those recurring moments that I love: "Damn, I wish I had written something like this." So first, you're absolutely killing it on the content front.

Allen: Thank you, Corey, I appreciate that.
The content that I make is really about the stuff that I'm doing at work. It's stuff that I'm passionate about, stuff that I spend a decent amount of time on, and really the most important thing about it for me is it's stuff that I'm learning and forming opinions on and want to share with others.

Corey: I have to say, when I saw that you were—oh, you're Tyler Technologies, which sounds for all the world like, oh, it's a relatively small consultancy run by some guy presumably named Tyler, and you know, it's a petite team of maybe 20, 30 people on the outside. Yeah, then I realized, wait a minute, that's not entirely true. For example, for starters, you're publicly traded. And okay, that does change things a little bit. First off, who are you people? Secondly, what do you do? And third, why have I never heard of you folks until now?

Allen: Tyler is the largest company that focuses completely on the public sector. We have divisions and products for pretty much everything that you can imagine that's in the public sector. We have software for schools, software for tax and appraisal, we have software for police officers, for courts; everything you can think of that runs the government can be, and a lot of times is, run on Tyler software. We've been around for decades building our expertise in the domain, and the reason you probably haven't heard about us is because you might not have ever been in trouble with the law before. If you [laugh] if you have been—

Corey: No, no, I learned very early on in the course of my life—which will come as a surprise to absolutely no one who spent more than 30 seconds with me—that I have remarkably little filter and if ten kids were the ones doing something wrong, I'm the one that gets caught. So, I spent a lot of time in the principal's office, so this taught me to keep my nose clean. I'm one of those squeaky-clean types, just because I was always terrified of getting punished because I knew I would get caught.
I'm not saying this is the right way to go through life necessarily, but it did have the side benefit of, no, I don't really engage with law enforcement going throughout the course of my life.

Allen: That's good. That's good. But one exposure that a lot of people get to Tyler is if you look at the bottom of your next traffic ticket, it'll probably say Tyler Technologies on the bottom there.

Corey: Oh, so you're really popular in certain circles, I'd imagine?

Allen: Super popular. Yes, yes. And of course, you get all the benefits of writing that code that says 'if defendant equals Allen Helton then return.'

Corey: I like that. You get to have the exception cases built in that no one's ever going to wind up looking into.

Allen: That's right. Yes.

Corey: The idea of what you're doing makes an awful lot of sense. There's a tremendous need for a wide variety of technical assistance in the public sector. What surprises me, although I guess it probably shouldn't, is how much of your content is aimed at serverless technologies and API design, which to my way of thinking, isn't really something that public sector has done a lot with. Clearly I'm wrong.

Allen: Historically, you're not wrong. There's an old saying that government tends to run about ten years behind on technology. Not just technology; all over the board it runs about ten years behind. And until recently, that's really been true. There was a case last year, a situation last year, where one of the state governments—I don't remember which one it was—but they were having a crisis because they couldn't find any COBOL developers to come in and maintain their software that runs the state. And it's COBOL; you're not going to find a whole lot of people that have that skill. A lot of those people are retiring out. And what's happening is that we're getting new people sitting in positions of power in government that want innovation.
They know about the cloud and they want to be able to integrate with systems quickly and easily, with little to no onboarding time. You know, there are people in power that have grown up with technology and understand that, well, with everything else, I can be up and running in five or ten minutes. I cannot do this with the software I'm consuming now.

Corey: My opinion on it is admittedly conflicted because on the one hand, yeah, I don't think that governments should be running on COBOL software that runs on mainframes that haven't been supported in 25 years. Conversely, I also don't necessarily want them being run like a seed series startup, where, "Well, I wrote this code last night, and it's awesome, so off I go to production with it." Because I can decide not to do business anymore with Twitter for Pets, and I could go on to something else, like PetFlicks, or whatever it is I choose to use. I can't easily opt out of my government. The decisions that they make stick and that is going to have a meaningful impact on my life and everyone else's life who is subject to their jurisdiction. So, I guess I don't really know where I believe the proper, I guess, pace of technological adoption should be for governments. Curious to get your thoughts on this.

Allen: Well, you certainly don't want anything that's bleeding edge. That's one of the things that we kind of draw fine lines around. Because when we're dealing with government software, we're dealing with, usually, critically sensitive information. It's not medical records, but it's your criminal record, and it's things like your social security number, it's things that you can't have leaking out under any circumstances. So, the things that we're building on are things that have proven out to be secure and have best practices around security, uptime, reliability, and in a lot of cases as well, maintainability.
You know, if there are issues, then let's try to get those turned around as quickly as we can because we don't want to have any sort of downtime from the software side versus the software vendor side.

Corey: I want to pivot a little bit to some of the content you've put out because an awful lot of it seems to be, I think I'll call it, variations on a theme. For example, I just read some recent titles, and to illustrate my point: "Going API First: Your First 30 Days," "Solutions Architect Tips: How to Design Applications for Growth," "3 Things to Know Before Building a Multi-Tenant Serverless App." And the common thread that I see running through all of these things is that these are things that you tend to have extraordinarily strong and vocal opinions about only after dismissing all of them the first time and slapping something together, and then sort of being forced to live with the consequences of the choices that you've made, in some cases choices you didn't realize you were making at the time. Are you one of those folks that has the wisdom to see what's coming down the road, or did you do what the rest of us do and basically learn all this stuff by getting it hilariously wrong and having to careen into rebound situations as a result?

Allen: [laugh]. I love that question. I would like to say now, I feel like I have the vision to see something like that coming. Historically, no, not at all. Let me talk a little bit about how I got to where I am because that will shed a lot of context on that question. A few years ago, I was put into a position at Tyler that said, "Hey, go figure out this cloud thing." Let's figure out what we need to do to move into the cloud safely, securely, quickly, all that rigmarole. And so, I did. I got to hand-select a team of engineers from people that I worked with at Tyler over the past few years, and we were basically given free rein to learn.
We were an R&D team, a hundred percent R&D, for about a year's worth of time, where we were learning about cloud concepts and theory and building little proofs of concept: CI/CD, serverless, APIs, multi-tenancy, a whole bunch of different stuff. NoSQL was another one of the things that we had to learn. And after that year of R&D, we were told, "Okay, now go do something with that. Go build this application." And we did, building on our cursory theory knowledge. And we got pretty close to go-live, and then the business says, "What do you do in this scenario? What do you do in that scenario? What do you do here?"

Corey: "I update my resume and go work somewhere else. Where's the hard part here?"

Allen: [laugh].

Corey: Turns out, that's not a convincing answer.

Allen: Right. So, we moved quickly. And then I wouldn't say we backpedaled, but we hardened for a long time prior to the go-live, with the lessons that we'd learned, with the eyes of Tyler, the mature enterprise company, saying, "These are the things that you have to make sure that you take into consideration in an actual production application." One of the things that I always pushed—I was a manager for a few years of all these cloud teams—I always push: do it; do it right; do it better. Right? It's kind of like crawl, walk, run. And if you follow my writing from the beginning, just looking at the titles and reading them, kind of like what you were doing, Corey, you'll see that very much. You'll see how I talk about CI/CD, you'll see how I talk about authorization, you'll see how I talk about multi-tenancy. And I kind of go in waves where maybe a year passes and you see my content revisit some of the topics that I've done in the past. And they're like, "No, no, no, don't do what I said before.
It's not right."

Corey: The problem, when I'm writing all of these things that I do—for example, my entire newsletter publication pipeline is built on a giant morass of Lambda functions and API Gateways. It's microservices-driven—kind of—and each microservice is built, almost always, with a different framework. Lately, all the new stuff is CDK. I started off with the Serverless Framework. There are a few other things here and there. And it's like going architecting back in time as I have to make updates to these things from time to time. And the problem with having done all that myself is that I already know the answer to, "What fool designed this?" It's, well, you're basically watching me learn what I was doing, bit by bit. I'm starting to believe that the right answer, on some level, is to build an inherent shelf-life into some of these things. Great, in five years, you're going to come back and re-architect it now that you know how this stuff actually works, rather than patching together 15 blog posts by different authors, not all of whom are talking about the same thing, and hoping for the best.

Allen: Yep. That's one of the things that I really like about serverless. I view that as a giant pro of doing serverless: when we revisit with the lessons learned, we don't have to refactor everything at once like if it was just a big, you know, MVC controller out there in the sky. We can refactor one Lambda function at a time if now we're using a new version of the AWS SDK, or we've learned about a new best practice that needs to go in place.
It's a, "While you're in there, tidy up, please," kind of deal.

Corey: I know that the DynamoDB fanatics will absolutely murder me over this one, but one of the reasons that I have multiple Dynamo tables that contain, effectively, variations on the exact same data, is because I want to have the dependency between the two different microservices be the API, not, "Oh, and under the hood, it's expecting this exact same data structure all the time." It just felt like that was the wrong direction to go in. That is the justification I use for myself for why I run multiple DynamoDB tables that [laugh] have the same content. Where do you fall on the idea of data store separation?

Allen: I'm a big single-table design person myself. I really like the idea of being able to store everything in the same table and being able to create queries that can return me multiple different types of entity with one lookup. Now, that being said, one of the issues that we ran into, or one of the ambiguous areas when we were getting started with serverless, was: what does single-table design mean when you're talking about microservices? We were wondering, does single table mean one DynamoDB table for an entire application that's composed of 15 microservices? Or is it one table per microservice? And that was ultimately what we ended up going with: a table per microservice. Even if multiple microservices are pushed into the same AWS account, we're still building that logical construct of a microservice and one table that houses similar entities in the same domain.

Corey: So, something I wish that every service team at AWS would do as a part of their design is draw the architecture of an application that you're planning to build. Great, now assume that every single resource on that architecture diagram lives in its own distinct AWS account, because somewhere, in some customer, there's going to be an account boundary at every interconnection point along the way.
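The single-table idea Allen describes above — multiple entity types behind one partition key, distinguished by sort-key prefixes — can be sketched without touching AWS at all. The key shapes below are purely illustrative, not anyone's actual schema.

```python
# Single-table design sketch: one partition key per aggregate
# ("CUSTOMER#42"), with sort-key prefixes distinguishing entity types.
# One range query on the partition returns the customer profile AND
# its orders in a single lookup.
table = [
    {"PK": "CUSTOMER#42", "SK": "PROFILE",        "name": "Ada"},
    {"PK": "CUSTOMER#42", "SK": "ORDER#2024-001", "total": 30},
    {"PK": "CUSTOMER#42", "SK": "ORDER#2024-002", "total": 55},
    {"PK": "CUSTOMER#7",  "SK": "PROFILE",        "name": "Grace"},
]

def query(pk, sk_prefix=""):
    """Mimic a DynamoDB Query: exact match on PK, begins_with on SK."""
    return [item for item in table
            if item["PK"] == pk and item["SK"].startswith(sk_prefix)]

everything = query("CUSTOMER#42")            # profile + orders, one lookup
orders = query("CUSTOMER#42", "ORDER#")      # just the orders
print(len(everything), len(orders))          # 3 2
```

In the table-per-microservice layout Allen lands on, each service would own one such table, holding only the entity types in its own domain.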
And so many services don't do that, where it's, "Oh, that thing and the other thing have to be in the same account." So, people have to write their own integration shims, and it makes doing the right thing of putting different services into distinct bounded AWS accounts for security or compliance reasons way harder than I feel like it needs to be.

Allen: [laugh]. Totally agree with you on that one. That's one of the things that I feel like I'm still learning about: the account-level isolation. I'm still kind of early on, personally, with my opinions on how we're structuring things right now, but I'm very much of the opinion that deploying multiple things into the same account is going to make it too easy to do something that you shouldn't. And I just try not to inherently trust people, in the sense that, "Oh, this is easy. I'm just going to cross that boundary real quick."

Corey: For me, it's also come down to security risk exposure. Like my lasttweetinaws.com Twitter shitposting thread client lives in a distinct AWS account that is separate from the AWS account that has all of our client billing data that lives within it. The idea being that if you find a way to compromise my public-facing Twitter client, great, the blast radius should be constrained to, "Yay, now you can, I don't know, spin up some cryptocurrency mining in my AWS account and I get to look like a fool when I beg AWS for forgiveness." But that should be the end of it. It shouldn't be a security incident because I should not have the credit card numbers living right next to the funny internet web thing. That sort of flies in the face of the original guidance that AWS gave at launch. Right around the 2008 era, best practices were one customer, one AWS account. And then by 2012, they had changed their perspective, but once you've made a decision to build multiple services in a single account, unwinding and unpacking that becomes an incredibly burdensome thing.
It's about the equivalent of doing a cloud migration, in some ways.

Allen: We went through that. We started off building one application with the intent that it was going to be a siloed application, a one-off, essentially. And about a year into it, it's one of those moments of, "Oh, no. What we're building is not actually a one-off. It's a piece to a much larger puzzle." And we had a whole bunch of—unfortunately—tightly coupled things in there that were assuming that resources were going to be in the same AWS account. So, we ended up—how long—I think we took probably two months, which in the grand scheme of things isn't that long, but two months, kind of unwinding the pieces and decoupling what was possible at the time into multiple AWS accounts, kind of segmented by domain, essentially. But that's hard. AWS puts it, you know, as those one-way door decisions. I think this one was a two-way door, but it locked and you could kind of jimmy the lock on the way back out.

Corey: And you could buzz someone from the lobby to let you back in. Yeah, the biggest problem is not necessarily the one-way door decisions. It's the one-way door decisions that you don't realize you're passing through at the time that you make them. Which, of course, brings us to a topic near and dear to your heart—and I only recently started to have opinions on this myself—and that is the proper design of APIs, which I'm sure will incense absolutely no one who's listening to this. Like, my opinions on APIs start with, well, probably REST is the right answer in this day and age. I had people, like, "Well, I don't know, GraphQL is pretty awesome." Like, "Oh, I'm thinking SOAP," and people look at me like I'm a monster from the Black Lagoon of centuries past in XML-land.
So, my particular brand of strangeness aside, what do you see that people are doing in the world of API design that is the, I guess, most common or easy-to-make mistake that you really wish they would stop doing?

Allen: If I could boil it down to one word: fundamentalism. Let me unpack that for you.

Corey: Oh, please, absolutely want to get a definition on that one.

Allen: [laugh]. I approach API design from a developer experience point of view: how easy is it for both internal and external integrators to consume and satisfy the business processes that they want to accomplish? And a lot of times, REST guidelines, you know, it's all about entity basis, you know, drill into the appropriate entities and name your endpoints with nouns, not verbs. I'm actually very much onto that one. But something that you could easily do: let's say you have a business process that, given a fundamentally correct RESTful API design, takes ten API calls to satisfy. You could, in theory, boil that down to maybe three well-designed endpoints that aren't, quote-unquote, "RESTful," that make that developer experience significantly easier. And if you were a fundamentalist, that option is not even on the table, but thinking about it pragmatically from a developer experience point of view, that might be the better call. So, that's one of the things that, I know, feels like a hot take. Every time I say it, I get a little bit of flack for it, but don't be a fundamentalist when it comes to your API designs.
Do something that makes it easier while staying in the guidelines to do what you want.

Corey: For me, the problem that I've kept smacking into with API design—and honestly, let me be very clear on this, my first real exposure to API design rather than API consumption (which of course I complain about constantly, especially in the context of AWS's inconsistent APIs between services)—was when I was building something out, and I was reading the documentation for API Gateway: oh, this is how you wind up having this stage linked to this thing, and here's the endpoint. And okay, great, so I would just build out a structure or a schema that has the positional parameters I want to use as variables in my function. And that's awesome. And then I realized, "Oh, I might want to call this a different way. Aw, crap." And sometimes it's easy; you just add a different endpoint. Other times, I have to significantly rethink things. And I can't shake the feeling that this is an entire discipline that exists that I just haven't had a whole lot of exposure to previously.

Allen: Yeah, I believe that. One of the things that you could tie a metaphor to, for what I'm saying and kind of what you're saying, is AWS SAM, the Serverless Application Model. All it does is basically macro-expand into CloudFormation resources; it's just a transform from a template into CloudFormation. CDK does the same thing. But what the developers of SAM have done is they've recognized these business processes that people do regularly, and they've made these incredibly easy ways to satisfy those business processes and tie them all together, right? If I want to have a Lambda function that is backed behind an endpoint, an API endpoint, I just have to add four or five lines of YAML or JSON that says, "This is the event trigger, here's the route, here's the API." And then it goes and does four, five, six different things. Now, there are some engineers that don't like that because sometimes that feels like magic.
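For reference, the SAM shorthand Allen describes — a few lines of YAML declaring the function, the route, and the API together, which the transform expands into the underlying API Gateway and IAM resources — looks roughly like this (resource names, handler, and runtime are illustrative):

```yaml
# Illustrative AWS SAM fragment. The Events block is the "four or five
# lines" that expand into the API Gateway route, the integration, and
# the permissions for invoking the function.
Resources:
  GetOrderFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.12
      Events:
        GetOrder:
          Type: Api
          Properties:
            Path: /orders/{id}
            Method: get
```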
Sometimes a little bit of magic is okay.

Corey: This episode is sponsored in part by our friends at Sysdig. Sysdig secures your cloud from source to run. They believe, as do I, that DevOps and security are inextricably linked. If you wanna learn more about how they view this, check out their blog; it's definitely worth the read. To learn more about how they are absolutely getting it right from where I sit, visit Sysdig.com and tell them that I sent you. That's S-Y-S-D-I-G.com. And my thanks to them for their continued support of this ridiculous nonsense.

Corey: I feel like one of the benefits I've had with the vast majority of APIs that I've built is that because this is all relatively small-scale stuff for what amounts to basically shitposting for the sake of entertainment, I'm really the only consumer of an awful lot of these things. So, I get frustrated when I have to backtrack and make changes and teach other microservices to talk to this thing that has now changed. And it's frustrating, but I have the capacity to do that. It's just work for a period of time. I feel like that equation completely shifts when you have published this and it is now out in the world, and it's not just users, but in many cases paying customers, where you can't really make those changes without significant notice, and every time you do, you're creating work for those customers, so you have to be a lot more judicious about it.

Allen: Oh, yeah. There is a whole lot of governance and practice that goes into production-level APIs that people integrate with. You know, they say once you push something out the door into production, you're going to support it forever. I don't disagree with that. That seems like something that a lot of people don't understand. And that's one of the reasons why I push API-first development so hard in all the content that I write: because you need to be intentional about what you're letting out the door.
You need to go in and work, not just with the developers, but your product people and your analysts to say, what does this absolutely need to do, and what does it need to do in the future? And you take those things, and you work with analysts who want specifics, you work with the engineers to actually build it out. And you're very intentional about what goes out the door that first time because once it goes out with a mistake, you're either going to version it immediately or you're going to make some people very unhappy when you make a breaking change to something that they immediately started consuming.Corey: It absolutely feels like that's one of those things that AWS gets astonishingly right. I mean, I had the privilege of interviewing, at the time, Jeff Barr and then Ariel Kelman, who was their head of marketing, to basically debunk a bunch of old myths. And one thing that they started talking about extensively was the idea that an API is fundamentally a promise to your customers. And when you make a promise, you'd better damn well intend on keeping it. It's why API deprecations from AWS are effectively unique whenever something happens.It's the, this is a singular moment in time when they turn off a service or degrade old functionality in favor of new. They can add to it, they can launch a V2 of something and then start to wean people off by calling the old one classic or whatnot, but if I built something on AWS in 2008 and I wound up sleeping until today, and go and try and do the exact same thing and deploy it now, it will almost certainly work exactly as it did back then. Sure, reliability is going to be a lot better and there's a crap ton of features and whatnot that I'm not taking advantage of, but that fundamental ability to do that is awesome. Conversely, it feels like Google Cloud likes to change around a lot of their API stories almost constantly. 
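The "version it immediately" option Allen describes usually means version-prefixed routes, so a breaking change ships as /v2 while /v1 keeps its original contract. A minimal sketch, with hypothetical handlers and payloads, not anything from the episode:

```python
# Illustrative version-prefixed routing: /v1 keeps its original response
# shape for existing consumers while /v2 introduces a breaking change.

def get_user_v1(user_id):
    # Original contract, kept stable forever for existing consumers.
    return {"id": user_id, "name": "Ada Lovelace"}

def get_user_v2(user_id):
    # Breaking change: the name field is split into parts.
    return {"id": user_id, "first_name": "Ada", "last_name": "Lovelace"}

ROUTES = {
    ("GET", "/v1/users"): get_user_v1,
    ("GET", "/v2/users"): get_user_v2,
}

def dispatch(method, path, user_id):
    """Look up the handler for a method/path pair and invoke it."""
    return ROUTES[(method, path)](user_id)

print(dispatch("GET", "/v1/users", 7))
print(dispatch("GET", "/v2/users", 7))
```

The point of the sketch is the governance cost: once both routes exist, every change has to be checked against both contracts, which is why being intentional before the first release matters.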
And it's unplanned work that frustrates the heck out of me when I'm trying to build something stable and lasting on top of it.Allen: I think it goes to show the maturity of these companies as API companies versus just vendors. It's one of the things that I think AWS does [laugh]—Corey: You see a similar dichotomy with Microsoft and Apple. Microsoft's new versions of Windows generally still have functionalities in them to support stuff that was written in the '90s for a few use cases, whereas Apple's like, “Oh, your computer's more than 18 months old? Have you tried throwing it away and buying a new one? And oh, it's a new version of Mac OS, so yeah, maybe the last one will get security updates for a year, and then get with the times.” And I can't shake the feeling that the correct answer is in some way, both of those, depending upon who your customer is and what it is you're trying to achieve.If Microsoft adopted the Apple approach, their customers would mutiny, and rightfully so; the expectation has been set for decades that isn't what happens. Conversely, if Apple decided now we're going to support this version of Mac OS in perpetuity, I think a lot of their application developers wouldn't quite know what to make of that.Allen: Yeah. I think it also comes from a standpoint of you better make it worth their while if you're going to move their cheese. I'm not a Mac user myself, but what I hear from Mac users—and this could be rose-colored glasses—is that their stuff works phenomenally well. You know, when a new thing comes out—Corey: Until it doesn't, absolutely. It's—whenever I say things like that on this show, I get letters. And it's, “Oh, yeah, really? They'll come up with something that is a colossal pain in the ass on Mac.” Like, yeah, “Try building a system-wide mute key.”It's yeah, that's just a hotkey away on Windows, and here in Mac land it's, “But it makes such beautiful sounds.
Why would you want them to be quiet?” And it's, yeah, it becomes this back-and-forth dichotomy there. And you can even extend it to iPhones as well and the Android ecosystem, where it's, oh, you're going to support the last couple of versions of iOS.Well, as a developer, I don't want to do that. And Apple's position is, “Okay, great.” Almost half of the mobile users on the planet will be upgrading because they're in the ecosystem. Do you want to be able to sell things to those people or not? And they're at a point of scale where they get to dictate those terms.On some level, there are benefits to it and in others, it is intensely frustrating. I don't know what the right answer is on the level of permanence on that level of platform. I only have slightly better ideas around the position of APIs. I will say that when AWS deprecates something, they reach out individually to affected customers, on some level, and invariably, when they say, “This is going to be deprecated as of August 31,” or whenever it is, yeah, it is going to slip at least twice in almost every case, just because they're not going to turn off a service that is revenue-bearing or critical-load-bearing for customers without massive amounts of notice and outreach, and in some cases, according to rumor, having engineers reach out to help restructure things so it's not as big of a burden on customers. That's a level of customer focus that I don't think most other companies are capable of matching.Allen: I think that comes with the size and the history of Amazon. And one of the things that they're doing right now—we've used Amazon Cloud Cams for years in my house. We use them as baby monitors. And they—Corey: Yeah, I saw this. I did something very similar with Nest. They didn't have the Cloud Cam at the right time that I was looking at it. And they just announced that they're going to be deprecating them. They're withdrawing them from sale. They're not going to support them anymore.
Which, oh, that's very Amazon—we're not offering this anymore. But you tell the story; what are they offering existing customers?Allen: Yeah, so I'm slightly upset about it because I like my Cloud Cams and I don't want to have to take them off the wall or wherever they are to replace them with something else. But what they're doing is, you know, they gave me—or they gave all the customers—about eight months' head start. I think they're going to be taking them offline around Thanksgiving this year, in mid-November. And what they said is, as compensation for you, we're going to send you a Blink Cam—a Blink Mini—for every Cloud Cam that you have in use, and then we are going to gift you a year's subscription to the Pro plan for Blink.Corey: That's very reasonable for things that were bought years ago. Meanwhile, I feel like, not to be unkind or uncharitable here, but I use Nest Cams. And that's a Google product. I half expect that if they ever get deprecated, I'll find out because Google just turns it off in the middle of the night—Allen: [laugh].Corey: —and I wake up and have to read a blog post somewhere that they put an update on Nest Cams, the same way they killed Google Reader once upon a time. That's slightly unfair, but the fact that that joke even lands does say a lot about Google's reputation in this space.Allen: For sure.Corey: One last topic I want to talk with you about before we call it a show is that at the time of this recording, you recently had a blog post titled, “What does the Future Hold for Serverless?” Summarize that for me. Where do you see this serverless movement—if you'll forgive the term—going?Allen: So, I'm going to start at the end. I'm going to work back a little bit on what needs to happen for us to get there. I have a feeling that in the future—I'm going to be vague about how far in the future this is—we'll finally have a satisfied promise of all you're going to write in the future is business logic. And what does that mean?
I think what can end up happening, given the right focus, the right companies, the right feedback, at the right time, is we can write code as developers and have that get pushed up into the cloud.And a phrase that I know Jeremy Daly likes to say is ‘infrastructure from code,' where it provisions resources in the cloud for you based on your use case. I've developed an application, and when it gets pushed up to the cloud at the time of deployment, resource allocation is optimized. Over time, what will happen—with my future vision—is when you get production traffic going through, maybe it's spiky, maybe it's consistently at a scale that outgrows the resources that it originally provisioned. We can have monitoring tools that analyze that and pick that out, find the anomalies, find the standard patterns, and automatically adjust the infrastructure that it deployed for you, optimizing it based on your actual production traffic. That is something that you can't do on an initial deployment right now. You can put what looks best on paper, but once you actually get traffic through your application, you realize that, you know, what was on paper might not be correct.Corey: You ever noticed that whiteboard diagrams never show the reality, and they're always aspirational, and they miss certain parts? And I used to think that this was the symptom I had from working at small, scrappy companies because you know what, those big tech companies, everything they build is amazing and awesome. I know it because I've seen their conference talks. But I've been a consultant long enough now, and for a number of those companies, to realize that nope, everyone's infrastructure is basically a trash fire at any given point in time. And it works almost in spite of itself, rather than because of it.There is no golden path where everything is shiny, new and beautiful. And that, honestly, I got to say, it was really [laugh] depressing when I first discovered it.
Like, oh, God, even these really smart people who are so intelligent they have to have extra brain packs bolted to their chests don't have the magic answer to all of this. The rest of us are just screwed, then. But we find ways to make it work.Allen: Yep. There's a quote, I wish I remembered who said it, but it was a military quote where, “No battle plan survives impact with the enemy—first contact with the enemy.” It's kind of that way with infrastructure diagrams. We can draw it out however we want and then you turn it on in production. It's like, “Oh, no. That's not right.”Corey: I want to mix the metaphors there and say, yeah, no architecture survives your first fight with a customer. Like, “Great, I don't think that's quite what they're trying to say.” It's like, “What, you don't attack your customers? Pfft, what's your customer service line look like?” Yeah, it's… I think you're onto something.I think that inherently everything beyond the V1 design of almost anything is an emergent property where this is what we learned about it by running it and putting traffic through it and finding these problems, and here's how it wound up evolving to account for that.Allen: I agree. I don't have anything to add on that.Corey: [laugh]. Fair enough. I really want to thank you for taking so much time out of your day to talk about how you view these things. If people want to learn more, where is the best place to find you?Allen: Twitter is probably the best place to find me: @AllenHeltonDev. I have that username on all the major social platforms, so if you want to find me on LinkedIn, same thing: AllenHeltonDev. My blog is always open as well, if you have any feedback you'd like to give there: readysetcloud.io.Corey: And we will, of course, put links to that in the show notes. Thanks again for spending so much time talking to me. I really appreciate it.Allen: Yeah, this was fun. This was a lot of fun. I love talking shop.Corey: It shows. 
And it's nice to talk about things I don't spend enough time thinking about. Allen Helton, cloud architect at Tyler Technologies. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment that I will reject because it was not written in valid XML.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.
In this episode we are going to look at REST.We will be discussing REST and RESTful API, RESTful Implementation, URI, URN, and URL, Anatomy of a RESTful Request, and finally RESTful API Applications.Thank you so much for listening to this episode of my series on Enterprise Networking, Security, and Automation for the Cisco Certified Network Associate (CCNA).Once again, I'm Kevin and this is KevTechify. Let's get this adventure started.All my details and contact information can be found on my website, https://KevTechify.com-------------------------------------------------------Cisco Certified Network Associate (CCNA)Enterprise Networking, Security, and Automation v3 (ENSA)Episode 14 - Network AutomationPart D - RESTPodcast Number: 72-------------------------------------------------------Equipment I like.Home Lab ►► https://kit.co/KevTechify/home-labNetworking Tools ►► https://kit.co/KevTechify/networking-toolsStudio Equipment ►► https://kit.co/KevTechify/studio-equipment
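The "Anatomy of a RESTful Request" topic in the episode breaks a request URL into its parts. As a quick companion sketch (the URL here is a made-up example, not from the episode), Python's standard library can pull out exactly those pieces:

```python
# Split a RESTful request URL into its anatomical parts using the
# standard library. The URL is an illustrative placeholder.
from urllib.parse import urlparse, parse_qs

url = "https://api.example.com:443/v1/devices?limit=10&sort=name"
parts = urlparse(url)

print(parts.scheme)           # protocol: https
print(parts.hostname)         # authority/host: api.example.com
print(parts.port)             # port: 443
print(parts.path)             # resource path: /v1/devices
print(parse_qs(parts.query))  # query parameters as a dict of lists
```

The scheme plus authority identify where the request goes, the path names the resource, and the query string carries the parameters — the same decomposition the URI/URN/URL discussion covers.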
About AB: AB Periasamy is the co-founder and CEO of MinIO, an open source provider of high-performance object storage software. In addition to this role, AB is an active investor in and advisor to a wide range of technology companies, from H2O.ai and Manetu, where he serves on the board, to advisor or investor roles with Humio, Isovalent, Starburst, Yugabyte, Tetrate, Postman, Storj, Procurify, and Helpshift. Successful exits include Gitter.im (GitLab), Treasure Data (ARM), and Fastor (SMART).AB co-founded Gluster in 2005 to commoditize scalable storage systems. As CTO, he was the primary architect and strategist for the development of the Gluster file system, a pioneer in software-defined storage. After the company was acquired by Red Hat in 2011, AB joined Red Hat's Office of the CTO. Prior to Gluster, AB was CTO of California Digital Corporation, where his work led to scaling commodity cluster computing to supercomputing-class performance. His work there resulted in the development of Lawrence Livermore National Laboratory's “Thunder” supercomputer, which, at the time, was the second fastest in the world. AB holds a Computer Science Engineering degree from Annamalai University, Tamil Nadu, India.AB is one of the leading proponents and thinkers on the subject of open source software - articulating the difference between the philosophy and the business model. An active contributor to a number of open source projects, he is a board member of India's Free Software Foundation.Links: MinIO: https://min.io/ Twitter: https://twitter.com/abperiasamy MinIO Slack channel: https://minio.slack.com/join/shared_invite/zt-11qsphhj7-HpmNOaIh14LHGrmndrhocA LinkedIn: https://www.linkedin.com/in/abperiasamy/ TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn.
This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by our friends at Sysdig. Sysdig is the solution for securing DevOps. They have a blog post that went up recently about how an insecure AWS Lambda function could be used as a pivot point to get access into your environment. They've also gone in-depth on a bunch of other approaches to how DevOps and security are inextricably linked. To learn more, visit sysdig.com and tell them I sent you. That's S-Y-S-D-I-G dot com. My thanks to them for their continued support of this ridiculous nonsense.Corey: This episode is sponsored in part by our friends at Rising Cloud, which I hadn't heard of before, but they're doing something vaguely interesting here. They are using AI, which is usually where my eyes glaze over and I lose attention, but they're using it to help developers be more efficient by reducing repetitive tasks. So, the idea being that you can run stateless things without having to worry about scaling, placement, et cetera, and the rest. They claim significant cost savings, and they're able to wind up taking what you're running as it is, in AWS, with no changes, and run it inside of their data centers that span multiple regions. I'm somewhat skeptical, but their customers seem to really like them, so that's one of those areas where I really have a hard time being too snarky about it because when you solve a customer's problem, and they get out there in public and say, “We're solving a problem,” it's very hard to snark about that. Multus Medical, Construx.ai, and Stax have seen significant results by using them, and it's worth exploring. So, if you're looking for a smarter, faster, cheaper alternative to EC2, Lambda, or Batch, consider checking them out.
Visit risingcloud.com/benefits. That's risingcloud.com/benefits, and be sure to tell them that I sent you because watching people wince when you mention my name is one of the guilty pleasures of listening to this podcast.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. I'm joined this week by someone who's doing something a bit off the beaten path when we talk about cloud. I've often said that S3 is sort of a modern wonder of the world. It was the first AWS service brought into general availability. Today's promoted guest is the co-founder and CEO of MinIO, Anand Babu Periasamy, or AB as he often goes, depending upon who's talking to him. Thank you so much for taking the time to speak with me today.AB: It's wonderful to be here, Corey. Thank you for having me.Corey: So, I want to start with the obvious thing, where you take a look at what is the cloud and you can talk about AWS's ridiculous high-level managed services, like Amazon Chime. Great, we all see how that plays out. And those are the higher-level offerings, ideally aimed at problems customers have, but then they also have the baseline building block services, and it's hard to think of a more baseline building block than an object store. That's something every cloud provider has, regardless of how many scare quotes there are around the word cloud; everyone offers the object store. And your solution is to look at this and say, “Ah, that's a market ripe for disruption. We're going to build, through an open-source community, software that emulates an object store.” I would be sitting here, more or less poking fun at the idea, except for the fact that you're a billion-dollar company now.AB: Yeah.Corey: How did you get here?AB: So, when we started, right, we did not actually think about cloud that way, right? “Cloud, it's a hot trend, and let's go disrupt it.
It will lead to a lot of opportunity.” Certainly, it's true, it led to the M&A, right, but that's not how we looked at it, right? It's a bad idea to build startups for M&A.When we looked at the problem, when we got back into this—my previous background, some may not know, is actually a distributed file system background in the open-source space.Corey: Yeah, you were one of the co-founders of Gluster—AB: Yeah.Corey: —which I have only begrudgingly forgiven you. But please continue.AB: [laugh]. And back then we got the idea right, but the timing was wrong. And—while the data was beginning to grow at a crazy rate, at the end of the day, GlusterFS still had to look like an FS, it had to look like a file system like NetApp or EMC, and it was hugely limiting what we could do with it. The biggest problem for me was legacy systems. If I have to build a modern system that is compatible with a legacy architecture, I cannot innovate.And that is where, when Amazon introduced S3—back then, like, when S3 came, cloud was not big at all, right? When I look at it, the most important message of the cloud was Amazon basically threw away everything that is legacy. It's not [iSCSI 00:03:21] as a Service; it's not even FTP as a Service, right? They came up with a simple, RESTful API to store your blobs, whether it's a JavaScript, Android, iOS, or [AAML 00:03:30] application, or even a Snowflake-type application.Corey: Oh, we spent ten years rewriting our apps to speak object store, and then they released EFS, which is NFS in the cloud. It's—AB: Yeah.Corey: —I didn't realize I could have just been stubborn and waited, and the whole problem would solve itself. But here we are. You're quite right.AB: Yeah. And even EFS and EBS are more so legacy stuff can come in, buy some time, but that's not how you should stay on AWS, right? When Amazon did that, for me, that was the opportunity.
I saw that… while the world is going to continue to produce lots and lots of data, if I built a brand around that, I'm not going to go wrong.The problem is data at scale. And what do I do there? The opportunity I saw was, Amazon solved one of the largest problems for a long time. All the legacy systems, legacy protocols—they convinced the industry to throw them away and then start all over from scratch with the new API. While it's not compatible, it's not standard, it is ridiculously simple compared to anything else.No fstabs, no [unintelligible 00:04:27], no [root 00:04:28], nothing, right? Being able to access it from any application anywhere was a big deal. When I saw that, I was like, “Thank you, Amazon.” And I also knew Amazon would convince the industry that rewriting their application is going to be better and faster and cheaper than retrofitting legacy applications.Corey: I wonder how much that's retconned because talking to some of the people involved in the early days, they were not at all convinced they [laugh] would be able to convince the industry to do this.AB: Actually, if you talk to the analysts and reporters, the IDCs and Gartners of the world, to the enterprise IT, the VMware community, they would say, “Hell no.” But if you talk to the actual application developers, data infrastructure, data architects, the actual consumers of data, for them, it was so obvious. They actually did not know how to write an fstab. iSCSI and NFS, you can't even access them across the internet, and the modern applications, they ran across the globe, in JavaScript, and all kinds of apps on the device. From [Snap 00:05:21] to Snowflake, everything today is built on object store. It was more natural for the applications team, but not for the infrastructure team. So, who you asked mattered.But nevertheless, Amazon convinced the rest of the world, and our bet was that if this is going to be the future, then this is also our opportunity. S3 is going to be limited because it only runs inside AWS.
The bulk of the world's data is produced everywhere, and only a tiny fraction will go to AWS. And where will the rest of the data go? Not to SAN, NAS, HDFS, or another blob store, Azure Blob, or GCS; it's not going to be fragmented. And if we built a better object store—lightweight, faster, simpler, but fully compatible with the S3 API—we could sweep and consolidate the market. And that's what happened.Corey: And there is a lot of validity to that. We take a look across the industry, when we look at various standards—I mean, one of the big problems with multi-cloud in many respects is the APIs are not quite similar enough. And worse, the failure patterns are very different: I don't just need to know how the load balancer works, I need to know how it breaks so I can detect and plan for that. And then you've got the whole identity problem as well, where you're trying to manage across different frames of reference as you go between providers, and it leads to a bit of a mess. What is it that makes MinIO not just something that has endured since it was created, but something that has clearly been thriving?AB: The real reason, actually, is not the multi-cloud compatibility, all that, right? Like, while today it is a big deal for the users because the deployments have grown into 10-plus petabytes, and now the infrastructure team is taking it over and consolidating across the enterprise, so now they are asking, for storing the encryption keys, which key management server should I talk to? Look at AWS, Google, or Azure: everyone has their own proprietary API. Outside, they have [YAML2 00:07:18], HashiCorp Vault, and, like, there is no standard here. It is supposed to be a [KMIP 00:07:23] standard, but in reality, it is not. Even across different versions of Vault, there are incompatibilities for us.That is where—like, from key management server to identity management server, right, everything that you speak to, how do you talk to the different ecosystems?
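As a concrete illustration of what that compatibility buys (this example is not from the transcript, and it is deliberately simplified — real S3 clients also sign requests): to an S3-style client, the object addressing scheme is identical across backends, so only the endpoint changes. Endpoints, bucket, and key below are made up.

```python
# Minimal sketch: the S3 API addresses every object with the same
# path-style URL shape, so switching from AWS S3 to a self-hosted
# S3-compatible store is a matter of changing the endpoint.

def object_url(endpoint: str, bucket: str, key: str) -> str:
    """Build a path-style URL for one object."""
    return f"{endpoint}/{bucket}/{key}"

# Same bucket and key, two different backends:
print(object_url("https://s3.amazonaws.com", "logs", "2022/04/app.log"))
print(object_url("http://minio.internal:9000", "logs", "2022/04/app.log"))
```

That is the consolidation bet in miniature: applications written against the S3 API don't care where the store physically runs.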
There, MinIO actually provides connectors; having the large ecosystem support and large community, we are able to address all that. Once you bring MinIO into your application stack, like you would bring Elasticsearch or MongoDB or anything else as a container, your application stack is just a Kubernetes YAML file, and you can roll it out on any cloud; it becomes easier for them, they're able to go to any cloud they want. But the real reason why it succeeded was not that. They actually wrote their applications as containers on Minikube, then they would push it to a CI/CD environment.They never wrote code on EC2 or ECS writing objects on S3, and they don't like the idea of [PaaS 00:08:15], where someone is telling you just—like you saw Google App Engine never took off, right? They liked the idea of, here are my building blocks, and then I would stitch them together and build my application. We were part of their application development since the early days, and when the application matured, it was hard to remove. It is very much like Microsoft Windows when it grew: even though the desktop was Microsoft Windows and the server was NetWare, NetWare lost the game, right?We got the ecosystem, and it was actually developer productivity and convenience that really helped. Such is the simplicity of MinIO that, today, they are arguing that deploying MinIO inside AWS is easier through their YAML and containers than going to the AWS Console and figuring out how to do it.Corey: As you take a look at how customers are adopting this, it's clear that there is some shift in this because I could see the story for something like MinIO making an awful lot of sense in a data center environment because otherwise, it's, “Great. I need to make this app work with my SAN as well as an object store.” And that's sort of a non-starter for obvious reasons.
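For concreteness, here is a hypothetical minimal manifest of the kind being described — a single-replica MinIO for development. The `server /data` command is MinIO's documented serving mode, but the sizing, missing credentials, and missing persistent volume are deliberate simplifications, not a production layout:

```yaml
# Illustrative dev-only sketch of "your application stack is just a
# Kubernetes YAML file": one MinIO container, no persistent storage.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio
spec:
  replicas: 1
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
        - name: minio
          image: minio/minio
          args: ["server", "/data"]   # serve objects from /data
          ports:
            - containerPort: 9000     # S3-compatible API port
```

Because this is plain Kubernetes YAML, the same manifest rolls out unchanged on any conformant cluster, which is the portability argument being made above.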
But now you're available through cloud marketplaces directly.AB: Yeah.Corey: How are you seeing adoption patterns and interactions from customers changing as the industry continues to evolve?AB: Yeah, actually, that is how my thinking was when I started. If you are inside AWS, I would myself tell them, why don't you use AWS S3? And it made a lot of sense if it's on a colo or your own infrastructure; then there is an object store. It even made a lot of sense if you are deploying on Google Cloud, Azure, Alibaba Cloud, or Oracle Cloud because you wanted an S3-compatible object store. Inside AWS, why would you do it, if there is AWS S3?Nowadays, I hear funny arguments, too. They're like, “Oh, I didn't know that I could use S3. Is S3 MinIO compatible?” Because they will be like, “It came along with GitLab or GitHub Enterprise, as part of the application stack.” They didn't even know that they could actually switch it over.And otherwise, most of the time, they developed it on MinIO, and now they are too lazy to switch over. That also happens. But the real reason why it became serious for me—I had ignored the public cloud commercialization; I encouraged the community adoption. And it grew to more than a million instances across the cloud, small and large, but when they started talking about paying us serious dollars, then I took it seriously. And then when I started asking them why they would do it, I got to know the real reason: they want to be detached from the cloud infrastructure provider.They want to look at cloud as CPU, network, and drives as a service. Running their own enterprise IT was more expensive than adopting public cloud; it was productivity for them, and reducing the infrastructure and people cost was a lot. It made economic sense.Corey: Oh, people always cost more than the infrastructure itself does.AB: Exactly right. 70, 80%, like, goes into people, right? And enterprise IT is too slow.
They cannot innovate fast, and all of those problems. But what I found was for us, while we actually build the community and customers, if you're on AWS, if you're running MinIO on EBS, EBS is three times more expensive than S3.Corey: Or a single copy of it, too, where if you're trying to go multi-AZ and you have the replication traffic, and not to mention you have to over-provision it, which is a bit of a different story as well. So, like, it winds up being something on the order of 30 times more expensive, in many cases, to do it right. So, I'm looking at this going, the economics of running this purely by itself in AWS don't make sense to me—long experience teaches me the next question of, “What am I missing?” Not, “That's ridiculous and you're doing it wrong.” There's clearly something I'm not getting. What am I missing?AB: I was telling them until we made some changes, right—because we saw a couple of things happen. I was initially like, [unintelligible 00:12:00] does not make 30 copies. It makes, like, 1.4x, 1.6x.But still, the underlying block storage is not only three times more expensive than S3, it's also slow. It's a network storage. Trying to put an object store on top of it, another, like, software-defined SAN, like EBS made no sense to me. Smaller deployments, it's okay, but you should never scale that on EBS. So, it did not make economic sense. I would never take it seriously because it would never help them grow to scale.But what changed in recent times? Amazon saw that this was not only a problem for MinIO-type players. Every database out there today, every modern database, even the message queues like Kafka, they all have gone scale-out. And they all depend on local block store and putting a scale-out distributed database, data processing engines on top of EBS would not scale. And Amazon introduced storage optimized instances. 
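To make the back-and-forth above concrete, a back-of-the-envelope sketch. The prices are normalized placeholders, not quoted rates; only the rough ratios come from the conversation (EBS roughly 3x S3, MinIO's roughly 1.4-1.6x erasure-coding overhead), and the replica and over-provisioning factors are assumptions added for illustration:

```python
# Hypothetical cost multiplication behind "something on the order of
# 30 times more expensive" for an object store on EBS. All numbers
# are illustrative ratios, not real prices.

S3_PER_GB = 1.0          # normalize S3 to 1 unit per GB-month
EBS_PER_GB = 3.0         # "EBS is three times more expensive than S3"

erasure_overhead = 1.5   # MinIO stores ~1.4x-1.6x the raw data
replicas = 3             # assumed multi-AZ copies of the deployment
overprovision = 2.0      # assumed volume headroom beyond actual data

effective = EBS_PER_GB * erasure_overhead * replicas * overprovision
print(f"Effective cost vs. S3: {effective / S3_PER_GB:.0f}x")
```

With these assumed multipliers the factors compound to roughly the order of magnitude Corey cites, which is why local-drive storage-optimized instances changed the economics.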
Essentially, that removed the need for the data infrastructure guy, data engineer, or application developer to ask IT, “I want a SuperMicro, or Dell server, or even virtual machines.” That's too slow, too inefficient.They can provision these storage machines on demand, and then I can do it through Kubernetes. These two changes—all the public cloud players have now adopted Kubernetes as the standard, and they have to stick to the Kubernetes API standard. If they are incompatible, they won't get adopted. And storage optimized, that is, local drives—these are machines, like, [I3 EN 00:13:23], like, 24 drives, they have SSDs, and fast network—like, 25-gigabit, 200-gigabit type network—the availability of these machines, like, what typically would run any database, HDFS cluster, MinIO, all of them—those machines are now available just like any other EC2 instance.They are efficient. You can actually put MinIO side by side with S3 and still be price competitive. And Amazon wants to—like, just like their retail marketplace, they want to compete and be open. They have enabled it. In that sense, Amazon is actually helping us. And it turned out that now I can help customers build multi-petabyte infrastructure on Amazon and still stay efficient, still stay price competitive.Corey: I would have said for a long time that if you were to ask me to build out the lingua franca of all the different cloud providers into a common API, the S3 API would be one of them. Now, you are building this out, multi-cloud, you're in all three of the major cloud marketplaces, and the way that you do that and do those deployments seems like it is the modern multi-cloud API of Kubernetes. When you first started building this, Kubernetes was very early on. What was the evolution of getting there? Or were you one of the first early-adoption customers in the Kubernetes space?AB: So, when we started, there was no Kubernetes. But we saw the problem very clearly.
And there were containers, and then came Docker Compose and Swarm. Then there was Mesos, Cloud Foundry, you name it, right? Like, there were many solutions, all the way up to even VMware trying to get into that space.And what did we do? Early on, I couldn't choose. I couldn't—it's not in our hands, right, who is going to be the winner, so we just simply embraced everybody. It was also tiring to implement native connectors to all of these different orchestrators. Pivotal Cloud Foundry alone had their own standard, the Open Service Broker, that's only popular inside their system. Go outside, elsewhere, everybody was incompatible.And outside that, even Chef, Ansible, Puppet scripts, too. We just simply embraced everybody until the dust settled down. When it settled down, clearly the declarative model of Kubernetes became easier. Also, the Kubernetes developers understood the community well. And coming from Borg, I think they understood the right architecture. And also written in Go, unlike Java, right?It actually matters; these minute details resonate with the infrastructure community. It took off, and then that helped us immensely. Now, it's not only that Kubernetes is popular, it has become the standard, from VMware to OpenShift to all the public cloud providers, GKS, AKS, EKS, whatever, right—GKE. All of them now are basically Kubernetes standard. It made not only our life easier, it meant every other [ISV 00:16:11], every other open-source project, everybody can now finally write one codebase that can be operated portably.It is a big shift. It is not because we chose; we just watched all this, we were riding along the way. And then because we resonated with the infrastructure community—modern infrastructure is dominated by open-source. 
We were also the leading open-source object store, and as the Kubernetes community adopted us, we were naturally embraced by the community.Corey: Back when AWS first launched with S3 as its first offering, there were a bunch of folks who were super excited, but object stores didn't make a lot of sense to them intrinsically, so they looked into this and went, “Ah, I can build a file system on top of S3.” And the reaction was, “Holy God, don't do that.” And the way that AWS decided to discourage that behavior was a per-request charge, which for most workloads is fine, whatever, but for some it causes a significant burden. With running something like MinIO in a self-hosted way, suddenly that cost doesn't exist in the same way. Does that open the door again to “so now I can use it as a file system,” in which case that just seems like using the local file system, only with extra steps?AB: Yeah.Corey: Do you see patterns that are emerging with customers' use of MinIO that you would not see with the quote-unquote, “Provider's” quote-unquote, “Native” object storage option, or do the patterns mostly look the same?AB: Yeah, if you took an application that ran on file and block and brought it over to object storage, that makes sense. But something that is competing with object store, or a layer below object store—at the end of the day the drives are block devices, you have a block interface, right—trying to bring SAN or NAS on top of object store is actually a step backwards. They completely missed the message Amazon was sending: if you bring a file system interface on top of object store, you've missed the point. You are now bringing back the legacy things that Amazon intentionally removed from the infrastructure. Trying to bring them on top doesn't make it any better. 
If you are arguing for compatibility with some legacy applications, sure, but writing a file system on top of object store will never be better than NetApp, EMC, like EMC Isilon, or anything else. Or even GlusterFS, right?But on the file system question, the community asks us, “Why don't you add an FS option and do a multi-protocol system?” I tell them that the whole point of S3 is to remove all those legacy APIs. If I added POSIX, then I'd be a mediocre object store and a terrible file system. I would never do that. But why not write a FUSE file system, right? Like, S3Fs is there.In fact, initially, for legacy compatibility, we wrote MinFS, and I had to hide it. We actually archived the repository because immediately people started using it. Even simple things: at the end of the day, can I use Unix [Coreutils 00:19:03] like [cp, ls 00:19:04], all these tools I'm familiar with? If it's not a file system, object storage tools like [s3cmd 00:19:08] or the AWS CLI feel like bloatware, and not really Unix-like.Then what I told them: “I'll give you a BusyBox-like single static binary, and it will give you all the Unix tools that work for the local filesystem as well as object store.” That's where the [MC tool 00:19:23] came from; it gives you all the Unix-like programmability, all the core tools, object-storage compatible, speaking native object store. But if I had to make object store look like a file system so Unix tools would run, it would not only be inefficient; Unix tools never scaled for this kind of capacity.So, it would be a bad idea to take a step backwards and bring legacy stuff back inside. For some very small cases, if there are simple POSIX calls, using [ObjectiveFs 00:19:49], S3Fs, and a few others for legacy compatibility reasons makes sense, but in general, I tell the community: don't bring file and block. 
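As an illustration of the point about Unix tools and object storage, here is a toy sketch of how familiar commands map onto flat object-store operations, in the spirit of the mc tool described above. The in-memory dict, key names, and function names are all made up for illustration; this is not how mc or MinIO is implemented.

```python
# Toy sketch: Unix-style commands (ls, cp, cat) mapped onto a flat
# object-store namespace. An in-memory dict stands in for a bucket;
# object stores have no real directories, only shared key prefixes.

store = {}  # key -> bytes

def put(key, data):  # roughly what `mc cp local remote/key` does
    store[key] = data

def cat(key):        # roughly `mc cat remote/key`
    return store[key]

def ls(prefix):      # roughly `mc ls remote/prefix`
    # "Directories" are just key prefixes, listed server-side.
    return sorted(k for k in store if k.startswith(prefix))

put("logs/2022/01/app.log", b"started")
put("logs/2022/02/app.log", b"stopped")
put("reports/q1.csv", b"a,b\n1,2")

print(ls("logs/2022/"))  # ['logs/2022/01/app.log', 'logs/2022/02/app.log']
print(cat("reports/q1.csv"))
```

The flat namespace is why a prefix listing is cheap while POSIX semantics (rename, append, locking) are not: there is no directory tree to walk, only keys to filter.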
If you want file and block, leave those on virtual machines, leave that infrastructure in a silo, and gradually phase them out.Corey: This episode is sponsored in part by our friends at Vultr. Spelled V-U-L-T-R because they're all about helping save money, including on things like, you know, vowels. So, what they do is they are a cloud provider that provides surprisingly high performance cloud compute at a price that—while, sure, they claim it's better than AWS pricing, and when they say that they mean it is less money; sure, I don't dispute that—what I find interesting is that it's predictable. They tell you in advance on a monthly basis what it's going to cost. They have a bunch of advanced networking features. They have nineteen global locations and scale things elastically. Not to be confused with openly, because apparently elastic and open can mean the same thing sometimes. They have had over a million users. Deployments take less than sixty seconds across twelve pre-selected operating systems. Or, if you're one of those nutters like me, you can bring your own ISO and install basically any operating system you want. Starting with pricing as low as $2.50 a month for Vultr cloud compute, they have plans for developers and businesses of all sizes, except maybe Amazon, who stubbornly insists on having something to scale all on their own. Try Vultr today for free by visiting: vultr.com/screaming, and you'll receive $100 in credit. That's v-u-l-t-r.com slash screaming.Corey: So, my big problem, when I look at what S3 has done, is in its name because, of course, naming is hard. It's “Simple Storage Service.” The problem I have is with the word simple, because over time, S3 has gotten more and more complex under the hood. It automatically tiers data the way that customers want. 
And integrated with things like Athena, you can now query it directly; whenever an object appears, you can wind up automatically firing off Lambda functions and the rest.And this is increasingly looking a lot less like a place to just dump my unstructured data, and increasingly a lot like this is sort of a database, in some respects. Now, understand my favorite database is Route 53; I have a long and storied history of misusing services as databases. Is this one of those scenarios, or is there some legitimacy to the idea of turning this into a database?AB: Actually, there is now the S3 Select API: if you're storing unstructured data like CSV, JSON, Parquet, without downloading even a compressed CSV, you can actually send a SQL query into the system. In MinIO particularly, S3 Select is [SIMD 00:21:16] optimized. We can load, like, every 64k worth of CSV lines into registers and do SIMD operations. It's the fastest SQL filter out there. Now, bringing these kinds of capabilities, we are just a little bit away from a database; should we do a database? I would say definitely no.The very strength of the S3 API is to actually limit all the mutations, right? Particularly if you look at databases, they're dealing with metadata and querying; the biggest value they bring is indexing the metadata. But if I'm dealing with that, then I'm dealing with really small blocks and lots of mutations. Object storage should be dealing with persistence, not mutations. Mutations are [AWS 00:21:57] problem. The separation of the database's query function from the persistence function is where object storage got the storage right.Otherwise, they will make the mistake of doing POSIX-like behavior, and then not only bringing back all those capabilities, but doing IOPS-intensive workloads across HTTP; it wouldn't make sense, right? So, object storage got the API right. But should it be a database? It definitely should not be a database. 
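A toy sketch of what S3 Select does conceptually: the SQL-style filter runs next to the data so only matching rows cross the network. This local stand-in is nothing like MinIO's SIMD-accelerated engine, and the column names are invented for the example; the real call is SelectObjectContent with an expression like `SELECT * FROM s3object s WHERE s.level = 'ERROR'`.

```python
# Local stand-in for the S3 Select idea: filter a CSV object where it
# lives, return only matching rows. Columns and data are invented.

import csv
import io

def select_csv(object_bytes, column, value):
    """Return only rows where `column` equals `value`, like a
    SELECT * ... WHERE filter pushed down to the object store."""
    reader = csv.DictReader(io.StringIO(object_bytes.decode()))
    return [row for row in reader if row[column] == value]

log_object = b"level,msg\nINFO,boot\nERROR,disk full\nINFO,ready\nERROR,timeout\n"
errors = select_csv(log_object, "level", "ERROR")
print(errors)  # only the ERROR rows are "returned to the client"
```

The win is bandwidth: the client receives two rows instead of the whole object, which is exactly why pushing the filter into the store matters at petabyte scale.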
In fact, I actually hate the idea of Amazon yielding to the file system developers and giving a [file tree 00:22:29] hierarchical namespace so they can write nice file managers.That was a terrible idea. A hierarchical namespace that's also sorted now puts a tax on how the metadata is indexed and organized. Amazon should have left the core API very simple and told them to solve these problems outside the object store. Many application developers don't need it. Amazon was trying to satisfy everybody's needs. Saying no to some of these file-system-type, file-manager-type users would have been the right way.But nevertheless, adding those capabilities, eventually—now you can see, S3 is no longer simple. And we have to keep that compatibility, and I hate that part. I actually don't mind compatibility, but all the wrong things that Amazon is adding, now I have to add because it's compatible. I kind of hate that, right?But now going to a database would be pushing it to a whole new level. Here is the simple reason why that's a bad idea. The right way to do a database—in fact, the database industry is already going in the right direction. Unstructured data, key-value or graph, different types of data: you cannot possibly solve all that even in a single database. They are trying to be multimodal databases; even they are struggling with it.You can never be Redis, Cassandra, and SQL all in one. They try to say that, but in reality, you will never be better than any one of those focused database solutions out there. Trying to bring that into object store would be a mistake. Instead, let the databases focus on query language implementation and query computation, and leave the persistence to object store. 
So, object store can still focus on storing your database segments, the table segments, but the index is still in the memory of the database.Even the index can be snapshotted once in a while to object store, but using object store for persistence and the database for query is the right architecture. And almost all the modern databases now, from Elasticsearch to [unintelligible 00:24:21] to even Kafka, like, message queues, they have all gone that route. Even Microsoft SQL Server, Teradata, Vertica, you name it, Splunk, they have all gone the object storage route, too. Snowflake itself is a prime example, BigQuery, and all of them.That's the right way. Databases can never be consolidated. There will be many different kinds of databases. Let them specialize in GraphQL, or Graph API, or key-value, or SQL. Let them handle the indexing; for persistence, they cannot handle petabytes of data. That [unintelligible 00:24:51] to object store is how the industry is shaping up, and it is going in the right direction.Corey: One of the ways I learn the most about various services is by talking to customers. Every time I think I've seen something—this is amazing, this service is something I completely understand—all I have to do is talk to one more customer. And when I was doing a bill analysis project a couple of years ago, I looked into a customer's account and saw a bucket with, okay, 280 billion objects in it—and wait, was that billion with a B?And I asked them, “So, what's going on over there?” And they said, “Well, we built our own columnar database on top of S3. This may not have been the best approach.” And I said, “I'm going to stop you there. 
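The architecture being described, index in the database's memory with immutable segments and occasional index snapshots in the object store, can be sketched in a few lines. Everything here (the class, the key layout, the snapshot name) is a made-up illustration, not any particular database's actual design.

```python
# Sketch of the separation above: the database handles queries with an
# in-memory index; immutable table segments (and periodic index
# snapshots) are persisted to an S3-style object store.

import json

object_store = {}  # stand-in for an S3-compatible bucket

class TinyTable:
    def __init__(self, name):
        self.name = name
        self.index = {}  # row key -> segment object key (in memory, fast)
        self.seq = 0

    def write_segment(self, rows):
        """Persist an immutable segment, then index its rows in memory."""
        key = f"{self.name}/segment-{self.seq:05d}.json"
        object_store[key] = json.dumps(rows).encode()  # segments never mutate
        for row_key in rows:
            self.index[row_key] = key
        self.seq += 1

    def get(self, row_key):
        """Query path: in-memory index lookup, then one object-store read."""
        segment = json.loads(object_store[self.index[row_key]])
        return segment[row_key]

    def snapshot_index(self):
        """Occasionally checkpoint the index itself to the object store."""
        object_store[f"{self.name}/index-snapshot.json"] = json.dumps(self.index).encode()

t = TinyTable("events")
t.write_segment({"user:1": {"clicks": 3}, "user:2": {"clicks": 7}})
t.write_segment({"user:3": {"clicks": 1}})
t.snapshot_index()
print(t.get("user:2"))  # {'clicks': 7}
```

The point of the sketch is the division of labor: the object store only ever sees whole, immutable writes, while all the small, mutation-heavy index work stays in the database.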
With no further context, it was not, but please continue.”It's the sort of thing that would never have occurred to me to even try. Do you tend to see similar—I would say they're anti-patterns, except somehow they're made to work—in some of your customer environments, as they use the service in ways that are very different from the ways encouraged, or even allowed, by the native object store options?AB: Yeah, when I first started seeing the database-type workloads coming onto MinIO, I was surprised, too. That was exactly my reaction. In fact, they were storing these 256k, sometimes 64k table segments because they need to index it, right, and the table segments were anywhere between 64k to 2MB. And when they started writing table segments, it was more often an [IOPS-type 00:26:22] I/O pattern than a throughput-type pattern. Throughput is an easier problem to solve, and MinIO always saturated these 100-gigabit, NVMe-type drives; those database workloads were I/O intensive, while MinIO was throughput optimized.When I started seeing the database workloads, I had to optimize for small-object workloads, too. We actually did all that because eventually I got convinced the right way to build a database was to actually leave the persistence out of the database; they made a compelling argument. Historically, I thought about metadata and data: data, being very big, coming to object store makes sense; metadata should be stored in a database, and that's only the index pages. Take any book: the index pages are only a few. The database can continue to run adjacent to object store; it's a clean architecture.But why would you put the database itself on object store? When I saw a transactional database like MySQL changing [InnoDB 00:27:14] to [RocksDB 00:27:15], and making changes at that layer to write the SS tables [unintelligible 00:27:19] to MinIO, I was like, where do you store the memory, the journal? They said, “That will go to Kafka.” And I was like—I thought that was insane when it started. 
But it continued to grow and grow.Nowadays, I see most of the databases have gone to object store, but their argument is, the databases also saw explosive growth in data, and they couldn't scale the persistence part. That is where they realized that they are still very good at the indexing part, which object storage would never give. There is no API to do sophisticated queries of the data. You cannot peek inside the data; you can just do streaming reads and writes.And that is where the databases were still necessary. But databases were also growing in data. One thing that triggered this was that the use case moved from data generated by people to data generated by machines. Machines means applications, all kinds of devices. Now, going from seven billion people to a trillion devices is how the industry is changing. And this led to lots of machine-generated, semi-structured, structured data at giant scale coming into databases. The databases needed to handle scale. There was no other way to solve this problem other than leaving the—[unintelligible 00:28:31]. If you're looking at columnar data, most of it is machine-generated; where else would you store it? If they tried to build their own object storage embedded into the database, it would make the database immensely complicated. Let them focus on what they are good at: indexing and mutations. Pulling the table segments, which are immutable, mutating in memory, and then committing them back gives the right mix. That pattern is what took off fastest; we saw it consistently across the board. Now, it is actually the standard.Corey: So, you started working on this in 2014, and here we are—what is it—eight years later now, and you've just announced a Series B of $100 million on a billion-dollar valuation. So, it turns out this is not just one of those things people are using for test labs; there is significant momentum behind using this. 
How did you get there from—because everything you're saying makes an awful lot of sense, but it feels, at least from where I sit, to be a little bit of a niche. It's a bit of an edge case that is not the common case. Obviously, I'm missing something, because your investors are not the types of sophisticated investors who see something ridiculous and go, “Yep. That's the thing we're going to go for.” They're right more than they're not.AB: Yeah. The reason for that was they saw what we set out to do. In fact—if you see the lead investor, Intel—they watched us grow. They came in at Series A, and they saw, every day, how we operated and grew. They believed in our message.And it was actually not about object store, right? Object storage was a means for us to get into the market. When we started, our idea was: ten years from now, what will be a big problem? A lot of times, it's hard to see the future, but if you zoom out, it's hidden in plain sight.These are simple trends. Every major trend pointed to the world producing more data. No one would argue with that. If I solved one important problem that everybody is suffering from, I won't go wrong. And when you solve the problem, it's about building a product with fine craftsmanship, attention to detail, connecting with the user, all of that standard stuff.But I picked object storage as the problem because the industry was fragmented across many different data stores, and I knew that wouldn't be the case ten years from now. Applications are not going to adopt different APIs across different clouds; S3 to GCS to Azure Blob to HDFS, everything is incompatible. I saw that if I built a data store for persistence, the industry would consolidate around the S3 API. Amazon S3, when we started: it looked like they were the giant, there was only one cloud, and the industry believed in mono-cloud. 
Almost everyone was talking to me like AWS would be the world's data center.I certainly saw that possibility; Amazon is capable of doing it, but my bet was the other way: AWS S3 will be one of many solutions, not the only one. If it's all incompatible, it's not going to work; the industry will consolidate. Our bet was: if the world is producing so much data, and you build an object store that is S3 compatible and ends up as the leading data store of the world, owning the application ecosystem, you cannot go wrong. We kept our heads low and focused, for the first six years, on massive adoption, building the ecosystem to a scale where we could say our ecosystem is now equal to or larger than Amazon's; then we are in business. We didn't focus on commercialization; we focused on convincing the industry that this is the right technology for them to use. Once they are convinced, once you solve business problems, making money is not hard because they are already sold; they are in love with the product. Then convincing them to pay is not a big deal, because data is so critical, such a central part of their business.We didn't worry about commercialization, we worried about adoption. And once we got the adoption, now customers come to us and they're like, “I don't want an open-source license violation. I don't want a data breach or data loss.” They are trying to sell to me, and it's an easy relationship game. And it's about long-term partnership with customers.And so the business started growing, accelerating. That was the reason that now is the time to fill up the gas tank, and investors were quite excited about the commercial traction as well. And all the intangibles, right, how big we grew in the last few years.Corey: It really is an interesting segment, one that has always been something I've mostly ignored, like, “Oh, you want to run your own? Okay, great.” I get it; some people want to cosplay as cloud providers themselves. Awesome. 
There's clearly a lot more to it than that, and I'm really interested to see what the future holds for you folks.AB: Yeah, I'm excited. I think, at the end of the day, if I solve real problems—every organization is moving from compute-technology-centric to data-centric, and they're all looking at data warehouses, data lakes, and whatever name they give their data infrastructure. Data is now the centerpiece. Software is a commodity. That's how they are looking at it. And it is translating to each of these large organizations—actually, even the mid-size companies, even startups nowadays have petabytes of data—and I see a huge potential here. The timing is perfect for us.Corey: I'm really excited to see this continue to grow. And I want to thank you for taking so much time to speak with me today. If people want to learn more, where can they find you?AB: I'm always in the community, right: Twitter and, like, I think the Slack channel; it's quite easy to reach out to me. LinkedIn. I'm always excited to talk to our users or community.Corey: And we will, of course, put links to this in the [show notes 00:33:58]. Thank you so much for your time. I really appreciate it.AB: Again, wonderful to be here, Corey.Corey: Anand Babu Periasamy, CEO and co-founder of MinIO. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with what starts out as an angry comment but eventually turns into you, in your position on the S3 product team, writing a thank-you note to MinIO for helping validate your market.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. 
We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.
This episode of the InfoQ podcast is the API Showdown, recorded during QCon Plus in November 2021. What is the single best API technology you should always use? Thomas Betts moderated the discussion, with the goal to understand some of the high-level features and capabilities of three popular technologies for implementing APIs. The discussion covers some of the pros and cons of GraphQL and gRPC, and why you might use them instead of a RESTful API. Read a transcript of this interview: https://bit.ly/327JNmD Subscribe to our newsletters: - The InfoQ weekly newsletter: www.infoq.com/news/InfoQ-Newsletter/ - The Software Architects' Newsletter [monthly]: www.infoq.com/software-architects-newsletter/ Upcoming Virtual Events - events.infoq.com/ QCon London: https://qconlondon.com/ - April 4-6, 2022 / London, UK QCon Plus: https://plus.qconferences.com/ - May 10-20, 2022 - Nov 29 - Dec 9, 2022 QCon San Francisco https://qconsf.com/ - Oct 24-28, 2022 InfoQ Live: https://live.infoq.com/ - Feb 22, 2022 - June 21, 2022 - July 19, 2022 - August 23, 2022 Follow InfoQ: - Twitter: twitter.com/infoq - LinkedIn: www.linkedin.com/company/infoq/ - Facebook: www.facebook.com/InfoQdotcom/ - Instagram: @infoqdotcom - Youtube: www.youtube.com/infoq
Azure Digital Twins is a platform-as-a-service offering that enables you to create live digital twin graphs that represent entities and their relationships. In today's world, where your IDE automatically detects your coding style and makes recommendations using machine learning, creating a digital twin might seem like more work than you're accustomed to. First, you'd need to create a document using the Digital Twin Definition Language (DTDL). Then you'd need to add it to Azure Digital Twins using a RESTful API call. You then need to repeat this process as changes are required. A more familiar way might be to use Plain Old Class Objects, or simply POCO. You may already be familiar with this if you've used Entity Framework Core. Using POCO, you may find it easier to create and manage your digital twins with a code-first approach.Learn more about this at https://aka.ms/iotshow/CodeFirstDigitalTwins
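As a sketch of the document-first workflow described above, here is a minimal DTDL interface defined as JSON, the kind of payload you would then upload through the Azure Digital Twins REST API. The model id and property are hypothetical examples; the @context, @id, and @type fields follow DTDL v2 conventions.

```python
# A minimal DTDL interface built as JSON. The Thermostat model id and the
# targetTemperature property are hypothetical; a code-first (POCO) approach
# would generate an equivalent document from an annotated class for you.

import json

thermostat_model = {
    "@context": "dtmi:dtdl:context;2",        # DTDL v2 context
    "@id": "dtmi:example:Thermostat;1",       # hypothetical model id
    "@type": "Interface",
    "displayName": "Thermostat",
    "contents": [
        {"@type": "Property", "name": "targetTemperature", "schema": "double"}
    ],
}

# This serialized body is what gets sent to the models endpoint of the
# Azure Digital Twins REST API in the document-first workflow.
payload = json.dumps(thermostat_model, indent=2)
print(payload)
```

The friction the episode describes is exactly this: hand-authoring and re-uploading such documents on every change, versus letting a code-first tool derive them from your classes.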
Security, data safety, and identity: three major topics and issues for every investor. This episode will tell you how to protect your investments and business thanks to Dragonchain. * Our host Joe Robert will go into detail with CEO Joe Roets about how Dragonchain came to be, and what their plans are for the future. They also discuss the Dragonchain token, DRGN. * Dragonchain is a US-based blockchain software company that simplifies the secure integration of real business applications and data on a blockchain. The blockchain platform provides features such as protection of business data and operations, multi-currency support, and fast RESTful API integrations with any blockchain or legacy system. * The company provides a fully open-source ecosystem to enable the creation of successful and scalable blockchain projects for enterprises with long-term value. Dragonchain was originally developed at Disney's Seattle office as the Disney Private Blockchain Platform in 2014. ►Subscribe to Bull Flag Group: www.bullflaggroup.com *DISCLAIMER: The information provided is not legal, accounting, tax, or investment advice.
As explained in this piece, "A headless CMS is a back-end only content management system (CMS) built from the ground up as a content repository that makes content accessible via a RESTful API or GraphQL API for display on any device." Shopify has leaned hard into GraphQL and APIs in general. The goal, as Coates describes it, is to allow developers to bring their own stack to the front-end, but provide them with the benefits of Shopify's back-end, like edge data processing for improved speed at global scale. Shopify also offers a wealth of DevOps tooling and logistical support when it comes to international commerce. We also discuss Liquid, the flexible template language Shopify uses for building web apps.Our lifeboat badge of the week goes to chunhunghan for answering the question: How to customize the switch button in a flutter?
REST is an architectural style of communication based on HTTP. It was proposed in the year 2000 by Roy Fielding. In his dissertation he describes the way systems should communicate, embracing fundamental features of HTTP. He puts emphasis on: statelessness, support for caching, uniform representation, and self-discoverability. APIs that adhere to these principles are called RESTful. This academic paper is quite abstract, so I'll focus on what it means in the enterprise. Also, it's much easier to understand what a RESTful API is when contrasted with SOAP, and with the more recently released GraphQL. Read more: https://256.nurkiewicz.com/44 Get the new episode straight to your mailbox: https://256.nurkiewicz.com/newsletter
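To ground those principles, here is a toy sketch of a uniform, stateless interface: resources addressed by URL, manipulated with standard HTTP verbs, every request carrying everything needed to serve it. The /books resource and the handler shape are invented for illustration; a real service would sit behind an actual HTTP server.

```python
# Toy RESTful handler: a uniform interface over one resource collection.
# Statelessness: each call carries the full request; no session is kept.
# The /books resource is a made-up example.

books = {1: {"title": "RESTful Web APIs"}}

def handle(method, path, body=None):
    """Dispatch (verb, URL) to a resource operation, REST-style."""
    if path == "/books" and method == "GET":
        return 200, list(books.values())        # list the collection
    if path == "/books" and method == "POST":
        new_id = max(books, default=0) + 1      # create a new resource
        books[new_id] = body
        return 201, {"id": new_id, **body}
    if path.startswith("/books/"):
        book_id = int(path.rsplit("/", 1)[1])
        if method == "GET":
            return (200, books[book_id]) if book_id in books else (404, None)
        if method == "DELETE":
            books.pop(book_id, None)
            return 204, None
    return 404, None                            # unknown resource

print(handle("GET", "/books/1"))  # (200, {'title': 'RESTful Web APIs'})
```

Note what is absent: no per-client state, no custom verbs, no RPC-style method names in the URL. Those absences are the REST constraints at work, and they are also what SOAP and GraphQL trade away or reshape.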
Today’s episode features an interview between Matt Trifiro and Fay Arjomandi, Founder, President, and CEO of mimik technology.Fay is a serial entrepreneur, multiple-time CEO, and authoritative voice in the tech industry. She is an official member of the Forbes Technology Council, and recently received the State of the Edge and Edge Computing World 2020 Edge Woman of the Year Award.In this interview, Fay discusses the origins of mimik’s hybrid edge cloud computing application development platform, and how it is enabling digital transformation and helping build a socially and economically sustainable applications ecosystem. Key Quotes“We are a big supporter of ecosystem. That's number one. We're basically saying that the opportunity is way bigger than one single company winning everything…We think this is an ecosystem play, and the opportunity is quite lucrative for everybody.”“We believe in the ecosystem that is like a Bazaar. Everybody defines their own set of things that they sell, and the consumer can pick and choose different parts of the product that they want from different entities and different companies. But the Bazaar itself goes by a set of regulations.”“The fast pace of the world is why restful APIs are so important, because there is so much we don't know. We don't know what we don't know. And I think that's one of the challenges of enterprises right now, it's almost like ‘Okay, I need to do my digital transformation. Where do I start?’"“Restful API has become so important and defining an architectural pattern of microservices has become so important, because that allows people to iterate on design, development, and delivery and integration with the rest of the world.”“When people say ‘What does mimik do?’ We say simply, we enable microservice development on any device. 
Because that is the key in terms of market adoption of digital solutions in a sustainable and systematic approach.”SponsorsOver the Edge is brought to you by the generous sponsorship of Catchpoint, NetFoundry, Ori Industries, Packet, Seagate, Vapor IO, and Zenlayer.The featured sponsor of this episode of Over the Edge is Seagate Technology. Seagate’s new CORTX Intelligent Object Storage Software is 100% open source. It enables efficient capture and consolidation of massive, unstructured data sets for the lowest cost per petabyte. Learn more and join the community at seagate.comLinksFollow Matt on TwitterFollow Fay on TwitterEdge Woman of the Year Award 2021
CocoaBrew is a mini-podcast format with news about Apple and technology, plus useful material on iOS, Swift, and mobile development. From the CocoaHeads community of iOS developers! Hosts: Nikita Maydanov, Ilya Chikmarev, Lena Gordienko. CocoaBrew #7, with news for April 26 – May 2, 2021. Join t.me/cocoaheads so you don't miss free meetups, discussions, quizzes, and more. Links: Industry (0:00): The Basecamp scandal https://www.platformer.news/p/-what-really-happened-at-basecamp Apple's HIG has been updated, notably with examples about ATT https://developer.apple.com/design/human-interface-guidelines/ios/app-architecture/accessing-user-data/ M2 processors have entered mass production https://www.macrumors.com/2021/04/27/apple-m2-mac-chip-enters-mass-production/ FAS files a $12M lawsuit against Apple over Kaspersky (parental control) https://fas.gov.ru/news/31268 Community (3:00): Round table on architectures https://youtu.be/qNu_9RjfV6E SubHub, a community about subscriptions t.me/subhub_chat Andrey Volodin released a review of the Mac Pro https://www.youtube.com/watch?v=3x3cWEWn1bQ Swift (3:56): Swift 5.4 released + Xcode 12.5 https://swift.org/blog/swift-5-4-released How ordered sets work https://oleb.net/2021/ordered-set Matchable protocol https://github.com/elegantchaos/Matchable SwiftUI in production from PSPDFKit https://pspdfkit.com/blog/2021/swiftui-in-production/ Abstracting navigation in SwiftUI https://obscuredpixels.com/abstracting-navigation-in-swiftui RESTful API with Vapor https://theswiftdev.com/how-to-design-type-safe-restful-apis-using-swift-and-vapor/ A bit about print in Swift https://www.andyibanez.com/posts/swift-print-in-depth/ Developer (5:54): Google is tidying up its store https://android-developers.googleblog.com/2021/04/updated-guidance-to-improve-your-app.html
Tune in to this installment of Modulate Demodulate as Darren O'Connor joins Chris C., Nick, and Dave to discuss his side project—BGPStuff.net. This tool is a modern BGP looking glass built in Golang that anyone can use to gather a wealth of information on the BGP Routing table. Some of the things you can see are AS_Path, Origin AS, ROA, ASName, RPKI Invalids, DFZ RIB size, and more. In addition to a nice web interface, the latest version of BGPStuff introduces an updated RESTful API. Come check out our discussion of the architecture and technology behind BGPStuff.net! Links Mentioned BGPStuff Presentation Darren's VirtualNOG Project BGPStuff Python Client ASN Bogon Validation Python Library BGP6-Table TwitterBot BGP4-Table TwitterBot
Becky Jaimes is a product manager at Salesforce interviewing Dejim Juang, Master Principal Solutions Engineer at MuleSoft. Recently, Dejim wrote an article describing how to connect MuleSoft with Heroku Postgres as a new data source. The main function of MuleSoft is to integrate with various SOA, SaaS, and API systems and provide developers with a single integration point. Rather than writing entirely new data ingestion software from scratch, MuleSoft does the heavy lifting of connecting to data sources and responding with the requested information. MuleSoft can be used to build integrations between Salesforce and applications outside of that ecosystem through a drag-and-drop interface. Some use cases where MuleSoft might not be appropriate include building a BPM tool or managing file transfers; although MuleSoft certainly has these capabilities, they are too fragile and inefficient to be relied upon heavily. For database connections, you can make RESTful API calls to MuleSoft and have it access information across all of your systems. This is especially useful if your customer data is located in one place and your software data is located somewhere else. Developers can also write their own code to manipulate the data from disparate sources. They can choose to share their project on the Anypoint Exchange, or continue to use it locally. Although Java is the primary language of choice, there are also scripting options for JavaScript, Python, .NET, and Ruby. MuleSoft also includes protections against breaking changes from underlying database migrations, as well as issues with connectivity. Links from this episode: Dejim Juang's post on the connector between MuleSoft and Heroku Postgres; docs.mulesoft.com and training.mulesoft.com provide more information on working with MuleSoft
For episode 11 of season 2, Stéphanie and Walter had an extensive conversation with Bob van Luijt. He is the founder of Semi.Tech, the company behind Weaviate. And if it's up to Bob, Weaviate will be the new standard for storing data within a few years. Scaling up via Google and ING: in this episode, Bob talks at length about his career and the choices he made to get where he is now with his scaling start-up. He explains how a visit to a Google expert-group session in San Francisco and a Bain & Company report on the cost of data preparation planted the seed for Weaviate. ING also plays an important role here, and not only as an early adopter. Weaviate, Semi.Tech's product, is an open-source smart graph accessible through GraphQL and a RESTful API, built on a graph-embedding mechanism called the Contextionary. Show notes at https://www.dedataloog.nl/uitzending/s2e11-semi-tech-geeft-ieder-data-object-een-eigen-coordinaat/
In this episode of The Podlets Podcast, we are diving into contracts and some of the building blocks of the cloud-native application. The focus is on the importance of contracts and how APIs help us and fit into the cloud-native space. We start off by considering the role of the API at the center of a project and some definitions of what we consider to be an API in this sense. This question of API-first development sheds some light onto Kubernetes and what necessitated its birth. We also get into picking appropriate architecture according to the work at hand, Kubernetes' declarative nature, and how microservices address the problems often experienced in more monolithic work. The conversation also covers some of these particular issues, while considering possible benefits of the monolith development structure. We talk about company structures, Conway's Law, and best practices for avoiding the pitfalls of these, so for all this and a whole lot more on the subject of APIs and contracts, listen in with us, today! Note: our show changed name to The Podlets. Follow us: https://twitter.com/thepodlets Website: https://thepodlets.io Feedback and episode suggestions: info@thepodlets.io https://github.com/vmware-tanzu/thepodlets/issues Hosts: Carlisia Campos Josh Rosso Duffie Cooley Patrick Barker Key Points From This Episode: • Reasons that it is critical to start with APIs at the center. • Building out the user interface and how the steps in the process fit together. • Picking the way to approach your design based on the specifics of that job. • A discussion of what we consider to qualify as an API in the cloud-native space. • The benefit of public APIs and more transparent understanding. • Comparing the declarative nature of Kubernetes with more imperative models. • Creating and accepting pods, querying APIs and the cycle of Kubernetes. • The huge impact of the declarative model and correlation to other steps forward. • The power of the list and watch pattern in Kubernetes. 
• Discipline and making sure things are not misplaced with monoliths. • How microservices go a long way toward eradicating some of the confusion that arises in monoliths. • Counteracting issues that arise out of a company's own architecture. • The care that is needed as soon as there is any networking between services. • Considering the handling of an API's lifecycle through its changes. • Independently deploying outside of the monolith model and the dangers to a system. • Making a service a consumer of a centralized API and flipping the model. Quotes: “Whether that contract is represented by an API or whether that contract is represented by a data model, it’s critical that you have some way of actually defining exactly what that is.” — @mauilion [0:05:27] “When you just look at the data model and the concepts, you focus on those first, you have a tendency to decompose the problem.” — @pbarkerco [0:05:48] “It takes a lot of discipline to really build an API first and to focus on those pieces first. It’s so tempting to go right to the UI. 
Because you get these immediate results.” — @pbarkerco [0:06:57] “What I’m saying is, you shouldn’t do one just because you don’t know how to do the others, you should really look into what will serve you better.” — @carlisia [0:07:19] Links Mentioned in Today’s Episode: The Podlets on Twitter — https://twitter.com/thepodlets Nicera — https://www.nicera.co.jp/ Swagger — https://swagger.io/tools/swagger-ui/ Jeff Bezos — https://www.forbes.com/profile/jeff-bezos/ AWS — https://aws.amazon.com/ Kubernetes — https://kubernetes.io/ Go Language — https://golang.org/ Hacker Noon — https://hackernoon.com/ Kafka — https://kafka.apache.org/ etcd — https://etcd.io/ Conway’s Law — https://medium.com/better-practices/how-to-dissolve-communication-barriers-in-your-api-development-organization-3347179b4ecc Java — https://www.java.com/ Transcript: EPISODE 03 [INTRODUCTION] [0:00:08.7] ANNOUNCER: Welcome to The Podlets Podcast, a weekly show that explores Cloud Native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn’t reinvent the wheel. If you’re an engineer, operator or technically minded decision maker, this podcast is for you. [EPISODE] [0:00:41.2] D: Good afternoon everybody, my name is Duffy and I’m back with you this week. We also have Josh and Carlisia and a new member of our cast, Patrick Barker. [0:00:49.4] PB: Hey, I’m Patrick, I’m an upstream contributor to Kubernetes. I do a lot of stuff around auditing. [0:00:54.7] CC: Glad to be here. What are we going to talk about today? [0:00:57.5] D: This week, we’re going to talk about some of the building blocks of a cloud native application. We’re going to focus on contracts and how APIs help us and why they’re important to the cloud native ecosystem. 
Usually, with these episodes, we start talking about the problem first and then we kind of dig into why this particular solution, something like a contract or an API, is important. And so, to kind of kick that off, let’s talk about maybe this idea of API-first development and why that’s important. I know that Josh and Patrick both, and Carlisia, have all done some very interesting work in this space as far as developing your applications with that kind of a model in mind. Let’s open the floor. [0:01:34.1] PB: It’s critical to build API-centric. When you don’t build API-centric, most commonly you’ll see, across the ecosystem, building UI-centric. It’s very tempting to do this sort of thing because UIs are visually enticing and they’re kind of eye candy. But when you don’t go API-centric and you go that direction, you kind of miss the majority of use cases down the line, which are often around an SDK; those end up being, more often than not, the flows that are the most useful to people, but they’re kind of hard to see at the beginning. I think going and saying we’re building a product API-first is really saying, we understand that this is going to happen in the future and we’re making this a principle early. We’re going to enforce these patterns early, so that we develop a complete product that could be used in many fashions. [0:02:19.6] J: I’ve seen some of that in the past as well, working for a company called Nicera, which is a network virtualization company. We really focused on providing an API that would sit between you and your network infrastructure, and I remember it being really critical that we define, effectively, what would be the entire public API for that product up front. Then later on, obviously to learn the semantics of that API, to be able to build a mental model around what that API might be, that’s where the UI piece comes in. 
That was an interesting experiment, and we ended up creating what was kind of an early version of the Swagger UI, in which you basically had a UI that would allow you to explore, introspect, and play with all of those different API objects. But it wasn’t a UI in the sense that it had, like, a constrained user story that was trying to be defined. That was my first experience working with a product that had an API-first model. [0:03:17.0] CC: I had to warm up my brain. I was thinking about why we build APIs to begin with, before I could think about why API-first is a benefit and where the benefits are. And I actually looked up something today, this Jeff Bezos mandate; I had seen this before, right? I mean, why do we build APIs? What you’re talking about is data transfer, right? Taking data from over here and sending it over there, or making it available so somebody can fetch it. It’s communication. Why do we build APIs? To make it easier to do that, so you can automate, you can expose it, you can gate it with some security, right? Authentication, all of those things, and with an ever-increasing amount of data, this becomes more and more relevant. And I think, as Patrick was saying, when you do it API-first, you’re absolutely focusing on making all of those characteristics a priority, making that work well. If you want to make it pretty, okay, you can take that data and transform it some other way to make your presentation pretty, to display on a mobile device or whatever. [0:04:26.4] PB: Yeah, I think another thing with inserting the API design upfront in the software development lifecycle, at least in my experience, has been that it allows you to gather feedback from who your consumers will be early on, before you worry about the intricacies of all the implementation details, right? 
I guess with the Nicera stuff, I wonder, when you all made that contract, were you pushing out a Swagger UI or just general API documentation before you had actually implemented the underlying pieces, or did that all happen together? [0:04:58.1] D: With an API-first approach, we didn’t build out the UI until after the fact. Even to the point where we would define a new object in that API, like a distributed logical router for example, we would actually define that API first, and we would have test plans for it and all of that stuff, and then we would surface it in the UI part of it. And that’s a great point. I will say that it is probably to your benefit in the long run to define what all of the things that you’re going to be concerned with are up front. And if you can do that on a contractual basis, whether that contract is represented by an API or whether that contract is represented by a data model, it’s critical that you have some way of actually defining exactly what that is, so that you can also support things like versioning and being able to actually modify that contract as you move forward. [0:05:45.0] PB: I think another important piece here, too, is when you just look at the data model and the concepts, you focus on those first, you have a tendency to decompose the problem more, right? You start to look at things and you break it down better into individual pieces that combine better, and you end up with more use cases and a more usable API. [0:06:03.2] D: That’s a good point. Yeah, I think one of the key parts of this contract is kind of like what you’re trying to solve, and it’s always important, you know? 
I think that, when I talk about API-first development, it is totally kind of in line with that. You have to kind of think about what all the use cases are, and if you’re trying to develop a contract that might satisfy multiple use cases, then you get this great benefit of being able to collapse a lot of the functionality down into a more composable API, rather than having to solve each individual use case in kind of a myopic way. [0:06:34.5] CC: Yeah, it’s the concept of reusability, having the ability of making things composable, reusable. [0:06:40.7] D: I think we’ve probably all seen UIs that get stuck in exactly that pattern, to Patrick’s point. They try to solve the user story for the UI, and then on the backend, you’re like, why do we have two different data models for the same object? It doesn’t make sense. We have definitely seen that before. [0:06:57.2] PB: Yeah, I’ve seen that more times than not. It takes a lot of discipline to really build an API, you know, first, to focus on those pieces first. It’s so tempting to go right to the UI, because you get these immediate results and everyone loves that, but you really need to bring that back; it takes discipline to focus on the concepts first, but it’s just so important to do. [0:07:19.5] CC: I guess it really depends on what you are doing, too. I can see all kinds of benefits for any kind of approach. But I guess one thing to highlight is that there are different ways of doing it: you can do UI-first, presentation-first; you can do API-first; and you can do model-first. Those are three different ways to approach the design, and then you have to think, well, what I’m saying is, you shouldn’t do one just because you don’t know how to do the others; you should really look into what will serve you better. [0:07:49.4] J: Yeah, with a lot of this talk about APIs and contracts, obviously in software there are many levels of contracts we potentially work on, right? 
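The contract-first idea the hosts are describing can be sketched in Go: define the interface before any implementation exists, so every consumer (CLI, UI, another service) codes against the same shape. Everything here, `PodStore`, the in-memory store, and the method names, is an illustrative invention, not an API from the episode.

```go
package main

import (
	"errors"
	"fmt"
)

// The contract is defined first: any backend (in-memory, REST, database)
// must satisfy this interface before a UI or CLI is ever written.
type PodStore interface {
	Create(name string) error
	Get(name string) (string, error)
}

// One possible implementation; consumers never depend on it directly.
type memStore struct{ pods map[string]string }

func newMemStore() *memStore { return &memStore{pods: map[string]string{}} }

func (s *memStore) Create(name string) error {
	if _, ok := s.pods[name]; ok {
		return errors.New("already exists")
	}
	s.pods[name] = "Pending"
	return nil
}

func (s *memStore) Get(name string) (string, error) {
	status, ok := s.pods[name]
	if !ok {
		return "", errors.New("not found")
	}
	return status, nil
}

func main() {
	var store PodStore = newMemStore() // consumers see only the contract
	store.Create("web")
	status, _ := store.Get("web")
	fmt.Println(status)
}
```

Because `main` holds a `PodStore` rather than a `*memStore`, swapping in a REST-backed implementation later changes no calling code, which is exactly the versionable-contract benefit described above.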
There’s the higher-level, potential UI stuff, and sometimes there are lower-level pieces with code. Perhaps, if you all think it’s a good idea, we could start by talking about what we consider to be an API in the cloud-native space and what we’re referring to. A lot of the APIs we’ve described so far, if I heard everyone correctly, sounded like they were describing perhaps a web service of sorts. Is that fair? [0:08:18.8] PB: That’s an interesting point to bring up. I’m definitely describing the consumption model of a particular service. I’m referring to that contract as an infrastructure guy: I want to be able to consume an API that will allow me to model or create infrastructure. I’m thinking of it from that perspective. If AWS didn’t have an API, I probably wouldn’t have adopted it; the UI is not enough to do this job. I need something that I can tie to better abstractions, things like Terraform and stuff like that. I’m definitely kind of picturing it from that perspective. But I will add one other interesting point to this, which is that in some cases, to Josh’s point, these things are broken up into public and private APIs. That might be kind of interesting to dig into, why you would model it that way. There are certainly different interactions between composed services that you’re going to have to solve for. It’s an interesting point. [0:09:10.9] CC: Let’s hold that thought for a second. We are acknowledging that there are public and private APIs, and we could talk about why services work that way. There are other flavors of APIs, too: you can have, for example, a web service type of API, and you can have a command line API, right? You can have a CLI on top of a web service API; the example that comes to mind is Kubernetes. They have different shapes and different flavors, even though they are accessing pretty much the same functionality. 
You know, of course, they have different purposes, and you have to have a CLI. And yet another one is the library: in this case, you make calls to the library, which calls the web service API. But like Duffie is saying, it’s critical sometimes to be able to have these different entry points, because each one has its different advantages. A lot of times it’s way faster to do things on the command line than through a UI on the web that would access that web API, which basically you do want to have: either a UI interface or a CLI interface for that. [0:10:21.5] PB: What’s interesting about Kubernetes, too, and what I think they kind of introduced, and someone can correct me if I’m wrong, is this kind of concept of a core generative type, and in Kubernetes it ends up being this [inaudible]. From the [inaudible], you’re generating out the web API and the CLI and the SDK, and they all just come from this one place; it’s all code-gen out of that. Kubernetes is really the first place I’ve seen do that, but it’s a really impressive model, because you end up with this nice congruence across all your interfaces. It just makes the product really approachable; you can understand the concepts better because everywhere you go, you end up with the same things and you’re interacting with them in the same way. [0:11:00.3] D: Which is kind of the defined type interface that Kubernetes relies on, right? [0:11:04.6] PB: Obviously, Kubernetes is incredibly declarative, and we could talk a bit about declarative versus imperative; it’s almost entirely declarative. You end up with a nice, neat, clear model which goes out to YAML, and you end up with a pretty clean interface. [0:11:19.7] D: If we’re going to talk about the API as it could be consumed by other things, I think we’re talking a little bit about the forward-facing API. This is one of those things that I think Kubernetes does differently than pretty much any other model that I’ve seen. 
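Patrick's "single core generative type" point can be sketched with the standard library: one struct definition feeds both the HTTP API and the CLI rendering, so the surfaces cannot drift apart. This is a toy stand-in for Kubernetes' actual code generation; the `Foo` type and handler are made up for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// Foo is the single source of truth; the HTTP API and the CLI
// both render this one type, so the interfaces stay congruent.
type Foo struct {
	Name     string `json:"name"`
	Replicas int    `json:"replicas"`
}

func apiHandler(w http.ResponseWriter, r *http.Request) {
	json.NewEncoder(w).Encode(Foo{Name: "demo", Replicas: 3})
}

// cliRender is the command-line surface over the same core type.
func cliRender(f Foo) string {
	return fmt.Sprintf("%s\t%d", f.Name, f.Replicas)
}

func main() {
	// Exercise the HTTP surface in-process.
	srv := httptest.NewServer(http.HandlerFunc(apiHandler))
	defer srv.Close()

	resp, err := http.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var f Foo
	json.NewDecoder(resp.Body).Decode(&f)

	// The CLI surface renders the very same decoded object.
	fmt.Println(cliRender(f))
}
```

Adding a field to `Foo` updates the web API, the CLI output, and any generated client in one edit, which is the congruence being praised in the conversation.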
In Kubernetes, there are no hidden APIs. There’s no private API; everything is exposed all the time, which is fascinating, because it means that the contract has to be solid for every consumer, not just the public ones, but also anything that’s built on the back end of Kubernetes: the kubelet, the controller manager, all of these pieces are going to be accessing the very same API that the user does. I’ve never seen another application built this way. In most applications, what I see is actually that they might define an API between particular services, so that you might have a contract between those particular services. Because this is literally, to Carlisia’s point, how most of the APIs I’ve seen are contract-written: this is about how do I get data, consume data, or interact with data between two services. So there might be a contract between that service and all of its consumers, rather than between the core and all of the consumers. [0:12:21.7] D: Like you said, Kubernetes is the first thing I’ve seen that does that. I’m building an API right now, and there’s a strong push for internal APIs for it. But we’re building on top of a Kubernetes product, and it’s so interesting how they’ve been able to do that, where literally every API is public, and it works well; there really aren't issues with it. And I think it actually creates a better understanding of the underlying system, and more people can probably contribute because of that. [0:12:45.8] J: On that front, I hope this is a good segue, but I think it would be really interesting to talk about that point you made, Patrick, around declarative versus imperative, and how the API we’re discussing right now with Kubernetes in particular is almost entirely declarative. Could you maybe expand on that a bit and compare the two? [0:13:00.8] PB: It’s an interesting thing that Kubernetes has really brought to the forefront. I don’t know if there’d be another notable declarative API besides Terraform. 
This notion that you just declare state within a file, apply that up to a server in some capacity, and then that state is acted on by a controller which brings it to fruition. I mean, that’s almost indicative of Kubernetes at this point, I think. It’s so ingrained into the product, and it’s one of the first things to do that, such that it’s almost what you think of when you think of Kubernetes. And with the advent of CRDs and whatnot, that can now be extended out to really any use case you would have that fits this declarative pattern of just declaring state, and it turns out there’s a ton of use cases, and that’s incredibly useful. Now they’re looking at, in core Kubernetes, could we add imperative functionality on top of the declarative resources, which is interesting too. They’re looking at that for V2 now, because there are limitations; there are some things that just don’t fit into the declarative pattern perfectly, that don’t fit just the standard REST verbs. You end up with some weird edges there. As they’re going towards V2, they’re starting to look at whether they could mix imperative and declarative, which is an even more interesting idea, if you could do that right. [0:14:09.3] CC: In the Kubernetes world, what would that look like? [0:14:11.3] PB: Say you have an object that just represents something, like a FOO. You have a YAML file and you’re declaring FOO to be some sort of thing. You can apply that file, and now that state exists within the system, and things notice that that state has changed and act on that state. There are times when you might want that FOO to have another action. Besides just applying state, you may want it to have some sort of capability on top of that, let’s say; there are quite a few use cases where that turns into a thing. It’s something to explore, but it’s a bit of a Pandora’s box, if you will, because where does that end? 
Kubernetes is kind of nice in that it does enforce constraints at this core level, and it produces these really deep patterns within the system that people find easy to understand, at least at a high level. Granted, when you go deep into it, it gets highly complex, but enforcing namespaces as this concept of just a flat namespace with declarative resources within it, and then the declarative resources themselves being confined to the standard REST verbs, is a model that people understand well. I think this is part of the success of Kubernetes: people could get their hands around that model. It’s also just incredibly useful. [0:15:23.7] D: Another way to think about this: you’ve probably seen articles out there that describe the RESTful model and talk about whether REST can be transactional. Let’s talk a little bit about what that means. One implementation of an API pattern, or an interface pattern, might be that the client sends information to the server, and the server locks that client connection until it’s able to return the result, whatever that result is. Think about it: in some ways, this is very much like a database, right? As a client of a database, I want to insert a row into a database; the database will lock that row, it will lock my connection, it will insert that row, and it will return success. In this way, it’s synchronous. It’s not trying to just accept the change; it wants to make sure that it has persisted that change to the database before letting go of the connection. This pattern is probably one of the most common patterns in interfaces in the world; it is super common. But it’s very different from the RESTful pattern, or some of the implementations of a RESTful pattern, especially in this declarative model, right? 
In a declarative model, the contract is basically: I’m going to describe a thing, and you’re going to tell me when you understand the thing I want to describe. It’s asynchronous. For example, if I were interacting with Kubernetes and I ran kubectl create pod, I would provide the information necessary to define that pod declaratively, and I would get back from the API server: 200 OK, pod has been accepted. It doesn’t mean it's been created. It means it’s been accepted as an object and persisted to disk. Now, to understand, from a declarative perspective, where I am in the lifecycle of managing that pod object, I have to query that API again: hey, this pod that I asked you to make, are you done making it, and how does this work, and where are you in that cycle of creating that thing? This is where, within Kubernetes, we have the idea of a spec, which defines all of the bits that are declaratively described, and we have the idea of a status, which describes what we’ve been up to around that declarative object and whether we’ve actually successfully created it or not. I would argue that, from a cloud-native perspective, that declarative model is critical to our success, because it allows us to scale, and it allows us to provide an asynchronous API around those objects that we’re trying to interact with, and it really changes the game as far as how we go about implementing those inputs. [0:17:47.2] CC: This is so interesting. It was definitely a mind-bender for me when I started developing against Kubernetes. What do you mean you’ve returned 200 OK and the thing is not created yet? When does it get created? It’s not hard to understand, but I was so not used to that model. I think it gives us a lot of control. 
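Duffie's description, where "create" only means "accepted and persisted" and a controller later drives the observed status toward the declared spec, can be sketched as a small loop. All names here (`Pod`, `Server`, `Reconcile`) are illustrative, not real Kubernetes code.

```go
package main

import (
	"fmt"
	"sync"
)

// A declared object carries both the desired state (Spec) and the
// observed state (Status), echoing the spec/status split described above.
type Pod struct {
	Spec   string // what the user asked for
	Status string // what the controller has achieved so far
}

type Server struct {
	mu   sync.Mutex
	pods map[string]*Pod
}

// Create is asynchronous: it persists the declared spec and returns
// immediately, before any real work has happened.
func (s *Server) Create(name, spec string) string {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.pods[name] = &Pod{Spec: spec, Status: "Pending"}
	return "202 Accepted" // the object exists; it is not yet running
}

// Reconcile is the controller's job: drive Status toward Spec.
func (s *Server) Reconcile() {
	s.mu.Lock()
	defer s.mu.Unlock()
	for _, p := range s.pods {
		if p.Status != p.Spec {
			p.Status = p.Spec // in reality: start containers, etc.
		}
	}
}

func (s *Server) Status(name string) string {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.pods[name].Status
}

func main() {
	srv := &Server{pods: map[string]*Pod{}}
	fmt.Println(srv.Create("web", "Running")) // returns before work is done
	fmt.Println(srv.Status("web"))            // still Pending
	srv.Reconcile()                           // controller catches up later
	fmt.Println(srv.Status("web"))            // now Running
}
```

The caller's only way to learn where the object is in its lifecycle is to query `Status` again, which is exactly the "I have to query that API again" behavior described in the transcript.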
So it is very interesting that way, and I think you might be right, Duffie, that it might be critical to the success of cloud-native apps. The way I am thinking about it right now, having just heard you, is almost like this: with the older models, say you are working with a database in a transactional system, the data has to be inserted, and that system decides whether to retry or not once the transaction is complete and we get a result back. With the Kubernetes model, or the cloud-native model, I don't know which is the proper thing to say, the control is with us. We send the request, Kubernetes is going to do its thing, which allows us to move on too, which is great, right? Then I can check for the result when I want to check, and then I can decide what to do with the result when I want to do anything with it, if at all. I think it gives us a lot more control as developers. [0:19:04.2] D: Agreed. And I think another thing that has stuck in my head around this model, whether it be declarative or imperative, is that I think that Golang itself has really enabled us to adopt that asynchronous model, in that threads are first class, right? You can build a channel to handle each individual request. You are not in this world where all transactions have to stop until this one is complete and then we’ll take the next one out of the queue and do that one. We're no longer in that kind of a queue model; we can actually handle these things in parallel quite a bit more. It makes you think differently when you are developing software. [0:19:35.9] J: It’s crazy, too, that you can check this stuff into a repo. The advent of GitOps is almost parallel to the advent of Kubernetes and Terraform, in that you can now have this state that is source-controlled, and you just apply it to the system, and it understands what to do with it and how to put all of the pieces together that you gave it, which is a super powerful model. 
[0:19:54.7] D: There is a point to that whole asynchronous model; it is the idea of an API that has a declarative or an imperative model, and this is an idea in distributed systems that is [inaudible]: edge triggering versus level triggering. I definitely recommend looking up this idea. There is a great article on it on Hacker Noon, and what they highlight is that from a pure abstract perspective, there is probably no difference between edge and level triggering. But when you get down to the details, especially with distributed systems or cloud-native architectures, you have to take into account the fact that there is a whole bunch of disruption between your services pretty much all the time, and this is the challenge of distributed systems in general: you are defining a bunch of unique individual systems that need to interact, and they are going to rely on an unreliable network, and they are going to rely on unreliable DNS. And they’re going to rely on all kinds of things that are going to jump in the way of these communication models. And the question becomes: how do you build a system that can be resilient to those interruptions? The asynchronous model absolutely puts you in that place, where if you are in that situation wherein you say, “Create me a pod,” that pod object is persisted, and now you can have something else do the work that will reconcile that declared state with the actual state until it works. It will just keep trying and trying and trying until it works. In other models, you basically say, “Okay, well, what work do I have to do right now? I have to focus on doing this work until it stops.” What happens if the process itself dies? What happens if any of the interruptions that we talk about happen? Another good example of this is the Kafka model versus something like a watch on etcd, right? In Kafka, you have these events that you are watching for. 
And if you weren’t paying attention when that event went by, you didn’t get that event. It is not still there; it is gone now. Whereas with etcd and models like that, what you are saying is: I need to reconcile my view of the world with what the desired state is. And so I am no longer looking for events; I am looking for a list of work that I have to reconcile, which is a very different model for these sorts of things. [0:21:47.9] J: In Kubernetes, this becomes the informer pattern, if you all don’t know, which is basically, at the core of the informer, just this concept of list and watch, where you are watching for changes, but every so often you list as well in case you missed something. I would argue that that pattern is so much more powerful than the Kafka model you were just describing, because, like you mentioned, if you missed an event in Kafka, somehow, someway, it is very difficult to reconcile that state. Like you mentioned, your entire system can go down in a level-set system; you bring it back up, and because it is level-set, everything just figures itself out, which is a lot nicer than your entire system going down in an edge-based system and trying to figure out how to put everything back together yourself, which is not a fun time, if you have ever done it. [0:22:33.2] D: These are some patterns in the contracts that we see in the cloud-native ecosystem, so it is really interesting to talk about them. Did you have another point, Josh, around APIs and stuff? [0:22:40.8] J: No, not in particular. [0:22:42.2] D: So I guess we can dig into what some of the forms of these APIs are. We could talk about RESTful APIs versus gRPC-based APIs, or maybe even just interfaces back and forth between modular code and how that helps you architect things. 
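The list-and-watch pattern Josh describes can be simulated in a few lines: apply the watch events you did receive, but periodically relist the full state so a missed event never leaves the cache permanently stale. The maps standing in for etcd and the informer cache are, of course, a toy model, not client-go.

```go
package main

import "fmt"

// The source of truth, as etcd would hold it.
var truth = map[string]string{"a": "v1", "b": "v1"}

// A local cache kept in sync by the informer-style loop.
var cache = map[string]string{}

// relist copies the full state; this is the periodic "list" that
// heals the cache even when watch events were dropped.
func relist() {
	for k, v := range truth {
		cache[k] = v
	}
}

func main() {
	relist() // initial list

	// State changes; only ONE of the two events is ever delivered.
	truth["a"] = "v2"
	truth["b"] = "v2" // this change produces no event (missed!)
	events := []struct{ key, val string }{{"a", "v2"}}

	for _, e := range events { // watch: apply the events we did see
		cache[e.key] = e.val
	}
	fmt.Println("after watch:", cache["a"], cache["b"]) // b is stale

	relist() // periodic resync repairs the miss
	fmt.Println("after relist:", cache["a"], cache["b"])
}
```

This is the level-triggered property praised in the conversation: because `relist` reconciles against the full current state rather than a stream of edges, a crash or a dropped event costs only one resync interval, not a manual reconstruction.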
One of the things I've had conversations with people about is that we spend a lot of our time, in cloud native architecture, conditioning our audience to the idea that monoliths are bad, bad, bad and they should never do them. And that is not necessarily true, right? I think it is definitely worth talking through why we have these different opinions and what they mean. When I have that conversation with customers, frequently a monolith makes sense, as long as you are able to build modularity into it and you are being really clear about the interfaces back and forth between those functions, with the idea that you may have to actually scale traffic to or from this monolith. The function that you are writing may need to be effectively externalized in such a way that it can handle an amount of work that will surpass what the entire monolith can handle. As long as you are really clear about the contract that you are defining between those functions, then later on, when it comes time to externalize those functions and embrace a more microservices-based model, mainly due to traffic load or any of the other concerns that drive you toward a cloud native architecture, I think you are in a better spot. This is definitely one of the points about contracts that I wanted to raise. [0:24:05.0] CC: I wonder, though, how hard it is for people to keep that in mind and follow that intention. If you have to break things into microservices because you have bottlenecks in your monolith, maybe you have to redo the whole thing. Once you have the microservices, you have gone through the exercise of deciding, you know, this goes here, this goes there, and once you have the separate modules, it is clear where things should go. But when you have a monolith, it is so easy to put things in a place where they shouldn't be. It takes so much discipline, and if you are working on a team that is greater than two, I don't know.
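The discipline being described here, keeping module boundaries explicit inside a monolith so that functions can be externalized later, can be sketched in a few lines. This is a hypothetical Python example (the names BillingService and InMemoryBilling are made up): callers depend only on a small contract, so swapping the in-process implementation for a networked one later does not change them.

```python
from typing import Protocol

# Hypothetical sketch of an in-process contract between monolith modules.

class BillingService(Protocol):
    def charge(self, customer_id: str, cents: int) -> bool: ...

class InMemoryBilling:
    """Today's implementation: a plain in-process module."""
    def __init__(self) -> None:
        self.ledger = {}

    def charge(self, customer_id: str, cents: int) -> bool:
        self.ledger[customer_id] = self.ledger.get(customer_id, 0) + cents
        return True

def checkout(billing: BillingService, customer_id: str, cents: int) -> str:
    # Callers depend only on the contract. Swapping InMemoryBilling for an
    # HTTP-backed client later would not change this function at all.
    return "ok" if billing.charge(customer_id, cents) else "failed"

print(checkout(InMemoryBilling(), "cust-1", 499))  # prints "ok"
```

The contract, not the transport, is the stable thing; that is what makes the later move to microservices a refactor rather than a rewrite.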
[0:24:44.3] PB: There are certain languages that lend themselves to these things. When you are writing Java services, or even when you are quickly, rapidly prototyping an application that has multiple functions, it can be hard to be careful about those interfaces you are writing. Go, because it is a strongly typed language, kind of forces you into this, right? There are some other languages out there that make it difficult to be sloppy about those interfaces, and I think that is inherently a good thing. But to your point, if you look at some of the larger monoliths that are out there, it is very easy to fall into these patterns where instead of an asynchronous API or an asynchronous interface, you have just a native, synchronous interface, in which you expect to be able to call a function, put something in, and get the result back. And that is a pattern for monoliths. That is how we do it in monoliths. [0:25:31.8] CC: What you said there also made me think of Conway's Law, because when we separate these into microservices, and I am not saying microservices are right for everything, for every team and every company, but if you are going through that exercise of separating things because you have bottlenecks, then maybe in the future you have to put them elsewhere. Externalize them, like you said. If you think of Conway's Law, if you have a big team, everybody working on that same monolith, that is when things end up in places they shouldn't be.
The point of microservices is not just to technically separate things but to allow people to work separately, and that inter-team communication is going to be reflected in the software that they are creating. Because they are forced to communicate, and hopefully they do it well, those microservices should be well-designed. But if you have a monolith and everyone working on the same project, it gets more confusing. [0:26:31.4] D: Conway's Law, as an overview, is basically that an organization will build software laid out similarly to the way the organization itself is structured. So if everybody in the entire company is working on one thing and they are really focused on doing that one thing, you will probably build a monolith. If you have groups that are disparate and are really focused on some subset of work and need to communicate with each other to do that thing, then you are going to build something more like microservices. That is a great point. Actually, one of the things about [inaudible] that I found so fascinating is that we would be a hundred people and we were everywhere, so communication became a problem that absolutely had to be solved or we wouldn't be able to move forward as a team. [0:27:09.5] J: An observation from my past life, helping folks break apart Java monoliths, was that, like you said, Duffy, assuming they had really good interfaces and contracts, it was a lot easier to find the breaking points for their APIs, to pull those APIs out into a different type of API. They went from this programmatic API that was in the JVM, where things were just intercommunicating, to an API that was based on a web service.
And an interesting observation I oftentimes found was that people didn't realize that moving complexity from within the app out into the network space oftentimes caused a lot of issues. I am not trying to put down APIs, because obviously we are trying to talk about the benefits of them, but it is an interesting balancing act. Oftentimes, when you are working out how to decouple a monolith, I feel like you actually can go too far with it. It can cause some serious issues. [0:27:57.4] D: I completely agree with that. That is where I wanted to go with the idea of why we say that building a monolith is bad, and the challenges of breaking those monoliths apart later. But you are absolutely right. You are going to introduce the wild chaos that is a network between your services, and you are going to externalize functions, which means that you have to care a lot more about where you store state, because that state is no longer shared across all of the things. It means that you have to be really super careful about how you are modeling that. If you get to the point where this monolith you built is wildly successful and all of its consumers are network-based, you are going to have to come around on that point of contracts. Another thing that we haven't really touched on so much: we all agree that an API for, say, the consumer model is important, and we have talked a little bit about whether private APIs or public APIs make sense. We described one of the wacky things that Kubernetes does, which is that there are no private APIs. It is all totally exposed all the time. I am sure that all of us have seen way more examples of things that do have a private API, mainly because the services are chained. Service A always talks to service B. Service B has an API that may be a private API: you are never going to expose it to your external customers, only to service A or to consumers of that internal API.
One of the other things that we should talk about is, when you are starting to think about these contracts, one of the biggest and most important bits is how you handle the lifecycle of those APIs as they change, right? As I add new features or functionality, or as I deprecate old features and functionality, what are my concerns as they relate to this contract? [0:29:33.5] CC: Tell me and take my money. [0:29:37.6] D: I wish there was a perfect answer, but I am pretty convinced that there are no perfect answers. [0:29:42.0] J: I have spent a lot of time in this space recently. I researched it for a month or so, and honestly, there are no perfect answers for trying to version an API. Every single one of them has horrible potential consequences to it. The approach Kubernetes took is API evolution, where basically all versions of the API have to be backwards compatible, and they all translate to what is an internal type in Kubernetes; everything has to be translatable back to that. This is nice for reasons. It is also very difficult to deal with at times, because if you add things to an API, you can't really ever remove them without a massive amount of deprecation effort: basically monitoring the usage of that API specifically and then somehow deprecating it. It is incredibly challenging. [0:30:31.4] PB: I think it is 1.16 in which they finally turn off a lot of the deprecated APIs that Kubernetes had. A lot of this stuff had been moved, for some number of versions, to different groups; for example, Deployments used to be in extensions and now they are in apps. They have a lot of these things. Some of the older APIs are going to be turned off by default in 1.16, and I am really interested to see how this plays out, you know, from kind of a chaos-level perspective. But yeah, you're right, it is tough.
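The evolution approach described above, where every served API version converts to and from a single internal type, can be sketched in miniature. This is a hypothetical illustration in Python, not Kubernetes' actual conversion machinery: v1 and v2 name a field differently, but both round-trip through one internal representation, so old clients keep working.

```python
# Hypothetical sketch of hub-and-spoke API versioning: every external
# version converts to one internal type, so all versions stay compatible.

def to_internal(version: str, obj: dict) -> dict:
    if version == "v1":
        return {"name": obj["name"], "replicas": obj["count"]}
    if version == "v2":
        return {"name": obj["name"], "replicas": obj["replicas"]}
    raise ValueError(f"unknown version {version}")

def from_internal(version: str, obj: dict) -> dict:
    if version == "v1":
        return {"name": obj["name"], "count": obj["replicas"]}
    if version == "v2":
        return {"name": obj["name"], "replicas": obj["replicas"]}
    raise ValueError(f"unknown version {version}")

# A v1 client writes, a v2 client reads: both see a consistent object.
stored = to_internal("v1", {"name": "web", "count": 3})
assert from_internal("v2", stored) == {"name": "web", "replicas": 3}
```

The cost Josh mentions is visible even here: once a field exists in any served version, the conversion functions must carry it forever, or a deprecation effort begins.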
Having that backwards compatibility definitely means that the contract is still viable for your customers regardless of how old their client side is, but this is kind of a thorny problem, right? You are going to be holding those translations to that stored object for how many generations before you are able to finally get rid of some of those old APIs that you have obviously moved on from? [0:31:19.6] CC: Deprecating an endpoint is not trivial at all, and ideally you would be able to monitor the usage of the endpoint and see, as you intend to deprecate it, whether the usage is going down, and whether there is anything you can do to accelerate that. Which actually made me think of a question I have for you guys, because I don't know the answer to this. Do we have access to endpoint usage, the consumption rate of Kubernetes endpoints, from any of the cloud service providers? It would be nice if we did. [0:31:54.9] D: Yeah, there would be no way for us to get that information, right? The thing about Kubernetes is that it is something you are going to run on your own infrastructure, and there is no phone-home thing like that. [0:32:03.9] CC: Yeah, but the cloud providers could do that and provide a nice service to the community. [0:32:09.5] D: They could. That is a very good point. [0:32:11.3] PB: [inaudible] GKE, it could expose some of the statistics around those API endpoints. [0:32:16.2] J: I think the model right now is they just ping the community and say they are deprecating it, and if a bunch of people scream, they don't. I mean, that is the only way to really know right now. [0:32:27.7] CC: The squeaky wheel gets the grease kind of thing. [0:32:29.4] J: Yeah. [0:32:30.0] D: I mean, that is how it turns out. [0:32:31.4] J: Regarding versioning, taking Kubernetes out of it for a second, I also think this is one of the challenges with microservice architectures, right?
Because now you have the ability to independently deploy a service outside of the whole monolith, and if you happen to break something, a contract you said you wouldn't break, and people just didn't pay attention, or you accidentally broke it without knowing, it can cause a lot of grief in a system. So versioning becomes a new concern, because you are no longer deploying a massive system. You are deploying bits of it, and perhaps versioning them and releasing them at different times. So again, it is that added complexity. [0:33:03.1] CC: And then you have this set of versions talking to that set of versions. Now you have a matrix, and it is very complicated. [0:33:08.7] PB: Yeah, and you do somewhat have a choice. You can have each service independently versioned, or you could go with global versioning, where everything in V1 can talk to everything else in V1. But it's an interesting point around breakage, because tools like gRPC kind of force you into not breaking the API, through just how the framework itself is built, and that's why you see gRPC in a lot of places where you see microservices: it helps keep the system stable. [0:33:33.1] D: Yeah, and I will call back to that one point again, which I think is actually one of Josh's points. If you are going to build multiple services and you are building an API between them, then the communication path might be service A to service B and service B to service A. You are going to build this crazy mesh in which you have to define an API at each of these points to allow for that consumption or that interaction. And one of the big takeaways for me in studying the cloud native ecosystem is that if you can define that API and that declarative state as a central model for all of your services, then you can flip this model on its head, instead of trying to define an API in front of each service.
You can make that service a consumer of a centralized API, and now you have one contract to write and one contract to stand by, and all of those things that are going to do work will pull down from that central API, do the work, and put the results back into that central API, meaning that you are flipping this model on its head. You are no longer blocking until service B can return the result to you. You are saying, "Service B, here is a declarative state that I want you to accomplish, and when you are done accomplishing it, let me know and I will come back for the results," right? And you could let me know in an event stream. You could let me know by updating a status object that I am monitoring. There are lots of different ways for you to let me know that service B is done doing the work, but it really makes you think about the architecture of these distributed systems. It is really one of the big highlights for me personally when I look at the way that Kubernetes was architected, because there are no private APIs. Everything talks to the API server. Everything that is doing work, regardless of what data it is changing or modifying, has to adhere to that central contract. [0:35:18.5] J: And that is an interesting point you brought up: Kubernetes in a way is almost a monolith, in that everything passes through the API server and all the data lives in this central place, but you still have that distributed nature too, with the controllers. It is almost a mix of the patterns in some ways. [0:35:35.8] D: Yeah. Thanks for the discussion, everybody. That was a tremendous talk on contracts and APIs. I hope everybody got some real value out of it. And this is Duffy signing off. I will see you next week. [0:35:44.8] CC: This is great, thank you. [0:35:46.5] J: Cheers, thanks. [0:35:47.8] CC: Bye. [END OF INTERVIEW] [0:35:49.2] ANNOUNCER: Thank you for listening to The Podlets Cloud Native Podcast.
Find us on Twitter https://twitter.com/ThePodlets and on the https://thepodlets.io website, where you will find transcripts and show notes. We'll be back next week. Stay tuned by subscribing. [END]
The “front end” is pretty nebulous. What makes a good front end developer? Its definitions, and so its answers, are all over the place. It’s not just the introduction of new front end frameworks that have changed how we talk about it, but in terms of the discipline of designing websites we have begun to think differently: in components, in services. We can see front end in flux in Ernie Hsiung’s “A fictitious, somewhat farcical conversation between me and the JavaScript programming language,” where Ernie as a front end developer — a successful front ender, look at his resume — imagines having an existential crisis because the front end changed while he was busy being a boss (a good one). Our understanding of the front end as a place no longer holds up to scrutiny. We cannot define it by a specific suite of tools, a kind of user expertise, or even that — if anything — front end development requires a browser. If we talk about the front end in terms of distance from a user, the “front” specifically being the interface between the user and the service provided, then we ought to accept that voice user interfaces (among other examples) challenge the notion that the front end has any tangible component at all. Even here, we are cherry-picking a particular kind of user, one that is at the tail-end of a complex service provision - presumably outside the “provision” circle of the Venn Diagram altogether. We’ll call this person “end user.” But again, using “front end” as a descriptor of that distance from a user — while accepting there are no constraints on technology or medium (like a browser) — should be applicable to the RESTful API that I design for consumption by a variety of user interfaces. The consumer is my end user.
When I design the interface for that user, am I performing “front end development?” What I am trying to illustrate is that when we are thinking about user experience and service design, paradigms like “front end” and “back end” no longer align with an understanding of service clusters, ecosystems, or - frankly - users. These kinds of identifiers are ephemeral and, as such, pose a problem to companies who organize themselves around these concepts. You long-time Metric readers and listeners — “Metricians?” — might find this latter concern familiar. The practice of thinking in services reveals that same ephemeral nature around the product itself: given that the product is just a tool in a larger service provision, it is thus — like a tool — replaceable. Choosing to design organizations around products will shape the way in which said organizations develop, which is probably against the grain of good service design. Hot take, I know. So, practically, as service providers who make products that require engineers, how do we hire after we thought-spiral ourselves away from terms like “front end developer?” I think there is an implied solution. In 2017, Rob Schade at Strategyn suggested a new kind of role called the “Job Manager” in “Product Managers are Obsolete; focus on the Job-to-be-Done.” As an alternative to a Product Manager, the Job Manager focused on the design and development of solutions for a given job-to-be-done. If you need a refresher, a job-to-be-done describes a task where the user has a demonstrable need for the solution you provide. That is, if I need to talk shop and further develop my own thinking about service design, a company like Substack provides a solution for my shop-talking need. The product — the newsletter editor and mailer that Substack provides — is a means to an end, not the end, and not necessarily crucial to my job to be done.
In this example, rather than there being a Product Manager in charge of the newsletter editor, there would be a Job Manager overseeing the entire service Substack provides that connects me to you. These circles overlap, but the lens is very different. I think there is room then for “Job Designers” and “Job Developers.” These are the design workers who, alongside the Job Manager, provide the company’s solution for a specific job to be done. A huge benefit, you argue, for “front end” is that even if the definition is super loose it implies some kind of big umbrella of skills around which people can identify. Maybe that is, regardless of which framework compiles them, something to do with HTML, CSS, and JavaScript. There is nothing about “Job Developer” that implies what that person does. I agree. While I think it’s liberating to disassociate the technology from the service provided, it makes this s**t hard in a different way. Rather, I think there’s room for specialization, in the way that “front end” and “back end” are technically specializations of just “developer.” This specialization is also implied by jobs to be done. A job to be done is composed of jobs to be done. In the job that is “understand the civic role I have in my community,” there is a subsequent job that is “see the agenda for upcoming city council meetings in my email.” There are jobs that imply the interface, so there is a role in the service provision to provide that service to that interface. Make up a title that works. Surely “interface developer” — or, even, “civic interface developer” — has more semantic meaning than “front end.” That’s sort of about the point where I stop caring and let you all figure it out. Titles. Some projects need only one developer, others need one hundred.
Few job titles really hold up to the scrutiny — I’m looking at you, Business Analyst 1 — and “front end” or “back end” won’t solve that. More importantly, a jobs-to-be-done approach to role-making is still fundamentally about spatial relationship to the user. These kinds of roles are user centric - all of them. Whereas “front end” and “back end” might describe distance from a user, all job-to-be-done style roles imply the user in their very description. Each job solution has a userbase, and so fundamental skills like empathy, data-driven decision making, and user advocacy (privacy, accessibility, usability) are part of each job - regardless of where they are in the stack. Liking (❤) this issue of Metric is a super way to brighten my day. It helps signal to the great algorithms in the sky that this writeup is worth a few minutes of your time. Metric is a podcast, too, which includes audio versions of these writeups and other chats. Look for “Metric UX” in your favorite podcatcher. Remember that the user experience is a metric. Michael Schofield
A RESTful API is an application program interface (API) that uses HTTP requests to get, put, post and delete data. In this episode, we check in with VMware partners Ciaran Roche from Coevolve and Jason Gintert from Wan Dynamics on how they are leveraging the REST API with the VMware SD-WAN solution. Support the show (https://www.velocloud.com/sd-wan-resources/podcasts/sd-wan-360)
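The verb-to-operation mapping in that definition can be shown concretely. This is a hypothetical in-memory sketch in Python, not anything from the VMware SD-WAN API: each HTTP method maps onto a create, read, replace, or delete operation on a resource store.

```python
# Hypothetical sketch of the REST verb-to-CRUD mapping described above:
# GET reads, POST creates, PUT replaces, DELETE removes.

store = {}

def handle(method, resource_id, body=None):
    if method == "GET":
        return store.get(resource_id)
    if method in ("POST", "PUT"):
        store[resource_id] = body or {}
        return store[resource_id]
    if method == "DELETE":
        return store.pop(resource_id, None)
    raise ValueError(f"unsupported method {method}")

handle("POST", "edge-1", {"site": "austin"})     # create
handle("PUT", "edge-1", {"site": "austin-2"})    # replace
handle("GET", "edge-1")                          # read
handle("DELETE", "edge-1")                       # remove
```

A real RESTful service adds URLs, status codes, and representations on top, but this verb-to-operation dispatch is the core of the idea.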
And I'm still looking for a job. Two companies are interested in what I have to offer: Volkswagen Puebla and Nearshore Technology. For Volkswagen Puebla, I was asked to create a REST API for transferring money between balance accounts, and the required technology was Python. Listen to the tale of how I managed to create a RESTful API in Python and how I learned Python in 24 hours. YouTube: https://youtu.be/JhfzDPI-5Vo Mixer: JorgeEscobar - Mixer
In this episode I'm joined by Lee Byron, former Facebook employee, who is one of the co-creators of GraphQL. Coming from RESTful API development, I've run into many pain-points that GraphQL works to alleviate, and is one of the reasons why I've become such an advocate for it for modern development. Lee and I spend a lot of time discussing the fundamentals of GraphQL and how to get started with it in development. Not only are we focusing on the how, but we are also focusing on the why, which is very important when evaluating a new technology and methodology. A brief writeup to this episode can be found via https://www.thepolyglotdeveloper.com/2018/08/tpdp-e20-graphql-api-development/
We talked with ajiyoshi and pei about GoCon Spring 2018, the Go language, code generation, Protocol Buffers, JSON, and more. Go Conference 2018 Spring · The part of src/cmd/compile/internal/gc/walk.go that handles the AST for maps · "A Journey in Search of Go-like APIs" (Go Conference 2018 Spring) · github.com/go-chi/chi · gorilla/mux · julienschmidt/httprouter · "Getting Along with Code Generation" @ Go Conference 2018 Spring · swaggo/swag: Automatically generate RESTful API documentation with Swagger 2.0 for Go (slide 115) · swaggo/swag/issues/88: Docs generation loop · Protocol Buffers · "How to Write a protoc Plugin" · "Protocol Buffers, Revisited, and a Story About Tools That Fit Your Hand" · "Dedicated to All Engineers Struggling with Admin-Panel Development: The Birth of Viron" · "Admin panels should be as simple to build as a config file! We tried Viron" - pixiv inside · OpenRTB Integration
Learn about benefits of GraphQL over RESTful API and why GraphQL will kill REST API just like REST killed SOAP 10 years ago.
In this session, hear how Cambia Health Solutions, a not-for-profit total health solutions company, created a self-service data model to convert a large-scale, on-premises batch processing model to a cloud-based, real-time pub-sub and RESTful API model. Learn how Cambia leveraged AWS services like Amazon Aurora, AWS Database Migration Service (AWS DMS), AWS Lambda, and AWS messaging services to create an architecture that provides a reasonable runway for legacy customers to convert from old mode to new mode and, at the same time, offer a fast track for onboarding new customers.
With Goro Fuji as our guest, we talked about Discord, Slack, GraphQL, RESTful APIs, Pixel 2, Kotlin, React Native, and more. Show Notes ISUCON · Fastly Yamagoya Meetup 2017 · fastly #yamagoya2017 - Togetter summary · Discord · Reactiflux is moving to Discord - React Blog · Gitter · "Japanese version of Slack to launch within the year" · Introducing Shared Channels: Where you can work with anyone in Slack · GraphQL | A query language for your API · GraphQL: A data query language · The GitHub GraphQL API · Hypermedia · Swagger · rmosolgo/graphql-ruby: Ruby implementation of GraphQL · "Building a BFF with Node.js + GraphQL" · PromQL | Prometheus · Apollo GraphQL · Caching of GraphQL servers with Fastly / Varnish · JSON API · SSKDs and LSUDs · Kibela · Bloke takes over every .io domain by snapping up crucial name servers · Is using an .ly domain right - or wrong? · Google Pixel 2 · How Google Built the Pixel 2 Camera · "No Pixel 2 release in Japan leaves app developers puzzled" · Pixel 2 and Pixel 2 XL are the first phones to support eSIM for Project Fi users · Latest Chrome Beta Update Drops the Address Bar to the Bottom by Default · Bottom navigation - Components - Material Design · 507SH, Android One · "Android developers without Kotlin skills face the risk of becoming dinosaurs" · Jake Wharton · Microsoft/reactxp · necolas/react-native-web: React Native for Web · If you use Twitter Lite you're now using a web app rendered by React Native for Web · Relay
Chris Chuter: @Chris_Chuter Show Notes: 00:47 - Peeple: What is it? Why? 02:59 - Iterations and User Testing 13:32 - Complexity of Installation 17:26 - Device Integration 22:15 - Setup and Installation 25:35 - Laws and Building Codes 26:39 - Getting Started in this Space 31:29 - Ensuring Quality, Integration Testing, and Deployment Pipelines 33:18 - The Manufacturing Process Resources: If This Then That (IFTTT) Transcript: CHARLES: Hello, everybody and welcome to The Frontside Podcast, Episode 82. My name is Charles Lowell, a developer here at the Frontside and your podcast host-in-training. With me is Elrick Ryan. Hello, Elrick. ELRICK: Hey, hello. CHARLES: And today, we are going to be continuing our series on the Internet of Things, and we have someone on the podcast today who's going to talk to us about the Internet of Things. His name is Chris Chuter and he is the CEO, inventor and founder of Peeple. Hey, Chris. CHRIS: Hey. How is it going? CHARLES: It's going well. Thanks for coming on the program. Peeple, what is it? Why don't you give us a quick overview of the product? Obviously it pertains to IoT. What is it, and how did you become involved with it? Let's delve into that. CHRIS: Yes, sure. Let me give you the short elevator version first, then we can dive deeper. Peeple is caller ID for your front door. The idea is, when you get a phone call and you don't answer the phone, what happens? It goes to your voicemail. You know someone called you. But today, if someone comes to your house, you have no idea that they came unless you're there. This is the central problem that we solved with Peeple. It's a little device, a hardware device, an Internet of Things device that fits over the peephole in your door on the inside of your house. When someone knocks or the door opens, you get a push notification on your phone. You can open up the phone and you can see a live view of your peephole. In a nutshell, Peeple is a smart peephole.
CHARLES: Is it more for the case when you're not home at all, or do you find that people use it for what you would traditionally use a peephole? CHRIS: It depends on the person. Now, my personal use case is for keeping track of wandering kids, and that's actually the inspiration for this invention. I have two boys, and when one of my boys was three years old, he managed to open the door, walk out, go onto the street and walk down to the end of the street. Now, I live in Austin and I live right off the edge of a very busy street. My kid didn't die or anything like that; it's not a really sad story, but a neighbor brought my kid home and it was one of those moments as a parent where you're like, "Oh my God. I'm a terrible parent." But being an inventor and an engineer, I was like, "I'm going to hook something up that just tells me when my door is opened or closed," and it morphed into this invention. We showed it to people at South by Southwest almost three or four years ago. That's when we realized we were on to something that didn't exist. It was just a little camera on the door. CHARLES: Tell me about those first versions. I'm so curious. It sounds like there are a lot of layers of functionality that you've been through, a lot of iterations, so I'm curious about that. What did that zero iteration look like? CHRIS: Version 0 was made in 24 hours. It was a hackathon for... I can't remember the name of it. There was a hackathon group that recently imploded, and we won this hackathon. I'm not sure if the hackathon was specifically for Internet of Things, but we were all making that kind of stuff. I made this little Raspberry Pi demo with a little mini door, and I had talked to my wife; this is how I was able to make this invention to keep track of the kid while I was busy doing other stuff. I talked her into giving me 24 hours to make this one thing. Then me and another guy, David, won this hackathon.
We were like, "We've got to turn this into a real thing," because one of the awards of the hackathon was you go to Silicon Valley, you show this off and you do all this cool stuff with it. We were like, "We've got to actually turn this into something that's presentable." That was Version 0. It was just a little Raspberry Pi. CHARLES: Now, what were you doing to detect the state of the door? CHRIS: That's the crazy thing. The first version of the device had more sensors on it than the final version. The first version had everything. It had a doorbell, it had a knock sensor, it had a motion sensor, it had a speaker that played Paul McCartney's 'Someone's Knockin' At The Door,' and it had an accelerometer. I threw everything in there at first, and half of it worked for the hackathon demo, but it was good enough to win. This is something that I guess I could call wisdom now, but the real thing I learned is you start with everything, and then you narrow it down and get it more tuned, highly focused and more precise as a device. It's like the difference between the iPhone and the Samsung phones: one approach is to throw everything into it, and the iPhone just really specializes in a few things really well. We spent the next three years pulling stuff out. CHARLES: What are some examples of that culling that you're describing, where you're saying, "I'm going to take this out? I'm going to take this out. I'm going to take that out." CHRIS: We got rid of things like the doorbell and some of the other sensors, mainly because it was just a wiring issue, and we wanted to keep track of when the door was opened and closed. It didn't make sense to have the speaker on there at the time, so we really focused more on the accelerometer and the knock sensor for the first version of Peeple. CHARLES: That is not the final version. Is it mostly just the accelerometer? What if someone doesn't knock? I assume there's some sort of detection that goes on with the camera.
CHRIS: That's the next version. That's something that we've been working on right now, what we're going to be delivering. We have delivered our first, I would say Version 1.0, of Peeple devices to our customers. There are a thousand of these or so in the wild, all around the world, and in the next version we have added, and I guess this is my first real announcement of this, a motion detection module. It's not camera-based. It's more or less magic, and it just works through the door. That's the most I'm going to say on it right now, because we're probably the first hardware device that is actually using this technology. ELRICK: That's an excellent pitch. Everyone loves magic. CHRIS: Yes, it's basically magic. It works through the door. ELRICK: As you were going through these iterations, were you doing user testing to see what users wanted? Or did you internally say, "This doesn't make sense. Let's just take this out." CHRIS: Absolutely. That's the second part of this story. After this hackathon happened, we prepared to go on the road show, to go and show it off in Silicon Valley, but in the meantime, this hackathon group, I think it was called AngelHack, imploded. One of their founders made all these disparaging comments about homeless people, and what essentially happened is we lost the award. They said, "We're sorry. We can't give you the award," but we had spent about three months fine-tuning, making something pretty and putting a pitch together. I went in and I pitched at a TechCrunch meetup in Austin and we came in second at that, but during that meetup, I met one of the reporters, who said, "You really need to talk to these guys in San Francisco called Highway1," so I did. We eventually ended up moving to San Francisco. Now, the reason I mention that, to answer your question, is that they understand this idea of user testing, I think, better than a lot of people.
Even though they were focused on working on hardware and getting an IoT device that works out there, what they were drilling into our heads was, "You have to get this in people's homes now. I don't care how bad it is. I don't care if you have to hire people to sit at a peephole and just look through it and pretend like they're the hardware device. You've got to do this and you have to find out what the problems are, what works. I want you to look at your biggest fears for this thing and quash them, and you do that before you put any silicon down," so we did that as best we could. CHARLES: So you did that with the Version 0 and Version 1 devices? CHRIS: Exactly, just the Version 0. I have all these pictures. We put them in about 12 to 20 homes and we had these long extension cords powering this thing because we didn't have the batteries figured out. We had these huge lag problems. It would take like 30 seconds to a minute before something would happen. We had all these issues, but in the end, people were still like, "It had these issues. You couldn't do this. But the fact that I had a door log -- a door diary, as we're calling it now -- that's something I never had before." That's where your secret sauce is, so we ran with that. CHARLES: Yeah. That's the kind of thing that never even occurs to you. CHRIS: Exactly. In the app, or at least the early versions of the app, you have these views like a calendar that are like, "Okay, I got 10 visits yesterday. I got 20 visits today. No one came to visit me today. I'm so sad," but I have a calendar of, I think it was May of last year, when I got visited by three or four magazine salesmen in one week, so you could correlate that with, "Did we have any break-ins?" or something like that. CHARLES: Yeah, it would be interesting to be able to share that data with your neighborhood or somehow coordinate that.
One of the things I'm curious about too is this user testing you were talking about, doing the wiring and the installation. It's a conversation that always comes up when you're talking about custom hardware, because there's always the drive to be small, to have a small form factor, and then you have challenges of power, like how do you power this device? How cumbersome is the installation onto someone's door? CHRIS: Yeah, we had it all. That's a big difference, I think, between San Francisco or Silicon Valley and other towns: there's this acceptance and this readiness to participate in the tech scene. We did a call out for volunteers and we had no problems finding them. They didn't mind us coming to their house and hooking up these big, bulky things and just being real intrusive. The fact that we found these people -- they were the key to this early stage of, "Do you become a product or do you not?" We were only there for four months, but by the end of our time there, there was this legitimate, tangible feeling of: we're not a prototype anymore. We're a product -- even though we didn't have a finished product yet. It was just prettier, but we could see the light at the end of the tunnel. I don't think that would have happened had we not gone through this very painful experience with all these poor people that we inflicted our device on. CHARLES: This actually is fascinating because obviously, you're back in Austin now, and I've never heard of programs like that -- like, sign up to have someone come test some alpha-stage prototype in your home. That sounds crazy and yet, it sounds like they were just coming out of the woodwork. CHRIS: In San Francisco, it's not a problem. If I put the call out now, I'd probably have to really be like, "Here's an Amazon gift card." I'd have to start doing a little bit of bribery. ELRICK: I think I would sign up just to see the cool tech. CHRIS: Yeah, and those people exist.
I think we just don't have the means to really find them. That infrastructure already exists in Silicon Valley -- you just go down to Starbucks. CHARLES: There ought to be some sort of meetup for people who want to experiment with very early stage IoT devices here in Austin. Maybe we'll have to look into it. If that doesn't exist, I would love being a guinea pig. I actually think there is an untapped willingness here but there's just not -- CHRIS: I think you need a critical mass of hardware people and hardware devices that are ready to be put on doors or put in houses. There's definitely some in there. I have a lot of friends and there are hardware meetups that we go to, but this stuff takes so long and it's so hard -- hardware is hard. There's that small window of, "We've got this little idea of a water sprinkler. Do you think anyone wants to try it out?" or something like that, and then the moment's gone. Then six months later, there's another one. CHARLES: Yeah. I wonder if there's a way to really decrease that iteration cycle so that you can get feedback more quickly. I guess the problem is when you need a physical device, you just need a physical device. CHRIS: We're talking about the Maker Movement and the MakerClub. If you're not part of those, these people are hard to find. People that go to Maker Faires -- those are the people you're looking for. CHARLES: Right. Now, transitioning: ultimately your target customer base is not makers, not people who are willing to put up with wires and cabling and people doing protracted installation. What does the 1.0 product look like? What immediately jumps to mind is this thing sounds like it's going to consume a lot of power. How do you get power to it, and what are the challenges and tradeoffs you have to make to try to get that power consumption down or get the installation complexity down? How complex is it today to install?
CHRIS: I guess I'll toot my own horn a little bit, but I think we have one of the easiest IoT devices on the planet to install. You possibly don't even need tools. You can use your fingers. But the biggest challenge for any IoT device is getting that home network connection. There have been a few technologies through the years that have tried to fix this problem, basically things like self-pairing -- think of how Bluetooth can sometimes be really cumbersome. Now imagine that with Wi-Fi: it's the same thing, but now you've got a password you've got to throw in there. That's really the only real hiccup with the installation of our device, and we tried a few things. We went through about three different Wi-Fi chips before we settled on what we're using now. The first Wi-Fi chip was a TI one, which offered this nice pairing capability but just didn't work half the time. Then we switched to a Broadcom chip, which was really solid and stable but turned out to be the most expensive component in the whole device, so we had to get rid of that. The Wi-Fi issue was something we had to solve early because it also goes toward your power consumption. We have a camera and a Wi-Fi chip, and both of those draw 140 to 200 milliamps of juice when they're on. We had to be really smart about when this thing was going to be on, and that's essentially when we went in parallel with the knock accelerometer. This device stays asleep most of the time and that's how we get many months of battery life out of it. We put a rechargeable battery inside; it only turns on when it needs to and it's just hanging around waiting for an event the rest of the time. Those were the things we were solving to get to Version 1. CHARLES: Now, it's waiting for some event, but in order to receive the event, doesn't the accelerometer need to be on? Or is there some motion detector that --? CHRIS: That's a solved problem.
The good news was that accelerometers are extremely low power, in the nanoamp or picoamp range, but that's also another reason why the motion detection was going to be a hard problem: unless you're using what's called a PIR, that is not a low-power solution. CHARLES: Acronym alert. What is a PIR? CHRIS: It's passive infrared detection. That's how almost all motion detection cameras work. They have one hole for the camera and another hole for the PIR. The problem with these is they don't work well in sunlight and outdoor light and things like that, which is one of our use cases, so we were kind of stuck. That's why we've recently come up with this new motion solution that doesn't rely on that technology -- the magic solution. CHARLES: All right. When are we going to find out about the magic solution? CHRIS: As soon as I ship this next version, because it is being used in a few products but it's not really stateside yet and I want to save my thunder, but it's something that I think is really cool. It really is magic. It's just amazing to me that it works. CHARLES: Well, I'm eager to see it. You were talking about Wi-Fi being one of the biggest challenges. That's a perfect segue, because the connection to the network is something we're always curious about. Discovering a new and interesting device is always a pleasure, and then the next thought that follows almost immediately after is: how can I integrate this with other strange and wonderful devices to make something even more wonderful? A question we ask everybody is: have you thought about how this might be a participant in an ecosystem? If there were other devices around the home, how would they even talk to the Peeple? How might it offer information to someone looking to, maybe, do some custom integration in their home? CHRIS: That's a lot of questions in one. Essentially, there's two ways of looking at it.
You can look at it from your customer's perspective: what kind of customer do I think is going to have this or is going to use this the most? Back when we came up with this, there were a lot of do-it-yourself types and the If This Then That protocol was out there, but we really wanted to focus on something that was incredibly easy to use and didn't require you to program anything. I was really frustrated with the whole idea of the Internet of Things because it almost implied that you had to be a programmer to use it. I didn't like that at the time. I've since come around to it because there's all these great toolkits out there. We initially looked at integrating with HomeKit. We thought it'd be perfect, but what a lot of consumers don't realize is early HomeKit -- I don't believe it does this anymore -- made you modify your hardware to put in special Apple hardware. When you're making a device, it is so hard just to get the hardware down. It's so expensive. To add anything or put anything else in there is a huge friction point. It's really something that small startups just can't afford to do. A big Nest or a company like that has no problem, but when you're making one device, this is a big deal, so we weren't able to really leverage something like HomeKit for an API. But we do have our own cloud-based API. It's a RESTful API, but it's just not documented and put out in a way where we want to have people programming against it. But the good news is we did leverage several APIs when we were making things like the app and doing things like push notifications. Now, it turns out that a lot of the services we used are now integrating with things like Alexa and other device protocols, so we essentially get those for free. This whole ecosystem is forming around us. The most important thing is to get your device out there, because you have a vision for what the device will be used for.
But then your customers tell you what the device is really useful for, and that's when the real work starts. CHARLES: Right. I guess it's true you have your first line of customers, and the use case I was thinking of is me being a developer. I'm thinking what products could be built using this as a component, so to speak. Have you given any thought to that, or has anyone approached you to say, "This is amazing. I'd like to build this meta product that integrates it," or is it kind of early days? CHRIS: Early on, that was the approach of the Internet of Things, and it moved away from that in my experience. Early on, it was all about building blocks. You've got to understand, these were the old Zigbee and Z-Wave programmers, and that was the whole concept. Then it got turned on its head by, "I really have this problem that I need to solve and I don't want to have to make a bunch of building blocks to do it." For attacking it from the other side, like you're saying, building up from pieces, I really recommend you talk to the Twine guys -- Supermechanical -- they're here in Austin as well. A year or so before we came out with Peeple, they put out this device which was exactly what you're talking about: an Internet of Things type hub where you just add in all the pieces and then you integrate with everything. They can give you a better story of how that lifeline goes. CHARLES: Yeah, because it's always something you think about, because you've got all these wonderful things. CHRIS: Yeah, some would say, an Internet of Things. CHARLES: Yup, or at least a floor plan. ELRICK: When someone gets a Peeple device, what is the full installation story and setup? What is the walkthrough for that? CHRIS: We have a little video of that. What you essentially do for Peeple, when you're installing it on the peephole in your door, is you unscrew the peephole. Now, the way peepholes work is they need to handle doors of variable width, depending on where you live.
There's no real standard. All peepholes work by having a shaft that you screw onto the other side, so it's basically two pieces. Now, one of those shafts holds this bracket that we include in the package. You screw that onto your door, with the peephole holding it to the door, then you turn on the Peeple device and connect it to your home Wi-Fi, and then you're ready to go. That's it. CHARLES: That's the hardware side of the onboarding. What about the software? How do I go and look at my door diary? CHRIS: You do this during the installation. You go to My.Peeple.io and there's a little button to add your Peeple device. UI-wise, it's one user interface across all the platforms, whether you're on Android, iPhone or a browser. You just go to that webpage and associate your account with your Peeple devices. You will have to log in. You can log in with Gmail, Facebook or just a regular email. Then you add your device, and any time you go back to that page, it will show you only the videos from your device, so you have a list of all the events from your Peeple device on that page or in that app. CHARLES: That is interesting. I'm looking at the videos right now online. Although my problem actually is I've got a glass door. CHRIS: Yes, we've got you covered as well. CHARLES: You do? CHRIS: Yes. The reason you have a glass door or a peephole -- and many people don't realize this -- is because it's required by law. If you ever plan to rent out your house as a multi-family unit, you have to have a peephole or a window surface where people can look out. Once we figured that out, that's when we realized we were onto something. The first versions of Peeple came with these little adhesive pads that we called gecko skin, and this is where we learned a valuable lesson: no matter how sticky you make your stickers, they're not sticky enough.
We included three of these little tabs in every device to put on a glass door, if you had glass, so the Peeple device would work the same way for a glass door, except that you would use a sticker instead of unscrewing the peephole. The only problem with the stickers was they were not sticky enough. If there was condensation or a weather event or something like that, these things would fall off, so we made a modification. We found better stickers and I mailed those out to all the people. But this is why hardware is hard. You're going to make these mistakes. In all our testing, we didn't find this, but of course, once you have a thousand testers, you find a little more. ELRICK: That's interesting that you brought up the laws about the peephole. Were there any particular building codes or anything of that nature that you had to be concerned about when people installed Peeple on their doors, that you had to figure out before shipping them out? CHRIS: Not really. The Texas property code is more geared toward making landlords do the right thing. In case you're wondering, I think it's Texas Property Code 94-152 that covers this. There must be an externally viewable portion at the front entryway for all multi-family units. Now, this is just the Texas law. We had to look this up in a few other states, and it turns out there's one in San Francisco, there's one in Virginia, but they're all different. But so far, we haven't had any issues with any property codes or building code issues. CHARLES: This has been an almost four-year odyssey that you've been on, right? CHRIS: Right. CHARLES: You've been involved in this scene and working with hardware probably for a long time even before that, it sounds like. For people who are just getting into it -- because I feel like there's this wave cresting now, where these types of startups and these types of side projects and hobby projects are just starting to enter the mainstream.
Do you have any advice for anybody who would want to get into this space? CHRIS: Well, that's a great question. Of course. Now, contrary to what you just stated, I didn't have much of a hardware background. I'm a software guy. I can personally attest to the pains of becoming a hardware guy. Now, the irony of this is I do have a master's degree in electronics engineering, but electronics engineering is so huge. It's such a big field that you can spend your entire career not doing much hardware. But I always had the ability to go back and build some circuits. I would say the number one thing, if you're not a hardware guy, is go to some of these meetups or get involved in a community and find yourself one -- someone who has experience doing hardware -- because coming from the software world, you're used to this flexibility of changing a few lines of code and everything changing. Now, when you get a hardware guy onboard -- our hardware guy's name is Craig -- when he comes to work -- CHARLES: Or gal. CHRIS: Yeah, or gal, of course. When they look at the same problems you're looking at, they're like, "Hold on a second. Let's step back. Let's test this." There's this quantitative slowing down which you need to have with hardware, because once you build a PCB, a circuit board, you are stuck with that board for the next month or so -- it takes a while to make another one -- so get it right before you jump around and do all these changes. My first advice would be: get help. There's no shame in going out there, and you might be surprised. There are so many people out there that want to join in. If you have a good idea, there are plenty of people who want to contribute. CHARLES: Would you say that there are communities out there like the software communities, where you have meetups? Some of the software meetups are just fantastic, where people are so welcoming and they're just so excited to share the information that they themselves are so excited about.
CHRIS: Yes, and there's the same thing on the hardware side. You should definitely go to a few hardware meetups; there are several in Austin. There's at least one every week and it's a great chance for people to tell these kinds of stories. This is a maker-type community, so they welcome these ideas because that's what fuels their enthusiasm. Every time someone is doing something new, they want to hear it. That's the change that's happened this decade: you can go out and buy a few modules and make your little device. Then there's the next big step of going from prototype to product, but you can get all those kinks out without having to make your own printed circuit boards, without having to have a huge firmware background. Just knowing a little bit of tech and a Raspberry Pi, you can test out your inventions at this early stage without having to invest all this money and these other things. There's never been a better time to do it. What I would leave your listeners with is: if you've got something swirling around in your head, get a Pi, get a little Arduino and do it. There's nothing stopping you. CHARLES: Yeah, it's shocking how affordable they are. CHRIS: I didn't even touch on China, by the way, but that's the next step. CHARLES: That's the great thought that I want to leave everybody with, but I actually have more questions, so we won't leave everybody with that. We'll keep on going because I want to talk about China and I want to talk about something that was in there. You've touched on it a couple of times when telling your story: how you go from this just-do-it, get it out there, get it into people's homes, just get the Version 0 out, just buy an Arduino, slap together something terrible that is at least one millionth of the dream that you have, and you've taken your first step on that odyssey. That's a very common story in software.
The way that we develop software is we have these agile methodologies and these techniques to reinforce them: testing, continuous integration, continuous deployment. How does that play out? A fascinating subject to me personally is how you do that in the context of hardware. A question that I love to ask is how do you do things like ensure quality? How do you do integration testing? How do you have a deployment pipeline if you've got these Peeple devices out there on thousands of doors globally? How do you push out a bug fix or a feature update? What does the automation around that look like? CHRIS: Over-the-air updates are your friend. If you're going to make a hardware device, I recommend making a Wi-Fi enabled device, because then your firmware is not locked and you can do over-the-air updates. That has been a lifesaver. We've done maybe a dozen software updates to our device to date, sometimes little changes, sometimes big changes. But what happens is any time the Peeple device wakes up, it says, "Hello, server," and the server says, "I've got an update. First, let me give you all these images -- here's the code." The devices are constantly upgradable, just like you'd expect with software. Now, with some of these Bluetooth devices, you can't do that. You've got to go out the door ready to go with no issues. It's a friction point to tell someone, "Your headphones can't work now. You need to plug them into a computer. You need to download this firmware upgrade. You need to update the firmware by hand." That just isn't going to fly in today's consumer market, so I would recommend, if you can, make your device a Wi-Fi device -- get a Wi-Fi module in there -- and that opens up the world to you on doing a lot of these updates, to answer the last part of your question. CHARLES: You mentioned China. Is that touching on the manufacturing process, or just the market over there, or --? CHRIS: Yeah, be ready to fully commit.
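The over-the-air check-in Chris describes -- device wakes up, greets the server, pulls down any pending firmware -- can be sketched roughly like this. The version fields, image format, and checksum step here are illustrative assumptions, not Peeple's actual protocol:

```python
import hashlib

# Hypothetical over-the-air (OTA) update check-in, sketched as plain
# functions. A real device would do this over HTTPS each time it wakes.

FIRMWARE_VERSION = "1.0.3"  # version currently flashed on the device


def server_check_in(device_version, server_latest, server_image):
    """Server side: compare versions and hand back an image if newer."""
    if device_version == server_latest:
        return None  # nothing to do; device goes back to sleep
    return {
        "version": server_latest,
        "image": server_image,
        "sha256": hashlib.sha256(server_image).hexdigest(),
    }


def apply_update(update):
    """Device side: verify the image before 'flashing' it."""
    digest = hashlib.sha256(update["image"]).hexdigest()
    if digest != update["sha256"]:
        raise ValueError("corrupt image, refusing to flash")
    return update["version"]  # pretend we flashed and rebooted


# Device wakes up, checks in, applies whatever the server hands back.
update = server_check_in(FIRMWARE_VERSION, "1.1.0", b"new-firmware-blob")
new_version = apply_update(update) if update else FIRMWARE_VERSION
print(new_version)  # -> 1.1.0
```

The checksum is the important design choice: a battery-powered device on flaky home Wi-Fi must assume downloads can be truncated, so it verifies before it ever writes to flash.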
I've been to China maybe four times now. I have a 10-year visa. It took a while to find the right partner, and you've got to be boots on the ground in the factory for a couple of weeks just getting the whole line up. It's a whole other product when you're at the manufacturing stage. You're making all these little test things; they've got to hook up the boards to certain devices, they've got to put the firmware on them, they've got to do these things. It's a whole other job. That's why, when you do these Kickstarters, they say, "We're going to be out in three months," and then six months later, "We're still working on it." I have a lot of empathy for this because I've lived it. You think, "I've got everything done. My hardware works. All I have to do is team up with someone to just make it, and then we'll ship it." There's a whole other level to just the manufacturing piece, and you can't really learn it ahead of time. There are no real textbooks to learn this from because every factory is different. Our factory is just north of Shenzhen. We talked to some US manufacturers, but they just weren't competitive enough to be in the discussion, so you pretty much have to go overseas, and then you have to sit down with them. Just a little bit of communication difficulty can bring down a whole manufacturing line, so it's very important that you're very hands-on and you see your product all the way to package. ELRICK: That's interesting. I know of it, but I never really thought about it because I was never really in that position. What are some of the higher-level things that you should look out for when evaluating a manufacturing partner? CHRIS: We talked to about half a dozen before we decided on our manufacturing partner. The big one for me was cultural fit.
I talked to some of the big ones, like the one that makes the Apple phones. We talked to them for a while, and I just found that I would say, "We would like to do this," or, "We need this," and then the next week they'd be asking a question, "What about this?" and I'm like, "Oh, you didn't understand what I was really asking," so you would lose weeks just to tiny misunderstandings. I found a manufacturing partner that has a subsidiary here in the US, and my main contact grew up in the United States but also goes to China every other week. Having that kind of intermediary made everything so much easier. Communication was never an issue. I was able to get things done almost twice as quickly as with the other manufacturers I was talking to. In the end, they also came up with a great price, so it turned out to be a win-win. I would recommend talking to the bigger manufacturers, but spend a lot of time on the smaller ones, really figuring out whether the communication is up to snuff to make your product. It's huge. CHARLES: What a story. I'm really glad that we got to have you on the podcast, Chris, because you have a story that starts from literally slapping a Raspberry Pi and an accelerometer and a speaker and apparently a bunch of other things on your front door with an extension cord, and walking a continuous path to where you're flying back and forth between China and Austin to inspect and ensure your assembly line and make a real product. It demonstrates that it can be done by the fact that you have done it, so I think it serves as an inspirational case for a lot of people out there who might think this is something they want to do, or think that they're capable of it. Thank you so much for coming and talking about Peeple. Everybody, you can go ahead and check it out. It's Peeple.io, right? CHRIS: That's correct. CHARLES: All right. Also, is there anything else that you'd like to announce other than the magic, which you're going to keep a lid on?
CHRIS: Yes. I know I've appropriately teased everyone about that, but you can go to our website. If you go to Shop.Peeple.io, we're taking preorders for this next magical version -- the Peeple Version 1.1, I guess I'll call it. I would like to add, just before we go, that if you're going to endeavor to do something like this, make sure you have a very understanding family, because I couldn't have done it without a wife and kids that understood my craziness and allowed me to have just a complete mess of our house for, I guess, three years now. CHARLES: Thanks again, and thanks everybody for listening to this episode. You can get in touch with us on Twitter. We're at @TheFrontside and you can always find us on the web at Frontside.io. There's a contact form and we'd love to hear from you, for any reason whatsoever. Thanks, everybody, and we'll talk to you next week.
Kubernetes Joe Beda @jbeda | Heptio | eightypercent.net Show Notes: 00:51 - What is Kubernetes? Why does it exist? 07:32 - Kubernetes Cluster; Cluster Autoscaling 11:43 - Application Abstraction 14:44 - Services That Implement Kubernetes 16:08 - Starting Heptio 17:58 - Kubernetes vs Services Like Cloud Foundry and OpenShift 22:39 - Getting Started with Kubernetes 27:37 - Working on the Original Internet Explorer Team Resources: Google Compute Engine Google Container Engine Minikube Kubernetes: Up and Running: Dive into the Future of Infrastructure by Kelsey Hightower, Brendan Burns, and Joe Beda Joe Beda: Kubecon Berlin Keynote: Scaling Kubernetes: How do we grow the Kubernetes user base by 10x? Wordpress with Helm Sock Shop: A Microservices Demo Application Kelsey Hightower Keynote: Kubernetes Federation Joe Beda: Kubernetes 101 AWS Quick Start for Kubernetes by Heptio Open Source Bridge: Enter the coupon code PODCAST to get $50 off a ticket! The conference will be held June 20-23, 2017 at The Eliot Center in downtown Portland, Oregon. Transcript: CHARLES: Hello everybody and welcome to The Frontside Podcast, Episode 70. With me is Elrick Ryan. ELRICK: Hey, what's going on? CHARLES: We're going to get started with our guest here who many of you may have heard of before. You probably heard of the technology that he created or was a key part of creating, a self-described medium deal. [Laughter] JOE: Thanks for having me on. I really appreciate it. CHARLES: Joe, here at The Frontside most of what we do is UI-related, completely frontend but obviously, the frontend is built on backend technology and we need to be running things that serve our clients. Kubernetes is something that I think I started hearing about, I don't know maybe a year ago. 
All of a sudden, it just started popping up in my Twitter feed and I was like, "Hmm, that's a weird word," and then people started talking more and more about it. It moved from something that was behind me into something that was to the side, and now it's edging into our peripheral vision more and more as more and more people adopt it and build things on top of it. I'm really excited to have you here on the show to just talk about it. I guess we should start by saying: what is the reason for its existence? What are the unique set of problems that you were encountering, or noticed that everybody was encountering, that caused you to want to create this? JOE: That's a really good setup. Just by way of context, I spent about 10 years at Google. I learned how to do software on the server at Google. Before that, I was at Microsoft working on Internet Explorer and Windows Presentation Foundation, which maybe some of your listeners have actually had to use. I learned how to write software for the server at Google, so my experience in terms of what it takes to build and deploy software was really warped by that. It really doesn't match what pretty much anybody else in the industry does, or at least did. As my career progressed, I ended up starting this project called Google Compute Engine, which is Google's virtual machine as a service, analogous to, say, EC2. Then that became more and more of a priority for the company. There was this idea that we wanted internal Google developers to have a shared experience with external users. Internally, Google hardly did anything with virtual machines. Everything was with containers, and Google had built up some really sophisticated systems to be able to manage containers across very large clusters of computers.
For Google developers, the interface to the world of production -- how you actually launch stuff and monitor and maintain it -- was through this toolset, Borg, and all the fellow travelers that come along with it inside of Google. Nobody really managed machines using traditional configuration management tools like Puppet or Chef or anything like that. It's a completely different experience. We built Compute Engine, GCE, and then I had a new boss because of an executive shuffle, and he spun up a VM -- and he'd been at Google for a while. His reaction to the thing was like, "Now what?" He was sitting there at the root prompt going, "I don't know what to do now." It turns out that inside of Google that was actually a common thing. It just felt incredibly primitive to have a raw VM that you could SSH into, because there's so much to be done above that to get to something that you're comfortable building a production-grade service on top of. The choice, as Google got more and more serious about cloud, was to either have everybody inside of Google start using raw VMs and live the life that everybody outside of Google's living, or try and bring the experience around Borg -- this idea of very dynamic, container-centric, scheduled-cluster thinking -- outside of Google. Borg was entangled enough with the rest of Google's systems that porting it directly and externalizing it directly wasn't super practical. Me and a couple of other folks, Brendan Burns and Craig McLuckie, pitched this crazy idea of starting a new open source project that borrowed a lot of the ideas from Borg but really melded them with a lot of the needs of folks outside of Google, because again, Google is a bit of a special case in so many ways.
The core problem that we're solving here is: how do you move the idea of deploying software away from being something that's based on these physical concepts, like virtual machines, where the number of problems you have to solve to actually get that thing up and running is pretty large? How do we move that such that you have a higher, more logical set of abstractions that you're dealing with? Instead of worrying about what kernel you're running on, instead of worrying about individual nodes and what happens if a node goes down, you can instead just say, "Make sure this thing is running," and the system will just do its best to make sure that things are running, and then you can also do interesting things like make sure 10 of these things are running, which at Google scale ends up being important. CHARLES: When you say a thing, you're talking about like a database server or API server or --? JOE: Yeah, any process that you could want to be running. Exactly. The abstraction that you think about when you're deploying stuff into the cloud moves from a virtual machine to a process. When I say process, I mean a process plus all the things that it needs, so that ends up being a container or a Docker image or something along those lines. Now, the way that Google does it internally is slightly different from how it's done with Docker, but you can squint at these things and see a lot of parallels there. When Docker first came out, it was really good. I think people look for three things out of Docker and containers. The first one is that they want a packaged artifact, something that I can create, run on my laptop, run in a data center, and it's mostly the same thing running in both places, and that's an incredibly useful thing. On your Mac you have a .app -- it's really a directory, but the Finder treats it as a single thing you can just drag around and the thing runs. Containers are that for the server. 
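For readers following along, the "make sure this thing is running" idea Joe describes is a reconciliation loop: compare a declared desired state against what is actually observed, and act on the difference. Here is a toy sketch of that pattern in Python -- purely illustrative, not a Kubernetes API:

```python
# Toy reconciliation loop: the controller pattern described above.
# Desired state says how many replicas we want; observed state is
# whatever is actually running. Each pass closes the gap.

def reconcile(desired_replicas, running):
    """Return the running set adjusted toward the desired count."""
    running = [r for r in running if r["healthy"]]  # drop failed replicas
    while len(running) < desired_replicas:
        running.append({"healthy": True})           # start a replacement
    return running[:desired_replicas]               # scale down if over

# A node "goes down": two of ten replicas fail.
state = [{"healthy": True}] * 8 + [{"healthy": False}] * 2
state = reconcile(10, state)
print(len(state))  # back to 10 healthy replicas
```

Real controllers run this loop continuously against the cluster's actual state, which is why "make sure 10 of these are running" keeps holding even as nodes come and go.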
They just have this thing where you can say, run this thing on the server, and you're pretty sure that it's going to run. That's a huge step forward, and I think that's where most folks really see the value with respect to Docker. Another thing folks look at with container technology is a level of efficiency, being able to pack a lot of stuff onto a little bit of hardware. That was the main driver for Google. Google has so many computers that if you improve utilization by 1%, that ends up being real money. Then the last thing is, I think a lot of folks look at this as a security boundary, and I think there are some real nuanced conversations to have around that. The goal is to take that logical infrastructure and make it such that, instead of talking about raw VMs, you're actually talking about containers and processes and how these things relate to each other. Yet you still have the flexibility of a toolbox that you get with an infrastructure-level system, versus if you look at something like Heroku or App Engine or these other platforms as a service. Those things are relatively fixed-function in terms of the architectures that you can build. I think the container cluster stuff that you see with things like Kubernetes is a nice middle ground between raw VMs and a very, very opinionated platform-as-a-service type of thing. It ends up being a building block for building more specialized experiences. There's a lot to digest there so I apologize. CHARLES: Yeah, there's a lot to digest there but we can jump right into digesting it. You were talking about the different abstractions where you have your hardware, your virtual machine and the containers that are running on top of that virtual machine, and then -- I think I'm all the way up there -- you said Kubernetes cluster. What is the anatomy of a Kubernetes cluster, what does that entail and what can you do with it? 
JOE: When folks talk about Kubernetes, I think there are two different audiences and it's important to talk about the experience from each audience. There's the audience from the point of view of what it takes to actually run a cluster -- this is the cluster operator audience -- then there's the audience in terms of what it takes to use a cluster. Assuming that somebody else is running a cluster for me, what does it look like for me to go ahead and use this thing? This is really different from a lot of dev ops tools, which really mix these things together. We've tried to create a clean split here. I'm going to skip past what it means to launch and run a Kubernetes cluster because it turns out that over time, this is going to be something that you can just have somebody else do for you. It's like running your own MySQL database versus using RDS in Amazon. At some point, you're going to be like, "You know what, that's a pain in the butt. I want to make that somebody else's problem." When it comes to using the cluster, pretty much what it comes down to is that you can talk to the cluster. There's an API to a cluster, and that API is a spiritual cousin to something like the EC2 API. You can talk to this API -- it's a RESTful API -- and you can say, "Make sure that you have 10 of these container images running," and then Kubernetes will make sure that ten of those things are running. If a node goes down, it'll start another one up and it will maintain that. That's the first piece of the puzzle. That creates a very dynamic environment where you can actually program these things coming and going, scaling up and down. The next piece of the puzzle that really, really starts to be necessary then is that if you have things moving around, you need a way to find them. There are built-in ideas of defining what a service is and then doing service discovery. Service discovery is a fancy name for naming. 
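Concretely, the "make sure you have 10 of these container images running" request is expressed declaratively. In current Kubernetes API versions (which postdate this conversation) it looks roughly like the Deployment manifest below; the name and image are illustrative, not from the episode:

```yaml
# Sketch of a declarative "run 10 of these" request (illustrative names).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  replicas: 10            # Kubernetes keeps 10 copies running
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
      - name: my-api
        image: example/my-api:1.0   # hypothetical container image
```

Posting this to the cluster's RESTful API (typically via `kubectl apply -f`) is the "talk to the cluster" step; the control plane then maintains the replica count, restarting copies when nodes go down.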
It's like: I have a name for something and I want to resolve that to an IP address so that I can talk to it. Traditionally we use DNS. DNS is problematic in super dynamic environments, so a lot of folks, as they build backend systems within the data center, start moving past DNS to something that's a lot more dynamic and purpose-built for that. But you can think of it in your mind as a fancy, super-fast DNS. CHARLES: The cluster is itself something that's abstract, so I can change its state and configure it and say, "I want 10 instances of Postgres running," or, "I want between five and 15," and it will handle all of that for you. How do you then make it smart so that you can react to load? For example, all of a sudden this thing is handling more load so I need to... What's the word I'm looking for, I need to handle -- JOE: Autoscale? CHARLES: Yeah, autoscale. Are there primitives for that? JOE: Exactly. Kubernetes itself was meant to be a toolbox that you can build on top of. There are some common community-built primitives for doing -- excuse the nomenclature here because there's a lot of it in Kubernetes and I can define it -- Horizontal Pod Autoscaling. It's this idea that you have a set of pods and you want to tune the number of replicas of that pod based on load. That's something that's built in. But now maybe your cluster doesn't have enough nodes as load goes up and down, so there's this idea of cluster autoscaling, where I want to add more capacity that I'm actually launching these things into. Fundamentally, Kubernetes is built on top of virtual machines, so at the base there's a bunch of virtual or physical machine hardware that's running, and then it's the idea of how do I schedule stuff into that and pack things into that cluster. There's this idea of scaling the cluster, but then also scaling workloads running on top of the cluster. 
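The Horizontal Pod Autoscaler Joe names is itself just another API object. A minimal sketch, matching Charles' "between five and 15" example (the `autoscaling/v1` shape shown here is today's API, and the target name is illustrative):

```yaml
# Sketch: scale a workload between 5 and 15 replicas based on CPU load.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-api
spec:
  scaleTargetRef:           # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: my-api
  minReplicas: 5
  maxReplicas: 15
  targetCPUUtilizationPercentage: 70   # add replicas above ~70% CPU
```

Cluster autoscaling is the separate, lower layer: adding or removing the underlying nodes that these replicas are scheduled onto.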
If you find that some of these algorithms or methods -- for how you want to scale things, when you want to launch things, how you want to hook them up -- don't work for you, the Kubernetes system itself is programmable, so you can build your own algorithms for how you want to launch and control things. It's really built from the get-go to be an extensible system. CHARLES: One question that keeps coming up as I hear you describing these things is: the Kubernetes cluster, then, is not application-oriented, so you could have multiple applications running on a single cluster? JOE: Very much so. CHARLES: How do you then layer your application abstraction on top of this cluster abstraction? JOE: An application is made up of a bunch of running bits, whether it be a database or whatever. I think as we move towards microservices, it's not just going to be one set of code. It can be a bunch of sets of code that are working together, or a bunch of servers that are working together. There are these ideas like: I want to run 10 of these things, I want to run five of these things, I want to run three of these things, and then I want them to be able to find each other, and then I want to take this thing and expose it out to the internet through a load balancer on Amazon, for example. Kubernetes can help to set up all those pieces. It turns out that Kubernetes doesn't have an idea of an application. There is actually no object inside Kubernetes called an application. There is this idea of running services and exposing services, and if you bring a bunch of services together, that ends up being an application. But in a modern world, you actually have services that can play double duty across applications. One of the things that I think is exciting about Kubernetes is that it can grow with you as you move from a single application to something that really becomes a service mesh as your application, your company, grows. 
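Both the "find each other" piece and the "expose it through a load balancer" piece are modeled as Services. A sketch of each, with illustrative names (current API shapes, not from the episode):

```yaml
# Internal discovery: other pods reach this at the DNS-style name "backend-api".
apiVersion: v1
kind: Service
metadata:
  name: backend-api
spec:
  selector:
    app: backend-api    # routes to pods carrying this label
  ports:
  - port: 80
    targetPort: 8080
---
# External exposure: asks the cloud provider (e.g. Amazon) for a load balancer.
apiVersion: v1
kind: Service
metadata:
  name: storefront
spec:
  type: LoadBalancer
  selector:
    app: storefront
  ports:
  - port: 80
    targetPort: 8080
```

Because a Service routes by label selector rather than by fixed IPs, pods can come and go -- the "fancy, super-fast DNS" keeps resolving the name to whatever healthy replicas exist right now.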
Imagine that you have some sort of app and then you have your customer service portal for your internal employees. You can have those both being frontend applications, both running on a Kubernetes cluster, talking to a common backend with a hidden API that you don't expose to customers but that is exposed to both of those frontends, and then that API may talk to a database. Then as you understand your problems, you can actually spawn off different microservices that can be managed separately by different teams. Kubernetes becomes a platform where you can actually start with something relatively simple and then grow with it, and have it stretch from a single application, to a multiple-service, microservice-based application, to a larger cluster that can actually stretch across multiple teams, and there's a bunch of facilities for folks not stepping on each other's toes as they do this stuff. Just to be clear, this is what Kubernetes is at its base. I think one of the powerful things is that there's a whole host of folks building more platform-as-a-service-like abstractions on top of Kubernetes. I'm not going to say it's a trivial thing, but it's a relatively straightforward thing to build a Heroku-like experience on top of Kubernetes. But the great thing is that if you find that that Heroku experience -- if some of the opinions that were made as part of it -- doesn't work for you, you can actually drop down to a level that's more useful than going all the way down to a raw VM, because right now, if you're running on Heroku and something doesn't work for you, it's like, "Here's a raw VM. Good luck with that." There's a huge cliff as you actually want to start coloring outside the lines for, as I mix my metaphors here, these platform services. ELRICK: What services are out there that you can use that implement Kubernetes? JOE: That's a great question. There's a whole host there. 
One of the folks in the community has pulled together a spreadsheet of all the different ways to install and run Kubernetes, and I think there were something like 60 entries on it. It's an open source system. It's incredibly adaptable in terms of running in all sorts of different places with all sorts of different mechanisms, and there are really active startups that are helping folks run that stuff. In terms of the easiest turnkey things, I would probably start with Google Container Engine, which is honestly one click. It fits within a Free Tier. It can get you up and running so that you can actually play with Kubernetes super easily. There's this thing from the folks at CoreOS called minikube that lets you run it on your laptop as a development environment. That's a great way to kick the tires. If you're on Amazon, my company Heptio has a quick start that we did with some of the Amazon community folks. It's a CloudFormation template that launches a Kubernetes stack that you can get up and running with and really understand what's happening. I think as users understand what value it brings at the user level, they'll figure out whether they want to invest in figuring out the best place and the best way to run it for them. My advice to folks would be: find some way to start getting familiar with it, and then decide if you have to go deep in terms of how to be a cluster operator and how to run the thing. ELRICK: Yup. That was going to be my next question. You just brought up your company, Heptio. What was the reason for starting that startup? JOE: Heptio was founded by Craig McLuckie, one of the other Kubernetes founders, and me. We started about six or seven months ago now. The goal here is to bring Kubernetes to enterprises and bridge the gap of bringing some of this technology-forward company thinking into the mainstream. Companies like Google and Twitter and Facebook have a certain way of thinking about building and deploying software. 
How do we bring those ideas into the more mainstream enterprise? How do we bridge that gap? We're really using Kubernetes as the tool to do that. We're doing a bunch of things to make that happen. The first is that we're offering training, support and services, so right now, if companies want to get started today, they can engage with us and we can help them understand what makes sense there. Over time, we want to make that more self-service, easier to do, so that you actually don't have to hire someone like us to get started and be successful there. We want to invest in the community in terms of making Kubernetes easier to approach, easier to run, and more applicable to a more diverse set of audiences. This conversation that we're having here -- I'm hoping that at some point we won't have to have it, because Kubernetes will be easy enough and self-describing enough that folks won't feel like they have to dig deep to get started. Then the last thing that we're going to be doing is offering commercial services and software that really helps stitch Kubernetes into the fabric of how large companies work. I think there's a set of tools that you need as you move from being a startup or a small team to actually dealing with the structure of a large enterprise, and that's really where we're going to be looking to create and sell product. ELRICK: Gotcha. CHARLES: How does Kubernetes then compare and contrast with other technologies that we hear about when we talk about integrating with the enterprise and having enterprise clients manage their own infrastructure -- things like Cloud Foundry, for example? For someone who's kind of ignorant of both, how do you discriminate between the two? JOE: Cloud Foundry is more of a traditional platform as a service. There's a lot to like there, and there are some places where the Kubernetes community and the Cloud Foundry community are starting to cooperate. 
There is a common way for provisioning and creating external services, so you can say, "I want a MySQL database." We're trying to make that idea of, "Give me a MySQL database, I don't care who's running it or where it's running," common across Cloud Foundry and Kubernetes, so there is some effort going in there. But Cloud Foundry is more of a traditional platform as a service. It's opinionated in terms of the right way to create, launch, roll out, and hook services together. Whereas Kubernetes is more of a building-block type of thing. Raw Kubernetes is, in some ways, more of a lower-level building-block technology than something like Cloud Foundry. The most applicable competitor in this world to Cloud Foundry, I would say, would be OpenShift from Red Hat. OpenShift is a set of extensions built on top of Kubernetes. Right now, it's a little bit of a modified version of Kubernetes, but over time that team is working to make it a set of pure extensions on top of Kubernetes that adds a platform-as-a-service layer on top of the container cluster layer. The experience for OpenShift will be comparable to the experience for Cloud Foundry. There are other folks, too: Microsoft just bought a small company called Deis. They offer a thing called Workflow, which gives you a little bit of the flavor of a platform as a service also. There are multiple flavors of platforms built on top of Kubernetes that would be more apples-to-apples comparable to something like Cloud Foundry. Now, the interesting thing with Deis' Workflow or OpenShift or some of the other platforms built on top of Kubernetes is that, again, if you find yourself where that platform doesn't work for you for some reason, you don't have to throw out everything. You can actually start picking and choosing what primitives you want to drop down to in the Kubernetes world without having to go down to raw VMs. 
Whereas Cloud Foundry really doesn't have a widely supported, more raw interface for running containers and services. It's kind of subtle. CHARLES: Yeah, it's kind of subtle. This is an analogy that just popped into my head while I was listening to you, and I don't know if this is way off base. But when you were describing having... What was the word you used? You said a container clast --? It was a container clustered... JOE: Container orchestrator, container cluster. These are all -- CHARLES: Right, and then kind of hearkening back to the beginning of our conversation where you were talking about being able to specify, "I want 10 of these processes," or an elastic amount of these processes, that reminded me of the Erlang VM and how kind of baked into that thing is the concept of these lightweight processes: being able to manage communication between these lightweight processes, and also supervise these processes, and have layers of supervisors supervising other supervisors, to be able to declare a configuration for a set of processes to always be running. Then also propagate failure of those processes and escalate, and stuff like that. Would you say that there is an analogy there? I know they're completely separate beasts, but is there a co-evolution there? JOE: I've never used Erlang in anger, so it's hard for me to speak super knowledgeably about it. From what I understand, I think there is a lot in common there. I think Erlang was originally built by Ericsson for telecom switches, I believe, where you have these strong availability guarantees. Any time you're aiming for high availability, you need to decouple things with outside control loops and ways to actually coordinate across pieces of hardware and software, so that when things fail, you can isolate that and have a blast radius for a failure, and then have higher-level mechanisms that can help recover. That's very much what happens with something like Kubernetes and a container orchestrator. 
I think there's a ton of parallels there. CHARLES: I'm just trying to grasp at analogies of things that might be -- ELRICK: I think they call that OTP, the Open Telecom Platform, or something like that in Erlang. CHARLES: Yeah, but it's just got a lot of these things -- ELRICK: Very similar. CHARLES: Yeah, it seems very similar. ELRICK: Interestingly enough, for someone that's starting from the bottom, an uninitiated person when it comes to Kubernetes, containers, Docker images, Docker -- where would they start to ramp themselves up? I know you mentioned that you are writing a book --? JOE: Yes. ELRICK: -- 'Kubernetes: Up and Running'. Would that be a good place to start when it comes out, or is there another place they should start before they get there? What are your thoughts on that? JOE: Definitely check out the book. This is a book that I'm writing with Kelsey Hightower, who's one of the developer evangelists for Google. He is the most dynamic speaker I've ever seen, so if you ever have a chance to see him live, it's pretty great. But Kelsey started this and he's a busy guy, so he brought in Brendan Burns, one of the other Kubernetes co-founders, and me to help finish that book off, and that should be coming out soon. It's Kubernetes: Up and Running. Definitely check that out. There's a bunch of good tutorials out there also that start introducing you to a lot of the concepts in Kubernetes. I'm not going to go through all of those concepts right now. There's probably half a dozen different concepts and bits of terminology that you have to learn to really get going with it, and I think that's a problem right now. There's a lot to import before you can get started. I gave a talk at the Kubernetes conference in Berlin a month or two ago, and it was essentially like, yeah, we've got our work cut out for us to actually make this stuff applicable to a wider audience. 
But if you want to see the power, I think one of the things you can do is look at a system built on top of Kubernetes called Helm, H-E-L-M, like a ship's helm, because we love our nautical analogies here. Helm is a package manager for Kubernetes. Just like you can log in to, say, an Ubuntu machine and do apt-get install mysql and you have a database up and running, with Helm you can say, "Install WordPress on my Kubernetes cluster," and it'll just make that happen. It takes this idea of package management, of describing applications, up to the next level. When you're doing regular sysadmin stuff, you can actually go through and do the system to [Inaudible] files or to [Inaudible] files and copy stuff out and use Puppet and Chef to orchestrate all of that stuff. Or you can take the stuff that the package maintainers for the operating system have done and actually just go ahead and say, "Get that installed." We want to be able to offer a similar experience at the cluster level. I think that's a great way to start seeing the power: after you understand all these concepts, here is how easy you can make it to bring up and run these distributed systems that are real applications. The Weaveworks folks -- they're a company that does container networking and introspection stuff, based out of London -- have this example application called Sock Shop. It's like the pet shop example, but distributed and built to show off how you can build an application on top of Kubernetes that pulls a lot of moving pieces together. Then there are some other applications out there like that, that give you a little bit of an idea of what things look like as you start using this stuff to its fullest extent. I would say start with something that feels concrete, where you can start poking around and seeing how things work before you commit. I know some people are sort of depth-first learners and some are breadth-first learners. 
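Helm's core move, as described above, is separating a templated description of an application (the chart) from the values you install it with. A toy sketch of that idea in Python -- this is not Helm's actual engine (which uses Go templates), just the concept:

```python
from string import Template

# A "chart" is, at heart, a manifest with holes in it...
chart = Template("""\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $release
spec:
  replicas: $replicas
""")

# ...and an install supplies the values that fill those holes.
def install(release, values):
    """Render the chart for one named release with the given values."""
    return chart.substitute(release=release, **values)

print(install("wordpress", {"replicas": 3}))
```

A real chart bundles many such templated manifests plus defaults, so "install WordPress on my cluster" expands into all the Deployments, Services and config the application needs.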
If you're depth-first, go and read the book, go to the Kubernetes documentation site. If you're breadth-first, just start with an application and go from there. ELRICK: Okay. CHARLES: I think I definitely fall into that breadth-first camp. I want to build something with it first before trying to manage my own cluster. ELRICK: Yeah. True. I think I watched your talk and I did watch one of Kelsey's talks on container management. There was stuff about replication controllers and schedulers and I was like, "The ocean just kept getting deeper and deeper," as I listened to his talk. JOE: Actually, I think this is one of the cultural gaps to bridge between frontend and backend thinking. I think a lot of backend folks end up being these depth-first types of folks, where when they want to use a technology, they want to read all the source code before they first apply it. I'm sure everybody has met those types of developers. Then I think there are folks that are breadth-first, where they really just want to understand enough to be effective. They want to get something up and running; if they hit a problem, they'll go ahead and fix that problem, but other than that, they're very goal-oriented toward "I want to get this thing running." Kubernetes right now is kind of built by systems engineers for systems engineers, and it shows, so we have our work cut out for us, I think, to bridge that gap. It's going to be an ongoing thing. ELRICK: Yeah, I'm like a depth-first but I have to keep myself in check because I have to get work done as a developer. [Laughter] JOE: That sounds about right, yeah. Yeah, so you're held accountable for writing code. CHARLES: Yeah. That's where real learning happens when you're depth-first, but you've got deadlines. ELRICK: Yes. CHARLES: I think that's a very effective combination. 
Before we go, I wanted to switch topics away from Kubernetes for just a little bit, because you mentioned something when we were emailing: that, I guess in a different lifetime, you were actually on the original IE team, at the very beginning of the Internet Explorer team at Microsoft? JOE: Yes, that's where I started my career. Back in '97, I'd done a couple of internships at Microsoft and then went to join full time, moved up here to Seattle, and I had a choice between joining the NT kernel team or the Internet Explorer team. This was after IE3, before IE4. I didn't know if this whole internet thing was going to pan out, but it looked like it would give you a lot of interesting stuff to work on. You've got to understand, the internet wasn't an assumed thing back then, right? ELRICK: Yeah, that's true. JOE: I don't know, this internet thing. CHARLES: I know. I was there, and I know that old school IE sometimes gets a bad rap. It does get a bad rap for being a little bit of an albatross, but if you were there for the early days of IE, it really was the thing that blew it wide open. People do not give credit: it was extraordinarily ahead of its time. That was the [Inaudible] team that coined DHTML, back when it was called DHTML. I remember actually using it for the first time. I think around '97 was when I was writing raw HTML for everything; CSS was hardly even a thing. I'd assumed all these static things, once we render them, are etched in stone. The idea that every one of these properties which I already knew was now dynamic and completely reflected, moment to moment -- it was just eye-opening. It was mind-blowing, and it was kind of the beginning of the next 20 years. I want to just talk a little bit about that, about where those ideas came from and what the impetus was for that? JOE: Oh, man. There's so much history here. First of all, thank you for calling that out. I think we did a lot of really interesting, groundbreaking work then. 
I think the sin was not in IE6 as it was but in [inaudible]. I think the fact that -- CHARLES: IE6 was actually an amazing browser. Absolutely an amazing browser. JOE: And then the world moved past it, right? It didn't catch up. That was the problem. For its time, when it was released, I was proud of that release. But four years on, things get a little bit long in the tooth. IE3 was based on a rendering engine that was very static, very similar to Netscape's at the time. The thing to keep in mind is that Netscape at that time would download a webpage, parse it and display it. There was no idea of a DOM in Netscape at that point, so it would throw away a lot of the information and only store stuff that was very specific to the display context. Literally, when you resized the window in Netscape back then, it would reparse the original HTML to regenerate things. It wasn't even able to resize the window without going through and reparsing. What we did with IE4 -- and I joined close to the tail end of IE4, so I can't claim too much credit here -- was bring some of the ideas from something like Visual Basic and merge those into the idea of the browser, where you actually have this programming model, which became the DOM, of where your controls are, how they fit together, being able to live-modify these things. This was all part and parcel of how people built Windows applications. It turns out that IE4 was the combination of the old IE3 rendering engine -- stealing stuff from there -- with this project that was built as a bunch of ActiveX controls for Office called [inaudible]. As you smash that stuff together and turn it into a browser rendering engine, that rendering engine ended up being called Trident. That's the thing that got a nautical theme -- I don't think it's connected -- and that's the thing that I joined and started working on at the time. 
This whole idea that you actually have this DOM, that you can modify a programmable representation of DHTML and have it be live-updated on screen -- that only came with IE4. I don't think anybody had done it at that point. The competing scheme from Netscape was this thing called layers, where it was essentially multiple HTML documents, where you could replace one of the HTML documents and they would be rendered on top of each other. It was awful and it was lost to the mists of time. CHARLES: I remember marketing material about layers and hearing how layers was just going to be this wonderful thing, but I don't ever remember -- did they ever even ship it? JOE: I don't know if they did or not. The thing that you've got to understand is that anybody who spent any significant amount of time at Microsoft just really internalized the idea of a platform like no place else. Microsoft lives and breathes platforms. I think sometimes it does them a disservice. I've been out of Microsoft for like 13 years now, so maybe some of my knowledge is a little outdated here, but I still have friends over there. But Microsoft is like the poor schmuck that goes to Vegas and pulls the slot machine and wins the jackpot on the first pull. I'm not saying that there wasn't a lot of hard work that went into Windows, but they hit the goldmine with that from a platform point of view, and then they essentially did it again with Office. You have these two incredibly powerful platforms that ended up being an enormous growth engine for the company over time, and that fundamentally changed the worldview of Microsoft, where they really viewed everything as a platform. I think there were some forward-thinking people at Netscape and other companies, but I think Microsoft early on really understood what it meant to be a platform, and we saw back then what the web could be. One of the original IE team members -- I'm going to give a shout out to him -- is Chris Wilson, who's now on the Chrome team, I think. 
I don't know where he is these days. Chris was on the original IE team. He's still heavily involved in web standards. None of this stuff is a surprise to us. After we finished IE6, a lot of the IE team rolled off to doing Avalon, which became Windows Presentation Foundation, which was really looking to reinvent Windows UI, importing a bunch of ideas from the web and modern programming. That's where we came up with XAML, and that eventually begat Silverlight, for good or ill. But some of our original demos for Avalon -- if you go back in time and look at them, that was probably... I don't know, 2000 or something like that -- they're exactly the type of stuff that people are building with the web platform today. Back then, they'll flex with the thing. We're reinventing this stuff over and over again. I like where it's going. I think we're in a good spot right now, but we see things like the Shadow DOM come up, and I look at that and I'm like, "We had HTC controls, which did a lot of Shadow DOM-like stuff in IE early on." These things get reinvented and refined over time, and I think it's great, but it's fascinating to be in the industry long enough that you can see these patterns repeat. CHARLES: It is actually interesting. I remember doing UI in C++ and in Java. We did a lot of Java, and for a long time I felt like I was wandering in the wilderness of the web, where I was like, "Oh, man. I just wish we had these capabilities, things that we could do in Swing 10 or 15 years ago." But the happy ending is that I really actually do feel we are in a place now, finally, where you have options, where the developer experience really is truly competitive with the way it was these many years ago. It's also a testament to just how compelling the deployment model of the web is, that people were willing to forgo all of that so they could distribute their applications really easily. JOE: Never underestimate the power of view source. 
CHARLES: Yeah. [Laughter] ELRICK: I think that's why these sorts of conversations are very powerful, like going back in time and looking at the development up until now, because like they say, people that don't know their history are doomed to repeat it. I think this is a beautiful conversation. JOE: Yeah. Because I've done that developer-focused frontend type of stuff. I've done the backend stuff. One of the things that I noticed is that you see patterns repeat over and over again. I went and learned React the other day (I was going to say it took a weekend but, let's be honest, it was probably more like a week) and the way that it encapsulates state up and down, model view, these things have different twists on them that you see in different places, but you see the same patterns repeat again and again. I look at the way that we do scheduling in Kubernetes. Scheduling is this idea that you have a bunch of workloads that require a certain amount of CPU and RAM, and you want to play this Tetris game of being able to fit these things in. You look at scheduling like that and there are echoes of how layout happens in a browser. There is a deeper game going on here, and as you go through your career, if you're like me and you're always interested in trying new things, you never leave it all behind. You always see things that influence your thinking moving forward. CHARLES: Absolutely. I kind of did the opposite. I started out on the backend and then moved over into the frontend, but there's never been any concept that I was familiar with working on server side code that did not come to my aid at some point working on the frontend. I can appreciate that fully. ELRICK: Yup. I can agree with the same thing. I jump all around the board, learning things that I have no use for currently, but somehow, they come back to help me. CHARLES: That will come back to help you. You thread them together at some point. ELRICK: Yup. 
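Joe's "Tetris game" of scheduling can be sketched as a toy first-fit packer. The node names, capacities, and workloads below are invented for illustration; the real Kubernetes scheduler is far more sophisticated (filtering, scoring, preemption).

```python
# A toy first-fit scheduler: each workload requests CPU and RAM,
# and we place it on the first node with enough room left.

def schedule(workloads, nodes):
    """Assign each workload to the first node with enough free CPU and RAM."""
    free = {name: dict(capacity) for name, capacity in nodes.items()}
    placements = {}
    for workload, req in workloads.items():
        for node, cap in free.items():
            if cap["cpu"] >= req["cpu"] and cap["ram"] >= req["ram"]:
                cap["cpu"] -= req["cpu"]  # reserve the resources on this node
                cap["ram"] -= req["ram"]
                placements[workload] = node
                break
        else:
            placements[workload] = None  # no node fits: unschedulable
    return placements

nodes = {"node-a": {"cpu": 4, "ram": 8}, "node-b": {"cpu": 2, "ram": 4}}
pods = {"web": {"cpu": 2, "ram": 2}, "db": {"cpu": 2, "ram": 4}, "cache": {"cpu": 1, "ram": 2}}
print(schedule(pods, nodes))  # {'web': 'node-a', 'db': 'node-a', 'cache': 'node-b'}
```

The same greedy shape shows up in browser layout, bin packing, and memory allocators, which is the pattern-repetition Joe is pointing at.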
CHARLES: As they said in one of my favorite video games in high school, Mortal Kombat, there is no knowledge that is not power. JOE: I was all Street Fighter. CHARLES: Really? [Laughter] JOE: I cut class in high school and went to play Street Fighter at the mall. CHARLES: There is no knowledge that isn't power except for... I'm not sure about the knowledge of all these little button-mashing key combinations, really. I don't think there's much power in that. JOE: Well, the Konami code still shows up all the time, right? [Laughter] CHARLES: I'm surprised how that's been passed down from generation to generation. JOE: You still see it show up in places that you wouldn't expect. One of the sad things is that early on in IE, we had all these Internet Explorer Easter eggs where if you typed the right combination into the address bar, did this thing and clicked and turned around three times and faced west, you actually got this cool DHTML thing, and those things are largely disappearing. People don't make Easter eggs like they used to. I think there are probably legal reasons for making sure that every feature is as specced. But I kind of miss those old Easter eggs that we used to find. CHARLES: Yeah, me too. I guess everybody saves their Easter eggs for April 1st but -- JOE: For the release notes, [inaudible]. CHARLES: All right. Well, thank you so much for coming by, Joe. I know I'm personally excited. I'm going to go find one of those Kubernetes-as-a-service offerings that you mentioned and try and do a little breadth-first learning. But whether you're depth-first or breadth-first, I say go to it, and thank you so much for coming on the show. JOE: Well, thank you so much for having me on. It's been great. CHARLES: Before we go, there is actually one other special item that I wanted to mention. This is Open Source Bridge, which is a conference being held in Portland, Oregon on the 20th to 23rd of June this year. 
The tracks are activism, culture hacks, practice and theory, and podcast listeners can take $50 off the ticket by entering the code 'podcast' on the Eventbrite page, which we will link to in the show notes. Thank you, Elrick. Thank you, Joe. Thank you everybody and we will see you next week.
Jean Barmash is Director of Engineering at Compass and Founder & Co-Organizer of the NYC CTO School Meetup. He lives in New York City. He has over 15 years of experience in the software industry, and has been part of 4 startups over the last seven years, 3 as CTO / VPE, one of which he co-founded. Prior to his entrepreneurial adventures, Jean held a variety of progressively senior roles in development, integration consulting, training, and team leadership. He worked for such companies as Trilogy, Symantec, Infusion and Alfresco, consulting for Fortune 100 companies like Ford, Toyota, Microsoft, Adobe, IHG, Citi, BofA, NBC, and Booz Allen Hamilton. Jean will speak at QCon New York 2017: http://bit.ly/2nN7KKo Why listen to this podcast: - The Compass backend is mostly written in Java and Python, with Go increasingly a first-class language. The main reason for adding Go was developer productivity. - The app is based on a microservices architecture with around 40-50 services in total. - Binary RPC, originally Thrift and Finagle, is used as the communication protocol, but the company is gradually moving to gRPC, still with Thrift. One advantage gRPC offers is better Python support than Finagle. - The company has built a code generation framework which takes Thrift and converts it to a RESTful API for clients to consume. - Constraint theory is about how you manage the one constraint in a system or team that prevents you from increasing throughput; for example, if your software engineering team only has one front-end engineer, do you ask back-end engineers to pick off some front-end tasks, or bring in a contractor? Subscribe: www.youtube.com/infoq Like InfoQ on Facebook: bit.ly/2jmlyG8 Follow on Twitter: twitter.com/InfoQ Follow on LinkedIn: www.linkedin.com/company/infoq You can also subscribe to the InfoQ newsletter to receive weekly updates on the hottest topics from professional software development. bit.ly/24x3IVq
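The RPC-to-REST code generation idea mentioned above can be pictured as a convention that derives HTTP verbs and routes from RPC-style method names. The naming convention, verb table, and method names below are all hypothetical illustrations, not Compass's actual framework:

```python
import re

# Map RPC verb prefixes to HTTP methods (an assumed convention for this sketch).
VERB_MAP = {"get": "GET", "list": "GET", "create": "POST",
            "update": "PUT", "delete": "DELETE"}

def rpc_to_rest(method_name):
    """Map an RPC method like 'getListing' to ('GET', '/listings/{id}')."""
    verb, resource = re.match(r"([a-z]+)([A-Z]\w*)", method_name).groups()
    http = VERB_MAP[verb]
    path = "/" + resource.lower() + "s"
    if verb in ("get", "update", "delete"):
        path += "/{id}"  # item-level operations address one resource
    return http, path

print(rpc_to_rest("getListing"))   # ('GET', '/listings/{id}')
print(rpc_to_rest("createAgent"))  # ('POST', '/agents')
```

A real generator would of course read the Thrift IDL rather than method names, but the mapping step has this flavor.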
A standards-based RESTful API extending DMTF Redfish: centrally manage storage, switches, fabrics, and servers in hyperscale and virtualized cloud environments...
Algorithmia is a marketplace for algorithms. A software engineer who writes an algorithm for image processing or spam detection or TF-IDF can turn that algorithm into a RESTful API to be consumed by other developers. Different algorithms can be composed together to build even higher-level applications. Diego Oppenheimer is the CEO of Algorithmia.
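As a taste of the kind of algorithm one might publish behind such an API, here is a minimal from-scratch TF-IDF sketch (no Algorithmia-specific code is assumed; the documents are made up):

```python
import math
from collections import Counter

def tf_idf(docs):
    """Per-document TF-IDF weights for a list of tokenized documents."""
    n = len(docs)
    df = Counter()            # document frequency of each term
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        counts = Counter(doc)
        weights.append({
            # term frequency times inverse document frequency
            term: (count / len(doc)) * math.log(n / df[term])
            for term, count in counts.items()
        })
    return weights

docs = [["spam", "offer", "spam"], ["meeting", "offer"], ["meeting", "notes"]]
weights = tf_idf(docs)
print(round(weights[0]["spam"], 3))  # 0.732: "spam" is distinctive for doc 0
```

Terms that appear in every document score zero, which is exactly why TF-IDF is useful for tasks like the spam detection mentioned above.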
+ The Beeline ("Билайн") Data School: classes begin January 25. // http://bit.ly/TAOP101lol + 8 surprising facts about real Docker adoption // http://bit.ly/TAOP101datadog + Swagger // Swagger is a simple yet powerful representation of your RESTful API. // http://swagger.io/ + Paw // The ultimate REST client for Mac // https://luckymarmot.com/paw + Postman // Modern software is built on APIs. Postman helps you build APIs faster. // https://www.getpostman.com + Why stored procedures? + Why mobile web apps are slow // http://bit.ly/TAOP101slow Support the podcast here: https://www.patreon.com/golodnyj Suggest new episode topics here: http://bit.ly/TAOPgit Subscribe to the podcast via iTunes: http://bit.ly/TAOPiTunes Subscribe to the podcast via RSS without iTunes: http://bit.ly/TAOPrss
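For a sense of what the Swagger representation mentioned above looks like, here is a minimal hand-written example in the Swagger 2.0 format (the `/status` endpoint is made up for illustration):

```yaml
swagger: "2.0"
info:
  title: Example API
  version: "1.0"
paths:
  /status:
    get:
      summary: Health check
      responses:
        "200":
          description: Service is up
```

Tools like Paw and Postman can import descriptions in this format to generate requests against the described endpoints.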
I had the opportunity to interview Matt Mullenweg about an ambitious project that included more than a year and a half of development to create an all-new WordPress.com interface, both for the web and a desktop app. The project was codenamed Calypso, and we talked about many aspects of Calypso, as well as a variety of subjects that relate to it. Why did you make such a big bet on Calypso? Matt has talked for a while now about his vision that WordPress can become an "app platform", and this is an example of what that meant to him. He also notes how he's always looking for things that will "move the needle" for greater WordPress adoption. We were both thinking about the same statistic: that roughly 96% of WordPress.com users (and probably a high number of WordPress.org users too) essentially abandon their websites after a short tenure. So anything that can increase the share who stick with it long term, to say 8% or 15%, can make a huge difference. How do you think about investing in feature development for WordPress.com, and how it affects WordPress as well? When Matt considers what he wants to invest Automattic developer and designer time in, he says he thinks of WordPress as a whole first, before considering specifics for WordPress.com. He'd rather see WordPress.com as a gateway to a self-hosted install. And whether they stay on .com or move to a self-hosted install, he wants to help ensure that their problems are solved. WordPresses I guess it's new to me, because Matt says he's been saying it for years, but he calls WordPress websites "WordPresses", after a long-time debate internally about whether to call WordPress.com sites sites or blogs. WordPress.com as a network versus a platform The new homepage for logged-in users, or users in the WordPress.com app, defaults to the Reader view of the WordPress.com interface, versus the writing view. 
This intrigued me, as I don't personally think of WordPress.com as a read-first ecosystem, but rather a place to write. I think more of Tumblr or Medium when I think of a destination for reading, where I may write. Matt and I talked about the merits of WordPress as a network versus a platform. He thinks it can be both. And I think this touches on one of the big goals for Calypso that we haven't discussed yet: to make WordPress a better network. To me, WordPress.com is a platform, but WordPress (both .com and Jetpack enabled sites) are ripe to be a hugely successful network, through the huge number of websites and independent publishers that are interconnected via WordPress.com. There is more evidence that this is a goal for them too, with the launch of Discover WordPress along with the release of the new interface. Discover WordPress is a project by the editorial team to surface the best writing across WordPress.com and Jetpack enabled websites. Furthermore, beyond the human curated content, much could be done in the future algorithmically. We didn't get as much into this stuff as I would've liked, but I think it's an enormous growth area for Automattic. Open Sourcing Calypso The Calypso project code is fully open source, and is a top trending project on Github right now. There are few requirements to run the code locally, so you can pretty quickly get a working web view. There are a slew of fancy React components that could be pretty easily lifted from Calypso and used independently, as well as a guide to getting started with the full codebase. How can the community anticipate the future, with more abstracted implementations of WordPress? 
As WordPress projects continue to use REST APIs to create fully custom frontends, backends, and in-betweens, I was curious what Matt thinks the community can do to anticipate and educate users on how to deal with these scenarios, which may fragment WordPress and be confusing for people who expect WordPress plugins and code to interact well with one another. He doesn't think it's too much of a problem, but says it's important that we experiment and learn from our experiments; he was hesitant to call the potential for confusion fragmentation as much as experimentation. Either way, I do think education and documentation will be important as other folks continue to use parts of WordPress to make impressive things, without supporting every specific thing that can also run on WordPress. An example of this is the WordPress.com app itself. You can manage Jetpack-enabled sites through it, but that doesn't mean you get everything in the editor you'd get with a WordPress.org site, like custom fields and other plugin functionality that the desktop app doesn't support. What is Automattic's differentiating factor? I wanted to know what Automattic's differentiating factor is, in Matt's mind. He defaulted, I guess unsurprisingly, to "everything", but as I pushed him a little further, he dug a bit more into some of the things that make Automattic interesting. From a WordPress.org perspective, WordPress.com-integrated tools like Stats, VaultPress, and Akismet are difficult to match with other tools. For WordPress.com, he thinks the potential power of the Reader and network can be compelling. I agree that the diversity of the WordPress.com and Jetpack author audience could make for a compelling reading product, one that has more potential than I see right now in a competitor like Medium, which is very tech heavy. 
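The REST-API-driven frontends discussed earlier in this section typically read content from the core WordPress endpoint `/wp-json/wp/v2/posts`. A sketch of parsing that response shape follows; the sample payload below is abbreviated and hand-written, not a live response:

```python
import json

# Abbreviated stand-in for what GET /wp-json/wp/v2/posts returns:
# a JSON array of post objects (real responses carry many more fields).
sample = json.dumps([
    {"id": 1, "title": {"rendered": "Hello Calypso"},
     "link": "https://example.wordpress.com/hello-calypso/"}
])

def post_titles(payload):
    """Extract the rendered titles from a /wp/v2/posts JSON payload."""
    return [post["title"]["rendered"] for post in json.loads(payload)]

print(post_titles(sample))  # ['Hello Calypso']
```

Calypso itself consumes this kind of JSON from the WordPress.com REST API, which is what makes fully custom interfaces like it possible.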
Matt says, [pullquote align="right"]"We've built up a lot of trust in the community, and that goodwill definitely pays back."[/pullquote] Part of what makes it hard to identify Automattic's specific differentiator is that they do a lot of things. Matt acknowledged this, but counters by saying that they work hard on user experience and on being a good community citizen. How have teams changed at Automattic over time? Automattic scales by splitting teams when they get too big. Today, there are 46 teams. Some of those teams are embedded in larger teams and have some hierarchy, but the company is still quite flat for a company of 400 people. The rule of thumb Matt wants to maintain is that no one should have more than 10 people reporting directly to them, though he has around 23. By the standards of the tech world, Automattic's scale in terms of the number of employees may be somewhat ordinary, but they have still had massive and consistent change over the decade of the company's existence. And they are hiring as fast as they can to this day. The challenge of customizing WordPress sites A couple of years ago, someone from Automattic told me how concerned they were about the WordPress customizer's ability to scale, both for use on mobile devices, and as a utility that could manage a lot of features. And I wanted to know how Matt thinks that has evolved, now that the customizer is in such significant use on both WordPress.com and for self-hosted websites. As he notes, the customizer has undergone a lot of positive iteration over the last several releases, and today the WordPress.com and WordPress.org customizers are using the same base code; whereas for a while WordPress.com had its own custom implementation. But he still says that, "if we're candid with ourselves, ... customization is still the worst part of WordPress, you know? It's the hardest. It's where people get stuck. 
It's where there's a real gap between the promise and what people are able to realize and create when they get started using WordPress." It's not as much a problem with the use of themes, or if you can code, but for new users, "it's their biggest struggle." One idea that I have is to make a more Medium-like interface the "default" view, versus a changing default theme. That way, WordPress.com could be more opinionated about the default view, and change the theme at will, or along with trends, versus giving new users the default theme of a particular year, which then goes untouched unless the user decides to switch. Matt said they have that a bit in the Reader view, but that is what someone in the WordPress.com network would see, versus what an outside website visitor would see. Anyway, there are definitely challenges ahead for making customization easier and, more importantly, just ensuring sites look good for users. I think that this is an area where other platforms -- like Medium and Squarespace, though in different ways -- are doing a good job. The first line of the Automattic creed The Automattic creed states at the very beginning, "I will never stop learning." That was part of Matt's response when I asked just how they managed to cross-train a workforce that was primarily made up of PHP developers to create a world-class JavaScript-driven application. In addition to the natural desire to learn that Automattic employees should have, they created internal resources for helping people, and are considering releasing some of that material, maybe in the form of webinars or an online conference. Matt said Automatticians will also be sharing what they learn at other conferences, like the upcoming A Day of REST, where two Automatticians will be speaking. Matt did admit that he hasn't made the PHP-to-JavaScript switch yet, and personally feels more comfortable in PHP; though some of his team have said it wasn't as intimidating as it sounds. 
Bug bounties Did you know all Automattic properties are on HackerOne, the bug bounty community? If you find a security bug, you can get a bounty by reporting it. I didn't know this until the Calypso launch. How is Automattic thinking about revenue? With my napkin math and a few small things I know about Automattic, I'd guesstimate they are somewhere in the neighborhood of $100 million in annual revenue. I didn't even attempt to get confirmation of this, because I know they don't reveal this kind of information. So instead I wanted to get more insight into how Matt thinks about revenue at Automattic. Generally, he says they put their focus in "three main buckets." They use that focus both for revenue purposes and product purposes. Those areas are WordPress.com, Jetpack, and WooCommerce. They group things like VaultPress and Akismet under Jetpack; so it's basically their WordPress.org SaaS revenue stream. Those are paid subscription products. They have been transitioning that offering, as Matt shared: "a big trend over the past few years, has been to move away from a la carte upgrades, and have more bundles." They've discovered that bundled plans of $100 per year and $300 per year have been successful. Here are those plans, for both WordPress.com and WordPress.org, as shown in the new WordPress.com/Calypso interface: It appears they get most of their revenue from this stream. I do know, and have previously reported, that at least at one point, WordPress.com VIP accounted for upwards of 25% of overall revenue, and though that gross number has gone up over the years, its percentage of overall revenue has gone down, meaning that these paid plans have outpaced VIP, growth-wise. I'd guess VIP revenue is now less than half of that 25% number, but can't confirm it. Total sites, versus engagement There are a lot of WordPress.com websites, but as Matt noted, it's a vanity metric due to the fact that such a small percentage are active, from engaged users. 
So they are trying more to track engagement versus total blogs. I tried to get him to share the number of active websites, but that's not something he could share. Helping site owners monetize, and WooCommerce integration with WordPress.com I talked about the roadmap some, and asked Matt about what they may offer in the future to help authors monetize their sites. They currently have a WordAds program, but that is a pageview-driven strategy, and I'd love to see them introduce a way for authors to get paid via a tip jar, private paid posts, or a subscription system like I've heard Medium is talking about. It's not on their current roadmap, but he says he'd be open to it. He also noted that since WooCommerce is now "part of the family," there may be future monetization opportunities through that, though he said they don't have current plans for a hosted WooCommerce offering on WordPress.com. I was honestly pretty surprised by this: In the beginning, our focus is really going to be on people hosting their stores, you know, with web hosts. Because, part of the beauty of why WooCommerce is so popular is the flexibility, and I don't think the usability is there -- yet -- to be competitive with, like, a Shopify, or a BigCommerce. So, it's just a lot of work to do there. [pullquote align="right"]Matt said he thinks of WooCommerce as being where WordPress was around version 1.5. He called it, "very early days"[/pullquote], in that people are using it and see the potential, but knows, "there's just so much to work on and improve to make it accessible to a wider audience." He says the Woo team is now 63 people, and a number of Automatticians are doing "Wootations", or rotations with the Woo team. What to expect next in the new WordPress.com interface They are still working on a lot of things for the new interface. There are certain things that aren't there yet. For instance, showing and hiding the blogs you are personally attached to still requires the regular admin. 
I actually experienced this myself. Some parts of the interface are pretty circular and confusing. But they expect to do more going forward. They want to see what there is demand for, and what other people do with the open source nature of the project. Matt also noted that he'd like to "loop back" to content blocks (codenamed CEUX) -- the project that stalled last year. And he'd like to see what can be done around collaboration, editing, and the suggestion process. Power and ease of use One of the biggest challenges for WordPress is to continue to get easier to use, as other avenues for sharing information have gotten easier and easier, while continuing to enable powerful, feature-rich implementations of WordPress. Matt thinks this balance is important, and that we must continue to improve in both directions to continue WordPress's growth. Wrapping up I really enjoyed my first audio interview with Matt. He says we can expect more announcements around WordCamp US, which starts next week. The Calypso project is a fascinating one, and it's a great example of what we should continue to expect: powerful, catered tools that run on a RESTful API. They aren't always going to be tools for use everywhere, but we can expect to continue to see WordPress used in innovative ways, and be an exceptional platform for all kinds of applications. And finally, at the end of the interview, I pitched Matt on one of my most harebrained ideas. The naming conflict between WordPress.com and WordPress was really bad with this project, as nearly everyone not deeply embedded within the WordPress world got it wrong, and conflated Automattic's WordPress.com with WordPress the software. And I think Jetpack's brand has really blossomed. I think there is an argument to be made that Automattic could change the name of WordPress.com to Jetpack, and both Automattic and WordPress would win from the change. 
It wouldn't be easy, but all I asked of him was whether he'd read my post if I made one to give the pitch. He said he would, so expect that sometime soon. Thanks to Matt for the interview, and thanks to Mark Armstrong for helping me get going with the new WordPress.com app and arranging the interview.
This time, Hans, Schepp, and Stefan talk about their experiences with collaboration between backend and frontend developers on interface definitions. [00:00:16] News YUI Yahoo is discontinuing development of the aging YUI framework. Worth keeping in mind for future technology choices. Show notes [00:01:06] Interface design Schepp and Stefan review a few tools for developers that […]
## Project * First off, what is Indivizo? Indivizo is a web application that provides a set of products for human resources specialists. Our flagship product is Indivizo Selection, which is a video interview platform. * Allows HR professionals to use video interviews as part of their selection process * Asynchronous interviews — no need to schedule anything, everything is automatic * Question databank, interview plans * Each answer is recorded as a standalone video * Unique workflow: allows you to focus on the competencies and skills of your candidates * Other products: ATS, Search * Why is it a good approach to use video interviews in the selection process? * Saves time (up to 90%) * Better evaluation: objective, easier... * We recommend using it for first-round selection, but... * Where did the idea come from? * My partners are HR consultants and organizational developers with over a decade of experience in the field. They have helped many organizations in selection processes… * The project emerged from real-world needs, with real expertise in the field * Our strength is how we work together * We are an HR company with technology * Global trends show that supporting HR and talent management with technology is becoming more and more significant. An HR department plays a key role in an organization’s success, making it imperative to develop this area. * What is your target group? * Everyone! :) * Large-scale corporations (IT, telecommunication, SSC, banking and insurance) * Small and medium scale companies (without an HR department) * Switching to the technology side… What’s the biggest architectural decision when building a SaaS app? * Separation of customer spaces: application vs. server level * What are the pros and cons of these two? * Important factors: * Access control * Provisioning a new “user space” * Deployment * Building server infrastructure * Scaling * Centralized billing system * Client customization * Which direction was taken with Indivizo? Why? 
* Installation profile, separation on the server level, provisioning a separate instance for each customer * Drush make * Build script * Scaling and customization * What is the server infrastructure behind Indivizo? * One single VPS, custom scripts * Waiting for Commerce Platform to be released: * A modern, scalable, cloud-based hosting solution that is modeled on agile development best practices. Its unique capability revolves around managing the infrastructure topology and configuration using the same git-based tools that you use to manage your code. * What are the key modules of the installation profile of Indivizo? * Bootstrap * Page manager and Panels everywhere * Message stack * Organic Groups * Naturally: Views, Entity API * How are the videos recorded? * With the help of a third-party vendor * Recording happens through a Flash widget * Videos are hosted by our vendor * We use video.js to play the videos * You mentioned a question databank. How does that work? * A centralized place to curate the content - a separate Drupal installation * Client sites fetch the content through a RESTful API * We have future plans for this question databank… * Big organizations often have existing systems in place. Can you integrate with those? * It’s common that a client already has an ATS * We can retrieve applicant data through our RESTful API * What is it like to build a product with Drupal? * Insanely fast until about the 80%... * An amazing prototyping tool * I learnt what agile really means: for me it means changing directions as quickly as we can * Reacting to customer feedback * Releasing as early as possible * “If you are not ashamed of your product when you launched, you launched too late” - Reid Hoffman, founder of LinkedIn * I jumped into this project as a developer, but working on a product requires more of a business mindset * As a developer I liked polishing things until they were (nearly) perfect... 
* Shipping on time and on budget is essential, even when what you ship is rough * I’m gonna do a session about this topic at Drupal Developer Days in Szeged * Where are you at with Indivizo? * Expanding on the Hungarian market… * Working on a strategy to reach out to customers in broader Europe ## Use Cases * I hope this has inspired you to do something! ## NodeSquirrel Ad Have you heard of/used NodeSquirrel? Use "StartToGrow": it's a 12-month free upgrade from the Start plan to the Grow plan. So, using it means that the Grow plan will cost $5/month for the first year instead of $10. (10 GB storage on up to 5 sites)
This week on BSD Now... a wrap-up from NYCBSDCon! We'll also be talking to Luke Marsden, CEO of HybridCluster, about how they use BSD at large. Following that, our tutorial will show you how to securely share files with SFTP in a chroot. The latest news and answers to your questions, of course it's BSD Now - the place to B.. SD. This episode was brought to you by Headlines FreeBSD 10 as a firewall (http://www.pantz.org/software/pf/use_freebsd_10_as_a_pf_firewall.html) Back in 2012, the author of this site wrote an article stating you should avoid FreeBSD 9 for a firewall and use OpenBSD instead Now, with the release of 10.0, he's apparently changed his mind and switched back over It mentions the SMP version of pf, general performance advantages and more modern features The author is a regular listener of BSD Now, hi Joe! *** Network Noise Reduction Using Free Tools (http://bsdly.blogspot.com/2014/02/effective-spam-and-malware.html) Really long blog post, based on a BSDCan presentation, about fighting spam with OpenBSD Peter Hansteen, author of the book of PF, goes through how he uses OpenBSD's spamd and other security features to combat spam and malware He goes through his experiences with content filtering and disappointment with a certain proprietary vendor Not totally BSD-specific, lots of people can enjoy the article - lots of virus history as well *** FreeBSD ASLR patches submitted (http://0xfeedface.org/blog/lattera/2014-02-02/freebsd-aslr-patch-submitted-upstream) So far, FreeBSD hasn't had Address Space Layout Randomization ASLR is a nice security feature, see wikipedia (https://en.wikipedia.org/wiki/Address_space_layout_randomization) for more information With a giant patch from Shawn Webb, it might be integrated into a future version (after a vicious review from the security team of course) We might have Shawn on the show to talk about it, but he's also giving a presentation at BSDCan about his work with ASLR *** Old-style pkg_ tools retired 
(http://blogs.freebsdish.org/portmgr/2014/02/03/time-to-bid-farewell-to-the-old-pkg_-tools/) At last the old pkg_add tools are being retired in FreeBSD pkgng (http://www.bsdnow.tv/tutorials/pkgng) is a huge improvement, and now portmgr@ thinks it's time to cut the cord on the legacy toolset Ports aren't going away, and probably never will, but for binary package fans and new users that are used to things like apt, pkgng is the way to go All pkg_ tools will be considered unsupported on September 1, 2014 - even on older branches *** Interview - Luke Marsden - luke@hybridcluster.com (mailto:luke@hybridcluster.com) / @lmarsden (https://twitter.com/lmarsden) BSD at HybridCluster Tutorial Filesharing with chrooted SFTP (http://www.bsdnow.tv/tutorials/chroot-sftp) News Roundup FreeBSD on OpenStack (http://pellaeon.github.io/bsd-cloudinit/) OpenStack (https://en.wikipedia.org/wiki/OpenStack) is a cloud computing project It consists of "a series of interrelated projects that control pools of processing, storage, and networking resources throughout a datacenter, able to be managed or provisioned through a web-based dashboard, command-line tools, or a RESTful API." Until now, there wasn't a good way to run a full BSD instance on OpenStack With a project in the vein of Colin Percival (http://www.bsdnow.tv/episodes/2014_01_22-tendresse_for_ten)'s AWS startup scripts, now that's no longer the case! *** FOSDEM BSD videos (https://fosdem.org/2014/schedule/track/bsd/) This year's FOSDEM had seven BSD presentations The videos are slowly being uploaded (https://video.fosdem.org/2014/) for your viewing pleasure Not all of the BSD ones are up yet, but by the time you're watching this they might be! Check this directory (https://video.fosdem.org/2014/AW1121/Saturday/) for most of 'em The BSD dev room was full, lots of interest in what's going on from the other communities *** The FreeBSD challenge finally returns! 
(http://www.thelinuxcauldron.com/2014/02/05/freebsd-challenge-returns-day-11-30/) Due to prodding from a certain guy of a certain podcast, the "FreeBSD Challenge" series has finally resumed Our friend from the Linux foundation picks up with day 11 (http://www.thelinuxcauldron.com/2014/02/05/freebsd-challenge-day-11-30/) and day 12 (http://www.thelinuxcauldron.com/2014/02/09/freebsd-challenge-day-12-30/) on his switching from Linux journey This time he outlines the upgrade process of going from 9 to 10, using freebsd-update There's also some notes about different options for upgrading ports and some extra tips *** PCBSD weekly digest (http://blog.pcbsd.org/2014/02/pc-bsd-weekly-feature-digest-16/) After the big 10.0 release, the PCBSD crew is focusing on bug fixes for a while During their "fine tuning phase" users are encouraged to submit any and all bugs via the trac system Warden got some fixes and the package manager got some updates as well Huge size reduction in PBI format *** Feedback/Questions Derrick writes in (http://slexy.org/view/s21nbJKYmb) Sean writes in (http://slexy.org/view/s2yhziVsBP) Patrick writes in (http://slexy.org/view/s20PuccWbo) Peter writes in (http://slexy.org/view/s22PL0SbUO) Sean writes in (http://slexy.org/view/s20dkbjuOK) ***
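The chrooted SFTP setup covered in this episode's tutorial typically boils down to a few OpenSSH `sshd_config` directives like these (the group name and path are placeholders; see the linked tutorial for the full walkthrough):

```
# /etc/ssh/sshd_config excerpt: members of the "sftponly" group are locked
# into their home directory and restricted to SFTP only.
Subsystem sftp internal-sftp

Match Group sftponly
    ChrootDirectory /home/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```

Note that OpenSSH requires the ChrootDirectory itself to be owned by root and not writable by any other user or group, so uploads usually go into a writable subdirectory beneath it.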
Portland-based Orchestrate (orchestrate.io) rolls out its commercial NoSQL offering today, claiming to significantly decrease the time, cost and complexity of putting cloud-based data to work. I took the chance to speak with co-founder and CEO (and former Basho co-founder) Antony Falco, to learn more about the company and the problems it’s seeking to address. Our chat ended up becoming quite a wide-ranging discussion of the world of databases, and it’s embedded here as a podcast. The Orchestrate team suggests that a ‘typical’ web application today can use as many as five different databases to store and process diverse data types, or to interact with multiple sensors and other forms of data input. Orchestrate sets out to simplify that complexity, by layering a common and developer-friendly RESTful API on top of the underlying database technologies. Antony discusses some of the ways in which Orchestrate seeks to reduce complexity without abstracting away the power of the underlying tools. Orchestrate’s solution currently runs in Amazon’s US cloud infrastructure, with other clouds and other geographies being actively explored. Antony notes during our call that the company has seen a great deal of interest from beyond the United States, particularly from Europe.
Justin and Jason discuss Udi's stay in Pasadena and his working relationship with Justin, how to manage sessions between a JavaScript client and a RESTful API, solving the Catalyst network problems and why Jason thinks JavaScript isn't the easiest language for kids to learn, thoughts on the Code.org video, how Pluggio was hacked, the evolution of the Catalyst stack and how Jason uses Node.js in combination with LAMP, the tradeoffs of building a startup on cutting edge technologies, the progress being made on AnyFu, the results of the Uber nearest cabs challenge, Jason's thoughts on the Upverter hackathon, the SHIELD Act and how patent trolls are going after popular podcasters, Justin's TV recommendations and why Jason listens to podcasts about The Walking Dead, and Udi's concluding thoughts on America.
Justin and Jason discuss Justin's important meeting in Chicago, why SQL is agile, how FriendFeed used MySQL to store schema-less data and when normalized data will hurt you, Justin's experience with Uber, how Jason works with the Uber engineering team as an off-site contractor and how they scaled the Node.js-based dispatching system, Gabriel Weinberg's thoughts on orders of magnitude, how Justin is working with Udi on Digedu and why they chose the new PHP framework Laravel, why Jason wrote a RESTful API for Catalyst in PHP and why he thinks building web apps in Node.js is usually more trouble than it's worth, the upcoming Catalyst point system, the possibility of creating a WordPress plugin for Pluggio, how Jason bought an Arduino at the local RadioShack and while there heard about a local hardware super hacker who built a long distance surveillance microphone, the difference between Raspberry Pi and Arduino, how a judge dismissed a dealers' lawsuit against Tesla's retail stores, why Jason thinks Hacker News needs a prerequisite reading list, why Justin is worried that we're all getting stuck in our own social media silos, why you should pay attention to power law distributions and why you should look for things you can scale.
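The FriendFeed technique mentioned above (storing schema-less data in a relational database) boils down to keeping each entity as an opaque JSON blob and maintaining a narrow index table for every property you want to query. A minimal sketch of that idea, using Python's sqlite3 as a stand-in for MySQL; the table names, `save`, and `find_by_user` are illustrative, not from the episode:

```python
import json
import sqlite3

# Entities are opaque JSON blobs; queryable properties get their own
# narrow index table that is maintained alongside the blob.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE entities (id TEXT PRIMARY KEY, body TEXT)")
db.execute("CREATE TABLE index_user (user TEXT, entity_id TEXT)")

def save(entity):
    # Store the full entity as JSON...
    db.execute("INSERT INTO entities VALUES (?, ?)",
               (entity["id"], json.dumps(entity)))
    # ...and keep the index table in sync.
    db.execute("INSERT INTO index_user VALUES (?, ?)",
               (entity["user"], entity["id"]))

def find_by_user(user):
    # Query the index, then join back to the blob table.
    rows = db.execute(
        "SELECT e.body FROM entities e "
        "JOIN index_user i ON i.entity_id = e.id WHERE i.user = ?",
        (user,))
    return [json.loads(body) for (body,) in rows]

save({"id": "1", "user": "jason", "text": "hello"})
print(find_by_user("jason")[0]["text"])  # prints "hello"
```

The appeal of the design is that adding a new field to entities never requires an ALTER TABLE; only a new query pattern requires a new index table, which can be backfilled offline.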
Episode 31 - How to Start Programming Subscribe on iTunes Subscribe to RSS Download MP3 Brandon and I discuss learning how to program with Zach Silveira. Zach is 17 and started programming when he was 14. He has worked on about half a dozen projects and uses PHP primarily. He talks with us about why he started programming and what resources he uses to learn.

Noteworthy links:
- Grape - An opinionated micro-framework for building RESTful API apps on Ruby
- HTML9 Responsive Boilerstrap JS
- Linus Torvalds Invented Git, But He Pulls No Patches With GitHub
- Fixie: filler content for HTML documents
- Goodbye, CouchDB
- On why I am not buying RubyMotion
- Between a rock and a hard place – our decision to abandon the Mac App Store
APIs should be consistent, but that is difficult to achieve when a JSON response is returned alongside the HTML interface. Here I show how to add a versioned, RESTful API whose version can be determined from either the URL or HTTP headers.
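The two versioning schemes mentioned above can be sketched as plain functions: one reads the version out of the URL path, the other out of an Accept header carrying a vendor media type. This is a minimal, framework-free sketch; the `/api/v1/` path shape, the `vnd.example` media type, and the function names are assumptions for illustration, not from the episode:

```python
import re

def version_from_path(path):
    """Extract the API version from a URL like /api/v2/episodes."""
    match = re.match(r"^/api/v(\d+)/", path)
    return int(match.group(1)) if match else None

def version_from_accept(accept_header, default=1):
    """Extract the API version from an Accept header such as
    'application/vnd.example.v2+json', falling back to a default."""
    match = re.search(r"vnd\.example\.v(\d+)", accept_header or "")
    return int(match.group(1)) if match else default
```

URL versioning is explicit and easy to test in a browser; header versioning keeps URLs stable across versions, at the cost of needing a sensible default for clients that send no vendor media type.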
In the seventh episode, we start with a brief look at current news and then talk at length about OpenSocial. We give you a fairly general introduction to the topic and explain both the Gadget API and the RESTful API. As already announced in the podcast, we still have nine Cliqset beta invites for developers to give away. If you'd like one, just leave a short note in the comments! You can find the links to the show here! Get the podcast: Download MP3 RSS Feed iTunes