Podcasts about JSON API

  • 45 PODCASTS
  • 81 EPISODES
  • 56m AVG DURATION
  • ? INFREQUENT EPISODES
  • Jul 15, 2024 LATEST



Best podcasts about JSON API

Latest podcast episodes about JSON API

Talking Drupal
Talking Drupal #459 - Off The Cuff 8

Talking Drupal

Play Episode Listen Later Jul 15, 2024 48:40


Today we are talking about Config Actions, the panel's favorite Drupal modules, and Drupal contribution. We'll also cover Transform API as our module of the week. For show notes visit: www.talkingDrupal.com/459

Topics
- New Config Action: Place Block
- Favorite contrib modules
- Slack channels
- Preparing for Drupal 11
- Drupal events

Resources
- Config Action Place Block
- Front End Editing Drupal Module
- Gin Admin Theme
- Migrate Boost
- Keysave
- Navigation
- Matt Glaman
- Smart Date Code Blog Post

Hosts
- Nic Laflin - nLightenedDevelopment.com - nicxvan
- John Picozzi - epam.com - johnpicozzi
- Martin Anderson-Clutz - mandclu.com - mandclu
- Baddý Sonja Breidert - 1xINTERNET - baddysonja

MOTW Correspondent: Martin Anderson-Clutz - mandclu.com - mandclu

Brief description: Have you ever wanted to expose your Drupal site's data as JSON using view modes, formatters, blocks, and more? There's a module for that.

Module name/project name: Transform API - https://www.drupal.org/project/transform_api

Brief history
- How old: created in Sep 2023 by LupusGr3y, aka Martin Giessing of Denmark
- Versions available: 1.1.0-beta4 and 1.0.2, both of which work with Drupal 9 and 10

Maintainership
- Actively maintained; in fact, the latest commit was earlier today
- Security coverage
- Documentation: in-depth README and a full user guide
- Number of open issues: 14, 3 of which are bugs, but none against the current branch
- Usage stats: 2 sites

Module features and usage
- After installing Transform API, you should be able to get the JSON for any entity on your site by adding "format=json" as a parameter to the URL
- To get more fields exposed as JSON, you can configure a transform mode, using a Field UI configuration very similar to view modes
- You can also add transform blocks to globally include specific data in all transformed URLs, in the same way you would use normal blocks to show information on your entity pages; the output of transform blocks is segmented into regions
- Where Drupal's standard engine produces render arrays that ultimately become HTML, Transform API replaces it with an engine that produces transform arrays that ultimately become JSON
- Where Drupal's standard JSON:API more or less exposes all information as raw data for the front end to format, Transform API allows more of the formatting to be managed on the back end, where it can use Drupal's standard caching mechanisms, permission-based access, and more
- Transform API also supports lazy transformers: callbacks that are invoked after caching but before the JSON response is sent
- You can also use alter hooks to manipulate the transformed data
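A minimal sketch of the access pattern described above, in Python. The `format=json` query parameter comes from the module's description; the site URL and node path are hypothetical, and the shape of the module's actual JSON response is not assumed here.

```python
from urllib.parse import urlencode, urlparse

def transform_url(entity_url: str, fmt: str = "json") -> str:
    """Append Transform API's `format` parameter to an entity URL,
    preserving any query string the URL already carries."""
    sep = "&" if urlparse(entity_url).query else "?"
    return f"{entity_url}{sep}{urlencode({'format': fmt})}"

# Hypothetical Drupal site and node ID.
url = transform_url("https://example.com/node/42")
print(url)  # https://example.com/node/42?format=json
```

Fetching that URL (with `urllib.request`, `requests`, or a front-end client) should return the entity's transform array serialized as JSON, per the module's documentation.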

The Laravel Podcast
Listener Q&A: ChatGPT, Laravel Hangups & Best Practices, API Docs, Inertia Next Steps

The Laravel Podcast

Play Episode Listen Later Jun 24, 2024 44:26


In this episode, we dive into listener-generated questions. Join us as we cover a wide range of topics, from hangups in new Laravel apps and best practices for bigger apps, to the impact of AI tools on our workflows. We'll discuss common mistakes when building your first Laravel application, tips for documenting APIs, Laravel's transformative impact on business, and what Inertia needs to become feature complete.

Links
- Taylor Otwell's Twitter - https://twitter.com/taylorotwell
- Matt Stauffer's Twitter - https://twitter.com/stauffermatt
- Laravel Twitter - https://twitter.com/laravelphp
- Laravel Website - https://laravel.com/
- Tighten Website - https://tighten.com/
- Podcast Suggestions - https://suggest.gg/laravelpodcast/ideas
- Matt's talk on JSON:API - https://youtu.be/C01dvypo4O4?si=BDZKfz-bxeZK3Azv
- Spatie Query Package - https://github.com/spatie/laravel-query-builder
- Business of Laravel Podcast - https://businessoflaravel.com/

Editing and transcription sponsored by Tighten.

Sixteen:Nine
Chris Johns, PassageWay

Sixteen:Nine

Play Episode Listen Later Jan 17, 2024 34:57


The 16:9 PODCAST IS SPONSORED BY SCREENFEED – DIGITAL SIGNAGE CONTENT

The UK startup PassageWay operates with the interesting mission of using technology that nudges people to make well-informed and more sustainable decisions about how they get from A to B. That's done by thinking through and developing the presentation layer for Real-Time Passenger Information content that's then run on digital signs, most notably for the bus systems around the city of London. PassageWay's business model is, in simple terms, taking the rich, real-time data available for routes and stops and making it presentable and digestible for transport authorities, like Transport for London, which pays the start-up to do so. The logical notion is that the more good, real-time information is made available to people, the more the transport services will be used. While London Underground stations are well-equipped with information and the services are pretty predictable, there's not as much available to the millions who use less-predictable surface transport services like the iconic double-decker red buses. I had a good chat about all this recently with PassageWay co-founder Chris Johns. Subscribe from wherever you pick up new podcasts.

TRANSCRIPT

Chris, thank you for joining me. Can you tell me what PassageWay is all about?

Chris Johns: Thanks so much for inviting us to your podcast today. PassageWay is all about generating demand for public transport by leveraging real-time information. We do this by putting it onto digital signs that are displayed on host-supplied screens, and typically these screens only require a modern browser to display the digital sign.

You made a point of saying "host-supplied". There's been a history through the years of companies who've done things like put in the infrastructure, the screens, and so on, and then run content on them with the idea that content would be interrupted, so to speak, by advertising. You're not going down that path.

Chris Johns: No, we're not. Typically those sorts of plays are similar to JC Decaux or Clear Channel, who have long had this relationship with transport authorities whereby they will fund the deployment of bus shelters in return for an ad revenue share. We supply Transport for London with digital signs that are displayed at bus shelters, but also within their other infrastructure like bus stations. But really we're more citywide, about putting digital signs into places such as schools, hospitals, workplaces, offices, and such, in order to generate demand from the sort of non-traditional locations and encouraging the people within those locations to consider public transport.

So this doesn't sound like a traditional business. You said this is about generating demand to use public transport services and so on versus, more traditionally, this is about making money somehow or other.

Chris Johns: Yeah, I think that's the difference: a lot of those traditional plays actually put the real-time information secondary to their primary objective, which is to earn revenue from the display of ads. And to my mind, that means a poor customer experience, and a poor customer experience means reduced demand. If you think about traditional bus shelters, they are actually incredibly complex for many people trying to navigate the public transport information. If you're coming to London, for example, trying to find out: which is the right bus? Is it going to go to your preferred stop? How long is it going to take? Is there any disruption information? If you don't have it, it will make you want to go and choose a different mode of transport. So you probably take a taxi, or you may end up using your own car, for example. Actually, what we're trying to do is to show people that public transport is really easy to use. It's really accessible. It can get you from A to B pretty fast. And if you're aware of the onward travel information from the stop you're trying to get to, then actually, you can make the whole journey much easier and less stressful for many people.

So this almost seems like a community initiative, but there is a business model behind this, right?

Chris Johns: Yeah, there is. The business model is pretty straightforward, to be honest. We are paid by the transport authority or their contract partners, and our job is to provide these digital signs, and the digital signs generate demand. So in a different way of thinking, you might consider the real-time information as being the best form of advertising for public transport. Certainly better than a static advert, in my opinion, anyway.

Your company's efforts are to aggregate the data, make sure it's handled accurately and always up to date, and so on. Why would Transport for London not do that themselves?

Chris Johns: Yeah, they do. Transport for London is the world's largest integrated transport network, and they have a global leading data strategy. They're famed the world over for their open API strategy. That means we can access their data, and we pretty much have unfettered use of that data, and so do many other developers as well, and we can be sure that the data we've got is true and accurate. What we do is take that information, plot it around a particular location, and bring it together with a legible London-style wayfinding map, where we plot the access points onto it, and then we bring it all together into a sort of nice-looking digital sign that's easy to understand and act upon. So we're not generating data, and we're not modifying data; all we're doing is bringing data together into an easy-to-understand format.

So you're doing the presentation layer that, in theory, Transport for London could do themselves, but you're good at it, and it's not what they want to focus on. So they're happy to work with you to do that part of it.

Chris Johns: That's right.
Yeah. We are a supplier to TFL, and they use lots of other different tech suppliers, whether it's to build their award-winning TFL Go app or to build bus shelters, whatever it may be. They have lots of different suppliers bringing their individual skill sets into play, and that's basically what we do. But I think that one of the things we do bring to the party, because we're a tech startup, is innovation and the ability to pivot quickly and come up with entrepreneurial new ideas that we can bring into play and throw out to TFL and say, listen, what do you think about this? So we can move quite quickly.

Did you have to go to them to sell into this, or is your company kind of a result of being in discussions with them and starting the company because this opportunity existed?

Chris Johns: It's a mix between the two, actually. TFL actually issued a tender some time ago to produce the platform, and we've taken it on from there and given it a life of its own, and extended the service beyond London as well, working with other transport authorities and other partners outside of London.

So this is audio, so it makes it a little difficult to visualize things. But can you give me some sense of how this manifests itself within the transport system, and then in public and private buildings?

Chris Johns: Okay, I'll give you a couple of examples. For example, in every bus station across London, there are digital totems. Those digital totems are a bit like an airport or a train station, where you've got a central totem and it shows all the services, where they're going, whereabouts within the bus station they're leaving from, and if there are any disruptions. So we look after all of those for London. Another example would be smart bus shelters, whereby you could have a large-format digital screen with detailed route maps for each of the services that are running via that bus shelter, with real-time information on all those routes plotted not on a fixed JPEG of a route but actually plotted live onto a legible London-style map. With onward time estimation to reach all the onward stops, onward travel information such as the tube status, any disruption notifications and more, so that people can quite easily contextualize their journey and see if it's going to be running smoothly all the way through. Another example could be at a bus stop itself. Across London, there are about 18,000 bus stops and only about 2,000 bus shelters. So only about 2,000 of these locations have any real-time information. What we can do for those ones is put in QR codes, and customers can scan the QR codes and open up a real-time digital sign on their personal device with no registration, no login, no heavy download. It's a purely web-based solution that shows all the upcoming departures for that particular stop, with detailed route information, onward stop information, et cetera, and then links to download the official apps. So it's like an interstitial page where it's easy for everyone to access. Hopefully you're going to convert more people into downloading the official apps.

Now, the official app is the TFL official app, or yours?

Chris Johns: No, we don't do apps, I'm afraid. One of the points about what we're doing is about trying to make everything as open and as accessible as possible. So there is no registration, there's no login, there's no download. All you need is a modern web browser and you can access the information. We don't ask anything from the customers. We don't track them. We don't do anything really about that.

Yeah. That's one of the problems when you go to an unfamiliar city and you decide, I'm going to use their transport system. You go to the app store to find the app for the mass transport system in that city. And there's five or six of them, and you don't know which one is official or which one's riddled with ads or not updated or God knows what.

Chris Johns: Yeah. In London, I can't really speak for other cities because our primary focus is London, that's our area of expertise. But there are hundreds of thousands of people who are digitally excluded. People who don't have smartphones at all, and then there is a whole other segment that are extremely low digital users, and I think in London there's about 2 million of those, according to a Lloyds report. So you've got about 2.5 million people that are not going to be using smartphones or not downloading apps, and you've got to provide real-time information to those people, because those are also a core audience for the transport authority. Looking at the demographic, they match perfectly the sort of TFL bus user type. But at the moment they're somewhat excluded from the service, or the latest developments of promoting those services.

Is the focus more, as a result, on road transport, buses and so on, as opposed to the London Underground? Because the Underground has maps. It's got covered areas and everything else. It's easier to convey information.

Chris Johns: That's right. Train stations and tubes are fairly straightforward. You go onto the platform, you take a train going one way or the other way, or if you go to a train station, it's all linear. But if you're taking buses or you want to go get a bicycle, they're within the built environment itself. And they could be going pretty much any direction. And you really need to know where the best location is for you to find your particular service, and then how long you're going to wait, and if there are any problems with that particular service. Also, the other thing is that the tube services are linear again. The District line, for example, is always going to go on those particular routes, one way or the other. They might stop slightly earlier, but generally they're always going to follow that same path. And if you wait a minute, the next one's coming along in two or three minutes. So what we do is we just show the tube status. We show if there are any problems on any particular line, and then we say all of the lines are running fine, which is the sort of TFL standard approach to displaying the status.

Yeah. This year I've spent a couple of weeks in London, doing interviews, and then I was there semi-holidaying as well, and I was struck by the amount of real-time information that you could get. I was taking the Elizabeth line more than anything else, and it was terrific in terms of telling me, I definitely don't want to go on the Circle line right now.

Chris Johns: Yes, the Northern line.

The really old ones.

Chris Johns: Yeah, some of them are better than others, to be honest. Also, you've got to pick the right one. It's freezing in London at the moment, and some of them have heating and some of them don't. Likewise in the summer, some of them have air con and some of them don't. We don't flag that as much. I couldn't tell you offhand which ones are which.

Toko on here is stifling.

Chris Johns: Yeah. It could be useful information to many people.

What you're doing is a little reminiscent of a US company called TransitScreen.

Chris Johns: Yeah, I know. I've heard of TransitScreen.

Yeah. They would sell a service into a building, and they would also layer in things like the availability of rideshare, dockless bikes. I'm not sure what their status is right now, but probably scooters as well. Do you do any of that?

Chris Johns: Not at the moment. It is something that we are quite interested in. But we are dependent on the data sources that are available to us. And obviously we are primarily funded by TFL as well.
Our modus operandi is to really promote TFL services. When we've looked at it before, there are Lime and Forest e-bikes, for example, across London, but they don't actually have an open API that we can access. The other thing that I think separates us from the TransitScreen service, and I think they've rebranded now, is that they don't tend to have maps or contextual maps on their screens. They tend to be very linear, in terms of saying information about this particular service type is available at this particular place, and that it's 500 meters away, where you have to go and work out which direction it is. Whereas in London, we've got what's called the Legible London wayfinding scheme. Across London, you find all these totems, which are just flat totems, they're not real-time information, but they've got localized maps with all the local highlights on them. So there's a sort of native way of expecting maps and how they should appear to people as they're moving through the built environment that we've tried to replicate. Ultimately, what we'd like to do is to take over those totems and convert them from being static information locations to being real-time digital totems with wayfinding, public transport information, and other information as well.

I suspect the barriers there are steady advances in e-paper, as that gets better, versus using LCD or things like that that require a lot of energy to be visible in daylight.

Chris Johns: Yeah, I think you hit the nail on the head there by saying really the issue is cost and technology. There are hundreds of Legible London totems around London. Not all of them have power nearby, and the cost to convert each and every one of them would be very substantial. But as technology advances and things become cheaper, and solar power and other lower-energy options come into play, that's where we're hoping there's an opportunity.

So I think I saw you guys have your offices or technical location in the Battersea area. If the Battersea Power Station, which is now a kind of multi-use mall and other things, wanted to put your content on a large screen in their main access areas, what would they need to do? What's involved?

Chris Johns: It's really quite straightforward. They just need to install a screen of any particular size; it can be small or super large. We put a 75-inch screen into an office complex, Paternoster Square, just a week or so ago. But you can go for pretty much any size screen. The larger ones tend to be ethernet-connected rather than Wi-Fi connected. As long as that screen has browser capability, then we can deploy a digital sign onto it. And it will be suitable for displaying both small scale and large scale. So you could have it within a stadium. If you've been to the power station, they've got the huge sort of warehouse-y style engine rooms there, which are now full of shops, but you could put one at the end of one of those engine rooms and it would look fantastic.

Yeah. I was there three or four months ago. It's a great reworking of that building. Outside, they could really use wayfinding, but that's somebody else's problem.

Chris Johns: Yeah. Also, there are boats there as well. Uber has taken over the boats in London, so unfortunately they no longer provide data onto the TFL data feed. We're trying to work with them to get data from them, but at the moment they're not included within the TFL API feed.

If I'm understanding this correctly, there's a URL per geo-specific site.

Chris Johns: That's right.

And if it was a digital sign in a building that was also showing, if we're using the Battersea Power Station as an example, also showing sales promotions for some of the retail tenants, could your information be scheduled in, or does it need to be on there full time?

Chris Johns: No, it doesn't need to be full-time. Obviously, we're very aware that digital screens need to pay for themselves, and often that's through advertising. Our content can be part of a playlist and run for 15-20 seconds every 40 seconds, or whatever the host decides is best. We're working on another project at the moment which is actually something very similar to that, whereby the content will rotate with other content about walking routes, heritage, and other information relevant to a particular place. Because obviously, public transport information is not the only thing that's of interest to people as they're moving through the built environment. But it's one of the time-sensitive things that is important to them.

Because it's web-based information, is it responsive?

Chris Johns: Yeah, we do smartphone-friendly signs as well, but usually they're going to be QR code based. So someone will scan a QR code, and then it will open up a smartphone- or other personal-device-friendly version. Some of the other signs we've designed are particularly for larger-format digital signage screens.

So what I've seen examples of was a portrait-mode screen, but you could do a landscape screen, no problem?

Chris Johns: Oh yeah. We've got loads of them. It's roughly 50-50 at the moment in terms of deployment between landscape and portrait. I don't really have a preference. I think they look good. I think the one we put in last week into Paternoster Square was a portrait, and I think it looks really quite nice in portrait style.

And have you done the design and everything to mirror or parrot the Transport for London colors and so on?

Chris Johns: We've built it to meet the TFL brand guidelines. That was very important, obviously, because we're paid by TFL, and the map is styled to look as close as possible to the Legible London guidelines, but without copying it. We use a service called Mapbox to do that, which allows us to play with the layers and the design of the layers on the maps very efficiently. And we actually did a project for Melbourne as well, Transport for Victoria in Australia, where we came up with a similar whole range of concepts for Melbourne, again using their sort of legible Melbourne guidelines, or Transport for Victoria guidelines, with their branding and their mapping as well.

So is there a consulting wing to what you do as well?

Chris Johns: Basically, we can provide just consulting, but really what we're hoping to do is to build long-term relationships with transport authorities where we can deploy the platform and make the signs available across their estate and out to their community. And if that option is available to us, then we'll do the consulting bundled into a longer-term agreement with them.

But it's not fundamental to your offer?

Chris Johns: No. No, not at all.

My next question is, are you working outside of London? So you're in Australia. Are you elsewhere as well?

Chris Johns: So we're one of the winners of a global innovation tender for Transport for Victoria, and we developed a whole range of concepts for them. Unfortunately, their data wasn't quite in a state as yet to enable the concepts to be deployed. So that one is very much watch this space. We've also had discussions with others, both in Europe and also in North America. We're quite keen on working internationally. I think on the international side, we're much better when we work with a bigger technology partner. Usually with transport authority tenders, they put them out there and there are big organizations which pitch for them. We're typically too small to pitch for them, but we can go in with those larger organizations and bring that element of innovation and entrepreneurialism and some design to give them an extra edge in their tender over and above everyone else.

So you might be going in with an IBM or somebody like that?

Chris Johns: Yeah. The big one in America is VIX Technology, and they're a nice bunch of guys. But we've also partnered previously with Trapeze, which is in the UK. And also, there's one in the UK who we work with very well called True Form Engineering. We've done stuff with them both in London and outside of London as well.

You mentioned at the start that you're working with the London authority, which has a world reputation for its data API and everything else. And you also mentioned that Melbourne isn't quite at the same level. Is that a big challenge when you look at other jurisdictions?

Chris Johns: Yeah, totally. Basically, the world is changing, and it's changing very rapidly. The data is becoming less of a problem. But one of the problems that remains is the cost of data, which means that actually using our service may be prohibitive for smaller towns or organizations outside of London. With the TFL API, we have free access to that, but if it was outside of London, for example in Bristol, then we would have to partner with a third-party data provider. And there are a small number of those that can provide that service. But it's not free, and their costs are extensive. And then we have to layer our costs on top of that, and it may be that the transport authority looks at that and says, we can't do that sort of cost at the moment. Indeed, Bristol actually used to have their own API and then took it offline, because they said, we can't justify the cost of maintaining this open API strategy. Which to my mind is insane, because surely the biggest way of generating demand for a public transport authority is telling people what services are there. And you can only do that if you've got real-time information. So if you suddenly say to all the developers, and even your own services, we're not going to have an API anymore, it just means that you're going to have a natural impact on demand.

I don't know if this is a simple answer or way too involved to even get into, but I'm curious: if I'm a transport authority, let's say in Kansas City, Missouri, Winnipeg, Manitoba, or Munich, Germany, what do you need in terms of the shape and structure of data to make this workable?

Chris Johns: It's what we call a JSON API, and then documentation around it, and we'll take it from there. Most of the APIs follow a common standard these days, and we can work with any of them, really. We've not done any multi-language digital sign designs as yet, so we do need to consider the elements of user experience; trying to work in something like Japanese, for example, would be challenging for us at the moment, because we'd have to consider how they interpret information, which is different to how we might interpret information in the UK. But somewhere like Missouri or Munich would be fairly straightforward for us.

Okay. So if people want to know more about your organization, where do they find you?

Chris Johns: The best thing to do is to look at our website, which is at passage-way.com, or connect with me on LinkedIn. I'm quite chatty on LinkedIn, and I post a fair amount, and the company is on LinkedIn as well.

That's how I found you.

Chris Johns: Yeah, and the more the merrier, really.

All right. Chris, thank you very much for spending some time with me.

Chris Johns: Thank you. Have a great day.
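To make "a JSON API and then documentation around it" concrete: below is a small Python sketch of consuming a real-time arrivals feed like the ones discussed in this interview. The field names (`lineName`, `destinationName`, `timeToStation`) are modeled loosely on the kind of data TfL's open API exposes, but treat them as illustrative assumptions rather than a faithful copy of any real schema; the sample payload is invented.

```python
import json

# Invented sample of arrival predictions, loosely in the style of a
# transit authority's open JSON API. Field names are assumptions.
RAW = json.dumps([
    {"lineName": "73", "destinationName": "Victoria", "timeToStation": 120},
    {"lineName": "38", "destinationName": "Clapton Pond", "timeToStation": 45},
])

def departure_board(raw: str) -> list[str]:
    """Sort predictions by time to arrival and render one line per service,
    the core of the presentation layer a sign like PassageWay's provides."""
    preds = sorted(json.loads(raw), key=lambda p: p["timeToStation"])
    return [
        f"{p['lineName']} to {p['destinationName']}: {p['timeToStation'] // 60} min"
        for p in preds
    ]

for line in departure_board(RAW):
    print(line)
# 38 to Clapton Pond: 0 min
# 73 to Victoria: 2 min
```

The point Chris makes holds here: once the feed is well-documented JSON, the hard work is presentation and context, not data access.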

Smart Software with SmartLogic
HTTP Requests in Elixir vs. JavaScript with Yordis Prieto & Stephen Chudleigh

Smart Software with SmartLogic

Play Episode Listen Later Oct 26, 2023 50:29


In today's episode, Sundi and Owen are joined by Yordis Prieto and Stephen Chudleigh to compare notes on HTTP requests in Elixir vs. Ruby, JavaScript, Go, and Rust. They cover common pain points when working with APIs, best practices, and lessons that can be learned from other programming languages. Yordis maintains Elixir's popular Tesla HTTP client library and shares insights from building APIs and maintaining open-source projects. Stephen has experience with Rails and JavaScript, and now works primarily in Elixir. They offer perspectives on testing HTTP requests and working with different libraries. While Elixir has matured, there is room for improvement - especially around richer struct parsing from HTTP responses. The discussion highlights ongoing efforts to improve the developer experience for HTTP clients in Elixir and other ecosystems.

Topics Discussed in this Episode
- HTTP is a protocol - but each language has different implementation methods
- Tesla represents requests as middleware that can be modified before sending
- Testing HTTP requests can be a challenge due to dependence on outside systems
- GraphQL, OpenAPI, and JSON API provide clear request/response formats
- Elixir could improve richer parsing from HTTP into structs
- Focus on contribution ergonomics lowers barriers for new participants
- Maintainers emphasize making contributions easy via templates and clear documentation
- APIs drive adoption of standards for client/server contracts
- They discuss GraphQL, JSON API, OpenAPI schemas, and other standards that provide clear request/response formats
- TypeScript brings types to APIs and helps to validate responses
- Yordis notes that Go and Rust make requests simple via tags for mapping JSON to structs
- Language collaboration shares strengths from different ecosystems and inspires new libraries and tools for improving the programming experience

Links Mentioned
- Elixir-Tesla Library: https://github.com/elixir-tesla/tesla
- Yordis on GitHub: https://github.com/yordis
- Yordis on Twitter: https://twitter.com/alchemist_ubi
- Yordis on LinkedIn: https://www.linkedin.com/in/yordisprieto/
- Yordis on YouTube: https://www.youtube.com/@alchemistubi
- Stephen on Twitter: https://twitter.com/stepchud
- Stephen's projects on consciousness: https://harmonicdevelopment.us
- Owen suggests: Http.cat
- HTTParty: https://github.com/jnunemaker/httparty
- Guardian Library: https://github.com/ueberauth/guardian
- Axios: https://axios-http.com/
- Straw Hat Fetcher: https://github.com/straw-hat-team/nodejs-monorepo/tree/master/packages/%40straw-hat/fetcher
- Elixir Tesla Wiki: https://github.com/elixir-tesla/tesla/wiki
- HTTPoison: https://github.com/edgurgel/httpoison
- Tesla Testing: https://hexdocs.pm/tesla/readme.html#testing
- Tesla Mock: https://hexdocs.pm/tesla/Tesla.Mock.html
- Finch: https://hex.pm/packages/finch
- Mojito: https://github.com/appcues/mojito
- Erlang Libraries and Frameworks Working Group: https://github.com/erlef/libs-and-frameworks/ and https://erlef.org/wg/libs-and-frameworks

Special Guests: Stephen Chudleigh and Yordis Prieto.
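The episode notes that Go and Rust make JSON-to-struct mapping simple via struct tags and derives. As a rough analogue in Python (the `Repo` type and payload below are invented for illustration, not from any real API), a dataclass can play the role of the typed struct the hosts wish Elixir parsed responses into:

```python
import json
from dataclasses import dataclass

# Hypothetical JSON payload; field names are illustrative only.
RAW = '{"id": 1, "name": "tesla", "stars": 2000}'

@dataclass
class Repo:
    id: int
    name: str
    stars: int

    @classmethod
    def from_json(cls, text: str) -> "Repo":
        """Map a JSON object onto a typed structure, similar in spirit
        to Go's struct tags or Rust's serde derives."""
        return cls(**json.loads(text))

repo = Repo.from_json(RAW)
print(repo.name)  # tesla
```

Once decoded, the rest of the program works with `repo.stars` as an `int` rather than digging through nested dicts, which is the ergonomic win the guests describe.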

North Meets South Web Podcast
The one with all the JSON API stuff with TJ Miller
Oct 12, 2023 · 46:36
Jake and Michael are joined by TJ Miller to try and untangle their confusion about JSON API, OpenAPI, Swagger, and JSON Schema from last episode.

This episode is brought to you by our friends at Workvivo - The leading employee communication app.

Show links
Generate API Documentation for Laravel with Scramble
OpenAPI
JSON Schema
JSON:API
Swagger
Joe Tannenbaum going full Norton Commander with Laravel Prompts
Remote Procedure Call (RPC)
spatie/laravel-data
Pact
Stoplight
Redoc
SwaggerHub
MuleSoft
Apiary

North Meets South Web Podcast
DIY woodwork, React micro-frontends, and confusing OpenJSONAPISchema
Sep 28, 2023 · 40:23
In this episode, Jake and Michael discuss building your own monitor stand, the mysterious world of React micro-frontends, and get confused about JSON API, OpenAPI, Swagger, and JSON Schema.

This episode is brought to you by our friends at Workvivo - The leading employee communication app.

Show links
DIY monitor stand
Micro-frontends
Module federation
JSON:API
OpenAPI vs JSON:API
JSON:API, OpenAPI, and JSON Schema working in harmony
sixlive/json-schema-assertions

Whiskey Web and Whatnot
SST, AWS, and Ember with Dax Raad
Sep 28, 2023 · 53:20
Dax Raad, Founder of Bumi and Ironbay and SST Core Maintainer, is a passionate open-source developer who knows his way around the startup tech space with over a decade of experience under his belt. He is intimately involved in the Serverless Stack Toolkit (SST) and sheds some light on what it's all about.

Dax reveals the story behind the inception of SST and its unique role in the software development ecosystem. He explores how SST is revolutionizing the way developers approach serverless applications by streamlining deployment on AWS while also focusing on developer experience. Dax also touches on the integration of Next.js and how SST has become an essential tool for deploying Next.js applications on AWS seamlessly.

The discussion shifts gears to the world of cloud computing, where AWS is the big kingpin. Dax explains how being the first big player gives AWS a huge advantage in terms of money and customers. Other companies like Google Cloud and Azure have a hard time catching up because of AWS' head start.

In this episode, Dax talks to Robbie and Chuck about his experience in early-stage startups and open-source projects, SST's role in simplifying AWS development, and how JSON API and Ember.js are changing the landscape of web development.

Key Takeaways
[00:32] - Intro to Dax Raad.
[01:35] - A whiskey review: Belle Meade Sour Mash Straight Whiskey.
[11:04] - Tech hot takes.
[18:46] - When Dax got involved in the SST project.
[25:19] - Why businesses build on top of AWS.
[30:35] - The relationship between Next.js and the SST project.
[36:50] - Dax's experience using Ember.js.
[41:49] - The career Dax would be in if he wasn't in tech.
[43:55] - Chuck and Dax discuss Lionel Messi being in Miami.

Quotes
[25:43] - "I don't believe you can catch up with a company that started before you in the cloud business." ~ Dax Raad
[33:08] - "It is extremely tedious. It is extremely hard to keep up with intentional changes that Vercel and Next.js make but also breakages that they do accidentally." ~ Dax Raad
[33:43] - "The vast majority of Next.js users, Next.js isn't the thing they live and die by." ~ Dax Raad

Links
Dax Raad
Dax Raad Twitter
Dax Raad LinkedIn
AWS Twitter
Belle Meade Sour Mash Straight Whiskey
Twizzlers
Taco Bell
Maker's Mark
Jim Beam
Jack Daniel's Tennessee Whiskey
Pappy Van Winkle
Tailwind CSS
Stitches
HTMX
Astro
Sentry
React
Google Cloud
NPM
Ryan Carniato
Disney
Next JS
Vercel
Ember JS
JSON
GraphQL
Slack
Discord
Orbit JS
Rails
jQuery
Major League Soccer
Inter Miami CF

Connect with our hosts
Robbie Wagner
Chuck Carpenter
Ship Shape

Subscribe and stay in touch
Apple Podcasts
Spotify
Google Podcasts
Whiskey Web and Whatnot

Top-Tier, Full-Stack Software Consultants
This show is brought to you by Ship Shape. Ship Shape's software consultants solve complex software and app development problems with top-tier coding expertise, superior service, and speed. In a sea of choices, our senior-level development crew rises above the rest by delivering the best solutions for fintech, cybersecurity, and other fast-growing industries. Check us out at shipshape.io.

Send in a voice message: https://podcasters.spotify.com/pod/show/whiskey-web-and-whatnot/message

The Bike Shed
393: Is REST the Best? APIs and Domain Modeling
Jul 18, 2023 · 33:49
It's updates on the work front today! Stephanie was tasked with removing a six-year-old feature flag from a codebase. Joël's been doing a lot of small database migrations. A listener question sparked today's main discussion on gerunds' interesting relationship to data modeling.

Episode 386: Value Objects Revisited: The Tally Edition (https://www.bikeshed.fm/386)
RailsConf 2017: In Relentless Pursuit of REST by Derek Prior (https://www.youtube.com/watch?v=HctYHe-YjnE)
REST Turns Humans Into Database Clients (https://chrislwhite.com/rest-contortion/)
Parse, don't validate (https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-validate/)
Wikipedia Getting to Philosophy (https://en.wikipedia.org/wiki/Wikipedia:Getting_to_Philosophy)

Transcript: JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way. JOËL: So, Stephanie, what's new in your world? STEPHANIE: So, this week, I've been tasked with something that I've been finding very fun, which is removing a six-year-old feature flag from the codebase that is still very much in use in the sense that it is actually a mechanism for providing customers access to a feature that had been originally launched as a beta. And that was why the feature flag was introduced. But in the years since, you know, the business has shifted to a model where you have to pay for those features. And some customers are still hanging on to this beta feature flag that lets them get the features for free. So one of the ways that we're trying to convert those people to be paying for the feature is to, you know, gradually remove the feature flag and maybe, you know, give them a heads up that this is happening. I'm also getting to improve the codebase with this change as well because it has really been propagating [laughs] in there.
There wasn't necessarily a single, I guess, entry point for determining whether customers should get access to this feature through the flag or not. So it ended up being repeated in a bunch of different places because the feature set has grown. And so, now we have to do this check for the flag in several places, like, different pages of the application. And it's been really interesting to see just how this kind of stuff can grow and mutate over several years. JOËL: So, if I understand correctly, there's kind of two overlapping conditions now around this feature. So you have access to it if you've either paid for the feature or if you were a beta tester. STEPHANIE: Yeah, exactly. And the interesting thought that I had about this was it actually sounds a lot like the strangler fig pattern, which we've talked about before, where we've now introduced the new source of data that we want to be using moving forward. But we still have this, you know, old limb or branch hanging on that hasn't quite been removed or pruned off [chuckles] yet. So that's what I'm doing now. And it's nice in the sense that I can trust that we are already sending the correct data that we want to be consuming, and it's just the cleanup part. So, in some ways, we had been in that half-step for several years, and they're now getting to the point where we can finally remove it. JOËL: I think in kind of true strangler fig pattern, you would probably move all of your users off of that feature flag so that the people that have it active are zero, at which point it is effectively dead code, and then you can remove it. STEPHANIE: Yeah, that's a great point. And we had considered doing that first, but the thing that we had kind of come away with was that removing all of those customers from that feature flag would probably require a script or, you know, updating the production data. And that seemed a bit riskier actually to us because it wasn't as reversible as a code change. 
JOËL: I think you bring up a really interesting point, which is that production data changes, in general, are just scarier than code changes. At least for me, it feels like it's fairly easy generally to revert a code change. Whereas if I've messed up the production database, [laughs] that's going to be unpleasant few days. STEPHANIE: What's interesting is that this feature flag is not really supported by a nice user interface for managing it. And so, we inevitably had to do a more developer-focused solution to remove these customers from being able to access this feature. And so, the two options, you know, that we had available were to do it through data, like I mentioned, or do it through that code change. And again, I think we evaluated both options. But what's kind of nice about doing it with the code change is that when we eventually get to delete those feature flag records, it will be really nice and easy. JOËL: That's really exciting. One thing that's different about kind of more mature projects is that we often get to do some kind of change management, unlike a greenfield app where you just get to, oh, let's introduce this new thing, cool. Oftentimes, on a more mature project, before you introduce the new thing, you have to figure out, like, what is the migration path towards that? Is that a kind of work that you enjoy? STEPHANIE: I think this was definitely an exercise in thinking about how to break this down into steps. So, yeah, that change management process you mentioned, I, like, did find a lot of satisfaction in trying to break it up, you know, especially because I was also thinking that you know, maybe I am not able to see the complete, like, cleanup and removal, and, like, where can someone pick up after me? In some ways, I feel like I was kind of stepping into that migration, you know, six years [laughs] in the making from beta to the paid product. 
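The scattered flag check Stephanie describes can be consolidated behind a single entry point so the eventual removal touches one method rather than every page. A minimal sketch in plain Ruby (all names here are hypothetical, and a real app would back this with Rails models rather than keyword arguments):

```ruby
# Hypothetical sketch: one entry point merging the legacy beta flag
# with the paid plan, so deleting the flag later is a one-line change.
class FeatureAccess
  def initialize(beta_flag_enabled:, paid_plan:)
    @beta_flag_enabled = beta_flag_enabled # legacy beta flag, slated for removal
    @paid_plan = paid_plan                 # the source of truth going forward
  end

  # Every page asks this one question instead of re-checking the flag itself.
  def feature_enabled?
    @paid_plan || @beta_flag_enabled
  end
end

beta_customer = FeatureAccess.new(beta_flag_enabled: true, paid_plan: false)
free_customer = FeatureAccess.new(beta_flag_enabled: false, paid_plan: false)

puts beta_customer.feature_enabled? # true (still grandfathered in)
puts free_customer.feature_enabled? # false
```

With the check in one place, the strangler-fig cleanup is just deleting the `@beta_flag_enabled` branch once no customers depend on it.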
But I think I will feel really satisfied if I'm able to see this thing through and get to celebrate the success of saying, hey, like, I removed...at this point, it's a few hundred lines of code. [laughs] And also, you know, with the added business value of encouraging more customers to pay for the product. But I think I also I'm maybe figuring out how to accept like, okay, like, how could I, like, step away from this in the middle and be able to feel good that I've left it in a place that someone else could see through? JOËL: So you mentioned you're taking this over from somebody else, and this has been kind of six years in the making. I'm curious, is the person who introduced this feature flag six years ago are they even still at the company? STEPHANIE: No, they are not, which I think is pretty typical, you know, it's, like, really common for someone who had all that context about how it came to be. In fact, I actually didn't even realize that the feature flag was the original beta version of the product because that's not what it's called. [laughs] And it was when I was first onboarding onto this project, and I was like, "Hey, like, what is this? Like, why is this still here?" Knowing that the canonical, you know, version that customers were using was the paid version. And the team was like, "Oh, yeah, like, that's this whole thing that we've been meaning to remove for a long time." So it's really interesting to see the lifecycle, like, as to some of this code a little bit. And sometimes, it can be really frustrating, but this has felt a little more like an archaeology dig a little bit. JOËL: That sounds like a really interesting project to be on. STEPHANIE: Yeah. What about you, Joël, what's new in your world? JOËL: So, on my project, I've been having to do a lot of small database migrations. So I've got a bunch of these little features to do that all involve doing database migrations. They're not building on each other. 
So I'm just doing them all, like, in different feature branches, and pushing them all up to GitHub to get reviewed, kind of working on them in parallel. And the problem that happens is that when you switch from one branch where you've run a migration to another and then run migrations again, some local database state persists between the branch switch, which means that when you run the migrations, then this app uses a structure.sql. And the structure.sql has a bunch of extra junk from other branches you've been on that you don't want as part of your diff. And beyond, like, two or three branches, this becomes an absolute mess. STEPHANIE: Oh, I have been there. [laughs] It's always really frustrating when I switch branches and then try to do my development and then realize that I have had my leftover database changes. And then having to go back and then always forgetting what order of operations to do to reverse the migration and then having to re-migrate. I know that pain very well. JOËL: Something I've been doing for this project is when I switch branches, making sure that my structure SQL is checked out to the latest version from the main branch. So I have a clean structure SQL then I drop my local database, recreate an empty one, and run a rake db:schema:load. And that will load that structure file as it is on the main branch into the database schema. That does not have any of the migrations on this branch run, so, at that point, I can run a rake db:migrate. And I will get exactly what's on main plus what gets generated on this branch and nothing else. And so, that's been a way that I've been able to kind of switch between branches and run database operations without getting any cross-contamination. STEPHANIE: Cross-contamination. I like that term. Have you automated this at all, or are you doing this manually? JOËL: Entirely manually. I could probably script some of this. Right now...so it's three steps, right? Drop, create, schema load. 
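The branch-switching recipe described here could be captured as a shell alias. This is a sketch only, assuming a standard Rails app where `main` holds the canonical `db/structure.sql`; the alias name is invented:

```shell
# Hypothetical alias for ~/.bashrc or ~/.zshrc: restore the committed
# structure.sql from main, rebuild the database from it, then run only
# the current branch's migrations.
alias dbfresh='git checkout main -- db/structure.sql \
  && rake db:drop db:create \
  && rake db:schema:load \
  && rake db:migrate'
```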
I just have them in one command because you can chain Unix commands with a double ampersand. So that's what I'm doing right now. I want to say there's a db:reset task, but I think that it uses migrate rather than schema load. And I don't want to actually run migrations. STEPHANIE: Yeah, that would take longer. That's funny. I do love the up arrow key [laughs] in your terminal for, you know, going back to the thing you're running over and over again. I also appreciate the couple extra seconds that you're spending in waiting for your database to recreate. Like, you're paying that cost upfront rather than down the line when you are in the middle of doing [laughs] what you're trying to do and realize, oh no, my database is not in the state that I want it to be for this branch. JOËL: Or I'm dealing with some awful git conflict when trying to merge some of these branches. Or, you know, somebody comments on my PR and says, "Why are you touching the orders table? This change has nothing to do with orders." I'm like, "Oh, sorry, that actually came out of a different thing that I did." So, yep, keeping those diffs small. STEPHANIE: Nice. Well, I'm glad that you found a way to manage it. JOËL: So you mentioned the up arrow key and how that's really nice in the terminal. Something that I've been relying on a lot recently is reverse history search, CTRL+R in the terminal. That allows me to, instead of, like, going one by one in order of the history, filter for something that matches the thing that I've written. So, in this case, I'll hit CTRL+R, type, you know, Rails DB or whatever, then immediately it shows me, oh, did you want this long command? Hit enter, and I'm done. Even if I've done, you know, 20 git commands between then and the last time I ran it. STEPHANIE: Yeah, that's a great tip. So, a few weeks ago, we received a listener question from John, and he was responding to an episode where I'd asked about what the grammatical term is for verbs that are also nouns. 
He told us about the phrase, a verbal noun, for which there's a specific term called gerund, which is basically, in English, the words ending in ING. So, the gerund version of bike would be biking. And he pointed out a really interesting relationship that gerunds have to data modeling, where you can use a gerund to model something that you might describe as a verb, especially as a user interaction, but can be turned into a noun to form a resource that you might want to introduce CRUD operations for in your application. So one example that he was telling us about is the idea of maybe confirming a reservation. And, you know, we think of that as an action, but there is also a noun form of that, which is a confirmation. And so, confirmation could be a new resource, right? It could even be backed at the database level. And now you have a simpler way of representing the idea of confirming a reservation that is more about the confirmation as the resource itself rather than some kind of append them to a reservation itself. JOËL: That's really cool. We get to have a crossover between grammar terms and programming, and being able to connect those two is always a fun day for me. STEPHANIE: Yeah, I actually find it quite difficult, I think, to come up with noun forms of verbs on my own. Like, I just don't really think about resources that way. I'm so used to thinking about them in a more tangible way, I suppose. And it's really kind of cool that, you know, in the English language, we have turned these abstract ideas, these actions into, like, an object form. JOËL: And this is particularly useful when we're trying to design RESTful either APIs or even just resources for a Rails app that's server-rendered so that instead of trying to create all these, like, extra actions on our controller that are verbs, we might decide to instead create new resources in the system, new nouns that people can do the standard 7 to. STEPHANIE: Yes. 
I like that better than introducing custom controller actions or routes that deviate from RESTful conventions because, you know, I probably have seen a slash confirm reservation [laughs] URL. And, you know, this is, I think, an interesting way of avoiding having too many of those deviating endpoints. JOËL: Yeah, I found that while Rails does have support for those, just all the built-in things play much more nicely if you're restricting yourself to the classic seven. And I think, in general, it's easier to model and think about things in a Rails app when you have a lot of noun resources rather than one giant controller with a bunch of kind of verb actions that you can do to it. In the more formal jargon, I think we might refer to that as RESTful style versus RPC style, a Remote Procedure Call. STEPHANIE: Could you tell me more about Remote Procedure Calls and what that means? JOËL: The general idea is that it's almost like doing a method call on an object somewhere. And so, you would say, hey, I've got an account, and I want to call the confirm method on it because I know that maybe underlying this is an ActiveRecord account model. And the API or the web UI is just a really thin layer over those objects. And so, more or less, whatever your methods on your object are, can be accessed through the API. So the two kind of mirror each other. STEPHANIE: Got it. That's interesting because I can see how someone might want to do that, especially if, you know, the account is the domain object they're using at the, you know, persistence layer, and maybe they're not quite able to see an abstraction for something else. And so, they kind of want to try to fit that into their API design. JOËL: So I have a perhaps controversial opinion, which is that the resources in your Rails application, so your controllers, shouldn't map one-to-one with your database tables, your models. 
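The confirmation example John sent in can be sketched without any framework at all. In plain Ruby, with hypothetical names, the verb "confirm" becomes a Confirmation resource that you create rather than a custom action you call:

```ruby
# Hypothetical sketch of the "verbal noun" idea: the verb "confirm"
# recast as a Confirmation resource, replacing an RPC-style
# POST /reservations/:id/confirm with a RESTful POST /confirmations.
Reservation = Struct.new(:id, :confirmed)

class Confirmation
  attr_reader :reservation, :created_at

  # Creating the noun is what performs the verb: one standard "create"
  # instead of a bespoke endpoint bolted onto reservations.
  def initialize(reservation)
    @reservation = reservation
    @created_at = Time.now
    reservation.confirmed = true
  end
end

reservation = Reservation.new(42, false)
Confirmation.new(reservation)
puts reservation.confirmed # true
```

In a Rails app this would map naturally onto a ConfirmationsController with only a create action, keeping the controller layer within the classic seven.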
STEPHANIE: So, are you saying that you are more likely to have more abstractions or various resources than what you might have at the database level? JOËL: Well, you know what? Maybe more, but I would say, in general, different. And I think because both layers, the controller layer, and the model layer, are playing with very different sets of constraints. So when I'm designing database tables, I'm thinking in terms of normalization. And so, maybe I would take one big concept and split it up into smaller concepts, smaller tables because I need this data to be normalized so that there's no ambiguity when I'm making queries. So maybe something that's one resource at the controller layer might actually be multiple tables at the database layer. But the inverse could also be true, right? You might have, in the example that John gave, you know, an account that has a single table in the database with just a Boolean field confirmed yes or no. And maybe there's just a generic account resource. But then, separately, there's also a confirmation resource. And so, now we've got more resources at the controller layer than at the database layer. So I think it can go either way, but they're just not tightly coupled to each other. STEPHANIE: Yeah, that makes sense. I think another way that I've seen this manifest is when, like you said, like, maybe multiple database tables need to be updated by, you know, a request to this endpoint. And now we get into [chuckles] what some people may call services or that territory of basically something. And what's interesting is that a lot of the service classes are named as verbs, right? So order, creator. And, like, whatever order of operations that needs to happen on multiple database objects that happens as a result of a user placing an order. But the idea that those are frequently named as verbs was kind of interesting to me and a bit of a connection to our new gerund tip. JOËL: That's really interesting. 
I had not made that connection before. Because I think my first instinct would be to avoid a service object there and instead use something closer to a form object that takes the same idea and represents it as a noun, potentially with the same name as the resource. So maybe leaning really heavily into that idea of the verbal noun, not just in describing the controller or the route but then also maybe the object backing it, even if it's not connecting directly to a database table. STEPHANIE: Interesting. So, in this case, would the form object be mapped closer to your controller resource? JOËL: Potentially, yes. So maybe I do have some kind of, like, object that represents a confirmation and makes it nicer to render the confirmation form on the edit page or the new page. In this case, you know, it's probably just one checkbox, so maybe it's not worth creating an object. But if there were multiple fields, then yes, maybe it's nice to create an in-memory object that has the same name as the resource. Similar maybe for a resource that represents multiple underlying database tables. It can be nice to have kind of one object that represents all of them, almost like a facade, I guess. STEPHANIE: Yeah, that's really interesting. I like that idea of a facade, or it's, like, something at a higher level representing hopefully, like, some kind of meaning of all of these database objects together. JOËL: I want to give a shout-out to talk from a former thoughtboter, Derek Prior—actually, former Bike Shed host—from RailsConf 2017 called In Relentless Pursuit of REST, where he digs into a lot of these concepts, particularly how to model resources in your Rails app that don't necessarily map one to one with a database table, and why that can be a good thing. Have you seen that talk? STEPHANIE: I haven't, but I love the title of it. It's a great pun. It's very evocative, I think because I'm really curious about this idea of a relentless pursuit. 
Because I think another way to react to that could be to be done with REST entirely and maybe go with something like GraphQL. JOËL: So instead of a relentless pursuit, it's a relentless...what's the opposite of pursuing? Fleeing? STEPHANIE: Fleeing? [laughs] I like how we arrived there at the same time. Yes. So now I'm thinking of I had mentioned a little bit ago on the show we had our spicy takes Lightning Talks on our Boost Team. And a fellow thoughtboter, Chris White, he had given a talk about Why REST Is Not the Best and for -- JOËL: Also, a great title. STEPHANIE: Yes, also, a great title. JOËL: I love the rhyming there. STEPHANIE: Yeah. And his reaction to the idea of trying to conform user interactions that don't quite map to a noun or an obvious resource was to potentially introduce GraphQL, where you have one endpoint that can service really anything that you can think of, I suppose. But, in his example, he was making the argument that human interactions are not database resources, right? And maybe if you're not able to find that abstraction as a noun or object, with GraphQL, you can encapsulate those ideas as closer to actions, but in the GraphQL world, like, I think they're called mutations. But it is, I think, a whole world of, like, deciding what you want to be changed on the server side that is a little less constrained to having to come up with the right abstraction. JOËL: I feel like GraphQL kind of takes that, like, complete opposite philosophy in that instead of saying, hey, let's have, like, this decoupling between the API layer and the database, GraphQL almost says, "No, let's lean into that." And yeah, you want to traverse the graph of, like, tables under the hood? Absolutely. You get to know the tables. You get to know how they're related to each other. I guess, in theory, you could build a middle layer, and that's the graph that gets traversed rather than the graph of the tables. 
In practice, I think most people build it so that the API layer more or less has access directly to tables. Has that been your experience? STEPHANIE: That's really interesting that you brought that up. I haven't worked with GraphQL in a while, but I was reading up on it before we started recording because I was kind of curious about how it might play with what we're talking about now. But the idea that it's graphed based, to me, was like, oh, like, that naturally, it could look very much like, you know, an entity graph of your relational database. But the more I was reading about the GraphQL schema and different types, I realized that it could actually look quite different. And because it is a little bit closer to your UI layer, like, maybe you are building an abstraction that is more for serving that as that middle layer between your front end and your back end. JOËL: That's really interesting that you mentioned that because I feel like the sort of traditional way that APIs are built is that they are built by the back-end team. And oftentimes, they will reflect the database schema. But you kind of mentioned with GraphQL here, sometimes it's the opposite that happens. Instead of being driven kind of from the back towards the front, it might be driven from the front towards the back where the UI team is building something that says, hey, we need these objects. We need these connections. Can you expose them to us? And then they get access to them. What has been your experience when you've been working with front ends that are backed by a GraphQL API? STEPHANIE: I think I've tended to see a GraphQL API when you do have a pretty rich client-side application with a lot of user interactions that then need to, you know, go and fetch some data. And you, like, really, you know, obviously don't want a page reload, right? So it's really interesting, actually, that you pointed out that it's, like, perhaps the front end or the UI driving the API. 
Because, on one hand, the flexibility is really nice. And there's a lot more freedom even in maybe, like, what the product can do or how it would look. On the other hand, what I've kind of also seen is that eventually, maybe we do just want an API that we can talk to separate from, you know, any kind of UI. And, at that point, we have to go and build a separate thing [laughs] for the same data. JOËL: So we've been talking about structuring APIs and, like, boundaries and things like that. I think my personal favorite feature of GraphQL is not the graph part but the fact that it comes with a built-in schema. And that plays really nicely with some typed technologies. Particularly, I've used Elm with some of the GraphQL libraries there, and that experience is just really nice. Where it will tell you if your front-end code is not compatible with the current API schema, and it will generate some things based off the schema. So you have this really nice feedback cycle where somebody makes a change to the API, or you want to make a change to the code, and it will tell you immediately is your front end compatible with the current state of the back end? Which is a classic problem with developing front-end code. STEPHANIE: First of all, I think it's very funny that you admitted to not preferring the graph part of GraphQL as a graph enthusiast yourself. [laughs] But I think I'm in agreement with you because, like, normally, I'm looking at it in its schema format. And that makes a lot of sense to me. But what you said was really interesting because, in some ways, we're now kind of going back to the idea of maybe boundaries blurring because the types that you are creating for GraphQL are kind of then servicing both your front end and your back end. Do you think that's accurate? JOËL: Ooh. That is an important distinction. I think you can. And I want to say that in some TypeScript implementations, you do use the types on both sides. 
In Elm, typically, you would not unless there's something really primitive, like a string or something like that. STEPHANIE: Okay, how does that work? JOËL: So you have some conversion layer that happens. STEPHANIE: Got it. JOËL: Honestly, I think that's my preference, and not just at the front end versus API layer but kind of all throughout. So the shape of an object in the database should not be the same shape as the object in the business logic that runs on the back end, which should not be the same shape as the object in transport, so JSON or whatever, which is also not the same shape as the object in your front-end code. Those might be similar, but each of these layers has different responsibilities, different things it's trying to optimize for. Your code should be built, in my opinion, in a way that allows all four of those layers to diverge in their interpretation of not only what maybe common entities are, so maybe a user looks slightly different at each of these layers, but maybe even what the entities are to start with. And that maybe in the database what, we don't have a full user, we've got a profile and an account, and those get merged somehow. And eventually, when it gets to the front end, all we care about is the concept of a user because that's what we need in that context. STEPHANIE: Yeah, that's really interesting because now it almost sounds like separate systems, which they kind of are, and then finding a way to make them work also as one bigger [laughs] system. I would love to ask, though, what that conversion looks like to you. Or, like, how have you implemented that? Or, like, what kind of pattern would you use for that? JOËL: So I'm going to give a shout-out to the article that I always give a shout-out to: Parse, Don't Validate. In general, yeah, you do a transformation, and potentially it can fail. Let's say I'm pulling data from a GraphQL API into an Elm app. 
Elm has some built-in libraries for doing those transformations and will tell you at compile time if you're incorrectly transforming the data that comes from the shape that we expect from the schema. But just because the schema comes in as, like, a flat object with certain fields or maybe it's a deeply nested chain of objects in GraphQL, it doesn't mean that it has to be that way in your Elm app. So that transformation step, you get to sort of make it whatever you want. So my general approach is, at each layer, forget what other people are sending you and just design the entities that you would like to. I've heard the term wish-driven development, which I really like. So just, you know, if you could have, like, to make your life easy, what would the entities look like? And then kind of work backwards from there to make that sort of perfect world a reality for you and make it play nicely with other systems. And, to me, that's true at every layer of the application.

STEPHANIE: Interesting. So I'm also imagining that the transformation kind of has to happen both ways, right? Like, the server needs a way to transform data from the front end or some, you know, whatever, third party. But that's also true of the front end because what you're kind of saying is that these will be different. [laughs]

JOËL: Right. And, in many ways, it has to be because JSON is a very limited format. But some of the fancier things that you might have access to either on the back end or on the front end might be challenging to represent natively in JSON. And a classic one would be what Elm calls a custom type. You know, they're also called tagged unions, discriminated unions, algebraic data types. These things go by a bajillion names, and it's confusing. But they're really kind of awkward and hard, almost impossible to represent in straight-up JSON because JSON is a very limited kind of transportation format.
So you have to almost, like, have a rehydration step on one side and a kind of packing down step on the other when you're reading or writing from a JSON API.

STEPHANIE: Have you ever heard of or played that Wikipedia game Getting to Philosophy?

JOËL: I've done, I think, variations on it, the idea that you have a start and an end article, and then you have to either get through in the fewest amount of clicks, or it might be a timed thing, whoever can get to the target article first. Is that what you're referring to?

STEPHANIE: Yeah. So, in this case, I'm thinking, how many clicks through Wikipedia to get to the Wiki article about philosophy? And that's how I'm thinking about how we end up getting to [laughs] talking about types and parsing, and graphs even [laughs] on the show.

JOËL: It's all connected, almost as if it forms a graph of knowledge.

STEPHANIE: Learning, that's another common topic on the show. [laughs] I think it's great. It's a lot of interesting lenses to view, like, the same things and just digging further and further deeper into them to always, like, come away with a little more perspective.

JOËL: So, in the vein of wish-driven development, if you're starting a brand-new front-end UI, what is your sort of dream approach for working with an API?

STEPHANIE: Wish-driven development is very visceral to me because I often think about when I'm working with legacy code and what my wishes and dreams were for the, you know, the stack or the technology or whatever. But, at that point, I don't really have the power to change it. You know, it's like I have what I have. And that's different from being in the driver's seat of a greenfield application where you're not just wishing. You're just deciding for yourself. You get to choose. At the end of the day, though, I think, you know, you're likely starting from a simple application.
And you haven't gotten to the point where you have, like, a lot of features that you have to figure out how to support and, like, complexity to manage. And, you know, you don't even know if you're going to get there. So I would probably start with REST.

JOËL: So we started this episode from a very back-end perspective where we're talking about Rails, and routes, and controllers. And we kind of ended it talking from a very front-end perspective. We also contrasted kind of a more RESTful approach, versus GraphQL, versus more kind of old-school RPC-style routing. And now, I'm almost starting to wonder if there's some kind of correlation between whether someone primarily works from the back end and maybe likes, let's say, REST versus maybe somebody on the front end maybe preferring GraphQL. So I'd be happy for any of our listeners who have strong opinions preferring GraphQL, or REST, or something else; message us at hosts@bikeshed.fm and let us know. And, if you do, please let us know if you're primarily a front-end or a back-end developer because I think it would be really fun to see any connections there.

STEPHANIE: Absolutely. On that note, shall we wrap up?

JOËL: Let's wrap up.

STEPHANIE: Show notes for this episode can be found at bikeshed.fm.

JOËL: This show has been produced and edited by Mandy Moore.

STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.

JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.

STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email.

JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.

ALL: Byeeeeeee!!!!!!

ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner.
We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
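Joël's rehydration and packing-down point is easiest to see in code. Here is a minimal sketch in TypeScript (the `Payment` type and its fields are invented for illustration, not from the episode):

```typescript
// A tagged union ("custom type" in Elm terms) that plain JSON cannot express directly.
type Payment =
  | { kind: "cash" }
  | { kind: "card"; last4: string };

// Packing down: flatten the union into a JSON-friendly wire format.
function encodePayment(p: Payment): string {
  return JSON.stringify(p);
}

// Rehydration, in the "Parse, Don't Validate" spirit: reject anything
// that isn't a recognized variant at the boundary.
function decodePayment(json: string): Payment {
  const raw = JSON.parse(json);
  if (raw && raw.kind === "cash") return { kind: "cash" };
  if (raw && raw.kind === "card" && typeof raw.last4 === "string") {
    return { kind: "card", last4: raw.last4 };
  }
  throw new Error("unrecognized Payment payload");
}
```

Because the decoder only emits well-formed `Payment` values, unknown shapes fail loudly at the edge of the system instead of leaking deeper into the app.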

Side Project Spotlight
#41: Package Oriented Programming


May 8, 2023 · 60:08


This week, the trio celebrates their podcast mid-life crisis by discussing a concept coined by Daniel Steinberg in a 2022 talk, "Package Oriented Programming." How do you organize your app using Swift Package Manager packages? What are the benefits and costs? There is also some discussion on dealing with JSON API changes, using Codable vs DTOs, and strategies for caching external package dependencies for the long term. Be sure to stay until the end where Kotaro engages in some live "prompt engineering" with ChatGPT that generates some impressively bad jokes.

## Topics Discussed

- Mid-life crisis episode
- Are we buying the rumored AR headset?
- PickleJarTodo / LazyGrids are cool!
- Package Oriented Programming
- Previews and Package Oriented Programming
- Daniel Steinberg - CocoaHeadsNL, Do iOS 2022 - https://youtu.be/_5uBJeJVUm0
- Why?
- How?
- How many frameworks per SPM?
- Codable/Decodable/Encodable/DTO
- Dealing with JSON API changes
- Unit testing
- Project organization
- Circular dependencies
- Assets/Resources
- Apple Food Truck example
- Caching SPMs for the future
- https://www.sonatype.com/products/sonatype-nexus-repository
- Swift Package Index
- https://swiftpackageindex.com
- Be mindful of importing dependencies
- Wrap-Up
- Chat GPT Prompting for Jokes!

Intro music: "When I Hit the Floor", © 2021 Lorne Behrman. Used with permission of the artist.
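The "Codable vs DTOs" and "Dealing with JSON API changes" topics boil down to one pattern: decode the wire payload into a transport type, then map it to the domain model, so an API rename only touches the mapping. A rough sketch of that idea (the episode discusses it in Swift with Codable; this is the same pattern in TypeScript, with invented type and field names):

```typescript
// Transport shape (DTO): mirrors whatever the API happens to send today.
interface UserDTO {
  user_name: string;
  avatar_url?: string;
}

// Domain model: what the app actually wants to work with.
interface User {
  name: string;
  avatarUrl: string | null;
}

// The only place that knows about the wire format; an API field rename
// touches just this function, not every call site in the app.
function toUser(dto: UserDTO): User {
  return {
    name: dto.user_name,
    avatarUrl: dto.avatar_url ?? null,
  };
}
```

The cost is a little boilerplate per type; the benefit is that JSON API churn stays contained in one layer.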

Screaming in the Cloud
The Quest to Make Edge Computing a Reality with Andy Champagne


Nov 10, 2022 · 46:56


About Andy

Andy is on a lifelong journey to understand, invent, apply, and leverage technology in our world. Both personally and professionally, technology is at the root of his interests and passions.

Andy has always had an interest in understanding how things work at their fundamental level. In addition to figuring out how something works, the recursive journey of learning about enabling technologies and underlying principles is a fascinating experience which he greatly enjoys.

The early Internet afforded tremendous opportunities for learning and discovery. Andy's early work focused on network engineering and architecture for regional Internet service providers in the late 1990s – a time of fantastic expansion on the Internet.

Since he joined Akamai in 2000, the company has afforded countless opportunities for learning and curiosity through its practically limitless globally distributed compute platform. Throughout his time at Akamai, Andy has held a variety of engineering and product leadership roles, resulting in the creation of many external and internal products, features, and intellectual property.

Andy's role today at Akamai – Senior Vice President within the CTO Team – offers broad access and input to the full spectrum of Akamai's applied operations, from detailed patent filings to strategic company direction. Working to grow and scale Akamai's technology and business from a few hundred people to roughly 10,000 with a world-class team is an amazing environment for learning and creating connections.

Personally, Andy is an avid adventurer, observer, and photographer of nature, marine, and astronomical subjects. Hiking, typically in the varied terrain of New England, with his family is a common endeavor.
He enjoys compact/embedded systems development and networking with a view towards their applications in drone technology.

Links Referenced:

- Macrometa: https://www.macrometa.com/
- Akamai: https://www.akamai.com/
- LinkedIn: https://www.linkedin.com/in/andychampagne/

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: Forget everything you know about SSH and try Tailscale. Imagine if you didn't need to manage PKI or rotate SSH keys every time someone leaves. That'd be pretty sweet, wouldn't it? With Tailscale SSH, you can do exactly that. Tailscale gives each server and user device a node key to connect to its VPN, and it uses the same node key to authorize and authenticate SSH.

Basically you're SSHing the same way you manage access to your app. What's the benefit here? Built-in key rotation, permissions as code, connectivity between any two devices, reduce latency, and there's a lot more, but there's a time limit here. You can also ask users to reauthenticate for that extra bit of security. Sounds expensive?

Nope, I wish it were. Tailscale is completely free for personal use on up to 20 devices. To learn more, visit snark.cloud/tailscale. Again, that's snark.cloud/tailscale.

Corey: Managing shards. Maintenance windows. Overprovisioning. ElastiCache bills. I know, I know. It's a spooky season and you're already shaking. It's time for caching to be simpler. Momento Serverless Cache lets you forget the backend to focus on good code and great user experiences. With true autoscaling and a pay-per-use pricing model, it makes caching easy.
No matter your cloud provider, get going for free at gomomento.co/screaming. That's GO M-O-M-E-N-T-O dot co slash screaming.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. I like doing promoted guest episodes like this one. Not that I don't enjoy all of my promoted guest episodes. But every once in a while, I generally have the ability to wind up winning an argument with one of my customers. Namely, it's great to talk to you folks, but why don't you send me someone who doesn't work at your company? Maybe a partner, maybe an investor, maybe a customer. And Macrometa, who's sponsoring this episode, said, okay, my guest today is Andy Champagne, SVP at the CTO office at Akamai. Andy, thanks for joining me.

Andy: Thanks, Corey. Appreciate you having me. And appreciate Macrometa letting me come.

Corey: Let's start with talking about you, and then we'll get around to the Macrometa discussion in the fullness of time. You've been at Akamai for 22 years, which in tech company terms, it's like staying at a normal job for 75 years. What's it been like being in the same place for over two decades?

Andy: Yeah, I've got several gold watches. I've been retired twice. Nobody—you know, Akamai—so in the late-90s, I was in the ISP universe, right? So, I was in network engineering at regional ISPs, you know, kind of cutting teeth on, you know, trying to scale networks and deal with the flux of user traffic coming in from the growth of the web. And, you know, frankly, it wasn't working, right?

Companies were trying to scale up at the time by adding bigger and bigger servers, and buying literally, you know, servers, the size of refrigerators. And all of a sudden, there was this company that was coming together out in Cambridge, I'm from Massachusetts, and Akamai started in Cambridge, Massachusetts, still headquartered there. And Akamai was forming up and they had a totally different solution to how to solve this, which was amazing.
And it was compelling and it drew me there, and I am still there, 22-odd years in, trying to solve challenging problems.

Corey: Akamai is one of those companies that I often will describe to people who aren't quite as inclined in the network direction as I've been previously, as one of the biggest companies of the internet that you've never heard of. You are—the way that I think of you historically, I know this is not how you folks frame yourself these days, but I always thought of you as the CDN that you use when it really mattered, especially in the earlier days of the internet where there were not a whole lot of good options to choose from, and the failure mode that Akamai had when I was looking at it many years ago, is that, well, it feels enterprise-y. Well, what does that mean exactly because that's usually used as a disparaging term by any developer in San Francisco. What does that actually unpack to? And to my mind, it was, well, it was one of the more expensive options, which yes, that's generally not a terrible thing, and also that it felt relatively stodgy, for lack of a better term, where it felt like updating things through an API was more of a JSON API—namely a guy named Jason—who would take a ticket, possibly from Jira if they were that modern or not, and then implement it by hand. I don't believe that it is quite that bad these days because, again, this was circa 2012 that we're talking here. But how do you view what Akamai is and does in 2022?

Andy: Yeah. Awesome question. There's a lot to unpack in there, including a few clever jabs you threw in. But all good.

Corey: [laugh].

Andy: [laugh]. I think Akamai has been through a tremendous, tremendous series of evolutions on the internet. And really the one that, you know, we're most excited about today is, you know, earlier this year, we kind of concluded our acquisition of Linode.
And if we think about Linode, which brings compute into our platform, you know, ultimately Akamai today is a compute company that has a security offering and has a delivery offering as well. We do more security than delivery, so you know, delivery is kind of something that was really important during our first ten or twelve years, and security during the last ten, and we think compute during the next ten.

The great news there is that if you look at Linode, you can't really find a more developer-focused company than Linode. You essentially fall into a virtual machine, you may accidentally set up a virtual machine inadvertently it's so easy. And that is how we see the interface evolving. We see a compute-centric interface becoming standard for people as time moves on.

Corey: I'm reminded of one of those ancient advertisements, I forget, I think it would have been Sun that put it out, where the network is the computer or the computer is the network. The idea that a computer sitting by itself unplugged was basically just this side of useless, whereas a bunch of interconnected computers was incredibly powerful. That, today in 2022, sounds like an extraordinarily obvious statement, but it feels like this is sort of a natural outgrowth of that, where, okay, you've wound up solving the CDN piece of it pretty effectively. Now, you're expanding out into, as you say, compute through the Linode acquisition and others, and the question I have is, is that because there's a larger picture that's currently unfolding, or is this a scenario where well, we nailed the CDN side of the world, well, on that side of the universe, there's no new worlds left to conquer. Let's see what else we can do. Next, maybe we'll start making toasters.

Andy: Bunch of bored guys in Cambridge, and we're just like, “Hey, let's go after compute. We don't know what we're doing.” No. There's a little bit more—

Corey: Exactly. “We have money and time.
Let's combine the two and see what we can come up with.”

Andy: [laugh]. Hey, folks, compute: it's the new thing. No, it's more than that. And you know, Akamai has a very long history with the edge, right? And Akamai started—and again, arrogantly saying, we invented the concept of the edge, right, out there in '99, 2000, deploying hundreds and then to thousands of different locations, which is what our CDN ran on top of.

And that was a really new, novel concept at the time. We extended that. We've always been flirting with what is called edge computing, which is how do we take pieces of application logic and move them from a centralized point and move them out to the edge. And I mean, cripes, if you go back and Google, like, ‘Akamai edge computing,' we were working on that in 2003, which is a bit like ancient history, right? And we are still on a quest.

And literally, we think about it in the company this way: we are on a quest to make edge computing a reality, which is how do you take applications that have centralized chokepoints? And how do you move as much of those applications as possible out to the edge of the network to unblock user performance and experience, and then see what folks—developers—can enable with that kind of platform?

Corey: For me, it seems that the rise of AWS—which is, by extension, the rise of cloud—has been, okay, you wind up building whatever you want for the internet and you stuff it into an AWS region, and oh, that's far away from your customers and/or your entire architecture is terrible so it has to make 20 different calls to the data center in series rather than in parallel. Great, how do we reduce the latency as much as possible? And their answer has largely seemed to be, ah, we'll build more regions, ever closer to you. One of these days, I expect to wake up and find that there's an announcement that they're launching a new region in my spare room here. It just seems to get closer and closer and closer.
You look around, and there's a cloud construction crew stalking you to the mall and whatnot. I don't believe that is the direction that the future necessarily wants to be going in.

Andy: Yeah, I think there's a lot there. And I would say it this way, which is, you know, having two-ish dozen uber-large data centers is probably not the peak technology of the internet, right? There's more we need to do to be able to get applications truly distributed. And, you know, just to be clear, I mean, Amazon AWS's done amazing stuff, they've projected phenomenal scale and they continue to do so. You know, but at Akamai, the problem we're trying to solve is really different than how do we put a bunch of stuff in a small number of data centers?

It's, you know, obviously, there's going to be a centralized aspect, but there also need to be incredibly integrated and seamless moves through a gradient of compute, where hey, maybe you're in a very large data center for your AI/ML, kind of, you know, offline data lake type stuff. And then maybe you're in hundreds of locations for mid-tier application processing, and, you know, reconciliation of databases, et cetera. And then all the way out at the edge, you know, in thousands of locations, you should be there for user interactivity. And when I say user interactivity, I don't just mean, you know, read-only, but you've got to be able to do a read-write operation in synchronous fashion with the edge. And that's what we're after is building ultimately a platform for that and looking at tools, technology, and people along the way to help us with it.

Corey: I've built something out, my lasttweetinaws.com threading Twitter client, and that's… it's fine. It's stateless, but it's a little too intricate to effectively run in the Lambda@Edge approach, so using their CloudFront offering is simply a non-starter.
So, in order to get low latency for people using it around the world, I now have to deploy it simultaneously to 20 different AWS regions.

And that is, to be direct, a colossal pain in the ass. No one is really doing stuff like that, that I can see. I had to build a whole lot of custom tooling just to get a CI/CD system up and working. Their strong regional isolation is great for containing blast radii, but obnoxious when you're trying to get something deployed globally. It's not the only way.

Combine that with the reality that ingress data transfer to any of their regions is free—generally—but sending data to the internet is a jewel beyond price because all my stars, that is egress bandwidth; there is nothing more valuable on this planet or any other. And that doesn't quite seem right. Because if that were actively true, a whole swath of industries and apps would not be able to exist.

Andy: Yeah, you know, Akamai, a huge part of our business is effectively distributing egress bandwidth to the world, right? And that is a big focus of ours. So, when we look at customers that are well positioned to do compute with Akamai, candidly, the filtering question that I typically ask with customers is, “Hey, do you have a highly distributed audience that you want to engage with, you know, a lot of interactivity or you're pushing a lot of content, video, updates, whatever it is, to them?” And that notion of highly distributed applications that have high egress requirements is exactly the sweet spot that we think Akamai has, you know, just a great advantage with, between our edge platform that we've been working on for the last 20-odd years and obviously, the platform that Linode brings into the conversation.

Corey: Let's talk a little bit about Macrometa.

Andy: Sure.

Corey: What is the nature of your involvement with those folks?
Because it seems like you sort of crossed into a whole bunch of different areas simultaneously, which is fascinating and great to see, but to my understanding, you do not own them.

Andy: No, we don't. No, they're an independent company doing their thing. So, one of the fun hats that I get to wear at Akamai is, I'm responsible for our Akamai Ventures Program. So, we do our corporate investing and all this kind of thing. And we work with a wide array of companies that we think are contributing to the progression of the internet.

So, there's a bunch of other folks out there that we work with as well. And Macrometa is on that list, which is we've done an investment in Macrometa, we're board observers there, so we get to sit in and give them input on, kind of, how they're doing things, but they don't have to listen to us since we're only observers. And we've also struck a preferred partnership with them. And what that means is that as our customers are building solutions, or as we're building solutions for our customers, utilizing the edge, you know, we're really excited and we've got Macrometa at the table to help with that. And Macrometa is—you know, just kind of as a refresher—is trying to solve the problem of distributed data access at the edge in a high-performance and almost non-blocking, developer-friendly way. And that is very, very exciting to us, so that's the context in which they're interesting to our continuing evolution of how the edge works.

Corey: One of the questions I always like to ask, and it's usually not considered a personal attack when I ask the question—

Andy: Oh, good.

Corey: But it's, “Describe what the company does.” Now, at some places like the latter days of Yahoo, for example, it's very much a personal attack. But what is it that Macrometa does?

Andy: So, Macrometa provides a worldwide, high-speed distributed database that is resident on what today, you could call the edge of the network.
And the advantage here is, instead of having one SQL server sitting somewhere, or what you would call a distributed SQL Server, which is two SQL Servers sitting next to one another, Macrometa has a high-speed data store that allows you to, instead of having that centralized SQL Server, have it run natively at the edge of the network. And when you're building applications that run on the edge or anywhere, you need to try to think about how do you have the data as close to the user or to the access point as possible. And that's the problem Macrometa is after and that's what their products today solve. It's an incredibly bright team over there, a fantastic founder-CEO team, and we're really excited to be working with him.

Corey: It wasn't intentionally designed this way as a setup when I mentioned a few minutes ago, but yeah, my Twitter client works across the 20-some-odd AWS regions, specifically because it's stateless. All of the state, other than a couple of API keys at provision time, wind up living in the user's browser. If this was something that needed to retain state in any way, like, you know, basically every real application under the sun, this strategy would absolutely not work unless I wound up with some heinous form of circular replication, and then you wind up with a single region going down and everything explodes. Having a cohesive, coherent data layer that spans all of that is key.

Andy: Yeah, and you're on to the classical, you know, CompSci issue here around edge, which is if you have 100 edge regions, how do you have consistent state storage between applications running on N of those? And that is the problem Macrometa is after, and, you know, Akamai has been working on this and other variants of the edge problem for some time. We're very excited to be working with the folks at Macrometa. It's a cool group of folks. And it's an interesting approach to the technology.
And from what we've seen so far, it's been working great.

Corey: The idea of how do I wind up having persistent, scalable state across a bunch of different edge locations is not just a hard computer science problem; it's also a hard cloud economics problem, given the cost of data transit in a bunch of different directions between different providers. It turns “How much does it cost?” in most cases into a question that can only be answered by, well, let's run it for a few days and find out. Which is not usually the best way to answer some questions. Like, “Is that power socket live?” “Let's touch it and find out.” Yeah, there are ways you learn that are extraordinarily painful.

Andy: Yeah no, nobody should be doing that with power sockets. I think this is one of these interesting areas, which is this is really right in Akamai's backyard but it's not realized by a lot of folks. So, you know, Akamai has, for the last 20-odd years, been all about how do we egress as much as possible to the entire internet. The weird areas, the big areas, the small areas, the up-and-coming areas, we serve them all. And in doing that, we've built a very large global fabric network, which allows us to get between those locations at a very low cost because we have to move our own content around.

And hooking those together, having an essentially private network fabric that hooks the vast majority of our big locations together and then having very high-speed egress out of all of the locations to the internet, you know, that's been how we operate our business at scale effectively and economically for years, and utilizing that for compute data replication, data synchronization tasks is what we're doing.

Corey: There are a lot of different solutions that could be used to solve a lot of the persistent data layer question. For example, when you had to solve a similar problem with compute, you had a few options in front of you.
Well, we could buy a whole bunch of computers and stuff them in a rack somewhere because, eh, cloud; how hard could it be? Saner heads prevailed, and no, no, no, we're going to buy Linode, which was honestly a genius approach on about three different levels, and I'm still unconvinced the industry sees that for the savvy move that it was. I'm confident that'll change in time.

Why not build it yourself? Or alternately, acquire another company that was working on something similar? Instead, you're an investor in a company that's doing this effectively, but not buying them outright?

Andy: Yeah, you know, and I think that's—Akamai is beyond at this point in thinking that it's just about ownership, right? I think that this—we don't have to own everything in order to have a successful ecosystem. You know, certainly, we're going to want to own key parts of it and that's where you saw the Linode acquisition, where we felt that was kind of core. But ultimately, we believe in promoting customer choice here. And there's a pretty big role that we have that we think we can help with companies, such as folks like Macrometa where they have, you know, really interesting technology, but they can use leverage, they can use some of our go-to-market, they can use, you know, some of our, you know, kind of guidance and expertise on running a startup—which, by the way, it's not an easy job for these folks—and that's what we're there to do.

So, with things like Linode, you know, we want to bring it in, and we want to own it because we think it's just so compelling, and it fits so well with where we want to go. With folks like Macrometa, you know, that's still a really young area. I mean, you know, Linode was in business for many, many, many years and was a good-sized business, you know, before we bought them.

Corey: Yeah, there's something to be said for letting the market shake something out rather than having to do it all yourself as trailblazers.
I'm a big believer in letting other companies do things. I mean, one of the more annoying things, from my position, is this idea where AWS takes a product strategy of, “Yes.” That becomes a bit of a challenge when they're trying to wind up building compete decks, and how do we defeat the competition? And it's like, “Wh—oh, you're talking about the other hyperscalers?” “No, we're talking with the service team one floor away.”

That just seems a little on the strange side to—some companies get too big and too expensive on some level. I think that there's a very real risk of Akamai trying to do everything on the internet if you continue to expand and start listing out things that are not currently in your portfolio. And, oh, we should do that, too, and we should do that, too, and we should do that, too. And suddenly, it feels pretty closely aligned with you're trying to do everything.

Andy: Yeah. I think we've been a company that has been really disciplined in not doing everything. You know, we started with CDN. And you know, we're talking '98 to 2010, you know, CDN was really our thing, and we feel we executed really well on that. We probably executed quite quietly and well, but feel we executed pretty well on that.

Really from 2010, 2012 to 2020, it was all about security, right? And, you know, we built, you know, a pretty amazing security business, hundred percent SaaS business, on top of our CDN platform with security. And now we're thinking about—we did that route relatively quietly, as well, and now we're thinking about the next ten years and how do we have that same kind of impact on cloud. And that is exciting because it's not just centralized cloud; it's about a distributed cloud vision.
And that is really compelling and that's why, you know, we've got great folks that are still here and working on it.

Corey: I'm a big believer in the idea that you can start getting distilled truth out of folks, particularly companies, the more you compress the space they have to wind up saying something. That's why Twitter very often lets people tip their hands. But a common place that I look for is the title field on a company's website. So, when I go over to akamai.com, you position yourself as something that fits in a small portion of a tweet, which is good. Whenever you have a Tolstoy-length paragraph in the tooltip title for the browser tab, that's a problem.

But you say simply, “Security, cloud delivery, performance. Akamai.” Which is beautifully well done, but security comes first. I have a mental model of Akamai as being a CDN and some other stuff that I don't fully understand. But again, I first encountered you folks in the early-2000s.

It turns out that it's hard to change existing opinions. Are you a CDN company or are you a security company?

Andy: Oh, super—

Corey: In other words, if someone winds up mis-alphabetizing that and they're about to get censured after this show because, “No, we're a CDN, first; why did you put security first?”

Andy: You know, so all those things feed off each other, right? And this has been a question where it's like, you know, our security layer and our distributed WAF and other security offerings run on top of the CDN layer. So, it's all about building a common compute edge and then leveraging that for new applications. CDN was the first application. The next and second application was security.

And we think the third application, but probably not the final one, is compute. So, I don't think anyone in marketing will be fired by the ordering that they did on that. I think that ultimately now, you know, for—just if we look at it from a monetary perspective, right, we do more security than we do CDN.
So, there's a lot that we have in the security business. And you know, compute's got a long way to go, especially because it's not just one big data center of compute; it is a different flavor than I think folks have seen before.

Corey: When I was at RSA, you folks were one of the exhibitors there. And I like to make the common observation that there are basically six companies that exhibit at RSA. Yeah, there are hundreds of booths, but it's the same six products, all marketed under different logos with different words. And they all seem to approach it from a few relatively expectable personas and positions. I've always found myself agreeing with the things that you folks say, and maybe it's because of my own network-centric background, but it doesn't seem like you take the same approach that a number of other companies do or it's, “Oh, it has to start with the way that developers write their first line of code.” Instead, it seems to take a holistic view that comes from the starting position of everything talks to each other on a network basis, and from here, let's move forward. Is that accurate to how you view the security space?

Andy: Yeah, you know, our view of the security space is—again, it's a network-centric one, right? And our work in the security space initially came from really big DDoS attacks, right? And how do we stop Distributed Denial of Service attacks from impacting folks? And that was the initial benefit that we brought. And from there, we evolved our story around, you know, how do we have a more sophisticated WAF? How do we have predictive capabilities at the edge?

So ultimately, we're not about ingraining into your process of how your thing was written or telling you how to write it.
We're about, you know, essentially being that perimeter edge that is watching and monitoring everything that comes into you to make sure that, you know, hey, we're not seeing Log4j-type exploits coming at you, and we'll let you know if we do, or to block malicious activity. So, we fit on anything, which is why our security business has been so successful. If you have an application on the edge, you can put Akamai Security in front of it and it's going to make your application better. That's been super compelling for the last, you know, again, last decade or so that we've really been focused on security.

Corey: I think that it is a mistake to take a security model that starts with a view of what people have in front of them day-to-day—like, I look at my laptop and say, “Oh, this is what I spend my time on. This is where all security must start and stop.” Because yeah, okay, great. If you get physical access to my laptop, it's pretty much game over on some level. But yeah, if you're at a point where you're going to bust into my house and threaten me in order to get access to my laptop, here you go.

There are no secrets that I am in possession of that are worth dying for. It's just money and that's okay. But looking at it through a lens of: the internet has gone from science experiment to a thing that the nerds love to use to a cornerstone of the fabric of modern society. And that's not because of the magic supercomputer that we all have in our pockets, but rather because those magic supercomputers can talk to the sum total of human knowledge and any other human anywhere on the planet, basically, ever. And I don't know that that evolution has been really appreciated by society at large as far as just how empowering that can be. But it completely changes the entire security paradigm from back in the '80s when I got started: don't put untrusted floppy disks into your computer or it might literally explode on your desk.

Andy: [laugh]. So, we're talking about floppy disks now?
Yes. So, first of all, the scope of impact of the internet has increased, meaning what you can do with it has increased. And directly proportional to that increase, the threat vectors have increased, right? And the more systems are connected, the more vulnerabilities there are.

So listen, it's easy to scare anybody about security on the internet. It is a topic that is an infinite well of scariness. At the same time, you know, and not just Akamai, but there's a lot of companies out there that can, whether it's making your development more secure, making your pipeline, your digital supply chain more secure, or then you know where Akamai is, we're at the end, which is you know, helping to wrap around your entire web presence to make it more secure, there's a variety of companies that are out there really making the internet work from a security perspective. And honestly, there's also been tremendous progress on the operating system front in the last several years, which previously was not as good—probably is a way to characterize it—as it is today. So, and you know, at the end of the day, the nerds are still out there working, right?

We are out here still working on making the internet, you know, scale better, making it more secure, making it more robust because we're probably not done, right? You know, phones are awesome, and tablet devices, et cetera, are awesome, but we've probably got more coming. We don't quite know what that is yet, but we want to have the capacity, safety, and compute to power it.

Corey: How does Macrometa as a persistent data layer tie into your future vision of security first as what Akamai does? I can see a few directions, but I'm going to go out on a limb and guess that before you folks decided to make an investment in such a thing, you probably gave it more than the 30 seconds or whatnot of thought that I've had to wind up putting these pieces together.

Andy: So, a few things there.
First of all, Macrometa, ultimately, we see them coming in the front door with our compute solution, right? Because as folks are building capabilities on the edge, “Hey, I want to run compute on the edge. How do I interoperate with data?” The worst answer possible is, “Well, call back to the centralized data store.”

So, we want to ensure that customers have choice and performance options for distributed data access. Macrometa fits great there. However, now pause that; let's transition back to the security point you raised, which is, you know, coordinating an edge data security platform is a really complicated thing. Because you want to make sure that threats that are coming in on one side of the network, or you know, in one given country, you know, are also understood throughout the network. And there's a definite role for a data platform in doing that.

We obviously, you know, for the last ten years have built several that help accomplish that at scale for our network, but we also recognize that, you know, innovation in data platforms is probably not done. And you know, Macrometa's got some pretty interesting approaches. So, we're very interested in working with them and talking jointly with customers, which we've done a bunch of, to see how that progresses. But there's tie-ins, I would say, mostly on compute, but secondarily, there's a lot of interesting areas with real-time security intel where they can be very useful as well.

Corey: Since I have you here, I would love to ask you something that's a little orthogonal to the rest of this conversation, but I don't even care about that because that's why it's my show; I can ask what I want.

Andy: Oh, no.

Corey: Talk to me a little bit about the Linode acquisition. Because when it first came out, I thought, “Oh, Linode must not be doing well, so it's an acqui-hire scenario.” Followed by, “Wait a minute, that doesn't seem quite right.” And I dug deeper, and suddenly, I started to see a bunch of things that made sense.
But that's just my outside perspective. I prefer to see you justify what it is that you've done.

Andy: Justify what we've done. Well, with that positive framing—

Corey: Exactly. “Explain yourself. How dare you, sir?”

Andy: [laugh]. “What are you doing?” So, to take that, which is first of all, Linode was doing great when we bought them and they're continuing to do great now. You know, backstory here is actually a fun one. So, I personally have been a customer of Linode for about 13 years, and you know, super familiar with their offerings, as were a bunch of other folks at Akamai.

And what ultimately attracted us to Linode was, first of all, from a strategic perspective, is we talked about how Akamai thinks about compute being a gradient of compute: you've got the edge, you've got kind of a middle tier, and you've got more centralized locations. Akamai has the edge, we've got the middle, we didn't have the central. Linode has got the central. And obviously, you know, we're going to see some significant expansion of capacity and scale there, but they've got the central location. And, you know, ultimately, we feel that there's a lot of passion in Linode.

You know, they're a Linux open-source-centric company, and believe it or not, Akamai is, too. I mean, you know, that's kind of how it works. And there was a great connection between the sorts of folks that they had and how they think about customers. Linode was a really customer-driven company. I mean, they were fanatical.

I mean, I as a, you know, customer of $30 a month personally, could open a ticket and I'd get an answer in five minutes. And that's very similar to kind of how Akamai is driven, which is we're very customer-centric, and when a customer has a problem or needs something different, you know, we're on it. So, there's literally nothing bad there and it's a super exciting beginning of a new chapter for Akamai, which is really how do we tackle compute? We're super excited to have the Linode team.
You know, they're still mostly down in Philadelphia doing their thing.

And, you know, we've hired substantially and we're continuing to do so, so if you want to work there, drop a note over. And it's been fantastic. And it's one of our, you know, really large acquisitions that we've done, and I think we were really lucky to find a great company in such a good position and be able to make it work.

Corey: From my perspective, one of the areas that has me excited about the acquisition stems from what I would consider to be something of a customer-base culture misalignment between the two companies. One of the things that I have always enjoyed about Linode—and in the interest of full transparency, they have been a periodic sponsor over the last five or six years of my ridiculous nonsense. I believe that they are not at the moment, which I expect you to immediately rectify after this conversation, of course.

Andy: I'll give you my credit card. Yeah.

Corey: Excellent. Excellent. We do not get in the way of people trying to give you money. But it was great because that's exactly it. I could take a credit card in the middle of the night and spin up things on Linode.

And it was one of those companies that aligned very closely to how I tended to view cloud infrastructure from the perspective of, I need a Linux box, or I need a bunch of Linux boxes right there, right now, and I don't have 12 weeks to go to cloud school to learn the intricacies of a given provider. It more or less just worked in a whole bunch of easy ways. Whereas if I wanted to roll out at Akamai, it was always I would pull up the website, and it's, “Click here to talk to our enterprise sales team.” And that tells me two things.
One, it is probably going to be outside of my signing authority because no one trusts me with money for obvious reasons, when I was an employee, and two, you will not be going to space today because those conversations always take time.

And it's going to be—if I'm in a hurry and trying to get something out the door, that is going to act as a significant drag on capability. Now, most of your customers do not launch things by the seat of their pants, three hours after the idea first occurs to them, but on Linode, that often seems to be the case. The idea of addressing developers early on in the ‘it's just an idea' phase. I can't shake the feeling that there's a definite future in which Linode winds up being able to speak much more effectively to enterprise, while Akamai also learns to speak to, honestly, half-awake shitposters at 2 a.m. when we're building something heinous.

Andy: I feel like you've been sitting in on our strategy presentations. Maybe not the shitposters, but the rest of it. And I think the way that I would couch it, my corporate-speak of that, would be that there's a distinct yin and yang, a complementary nature between the customer bases of Akamai, which has, you know, an incredible list of enterprise customers—I mean, the who's-who of enterprise customers, Akamai works with them—but then, you know, Linode, who has really tremendous representation of developers—that's what we'll use for the name posts—like, folks like myself included, right, who want to throw something together, want to spin up a VM, and then maybe tear it down and never do it again, or maybe set up 100 of them. And, to your point, the crossover opportunities there, which is, you know, Linode has done a really good job of having small customers that grow over time.
And by having Akamai, you know, you can now grow, and never have to leave because we're going to be able to bring enough scale and throughput and, you know, professional help services as you need it to help you stay in the ecosystem.

And similarly, Akamai has a tremendous—you know, the benefit of a tremendous set of enterprise customers who are out there, you know, frankly, looking to solve their compute challenges, saying, “Hey, I have a highly distributed application. Akamai, how can you help me with this?” Or, “Hey, I need presence in x or y.” And now we have, you know, with Linode, the right tools to support that. And yes, we can make all kinds of jokes about, you know, Akamai and Linode and different, you know, people and archetypes we appeal to, but ultimately, there's an alignment between Akamai and Linode on how we approach things, which is about Linux, open-source, it's about technical honesty and simplicity. So, great group of folks. And secondly, like, I think the customer crossover, you're right on it. And we're very excited for how that goes.

Corey: I also want to call out that Macrometa seems to have split this difference perfectly. One of the first things I visit on any given company's page when I'm trying to understand them is the pricing page. It's one of those areas where people spend the least time, early on, but it's also where they tend to be the most honest. Maybe that's why. And I look for two things, and Macrometa has both of them.

The first is a ‘try it for free, right now, get started.' It's a free-tier approach. Because even if you charge $10 or whatnot, there are many developers working on things in odd hours where they don't necessarily either have the ability to make that purchase decision, know that they have the ability to make that purchase decision, or are willing to do that by the seat of their pants. So, ‘get started for free' is important; it means you can develop right now.
Conversely, there are a bunch of enterprise procurement departments out there who will want a whole bunch of custom things. Custom SLAs, custom support responses, custom everything, and they also don't know how to sign a check that doesn't have two commas in it. So, you probably don't want to avoid those customers, but what they're looking for is an enterprise offering that is no price. There should not be a price tag on that because you will never get it right for everyone, but what they want to see is ‘click here to contact sales.' That is coded language for, “We are serious professionals and know who you are and how you like to operate.” They've got both and I think that is absolutely the right decision.

Andy: It do—

Corey: And whatever you have in between those two is almost irrelevant.

Andy: No, I think you're on it. And Macrometa, their pricing philosophy allows you to get in and try it with zero friction, which is super important. Like, I don't even have to use a credit card. I can experiment for free, I can try it for free, but then as I grow, their pricing tier kind of scales along with that. And it's a—you know, that is the way that folks try applications.

I always try to think about, hey, you know, if I'm on a team and we're tasked with putting together a proof of concept for something in two days, and I've got, you know, a couple folks working with me, how do I do that? And you don't have time for procurement, you might need to use the free thing to experiment. So, there is a lot that they can do. And you know, their pricing—this transparency of pricing that they have is fantastic. Now, Linode, also very transparent, we don't have a free tier, but you know, you can get in for very low friction and try that as well.

Corey: Yeah, companies tend to go through a maturity curve evolution on these things. I've talked to companies that purely view it as: how much money a given customer is spending determines how much attention they get.
And it's like, “Yeah, maybe take a look through some of your smaller users or new signups there.” Yeah, they're spending $10 a month or whatnot, but their email address is @cocacola.com. Just spitballing here; maybe you might want to white-glove a few of those folks, just because not everyone comes in the door via an RFP.

Andy: Yep. We look at customers for what your potential is, right? Like, you know, how much could you end up spending with us, right? You know, so if you're building your application on Linode, and you're going to spend $20 for the first couple months, that's totally fine. Get in there, experiment, and then you know, in the next several years, let's see where it goes. So, you're exactly right, which is, you know, that username@enterprisedomain.com is often much more indicative than what the actual bill is on a monthly basis.

Corey: I always find it a little strange when I have a vendor that I'm doing business with, and then suddenly, an account person reaches out, like, hey, let's just have a call for half an hour to talk about what you're doing and how you're doing it. My immediate response to that these days, after too many years of doing that, is, “I really need to look at that bill. How much are we spending, again?” And honestly, it's usually not that much because believe it or not, when you focus on cloud economics for a living, you pay attention to your credit card bills, but it is always interesting to see who reaches out and who doesn't. That's been a strange approach, and there is no one right answer for all of this.

If every free tier account user of any given cloud provider wound up getting constant emails from their account managers, it's how desperate are you to grow revenue, and what are you about to do to pricing? At some level, it becomes… unhelpful.

Andy: I can see that.
I've had, personally, situations where I'm a trial user of something, and all of a sudden I get emails—you know, using personal email addresses, no Akamai involvement—all of a sudden, I'm getting emails. And I'm like, “Really? Did I make the priority list for you to call me and leave me a voicemail, and then email me?” I don't know how that's possible.

So, from a personal perspective, totally see that. You know, from an account development perspective, you know, kind of with the Akamai hat on, it's challenging, right? You know, folks are out there trying to figure out where business is going to come from. And I think if you're able to get an indicator that somebody, you know, maybe you're going to call that person at enterprisedomain.com to try to figure out, you know, hey, is this real and is this you with a side project or is this you with a proof of concept for something that could be more fruitful? And, you know, Corey, they're probably just calling you because you're you.

Corey: One of the things that I was surprised by is where I saw the exact same thing. I started getting a series of emails from my account manager for Google Workspaces. Okay, and then I really did a spit-take when I realized this was on my personal address. Okay… so I read this carefully because what the hell is happening? Oh, they're raising prices and it's a campaign. Great.

Now, my one-user vanity domain is going to go from $6 a month to $8 a month or whatever. Cool, I don't care. This is not someone actively trying to reach out as a human being. It's an outreach campaign. Cool, fair. But that's the problem, on some level, for super-tiny customers. It's a, what is it, is it a shakedown? What are they about to yell at me for?

Andy: No, I got the same thing. My Google Workspace personal account, which is, like, two people, right? Like, and I got an email and then I think, like, a voicemail.
And I'm like, I read the email and I'm like—you know, it's going—again, it's like, it was like six something and now it's, like, eight something a month. So, it's like, “Okay. You're all right.”

Corey: Just go—that's what you have a credit card for. Go ahead and charge it. It's fine. Now, yeah, counterpoint: if you're a large company, and yeah, we're just going to be raising prices by 20% across the board for everyone, and you look at this and like, that's a phone number. Yeah, I kind of want some special outreach and conversations there. But it's odd.

Andy: It's interesting. Yeah. They're great.

Corey: Last question before we call this an episode. In 22 years, how have you seen the market change from your perspective? Most people do not work in the industry from one company's perspective for as long as you have. That gives you a somewhat privileged position to see, from a point of relative stability, what the industry has done.

Andy: So—

Corey: What have you noticed?

Andy: —and I'm going to give you an answer, which is about, like, the sales cycle, which is it used to be about meetings and about everybody coming together and you used to have to occasionally wear a suit. And there would be, you know, meetings where you would need to get a CEO or CFO to personally see a presentation and decide something and say, “Okay, we're going with X or Y. We're going to make a decision.” And today, those decisions are, pretty far and wide, made much, much further down in the organization. They're made by developers, team leads, project managers, program managers.

So, the way people engage with customers today is so different. First of all, like, most meetings are still virtual. I mean, like, yeah, we have physical meetings and we get together for things, but like, so much more is done virtually, which is cool because we built the internet so we wouldn't have to go anywhere, so it's nice that we got that landed.
It's unfortunate that we had to do it with Covid to get there, but ultimately, I think that purchasing decisions and technology decisions are distributed so much more deeply into the organization than they were. It used to be a, like, C-level thing. We're now seeing that stuff happen much further down in the organization.

We see that inside Akamai and we see it with our customers as well. It's been, honestly, refreshing because you tend to be able to engage with technical folks when you're talking about technical products. And you know, the business folks are still there and they're helping to guide the discussions and all that, but it's a much better time, I think, to be a technical person now than it probably was 20 years ago.

Corey: I would say that being a technical person has gotten easier in a bunch of ways; it's gotten harder in a bunch of ways. I would say that it has transformed. I was very opposed to the idea that oh, as a sysadmin, why should I learn to write code? And in retrospect, it was because I wasn't sure I could do it and it felt like the rising tide was going to drown me. And in hindsight, yeah, it was the right direction for the industry to go in.

But I'm also sensitive to folks who don't want to, midway through their career, pick up an entirely new skill set in order to remain relevant. I think that it is a lot easier to do some things. Back when Akamai started, it took an intimate knowledge of GCC compiler flags, in most cases, to host a website. Now, it is checking a box on a web page and you're done. Things have gotten easier.

The abstractions continue to slip below the waterline, so the things we have to care about are getting more and more meaningful to the business. We're nowhere near our final form yet, but I'm very excited about how accessible this industry is to folks that previously would not have been, while also disheartened by just how much there is to know.
Otherwise, “Oh yeah, that entire aspect of the way that this core thing that runs my business, yeah, that's basically magic and we just hope the magic doesn't stop working, or we make a sacrifice to the proper God, which is usually a giant trillion-dollar company.” And the sacrifice is, of course, engineering time combined with money.

Andy: You know, technology is all about abstraction layers, right? And I think—that's my view, right—and we've been spending the last several decades, not, ‘we' Akamai; ‘we' the technology industry—on, you know, coming up with some pretty solid abstraction layers. And you're right, like, the, you know, GCC j6—you know, -j6—you know, kind of compiler tags are not that important anymore; we could go back in time and talk about inetd, the first serverless. But other than that, you know, as we get to the present day, I think what's really interesting is you can contribute technically without being a super coding nerd. There's all kinds of different technical approaches today and technical disciplines that aren't just about development.

Development is super important, but you know, frankly, the sysadmin skill set is more valuable today if you look at what SREs have become and how important they are to the industry. I mean, you know, those are some of the most critical folks in the entire piping here. So, don't feel bad for starting out as a sysadmin. I think that's my closing comment back to you.

Corey: I think that's probably a good place to leave it. I really want to thank you for being so generous with your time.

Andy: Anytime.

Corey: If people want to learn more about how you see the world, where can they find you?

Andy: Yeah, I mean, I guess you could check me out on LinkedIn. Happy to shoot me something there and happy to catch up.
I'm pretty much read-only on social, so I don't pontificate a lot on Twitter, but—

Corey: Such a good decision.

Andy: Feel free to shoot me something on LinkedIn if you want to get in touch or chat about Akamai.

Corey: Excellent. And of course, our thanks go as well to the fine folks at Macrometa who have promoted this episode. It is always appreciated when people wind up supporting this ridiculous nonsense that I do. My guest has been Andy Champagne, SVP at the CTO office over at Akamai. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an insulting comment that will not post successfully because your podcast provider of choice wound up skimping out on a provider who did not care enough about a persistent global data layer.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.

The Bike Shed
359: Serializers

The Bike Shed

Play Episode Listen Later Oct 25, 2022 44:10


Chris Toomey is back! (For an episode.) He talks about what he's been up to since handing off the reins to Joël. He's been playing around with something at Sagewell that he enjoys. At the core of it? Serializers.

Primalize gem (https://github.com/jgaskins/primalize)
Derek's talk on code review (https://www.youtube.com/watch?v=PJjmw9TRB7s)
Inertia.js (https://inertiajs.com/)
Phantom types (https://thoughtbot.com/blog/modeling-currency-in-elm-using-phantom-types)
io-ts (https://gcanti.github.io/io-ts/)
dry-rb (https://dry-rb.org/)
parse don't validate (https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-validate/)
value objects (http://wiki.c2.com/?ValueObject)
broader perspective on parsing (https://thoughtbot.com/blog/a-broader-take-on-parsing)
Enumerable#tally (https://medium.com/@baweaver/ruby-2-7-enumerable-tally-a706a5fb11ea)
RubyConf mini (https://www.rubyconfmini.com/)
where.missing (https://boringrails.com/tips/activerecord-where-missing-associations)

Transcript:

JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. And today, I'm joined by a very special guest, former host Chris Toomey.

CHRIS: Hi, Joël. Thanks for having me.

JOËL: And together, we're here to share a little bit of what we've learned along the way. So, Chris, what's new in your world?

CHRIS: Being on this podcast is new in my world, or everything old is new again, or something along those lines. But, yeah, thank you so much for having me back. It's a pleasure. Although it's very odd, it feels somehow so different and yet very familiar. But yeah, more generally, what's new in my world? I think this was probably in development as I was winding down my time as a host here on The Bike Shed, but I don't know that I ever got a chance to talk about it.
There has been a fun sort of deep-in-the-weeds technical thing that we've been playing around with at Sagewell that I've really enjoyed. So at the core of it, we have serializers. So we take some data structures in our Ruby on Rails code base, and we need to serialize them to JSON to send them to the front end. In our case, we're using Inertia, so it's not quite a JSON API, but it's fine to think about it in that way for the context of this discussion. And what we were finding is our front end has TypeScript. So we're writing Svelte, which is using TypeScript. And so we're stating or asserting that the types like, hey, we're going to get this data in from the back end, and it's going to have this shape to it. And we found that it was really hard to keep those in sync to keep, like, what does the user mean on the front end? What's the data that we're going to get? It's going to have a full name, which is a string, except sometimes that might be null. So how do we make sure that those are keeping up to date? And then we had a growing number of serializers on the back end and determining which serializer we were actually using, and it was just...it was a mess, to put it lightly. And so we had explored a couple of different options around it, and eventually, we found a library called Primalize. So Primalize is a Ruby library. It is for writing JSON serializers. But what's really interesting about it is it has a typing layer. It's like a type system sort of thing at play. So when you define a serializer in Primalize, instead of just saying, here are the fields; there is an ID, a name, et cetera, you say, there is an ID, and it is a string. There is a name, and it is a string, or an optional string, which is the even more interesting bit. You can say array. You can say object. You can say an enum of a couple of different values. And so we looked at that, and we said, ooh, this is very interesting. 
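For readers following along, the pattern Chris describes — serializers that declare per-attribute types and warn on runtime mismatches — can be sketched in plain Ruby. This is a toy illustration of the concept only; the class and method names below are hypothetical and are not the actual Primalize API.

```ruby
# A toy runtime-type-checked serializer, illustrating the idea behind
# Primalize: each attribute declares an expected type, and a mismatch
# observed at serialization time fires a callback (e.g. a Sentry warning)
# instead of raising.
class TypedSerializer
  Attribute = Struct.new(:type, :optional)

  def self.attributes
    @attributes ||= {}
  end

  # e.g. attribute :full_name, String, optional: true
  def self.attribute(name, type, optional: false)
    attributes[name] = Attribute.new(type, optional)
  end

  def initialize(object, on_mismatch: ->(msg) { warn(msg) })
    @object = object
    @on_mismatch = on_mismatch
  end

  def to_h
    self.class.attributes.each_with_object({}) do |(name, attr), out|
      value = @object[name]
      unless value.is_a?(attr.type) || (value.nil? && attr.optional)
        @on_mismatch.call("#{name}: expected #{attr.type}, got #{value.inspect}")
      end
      out[name] = value
    end
  end
end

# Declares the authored types, as you would in a Primalize serializer.
class UserSerializer < TypedSerializer
  attribute :id, Integer
  attribute :name, String
  attribute :full_name, String, optional: true # nil is allowed here
end
```

Serializing `{ id: 1, name: "Chris", full_name: nil }` passes through with no warnings, while a nil `name` fires the mismatch callback: the "warn to Sentry, but keep serving" behavior described above.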
Astute listeners will know that this is probably useless in a Ruby system, which doesn't have types or a compilation step or anything like that. But what's really cool about this is when you use a Primalize serializer, as you're serializing an object, if there is ever a type mismatch, so the observed type at runtime and the authored type if those ever mismatch, then you can have some sort of notification happen. So in our case, we configured it to send a warning to Sentry to say, "Hey, you said the types were this, but we're actually seeing this other thing." Most often, it will be like an Optional, a null sneaking through, a nil sneaking through on the Ruby side. But what was really interesting is as we were squinting at this, we're like, huh, so now we're going to write all this type information. What if we could somehow get that type information down to the front end? So I had a long weekend, one weekend, and I went away, and I wrote a bunch of code that took all of those serializers, ran through them, and generated the associated TypeScript interfaces. And so now we have a build step that will essentially run that and assert that we're getting the same thing in CI as we have committed to the codebase. But now we have the generated serializer types on the front end that match to the used serializer on the back end, as well as the observed run-time types. So it's a combination of a true compilation step type system on the front end and a run-time type system on the back end, which has been very, very interesting.

JOËL: I have a lot of thoughts here.

CHRIS: I figured you would. [laughs]

JOËL: But the first thing that came to mind is, as a consultant, there's a scenario with especially smaller startups that generally concerns me, and that is the CTO goes away for a weekend and writes a lot of code...

CHRIS: [laughs]

JOËL: And brings in a new system on Monday, which is exactly what you're describing here. How do you feel about the fact that you've done that?
CHRIS: I wasn't ready to go this deep this early on in this episode.

JOËL: [laughs]

CHRIS: But honestly, that is a fantastic question. It's a thing that I have been truly not struggling with but really thinking about. We're going to go on a slight aside here, but I am finding it really difficult to engage with the actual day-to-day coding work that we're doing and to still stay close to the codebase and not be in the way. There's a pattern that I've seen happen a number of times now where I pick up a piece of work that is, you know, one of the tickets at the top of the backlog. I start to work on it. I get pulled into a meeting, then another meeting, then three more meetings. And suddenly, it's three days later. I haven't completed this piece of work that was defined to be the next most important piece of work. And suddenly, I'm blocking the team.

JOËL: Hmmm.

CHRIS: So I actually made a rule that I'm not allowed to own critical path work, which feels weird because it's like, I want to be engaged with that work. So the counterpoint to that is I'm now trying to schedule pairing sessions with each of the developers on the team once a week. And in that time, I can work on that sort of stuff with them, and they'll then own it and run with it. So it makes sure that I'm not blocking on those sorts of things, but I'm still connected to the core work that we're doing. But the other thing that you're describing of the CTO goes away for the weekend and then comes back with a new harebrained scheme; I'm very sensitive to that, having worked on, frankly, I think, the same project. I can think of a project that you and I worked on where we experienced this.

JOËL: I think we're thinking of the same project.

CHRIS: So yes. Like, I'm scarred by that and, frankly, a handful of experiences of that nature. So we actually, I think, have a really healthy system in place at Sagewell for capturing, documenting, prioritizing this sort of other work, this developer-centric work.
So this is the feature and bug work that gets prioritized in one list over here that is owned by our product manager. Separately, the dev team gets to say, here are the pain points. Here's the stuff that keeps breaking. Here are the things that I wish were better. Here are the observability, hard-to-understand bits. And so we have a couple of different systems at play and recurring meetings and sort of unique ceremonies around that, and so this work was very much a fallout of that. It was actually a recurring topic that we kept trying a couple of different stabs at, and we never quite landed it. And then I showed up this one Monday morning, and I was like, "I found a thing; what do we think?" And then, critically, from there, I made sure I paired with other folks on the team as we pushed on the implementation. And then, actually, I mentioned Primalize, the library that we're using. We have now since deprecated Primalize within the app because we kept just adding to it so much that eventually, we're like, at this point, should we own this stuff? So we ended up rewriting the core bits of Primalize to better fit our use cases. And now we've actually removed Primalize, wonderful library. I highly recommend it to anyone who has that particular use case but then the additional type generation for the front end. Plus, we have some custom types within our app, Money being the most interesting one. We decided to model Money as our first-class consideration rather than just letting JavaScript have the sole idea of a number. But yes, in a very long-winded way, yes, I'm very sensitive to the thing you described. And I hope, in this case, I did not fall prey to the CTO goes away for the weekend and made a thing.

JOËL: I think what I'm hearing is the key difference here is that you got buy-in from the team around this idea before you went out and implemented it. So you're not off doing your own things disconnected from the team and then imposing it from on high.
The team already agreed this is the thing we want to do, and then you just did it for them.

CHRIS: Largely, yes. Although I will say there are times that each developer on the team, myself included, has sort of gone away, come back with something, and said, "Hey, here's a WIP PR exploring an area." And there was actually...I'm forgetting what the context was, but there was one that happened recently that I introduced. I was like, I had to do this. And the team talked me out of it, and I ended up closing that PR. Someone else actually made a different PR that was an alternative implementation. I was like, no, that's better; we should absolutely do that. And I think that's really healthy. That's a hard thing to maintain but making sure that everyone feels like they've got a strong voice and that we're considering all of the different ways in which we might consider the work. Most critically, you know, how does this impact users at the end of the day? That's always the primary consideration. How do we make sure we build a robust, maintainable, observable system, all those sorts of things? And primarily, this work should go in that other direction, but I also don't want to stifle that creative spark of I got this thing in my head, and I had to explore it. Like, we shouldn't then need to never mind, throw away the work, put it into a ticket. Like, for as long as we can, that more organic, intuitive process if we can retain that, I like that. Critically, with the ability for everyone to tell me, "No, this is a bad idea. Stop it. What are you doing?" And that has happened recently. I mean, they were kinder about it, but they did talk me out of a bad idea. So here we are.

JOËL: So you showed up on Monday morning, not with telling everyone, "Hey, I merged this thing over the weekend." You're showing up with a work-in-progress PR.

CHRIS: Yes, definitely. I mean, everything goes through a PR, and everything has discussion and conversation around it.
That's a strong, strong belief, like Derek Prior's wonderful talk, Building a Culture of Code Review; I forget the exact name of it. But it's one of my favorite talks in talking about the utility of code review as a way to share ideas and all of those wonderful things. So everything goes through code review, and particularly anything that is of that more exploratory architectural space. Often we'll say any one review from anyone on the team is sufficient to merge most things but something like that, I would want to say, "Hey, can everybody take a look at this? And if anyone has any reservations, then let's talk about it more." But if I or anyone else on the team for this sort of work gets everybody approving it, then cool, we're good to go. But yeah, code review critical, critical part of the process.

JOËL: I'm curious about Primalize, the gem that you mentioned. It sounds like it's some kind of validation layer between some Ruby data structure and your serializers.

CHRIS: It is the serializer, but in the process of serializing, it does run-time type validation, essentially. So as it's accessing, you know, you say first name. You have a user object. You pass it in, and you say, "Serializer, there's a first name, and it's a string." It will call the first name method on that user object. And then, it will check that it has the expected type, and if it doesn't, then, in our case, it sends to Sentry. We have configured it...it's actually interesting. In development and test mode, it will raise for a type mismatch, and in production mode, it will alert Sentry so you can configure that differently. But that ends up being really nice because these type mismatches end up being very loud early on. And it's surprisingly easy to maintain and ends up telling us a lot of truths about our system because, really, what we're doing is connecting data from many different systems and flowing it in and out.
And all of the inputs and outputs from our system feel very meaningful to lock down in this way. But yeah, it's been an adventure.

JOËL: It seems to me there could almost be two sets of types here, the inputs coming into Primalize from your Ruby data structures and then the outputs that are the actual serialized values. And so you might expect, let's say, an integer on the Ruby side, but maybe at the serialization level, you're serializing it to a string. Do you have that sort of conversion step as part of your serializers sometimes, or is the idea that everything's already the right type on the Ruby side, and then we just, like, to JSON it at the end?

CHRIS: Yep. Primalize, I think, probably works a little closer to what you're describing. They have the idea of coercions. So within Primalize, there is the concept of a timestamp; that is one of the types that is available. But a timestamp is sort of the union of a date, a time, or I think they might let through a string as well; I'm not sure. But frankly, for us, that was more ambiguity than we wanted or more blurring across the lines. And in the implementation that we've now built, date and time are distinct. And critically, a string is not a valid date or time; it is a string, that's another thing. And so there's a bunch of plumbing within the way you define the serializers. There are override methods so that you can locally within the serializer say, like, oh, we need to coerce from the shape of data into this other shape of data, even little like in-line proc, so we can do it quickly. But the idea is that the data, once it has been passed to the serializer, should be of the right shape. And so when we get to the type assertion part of the library, we expect that things are in the asserted type and will warn if not. We get surprisingly few warnings, which is interesting now.
This whole process has made us pay a little more attention, and it's been less arduous simultaneously than I would have expected because like this is kind of a lot of work that I'm describing. And yet it ends up being very natural when you're the developer in context, like, oh, I've been reading these docs for days. I know the shape of this JSON that I'm working with inside and out, and now I'll just write it down in the serializer. It's very easy to do in that moment, and then it captures it and enforces it in such a useful way. As an aside, as I've been looking at this, I'm like, this is just GraphQL, but inside out, I'm pretty sure. But that is a choice that we have made. We didn't want to adopt the whole GraphQL thing. But just for anyone out there who is listening and is thinking, isn't this just GraphQL but inside out? Kind of. Yes.

JOËL: I think my favorite part of GraphQL is the schema, which is not really the selling point for GraphQL, you know, like the idea that you can traverse the graph and get any subset of data that you want and all that. I think I would be more than happy with a REST API that has some kind of schema built around it. And someone told me that maybe what I really just want is SOAP, and I don't know how to feel about that comment.

CHRIS: You just got to have some XML, and some WSDLs, and other fun things. I've heard people say good things about SOAP. SOAP seems like a fine idea. If anything, I think a critical part of this is we don't have a JSON API. We have a very tightly coupled front end and back end, and a singular front end, frankly. And so that I think naturally...that makes the thing that I'm describing here a much more comfortable fit. If we had multiple different downstream clients that we're trying to consume from the same back end, then I think a GraphQL API or some other structured JSON schema, whatever it is type of API, and associated documentation and typing layer would be probably a better fit.
But as I've said many a time on this here Bike Shed, Inertia is one of my favorite libraries or frameworks (They're probably more of a framework.) one of my favorite technological approaches that I have ever found. And particularly in building Sagewell, it has allowed us to move so rapidly the idea that changes are, you know, one fell swoop changes everything within the codebase. We don't have to think about syncing deploys for the back end and the front end and how to coordinate across them. Our app is so much easier to understand by virtue of that architecture that Inertia implies.

JOËL: So, if I understand correctly, you don't serialize to JSON as part of the serializers. You're serializing directly to JavaScript.

CHRIS: We do serialize to JSON. At the end of the day, Inertia takes care of this on both the Rails side and the client side. There is a JSON API. Like, if you look at the network inspector, you will see XHR requests happening. But critically, we're not doing that. We're not the ones in charge of it. We're not hitting a specific endpoint. It feels as an application coder much closer to a traditional Rails app. It just happens to be that we're writing our view layer. Instead of an ERB, we're writing them in Svelte files. But otherwise, it feels almost identical to a normal traditional Rails app with controllers and the normal routing and all that kind of stuff.

JOËL: One thing that's really interesting about JSON as an interchange format is that it is very restrictive. The primitives it has are even narrower than, say, the primitives that Ruby has. So you'd mentioned sending a date through. There is no JSON date. You have to serialize it to some other type, potentially an integer, potentially a string that has a format that the other side knows how it's going to interpret. And I feel like it's those sorts of richer types when we need to pass them through JSON that serialization and deserialization or parsing on the other end become really interesting.
CHRIS: Yeah, I definitely agree with that. It was a struggling point for a while until we found this new approach that we're doing with the serializers in the type system. But so far, the only thing that we've done this with is Money. But on the front end, a while ago, we introduced a specific TypeScript type. So it's a phantom type, and I believe I'm getting this correct. It's a phantom type called Cents, C-E-N-T-S. So it represents...I'm going to say an integer. I know that JavaScript doesn't have integers, but logically, it represents an integer amount of cents. And critically, it is not a number, like, the lowercase number in the type system. We cannot add them together. We can't --

JOËL: I thought you were going to say, NaN.

CHRIS: [laughs] It is not a number. I saw a n/a for not applicable somewhere in the application the other day. I was like, oh my God, we have a NaN? It happened? But it wasn't, it was just n/a, and I was fine. But yeah, so we have this idea of Cents within the application. We have a money input, which is a special input designed exactly for this. So to a user, it is formatted to look like you're entering dollars and cents. But under the hood, we are bidirectionally converting that to the integer amount of cents that we need. And we strictly, within the type system, those are cents. And you can't do math on Cents unless you use a special set of helper functions. You cannot generate Cents on the fly unless you use a special set of helper functions, the constructor functions. So we've been really restrictive about that, which was kind of annoying because a lot of the data coming from the server is just, you know, numbers. But now, with this type system that we've introduced on the Ruby side, we can assert and enforce that these are money.new on the Ruby side, so using the Money gem. And they come down to the front end as capital C Cents in the type system on the TypeScript side.
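A rough Ruby analogue of the Cents idea Chris describes (at Sagewell this constraint lives in TypeScript's type system; the names here are hypothetical, not their actual code): a value object that only ever holds an integer number of cents, can only be built through its constructors, and refuses to mix with bare numbers.

```ruby
# Hypothetical Cents value object: integer cents only, arithmetic only
# with other Cents, construction only through guarded constructors.
class Cents
  attr_reader :amount

  def initialize(amount)
    raise ArgumentError, "cents must be an Integer" unless amount.is_a?(Integer)
    @amount = amount
  end

  def self.from_dollars(dollars)
    # Go through Rational to avoid floating-point drift, and refuse
    # fractional cents rather than silently truncating them.
    cents = Rational(dollars.to_s) * 100
    raise ArgumentError, "fractional cents: #{dollars}" unless cents.denominator == 1
    new(cents.to_i)
  end

  def +(other)
    raise TypeError, "can only add Cents to Cents" unless other.is_a?(Cents)
    Cents.new(amount + other.amount)
  end

  def ==(other)
    other.is_a?(Cents) && amount == other.amount
  end

  def to_dollars_string
    format("$%d.%02d", *amount.divmod(100))
  end
end

price = Cents.from_dollars("19.99") # 1999 cents
tax   = Cents.new(160)
total = price + tax
```

The point of the restriction is the one Chris makes: `Cents.new(100) + 5` fails loudly instead of quietly producing a number whose unit nobody can vouch for.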
So we're able to actually bind that together and then enforce proper usage sort of on both sides. The next step that we plan to do after that is dates and times. And those are actually almost weirder because they end up...we just have to sort of say what they are, and they will be ISO 8601 date and time strings, respectively. But we'll have functions that know this is a date string; that's a thing. It is, again, a phantom type implemented within our TypeScript type system. But we will have custom functions that deal with that and really constrain...lock ourselves down to only working with them correctly. And critically, saying that is the only date and time format that we work with; there is no other. We don't have arbitrary dates. Is this a JSON date or something else? I don't know; there are too many date syntaxes.

JOËL: I like the idea of what you're doing in that it sounds like you're very much narrowing that sort of window of where in the stack the data exists in the sort of unstructured, free-floating primitives that could be misinterpreted. And so, at this point, it's almost narrowed to the point where it can't be touched by any user or developer-written code because you've pushed the boundaries on the Rails side down and then on the JavaScript side up to the point where the translation here you define translations on one side or, I guess, a parser on one side and a serializer on the other. And they guarantee that everything is good up until that point.

CHRIS: Yep, with the added fun of the runtime reflection on the Ruby side. So it's an interesting thing. Like, TypeScript actually has similar things. You can say what the type is all day long, and your code will consistently conform to that asserted type.
But at the end of the day, if your JSON API gets in some different data...unless you're using a library like io-ts, which is one that I've looked at, which actually does parsing and returns a result object of did we parse to the thing that you wanted or did we get an error in that data structure? So we could get to that level on the client side as well. We haven't done that yet largely because we've essentially pushed that concern up to the Ruby layer. So where we're authoring the data, because we own that, we're going to do it at that level. There are a bunch of benefits of defining it there and then sort of reflecting it down. But yeah, TypeScript, you can absolutely lie to yourself, whereas Elm, a language that I know you love dearly, you cannot lie to yourself in Elm. You've got to tell the truth. It's the only option. You've got to prove it. Whereas in TypeScript, you can just kind of suggest, and TypeScript will be like, all right, cool, I'll make sure you stay honest on that, but I'm not going to make you prove it, which is an interesting sort of set of related trade-offs there. But I think we found a very comfortable resting spot for right now. Although now, we're starting to look at the edges of the Ruby system where data is coming in. So we have lots of webhooks and other external partners that we're integrating with, and they're sending us data. And that data is of varying shapes. Some will send us a payload with the word amount, and it refers to an integer amount of cents because, of course, it does. Some will send us the word amount in their payload, and it will be a floating amount of dollars. And I get a little sad on those days. But critically, our job is to make sure all of those are the same and that we never pass dollars as cents or cents as dollars because that's where things go sad. That is job number one at Sagewell in the engineering team is never get the decimal place wrong in money.

JOËL: That would be a pretty terrible mistake to make.
CHRIS: It would. I mean, it happens. In fintech, that problem comes up a lot. And again, the fact that...I'm honestly surprised to see situations out there where we're getting in floating point dollars. That is a surprise to me because I thought we had all agreed sort of as a community that it was integer cents but especially in a language that has integers. JavaScript, it's kind of making it up the whole time. But Ruby has integers. JSON, I guess, doesn't have integers, so I'm sort of mixing concerns here, but you get the idea.

JOËL: Despite Ruby not having a static type system, I've found that generally, when I'm integrating with a third-party API, I get to the point where I want something that approximates like Elm's JSON decoders or io-ts or something like that. Because JSON is just a big blob of data that could be of any shape, and I don't really trust it because it's third-party data, and you should not trust third parties. And I find that I end up maybe cobbling something together commonly with like a bunch of usage of hash.fetch, things like that. But I feel like Ruby doesn't have a great approach to parsing and composing these validators for external data.

CHRIS: Ruby as a language certainly doesn't, and the ecosystem, I would say, is rather limited in terms of the options here. We have looked a bit at the dry-rb stack of gems, so dry-validation and dry-schema, in particular, both offer potentially useful aspects. We've actually done a little bit of spiking internally around that sort of thing of, like, let's parse this incoming data instead of just coercing to hash and saying that it's got probably the shape that we want. And then similarly, I will fetch all day instead of digging because I want to be quite loud when we get it wrong. But we're already using dry-monads. So we have the idea of result types within the system. We can either succeed or fail at certain operations. And I think it's just a little further down the stack.
But probably something that we will implement soon is at those external boundaries where data is coming in doing some form of parsing and validation to make sure that it conforms to a known data structure. And then, within the app, we can do things more cleanly. That also would allow us to, like, let's push the idea that this is floating point dollars all the way out to the edge. And the minute it hits our system, we convert it into a money.new, which means that cents are properly handled. It's the same type of money or dollar, same type of currency handling as everywhere else in the app. And so pushing that to the very edges of our application is a very interesting idea. And so that could happen in the library or sort of a parsing client, I guess, is probably the best way to think about it. So I'm excited to do that at some point.

JOËL: Have you read the article, Parse, Don't Validate?

CHRIS: I actually posted that in some code review the other day to one of the developers on the team, and they replied, "You're just going to quietly drop one of my favorite articles of all time in code review?" [laughs] So yes, I've read it; I love it. It's a wonderful idea, definitely something that I'm intrigued by. And sort of bringing dry-monads into Ruby, on the one hand, feels like a forced fit and yet has also been one of the other, I think strongest sort of architectural decisions that we've made within the application. There's so much imperative work that we ended up having to do. Send this off to this external API, then tell this other one, then tell this other one. Put the whole thing in a transaction so that our local data properly handles it. And having dry-monads do notation, in particular, to allow us to make that manageable but fail in all the ways it needs to fail, very expressive in its failure modes, that's been great. And then parse, don't validate we don't quite do it yet. But that's one of the dreams of, like, our codebase really should do that thing.
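The edge parsing Chris sketches, catching a partner's floating-point dollars the moment they enter the system, might look roughly like this. The payload shape and function names are hypothetical, and a tiny hand-rolled Result stands in for the Success/Failure types that dry-monads would provide in the real app:

```ruby
# Minimal Result types standing in for dry-monads' Success/Failure.
Success = Struct.new(:value) do
  def success?
    true
  end
end

Failure = Struct.new(:error) do
  def success?
    false
  end
end

# This hypothetical partner sends "amount" as floating-point dollars,
# so we convert to integer cents exactly once, right here at the edge.
def parse_partner_payload(payload)
  amount = payload.fetch("amount") { return Failure.new("missing amount") }
  return Failure.new("amount must be numeric") unless amount.is_a?(Numeric)

  # Rational avoids floating-point drift during the dollars-to-cents move.
  cents = Rational(amount.to_s) * 100
  return Failure.new("fractional cents: #{amount}") unless cents.denominator == 1

  Success.new(cents.to_i)
end

parse_partner_payload({ "amount" => 12.34 }) # Success wrapping 1234 cents
```

Inside the app, the Success value would then be wrapped in a proper Money object so that everything past the boundary handles cents the same way.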
We believe in that. So let's get there soon.

JOËL: And the core idea behind parse, don't validate is that instead of just having some data that you don't trust, running a check on it, and passing that blob of now checked but still untrusted data down to the next person who might also want to check it, you want to pass it through some sort of filter that will, one, validate that it's correct but then actually typically convert it into some other trusted shape. In Ruby, that might be something like taking an amorphous blob of JSON and turning it into some kind of value object or something like that. And then anybody downstream that receives, let's say, a money object can trust that they're dealing with a well-formed money value as opposed to an arbitrary blob of JSON, which hopefully somebody else has validated, but who knows? So I'm going to validate it again.

CHRIS: You can tell that I've been out of the podcasting game for a while because I just started responding to yes; I love that blog post without describing the core premise of it. So kudos to you, Joël; you are a fantastic podcast host over there. I will say one of the things you just described is an interesting...it's been a bit of a struggle for us. We keep sort of talking through what's the architecture. How do we want to build this application? What do we care about? What are the things that really matter within this codebase, and then what is all the other stuff? And we've been good at determining the things that really matter, thinking collectively as a group, and I think coming up with some novel, useful, elegant...I'm saying too many positive adjectives for what we're doing. But I've been very happy with sort of the thing that we decide. And then there's the long-tail work of actually propagating that change throughout the rest of the application. We're, like, okay, here's how it works. Every incoming webhook, we now parse and yield a value object.
That sentence that you just said a minute ago is exactly what I want. That's like a bunch of work. It's particularly a bunch of work to convert an existing codebase. It's easy to say, okay, from here forward, any new webhooks, payloads that are coming in, we're going to do in this way. But we have a lot of things in our app now that exist in this half-converted way. There was a brief period where we had three different serializer technologies at play. Just this week, I did the work of killing off the middle ground one, the Primalize-based thing, and we now have only our new hotness and then the very old. We were using Blueprinter as the serializer as the initial sort of stub. And so that still exists within the codebase in some places. But trying to figure out how to prioritize that work, the finishing out those maintenance-type conversions is a tricky one. It's never the priority. But it is really nice to have consistency in a codebase. So it's...yeah, do you have any thoughts on that?

JOËL: I think going back to the article and what the meaning of parsing is, I used to always think of parsing as taking strings and turning them into something else, and I think this really broadened my perspective on the idea of parsing. And now, I think of it more as converting from a broader type to a narrower type with failures. So, for example, you could go from a string to an integer, and not all strings are valid integers. So you're narrowing the type. And if you have the string hello world, it will fail, and it will give you an error of some type. But you can have multiple layers of that. So maybe you have a string that you parse into an integer, but then, later on, you might want to parse that integer into something else that requires an integer in a range. Let's say it's a percentage. So you have a value object that is a percentage, but it's encoded in the JSON as a string.
So that first pass, you parse it from a string into an integer, and then you parse that integer into a percentage object. But if it's outside the range of valid percentage numbers, then maybe you get an error there as well. So it's a thing that can happen at multiple layers. And I've now really connected it with the primitive obsession smell in code. So oftentimes, when you decide, wait, I don't want a primitive here; I want a richer type, commonly, there's going to be a parsing step that should exist to go from that primitive into the richer type.

CHRIS: I like that. That was a classic Joël wildly concise summary of a deeply complex technical topic right there.

JOËL: It's like I'm going to connect some ideas from functional programming and a classic object-oriented code smell and, yeah, just kind of mash it all together with a popular article.

CHRIS: If only you had a diagram. Podcast is not the best medium for diagrams, but I think you could do it. You could speak one out loud, and everyone would be able to see it in their mind's eye.

JOËL: So I will tell you what my diagram is for this because I've actually created it already. I imagine this as a sort of like pyramid with different layers that keep getting smaller and smaller. So the size of type is sort of the width of a layer. And so your strings are a very wide layer. Then on top of that, you have a narrower layer that might be, you know, it could be an integer, or you could even if you're parsing JSON, you first start with a string, then you parse that into a Ruby hash, not all strings are valid hashes. So that's going to be narrower. Then you might extract some values out of that hash. But if the keys aren't right, that might also fail. You're trying to pull the user out of it. And so at each layer, it gets a richer type, but that richer type, by virtue of being richer, is narrower. And as you're trying to move up that pyramid at every step, there is a possibility for a failure.
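Joël's pyramid can be sketched as a chain of narrowing parsers, each of which can fail; the Percentage value object and the function names here are hypothetical, for illustration only.

```ruby
# Each layer parses a broader type into a narrower one, and can fail.
Percentage = Struct.new(:value)

# Layer 1: String -> Integer (not all strings are valid integers).
def parse_integer(string)
  Integer(string, 10)
rescue ArgumentError, TypeError
  raise ArgumentError, "#{string.inspect} is not an integer"
end

# Layer 2: Integer -> Percentage (not all integers are valid percentages).
def parse_percentage(int)
  raise ArgumentError, "#{int} is outside 0..100" unless (0..100).cover?(int)
  Percentage.new(int)
end

# Moving up the pyramid: each step narrows the type or raises.
parse_percentage(parse_integer("85")) # Percentage with value 85
```

The same shape applies at any layer: string to hash, hash to user, primitive to value object, with a possible failure at every step.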
CHRIS: Have you written a blog post about this with said diagram in it? And is that why you have that so readily at hand? [laughs]

JOËL: Yes, that is the case.

CHRIS: Okay. Yeah, that made sense to me. [laughs]

JOËL: We'll make sure to link to it in the show notes.

CHRIS: Now you have to link to Joël's blog posts, whereas I used to have to link to them [chuckles] in almost every episode of The Bike Shed that I recorded.

JOËL: Another thing I've been thinking about in terms of this parsing is that parsing and serializing are, in a sense, almost opposites of each other. Typically, when you're parsing, you're going from a broad type to a narrow one. And when you're serializing, you're going from a narrow type to a broader one. So you might go from a user into a hash into a string. So you're sort of going down that pyramid rather than going up.

CHRIS: It is an interesting observation and one that immediately my brain is like, okay, cool. So can we reuse our serializers but just run them in reverse or? And then I try and talk myself out of that because that's a classic don't repeat yourself sort of failure mode of, like, actually, it's fine. You can repeat a little bit. So long as you can repeat and constrain, that's a fine version. But yeah, feels true, though, at the core.

JOËL: I think, in some ways, if you want a single source of truth, what you want is a schema, and then you can derive serializers and parsers from that schema.

CHRIS: It's interesting because you used the word derive. That has been an interesting evolution at Sagewell. The engineering team seems to be very collected around the idea of explicitness, almost the Zen of Python; explicit is better than implicit. And we are willing to write a lot of words down a lot of times and be happy with that. I think we actually made the explicit choice at one point that we will not implement an automatic camel case conversion in our serializer, even though we could; this is a knowable piece of code.
But what we want is the grepability from the front end to the back end to say, like, where's this data coming from? And being able to say, like, it is this data, which is from this serializer, which comes from this object method, and being able to trace that very literally and very explicitly in the code, even though that is definitely the sort of thing that we could derive or automatically infer or have Ruby do that translation for us. And our codebase is more verbose and a little noisier. But I think overall, I've been very happy with it, and I think the team has been very happy. But it is an interesting one because I've seen plenty of teams where it is the exact opposite. Any repeated characters must be destroyed. We must write code to write the code for us. And so it's fun to be working with a team where we seem to be aligned around an approach on that front.

JOËL: That example that you gave is really interesting because I feel like a common thing that happens in a serialization layer is also a form of normalization. And so, for example, you might downcase all strings as part of the serialization, definitely, like dates always get written in ISO 8601 format whenever that happens. And so, regardless of how you might have it stored on the Ruby side, by the time it gets to the JSON, it's always in a standard format. And it sounds like you're not necessarily doing that with capitalization.

CHRIS: I think the distinction would be the keys and the values, so we are definitely doing normalization on the values side. So ISO 8601 date and time strings, respectively; that is the direction that we plan to go for the values. But then for the key that's associated with that, what is the name for this data, those we're choosing to be explicit and somewhat repetitive, or not even necessarily repetitive, but the idea of, like, it's first_name on the Ruby side, and it's first capital N name camel case, or it's...I forget the name.
It's not quite camel case; it's a different one but lower camel, maybe. But whatever JavaScript uses, we try to bias towards that when we're going to the front end. It does get a little tricky coming back into the Ruby side. So our controllers have a bunch of places where they need to know about what I think is called lower camel case, and so we're not perfect there. But that critical distinction between sort of the names for things, and the values for things, transformations, and normalizations on the values, I'm good with that. But we've chosen to go with a much more explicit version for the names of things or the keys in JSON objects specifically.

JOËL: One thing that can be interesting if you have a normalization phase in your serializer is that that can mean that your serializer and parsers are not necessarily symmetric. So you might accept malformed data into your parser and parse it correctly. But then you can't guarantee that the data that gets serialized out is going to identically match the data that got parsed in.

CHRIS: Yeah, that is interesting. I'm not quite sure of the ramifications, although I feel like there are some. It almost feels like formatting Prettier and things like that where they need to hold on to whitespace in some cases and throw out in others. I'm thinking about how ASTs work. And, I don't know, there's interesting stuff, but, again, not sure of the ramifications. But actually, to flip the tables just a little bit, and that's an aggressive terminology, but we're going to roll with it. To flip the script, let's go with that, Joël; what's been up in your world? You've been hosting this wonderful show. I've listened in to a number of episodes. You're doing a fantastic job. I want to hear a little bit more of what's new in your world, Joël.

JOËL: So I've been working on a project that has a lot of flaky tests, and we're trying to figure out the source of that flakiness. It's easy to just dive into, oh, I saw a flaky test.
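The asymmetry Joël points out, where a lenient parser plus a normalizing serializer means the output need not match the input, can be sketched like this. The particular normalization rules (downcased emails, ISO 8601 dates) are hypothetical examples, not Sagewell's actual ones:

```ruby
require "json"
require "date"

# A lenient parser: accepts sloppy input and normalizes it on the way in.
def parse_signup(json)
  hash = JSON.parse(json)
  {
    email: hash.fetch("email").strip.downcase,
    signed_up_on: Date.parse(hash.fetch("signed_up_on")) # accepts many formats
  }
end

# A normalizing serializer: always ISO 8601 dates, always downcased emails.
def serialize_signup(record)
  JSON.generate(
    "email" => record[:email],
    "signed_up_on" => record[:signed_up_on].iso8601
  )
end

input  = %({"email":"  Joel@Example.COM ","signed_up_on":"Aug 23, 2022"})
output = serialize_signup(parse_signup(input))
# Valid and normalized, but deliberately not identical to what was parsed in:
# {"email":"joel@example.com","signed_up_on":"2022-08-23"}
```

Parsing the serialized output again would be a fixed point, but the original malformed input is unrecoverable, which is exactly the non-symmetry being described.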
Let me try to fix it. But we have so much flakiness that I want to go about it a little bit more systematically. And so my first step has actually been gathering data. So I've actually been able to make API requests to our CI server. And the way we figure out flakiness is looking at the commit hash that a particular test suite run has executed on. And if there's more than one CI build for a given commit hash, we know that's probably some kind of flakiness. It could be a legitimate failure that somebody assumed was flakiness, and so they just re-run CI. But the symptom that we are trying to address is the fact that we have a very high level of people re-verifying their code. And so to do that or to figure out some stats, I made a request to the API grouped by commit hash and then was able to get the stats of how many re-verifications there are and even the distribution.

The classic way that you would do that in Ruby is you would use the group_by method from Enumerable. And then you would transform values: instead of having, say, each commit hash point to all the builds, an array of builds that match that commit hash, you would sum those. So now you have commit hashes that point to counts of how many builds there were for that commit hash. Newer versions of Ruby introduced the tally method, which I love, which allows you to basically do all of that in one step.

One thing that I found really interesting, though, is that that will then give me a hash of commit hashes that point to the number of builds that are there. If I want to get the distribution for the whole project over the course of, say, the last week, and I want to say, "How many times do people run only one CI run versus running twice in the same commit versus running three times, or four times, or five or six times?" I want to see that distribution of how many times people are rerunning their build. You're effectively doing that tally process twice.
So once you have a list of all the builds, you group by hash. You count, and so you end up with that. You have the Ruby hash of commit SHAs pointing to number of times the build was run on that. And then, you again group by the number of builds for each commit SHA. And so now what you have is you'll have something like one, and then that points to an array of SHA one, SHA two, SHA three, SHA four like all the builds. And then you tally that again, or you transform values, or however, you end up doing it. And what you end up with is saying for running only once, I now have 200 builds that ran only once. For running twice in the same commit SHA, there are 15. For running three times, there are two. For running four times, there is one. And now I've got my distribution broken down by how many times it was run. It took me a while to work through all of that. But now the shortcut in my head is going to be you double tally to get distribution.

CHRIS: As an aside, the whole everything you're talking about is interesting and getting to that distribution. I feel like I've tried to solve that problem on data recently and struggled with it. But particularly tally, I just want to spend a minute because tally is such a fantastic addition to the Ruby standard library. I used to have in sort of like loose muscle memory: group_by ampersand itself, transform_values count, sort, reverse, to_h. That whole string of nonsense gets replaced by tally, and, oof, what a beautiful example of Ruby, and enumerable, and all of the wonder that you can encapsulate there.

JOËL: Enumerable is one of the best parts of Ruby. I love it so much. It was one of the first things that just blew my mind about Ruby when I started. I came from a PHP, C++ background and was used to writing for loops for everything and not the nice for each loops that a lot of languages have these days. You're writing like a legit for or while loop, and you're managing the indexes yourself.
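The "double tally to get distribution" shortcut, applied to some hypothetical CI build data, looks like this (the comment shows the older group_by chain that tally replaces):

```ruby
# Hypothetical CI data: the commit SHA each build ran against.
build_shas = %w[abc abc def ghi ghi ghi jkl]

# First tally: number of builds per commit SHA.
# Pre-tally equivalent: build_shas.group_by(&:itself).transform_values(&:count)
builds_per_sha = build_shas.tally
# => {"abc"=>2, "def"=>1, "ghi"=>3, "jkl"=>1}

# Second tally: how many commits were built once, twice, three times...
distribution = builds_per_sha.values.tally
# => {2=>1, 1=>2, 3=>1}, i.e. two commits ran once, one twice, one three times
```

Any commit with a count above one in the first tally is a candidate for the re-verification behavior Joël is measuring.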
And there's so much room for things to go wrong. And being introduced to each blew my mind. And I was like, this is so beautiful. I'm not dealing with indexes. I'm not dealing with the raw implementation of the array. I can just say do a thing for each element. This is amazing. And that is when I truly fell in love with Ruby.

CHRIS: I want to say I came from Python, most recently before Ruby. And Python has pretty nice list comprehensions and, in fact, in some ways, features that enumerable doesn't have. But, still, coming to Ruby, I was like, oh, this enumerable; this is cool. This is something. And it's only gotten better. It still keeps growing, and the idea of custom enumerables. And yeah, there's some real neat stuff in there.

JOËL: I'm going to be speaking at RubyConf Mini this fall in November, and my talk is all about Enumerators and ranges in enumerable and ways you can use those to make the APIs of the objects that you create delightful for other people to use.

CHRIS: That sounds like a classic Joël talk right there that I will be happy to listen to when it comes out. A very quick related, a semi-related aside, so, tally, beautiful addition to the Ruby language. On the Rails side, there was one that I used recently, which is where.missing. Have you seen where.missing?

JOËL: I have not heard of this.

CHRIS: So where.missing is fantastic. Let's assume you've got two related objects, so you've got like a has many blah, so like a user has many posts. I think you can...if I'm remembering it correctly, it's User.where.missing(:posts). So it's where dot missing and then parentheses the symbol posts. And under the hood, Rails will do the whole LEFT OUTER JOIN where the count is null, et cetera. It turns into this wildly complex SQL query or understandably complex, but there's a lot going on there. And yet it compresses down so elegantly into this nice, little ActiveRecord bit.
So where.missing is my new favorite addition into the Rails landscape to complement tally on the Ruby side, which I think tally is Ruby 2.7, I want to say. So it's been around for a while. And where.missing might be a Rails 7 feature. It might be a six-something, but still, wonderful features, ever-evolving these tool sets that we use.

JOËL: One of the really nice things about enumerable and family is the fact that they build on a very small amount of primitives, and so as long as you basically understand blocks, you can use enumerable and anything in there. It's not special syntax that you have to memorize. It's just regular functions and blocks. Well, Chris, thank you so much for coming back for a visit. It's been a pleasure. And it's always good to have you share the cool things that you're doing at Sagewell.

CHRIS: Well, thank you so much, Joël. It's been an absolute pleasure getting to come back to this whole Bike Shed. And, again, just to add a note here, you're doing a really fantastic job with the show. It's been interesting transitioning back into listener mode for the show. Weirdly, I wasn't listening when I was a host. But now I've regained the ability to listen to The Bike Shed and really enjoy the episodes that you've been doing and the wonderful spectrum of guests that you've had on and variety of topics. So, yeah, thank you for hosting this whole Bike Shed. It's been great.

JOËL: And with that, let's wrap up. The show notes for this episode can be found at bikeshed.fm. This show is produced and edited by Mandy Moore. If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. If you have any feedback, you can reach us at @_bikeshed, or reach me at @joelquen on Twitter, or at hosts@bikeshed.fm via email. Thank you so much for listening to The Bike Shed, and we'll see you next week. Byeeeeeeeeeee!!!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

Laravel News Podcast
Profiling your apps, scheduling email, and JSON API resources

Laravel News Podcast

Play Episode Listen Later Aug 23, 2022 45:54


Jake and Michael discuss all the latest Laravel releases, tutorials, and happenings in the community.

This episode is sponsored by Honeybadger - combining error monitoring, uptime monitoring and check-in monitoring into a single, easy to use platform and making you a DevOps hero.

Show links

- Laravel 9.24 released
- Nagios
- Grafana
- Nginx Amplify
- Laravel 9.25 released
- Profile your Laravel application with Xhprof
- Email scheduler package for Laravel
- Zero hassle CLI application with Laravel Zero
- JSON API resources in Laravel
- How I develop applications with Laravel
- Event sourcing in Laravel
- Detect slow queries before they hit your production database

Thinking Elixir Podcast
109: Digital Signal Processing with NxSignal

Thinking Elixir Podcast

Play Episode Listen Later Jul 26, 2022 35:27


A new library in the Nx ecosystem under active development is called NxSignal by Paulo Valente. We talk with Paulo to learn what a DSP (Digital Signal Processor) is, how it works, and we touch on the kinds of problems it can solve. We learn about his involvement in Nx, where the library is going, and some unusual ways he's applied it. He also shares how he's using Nx Explorer in production to clean up and process financial data returned in a JSON API and much more!

Show Notes online - http://podcast.thinkingelixir.com/109 (http://podcast.thinkingelixir.com/109)

Elixir Community News

- https://asciinema.org/a/FYnQFc358WaL5uBfwZPoK5IRm (https://asciinema.org/a/FYnQFc358WaL5uBfwZPoK5IRm) – José Valim showed off a new Elixir 1.14 feature of line-by-line breakpoints demonstrated in IEx.
- https://github.com/elixir-lang/elixir/pull/11974 (https://github.com/elixir-lang/elixir/pull/11974) – PR for initial Kernel.dbg/2 work
- https://twitter.com/josevalim/status/1547154092019122176 (https://twitter.com/josevalim/status/1547154092019122176)
- https://github.com/erlang/otp/pull/6144 (https://github.com/erlang/otp/pull/6144) – Implement new Erlang shell
- https://blog.rabbitmq.com/posts/2022/07/rabbitmq-3-11-feature-preview-super-streams/ (https://blog.rabbitmq.com/posts/2022/07/rabbitmq-3-11-feature-preview-super-streams/) – RabbitMQ gets a new feature called “Super Streams”
- https://github.com/elixir-grpc/grpc (https://github.com/elixir-grpc/grpc) – Paulo Valente became the new maintainer of the Elixir gRPC library
- https://twitter.com/josevalim/status/1549091140246331399 (https://twitter.com/josevalim/status/1549091140246331399) – Livebook announcement. Cloud host or new Desktop option.
- http://livebook.dev (http://livebook.dev) – Livebook Desktop was launched

Do you have some Elixir news to share?
Tell us at @ThinkingElixir (https://twitter.com/ThinkingElixir) or email at show@thinkingelixir.com (mailto:show@thinkingelixir.com)

Discussion Resources

- https://github.com/polvalente/nx-signal (https://github.com/polvalente/nx-signal) – NxSignal project
- https://twitter.com/polvalente/status/1533954854946848771 (https://twitter.com/polvalente/status/1533954854946848771)
- https://www.stone.co/ (https://www.stone.co/) – Where Paulo Valente works
- https://www.premierguitar.com/gear/gibsons-self-tuning-guitar (https://www.premierguitar.com/gear/gibsons-self-tuning-guitar) – Example of self tuning guitar with built-in DSP
- https://en.wikipedia.org/wiki/List_of_Super_NES_enhancement_chips#DSP (https://en.wikipedia.org/wiki/List_of_Super_NES_enhancement_chips#DSP) – SNES DSP enhancement chips
- https://github.com/polvalente/grpclassify (https://github.com/polvalente/grpclassify) – His academic project for transcribing musical notes
- http://www.repositorio.poli.ufrj.br/monografias/monopoli10029831.pdf (http://www.repositorio.poli.ufrj.br/monografias/monopoli10029831.pdf) – The final project for his engineering degree that led him to get involved with Nx.
- https://grpc.io/ (https://grpc.io/) – gRPC project
- https://github.com/elixir-grpc/grpc (https://github.com/elixir-grpc/grpc) – An Elixir implementation of gRPC
- https://prometheus.io/docs/introduction/overview/ (https://prometheus.io/docs/introduction/overview/)

Guest Information

- https://twitter.com/polvalente (https://twitter.com/polvalente) – on Twitter
- https://github.com/polvalente/ (https://github.com/polvalente/) – on Github

Find us online

- Message the show - @ThinkingElixir (https://twitter.com/ThinkingElixir)
- Email the show - show@thinkingelixir.com (mailto:show@thinkingelixir.com)
- Mark Ericksen - @brainlid (https://twitter.com/brainlid)
- David Bernheisel - @bernheisel (https://twitter.com/bernheisel)
- Cade Ward - @cadebward (https://twitter.com/cadebward)

Whiskey Web and Whatnot
Developing Orbit and the Future of Cross Framework Solutions with Dan Gebhardt

Whiskey Web and Whatnot

Play Episode Listen Later Jul 14, 2022 51:51


Years ago, Dan Gebhardt was mapping out data needs for an app he was building. In a struggle to make sense of every requirement and apply them to other packages like Ember Data, he hit a wall. At this point, there was no option for adapting Ember Data to the complex specificities of his app's needs.

Dan tried to rationalize a solution, deconstructing entire data universes and all aspects of a data library. The end result was Orbit, a framework-agnostic data layer with use cases beyond the obvious. Since its inception, many developers have leaned on Orbit, including those at Ship Shape.

In this episode, Chuck and Robbie talk with Dan about Orbit's origin story, the best (and least obvious) ways to use Orbit, why Dan chose platform-agnostic, what he really thinks about Starbeam, his ultimate goal with Orbit, and Dan's all-time favorite power tool.

Key Takeaways

- [00:45] - A brief intro to Dan.
- [03:02] - A whiskey review - Nikka Single Malt Miyagikyo.
- [11:04] - Why Dan created Orbit.
- [15:47] - Unexpected use cases for Orbit.
- [21:42] - How Orbit flags a conflict.
- [25:33] - Orbit's use cases outside of JSON:API.
- [32:46] - What Dan thinks about Starbeam.
- [35:12] - How Dan escapes his computer.
- [40:32] - Dan's favorite power tool.
- [42:33] - Dan's thoughts on New Hampshire (and New Jersey).
- [48:46] - Dan's closing thoughts and his sneak peek at a new release.
Quotes

- [13:28] - “Sometimes building for the hard case first also helps clarify the simple case and I think that Orbit really scales from the very simple to the very complex set of requirements.” ~ @dgeb
- [17:47] - “That's one of my favorite aspects of working with Orbit is using it as simply as possible to just prototype an app really quickly.” ~ @dgeb
- [33:32] - “The frameworks have too long been siloed and we are now seeing some really interesting cross framework solutions out there, whether you're talking about Starbeam or even something like Remix or Astro.” ~ @dgeb

Links

- Dan Gebhardt
- Ember Core Team Emeritus
- JSON:API
- Orbit.js
- Tilde
- Ruby On Rails
- Rust
- Yehuda Katz
- JSONAPI::Resources
- Nikka Single Malt Miyagikyo
- Nikka From The Barrel
- The Glencairn Whiskey Glass
- The Norlan Whiskey Glass
- GraphQL
- IndexedDB
- Ember Data
- Swach
- Git
- Apollo
- Whiskey Web and Whatnot: Discovering Ember, Adopting Orbit, and Unlocking Optimization with Chris Thoburn (runspired)
- LinkedIn
- ember-m3
- Hooks
- React
- Eric Elliott
- @glimmer/tracking
- Starbeam
- Astro
- Svelte
- Vue
- RedwoodJS
- Next.js
- Acquia
- Whiskey Web and Whatnot: Next.js 12, React vs. Svelte, and the Future of Frameworks with Wes Bos
- D.C. United
- Audi Field

Connect with our hosts

- Robbie Wagner
- Chuck Carpenter
- Ship Shape

Subscribe and stay in touch

- Apple Podcasts
- Spotify
- Google Podcasts
- Whiskey Web and Whatnot

Top-Tier, Full-Stack Software Consultants

This show is brought to you by Ship Shape. Ship Shape's software consultants solve complex software and app development problems with top-tier coding expertise, superior service, and speed. In a sea of choices, our senior-level development crew rises above the rest by delivering the best solutions for fintech, cybersecurity, and other fast-growing industries. Check us out at shipshape.io.

Talking Drupal
Talking Drupal #337 - Layout Paragraphs

Talking Drupal

Play Episode Listen Later Mar 7, 2022 70:18


Today we are talking about Layout Paragraphs with Justin Toupin. www.talkingDrupal.com/337

Topics

- Ukraine – https://www.drupal.org/association/blog/drupal-association-statement-of-support-for-ukraine
- Drupal 7 end of life
- What is Layout Paragraphs
- How it works
- Who it is for
- Current status
- Timeline for the project
- Why you worked on this
- Marketing and editorial staff need flexible tools
- Complex interfaces became the norm
- Content teams need to involve devs
- Layout paragraphs has been called an evolution of WYSIWYG Paragraphs
- Comparison between Layout Paragraphs and Layout Builder
- Listener question from Steven – Is there a way to show the label of the paragraph type without needing to hover over the content on the edit screen
- What is next
- Mercury editor
- Getting started
- Headless Drupal

Resources

- Drupal Association
- Drupal 7 end of life
- Layout Paragraphs
- Itamair
- Talking Drupal #327 - Layout Builder vs Paragraphs
- Mercury Editor
- Justin at Design for Drupal

Guests

Justin Toupin - aten.io @justin2pin

Hosts

Nic Laflin - www.nLighteneddevelopment.com @nicxvan
John Picozzi - www.epam.com @johnpicozzi
Martin Anderson-Clutz - @mandclu

MOTW

JSON:API Node Preview Tab
Adds a tab to nodes that allows a quick preview of the node's representation as JSON:API. If using this with a Chrome browser, we suggest using the JSONVue extension to improve the formatting, with the option enabled to format contents in frames.

Talk Python To Me - Python conversations for passionate developers

Do we talk about running Python in production enough? I can tell you that the Talk Python infrastructure (courses, podcasts, APIs, etc.) get a fair amount of traffic, but they look nothing like what Google, or Instagram, or insert [BIG TECH NAME] here's deployments do. Yet, mostly, we hear about interesting feats of engineering at massive scale that is impressive but often is also outside of the world most Python devs need for their companies and services. I have three great guests who do think we should talk more about small to medium-sized Python deployments: Emily Moorehouse, Hynek, and Glyph. I think you'll enjoy the conversation. They each bring their own interesting perspectives.

Links from the show

- Emily on Twitter: @emilyemorehouse
- Hynek on Twitter: @hynek
- Glyph on Twitter: @glyph

Main article by Hynek

- Python in Production Article: hynek.me

Supporting articles

- Solid Snakes or: How to Take 5 Weeks of Vacation: hynek.me
- How to Write Deployment-friendly Applications: hynek.me
- Common Infrastructure Errors I've Made: matduggan.com

Thoughts on Monoliths

- Give me back my monolith: craigkerstiens.com
- Goodbye Microservices: From 100s of problem children to 1 superstar: segment.com
- Configuring uWSGI for Production Deployment: techatbloomberg.com
- https://martinfowler.com/bliki/MicroservicePremium.html
- https://martinfowler.com/bliki/MonolithFirst.html

More tools

- CuttleSoft: cuttlesoft.com
- pgMustard: Helps you review Postgres query plans quickly: pgmustard.com
- JSON:API: jsonapi.org
- Tenacity package: tenacity.readthedocs.io
- glom package: glom.readthedocs.io
- boltons package: boltons.readthedocs.io

Joke: The Torture Never Stops: devops.com
Watch this episode on YouTube: youtube.com
Episode transcripts: talkpython.fm

--- Stay in touch with us ---
Subscribe on YouTube: youtube.com
Follow Talk Python on Twitter: @talkpython
Follow Michael on Twitter: @mkennedy

Sponsors

- SignalWire
- Tonic
- Talk Python Training

Whiskey Web and Whatnot
Discovering Ember, Adopting Orbit, and Unlocking Optimization with Chris Thoburn (runspired)

Whiskey Web and Whatnot

Play Episode Listen Later Jan 27, 2022 58:56


Runspired's journey with Ember began just like Chuck's, Robbie's, and many who've come before them — with confusion, hesitancy, and gradual infatuation.

The year was 2008 and runspired was launching an app. Somewhere along the way, he realized that if he wanted to build the collaborative web-first application he envisioned, he needed to build in JavaScript.

Sifting through Angular and React, nothing stuck. When he finally stumbled upon Ember, the pitfalls and confusion were obvious and almost immediately he abandoned the framework. But runspired soon realized that features within Ember matched the ideas he began developing in his own framework years prior. Suddenly, everything clicked and today runspired is an Ember aficionado with big ideas on the future of frameworks and the secrets to cutting edge optimization.

In this episode, Robbie, Chuck, and runspired discuss flaws in the developer community, why Orbit is useful, shifting the approach to API frameworks, and why JSON:API and GraphQL are a match made in developer heaven.

Key Takeaways

- [01:37] - A whiskey review.
- [11:26] - How runspired's journey in the Ember community evolved.
- [20:22] - What runspired thinks about RedwoodJS and API frameworks.
- [24:03] - Why Orbit is flawed but incredibly useful.
- [29:45] - What's missing from the developer community.
- [36:01] - Why JSON:API and GraphQL are a perfect marriage.
- [41:59] - What Ember Data cares about.
- [48:01] - A conversation about whatnot including Chris' dive into professional running.
- [55:55] - A cause runspired cares about in the Ember community.

Quotes

[18:30] - “I've never found a reason to want to re-evaluate Ember as my main framework. Every time I've had a complaint, it's evolved to satisfy that complaint with time.” ~ runspired

[23:00] - “So many of the problems that I see applications encounter late in their life cycles are problems where the API framework just wasn't set up well in the first place.
And if they had had a better framework for building APIs and understanding how applications are maybe going to mature, and how that API is going to need to evolve as the application matures, they probably would have been set up for better success.” ~ runspired

[24:42] - “Orbit, in my opinion, is the gold standard of data libraries for the front-end right now. Because it solves every problem that you don't know you have yet. But that's also its big flaw because it has found the end architecture that you've got to evolve to if you end up with those problems.” ~ runspired

Links

- EmberFest
- Balcones Whiskey
- Ember
- Whiskey Web and Whatnot: Chuck's Origin Story: Career Pivots and Learning to Love Ember
- Whiskey Web and Whatnot: Robbie's Origin Story: Learning to Code, Learning to Hire, and Taking the Entrepreneurial Leap
- Whiskey Web and Whatnot: Ember vs. React, Jamstack, and Holes in the Hiring Process with Chris Manson
- Whiskey Web and Whatnot: RedwoodJS, Developer Experience, and Developing for Scale with Tom Preston-Werner
- National Geographic
- JavaScript
- jQuery
- Backbone.js
- Angular
- Hacker News
- React
- Svelte
- RedwoodJS
- Rails
- Spring
- Rust
- Orbit
- GraphQL
- Discord
- JSON:API
- Redux
- LinkedIn
- runspired on Instagram
- runspired on LinkedIn

Connect with our hosts

- Robbie Wagner
- Chuck Carpenter
- Ship Shape

Subscribe and stay in touch

- Apple Podcasts
- Spotify
- Google Podcasts
- Whiskey Web and Whatnot

Top-Tier, Full-Stack Software Consultants

This show is brought to you by Ship Shape. Ship Shape's software consultants solve complex software and app development problems with top-tier coding expertise, superior service, and speed. In a sea of choices, our senior-level development crew rises above the rest by delivering the best solutions for fintech, cybersecurity, and other fast-growing industries. Check us out at shipshape.io.

airhacks.fm podcast with adam bien
EDI, Java Batch, MicroProfile, JSON-API and OpenAPI

airhacks.fm podcast with adam bien

Play Episode Listen Later Jul 3, 2021 49:47


An airhacks.fm conversation with Michael Edgar (@xlateio) about: custom Pentium 100, a telnet based, MUD game, Valhalla MUD, BBS was used to connect to the network, enjoying Apple 2 at school, enjoying Sonic Sega games, learning C-structures at college, learning 68000 assembly, from Assembly to Visual Basic and Java, starting at an insurance company and learning EDI, X12 and EDIFACT in EDI universe, the fascination with EDI, the beginners mind and Java Connector Architectures, the EDI "hello, world", starting to understand COBOL, back to Java with WSAD and IBM WebSphere, using JDBC, Servlets and Java Server Pages (JSP), using Java Batch processing (jbatch), using Java Batch DSL features, from WebSphere to Wildfly, misusing WildFly as Tomcat, from WildFly to MicroProfile using smallrye, JWT and OpenAPI committer, reusing Java Bean Validation as openAPI metadata, using jandex index for annotation scanning, smallrye OpenAPI already uses Bean Validation annotations, JSON API is used by Ember, JSON API is similar to odata, JSON-API is generated from JAX-RS, JPA and Bean Validation, JSON-API is used by EmberJs, xlate, RedHat OpenShift Streams for Apache Kafka Michael Edgar on twitter: @xlateio

The Bike Shed
296: Speedy Performance with Nate Berkopec

The Bike Shed

Play Episode Listen Later Jun 15, 2021 63:33


Nate Berkopec is the author of the Complete Guide to Rails Performance, the creator of the Rails Performance Workshop, and the co-maintainer of Puma. He talks with Steph about being known as "The Rails Speed Guy," and how he ended up with that title, publishing content, working on workshops, and also contributing to open source projects. (You could say he's kind of a busy guy!)

- Speedshop (https://www.speedshop.co/)
- Puma (https://github.com/puma/puma/commits/master?author=nateberkopec)
- The Rails Performance Workshop (https://www.speedshop.co/rails-performance-workshop.html)
- The Complete Guide to Rails Performance (https://www.railsspeed.com/)
- How To Use Turbolinks to Make Fast Rails Apps (https://www.speedshop.co/2015/05/27/100-ms-to-glass-with-rails-and-turbolinks.html)
- Sidekiq (https://sidekiq.org/)
- Follow Nate Berkopec on Twitter (https://twitter.com/nateberkopec)
- Visit Nate's Website (https://www.nateberkopec.com/)
- Sign up for Nate's Speedshop Ruby Performance Newsletter (https://speedshop.us11.list-manage.com/subscribe?u=1aa0f43522f6d9ef96d1c5d6f&id=840412962b)

Transcript:

STEPH: All right. I'll kick us off with our fancy intro. Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari. And this week, Chris is taking a break. But while he's away, I'm joined by Nate Berkopec, who is the owner of Speedshop, a Ruby on Rails performance consultancy. And, Nate, in addition to running a consultancy, you're the co-maintainer of Puma. You're also an author as you wrote a book called The Complete Guide to Rails Performance. And you run the workshop called The Rails Performance Workshop. So, Nate, I'm sensing a theme here.

NATE: Yeah, make code go fast.

STEPH: And you've been doing that for quite a while, haven't you?

NATE: Yeah. It's pretty much been since 2015, or so I think. It all started when I actually wrote a blog post about Turbolinks that got a lot of pick up.
My hot take at the time was that Turbolinks is actually a good thing. That take has since become uncontroversial, but it was quite controversial in 2015. So I got a lot of pick up on that, and I realized I liked working on performance, and people seem to want to hear about it. So I've been in that groove ever since.

STEPH: When you started down the path of really focusing on performance, were you running your own consultancy at that point, or were you working for someone else?

NATE: I would say it didn't really kick off until I actually published The Complete Guide to Rails Performance. So after that came out, which was, I think, March of 2016…I hope I'm getting that right. It wasn't until after that point when it was like, oh, I'm the Rails performance guy now. And I started getting emails inbound about that. I didn't really have any time when I was actually working on the CGRP to do that sort of thing. I just made that my full-time job to actually write, and market, and publish that. So it wasn't until after that that I was like, oh, I'm a performance consultant now. This is the lane I've driven myself into. I don't think I really had that as a strategy when I was writing the book. I wasn't like, okay, this is what I'm going to do. I'm going to build some reputation around this, and then that'll help me be a better consultant with this. But that's what ended up happening.

STEPH: I see. So it sounds like it really started more as a passion and something that you wanted to share. And it has manifested to this point where you are the speed guy.

NATE: Yeah, I think you could say that. I think when I started writing about it, I just knew...I liked it. I liked the work of performance. In a lot of ways, performance is a much more concrete discipline than a lot of other sub-disciplines of programming where I joke my job is number go down. It's very measurable, and it's very clear when you've made a difference. You can say, “Hey, this number was this, and now it's this.
Look what I did.” And I always loved that concreteness of performance work. It makes it actually a lot more like a real kind of engineering discipline where I think of performance engineering as clarifying requirements and the limitations and then building a project that meets the requirements while staying within those limitations and constraints. And that's often not quite as clear for other disciplines like general feature work. It's kind of hard to say sometimes, like, did you actually make the user's life better by implementing such and such? That's more of a guess. That's more of a less clear relationship. And with performance, nobody's going to wake up ten years from today and wish that their app was slower. So we can argue about the relative importance of performance in an application, but we don't really argue about whether or not we made it faster because we can prove that. STEPH: Yeah. That's one area that working with different teams (as I tend to shift the clients that I'm working with every six months) where we often push hard around feature work to say, “How can we measure this? How can we know that we are delivering something valuable to users?” But as you said, that's really tricky. It's hard to evaluate. And then also, when you add on the fact that if I am leaving that project in six months, then I don't have the same insights to understand how something went for that team. So I can certainly appreciate the satisfaction that comes from knowing that, yes, you are delivering a faster app. And it's very measurable, given the time that you're there, whether it's a short time or if it's a long time that you're with that team. NATE: Yeah, totally. My consulting engagements are often really short. I don't really do a lot of super long-term stuff, and that's usually fine because I can point to stuff and say, “Yep. This thing was at A, and now it's at B. 
And that's what you hired me to do, so now it's done.” STEPH: I am curious; given that you have so many different facets where you are running your consultancy, you are also often publishing a lot of content and working on workshops and then also contributing to open source projects. What does a typical week look like for you? NATE: Well, right now is actually a decent example. I have client work two or three days a week. And I'm actually working on a new product right now that I'm calling Sidekiq in Practice, which is a course/workshop about scaling Sidekiq from zero to 1000 jobs per second. And I'll spend the other days of the week working on that. My content is...I always struggle with how much time to spend on blogging specifically because it takes so much time for me to come up with a post and publish that. But the newsletter that I write, which I try to write once a week, I haven't been doing so well with it lately. But I think I got 50 newsletters done in 2020 or something like that. STEPH: Wow. NATE: And so I do okay on the per-week basis. And it's all content I've never published anywhere else. So that actually is like 45 minutes of me sitting down on a Monday and being like rant, [chuckles] slam keyboard and rant and then hit send. And my open source work is mostly 15 minutes a day while I'm drinking morning coffee kind of stuff. So I try to spread myself around and do a lot of different stuff. And a lot of that means, I think, pulling back in terms of thinking how much you need to spend on something, especially with newsletters, email newsletters, it was very easy to overthink that and spend a lot of time revising and whatever. But some newsletter is better than no newsletter. And especially when it comes to content and marketing, I've learned that frequency and regularity are more important than each and every post being the greatest thing that's ever come out since sliced bread.
So trying to build a discipline and a practice around doing that regularly is more important for me. STEPH: I like that, some newsletter is better than no newsletter. I was listening to your chat with Brittany Martin on the Ruby on Rails podcast. And you said something very honest that I appreciated where you said, “Writing is really hard, and writing sucks.” And that made me laugh in the moment because even though I do enjoy writing, I still find it very hard to be disciplined, to sit down and make it happen. And then you go into that editor mode where you critique everything, and then you never really get it published because you are constantly fixing it. It sounds like...you've mentioned you set aside about 45 minutes on a Monday, and you crank out some work. How do you work through that inner critic? How do you get past it to the point where then you just publish? NATE: You have to separate the steps. You have to not do editing and first drafting at the same time. And the reason why I say it sucks and it's hard is because I think a lot of people don't do a lot of regular writing, maybe get intimidated when they try to start. And they're like, “Wow, this is really hard. This is not fun.” And I'm just trying to say that's everybody's experience and if it doesn't get any better, because it doesn't, [chuckles] there's nothing wrong with you, that's just writing, it's hard. For me, especially with the newsletter, I just have to give myself permission not to edit and to just hit send when I'm done. I try to do some spell checking, and that's it. I just let it go. I'm not going back and reading it through again and making sure that I was very clear and cogent in all my points and that there's a really good flow through that newsletter.
I think it comes with a little bit of confidence in your own ideas and your own experience and knowledge, believing that that's worth sharing and that's worth somebody's time, even if it's not a perfect expression of what's in your head. Like, a 75% expression is good enough, especially in a newsletter format where it's like 500 to 700 words. And it's something that comes once a week. And maybe not everyone's amazing, but some of them are, enough of them are that people stay subscribed. So I think a combination of separating editing and first drafting and just having enough confidence and the basis where you have to say, “It doesn't have to be perfect every single time.” STEPH: Yeah, I think that's something that I learned a while back to apply to my coding process where I had to separate those two steps of where I have to let the creator in me just create and write some code and make it work, and then come back to the editing process, and taking a similar approach with writing. As you may be familiar with thoughtbot, we're big advocates when it comes to sharing content and sharing things that we have learned throughout the week and different projects that we're working on. And often when people join thoughtbot, they're very excited to contribute to the blog. But it is daunting for that first post because you think it has to be this really grand novel. And it has to be something that is really going to appeal to everybody, and it's going to help everyone. And then over time, you learn it's like, oh well, actually it can be this very just small thing that I learned that maybe only helps 20 people, but it still helped those 20 people. And learning to publish more frequently versus going for those grand pieces is more favorable and often more helpful for people. NATE: Yeah, totally. That's something that is difficult for people at first. 
But everything in my experience has led me to believe that frequency and regularity is just as, if not more important than the quality of any individual piece of content that I put out. So that's not to say that...I guess it's weird advice to give because people will take it too far the other way and think that means he's saying quality doesn't matter. No, of course, it does, but I think just everyone's internal biases are just way too tuned towards this thing must be perfect. I've also learned we're just really bad judges internally of what is useful and good for people. Stuff that I think is amazing and really interesting sometimes I'll put that out, and nobody cares. [chuckles] And the other stuff I put out that's just like the 45-minute banging out newsletter, people email me back and say, “This is the most helpful thing anyone's ever read.” So that quality bias also assumes that you know what is good and actually we're not really good at that, knowing every time what our audience needs is actually really difficult. STEPH: That's totally fair. And I have definitely run into that too, where I have something that I'm very proud of and excited to share, and I realize it relates to a very small group of people. But then there's something small that I do every day, and then I just happen to tweet about it or talk about it, and suddenly that's the thing that everybody's really excited about. So yeah, you never know. So share it all. NATE: Yeah. And it's important to listen. I pay attention to what people get interested in from what I put out, and I will do more of that in the future. STEPH: You mentioned earlier that you are working on another workshop focused on Sidekiq. What can you tell me about that? NATE: So it's meant to be a guide to scaling Sidekiq from zero to 1000 requests per second. 
And it's meant to be a missing guide to all the things that happen, like the situations that can crop up operationally when you're working on an application that does a lot of work with Sidekiq. Whereas Mike Sidekiq, Wiki, or the docs are great about how do, you do this? What does this setting mean? And the basics of getting it just running, Sidekiq in practice, is meant to be the last half of that. How do you get it to run 1,000 jobs per second in a day-to-day application? So it's the collected wisdom and collected battle scars from five years of getting called in to fix people's Sidekiq installations and very much a product of what are the actual problems that people experience, and how do you fix and deal with those? So stuff about memory and managing Sidekiq memory usage, how to think about queues. Like, what should your queue structure be? How many should you have? Like, how do you organize jobs into queues, and how do you deal with problems like some client is dropping 10,000, 20,000 jobs into a queue. And now the other jobs I put in that queue have 20,000 jobs in front of them. And now this other job I've got will take three hours to get through that queue. How do you deal with problems like that? All the stuff that people have come to me over the years and that I've had to help them fix. STEPH: That sounds really great. Because yeah, I find that teams who are often in this space with Sidekiq we just let it run until there's a fire. And then suddenly, we start to care as to how it's processing, and we care about our queue structure and how many workers that we have that are pulling from that queue. So that sounds really helpful. When you're building a workshop, do you often go back to any of those customers and pull more ideas from them, or do you find that you just have enough examples from your collective work with clients that that itself creates a course? 
NATE: Usually, pretty much every chapter in the workshop I've probably implemented like three-plus times, so I don't really have to go back to any individual customer. I have had some interesting stuff with my current client, Gusto. And Gusto is going through some background job reorganization right now and actually started to implement a lot of the things that I'm advocating in the workshop without talking to me. It was a good validation of hey, we all actually think the same here. And a lot of the solutions that they were implementing were things that I was ready to put down into those workshops. So I'd like to see those solutions implemented and succeed. So I think a lot of the stuff in here has been pretty battle-tested. STEPH: For the Rails Performance Workshop, you started off doing those live and in-person with teams, and then you have since switched to now it is a CLI course, correct? NATE: That's correct. Yep. STEPH: I love that very much. When you've talked about it, it does feel very appropriate in terms of developers and how we like to consume content and learn. So that is really novel and also, it seems like a really nice win for you. So then other people can take this course, but you are no longer the individual that has to deliver it to their team; they can independently take the course and go through it on their own. Are you thinking about doing the same thing for the Sidekiq course, or what are your plans for that one? NATE: Yeah, it's the exact same structure. So it's going to be delivered via the command line. Although I would say Sidekiq in Practice has more text components. So it's going to be a combination of a very short manual or book, and some video, and some hands-on exercises. So, an equal blend between all three of those components. And it's a lot of stuff that I've learned over having to teach, I guess, intermediate to advanced programming concepts for the last five years now: that people learn at different paces.
And one of the great things about this kind of format is you can pick it up, drop it off, and move at your own speed. Whereas a lot of times when I would do this in person, I think I would lose people halfway through because they would get stuck on something that I couldn't go back to because we only had four hours of the day. And if you deliver it in a class format, you're one person, and I've got 24 other people in this room. So it's infinitely pausable and replayable, and you can go back, or you can just skip ahead. If you've got a particular problem and you're like, hey, I just want to figure out how to fix such and such; you can do that. You can just come in and do a particular thing and then leave, and that's fine. So it's a good format that way. And I've definitely learned a lot from switching to pre-recorded and pre-prepared stuff rather than trying to do this all live in person. STEPH: That is one of the lessons that I've learned as well from the couple of workshops that I've led is that doing them in person, there's a lot of energy. And I really enjoy that part where I get to see people respond to the content. And then I get a lot of great feedback from people about what type of questions they have, where they are getting stuck. And that part is so important to me that I always love doing them live first. But then you get to the point, as you'd mentioned, where if you have a room full of 20 people and you have two people that are stuck, how do you help them but then still keep the class going forward? And then, if you are trying to tailor this content for a wide audience…so maybe beginners could take the Rails Performance Workshop, or they could also take the Sidekiq course. But you also want the more senior engineers to get something out of it as well. It's a very challenging task to make that content scale for everyone. NATE: Yeah. 
What you said there about getting feedback and learning was definitely something that I got out of doing the Rails Performance Workshop in person like three dozen times: the ability to look over people's shoulders and see where they got stuck. Because people won't email me and say, “Hey, this thing is really confusing.” Or “It doesn't work the way you said it does for me.” But when I'm in the same room with them, I can look over their shoulder and be like, “Hey, you're stuck here.” People will not ask questions. And you can get past that in an in-person environment. Or there are even certain questions people will ask in person, but they won't take the time to sit down and email me about. So I definitely don't regret doing it in person for so long because I think I learned a lot about how to teach the material and what was important and how people...what were the problems that people would encounter and stuff like that. So that was useful. And definitely, the Rails Performance Workshop would not be in the place that it is today if I hadn't done that. STEPH: Yeah, helping people feel comfortable asking questions is incredibly hard, and I've gone so far in the past as to create an anonymous way for people to submit questions. So during class, even if you didn't want to ask a question in front of everybody, you could submit a question to this forum, and I would get notified. I could bring it up, and we could answer it together. And even taking that strategy, I found that people wouldn't ask questions. And I guess it circles back to that inner critic that we have that's also preventing us from sharing knowledge that we have with the world because we're always judging what we're going to share and what we're going to ask in front of our peers who we respect. So I can certainly relate to being able to look over someone's shoulder and say, “Hey, I think you're stuck. We should talk.
Let me walk you through this or help you out.” NATE: There are also weird dynamics around in-person, not necessarily in a small group setting. But I think one thing I really picked up on and learned from RailsConf 2021, which was done online, was that in-person question asking requires a certain amount of confidence and bravado that you're not...People are worried about looking stupid, and they won't ask things in a public or semi-public setting that they think might make them look dumb. And so then the people that do end up asking questions are sometimes overconfident. They don't even ask a question. They just want to show off how smart they are about a particular issue. This is more of an issue at conferences. But the quality of questions that I got in the Q&A after RailsConf this year (They did it as Discord chats.) was way better. The quality of questions and discussion after my RailsConf talk was miles better than I've ever had at a conference before. Like, not even close. So I think experimenting with different formats around interaction is really good and interesting. Because it's clear there's no perfect format for everybody, and experimenting with these different settings and different methods of delivery has been very useful to me. STEPH: Yeah, that makes a ton of sense. And I'm really glad then for those opportunities where we're discovering that certain forums will help us get more feedback and questions from people because then we can incorporate that into future conferences where people can speak up and ask questions, and not necessarily be the one that's very confident and enjoys hearing their own voice. For the Rails Performance Workshop, what are some of the general things that you dive into for that workshop? I'm curious, what is it like to attend that workshop? Although I guess one can't attend it anymore. But what is it like to take that workshop? NATE: Well, you still can attend it in some sense because I do corporate bookings for it.
So if you want to buy 20 seats, then I can come in and basically do a Q&A every week while everybody takes the workshop. Anyway, I still do that. I have one coming up in July, actually. But my overall approach to performance is to always start with monitoring. So the course starts with goals and monitoring and understanding where you want to go and where you are when it comes to performance. So the first module of the Rails Performance Workshop is actually really a group exercise that's about what are our performance requirements and how can we set those? Both high-level and low-level. So what is our goal for page load time? How are we going to measure that? How are we going to use that to back into lower-level metrics? What is our goal for back-end response times? What is our goal for JavaScript bundle sizes? That all flows from a higher-level metric of how fast you want the page to load or how fast you want a route to change in a React app or something, and it talks about those goals. And then where should you even start with where those numbers should be? And then how are you going to measure it? What are the browser events that matter here? What tools are available to help you to get that data? Because without measurement, you don't really have a performance practice. You just have people guessing at what stuff is faster and what is not. And I teach performance as a scientific process, as science and engineering. And so, in the scientific method, we have hypotheses. We test those hypotheses, and then we learn based on those tests of our hypotheses. So that requires us to A, have a hypothesis, so like, I think that doing X makes this faster. And I talk about how you generate hypotheses using profiling, using tools that will show you where all the time goes when you do this particular operation of your software. And then measuring what happens when you do that? That's benchmarking.
So if you think that getting rid of method X or changing method X will speed up the app, benchmarking tells you: did you actually speed it up or not? And there are all sorts of little finer points to making sure that that hypothesis and that experiment is tested in a valid way. I spend a lot of time in the workshop yapping about the differences between development/local environments and production environments and which ones matter. Because the differences that matter are often not the ones that we think about; instead, it's differences like how, in Rails apps, the asset packaging and asset pipeline perform very differently in production than they do in development. And that's one of the primary reasons development is slower than production, so we make sure that we understand how to change those settings to more production-like settings. I talk a lot about data. The other primary difference between development and production is that production has a million users, and development has 10. So when you call things like User.all, that behavior is very different in production than it is locally. So having decent production-like data is another big one that I like to harp on in the workshops. So it's a process in the workshop of you just go lesson by lesson. And it's a lot of video followed up by hands-on exercises; half of them are pre-baked problems where I'm like, hey, take a look at this Turbolinks app that I've given you and look at it in DevTools. And here's what you should see. And then the other half is like, go work on your application. And here are some pull requests I think you should probably go try on your app. So it's a combination of hands-on work and videos of the actual experience going through it. STEPH: I love how you start with a smaller application that everyone can look at and then start to learn how performant is this particular application that I'm looking at?
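The hypothesis-then-benchmark loop Nate describes can be tried with nothing but Ruby's standard library. The two methods below are stand-ins for "method X" before and after a proposed change; the hypothesis itself is just an example:

```ruby
require "benchmark"

# Hypothesis (illustrative): building the string with map + join beats
# repeated concatenation. Benchmark.bmbm runs a rehearsal pass first,
# which reduces warm-up noise, one of the "finer points" of getting a
# valid measurement.
ITERATIONS = 10_000

def concat_version
  s = +""
  1.upto(100) { |i| s << i.to_s }
  s
end

def join_version
  (1..100).map(&:to_s).join
end

Benchmark.bmbm do |x|
  x.report("concat") { ITERATIONS.times { concat_version } }
  x.report("join")   { ITERATIONS.times { join_version } }
end
```

Whatever the numbers say, the point is the workflow: a profiler suggests where the time goes, and a benchmark like this confirms or refutes the proposed fix before it ships.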
Versus trying to assess, let's say, their own application where there may be a number of other variables that they have to consider. That sounds really nice. You'd mentioned one of the first exercises is talking about setting some of those goals and perhaps some of those benchmarks that you want to meet in terms of how fast should this page load, or how quickly should a response from the API be? Do you have a certain set of numbers for those benchmarks, or is it something that is different for each product? NATE: Well, to some extent, Google has suddenly given us numbers to work with. So as of this month, I think, June 2021, Google has started to use what they're calling Core Web Vitals in their ranking of search results. They've always tried to say it's not a huge ranking factor, et cetera, et cetera, but it does exist. It is being used. And that data is based on Chrome user telemetry. So every time you go to a website in Chrome, it measures three metrics and sends those back to Google. And those three metrics are Largest Contentful Paint (LCP), First Input Delay (FID), and Cumulative Layout Shift (CLS). And First Input Delay and Cumulative Layout Shift are more important for your single-page apps kind of stuff. It's hard to screw those up with a Golden Path Rails app that just does Turbolinks or Hotwire or whatever. But Largest Contentful Paint is an easy one to screw up. So Google's line in the sand that they've drawn is 2.5 seconds for Largest Contentful Paint. So that's saying that from clicking on your website in a Google search result, it should take no more than 2.5 seconds for the page to paint the largest element of that new page. That's often an image or a video or a large H1 tag or something like that. And to get to 2.5 seconds in Largest Contentful Paint, there are things that have to happen along the way. We have to download and execute all JavaScript. We have to download CSS. We have to send and receive back-end responses.
In the case of a simple Hotwire app, it's one back-end response. But in the case of a single-page app, you got to download the document and then maybe download several XHR fetches or whatever. So there's a chain of events that has to happen there. And you have to walk that back now from 2.5 seconds in Largest Contentful Paint. So that's the line that I'm seeing getting drawn in the sand right now with Google's Core Web Vitals. So pretty much any meaningful web application performance metric can be walked back from that. STEPH: Okay. That's super helpful. I wasn't aware of the Core Web Vitals and that particular stat that Google is using to then rank the sites. I was going to ask, this kind of blends in nicely into when do you start caring about performance? So if you have a new application that you are just starting to get to market, based on the fact that Google is going to start ranking you right away, you do have to care some right out of the gate. But I am curious, when do you start caring more about performance, and are there certain tools and benchmarking that you want to have in place from day one versus other things that you'll say, “Well, we can wait until we have X numbers of users or other conditions before we add more profiling?” NATE: I'd say as an approach, I teach people not to have a performance strategy of monitoring. So if your strategy is to have dashboards and look at them regularly, you're going to lose. Eventually, you're not going to look at that dashboard, or more often, you just don't understand what you're looking at. You just install New Relic or Datadog or whatever, and you don't know how to turn a dashboard into actual action. Also, it seems to just wear teams out, and there's no clear mechanism when you just have a dashboard of turning that into oh, well, this has to now be something that somebody on our team has to go work on. Contrast that with bugs, so teams usually have very defined processes around bugs. 
So usually, what happens is you'll get an Exception Notification through Sentry or Bugsnag or whatever your preferred Exception Notification service is. That gets read by a developer. And then you turn that into a Jira ticket or a Kanban board card or whatever. And then that is where work is done and prioritized. Contrast that with performance; there's often no clear mechanism for turning metrics into stuff that people actually work on. So understanding at your organization how that's going to work and setting up a process that automatically will turn performance issues into actual work that people get done is important. The way that I generally teach people to do this is to focus, instead of on dashboards and monitoring, on alerts: automated thresholds that get tripped and then send somebody an email or put something on the Kanban board or whatever. It just has to be something that automatically gets fired. Different tools have different ways of doing this. Datadog has pretty much built their entire product around monitoring and what they call monitors. That's a perfectly fine way to do it, whatever your chosen performance monitoring tool, which I would say is a required thing. I don't think there's really any good excuse in 2021 for not having a performance monitoring tool. There are a million different ways to slice it. You can do it yourself with OpenTelemetry and then like statsD, I don't know, or pay someone else like everyone else does for Datadog or New Relic or AppSignal or whatever. But you've got to have one installed. And then I would say you have to have some sort of automated alerting. Now, that alerting means that you've also decided on thresholds. And that's the hard work that doesn't get done when your strategy is just monitoring. So it's very easy to just install a dashboard and say, “Hey, I have this average page load time dashboard.
That means I'm paying attention to performance.” But if you don't have a clear answer to what number is good and what number is bad, then that dashboard cannot be turned into real action. So that's why I push alerting so hard: it allows people to safely ignore performance until an alert actually fires, and it forces you to make the decision upfront as to what number matters. So that is what I would say: install some kind of performance monitoring. I don't really care what kind. Nowadays, I also think there's probably no excuse to not have Real User Monitoring. There are enough GDPR-compliant Real User Monitoring options now that I think everyone should be using it. In industry terms, Real User Monitoring is just performance monitoring in the browser. It just uses the browser's APIs and sends those measurements back to you or your third-party provider, so you actually are collecting back-end and front-end performance metrics. And then making decisions around what is bad and what is good. Probably everybody should just start with a page load time monitor, a Largest Contentful Paint monitor. And if you've got a single-page app, probably hooking up some stuff around route changes or whatever your app...because you don't actually have page loads every single time you navigate. You have to instrument whatever those interactions are. So having those up and then just drawing some lines that say, “Hey, we want our React route changes to always be one second or less.” So I will set an alert that if the 95th percentile is one second or more, I'm going to get alerted. There are a lot of different ways to do that, and everybody will have different needs there. But having a handful of automated monitors is probably a place to start. STEPH: I like how you also focus on, once you have decided those thresholds and have that monitoring in place, how do you make it actionable?
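Vendor monitor APIs differ, but the rule Nate lands on ("alert me if the 95th percentile is one second or more") reduces to a percentile plus a threshold. A vendor-neutral toy sketch; the sample timings and the 1,000 ms budget are made up for illustration, and in practice the samples would come from your APM tool:

```ruby
# Toy threshold monitor: compute a percentile over recent samples and
# report whether an alert should fire. Real systems pull the samples
# from an APM vendor and route the alert to email/Slack/a ticket queue.
def percentile(samples, pct)
  raise ArgumentError, "no samples" if samples.empty?
  sorted = samples.sort
  rank = ((pct / 100.0) * (sorted.size - 1)).round
  sorted[rank]
end

def breached?(samples, pct: 95, threshold_ms: 1000)
  percentile(samples, pct) > threshold_ms
end

# Example: route-change timings in milliseconds for the last window.
timings = [220, 340, 150, 980, 1250, 400, 610, 300, 2100, 180]
puts breached?(timings) ? "ALERT: p95 over budget" : "ok"
```

The important part is not the arithmetic; it is that the threshold was decided upfront, so the check can run unattended instead of waiting for someone to read a dashboard.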
Because I have certainly been part of teams where we get those alerts, but we don't necessarily...what you just mentioned, prioritize that work to get done until we have perhaps a user complaint about it. Or we start actually having pages that are timing out and not loading, and then they get bumped up in the priority queue. So I really like that idea that if we agree upon those thresholds and then we get alerted, we treat that alert as if it is a user that is letting us know that a page is too slow and that they are unable to use our application, so then we can prioritize that work. NATE: And it's not all that dissimilar to bugs, really. And I think most teams have processes around correctness issues. And so, all that my strategy is really advocating for is to make performance fail loudly in the same way that most exceptions do. [chuckles] Once you get to that point, I think a lot of teams have processes around prioritization for bugs versus features and all that. And just getting performance into that conversation at least tends to make that solve itself. STEPH: I'm curious, as you're joining teams and helping them with their performance issues, are there particular buckets or categories of performance issues that are the most common in terms of, let's say, 50% of issues are SQL-related N+1 issues? What tends to be the breakdown that you see? NATE: So, when it comes to why something is slow in a Ruby application, I teach a method that I call DRM. And that doesn't have anything to do with actual DRM. It's just memorable because it reminds me of things I don't like. DRM stands for Database, Ruby, and Memory, in that order. So the most common issue is the database, the second most common issue is issues with your Ruby code. The least common issue is memory. Specifically, I'm talking about allocation of objects, creating lots of objects. So probably 80% of your issues are in some way database-related. In Rails, 50% of those are probably N+1.
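For anyone newer to the term, the N+1 shape Nate is referring to is one query for a parent collection plus one more query per row. A toy illustration with a stubbed query counter (plain Ruby, no ActiveRecord; in Rails the usual fix is `includes`/`preload`):

```ruby
# Simulated data layer that logs "queries" so the N+1 shape is visible.
QUERY_LOG = []

POSTS = [{ id: 1 }, { id: 2 }, { id: 3 }]
COMMENTS = { 1 => ["a"], 2 => ["b", "c"], 3 => [] }

def all_posts
  QUERY_LOG << "SELECT * FROM posts"
  POSTS
end

def comments_for(post_id)
  QUERY_LOG << "SELECT * FROM comments WHERE post_id = #{post_id}"
  COMMENTS[post_id]
end

def comments_for_ids(ids)
  QUERY_LOG << "SELECT * FROM comments WHERE post_id IN (#{ids.join(', ')})"
  COMMENTS.values_at(*ids).flatten
end

# N+1: one query for posts, then one per post for its comments.
QUERY_LOG.clear
all_posts.each { |p| comments_for(p[:id]) }
n_plus_one = QUERY_LOG.size # 4 queries for 3 posts

# Eager-loaded: two queries total, no matter how many posts there are.
QUERY_LOG.clear
posts = all_posts
comments_for_ids(posts.map { |p| p[:id] })
eager = QUERY_LOG.size # 2 queries
```

The query count of the first approach grows with the collection; the second stays constant, which is why N+1s dominate database-related slowdowns on list pages.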
And then 30% of database issues are probably what I would call unnecessary SQL. So it's not necessarily N+1, but it's a SQL query for information that you already had, or that you could get in a more efficient way. A common example of unnecessary SQL is that people will filter an ActiveRecord relation, like, ten different ways when they could have just loaded the whole collection once and filtered it with Ruby in those ten different ways afterwards. And that works really well if the collection that you're loading is small, like 10 or 20 records. Turning that into one database query plus a bunch of calls to Enumerable methods is often way faster than doing it as ten separate database queries. Also, that tends to be a more robust approach. This doesn't happen in most companies, but what can happen is that the database is a shared resource. It's a resource that everybody is affected by. So a performance degradation to the database is the worst possible scenario because everything is affected. But if you screw up what's happening at an individual Rails process, then only that Rails process is affected. The blast radius is tiny. It's just that one request. So while doing less stuff in the database can seem like, oh, that doesn't feel right, I'm supposed to do a lot of stuff in the database, it actually can reduce the blast radius of performance issues because you're not doing the work on this database that everyone has to have access to. There are a lot of areas of gray here, and I talk a lot in my other material about why; there's a lot of nuance here. So the database is the main stuff. Issues in how you write your Ruby code are probably the other one. Usually, that's just what I would call code that goes bump in the night. It's code that you don't know is running but actually is. Profilers are what help us figure that out. So oftentimes, I'll have someone open up a profiler on their controller action for the first time.
And they're like, wait a minute, I had no idea that such and such was running during this controller action, and actually, we don't need to do that at all. So why is it here? So that's the second most common issue. And then the third issue, which really doesn't come up all that often, is object allocation: the number of objects that get created. So primarily, this is a problem in index actions, or actions that deal with big collections. In Ruby, we often get overly focused on garbage collection, but garbage collection doesn't take any time if you just don't create objects. And object creation itself takes time. So looking at code through the lens of what objects does this code create, and trying to get rid of those object allocations, can often be a pretty productive way to make stuff faster. STEPH: You said a lot of amazing things there. So I'm debating on which one to follow up on. I think the one that stuck out to me the most, where I have felt pain around this, is you mentioned identifying code that goes bump in the night, code that is running but doesn't need to be. And that is something that I've run into with applications where we have a code path that seems important, but yet I can't prove that it's being executed and exactly why it's there and what flow it's supporting. And I'm curious, do you have any tips or tricks in how you've helped teams identify that this code path isn't used and it's something that we can remove and then that itself will help speed up the performance of that particular endpoint? NATE: There's no performance cost to, like, 100 models in an application that never actually get used. There's really no performance downside to code in an app that doesn't actually ever get run. But instead, what happens is code gets added into callbacks; that's usually the biggest offender. It's like, always do this thing after you do X. But then, two years later, you don't always need to do that thing after you do X.
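Ruby can make the "what objects does this code create?" question measurable: `GC.stat(:total_allocated_objects)` is a running count of every object the VM has allocated. A small sketch comparing an approach that builds an intermediate array with one that doesn't (exact counts vary by Ruby version, so treat the figures as illustrative):

```ruby
# Count how many objects a block allocates. Absolute figures differ
# across Ruby versions; relative comparisons are what matter.
def allocations
  before = GC.stat(:total_allocated_objects)
  yield
  GC.stat(:total_allocated_objects) - before
end

data = (1..1_000).to_a

# map builds a 1,000-element intermediate array before summing...
with_map = allocations { data.map { |n| n * 2 }.sum }

# ...while sum with a block keeps a running total instead.
without_map = allocations { data.sum { |n| n * 2 } }

puts "map+sum: #{with_map} allocations, sum-with-block: #{without_map}"
```

The same lens applies to view rendering and serialization code, where intermediate arrays and strings tend to pile up unnoticed.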
So the callbacks always run, but sometimes requirements change, and they don't always need to be run. So usually, it's enough to just pop the profiler now on something. And I have people look at it, and they're like, “I don't know why any of this is happening.” Like, it's usually a pretty big Eureka moment once we look at a flame graph for the first time and people understand how to read those, and they understand what they're looking at. But sometimes there's a bit of a process where especially in a bigger app where it's like, “Such and such is running, and this was an entire other team that's working on this. I have no idea what this even does.” So on bigger apps, there's going to be more learning that has to get done there. You have to learn about other parts of the application that maybe you've never learned about before. But profiling helps us to not only see what code is running but also what that relative importance is. Like, okay, maybe this one callback runs, and you don't know what it does, and it's probably unnecessary. But if it only takes 1% of the total time to run this action, that's probably less important than something that takes 20% of total time. And so profilers help us to not only just see all the code that's being run but also to know where that time goes and what time corresponds to what parts of the code. STEPH: Yeah, that's often the code that makes me the most nervous is where it's code that I suspect is being run or maybe being run, but I don't understand why it's there and then figuring out if it can be removed and then figuring out ways to perhaps even log when a call is being made to that code to determine if it's truly in use or not or at least supported by a code path that a user is hitting. You have a blog post that I read recently that I really appreciated that talks about essentially gaming benchmarking where you talk about the importance of having context around benchmarks. 
So if someone says, “I've improved something where it is now 10% faster.” It's like, well, what is that 10% relative to? And if it's a tool that other people are using, what does that mean for them? Or did you improve something that was already very fast, and you made it 10% faster? Was that a really valuable use of your time? NATE: Yeah. You know, something that I read recently that made me think of that again was this Hacker News post that went viral. It was like, how I optimized an AWS EC2 instance to serve 1.5 million requests per second from my JSON API. And out of the box, it was like 500,000 requests per second, and then he got it to 1.5 million. And the whole article was presented with relative numbers. So it was like, “I made this change, and things got 33% faster. And if you do the whole thing right, 500,000 to 1.5 million requests per second, it's like my app is three times faster now,” or whatever. And that's true, but it would probably be more accurate to say, “I've taken a few millionths of a second out of every request in my app.” That's two ways of saying the same thing, because latency and throughput are just related that way. But it's probably more accurate and more useful to say the absolute number; it just doesn't make for great blog posts, so that doesn't tend to get said. The kinds of improvements that were discussed in this article were really, really low-level stuff. It was like, if you turn off...I think it was turn off iptables or something like that. And it's like, that shaves a microsecond off of every time we make a syscall or something. And that is useful if your performance goal is to serve 1.5 million Hello World responses per second off of my EC2 instance, which is what this person admittedly was doing. But there's a tendency to walk that back to: if I do all the things in this article, my application will be three times faster. And that's just not what the evidence says. It's not what you were told.
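For a single serial pipeline, latency and throughput are reciprocals, so a relative claim like “three times faster” can always be converted into an absolute per-request saving. A sketch using the 1.5 million requests per second end point and the threefold improvement (which implies a starting point of roughly 500,000 requests per second; treating the instance as one serial pipeline is a simplification):

```ruby
# Per-request service time implied by a throughput, treating the
# server as one serial pipeline (a simplification).
def seconds_per_request(requests_per_second)
  1.0 / requests_per_second
end

before = seconds_per_request(500_000)    # 2.0 microseconds per request
after  = seconds_per_request(1_500_000)  # ~0.67 microseconds per request

saving_us = (before - after) * 1_000_000

puts format("a 3x throughput gain here saves %.2f microseconds per request", saving_us)
```

That absolute number, a bit over a microsecond per request, is the honest way to judge whether the same changes would matter for an app whose responses take 100 milliseconds.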
So there's just a tendency to use relative numbers when absolute numbers would be more useful to giving you the context of like, oh, well, this will improve my app or it won't. We get this a lot in Puma. We get benchmarks that are like, hey, this thing is going to help us to do 50,000 requests per second in Puma instead of 10,000. And another way of saying that is you took a couple of nanoseconds off of the overhead of every single request to Puma. And most Puma applications have a hundred millisecond response time. So it's like, yeah, I guess it's cool that you took a nanosecond off, and I'm sure it's going to help us have cool benchmarks, but none of our users are going to care. No one that's used Puma is going to care that their requests are one nanosecond faster now. So what did we really gain here? STEPH: Yeah, it makes sense that people would want to share those more...I want to call them sparkly stats and something that catches your attention, but they're not necessarily something that's going to translate to us in the way that we hoped that they will in terms of it's not going to speed up our app 30% or have those same rewards or benefits. Speaking of Puma, how is it being a co-maintainer of Puma? And how do you balance that role with all of your other work? NATE: Actually, it doesn't take all that much of my time. I try to spend about 15 minutes a day on it. And that's really possible because of the philosophy I have around open-source maintenance. I think that open source projects are fundamentally about collaboration and about sharing our hard-fought extractions and fixes and knowledge together. And it's not about a single super contributor or super maintainer who is just out of the goodness of their heart releasing all of their incredible work and time into the public domain or into a free software license. Puma is a pretty popular piece of Ruby software, so a lot of people use it. 
And I have things on my back burner of if I ever got 20 hours to work on Puma, here's stuff I would do. But there are a lot of other people that have more time than me to work on Puma. And they're just as smart, and they have other tools they've got in their locker that I don't have. And I realized that it was more important that I actually find ways to recruit and then unblock those people than it was for me to devote as much time as I could to Puma. And so my work on Puma now is really just more like management than anything else. It's more trying to recruit new contributors and trying to give them what they need to help Puma. And contributing to open source is a really fraught experience for a lot of people, especially their first time. And I think we should also be really conscious of that. Like, 95% of software developers have really never contributed to open source in a meaningful way. And that's a huge talent pool of people that could be helping us that aren't. So I'm less concerned about the problems of the 5% that are currently contributing than I am about why there are 95% of us that don't do anything. So that's what gets me excited to work on Puma now, is trying to change that ratio. STEPH: I really like that mindset of where you are there to provide guidance but then essentially help unblock others as they're making contributions to the project but then still be there to have the history and full context and also provide a path forward of a good direction for Puma to head. In regards to encouraging more people to contribute to open source projects, I've often heard people say how challenging that is, where they have an open-source project that they would really love people to contribute to but finding people is really hard or just letting people know that they're interested in contributions. Have you had any strategies that have been successful for you in encouraging people to contribute? NATE: Yeah. 
So first thing, the easiest thing is we have a contributing.md file. So that's something I think more projects should adopt is have an actual file in your project that says everything about how to contribute. Like, what kinds of contributions do you want? Different projects have different things that they want. Like, Rails doesn't want to refactor PRs. Don't send a refactor PR to Rails because they'll reject it. Puma, I'm happy to accept those. So letting people know like, “Hey, here's how we work here. Here's the community we're creating, and here's how it works. Here's how to get involved.” And I think of it as hanging out the shingle and saying, “Yes, I want your contributions. Here's how to do it.” That alone puts you a step above other projects. The second thing I would say is you need to have contributor-only communication channels. So we have Matrix chat. So Matrix is like this successor to IRC. So we have a chat channel basically, but it's like contributors only. I don't enforce that, but I just don't want support requests in there. I don't want people coming in there and being like, “My Puma config doesn't work.” And instead, it's just for people that want to contribute to Puma, and that want to help out. If you have a question and come in there, anyone can answer it. And then finally, another thing that I've had success with is doing one-on-one stuff. So I will actually...I have a Calendly invite that I think is in contributing.md now that you can just book 30 minutes with me anytime about contributing to Puma. And I will get on a Zoom call with you and talk to you about what are your concerns? Where do I think you can help? And I give my time away that way. The way I see it is like if I do that 20 times and I create one super contributor to Puma, that is worth more than me spending 10 hours on Puma because that person can contribute 100, 200, 1,000 hours over their lifetime of contributing to Puma. 
So that's actually a much more higher leverage contribution, really from my perspective. It's actually helping other people contribute more. STEPH: Yeah, that's huge to offer people to say, “Hey, you can book time with me, and I will walk you through and let you know where you can start making an impactful contribution right away,” or “Here are some areas that I think you'd be interested, to begin with.” That seems like such a nice onboarding for someone who says, “I'm interested, but I'm nervous,” or “I'm just not sure about where to get started.” Also, I love your complaint department voice for the person who their Puma config doesn't work. That was delightful. [chuckles] NATE: I think it's a little bit part of my open-source philosophy that, especially at a certain scale like Puma is at that we really kind of over-prioritize users. And I'm not really here to do support; I'm here to make the project better. And users don't actually contribute to open source projects. Users use the thing, and that's great. That's the whole reason we're open-sourcing is so more people use it. But it's important not to prioritize that over people who want to make the project better. And I think a lot of times; people get caught up in this almost clout chasing of getting the most GitHub stars that they think they need and users they think they need. And you don't get paid for having users, and the product doesn't get any better either. So I don't prioritize users. I prioritize the quality of the project and getting contributors. And that will create a better project, which will then create more users. So I think it's easy to get sidetracked by people that ask for your time when they're not giving anything back to the project in return. And especially at Puma's scale, we have enough people that want my time or the time of other maintainers at Puma so that they can contribute to the project. And putting user support requests ahead of that is not good for the project. 
It's not the biggest, long-term value increase we could be making, so I don't prioritize them anymore. STEPH: Yep. That sounds like more the pursuit of sparkly stats and looking for all those GitHub stars or all of those likes. Well, Nate, if you're game, I have two listener questions that I'd like to run by you because I shared with some folks that you are going to be on The Bike Shed today. And they're very excited and have two questions that they'd like me to run by you. How does that sound? NATE: Yeah, all right. STEPH: So the first question is, are there any paradigms or trends in Rails that inherently hurt performance? NATE: Yeah. I get this question a lot, and I will preface it with saying that I'm the performance guy, and I'm not the software design guy. And I get a lot of questions about does such and such software design...how does that impact performance? And usually, there's like a way to do anything in a performant way. And I'm just here to help people to find the performant way and not to prescribe “You must always do X, Y, or Z,” or “ActiveRecord is bad. Never use it.” That's not my job here. And in my experience, there's a fast way to do almost anything. Now, one thing that I think is dying, I guess, or one approach or one common...I don't know what to call it. One common mistake that is clearly wrong is to not do any form of server-side rendering in a web application. So I am anti-client-side app. But there are ways to do that and to do it quickly. But rendering a basically blank document, which is what most of these applications will do when they're using Rails as a back-end…you'll serve this basically blank document or a document with maybe some Chrome in it. And then, the client-side app has to execute, compile JavaScript, make XHR requests, and then render the page. That is just by definition slower than serving somebody a server-side rendered page. Now, I am 100% agnostic on how you want to generate that server-side rendering. 
There are some people that are working on better ways to do that with Rails and client-side apps. Or you could just go the Hotwire Turbolinks way. And it's more progressive enhancement where the back-end is always just serving the server-side rendered response. And then you do some JavaScript on top of that. So I think five years from now, nobody will be doing this approach of serving blank documents and then booting client-side apps into that. Or at least it will be seen as outdated enough that you should never design a project that way anymore. It's one of those few things where it's like, yeah, just by definition, you're adding more steps into a rendering flow. That means, by necessity, it has to be slower. So I think everybody should be thinking about server-side rendering in their project. Again, I'm totally agnostic on how you want to implement that. With React, whatever front-end flavor of the month you want to go with, there's plenty of ways to do that, but I just think you have to be prioritizing that now. STEPH: All right. Well, I like that five-year projection of where we're headed. I have found that it's often the admin-side where people will still bring in a lot of JavaScript rendering, just to touch on a bit of what you're saying, in terms of let's favor the server-rendered HTML versus over-optimizing a space that one, probably isn't a profitable space in terms that we do want our admins to have a great experience for our product. But if they are not necessarily our users, then it also doesn't need to be anything that is over the top or fancy or probably uses a lot of JavaScript. And instead, we can start simple. And there's a number of times that I've been on projects where we have often walked the admin back to be more server-rendered because we got to a point where someone was very excited to make the admin very splashy and quick but then couldn't keep up with the requests because then they were having to prioritize the user experience first. 
So it was almost like optimizing the admin, but then it got left out in the cold. So then it's just sort of this poor experience. NATE: Yes. Shopify famously walked back their admin from, I think it was, Backbone to Turbolinks. And I think that has now moved back to React, is my understanding. But Shopify is a huge company, so they have plenty of time and resources to be able to do that. But I just remember that happening at the time, where I was like, oh wow, they just rolled the whole thing back to Turbolinks again. And now, with the consolidation that's gone on in the React world, it's a little bit easier to pipe server-side rendering into a React app. Whereas with Backbone, it was like no one knew what you were doing, so there was less knowledge about how to server-side render this stuff. Now it doesn't seem to be so much of a problem. But yeah, I mean, Rails is really good at CRUD apps, and admin is like 99% CRUD. And adhering as closely as possible to the Rails golden path there in an admin seems to be the most productive way to work on that kind of feature. STEPH: All right. Ready for your second question? NATE: Yes. STEPH: Okay. This one's a bit more in-depth. They also mentioned a particular project name, so I am going to swap it out with a different name. So on project cinnamon roll, they found a really gnarly time-consuming API endpoint that's getting hammered. On a first pass, they addressed a couple of N+1 issues, tuned the performance, and felt pretty confident that they had addressed the issue. But it was still fairly slow. So then they took some additional incremental steps. They swapped out to use Oj for serialization, which shaved off an additional 10%, but it was still slow. They also went the route of going straight to the Rails cache with a one-minute expiration. That way, they could avoid mucking with cache busting, because they confirmed with the client that data could be slightly stale. And this was great. It worked out well.
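The "straight to Rails cache with a one-minute expiration" fix is, in Rails, essentially `Rails.cache.fetch(key, expires_in: 1.minute) { expensive_work }`. This self-contained sketch implements the same fetch-with-expiry pattern in plain Ruby; it is not Rails' implementation, and the injectable clock is only there to make expiry visible:

```ruby
# A minimal time-based cache mimicking the shape of
# Rails.cache.fetch(key, expires_in: 60) { expensive_work }.
class TtlCache
  Entry = Struct.new(:value, :expires_at)

  def initialize(clock: -> { Time.now.to_f })
    @store = {}
    @clock = clock
  end

  # Return the cached value if it's still fresh; otherwise run the
  # block, store its result with a deadline, and return it.
  def fetch(key, expires_in:)
    entry = @store[key]
    return entry.value if entry && entry.expires_at > @clock.call

    value = yield
    @store[key] = Entry.new(value, @clock.call + expires_in)
    value
  end
end

# Fake clock so the one-minute expiry can be demonstrated instantly.
now = 0.0
cache = TtlCache.new(clock: -> { now })

calls = 0
compute = -> { calls += 1; "payload" }

cache.fetch("report", expires_in: 60) { compute.call } # miss: runs the block
cache.fetch("report", expires_in: 60) { compute.call } # hit: cached
now = 61.0
cache.fetch("report", expires_in: 60) { compute.call } # expired: runs again

puts "block ran #{calls} times"
```

Accepting slightly stale data, as the client here did, is what makes time-based expiry safe: no cache-busting logic is needed because entries simply age out.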
So it dropped their average response time down to less than 70 milliseconds. With all that said, that journey took a few hours over a few days and multiple production deploys. Had they gone straight to the cache, they would have had a 15-minute fix with a single deploy. So this person's wondering, are there any other examples like that where, rather than taking these incremental, seemingly obvious performance wins, there are situations where you want to be much more direct with your path? NATE: I guess I'd say that profiling can help you to understand and form better hypotheses about what will make things faster and what won't. Because a profiler can't really lie to you about where time goes: either you spent 20% of your time in this method, or you didn't. So I don't spend any time in any of my material talking about what JSON serializer you use, because that's really never anybody's bottleneck. It's never a huge proportion of people's total time. And I know that because I've looked at enough profiles to know the issues are usually in other places. So I would say that if the hypotheses you're generating are not working, it's because you're not generating good enough hypotheses, and profiling is the place to fix that. Having profilers running in production is probably the biggest level up that most teams could take. Having profilers that you can access on production servers, as a user, is the biggest improvement you could make to generating hypotheses, because that'll have real production data, real production servers, a real production environment. And it's pretty common now: pretty much every team that I work with either has that already, or we work on implementing it. It's something that I've seen in production at GitHub and Shopify. You can do it yourself with rack-mini-profiler.
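Wiring rack-mini-profiler up for production is mostly a question of authorization. A sketch based on the gem's documented API; the `current_user&.admin?` check is an assumed helper, so substitute whatever authorization your app actually uses:

```ruby
# config/initializers/rack_mini_profiler.rb
# In production, show the profiler only to explicitly authorized requests.
Rack::MiniProfiler.config.authorization_mode = :allow_authorized

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  before_action :authorize_profiler

  private

  # `current_user&.admin?` is a hypothetical helper; swap in your own check.
  def authorize_profiler
    Rack::MiniProfiler.authorize_request if current_user&.admin?
  end
end
```

With that in place, authorized users get flame graphs and SQL timings on real production traffic, which is exactly what makes the hypotheses better.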
It's all about setting up the authorization, just making sure that only authorized users get to see every single SQL query generated in the flame graph and all that. But other than that, there's no reason you shouldn't do it. So I would say that if you're not generating the right hypotheses or you don't...if the last hypothesis out of 10 is the one that works, you need better hypotheses, and the best way to do that is better profiling. STEPH: Okay, better profiling. And yeah, it sounds like there's also a bit of experience in there in terms of things that you're used to seeing, that you've noticed that could be outliers in terms of that they're not necessarily the thing that you want to improve. Like you mentioned spending time on how you're serializing your JSON is not somewhere that you would look. But then there are other areas that you've gained experience that you know would be likely more beneficial to then focus on to form that hypothesis. NATE: Yeah, that's a long way of saying experience pays off. I've had six years of doing this every single day. So I'm going to be pretty good at...that's what I get paid for. [laughs] So if I wasn't very good at that, I probably wouldn't be making any money at it. STEPH: [laughs] All right. Well, thanks, Nate, so much for coming on the show today and talking so much about performance. On that note, I think it's a good place for us to wrap up. If people are interested in following along with what you're working on and they want to keep up with your latest and greatest workshops that are coming out, where can they find you on the internet? NATE: speedshop.co is my site. @nateberkopec on Twitter. And speedshop.co has a link to my newsletter, which is where I'm actively thinking every week and publishing stuff too. So if you want to get the drip of news and thoughts, that's probably the best place to go. STEPH: Perfect. All right. Well, thank you so much. NATE: No problem. 
STEPH: The show notes for this episode can be found at bikeshed.fm. CHRIS: This show is produced and edited by Mandy Moore. STEPH: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or a review in iTunes as it helps other people find the show. CHRIS: If you have any feedback for this or any of our other episodes, you can reach us @bikeshed on Twitter. And I'm @christoomey. STEPH: And I'm @SViccari. CHRIS: Or you can email us at hosts@bikeshed.fm. STEPH: Thanks so much for listening to The Bike Shed, and we'll see you next week. Together: Bye. Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

Whiskey Web and Whatnot
Uncle Nearest 1856, JSON:API vs GraphQL, Traveling, Mexico, and Middleburg

Whiskey Web and Whatnot

Play Episode Listen Later May 20, 2021 55:05


In this episode we try the Uncle Nearest 1856 100 proof premium whiskey, discuss the pros and cons of JSON:API vs GraphQL, and give updates on our lives post-vaccination, the new office space in Middleburg, and the latest news in the Ship Shape world.

Syntax - Tasty Web Development Treats
Selling and Shipping T-Shirts with TypeScript

Syntax - Tasty Web Development Treats

Play Episode Listen Later Apr 21, 2021 56:09


In this episode of Syntax, Scott and Wes talk about selling and shipping t-shirts, and how to do it all in TypeScript!

Prismic - Sponsor
Prismic is a Headless CMS that makes it easy to build website pages as a set of components. Break pages into sections of components using React, Vue, or whatever you like. Make corresponding Slices in Prismic. Start building pages dynamically in minutes. Get started at prismic.io/syntax.

Sentry - Sponsor
If you want to know what’s happening with your code, track errors and monitor performance with Sentry. Sentry’s Application Monitoring platform helps developers see performance issues, fix errors faster, and optimize their code health. Cut your time on error resolution from hours to minutes. It works with any language and integrates with dozens of other services. Syntax listeners new to Sentry can get two months for free by visiting Sentry.io and using the coupon code TASTYTREAT during sign up.

Deque - Sponsor
Deque’s axe DevTools makes accessibility testing easy and doesn’t require special expertise. Find and fix issues while you code. Get started with a free trial of axe DevTools Pro at deque.com/syntax. No credit card needed.

Show Notes
01:58 - T-Shirts 101
- T-Shirts are cool
- I sold 100 right away to get the kinks out
- Then I did pre-order
- The stack: TypeScript, React, Next.js

09:08 - Selling: Front-end
- Snipcart: it’s a button
- When someone buys, they scrape the site for the HTML
- If you only have a client-side rendered button, you use the JSON API instead
- Integrated into Gatsby pretty easily
- Wrote one custom hook to count inventory and disable when sold out
- I thought Snipcart would be enough, but I soon realized it wasn’t. I needed something to fulfill the shipment.

10:10 - Selling: Shipping Quotes
- Snipcart has integration for USPS, etc.
- You can also do custom shippers; it’s a webhook
- They also take care of customs declaration

13:30 - Selling: Backend
- Next.js dashboard
- Integrate with ChitChats, Stallion Express, and Snipcart
- The tech: shipping labels, packing slip

18:05 - Fulfilling
- Printing labels: designed with CSS + React; print CSS is wild; fan fold labels were way better
- I switched to Stallion Express: cheaper
- Printing packing slips
- Batch scanning: scanning → mark as shipped
- Started with a webcam, then bought a scanner for cheap
- QR code was better because my tokens were long; data matrix is often better
- Sending notifications: hit the endpoint via Snipcart

28:48 - The physical part
- T-Shirts printed by a local supplier; U-Haul to get them here
- Bags printed in China (about 40 cents each)
- I wrote a bunch of code to organize by size; this cut down on moving around (14 hours if you save 30 seconds per shirt)
- Some got stickers
- Multiples were the hardest: 24 different types of shirts; some wanted 4xl, some wanted tall

36:30 - Common questions
- Why did you do this yourself? Fun project; I learned a ton; this is how you don’t burn out
- Why not print-on-demand (DTG)? Tonal embroidery, quality, money, paying people in my community, control, bags, stickers, etc. (stickermule)
- Why not $companyThatHandlesIt? I want to do stickers; I want to do decks
- Why not Shopify? Large orders still need major fulfillment strategies; code has to be written or money spent

44:16 - Other lessons learned
- Queues would be good here: sometimes you had to wait 3+ seconds for the confirmation of shipping
- No one reads; it was pre-order
- Don’t buy shipping right away; people email about incorrect addresses
- Over-order by a few of each (out of 1550 orders, five got partial refunds and three got full refunds)
- Pre-order is great because you can offer many sizes
- Async JS to do things at most 50 at a time

Links
- Wyze Plug

××× SIIIIICK ××× PIIIICKS ×××
- Scott: Pixeleyes AutoMounter
- Wes: Baratza Encore Conical Burr Coffee Grinder

Shameless Plugs
- Scott: Level 2 Node Authentication - Sign up for the year and save 25%!
- Wes: All Courses - Use the coupon code ‘Syntax’ for $10 off!

Tweet us your tasty treats!
- Scott’s Instagram
- LevelUpTutorials Instagram
- Wes’ Instagram
- Wes’ Twitter
- Wes’ Facebook
- Scott’s Twitter

Make sure to include @SyntaxFM in your tweets

Tag1 Team Talks | The Tag1 Consulting Podcast
A Deep Dive on Decoupling Applications: Decoupled Drupal- Past, Present & Future - Part 2 - Tag1 TeamTalk

Tag1 Team Talks | The Tag1 Consulting Podcast

Play Episode Listen Later Mar 29, 2021 18:53


Decoupled Drupal is now a fixture of the Drupal community and ecosystem, but it has roots in key software concepts like the separation of concerns (https://en.wikipedia.org/wiki/Separation_of_concerns). Today, decoupled Drupal is commonplace across the Drupal world, not only at the highest echelons of enterprise implementations of Drupal, but also among smaller consultancies beginning to get their feet wet with headless CMS architecture. From the early days of Drupal's remarkably prescient web services, to the adoption of the JSON:API specification in Drupal core, to the ongoing innovation surrounding GraphQL in Drupal and the Decoupled Menu Initiative, how far have we come? How has decoupled Drupal evolved over the years, what makes it such a compelling architectural paradigm in light of the emerging distributed CMS trend, and why is it still in vogue? In part 2 of our series on decoupled Drupal, Preston So, the author of Decoupled Drupal in Practice (2018), co-originator of the term progressive decoupling, and Tag1 Consulting's Editor in Chief, joins Michael Meyers, Managing Director at Tag1, to discuss decoupled Drupal's history and future, and the hows and whys of using decoupled Drupal.

Talking Drupal
Talking Drupal #287 - JSON:API

Talking Drupal

Play Episode Listen Later Mar 24, 2021 68:04


Today we are talking about a topic I don't know a ton about. Luckily I have my other co-hosts with me today to discuss JSON:API. www.talkingdrupal.com/287

Topics
- Shane - Head cold, Around
- Stephen - Mouse Shake
- Nic - Versioned Docker Images, ZWaveJS
- John - Smartless
- What is an API
- Protocols
- REST API Characteristics
- JSON
- JSON:API
- GraphQL
- Drupal and JSON:API
- JSON:API Features
- Tools

Resources
- Around
- Tugboat previews are now available for contributed modules and themes
- JSON:API Extras
- JSON:API Explorer
- Postman
- JSON Formatter

Hosts
- Stephen Cross - www.stephencross.com @stephencross
- John Picozzi - www.oomphinc.com @johnpicozzi
- Nic Laflin - www.nLighteneddevelopment.com @nicxvan
- Shane Thomas - www.CodeKarate.com @smthomas3
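Drupal core's JSON:API module, which this episode covers, exposes collections at paths like `/jsonapi/node/article` and accepts `filter` and sparse-fieldset `fields` query parameters. A hedged Python sketch of building such a request URL (the `example.com` base and the chosen field names are placeholders, and the exact filter syntax should be checked against the module's documentation):

```python
from urllib.parse import urlencode

def jsonapi_url(base, entity_type, bundle, filters=None, fields=None):
    # Build a Drupal JSON:API collection URL such as
    # /jsonapi/node/article?filter[status]=1&fields[node--article]=title
    params = {}
    for key, value in (filters or {}).items():
        params[f"filter[{key}]"] = value
    if fields:
        # Sparse fieldsets are keyed by resource type, e.g. node--article.
        params[f"fields[{entity_type}--{bundle}]"] = ",".join(fields)
    query = urlencode(params)
    url = f"{base}/jsonapi/{entity_type}/{bundle}"
    return f"{url}?{query}" if query else url

url = jsonapi_url("https://example.com", "node", "article",
                  filters={"status": 1}, fields=["title", "created"])
print(url)
```

Note that `urlencode` percent-encodes the square brackets (`%5B`/`%5D`), which Drupal accepts just the same as literal brackets.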

Technically Religious
S3E04: Tech In Religion 02

Technically Religious

Play Episode Listen Later Mar 9, 2021 41:41


(image credit: CWWally: http://www.threadless.com/@cwwally) “Tech In Religion” is a running series under the Technically Religious umbrella. In these episodes, we look at technology - be it a website, a phone app, or a gadget - that somehow deepens, strengthens, or improves our experience of or connection to our faith (our religious, moral, and/or ethical point of view). This is a tech review lovingly wrapped in a through-line about faith in general and our experience of faith in particular. The goal is to uncover and promote tech you (our audience) might not have heard about; or describe a use for tech you may know, but didn't think of using in connection with your religious experiences. In this episode, Leon Adato is joined by Doug Johnson and Stephen Foskett. Listen or read the transcript below: music (00:01): [Music] Leon Adato (00:32): Welcome to our podcast, where we talk about the interesting, frustrating and inspiring experiences we have as people with strongly held religious views working in corporate IT. We're not here to preach or teach you our religion. We're here to explore ways we make our career as IT professionals mesh, or at least not conflict, with our religious life. This is Technically Religious. Leon Adato (00:53): Here on Technically Religious, we focus on how we work to make our religious lives complement, or at least not conflict with, our career in tech. But what about the way tech enhances our lives as people with a strong connection to our faith, or lack thereof? In our ongoing series Tech in Religion, we aim to do just that. In each episode, we'll highlight technological innovations that enhance, strengthen, and deepen our connections to our religious, moral or ethical point of view. I'm Leon Adato, and sharing their reasoned, thoughtful, humble opinions with me today on the tech that helps our religion are Doug Johnson, Hey, and also a newcomer to the Technically Religious, uh, cast is Stephen Foskett great to be here. Great to have you.
Okay. So as is our wont on technical, what we'll do is we're going to start off with shameless self promotion. Go ahead and tell us, uh, a little bit about yourself, whatever you're working on, that you want to bring to light for the listeners. And of course we want to know your religious point of view. Um, Doug, as the seasoned veteran, that means you're old. Doug Johnson (01:57): All right, here we go. I'm the old guy. Yep. Uh, Doug Johnson, I'm a web. Uh, my day job is I'm a web developer for Southwestern health resources, my side gig, which is going to make me a billion kazillionaire some day, if I live long enough is, uh, I'm the CTO for, uh, an RFID inventory company. So if you are an op, somebody with an optical shop and you really want to do your inventory better, why check out waverfid.net? I can be found on all of the various sundries Facebooks, et cetera, as @Dougjohnson. And I'm an evangelical Christian, but not one of those weird ones. You know, we were allowed to dance, but not in the, uh, not, not in the aisles.
Uh, I, uh, my day job is running Gestalt IT and Tech Field Day. Um, maybe you didn't know this, but I am also a writer in the wristwatch community, um, and quite active in the world of collectors there. And, um, I do a podcast on artificial intelligence as well called Utilizing AI. Um, as far as religion goes, I was raised as a liberal Christian in the Episcopalians in Connecticut. And, um, have since become even more, um, I dunno, loony left by going to the Unitarian Universalists and becoming essentially a humanist.
Leon Adato (05:23): Oh you want to see non dancing. You should come to my side, then it's, you know, then you can't leave no mixed dancing, like, forget about it. It's the whole thing. All right. So tech in religion, which is what this series is called focuses on, uh, finding technology that helps deepen strengthen, or, uh, clarify our connection to our religious point of view or religious experience. So, um, Doug, I'm going to pick on you first. Do you have some technology that really helps you out with your being an evangelical, but not one of those kinds? Doug Johnson (05:57): Yes. Well, I mean, I've got technology that helps me everywhere and it's, it enables, it enables my, uh, religious practice because, um, I am multiple things. Uh, some of them good, but most of them are like, I'm ADD, or I'm now AAD. Right. I was ADHD. And then I was, I thought I was ADD, and then I found out I was ADD HD, and then now it's AED. I'm an adult. I, Leon Adato (06:25): Attention defecate. Doug Johnson (06:25): They, they keep on changing the letters on me. So I am whatever the current one is. All right. But, uh, and I'm also have SAD, which is a seasonal affective disorder, except now it's called depression seasonal type or who cares? I mean, you know, it's just so some, between the months of October and March, my brain stops. Not completely. Um, but it just becomes absolutely worthless. In fact, we have quite an indicator. Um, I was late to this meeting because I forgot. It was on my calendar. It was everywhere. Things were beeping. I'm sure phones were going off. And, you know, I just completely forgot. So everything that I have is basically, uh, designed around to keep my brain on target when I'm doing stuff. Leon Adato (07:12): Okay. Doug Johnson (07:12): So, uh, the first one is Trello. Trello is basically used for managing projects, right? 
Leon Adato (07:19): I was going to say, when you put it on the list, when we were prepping for this and you put on the list, I'm like Trello, helpful for being an evangelical Christian? These are, I wasn't going to make that connection, but I want to hear this. Doug Johnson (07:30): The question is, so what does your practice involve? I mean, do you do stuff for your church, or your synagogue, or whatever? Do you do projects? Do you work with people on things? Leon Adato (07:43): Uh huh. Doug Johnson (07:43): Imagine that you were stuck with me on your committee, and. Leon Adato (07:48): [snorts with laughter] Doug Johnson (07:49): Exactly there you are. Now you understand, keep in mind that people who are, because I've been a Christian for so long. And because I actually do read the Bible and know the stuff that's in there, people always think, gee, this guy's really devout, which I am, but they don't also realize how flaky I am. And so by the time they find out how flaky I am, it's too late. Leon Adato (08:12): It's too late. Doug Johnson (08:12): They've already brought me in. They have me on committees. They have me doing stuff. One church made me a deacon. I mean, come on, think about this. So the reality is I have to go ahead and find ways so that I can get the things done that need to be done. The fact is there's a lot of people in Christianity who are wound just a little bit, a little bit tightly. Just a smidge. Leon Adato (08:40): I even, I might have noticed that occasionally, but I wasn't going to attribute it to Christianity particularly, but ok. Doug Johnson (08:46): Well I don't know. It's the group that I'm used to working within the, and I will tell you that the ones who actually make it into any kind of leadership position, except for ones who are attributed to be devout, but they don't know in flaky yet, anybody that's actually really, they're pretty tightly wound because they're, you know, in, in Christianity, it's really easy to offend people.
And so the people who really make it are really good at not offending people. Now imagine that you go ahead and give Doug something to do, and he totally freaking forgets or the waits till the last minute. And there's like 15 people or, you know, anything at all. So Trello allows me to go ahead and keep track of what it is that I have to get done and what I've promised. And I actually, it's easy enough to use that. I can get other people on the committee, to go ahead and assign me tasks in Trello, and now it's there and I can track it because if they just ask me to do it, I'll agree to it. And if I can write it down right then fine. But the odds are by the time I get to my car, I've already forgotten, Leon Adato (09:47): Right? By the time you turned around and said, hello to the next person, you've forgotten. Doug Johnson (09:50): pretty much. Well, I mean, you know, w when we were all in churches all the time, you know, we were greeting, meeting and greeting each other, and I could have had a great conversation with you. And by the time I've talked to the third person after you it's gone. So that's why that's how Trello helps. I mean, I use it a lot of different places, but it does help me. It keeps me from getting kicked out of the church. So I may get kicked out for another reason, but at least I don't get kicked out off the committee for not doing my work. Leon Adato (10:18): Got it. Okay. What's up next? Doug Johnson (10:21): Um, the next one is not actually an app. It's, uh, it's called the Pomodoro technique. Uh, Pomodoro is Italian for a tomato and some Italian guy, had a timer, a little spinner timer thing that looked like a tomato. Leon Adato (10:40): Aha. Doug Johnson (10:40): And what he did was he came up with this tea, He would spin it to 25 minutes. He would work, heads down for 25 minutes. 
When the timer went up, he would get up and walk away for 5 minutes and then he'd come back and he'd spin it for 25 minutes and he would heads down and you would do one thing for that 25 minutes. And then you'd get up, uh, another tech in another way, you can do it like 45 and then 15 or 50 minutes and 10, you know, but it's a combination of block of time with a timer and then a break. Um, now again, back to ADD, SAD, all those kinds of wonderful things. Now, the only way I get anything done, the only way I can go ahead and do stuff is to say, ah, for the next 25 minutes, I'm going to read scripture. And I'll sit down and do it. Whereas if I sit down to go and read and I'm like 3 verses, and I go, Oh, that's a good idea. I'm going to go look at this other thing. And I look up something on that and look, and next thing you know, I've read 3 verses, it's 3 hours later. Um, and you know, Leon Adato (11:43): You've read 42 Wikipedia, half of 42 Wikipedia articles, Doug Johnson (11:46): OH exactly, Leon Adato (11:46): you've built three websites partially, Doug Johnson (11:51): Exactly, but I haven't finished, Leon Adato (11:51): And you're holding a chicken in one hand and an Apple in the other. Doug Johnson (11:55): Exactly. But I have not yet finished my scripture reading for the day. So. Leon Adato (12:00): Of course not. Doug Johnson (12:01): The Pomodoro technique is it helps me at work, but it also helps me with my spiritual life, because I can go ahead and say for this next 25 minutes, I'm reading scripture. Or for this next 25 minutes, I'm praying or what, and it's limited, it's time, limited time boxed. When that thing goes off, I can get, stand up and walk away from it and say, that's it. I did it good. It's just like, it's like a spiritual discipline except, you know, not exactly. Leon Adato (12:29): I always wonder I mean especially. Stephen Foskett (12:30): Except it's the exact opposite of being disciplined. Doug Johnson (12:32): Exactly.
It's spiritual discipline for those of us who have no discipline whatsoever. Leon Adato (12:37): Right. And I just want to imagine God's side of that conversation, right? Like, you know, you're praying for 25 minutes and, you know, the, the, the beginning starts off real slow and real careful. And at the end it's like, and then I went, Oh, I'm done. So wait. and its like. Doug Johnson (12:56): Well, . And again, it depends on how you pray. A lot of my prayer is like a couple of things, and then I just shut up because really. Leon Adato (13:02): Got it. Doug Johnson (13:03): I think, God talks to God talks to me a lot more than I, he knows what's going on with me. And he knows it's really messed up. I mean, that's just the way that's, he knows that. So, uh, so I find it it's a lot, a lot easier for me to just shut up and listen for God. And I always know it's God talking, because he always asks me to do stuff that I would never come up. Leon Adato (13:26): [snorts in laughter] Doug Johnson (13:26): with in a million years on my own. I, once I once wrote a children's Christmas play, that had, 30 kids from the church in it, that I directed, and acted in, because I knew that it would get the parents into church one day in the year that they would never have come in otherwise. Now, you know, that's from God. Cause she, Leon knows I'm not a, I'm not a great fan of kids. Uh, you know, it's just like it, Leon Adato (13:55): You're really a people person and you're not a small people person. Doug Johnson (13:58): No! And they love me for God only knows why, but it just, you know, and so there it is. I'm just, so that was God. Leon Adato (14:07): Got it. Okay. One more. We got one more, you only get three on these shows. Doug Johnson (14:11): Ok. One more, this one, this one's easy and this one's relatively new to me. I came across it. It's called habitbull as habit. The word habit and bull as in a cow except. Leon Adato (14:21): Moo. Doug Johnson (14:21): The male kind. 
Yes. Moo Stephen Foskett (14:23): I was thinking it was where the nuns put their hats. Doug Johnson (14:25): Um, could be. Leon Adato (14:28): You know, I haven't been on a farm a whole lot, but don't mess with the bull is, Doug Johnson (14:33): There's all kinds of ways we could. Stephen Foskett (14:34): I though it was bowl like a, like a cylinder, like a half of a sphere. Doug Johnson (14:37): Oh yes, no, no. In this case. Leon Adato (14:38): No, no, this is. Doug Johnson (14:38): it's a, yeah. The logo is, you know, like hook 'em horns, Texas, uh, university of Texas stuff, whatever. But. Leon Adato (14:46): Got it. Doug Johnson (14:46): Basically it's, it, uh, allows you to go ahead and habits that you want to do to go ahead and give it, uh, a frequency, a cadence, like I want every day I want to do this or 3 times a week. I want to do this. Or in the next month, I need to do this once a week. So you can lay out what they are, and it gives you reminders. And as you Mark them off, it gives you a string which actually builds that. Um, what are they, you, you, you you've put a string that string, that string of successes together. And after a while, you don't want to break the streak. So. Leon Adato (15:26): Got it. Doug Johnson (15:26): The beginning of this side, the first time I used it, I used at the beginning of the summer, when we were all locked down, I decided I should really start getting, and I got to like 80 or 90 days of walking, 8,000 steps every day. And I can tell you that since I'm not doing that at the moment, um, I managed to get 8,000 steps at least twice a month. Um, so. Leon Adato (15:48): wow. Doug Johnson (15:48): When I use it, and so basically what I, I had a scripture reading down my daily scripture, reading on habit bulletin, and that helps you maintain a streak. So it's really good. You, you get like 3 or 4 habits, uh, for the free version. 
And for, I forget however much it, you can get unlimited habits that you want to track, but Stephen Foskett (16:10): I just even thinking of the nuns, I'm sorry. Leon Adato (16:13): I was going to say, like you could see it on his face that he's just thinking of the nuns unlimited habits, it's like a panty raid but at a monastery. Stephen Foskett (16:19): how many can you put on it once, right? Doug Johnson (16:22): And now we know why the Catholic church doesn't like the rest of us. Leon Adato (16:28): There's. I still. Doug Johnson (16:29): Oh, well, in any case, I'm going to let all of that just go because I am much more kind than that. Yeah. Okay. Buy, none of us, none of us buy that, so, okay. But those are my three. Leon Adato (16:43): Great. And, and for the last one though, I, I like the idea of GAM, gamifying, your spiritual experience that, you know, I mean, we really are, you know, little monkeys sometimes as far as that goes and, you know, just feed the mice and the maze or whatever metaphor we want to use, you know, feed you know, you get that one little burst of endorphin and it just causes you to want to do more. And why not make your, your religious experience. Doug Johnson (17:09): Yeah, exactly. Well, and that's why Trello works for me because I get to check out, when my wife figured out that I like scratching things off lists. I mean, trust me, I get lists of things that she doesn't ask me to do anything more. She puts out on a list because she knows I'll check it off. So I'm a, I am for better or worse. I am really, I'm not a good human being, but I'm a heck of a monkey. So just so I use my tools to make me a better human being. Doug Johnson (17:40): There we go. All right. So Stephen Foskett (17:43): We're all just tech of a monkey, I think. Leon Adato (17:44): Yeah. Well, we're all, we're all wonderful monkeys. The question is whether we can make into better human beings as well. Um, I like it. All right, Steven. Uh, I.
Stephen Foskett (17:54): Yes. Leon Adato (17:54): Realized that that was a very, bizarre conversation to follow up on, but, uh, you've given us some thoughts. So I'm curious about the tech that you use. Stephen Foskett (18:04): All right. Well, I'm gonna, um, first apologize, uh, for, um, uh, you know, I'm going to defend Facebook, so I'm sorry. Um, I'm sorry, those of you who find that a sin, um, frankly, it's terrible. We all know it's terrible, but it's also kind of not terrible. Um, because truly, I think that essentially we all need to find ways of connecting to each other and frankly, it's where everyone is. And it's not only that, but if you squint and turn your head and mute enough, you can actually see some positives to it too. And, um, you know, for example, um, you know, here in, in my town, um, there's a terrible town Facebook group, and everyone has one of those. Um, there's also a group where people go out in nature and take pictures of owls and trees and ponds, and talk about how they've discovered something lovely and wonderful in the town. And somehow that group has not yet been polluted by red and blue comments, and it's just, you know, wonderful. And it's the same thing, you know what I mean? You know, connecting with your family, connecting, you know, maybe some people in your family, you kind of don't want to connect with any more, but you know what, it's important that we know who's graduating. It's important that we know who's sick and who's better. And it's important that we keep connected and frankly, whatever makes that happen is I think a pretty good tool. And, uh, again, I, I don't want to say anything nice about them, but this is what makes it happen for me, frankly. This is the tool that we're using to keep connected with our families and, you know, in the pandemic, I think that that's doubly important. Um, people who have distributed families like me, that's incredibly important. Um, and so, yeah, um, Leon Adato (20:09): Ok. 
Stephen Foskett (20:09): It's a great, it's a great thing. Leon Adato (20:11): I, you know, I can see the treatise now, you know, in defense of Facebook. Doug Johnson (20:18): I was away from it for a year and I came back and, you know, it's, it's not terrible. Um, I it's, I'm learning how to not follow people. That really are just over the side, but you're, I mean, there's a lot of good this, there, I, in fact, I miss Twitter because there were so many people that I enjoyed following, but everybody's just so wacko for a while there during the, during the Trump years. I'm, I'm, I'm, I'm hoping that it's just gonna chill some here. Leon Adato (20:46): Well And there's there. Just to add one quick comment, which is, um, a conversation that we were having a friend of mine. And I said, you know, he, he said, this is it. I can't deal with so-and-so anymore. I'm going to have to cut them out of my life. And, uh, you know, they're saying all this stuff on Facebook, it happened to be that I just can't, I can't deal with it. I can't fall. And into this conversation, my rabbi, who by the way, is on Twitter, which is a whole other conversation, but okay. And he said, you know, you don't actually have to listen to them. You could actually choose to mute. And again, this is by rabbi talking to me, the tech, you know, technology person and my friend who is a programmer and saying, you know, they have these options so that you never see anything that they say at all. And that way you wouldn't have to hear the horrible things that I'm not saying. They don't say horrible things. I'm just saying this doesn't have to impact your relationship with them in the sense of like, if the things they say bother you don't read them because they don't say them in public. Stephen Foskett (21:54): Yeah. And honestly, um, that, you know, I'm going to say, I'm going to, I'm going to change, changing up my, my list here. 
Um, I have to say that I've learned more about people and I've gained a better appreciation from people from dealing with people on social media, generally, um, Twitter. Um, so here's the thing, the other month I said something off the cuff that came off as incredibly stupid. And insensitive. Um, and it got retweeted a lot, like a lot, like I got probably 500 hateful comments, um, from people. And it was enough that I actually just got another spate of them last week because it's one of those famous things that keeps coming back, look at this stupid guy and this stupid thing he said, but, you know, what's funny. Um, and I think that this is, you know, perfectly fitting for, um, uh, context like this. The most remarkable thing is that I took the advice of, well, of all of the people that I admire and all the philosophers that I respect. And basically the answer was, you know, you did the thing, you know, recognize the humanity in these people. They're angry at you because of the way that they're perceiving you and, and, and what can you do with that? And so, instead of, um, and I haven't, I haven't talked about this really much. Um, so this is kind of a nice opportunity for me instead of, um, like yelling at people or telling them, you know, they're stupid or muting everybody or deleting it. Um, instead, you know, what I decided to do, I decided to write a response to every one of the people that contacted me, except if they swore at me, if they, if they swore at me or called me a Nazi or something, I was just like, okay, I don't need to engage. This person is just angry. Leon Adato (24:07): Uh huh. Stephen Foskett (24:07): And engaging with somebody who's just angry is probably not good. But if they said something like you're so insensitive, what about women? What about the disabled? You know, I replied and I said, you know what? I can see how you could get that from what I wrote. And I don't feel good about that. 
And that's not a reflection of who I am, and I'm sorry that you feel this way. And I'm sorry that I said something that, and you know, what happened next? What happened next was I got hundreds of responses back saying, wow, that was really nice. I really appreciated this response. You know, um, I'm still talking to some of these people, you know, six months later who basically introduced themselves by saying you're an idiot and you're insensitive. And I have to say, I've actually learned a lot more about people and I've learned how to work with people and how to, um, and I've learned more respect and humility from a bad day on Twitter than I did in a lot of Sunday school. Doug Johnson (25:15): Good. Leon Adato (25:15): Wow. Doug Johnson (25:15): Yes, I totally get that. I mean, it, it's hard to go ahead and, not, not strike back. And so that, that on your part is admirable. And, you know, being able to go ahead and essentially own what you own, what you did and be willing to engage. And I try and engage. I offend people all the time, not intentionally. There's people who do it intentionally. Leon Adato (25:42): I can vouch for the truth of this. Doug Johnson (25:43): It is right. When people come to me and say, I'm an idiot and I'm insensitive. I go, boy, you're, I could, I, you are so right. And I, upon, you know, what, what did, what did I do today? All right. And, and so, and, and, but, you know, again, being willing to own it and apologize for it, if it deserves an apology or to say, Oh, I, you know, I did not even think of it that way. I apologize to, you know, it goes a long way towards connecting with people. Which I'm not great at. Stephen Foskett (26:12): Yeah. And what you find is that, you know, people are really, a lot of people are really hurting and a lot of people are really, um, angry at the situations that they see around them. And they're kind of ascribing things to these situations.
And by basically opening up and listening, um, you know, you can get a lot more out of it. And a lot of like real personal growth out of it. Um, and really that kind of fits with my, you know, my beliefs, you know, I believe that, you know, that people can transcend what they are, and what they, what they seem to be. And if you give them a chance, a lot of the time they will. And like I said, truly, a lot of people are just angry and, you know, sometimes, you know, you got to just let that burn out a little bit. So anyway, so I have definitely learned a lot more about that. Um, you know, and frankly, I feel like, you know, the other things that I was going to talk about, um, you know, unlike Doug, I absolutely do not have the Bible memorized. Um, but I do have blue light, uh, blue letter Bible on my iPad. And that lets me look stuff up and cross reference it when I need to. Um, Leon Adato (27:29): I think that overall the, you know, if there's one thing about just devices in our pocket at all, it's having access to a text that I am comfortable with, as opposed to having to arrive at a building and pull a book off the shelf that I might not be as familiar with, or know where to find things or whatever, and in a language that I'm comfortable with in a font size that I'm comfortable with. Like, I think that just the single most effective use of technology is personalizing the text in ways that are very personal to us. I think that that makes a huge difference. So yeah, I can see that. Stephen Foskett (28:08): Yep. And the amazing power of computers to cross-reference. Leon Adato (28:12): Uh huh. Stephen Foskett (28:12): Is just, um, and then search is just incredible. 
I mean, to think that you can say, um, you know, I want to find like, like, like, you know, Doug, you're writing a sermon and you're like, I need to find that quote where Jesus says this one thing, and to be able to just like, like click the little magnifying glass and you're there, you know, I mean, Doug Johnson (28:34): And you find out it was actually Joshua who said it. Stephen Foskett (28:37): Yeah. Jesus didn't say a lot of the things people think he said. Leon Adato (28:42): Right. Stephen Foskett (28:42): Um, yeah. And then I guess the final thing that I'll give a pitch to is, um, especially in the pandemic, I think a lot of people are in need of some personal connection and, and someone to talk to and someone to talk back. And yet we can't really go out. And so I am, I never thought that I would be into audio books, but I got to say, audio books are awesome. And. Leon Adato (29:07): Uh huh. Stephen Foskett (29:07): Being able to, you know, to sit down and just listen as somebody reads you, their book is, uh, it's weird and cool. Um, also puts me to sleep, but, um, at least. Leon Adato (29:23): But in a good way. Stephen Foskett (29:23): it couldn't go back again. Leon Adato (29:25): In a good, but in a good way, I mean, you know, it is, it is that comforting voice of somebody who has basically promised no, no, I'm going to read to you until you're calm. I'm going to keep giving you some ideas that will distract you from the circle, spinning of your brain. And I'll be there. Stephen Foskett (29:42): And there's something wonderfully soothing about somebody reading to you. Leon Adato (29:46): Uh huh. Stephen Foskett (29:46): I think it's a, it's like one of those things, like, you know, we're, you know, from when we're children, like, we love to have somebody reading to us. 
And especially now, like I said, with the pandemic, you know, you're, you, you know, everybody's trapped inside, you can at least sit and you can listen to somebody and you can kind of escape from this, into your head in a good way. Leon Adato (30:05): Uh huh. Stephen Foskett (30:05): And, um, and, and I'm loving that. Leon Adato (30:09): So just to, to add on to that one, uh, again, as, as people have been listening are familiar with, but if, if you're not familiar with Orthodox Judaism, uh, on Shabbat, the Sabbath from Friday night sundown until Saturday sundown, and if it has an on switch, it's off limits, that's the easiest way to say it. So that means that, um, you know, for, for 24, 25 hours playing an audio book, or the television or any of those things is, is not going to work. So what's happened in our house is that, um, I will read. You know, we'll, we'll pick a book. We've, we've worked our way through the Harry Potter series a couple of times. And I will read with all the voices and that's what we do and lows during the day. And then at night the same thing, like, you know, my wife is sitting there, her brain is spinning with all the things that have to happen, whatever. And of course your brain is spinning with things that have to happen that you cannot do because it's Shabbat, right? So now you have nowhere to put this and nowhere, nothing to do with this. So what do you do? You know, I sit there and I read, I read until she falls asleep and it's really, it's just sort of a delightful and the kids all come trundling to the room. My kids are in their twenties. Okay. Let's just be honest about this. So they come in and they've got their blanket and they lay, you know, on the floor or whatever it is and we read and it's just, you know. Stephen Foskett (31:31): That's about the nicest thing I have heard in months. Leon Adato (31:36): Yeah. It's, it's fun. And they look forward to it.
It's one more reason to look forward to it. A lot of people are like, how can you do 24 hours without anything? How do you do that? I mean, well, in my house, it's like, is it Shabbat yet? Can't we have Shabbat now? Like, still got two more days to go, kid. Come on. Stephen Foskett (31:53): Can you do Dumbledore for us please? Leon Adato (31:56): [Reading Harry Potter] She may have taken you grudgingly, furiously, unwillingly, bitterly. Yet she still took you. And in doing so, she sealed the charm I had placed upon you. Your mother's sacrifice made the bond of blood the strongest shield I could give you. While you can still call home the place where your mother's blood dwells, there you cannot be touched or harmed by Voldemort. He shed her blood, but it lives on in you and her sister. Her blood became your refuge. So that's Dumbledore. Stephen Foskett (32:28): I hear it. I hear it. I'm really glad that you don't sound like the Dumbledore in the movies. Leon Adato (32:32): No, no, no. John Huston, John Huston is the voice of Gandalf and Dumbledore, like that is the wizard voice. Um, that's just in my head. That's what he sounds like. Um, so anyway, uh, back to our conversation, back to the topic, uh, audio books certainly are, you know, a calming source, so I can see how that, that would, that would be good. Okay. So tell you what, after, uh, doing my Dumbledore impression, I'm gonna, uh, wrap this up with a couple of recommendations of mine. Uh, just two of them. The first one is something that I mentioned in another episode, hebcal.com. And I said that right as Stephen was taking a drink. So now I owe him a new keyboard because he just spit all over it. Um, yeah, hebcal.com. That's actually a website and it is a calendar that will give you all the different holidays and times and things like that. Incredibly useful, because, uh, the Jewish calendar can be insanely complicated.
And that's something I mentioned in the other episode, but what I wanted to bring out here is that there are two particular features on that website. The first one is, after you have created your customized calendar that shows the things that you want and not the things that you don't want, you can export that to an iCal format. So it's not just like you have to go back to that website every time you want something, you can create your own calendar, including things like, you know, the anniversary of people's deaths, which is called a Yahrzeit, which is very important. You can output that in the iCal format and have that sort of in perpetuity, year after year, you can have it built into your calendar. And I find that that's especially useful because it's easy to forget that it's the first night of Hanukkah, because it changes from year to year across the regular calendar. The other part is that, and this is very, very, you know, technically religious, there's an API, there's an actual RESTful JSON API. So if you're building your own application that needs to grab a Hebrew date, or what Torah reading, what Torah portion is that week, or what time sundown is or whatever, or what holidays are coming up, you can actually make a function call to the website, through their API, and grab all that information back and use that. And as a technologist who has written a couple of WordPress modules and things like that, it is incredibly helpful because they've done the legwork on all the really hair-on-the-knuckles hard, uh, calendar programming that is so difficult to do. So that's the first one. Doug Johnson (35:09): Sweet. Leon Adato (35:09): And, um, Stephen Foskett (35:10): I really want to know if you can do a JSON post of why is this night different from any other night. Leon Adato (35:17): Uh, and get answers back. Stephen Foskett (35:19): Yeah. That, that would be an API to subscribe to. Leon Adato (35:22): I can, I can.
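The Hebcal API Leon describes can be sketched in code. This is a hedged illustration, not authoritative: the `shabbat` endpoint, the `cfg=json`, `geonameid`, and `m` parameters, and the response shape shown here are assumptions based on Hebcal's public JSON API and may differ from the current documentation.

```python
import json
from urllib.parse import urlencode

def shabbat_times_url(geonameid: str) -> str:
    """Build a request URL for this week's Shabbat times as JSON.

    The endpoint and parameter names are assumptions; check Hebcal's
    API docs before relying on them.
    """
    params = {"cfg": "json", "geonameid": geonameid, "m": 50}
    return "https://www.hebcal.com/shabbat?" + urlencode(params)

def candle_lighting(response_text: str) -> list:
    """Pull candle-lighting titles out of a Hebcal-style JSON response."""
    data = json.loads(response_text)
    return [item["title"] for item in data.get("items", [])
            if item.get("category") == "candles"]

# A canned response in the API's general shape, so the parsing can be
# demonstrated without a network call:
sample = ('{"items": [{"category": "candles", "title": "Candle lighting: 5:58pm"},'
          ' {"category": "havdalah", "title": "Havdalah: 7:01pm"}]}')
```

In practice you would fetch `shabbat_times_url(...)` with any HTTP client and feed the body to `candle_lighting`; the same pattern applies to the Hebrew-date and Torah-portion lookups mentioned above.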
Doug Johnson (35:23): That would actually be a get. Leon Adato (35:26): Well, hold on. No, no, no, no. Stephen Foskett (35:27): No no, That's something different. Doug Johnson (35:30): Unless you're going to send an, unless you're sending your answer. Leon Adato (35:33): No, no, no. What you need to do is you'd need to have the URL. And the first variable is which son you are. Doug Johnson (35:40): Right. Leon Adato (35:40): Because that's going to tell you what the return is. So it would be, uh, a, uh, uh, get function. Doug Johnson (35:47): Alright, I know what I'm doing this weekend. Stephen Foskett (35:50): Yup, bracket, quote, son order, colon. Doug Johnson (35:52): Right. I have to tell you, I am, I'm grateful for HebCal, because I remember Leon talking to me probably 10, 12 years ago about how we were going to build this thing. And fortunately, they got it built before I had to do it. Leon Adato (36:07): Right. Doug Johnson (36:08): We, we started talking about this and I'm going, Oh my God. Leon Adato (36:13): Right? And I don't know nearly enough to be able to spec that out appropriately either. So no, it, uh, Doug Johnson (36:19): If we had, we'd still be working on it. Leon Adato (36:22): Yeah, we would. And it would still be a horrible, it would never work. Right. Doug Johnson (36:24): Exactly. So thank you, HebCal. Leon Adato (36:27): Thank you. So, and the last thing I want to bring up is just a website. Um, YeahThatsKosher.com. No, really. That's the website. YeahThatsKosher.com. There are a lot of websites that talk about whether a thing is kosher or not. This is actually a restaurant review website, and the guy who runs the website, um, does a lot of traveling, did a lot of traveling, lives in the New York area. And he highlights the, the restaurants that are new and opening and what kind of cuisine they have. And honestly, you know, is it good? Is it run of the mill? Is it no, you really need to skip this place.
He really does a good job of keeping up to date, so that when I'm in a new city, typically I can rely on that to know, uh, some of the places I don't want to miss, or nah, that's, you know, I don't need to pay the cab fare or the, you know, Uber or Lyft ride to get out there, it's going to be a hot dog and that's gonna be the end of it, or whatever it is. So that, especially as somebody who travels to conferences and things, it helps me to know when there's a new place. Like, Oh, I've been in Vegas. No, no, no. They have a steakhouse now. They have a kosher steak house. I would actually give away one of my children, and I can name which one, for the steak that I had. I have fonder memories of the Tomahawk steak I had there than I have of at least one of my kids. Um, it's a really good kosher steak house. But those are the kinds of things you can get from that. So that's very helpful, unless you're one of my kids. Um, so that's, that's it, that's, that's the episode. Uh, I'll quickly go to the lightning round. Any final words or things that you want to add? Yeah, Stephen. Stephen Foskett (38:01): I actually, I really want to add something from my other world, from the world of watches. Leon Adato (38:06): Oh, go ahead. Stephen Foskett (38:06): There is a remarkable watchmaker who created a watch, a wristwatch, that has the full Muslim calendar built into it. And it, and it actually shows the correct Islamic date using the phases of the moon. And one of the coolest things about mechanical watches is all the cool things you can do with gears. So just imagine your API that tells us which day or which month it is. Okay. Now, now do that with gears. Leon Adato (38:36): Uh huh. Stephen Foskett (38:36): Um, so if, if you want to look this up, it actually won the, one of the highest awards in watchmaking in 2020, uh, because it is a pretty remarkable achievement. Leon Adato (38:45): Great.
Stephen Foskett (38:45): So it's by a company called Parmigiani, which is not Pomodoro, but it still has some pretty good technique. Leon Adato (38:51): So it's not the tomato, it's the cheese. Stephen Foskett (38:53): Yes. Leon Adato (38:54): That's great. And we'll have the links for everything that we talked about in the show notes. Um, okay, great. That's, that's cool. Doug, any final comments? Doug Johnson (39:02): Nope. I like all of the stuff. I've used all the stuff that Stephen uses, uh, probably not as effectively as he has, but that's good. I mean, there's just a lot of good stuff out there. I was just thinking today, you know, I read through the this-day-in-history calendar thing today, and I realized how much stuff has happened since I was born. Queen Elizabeth became Queen Elizabeth about three months before I was born. Stephen Foskett (39:28): Did you know that Betty White really is older than sliced bread? Leon Adato (39:31): Yes, I saw that. Stephen Foskett (39:33): True fact. Doug Johnson (39:33): That's funny. I did not know that. Leon Adato (39:36): She's something like 3 or 4 years, 3 or 4 years older than sliced bread. Yeah. Doug Johnson (39:40): And that, and that's the important stuff that we have now. The good thing about having only a partial brain, at least for half of the year, is now we've got technology that fills in the rest of it. Um, so that I can make it look like I actually deserve to exist on this. Leon Adato (39:56): You're a functioning, functional adult. Doug Johnson (39:58): Yeah, I get a lot more done now than I used to. And, um, even, even though, uh, my brain is not working at full, I, at least I, I've got systems and tools set up that sort of prop me up. Stephen Foskett (40:11): Well, can I just make a pitch? I think what the, the best, uh, technology tool to help religious people would be, would be a head-up display inside your glasses that tells you who is that person? What was I talking to them about last time?
And what's their mother's name? Doug Johnson (40:27): Yep, there you go. Stephen Foskett (40:27): I think that would really help. Doug Johnson (40:28): Well, as, as soon as, yeah, I was going to say there, there's a new batch of AR glasses that somebody is coming out with. They look a lot better than the, uh, than the ones we've had so far. So maybe that, maybe that'll be my next side gig after I make my million billion on this first one. Leon Adato (40:44): There we go. Doug Johnson (40:45): Or actually the 43rd one, whichever one I'm on. Leon Adato (40:47): Well, uh, I definitely appreciate all the parts of your brain that you decided to bring to the show today. Doug Johnson (40:53): Late. Leon Adato (40:53): And whenever you chose to bring them, look, I, you know, we're very flexible here and, uh, we're doing this, uh, you know, for fun. So it ain't like, uh, you're gonna, we're gonna dock your paycheck for it. So, uh, I appreciate you taking the time. Doug Johnson (41:10): I appreciate it. Thanks. I love this. Leon Adato (41:12): Good. Doug Johnson (41:13): Human beings. I like that, I like, Oh my God. Stephen Foskett (41:17): I'm just glad to be able to meet Doug. Leon Adato (41:21): Yeah. Well, he's, you know. Doug Johnson (41:21): Oh, you say that now. Leon Adato (41:23): Yeah, someday soon. Thanks a lot, guys. Have a good night. Doug Johnson (41:28): Bye now. Roddie (41:29): Thank you for making time for us this week. To hear more of Technically Religious, visit our website at technicallyreligious.com, where you can find our other episodes, leave us ideas for future discussions, or connect with us on social media.

Python Podcast

Today we (Johannes, Dominik and Jochen) talked about REST, a topic we had always wanted to discuss and one you almost inevitably run into when working in current web development environments. There were also small detours into GraphQL and file formats, and of course, as always, a bit of news from the community.     Shownotes Our email for questions, suggestions & comments: hallo@python-podcast.de News from the scene Pattern Matching (Johannes) / Official Tutorial PEP 604 -- Allow writing union types as X | Y attrs / pydantic / dataclasses uvloop / asyncpg / psycopg3 Dependency Confusion: How I Hacked Into Apple, Microsoft and Dozens of Other Companies / Update: A single person flooded PyPI with 3,653 "RemindSupplyChainRisks" spam packages CORS and Websockets / CSWSH Happy birthday, Python, you're 30 years old this week / 20 years of the Python Software Foundation 12 requests per second - Python Benchmark MagicStack / httptools High Performance Django - Peter Baumgartner Fly.io / AppPack / Button REST XML-RPC / SOAP / CORBA REST / Architectural Styles and the Design of Network-based Software Architectures (Roy Fielding's dissertation) / HATEOAS GraphQL Django REST framework James Bennett on JWT / PASETO marshmallow pydantic pyramid 2.0 Flask FastAPI OpenAPI / Swagger APIStar / Starlette / httpx htmx EdgeDB FeinCMS / django-tree-queries Graphiti JSON:API Joe Celko's Trees and Hierarchies in SQL for Smarties Podlovers Podcast episode: Podcatcher apps with Jeanette Müller (Podcat) PodcastIndex MessagePack Django Async API aggregation example Public tag on konektom

The MBS Plugins Podcast
FileMaker-JSON-API-EN

The MBS Plugins Podcast

Play Episode Listen Later Feb 6, 2021 37:34


Stefanie shows, for Claris Engage, how to use JSON in FileMaker and connect to an API.

The MBS Plugins Podcast High Resolution

Stefanie shows, for Claris Engage, how to use JSON in FileMaker and connect to an API.

The MBS Plugins Podcast
FileMaker-JSON-API-DE

The MBS Plugins Podcast

Play Episode Listen Later Jan 22, 2021 46:52


Stefanie shows, for Claris Engage, how to use JSON in FileMaker and use it to call an API.

The MBS Plugins Podcast High Resolution

Stefanie shows, for Claris Engage, how to use JSON in FileMaker and use it to call an API.

RWpod - подкаст про мир Ruby и Web технологии
Episode 21 of season 08: VCR 6.0.0, Snowpack 2.0, Brotli and Gzip Compression, AudioMass, AutoPilot, Dynamoid, Rough Notation, and more

RWpod - подкаст про мир Ruby и Web технологии

Play Episode Listen Later May 31, 2020 58:12


Good day, dear listeners. We present a new episode of the RWpod podcast. In this episode: Ruby Rails 6.1 adds support for signed ids to Active Record Brotli and Gzip Compression for Assets and JSON API in Rails Test-Driving a Decision Engine Two Commonly Used Rails Upgrade Strategies Comparison of approaches to multitenancy in Rails apps Never Query the Same Thing More Than Once Creating a Ruby Gem with Rust VCR 6.0.0 Delete_in_batches - the fastest way to delete 100k+ rows with ActiveRecord Dynamoid - an ORM for Amazon’s DynamoDB for Ruby applications Web Stack Overflow Developer Survey 2020 A first look at records and tuples in JavaScript Snowpack 2.0 htmx allows you to access AJAX, WebSockets and Server Sent Events directly in HTML, using attributes AudioMass - a free, open source, web-based Audio and Waveform Editor Fluor.js - sprinkle interactivity on your design AutoPilot - a simple cross-platform desktop automation library for Deno Rough Notation - a small JavaScript library to create and animate annotations on a web page Perspective - an interactive visualization component for large, real-time datasets

Futurice Tech Weeklies
Building a REST-ish JSON API with Play Scala

Futurice Tech Weeklies

Play Episode Listen Later Dec 31, 2019 30:28


You spend your days writing code, most likely a RESTful backend or a client consuming said backend. You’ve done this before, and you’re going to do this again. In the heat of the project it’s easy to forget to note all the things you did well and the things you could improve for next time.    In this talk I’ll list some of the best practices that have stuck with me on how to build a RESTful backend. We’ll cover general advice applicable to any technology choice as well as some practical examples using the Play Framework and Scala programming language.   Presenter - Oskar Ehnström 

Futurice Tech Weeklies
Building a REST-ish JSON API with Play Scala (Audio Only)

Futurice Tech Weeklies

Play Episode Listen Later Dec 31, 2019 30:29


You spend your days writing code, most likely a RESTful backend or a client consuming said backend. You’ve done this before, and you’re going to do this again. In the heat of the project it’s easy to forget to note all the things you did well and the things you could improve for next time.    In this talk I’ll list some of the best practices that have stuck with me on how to build a RESTful backend. We’ll cover general advice applicable to any technology choice as well as some practical examples using the Play Framework and Scala programming language.   Presenter - Oskar Ehnström

Akronymisierbar
033 - Meetups, Parteien und andere Religionsgemeinschaften

Akronymisierbar

Play Episode Listen Later Oct 18, 2019 106:29


Kevlin Henney's talk "Old is the New New", among other things about Algol vs functional programming / Go with preprocessor / rust-swift-interop / Mensa Dresden in SwiftUI / Dark mode in the browser / Hallo Swift / WebAudio & guitar effects in JavaScript & RNNoise / rocket, tokio, warp / open mensa / Mara Phones / REST vs JSON:API vs GraphQL / Kranky Geek watching in Dresden (still to come) / C++ UG Dresden / https://twitter.com/ekuber/status/1184957619154247680 / Python 3.8 / Python Is Not A Great Programming Language / GitHub stars / async-std / mini tokyo 3d / Manni app for iOS and Android / xkcdfs / grcov / kcov / cargo-kcov / swift-bindgen / rust-bitcode / Podcast recommendations: Office Ladies, Matrix by the Minute, Rick and Morty, Chernobyl Podcast, FireFly Podcast, Hoaxilla / Alternative titles: teaPotOS, studentefalsyness, generic monads, medieval GitHub, Forbidden, Meetups, parties and other religious communities, Fixing bugs on prod like a pro, lowrider tram, fefe coverage

Dash Open Podcast
Dash Open 11: Elide - Open Source Java Library - Easily Stand Up a JSON API or GraphQL Web Service

Dash Open Podcast

Play Episode Listen Later Jul 27, 2019 10:44


In this episode, Gil Yehuda, Sr. Director of Open Source, interviews Aaron Klish, a Distinguished Architect on the Verizon Media team. Aaron shares why Elide, an open source Java library that enables you to stand up a JSON API or GraphQL web service with minimal effort, was built and how others can use and contribute to Elide. Learn more at http://elide.io/.

44BITS 팟캐스트 - 클라우드, 개발, 가젯
stdout_019.log: Terraform 0.12 beta, Datadog APM, Elastic APM, Hackers crowdfunding

44BITS 팟캐스트 - 클라우드, 개발, 가젯

Play Episode Listen Later Mar 6, 2019 50:03


In the 19th log of stdout.fm, we talked about the Terraform 0.12 beta, Datadog APM, Elastic APM, and the Hackers crowdfunding. Participants: @seapy, @raccoonyy, @nacyo_t 오빠들 1위 위해 ‘스밍’… 극성팬 때문에 멍든 차트 - 조선닷컴 - 연예 > K-pop Write The Docs 서울의 2019 첫 번째 밋업 | Festa! Sticker Mule: Custom stickers that kick ass 미성출력 테라폼 0.12 베타 1 출시 및 개선된 HCL 문법 살펴보기 | 44bits.io Announcing Terraform 0.12 Beta 1 테라폼 0.12 지원 프로바이더 | HashiCorp Releases HashiCorp Terraform 0.12 Preview LaTeX - Wikipedia Metafont - Wikipedia HashiCorp on Twitter: “Terraform 0.12 is coming later this summer. …” hashicorp/hcl2: Temporary home for experimental new version of HCL Mitchell Hashimoto on Twitter: “Congratulations @GitHub on launching Actions! …” Release v2.0.0 · terraform-providers/terraform-provider-aws 테라폼을 가장 잘 지원하는 에디터는? - 젯브레인 인텔리J를 활용한 테라폼 코드 작성 | 44bits.io emacs.dev vim.dev 달물이 on Twitter: “한국식 MBTI를 개발했습니다. …” 어엉부엉 on Twitter: “트친님들의 도움을 받아 만든 개정판 개발자 MBTI… “ Modern monitoring & analytics | Datadog New Relic | Real-time insights for modern software Next-generation application performance monitoring | Datadog Elasticsearch을 이용한 오픈 소스 APM | Elastic Datadog - Watchdog amatsuda/jb: A simple and fast JSON API template engine for Ruby on Rails Datadog - Notebooks) Metricbeat: 경량 메트릭 수집기 | Elastic Soonson Kwon on Twitter: “스티븐 레비의 해커스가 크라우드펀딩으로 복간된다는 소식. …” 해커스 - YES24 Facebook - Hanbit Media: 굿바이 해커스. 해커스가 영어책만 있는 건 아닙니다. … 해커 그 광기와 비밀의 기록(삼인서각) - YES24 알라딘 중고 - 해커스 : 세상을 바꾼 컴퓨터 천재들 (무삭제판)

Full Stack Radio
107: Sam Selikoff - Pushing Complexity to the Client-Side

Full Stack Radio

Play Episode Listen Later Jan 30, 2019 50:00


In this episode, Adam continues his discussion with Sam Selikoff about building single page applications, this time focusing on strategies for keeping your API layer as simple as possible, so all of your complexity lives in your client-side codebase instead of being spread across both. Topics include: Building an API without writing any controller code Thinking of your API like a database as much as possible Modeling everything on the server as a resource, including things like S3 upload signatures Using tools like Firebase to avoid writing an API entirely Sponsors: Rollbar, sign up at https://rollbar.com/fullstackradio and install Rollbar in your app to receive a $100 gift card for Open Collective Cloudinary, sign up and get 300,000 images/videos, 10GB of storage and 20GB of monthly bandwidth for free Links: EmberMap, Sam's Ember.js training site JSON:API, the API spec Sam uses to build his SPA backends JSONAPI::Resources, the Rails gem for declaratively building a JSON:API compliant API Firebase Vuex Apollo GraphQL

Frontend First
The elephant in the room

Frontend First

Play Episode Listen Later Dec 5, 2018 64:40


Sam and Ryan discuss the difficulty of working with a design system that doesn't have good escape hatches, how implementing HTML and CSS can be more complex and time-consuming than coding user behavior, and some creative approaches to ensuring JSON:API payloads represent canonical server-side state. Topics include: 04:15: Design systems and when they break down 22:38: The complexity of implementing designs in HTML and CSS 34:38: JSON:API mutations. How incomplete response payloads can put your Ember app into an impossible state. Links: Forms JSON API Spec Conway’s Law

DrupalEasy Podcast
DrupalEasy Podcast 212 - Commerce Guys: decoupling and roadmap with Bojan Zivanovic and Matt Glaman

DrupalEasy Podcast

Play Episode Listen Later Dec 3, 2018


Direct .mp3 file download. Matt Glaman (mglaman) and Bojan Zivanovic (bojanz) join Mike live from Disney World to talk about decoupling Drupal Commerce as well as the roadmap for Drupal Commerce as a project. We take a quick side trip into some blog posts Matt recently wrote about running all of Drupal core's automated tests in DDEV-Local. Interview Commerce Guys Drupal Commerce project Commerce Guys projects on GitHub Decoupling Drupal Commerce Commerce Cart Flyout module JSON:API module Composer support in Drupal core initiative Commerce Shipping Matt's blog posts related to running Drupal's automated tests with DDEV: Nightwatch, FunctionalJavascript, PHPUnit. Snowboard wrist guards Three Colors: Blue DrupalEasy News Drupal Career Online - the 12-week (3 half-days/week) best-practice focused training program begins February 25, 2019. Learn more at one of our free Taste of Drupal webinars (December 17, January 9, February 6, February 20). Professional local development with DDEV - 2-hour, hands-on, online workshop held monthly (December 12). Local Web Development with DDEV Explained - new book from Mike! Upcoming events Florida DrupalCamp 2019 - Feb 15-17 - registration and session proposals now open. Sponsors Drupal Aid - Drupal support and maintenance services. Get unlimited support, monthly maintenance, and unlimited small jobs starting at $99/mo. WebEnabled.com - devPanel. Follow us on Twitter @drupaleasy @ultimike @bojan_zivanovic @nmdmatt Subscribe Subscribe to our podcast on iTunes, Google Play or Miro. Listen to our podcast on Stitcher. If you'd like to leave us a voicemail, call 321-396-2340. Please keep in mind that we might play your voicemail during one of our future podcasts. Feel free to call in with suggestions, rants, questions, or corrections. If you'd rather just send us an email, please use our contact page.

Laravel News Podcast
Carbon, Telescope, and form builders

Laravel News Podcast

Play Episode Listen Later Oct 24, 2018 42:25


Jake and Michael discuss all the latest Laravel releases, tutorials, and happenings in the community.

Podlodka Podcast
Podlodka #65 – API and client-server interaction

Podlodka Podcast

Play Episode Listen Later Jun 25, 2018 125:06


Podlodka #65 – API and client-server interaction. Together with Roman Ekzemplyarov, a backend developer with 10 years of experience and former head of development at AviaKassa, we discussed an integral part of almost any service: the API. We talked about various standards and approaches to building APIs, and discussed the difficulties that arise both when developing them and when integrating with them. We raised the important topic of mutual understanding between client-side and server-side developers and, along the way, asked some layman's questions to find out what goes on "under the hood" of the backend and why it is not so easy to "just return everything in one request". Support the best podcast about mobile development: www.patreon.com/podlodka We also look forward to your likes, reposts and comments in messengers and social networks!
 Telegram chat: https://t.me/podlodka Telegram channel: https://t.me/podlodkanews Facebook page: www.facebook.com/podlodkacast/ Twitter account: https://twitter.com/PodlodkaPodcast Contents: - 00:00:40 - Meeting the guest - 00:01:30 - What kinds of APIs there are and who needs them - 00:04:30 - Existing standards - 00:14:15 - GraphQL, once again - 00:23:00 - Who should produce the API requirements? - 00:40:00 - Philosophical questions about REST - 00:51:00 - Documentation and tests for an API - 00:56:20 - Handling invalid data on the client - 01:17:00 - Why you can't just throw servers at the problem - 01:25:00 - Interaction between teams - 01:35:00 - Tools for debugging API interaction - 01:41:45 - About cross-functional teams - 01:51:00 - How a mobile developer can dive into backend work - 02:01:10 - Wrapping up Useful links: - The Phoenix Project https://books.google.ru/books/about/%D0%9F%D1%80%D0%BE%D0%B5%D0%BA%D1%82_%D0%A4%D0%B5%D0%BD%D0%B8%D0%BA%D1%81_%D0%A0%D0%BE%D0%BC%D0%B0%D0%BD_%D0%BE.html?id=npNOCgAAQBAJ&redir_esc=y - JSON API specification http://jsonapi.org/

Frontend First
Steelman vs. strawman

Frontend First

Play Episode Listen Later Apr 5, 2018 46:53


Sam and Ryan talk about their new series, “Declarative rendering,” and why we should use steelman arguments instead of strawman arguments when talking about technology. They also answer some listener questions. Topics covered: Declarative rendering, their new series Steelman versus strawman arguments Listener questions: I care about lazy loading ember code, like routes. My knowledge is that’s its only possible with ember engines, but I’m not sure. Thanks a lot – @sommer_gerrit on Twitter Podcast about ember-engines might be cool – @iflask on Twitter My question: is it time to pitch PWA instead of native apps for clients wanting a presence on mobile (re: the new Safari release) – @real_ate on Twitter Question for the show -> how might writing glimmer components be different in the coming months as we unlock that as first class (thru the eyes of a traditional ember 2 developer) – @toranb on #topic-embermap from Slack Ember Data can get complicated in a hurry when not using a JSON:API standard API. What are some strategies to work with these kinds of APIs assuming the API cannot be changed? – @localpcguy on #topic-embermap from Slack

Frontend First
JSONAPI Operations, Caching in FastBoot, and Ember's Strengths

Frontend First

Play Episode Listen Later Feb 22, 2018 44:38


Sam and Ryan talk about the upcoming Operations addition to the JSON:API spec, adding FastBoot support to Storefront, how to think about caching in Fastboot, and a thought experiment around how Ember might niche down and focus on its strengths.

REACTIVE
96: Without Cheese You Are Nothing

REACTIVE

Play Episode Listen Later Feb 6, 2018 45:47


Henning and Raquel talk about the Poison Dart Frog, moving cheese, when to maintain and when to develop features, a laptop theft ring, and the fact that Slack is mostly PHP.

Yakker Bot Talk
016 - Marc Littlemore and Getting Started with JSON API

Yakker Bot Talk

Play Episode Listen Later Feb 3, 2018 57:51


Marc Littlemore has a career in coding, but when he found chatbots he fell in love all over again. With his coding background, he understands how to make bots do things that they normally can't do! Very cool stuff. Learn more about Marc here: Marc's Website is here Marc on Facebook Marc on Twitter Marc on GitHub Marc's Bot! Click here to get his free ebook Santa Giftbot Tell him Thomas sent you :)

BoardWars.eu
Podcast #54 – Giving Advice

BoardWars.eu

Play Episode Listen Later Nov 3, 2017 160:30


Cleanup: The Image DB now has a JSON API! FFG Article – IA Regionals Locations FFG Rules – HotE Rulebook now available to download Community watch: TwinTroopers – HotE command cards Community watch: Rollfordamage.com – HotE Rebels Community watch: Rollfordamage.com – HotE Imperials Community watch: Rollfordamage.com – HotE Mercenaries Community watch: Custom card creator on the FFG boards Community watch: Skirmish […]

The Frontside Podcast
087: The JSON API and Orbit.js with Dan Gebhardt

The Frontside Podcast

Play Episode Listen Later Oct 26, 2017 45:07


Dan Gebhardt: @dgeb | Cerebris Show Notes: 01:33 - The JSON API Spec and Pain Points it Solves 08:40 - Tradeoffs Between GraphQL and JSON API 19:33 - Orbit.js 26:30 - Orbit and Redux 32:22 - Using Orbit 37:24 - What's coming in Orbit? Resources: orbitjs.com (Guide Site) ember-orbit Transcript: CHARLES: Hello everybody and welcome to The Frontside Podcast, Episode 87. My name is Charles Lowell, a developer here at the Frontside and your podcast host-in-training. Joining me today in hosting the podcast is Elrick Ryan. Hello, Elrick. ELRICK: Hey, what's up, Charles? CHARLES: How are you doing today? ELRICK: I'm doing great. CHARLES: Are you pretty excited? ELRICK: I'm very excited for this podcast because this is a topic that I've heard a lot about but don't know much about and it just seems so awesome that I'm just very stoked to hear all the details today. CHARLES: Yeah, me too, especially because of who's going to be giving us those details, he's one of the kindest, smartest, most humble and wonderful people that I've had the pleasure of meeting, Mr Dan Gebhardt. Hello, Dan. DAN: Hey, Charles. Hey, Elrick. Thanks for having me on. I really enjoyed listening to this podcast. It's nice to be part of one. CHARLES: It's good to have you finally on the show. We talked over chat and we talked over email and we meet every once in a while at conferences and it's great to get to share more widely some of the great conversations that always arise in all of those contexts. For those who don't know you, you are a founder at Cerebris and that is your company, which is involved very heavily in a lot of open source projects that people are probably familiar with. One of them that we're going to be talking about today is JSON API. I bet most people didn't know that you are one of the biggest driving factors behind both the specification and several of the implementations out there. DAN: Yeah, that's been a pretty core focus of my open source work for the last few years.
Actually, JSON API Spec, which is perhaps a somewhat confusing name for those who aren't familiar with it, was started by Yehuda Katz almost three and a half years ago, I think now. It hit 1.0 a couple years ago and has stabilized since then, and we've seen a lot of interesting implementations on top of it. There is some exciting stuff that's actually coming soon to this Spec that I'd like to share with you guys today. CHARLES: To give us a little bit of context, why? What pain am I experiencing that JSON API is going to solve or it's going to address or give me tools to deal with? DAN: One of its prime motivators is the elimination of bikeshedding. There are a lot of trivial decisions that are made with every implementation of an API and JSON API makes a lot of those decisions for you: how to structure your document, how to include relationships and links and metadata in a resource, how to represent relationships from hasOne/hasMany. Even polymorphic relationships have a type in their data. JSON API has opinions about all these things at the document structure level and it also has opinions about protocol usage, how to use HTTP together with this media type to make requests and for servers to return responses, how to create a resource, how to add resources to relationships and things like that. CHARLES: Also, it's not just a serialization format. It's very much also delving into the individual interactions and how they should be structured, more about the conversation between client and server. DAN: Yeah, in that way, it is somewhat unusual as a media type that covers both. CHARLES: Can you dig into that a little bit, because I'm very curious? Something that made my ears prick up was when you said it tells you, for example, how to add relationships to a resource. What would that look like? DAN: A lot of the influences behind JSON API are hypermedia-related. It's influenced by RESTful principles and includes a lot of hypermedia aspects.
One aspect is how a resource represents relationships in terms of the data in the document, the type and the ID that specify a linkage to another resource in the same document, but it can also include links to discover those relationships. There's a self-link for a relationship and a related link for a relationship. The self-link will return the data for that relationship as type/ID pairs. The related link will return the related resources. The Spec doesn't have strong requirements, or any requirements, about URL usage but instead, it describes where to find resources through these hypermedia links. If you want to, say, add records to a relationship, you'd follow the self-link for that relationship when it was returned with a resource. Then you would send a post to that endpoint and you would include the relationship data in terms of type and ID pairs. It gets down to that level of specification, so that removes the ambiguity of how to interact with these resources and mutate them and retrieve them. CHARLES: I see, so is there an idea then that you are going to explicitly model the relationships as individual resources? Or is that the recommendation or the requirement? DAN: The link for a relationship would point to an endpoint, which would then model the relationship represented at that endpoint. But let's speak a little more concretely, because certainly this makes some simple concepts sound a lot more esoteric than they really need to be. Let's just talk about an example. Let's say, we're talking about articles and comments and maybe an author. Let's say, you've fetched a collection of articles from an articles endpoint and within the article resource, you would have a relationships member, which would include comments, and then comments could have links, one of which would be a self-link and another a related link. The related link could be followed to then retrieve all the comments for that particular article.
You could also, if you wanted to add a comment for that article, post to the self-link for that relationship. You'd post to whatever endpoint is specified. Maybe it's 'articles/1/comments.' It could be anything that you want. Now, the Spec does have some recommendations to make everything fit nicely in terms of URL design patterns and such, but those are not by any means required. Having those recommendations just eliminates more bikeshedding opportunities. We find that people who gravitate toward the Spec really appreciate having a lot of these trivial decisions made for them, so even if we don't want to come down and be hard line about requiring those particular answers, we can at least provide some guidance for how things can work together nicely. There's a whole recommendation section on the site for things like URL design patterns. CHARLES: Right, so things that aren't prescribed but these are best practices that are recognized. DAN: Yeah, exactly. CHARLES: A question then that comes to mind: it sounds like JSON API solves a lot of these bikesheds, or just kind of comes in and takes one side or the other, for modeling both the resources and the relationships between those resources, so there's the... I don't want to call it a schema, but the boundaries around each resource are very clear: where they live and how they connect together. I was hoping we could maybe contrast that with another approach, which has also become very popular, and that's the GraphQL approach, where you're essentially assembling views at runtime for the client. It's very easy to marshal the data that you need to present to your view because you've got only one endpoint, as opposed to having to coordinate between them. I can understand the appeal of that and I was wondering if you have any insight into what the tradeoffs are between the systems and what are some of the things that one can do that the other can't. DAN: Yeah, sure.
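As a concrete aside, the article resource Dan sketches, with its relationships member, self/related links, and type/ID linkage data, might look like this in practice (the URLs are illustrative, not mandated by the spec):

```javascript
// A sketch of a single article resource under the JSON API media type.
// The URLs here are illustrative; the spec prescribes structure, not URL design.
const article = {
  type: 'articles',
  id: '1',
  attributes: { title: 'JSON API paints my bikeshed!' },
  relationships: {
    comments: {
      links: {
        self: '/articles/1/relationships/comments', // manages the linkage itself
        related: '/articles/1/comments'             // returns the related resources
      },
      data: [
        { type: 'comments', id: '5' },
        { type: 'comments', id: '12' }
      ]
    }
  }
};

// Adding a comment to the relationship means POSTing type/ID pairs
// to the relationship's self link:
const addCommentRequest = {
  method: 'POST',
  url: article.relationships.comments.links.self,
  body: { data: [{ type: 'comments', id: '13' }] }
};

console.log(addCommentRequest.url); // '/articles/1/relationships/comments'
```

Because linkage is expressed purely as type/ID pairs, the same shape works for any relationship, however the server lays out its URLs.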
I'm glad that you brought that up because I feel like GraphQL has become a real juggernaut, at least partly because of its marketing. It's been marketed very effectively to developers on its capabilities versus REST, as if a RESTful system can't possibly achieve the same outcome or the same efficiency. I'm glad to compare and contrast the two. To be honest, one of our short term goals is to better tell the story on the JSON API site, which was always more of a technical, spec-y site than a marketing site. That hasn't really helped its uptake as much as it could, as some of the GraphQL sites are very sleek and polished. Anyway, let's get down to it. GraphQL allows you to basically define the data that you want for a particular view, and that can bring together multiple related resources. It defines a way to specify exactly which fields you want in that graph of resources. We'll just stick with the articles, comments and authors example. You can specify that you want a collection of articles and perhaps the comments related to that and the authors, and you could have it all assembled in a single response. JSON API also allows you to do just that. It allows you to make requests for multiple related resources, to constrain the fields that are returned for each resource and to include all of these related resources in a single document. The main difference in the representation is that JSON API requires that resources only be represented once in a single document. GraphQL may have repetition of resources throughout the document that's returned. For instance, your articles may nest authors, and those authors, like Charles Lowell, may have written three of those articles, and that representation of that author is going to be repeated in the GraphQL response. A JSON API compound document, by contrast, is the term for a document that has a primary dataset combined with related resources.
That single author would only be returned once as a related resource, and the linkage between the primary data and the related resources would be established through type/ID pairs. Instead of having the author represented three times, the same type/ID pairs would just be providing that linkage to the same author, and that author resource would only be represented once. This happens to be ideal for client-side applications that, number one, want to minimize the size of the payload that's sent and, number two, don't want to have to handle repetition of data by doing the extra processing of pushing the same record multiple times into a memory store that is keeping that data. I think that GraphQL is well-suited to applications that request data and display that data pretty much as returned. There is no intermediate holding onto that data in, say, a memory store for later access. Basically, it lines up well with a component library like React, which wants to display the data that's returned from the server. If it wants to display that collection again, it will simply request that collection again and pretty much throw away the data once it has been rendered. CHARLES: I can see that. Dan, you and I might be some of the only folks who remember. I don't know if you ever did any Microsoft Access programming. DAN: Yes, I did, believe it or not. CHARLES: Doesn't it feel a little bit like the Access pattern all over again, where you have your components basically constructing a query, requesting the data -- DAN: Yeah. CHARLES: And then throwing it up on the screen. DAN: You're going deep there but I do remember that. Definitely, there is that same paradigm. CHARLES: It's really powerful. DAN: It is, and it's pretty accessible too because it's a direct representation of what you've requested and there's no intermediate processing. I guess the question is whether that intermediate processing provides some value.
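To make the deduplication concrete, here's a plain JavaScript sketch of a compound document in which two articles share one author; the names and titles are invented for illustration:

```javascript
// A sketch of a JSON API compound document: two articles by the same author.
// The author appears exactly once in `included`; each article references it
// by a type/ID pair instead of embedding a copy.
const doc = {
  data: [
    {
      type: 'articles', id: '1',
      attributes: { title: 'First' },
      relationships: { author: { data: { type: 'people', id: '9' } } }
    },
    {
      type: 'articles', id: '2',
      attributes: { title: 'Second' },
      relationships: { author: { data: { type: 'people', id: '9' } } }
    }
  ],
  included: [
    { type: 'people', id: '9', attributes: { name: 'Charles Lowell' } }
  ]
};

// Resolving linkage is just a lookup keyed by type + id:
function resolve(document, identity) {
  return document.included.find(
    (r) => r.type === identity.type && r.id === identity.id
  );
}

const authors = doc.data.map((a) => resolve(doc, a.relationships.author.data));
console.log(authors[0] === authors[1]); // true: both articles share one author resource
```

That cheap lookup by type and ID is exactly why a client-side store can push each record into memory exactly once.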
Actually, holding onto that data does provide some value because, as far as I'm concerned, GraphQL is great for rendering dumb data, where the data has no meaning outside of the rendering. But if you want to actually have models that have some intelligence about that data, then you want to use a store to keep those models in, and you want to be able to reuse those models for other purposes. CHARLES: What might be an example? What's a concrete use case that we can ground this discussion in? DAN: I would say that the big one is offline. You simply can't have just dumb data that's useful in any way in an offline application, or in an optimistic application, where you are doing some things client-side and only, say, undoing them if a request fails. But if your data is dumb and only structured for a particular view, then all you can do with that is redisplay that view. But if you understand the schema of your data and that data is available in a store, then regardless of whether you have a network connection, you can actually display that data in different ways. If that same article shows up in a collection in a list, you could also display that article on its own in a different format with more fields. If you want to, say, allow editing of that data, you could allow for an editor when your app is offline, allow changes to be made to that data and then redisplay it, because you understand the fields that are in that data. CHARLES: Right, and then at some point later, spool those changes back to the server. DAN: That's right. CHARLES: It almost sounds like, ironically, a system like JSON API, where you have very concrete boundaries around each of the underlying resources in your data model, allows you to essentially do rich querying on the client and not just the server. DAN: Yes, that's absolutely true.
CHARLES: Because I feel like what you just described is that now we have some sort of store over which we can map all kinds of different queries to our own liking, and there's no dependency on the server. DAN: Yeah, and if you just want your web app to be pretty much a view representation of what's on the server, without additional intelligence, then GraphQL really lines up well with your needs, because any extra processing you're doing is just not valuable to you. But I think a lot of the really interesting things being done in client-side applications are where your client application is pretty loaded with a lot of intelligence and you're out there autonomous and able to make sense of data. In that case, thinking about the data only as it pertains to views is not nearly as powerful. CHARLES: Right, so you could do something like that with GraphQL, but then you would have to essentially structure your queries such that they drew the boundaries around the individual resources anyway, rather than composing them on the server. You'd have to query them discretely into a store and then run your local operations. Then I guess at that point, it's like, what are you doing? DAN: Yeah, you're still doing the extra processing of handling the repetition of any nodes that repeat and such. That's just extra processing you have to do, but I agree that you certainly could structure your GraphQL queries to return data that is then loaded, say, into a store that really has awareness of the data types, but I don't think that is -- CHARLES: But then you're defeating the purpose, right? DAN: Yeah, it's not its selling point and it's not its strong suit. CHARLES: You've done a lot of work on the JSON API Spec. JSON API allows you to fetch discrete resources and their relationships while still keeping one representation of each resource in the payload, so it's optimized for when you want to do client-side processing and have intelligence based on these entities, which are in a store.
You actually maintain a fairly mature, at this point, framework called Orbit, which helps you do some of these things. Now, I understand that you've got a lot of new features that are really exciting coming down the pike. Before we get into those, what is Orbit today, what do you use it for and how does it use JSON API? DAN: Orbit is a data access and synchronization library, which sounds sufficiently vague because it has a lot of low-level primitives for structuring client-side data. Also, it's actually isomorphic and can be run on the server in Node, so it's not only used for client-side purposes, but that was its original purpose. The abstractions that it includes allow for synchronization of data changes across multiple sources of data. A source of data might be represented by, say, a JSON API server, an in-memory store, an IndexedDB database in your browser, or local storage. All of these sources of data can support an Orbit interface, which provides access to their data and also broadcasts changes to that data. In order to coordinate the changes across multiple sources, say to back up all of your data that's in memory to an IndexedDB source, you can observe the changes on one source and then sync those changes up with another. For instance, say you want to structure an offline application in which you have an in-memory store that uses client-generated IDs, which then syncs up with a backend JSON API source, and every change that gets made to the memory store needs to be backed up. You could configure multiple coordination strategies between the sources to make sure that the data flows so that every change that is made to the store is immediately backed up to IndexedDB.
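This is not Orbit's actual API, but the flow Dan describes, sources that apply changes asynchronously, log them, and broadcast them so a strategy can sync them into another source, can be sketched as a toy model:

```javascript
// Not Orbit's actual API: a toy model of sources plus a sync strategy.
// Each source applies changes asynchronously, records its own history,
// and emits the change so a strategy can flow it into another source.
class Source {
  constructor(name) {
    this.name = name;
    this.records = new Map(); // keyed by `type:id`, like a client-side cache
    this.history = [];        // deterministic log, enabling git-like rollback
    this.listeners = [];
  }
  on(fn) { this.listeners.push(fn); }
  async update(change) {
    this.records.set(`${change.type}:${change.id}`, change.attributes);
    this.history.push(change);
    for (const fn of this.listeners) await fn(change);
  }
}

// A "sync strategy": every change applied to `from` also flows into `to`.
function syncStrategy(from, to) {
  from.on((change) => to.update(change));
}

const memory = new Source('memory');
const backup = new Source('backup'); // stands in for IndexedDB or localStorage
syncStrategy(memory, backup);

(async () => {
  await memory.update({
    op: 'addRecord', type: 'articles', id: '1',
    attributes: { title: 'Offline first' }
  });
  console.log(backup.records.get('articles:1').title); // 'Offline first'
})();
```

Because every change lands in a per-source history log, a real implementation can replay or roll back that log, which is where the git-like conflict handling Dan mentions next comes from.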
You can add some error handling, and then when you're online, you can also sync those changes up with a backend, so you're basically pushing those local changes to a remote store. You're not slowing down your offline app, which you're communicating with optimistically, and you're only handling, say, synchronization failures when there is a problem. In order to handle those problems, Orbit sources are very deterministic about their tracking of changes and they provide git-like rollback capabilities, so you can look at the history of changes to a particular source and reset the history to any point there, and basically handle conflicts and merges in a very git-like way. Offline use cases were the primary driver of Orbit's whole architecture. I realized that it needed to be able to give you the tools to handle any conflicts that happen when changes get synced up, and also give you the tools to model all the different places data is kept in order to support offline mode. That's kind of a broad overview of Orbit. There's a new guide site, OrbitJS.com, for those who want to dig a little deeper into it. The data is structured in the JSON API format internally to the store, and the standard operations are very much influenced by the standard JSON API protocols that are allowed in the base Spec for creating records and removing records and all the CRUD for both records and relationships. That's where JSON API comes into Orbit. CHARLES: Right, I see. The primary use case for Orbit is offline. Is that fair to say? DAN: Yeah, that was the primary driver, although it's just not the primary -- CHARLES: It seems like you could use this in a lot of places where I might use Redux or something like that, like on the server to model... I don't know, a chat app. DAN: Yeah, definitely. CHARLES: I have a bunch of different information streams coming together and how am I going to merge them and make sense of them. DAN: Yeah, in fact, at its primitive level,
Orbit has essentially an async redux-like model for queuing up changes and applying those changes. The change sets are all immutable. There's actually a lot of immutability in use throughout the library. In order to ensure that the changes that are applied are tracked deterministically, we just can't have those changes mutating on us. There is definitely some overlap with Redux concepts, in terms of the general task or action concepts in Redux, but instead of Redux's synchronous approach, everything in Orbit is async. CHARLES: What does that mean? Redux is synchronous in the sense that there's a natural order to all actions. For those of us familiar with Redux, are you saying it would be like a store where actions can be dispatched at any time, or is it more like, I've got multiple stores happening and I need to resolve them somehow, so each one is synchronous? How can I make sense of that? DAN: In Redux, the actual application of an action is performed synchronously. CHARLES: Right. You can have asynchronous processes, but there is a natural order to the actions that those asynchronous processes yield, and then those are applied synchronously to the Redux store. DAN: Yeah. To compare and contrast Orbit and Redux, I guess you'd first have to say there's a primary difference of -- CHARLES: I think a lot of people are familiar with Redux. I think it's not so much to compare and contrast them but just to use Redux as an analogy: 'Here's how it's the same. Here's how it's different,' because, well, that is compare and contrast. DAN: There you go. CHARLES: But not in terms of evaluating them. It's like, 'Maybe I should be using this instead.' DAN: Right, they are sort of on different levels, although Orbit is shipped across multiple libraries and there are some primitives that I think could be useful outside of the main Orbit data application.
Anyway, all I was getting at is that the function applying Redux state changes is synchronous, while in Orbit, every state change applied to a source is asynchronous, so the result is never applied immediately. You'll always get a promise back and you'll never have that application happen immediately. That's one clear distinction. Another is that Redux has a big singleton global state for the entire application. Orbit very much has a model of state per source, so there can be any number of sources in a particular application, and a source might be an in-memory source, might represent browser storage or HTTP, or might represent a socket that's streaming data in. All of these have temporally distinct state and, even if they all converge to a common state, Orbit models them separately, so that there's a state per source. I'm just contrasting the global app state that exists in Redux with the per-source state in Orbit. CHARLES: It sounds like there's nothing that would be fundamentally incompatible about using Orbit in conjunction with Redux, where Redux is kind of a materialized view of all of your different data sources, presented as what you're going to render off of, right? DAN: Yeah. You could use it in a similar way to Redux Saga, where Orbit fills the role of Saga, doing the asynchronous actions whose results flow back into the Redux state. CHARLES: I'm just imagining having one big global atom, which is your Redux store. Now, I'm not saying or prescribing this is an optimal architecture, but I'm saying one way it could work is it picks and chooses and assembles off of the different sources as new data becomes available. As the states change for those sources, they can be integrated into a snapshot state, which is suitable for rendering or provides one view for rendering. DAN: Yeah.
You're basically talking about the in-memory source, perhaps merged with other application state, which is not so resource-specific, and that is possible to model. CHARLES: What I think I might be hearing you say is that you could also just use another source which is the merge itself. DAN: Yeah. I'm not sure how much we want to continue this thought exercise because the architecture becomes almost not something I'd recommend. But I would actually like to explore how Orbit and Redux could be used together optimally. I played around a bit with Redux but I have not written a full-fledged application with it, other than a [inaudible] location. I definitely defer to you for Redux best practices and such and how people are using it in real world applications, but I'd be really interested to talk that over again soon. CHARLES: Well, I certainly don't count myself a Redux expert, although we have developed some applications with it. We'll put that on the back burner as something to explore later. I will say this: I find Redux to be both wonderful and terrible, kind of in the same way that Java is both wonderful and terrible. We'll leave it at that. DAN: Okay. ELRICK: That was going to be my question. What I was very excited to hear about today was Orbit, because I've heard so much about it. In terms of implementing Orbit in an application, what would that look like from a high level? Has anyone used Orbit in a production app? Have you built any apps using Orbit? DAN: Yeah, definitely. There are people using Orbit with React, with Vue, with Angular and with Ember, and there's an integration library called ember-orbit which makes Orbit usage really easy in Ember. In a lot of ways, working with ember-orbit feels a lot like working with Ember Data, but it allows a lot more flexibility. I suppose one of its strengths and weaknesses is that there's a lot of configuration that's possible, because there are a lot of possibilities.
The internals of how data gets synchronized are exposed, so you can define your strategies and sync up different sources. In terms of how it's actually used in an application, you'd start by modeling your data in terms of the resources that are in the application. You'd have a schema that defines your articles, your comments and your authors, just to keep that example going. Then that schema would be shared among all the sources in your application. You would have one source, say, that might be the in-memory source, and another source representing browser storage, so you could swap in either a local storage source or an IndexedDB source and use either one to provide that backup role. You would declare those sources and connect them to each other with strategies so that, say, when the memory source changes, you would then sync that change to the browser storage source. Then you'd have backup and you'd be able to refresh your page and view the same data you were looking at before. Then you'd probably want to wire up a remote source so that you're communicating with a server, so you bring in a JSON API source and you would then set up a new strategy for working with that. You have to decide, 'When my memory source changes, do I want this change to happen optimistically or pessimistically?' By that I mean, 'Do I only want it to appear successful if it's been confirmed by the server?' Depending upon whether you want to be optimistic or pessimistic, you set up your strategies a little differently. If you handle this change pessimistically, you'd want to block success on the successful completion of pushing that change to your remote server. You have the set of strategies that define the behavior of your application, and then you'd do your CRUD operations pretty much directly with your memory source. Then if you wanted to, say, do an edit in a form, you might fork the store. Now, the store keeps its data in immutable data structures.
Forking that store is very cheap, so you don't have a bunch of data that's copied. You're just getting a new pointer to the same immutable data structures. Every time they get changed, there are new references. There's immutability under the hood, but you're pretty well insulated from the annoyances of working with immutable data structures. In that fork, you make your changes, you then merge your changes back, and you get a condensed change set of operations that can then flow through your strategies. It flows through to your backup source. It could flow back to the server. I think it would feel pretty familiar for users of Ember Data, because a lot of the API influences came from that library. But obviously, people are using just plain Orbit with other libraries, with other frameworks, and finding it useful there, but it definitely involves a little more configuration up front to do all that wiring that might be more implicit in a library like Ember Data. CHARLES: I understand that before we go, there are some pretty exciting new things coming in Orbit. Do you feel like you're ready to mention a couple of those things, or have they been kind of mixed in with the conversation? DAN: Let's see. I have the guides up, which I mentioned, which are pretty new in the last couple of months. In the last year, we did a rewrite and Orbit is now completely in TypeScript and there are no external dependencies. For a while there, I was using RxJS and observables internally, and Immutable.js, so there's now an internal immutable library. It's lighter-weight with fewer dependencies now. I'm excited about that and finally feel like I can recommend people digging in with the guides that are up. I'm hoping to get the API docs up soon. I will say I'm excited. I just got back from a retreat in Greece.
Séb Grosjean, who owns the company BookingSync, does this amazing thing with the Ember community: every year, he invites the group that's working on Ember Data to come to his family's place in Greece. He grew up working with his family on their rental properties, which was the inspiration for his company, BookingSync, and said, 'This is a fantastic opportunity for us to get together and collaborate in a really nice place,' and I had a really productive time this last week. This was the very first time I had gone. It was just fantastic and I worked with the Ember Data team. Igor Terzic and I spiked out some interesting collaboration between Orbit and Ember Data, so I'm really looking forward to seeing where that goes and hopefully, we'll see a little bit more Orbit, either directly or just through influence, appearing in Ember Data. I'm looking forward to working more closely with the Ember Data team. We'll see what comes of that. CHARLES: Yeah, I, for one, am very excited to see it. I'm resolved now. I'm just looking at these guides. They look fantastic and I'm resolved to give Orbit at least a try, either in some of our applications or maybe by spinning up some new ones and having it be the basis for some of the ideas I've been playing with. DAN: That would be awesome, and there's a [inaudible] channel which I hang out in, if anyone out there has any questions. CHARLES: Before we go, if anyone is interested in JSON API, is interested in Orbit, is interested in Cerebris: we mentioned a lot of things that, in one way or another, map back to you. How do we get in touch to find out more about these different entities and projects? DAN: I'm at @DGeb on Twitter. My company site is Cerebris.com. Also check out OrbitJS.com for the new guides. Reach out to me. I'm on the Ember Core Team, so I'm also hanging out in the Ember community Slack, depending upon what you want to talk with me about.
I'm in all these different places, so I'd love to hear from you all. CHARLES: All right. Fantastic. We'll make sure that we put those in the show notes and I guess that's about it. Do you have anything else you want to leave folks with: any talks, papers or big news coming around soon? DAN: Something that we didn't really get a chance to talk about today, which I'm really excited about, is JSON API operations, which is an extension to the base Spec that I'll be proposing very soon. There's a future to JSON API; once it hit 1.0 a couple of years ago, it didn't just stop. We're looking at different ways to extend the base Spec and use it for different and interesting purposes. JSON API operations is, I think, one of the most interesting ones. The idea is basically to allow for multiple requests that are specified in the base Spec to be requested in a batch and performed transactionally on the server, so the Spec will define how each request gets wrapped. Each operation very much conforms to the base Spec concept of a request. For implementations, there's a lot of opportunity to reuse existing code for how to handle each particular operation, but also to provide a whole new set of capabilities by allowing you to batch them together and process them transactionally, because it just unlocks a ton of different things you can do, all based on the same base concepts from JSON API. I'm really excited to have something to announce soon about that. CHARLES: That sounds like it might solve a lot of problems that are always associated with those things. It always comes up: what does our batch API look like? I don't think I've been on a project that didn't have a months-long discussion about that, which ended up getting kicked down the road with something just flumped in place.
DAN: Yeah, all those messy edge cases where people figure out how to create multiple related records together in a single request. People do it ad hoc, with embedding and such, and we want to standardize that in the same way that we've standardized the base operations. CHARLES: Well, that is really exciting, Dan. I wish you the best of luck and we'll be looking for it. DAN: Thanks a lot. Thanks for having me on, guys. CHARLES: It was our pleasure. Thanks. With that, we will say goodbye to everybody. Goodbye, Elrick. Goodbye, Dan. Goodbye everybody listening along at home. As always, you can get in touch with us. We're at @TheFrontside on Twitter, you can see our website at Frontside.io or just drop us a line at Contact@Frontside.io. We always love to hear from you with new podcast topics or anything that you might be interested in, so we look forward to hearing from you all and will see you next week.
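As a footnote to the operations discussion: the extension was only a proposal at the time of this conversation, so the following payload shape is purely illustrative of the idea, a batch of base-spec-style requests wrapped as operations and applied transactionally:

```javascript
// A purely illustrative sketch of a batched "operations" payload, not the
// final extension syntax. Each entry mirrors a single base-spec request.
const batch = {
  operations: [
    {
      op: 'add',
      ref: { type: 'articles' },
      data: { type: 'articles', attributes: { title: 'Batched!' } }
    },
    {
      op: 'update',
      ref: { type: 'people', id: '9' },
      data: { type: 'people', id: '9', attributes: { name: 'D. Gebhardt' } }
    },
    { op: 'remove', ref: { type: 'comments', id: '5' } }
  ]
};

// Because each entry maps to one base-spec request, a server can reuse its
// per-request handling and only add transactional semantics around the batch.
console.log(batch.operations.map((o) => o.op)); // ['add', 'update', 'remove']
```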

Rebuild
193: Winter Is Coming (gfx)

Oct 24, 2017 · 105:52


With Goro Fuji as our guest, we talked about Discord, Slack, GraphQL, RESTful APIs, Pixel 2, Kotlin, React Native, and more. Show Notes ISUCON Fastly Yamagoya Meetup 2017 fastly #yamagoya2017 - Togetterまとめ Discord Reactiflux is moving to Discord - React Blog Gitter Slack日本語版、年内に登場へ Introducing Shared Channels: Where you can work with anyone in Slack GraphQL | A query language for your API GraphQL: A data query language The GitHub GraphQL API Hypermedia Swagger rmosolgo/graphql-ruby: Ruby implementation of GraphQL Node.js + GraphQLでBFFを作った話 PromQL | Prometheus Apollo GraphQL Caching of GraphQL servers with Fastly / Varnish JSON API SSKDs and LSUDs Kibela Bloke takes over every .io domain by snapping up crucial name servers Is using an .ly domain right - or wrong? Google Pixel 2 How Google Built the Pixel 2 Camera 「Pixel 2」日本投入なくアプリ開発者が困惑 Pixel 2 and Pixel 2 XL are the first phones to support eSIM for Project Fi users Latest Chrome Beta Update Drops the Address Bar to the Bottom by Default Bottom navigation - Components - Material Design 507SH, Android One Kotlinのスキルを持たないAndroid開発者は恐竜のようになるリスクに直面 Jake Wharton Microsoft/reactxp necolas/react-native-web: React Native for Web If you use Twitter Lite you're now using a web app rendered by React Native for Web Relay

Matt Report - A WordPress podcast for digital business owners

On today's episode, Sam and Corey hit the halfway mark of Season 5B by interviewing Tom Willmot, the CEO of Human Made. Tom talks about the agency along with Happytables, the SaaS product that became the agency's niche product when Noel Tock joined Human Made as a partner in 2013. Listen to the show: Matt Report - A WordPress podcast for digital business owners, Season 5B, E9: Tom Willmot (Duration: 00:40:20) Guest: Prior to founding Human Made with Joe in 2010, Tom cut his professional teeth with lead technical roles on some of the earliest examples of large-scale sites built with WordPress, including the ground-up rebuilds of both Geek.com and Digital Trends. In addition, Tom sits on the board at Happytables and has had advisory roles with Rufflr, Market Realist, United Influencers, and Clickbank. He's a regular public speaker, both offline and online. As CEO, Tom splits his focus between the big-picture vision of where Human Made is going and how they will get there, along with the day-to-day support of their amazing humans and clients. What you will learn from this episode: Happytables is a website builder platform for the restaurant niche. (4:33) More recently there has been a pivot to SaaS with the Restaurant Command station. (5:26) There are many restaurants running the WordPress version. New signups for the "older" WordPress version no longer exist. (5:58) It is difficult to use WordPress for your SaaS without it dominating your UI. The most valuable part of the web builder platform is the dashboard with usable, presentable data. Supporting a SaaS (Software as a Service for WordPress): The pivot to the SaaS was inspired out of necessity. (6:34) Some needs of a restaurant are generic and they can get websites for minimal cost.
The UI of Happytables predates Human Made's move to the restaurant niche. Human Made partnered with Noel Tock in 2013. The customized SaaS product became a website builder, now into several versions. (8:02)
The most valuable tool in the restaurant dashboard is the analytics and the restaurant data. Restaurant owners want that data.
Happytables v2 is still built on WordPress in addition to other technologies. The dashboard uses custom JavaScript and a different database, using APIs back to WordPress. (11:28)
Managing some things like users and website posts allows you to get to market quickly using WordPress as the application framework. (11:52)
The JSON API in WordPress core has just come out, so there is not a lot of repeatable development and process in the open source yet. (12:46)
Happytables v2 is multi-network, which WordPress supports internally. (14:51)
To scale the SaaS you need to solve problems at the software engineering level to address scaling and security. (29:35)

Why Stay With WordPress:

Happytables is already developed in WordPress. It makes sense to use the technology that does the job best (e.g. user management, publishing, workflow). (18:55)
The SaaS application can use what WordPress offers for free.

Decision Making for a Custom Admin:

The standard WordPress admin seemed complicated for new users. (18:56)
Noel designed a new admin that was much simpler, based on the users' needs. (19:18)
The product is not a large, complex product. (21:00)
In some cases, you are pushing ahead of WordPress with best practices, which may not yet exist. (22:22)

Future of WordPress:

There is a need to document best practices in WordPress (e.g. if you are building in React and connecting to WordPress, you need standard libraries and workflow). (24:32)
The current API does not have all the features to take advantage of the additional functionality. There is a lot that has not been exposed in the WordPress API.
(26:21) The API infrastructure was addressed with WordPress core. Not everything is exposed yet in the API. (27:23)
The API needs software engineers to extend the functionality. (31:20)
Every project that Happytables does is using the API in some way. (26:44)
WordPress will probably be developed to interact with other technologies rather than being everything to everybody. (37:11)

EPISODE RESOURCES

Nomad Base
Human Made
Happytables
Noel Tock

Follow Tom: Tom Willmot on Twitter, Human Made Blog

If you like the show, please leave a 5-star review over on the Matt Report on iTunes. Sponsors: Gravity Forms, Pagely ★ Support this podcast ★
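The REST API workflow discussed above -- a custom JavaScript dashboard talking back to WordPress -- comes down to requests against the core wp/v2 endpoints. A minimal sketch (the function name is made up here, and the fetch implementation is injected so the function can be exercised without a live site):

```javascript
// Fetch post titles from a WordPress site's REST API (the core wp/v2 namespace).
// 'site' is the WordPress install's base URL; 'fetchImpl' is any fetch-compatible
// function (pass window.fetch in a browser, or a stub in tests).
function getPostTitles(site, fetchImpl) {
  return fetchImpl(site + '/wp-json/wp/v2/posts?per_page=5')
    .then(function (response) { return response.json(); })
    .then(function (posts) {
      // Each post's title lives at title.rendered in the wp/v2 response schema.
      return posts.map(function (post) { return post.title.rendered; });
    });
}
```

In a browser you would call `getPostTitles('https://example.com', window.fetch)`; the dashboard described in the episode would layer its own analytics views on top of responses like these.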

The Frontside Podcast
069: Redux Part II with Toran Billups

The Frontside Podcast

May 11, 2017 - 40:28


Toran Billups @toranb | GitHub | Blog Show Notes: 01:44 - New Developments in ember-redux 04:23 - New Developments in the Wider Redux Community 06:26 - Using Redux in Ember 09:40 - Omit 10:45 - Reducers 25:42 - Fulfilling the Role of Middleware in Ember 28:12 - Ember Data in Redux-land 31:24 - What does Toran do with this stuff?? Links: The Frontside Podcast Episode 55: Redux and Ember with Toran Billups The Frontside Podcast Episode 18: Back-End Devs and Bridging the Stack with Toran Billups redux-offline ember-redux-yelp create-react-app "Mega Simple redux" Twiddle ember-concurrency Thomas Chen: ember-redux The Frontside Podcast Episode 067: ember-concurrency with Alex Matchneer normalizr Rich Hickey: Simple Made Easy Other Notable Resources: ember redux: The talk Toran prepared for EmberJS DC in April 2017 github.com/foxnewsnetwork/ember-with-redux Transcript CHARLES: Hello everybody and welcome to The Frontside Podcast, Episode 69. My name is Charles Lowell. I'm a developer here at The Frontside and your podcast host-in-training. With me is Wil Wilsman, also a developer here at The Frontside. Hello, Wil. WIL: Hello. CHARLES: Today, we have a special guest, an actual elite member of the three-timers club, counting this appearance. We have with us Toran Billups. Thank you for coming on to the show today. TORAN: Absolutely. I'm not sure how the third time happened but I'll take it. CHARLES: Well, this is going to be the second one where we're going to be talking about Redux, and then I believe you were on the podcast back in 2014 or 2015. TORAN: That's right. CHARLES: That's one of our first episodes. Make sure to get in touch with our producer afterwards to pick up your commemorative mug and sunglasses to celebrate your third time on the show. Awesome. I'm glad to have you. We actually tend to have people back who are good podcast guests. TORAN: Thank you. CHARLES: Yeah, I'm looking forward to this one.
This is actually a continuation of a podcast that we did back in January, which was actually one of our more popular episodes. There was a big demand to do a second part of it. On that podcast we talked about the ember-redux library, which you're a maintainer of, and just kind of working with Redux in Ember in general. We're going to continue where we left off with that but obviously, that was what? Almost six months ago? I was wondering, maybe you can start there: have there been any kind of new developments, exciting things? What's kind of the state of the state, or the state of the reducer, or the state of the store in ember-redux? TORAN: For ember-redux in particular, we're working on three initiatives right now. The first is making the store creation more customizable. A lot of people that come from the React background in particular are very used to hand crafting how the store is put together with the right middleware and enhancers and reducers, and that's been fine. I wanted to drop people into the pit of success and everybody's cool with that, but now we're getting to a point where there are people who want to do different things, and it's great to open the door for those people if we can, while keeping it very simple, so we're working on that. We have something there that's just undergoing some discussion. We're also, just as the wider Ember community -- you guys may be involved in this as well -- trying to get the entire stack over to Babel 6, the ember-cli Babel 6.10 plus stack. There is a breaking change between Babel 5 and 6, so we're also having some discussions about the ember-redux 3.0 version bump at some point later this year, just because we really can't adopt this without introducing basically a breaking change for older ember-cli users. CHARLES: Just in general, this is a little bit off topic, but what does it mean to go from Babel 5 to Babel 6, if I'm an add-on author? TORAN: You would probably need to speak more with Robert Jackson about this.
We just kind of went back and forth because I thought I had a Babel compile error. He's like, "No, you're missing this dependency which is the object spread." Unfortunately, the object spread is rampant in React projects and this is totally cool. I had to actually add that and that's just a breaking difference between these two. If we adopt the new version of this and the shims underneath it, as an Ember 2.43 user, if you're on node four, which is still supported, you will break without this. I'm trying to get some discussion going about what we should do here and whether we even should push ahead and just say only node six is supported. There's some discussion, and then, back to your original question, the third piece is we've introduced the ability to replace the reducer, but we need to get some examples for hot reloading the reducer. That's a separate project but it needs to be enabled by ember-redux. Those are the three main initiatives. CHARLES: Being able to hot load your reducers, just to make changes to your reducer and you just thunk them into the application without having to lose any of the application state -- and one of the reasons that's possible then is because your reducers have no state themselves. They're just pure functions, right? TORAN: Exactly. CHARLES: Okay. Awesome. That sounds like there's a lot of cool stuff going on. Beyond ember-redux, are there any developments to look for on the horizon in the wider Redux community that might be coming to Ember soon? TORAN: Actually, one of them is fairly new, and it's already in Ember just because I have already got a shim up for it: redux-offline. Which -- I remember you had Alex on two episodes ago about breaking your brain around Rx. I feel like that happened for me trying to build apps offline first.
This is, of course, just another library that you can drop in that plays nicely with Redux, but I feel like the community... at least it's got me thinking now about an app that would really disrupt someone who's a big player today. I feel like if you built a great offline experience with true, well done data syncing, you could really step in and wreck someone who's in the space today. CHARLES: Right, so this is a shim around... what was the name of the library again? TORAN: Redux-offline. CHARLES: Okay, so it's just tooling for taking your store and making sure that you can work with your store if you're not actually connected to the network, and like persisting your store across sessions? TORAN: Yeah. It uses a library called redux-persist that takes care of persisting and kind of hydrating the store if you have no network connectivity. But it's also beginning to apply some conventional pattern around how to retry and how to roll back. It's just an interesting look at the offline problem through the lens of an action-based immutable data flow story. It's interesting. I don't have a ton of experience with that kind of thing. I went and rewrote my Yelp clone with it and that was tough. That's what I mean by this. It's like, I thought this was very trivial, but you have to do a lot of optimistic rendering and then sort of optimistically generate primary keys that get swapped out later, and it's tricky. If you've never done offline first, which I have not, I just think offline is pretty cool. And along those lines, there's been a lot of discussion around convention. There's, of course, Create React App, which is like a little library or CLI tool to kick off your Redux and React projects. It's kind of like ember-cli, a very trimmed down version of that right now, and that's just getting incrementally better. Of course, you guys are in the React space so you may touch upon that story if you haven't already. CHARLES: All right.
We talked at a very high level, I think, the last time we had you on the show, but now that the idea is gaining traction, let's delve into more specifics about how you use Redux in Ember. I asked at the end of the last podcast, let's step through a use case, like what would deleting an item look like in ember-redux land. Maybe we could pick that up right now and just understand how it all connects together. TORAN: Yeah, absolutely. Without understanding how this entire flow or this event bubbling happens, it's hard to get your head around it. The process we're going to walk through is exactly that use case you laid out, Charles. We're going to have a button in our component, and on that button's on-click, the idea is to remove an item in a list that we happen to be rendering, let's say. If this is a child component, like the very primitive, literal button that you have, and you just have your on-click equal to, probably, a closure action in Ember, the parent component or the outer context is going to be responsible for providing the implementation details for this closure action and what it does. This is kind of the meat of what you're getting at. The high level here is there is a single method on the Redux store that you have access to and it is called dispatch. The nice thing about Redux, again, is the API surface area is very small. It's just a handful of methods you need to get your head around. This one in particular, the dispatch method, takes one parameter. That parameter is a JavaScript object. Now, if you're just playing around and you just want to see the event flow up, there's only one requirement asked of you and that is a type property. This JavaScript object has a type that is often a string, so it's very human friendly; you just put the string 'remove item' in there, let's say. Now of course, if you want to remove a particular item in this remove example, you of course want to pass some information as well beyond the type.
The type is mostly just a Redux thing to help us identify it, but in this case, you'll definitely know the primary key or the ID value, let's say, of the item you want to remove. In addition to type, this JavaScript object, let's just say, has an ID property and that can come up from the closure action somehow if you want. Once you click this, what's going to happen is you're going to fire this closure action, the closure action is going to invoke dispatch with the JavaScript object, and dispatch is going to run through the reducer, which is the very next step, and what we do is we take the existing state. Let's say we have a list of three items; that's going to be the first argument now in the reducer. The second argument is this action which is just, as I describe it, a JavaScript object with two properties: type and ID. You can imagine an 'if' statement or a conditional switch statement that says, "Is this the remove item action? If it is, okay." We have the ID of the item we want to remove, and since we have a dictionary where the primary key of the item is the ID, we can just use lodash omit -- we basically use omit to filter out the ID -- and then use Object.assign to transform, or produce, a new state. Then a callback occurs after this that tells your list component, somewhere in your Ember tree, to re-render, now only showing two items. CHARLES: One of the things I want to point out there -- you just touched on it but I think it's an interesting and subtle point -- is that the lodash method that you used was omit, and this is kind of tangential, or I'll say parallel: programming in this way means you don't actually use methods that mutate any state. You calculate new states based on the old states. I think that's a great example of that -- that omit function -- omission is the way that you delete from something in an immutable fashion. You're actually filtering or you're returning a new copy of that dictionary that just doesn't have that entry.
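The dispatch-and-reduce flow described here might look roughly like this sketch (the action type and state shape are illustrative, and object rest destructuring stands in for the lodash omit mentioned in the conversation):

```javascript
// A dictionary of items keyed by id -- the "existing state" being reduced over.
const initialState = {
  items: {
    1: { id: 1, name: 'first' },
    2: { id: 2, name: 'second' },
    3: { id: 3, name: 'third' }
  }
};

// Pure reducer: (state, action) => new state. Nothing is mutated.
function reducer(state = initialState, action) {
  switch (action.type) {
    case 'REMOVE_ITEM': {
      // Destructuring with rest stands in for lodash's omit:
      // copy every entry except the one keyed by action.id.
      const { [action.id]: removed, ...items } = state.items;
      return Object.assign({}, state, { items });
    }
    default:
      return state;
  }
}

// What dispatch would hand to the reducer: a plain object with type and id.
const next = reducer(initialState, { type: 'REMOVE_ITEM', id: 2 });
// next.items now holds items 1 and 3; initialState is left untouched.
```

The "delete" is really an omission: the old dictionary still exists unchanged, which is exactly what makes time travel possible later.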
You're just omitting it. You're not destroying the old one. You're not deleting anything. You're not changing it. You're just kind of Xeroxing it but without that one particular entry, which is ironic because the effect on the UI is that you have done a delete but really what you're doing is omitting there. I just think that's cool. I think it's one of the ways that using these systems teaches you to think about identity differently. Then the question I want to ask you was, this all happens in the reducer, what does that mean? What does that word mean -- reducer? I kind of danced around that idea and I've tried to understand where that term came from and how it helps give insight into what it's doing, and I come up short a lot. Maybe we can try, and if we can't explain the name 'reducer', maybe there's some alternate term that we can help come up with to aid people's understanding of what the responsibility of this thing is? TORAN: Yeah, I think we can just break down 'reduce' first, then we'll talk about how it ends up looking. But I think it's 'reduce' almost like it's defined in the array context. If I have an array of one, two and three and I invoke the reduce method on that, we actually just produce a single value, sort of flattening it out, as the result of that, so three plus two plus one: six is the end result. What we've glossed over this entire time, probably last episode too, is that this reduce word, I believe, is used because in Redux, we don't really just have one massive monolithic reducer for the entire state tree. We instead have many small reducers that are truly combined to do all the work across the tree. In my mind, I think reducer fits well here because we're actually going to combine all these reducers and they're all going to work on some small part of the state tree. But at the end of the day, we still have just one global atom and that's the output.
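Both senses of "reduce" discussed here can be put side by side in a short sketch. The `combine` helper below is a hand-rolled stand-in for Redux's combineReducers, written out to show the idea rather than the library's actual implementation:

```javascript
// The array sense of "reduce": fold many values into one.
const sum = [1, 2, 3].reduce((acc, n) => acc + n, 0); // 3 + 2 + 1 = 6

// The Redux sense: many small reducers, each owning one slice of the tree,
// folded into a single new state object -- one "global atom" as output.
function combine(reducers) {
  return (state = {}, action) =>
    Object.keys(reducers).reduce(
      (nextState, key) =>
        Object.assign(nextState, { [key]: reducers[key](state[key], action) }),
      {}
    );
}

const rootReducer = combine({
  count: (count = 0, action) => (action.type === 'INCREMENT' ? count + 1 : count),
  items: (items = [], action) => items
});

const combined = rootReducer(undefined, { type: 'INCREMENT' });
// combined is one object: { count: 1, items: [] }
```

Each small reducer only ever sees its own slice, yet the result of a dispatch is still a single reduced state.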
We want one global atom with a new transformed state and that is the reduced state. CHARLES: You take some set of arguments, one of which is the prior state, and you reduce that into a single object. I like that. The other place where I've seen this term applied in a similar context is in Erlang. They talk about reduction: you have an Erlang server where the way that the server is modeled is as a recursive function call, where you pass in the prior state of the server plus any arguments if you're handling a request or something like that -- bundled in there is going to be the arguments of the request -- and then what that function returns is a new state, which is then passed in as the next state. They call this process reduction. We've got two data points. Maybe there, we can go search for the mathematical foundation of that later -- TORAN: I like it. CHARLES: -- if you want to geek out. I think that helps a lot. Essentially, to sum up, the responsibility of dispatch is: you take a set of arguments, it's going to take the existing state of the store, run it through your reducers, and then it's going to set the next state of the store, or yield the next state of the store. Is that a fair summary of what you would say the responsibility of dispatch is? TORAN: Yeah. I think you're right. In fact, in preparation for this talk, I just threw together a really small Ember Twiddle that we'll link in the show notes, what I call the mega trimmed down version of ember-redux. It's basically a really naive look, but for conceptualizing this flow, it's about 24 lines of code that show exactly what you're saying, which is: I have this reducer, it's passed into this create store method -- in the syntax, how does this actually look? It better illustrates how the reducer is used when dispatch is invoked. So a dispatch -- if I was actually to walk line by line through this, which would probably be pretty terrible -- but the very first line of dispatch is to just call the reducer.
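A naive store in the spirit of the "mega simple" Twiddle being described might look like the following. This is a guess at its shape, not the actual Twiddle code: dispatch calls the reducer first, pushes the new state onto an array, and "going back in time" is just moving the index pointer:

```javascript
// A deliberately naive createStore: every dispatch appends a new state,
// and no state is ever mutated, so the whole history stays intact.
function createStore(reducer) {
  const states = [reducer(undefined, { type: '@@INIT' })];
  let index = 0;
  const subscribers = [];
  return {
    getState: () => states[index],
    dispatch(action) {
      // First line of dispatch: just call the reducer.
      states.push(reducer(states[index], action));
      index = states.length - 1;
      subscribers.forEach(fn => fn());
    },
    subscribe: fn => subscribers.push(fn),
    // Time travel: flip the pointer without touching any state.
    jump: i => { index = i; }
  };
}

const store = createStore((count = 0, action) =>
  action.type === 'ADD' ? count + 1 : count
);
store.dispatch({ type: 'ADD' });
store.dispatch({ type: 'ADD' });
store.jump(1); // back to the state after the first ADD
```

Because the reducer never tampers with the old state, undo and redo fall out for free from the array of states.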
From that new state transformation, we just push in, so the store gets a new entry into its array of 'here's the next state', and because we never tampered with or side-effected the old state, we could easily go back in time just by flipping the pointer in the array, or pointing that array index back a step. CHARLES: I guess my next question is, we've talked a little bit about immutability and we know this reducer that we call at the very first point of the dispatch is a pure function. We're dealing with pure functions and immutable data but, at least in perception for our users, our system is going to have a side effect. There's going to be calls to the network. We are, at least in theory, deleting something from that list. How do you go about modeling those side effects inside Redux? TORAN: This is a great intro because in fact a friend of mine is actually a teacher at a boot camp and he was telling me the other day that he was asked to do a brief look at Redux, and his very first feeling when he was watching some of the Egghead.io videos was like, "Oh, so the reducer is pure but I have to side-effect something, so where do I do this?" It's not very clear, I think, for the very beginner, which is why we left it out of that part one podcast. Today, we're kind of hitting that head on, but before we get into that list, we can talk about what actions in their simplest form look like today in your Ember app. As I mentioned earlier in this remove example, you've got the button; it just takes the closure action on-click. No big deal. The bigger work is on the parent context or the parent component to provide this action, which sounded very simple, but imagine instead of just dispatching synchronously, which is what we talked through, we only want to dispatch that official change if we have gotten a 204 back and the item is deleted on the server -- normal Ajax or fetch-type flow.
In this case, you start to add a little more code, and imagine for the moment this is all inline in your component JS file. The component now has started to take on an additional responsibility. In addition to just providing a simple dispatch, as I say, "Go and remove this, Mr Reducer, later," you're now starting to put in some asynchronous logic and, as you imagine in a real application, it grows: try/catch stuff, some error handling, some loading, some modals. This gets out of hand. One of the things that I want to touch on briefly is, at least in the ember-redux case, we ship this Promise-based middleware -- and I want to stop right there for just a second, because I use that word 'middleware' and immediately we've got to at least highlight what this is doing. In that pipeline we talked about earlier, in the component I dispatch and it just goes right to the reducer. Well, technically there's actually a step or an extension point right before the reducer is involved, and that is where middleware comes in. Technically, you could dispatch from this action and then you could handle and do the network IO type request in middleware instead, then fire off another dispatch of a different type with the final arguments to be transformed. That's really the role. CHARLES: Can middleware actually dispatch its own actions? TORAN: Correct, yep. In fact, here is one of the first big differences with this example, as I'm kind of hacking around in the component: I've got access to dispatch, but there's two things I'm really actually lacking if I don't leverage middleware full on. The first is I do have some state in the component but often, it's actually just a very small slice of state that this component renders. If there's actually a little bit more information I need, or I actually need to tap into the full store, I don't have it, and that might be considered a good constraint for most people. But there are times, you imagine in more complex apps, when you need the store.
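Redux middleware has the `store => next => action` shape. Below is a hypothetical Promise-based middleware along the lines being described; the action names and the `fakeDelete` network stub are made up for illustration and do not come from ember-redux itself:

```javascript
// Stands in for a fetch/Ajax DELETE request that resolves with a 204.
function fakeDelete(id) {
  return Promise.resolve({ status: 204, id: id });
}

// Middleware: intercept the impure async action, do the network work,
// then dispatch the pure action for the reducer to fold in.
const removeMiddleware = store => next => action => {
  if (action.type !== 'REMOVE_ITEM_ASYNC') {
    return next(action); // not ours: pass it along the pipeline untouched
  }
  return fakeDelete(action.id).then(response => {
    if (response.status === 204) {
      // Only now dispatch the synchronous, reducer-friendly action.
      store.dispatch({ type: 'REMOVE_ITEM', id: action.id });
    }
  });
};
```

The component stays thin: it dispatches `{ type: 'REMOVE_ITEM_ASYNC', id }` and never sees the Promise handling, error flow, or retry logic, which all live behind this seam.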
You might even need to see a little bit more state; middleware provides that, whereas in the component you're trapped with just that slice of state and the dispatch keyword. That's about it in the component. But the other side is the benefit: as you break this out, you get another seam in the application where the component now is not involved with error handling and Promises and async flows or generators. It just does the basic closure action set up and fires dispatch almost synchronously, like you did in our very simple example, allowing the middleware to actually step in and play the role of, "Okay, I'm going to do a fetch request and I'm going to use a generator." It's almost like the buffer for IO or asynchronous work that was missing in our original equation. Imagine you want to debounce something or you want to log something or you want to cancel a Promise, which you can't do -- any of that stuff is going to happen in this middleware component. That's one of the things I like about middleware as I learn more about it, and the moment you get to a very complex async task, where you're actually doing the typical type ahead, where you literally want to cancel and not do the JavaScript work, or you'd like to cancel the Promise as quickly as you can, you can very quickly dive into something kind of like what you and Alex talk about with ember-concurrency, in Redux-land. It's called redux-saga. It's just generator-based async work. CHARLES: Has saga kind of emerged as the async library that everybody uses? I know it's very popular. TORAN: Yeah, and for a good reason. I mean, it solves a lot of the problems you'd hit if you were to try and do the cancelation token Promise stuff that came out a while back, where we're trying to figure out how to cancel Promises. There's a lot more ceremony and just a lot more state tracking on your own than with generators, and even when I played with this last week, which is actually redux-observable, which is an Rx-based middleware.
It's built by, I think, Ben Lesh and Jay Phelps from Netflix or... sorry, Jay is still at Netflix but anyway... You could use Rx, you could use generators. This really is just the escape valve for async and complex side effect programming that can't or shouldn't take place in the reducer because it's pure. It shouldn't take place necessarily in a component because one of the best pieces of advice I got when I was younger was, "Toran, make sure you do or delegate," and we're talking really about levels of depth in your methods at the simplest. But it applies here as well, which is I would love it if I had just a very declarative component and I didn't have to get into the weeds as I was looking at it about: is this a Promise? Is this Redux thunk, as they call it? Is this a generator or is it Rx? I don't even care in the component for the most part. I just need to know the name of the action and the arguments. If I'm having a problem with the Rx side or the generator, I'll go into the middleware and work from that particular abstraction, but you can see the benefits of the seam there. CHARLES: Do the middlewares match on the action payload in the same way that the reducers do? Is that fair to say? TORAN: That is fair, and I will warn you: if that seems very strange, you're probably not alone. In fact, the first time I did this with redux-saga, I was dispatching, only to turn around and then dispatch again. It feels very strange the first time but again, keep in mind that you're really trying to have a separation between the work that has side effects and the work that is pure. The first action in that scenario, we'll call it remove-saga because it's actually going to fire something to a middleware. That work is all going to be network-heavy and it's not really as easily undone-redone because it's not pure. But the second event, invoked from the actual middleware itself, says, "Remove item. Here's the ID. We're good."
That work could be undone-redone all day because it hits the reducer, which is pure, so you can go in and out. CHARLES: It sounds like basically the middleware is allowing you to have a branching flow structure, because they all do involve more actions getting dispatched back to the store to record any bookkeeping that needs to happen as part of that. If you want to set some spinner state, that will be an action that gets dispatched. But in terms of sequencing, they allow you to set up sequences of actions, where you basically have one action that will actually get resolved as ten actions or something like that. If you think about an asynchronous process, you have the action that starts it, but that might end up being composed of five different actions, right? Like, I want to set the application into some state that knows that I've started my delete, and that means I want to show, like, the spinner. Then at some later point, I might want to show progress -- like, this delete is really taking a long time -- and I might want to dispatch five different actions indicating each one of those little bits of progress. Then finally, I might want to say it's done or it failed, so really those got decomposed into 10 actions or five actions or whatever, so the middleware is really where you do that, where you decompose high level actions into smaller actions? Or it's one of the places? Is that a correct understanding? TORAN: Yeah, I think if you're an old school developer for a minute, it will cater to the audience that maybe came from early 2000s backend dev. Now, they're still pretty current in web dev. I see it talked about as business logic. I feel like this is really the bulk of the complex work, especially if you're using Rx. You're actually creating these declarative pipelines for the events to flow through. My components are much thinner by comparison.
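The two-dispatch pattern described above -- an impure "remove-saga" handled in middleware that then dispatches the pure "remove item" action -- can be sketched with a plain generator. redux-saga's own effect helpers (takeEvery, call, put) are deliberately omitted here; a tiny hand-rolled runner drives the generator instead, so this is only a sketch of the idea:

```javascript
// The saga yields each async step; the runner below drives it to completion.
function* removeSaga(id, api, dispatch) {
  const response = yield api.destroy(id);  // impure: network I/O happens here
  if (response.status === 204) {
    dispatch({ type: 'REMOVE_ITEM', id });  // pure: safe to undo/redo
  }
}

// Minimal generator runner: resolve each yielded Promise and feed the
// result back into the generator until it finishes.
function run(gen) {
  const step = value => {
    const { done, value: yielded } = gen.next(value);
    return done ? Promise.resolve(value) : Promise.resolve(yielded).then(step);
  };
  return step();
}
```

Usage would look like `run(removeSaga(5, api, store.dispatch))`: the generator reads as straight-line code even though a network round trip happens in the middle, which is much of redux-saga's appeal.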
They truly just fire off this action with the information to kick off the async pipeline, but in the async piece of it, there's a lot more work happening, and that's, I think, because there's a lot of complexity in async programming. CHARLES: Right, and it's almost like with the reducers then, there's not so much business logic because you're just resolving the implications of the new state. Is that fair to say? It's like, now we've got this information, what does this imply directly? TORAN: Yeah, I think there is this old [inaudible] thing where they're talking about what should be thick. You know, thick controllers or thick models, what should it be? Of course, we never want 'thick' anything, is the right answer, I think. But in the apps I'm building today, I feel like if anything was thicker -- a measure of degree bigger in effort or work -- it is these middleware components right now. I think that's the nature of what you describe, which is the reducer is not supposed to be doing anything complex. It's literally taking a piece of data in, producing a new piece of data out. Logically thinking about that takes much less effort, I think, than the human brain applies to async programming in JavaScript. CHARLES: Right. I think it makes sense and some of these things are just going to be necessarily gnarly and hairy because that's where the system is coupled. I can't say anything about whether the delete succeeded or failed until I've actually fired off the request. Those are implicitly sequenced. There has to be some glue or some code declaring that those things are sequenced. That has to be specified somewhere, whereas theoretically with your reducers, you could just run them all in parallel, even. If JavaScript supported multithreading, there's absolutely no dependency between those bits of code. TORAN: I think so, yeah. I think there are still some challenges, because in the reducer sometimes.
We can talk about this in a few minutes, but you may actually be changing several top level pieces of the tree. If you're de-normalizing, which is what we probably should touch on next, there are some cases where you want to be a little careful, but like you said, generically, immutable programming enables multithreading. We're not touching the same piece of state at the same time. CHARLES: Right, as long as that piece of state that you're touching -- like, you need to resolve the leaf nodes of the tree first, but any siblings, I guess is what I'm saying there, ought to be able to be resolved in parallel. It's more an exercise in theory or just a way of thinking about it, because the reason you're able to do those reducers as kind of these pure functions is that there's no dependency between them. I guess I'm just trying to point out, to wrap my head around it, that there are places where there are just clear sequences and dependencies, and those are things that would be in the middleware. TORAN: Gotcha. I came away a little scared of service workers. [Laughter] CHARLES: Actually, a great point is: what is the analog -- if there is anything analogous -- in Ember today that's fulfilling the role of middleware? What's the migration path? What's the alternative? Just trying to explore where you might be able to use these techniques that we've been describing inside your app. TORAN: I think, at least, my look at it has been a service injected into a service, which sounds completely bad or sounds broken the first time you see it, because you're like, "We're injecting a service into an existing service." I say that because, for me at least, there is a top level service that owns the data and provides read-only attributes, but there should be some other piece of code -- not the component -- that is doing this asynchronous, complex processing we just talked about as middleware, and that is often a different service than the service that owns the state atom.
That's what I meant by service-injected. There's some Ember service whose job is to manage the complexity that would probably end up in middleware from the Redux perspective -- or ember-concurrency is literally solving that, in my opinion. They do a lot for you, solving the async problem generally, and I haven't dug into ember-concurrency enough to know. The pipeline stuff, which I think you guys talked about, which is an RFC, may eventually be what I consider the Rx or redux-saga of the middleware today. CHARLES: Right. I think ember-concurrency is just absolutely fantastic, but it is a hairy problem and there's some overlap in terms of what it is solving. I think that is interesting. I guess a case where you would use middleware would be anywhere that you would use ember-concurrency. I think the interesting thing to compare and contrast there -- one of maybe the advantages or disadvantages, let's just call it a tradeoff -- is that with ember-concurrency, you have this middleware that is associated with an object. It's associated usually with a component or a route. When things happen to that component, you're able to affect your ember-concurrency process, but it does mean these things are sprinkled throughout your application and the rules that are governing them can be really different, depending on which part you're operating in, just because sometimes you're using them on a route. Sometimes, you're doing it on a component. Sometimes, you're doing it on a service. Whereas with putting it in the middleware, it sounds like they're going to all be in one particular place. All right. Let's move on from the simple to the more complex because that's where it's at. We've talked about modeling async processes, we've talked about handling state transitions and all that; nothing typifies that more profoundly in the Ember community than Ember Data, as just a foundation for state and syncing it over to the network. Love it or hate it, it's very popular.
What are the things that you do in ember-redux land? Is it fundamentally incompatible with Ember Data, or is it just more easily served with other alternatives? How do you handle those foundational interactions, those fun foundational async loading network interactions, with Ember Data or just using ember-redux?

TORAN: Yeah, for myself, I don't have any experience actually using the two together on purpose. There is a gentleman who did a talk sometime last year, and I'll dig up the YouTube clip for you guys, where he talked about his approach where you would actually produce new states so it's still Redux-friendly. Ember Data itself would just be a new Ember Data model every time you transformed it. But one of the tricky points is the philosophy of both: in Ember Data, you're just invoking set on everything, and that's just how it works. That's how the events bubble through the system as you re-render. You never actually create a new state of the system that's a copy, minus or plus other attributes. You're just always touching a single source of truth. I felt like that was always a sticking point. Anyway, Thomas, who did this talk and I'll point you guys to it, did a great job of saying, "Look, if you're stuck in this world with a lot of Ember Data, you're having some pain points with it and you want to try Redux to see if this alleviates those by not changing the state, here is a middle ground," which he did, I think, a fabulous job laying out. Although I must admit, there have got to be some challenges in there just because of the philosophical difference between the two.

CHARLES: Yeah. It definitely sounds like there are some challenges, but I'm actually pretty eager to go and watch that, to see what they came up with.
If you're using these snapshot states of your Ember objects, would it be possible then to take all of your save and delete records, even query, and have them inside of middleware, like have a redux-saga for every single operation you want to take on the Ember Data store?

TORAN: The example he showed is basically the best of both worlds. You don't want Ember to mutate, so he has a special bit of code to do that. But because the rest of it is vanilla Ember, you could drop in concurrency if you want to do the saga-type generator stuff. But you could also just make your changes as you would otherwise. You use the adapters, you fetch, you save, you delete, whatever you want to do, for the most part. It saves a lot on the de-normalizing side, which you would otherwise have to do manually. You don't write any Ajax code, which you have to do manually on the Redux side. I think there are benefits if someone could get it to work where you're just not changing the state of Ember Data, which may actually just be the future of Ember Data at some point as well.

CHARLES: It sounds like there is a pathway forward if that's the way that you want to go and the road that you want to walk, so we'll look for that in the show notes. But my question then is, you're here on the podcast, what do you do?

TORAN: I do want to have one disclaimer here, just so that I'm not a complete poser, but I am. If you guys don't know this, I'm not trying to hide this from the community, but I don't work on ember-redux at work. I don't have a side gig making money with it. I don't use it ever. I literally just build examples to try stuff out. There's both a blessing and a curse to that. The curse is that you're asking, "Hey, Toran, you're the author. How does this work, man?" I can give you my run at it, which I will, but the other side is very clear: I have not built and shipped Facebook -- or the current company I'm with -- with X million people hitting it every month. We're not using it.
This isn't exactly a 'Toran stamp of approval' here, but I do mess around with -- this week in particular -- Rx, which I like. I think Rx is just something that changes the way you think about the way programming, especially async programming, works. I definitely cannot comment much on Rx other than I like Alex's challenge to the community on your podcast: go use Rx, whether you use ember-concurrency or don't use ember-concurrency, and it will break your brain. It will be for the better. Actually, Jay did a mini code review with me because my first pass at Rx was just using fetch-promise, because I was like, "I want Rx for side effect modeling, but I want it to still work with Ember acceptance testing," because I still feel like Ember is leading in that way, as you guys talked about on the podcast recently. What was really cool is there's actually a shim -- obviously Rx has its own little Ajax thing, but it is not actually Promise-based. The advantage of this that Jay called out is that in the Promise-based approach, where I'm using ember-fetch, let's say, and I'm just wrapping it with Rx, those Promises are still not really cancelable. So what Jay was warning me about is like, "If you're going to use this quick and dirty, great, but in a real app, these will still queue up in Chrome or IE and block the amount of network requests you can actually make." So don't use Promises, even though they're very familiar. Use this operator, I think it is, or a helper inside Rx which is the Ajax non-Promise-based operator. Long story short, though, there's a good amount of work involved; that grass is greener. In Ember Data, if you've ever used 'belongs-to' or 'has-many', you have done the most magical thing right there. In all the right ways, it is amazing, because once you're in Redux and you're like, "I have this very nested object graph," well, Redux isn't meant to operate on this nested object graph. It's meant to operate on a single tree structure, with as many top-level entities as it can.
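That nested-graph-to-flat-tree transformation can be hand-rolled in a few lines. This is only a dependency-free sketch of the idea (the payload shape and names are illustrative): a post with embedded comments, as it might arrive from a JSON API or GraphQL call, gets flattened into top-level "posts" and "comments" maps that reference each other by id:

```typescript
interface Comment { id: number; body: string }
interface NestedPost { id: number; title: string; comments: Comment[] }

interface NormalizedState {
  posts: Record<number, { id: number; title: string; comments: number[] }>;
  comments: Record<number, Comment>;
}

function normalizePost(post: NestedPost): NormalizedState {
  // Index every embedded comment by id at the top level.
  const comments: NormalizedState["comments"] = {};
  for (const c of post.comments) comments[c.id] = c;
  return {
    posts: {
      [post.id]: {
        id: post.id,
        title: post.title,
        // Keep only ids, much like a has-many relationship in Ember Data.
        comments: post.comments.map(c => c.id),
      },
    },
    comments,
  };
}
```

A component that only cares about the blog title can now subscribe to `posts` alone, and one that renders comments subscribes to `comments`, without either walking a nested graph.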
There's a project that's pretty popular in React called normalizr; you will likely use this project eventually. Maybe not in your first 'Hello, world', but you'll use it to actually break apart, or truly de-normalize, the structure. What that does a lot of times is: if you have a blog with comments nested all inside of it from your JSON API call or your GraphQL call, that's fine coming from the network. But since you're going to have different components listening for just the comments, maybe, or different components that just listen and re-render when the blog name changes and don't care about the comments, you want to actually de-structure that so you have a separate blog top-level item and a separate comments top-level item. They're still related, so the blog can get to its comments and vice versa, as belongs-to and has-many work in Ember Data, but you've got to do that work now. There are, of course, magical projects like redux-orm that I just can't speak to how well they work or don't work, but they try to solve the problem with a more Ember Data-like approach, which is: define this and let the ORM take care of the magic for you. I actually don't mind normalizr. It's just something people should be aware of because it's more work. You've got to break that apart yourself, just as much as writing your own network requests. You're hopefully not going to duplicate Ajax all over the place, but you will have to do work that you otherwise do not do in Ember Data, for sure.

CHARLES: It's very interesting. If you look at the Ember Data internals, it sounds like the Ember Data store is actually structured very similarly to the way you'd structure a Redux store using something like normalizr, where you have these top-level collections and then some mechanism to both declare and then compute the relationships between these collections of top-level objects. But I want to go back to your other point too. I just wanted to say this.
Toran, you know, don't sell yourself short, because you give an incredible amount of time, an incredible amount of support to the community. You're very active in the Ember Redux channel. When problems come up, you think about them, you fix them. Even if you're not actually watering the trees, you're planting the seeds. I think that's actually great. A very valuable and necessary role in any community is to have people who are essentially the Johnny Appleseed of a particular technology. You go around and you throw these seeds around and see where they take root, even if you're not there. You're on to the next shady lane to plant seeds, rather than staying to enjoy the shade and the fruit of the apple trees.

TORAN: Yeah. I appreciate your kind words there, because a couple of years ago, I got into open source because it provides good personal branding. It's like, "This project, it's Toran's. We should hire Toran." It just makes you look better from that perspective. It also gives you almost a way to skip out on tough interviews at times: people are like, "This guy can clearly program. Let's take a look at his PRs. He communicates with other humans online." It gets rid of some of that. But there is a dark side. We don't talk about it because there is an upside to it, especially for consulting, but the dark side can be the time commitment: how much bandwidth do you have outside of your family life, your hobby, if you have one, and any other open source or work-related stuff that you already have to do? For me, this is really an exercise in thought leader-y stuff. I saw the benefits of this. It made my Ember better. Even if I wasn't using Redux, it just made the Ember code I wrote at work better. It inspired me to look at different things like ember-concurrency and Rx, things that were just way out of my comfort zone two years ago. I think those are all the benefits that come with it, but at the end of the day there has got to be some value from it.
Is the juice worth the squeeze? I think the community we've built and the people using it and the problems we're solving are all definitely worth the squeeze.

CHARLES: It definitely is, and you can tell from the vibrancy of that community that a lot of people are experiencing that value from it. To your point, I think something that is often lost on people is that you can actually use a project without actually using it. I think that there might be many people, for example, in the Ember community that have never used React but are in fact using it because of the wonderful patterns that it has brought to the fore. I had thought about immutability, certainly on the server side, but I hadn't thought about it really deeply on the client side until a library like React came along and people started talking about it. I would say before we actually started using React the library, we were using it in thought. You touched on that when you were saying, "It's changed the way I think, it's changed the way I code." Even if it's just changed the way that I do things at work, in fact you're using it in spirit, if not the actual structure, and I almost feel like that's more important. It's longer-lasting and has a greater impact on you 10 years down the road, or even five years down the road, when neither of the technologies that we're actually talking about today are even going to be in wide use.

TORAN: Yeah, that's true. In fact, the one thing I would call out that people should check out -- I think Alex mentioned this or at least you guys have talked about it in passing -- is to definitely, sometime this weekend, watch the Simple Made Easy talk by Rich Hickey. It will definitely make you think differently. Regardless of the simple side or the easy side that you fall on, projects of course make tradeoffs on both sides of that, but it is a great talk.
Especially if someone's been programming six months or a year or two years, they're going to get huge benefit from it, just as much as someone older like myself who has got 10-plus years in the biz.

CHARLES: Yeah, I know. That is a fantastic talk. We need to link to it at every single show.

TORAN: Exactly.

CHARLES: Well, I think that is a fantastic note to end on, so we will wrap it up. That's it from Frontside for this week. We're going to have you back, obviously, Toran; there's so much that we could cover. Six months down the road, we'll do part three, but for now, that's it. Thank you, Wil. Thank you, Toran, for podcasting with us this morning.

TORAN: Thanks for having me on guys. I really appreciate it.

CHARLES: Then everybody else, take it easy and we'll see you all next week.

The Frontside Podcast
065: Data Loading Patterns with the JSON API with Balint Erdi
Apr 6, 2017 33:15
Balint Erdi: @baaz | balinterdi.com | Rock and Roll with Ember.js

Show Notes:

01:58 - What is JSON API? Advantages
03:22 - Tooling and Libraries
05:49 - Relationship Loading
07:51 - Designing a Data Loading Strategy
11:23 - Pitfalls of Not Designing a Data Loading Strategy
13:53 - Ember Data
16:37 - Pagination & Sorting
23:06 - Writing a Book
25:48 - Implementing Searches with Filters
31:08 - What's next for Balint?

Resources:

Balint Erdi: Data Loading Patterns with JSON API @ EmberConf 2017 (Talk)
Balint Erdi: Data Loading Patterns @ EmberConf 2017 (Slides)
jsonapi-resources
GraphQL
JSON API By Example by Adolfo Builes
ember-cli 101 by Adolfo Builes
33 Page Minibook + Coupon Code!

Transcript:

CHARLES: Hello, everybody and welcome to The Frontside Podcast, Episode 65. My name is Charles Lowell. I'm a developer here at The Frontside. With me, also from The Frontside, is Elrick Ryan. Thank you for being with us, Elrick. I know this is your first podcast.

ELRICK: This is my first podcast. It's great to be here.

CHARLES: All right. Fantastic. Yes, we hired Elrick a little bit ago and it's been fantastic. I'm glad to get you on. With us today is a really awesome guest. His name is Balint Erdi. I actually like to tell a little bit of a story when I have an anecdote, and I do have one about you that I think you might like, although you might not even remember it. It was shortly after EmberConf last year. You and I got on a pairing session remotely, and I don't even remember what we were working on, but I was struggling with a way to decorate objects without changing them, without touching them or mutating them in any way, and you showed me this technique of actually decorating it by creating a new object with the old object as the prototype. Do you remember that?

BALINT: Yes. I totally do. How could I forget?

CHARLES: Yeah. That one hot tip changed my life. It is one of the best techniques that I have discovered in the last five years of working with JavaScript.
It really was great and I use it all the time.

BALINT: Wow. Amazing.

CHARLES: Thank you. I don't know if I ever said, "Thank you," but thank you, thank you, thank you.

BALINT: Yeah, no problem. I also learned a lot from that pairing session, actually. I didn't know that my small contribution made such an impact. I'm glad to hear that.

CHARLES: Yeah, that was fantastic. We need to actually make that happen again. I don't know why we only did that once.

BALINT: Yeah, we should.

CHARLES: Anyway, we're here to talk about data loading; it's something that is absolutely critical to building good frontends and building UI, and yet it's something that the users never really see. Sometimes it feels like it's 90% of the problem.

BALINT: Exactly, yeah.

ELRICK: Yeah, that's so true.

CHARLES: We're going to talk about techniques that we use and you use and, in particular, JSON API: what it is and what's so great about it. So, what is JSON API for folks who've never heard of it?

BALINT: JSON API is a standard way to build APIs. The specification reached 1.0, I would say, two or three years ago. I remember it was in June, I'm not sure which year. It basically lays down everything that's usually considered when you build an API: how to fetch relationships, how to paginate data, how to sort, all of these things that I think developers tend to reinvent again and again. I think probably the biggest advantage of JSON API is that it just declares a standard way to do that. It basically reduces the bikeshedding going on at the start of the project. Well, not just at the start, later on too. In my talk at EmberConf, I called JSON API the convention over configuration for APIs.

CHARLES: I see. Pagination is something that everybody does. Why bikeshed over the syntax, like the actual data format?

BALINT: All of these things are things that everybody does. It's just that everybody does it differently.
There's a lot of discussion going on about which are the best ways. For example, when there's a team -- at least every team that I was involved with had several discussions going on about what data formats to send data in and how to paginate -- all these details where it's more important to get to an agreement, just to agree on something and move on, than to get it perfect, if at all there is a perfect way to do it.

CHARLES: I guess my question is, if you have the standard way of doing everything, what kind of tooling can you build, that you can kind of inherit for free? At what level, both from the low level and then up to the top level? When I say top level, I mean what the user is seeing.

BALINT: By low level, I guess you mean the actual libraries that implement JSON API in different frameworks, right?

CHARLES: Exactly. Are there now a lot of libraries out there, so whatever I'm using, if I'm using JSON API, is it available in a lot of different ecosystems now?

BALINT: Yes. It definitely is. There is a full page on JSONAPI.org, on the official JSON API page, that just lists all of the different libraries in all these languages. I have experience with Rails, plain Ruby too, and there are three libraries and I think all of them are pretty good for implementing JSON API. The one I use is called jsonapi-resources, and it's very telling. Well, it's a rather simple application, but I basically didn't have to write a single line of code. I only had to write very little code on the server to implement a JSON API specific feature. Most of the relationships could be implemented by just declaring JSON API resources and the name of the resource in the Rails application. For all the other things, I really didn't have to do that much, so every time it was just adding one or two lines or changing a configuration value, and then it was just there.

CHARLES: Now, how do you choose then what relationships you want to load and in which order?
Is that controlled by JSON API?

BALINT: It's controlled by the frontend. It is the frontend application that's going to send these requests to the backend, so that's where you should consciously think about what relationships you are fetching and how.

CHARLES: Right, so part of the specification is a way of specifying which relationships you want to load. In my understanding, part of JSON API is an interface along the lines of telling the server, "I want to load all of the posts that this user has made."

BALINT: Yes. JSON API indeed has a keyword called 'include', which you can implement on your backend, which does this. If you specify 'include' and then the name of the relationship, or relationships for several ones, the backend must comply with that request and also send back those related resources. That is called a compound document in JSON API parlance.

ELRICK: Is that the reverse of what they do in GraphQL? Because in GraphQL, I think you have to request the relationships you want on the frontend and then it kicks it off to the backend and it gives you the information back. Is JSON API's 'include' that same thing but in reverse?

BALINT: I'm not sure, because I'm not very familiar with GraphQL, but what 'include' does is that you are normally fetching a resource, and then if you specify 'include', you are telling the backend, "Please also include these related resources with the primary resource."

CHARLES: I think it occupies a very similar concept space, in that you want to have control on the frontend over which resources or what data gets fetched in addition.

ELRICK: Yeah, it sounds very similar.

CHARLES: Well, I'd love actually to do a comparison because I know there's a lot of overlap between GraphQL and this. Maybe we can get into that a little bit later.
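The compound-document shape just described follows directly from the JSON API spec: asking for something like `GET /posts/1?include=comments` returns the primary resource under `data` and the related comments under `included`, linked by `type`/`id` pairs. The concrete field values below are illustrative; the structure is the spec's:

```typescript
const doc = {
  data: {
    type: "posts",
    id: "1",
    attributes: { title: "Rock and Roll" },
    relationships: {
      comments: { data: [{ type: "comments", id: "10" }] },
    },
  },
  included: [
    { type: "comments", id: "10", attributes: { body: "Great post!" } },
  ],
};

// A small helper that resolves a relationship's resource identifiers
// against the `included` array, the way a client-side store would.
function related(document: typeof doc, name: "comments") {
  const refs = document.data.relationships[name].data;
  return refs.map(ref =>
    document.included.find(r => r.type === ref.type && r.id === ref.id));
}
```

This is essentially the lookup Ember Data performs for you when it loads a compound document into its store.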
One of the things that you said back there is that this gives you kind of fine-grained control over when and how you load your data, because that always seems a pretty difficult problem to attack on your frontend. As you're rendering your application, you have to incrementally fetch little pieces of data here and there and make sure that it's ready at the time that you actually need to render something like a component, so it's got the right data at the right time. It seems like it's this constant dance of whack-a-mole: "I'm loading too much here. It's taking too long," versus, "I've got too many loads happening. I've got 20 requests to render a single page. How can I batch those up into a single thing?" How do you go about thinking about and designing that data loading strategy as you're getting ready to render pages?

BALINT: That's a really good question. I think the short answer is: how you load data has to be part of the design process. You really have to spend time thinking about how you're doing that based on the needs of the UI. I think the way you need to render the UI will suggest the way to do data loading. Especially in Ember, you have ways to, for example, block the page from rendering too early, so anything that you want to render first, you can fetch in this blocking way in the model hook of Ember. Then anything else, if you're okay with rendering it later, you can fetch in other ways: in the setupController hook or from the templates or from controllers or whatever way, but not in the model hook.

CHARLES: But if you are doing something outside of the model hook -- because this is a pattern I feel like comes up a lot -- and regardless of where you're operating, if you're using Ember, if you're using another framework, you have this top-level data loading. But then your nested components might need more data. How do you go about loading that data? I guess you have to think about that also.
Upfront it's like, "I've got this component that might want to request more data." How do I actually design and think about that, if I've got this data that's going to come, who knows, maybe minutes after the initial route is rendered?

BALINT: That would be different, but I think you can apply the same principle. You can fetch some data in the model hook in this blocking way and pass it all down to your component if you don't want to render the component before the page renders, or you can just fetch it from the component while the component is rendered.

CHARLES: You're thinking maybe in terms of streamlining the rendering process, so that you can begin rendering while your data is loading. Is that the use case?

BALINT: Yes. If you, for example, fetch the data from your component, then it's just going to fetch the data as needed. You have the whole page load and render, and then when the component is rendered and it fetches the data, you're going to see other requests go out to the backend. The data comes back and then the component is re-rendered with the new data. I think in most cases that's totally fine, but you might have a use case where you don't want to render the page before the component has all of the data that it needs. In that case, what you can do is, once again, fetch the data in the model hook, in a blocking way, and just pass it all down to the component.

ELRICK: What are some of the pitfalls that you would run into by not thinking about your data loading strategy beforehand that you can pull out and explain?

BALINT: I think the classic one is what is known in Rails and other frameworks as the 'N + 1 problem': when you fetch many related resources, you might end up doing N requests for N resources.
In the case of Ember, you have, for example, a blog post as your model, and then in your template you write model.comments. What's going to happen -- depending on the actual library that you use on the backend, but I ran into this myself -- is that by default, you are going to make those N requests. Ember's solution really just works. I mean, the default solution just works, and you might not even run into this because you might not have the scenario. You might just have a few records. But if you have a great number of them, then it's going to be a bad experience.

ELRICK: So someone that's just dealing with Ember, and then they go and make a request and see all these requests come back, that would be something that they would then have to turn around and ask for or fix within the backend, to say, "Only give me a certain set of this."

BALINT: Yes, exactly.

ELRICK: Or is there something on the Ember side with Ember Data where you can say, "Only fetch X amount"?

BALINT: Right. I think both. I can speak about using Ember Data and jsonapi-resources, so what you can do to mitigate this case is to use links, or rather relationship links, instead of fetching the comments one by one. The backend can actually drive Ember Data to fetch all the comments in one request from the link that it sends the frontend. You first ask for the blog post itself, and then a JSON API compliant backend will send back the blog post resource, but it's going to have a relationship link inside. Ember Data automatically records this, so when it needs the comments for the blog post, it's going to fetch them from that provided URL. Actually, I haven't really talked about that in a lot of detail at EmberConf.

CHARLES: Yeah, I see. I'm going to go off on a little bit of a tangent because I feel like this is a pattern that's coming up more and more.
To give a little bit of context, I feel like the way that our data loading strategies have evolved is: we're used to the page loading, then we kind of analyze the URL, we decide what data needs to be loaded to render our components, then we pull that data from the server based on that decision, and then we do our render. But it seems increasingly prevalent that we have a combination of both pulling data from the server and having the server push data onto us. One question is: what are the strategies for dealing with that, given that, certainly in the Ember world, routes aren't reactive? And then, does JSON API actually help with that at all?

BALINT: Yeah, good question. I don't think that JSON API specifies how [inaudible]. You are thinking about something like web sockets, like pushing data from the server over sockets. I don't think JSON API covers this scenario.

CHARLES: Yeah. It definitely seems beyond the scope, but I'm wondering if there are any thoughts about general strategies, about how to handle this model state that's sitting at the top of your render tree. In Ember, for example, in your route, how do you handle the fact that the route requests data, and how do you handle data coming in after the initial render?

BALINT: If you use Ember Data, then you can push data coming in from a web socket, for example, to the store. You probably do some massaging on the data that comes in and then you push it to the store, and then depending on how you fetched the data, you might have a live collection. For example, if you do a store.findAll on notifications and then a notification comes in, it is going to get displayed right away on your page, because your template is bound to a live collection. But not all things in Ember Data are live collections.

CHARLES: Okay, so it's mostly library dependent, but if you're using Ember Data, then you just push those directly into the store, into those live collections. It's kind of like a real-time query.

BALINT: Yeah, exactly.
But what I'm saying is that not all Ember Data queries that you do are live collections. For example, relationships are probably not, depending on what method you use to fetch that data. You might have to do some additional footwork.

CHARLES: Okay. Now, getting back into the area in which JSON API really does shine, the things that can really shave off a lot of time and consequently money from your work, let's talk about those a little bit. For example, you mentioned pagination. How is JSON API going to help me if I want to have paginated data? What are the scenarios on the client where I would need paginated data? Then maybe we can walk back from: here's this user interaction that I need paginated data for, and how is JSON API going to help me with that?

BALINT: Sure. I guess the typical scenario on the frontend is that there is a long list of items and you don't want to overwhelm the user by showing them all at once. You need to just show them page by page. JSON API is agnostic about the exact paging strategy that you use. You can use the classic page-based approach or a cursor-based one or an offset-based pagination technique. It doesn't really force you to choose the way you want to do that. It also mentions, I think the way it frames it, that you might want to use the page query parameter. I think the libraries, at least jsonapi-resources for sure, use that page parameter to send back paginated data. You have page and, in square brackets, number, and then page size. The request parameters are page[number] and page[size]. Then the server knows that it just has to send back the second page if the page size is 25, and it just sends that [inaudible].

CHARLES: Then if you're using a library on the server side, you don't have to do any extra work.

BALINT: Yeah, exactly, depending on the library, I guess. jsonapi-resources makes this one really simple.

CHARLES: I see.
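The page parameters just described, together with the spec's comma-separated `sort` parameter (a leading "-" means descending), can be assembled into a query string with a small helper. The helper itself is illustrative, not part of any library; only the parameter names come from the JSON API spec:

```typescript
// Builds e.g. "?page[number]=2&page[size]=25&sort=-created,title".
// Note: a real client would URL-encode the square brackets as well.
function jsonApiQuery(opts: {
  pageNumber?: number;
  pageSize?: number;
  sort?: string[];
}): string {
  const params: string[] = [];
  if (opts.pageNumber !== undefined) params.push(`page[number]=${opts.pageNumber}`);
  if (opts.pageSize !== undefined) params.push(`page[size]=${opts.pageSize}`);
  if (opts.sort && opts.sort.length > 0) params.push(`sort=${opts.sort.join(",")}`);
  return params.length > 0 ? `?${params.join("&")}` : "";
}
```

So a second page of 25 articles, newest first, would be requested as `/articles` plus `jsonApiQuery({ pageNumber: 2, pageSize: 25, sort: ["-created"] })`.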
Then in terms of library support on the client, I assume that there are libraries, like Ember Data, that automatically support this, so when you're creating these live queries, you can include information about the page.

BALINT: Yes. I'm very into Ember, so I'm not sure about other frameworks, but I suppose there are ways that make this very easy for the developer in many of these frameworks, like Angular and React.

CHARLES: Right. Something that has just bit me on the butt so many times is when I have paginated, sorted data. Imagine you've got some infinite scrolling table, or not infinite scrolling, but you've got a bunch of table rows -- maybe 300 of them or 3,000 of them -- so you don't want to load them all at the same time. But at the same time, you've got complex sorting happening on each column. You might have seven layers of sort. You want to sort by name, followed by ID, followed by date. One of the biggest problems I've encountered is trying to reconcile a sort on the client versus trying to sort on the server. Are there any facilities to help you deal with that?

BALINT: Yes. The approach I usually take is: if you do it on the server, then do it on the server. Do not mix the two things. In this case, for example, if you are sorting and then you change the sort field, I would just send a request to the server to have the items returned according to the new sort criteria. I think that's the simplest approach, because as you said, and as I've experienced, things can get really messy if you want to do that on the client.

CHARLES: Then you've got the third page, but when you do the sort, the contents of the third page could be anything.

BALINT: Yeah, that's the other thing. I think if you change the sorting criteria, you probably want to go back to the first page.
I haven't thought through all of the scenarios, but I think it's really rare that you want to stay on the fourth page while you change the sorting criteria to created-at descending. You probably want to see the first item, the most recently created one.

CHARLES: Someone should seriously write a book about sorting and pagination and loading these data sets, because seriously, I feel there's this tribal knowledge of things that people have learned from screwing it up. There's no written-down guide to this is how you build the data loading layer for an infinite dataset so that you can sort, you can paginate, and here are the problems that you'll encounter.

BALINT: There's actually a book written by Adolfo Builes.

CHARLES: He wrote a book on Ember CLI, too, right?

BALINT: Exactly, Ember CLI 101. He's the same guy and he wrote JSON API By Example. I have it on my mental to-do list to buy that book. I'm not sure if it covers these exact scenarios, but he must cover several in that book.

CHARLES: Well, I'll be sure to reach out to him, because certainly there are a couple of scenarios that have bit me too. The other one is where you have some collection that's paginated and sorted, and then someone adds, on the client side, a thing to that collection. Say you want to create a new row in that table, well then what do you do with that new row? Chances are it's not going to be anywhere on the screen, because who knows what the sort order is, in terms of the total sort, which only the server knows, and who knows what page it's going to be on? You get all these problems that compound, and it would be great to have one place where people could reference them, or have a little cookbook.

BALINT: Sure, absolutely. I think in that scenario, the simplest thing, which probably works best in like 99% of the cases, is again just to reset the sorting and the pagination. If I create a new record, I really want to see that new record.
CHARLES: Yeah, maybe you put the new record in a box up at the top, in a special new-record place. BALINT: Sure, you can go fancy and do that. That would be a good solution too. But probably you can just reset the page number and show the first page with the new record. CHARLES: Ah, yeah. I see. BALINT: It's tricky. It's going to get more complicated than this. CHARLES: You might have a problem where you don't want to lose the context of the records that you were looking at. BALINT: That's right. Somebody should write the book on this. I know somebody who wrote a book [inaudible]. CHARLES: It could be you. You haven't written a book in over a year, right? [Laughter] BALINT: It's been two years already. CHARLES: It's been two years. BALINT: Exactly. CHARLES: I'm never going to write a book again. I don't know. Do you think you might? BALINT: I think I might. I actually have this urge. CHARLES: If I recall correctly, you're one of the few people I've talked to who was like, "Yeah, you know, writing a book, it wasn't that bad. It was kind of an amazing experience." And literally, everyone else I talked to has been like, "Urgh!" ELRICK: That's really interesting too because his book keeps up with the releases of Ember so that makes it even harder. That's surprising to hear, like, "Oh, it wasn't that bad." BALINT: Exactly. Well, I kind of put off writing the second book for a while because if I could just write a book and be done with it, I would be happy to start writing the second one. But since I have to maintain it for years, it's just so much extra work that I have to take into consideration. CHARLES: Yeah. Essentially, what you've done is you've rewritten the book. In terms of absolute content, given how much everything has changed, you probably flipped over the content of the book in the same way that people flip over the atoms in their body. 
I think there's something like: you won't have a single atom in your body three years from now that you have today. It takes about three years and all the matter is completely and totally exchanged. It's kind of like that. BALINT: Yes, it's kind of like that. Well, I think maybe half of the book is still relatively as it was when I released it because since Ember 2 came out, Ember didn't change that much, whereas in the 1.x series it did change a lot, and I think my book originally came out when Ember was 1.10, I would say. There's a lot less work required now to maintain it than there was back then. But it has changed a lot for sure. ELRICK: Yeah, I think I bought the book on its first release. It was 1.10. I guess you automatically get subscribed to the GitHub repo so you just see a constant barrage of updates, updates, updates and I'm like, "Wow, Balint is really killing it in updating this book." BALINT: Yeah. CHARLES: It is a good book and everybody should go buy it. The other thing I want to cover, as long as we're talking about scenarios that come up again and again (we talked about pagination, we talked about sorting): what about things like search? Is there a uniform mechanism to help you out there? BALINT: You can implement search by using filters. It's a JSON API concept. You pass a parameter called filter to your query and then, in square brackets, the name of the field that you want to search, and then just pass the value, the search term basically, and the backend should return the items that match according to some criteria. That's the simple case. CHARLES: It's a simple case but clearly, it's up to the backend to implement that API. BALINT: Yes. 
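The filter convention Balint describes can be sketched as a small query-string builder. This is a hypothetical helper, not part of any JSON API library; the `build_query` function, its `page[number]` pagination parameter, and the `/articles` path are assumptions for illustration:

```rust
// Hypothetical sketch of the filter convention described above: a
// JSON API style query string such as
//   /articles?filter[title]=rust&page[number]=1
// The field name goes in square brackets and the search term is the
// value. (Real code would also percent-encode the terms.)
pub fn build_query(filters: &[(&str, &str)], page: u32) -> String {
    let mut parts: Vec<String> = filters
        .iter()
        .map(|(field, term)| format!("filter[{}]={}", field, term))
        .collect();
    parts.push(format!("page[number]={}", page));
    parts.join("&")
}
```

As Charles notes right after, the string format is the easy half; interpreting each `filter[...]` pair is up to the backend.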
CHARLES: So I'm wondering what libraries are available if I'm doing something in Elixir or I'm doing something in Sinatra or I'm doing something using Express. How seamless is it? Because I feel like a lot of times you can run into problems with leaky abstractions, like the fact that one thing is a Mongo backend and one is based on Postgres. Maybe that's a better example than a different server technology, so let's stick with a single server technology, let's use Ruby, but in one case I'm using a Postgres backend and in the other I'm using, say, MongoDB or some other key-value store. In your experience, if you've seen this, how much does the backing store leak into the frontend? Are JSON API and the ecosystem around it a good protection from those leaks? BALINT: Yeah, that's a good question. I was about to say that the backend needs to be the abstraction that shields you from having to know what kind of persistence layer you use. The frontend shouldn't care, or any client of that backend, whether you use MongoDB or Postgres. That's the responsibility of the API. You can still send, in this case for example, a filter query. However the backend translates this to database queries, that's its job. I think the answer to your question is that JSON API does protect you from having to know the intricate details of the database. CHARLES: You might have some work to do but it's possible. BALINT: Sure. You might have a lot of work to do on the backend but it's possible. But it's not just JSON API that protects you. Any kind of API should protect you from this kind of knowledge. CHARLES: Right, unless you're using GraphQL. BALINT: Could be. I don't know exactly how GraphQL works. CHARLES: Yeah. I think it would actually be nice to read something from somebody who's got a lot of good experience with all of these different technologies, to make a comparison. I feel kind of in the dark. 
Unfortunately, the problem with any technology is that, in most of the comparisons out there, most people have a huge implicit bias for one tool and a little bit of experience with the other, which is maybe dangerous. I definitely don't want to render opinions on GraphQL, and certainly not GraphQL versus the API that I'm used to, because I feel like I can't make a good comparison. BALINT: Yeah, absolutely. That's a good one. CHARLES: But it is something that I'm so intrigued by because I feel like there's a lot of overlap there but who knows. BALINT: Yeah. ELRICK: They need to do that for APIs like how they did MVC; they need to do an API comparison. CHARLES: Yeah. One thing we could do, we could implement JSON API over GraphQL and kind of just move your backend on to the frontend. BALINT: I think you could but you just said that with GraphQL, the client on the frontend has to know, like, feel the database. CHARLES: Right. You could technically have a JSON API on top of your GraphQL backend. I think the thing that kind of freaks me out about GraphQL (this is a crazy idea that no one should ever do but I hope someone does) is that, being as old as I am, I saw so many projects ruined by the Visual Basic kind of mantra of just, "Just query your database right inside your components." That was literally the rope that hung ten thousand projects and just made people despise Visual Basic development because there was no shield, and then literally every button was coupled to your database. But you could theoretically have the best of both worlds where you're sitting on an abstraction that's also on your client, but just moving your query language from your server over to your client or something like that. BALINT: Yes, something like that could work, in theory. CHARLES: In theory. Like I said, no one should ever do that unless they really want to. BALINT: Yes. CHARLES: Fantastic. 
The other thing I want to ask you is what do you have cookin'? You've got your book, which you wrote but keep updated, and you recently have been evangelizing data loading patterns, most recently at EmberConf. What's next? What's now? Or are you just kind of taking a break? BALINT: I've already published a mini-book about these data loading scenarios. Actually, I just cover the things I talked about at EmberConf and then add some more, but I might make this into a full-fledged book, provided I don't have to update it. [Laughter] CHARLES: I'm going to write this book -- BALINT: But just once. CHARLES: -- But just once. BALINT: Yeah, I still have to find a way of doing that. CHARLES: All right. We will look for all of those things. One thing that just occurred to me is just how much of actually building UI and building frontend really is about thinking about the structure and flow of how you load your data, and how much the user doesn't see that but how important that is to provide a good experience to that user. That's one of the things that sometimes we as UI engineers don't like to think about but I think it is absolutely true and crucial and foundational. Thank you so much, Balint, for coming by and talking with us about these important topics. We will see everybody next week. With that, I will bid everybody adieu. Goodbye, Elrick. Goodbye, Balint. BALINT: Yeah, thank you very much for having me. Goodbye.

REACTIVE
62: Using Facts Instead Of Emotion
Feb 17, 2017 | 60:57

Henning was at Sunshine PHP. Henning bought his son a Lego Mindstorms set. Raquel switched to Visual Studio Code. Node might be in Henning's future. Next.js v2 is coming. Henning shipped JSON API endpoints. We talk about bringing work laptops home. What is a real web dev? Raquel drops the knowledge on salary negotiations.

The Frontside Podcast
058: Rust and Going Into Business with Carol Goulding
Feb 16, 2017 | 37:53

Carol Goulding: @Carols10cents | GitHub | Blog | Integer 32

Show Notes:
00:58 - Going Into Business Using Rust
03:42 - Getting Paid to do Open Source
05:31 - Prototyping Projects in Rust
06:21 - Why Rust? (Benefits)
09:58 - The Language Server
14:52 - Error Messages
19:46 - The Rust Programming Language Book
23:35 - Crates.io
27:41 - The Backend
31:11 - Working with Rust and Ember Together
33:31 - Rust Belt Rust Conference
35:59 - Integer 32

Resources:
The Rust Programming Language Book
The Frontside Podcast Episode 51: Rust and APIs with Steve Klabnik
Rust For Rubyists
Working Effectively with Legacy Code by Michael Feathers
Clippy
Cargo
rustlings
Python Koans
Rust - exercism.io
No Starch
Tokio
Diesel
Rocket
Nickel
Iron
Pencil
Rust Belt Rust Conference
RustFest.EU
RustConf

Transcript:

STEPHANIE: Hello, everybody. Welcome to The Frontside Podcast. This is Stephanie Riera. I am a developer here at The Frontside and with us, we have some very special guests. We have Chris Freeman, who is a former Frontsider. He is a developer at a startup here in town in Austin called OJO. I'm going to let Chris introduce Carol Nichols. CHRIS: Hi, everyone. Today we've got Carol Nichols. She is involved in a lot of different things related to the Rust programming language. She is on the Rust community team. She is the co-author of the Rust book. She's the co-founder of a Rust consulting company called Integer 32 and she's the maintainer of Crates.io. How are you doing today, Carol? CAROL: I'm great. Thank you for having me on the show. CHRIS: Thanks for joining us. I have a lot of questions for you. I'm very interested in Rust but I am especially interested in some of the stuff you're doing that's kind of ancillary to it, namely you decided to go into business using a pretty new programming language that in some ways, I think, is a little bit niche compared to some other things that people might go into business for, say, web development. 
I was hoping maybe you could talk about what is Integer 32? What led you to starting this business? What kind of consulting work do you find working in something like Rust? CAROL: Integer 32 is my husband, Jake Goulding, and I, and we decided to form this company because we really wanted to get paid to work on Rust. We think Rust is really interesting, that it's moving the industry forward, and we see a future in Rust. As far as we can tell, we are the first Rust-focused consultancy in the world, which either makes us trendsetters or really stupid. I'm not sure about that yet but we're figuring it out. We consider ourselves pretty qualified to be running a Rust consultancy. As you mentioned, I'm the co-author on the book. I've been working with Rust for a couple of years now. Jake has the most points in the Rust tag on Stack Overflow. We've got a lot of experience in getting to know Rust. We've been watching the development, helping people learn Rust, so we are offering a bunch of different services. One is to build an MVP or a prototype in Rust so that companies can evaluate whether Rust would be a good fit for their problem, without diverting their whole team to be able to learn Rust enough to evaluate it properly. We've done some prototypes. We're also interested in doing training and pairing, so we have some training things in the works. We've also gotten some jobs that are adding to open source libraries in Rust. The ecosystem is still being built up and there are a lot of libraries that do whatever the person who wrote them needed them to do but they're not feature complete, so if someone else just needs that extra feature on some library, they can pay us to add it if they'd like. One of the things I really want to do with my consultancy is have our proprietary work subsidize our open source work because I really want to get paid for open source stuff. We have a different rate that we charge for proprietary versus open source. 
We've had a few gigs that are adding stuff to open source libraries and I love those because we're not only benefiting the company who needs something but we're benefiting the entire community. CHRIS: When you say you work on an open source thing, do you mean a company that happens to be a consumer of an open source library would pay you to add a feature? Or is it that the maintainers of the libraries themselves are coming to you and hiring help? CAROL: So far, it has been the former, but we have talked to some people about the latter. Open source projects typically don't have much funding, though, so I think that's a little rarer, but definitely, we're open to companies paying us to add what they need to a library. CHRIS: Has there been any friction there, like you kind of showing up and saying a company is paying us to add features to your project? Do the maintainers ever push back or are they very happy to just have someone helping? CAROL: So far, no. All the maintainers we've worked with have been amazing. We're not going to come in and rewrite the whole project. We're going to come in and work with their style and make any modifications they want to be able to incorporate into their library. But as I said, a lot of libraries have gotten to a certain point and I think the maintainers would like their libraries to become more feature complete, but everyone only has so much time and you don't necessarily know what's useful to people. This is a very, very strong signal that a library would be useful to someone if only it had this little extra thing. I think most maintainers are open to making their libraries more feature complete to be more useful to more people. CHRIS: Yeah, that is a pretty sweet deal from the standpoint of an open source maintainer. It's nice enough when people show up to help at all. It is especially nice when they show up to help and aren't motivated by money. CAROL: Yeah. CHRIS: That's very cool. 
When it comes to prototyping things with Integer 32, what kind of projects are people coming to you and asking you to prototype in Rust? CAROL: A lot of them are existing projects that they have written in something else that they want to either perform better or be safer, as opposed to rewriting them in C or C++ to get performance. Sometimes, they want something to interface with other Rust things. We're starting to see projects like that but mostly, they have a hunch that Rust will be good for their projects and solve some problem they're having with their current implementation. We scope their projects way down to whatever will let them evaluate whether Rust is a good fit or not and we go with that. CHRIS: Cool. STEPHANIE: Going from there, the question that I have is why Rust? I don't know a lot about Rust so I'd like to know what would be some of the benefits of using Rust if you're familiar with programming. If you are in web development, like I'm familiar with Ember, why would I want to use Rust or learn Rust? CAROL: I could talk all day about this. I really love working with Rust. I feel like it gives me more tools to help me write better code and takes care of little details that usually I would have to spend a lot of brainpower thinking about to get right all the time. But now I can actually concentrate on whatever it is I'm trying to get done and let the compiler take care of those details for me. The way it's implemented, it happens really fast. The way I got into Rust was I was a Rails developer previously to this job and I spent a lot of time optimizing Rails, looking for places where there are, essentially, too many Ruby objects and memory leaks and [inaudible], a lot of trying to make Rails go faster. At some point, you can't. There's only so far you can take Ruby and Rails, so I looked at where I wanted my career to go next, and I love making things go faster but I'm terrified of C. I should be nowhere near production C. 
You have to spend years learning all the quirks and all the ways that C can go wrong and crash and be insecure. Around this time, I know you had Steve Klabnik on the show in a previous episode. Steve is from Pittsburgh, where I am, and he came home for Christmas one year and came to a Ruby meetup and was talking about this new language called Rust and how awesome it was. Steve gets distracted by new awesome things all the time so I was like, "Yeah, yeah, okay, whatever." The next year, he came home for Christmas again and was still talking about how awesome Rust was. At that point, I was like, "There's got to be something to this." At that point, he was writing his book, 'Rust for Rubyists', which has led into his work on the Rust programming language book. I was like, "Rust for Rubyists? I can handle this. This is something I can do and am capable of," so I started reading his book and submitting corrections and things, which is, again, how I got involved with the Rust programming language book. If you've ever gotten the error 'undefined method on nil' or 'undefined is not a function' in JavaScript, like in production at runtime, that happens all the time. That's just normal in Ruby and JavaScript land. Rust prevents those problems at compile time so there's no null, there's no nil. It's strongly typed so it checks that you have the thing you think you have before your code can even run. There's no garbage collector so you don't have memory leaks. The system of ownership and borrowing and the borrow checker and lifetimes, which is weird and tricky to get your head around at first because it's different than any other language, once you get that, that's the part that enables your code to go faster without needing the garbage collector. You actually don't have to think about your memory management as much as you would in C, where you have to say, "Please give me some memory. Okay, I'm done with it now." 
You are manually managing your memory but you don't have to think about it as much because the compiler is thinking about it for you, if that makes sense. CHRIS: I have a follow-up question, kind of related to the fact that Rust is performing at the level of C or C++ but a lot more friendly, in that both you and Steve and I think a lot of other people have come to Rust from scripting languages, from higher-level languages. I remember when I first started paying attention to Rust, right before the 1.0 happened, I thought it sounded interesting and wrote it off because it was just insane and I had only ever done Python and JavaScript and higher-level things. In a relatively short time, it has developed a level of ergonomics that I'm envious of, even in the 'more comfortable' languages. Things like Cargo, things like the compiler are really great, and now the compiler has really friendly and informative error messages so that 'undefined is not a function' never happens, but when you try to make it happen, it now shows you like, "No, no, no. On this exact line, in this place, this is where you're doing it wrong." But I recently heard, and I'm curious if you know anything about it, that there's also development on a Rust language server, kind of like TypeScript's, where it's a whole set of tools that you can run under the hood that any editor can plug into, so that you just get this toolbox of things to help you develop in Rust that are all packaged up and handed over and all you have to do is hook into it. Have you tried that at all? Are you familiar with that? CAROL: I am not. I've been watching but I haven't worked on it and I haven't tried it out yet. I am excited for the language server because it's going to enable IDEs to do more interesting things. Coming from Ruby, which is so dynamic, you can't do things like ensure that you renamed all of the places a method is called because you can't know that. 
I've read books like Michael Feathers' Working Effectively with Legacy Code and a lot of the chapters in that talk about leaning on your IDE, on your refactoring tools, to do automated refactoring. RubyMine has a few of those things but not all of them because it's just impossible, so I'm really looking forward to having real refactoring tools that can let you do automated refactorings and things like that, which are possible in other statically-typed languages, but with Rust in an IDE. I haven't used an IDE in years because I haven't found them to be useful but once the language server is up and running, I'm thinking about going back to an IDE, so it's definitely exciting. CHRIS: There's some pretty cool stuff right now. There's one called Clippy which I love because of the name; it takes me back to my Microsoft Word days. There's a lot of very good stuff that they have added that I didn't expect from a 'systems language' but it has definitely benefited from a lot of things that people in the scripting world have learned. CAROL: One of the goals of 2017 for the core team is increasing people's productivity in Rust: getting people over the learning curve, providing them with tools like the language server and making it easier for people to build things in Rust without having to manage things around Rust. Just Cargo in itself has made systems programming so much better. I see people who develop in C and C++ who really try to minimize the amount of libraries they bring in because that makes your build system so much more complicated; you have to have libraries in the right place and so much more can go wrong. But with Cargo, it's just cargo install and you have a Cargo.lock and Cargo.toml that manage versions. It just works, so it's been interesting watching people figure that out and change their opinions on bringing in dependencies, which with npm in JavaScript and Bower and RubyGems we're all used to: "Oh, there's a gem for that. Let's just pull that in and go." 
Systems people have been really reluctant to do that but Cargo is enabling that to be better and easier, which is really exciting to watch. I want anyone listening to this who thinks, "I can't do systems programming. It's too hard" to know: no, you totally can. You can do Rust. Rust is going to let you do this, and that's why Rust is really exciting: it's enabling a whole new group of people to get into the systems programming space where things need to be optimized and faster, and letting people build these sorts of things without the programs being vulnerable to crashes and security bugs and things like that. It's really, really exciting. CHRIS: Very cool. STEPHANIE: I'm curious, in Rust, if there's an error, how would you know that there's an error? Is the whole thing going to stop? Is it going to break? Do you get a useful stack trace? What would I expect to see? CAROL: A lot of the errors in Rust are at compile time. It won't even let you try to run your code if you have certain kinds of problems, and they've tried to move as much as possible into that compile-time space. There are always going to be things that you can't catch at compile time, like the user enters a number that's too big for whatever you're trying to do. That's still going to be a runtime error because we can't possibly know what a user is going to put in when you're compiling. They've done a lot of work on the compiler errors, as Chris was talking about, to make them friendlier and point out: here's where your error is, here's why it's happening, here's a hint as to how you might want to fix it. This has been really great. 
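Carol's runtime-error case (user input that is too big for the type) looks like this in Rust: `str::parse` returns a `Result`, so there is no null and the compiler forces the caller to decide what happens on failure. This is a minimal sketch, not code from the episode; the `parse_age` function is made up for illustration:

```rust
// A minimal sketch of the runtime-error case described above: "300"
// is a perfectly good number but too big for a u8, so parse() returns
// a Result rather than a null or a crash. The compiler makes every
// caller handle the Err case before the value can be used.
pub fn parse_age(input: &str) -> Result<u8, String> {
    input
        .trim()
        .parse::<u8>()
        .map_err(|err| format!("'{}' is not a valid age: {}", input.trim(), err))
}
```

The compile-time half of the story is that code which forgets to handle the `Err` variant simply does not build; the runtime half is reduced to an ordinary value you can match on.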
I was volunteering at a local code school with students just starting Ruby, and I'm used to Ruby's error messages by now, but they were just getting started and getting all sorts of errors and I was like, "Wow, these error messages are not helpful at all." I forgot how bad that is and how confusing it can be for a beginner to think you understand, think you have it working, and then you go and run the code and it's just 'string is not a symbol' or whatever. The worst is when you forget to close a block and you just get 'expected to see [inaudible] end of file' instead, and it's not helpful at all. I was just apologizing the whole time, like, "I'm sorry. This is telling you that you need to write 'end' at the end of the file," but it's not telling you that in any way you could possibly know. That made me appreciate much more all of the work that's going into Rust error messages that are really trying to help. Some people talk about fighting the borrow checker especially; they're not used to having a compiler tell them their code is wrong so often, so people talk about fighting the borrow checker a lot. But it's not trying to fight with you. It's not trying to make you feel bad about your code. It's trying to help you make your code better and prevent errors that might happen at runtime by catching them earlier. I actually have a little project called Rustlings that is full of little snippets of Rust that intentionally don't compile. You run them and you get an error message right away, and your job is to read the error message and learn how to fix it. When I was starting out, I was really frustrated because I would be trying to do something and I would get an error message and I would have to stop whatever I was doing to deal with the error message. 
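A borrow-checker exercise in that spirit might look like the following. This is a made-up example, not taken from Rustlings itself; the commented-out ordering is the kind of snippet that intentionally fails to compile, and the error message points at exactly the conflicting lines:

```rust
// A made-up, Rustlings-style exercise. The commented-out ordering is
// rejected at compile time: you cannot push to the vector while
// `first` still borrows it, and the error names both borrows:
//
//     let first = &names[0];            // immutable borrow starts here
//     names.push(String::from("Ada"));  // error[E0502]: cannot borrow
//     println!("{}", first);            //   `names` as mutable
//
// Cloning the element (or finishing with the borrow first) fixes it:
pub fn first_then_push(mut names: Vec<String>) -> Vec<String> {
    let first = names[0].clone(); // take a copy instead of holding a borrow
    names.push(String::from("Ada"));
    println!("first was {}", first);
    names
}
```

Read this way, the error is the compiler pairing with you, as Carol puts it: it shows where the borrow starts, where the conflict happens, and what would have gone wrong at runtime.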
I was like, "If I could just get some practice just dealing with the error messages and learning how to fix them so that when I'm trying to do something else, I already have experience fixing that kind of error," so that's how that project came to be and people found that useful. I haven't had much time to work on it lately but it could definitely use more examples because I think people are used to error messages that are not helping. People used to back traces that are really long and don't say anything useful. Sometimes, you don't stop and read and think but the Rust error messages are really trying to help you and often times, they are telling you exactly what you need to do to change the code to work. I think getting practice seeing the compiler as more of a pair who is trying to help you and not someone who's trying to reject all your code is a different mindset that I don't think people are used to but I think it's really useful. STEPHANIE: That's excellent. I was going to ask you if there are any resources or any repos to check out for someone who is interested in getting into Rust. It's funny, last night I was poking around with Python and there's something similar to Rustlings. It's called Python Koans and it's basically like what you're already familiar if you do web development. You want to get your test to pass so it'll tell you, you need to think about this one or you need to meditate on this and then you try to get it to pass and then it says you have reached enlightenment or you have not yet reached enlightenment and you have to sit there and think about it and then run it again. It's very useful in trying to get started with language in a way that you are already sort of familiar with. CAROL: Yeah, I've definitely gotten inspiration from the Koans project that have existed in other languages. There's also an exercism track for Rust that people found really useful and of course, I'm working with Steve Klabnik on the Rust programming language book. 
We're rewriting the whole thing, so if you go to the Rust documentation and click on 'book', you'll get the existing version, which is complete, but the new version is going to be way better. Especially with the explanations of ownership and borrowing, people have said that the new version is way, way better than the old version. Someone even made the analogy of doing medical research where you see that the trial case is doing so much better than the placebo case that it's not ethical to continue the trial. It's more ethical to stop the trial and use the new thing because it's helping so many people. Someone was like, "You need to replace the old book with the new book right now because it's so much better," but the new book isn't complete yet. The new book is in a different repo, which we can put in the show notes, so I'd recommend starting with the new book and then working back and forth with the old book once you run out of content. But we're getting closer all the time so hopefully, that should be done and printed by No Starch sometime in 2017 -- CHRIS: Woah! It's being printed by No Starch? CAROL: Yeah. CHRIS: That's cool. I didn't know that. Congratulations. CAROL: I thought Steve mentioned that in the last one. CHRIS: He may have but he talked about it a long time ago and I thought he always meant the old one. How long have you been rewriting the Rust book for? CAROL: It's been a while. CHRIS: Longer than I knew about then, probably. CAROL: It's kind of like software. It's more work than you think it's going to be and as for estimating, it's going to be done when it's done. Steve kept telling people, "It's going to be out by this time. There's no way it's not getting done by then," so now he's not allowed to say when it's going to be out. CHRIS: Nice. CAROL: I'm really grateful to have this opportunity because I don't think I would have written a book on my own and I'm learning a whole lot about the process of writing a book. 
It turns out there's a lot of editing, a lot of back and forth, a lot of trying to build a narrative through this long stretch of text so that you're building on top of what you've already covered and not introducing things that haven't been mentioned. It's a lot of work and I'm learning a lot, and I have no idea when it's going to be [inaudible] because I think there's more work that I still don't know about coming as we get closer to going to print. It's definitely one of those things that you can't make agile because you've got to put it on paper, which costs money, and it's going to be around a long time at some point. It's definitely a different kind of working than I'm used to with software. CHRIS: Although, I have to say, I clicked around and I think this is the new version: Rustlings.Github.io/Book. Is that the new one? CAROL: Yes, that's the new version. CHRIS: There is a lot here and it's not quite what I would have expected to see here, like it's not done yet. I've been clicking links and I have yet to find one that says 'to-do'. CAROL: I think 15 through 20 are like outlines right now. We're maybe three-quarters through with the content but then we have to go through revisions and editing and copyediting. CHRIS: I'm looking at the headings and I was a big fan of the first Rust book but I can already see it calling out things I wish had been hit on more specifically in the original book. There's a lot of good-looking stuff here so I'm excited about this. I'm going to go and read this thing. CAROL: I'm excited for people to read it. CHRIS: Earlier, you were talking about how one of the things that is really nice about the Rust tooling is that Cargo makes it really easy to bring in dependencies. I happen to know that you are, recently I believe, the maintainer of Crates.io, which is where all of Cargo's crates, which are the libraries, are hosted. Is that correct? CAROL: That is correct. I have commit on Crates.io now, which is very exciting. 
Crates.io is like RubyGems or npm. It's the site where people publish their libraries and you can go and search for a library for what you need. As part of the Rust 2017 goals, we want to make it easier for people to find high-quality libraries that do the things they need to do. I've been doing some work on adding badges and categories. Rust makes major decisions on the language and on things through an RFC process, which I think Ember is doing now too. I forget which way we stole that. Did we steal it from Ember or did you steal it from us? I can't remember. CHRIS: If I remember right, I think -- I could be wrong, Twitter -- Ember did it first. Rust borrowed it and then added the 'how do we teach this?' section. I think Ember took that back and added it to their RFCs. CAROL: Okay, I'm super excited about that section. Now, when you propose a change to the language, you have to go through this RFC process where you write up what you want to change, why you want to change it, any downsides, any alternative designs. Then the community talks about it and makes comments while you revise it and things like that. Now, there's a new section that just got added. That's 'how do we teach this?' Before something can be stabilized in the language, you have to document it. This is still kind of starting to take effect but I'm super excited about it because people can't use something unless they know how to use it. Right now, Steve's the only person getting paid full time to work on the documentation and I need him to write the book so I'm excited that more people will be thinking about documentation and thinking about how to help people use their new features. Anyway, I have an RFC about how to rank crates within a category that we're trying to work through, so that in some automated way, we can recommend different crates for different purposes. I'm working through that with the community to try and figure out how to best recommend crates in different circumstances.
Crates.io is written in Rust and it performs really well. It just got added to the Heroku things so you can deploy it too. Looking at the analytics, the response times are numbers the Ruby apps I work on would be thrilled to have. The backend of it is Rust, the frontend is Ember and [inaudible] who was an Ember person is also interested in Rust and he thinks Rust on the backend and Ember on the frontend work really well together. He's always trying to figure out ways that we can work together. Crates.io is an existing project and I'm still learning Ember. There are lots of words I don't really understand, like components and Bower. I would love Ember help on Crates.io. I'm starting to pull out issues that would be good first-time issues or more Ember-focused, or that I have some idea of how to fix so I could help someone fix them. I'm starting to tag those things with 'has mentor' in our labels, so I'd love for people who know JavaScript and know Ember and might want to get into Rust to come check out issues on Crates.io, because there are definitely some issues that need a little bit of frontend, a little bit of backend so it might be a good way for people to get into Rust. CHRIS: Very cool. I'm personally very interested in that and will likely hit you up. But I'm sure many of our listeners will as well because I think we have a lot of Ember-friendly listeners so look Carol up because it sounds like she could use some help. Actually, I'm curious about the backend. I know that pretty recently, Rust has kind of gone through this period of explosion in terms of Rust as a web language. There have been a number of different things that have come out pretty recently for a web framework in Rust, and there's that Tokio thing. I know Diesel is like the ORM for Rust for talking to databases. It looks like it's about to hit 1.0. There's a lot of stuff happening so I'm curious, what are you using to write the backend?
I know you're using Rust but are you using one of these frameworks or have you rolled your own? How's that work right now? CAROL: Crates.io is one of the first web apps that was written in Rust. Actually, if you look at the backend code, you'll see SQL being built by hand. It's going through the Rust postgres library so it has SQL injection protection. All the things are [inaudible] so don't worry about that, but it's still raw SQL in the Rust code, so it's not using an ORM yet. I'd have to look it up; there is a library it's using that I'm blanking on the name of, but it's very low level. It just lets you handle HTTP requests and responses. We're in kind of a Cambrian explosion period with Rust web frameworks. There are a lot of different ones. One that I'm excited about that I haven't gotten to try out yet is Rocket. That was just released. The thing I love about Rocket is that everyone's really excited about it because when it was announced, it had an awesome website with lots of awesome docs, so that should be a lesson to any open source project that's launching: if you want people to get excited about it, you've got to launch some docs. That will help so much. There are a lot of different frameworks happening. They're still little trilobites and little animals that can't walk on land on their own quite yet, so there's still no Rails. There are the pieces of Rails. There's Diesel which acts like ActiveRecord. There's Nickel and Pencil and Iron and Rocket. Tokio is the async framework that is getting more and more stable by the day. We got to try it out on a project recently and it's pretty fun. I'm still working on wrapping my head around promises and futures and working in that way, but I think as that stabilizes and people use it, it is going to cause another explosion of libraries that enable really fast but safe web backend stuff, which I think is really exciting.
If you're looking for the Rails experience of being able to plug things together nicely and just declare a few things, it's not quite there yet. But if it excites you to try out new things and figure out the best ways to do the things you want to do in Rust, this is a great time to jump in and help. CHRIS: I will say the Rocket website is beautiful and it even has a templating section, a testing library section. This is very exciting. It really looks like the closest thing to a Rails-style web framework that I've seen in Rust so far. People should definitely check this stuff out. I'm curious, I know a lot of people are really interested in Rust and Ember, which doesn't surprise me because a lot of people are really interested in Ember in general, which I think is awesome. But is there anything specific about working with Rust and Ember together that seems especially well suited, or even some gotchas that you guys have run into? One of the things I'm thinking of is that Ember is really big into the JSON API spec and I don't know if Rust has a JSON API library for serializing things in that format. Is that something you guys have to tackle at all? CAROL: There might be. I'm not sure. Crates.io is using the REST adapter for Ember so we might not be keeping up with the latest of Ember. But I know there are people who want them to interface better with each other. Actually, that's an interesting thing. Both Ember and Rust are on six-week release trains, so the way Rust people will say it -- I don't know if Ember people do -- is stability without stagnation, so they're both changing. Rust has backwards-compatibility guarantees so the code you wrote with Rust 1.0 is still going to compile today. You might have some warnings and there's probably new, cooler stuff that you could switch to, but it's still going to compile. I'm not sure about Ember's upgrade path things.
Someone just sent in a pull request that we merged like three days ago to upgrade us two Ember point versions. There were a couple of things that like [inaudible] and we weren't doing quite right and we had to fix. It's been interesting to kind of fit together, keep both of the sides updated and upgraded, and continue on using the best things. But I think they have similar philosophies around making things better all the time. CHRIS: Yeah, the whole stable upgrade path and backwards-compatibility guarantees is definitely mirrored on the Ember side of things. I can see that being kind of a comforting place to be, knowing that both your frontend and your backend are not going to suddenly just break on you one day because some new feature came out that breaks your router or something. That's very cool. One of the things that I know you're involved in -- you're involved in a lot of things -- when it comes to Rust, it's very cool. But you also run, or co-run, a conference, right? Rust Belt Rust? CAROL: Yeah, we had our first year in 2016 in Pittsburgh. I ran Steel City Ruby before then, so I love running conferences and I love having them near me: one, because it's convenient and I get to trick all of my friends into coming to visit me. But two, because there's a lot of tech stuff happening in the Rust Belt and places that aren't San Francisco or New York. People don't necessarily know about that, and people who live here don't necessarily have the opportunities to travel as easily to conferences. I sort of started Rust Belt Rust, one, because of the pun opportunity, and one of our speakers drew a little bar graph. There were three conferences last year. There was Rust Fest in Europe which has [inaudible] amount of Rust. There's RustConf, the official Rust conference in Portland, that has a lot amount of Rust, and then Rust Belt Rust has double the amount of Rust in its name, so we're the most Rust-serious Rust conference.
We're going to do it again, in 2017 we're going to move it to a different Rust Belt city. I'm not going to say which one yet but we're closing in on a date and a venue in the Rust Belt city so watch out for an announcement on that. It was a lot of fun. We had a day of workshops and then a day of single-track talks and a lot of time for conversation. A bunch of the core team members came out and it was fun talking with a friend of mine who was trying out something with Tokio. This was in October so Tokio was still working towards their first big release and he was trying to do something with Tokio. I looked over and I saw Carl Lerche, Alex Crichton and Aaron Turon standing together and talking like 30 feet from us and I was like, "If only the three people working on Tokio were nearby to answer your question --" so he just walked over and talked about Tokio with them. I love getting people together to talk to other people working with things, talk to the people who are working on the things they're using and meeting the people behind the names on the internet so I love running conferences and having events like that. STEPHANIE: Carol, you have a Rust consultancy called Integer 32. How is that going? CAROL: It's going pretty well. We're learning a lot. One of the reasons I wanted to start it is because I felt like I wasn't learning more in my job. In my Rails job, I felt like I had kind of tapped out with that knowledge. In starting a business, I get to learn a lot of stuff like sales and marketing and taxes and invoices. Sometimes, I even get to program a little. We're still learning how to effectively find our target customers. We do have availability, if anyone listening is interested in hiring some Rust experts. Right now, I'm trying to figure out when can we bring more people on the team. I'm trying to decide if we can have an intern for the summer. It should be fun so yeah, it's going pretty well. 
It's been a slow build, but we're lucky enough to have savings and be able to spend some time building our business, and it's been really gratifying to feel like I'm in charge of my destiny somewhat, as opposed to the whims of a company. STEPHANIE: And if I were interested in some Rust consulting, what would be the best way to reach you? CAROL: We have a website at Integer32.com and a contact form on there. STEPHANIE: Thank you so much for speaking with us, Carol. It was a pleasure. I feel like I learned a lot about Rust. CAROL: Thank you for having me. STEPHANIE: All right, y'all. That's it from us. Thank you so much for tuning in. Until next time. Bye-bye.

The Frontside Podcast
054: The Ember Ecosystem & ember-try with Katie Gengler

Jan 20, 2017 37:46


Katie Gengler @katiegengler | GitHub | Code All Day Show Notes: 01:23 - Testing 06:20 - ember-try 14:11 - Add-ons; Ember Observer 17:43 - Scoring and Rating Add-ons 25:25 - Contribution and Funding 27:41 - Code Search 30:59 - Data Visualization 32:27 - Change in the Ember Ecosystem Since Last EmberConf? 34:35 - Code All Day 35:39 - What's Next? Resources: ember-qunit liquid-fire capybara Selenium appraisal emberCLI Bower Transcript: CHARLES: Hello everybody and welcome to The Frontside Podcast Episode 54. I am your host, Charles Lowell, and with me is Alex Ford. Today, we're going to be interviewing Katie Gengler. I remember very distinctly the first time that I met Katie. It was actually at the same dinner where I think I met Godfrey, at EmberConf in 2014. That was just a fantastic conversation that was had around the table, and I did not realize how important the people that I was meeting were going to be in my life over the next couple of years. But Katie has gone on to do things like identify a hole in the Ember add-on ecosystem, so she created Ember Observer. There was a huge piece missing for being able to test against a framework that spans multiple years and multiple versions, and for being able to make sure that your tests, especially for add-on authors, run against multiple versions, so she created and maintains Ember-try. She's a part of the EmberCLI core team. She's a principal at Code All Day, which is a software consultancy, and just an all-around fantastic woman. Thank you, Katie, for coming on to the show and talking with us. KATIE: Thanks for having me. CHARLES: One of the things I wanted to start out the conversation with is something that's always struck me about you: there are a lot of people when it comes to testing who talk the talk, but you have always struck me as someone who walks the walk.
Not just in terms of making sure that your apps have tests in them or your add-ons have tests in them, but talking to people about testing patterns, making sure that when there are huge pieces of the ecosystem missing, like Ember-try... I remember this as something that I struggled with. I was running up against this problem and all of a sudden, here comes Ember-try, and you've been such a huge part of that. I want to know more about your walk with testing and how that permeates so much of what you do, because I think it's very important for people to hear that. KATIE: I got really lucky right out of college. My first job was at a place people think of as mythical: XP-focused developers. The first thing I was told is everything is test first, everything is test-driven. I was primarily doing Ruby and Rails at the time but also JavaScript. At the beginning, we didn't have a way to test JavaScript and there were a lot of missteps in the way of testing JavaScript until we came around to QUnit. I was using QUnit long before Ember even came along. It's ingrained in my whole career. Michelle as well. Michelle is my partner in Code All Day. We're both very test focused. I think that's what drew us to start a company together and to work together. Every project we're on, we try to write encompassing tests: test-drive everything, whether we're on an upgrade project or any project to fix. We try to write tests as a framework for everything that we're doing so we know whether we're doing something right or not. When it comes to Ember-try, that wasn't entirely my own idea. That was something that Robert Jackson and Edward Faulkner were looking for. I remembered the appraisal gem from Ruby. I really enjoyed using it for [inaudible] gems that I had written in Rails, so I wanted it to exist for Ember, so I just kind of took it upon myself to do it. It was extracted from Liquid Fire.
I had some scripts that would sort of test multiple versions but it was rough. It wasn't as easy as it is today. CHARLES: Yeah, it does speak to a certain philosophy, because if you're coming to a problem and it's difficult to test, you often come to a crossroads where you say, "You know what? I have a choice to make here. I can either give up and not write a test, or try and test some subset of it," or, "I can write the thing that will let me write the test." It seems like you fall more into that second category. What would you say to people who are either new to this idea or new in their careers and they butt up against this problem of not knowing when to give up and when to write the thing to write the test? KATIE: I almost never don't write the test, so if your suspicions are true, I will write something to be able to write the test. But there are times that I'm [inaudible] and sometimes I'm just like, "This is not going to be tested. This is not going to happen." Finding that line is pretty hard but it should be extremely rare. When people come to me, when I work with a client and they're telling me, "No, it's too hard to write the tests," a lot of times it's not only how you write the test, setting up the test and learning how to write a test. It's the code you're trying to test that could be the problem. If you have very complicated, very side-effect driven code, it's very hard to write the easier sell, which might be Ember acceptance tests. You're really kind of at a level of integration because you do have a little bit of knowledge of what's going on and you have to be within the framework of what Ember tests want you to do, which is that async is all completed by the time you want to have assertions in the test.
That means looking at different tools, like going back to something like Capybara or Selenium and having some sort of test around what you're doing, in order to replace the code that makes it hard to test at a lower level to begin with. I think a lot of people are just missing the framework for knowing whether their code is intractable or not, not necessarily the testing and the guides that have you test. I think most people could go through a tutorial and do tests for a little to-do MVC app perfectly fine. But that's easy when you [inaudible] size of the equation, so if you're already struggling with code and you're not quite sure, even in Ember, it can seem very, very hard to write tests for that. I think that's true with Rails as well. I think people that begin in Rails don't understand what they're going to be testing, especially if they have an existing app that they're trying to add tests to. Fortunately, Rails long ago kind of got it into everybody's heads that your tests go with what you're doing. It's just an ingrained part of the Rails community. Hopefully, that will become how it is with Ember. But a lot of people are kind of slowly bringing their apps to Ember, so they really have a lot of JavaScript and they don't necessarily know what to do, or the JavaScript they've written has always been written with jQuery and a little bit of [inaudible]. They don't understand how to test that. ALEX: How does Ember-try help with that? Actually, I want to roll back and talk about what Ember-try is and how it fits into testing. You mentioned the appraisal gem, which I'm not familiar with. I haven't done much Rails in my life or Ruby. But can we talk about what Ember-try is? KATIE: Sure. Ember-try, at its base, lets you run different scenarios with your tests.
At some point, I would've said it lets you run different scenarios of dependencies for your tests, so primarily changing your Ember version, and that's pretty much what add-ons do, but a lot of people are using it for scenarios that are completely outside of dependencies: different environment variables, different browsers. They just have one place to have all these scenarios, where if you only put it in travis.yml, like your CI configuration, you wouldn't as easily be able to run it locally. But with Ember-try, you can do that locally. I found that it's kind of gone beyond my intentions, expanded beyond dependencies. Primarily, it lets you run your tests in your application with different configurations. I could see running it with different feature flags; that would be something interesting to do, if that's something you use. Primarily, it just lets you try different versions, and the appraisal gem lets you run tests with different gem sets, so you have a different gem file for each scenario you could possibly have. That was definitely dependency-focused. CHARLES: That sounds really cool. It almost sounds like you could even get into some sort of generative testing, where you're kind of not specifying the scenarios upfront but having some sort of mechanism to generate those scenarios, so you can try and surface bugs that would only occur outside of what you're explicitly testing for, by randomly choosing different versions, environment variables, feature flags, dependencies and stuff like that. Had you thought of that? KATIE: Randomization [inaudible] but Ember-try really does have a kind of general way of working now and that's where we're heading with that. If you want to, especially for add-ons, you can specify this versionCompatibility keyword in your package.json and give it a semver string for the Ember version, and it will generate the scenarios for you and test all those versions.
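The two styles of configuration being described can be sketched like this. This is illustrative only: the scenario names, dependency sets, feature flag, and version range below are made up, not taken from the episode or from any real project.

```javascript
// Hypothetical config/ember-try.js: each scenario is one configuration
// (dependencies, environment variables, etc.) to run the test suite under.
const scenarios = [
  { name: 'ember-release', bower: { dependencies: { ember: 'release' } } },
  { name: 'ember-beta', bower: { dependencies: { ember: 'beta' } } },
  // Scenarios are not limited to dependencies; this one only flips
  // a (made-up) feature flag via an environment variable.
  { name: 'with-new-thing-flag', env: { ENABLE_NEW_THING: 'true' } },
];

// The add-on shorthand Katie mentions: a semver range in package.json
// from which version scenarios can be generated automatically.
const packageJson = {
  name: 'some-addon',
  'ember-addon': {
    versionCompatibility: { ember: '>=2.4.0 <2.11.0' },
  },
};

module.exports = { scenarios, packageJson };
```

Running the scenarios through ember-try's commands (something like `ember try:each`) then exercises the suite once per configuration, locally or on CI, instead of the matrix living only in travis.yml.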
These semver strings are pretty powerful, so you can say specifically which versions you want. You can do a range of versions and it will take the latest patch release of each, at least, so you don't go too crazy and test every single one of those for that add-on. But I can definitely see something random; that would be really cool. Some testing thing that just tries to do random input into all of your inputs on a page. I've really been meaning to try that out. Sounds like [inaudible]. CHARLES: Yeah, just to try and break it. I remember a world before Ember-try and I can't speak highly enough about it and how many bugs it has caught in the add-ons that we maintain, because you're always working on the latest, hottest, greatest version of Ember and you're not thinking about two point releases back. There might be some subtle bug, not a deal breaker, that surfaces and breaks your tests, and the coverage has just gotten so much better. In fact, I think it's brilliant that it is bundled with EmberCLI when you are building an add-on. It's like you now get it for free. It's one of those things where it's hard to imagine what it was like, even though we lived it. KATIE: And it was less than a year ago. [Laughter] KATIE: Ember-try existed before it was bundled with EmberCLI; that's only been since last EmberConf or so. CHARLES: Yeah, but it's absolutely a critical piece of the infrastructure now. KATIE: I'm glad it caught bugs for you. I don't think I've actually caught a bug with it. CHARLES: Really? KATIE: Yeah, but I don't do a lot of Ember add-ons. I do a lot of EmberCLI-ish add-ons. It can't change versions of EmberCLI. Not yet, we're working on that. I get some weird npm errors when I tried it but I haven't dug into it much yet. CHARLES: I don't want to dig too much into the mechanics but even when I first heard about it, I was like, "How does that even work?"
Just replacing all the dependencies and having a separate node modules directory and bower, and I'm like, "Man, there's so many moving parts." It was one of those things that was so ambitious, I didn't even think it was possible. Or I didn't even think about writing it myself or whatever. It's one of those like, "Wow, okay. It can be done." ALEX: This exists now. CHARLES: Yeah. ALEX: Add-on authors are accountable now for making their add-ons work with versions a few points back, like you said, but it makes it so easy. The accountability is hardly accountability when you're using Ember-try. It's really amazing. KATIE: What I'm laughing about is that what it actually does is not very sophisticated or crazy at all. For instance, for bower, it moves your existing bower_components directory to a placeholder, changes the bower.json, and runs install. Then after the scenario, it puts everything back. CHARLES: But I don't know, it sounds so hard. It's intimidating. You've got all this state and you've got to make sure you put it all back. What do you do if something hits you and aborts midway? I'm sure you had to think about and deal with all that stuff at some point. ALEX: I kill my tests all the time in Ember-try and I'm like, "Oops. I forgot I shouldn't do that." KATIE: Yeah, it doesn't recover so well. It's pretty hard to do things on process exit in node correctly, at least, and I don't think I've gotten it quite right. But there is a cleanup command. Unfortunately, with the way it interacts with EmberCLI's dependency tracker, when you run an Ember command, Ember checks to make sure all your dependencies are installed. If you still have the different bower.json and install hasn't run, you have to run install before you can run the cleanup command, which is kind of a drag. CHARLES: I have one final question about Ember-try.
Have you given any thought to how this might be extracted and made more generally applicable to the greater JavaScript ecosystem? Because I see this as something that Ember certainly was a trailblazer in. Some of these ideas came from Rails and other places. This could be more generally applicable, so had you given thought to extracting that? KATIE: Yeah, some of [inaudible] since we first did it, because we realized very early on that it doesn't depend on EmberCLI much. It's really only using it as a command line arguments parser, which doesn't seem too important. But there are some assumptions we get to make. With it being an Ember app, we know how EmberCLI is structured. Some of those assumptions, I wouldn't really know with the greater node community, and some of those assumptions might not be possible at all because they don't have the standards we have with EmberCLI. It's generated by EmberCLI. There are generally certain things that are in place in an Ember [inaudible], so there's a part of it that could be extracted. But I worry about some assumptions, like node modules always being in the directory that they're in, because they can be linked in node_modules above it. EmberCLI usually doesn't support that. But other places obviously have to. I realize that it could definitely happen, but I'm not so sure that I'd want to personally support that because it's a bit of a time commitment. CHARLES: Right. Maybe if someone from the outside wanted to step in voluntarily, you might work with them, but you're not going to personally champion that cause. KATIE: Definitely. I think it would be really cool, and I do think it will end up having its own [inaudible] parser eventually, just to be able to do things like different EmberCLI versions. As long as it's not part of EmberCLI, I think that would be less confusing, though. In theory, that can be done within EmberCLI still, but I'm not clear on that.
I've had people talk to me about that and I haven't fully processed it yet. CHARLES: Right. Alex, you mentioned something earlier I had not thought about, which is that technologies like Ember-try keep the add-on community accountable and keep it healthy by making sure that add-ons are working across a multiplicity of Ember versions and working in conjunction with other add-ons that might have version ranges. Katie, you've been a critical part of that effort. But there's also something else that you've been a critical part of, that you built from the ground up, and that is Ember Observer. That is a different way of keeping add-ons accountable. But I think perhaps it's even a more valuable way, more of a social engineering way, and that's through the creation of Ember Observer. Maybe we can talk about Ember Observer a little bit: what it is, and what gave you the insight that this is something that needs to be built so that you stepped forward and built it? KATIE: I'm definitely going to refer again back to the Rails community. I'm a big fan of Ruby Toolbox. Whenever I needed a gem, I would go there and try to see what was available in that category. There are various stats on there. It will have things like the popularity, the number of GitHub stars, and the last time it was updated. You can see a lot of inspiration for Ember Observer in there, so maybe I should step back and explain Ember Observer. Ember Observer is a listing of all of the add-ons for the Ember community. Anything that has the Ember add-on keyword will show up there. We pull it all from npm and we show you all that kind of information: the last updated, the number of GitHub commits, the number of stars, the number of contributors, and we put all of that information and a manual review together to put a score on each add-on. You can look at it, and we categorize them as well. If you look at a category, say you're looking at a category for doing models.
You would see all of the different model add-ons and be able to look at them and compare them, and decide which ones to use. Or if you're thinking of building something, you can go in there and be like, "This already exists. Maybe I should just contribute to an existing thing." What gave me the idea for it is I was looking at Ember Addons, which just shows you the most recently published add-ons for that Ember add-on keyword every day, and every time I was clicking on these add-ons I would go, "They did the same thing," and it just seemed like such a waste in [inaudible]. People were creating the same things, and then I started clicking into them and I was like, "Why did I bother clicking into this? It doesn't have anything. It's just an empty add-on." People were pushing add-ons just to try it out, so I thought it would be nice if something filtered that out, and I happened to have some time, so I got started and dragged my husband, Phil, into it. He's also an Ember and Rails developer, so that's pretty convenient, and my friend, Lew. Now, Michelle works on it a little bit as well. That's what drove us to build it and it's been pretty cool. I like looking at all the add-ons when they come up anyway. I feel like it's not any actual work for me. It's quicker than my email each day to look at the new add-ons. ALEX: How many new add-ons are published every day on average? KATIE: On average, it's probably four to six maybe, but it varies widely. If you get a holiday, you'll get like 20 add-ons because people have time off. You know, if people are feeling the grind, too, you'll notice that the add-ons drop commensurately; there will be lulls that come in the same kind of week. ALEX: You mentioned that an add-on gets a score. Can you explain that score and how you rate add-ons? KATIE: Sure. The score is mostly driven by details about the add-ons. There are a number of factors that go into it, and it's out of 10 points.
Five of them are from purely mechanical things: whether or not there have been more than two Git commits in the last three months, whether or not there's been a release in the last three months, whether or not they're in the top 10% of npm downloads for add-ons, whether they're in the top 10% of GitHub stars for add-ons. ALEX: I know that I'm a very competitive person, but it also applies to software, not just to sports or other types of competition. But I remember a moment, and I'm an add-on author, when my add-on had a 9 out of 10. I was just about to push some code and just about to do a release. Even before that happened, it went to a ten. The amount of satisfaction I got was kind of ridiculous. But I like it. I like the scoring system, not just for myself but also for helping me discover add-ons and pick the ones that might be right for me. I'll check out basically any add-on that might fit the description. As long as there's a readme, I'll go check it out. But it still helps, along with the categorization. CHARLES: Correct me if I'm wrong, but I believe you can achieve an 8 out of 10 without it being a popularity contest. By saying there are certain concrete steps that you can take, like making sure that you have tests and you have a readme, that's a thing of substance. I don't remember what all the criteria are, but you can get a high score without getting into how many stars you have or whether you're in the top 10%. I think that's awesome. But it does mean that if I see an add-on with a five or something like that, it means that they're not taking those concrete steps or it might not be as well-maintained. You know, that's definitely something to take into account. I'm curious if there are any different parameters you've thought of, tweaks you thought of making to the system. Because this gets to the second part of that: what things had you considered and just thrown out of hand, as maybe not good ways to rate add-ons? KATIE: We haven't thought about everything.
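The mechanical half of that scoring can be sketched as a simple checklist. This is illustrative only, not Ember Observer's actual code: the field names are made up, and the transcript names four of the five mechanical checks, so only those four appear here.

```javascript
// Each passing check contributes one point; the manual review supplies
// the rest of the 10-point score. Field names are invented for the sketch.
function mechanicalScore(addon) {
  const checks = [
    addon.commitsInLastThreeMonths > 2,
    addon.releaseInLastThreeMonths,
    addon.inTopTenPercentNpmDownloads,
    addon.inTopTenPercentGithubStars,
  ];
  return checks.filter(Boolean).length;
}

module.exports = mechanicalScore;
```

A well-maintained but unpopular add-on scores 2 out of these 4 checks, which matches the conversation's point that a good chunk of the score is achievable without winning a popularity contest.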
I don't particularly like the popularity aspect of it, but it did feel necessary to include it in some way. The stars in theory are representative of interest, not so much popularity, though it probably gets into popularity as well, and downloads in theory are representative of popularity. But the problem with downloads, and I've found this happening more and more frequently, is that when large companies start publishing their own add-ons, they have a lot of developers, so they're going through the roof on those downloads just from their own developers, and I have no way of knowing if anybody else is using them. CHARLES: Probably their continuous integration with containers, right? Like if it's running on Travis or Circle, it's just sitting there spinning, pumping up the download numbers. KATIE: Yeah, and that frustrates me quite a bit. But I haven't found another thing that really represents popularity. Unfortunately, with npm you can get the download counts but you can't tell where they came from. There's no way to do that. I simply would like to see the things that are popular rate higher than they currently do or, like you said, have points you can get without popularity coming in to affect it. But you do need collaboration with at least one other person to get eight points. You can get seven by yourself; if you only have one contributor, you don't get that point, because that's trying to be representative of sort of a bus factor. But it's not truly that: you can have just one commit from somebody else to get that point. CHARLES: Of all the pieces, I think that's totally fair. KATIE: Yeah, there's definitely a few other things I have in mind to bring into the metrics, but we're not quite there yet. I need to entirely refactor how the score is given so it's not exactly out of 10. The idea is to have some questions and some points that are relevant only to certain categories of add-ons.
Whether or not an add-on is testing against different versions of Ember might matter for one add-on but not for something that's purely an Ember CLI tool, and whether or not they have a recent release might not matter if it's kind of a one-off, like a Sass plugin or something more on the build-tool side that doesn't change very often. CHARLES: Yeah, we've had that happen a couple of times where we've got a component that just wraps a type of input. Until the HTML spec changes or a major API change happens in Ember, there's no need to change it. I can definitely see that. How do you mark it as something that changes infrequently? Is it just that the add-on author says it's done? Do you give them a bigger window or something like that? KATIE: There are probably some sorts of categories that fall under that. For the input, I think if it's doing something with Ember, producing components, it probably wants to be upgrading Ember CLI at least every three months. I think in that case, it's probably fair to require an update within that period of time. But for some of the things that are closer to Broccoli than they are to Ember add-ons, it makes sense to not have that requirement. Maybe that's not the [inaudible] exact example of the kind of questions that [inaudible]. ALEX: An Ember add-on was the first time that I gave back to the open source community. It was my first open source project, and Ember Observer really helped me along the way by showing what the open source best practices are. I thought that was really cool. Now, it sounds like with some of the point totals, you're leaning towards Ember best practices to help Ember add-on authors along that way. I think that's really awesome and very, very useful. I would not like to see what the Ember add-on ecosystem would look like without Katie. It would be a very different place. KATIE: Thanks. I'm glad it has been some help on that and is affecting add-on authors.
I actually didn't originally think about that when I was first building it; I was really hoping to help the consuming of add-ons, but it really has turned out to drive people to find add-ons not to build, because they contribute to existing ones instead. It's also driving them with the score because, as you said, people get very competitive. I really didn't realize what kind of driver the score would be, because to me I'm like, "Somebody else is scoring me. How dare you?" [Laughter] KATIE: I have had people say that to me, "How dare you score me? How dare you score my add-ons?" Well, it's mostly computerized. Even though the review is manual, the only thing about it that has any sort of leeway is whether there are meaningful tests. That's really the only thing, when I go through an add-on, that has any sort of leeway for the judgment of the person doing the review. For the readme we have kind of a rubric, so that point goes to you if you have anything in there other than the default Ember [inaudible] readme. Whether or not there's a build is based entirely on whether or not there's a CI badge in the readme; those are for the reviewer to go look for [inaudible]. We hope to automate that so I don't have to keep looking for those. A lot of add-ons turned out to have builds when they didn't have any meaningful tests. They just had the default tests, so that's kind of confusing. CHARLES: What do you do in that situation? You actually manually review it, so that add-on would not get the point for tests. KATIE: No, they don't get the point for tests, and if they don't get the point for tests, I put 'N/A' for whether or not they have a build, so it doesn't apply. It doesn't mean anything to me if they're not building their own tests. CHARLES: Right, that makes sense. A lot of it is automated, but it still sounds like it consumes some of your time, some of Phil's time, some of Michelle's time.
I guess my question is: do you accept donations, or is there a way that people can contribute? Because I see this as kind of part of the critical infrastructure of the community at this point. There might be some people out there who think, "Maybe I could help in some way." Is there a way that people can help? If so, I'd love to hear about it. KATIE: We don't have any sort of donation or anything like that. I mean, we should. We consider it primarily just part of our open source work, part of our contribution to the community, because we also get a great deal of use out of the community. Fortunately, it's not very expensive to run. It's only a $20-a-month VPS. Other than time, it's not really consuming very many resources. That may change over time. The number of hits is increasing and we're doing some more resource-intensive things like Code Search, and we're running Ember-try scenarios for the top 100 add-ons to generate compatibility tables. That hasn't been the most reliable. Think about trying to do an npm install times 100 add-ons, times every day, times the different Ember dependency settings, so it's been very much a game of whack-a-mole, but for now, it's not bad. But we probably should think about some sort of donation button. Maybe something that writes out the exact numerical cost of running something like Ember Observer. The API is getting about 130,000 hits a month, but that's the API, so that's some number of requests per person. [inaudible] tells me something about 12,000 visitors each month. CHARLES: Does Ember Observer have an API? Are there any third-party apps that you know about that people have built on top of Ember Observer? KATIE: None that have been made public. I know a couple of private companies seem to be hitting the API, but it's not a public API. It's really not public yet.
I'm literally in the process of switching over to JSON API, and at some point I'll make some portion of that a public API, but it's pretty hard to support that at the same time. We change Ember Observer pretty frequently and do whatever kind of migrations we need to do. Ember Addons does pull the scores from us from an API endpoint. CHARLES: I actually wasn't aware. I remember the announcement of Code Search, but how do you see the usage of that? What's the primary use case when you would use Code Search on Ember Addons or Ember Observer? KATIE: I think the primary use case is if you're looking for how to use a feature. If you're creating an add-on and you want to know how to use certain hooks like [inaudible] or something like that, you can do a Code Search for that and see what other add-ons are doing. It's only searching Ember add-ons that have their repository all set up, so you'll only find Ember results. That can be nicer compared to searching GitHub. Then I find another use case is more by the core team, to see who is using what APIs and whether or not they can deprecate something or change something, or whether something has become widely used, so we're pretty excited about that possibility. ALEX: That is brilliant. CHARLES: Yeah, that's fantastic. The other question that I had was about running Ember-try scenarios on the top 100 add-ons, which is something that you're doing now. Are you actually reflecting that in the Ember Observer interface? Is that internal information or is that an experimental feature? Or is it reflected all the way through, so if I go to Ember Observer today, I'll see that information based on those computations? KATIE: It's displayed in the Ember Observer interface. It's only for the top 100 add-ons currently, but hopefully we're expanding that to all add-ons. It's been there for a few months, but maybe it's not easy to notice.
For the top 100 add-ons, it would be on the right sidebar: there will be a list of the scenarios we ran it with and whether or not each passed. There's add-on information there. The top 100 are linked to right on the main page of Ember Observer, so you can see them up front. CHARLES: How do you get that information back to the author of that top add-on? KATIE: We haven't actually done that. It's just on Ember Observer. It's more meant for consumers, to be able to see that this add-on is compatible with all these versions. We're not using their scenarios. We're using our own scenarios saying Ember from this version to this version, unless they have specified that version-compatibility field, and then we'll use that to auto-generate scenarios. This might get harder for add-ons that have complex scenarios, where they need something else to vary along with the versions, like Ember Data, or maybe they're using Liquid Fire and Liquid Fire has these three different versions and it depends which one is being used. For those, we'll just have to say that we're unable to test them. But hopefully, this is still providing some useful information for some add-ons. For a lot of add-ons, their build won't run unless they commit. In this case, this is running every night, so when a new Ember version is released, we'll see if it fails. On the other side of it, we have a dashboard where we can see which add-ons failed and maybe see if a new commit to something broke a bunch of add-ons, a commit to something like Ember or Ember CLI, one of the main things. CHARLES: I know that certainly, right after we finish this podcast, I'm going to go run and check up on all the add-ons that we maintain and make sure everything is copacetic. If you guys see me take off my headphones and dash out the door, you know where I'm going. KATIE: Got you. ALEX: I just have a further comment that I'm excited for the public API of Ember Observer, just because I've been thinking a lot about data visualization lately.
I think it would be a really cool tool to do a deprecated API visualization, like one of those bubble charts where the area covered by a deprecated API -- I'm doing a bad job of explaining this -- just the most used deprecated API methods, and visualizing that. I think it would be really interesting to see. CHARLES: Right, seeing how they spread across the add-ons. ALEX: Yeah, or just all add-ons in general. KATIE: I am most nervous about a public API for Code Search, though, because it's a little bit resource-intensive, so I'm just freaking out a little bit about the potential of a public API for it. But the Ember Observer client is open source. If you want to add anything to the app, I consider that as public as it gets. Adding to that, I really do want to figure out some way to have a performance budget for when people add to the client, because sometimes I'll get people who want to add features and I'm like, "That's just going to screw all of it up. It's going to be a problem for all of Ember Observer, and it's going to make everything slower, and it's already a little slow." But with JSON API, fortunately, I have kind of a beta version running and it's going to be much faster, thank God. I probably shouldn't have said that either. CHARLES: Definitely, we want to get that donation button set up before the API goes public. Okay, let's turn to the internet now and we'll answer some of the questions that got tweeted in. We've got a question from Jonathan Jackson and he wanted to ask you, "Where have you seen the most change in the Ember ecosystem since last EmberConf?" which was March of 2016. KATIE: There are definitely fewer add-ons being published, but the add-ons that are being published are, kind of, more grown-up things. We've got... I don't know if engines was before or after March. I have no idea. Time is one of those things. There's engines, and then people doing things related to FastBoot, so things are coming from more collaborative efforts, I think.
This is just my gut feeling. I have no data on this; it's a gut feeling from looking at add-ons. And then there's a lot of add-ons coming out that are specific to a particular company. I think that's maybe, I hope, representative of more companies getting into Ember, and hopefully they'll make things more generic and share them back. The other problem with the popularity is, like I talked about before, where a big company gets itself into the top 100 list, probably with just its own employees; they only appeared over the summer. I tried a few different ways to mutate the algorithm to try to get them out of there, but there was no solution there. There are much fewer novel things. Very rarely do I look at new Ember add-ons and go, "Oh, that's great," but when I do, it's something very exciting. CHARLES: Right, so there's a level of maturity that we're starting to see. Then I actually think that there is something in the story too, of there being larger companies now with big, big code bases that have lots of fan-out on their dependency trees that just weren't there before. KATIE: Definitely. I don't think some of the large companies were there before, but I think some of the largest companies are probably keeping most of their add-ons private, so there's a kind of mid-range of company that's big enough to donate things or willing to put things out as open source. A few of these companies have a lot of add-ons now, and a lot of them are very similar to things that already existed, so you're going to be like, "I don't know why I'd use this," but they obviously made changes for some reason. CHARLES: The other thing that I want to talk to you about, before we wrap up, is that you actually are in a partnership in Code All Day. What kind of business is that? What is it you guys do? What's it like running your company while, at the same time, you're kind of managing these large pieces of the Ember ecosystem? KATIE: Code All Day is very small. It's just me and Michelle.
It's a consulting company; we partnered together after we left a startup and decided to do consulting together. We primarily do Ember projects, also some Rails, and we try to work together, and we love test-driven things. It's pretty loose. It ended up being easy to run since it's just a partnership. We don't have any employees. But Ember Observer does take up a lot of our time, and we really had an idea that it might help us get clients, so I suppose it kind of helps our credibility, but it hasn't really been great for leads so much. Fortunately, that hasn't been a big problem for us. We really enjoy spending our time this way. We enjoy the flexibility that consulting gives us, and that flexibility is what keeps these things running. CHARLES: All right-y. Well, are there any kind of skunkworks, stealth, secret things you've got brewing in the lab, crazy ideas that you might be ready to give us a sneak preview about, for inquiring minds that may want to know? KATIE: Some of them are really [inaudible], which is redoing Ember Observer with JSON API. Currently, it's using ActiveModel serializers, which is kind of a custom API in Rails, and [inaudible] fortunately, it's an API now. We're moving to something called JSONAPI::Resources, so that will make the performance of Ember Observer much better, and that's pretty much my primary focus at the moment. I don't really have any big skunkworks, exciting projects. I have far-off ideas that hopefully will materialize into some sort of skunkworks projects. CHARLES: All right. Well, fantastic. I want to say thank you, Katie, for coming on the show. I know that you are kind of a hero of mine. I think a lot of people come to our community and they ask, "Where's the value in being a member of this community, in terms of the things that I can take out of it? What does it provide for me?"
And you demonstrate on a day-to-day basis asking what you can do for your community, rather than what your community can do for you, to paraphrase JFK. I think you live that every day, so I look up to you very much in that. Thank you for being such a [inaudible] of the community which I'm a part of, and thank you for coming on the show. KATIE: I'm very happy to have been here, and thank you. I use a lot of you guys' add-ons, and really, the community has given so much to me, which is why I want to participate in it. It's a really great group of people. CHARLES: Yep, all right-y. Well, bye everybody. ALEX: Bye.
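The automated half of the scoring Katie describes (recent commits, a recent release, download and star percentiles, and the bus-factor contributor check) can be sketched as a small function. This is a hypothetical illustration: the field names and cutoff values are invented for the example, not Ember Observer's actual implementation.

```javascript
// Hypothetical sketch of Ember Observer's automated "mechanical" points.
// Field names and cutoffs are illustrative only.
function mechanicalPoints(addon, cutoffs) {
  let points = 0;
  if (addon.commitsLastThreeMonths > 2) points += 1;                     // active repo
  if (addon.releasedInLastThreeMonths) points += 1;                      // recent release
  if (addon.npmDownloads >= cutoffs.topTenPercentDownloads) points += 1; // top 10% downloads
  if (addon.githubStars >= cutoffs.topTenPercentStars) points += 1;      // top 10% stars
  if (addon.contributors > 1) points += 1;                               // more than one contributor
  return points;
}
```

The remaining points come from the manual review she mentions: a meaningful readme, meaningful tests, and a CI build, with the tests criterion being the only truly judgment-based one.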

REACTIVE
53: It's About Removing Cognitive Load

REACTIVE

Play Episode Listen Later Oct 28, 2016 67:41


Grizzly bear facts. Baby talk. Henning is team-lead now. JSON API + Swagger =

Ember Weekend
Episode 56: didInsertEpisodeTitle

Ember Weekend

Play Episode Listen Later Apr 25, 2016 17:00


Chase and Jonathan talk about Composable helpers, route action handlers, JSON API, and web vs. mobile development.

The Bike Shed
52: You're an Elixir Developer Now

The Bike Shed

Play Episode Listen Later Feb 17, 2016 46:31


Derek and Laila discuss Derek's excitement for Elixir and Phoenix. Is Elixir as fun to write as Ruby? Is Phoenix a better Rails? Elixir and Phoenix Routes in Phoenix Using ctags with Elixir Static Assets in Phoenix ja_serializers ecto Is There a JSON Schema describing JSON API? Elixir 1.2 Map and MapSet scale better ExMachina - factories for Elixir Elixir Typespecs and Behaviours

Changelog Master Feed
JSON API and API Design (The Changelog #189)

Changelog Master Feed

Play Episode Listen Later Jan 1, 2016 98:01


Yehuda Katz joined the show to talk about JSON API — where the spec came from, who’s involved, compliance, API design, the future, and more. We also finally got Yehuda on the show alone, so we were able to talk with him about his origins, how he got started as a programmer, and his thoughts on struggle vs aptitude.
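For readers unfamiliar with the spec discussed in this episode, a JSON:API document has a single top-level `data` member holding resource objects with `type`, `id`, `attributes`, and optional `relationships`. The resource type and fields below are made-up examples, not from any real API:

```javascript
// Minimal document in the shape defined by the JSON:API spec (jsonapi.org).
// The "articles" resource and its fields are hypothetical examples.
const doc = {
  data: {
    type: "articles",
    id: "1",
    attributes: { title: "JSON API and API Design" },
    relationships: {
      author: { data: { type: "people", id: "9" } }
    }
  }
};

// A compliant server sends this serialized body with the media type
// application/vnd.api+json.
const body = JSON.stringify(doc);
```

Relationships carry only resource identifiers (`type` + `id`); the related resources themselves can be side-loaded in a top-level `included` array, which is one of the spec's answers to the compound-document problem the episode touches on.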

The Changelog
JSON API and API Design

The Changelog

Play Episode Listen Later Jan 1, 2016 98:01


Yehuda Katz joined the show to talk about JSON API — where the spec came from, who’s involved, compliance, API design, the future, and more. We also finally got Yehuda on the show alone, so we were able to talk with him about his origins, how he got started as a programmer, and his thoughts on struggle vs aptitude.

REACTIVE
4: A Little Mouse Named Henning

REACTIVE

Play Episode Listen Later Aug 13, 2015 57:04


Our team is complete again for this episode! Raquel, Kahlil and Henning discuss Google's announcement about Alphabet and speculate about what it all means. Raquel shares her enthusiasm about the screen sharing app Screenhero and how it compares to other services and apps in the space. Henning reports on his progress with implementing JSON-API and last but not least Kahlil talks about Redux and client-side state management.

The Bike Shed
25: Throwing the Schema Out With the SOAPy Bathwater (Gordon Fontenot)

The Bike Shed

Play Episode Listen Later Jul 28, 2015 51:11


Derek is joined by Gordon Fontenot for a discussion of the JSON API specification, problems consuming it from Swift, and the future of functional programming in Swift. This episode of The Bike Shed is sponsored by: Code School: Entertaining online learning for existing and aspiring developers. Leave a review on our iTunes page to be entered to win a free month of Code School. Links / Show Notes JSON API Argo: Functional JSON parsing in Swift Swift Optionals Spine: A Swift JSON API client Curry: Swift framework for function currying. HAL: Hypertext Application Language SOAP JSON Schema Runes Build Phase- For more of Gordon's insight into baseball and iOS development Gordon on Twitter Cookie Clicker Swarm Sim

Build Phase
88: We Built This City

Build Phase

Play Episode Listen Later Jul 23, 2015 38:17


At the end of a whirlwind week in Boston, Mark and Gordon talk about, like, every possible topic for 50 minutes. Even Thom up and left half way through. Topics include new open source projects (like Static, Tropos, and Curry), localization, and the world's oldest Red Sox fan. Game recap for Houston vs Boston on July 3 Static PR bringing the settings in-app for Tropos PR adding Polish localization to Tropos Mysterious Trousers tapbots/calcbot-localization PR bringing Haskell's precedence to Runes PR to make Result's flatMap operator work with the version from Runes PR making it easier to use Runes as a lightweight internal dependency RFC PR to use Runes as a lightweight internal dependency PR removing Runes as an external dependency for Argo PR to use throws instead of Decoded in Argo Curry.framework Source file for GHC.Tuple JSON:API Example of using partial application to solve JSON side loading with immutable value objects

Ruby on Rails Podcast
193: GitHub Summit & Outage, Silence, ActionCable, JSON API, Self Help

Ruby on Rails Podcast

Play Episode Listen Later Jul 13, 2015 74:16


Sean Devine and Kyle Daigle talk about the GitHub Summit (this week) & outage (last week), working in silence, ActionCable (tests, collaboration, copyright), Sean's JSON API talk at Boston Ember, and SELF HELP.

Ruby on Rails Podcast
193: GitHub Summit & Outage, Silence, ActionCable, JSON API, Self Help

Ruby on Rails Podcast

Play Episode Listen Later Jul 12, 2015 74:16


Sean Devine and Kyle Daigle talk about the GitHub Summit (this week) & outage (last week), working in silence, ActionCable (tests, collaboration, copyright), Sean's JSON API talk at Boston Ember, and SELF HELP.

Ember Weekend
Episode 7: Champions needed

Ember Weekend

Play Episode Listen Later May 4, 2015 17:23


Chase and Jonathan discuss routable components, ember-cli-mirage (again), JSON API, and the new EmberJax twitter account.

Ruby on Rails Podcast
187: Dan Gebhardt - json-api, jsonapi-resources, orbit.js & Ember Data

Ruby on Rails Podcast

Play Episode Listen Later Apr 2, 2015 80:11


Dan Gebhardt joins Sean Devine to talk about json-api, jsonapi-resources, orbit.js, and Ember Data.

Ruby on Rails Podcast
187: Dan Gebhardt - json-api, jsonapi-resources, orbit.js & Ember Data

Ruby on Rails Podcast

Play Episode Listen Later Apr 2, 2015 80:11


Dan Gebhardt joins Sean Devine to talk about json-api, jsonapi-resources, orbit.js, and Ember Data.

The Frontside Podcast
021: Best of EmberConf 2015 (part 2)

The Frontside Podcast

Play Episode Listen Later Mar 19, 2015 35:28


Charles, Brandon and Stanley wrap up part two of their discussion about their favorite talks and technologies from EmberConf 2015. Stanley sings a Staind song, and proposes to the entire internet. Show Links: Ember CLI Deploy Aaron Patterson (Tenderlove) Orbit.js Jamie White - Growing Ember One Tomster At A Time Ember Community Guidelines Brittany Storoz - Building Custom Apps With Ember CLI Edward Faulkner - Physical Design Liquid Fire Bryan Langslet Mitch Lloyd - Ember Islands Ember Observer

JavaScript Jabber
150 JSJ OIMs with Richard Kennard, Geraint Luff, and David Luecke

JavaScript Jabber

Play Episode Listen Later Mar 11, 2015 62:28


Check out RailsClips on Kickstarter!!   02:01 - Richard Kennard Introduction Twitter GitHub Kennard Consulting Metawidget 02:04 - Geraint Luff Introduction Twitter 02:07 - David Luecke Introduction Twitter GitHub 02:57 - Object-relational Mapping (ORM) NoSQL Duplication 10:57 - Online Interface Mapper (OIM) CRUD (Create, Read, Update, Delete) UI (User Interface) 12:53 - How OIMs Work Form Generation Dynamic Generation Static Generation Duplication of Definitions Runtime Generation 16:02 - Editing a UI That’s Automatically Generated Shape Information => Make Obvious Choice 23:01 - Why Do We Need These? 25:24 - Protocol? Metawidget 27:56 - Plugging Into Frameworks backbone-forms JSON Schema 33:48 - Making Judgement Calls WebComponents, React JSON API AngularJS 49:27 - Example OIMs JSON Schema Metawidget Jsonary 52:08 - Testing Picks The Legend of Zelda: Majora's Mask 3D (AJ) 80/20 Sales and Marketing: The Definitive Guide to Working Less and Making More by Perry Marshall (Chuck) A Wizard of Earthsea by Ursula K. Le Guin (Chuck) Conform: Exposing the Truth About Common Core and Public Education by Glenn Beck (Chuck) Miracles and Massacres: True and Untold Stories of the Making of America by Glenn Beck (Chuck) 3D Modeling (Richard) Blender (Richard) Me3D (Richard) Bandcamp (David) Zones of Thought Series by Vernor Vinge (David) Citizenfour (Geraint) Solar Fields (Geraint) OpenPGP.js (Geraint) forge (Geraint)

Devchat.tv Master Feed
150 JSJ OIMs with Richard Kennard, Geraint Luff, and David Luecke

Devchat.tv Master Feed

Play Episode Listen Later Mar 11, 2015 62:28


Check out RailsClips on Kickstarter!!   02:01 - Richard Kennard Introduction Twitter GitHub Kennard Consulting Metawidget 02:04 - Geraint Luff Introduction Twitter 02:07 - David Luecke Introduction Twitter GitHub 02:57 - Object-relational Mapping (ORM) NoSQL Duplication 10:57 - Online Interface Mapper (OIM) CRUD (Create, Read, Update, Delete) UI (User Interface) 12:53 - How OIMs Work Form Generation Dynamic Generation Static Generation Duplication of Definitions Runtime Generation 16:02 - Editing a UI That’s Automatically Generated Shape Information => Make Obvious Choice 23:01 - Why Do We Need These? 25:24 - Protocol? Metawidget 27:56 - Plugging Into Frameworks backbone-forms JSON Schema 33:48 - Making Judgement Calls WebComponents, React JSON API AngularJS 49:27 - Example OIMs JSON Schema Metawidget Jsonary 52:08 - Testing Picks The Legend of Zelda: Majora's Mask 3D (AJ) 80/20 Sales and Marketing: The Definitive Guide to Working Less and Making More by Perry Marshall (Chuck) A Wizard of Earthsea by Ursula K. Le Guin (Chuck) Conform: Exposing the Truth About Common Core and Public Education by Glenn Beck (Chuck) Miracles and Massacres: True and Untold Stories of the Making of America by Glenn Beck (Chuck) 3D Modeling (Richard) Blender (Richard) Me3D (Richard) Bandcamp (David) Zones of Thought Series by Vernor Vinge (David) Citizenfour (Geraint) Solar Fields (Geraint) OpenPGP.js (Geraint) forge (Geraint)

All JavaScript Podcasts by Devchat.tv
150 JSJ OIMs with Richard Kennard, Geraint Luff, and David Luecke

All JavaScript Podcasts by Devchat.tv

Play Episode Listen Later Mar 11, 2015 62:28


Check out RailsClips on Kickstarter!!   02:01 - Richard Kennard Introduction Twitter GitHub Kennard Consulting Metawidget 02:04 - Geraint Luff Introduction Twitter 02:07 - David Luecke Introduction Twitter GitHub 02:57 - Object-relational Mapping (ORM) NoSQL Duplication 10:57 - Online Interface Mapper (OIM) CRUD (Create, Read, Update, Delete) UI (User Interface) 12:53 - How OIMs Work Form Generation Dynamic Generation Static Generation Duplication of Definitions Runtime Generation 16:02 - Editing a UI That’s Automatically Generated Shape Information => Make Obvious Choice 23:01 - Why Do We Need These? 25:24 - Protocol? Metawidget 27:56 - Plugging Into Frameworks backbone-forms JSON Schema 33:48 - Making Judgement Calls WebComponents, React JSON API AngularJS 49:27 - Example OIMs JSON Schema Metawidget Jsonary 52:08 - Testing Picks The Legend of Zelda: Majora's Mask 3D (AJ) 80/20 Sales and Marketing: The Definitive Guide to Working Less and Making More by Perry Marshall (Chuck) A Wizard of Earthsea by Ursula K. Le Guin (Chuck) Conform: Exposing the Truth About Common Core and Public Education by Glenn Beck (Chuck) Miracles and Massacres: True and Untold Stories of the Making of America by Glenn Beck (Chuck) 3D Modeling (Richard) Blender (Richard) Me3D (Richard) Bandcamp (David) Zones of Thought Series by Vernor Vinge (David) Citizenfour (Geraint) Solar Fields (Geraint) OpenPGP.js (Geraint) forge (Geraint)

Ruby on Rails Podcast
184: Sean Devine Interview on Descriptive Podcast

Ruby on Rails Podcast

Play Episode Listen Later Feb 20, 2015 114:45


Sean Devine was interviewed on the Descriptive Podcast with Kahlil Lechelt. Topics include Sean's career as a programmer, TDD, DHH, API-first development, JSON API, Ruby on Rails and EmberJS.

Ruby on Rails Podcast
184: Sean Devine Interview on Descriptive Podcast

Ruby on Rails Podcast

Play Episode Listen Later Feb 20, 2015 114:45


Sean Devine was interviewed on the Descriptive Podcast with Kahlil Lechelt. Topics include Sean's career as a programmer, TDD, DHH, API-first development, JSON API, Ruby on Rails and EmberJS.

Ruby on Rails Podcast
181: Brian Cardarella of DockYard - Running a Software Consultancy & Betting on Ember

Ruby on Rails Podcast

Play Episode Listen Later Jan 9, 2015 78:51


Brian Cardarella of the software consultancy DockYard joins Sean Devine to talk about some of the lessons that he's learned growing their business. Other topics include Ember (including ember-cli and ember-data) and DockYard's focus on client-side applications, the business of Ruby on Rails development, the JSON:API standard, and more.

Ruby on Rails Podcast
181: Brian Cardarella of DockYard - Running a Software Consultancy & Betting on Ember

Ruby on Rails Podcast

Play Episode Listen Later Jan 9, 2015 78:51


Brian Cardarella of the software consultancy DockYard joins Sean Devine to talk about some of the lessons that he's learned growing their business. Other topics include Ember (including ember-cli and ember-data) and DockYard's focus on client-side applications, the business of Ruby on Rails development, the JSON:API standard, and more.

DevNexus Podcast
Devnexus 2014 - Les Hazlewood - Designing a Beautiful REST+JSON API.mp3

DevNexus Podcast

Play Episode Listen Later Sep 3, 2014


Very French Trip WordPress Podcast
Podcast WordPress #2 – WP Tech, json API, édition Fron-end, powerpress…

Very French Trip WordPress Podcast

Play Episode Listen Later Jun 9, 2014 51:00


On the road with Very French Trip, for this second podcast dedicated to WordPress. On today's program, we welcome Daniel Roch and Willy Bahuaud to talk about WP Tech, which will take place in Nantes on November 29, 2014. WP Tech is a technical day entirely dedicated to WordPress and its ecosystem, not really beginner-oriented, […]