Podcasts about Thundra

  • 24 podcasts
  • 41 episodes
  • 54 min average duration
  • Infrequent episodes
  • Latest episode: Dec 28, 2023
Thundra


Latest podcast episodes about Thundra

Capes and Lunatics
Marvel Tales Ep #55: Fantastic Four #133 & Iron Man #313

Dec 28, 2023 · 71:37


Welcome back to Marvel Tales! In this episode, Phil and Justin review New Year's Eve tales from Fantastic Four #133 (April 1973), featuring a battle between the Thing and Thundra for the life of Alicia Masters, and Iron Man #313 (February 1995), featuring a not-so-pleasant New Year's Eve for Tony Stark. Tune in today, and don't forget to review the show on Apple Podcasts, Spotify, YouTube, and anywhere else you can!

Marvel Tales links:
→ Twitter: http://www.twitter.com/MarvelTalesPod
→ Instagram: https://www.instagram.com/capeslunatics/
→ Facebook: facebook.com/MarvelTalesPod
→ YouTube: https://www.youtube.com/c/CapesandLunatics

Marvel by the Month
#190: January 1973 - "Spawn of the Flesh-Eater!"

Mar 8, 2023 · 79:13


For more than 15 minutes of extra content, including our discussion of Fantastic Four #133 (in which Thundra and Ben Grimm duke it out at Shea Stadium), support us at patreon.com/marvelbythemonth. Subscribers at the $4/month level get instant access to our bonus feed, which contains nearly 80 extended and exclusive episodes, with more being added every week!

Stories covered in this episode:
  • "Spawn of the Flesh-Eater!" - Incredible Hulk #162, written by Steve Englehart, art by Herb Trimpe and Sal Trapani, letters by Artie Simek, colors by David Hunt, ©1973 Marvel Comics
  • "Four Against the Gods" - Defenders #3, written by Steve Englehart, art by Sal Buscema and Jim Mooney, ©1972 Marvel Comics
  • "The New Defender!" - Defenders #4, written by Steve Englehart, art by Sal Buscema and Frank McLaughlin, letters by Artie Simek, colors by Petra Goldberg, ©1972 Marvel Comics
  • "World Without End?" - Defenders #5, written by Steve Englehart, art by Sal Buscema and Frank McLaughlin, letters by Charlotte Jetter, colors by Glynis Wein, ©1973 Marvel Comics

"Marvel by the Month" theme v3.0 by Robb Milne, sung by Barb Allen. All incidental music by Robb Milne. Visit us on the internet at marvelbythemonth.com, follow us on Instagram at @marvelbythemonth, and support us on Patreon at patreon.com/marvelbythemonth. Much of our historical context information comes from Wikipedia; please join us in supporting them at wikimediafoundation.org. And many thanks to Mike's Amazing World of Comics, an invaluable resource for release dates and issue information.

Year One Comics
Episode 241: Avengers West Coast 73-75

Nov 7, 2022 · 44:32


We finish up the Pacific Overlords plot, say goodbye to three long-time team members (including one of the founders), and welcome three new team members! Then it's a fill-in issue with Arkon and Thundra!

Software Engineering Daily
Thundra – Lee

Oct 20, 2022 · 44:51


The post Thundra – Lee appeared first on Software Engineering Daily.

Buluta Doğru
Thundra

Sep 26, 2022 · 58:23


Hello everyone from episode 31. In this episode, our guests were Barış Kaya and Burak Kantarcı from Thundra, an Ankara-based company with observability products such as Sidekick and Foresight. We talked with them about the company, its products, and the problems those products solve. https://www.thundra.io/

MJ Morning Show on Q105
MJ Morning Show: May 3, 2022

May 3, 2022 · 177:32


Can the gang guess what the mystery clip is? MJ finds a story about a body being found in Lake Mead in Nevada... and more could be found. This story reminds Froggy of the final song in Goodfellas, but he can't figure out what it's called. Today's Morons in the News features a man who didn't know how to drive stick and crashed a very expensive car. Two alcohol-related stories also make the Morons in the News, and one relates to Roxanne's fiancé. Thundra the Dreamweaver talks with MJ and the crew about MJ's dream last night. Is it possible for a chef to fire their customers? There's an update on the Alabama deputy and murder suspect. A listener has a problem with Froggy's Elon Musk impression; it sounds a lot like a character from Beverly Hills Cop. There's an update on the Depp vs. Heard trial, and it has something to do with an audio bite MJ played at the beginning of the show... Can the gang finally guess what is being said? Ulta is facing backlash after an e-mail that was sent out regarding Kate Spade... Should someone be fired for this? Olivia Wilde was served while on stage at a convention.

Serverless Chats
Episode #105: Building a Serverless Banking Platform with Patrick Strzelec

Jun 14, 2021 · 66:01


About Patrick Strzelec: Patrick Strzelec is a fullstack developer with a focus on building GraphQL gateways and serverless microservices. He is currently working as a technical lead at NorthOne, making banking effortless for small businesses.

LinkedIn: Patrick Strzelec
NorthOne Careers: www.northone.com/about/careers
Watch this episode on YouTube: https://youtu.be/8W6lRc03QNU

This episode is sponsored by CBT Nuggets and Lumigo.

Transcript

Jeremy: Hi everyone. I'm Jeremy Daly, and this is Serverless Chats. Today, I'm joined by Patrick Strzelec. Hey, Patrick, thanks for joining me.

Patrick: Hey, thanks for having me.

Jeremy: You are a lead developer at NorthOne. I'd love it if you could tell the listeners a little bit about yourself, your background, and what NorthOne does.

Patrick: Yeah, totally. I'm a lead developer here at NorthOne. I've been focusing on building out our GraphQL gateway, as well as some of our serverless microservices. What NorthOne does: we are a banking experience for small businesses. Effectively, we are a deposit account, with many integrations that act almost like an operating system for small businesses. Basically, we choose the best partners we can to do things like check deposits, just your regular transactions you would do, as well as any insights, and the use cases will grow. I'd like to call us a very tailored banking experience for small businesses.

Jeremy: Very nice. The thing that I think is fascinating about this is that you have just completely embraced serverless, right?

Patrick: Yeah, totally. We started off early on with this vision of being fully event driven, and we started off with a monolith, a big Python Django monolith, and we've been experimenting with serverless all the way through. Somewhere along the journey, we decided this is the tool for us, and it just totally made sense on the business side and on the tech side. It's been absolutely great.

Jeremy: Let's talk about that, because this is one of those things where you've got a business that's a banking platform. You're handling some serious transactions here. You've got a lot of transactions going through, and you've totally embraced this. I'd love to have you take the listeners through why you thought it was a good idea. What were the business cases for it? Then we can talk a little bit about the adoption process, and then I know there's a whole bunch of stuff that you did with event driven stuff, which is absolutely fascinating. Then we could probably follow up with maybe a couple of challenges, and some of the issues you faced. Why don't we start there? Who in your organization, because I am always fascinated to know if somebody in your organization says, "Hey, we absolutely need to do serverless," and just starts beating that drum. What was the business and technical case that made your organization swallow that pill?

Patrick: Yeah, totally. I think just at a high level, we're a user experience company; we want to make sure we offer small businesses the best banking experience possible. We don't want to spend a lot of time on operations, and reliability is incredibly important. If we can offload that burden and move faster, that's what we need to do. When we're talking about who was beating that drum, I would say our VP, Blake, really early on seemed to see serverless as this amazing fit. I joined about three years ago today, so I guess this is my anniversary at the company. We were just deciding what to build. At the time there were a lot of architecture diagrams, and Blake hypothesized that serverless was a great fit. We had a lot of versions of the world, some with Apache Kafka and a bunch of microservices going through there. There were other versions with serverless in the mix, and some of the tooling around that, and this other hypothesis that maybe we wanted a GraphQL gateway in the middle of there. It was one of those things where we wanted to test our hypotheses as we went. That ties into this innovation velocity that serverless allows for. It's very cheap to put a new piece of infrastructure up in serverless. Just the other day we wanted to test Kinesis for an event streaming use case, and that was just half an hour to set up that config, and you could put it live in production and test it out, which is completely awesome.

I think that innovation velocity was the hypothesis. We could just try things out really quickly. They don't cost much at all; you only pay for what you use, for the most part. We were able to try that out, as well as reliability. AWS really does a good job of making sure everything's available all the time, something that maybe a young startup isn't ready to take on. When I joined the company, Blake proposed, "Okay, let's try out GraphQL as a gateway, as a concept. Build me a prototype." In that prototype, there was a really good opportunity to try serverless. Apollo Server had just launched the serverless package, which was super easy to deploy. It was a complete no-brainer. We tried it out, we built the case. We just started with this GraphQL gateway running on serverless, on AWS Lambda. It's funny because at first we were just in development; nobody was hitting our services. It was still a year out from when we were going into production. Once we went into prod, this Lambda is hot all the time, which is interesting. I think the cost case breaks down there, because you're running this thing all the time. But it was this GraphQL server in front of our Python Django monolith, with this vision of event driven microservices, which fits well for banking. If you just think about the banking world, everything is pretty much eventually consistent. That's just the way the systems are designed. You send out a transaction; it doesn't settle for a while.

We were always going to do event driven, but when you're starting out with a team of three developers, you're not going to build this whole microservices environment and everything. We started with that monolith with the GraphQL gateway in front, which scaled pretty nicely; even today we have the same GraphQL gateway, we just changed the services backing it, which was really sweet. The adoption process was like, let's try it out. We tried it out with GraphQL first, and then as we were heading into launch, we had this monolith that we needed to manage. I mean, manually managing AWS resources is easier than back in the day when you were managing your own virtual machines and stuff, but it's still not great. We didn't have a lot of time, and there were a lot of last-minute changes we needed to make. A big refactor to our scheduled transactions functions happened right before launch. That was an amazing serverless use case, and there's our second one, where we're like, "Okay, we need to get this live really quickly." We created this work performance pattern really quickly as a test with serverless, and it worked beautifully. We also had another use case come up, which was just a simple phone scheduling service. We just wrapped an API and exposed some endpoints, but it was a lot easier to do with serverless. We just threw it off to two developers, said figure out how you do it, and it was ready to be live. And then ...

Jeremy: I'm sorry to interrupt you, but I want to get to this point, because you're talking about standing up infrastructure, using infrastructure as code, or the tools you're using. How many developers were working on this thing?

Patrick: I think at the time, maybe four developers on backend functionality before launch, when we were just starting out.

Jeremy: But you're building a banking platform here, so this is pretty sophisticated. I can imagine another business case for serverless is just the sense that we don't have to hire an operations team.

Patrick: Yeah, exactly. We were well through launching. I think it would have been a couple of months after we were live when we hired our first DevOps engineer, which is incredible. Our VP took a lot of that on too; I'm sure he had his hands a little dirtier than he did early on. But it was just amazing. We were able to manage all that infrastructure, and scale was never a concern. In the early stages, maybe it shouldn't be just yet, but it was really, really easy.

Jeremy: Now you started with four, and what are you now? Somewhere around 25 developers?

Patrick: About 25 developers now; we're growing really fast. We doubled this year during COVID, which is just crazy to think about, and somehow have been scaling somewhat smoothly, at least in terms of just being able to output as a dev team. We'll probably double again this year. This is maybe where I shamelessly plug that we're hiring, and we always are, and you could visit northone.com and just check out the careers page, or just hit me up for a warm intro. It's been crazy, and that's one of the things that serverless has helped us with too. We haven't had this scaling bottleneck, which is an operations team. We don't need to hire X operations people for a certain number of developers.

Onboarding has been easier. There was one example, during a major project, where we hired a developer. He was new to serverless, but a very experienced developer, and he had a production-ready serverless service ready in a month, which was just an insane ramp-up time. I haven't seen that very often. He didn't have to talk to any of our operations staff, and we'd already used serverless long enough that we had all of our presets and boilerplates ready, and permissions locked down, so it was just super easy. It's super empowering for him to be able to just play around with the different services. Because we hit that point where we've invested enough that every developer, when they open a branch, that branch deploys its own stage, which has all of the services and AWS infrastructure deployed. You might have a PR open that launches an instance of Kinesis, and five SQS queues, and 10 Lambdas, and a bunch of other things, and then tears down almost immediately, and the cost isn't something we really worry about. The innovation velocity there has been really, really good. Just being able to try things out. If you're thinking about something like Kinesis, where it's like a Kafka, that's my understanding, and if you think about the organizational buy-in you need for something like Kafka, because you need to support it, come up with opinions, and all this other stuff, you'll spend weeks trying it out. But for one of our developers, it's like, this seems great.

We're streaming events, we want this to be real-time, let's just try it out. This was for our analytics use case, and it's live in production now. It seems to be doing the thing, and we're testing out that use case, and there isn't that roadblock. We could always switch off to a different design if we want. The experimentation piece there has been awesome. During major projects we've changed the way we've thought about our resources a few times, and in the end it works out, and often it is about resiliency. It's just jamming queues into places we didn't think about in the first place, but that's been awesome.

Jeremy: I'm curious with that, though, with 25 developers ... Kinesis for the most part works pretty well, but you do have to watch those iterator ages, and make sure that they're not backing up, or that you're losing events if they get flooded or whatever. And also, sticking queues everywhere sounds like a really good idea, and I'm a big fan of that, but it also means there are a lot of queues you have to manage, and watch, and set alarms on, and all that kind of stuff. Then you also talked about what sounds like a pretty great CI/CD process to spin up new branches and things like that. There's a lot of dev ops-y work that is still there. How are you handling that now? Do you have dedicated ops people, or do you just have your developers looking after that piece of it?

Patrick: I would say we have a very spirited group of developers who are inspired. We do a lot of our code-sharing via internal packages. A few of our developers just figured out some of the patterns that we need, whether it's CI, or how we structure our event stores, or how we do our queue subscriptions. We manage these internal packages. This won't scale well, by the way; this is just us being inspired and trying to reduce some of this burden. It is interesting. I've listened to this podcast and a few others, and this idea of infrastructure as code being part of every developer's toolbox is starting to really resonate with our team.

In our migration, our swift shift to, I'd say, doing serverless properly, we've learned to really think in it, to think in terms of infrastructure when creating solutions. I'm not saying we're doing serverless the right way now, but we certainly did it the wrong way in the past, where we would spin up a bunch of API gateways that would talk to each other, a lot of REST calls going around the spider web of communication. Also, I'll call these monster Lambdas, that have a whole procedure list that they need to get through, and a lot of points of failure. When we think about the way we're going to do Lambda now, we try to keep one Lambda doing one thing, and then there are pieces of infrastructure stitching that together: EventBridge between domain boundaries, SQS for commands where we can, instead of using API Gateway. I think that transitions pretty well into our big break. I'm talking about this as our migration to serverless. I want to talk more about that.

Jeremy: Before we jump into that, I just want to ask this question, because, again, some people call them fat Lambdas, and I call them Lambda lifts. I think there's Lambda lifts, then fat Lambdas, then your single-purpose functions. It's interesting, again, moving towards that direction, and I think it's super important just admitting that you were definitely doing this wrong. Because I think so many companies find that adopting serverless is very much an evolution, and it's a learning thing where the teams have to figure out what works for them, and in some cases discover best practices on their own. That you've gone through that process, I think, is great, so definitely kudos to you for that.

Before we get into that adoption and the migration or the evolution process that you went through to get to where you are now, one other business or technical case for serverless, especially with something as complex as banking: I still don't understand why I can't transfer money from my personal TD Bank account to my wife's local checking account, or why that's so hard to do. But it seems like there are a lot of steps. Steps that have to work. You can't get halfway through five steps in some transaction and then be like, oops, we can't go any further; you have to be able to roll that back, and things like that. I would imagine orchestration is a huge piece of this as well.

Patrick: Yeah, 100%. Banking lends itself really well to these workflows, I'll call them. If you're thinking about even just the start of any banking process, there's this whole application process where you put in all your personal information, you send off a request to your bank, and then there's this whole waterfall of things that needs to happen: all kinds of checks, making sure people aren't on any fraud lists or money laundering lists, or even just getting a deeper dive from our compliance department. There are a lot of steps there, and even just keeping our own systems in sync with our off-provider and other places. We definitely lean on using Step Functions a lot. I think they work really, really well for our use case. Just the visual, being able to see this is where a customer is in their onboarding journey, is very, very powerful.

Being able to restart at any point of that journey, or even just giving our compliance team a view into that process, or even adding a pause portion, I think that's one of the biggest wins there: we could process somebody through any one of our pipelines, and we may need a human eye there, at least for this point in time. That's one of the interesting things about the banking industry. There are still manual processes behind the scenes, and there are, I find this term funny, wire rooms in banks, where there are people reviewing things and all that. There are a lot of workflows that just lend themselves well to Step Functions. That pausing capability, and being able to return later with a response, allows you to build other internal applications for your compliance teams and other teams, or something behind the scenes calls back and says, "Okay, resume this waterfall."

I think there was the visualization, especially in an events world when you're talking about sagas. We're talking about distributed transactions here, in a way, where there are a lot of things happening, and a common pattern now is the saga pattern. You probably don't want to be doing two-phase commits and all this other stuff, but when we're looking at sagas, it's the orchestration you could do, or the choreography. Choreography gets very messy, because there's a lot of simplistic behavior: I'm a service and I know what I need to do when these events come through, and I know which compensating events I need to emit, and all this other stuff. But now there's a very limited view. If a developer is trying to gain context in a certain domain and understand the chain of events, although you are decoupled, there's still this extra coupling now: having to understand what's going on in your system, and being able to share it with external stakeholders. Using Step Functions is, I guess, the serverless way of doing orchestration. Just being able to share that view. We had this process where we needed to move a lot of accounts, or a lot of user data, to a different system. We were able to just use an orchestrator there as well, just to keep an eye on everything that's going on.

We might be paused in migrating, but let's say we're moving over contacts, a transaction list, and one other thing: you could visualize which one of those is in the red, and which one we need to come in and fix, and also share that progress with external stakeholders. Also, it makes for fun launch parties, I'd say. It's kind of funny, because when developers do their job, you press a button and everything launches, and there's not really anything to share or show.

Jeremy: There's no balloons or anything like that.

Patrick: Yeah. But it was kind of cool to look at these, like, the customer is going through this branch of the logic, and I know it's all green. Then I think one of the coolest things was the retry-ability as well. When somebody does fail, or when one of these workflows fails, you can see exactly which step, you can see the logs, and all that.
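The orchestration-with-compensation ("saga") flow discussed here can be sketched in plain TypeScript. This is an illustrative model only; the `runSaga` helper and step names are hypothetical, not NorthOne's code, and in production this logic would live in a Step Functions state machine rather than application code:

```typescript
// A step in the workflow: a forward action plus a compensating action
// that undoes it if a later step fails (the "saga" pattern).
type SagaStep = {
  name: string;
  action: () => void;
  compensate: () => void;
};

type SagaResult = {
  status: "completed" | "failed";
  failedStep?: string;
  compensated: string[];
};

// Run steps in order; on failure, run compensations for the already
// completed steps in reverse order, then stop, leaving a human eye or
// a retry policy to decide what happens next.
function runSaga(steps: SagaStep[]): SagaResult {
  const completed: SagaStep[] = [];
  for (const step of steps) {
    try {
      step.action();
      completed.push(step);
    } catch {
      const compensated = completed.reverse().map((s) => {
        s.compensate();
        return s.name;
      });
      return { status: "failed", failedStep: step.name, compensated };
    }
  }
  return { status: "completed", compensated: [] };
}

// Example: a failing fraud check unwinds the earlier account reservation.
const log: string[] = [];
const result = runSaga([
  { name: "reserveAccount", action: () => log.push("reserved"), compensate: () => log.push("released") },
  { name: "fraudCheck", action: () => { throw new Error("flagged"); }, compensate: () => {} },
]);
```

Stopping at the failure state rather than automatically retrying mirrors the pause-and-resume behavior described above: the orchestrator records exactly which step failed, and compensation or resumption can be kicked off later.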
I think one of the challenges we ran into there, though, was because we are working in the banking space, we're dealing with sensitive data. Something I almost wish AWS solved out of the box would be being able to obfuscate some of that data. Maybe you can't, I'm not sure, but we had to think of patterns for tokenization, for instance. Stripe does this a lot, where in certain parts of their platform you put in personal information, you get back a token, and you use that reference everywhere. We do tokenization, and we also limit the amount of detail flowing through steps in our orchestrators. We'll use an event store with identifiers flowing through, and we'll be doing reads back to that event store in between steps, to do what we need to do. You lose some of that debuggability, you can't see exactly what information is flowing through, but we need to keep user data safe.

Jeremy: Because that's the use case for it. You mentioned a good point about orchestration versus choreography, and I'm a big fan of choreography when it makes sense. But I think one of the hardest lessons you learn when you start building distributed systems is knowing when to use choreography, and knowing when to use orchestration. Certainly in banking, orchestration is super important. Again, with those saga patterns built in, that's the kind of thing where you can get to a point in the process and you don't even need to do automated rollbacks. You can get to a failure state, and then from there, that can be a pause, and then you can essentially kick off the unwinding of those things and do some of that.

I love that idea of the token pattern, and just rehydrating certain steps where you need to. I think that makes a ton of sense. All right, let's move on to the adoption and the migration process, because I know this is something that really excites you, and it should, because it is cool.
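The tokenization pattern described above, swapping sensitive fields for opaque references and rehydrating only inside the steps that need them, might look like the following minimal sketch. The in-memory `Map` and the `tok_` prefix are illustrative stand-ins; a real system would back this with something like DynamoDB or a dedicated vault service:

```typescript
import { randomUUID } from "node:crypto";

// In-memory stand-in for a secure token vault.
const vault = new Map<string, string>();

// Replace a sensitive value with an opaque reference token.
function tokenize(sensitiveValue: string): string {
  const token = `tok_${randomUUID()}`;
  vault.set(token, sensitiveValue);
  return token;
}

// Rehydrate the original value only inside a step that needs it.
function detokenize(token: string): string {
  const value = vault.get(token);
  if (value === undefined) throw new Error(`unknown token: ${token}`);
  return value;
}

// Only the token travels through workflow state, so orchestrator
// execution history and logs never see the raw value.
const ssnToken = tokenize("123-45-6789");
const workflowState = { applicantId: "app_1", ssn: ssnToken };
```

The trade-off Patrick names shows up directly: anything inspecting `workflowState` (a Step Functions console, a log line) sees only `tok_...`, which protects the data but costs some debuggability.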
I always know, as you're building out applications and you start to add more capabilities and more functionality and start really embracing serverless as a methodology, that it can get really exciting. Let's take a step back. You had a champion in your organization that was beating the drum like, "Let's try this. This is going to make a lot of sense." You built an Apollo Lambda, or a Lambda running Apollo Server on it, and you were using that as a strangler pattern, routing all your stuff through to your backend. What happens next?

Patrick: I would say when we needed to build new features, developers just gravitated towards using serverless; it was just easier. We were using TypeScript instead of Python, which we just tend to like as an organization, so it's just easier to hop into TypeScript land, but I think it was just easier to get something live. Now we had all these Lambdas popping up and doing their job, but I think the problem was that we weren't using them properly. Also, there was a lot of difference between each of our serverless setups. We would learn each time, and we'd be like, okay, we'll use this parser function here to simplify some of it, because it is very bare-bones if you're just pulling in the Serverless Framework, and it took a little ... Every service looked very different, I would say. Also, we never really took the time to sit back and say, "Okay, how do we think about this? How do we use what serverless gives us to enable us, instead of it just being an easy thing to spin up?" I think that's where it started. It was just easy to start, but we didn't embrace it fully. I remember having a conversation at some point with our VP, being like, "Hey, how about we just put Express into one of our Lambdas?" Now I know it's a Lambda lift. It was just easier. Everybody knows how to use Express; why don't we just do this? Why are we writing our own parsers for all these things? We had 10 versions of a make-response helper function that was copy-pasted between repos, and we didn't really have a good pattern for sharing that code yet in private packages.

We realized that we liked serverless, but we realized we needed to do it better. We started with a serverless chapter reading group between some of our team members, and we made some moves there. We created a shared boilerplate at some point, so that reduced some of the differences you'd see between some of the repositories, but we needed a step-change difference in our thinking, when I look back, and we got lucky that an opportunity came up. At this point, we probably had another six Lambda services, maybe more actually. I want to say we probably had around 15 services at this point, without a governing body around patterns.

At this time, we had this interesting opportunity where we found out we were going to be re-platforming. A big announcement we just made last month was that we moved to a new bank partner called Bancorp, the bank partner that supports Chime. I'll call them an engine boost: we put in a much larger, more efficient engine for our small businesses. If you just look at the capabilities they provide, they're just absolutely amazing. It's what we need to build forward. Their events API is amazing, as well as just their base banking capabilities, the unit economics they can offer, the timelines there. Things were just better. We found out we were doing an engine swap. The people on the business side of our company trusted our technical team to do what we needed to do. Obviously, we needed to put together a case, but they trusted us to choose our technology, which was awesome. I think we just had a really good track record of delivering, so we had free rein to decide what to do.

But the timeline was tight, so what we decided to do, and this was COVID times too, was a few of our developers got COVID tested, and we rented a house, and we did a bubble situation, how in the NHL or NBA you have a bubble. We had a dev bubble.

Jeremy: The all-star team.

Patrick: The all-star team, yeah. We decided, let's sit down, let's figure out what patterns are going to take us forward. How do we make the step-change in our thinking at the same time as the step-change in our technology stack, at the same time as we're swapping out this bank, this engine, essentially, for the business? In this house, we watched almost every YouTube video you can imagine on event driven and serverless. Just knowing that we were going to be doing this, I think all of us independently started prototyping, and watching videos, and reading a lot of your content, and Alex DeBrie and Yan Cui. We all had a lot of ideas already going in.

When we all got to this house, we started off with an event storming exercise, popular in the domain-driven design community, where we just threw down our entire business on a wall with sticky notes. It would have been better to have every business stakeholder there, but luckily we had two people from our product team there as representatives. That's how invested we were in building this out right: we had product sitting in the room with us to figure it out.

We slapped down our entire business on a wall (this took days), and then drew circles around it and iterated on that for a while. Then we started looking at what the technology looks like: what are our domain boundaries, and what prototypes do we need to make? For a few weeks there, we were just prototyping. We built out what I'd call "baby's first balance." That was the running joke: how do we get an account opened, with a balance, with transactions, minimally, with some new patterns? We really embraced some of this domain-driven design thinking, as well as just event driven thinking. When we were rethinking architecture, three concepts became very important for us, not entirely new, but important: idempotency was a big one, dealing with distributed transactions was another, as well as eventual consistency. The eventual consistency portion is kind of funny, because we were already doing it a lot.

Our transactions wouldn't always settle very quickly. We didn't know about it, but now our whole system becomes eventually consistent, typically, if you divide all of your architecture across domains and decouple everything. We created some early prototypes. We created our own version of an event store, which is, I would just say, an opinionated schema around DynamoDB, where we keep track of revisions, payload, timestamp, all the things you'd want to be able to do event sourcing. That's another thing we decided on: event sourcing seemed like the right approach for state for a lot of our use cases. Banking, if you just think about a banking ledger, is events, or an accounting ledger. You're just adding up rows: add, subtract, add, subtract.

We created a lot of prototypes for these things. Our event store pattern became basically just DynamoDB with opinions around the schema, as well as a shared code package with a simple dispatch function: one dispatch function that really looks at enforcing optimistic concurrency, and one that's a little bit more relaxed. Then we also had some reducer functions built into there. Another prototype was around how we create the actual subscriptions to this event store. We landed on SNS to SQS fan-out, and it seems like fan-out first is the serverless way of doing a lot of things. We learned that along the way, and it makes sense.
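The event store pattern outlined here (append-only events with revision numbers, a dispatch function that enforces optimistic concurrency, and reducers that fold events into state such as a balance) can be sketched with an in-memory map standing in for the opinionated DynamoDB schema. All names below are illustrative assumptions, not the actual internal package API:

```typescript
type LedgerEvent = {
  streamId: string;
  revision: number;
  type: "credited" | "debited";
  amount: number;
};

// In-memory stand-in for the DynamoDB-backed event store.
const store = new Map<string, LedgerEvent[]>();

// Append with optimistic concurrency: the caller states which revision
// it expects to write; a concurrent writer produces a conflict instead
// of silently overwriting. In DynamoDB this becomes a conditional put.
function dispatch(event: LedgerEvent): void {
  const stream = store.get(event.streamId) ?? [];
  const expected = stream.length + 1;
  if (event.revision !== expected) {
    throw new Error(`revision conflict: expected ${expected}, got ${event.revision}`);
  }
  stream.push(event);
  store.set(event.streamId, stream);
}

// Reducer: fold the event stream into a projection. A banking ledger
// really is just "add, subtract, add, subtract" over the rows.
function balance(streamId: string): number {
  return (store.get(streamId) ?? []).reduce(
    (sum, e) => (e.type === "credited" ? sum + e.amount : sum - e.amount),
    0,
  );
}
```

With DynamoDB, the revision check would typically be a conditional write (for example, a condition that the stream/revision key does not already exist), which is what makes the stricter dispatch safe across concurrent writers.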
It was one of those things we read about in a lot of these blogs and YouTube videos, and it really made sense in production: when all the data is streaming from one place, you just add subscribers all over the place, just new queues. Fan-out first, highly recommend. We landed there by following best practices.

Jeremy: Great. You mentioned a bunch of different things in there, which is awesome. So you get together in this house, you come up with all the events, you do this event storming session, which is always a great exercise. You get a pretty good visualization of how the business is going to run from an event standpoint. Then you start building out this event-driven architecture, and you mentioned some packages that you built, and we talked about Step Functions and the orchestration piece of this. Just give me a quick overview of the actual system itself. You said it's backed by DynamoDB, but then you have a bunch of packages that run in between there, and there's a whole bunch of queues, and you're using some custom packages. Are you using EventBridge in there? What's some of the architecture behind all that?

Patrick: Really, really good question. Once we created these domain boundaries, we needed to figure out how we communicate between domains and within domains. We landed on really differentiating milestone events and domain events. Milestone events, in other terms, might be called integration events: key business milestones. An account was opened, an application was approved or rejected, things that every domain may need to know about. Then within our domain boundaries, we have these domain events, which might reduce to a milestone event, and we can maintain those contracts and change them up in the future. We needed to think about how we message all these things across. How do we communicate?
We landed on EventBridge for our milestone events. We have one event bus that we talk to between domain boundaries, basically. EventBridge there, and then each of our services subscribes to that EventBridge and maintains its own event store. That's backed by DynamoDB. Each of our services has its own data store, usually an event stream or a projection database, but it's almost all Dynamo, which is interesting because our old platform used Postgres, and we did have relational data. I was really scared at first: how are we going to maintain relations and things? It became a non-issue, and I don't even know why now that I think about it. Every service maintains its nice projection through events and builds its own view of the world, which brings its own problems. We have DynamoDB in there, and then SNS to SQS fan-out. Then when we're talking about packages ...

Jeremy: That's off of Streams?

Patrick: Exactly, yeah. We're Dynamo Streams to SNS, to SQS. Then we use shared code packages to make those subscriptions very easy. If you're looking at doing that SNS to SQS fan-out, or just creating SQS queues, there is a lot of CloudFormation boilerplate that we were creating, and we needed to move really quickly on this project. We got pretty opinionated quickly, and we created our own subscription function that just generates all this CloudFormation with naming conventions, which was nice. I think the opinions were good, because early on we weren't opinionated enough, I would say. When you look in your AWS dashboard and the resources aren't prefixed correctly, there's all this garbage. Now you have consistent naming throughout, and it's really easy to subscribe to an event.

We publish packages to help with certain things. Our event store package was one of those.
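The subscription helper Patrick mentions, a function that stamps out the SNS-to-SQS fan-out boilerplate with consistent naming, might look something like the sketch below. The resource shapes are simplified CloudFormation, and the naming conventions, parameters, and redrive settings here are illustrative assumptions, not NorthOne's real conventions.

```typescript
// Sketch of a helper that generates SNS -> SQS subscription boilerplate
// with a consistent naming convention: a queue, its dead-letter queue,
// and an SNS subscription with a filter policy, all prefixed by
// service name and event type.

function subscription(
  service: string,
  topicArn: string,
  eventType: string
): Record<string, any> {
  const prefix = `${service}-${eventType}`;
  return {
    [`${prefix}Queue`]: {
      Type: "AWS::SQS::Queue",
      Properties: {
        QueueName: `${prefix}-queue`,
        RedrivePolicy: {
          deadLetterTargetArn: { "Fn::GetAtt": [`${prefix}Dlq`, "Arn"] },
          maxReceiveCount: 5, // illustrative redrive limit
        },
      },
    },
    [`${prefix}Dlq`]: {
      Type: "AWS::SQS::Queue",
      Properties: { QueueName: `${prefix}-dlq` },
    },
    [`${prefix}Subscription`]: {
      Type: "AWS::SNS::Subscription",
      Properties: {
        TopicArn: topicArn,
        Protocol: "sqs",
        Endpoint: { "Fn::GetAtt": [`${prefix}Queue`, "Arn"] },
        FilterPolicy: { eventType: [eventType] },
      },
    },
  };
}
```

One call per event type then yields predictably named resources in the AWS console, which is the "prefixed correctly" benefit Patrick describes.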
We also created a Lambda handlers package, which leverages a Lambda middlewares compose package out there, which is quite nice. All the common functionality we were doing a lot of, like parsing a body from S3, or SQS, or API Gateway, that's just middleware that we now publish. Validation in and out: we highly recommend the library Zod. We really embrace the TypeScript-first object validation. Really, really cool package. We created all these middlewares, and then subscription packages. We have a lot of shared code in this internal NPM repository that we install across services.

I think one challenge we had there was, eventually you abstract away too much from the CloudFormation, and it's hard for new developers. It's easy for them to create event subscriptions; it's hard for them to evolve our serverless thinking because they're so far removed from it. I still think it was the right call in the end. I think this is the next step of the journey: figuring out how we share code effectively while not hiding away too much of serverless, especially because it's changing so fast.

Jeremy: It's also interesting, though, that you take that approach to hide some of that complexity and bake in some of that boilerplate that someone mostly didn't have to write themselves anyway. Like you said, copying and pasting between services is not the best way to do it. I tried the whole shared packages thing one time, and it kind of worked. It's just, when you make a small change to that package and you have 14 services, you then have to update every one to get the newest version. Sometimes that's a little frustrating. Lambda layers haven't been a huge help with some of that stuff either.
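The middleware-compose approach Patrick describes can be sketched as below: each middleware wraps the handler, so cross-cutting concerns like body parsing and validation live in shared code instead of every Lambda. This is an illustrative sketch, not the actual compose package or Zod; the `requireAmount` check stands in for a real Zod schema.

```typescript
// Minimal middleware composition: compose(a, b)(handler) runs a, then b,
// then the handler, so shared concerns stay out of individual Lambdas.

type Handler = (event: any) => any;
type Middleware = (next: Handler) => Handler;

const compose = (...middlewares: Middleware[]) => (handler: Handler): Handler =>
  middlewares.reduceRight((next, mw) => mw(next), handler);

// Example middleware: parse a JSON body, as API Gateway would deliver it.
const jsonBody: Middleware = next => event =>
  next({ ...event, body: JSON.parse(event.body) });

// Example middleware: reject events missing a required field, standing in
// for Zod-style schema validation of inputs.
const requireAmount: Middleware = next => event => {
  if (typeof event.body.amount !== "number") {
    throw new Error("validation failed: amount must be a number");
  }
  return next(event);
};

// A "business logic only" handler, wrapped with the shared middlewares.
const handler = compose(jsonBody, requireAmount)(event => event.body.amount * 2);
```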
But anyway, it's interesting, because again you've mentioned using queues a number of times. You did mention resiliency in there, but I want to touch on that point a little bit, because that's one of those things where I would assume in a banking platform you do not want to lose events. You don't want to lose things. So if something breaks, or something gets throttled or whatever, you have to go and retry those events, and have the alerts in place to know that a queue is backed up. Then I'm thinking about ordering issues and things like that. What kinds of issues did you face, and tell me a little bit more about what you've done for reliability?

Patrick: Totally. SQS is a workhorse for our company right now. We use a lot of it. Dropping messages is one of the scariest things, so you're dead-on there. When we were moving to event-driven, that was what scared me the most. What if we drop an event? A good example of that is if you're using EventBridge and you're subscribing Lambdas to it. I was under the impression early on that EventBridge retries forever, but I'm pretty sure it'll only retry the invoke twice. I think that's what we landed on.

Jeremy: Interesting.

Patrick: I think so, and don't quote me on this. That was an example of where a dropped message could be a problem. We put an SQS queue in front of there as the subscription. That way, if there's any failure to deliver, it's just going to keep retrying for a number of days. At that point you've got to think about DLQs, and that's something we're still thinking about. But I think the reason we've been using queues everywhere is that now queues are in charge of all your retryability. Now that we've decomposed what was one big Lambda into five Lambdas with queues in between, if anything fails in there, it just pops back onto the queue, and it'll retry indefinitely.
You can drop messages after a few days, and that's something we learned, luckily, in the prototyping stage. There are a few places where we use dead-letter queues, but one of the issues there was ordering. Ordering didn't play too well with ...

Jeremy: Not with DLQs. No, it does not, no.

Patrick: I think that's one lesson I'd want to share: only use ordering when you absolutely need it. We found ways to design some of our architecture where we didn't need ordering. There are places we were using FIFO SQS, which had just launched when we were building this thing. When we were thinking about messaging, we were like, "Oh, well, we can't use SQS because it doesn't respect ordering." Then bam, the next day we see this blog article. We got really hyped on that and used FIFO everywhere, and then realized it's unnecessary in most use cases. So when we were going live, we actually changed those FIFO queues into regular SQS queues in as many places as we could. In that case, you can really easily attach a dead-letter queue and not worry about anything, but with FIFO things get really, really gnarly.

Ordering is an interesting one. Another place we got burned, or a tough thing to do with dead-letter queues, is with our state machines. We needed to limit the concurrency of our state machines, which is another wishlist item for AWS. I wish there was just, at the top of the file, a way to limit concurrent executions of your state machine. Maybe it exists and we just didn't learn to use it properly, but we needed it. There are a few patterns out there. I've seen the [INAUDIBLE] pattern, where you can use the actual state machine flow to look back at how many concurrent executions you have, and pause. We landed on setting reserved concurrency on a number of Lambdas, and throwing errors.
If we've hit the max concurrency, it'll pause that Lambda. But the problem with DLQs there was, these are all errors. They're coming back as errors, and we're fine with them. This is a throttle error, that's fine. But it's hard to distinguish that from a poison message in your queue, so when do you dump those into a DLQ? If it's just a throttling thing, I don't think it matters to us. That was another challenge we had. We're still figuring out dead-letter queues and alerting. For now we rely a lot on CloudWatch alarms for our alerting, and there's a lot you can do. Even just in the state machines, you can get pretty granular there: once certain things fail, announce it to your Slack channel. We use that Slack integration, it's pretty easy. You just go into a Slack channel, there's an email in there, you plop it into the console in AWS, and you have your very early alerting mechanism there.

Jeremy: The thing with Elasticsearch ... not Elasticsearch, I'm sorry, I'm totally off-topic here. The thing with EventBridge and Lambda, these are one of those things that, again, they're nuances. As long as EventBridge can deliver to the Lambda service, the Lambda service kicks off and queues it automatically. Then that will retry a certain number of times, and I think you can control that now. But eventually, if that retries multiple times and fails, it kicks over to the DLQ or whatever. There are all different ways that it works, but that's why I always liked the idea of putting a queue in between there as well, because I felt you just had a little bit more control over exactly what happens.

As long as it gets to the queue, you know you haven't lost the message, or you hope you haven't lost the message. That's super interesting. Let's move on a little bit to the adoption issues. You mentioned a few of these things, obviously issues with concurrency and ordering, and some of that other stuff.
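The throttle-versus-poison distinction Patrick describes, a throttling error should go back on the queue and keep retrying, while a message that can never succeed should be dead-lettered, can be made explicit in a small classifier. The error classes and thresholds below are invented for illustration; a real consumer would map its own error types into these two buckets.

```typescript
// Sketch: decide whether a failed message should retry or be dead-lettered.
// Throttling is expected back-pressure, not a bad message, so it always
// retries; anything else that keeps failing past the redrive limit is
// treated as poison.

class ThrottleError extends Error {}
class ValidationError extends Error {}

type Verdict = "retry" | "dead-letter";

function classify(err: Error, receiveCount: number, maxReceives: number): Verdict {
  if (err instanceof ThrottleError) return "retry";          // back-pressure: never poison
  return receiveCount >= maxReceives ? "dead-letter" : "retry"; // poison after N attempts
}
```

In practice this logic could live in the shared handler middleware, deleting and re-queueing throttled messages so they never count toward the SQS redrive policy that routes poison messages to the DLQ.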
What about some of the other challenges you had? You mentioned this idea of writing all these packages, and it pulls devs away from the CloudFormation a little bit. I do like that, in that I think it accelerates a lot of things, but what are some of the other challenges you've been having just getting this thing up and running?

Patrick: I would say IAM is an interesting one. Because we are in the banking space, we want to be very careful about what access you give to what machines or developers, and I think machines are important too. We do have a separate developer setup with its own permissions, and in development it's really easy to spin up all your services within reason. But now that we're going into production, there are times when our CI doesn't have the permissions to delete a queue or create a queue, or certain things, and there's a lot of tweaking you have to do there. You've got to do a lot of thinking about your IAM policies as an organization, especially because now every developer is touching infrastructure.

That becomes this shared operational overhead that serverless did introduce, and we're still figuring it out. Right now we're functioning on least privilege, so it's better to not be able to deploy than to deploy something you shouldn't, or read logs that you shouldn't, and that's where we're starting. But that will be a challenge for a little while, I think. There are all kinds of interesting things out there. I think temporary IAM permissions is a really cool one. There are times we're in production and we need to view certain logs, or access a certain queue, and there's tooling out there where, or at least so I've heard, you can give temporary permissions: you have this queue permission for 30 minutes, it expires, and it's audited, and I think there's some CloudTrail tie-in you can do there. I'm speaking about my wishlist for our next evolution here.
I hope my team is listening ...

Jeremy: Your team's listening to you.

Patrick: ... and will be inspired as well.

Jeremy: What about ... because this is something I always found to be a challenge, especially when you start having multiple services. You've talked about these domain events and milestone events, but you've got different services that need to communicate across services, or across domains, and realize certain things like that. Service discovery in and of itself: which queue are we mapping to, which service am I talking to, and which version of the service am I talking to? Things like that. How have you been dealing with that stuff?

Patrick: Not well, I would say. Very, very ad hoc. Right now, at least, we have tight communication between the teams, so we roughly know which service we need to talk to, and we output our URLs in the CloudFormation outputs, so at least you can reference the URLs across services a little more easily. Really, GraphQL is one of the only services that talks to a lot of our API Gateways, so at least there's less of that knowing which endpoint to hit. Most of our services read from EventBridge, and within services a lot of that's abstracted away, so the queue subscriptions are a little easier. But service discovery is a bit of a nightmare.

Once our services grow, it'll be a huge challenge to understand. Even just knowing which services are using older versions of Node, for instance. I saw that AWS is now deprecating version 10, and we'll have to take a look internally: are we using version 10 anywhere, and how do we make sure that's fine? Or even things like knowing which services have vulnerabilities in their NPM packages, because we're using Node. I don't even know if that falls under service discovery, but it's an overhead of ...

Jeremy: It's service management too. There's a lot there. That actually brings me to this idea of observability too.
You mentioned doing some CloudWatch alerts and some of that stuff, but what about using an observability tool, or tracing like X-Ray, and things like that? Have you been implementing any of that, and if you have, have you had any success and/or problems with it?

Patrick: I wish we had a better view of some of the observability tools. I think we were just building so quickly that we never really invested the time into trying them out. We did use X-Ray, and we rolled our own tooling internally to at least do what we know. X-Ray was one of those, but the problem with X-Ray is, while we do subscribe all of our services, X-Ray isn't implemented everywhere internally in AWS, so we lose our trail somewhere in that Dynamo stream to SNS, or SQS. It's not a full trace. Also, just digesting that huge graph of information is very difficult. I don't use it often. I think it's a really cool graphic to show, "Hey, look how many services are running, and it's going so fast." It's a really cool thing to look at, but it hasn't been very useful.

I think our most useful tool for debugging and observability has been our logging. We created a JSON logger package, so we output JSON logs and we can actually filter off of different properties, and we ship those to Elasticsearch. Now you can have a view of all of the functions within a given domain at any point in time. You can really see the story. Because early on, when we were opening up CloudWatch, you'd have 10 tabs open trying to understand this flow of information, and it was very difficult.

We also implemented our own trace ID pattern. I think we just followed a Lumigo article where we introduced some properties at a higher level in each of our Lambdas, in one of our middlewares, and we were able to trace through. It's not ideal. Observability is something that we'll probably have to work on next.
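The JSON logger and trace-ID pattern Patrick describes can be sketched as below: every entry is a single JSON object carrying a trace ID and service name, so logs shipped to Elasticsearch can be filtered per property and stitched into one story across functions. Field names and the API here are illustrative, not NorthOne's actual logger package.

```typescript
// Sketch of a structured JSON logger that threads a trace ID through
// every entry. The sink would normally be stdout (picked up by CloudWatch
// and shipped to Elasticsearch); it's injectable here so the output is
// easy to inspect.

interface LogEntry {
  level: "info" | "error";
  message: string;
  traceId: string;
  service: string;
  [key: string]: unknown; // arbitrary extra fields to filter on
}

function createLogger(service: string, traceId: string, sink: (line: string) => void) {
  const emit = (level: LogEntry["level"]) =>
    (message: string, fields: Record<string, unknown> = {}) => {
      const entry: LogEntry = { level, message, traceId, service, ...fields };
      sink(JSON.stringify(entry)); // one JSON object per line
    };
  return { info: emit("info"), error: emit("error") };
}
```

In a middleware, the trace ID would be read from the incoming event (or generated at the edge) and passed into `createLogger`, so every downstream log line in the call chain shares it.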
It's been tolerable for now, but I can't see it scaling that long.

Jeremy: That's the other thing too, even with the shared package issue. With an observability tool, they'll just install a layer or something, and you don't necessarily have to worry about updating your own tool. I always find, if you are embracing serverless and you want to get rid of all that undifferentiated heavy lifting, there are a lot of really good observability tools out there doing some great stuff, and they're specializing in it. It might be worth letting someone else handle that for you rather than trying to do it yourself internally.

Patrick: Yeah, 100%. Do you have any that you've used that are particularly good? I know you work with serverless, so ...

Jeremy: I've played around with all of them, because I love this stuff, so it's always fun. Obviously Lumigo and Epsagon, and Thundra, and New Relic. They're all great. They all do things slightly differently, but they all follow a similar implementation pattern, so it's very easy to install them. We can talk more about some recommendations. I think it's just one of those things where, in a modern application, not having that insight is really hard. It can be really hard to debug stuff. If you look at some of the tools that AWS offers, I think they're there, it's just they're maybe a little harder to implement, and not quite as refined and targeted as some of the observability tools. But still, you've got to get there. Again, that's why I keep saying it's an evolution, it's a process. Maybe one time you get burned and you're like, "We really needed to have observability," and that's when it becomes more of a priority when you're moving fast like you are.

Patrick: Yeah, 100%. I think it's got to be a priority earlier rather than later. I think I'll do some reading now that you've dropped some of these options.
I have seen them floating around, but it's one of those things where, when it's too late, it's too late.

Jeremy: It's never too late to add observability, though. Actually, a lot of them now make it really, really easy. I'm not trying to pitch any particular company, but take a look at some of them, because they are really great. Just one other challenge that I find a lot of people run into, especially with serverless, is all the artificial account limits in place: the number of queues you can create, the number of concurrent Lambda functions in a particular region, and stuff like that. Have you run into any of those account limit issues?

Patrick: Yeah. I can give you the easiest way to run into an account limit issue, and that is to replay your entire EventBridge archive to every subscriber. You will find a bottleneck somewhere. That's something ...

Jeremy: Somewhere it'll fall over? Nice.

Patrick: 100%. It's a good way to do a quick check in development to see where you might need to buffer something, but we have run into that. I think the solution there, in a lot of places, was just really playing with concurrency where we needed to, and being thoughtful about maintaining concurrency in places that absolutely needed to stay functioning. The challenge there is that it eats into your total account concurrency, which was an interesting learning. Definitely playing around there, and just being thoughtful about where you are replaying. A couple of things: we use replays a lot. Because we are using these milestone events between service boundaries, when you launch a new service, you want to replay that whole history all the way through.

We've done a lot of replaying, and that was one of the really cool things about EventBridge. It was just so easy.
You just set up an archive, and it'll record everything coming through, and then you just press a button in the console, and it'll replay all of them. That was really awesome. But be very mindful of where you're replaying to. If you replay to all of your subscriptions, you'll hit Lambda concurrency limits real quick. Another case: early on we needed to replay some events from our own domain event store. Those come off the Dynamo stream, so we were using Dynamo to kick those to a stream, to SNS, and fan out to all of our SQS queues. But there would only be one or two queues that actually needed to subscribe to those events, so we created an internal utility just to dump those events directly into the SQS queues we needed. I think it's just about not being wasteful with your resources, because they are cheap, sure ...

Jeremy: But if you use them, they start to cost money.

Patrick: Yeah, they start to cost some money, and they can also lock you out of other functionality. If you hit your Lambda limits, now our API Gateway is tapped.

Jeremy: That's a good point.

Patrick: You can take down your whole system if you just aren't mindful about those limits, and then you're calling up AWS in a panic: "Hey, can you update our limits?" Luckily we haven't had to do that yet, but it's definitely something in your back pocket if you need it, if you can make the case to AWS that maybe you do need bigger limits than the default. I think just not being wasteful, and being mindful of where you're replaying. Another interesting thing there is dealing with partners too. It's really easy to scale in the Lambda world, but not every partner can handle that volume really quickly.
If you're not buffering the events coming through EventBridge to a new service that hits a partner every time, you're going to hit their API rate limit really quickly, because you're just going to blow right through it. You might be doing thousands of API calls when you're instantiating a new service. That's one of those interesting things we have to deal with, particularly in our orchestrators, because they are talking to different partners. That's why we really need to make sure we can limit the concurrent executions of the state machines themselves. In a way, some of our architecture is too fast to scale.

Jeremy: It's too good.

Patrick: You still have to consider downstream. That, and even just, if you are using relational databases or anything else in your system, now you have to worry about connection limits and ...

Jeremy: I have a whole talk I gave on that.

Patrick: ... spikes in traffic.

Jeremy: Yes, absolutely.

Patrick: Really cool.

Jeremy: I know all about it. Any final advice for companies like yours that are trying to bite off a piece of the serverless apple? I guess that's really bad. Anyway, any advice for people looking to get into this?

Patrick: Yeah, totally. I would say start small. I think we were wise to just try it out. It might not land with your development team; if you don't really buy in, it's one of those things that can just end up unnecessarily messy. So start small, see if you like it in-shop, and then reevaluate once you hit a certain point. That, and I would say shared boilerplate packages sooner rather than later. I know shared code is a problem, but it is nice to have an unopinionated starter pack, so that you're at least not doing anything really crazy. Even just things like having opinions around logging.
In our industry, it's really important that you're not logging sensitive details. For us, that means doing things like wrapping our HTTP clients to make sure we're not logging sensitive details, or having shared Lambda packages that make sure, out of the box, you're opinionated about not doing something terribly awful. I would say those two things: start small, and a boilerplate package. Maybe the third thing is just pay attention to the code smell of a growing Lambda. If you are doing three API calls in one Lambda, chances are you can probably break that up and think about it in a more resilient way. If any one of those pieces fails, now you have retryability in each one of them. Those are the three things I would say. I could probably talk forever about the rest of our journey.

Jeremy: I think that was great advice, and I love hearing about how companies are going through this process and what that process looks like. I hope, I hope, I hope that companies listen to this and can skip a lot of these mistakes. I don't want to call them all mistakes; I think it's just evolution. The stuff that you've done, we've all made those mistakes, we've all gone through that process, and the more we can solidify these practices, the more companies will benefit from hearing stories like these. Thank you very much for sharing that. Again, thank you so much for spending the time to do this and sharing all of this knowledge, and this journey that you've been on and are continuing to be on. It would be great to continue to get updates from you. If people want to contact you, I know you're not on Twitter, but what's the best way to reach out to you?

Patrick: I almost wish I had a Twitter. It's the developer thing to have, so maybe in the future. Just on LinkedIn would be great.
LinkedIn would be great. As well, if anybody's interested in working with our team and figuring out how to take serverless to the next level, hit me up on LinkedIn or look at our careers page at northone.com, and I can give you a warm intro.

Jeremy: That's great. Your last name is spelled S-T-R-Z-E-L-E-C. How do you say that again? Say it in Polish, because I know I said it wrong in the beginning.

Patrick: I guess for most people it would just be Strzelec, but if there are any Slavs in the audience, it's "Strzelec." A very intense four-consonant last name.

Jeremy: That is a lot of consonants. Anyway, again, Patrick, thanks. This was great.

Patrick: Yeah, thank you so much, Jeremy. This has been awesome.

Serverless Chats
Episode #101: How Serverless is Becoming More Extensible with Julian Wood

May 17, 2021 · 64:40


About Julian Wood

Julian Wood is a Senior Developer Advocate for the AWS Serverless Team. He loves helping developers and builders learn about, and love, how serverless technologies can transform the way they build and run applications at any scale. Julian was an infrastructure architect and manager in global enterprises and start-ups for more than 25 years before going all-in on serverless at AWS.

Twitter: @julian_wood
All things Serverless @ AWS: ServerlessLand
Serverless Patterns Collection
Serverless Office Hours – every Tuesday 10am PT
Lambda Extensions
Lambda Container Images
Watch this episode on YouTube: https://youtu.be/jtNLt3Y51-g

This episode is sponsored by CBT Nuggets and Lumigo.

Transcript

Jeremy: Hi everyone, I'm Jeremy Daly and this is Serverless Chats. Today I'm joined by Julian Wood. Hey Julian, thanks for joining me.

Julian: Hey Jeremy, thank you so much for inviting me.

Jeremy: Well, I am super excited to have you here. I have been following your work for a very long time and of course I'm a big fan of AWS. So, you are a Serverless Developer Advocate at AWS, and I'd love it if you could tell the listeners a little bit about your background, so they get to know you a bit, and also what your role is at AWS.

Julian: Yeah, certainly. Well, I'm Julian Wood. I am based in London, but please don't let my accent fool you. I'm actually originally from South Africa, so the language purists aren't scratching their heads anymore. I work within the Serverless Team at AWS, and hopefully do a number of things. First of all, explain what we're up to and how our serverless things work and, as I like to sometimes say a bit cheekily, basically help the world fall in love with serverless as I have. Then the other side is to be a proxy and the voice of builders and developers, whoever's building serverless applications, and be their voice internally.
So you can also keep us on our toes and help us build the things that will brighten your days.

Before this, I worked for probably too many years as an infrastructure racker, stacker, architect, and manager. I worked in global enterprises babysitting their Windows and Linux servers, running virtualization, and doing all the operations kind of stuff to support that. But I was always thinking there's a better way to do this, and we weren't doing the best for the developers and internal customers. So when this, in inverted commas, "serverless way" of doing things started to appear, I just knew that this was going to be the future, and I could happily leave the server side to much better and cleverer people than me. So by some weird, auspicious alignment of the stars, a while later I managed to get my current dream job talking about serverless and talking to you.

Jeremy: Yeah. Well, I tell you, I think a lot of people who love serverless are recovering ops and infrastructure people who were doing racking and stacking, because I too am recovering from that, and I still have nightmares.

I thought it was interesting, too, how you mentioned developer advocacy. It's funny, you work for a specific company, AWS obviously, but even developer advocacy in general, who is that for? Who are you advocating for? Are you advocating for the developers to use the service from the company? Or are you advocating for the developers so that the company can provide the services that they actually need? There's an interesting balance there.

Julian: Yeah, it's true. The honest answer is we don't have great terms for this kind of role, but I think primarily we are advocating for the people who are developing the applications on the outside. To advocate for them means we've got to build the right stuff for them and get their voices heard internally. There are many ways of doing that.
Some people raise support requests and other kinds of things, but sometimes some of our great ideas come from trawling Twitter, or, yes, I know, even Hacker News, or that kind of thing. Also, we may get responses from 10 different people about something, and that will formulate something in our brains, and we'll chat with other people, and that starts a thing. It's not that each time some good idea comes in on Twitter, it gets mashed into some big database that we all pick off. Part of our job is to be out there and try to think like, and be, developers, from whatever backgrounds we come from. I'm not a pure software developer; I come, I suppose, from infrastructure, but maybe you'd call that a bit of systems engineering. So I try to bring that background to give input on whatever we do, hopefully the right stuff.

Jeremy: Right. And then I think part of the job, too, is just getting the information and the examples out there, and trying to create those best practices, or at least surface those best practices and encourage the community to do a lot of that work and follow them. You've done a lot of work with that, obviously, writing for the AWS blog. I know you have a series on the Serverless Lens and the Well-Architected Framework, and we can talk about that in a little while. But I really want to talk to you about, I guess, the expansion of serverless over the last couple of years.

It was very narrowly focused, probably, when it first came out. Lambda was ... FaaS was a whole new concept for a lot of people. As this progressed and we've gotten more APIs, and more services, and things that it can integrate with, it just becomes complex and complicated. That's a good thing, but also maybe a bad thing.
But one of the things that AWS has done, and I think this is clearly in reaction to the developers needing it, is the ability to extend what you can do with a Lambda function, right? I mean, the idea of just putting your code in there and then, boom, that's it, that's all you have to do. That's great. But what if you do need access to lifecycle hooks? Or what if you do want to manipulate the underlying runtime or something like that? And AWS, I think, has done a great job with that.

So maybe we can start there, with the extensibility of Lambda in general. And one of the things that was launched fairly recently, and recently, I don't know, what was it? Seven months ago at this point? I'm not even sure, but launched fairly recently, let's say that, is Lambda Extensions, and a couple of different flavors of that as well. Could you kind of just give the users an over, the users, wow, the listeners an overview of what Lambda Extensions are?

Julian: I could hear the ops background coming in, talking about our users. Yeah. But I mean, from the get-go, serverless was always a terrible term because, why on earth would you name something for what it isn't? I mean, you know? I remember talking to DBAs, talking about NoSQL, and they'd go, "Well, if it's not SQL, then what is it?" So we're terrible at that, serverless as well. And yeah, Lambda was very constrained when it came out. Lambda was never built to be a serverless thing; that's just what the outcome was. Sometimes we focus too much on the tools rather than the outcome. And the story is S3, which just turned 15. The genesis of Lambda was being an event trigger for S3, and people thought, you'd upload something to S3, fire off a Lambda function, how cool is that?
And then obviously the clever clogs at the time were like, "Well, hang on, let's not just do this for S3, let's do this for a whole bunch of things."

So Lambda was born out of that, and that's a great history, which has created an arc sort of into the present and into the future, which I know we're also going to get onto: the power of event-driven applications. But the power of Lambda has always been its simplicity, and removing that operational burden and that heavy lifting. But sometimes that line is a bit of a gray area, and there are people who can be purists about serverless and can be purists about FaaS and say, "Everything needs to be ephemeral. Lambda functions can't extend to anything else. There shouldn't be any state, shouldn't be any storage, shouldn't be any ..." All this kind of thing.

And I don't want to speak for you, but I think both of us would agree that in some sense, yeah, that's fine. But we live in the real world, and there's other stuff that needs to connect, and we're not here about building purist kind of stuff. So Lambda Extensions is a new way, basically, to integrate Lambda with your favorite tools. And that's the sort of headline thing we like to talk about. And the big idea is to open up Lambda to more effectively work mainly with partners, but also your own tools if you want to write them, and to have deeper hooks into the Lambda lifecycle.

And our partners are awesome and they do a whole bunch of stuff for serverless, plus customers also have connections to on-prem stuff, or EC2 stuff, or containers, or all kinds of things. How can we make the tools more seamless in a way? How can we have a common set of tools maybe that you even use on-prem or in the cloud or containers or whatever? Why does Lambda have to be unique or different or that kind of thing? And Extensions is sort of one of the starts of that, to be able to use these kinds of tools and get more out of Lambda.
So I mean, just the kind of tools that we've already got on board, there are things like Splunk and AppDynamics. And Lumigo, Epsagon, HashiCorp, Honeycomb, Coralogix, Dynatrace, Thundra, Sumo Logic, Check Point. Yeah, I'm sorry, sorry to any partners I've forgotten.

Jeremy: No, right, no. That's very good. Shout them out, shout them out. No, I mean just, and not to interrupt you here, but ...

Julian: No, please.

Jeremy: ... I think that's great. I mean, I think that's one of the things that I like about the way that AWS deals with partners, is that ... I mean, I think AWS knows they can't solve all these problems on their own. I mean, maybe they could, right? But it would be their own way of solving the problems, and there are other people who are solving these problems differently. And giving you the ability to extend your Lambda functions into those partners is a huge win, not only for the partners, because it creates that ecosystem for them, but also for AWS, because it makes the product itself more valuable.

Julian: Well, never mind the big win for customers, because ultimately they're the ones who then get a common deployment tool, or a common observability tool, or HashiCorp Vault, so you can manage secrets in a Lambda function from HashiCorp Vault. I mean, that's super cool. I mean, also AWS services are picking this up, because that makes it easy for them to do stuff. So if anybody's used Lambda Insights, or even seen Lambda Insights in the console, it's somewhere in the monitoring thing, and you just click something over and you get this tool which can pull stuff that you can't normally get from a Lambda function. So things like CPU time and network throughput, which you couldn't normally get. But actually, under the hood, Lambda Insights is using Lambda Extensions. And you can see that if you look.
It automatically adds the Lambda layer, and job done.

So anyway, this is how a lot of the tools work: a layer is just added to a Lambda function and off you go, the tool can do its work. So there's also very much a simplicity angle on this, in that in a lot of cases you don't have to do anything. You configure some of the extensions via environment variables, if that's needed; you may just have an API key or a log retention value or something like that, any kind of example of that. But you just configure that as a normal Lambda environment variable, add this partner extension, which is just a Lambda layer, and off you go. Super simple.

Jeremy: Right. So explain Extensions exactly, because I think that's one of those things, because now we have Lambda layers and we have Lambda Extensions. And there's also the runtime API and then something else. I mean, even I'm not 100% sure what all of the naming conventions are. I'm pretty sure I know what they do ...

Julian: Yeah, fair enough.

Jeremy: ... but maybe we could say the names and say exactly what they do as well.

Julian: Yeah, cool. You get an API, I get an API, everybody gets an API. So let's just start with Lambda layers because, although it's not related to Extensions, it's how Extensions are delivered to your Lambda functions. And Lambda layers is just another way to add code to a Lambda function, or not even code, it can be a dependency. And it's cool because they are shareable. So if you have some dependencies, or you have a library, or an SDK, or some training data for something, a Lambda layer just allows you to add some bits and bobs to your Lambda function. That's a horrible explanation. There's another word I was thinking of, because I don't want to use the word code, because it's not necessarily code, but it's dependency, whatever. It's just another way of adding something.
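That layer-plus-environment-variable step can be sketched in a few lines. A hedged example with boto3, where the function name, layer ARN, and PARTNER_API_KEY variable are all made-up placeholders rather than any real partner's values:

```python
# A sketch of "installing" a partner extension: it's just a Lambda layer
# plus whatever environment variables the tool wants. Function name, layer
# ARN, and the PARTNER_API_KEY variable are made-up placeholders.

def merged_layers(existing_arns, new_arn):
    """Append the extension's layer ARN, keeping any layers already attached."""
    return existing_arns if new_arn in existing_arns else existing_arns + [new_arn]

def add_extension_layer(function_name, layer_arn, env_vars):
    import boto3  # imported here so the helper above stays testable offline

    lam = boto3.client("lambda")
    config = lam.get_function_configuration(FunctionName=function_name)
    current = [layer["Arn"] for layer in config.get("Layers", [])]
    lam.update_function_configuration(
        FunctionName=function_name,
        Layers=merged_layers(current, layer_arn),
        Environment={"Variables": env_vars},  # note: replaces existing variables
    )

if __name__ == "__main__":
    add_extension_layer(
        "my-function",
        "arn:aws:lambda:us-east-1:123456789012:layer:partner-extension:4",
        {"PARTNER_API_KEY": "..."},
    )
```

Note that `update_function_configuration` replaces the function's environment variables wholesale, which is why real tooling usually reads and merges them first.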
I'll wake up in a cold sweat tonight thinking of the word I was thinking of, but anyway.

But Lambda Extensions introduces a whole new companion API. So the runtime API is the little bit of code that allows your function to talk to the Lambda service. When an event comes in from the outside, this could be via API Gateway or via the Lambda API, or wherever else, EventBridge or Step Functions or wherever, that arrives at the Lambda service as an HTTP call, and Lambda transposes that into an event and sends it on to the Lambda function. And it's that API that manages that. And just as a sidebar, what I find cool on a sort of geeky, technical level is that that API actually sits within the execution environment. People are like, "Oh, that's weird. Why would your Lambda API sit within the execution environment, basically within the bubble that contains your function, rather than on the Lambda service?"

And the cool answer for that is it's actually a security mechanism. Your function can then only ever talk to the Lambda runtime API, which is in that secure execution environment. And so our security can be a lot stronger, because we know that no function code can ever talk directly out of your function into the Lambda service; it's all got to talk locally. And then the Lambda service gets that response from the runtime API and sends it back to the caller or whatever. Anyway, sidebar, thought that was nerdy and interesting. So what we've now done is we've released a new Extensions API. The Extensions API is another API that an extension can use to get information from Lambda. And there are two different types of extensions, just briefly: internal and external extensions.

Now, internal extensions run within the runtime process, so it's just basically another thread.
So you can use this for Python or Java or something and say, when the runtime starts, let's start it with another parameter and also run this extra file that may do some observability, or logging, or tracing, or finding out how long the modules take to load, for example. I know there's an example for Python. So that's one way of doing extensions, internal extensions, and there are two different flavors of those; I'll provide a link to the blog posts before we go too far down the rabbit hole on that.

And then the other part of Extensions is external extensions. And this is a cool part, because they actually run as completely separate processes, but still within that secure bubble, that secure execution environment that Lambda runs in. And this gives you some superpowers if you want. Because first of all, an extension can run in any language, because it's a separate process. So if you've got a Node function, you could run an extension in other kinds of languages. Now, what we do recommend is you run your extension as a compiled binary, just because you've got to provide the runtime that the extension's got to run in anyway, so a compiled binary is super easy and super useful. So something like Go is what a lot of people are doing, because you write a single extension in Go, and then you can use it on your Node functions, your Java functions, your PowerShell functions, whatever. So that's a really good, simple way that you can have the portability.

But now, what can these extensions do? Well, the extensions basically register with the Extensions API, and then they say, "Lambda, I want to know what happens when my function invokes." So the extension can start up; maybe it's got some initialization code, maybe it needs to connect to a database, or log into an observability platform, or pull down a secret, or whatever. That it can do, it's got its own init that can happen.
And then it's basically ready to go before the function invokes. The extension registers and says, "I want to know when the function invokes and when it shuts down." Cool, and that's just something that registers with the API. Then what happens is, when a function invoke comes in, Lambda tells the runtime API, "Hello, you now have an event," and sends it off to the Lambda function, which the runtime manages, but the extension or extensions, multiple ones, also hear information about that event. So it can tell you the time it's going to run and some metadata about that event. It doesn't have the actual event data itself, but it's like the Lambda context, a version of that, that it's going to send to the extension.

So the extension can use that to do various things. It can start collecting telemetry data. It can also instrument some of your code. It could be managing a secret as a separate process that it's going to cache in the background. For example, we've got one with AppConfig, which is really cool. AppConfig is a service where you manage parameters external to your Lambda function. Well, each time your Lambda function warm invokes, if you've got to do an external API call to retrieve that, it's going to be a little bit inefficient. First of all, you're going to pay for it, and it's going to take some time.

So, since the extension can run before the Lambda function, why don't we just cache that locally? And then when your Lambda function runs, it just makes a local HTTP call to the extension to retrieve that value, which is going to be super quick. And some extensions are super clever, because they're their own process. They will go, "Well, my value is valid for 30 minutes, and every 30 minutes, even if I haven't been run, I will then update the value." So that's useful. Extensions can then also, when the runtime ...
Sorry, let me back up.

When the runtime is finished, it sends its response back to the runtime API, and the extension can carry on processing, saying, "Oh, I've got the information about this. I know that this Lambda function has done X, Y, Z, so let me do some telemetry. Maybe, if I'm writing logs, I could write a log to S3 or to Kinesis or whatever. Do some kind of thing after the actual function invocation has happened." And then when it's ready, it says, "Hello, Extensions API, I'm telling you I'm done." And then it's gone. And then Lambda freezes the execution environment, including the runtime and the extensions, until another invocation happens, and the cycle repeats.

And then the last little bit is that, instead of an invoke coming in, we've extended the Lambda lifecycle, so when the environment is going to be shut down, the extension can receive the shutdown event and actually do some stuff and say, "Okay, well, I was connected to my observability platform, so let me close that connection. I've got some extra logs to flush out. I've got whatever else I need to do," and just be able to cleanly shut down that extra process that is running in parallel to the Lambda function.

Jeremy: All right.

Julian: So that was a lot of words.

Jeremy: That was a lot, and I bet that would be great conversation for a dinner party. Really kicks things up. Now, the good news is that, first of all, thank you for that though. I mean, that's super technical and super in-depth. And for anyone listening who ...

Julian: You did ask, I did warn you.

Jeremy: ... kind of lost their way ... Yes, but something that is really important to remember is that you likely don't have to write these yourself, right? There are all those companies you mentioned earlier, all those partners; they've already done this work.
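The register-then-listen loop just described can be sketched as a minimal external extension in Python. The endpoint paths follow the Lambda Extensions API; the extension name is a placeholder, and the network calls only run inside an actual execution environment, so the request builder stays testable on its own:

```python
# Sketch of an external extension's lifecycle: register for INVOKE and
# SHUTDOWN, then long-poll "next event" until Lambda tells us to shut down.
# "demo-extension" is a placeholder name; AWS_LAMBDA_RUNTIME_API is set by
# Lambda inside the execution environment.
import json
import os
import urllib.request

def register_request(runtime_api, extension_name):
    """Build the registration call naming the lifecycle events we want."""
    return urllib.request.Request(
        f"http://{runtime_api}/2020-01-01/extension/register",
        data=json.dumps({"events": ["INVOKE", "SHUTDOWN"]}).encode(),
        headers={
            "Lambda-Extension-Name": extension_name,
            "Content-Type": "application/json",
        },
    )

def run():
    api = os.environ["AWS_LAMBDA_RUNTIME_API"]
    with urllib.request.urlopen(register_request(api, "demo-extension")) as resp:
        ext_id = resp.headers["Lambda-Extension-Identifier"]
    while True:
        # Blocks until Lambda has something for us; the execution environment
        # is frozen between invokes, so this long poll costs nothing.
        next_req = urllib.request.Request(
            f"http://{api}/2020-01-01/extension/event/next",
            headers={"Lambda-Extension-Identifier": ext_id},
        )
        with urllib.request.urlopen(next_req) as resp:
            event = json.load(resp)
        if event["eventType"] == "SHUTDOWN":
            break  # flush telemetry, close connections, then exit cleanly
        # INVOKE: collect telemetry, refresh a cached secret, and so on.

if __name__ == "__main__":
    run()
```

Compiled to a binary (or shipped with its own runtime), this is essentially what the partner layers deliver.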
They've already figured this out, and they're providing you access to their tools via this, which allows you to build things.

Julian: Exactly.

Jeremy: So if you want to build an extension and you want to integrate your product with Lambda or so forth, then maybe go back and listen to this at half speed. But for those of you who just want to take advantage of it because of the great functionality, a lot of these companies have already done that for you.

Julian: Correct. And that's the sort of easiness thing, of just adding the Lambda layer or including it in a container image. And yeah, you don't have to worry about any of that, but behind the scenes there's some really cool functionality where we're literally opening up how Lambda operates and allowing you to have an impact when a function runs and responds.

Jeremy: All right. All right. So let me ask another, maybe an overly technical question. I have heard, and I haven't experienced this, but that when it runs the lifecycle that ends the Lambda function, I've heard something like it doesn't send the information right away, right? You have to wait for that Lambda to expire or something like that?

Julian: Well, yes, for now, but that's about to change. So currently, Extensions is actually in preview. And that's not because it's in beta or anything like that, but it's because we spoke to the partners and we didn't want to dump Extensions on the world, where all the partners had to come out with their extensions on day one and then try and figure out how customers are going to use them and everything. So what we really did, which I think in this case works out really well, is we worked with the partners and said, "Well, let's release this in preview mode and then give everybody a whole bunch of months to work out what the best use cases are, and how we can best use this." And some partners have said, "Oh, amazing. We're ready to go." And some partners have said, "Ah, it wasn't quite what we thought.
Maybe we're going to wait a bit, or we're going to do something differently, or we've got some cool ideas, just give us time." And so that's what this time has been.

The one other thing that has happened is we've actually added some performance enhancements during the preview. So yes, currently, during the preview, the runtime and all extensions need to finish before we send your function's response back. So if you're in an asynchronous mode, you don't really care, but obviously if you're in a synchronous mode behind an API, yeah, you don't really want that. But when Extensions goes GA, which isn't going to be long, that is no longer the case. So basically what'll happen is the runtime will respond and the result goes directly back to whoever's calling, maybe API Gateway, and the extensions can carry on, partly asynchronously, in the background.

Jeremy: Yep. Awesome. All right. And I know that the plan is to go GA soon. I'm not sure when that will be relative to when this episode comes out, but soon, so that's good to know that that is ...

Julian: And in fact, when we go GA, that performance enhancement is part of the GA. So when it goes GA, then you know, it's not something else you need to wait for.

Jeremy: Perfect. Okay. All right. So let's move on to another bit of, I don't know if this is extensibility of the actual product itself, or more, I think, extensibility of the workflow that you use to deploy to Lambda and deploy your serverless applications, and that's container image support. I mean, we've discussed it a lot. I think people kind of have an idea, but just give me your quick overview of what that is, to set some context here.

Julian: Yeah, sure. Well, container image support, as a simple sort of headline thing, is being able to build and package your functions as a container image. So you basically build a function using a Dockerfile.
So before, with a zip function, a lot of people use Serverless Framework or SAM, or whatever, and that's all abstracted away from you, but it's actually creating a zip file and uploading it to Lambda or S3. With container image support, you use a Dockerfile to build your Lambda function. That's the headline of what's happening.

Jeremy: Right. And so, and again, you mentioned packaging, right? I mean, that is the big thing here. This is a packaging format. You're not actually running the container in a Lambda function.

Julian: Correct. Yeah, let's maybe think, because I mean, "containers," in inverted commas again for people who are on the audio, is ...

Jeremy: What does it even mean?

Julian: Yeah, exactly. It can be quite an overload of terms and definitely causes some confusion. And I sort of think maybe there are four things in the container world. One, containers as an isolation mechanism. So on Linux, this is cgroups, seccomp, and other bits and pieces that can be used to isolate processes or maybe groups of processes. And then a second one, containers as a packaging mechanism. This is what Docker really popularized, and this is about taking some code and the dependencies needed to run the code, and then packaging them all up together, maybe with some metadata to describe it.

And then, three is containers as a design philosophy. This is the idea that if we can package and isolate software, it's easier to run. Maybe smaller pieces of software are easier to reason about and manage independently. I don't necessarily want to use the word microservices, but there's some component of that with it. And the emphasis here is on software rather than services, and standardized tooling to simplify your ops. And then the fourth thing is containers as an ecosystem. This is where all the products, tools, know-how, all the actual things of how to do containers live.
And I mean, these are certainly useful, but I wouldn't say they're anything without the other kinds of things.

What is cool and worth appreciating is how independent these things maybe are. So when I spoke about containers as isolation, well, we could actually replace containers as isolation with micro VMs, such as we do with Firecracker, and there's no real change in the operational properties. So if we think, what are we doing with containers and why? One of those is in a way ticked off with Lambda: Lambda does have secure isolation. And containers as a packaging format, I mean, you could replace it with static linking, and maybe there wouldn't really be a change, but there's less convenience. And the design philosophy, that could really be applicable if we're talking microservices; you can have instances and certainly functions, and containers, all the same kind of thing.

So if we talk about the packaging of Lambda functions, it's really for people who are more familiar with containers. Why does Lambda have to be different? Why does Lambda have to be a snowflake, in a way, that you have to manage differently? And if you are packaging dependencies, and you're doing npm or pip install, and you're used to building Dockerfiles, well, why can't we just do the same thing for Lambda? And we've got some other things that come with that: larger function sizes, up to 10 gig, which is enabled with some of this technology. So it's a packaging format, but on the backend there's a whole bunch of different stuff which has to be done to allow this. The benefits are, you use your tooling. You've got your CI/CD pipelines already for containers; well, you can use those.

Jeremy: Yeah, yeah. And I actually like that idea too. And when I first heard of it, I was like, I have nothing against containers, containers are great. But when I was thinking about it, I'm like, "Wait, container? No, what's happening here? We're losing something."
But I will say, when Lambda layers came out, which was I think maybe 2019 or something like that, maybe 2018, the idea of it made a lot of sense, being able to kind of supplement, add additional dependencies or code or whatever. But it always just seemed awkward, and some of the publishing for it was a little bit awkward. The versioning used a numbered versioning instead of semantic versioning and things like that. And then you had to share it to multiple places, and if you published it as a SAR app, then you got global distri ... Anyways, it was a little bit hard to use.

And so when you're trying to package large dependencies and put those in a layer and then combine them with a Lambda function, the other problem you had was you still had a maximum size that you could use when those were combined. So I like this idea of saying, "Look, I'd like to just kind of create this little isolate," like you said, "put my dependencies in there." Whether that's PyTorch or some other thing that is a big dependency that maybe I don't want to install directly in a Lambda layer, or don't want to do directly in my Lambda function. You do that together, and then that whole process just is a lot easier. And then you can actually run those containers; you could run those locally and test those if you wanted to.

Julian: Correct. So that's also one of the sort of superpowers of this. And that's what I was talking about, just being able to package them up. Well, that now enables a whole bunch of extra stuff. So yes, first of all, you can then use those container images that you've created for your local testing. And I know it's silly for anyone to poo poo local testing. And we do like to say, "Well, bring your testing to the cloud rather than bringing the cloud to your testing." But testing locally for unit tests is super great. It's going to be super fast.
You can iterate on your Lambda functions, but we don't want to be mocking all of DynamoDB, or building some half-baked S3 locally.

But the cool thing is, the same Dockerfile that you're going to run in Lambda can be the same Dockerfile you use to build the function that you run locally. And it is literally exactly the same Lambda function that's going to run. And yes, that may be locally, but with a bit of a stretch, you could also run those Lambda functions elsewhere. So even if you need to run it on EC2 instances, or ECS, or Fargate, or some kind of thing, this gives you a lot more opportunities to be able to use the same Lambda function, maybe in different ways, shapes, or forms, even if it's on-prem. Now, obviously you can't recreate all of Lambda, because that's connected to IAM, and it's got huge availability, and scalability, and latency, and all those kinds of things, but you can actually run a Lambda function in a lot more places.

Jeremy: Yeah. Which is interesting. And then the other thing I had mentioned earlier was the size. So now the size of these containers, or these packages, can be much, much bigger.

Julian: Yeah, up to 10 gig. So the serverless purists in the back are shouting, "What about cold starts? What about cold starts?"

Jeremy: That was my next question, yes.

Julian: Yeah. I mean, zip function archives are also still available, nothing changes with that, and Lambda layers, which many people use and love, that's all available. This isn't a replacement, it's just a new way of doing it. So now we've got Lambda functions that can be up to 10 gig in size, and surely, surely that's got to be insane for cold starts. But actually, part of the work I was talking about earlier that we've done on the backend to support this is to be able to support these super large package sizes.
And the high-level thing is that we actually cache those things really close to where the Lambda function is going to be run.

Now, in the Docker ecosystem, you build your Dockerfiles based on base images, and so this needs to be Linux. One of the super things with container image support is you don't have to use Amazon Linux or Amazon Linux 2 for Lambda functions; you can actually now build your Lambda functions on Ubuntu, Debian, or Alpine, or whatever else. And so that also gives you a lot more functionality and flexibility. You can use the same Linux distribution, maybe across your entire suite, be it on-prem or anywhere else.

Jeremy: Right. Right.

Julian: And there are two little components. There's a runtime interface client, which you install, and it's just another Docker layer. That's the runtime API shim that talks to the runtime API. And then there's a runtime interface emulator, and that's the thing that pretends to be Lambda, so you can shunt those events between HTTP and JSON. And that's the thing you would use to run locally. So the runtime interface client means you can use any Linux distribution, add the runtime interface client, and you're compatible with Lambda, and then the runtime interface emulator is what you would use for local testing, or if you want to spread your wings and run your Lambda functions elsewhere.
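As a rough sketch of those two pieces: a handler that is identical whether it ships as a zip or a container image, plus a tiny client that posts an event to the runtime interface emulator when the image runs locally. The base image, image name, port mapping, and event fields here are assumptions for illustration, not the only way to do it:

```python
# app.py -- the handler is plain Lambda code; a minimal Dockerfile for it
# might look like this (an assumed sketch):
#
#   FROM public.ecr.aws/lambda/python:3.12
#   COPY app.py ${LAMBDA_TASK_ROOT}
#   CMD ["app.handler"]
#
# The AWS base images bundle the runtime interface client and emulator, so
# `docker run -p 9000:8080 my-image` exposes the Invoke API locally.
import json
import urllib.request

def handler(event, context):
    # Echo a field back so there is something observable to test.
    return {"statusCode": 200, "body": f"hello {event.get('name', 'world')}"}

# The emulator listens on the standard Lambda Invoke API path.
RIE_URL = "http://localhost:9000/2015-03-31/functions/function/invocations"

def build_invoke_request(url, event):
    """Package an event as the JSON POST the emulator (and Lambda) expect."""
    return urllib.request.Request(
        url,
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json"},
    )

def invoke_local(event, url=RIE_URL):
    """Call the locally running container image much like the Lambda service would."""
    with urllib.request.urlopen(build_invoke_request(url, event)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(invoke_local({"name": "local test"}))
```

The same `handler` runs unchanged in Lambda; only the transport around it differs.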
And so, eventing and this idea of event-driven applications and so forth has just become much more popular, and I think much more mainstream. So what are your thoughts? What are you seeing in terms of, especially working with so many customers and businesses that are using this now, how are you seeing this sort of evolution or adoption of event-driven applications?

Julian: Yeah. I mean, it's quite funny to think that event-driven applications were actually the genesis of Lambda, rather than it being serverless. I mentioned earlier about starting with S3. Yeah, the whole crux of Lambda has been, I respond to an event from API Gateway, or something on SQS, or via the API, or anything. And so the whole point, in a way, of Lambda has been this event-driven computing, which I think people are starting to understand as a bigger thing than, "Oh, this is just the way you have to do Lambda." Because I do think that serverless has a unique challenge, where there is a new conceptual learning maybe that you have to go through. And one other thing that holds back serverless development is that people are used to client-server, and maybe ports and sockets. And even if you're doing containers, or on-prem, or EC2, you're talking IP addresses and load balancers, and sockets and firewalls, and all this kind of thing.

But ultimately, when we're building these applications that are going to be composed of multiple services talking together through APIs and events, the events are actually going to be a super part of it. And I know he's not for so much longer, but my ultimate boss, and I can blame Jeff Bezos just a little bit, because he did say that, "If you want to talk via anything, talk via an API." And he was 100% right, and that was great. But now we're sort of evolving, and it doesn't just have to be an API, and it doesn't have to be something behind API Gateway or some API that you can run.
And you can use the sort of power of events, particularly in an asynchronous model, to not just be "forced," again in inverted commas, to use APIs, but have far more flexibility in how data and information is going to flow through, maybe not just your application, but your suite of applications, or to and from your partners, or wherever that is.

And ultimately, applications are going to be distributed, and maybe that means connecting to partners, and that could be SaaS partners, or it's going to be an on-prem component, or maybe things in other kinds of places. And those things need to communicate. And so this way of thinking about events is a super powerful way of thinking about that.

Jeremy: Right. And it's not necessarily new. I mean, we've been doing webhooks for quite some time. And that idea of, something is going to happen somewhere and I want to be notified of it, is, again, not a new concept. But I think certainly the way that it's evolved with Lambda, and the way that other FaaS products have done eventing and things like that, is just those tight integrations, and just all of the, I guess, the connective tissue that runs between those things to make sure that the events get delivered, and that you can DLQ them, and you can do all these other things with retries and stuff like that. It's pretty powerful.

I know you have, I actually just mentioned this on the last episode, one of my favorite books, I think, that changed my thinking and really got me thinking about how microservices communicate with one another. And that was Building Microservices by Sam Newman, which I actually said was sort of like my Bible for a couple of years; yes, I used that. So what are some of the others? I know you have a favorite book on this.

Julian: Well, there's that Building Microservices by Sam Newman, and I think there's a part two. I think it's part two, or there's another one ...

Jeremy: Hopefully.

Julian: ... in the works.
I think even on O'Reilly's website, you can go and see some preview copies of it. I actually haven't seen that. But yeah, I mean that is a great kind of Bible. And sometimes we do conflate this microservices thing with a whole bunch of stuff, but if you are talking events, you're talking about separating things. But yeah, the book recommendation I have is one called Flow Architectures by James Urquhart. And James Urquhart actually works with VMware, but he's written this book which is looking sort of at the current state and also looking into the future about how does information flow through our applications and between companies and all this kind of thing.

And he goes into some of the technology. When we talk about flow, we are talking about streams and we're talking about events. So let's maybe put some AWS words around it: streams would be something like Kinesis and events would be something like EventBridge, and topics would be SNS, and SQS would be queues. And I know we've got all these things and I wish some clever person would create the one flow service to rule them all, but we're not there. And they've also got different properties, which are helpful for different things, and I know confusingly some of them merge. But James' sort of big idea is, in the future we are going to be able to move data around between businesses, between applications. So how can we think of that as a flow? And what does that mean for designing applications and how we handle that?

And Lambda is part of it, but even more nicely, I think, is even some of the native integrations where you don't have to have a Lambda function. So if you've got API Gateway talking to Step Functions directly, for example, well, that's even better. I mean, you don't have any code to manage, and if it's certainly any code that I've written, you probably don't want to manage it. So yeah. I mean this idea of flow, Lambda's great for doing some of this moving around.
But we are even evolving to be able to flow data around our applications without having to do anything and just wire up some things in a console or in a terminal.

Jeremy: Right. Well, so you mentioned, someone could build the ultimate sort of flow control system or whatever. I mean, I honestly think EventBridge is very close to that. And I actually had Mike Deck on the show. I think it was like episode five. So two years ago, whenever it was when the show came out. I mean, when EventBridge came out. And we were talking and I sort of made the joke, I'm like, so this is like serverless webhooks, essentially, because there was the partner integrations where partners could push events onto an event bus, which they still can do. But this has evolved, right? Because the issue was always sort of like, I would have to subscribe to webhooks, I'd have to build a webhook to get events from a particular company. Which was great, always worked fine, but you're still maintaining that infrastructure.

So EventBridge comes along, it creates these partner integrations, and now you can just push an event onto that bus and your applications, whether it's a Lambda function or other services, can consume it. You can push them to an SQS queue, you can push them into a Kinesis stream, all these different destinations. You can go ahead and pull that data in and that's just there. So you don't have to worry about maintaining that infrastructure. And then, the EventBridge team went ahead and released the destination API, I think it's called.

Julian: Yeah, API destinations.

Jeremy: EventBridge API destinations, right, where now you can set up these integrations with other companies, so you don't even have to make the API call yourself anymore, but instead you get all of the retries, you get the throttling, you get all that stuff kind of built in. So I mean, it's just really, really interesting where this is going.
And actually, I mean, if you want to take a second to tell people about EventBridge API destinations, what that can do, because I think that now sort of creates both sides of that equation for you.

Julian: It does. And I was just thinking over there, you've done a 10 times better job at explaining API destinations than I have, so you've nailed it on the head. And in fact it is that kind of simple. It is just, events land up in your EventBridge and you can just pump events to any arbitrary endpoint. So it doesn't have to be in AWS, it can be on-prem. It can be to your Raspberry Pi, it can literally be anywhere. But it's not just about pumping the events over there because, okay, how do we handle failover? And how do we handle throttling? And so this is part of the extra cool goodies that came with API destinations, is that you can, for instance, if you are sending events to some external API and you're only licensed for 1,000 invocations, not invocations, that could be too Lambda-ish, but 1,000 hits on the API every minute.

Jeremy: Quotas. I think we call them quotas.

Julian: Quotas, something like that. That's a much better term. Thank you, Jeremy. And some sort of quota, well, you can just apply that in API destinations and it'll basically store the data in the meantime in EventBridge and fire that off to the API destination. If the API destination is in that sort of throttle, or if the API destination is down, well, it's going to be able to do some exponential back-off or calm down a little bit, don't over-flood this external API. And then eventually when the API does come back, it will be able to send those events. So that does just really give you excellent power, rather than maintaining all these individual API endpoints yourself and handling the availability of the endpoint API in whatever your code is that needs to talk to that destination.

Jeremy: Right. And I don't want to oversell this to anybody, but that also ...

Julian: No, keep going.
Keep going.

Jeremy: ... adds the capability of enhanced security, because you're not exposing those API keys to your developers or anybody else, they're all baked in and stored within API destinations or within EventBridge. You have the ability, you mentioned this idea of not needing Lambda to maybe talk directly, API Gateway to DynamoDB or to Step Functions or something like that. I mean, the cool thing about this is you do have translation capabilities, or transformation capabilities in EventBridge where you can transform the event. I haven't tried this, but I'm assuming it's possible to say, get an event from Salesforce and then pipe it into Stripe or some other API that you might want to pipe it into.

So I mean, just that idea of having that centralized bus that can communicate with all these different things. I mean, we're talking about distributed systems here, right? So why is it different sending an event from my microservice A to my microservice B? Why can't I send it from my microservice A to company Y's microservice B or whatever? And being able to do that in a secure, reliable way, with all of that stuff kind of built in for you, I think it's amazing. So I love EventBridge. To me EventBridge is one of those services that rivals Lambda. It's, I guess, as important as Lambda is in this whole serverless equation.

Julian: Absolutely, Jeremy. I mean, I'm just sitting here. I don't actually have to say anything. This is a brilliant interview and Jeremy, you're the expert. And you're just laying down all of the excellent use cases. And exactly it. I mean, I like to think we've got sort of three interlinked services which do three different things, but are awesome. Lambda, we love if you need to do some processing or you need to do something that's literally your business logic. You've got EventBridge that can route data in and out of SaaS partners to any other kind of API. And then you've got Step Functions that can do some coordination.
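The transformation Jeremy speculates about — reshaping an event on the bus before it reaches a target — is what EventBridge input transformers do: pull fields out of the event with JSON paths and substitute them into a template. Here's a toy, local-only simulation of that idea; the field names and event shape are made up for illustration, not a real Salesforce or Stripe payload:

```python
import json
import re

def transform(event, input_paths, template):
    """Toy EventBridge-style input transformer: input_paths maps
    placeholder names to dotted paths into the event, and the template
    references them as <name>."""
    def lookup(path):
        value = event
        for part in path.split("."):
            value = value[part]
        return value
    resolved = {name: lookup(path) for name, path in input_paths.items()}
    return re.sub(r"<(\w+)>", lambda m: str(resolved[m.group(1)]), template)

# Hypothetical CRM-ish event reshaped for a hypothetical payment API:
event = {"detail": {"account": {"id": "ACME-42"}, "amount": 1999}}
out = transform(
    event,
    {"acct": "detail.account.id", "amt": "detail.amount"},
    '{"customer": "<acct>", "charge_cents": <amt>}',
)
print(json.loads(out))  # {'customer': 'ACME-42', 'charge_cents': 1999}
```

The real service expresses the same two pieces as `InputPathsMap` and `InputTemplate` on a rule target, so the reshaping happens on the bus with no function code involved.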
And they all work together, but you've got three different things that really have sort of superpowers in terms of the amount of stuff you can do with them. And yes, start with them. If you land up bumping up against any kind of thing that doesn't work, well, first of all, get in touch with me, I'll work on that.

But then you can maybe start thinking about, is it containers or EC2, or that kind of thing? But using literally just Lambda, Step Functions and EventBridge, okay. Yes, maybe you're going to need some queues, topics and APIs, and that kind of thing. But ...

Jeremy: I was just going to say, add DynamoDB in there for some permanent state or for some data persistence. Right? Yeah. But other than that, no, I think you nailed it. Honestly, sometimes you're starting to build applications and yeah, you're right. You maybe need a queue here and there and things like that. But for the most part, no, I mean, you could build a lot with those three or four services.

Julian: Yeah. Well, I mean, even think of what you used to do before API destinations. Maybe you'd drop something on a queue, you'd have Lambda pull that from the queue. You'd have Lambda concurrency, which would be set to five per second, to then send that to an external API. If it failed going to that API, well, you've got to then dump it to Lambda destinations or to another SQS queue. You then got something ... You know, I'm going down the rabbit hole. Or just put it on EventBridge ...

Jeremy: You just have it magically happen.

Julian: ... or we talk about removing serverless infrastructure, not normal infrastructure, and just removing even the serverless bits, which is great.

Jeremy: Yeah, no. I think that's amazing. So we talked about a couple of these different services, and we talked about packaging formats, and we talked about event-driven applications, and all these other things.
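The do-it-yourself delivery pattern Julian describes (queue, rate-limited Lambda, retry, dead-letter) is what API destinations absorbs for you. The calm-down behavior is capped exponential back-off; here is a simplified model of it, not EventBridge's actual algorithm:

```python
def backoff_delays(base=1.0, cap=60.0, attempts=6):
    """Capped exponential back-off: 1s, 2s, 4s, ... up to the cap."""
    return [min(base * (2 ** i), cap) for i in range(attempts)]

def deliver_with_retry(send, max_attempts=6):
    """Try a flaky delivery function, backing off between failures.
    Returns (succeeded, attempts_used). The sleeps are skipped so this
    sketch runs instantly; a real sender would wait each delay."""
    for attempt, delay in enumerate(backoff_delays(attempts=max_attempts), start=1):
        if send():
            return True, attempt
        # real implementation: time.sleep(delay) before the next attempt
    return False, max_attempts

# A fake endpoint that is "down" for its first two calls:
calls = {"n": 0}
def flaky_send():
    calls["n"] += 1
    return calls["n"] > 2

print(deliver_with_retry(flaky_send))  # (True, 3)
```

With API destinations, this loop plus the dead-letter handling disappears from your code: events wait in EventBridge and are redelivered when the endpoint recovers.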
And a lot of this stuff, even though some of it may be familiar and you could probably equate it or relate it to things that developers might already know, there is still a lot of new stuff here. And I think my biggest complaint about serverless was not about the capabilities of it, it was basically the education and the ability to get people to adopt it and understand the power behind it. So let's talk about that a little bit because ... What's that?

Julian: It sounds like my job description, perfectly.

Jeremy: Right. So there we go. Right, that's what you're supposed to be doing, Julian. Why aren't you doing it? No, but you are doing it. You are doing it. No, and that's why I want to talk to you about it. So you have that series on the Well-Architected Framework and we can talk about that. There's a whole bunch of really good resources on this. Obviously, you're doing videos and conferences, well, you used to be doing conferences. I think you probably still do some of those virtual ones, right? Which are not the same thing.

Julian: Not quite, no.

Jeremy: I mean, it was fun seeing you in Cardiff and where else were you?

Julian: Yeah, Belfast.

Jeremy: Cardiff and Northern Ireland.

Julian: Yeah, exactly.

Jeremy: Yeah, we were all over the place together.

Julian: With the Guinness and all of us. It was brilliant.

Jeremy: Right. So tell me a little bit about, sort of, the education process that you're trying to do. Or maybe even where you sort of see the state of serverless education now, and just sort of where it's evolved, where we're getting best practices from, what's out there for people. And that's a really long question, but I don't know, maybe you can distill that down to something usable.

Julian: No, that's quite right. I'm thinking back to my extensions explanation, which is a really long answer. So we're doing really long stuff, but that's fine. But I like to also bring this back to thinking about the people aspect of IT.
And we talk a lot about the technology, and Lambda is amazing and S3 is amazing and all those kinds of things. But ultimately it is still sort of people lashing together these services and building the serverless applications, and deciding what you even need to do. And so the education is very much tied with, of course, having the products and features that do lots of kinds of things. And serverless, there's always this lever, I suppose, between simplicity and functionality. And we are adding lots of knobs and levers and everything to Lambda to make it more feature-rich, but we've got to try and keep it simple at the same time.

So there is sort of that trade-off, and of course with that, that obviously means not just the education about Lambda and serverless, but generally, how do I build applications? What do I do? And so you did mention the Well-Architected Framework. And so for people who don't know, this came out in 2015, and in 2017 there was a Serverless Lens which was added to it; that is basically serverless-specific information for Well-Architected. And Well-Architected means bringing best practices to serverless applications. If you're building prod applications in the cloud, you're normally looking to build and operate them following best practices. And this is useful stuff throughout the software life cycle, it's not just at the end to tick a few boxes and go, "Yes, we've done that." So start early with the Well-Architected journey, it'll help you.

And it's just to answer the question, am I well architected? And I mean, that is a bit of a fuzzy question. But the idea is to give you more confidence in the architecture and operations of your workloads. And that's not a goal in itself, but it's to reduce and minimize the impact of any issues that can happen. So what we do is we try and distill some of our questions and thoughts on how you could do things, and we've built that into the Well-Architected Framework.
And so the Serverless Lens has a few questions on operational excellence, security, reliability, performance efficiency, and cost optimization. Excellent. I knew I was going to forget one of them and I didn't. So yeah, these are things like, how do you control access to an API? How do you do lifecycle management? How do you build resiliency into your application? All these kinds of things.

And so in the Well-Architected Framework with the Serverless Lens there's a whole bunch of guidance to help you do that. And I have been slowly writing a blog series to literally cover all of the questions; there are nine questions in the Well-Architected Serverless Lens. And I'm about halfway through, and I had to pause because we have this little conference called re:Invent, which requires one or two slides to be created. But yeah, I'm desperately keen to pick that up again. And yeah, that's just providing some really, sort of more opinionated stuff, because the documentation is awesome and it's very in-depth and it's great when you need all that kind of stuff. But sometimes you want to know, well, okay, just tell me what to do or what do you think is best, rather than these are the seven different options.

Jeremy: Just tell me what to do.

Julian: Yeah.

Jeremy: I think that's a common question.

Julian: Exactly. And I'll launch off from that to mention my colleague, James Beswick. He writes one or two things on serverless ...

Jeremy: Yeah, I mean, every once in a while you see something from him. Yeah.

Julian: ... every day. The Besbot machine of serverless. He's amazing. James, he's so knowledgeable and writes like a machine. He's brilliant. Yeah, I'm lucky to be on his team. So when you talk about education, I learn from him. But anyway, in a roundabout way, he's created this blog series, another series, called the Lambda Operations Guide. And this is literally a whole in-depth study on how to operate Lambda.
And it goes into a whole bunch of things. It's sort of linked to the Serverless Lens because there's a lot of common kind of stuff, but it's also a great read if you are more nerdily interested in Lambda than just firing off a function, just to read through it. It's written in an accessible way. And it has got a whole bunch of information on how to operate Lambda and some of the stuff behind the scenes, how it works, just so you can understand it better.

Jeremy: Right. Right. Yeah. And I think you mentioned this idea of confidence too. And I can tell you right now I've been writing serverless applications, well, let's see, what year is it? 2021. So I started in 2015, writing or building applications with Lambda. So I've been doing this for a while and I still get to a point every once in a while, where I'm trying to put something in CloudFormation or I'm using the Serverless Framework or whatever, and you're trying to configure something and you think about, well, wait, how do I want to do this? Or is this the right way to do it? And you just have that moment where you're like, well, let me just search and see what other people are doing. And there are a lot of myths about serverless.

As much good information as is out there, there's a lot of bad information out there too. And that's something that is kind of hard to combat, but I think that maybe we could end it there. What are some of the things, the questions people are having, maybe some of the myths, maybe some of the concerns, what are those top ones that you think you could sort of ...

Julian: Dispel.

Jeremy: ... dispel, yeah. That you could say, "Look, these aren't things to worry about. And again, go and read your blog post series, go and read James' blog post series, and you're going to get the right answers to these things."

Julian: Yeah.
I mean, there are misconceptions and some of them are just historical, where people think Lambda functions can only run for five minutes; they can run for 15 minutes. Lambda functions can also now run with up to 10 gig of RAM. Before re:Invent it was only 3 gig of RAM. That's a three times increase, with a proportional increase in CPU. So I like to say, if you had a CPU-intensive job that took 40 minutes and you couldn't run it on Lambda, you've now got three times the CPU. Maybe you can run it on Lambda now, because that would work. So yeah, some of those historical things have just changed. We've got EFS for Lambda, so that idea that you can't do state with Lambda ... EFS and NFS isn't everybody's cup of tea, but that's certainly going to help some people out.

And then the other big one is also cold starts. And this is an interesting one because, obviously, we've sort of solved the cold start issue when connecting Lambda functions to a VPC, so that's no longer an issue. And that's been a barrier for lots of people, for good reason, and that's now no longer the case. But the other thing for cold starts is interesting because people do still get caught up on cold starts, particularly in development, because they create a Lambda function, they run it, that's a cold start, then they update it and run it, and go, oh, that's a cold start. And they don't sort of grok that the more you run your Lambda function, the fewer cold starts you have, just because they're warm starts. It's literally only the Lambda functions that start running at exactly the same time that will have a cold start, but then every subsequent Lambda function invocation, for quite a while, will be using a warm function.

And so as usage ramps up, we see cold starts only in a small percentage of the invocations that are actually going to happen.
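Julian's point — only invocations that arrive when no warm container is free pay a cold start — can be modeled with a toy warm-pool simulation. This is purely illustrative; real Lambda placement and container recycling are far more sophisticated:

```python
def count_cold_starts(invocations):
    """invocations: list of (start_time, duration) pairs.
    A warm container is reused if it is free at start_time; otherwise a
    new container is created, which counts as a cold start."""
    free_at = []  # per-container time at which it becomes free again
    cold = 0
    for start, duration in sorted(invocations):
        for i, t in enumerate(free_at):
            if t <= start:          # a warm container is free: reuse it
                free_at[i] = start + duration
                break
        else:                       # no warm container free: cold start
            free_at.append(start + duration)
            cold += 1
    return cold

# Three overlapping invocations need three containers (three cold starts);
# the three later ones reuse the warm pool, so no further cold starts:
events = [(0, 5), (1, 5), (2, 5), (10, 5), (11, 5), (12, 5)]
print(count_cold_starts(events))  # 3
```

It also shows why dev-loop testing feels cold-start heavy: every deploy empties the warm pool, so the first invocation after each update is always cold.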
And when we're talking again about the container image support, that's got a whole bunch of complexity which people are trying to understand. Hopefully, people are learning from this podcast about that as well. But also with the cold starts with that, those are huge, and there are particular ways that you can construct your Lambda functions to really reduce those cold starts, and it's best practices anyway. But yeah, cold starts is also definitely one of those myths. And the other one ...

Jeremy: Well, one note on cold starts too, just as something that I find to be interesting. I know that we, I even had to spend time battling with that earlier on, especially with VPC cold starts. That's all sort of gone away now, so much more efficient. The other thing is like provisioned concurrency. If you're using provisioned concurrency to get your cold starts down, I'm not even sure that's the right use for provisioned concurrency. I think provisioned concurrency is more just to make sure you have enough capacity because of the ramp-up time for Lambda. You certainly can use it for cold starts, but I don't think you need to. That's just my two cents on that.

Julian: Yeah. No, that is true. And they're two different use cases for the same kind of thing. Yeah. As you say, Lambda is pretty scalable, but there is a bit of a ramp-up to get up to many, many, many thousands or tens of thousands of concurrent executions. And so yeah, using provisioned concurrency, you can get that up in advance. And yeah, some people do also use provisioned concurrency for getting those cold starts down, and that is another very valid use case. But it's only an issue for synchronous workloads as well. Anything that is asynchronous you really shouldn't be caring about too much, other than from a cost perspective, because it's going to take longer to run.

Jeremy: Sure. Sure.
I have a feeling that the last one you were going to mention, because this one bugs me quite a bit, is this idea of no-ops, or some people call it ops-less, which I think is kind of funny. But that's one of those things where, oh, it drives me nuts when I hear this.

Julian: Yeah, exactly. And it's a frustrating thing. And I think often, sometimes when people are talking about no-ops, they either have something to sell you, and sometimes what they're selling you is getting rid of something, which never is the case. It's not as though we develop serverless applications and we can then get rid of half of our development team. It just doesn't work like that. And it's crazy, in fact. And when I was talking about the people aspect of IT, this is a super important thing. And me coming from an infrastructure background, everybody is dying in their jobs to do more meaningful work and to do more interesting things and have the agility to try those experiments or try something else. Or do something that's better, or even improve the way you build or the way your CI/CD pipeline runs, or anything, rather than just having to do a lot of work in the lower levels.

And this is what serverless really helps you do. We'll take over a whole lot of the ops for you, but it's not all of the ops, because in a way there's never an end to ops. Because you can always do stuff better. And it's not just the operations of deploying Lambda functions and limits and all that kind of thing. But I mean, think of observability and not knowing just about your application, but knowing about your business. Think of it if you had the time that you weren't just monitoring function invocations and monitoring how long things were happening, but imagine if you were able to pull together dashboards of exactly what each transaction costs as it flows through your whole entire application.
Think of the benefit of that to your business. Or think of the benefit that in real time, even if it's on Lambda function usage or something, you can say, "Well, oh, there's an immediate drop-off or pick-up in one region in the world or one particular application." You can spot that immediately. That kind of stuff, you just haven't had time to play with, to actually build.

But if we can take over some of the operational stuff for you and run one or two or trillions of Lambda functions in the background, just to keep this all ticking along nicely, you're always going to have an opportunity to do more ops. But I think the exciting bit is that ops is not just IT infrastructure plumbing ops, but you can start doing even better business ops, where you can have more business visibility and more cool stuff for your business, because we're not writing apps just for funsies.

Jeremy: Right. Right. And I think that's probably maybe a good way to describe serverless: it allows you to focus on more meaningful work and more meaningful tasks maybe. Or maybe not more meaningful, but more impactful on the business. Anyways, Julian, listen, this was a great conversation. I appreciate it. I appreciate the work that you're doing over at AWS ...

Julian: Thank you.

Jeremy: ... and the stuff that you're doing. And I hope that there will be a conference soon that we will be able to attend together ...

Julian: I hope so too.

Jeremy: ... maybe grab a drink. So if people want to get a hold of you or find out more about serverless and what AWS is doing with that, how do they do that?

Julian: Yeah, absolutely. Well, please get hold of me anytime on Twitter, that's the easiest way probably, julian_wood. Happy to answer your question about anything serverless or Lambda. And if I don't know the answer, I'll always ask Jeremy, so you're covered twice over there. And then, three different things. If you're talking specifically Lambda, James Beswick's operations guide, have a look at that.
Just so many nuggets of super information. We've got another thing, we did just sort of jump around, you were talking about CloudFormation and a spark was going off in my head. We have something which we're calling the Serverless Patterns Collection, and this is really super cool. We didn't quite get to talk about it, but if you're building applications using SAM, the Serverless Application Model, or using the CDK, either way, we've got a whole bunch of patterns which you can grab.

So if you're pulling something from S3 to Lambda, or from Lambda to EventBridge, or SNS to SQS with a filter, all these kinds of things, they're literally copy-and-paste patterns that you can put immediately into your CloudFormation or your CDK templates. So when you are down the rabbit hole of Hacker News or Reddit or Stack Overflow, this is another resource that you can use to copy and paste. So go for that. And that's all hosted on our cool site called serverlessland.com. So that's serverlessland.com, and that's an aggregation site that we run, because we've got video talks, and we've got blog posts, and we've got learning path series, and we've got a whole bunch of stuff. Personally, I've got a learning path series coming out shortly on Lambda extensions and also one on Lambda observability. There's one coming out shortly on container image support. And our team is talking everywhere about as many things as we can, virtually. I'm actually speaking about container images at DockerCon, which is coming up, which is exciting.

And yeah, so serverlessland.com, that's got a whole bunch of information. That's just an easy one-stop shop where you can get as much information about AWS serverless services as you can. And if not, get in touch, I'm happy to help. I'm happy to also carry your feedback. And yeah, at the moment, internally, we're sort of doing our planning for the next cycle of what Lambda and all the serverless stuff we're going to do. So if you've got an awesome idea, please send it on.
And I'm sure you'll be super excited when something pops out in the near future, maybe a cool new functionality you could have been involved in.

Jeremy: Well, I know that serverlessland.com is an excellent resource, and it's not that the AWS Compute Blog is hard to parse through or anything, but serverlessland.com is certainly a much easier resource to get there.
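The "SNS to SQS with a filter" pattern Julian mentions relies on event pattern matching: a pattern document whose values are lists of allowed values for each field. A simplified, local-only sketch of that matching rule follows; real EventBridge patterns and SNS filter policies support more operators, such as prefix and numeric matching:

```python
def matches(pattern, event):
    """Simplified EventBridge-style matching: every key in the pattern
    must exist in the event, and the event's value must be one of the
    listed values; nested dicts recurse."""
    for key, allowed in pattern.items():
        if key not in event:
            return False
        if isinstance(allowed, dict):
            if not isinstance(event[key], dict) or not matches(allowed, event[key]):
                return False
        elif event[key] not in allowed:
            return False
    return True

# Hypothetical pattern: only order events from the EU regions pass the filter.
pattern = {"source": ["acme.orders"],
           "detail": {"region": ["eu-west-1", "eu-central-1"]}}
print(matches(pattern, {"source": "acme.orders", "detail": {"region": "eu-west-1"}}))  # True
print(matches(pattern, {"source": "acme.orders", "detail": {"region": "us-east-1"}}))  # False
```

In the copy-paste patterns, this document is what you'd place in a rule's `EventPattern` or a subscription's `FilterPolicy`, so only matching events reach the queue.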

26th & Glencoe Media Network
Pokemon crown thundra Final Thoughts

26th & Glencoe Media Network

Play Episode Listen Later Nov 10, 2020 39:33


Flow and Meta are back to give their final thoughts and reflections on the latest Pokémon Sword and Shield DLC expansion pack, Crown Thundra. Follow Flow on Social Media: www.twitter.com/flowmyhero www.instagram.com/flowmyhero Follow Meta on Social Media: www.twitter.com/Metrometa26 www.instagram.com/Metrometa26 Follow 26th & Glencoe: www.twitter.com/26thandG www.instagram.com/26thandglencoe www.facebook.com/26thandglencoe www.youtube.com/26thandglencoe  

26th & Glencoe Media Network
Pokémon Crown Thundra News Reaction

26th & Glencoe Media Network

Play Episode Listen Later Sep 29, 2020 6:43


Flow is here to review the news! Pokémon Sword and Shield: Crown Thundra will be releasing soon and Flow breaks down how he felt about it. Pokémon Sword and Shield: The Crown Tundra is the second downloadable content expansion pack for the 2019 role-playing video games Pokémon Sword and Shield on Nintendo Switch and a part of the Expansion Pass. The Crown Tundra centers around the legendary Pokémon Calyrex, resembling a mix of a deer, snowy plants, and the Crown jewels. Twitter: @Flowmyhero @MetroMeta26 @26thandG Instagram: Flowmyhero @MetroMeta26 @26thandG

Paraşüt'le Üretim Bandı
How does Thundra develop products?

Paraşüt'le Üretim Bandı

Play Episode Listen Later Jul 9, 2020 35:34


We spoke with Emrah Şamdan, VP of Product at Thundra, which offers observability and security solutions for serverless and container-based microservice architectures, about Thundra's product development culture, how they spun a new company out of Opsgenie, and how they set an example of intrapreneurship. * The story of Thundra's spin-off from Opsgenie * Reaching customers by building a community * Catching the waves of technological change * The kind of feedback they get when their customers are developers * Their perspective on churn * Pricing * Selling to both small and large companies * Improvements that make it easier to win customers from competitors * Thundra's organization chart - Product development with the Shape Up method


Talking Serverless
#16 - Emrah Samdan VP of Products at Thundra.io

Talking Serverless

Play Episode Listen Later Jun 24, 2020 37:43


This week we chat with Emrah Samdan, VP of Products at Thundra.io, where he discusses the benefits of implementing Chaos Engineering in your development process. --- Send in a voice message: https://anchor.fm/talking-serverless/message

Serverless Transformation
Thundra - Emrah & Serkan

Serverless Transformation

Play Episode Listen Later May 20, 2020 32:53


In this episode of Serverless Transformation, your host Ben Ellerby (VP Engineering at Theodo) chats with Emrah and Serkan from Thundra. Follow us on Medium: https://serverless-transformation.com/ Newsletter: https://www.getrevue.co/profile/serverless-transformation Theodo: https://www.theodo.co.uk/experts/serverless

nerdoholics
Shenanigans & Mayhem S02E14: Welcome to Thundra

nerdoholics

Play Episode Listen Later Apr 11, 2020 51:16


facebook: www.facebook.com/nerdoholics
email: www.nerdoholics@yahoo.com
twitter.com/@nerdoholics123
instagram: nerdoholics123
HELLO ADVENTURERS!!! Holy crap, we finally did it, we recorded a new game. It's been a long time and it took a good minute to remember how to play, but we're back at it. And it took very little time for the boys to throw curve balls at Mark he wasn't expecting. Thank you for waiting and thank you for listening. Now let's roll!!!
When you are done with us, please check out our friends:
The Animated Batcast https://www.facebook.com/theanimatedbatcast/
Number One Comic Books http://shoutengine.com/NumberOneComicBooks/
Relatively Random
Heiress Anonymous https://www.heiressanonymous.com/
Tales from the Yard https://soundcloud.com/user-752223467
LOLA https://soundcloud.com/user-752223467/ladies-of-the-leftover-army-podcast
Brute Force and Ignorance, a D&D podcast https://soundcloud.com/user-449327696
Heroes of Noise https://heroesofnoise.com/
Blerds R' Us https://soundcloud.com/user-589437294
and the Leftover Army podcast https://soundcloud.com/user-752223467

CBSI:Comic Book Bolo Podcast
Comic Book Market Trends for 3/18/20

CBSI:Comic Book Bolo Podcast

Play Episode Listen Later Mar 18, 2020 20:37


We discuss three hot and three cold comic book market trends for March 18, 2020 that are causing comic book prices to either heat up or cool down. This week we discuss Thundra, CBCS, Sleeper, and more!

Made in Turkey
#33 Serkan Özal and Emrah Şamdan from the Thundra Team, Which Recently Raised a $4 Million Investment

Made in Turkey

Play Episode Listen Later Mar 2, 2020 28:25


In this episode we host Serkan Özal, CTO, and Emrah Şamdan, product manager, of the Thundra team, which began inside Opsgenie (sold to Atlassian last year for $295 million) and was founded as a separate company on the day the sale closed. We talked with them about their journey from Opsgenie to Thundra, the $4 million investment they raised in recent months, and how valuable building a community is. We recommend this episode to anyone interested in technical topics, and serverless architecture in particular.
00:01 - Who is Serkan Özal?
00:54 - Who is Emrah Şamdan?
01:40 - How did the Thundra story begin?
02:19 - What is monitoring?
04:28 - Where did the name Thundra come from?
05:10 - Setting up Thundra as a company
08:15 - Business model
08:35 - What does Thundra do?
11:05 - Pricing and freemium tier features
12:50 - How the first customers were found, and the share of international customers
15:51 - Agreements made with thought leaders
19:25 - The story of the $4 million investment
20:25 - Where the incoming investment will be spent
21:18 - The share of digital marketing and events
24:51 - Experiences
26:55 - Closing
Serkan Özal contact: https://www.linkedin.com/in/serkanozal/
Emrah Şamdan contact: https://www.linkedin.com/in/emrah-samdan-424ba124/
Hazelcast: https://hazelcast.com/
Opsgenie: opsgenie.com
Comodo: comodo.com
Datadog: datadoghq.com
AWS Summit: https://aws.amazon.com/events/summits/?global-event-sponsorship.sort-by=item.additionalFields.sortdate&global-event-sponsorship.sort-order=asc
Serverless Türkiye: https://www.meetup.com/Cloud-Serverless-Turkey/?chapter_analytics_code=UA-93907526-1
What is Serverless (Serverless Nedir): https://medium.com/serverless-turkey/nedir-bu-serverless-637a59a44e81
Berkay Mollamustafaoğlu: https://www.linkedin.com/in/berkay/?originalSubdomain=tr

Talking Serverless
#2 - Serkan Özal Founder Thundra.io

Talking Serverless

Play Episode Listen Later Nov 17, 2019 36:59


In episode #2 of the Talking Serverless podcast, we talk to Serkan Özal, the founder and CTO of Thundra. Twitter --- Send in a voice message: https://anchor.fm/talking-serverless/message

Comic Book Syndicate
Quasar Qua-Nology #5 - Project Pegasus

Comic Book Syndicate

Play Episode Listen Later Oct 2, 2019 63:22


Project Pegasus is reviewed on Quasar Qua-Nology #5! Mike-EL & Josh take a look at Marvel Two-In-One #53-58, by Mark Gruenwald, Ralph Macchio, John Byrne & George Perez: the classic story featuring The Thing, Deathlok, Giant-Man, Thundra, Wundarr, Aquarian, and of course, Quasar!

The Fantasticast
Episode 344: Marvel Two-in-One #67 - Passport to Oblivion

The Fantasticast

Play Episode Listen Later Sep 21, 2019 49:57


This Fantasticast Could Be Worth $2500 To You! Hello, and welcome to episode 344 of The Fantasticast. Each week, Steve Lacey and Andy Leyland guide you through every issue, guest-appearance and cameo of The Fantastic Four. This week, we bid a fond farewell to one of our favourite supporting characters from the Fantastic Four comics of the 1970s. Treading a fine line between ally and nemesis, Thundra finally heads home, but the journey is not easy, especially when you've got a Hyperion by your side, a Thing in the way, and the Nth Command on your tail... Mark Gruenwald, Ralph Macchio, Ron Wilson, Gene Day, Frank Martin, John Costanza, Jim Salicrup, Jim Shooter, and Joe Sinnott present Marvel Two-in-One #67 - Passport to Oblivion, in which Alicia steps out (of her house on a possible date), Thundra and Hyperion step out (of our universe and into a mirror of Femizonia), and Steve and Andy step out (of their comfort zone by reading Shogun Warriors #20 and wondering just what the hell was going on in that). Send in your feedback to fantastic4podcast@gmail.com, leave your comments at the libsyn site, or at www.TheFantasticast.com. Follow us on twitter, where we are @fantasticast The Fantasticast is Patreon supported. Visit www.patreon.com/fantasticast to donate and support us. Original artwork by Michael Georgiou. Check out his work at mikedraws.co.uk Episode cover design by Samuel Savage.

Lords of the Long Box
Wolverine/Hulk Movie, Gambit show & Thundra to debut in She-Hulk Series

Lords of the Long Box

Play Episode Listen Later Sep 7, 2019 35:13


The Black Knight Report plus a Mikey Sutton exclusive scoop with some details on the Wolverine/Hulk Movie! Plus what female villain will be in She-Hulk? #LOTLB #Wolverine #Hulk T-Shirts now available here http://www.thegeekyswagshop.com/products/lords-of-the-longbox Show is Sponsored by http://www.krscomics.com use discount code of LOTLB to get 10% off any KRS comics Exclusives and 15% off at http://www.thegeekyswagshop.com Itunes https://podcasts.apple.com/us/podcast/lords-of-the-long-box/id1433149382 SoundCloud http://www.soundcloud.com/tim-vo1/ Stitcher http://www.stitcher.com/podcast/lords-of-the-long-box

Alphabet Flight: A Marvel Encyclopedic Adventure

Megan (Oh no, Lit Class and Rolling Misadventures podcast) joins Jessie in discussing a tall lady who is a warrior and had a thing for the Thing. Music by Fnshlne Twitter and Instagram: @Alphabetflight   Patreon

The Fantasticast
Episode 320: Marvel Two-in-One #57 - When Walks Wundarr

The Fantasticast

Play Episode Listen Later Mar 23, 2019 45:24


Shout! Shout! Let It All Out! Hello, and welcome to episode 320 of The Fantasticast. Each week, Steve Lacey and Andy Leyland guide you through every issue, guest-appearance and cameo of The Fantastic Four.   We're returning to the depths of the original Project Pegasus, as the Project Pegasus Saga moves towards its climax! Solarr and Klaw run rampant through the tunnels of Project Pegasus, while Thundra mopes in a cell, and Wundarr goes for a wanderr. Mark Gruenwald, Ralph Macchio, George Perez, Gene Day, Bob Sharen, John Costanza, Roger Stern, Al Milgrom, and Jim Shooter present Marvel Two-in-One #57. Join us, as we enjoy a conference in a bubble, a valiant attempt to make everything make sense, and a perfectly reasonable way to derail a train. Send in your feedback to fantastic4podcast@gmail.com, leave your comments at the libsyn site, or at www.TheFantasticast.com. Follow us on twitter, where we are @fantasticast The Fantasticast is Patreon supported. Visit www.patreon.com/fantasticast to donate and support us. The Fantasticast is part of the Flickering Myth Podcast network. Original artwork by Michael Georgiou. Check out his work at mikedraws.co.uk Episode cover design by Samuel Savage.

The Fantasticast
Episode 318: Marvel Two-in-One #56 - The Deadlier Of The Species

The Fantasticast

Play Episode Listen Later Feb 23, 2019 52:50


Five Wrestlers In Search Of Whatchamacallit Hello, and welcome to episode 318 of The Fantasticast. Each week, Steve Lacey and Andy Leyland guide you through every issue, guest-appearance and cameo of The Fantastic Four. Grab yer spandex and yer definitely-not-choreographed-at-all moves - it's time to bring some wrasslin' to the Pegasus Project! Thundra, blackmailed by a mysterious man with a cigar, leads the Grapplers into the tunnels of everyone's favourite energy research complex, and only Black Goliath, Quasar, and The Thing stand in their way! Mark Gruenwald, Ralph Macchio, George Perez, Gene Day, Bob Sharen, Irv Watanabe, Roger Stern, Jim Salicrup, Jim Shooter, John Byrne, and Terry Austin present Marvel Two-in-One #56 - The Deadlier Of The Species! Featuring a maudlin Thundra, a confusing timejump, and the true debut of Screaming Mimi/Songbird. And, as a bonus, we catch up with a sickly Mr Fantastic as he helps Spider-Man identify the formula for an anti-matter bomb. Uh, Reed, you know that a formula alone doesn't make a bomb, right? Send in your feedback to fantastic4podcast@gmail.com, leave your comments at the libsyn site, or at www.TheFantasticast.com. Follow us on twitter, where we are @fantasticast The Fantasticast is Patreon supported. Visit www.patreon.com/fantasticast to donate and support us. The Fantasticast is part of the Flickering Myth Podcast network. Original artwork by Michael Georgiou. Check out his work at mikedraws.co.uk Episode cover design by Samuel Savage.

Turuncu Pasaport
Thundra - Yeni Bir Ürün Doğarken

Turuncu Pasaport

Play Episode Listen Later Dec 19, 2018 41:55


We hear from Serkan and Emrah the story of how Thundra, born as part of Opsgenie and now a separate company, became another product and company in its own right.
- Positioning Thundra as a standalone product
- The last two years' journey from private beta to public beta
- Why integrations with products like Splunk and Honeycomb were built
- The product manager's journey from Opsgenie to Thundra
- How not to run a webinar :)
- Product sales models
- The technical changes required to become a separate company
- Why it is a separate product rather than part of Opsgenie
- Which SaaS products they use
- The choices they made to ship the product as fast as possible
- Their short-term goals
Participants:
Serkan Özal, founder and CTO: https://www.linkedin.com/in/serkanozal/
Emrah Şamdan, VP of Products: https://www.linkedin.com/in/emrah-samdan-424ba124/
Serhat Can: https://www.linkedin.com/in/serhatcan/

Turuncu Pasaport
Thundra - The Serverless Movement and the Birth of Thundra

Turuncu Pasaport

Play Episode Listen Later Dec 4, 2018 33:15


We spoke with Serkan Özal, founder and CTO of Thundra.io, which emerged from Opsgenie and became a separate company after the Atlassian sale, about serverless and the company's founding process. Topics:
- What serverless is and why it matters
- What AWS Lambda is and why it matters
- How AWS Lambda has changed and evolved over time
- How we use AWS Lambda inside Opsgenie
- The need that gave rise to Thundra
- The process of making Thundra a separate product
Guest: Serkan Özal https://www.linkedin.com/in/serkanozal/
Host: Serhat Can https://www.linkedin.com/in/serhatcan

The Fantasticast
Episode 256: Fantastic Four #184 - Aftermath: The Eliminator

The Fantasticast

Play Episode Listen Later Nov 11, 2017 55:31


Abyssinia Surbiton Hello, and welcome to episode 256 of The Fantasticast. Each week, Steve Lacey and Andy Leyland guide you through every issue, guest-appearance and cameo of The Fantastic Four. It's a bittersweet week for one of our hosts on The Fantasticast. It's the final episode recorded in the place where the podcast was conceived, where every episode was recorded, edited, and published. And yet, among all the endings is a reminder of beginnings, as we cover the first comic that Steve ever bought that was older than him - Fantastic Four #184. Len Wein, George Perez, and Joe Sinnott are on hand to transition us away from the crowded Baxter Building to the dilapidated Whisper Hill, as the Fantastic Four start their quest to locate Reed and Sue's missing son Franklin, encountering cyborgs, movies, vacuum jets, metallic eggs, and the many costumes of Thundra. Send in your feedback to fantastic4podcast@gmail.com, leave your comments at the libsyn site, or at www.TheFantasticast.com. Follow us on twitter, where we are @fantasticast The Fantasticast is Patreon supported. Visit www.patreon.com/fantasticast to donate and support us. The Fantasticast is part of the Flickering Myth Podcast network. Original artwork by Michael Georgiou. Check out his work at mikedraws.co.uk Episode cover design by Samuel Savage.

The Fantasticast
Episode 246: Fantastic Four #179 - A Robinson Crusoe In The Negative Zone

The Fantasticast

Play Episode Listen Later Sep 2, 2017 50:41


Hardback, Softback, Any Kind Of Back Hello, and welcome to episode 246 of The Fantasticast. Each week, Steve Lacey and Andy Leyland guide you through every issue, guest-appearance and cameo of The Fantastic Four. This week, we're peeling back the covers of Fantastic Four #179. Reed Richards is lost and almost naked in the Negative Zone, forced to hunt bats for sustenance and to make fire from quite literally nothing. His doppelganger, The Brute, has taken his place on the team. Thundra is flirting outrageously with Ben, Sue's got her doubts, and Tigra just wants lunch. Roy Thomas, Gerry Conway, Ron Wilson and Joe Sinnott present a tale full of double-crossing, film noir marathons, almost-pizza, a discussion of Kai Cole's recent statement regarding her marriage, unlikely giant robots, and sunken garbage scows. Oh, and a completely bland number one single. Send in your feedback to fantastic4podcast@gmail.com, leave your comments at the libsyn site, or at www.TheFantasticast.com. Follow us on twitter, where we are @fantasticast The Fantasticast is Patreon supported. Visit www.patreon.com/fantasticast to donate and support us. The Fantasticast is part of the Flickering Myth Podcast network. Original artwork by Michael Georgiou. Check out his work at mikedraws.co.uk Episode cover design by Samuel Savage.

The Fantasticast
Episode 244: Fantastic Four #178 - Call My Killer -- The Brute

The Fantasticast

Play Episode Listen Later Aug 19, 2017 64:00


The Counter-Earth Pantsless Pipe-Smoking Society Hello, and welcome to episode 244 of The Fantasticast. Each week, Steve Lacey and Andy Leyland guide you through every issue, guest-appearance and cameo of The Fantastic Four. Following an unexpected diversion with the last episode, we're back on track this week with a look at Fantastic Four #178. The Brute has joined the Frightful Four, and the team of villains is holding the Fantastic Four hostage in the Baxter Building. Will New York pay the princely sum of... ONE MILLION DOLLARS? Or will Marvel's First Family pay the ultimate price? Roy Thomas, George Perez, and Dave Hunt present Call My Killer --  The Brute, with a whole host of guest stars including Thundra, Tigra, Jimmy Carter, Ronald Reagan, Gerald Ford, The Impossible Man, and more. As well as this, we're also taking a look at Tom Hanks' oovoo, and discovering that there is nothing quite like The Fonz. Send in your feedback to fantastic4podcast@gmail.com, leave your comments at the libsyn site, or at www.TheFantasticast.com. Follow us on twitter, where we are @fantasticast The Fantasticast is Patreon supported. Visit www.patreon.com/fantasticast to donate and support us. The Fantasticast is part of the Flickering Myth Podcast network. Original artwork by Michael Georgiou. Check out his work at mikedraws.co.uk Episode cover design by Samuel Savage.

TVDONUT
TV Donut - Episode 3.14 - Ultimate Spider-Man

TVDONUT

Play Episode Listen Later May 25, 2017 61:30


Episode 14 - "I Hope We All Get Spider-Man" - The Donut crew is back in the Spider-Man universe, our second Spidey series in Season 3. We talk the adorableness of Stan Lee, the frustrations of Auto-Correct, how Spidey is actually exactly like Zack Morris, and how Hannah secretly hates Deadpool and not so secretly loves Thundra. #HotDogtoPig

Year One Comics
Episode 78: Annual 8 and Issue 178

Year One Comics

Play Episode Listen Later Dec 3, 2016 38:20


Wasp is possessed by the evil Doctor Spectrum's power prism after Hank tries to give it to her as a birthday gift. Many walls are smashed. Then Beast tries to pick up some women, gets caught in a thunderstorm, and falls victim to some idiot calling himself the Manipulator. Can you say fill-in issue? I knew you could...

Rolled Spine Podcasts
Tigra Two-in-One featuring Spider-Man & The Thing

Rolled Spine Podcasts

Play Episode Listen Later Sep 14, 2016 73:47


Note: We like our language NSFW salty, and there be spoilers here…Face Front, True Believer! Dr. Anj of Supergirl Comic Box Commentary fame finally joins Diabolu Frank & Mister Fixit for an episode focusing on the Were-Woman! Connecting the dots of Tigra’s various late Bronze Age solo and guest appearances, we cover Marvel Chillers #4 and Marvel Two-In-One #19 (both 1976,) Fantastic Four #177-184 (moving into 1977,) Marvel Team-Up #67 and Marvel Premiere #42 (1978,) before concluding with Marvel Team-Up #125 (January, 1983!) Besides multiple pairings of Tigra with Ben Grimm and Peter Parker, she also faces Kraven the Hunter twice; joins Invisible Girl, Thundra, Human Torch and Mr. Fantastic for an extended adventure; loses her mentor Dr. Tumolo; and faces the return of Zabo from her comics debut as Greer Grant Nelson: The Cat! Talent on these stories includes Chris Claremont, John Byrne, George Pérez, Roy Thomas, Gerry Conway, Ron Wilson, Joe Sinnott, J.M. DeMatteis, Kerry Gammill, Len Wein, Sal Buscema, Bill Mantlo, Frank Robbins, Jim Shooter, Archie Goodwin, John Warner, Ed Hannigan, Mike Vosburg & Ernie Chan! That kind of spread requires an episode tumblr gallery! We even managed to work Superman & Big Barda in (and out again!) That’s a lot of comics to bring in at under 90 minutes! Excelsior! As you can tell, we love a fierce conversation, so why don’t you socialize with us, either by leaving a comment on this page or… Friend us on Facebook. Roll through our tumblr. Email us at rolledspinepodcasts@gmail.com. Tweet us as a group @rolledspine, or individually as Diabolu Frank & Illegal Machine. Fixit don’t tweet. If The Marvel Super Heroes Podcast Blogger page isn’t your bag, try the umbrella Rolled Spine Podcasts WordPress blog.

The Fantasticast
Episode 189: Fantastic Four #152 - A World Of Madness Made

The Fantasticast

Play Episode Listen Later Jul 23, 2016 62:54


An Actual Ice-Cream Van Hello, and welcome to episode 189 of The Fantasticast. Each week, Steve Lacey and Andy Leyland guide you through every issue, guest-appearance and cameo of The Fantastic Four. This week, we bid a fond farewell to Gerry Conway with Fantastic Four #152, his last regular issue. Thundra and the team are lost in a nightmarish, hyper-masculine future. Rich Buckler's taking on too much work, and Reed Richards has the wrong number of appendages. As it turns out, not only is this Conway's final issue, but it's not even the conclusion to the story, and it's one of the most notoriously rushed comics. We're also activating our childish-snigger circuits as we get to grips with Giant-Size Man-Thing #2, by Steve Gerber, John Buscema and Klaus Janson. As well as this, we're talking about San Diego Comic-Con, celebrating some new equipment, celebrating the release of something long-lost, and getting interrupted by an actual ice-cream van. It's a packed episode of The Fantasticast, so why are you reading this? Get listening! Find Amazing Spider-Man Classics at http://amazingspiderman.libsyn.org. Donate to Stacey Taylor's fundraising at https://www.justgiving.com/fundraising/spcplive2 and listen to our appearance on her 24 hour live podcast at http://popcultureparlour.podbean.com/e/spcp-live-2-part-1-a-side-order-of-sass/ Send in your feedback to fantastic4podcast@gmail.com, leave your comments at the libsyn site, or at www.TheFantasticast.com. Follow us on twitter, where we are @fantasticast The Fantasticast is Patreon supported. Visit www.patreon.com/fantasticast to donate and support us. The Fantasticast is part of the Flickering Myth Podcast network. Original artwork by Michael Georgiou. Check out his work at mikedraws.co.uk Episode cover design by Samuel Savage.

The Fantasticast
Episode 187: Fantastic Four #151 - Thundra And Lightning Part One

The Fantasticast

Play Episode Listen Later Jul 9, 2016 66:48


A Wombful Of Wombats Hello, and welcome to episode 187 of The Fantasticast. Each week, Steve Lacey and Andy Leyland guide you through every issue, guest-appearance and cameo of The Fantastic Four. This week is another double-comic episode. We start with Avengers #128, which sees the Fantastic Four's now-redundant nanny Agatha Harkness head off to take up a new position as the Defence Against Dark Arts teacher for the Avengers. Following this, we peer into the pages of Fantastic Four #151, as Gerry Conway starts his final Fantastic Four story by revealing the origin of Thundra. This does, however, mean that we also get the first appearance of the walking manifestation of brutish masculinity, Mahkizmo. It's a battle of the sexes with the entire (possible) future of the Earth hanging in the balance. Can this issue carefully walk the line between comment and crassness? Find out in this week's episode! We are live on Stacey Taylor's 24 Hour Pop Culture Parlour Podcast at 10am BST, Saturday 9th July. Listen to the show at http://mixlr.com/spcp-live/ Send in your feedback to fantastic4podcast@gmail.com, leave your comments at the libsyn site, or at www.TheFantasticast.com. Follow us on twitter, where we are @fantasticast The Fantasticast is Patreon supported. Visit www.patreon.com/fantasticast to donate and support us. The Fantasticast is part of the Flickering Myth Podcast network. Original artwork by Michael Georgiou. Check out his work at mikedraws.co.uk Episode cover design by Samuel Savage.

The Fantasticast
Episode 180: Fantastic Four #148 - War On The Thirty-Sixth Floor

The Fantasticast

Play Episode Listen Later May 21, 2016 48:20


Such Maths. Wow. Hello, and welcome to episode 180 of The Fantasticast. Each week, Steve Lacey and Andy Leyland guide you through every issue, guest-appearance and cameo of The Fantastic Four. This week, Steve and Andy are journeying to a purely fictional location within the otherwise-completely-non-fictional Baxter Building to wage war, in Fantastic Four #148. The Frightful Four have returned to battle the Fantastic Four for reasons that are carefully explained in detail, and Thundra's definitely not out to betray them at all, oh no. Send in your feedback to fantastic4podcast@gmail.com, leave your comments at the libsyn site, or at www.TheFantasticast.com. Follow us on twitter, where we are @fantasticast The Fantasticast is Patreon supported. Visit www.patreon.com/fantasticast to donate and support us. The Fantasticast is part of the Flickering Myth Podcast network. Original artwork by Michael Georgiou. Check out his work at mikedraws.co.uk Episode cover design by Samuel Savage.

The Fantasticast
Episode 177: Giant-Size Super Stars #1 - The Mind Of The Monster

The Fantasticast

Play Episode Listen Later Apr 30, 2016 58:18


Giant-Size Rental Lycanthropy Hello, and welcome to episode 177 of The Fantasticast. Each week, Steve Lacey and Andy Leyland guide you through every issue, guest-appearance and cameo of The Fantastic Four. Giant-Size greetings to you all, as we take our first dip into the mid-1970s Marvel Giant-Size comics. Giant-Size Super Stars #1 is, to all intents and purposes, Giant-Size Fantastic Four #1, and it features another bout of fighting between the Hulk and the Thing... or does it? We're looking at the writing of Gerry Conway, the penciling of Rich Buckler, and the inking of Joe Sinnott. Oh, and the colouring of Petra Goldberg, and the lettering of John Costanza. We also have a re-appearance of Thundra, we go Over The Escher-esque Garden Wall, and discover that it's all Joe Quesada's fault! Send in your feedback to fantastic4podcast@gmail.com, leave your comments at the libsyn site, or at www.TheFantasticast.com. Follow us on twitter, where we are @fantasticast The Fantasticast is Patreon supported. Visit www.patreon.com/fantasticast to donate and support us. The Fantasticast is part of the Flickering Myth Podcast network. Original artwork by Michael Georgiou. Check out his work at mikedraws.co.uk Episode cover design by Samuel Savage.

Wait, What?
Baxter Building, Episode 16

Wait, What?

Play Episode Listen Later Apr 28, 2016 90:14


Welcome back to Baxter Building, the podcast reading through the first 416 issues of the Fantastic Four! Today, Graeme McMillan and Jeff Lester ponder issues 126-133 of the Fantastic Four, by John Buscema, Joe Sinnott, Roy Thomas, Gerry Conway, Ross Andru, and Ramona Fradon. Roy Thomas brings us three mini-epics featuring the Inhumans, the Frightful Four, and just about anyone with an underground kingdom, as well as the terrors of his post-marital psyche with the split between Reed and Sue, Johnny and Crystal, and the introduction of the woman called Thundra! And we are here to partially hide our eyes at all of it! Show notes eagerly await you at waitwhatpodcast.com, we welcome your comments and questions at WaitWhatPodcast@gmail.com, and we invite you to look out for us on Twitter, Tumblr, and Patreon!

ComicMania Podcast
ComicMania Podcast #183

ComicMania Podcast

Play Episode Listen Later Jan 29, 2016


Show recorded on Wednesday, January 27, 2016. This time the topics were:
Mr. Terrific
Thundra
Wonder Girl
Harley Quinn
Mad Hatter
Legends of Tomorrow

The Fantasticast
Episode 156: Fantastic Four #133 - Thundra At Dawn

The Fantasticast

Play Episode Listen Later Nov 28, 2015 73:42


Thought Bubble Be Crazy Hello, and welcome to episode 156 of The Fantasticast. Each week, Steve Lacey and Andy Leyland guide you through every issue, guest-appearance and cameo of The Fantastic Four. It's a week of new beginnings for the Fantastic Four. It's the first issue to be written by Marvel mainstay Gerry Conway. It's also the first issue to be drawn by a woman, as Silver and Bronze Age legend Ramona Fradon provides fill-in pencils for this issue. And what pencils they are! Thundra's back, and spoiling for a fight with the Thing on New Year's Eve. We've got plots of mass destruction, foreshadowing for the climax of Iron Man 2, and someone literally attempting to turn back time. We've also got the unusual story of how Steve ended up partying with DMC (the clue's in the title), and a moment of pure physical pain for one of the hosts. Send in your feedback to fantastic4podcast@gmail.com, leave your comments at the libsyn site, or at fantasticflameon.wordpress.com. Follow us on twitter, where we are @fantasticast Original artwork by Michael Georgiou. Check out his work at mikedraws.co.uk Episode cover design by Samuel Savage.

The Fantasticast
Episode 151: Fantastic Four #129 - The Frightful Four Plus One

The Fantasticast

Play Episode Listen Later Oct 24, 2015 69:26


Attack Of The Giant Asterisk Hello, and welcome to episode 151 of The Fantasticast. Each week, Steve Lacey and Andy Leyland guide you through every issue, guest-appearance and cameo of The Fantastic Four. It's another landmark issue of The Fantastic Four, as issue #129 sees the introduction of the fiery Femizon, Thundra. As the Fantastic Four start to splinter and fracture, the Frightful Four mount a brutal attack on the Thing and Medusa. Roy Thomas and John Buscema have big changes in store for the team, and we're here to guide you through them. As well as this, we're also displaying totes accurate accents, questioning the choice of chauvinism over survival, and spanking the Trapster's arse. Look, if we didn't, who would? Send in your feedback to fantastic4podcast@gmail.com, leave your comments at the libsyn site, or at fantasticflameon.wordpress.com. Follow us on twitter, where we are @fantasticast Original artwork by Michael Georgiou. Check out his work at mikedraws.co.uk Episode cover design by Samuel Savage.

Acmecast
Acmecast #222 - Damage Control!

Acmecast

Play Episode Listen Later Jan 23, 2015 66:03


Jermaine and Stephen collect and analyze all the available data about the Big Two spring/summer events, DC's Convergence and Marvel's Secret Wars, in an attempt to assuage fears, educate prospective readers and have some fun! They also gush over Spider-Verse and Guardians of the Galaxy and beg for more advance reader copies of books like Black Mask's We Can Never Go Home! Show Notes: Correcting my own mistake, Titania and Volcana were the teenagers from Denver transformed in Secret Wars, whereas Thundra is a "Femizon" created by Roy Thomas and Gerry Conway in Fantastic Four. "Brian Michael Bendis talks 'Powers,' 'Secret Wars' on 'Late Night'" at Comic Book Resources.com. "DC Comics Cancels 13 Series (including Batman) Just as Convergence Event Ramps Up" at the Mary Sue.com. Doug Loves Movies with James Gunn, Sean Gunn and Michael Rooker. NOT FOR YOUNGER LISTENERS!