Podcasts about anycast

  • 20 PODCASTS
  • 40 EPISODES
  • 50m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • Apr 26, 2025 LATEST

POPULARITY (2017–2024)


Latest podcast episodes about anycast

linkmeup. Подкаст про IT и про людей
telecom №146. CDN ВКонтакте, CDN VK..

Apr 26, 2025


CDN VKontakte, CDN VK, and why we need six more..

Who: Andrey Starchenkov, team lead at VKontakte, responsible for development, formerly a network engineer; Dmitry Radchuk, team lead at VKontakte, responsible for caches, proxies and other edge services at VKontakte. CCIE x4, CCDE, HCIE.

What it's about:

Chapter 1. CDN VKontakte. The content we have and the problems we try to solve with a CDN: serving and caching JS/CSS/fonts and other files close to the user; mini-apps and applications that also have to be delivered; photo and music previews; video. How we steer users into the CDN: a GeoIP database; generating links for the user. The sites we operate: neutral caches; operator caches; hardware, monitoring, network connectivity at the sites and so on; anycast vs. non-anycast; route collectors.

Chapter 2. CDN VK. CDN VKontakte is not CDN VK: the problems of reuse and the reasons for moving towards a boxed product; a unified CDN. Load-balancing technologies and algorithms for choosing a site: anycast, GSLB, ALLB; prefix-based vs. GeoIP; RUM; utilization. GSLB: advantages (no per-user problems, easy integration) and problems (no knowledge of the content, recursive resolvers, inertia, transit traffic: detection and countermeasures); solutions (RUM). ALLB: advantages (TOP, ContentMap, sharding), problems (integration, load, per-user) and solutions (batching, GSLB).
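The user-steering step in the outline above (GeoIP lookup, then generating per-user links) can be illustrated with a minimal sketch. It is not VK's actual implementation; the country-to-edge mapping, host names, and the anycast fallback name are invented for the example.

    # Minimal illustration of GeoIP-based CDN steering: map the client's
    # country to a nearby edge site and generate a per-user content URL.
    # The mapping and host names are hypothetical.

    EDGE_BY_COUNTRY = {
        "RU": "edge-msk.example-cdn.net",
        "DE": "edge-fra.example-cdn.net",
        "US": "edge-iad.example-cdn.net",
    }
    DEFAULT_EDGE = "edge-anycast.example-cdn.net"  # anycast-served fallback

    def cdn_url(client_country: str, object_path: str) -> str:
        """Return a URL pointing at the edge assumed to be closest."""
        edge = EDGE_BY_COUNTRY.get(client_country, DEFAULT_EDGE)
        return f"https://{edge}/{object_path.lstrip('/')}"

    print(cdn_url("DE", "img/photo_preview.jpg"))
    # https://edge-fra.example-cdn.net/img/photo_preview.jpg

The same split shows up in the episode's GSLB-versus-anycast discussion: with generated links you decide per request, while with anycast the network picks the site for you.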

ANYCAST
ANY169 – Investiert in ANYCAST!

Mar 16, 2025 · 85:10


I'll take you down, Germany, if you leave me standing here now like an idiot. Then I'll take you down. I'll ruin you. I'll finish you off. I'll plaster you over from top to bottom. With my money. I'll just buy you. I'll buy you a VAT cut for the restaurant trade, and I'll put an agricultural diesel rebate in front of it. I'll send your wife a mothers' pension every single day. I'll shove it into you front and back. I'll bury you so completely in my money that you won't have a quiet minute left. And the temptation is so great that you'll take it, and then I've got you, then you belong to me. And then you're my servant. I'll do with you whatever I want, you understand, boy.

Cables2Clouds
The Strengths and Challenges of Azure Networking - C2C036

Jun 26, 2024 · 53:22 · Transcription available


Curious about how Azure stacks up against AWS and Google in the cloud networking arena? Join us as we host Brian "Woody" Woodworth, a seasoned expert in Azure networking, who unveils the nuances of Microsoft's mighty global network. From its robust backbone and unique Anycast public IPs to the strategic advantages of Microsoft's data centers, we cover every facet that sets Azure apart. Woody provides an insider's look at the intricate and sometimes fragmented nature of Azure's networking teams and how they compare to AWS's more streamlined approach.

How does Azure Firewall capitalize on AWS's gaps, and why do Network Security Groups offer enhanced usability? We tackle these questions head-on, exploring Azure's strengths and areas where it still lags behind AWS, particularly in speeds and feeds. The conversation takes a fascinating turn as we delve into Microsoft's cautious approach to open source with SONiC, a containerized network operating system poised to reduce vendor lock-in. Woody explains the complexities and rewards of integrating SONiC into Azure's high-availability infrastructure.

Lastly, we peel back the layers on the often frustrating pricing models of cloud networking services, advocating for more transparent and equitable solutions. We highlight the potential roadblocks high costs pose to cloud adoption and suggest ways providers can improve. Wrapping up, we spotlight the impressive growth of Microsoft Azure, driven by strategic investments in AI, machine learning, and developer-friendly tools. This episode is packed with insights, predictions, and practical advice for anyone navigating the hybrid and multi-cloud networking landscape. Don't miss it!

Check out the Fortnightly Cloud Networking News
Visit our website and subscribe: https://www.cables2clouds.com/
Follow us on Twitter: https://twitter.com/cables2clouds
Follow us on YouTube: https://www.youtube.com/@cables2clouds/
Follow us on TikTok: https://www.tiktok.com/@cables2clouds
Merch Store: https://store.cables2clouds.com/
Join the Discord Study group: https://artofneteng.com/iaatj
Art of Network Engineering (AONE): https://artofnetworkengineering.com

Software Engineering Radio - The Podcast for Professional Software Developers

Xe Iaso of Fly.io discusses their hosting platform with host Jeremy Jung. They cover building globally distributed applications with Anycast, using WireGuard to encrypt inter-service communication, writing custom code to handle load balancing and scaling with fly-proxy, why serving EU customers has unique requirements, letting users use Docker images without the Docker runtime by converting them to Firecracker and Cloud Hypervisor microVMs, the differences between regular VMs and microVMs, challenges of acquiring and serving GPUs to customers, when to use Kubernetes, and dealing with abuse on the platform. Brought to you by IEEE Computer Society and IEEE Software magazine.
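The load-balancing idea described above (traffic lands on an anycast IP at the nearest edge, and a proxy then forwards it to a healthy, nearby app instance) can be sketched roughly as follows. This is not fly-proxy's code or API; the regions, health flags, and latencies are invented for illustration.

    # Rough sketch of edge-proxy instance selection: among healthy app
    # instances, prefer the one with the lowest latency as measured from
    # the edge that received the anycast-routed connection.
    from __future__ import annotations
    from dataclasses import dataclass

    @dataclass
    class Instance:
        region: str
        healthy: bool
        rtt_ms: float  # latency from this edge, hypothetical numbers

    def pick_instance(instances: list[Instance]) -> Instance | None:
        healthy = [i for i in instances if i.healthy]
        if not healthy:
            return None  # nothing to route to; a real proxy would fail over
        return min(healthy, key=lambda i: i.rtt_ms)

    instances = [
        Instance("ams", healthy=True,  rtt_ms=7.2),
        Instance("fra", healthy=False, rtt_ms=5.1),  # skipped: failing checks
        Instance("iad", healthy=True,  rtt_ms=92.4),
    ]
    print(pick_instance(instances))  # Instance(region='ams', healthy=True, rtt_ms=7.2)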

ANYCAST
ANY155 – Der ANYCAST-Bußgeldkatalog für die Menschen

Mar 3, 2024 · 70:04


Your success podcast is finally back from a jungle break. We start off with some good grumbling about the leap year and about vegans. We also take on the catalogue of road-traffic fines.

ANYCAST
ANY153 – Eine ANYCAST-Weihnacht für die Menschen

Dec 24, 2023 · 64:37


Rejoice, Christ is born! Rejoice even more: a new ANYCAST! There are Christmas topics, even more Christmas topics, plus a few updates and other festive musings. Lovely!

ANYCAST
ANY150 – Herbst des Laberns

Sep 5, 2023 · 75:51


This podcast episode covered a variety of topics. The show began with a warm intro and a friendly welcome. After that there were updates on various investments, followed by a discussion of the Chaos Communication Camp 2023. An interesting conversation was then held with Michael Thürnau of NDR 1 Niedersachsen, who was a guest on ANYCAST. The topics shifted to documentaries and practical matters such as obtaining driving licences and other important documents. A longer discussion revolved around the search for the right car for Dennis. The show ended with a warm goodbye before the bonus track "Konzerttickets für Die Ärzte" was introduced. The episode was rounded off with the song "Die Internationale".

Console DevTools
Cloud infra, with Kurt Mackey (Fly.io) - S04E11

Jul 6, 2023 · 35:31


In this episode, we speak with Kurt Mackey, CEO of Fly.io. We discuss what it's like running physical servers in data centers around the world, why they didn't build on top of the cloud, and what the philosophy is behind the focus on pure compute, networking, and storage primitives. Kurt sheds light on the regions where Fly.io is most popular, why they're adding GPUs, and the technology that makes it all work behind the scenes.

Hosted by David Mytton (Console) and Jean Yang (Akita Software).

Things mentioned: Ars Technica, Y Combinator, MongoHQ, ServerCentral (now Deft), Miami NAP, AWS, The Everything Store, Vercel, Simon Willison, Django dataset, ChatGPT, Nvidia, Firecracker VM, Nomad, Phoenix, LiteFS, SQLite, Hacker News, Elixir, Deno Fresh, Remix, M2 MacBook Air, Nvidia A100 GPU

ABOUT KURT MACKEY
Kurt Mackey is the CEO of Fly.io, a company that deploys app servers close to your users for running full-stack applications and databases all over the world without any DevOps. He began his career as a tech writer for Ars Technica and learned about databases while building a small retail PHP app. He went to Y Combinator in 2011 where he joined a company called MongoHQ (now Compose) that hosted Mongo databases which he sold to IBM, before turning his attention to building Fly.io.

Highlights:

[Kurt Mackey]: The original thesis for this company was there's not really any good CDNs for developers. If you could crack that, it'd be very cool. The first thing we needed was servers in a bunch of places and a way to route traffic to them. What we wanted was AnyCast, which is kind of a part of the core internet routing technology. What it does is it offloads getting a packet to probably the closest server, to the internet backbones almost. You couldn't actually do AnyCast on top of the public cloud at that point. I think you can on top of AWS now. So we were sort of forced to figure out how to get our IPs, we were sort of forced into physical servers for that reason. For a couple of years, it felt like we got deeply unlucky because we had to do physical servers. You'd talk to investors, and they'd be like, “Why aren't you just running on the public cloud and then saving money later?” Then last year, that flipped. Now, we're very interesting because we don't run on the public clouds. — [0:11:14 - 0:12:03]

[Kurt Mackey]: I think there's another thing that we've probably all reckoned with since 2011; a lot of the abstractions were wrong. As the front end got more powerful, I think we tried a lot of different things for— and what we ended up doing was inflicting this weird distributed systems problem on frontend developers. So I think that, in some ways, we just have the luxury of ignoring a lot of things that people have been trying to figure out for 10 years because we probably think that's wrong at this point. So we happen to be doing well at a time when server-side rendering is all the rage in a front-end community, which is perfect for us and nobody really cares about shipping static files around in the same way. I think it's just evolutionary. We kind of have a different idea of what's right now and can do simpler things and then we'll probably get big and complicated in 10 years and be in the same situation again. — [0:18:25 - 0:19:11]

Let us know what you think on Twitter:
https://twitter.com/consoledotdev
https://twitter.com/davidmytton
Or by email: hello@console.dev

About Console
Console is the place developers go to find the best tools. Our weekly newsletter picks out the most interesting tools and new releases. We keep track of everything - dev tools, devops, cloud, and APIs - so you don't have to. Sign up for free at: https://console.dev

linkmeup. Подкаст про IT и про людей
telecom №119. Программный балансировщик L3/L4

Jan 25, 2023


We continue the topic of traffic load balancing and look for an answer to the question of how to handle a terabit of traffic. Last time we talked about the general concepts and went through hierarchical load balancing. Now let's figure out how to program a load balancer.

What it's about: What is a load balancer, and isn't it just a router? The components of L3/L4 balancing: VIP, RIP, Real, health checks. Existing solutions: IPVS, Keepalived. What do Brazilian reals and their weights have to do with it? Announcing services: BGP, anycast. The scheduler, states, and sessions. State synchronization or consistent hashing? The CAP theorem. What's wrong with ICMP? QUIC and what it brings us. A step up to L7: TLS and working with headers.

The post telecom №119. Программный балансировщик L3/L4 appeared first on linkmeup.
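One item on that list, consistent hashing, is what lets an L3/L4 balancer keep a flow pinned to the same real server even as reals are added or removed, without synchronizing per-flow state between balancer nodes. A minimal hash-ring sketch follows; it is an illustration, not the implementation discussed in the episode, and the real-server IPs are placeholders.

    # Minimal consistent-hash ring: a flow key (e.g. the 5-tuple) is hashed
    # onto a ring of virtual nodes; adding or removing a real server only
    # remaps the flows that landed on that server's virtual nodes.
    import bisect
    import hashlib

    def _h(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    class HashRing:
        def __init__(self, reals: list[str], vnodes: int = 100):
            self._ring = sorted(
                (_h(f"{real}#{i}"), real)
                for real in reals
                for i in range(vnodes)
            )
            self._keys = [k for k, _ in self._ring]

        def pick(self, flow_key: str) -> str:
            idx = bisect.bisect(self._keys, _h(flow_key)) % len(self._ring)
            return self._ring[idx][1]

    ring = HashRing(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
    # The same 5-tuple always maps to the same real server:
    print(ring.pick("198.51.100.7:51512->203.0.113.10:443/tcp"))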

Screaming in the Cloud
The Controversy of Cloud Repatriation With Amy Tobey of Equinix

Sep 27, 2022 · 38:34


About AmyAmy Tobey has worked in tech for more than 20 years at companies of every size, working with everything from kernel code to user interfaces. These days she spends her time building an innovative Site Reliability Engineering program at Equinix, where she is a principal engineer. When she's not working, she can be found with her nose in a book, watching anime with her son, making noise with electronics, or doing yoga poses in the sun.Links Referenced: Equinix: https://metal.equinix.com Twitter: https://twitter.com/MissAmyTobey TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Welcome to Screaming in the Cloud, I'm Corey Quinn, and this episode is another one of those real profiles in shitposting type of episodes. I am joined again from a few months ago by Amy Tobey, who is a Senior Principal Engineer at Equinix, back for more. Amy, thank you so much for joining me.Amy: Welcome. To your show. [laugh].Corey: Exactly. So, one thing that we have been seeing a lot over the past year, and you struck me as one of the best people to talk about what you're seeing in the wilderness perspective, has been the idea of cloud repatriation. It started off with something that came out of Andreessen Horowitz toward the start of the year about the trillion-dollar paradox, how, at a certain point of scale, repatriating to a data center is the smart and right move. And oh, my stars that ruffle some feathers for people?Amy: Well, I spent all this money moving to the cloud. That was just mean.Corey: I know. Why would I want to leave the cloud? I mean, for God's sake, my account manager named his kid after me. Wait a minute, how much am I spending on that? Yeah—Amy: Good question.Corey: —there is that ever-growing problem. And there have been the examples that people have given of Dropbox classically did a cloud repatriation exercise, and a second example that no one can ever name. And it seems like okay, this might not necessarily be the direction that the industry is going. But I also tend to not be completely naive when it comes to these things. And I can see repatriation making sense on a workload-by-workload basis.What that implies is that yeah, but a lot of other workloads are not going to be going to a data center. They're going to stay in a cloud provider, who would like very much if you never read a word of this to anyone in public.Amy: Absolutely, yeah.Corey: So, if there are workloads repatriating, it would occur to me that there's a vested interest on the part of every major cloud provider to do their best to, I don't know if saying suppress the story is too strongly worded, but it is directionally what I mean.Amy: They aren't helping get the story out. [laugh].Corey: Yeah, it's like, “That's a great observation. Could you maybe shut the hell up and never make it ever again in public, or we will end you?” Yeah. Your Amazon. What are you going to do, launch a shitty Amazon Basics version of what my company does? Good luck. Have fun. You're probably doing it already.But the reason I want to talk to you on this is a confluence of a few things. 
One, as I mentioned back in May when you were on the show, I am incensed and annoyed that we've been talking for as long as we have, and somehow I never had you on the show. So, great. Come back, please. You're always welcome here. Secondly, you work at Equinix, which is, effectively—let's be relatively direct—it is functionally a data center as far as how people wind up contextualizing this. Yes, you have higher level—Amy: Yeah I guess people contextualize it that way. But we'll get into that.Corey: Yeah, from the outside. I don't work there, to be clear. My talking points don't exist for this. But I think of oh, Equinix. Oh, that means you basically have a colo or colo equivalent. The pricing dynamics have radically different; it looks a lot closer to a data center in my imagination than it does a traditional public cloud. I would also argue that if someone migrates from AWS to Equinix, that would be viewed—arguably correctly—as something of a repatriation. Is that directionally correct?Amy: I would argue incorrectly. For Metal, right?Corey: Ah.Amy: So, Equinix is a data center company, right? Like that's why everybody knows us as. Equinix Metal is a bare metal primitive service, right? So, it's a lot more of a cloud workflow, right, except that you're not getting the rich services that you get in a technically full cloud, right? Like, there's no RDS; there's no S3, even. What you get is bare metal primitives, right? With a really fast network that isn't going to—Corey: Are you really a cloud provider without some ridiculous machine-learning-powered service that's going to wind up taking pictures, perform incredibly expensive operations on it, and then return something that's more than a little racist? I mean, come on. That's not—you're not a cloud until you can do that, right?Amy: We can do that. We have customers that do that. Well, not specifically that, but um—Corey: Yeah, but they have to build it themselves. You don't have the high-level managed service that basically serves as, functionally, bias laundering.Amy: Yeah, you don't get it in a box, right? So, a lot of our customers are doing things that are unique, right, that are maybe not exactly fit into the cloud well. And it comes back down to a lot of Equinix's roots, which is—we talk but going into the cloud, and it's this kind of abstract environment we're reaching for, you know, up in the sky. And it's like, we don't know where it is, except we have regions that—okay, so it's in Virginia. But the rule of real estate applies to technology as often as not, which is location, location, location, right?When we're talking about a lot of applications, a challenge that we face, say in gaming, is that the latency from the customer, so that last mile to your data center, can often be extremely important, right, so a few milliseconds even. And a lot of, like, SaaS applications, the typical stuff that really the cloud was built on, 10 milliseconds, 50 milliseconds, nobody's really going to notice that, right? But in a gaming environment or some very low latency application that needs to run extremely close to the customer, it's hard to do that in the cloud. They're building this stuff out, right? Like, I see, you know, different ones [unintelligible 00:05:53] opening new regions but, you know, there's this other side of the cloud, which is, like, the edge computing thing that's coming alive, and that's more where I think about it.And again, location, location, location. 
The speed of light is really fast, but as most of us in tech know, if you want to go across from the East Coast to the West Coast, you're talking about 80 milliseconds, on average, right? I think that's what it is. I haven't checked in a while. Yeah, that's just basic fundamental speed of light. And so, if everything's in us-east-1—and this is why we do multi-region, sometimes—the latency from the West Coast isn't going to be great. And so, we run the application on both sides.Corey: It has improved though. If you want to talk old school things that are seared into my brain from over 20 years ago, every person who's worked in data centers—or in technology, as a general rule—has a few IP addresses seared. And the one that I've always had on my mind was 130.111.32.11. Kind of arbitrary and ridiculous, but it was one of the two recursive resolvers provided at the University of Maine where I had my first help desk job.And it lives on-prem, in Maine. And generally speaking, I tended to always accept that no matter where I was—unless I was in a data center somewhere—it was about 120 milliseconds. And I just checked now; it is 85 and change from where I am in San Francisco. So, the internet or the speed of light have improved. So, good for whichever one of those it was. But yeah, you've just updated my understanding of these things. All of this is, which is to say, yes, latency is very important.Amy: Right. Let's forget repatriation to really be really honest. Even the Dropbox case or any of them, right? Like, there's an economic story here that I think all of us that have been doing cloud work for a while see pretty clearly that maybe not everybody's seeing that—that's thinking from an on-prem kind of situation, which is that—you know, and I know you do this all the time, right, is, you don't just look at the cost of the data center and the servers and the network, the technical components, the bill of materials—Corey: Oh, lies, damned lies, and TCO analyses. Yeah.Amy: —but there's all these people on top of it, and the organizational complexity, and the contracts that you got to manage. And it's this big, huge operation that is incredibly complex to do well that is almost nobody's business. So the way I look at this, right, and the way I even talk to customers about it is, like, “What is your produ—” And I talk to people internally about this way? It's like, “What are you trying to build?” “Well, I want to build a SaaS.” “Okay. Do you need data center expertise to build a SaaS?” “No.” “Then why the hell are you putting it in a data center?” Like we—you know, and speaking for my employer, right, like, we have Equinix Metal right here. You can build on that and you don't have to do all the most complex part of this, at least in terms of, like, the physical plant, right? Like, right, getting a bare metal server available, we take care of all of that. Even at the primitive level, where we sit, it's higher level than, say, colo.Corey: There's also the question of economics as it ties into it. It's never just a raw cost-of-materials type of approach. Like, my original job in a data center was basically to walk around and replace hard drives, and apparently, to insult people. Now, the cloud has taken one of those two aspects away, and you can follow my Twitter account and figure out which one of those two it is, but what I keep seeing now is there is value to having that task done, but in a cloud environment—and Equinix Metal, let's be clear—that has slipped below the surface level of awareness. 
And well, what are the economic implications of that?Well, okay, you have a whole team of people at large companies whose job it is to do precisely that. Okay, we're going to upskill them and train them to use cloud. Okay. First, not everyone is going to be capable or willing to make that leap from hard drive replacement to, “Congratulations and welcome to JavaScript. You're about to hate everything that comes next.”And if they do make that leap, their baseline market value—by which I mean what the market is willing to pay for them—approximately will double. And whether they wind up being paid more by their current employer or they take a job somewhere else with those skills and get paid what they are worth, the company still has that economic problem. Like it or not, you will generally get what you pay for whether you want to or not; that is the reality of it. And as companies are thinking about this, well, what gets into the TCO analysis and what doesn't, I have yet to see one where the outcome was not predetermined. They're less, let's figure out in good faith whether it's going to be more expensive to move to the cloud, or move out of the cloud, or just burn the building down for insurance money. The outcome is generally the one that the person who commissioned the TCO analysis wants. So, when a vendor is trying to get you to switch to them, and they do one for you, yeah. And I'm not saying they're lying, but there's so much judgment that goes into this. And what do you include and what do you not include? That's hard.Amy: And there's so many hidden costs. And that's one of the things that I love about working at a cloud provider is that I still get to play with all that stuff, and like, I get to see those hidden costs, right? Like you were talking about the person who goes around and swaps out the hard drives. Or early in my career, right, I worked with someone whose job it was this every day, she would go into data center, she'd swap out the tapes, you know, and do a few things other around and, like, take care of the billing system. And that was a job where it was kind of going around and stewarding a whole bunch of things that kind of kept the whole machine running, but most people outside of being right next to the data center didn't have any idea that stuff even happen, right, that went into it.And so, like you were saying, like, when you go to do the TCO analysis, I mean, I've been through this a couple of times prior in my career, where people will look at it and go like, “Well, of course we're not going to list—we'll put, like, two headcount on there.” And it's always a lie because it's never just to headcount. It's never just the network person, or the SRE, or the person who's racking the servers. It's also, like, finance has to do all this extra work, and there's all the logistic work, and there is just so much stuff that just is really hard to include. Not only do people leave it out, but it's also just really hard for people to grapple with the complexity of all the things it takes to run a data center, which is, like, one of the most complex machines on the planet, any single data center.Corey: I've worked in small-scale environments, maybe a couple of mid-sized ones, but never the type of hyperscale facility that you folks have, which I would say is if it's not hyperscale, it's at least directionally close to it. We're talking thousands of servers, and hundreds of racks.Amy: Right.Corey: I've started getting into that, on some level. 
Now, I guess when we say ‘hyperscale,' we're talking about AWS-size things where, oh, that's a region and it's going to have three dozen data center facilities in it. Yeah, I don't work in places like that because honestly, have you met me? Would you trust me around something that's that critical infrastructure? No, you would not, unless you have terrible judgment, which means you should not be working in those environments to begin with.Amy: I mean, you're like a walking chaos exercise. Maybe I would let you in.Corey: Oh, I bring my hardware destruction aura near anything expensive and things are terrible. It's awful. But as I looked at the cloud, regardless of cloud, there is another economic element that I think is underappreciated, and to be fair, this does, I believe, apply as much to Equinix Metal as it does to the public hyperscale cloud providers that have problems with naming things well. And that is, when you are provisioning something as a customer of one of these places, you have an unbounded growth problem. When you're in a data center, you are not going to just absentmindedly sign an $8 million purchase order for new servers—you know, a second time—and then that means you're eventually run out of power, space, places to put things, and you have to go find it somewhere.Whereas in cloud, the only limit is basically your budget where there is no forcing function that reminds you to go and clean up that experiment from five years ago. You have people with three petabytes of data they were using for a project, but they haven't worked there in five years and nothing's touched it since. Because the failure mode of deleting things that are important, or disasters—Amy: That's why Glacier exists.Corey: Oh, exactly. But that failure mode of deleting things that should not be deleted are disastrous for a company, whereas if you've leave them there, well, it's only money. And there's no forcing function to do that, which means you have this infinite growth problem with no natural limit slash predator around it. And that is the economic analysis that I do not see playing out basically anywhere. Because oh, by the time that becomes a problem, we'll have good governance in place. Yeah, pull the other one. It has bells on it.Amy: That's the funny thing, right, is a lot of the early drive in the cloud was those of us who wanted to go faster and we were up against the limitations of our data centers. And then we go out and go, like, “Hey, we got this cloud thing. I'll just, you know, put the credit card in there and I'll spin up a few instances, and ‘hey, I delivered your product.'” And everybody goes, “Yeah, hey, happy.” And then like you mentioned, right, and then we get down the road here, and it's like, “Oh, my God, how much are we spending on this?”And then you're in that funny boat where you have both. But yeah, I mean, like, that's just typical engineering problem, where, you know, we have to deal with our constraints. And the cloud has constraints, right? Like when I was at Netflix, one of the things we would do frequently is bump up against instance limits. And then we go talk to our TAM and be like, “Hey, buddy. Can we have some more instance limit?” And then take care of that, right?But there are some bounds on that. Of course, in the cloud providers—you know, if I have my cloud provider shoes on, I don't necessarily want to put those limits to law because it's a business, the business wants to hoover up all the money. That's what businesses do. 
So, I guess it's just a different constraint that is maybe much too easy to knock down, right? Because as you mentioned, in a data center or in a colo space, I outgrow my cage and I filled up all that space I have, I have to either order more space from my colo provider, I expand to the cloud, right?Corey: The scale I was always at, the limit was not the space because I assure you with enough shoving all things are possible. Don't believe me? Look at what people are putting in the overhead bin on any airline. Enough shoving, you'll get a Volkswagen in there. But it was always power constrained is what I dealt with it. And it's like, “Eh, they're just being conservative.” And the whole building room dies.Amy: You want blade servers because that's how you get blade servers, right? That movement was about bringing the density up and putting more servers in a rack. You know, there were some management stuff and [unintelligible 00:16:08], but a lot of it was just about, like, you know, I remember I'm picturing it, right—Corey: Even without that, I was still power constrained because you have to remember, a lot of my experiences were not in, shall we say, data center facilities that you would call, you know, good.Amy: Well, that brings up a fun thing that's happening, which is that the power envelope of servers is still growing. The newest Intel chips, especially the ones they're shipping for hyperscale and stuff like that, with the really high core counts, and the faster clock speeds, you know, these things are pulling, like, 300 watts. And they also have to egress all that heat. And so, that's one of the places where we're doing some innovations—I think there's a couple of blog posts out about it around—like, liquid cooling or multimode cooling. And what's interesting about this from a cloud or data center perspective, is that the tools and skills and everything has to come together to run a, you know, this year's or next year's servers, where we're pushing thousands of kilowatts into a rack. Thousands; one rack right?The bar to actually bootstrap and run this stuff successfully is rising again, compared to I take my pizza box servers, right—and I worked at a gaming company a long time ago, right, and they would just, like, stack them on the floor. It was just a stack of servers. Like, they were in between the rails, but they weren't screwed down or anything, right? And they would network them all up. Because basically, like, the game would spin up on the servers and if they died, they would just unplug that one and leave it there and spin up another one.It was like you could just stack stuff up and, like, be slinging cables across the data center and stuff back then. I wouldn't do it that way now, but when you add, say liquid cooling and some of these, like, extremely high power situations into the mix, now you need to have, for example, if you're using liquid cooling, you don't want that stuff leaking, right? And so, it's good as the pressure fittings and blind mating and all this stuff that's coming around gets, you still have that element of additional training, and skill, and possibility for mistakes.Corey: The thing that I see as I look at this across the space is that, on some level, it's gotten harder to run a data center than it ever did before. Because again, another reason I wanted to have you on this show is that you do not carry a quota. Although you do often carry the conversation, when you have boring people around you, but quotas, no. You are not here selling things to people. 
You're not actively incentivized to get people to see things a certain way.You are very clearly an engineer in the right ways. I will further point out though, that you do not sound like an engineer, by which I mean, you're going to basically belittle people, in many cases, in the name of being technically correct. You're a human being with a frickin soul. And believe me, it is noticed.Amy: I really appreciate that. If somebody's just listening to hearing my voice and in my name, right, like, I have a low voice. And in most of my career, I was extremely technical, like, to the point where you know, if something was wrong technically, I would fight to the death to get the right technical solution and maybe not see the complexity around the decisions, and why things were the way they were in the way I can today. And that's changed how I sound. It's changed how I talk. It's changed how I look at and talk about technology as well, right? I'm just not that interested in Kubernetes. Because I've kind of started looking up the stack in this kind of pursuit.Corey: Yeah, when I say you don't sound like an engineer, I am in no way shape or form—Amy: I know.Corey: —alluding in any respect to your technical acumen. I feel the need to clarify that statement for people who might be listening, and say, “Hey, wait a minute. Is he being a shithead?” No.Amy: No, no, no.Corey: Well, not the kind you're worried I'm being anyway; I'm a different breed of shithead and that's fine.Amy: Yeah, I should remember that other people don't know we've had conversations that are deeply technical, that aren't on air, that aren't context anybody else has. And so, like, I bring that deep technical knowledge, you know, the ability to talk about PCI Express, and kilovolts [unintelligible 00:19:58] rack, and top-of-rack switches, and network topologies, all of that together now, but what's really fascinating is where the really big impact is, for reliability, for security, for quality, the things that me as a person, that I'm driven by—products are cool, but, like, I like them to be reliable; that's the part that I like—really come down to more leadership, and business acumen, and understanding the business constraints, and then being able to get heard by an audience that isn't necessarily technical, that doesn't necessarily understand the difference between PCI, PCI-X, and PCI Express. There's a difference between those. It doesn't mean anything to the business, right, so when we want to go and talk about why are we doing, for example, multi-region deployment of our application? If I come in and say, “Well, because we want to use Raft.” That's going to fall flat, right?The business is going to go, “I don't care about Raft. What does that have to do with my customers?” Which is the right question to always ask. Instead, when I show up and say, “Okay, what's going on here is we have this application sits in a single region—or in a single data center or whatever, right? I'm using region because that's probably what most of the people listening understand—you know, so I put my application in a single region and it goes down, our customers are going to be unhappy. We have the alternative to spend, okay, not a little bit more money, probably a lot more money to build a second region, and the benefit we will get is that our customers will be able to access the service 24x7, and it will always work and they'll have a wonderful experience. 
And maybe they'll keep coming back and buy more stuff from us.”And so, when I talk about it in those terms, right—and it's usually more nuanced than that—then I start to get the movement at the macro level, right, in the systemic level of the business in the direction I want it to go, which is for the product group to understand why reliability matters to the customer, you know? For the individual engineers to understand why it matters that we use secure coding practices.[midroll 00:21:56]Corey: Getting back to the reason I said that you are not quota-carrying and you are not incentivized to push things in a particular way is that often we'll meet zealots, and I've never known you to be one, you have always been a strong advocate for doing the right thing, even if it doesn't directly benefit any given random employer that you might have. And as a result, one of the things that you've said to me repeatedly is if you're building something from scratch, for God's sake, put it in cloud. What is wrong with you? Do that. The idea of building it yourself on low-lying, underlying primitives for almost every modern SaaS style workload, there's no reason to consider doing something else in almost any case. Is that a fair representation of your position on this?Amy: It is. I mean, the simpler version right, “Is why the hell are you doing undifferentiated lifting?” Right? Things that don't differentiate your product, why would you do it?Corey: The thing that this has empowered then is I can build an experiment tonight—I don't have to wait for provisioning and signed contracts and do all the rest. I can spend 25 cents and get the experiment up and running. If it takes off, though, it has changed how I move going forward as well because there's no difference in the way that there was back when we were in data centers. I'm going to try and experiment I'm going to run it in this, I don't know, crappy Raspberry Pi or my desktop or something under my desk somewhere. And if it takes off and I have to scale up, I got to do a giant migration to real enterprise-grade hardware. With cloud, you are getting all of that out of the box, even if all you're doing with it is something ridiculous and nonsensical.Amy: And you're often getting, like, ridiculously better service. So, 20 years ago, if you and I sat down to build a SaaS app, we would have spun up a Linux box somewhere in a colo, and we would have spun up Apache, MySQL, maybe some Perl or PHP if we were feeling frisky. And the availability of that would be one machine could do, what we could handle in terms of one MySQL instance. But today if I'm spinning up a new stack for some the same kind of SaaS, I'm going to probably deploy it into an ASG, I'm probably going to have some kind of high availability database be on it—and I'm going to use Aurora as an example—because, like, the availability of an Aurora instance, in terms of, like, if I'm building myself up with even the very best kit available in databases, it's going to be really hard to hit the same availability that Aurora does because Aurora is not just a software solution, it's also got a team around it that stewards that 24/7. And it continues to evolve on its own.And so, like, the base, when we start that little tiny startup, instead of being that one machine, we're actually starting at a much higher level of quality, and availability, and even security sometimes because of these primitives that were available. 
And I probably should go on to extend on the thought of undifferentiated lifting, right, and coming back to the colo or the edge story, which is that there are still some little edge cases, right? Like I think for SaaS, duh right? Like, go straight to. But there are still some really interesting things where there's, like, hardware innovations where they're doing things with GPUs and stuff like that.Where the colo experience may be better because you're trying to do, like, custom hardware, in which case you are in a colo. There are businesses doing some really interesting stuff with custom hardware that's behind an application stack. What's really cool about some of that, from my perspective, is that some of that might be sitting on, say, bare metal with us, and maybe the front-end is sitting somewhere else. Because the other thing Equinix does really well is this product we call a Fabric which lets us basically do peering with any of the cloud providers.Corey: Yeah, the reason, I guess I don't consider you as a quote-unquote, “Cloud,” is first and foremost, rooted in the fact that you don't have a bandwidth model that is free and grass and criminally expensive to send it anywhere that isn't to you folks. Like, are you really a cloud if you're not just gouging the living piss out of your customers every time they want to send data somewhere else?Amy: Well, I mean, we like to say we're part of the cloud. And really, that's actually my favorite feature of Metal is that you get, I think—Corey: Yeah, this was a compliment, to be very clear. I'm a big fan of not paying 1998 bandwidth pricing anymore.Amy: Yeah, but this is the part where I get to do a little bit of, like, showing off for Metal a little bit, in that, like, when you buy a Metal server, there's different configurations, right, but, like, I think the lowest one, you have dual 10 Gig ports to the server that you can get either in a bonded mode so that you have a single 20 Gig interface in your operating system, or you can actually do L3 and you can do BGP to your server. And so, this is a capability that you really can't get at all on the other clouds, right? This lets you do things with the network, not only the bandwidth, right, that you have available. Like, you want to stream out 25 gigs of bandwidth out of us, I think that's pretty doable. And the rates—I've only seen a couple of comparisons—are pretty good.So, this is like where some of the business opportunities, right—and I can't get too much into it, but, like, this is all public stuff I've talked about so far—which is, that's part of the opportunity there is sitting at the crossroads of the internet, we can give you a server that has really great networking, and you can do all the cool custom stuff with it, like, BGP, right? Like, so that you can do Anycast, right? You can build Anycast applications.Corey: I miss the days when that was a thing that made sense.Amy: [laugh].Corey: I mean that in the context of, you know, with the internet and networks. These days, it always feels like the network engineering as slipped away within the cloud because you have overlays on top of overlays and it's all abstractions that are living out there right until suddenly you really need to know what's going on. But it has abstracted so much of this away. And that, on some level, is the surprise people are often in for when they wind up outgrowing the cloud for a workload and wanting to move it someplace that doesn't, you know, ride them like naughty ponies for bandwidth. 
And they have to rediscover things that we've mostly forgotten about.I remember having to architect significantly around the context of hard drive failures. I know we've talked about that a fair bit as a thing, but yeah, it's spinning metal, it throws off heat and if you lose the wrong one, your data is gone and you now have serious business problems. In cloud, at least AWS-land, that's not really a thing anymore. The way EBS is provisioned, there's a slight tick in latency if you're looking at just the right time for what I think is a hard drive failure, but it's there. You don't have to think about this anymore.Migrate that workload to a pile of servers in a colo somewhere, guess what? Suddenly your reliability is going to decrease. Amazon, and the other cloud providers as well, have gotten to a point where they are better at operations than you are at your relatively small company with your nascent sysadmin team. I promise. There is an economy of scale here.Amy: And it doesn't have to be good or better, right? It's just simply better resourced—Corey: Yeah.Amy: Than most anybody else can hope. Amazon can throw a billion dollars at it and never miss it. In most organizations out there, you know, and most of the especially enterprise, people are scratching and trying to get resources wherever they can, right? They're all competing for people, for time, for engineering resources, and that's one of the things that gets freed up when you just basically bang an API and you get the thing you want. You don't have to go through that kind of old world internal process that is usually slow and often painful.Just because they're not resourced as well; they're not automated as well. Maybe they could be. I'm sure most of them could, in theory be, but we come back to undifferentiated lifting. None of this helps, say—let me think of another random business—Claire's, whatever, like, any of the shops in the mall, they all have some kind of enterprise behind them for cash processing and all that stuff, point of sale, none of this stuff is differentiating for them because it doesn't impact anything to do with where the money comes in. So again, we're back at why are you doing this?Corey: I think that's also the big challenge as well, when people start talking about repatriation and talking about this idea that they are going to, oh, that cloud is too expensive; we're going to move out. And they make the economics work. Again, I do firmly believe that, by and large, businesses do not intentionally go out and make poor decisions. I think when we see a company doing something inscrutable, there's always context that we're missing, and I think as a general rule of thumb, that at these companies do not hire people who are fools. And there are always constraints that they cannot talk about in public.My general position as a consultant, and ideally as someone who aspires to be a decent human being, is that when I see something I don't understand, I assume that there's simply a lack of context, not that everyone involved in this has been foolish enough to make giant blunders that I can pick out in the first five seconds of looking at it. I'm not quite that self-confident yet.Amy: I mean, that's a big part of, like, the career progression into above senior engineer, right, is, you don't get to sit in your chair and go, like, “Oh, those dummies,” right? You actually have—I don't know about ‘have to,' but, like, the way I operate now, right, is I remember in my youth, I used to be like, “Oh, those business people. 
They don't know, nothing. Like, what are they doing?” You know, it's goofy what they're doing.And then now I have a different mode, which is, “Oh, that's interesting. Can you tell me more?” The feeling is still there, right? Like, “Oh, my God, what is going on here?” But then I get curious, and I go, “So, how did we get here?” [laugh]. And you get that story, and the stories are always fascinating, and they always involve, like, constraints, immovable objects, people doing the best they can with what they have available.Corey: Always. And I want to be clear that very rarely is it the right answer to walk into a room and say, look at the architecture and, “All right, what moron built this?” Because always you're going to be asking that question to said moron. And it doesn't matter how right you are, they're never going to listen to another thing out of your mouth again. And have some respect for what came before even if it's potentially wrong answer, well, great. “Why didn't you just use this service to do this instead?” “Yeah, because this thing predates that by five years, jackass.”There are reasons things are the way they are, if you take any architecture in the world and tell people to rebuild it greenfield, almost none of them would look the same as they do today because we learn things by getting it wrong. That's a great teacher, and it hurts. But it's also true.Amy: And we got to build, right? Like, that's what we're here to do. If we just kind of cycle waiting for the perfect technology, the right choices—and again, to come back to the people who built it at the time used—you know, often we can fault people for this—used the things they know or the things that are nearby, and they make it work. And that's kind of amazing sometimes, right?Like, I'm sure you see architectures frequently, and I see them too, probably less frequently, where you just go, how does this even work in the first place? Like how did you get this to work? Because I'm looking at this diagram or whatever, and I don't understand how this works. Maybe that's a thing that's more a me thing, like, because usually, I can look at a—skim over an architecture document and be, like, be able to build the model up into, like, “Okay, I can see how that kind of works and how the data flows through it.” I get that pretty quickly.And comes back to that, like, just, again, asking, “How did we get here?” And then the cool part about asking how did we get here is it sets everybody up in the room, not just you as the person trying to drive change, but the people you're trying to bring along, the original architects, original engineers, when you ask, how did we get here, you've started them on the path to coming along with you in the future, which is kind of cool. But until—that storytelling mode, again, is so powerful at almost every level of the stack, right? And that's why I just, like, when we were talking about how technical I bring things in, again, like, I'm just not that interested in, like, are you Little Endian or Big Endian? How did we get here is kind of cool. You built a Big Endian architecture in 2022? Like, “Ohh. [laugh]. How do we do that?”Corey: Hey, leave me to my own devices, and I need to build something super quickly to get it up and running, well, what I'm going to do, for a lot of answers is going to look an awful lot like the traditional three-tier architecture that I was running back in 2008. Because I know it, it works well, and I can iterate rapidly on it. Is it a best practice? 
Absolutely not, but given the constraints, sometimes it's the fastest thing to grab? “Well, if you built this in serverless technologies, it would run at a fraction of the cost.” It's, “Yes, but if I run this thing, the way that I'm running it now, it'll be $20 a month, it'll take me two hours instead of 20. And what exactly is your time worth, again?” It comes down to the better economic model of all these things.Amy: Any time you're trying to make a case to the business, the economic model is going to always go further. Just general tip for tech people, right? Like if you can make the better economic case and you go to the business with an economic case that is clear. Businesses listen to that. They're not going to listen to us go on and on about distributed systems.Somebody in finance trying to make a decision about, like, do we go and spend a million bucks on this, that's not really the material thing. It's like, well, how is this going to move the business forward? And how much is it going to cost us to do it? And what other opportunities are we giving up to do that?Corey: I think that's probably a good place to leave it because there's no good answer. We can all think about that until the next episode. I really want to thank you for spending so much time talking to me again. If people want to learn more, where's the best place for them to find you?Amy: Always Twitter for me, MissAmyTobey, and I'll see you there. Say hi.Corey: Thank you again for being as generous with your time as you are. It's deeply appreciated.Amy: It's always fun.Corey: Amy Tobey, Senior Principal Engineer at Equinix Metal. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment that tells me exactly what we got wrong in this episode in the best dialect you have of condescending engineer with zero people skills. I look forward to reading it.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.

Mac Admins Podcast
Episode 267: Fraser Hess

May 30, 2022 · 80:04


One of those things we all end up managing is how our Apple fleet routes traffic into the interwebs. We often have to go through apps that load network extensions that effectively proxy traffic. Sometimes we call what they do fancy new industry buzzwords. One of the first platforms to use Apple's frameworks was Cisco Umbrella. In this episode, Fraser Hess joins us to talk about using Umbrella and some interesting things he found along the way!

Hosts: Tom Bridge - @tbridge777, Charles Edge - @cedge318, Marcus Ransom - @marcusransom
Guests: Fraser Hess
Links:
https://umbrella.cisco.com
https://en.wikipedia.org/wiki/Anycast
https://datatracker.ietf.org/doc/html/rfc4271
https://www.explainxkcd.com/wiki/index.php/2347:_Dependency
https://nvd.nist.gov/vuln/detail/CVE-2022-20773
https://particulars.app
Sponsors: Kandji, Halp, VMware Workspace One, Watchman Monitoring

If you're interested in sponsoring the Mac Admins Podcast, please email podcast@macadmins.org for more information. Get the latest about the Mac Admins Podcast, follow us on Twitter! We're @MacAdmPodcast! The Mac Admins Podcast has launched a Patreon Campaign! Our named patrons this month include Weldon Dodd, Damien Barrett, Justin Holt, Chad Swarthout, William Smith, Stephen Weinstein, Seb Nash, Dan McLaughlin, Joe Sfarra, Nate Cinal, Jon Brown, Dan Barker, Tim Perfitt, Ashley MacKinlay, Tobias Linder, Philippe Daoust, AJ Potrebka, Adam Burg, & Hamlin Krewson
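Since the show notes link to the Anycast article, it is worth noting that Umbrella's public resolvers are an easy way to see anycast in action: the same resolver address is announced from many sites, and a query is answered by whichever site is closest to you. The sketch below assumes the dnspython package and uses 208.67.222.222, the well-known Cisco Umbrella / OpenDNS public resolver address.

    # Query an anycast DNS resolver. The resolver IP below is announced
    # from many locations; BGP routing delivers the query to the nearest
    # Umbrella/OpenDNS site. Requires: pip install dnspython
    import dns.resolver

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["208.67.222.222"]  # Umbrella/OpenDNS anycast address

    answer = resolver.resolve("example.com", "A")
    for rdata in answer:
        print(rdata.address)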

Introduction to Networks with KevTechify on the Cisco Certified Network Associate (CCNA)
IPv6 Address Types - IPv6 Addressing - Introduction to Networks - CCNA - KevTechify | Podcast 62

Apr 17, 2022 · 11:11


In this episode we are going to look at IPv6 Address Types. We will be discussing Unicast, Multicast, Anycast, IPv6 Prefix Length, Types of IPv6 Unicast Addresses, A Note About the Unique Local Address, IPv6 GUA, IPv6 GUA Structure, Global Routing Prefix, Subnet ID, Interface ID, IPv6 Link Local Address (LLA).

Thank you so much for listening to this episode of my series on Introduction to Networks for the Cisco Certified Network Associate (CCNA). Once again, I'm Kevin and this is KevTechify. Let's get this adventure started. All my details and contact information can be found on my website, https://KevTechify.com

Cisco Certified Network Associate (CCNA)
Introduction to Networks v1 (ITN)
Episode 12 - IPv6 Addressing
Part C - IPv6 Address Types
Podcast Number: 62

Equipment I like:
Home Lab ►► https://kit.co/KevTechify/home-lab
Networking Tools ►► https://kit.co/KevTechify/networking-tools
Studio Equipment ►► https://kit.co/KevTechify/studio-equipment
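The GUA structure covered in the episode (global routing prefix, subnet ID, interface ID) can be poked at with Python's standard ipaddress module. The sketch below assumes the common 48/16/64-bit split and uses an address from the 2001:db8::/32 documentation range.

    # Split an IPv6 global unicast address (GUA) into the parts discussed:
    # global routing prefix (assumed /48), 16-bit subnet ID, and 64-bit
    # interface ID. Standard library only.
    import ipaddress

    addr = ipaddress.IPv6Address("2001:db8:acad:1::10")  # documentation range

    bits = int(addr)
    global_routing_prefix = bits >> 80          # top 48 bits
    subnet_id = (bits >> 64) & 0xFFFF           # next 16 bits
    interface_id = bits & ((1 << 64) - 1)       # low 64 bits

    print(f"{global_routing_prefix:012x}")  # 20010db8acad
    print(f"{subnet_id:04x}")               # 0001
    print(f"{interface_id:016x}")           # 0000000000000010

    # Link-local addresses (LLA) come from fe80::/10:
    print(ipaddress.IPv6Address("fe80::1").is_link_local)  # True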

Podcast de Eduardo Collado

Today I'm bringing you an audio episode about BGP Anycast. Announcing the same IP addressing from several sites at once lets us use BGP to "bring the content closer to the end user". If you look at ping.pe …
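The effect Eduardo describes can be shown with a toy model: several sites announce exactly the same prefix, and a remote network prefers the announcement with the shortest AS path (one of BGP's main tie-breakers), which usually means the topologically closest site wins. This is a simplified illustration, not a real BGP implementation; the prefix and AS numbers are documentation values.

    # Toy model of BGP anycast: the same prefix is announced from three
    # sites; the remote network picks the route with the shortest AS path,
    # so traffic flows to the nearest site.
    ANYCAST_PREFIX = "203.0.113.0/24"

    # Announcements for the same prefix as seen by one remote network,
    # each with the AS path it arrived over (made-up values).
    announcements = [
        {"site": "Madrid",    "as_path": [64500, 65010]},
        {"site": "Frankfurt", "as_path": [64500, 64510, 65010]},
        {"site": "Miami",     "as_path": [64500, 64520, 64530, 65010]},
    ]

    best = min(announcements, key=lambda a: len(a["as_path"]))
    print(f"{ANYCAST_PREFIX} is reached via the {best['site']} site "
          f"(AS path length {len(best['as_path'])})")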

ANYCAST
ANYCAST-Tütensuppen-Adventskalender 2021: 24. Dezember (KaiKast-Crossover)

Dec 23, 2021 · 42:27


Today: Maggi für Genießer oxtail soup. There are also tea and riddles today with Kai from the KaiKast.

KaiKast
Adventskalender: Vierundzwanzig (ANYCAST-Crossover)

Dec 23, 2021 · 41:46


Screaming in the Cloud
Ironing out the BGP Ruffles with Ivan Pepelnjak

Dec 3, 2021 · 42:19


About IvanIvan Pepelnjak, CCIE#1354 Emeritus, is an independent network architect, blogger, and webinar author at ipSpace.net. He's been designing and implementing large-scale service provider and enterprise networks as well as teaching and writing books about advanced internetworking technologies since 1990.https://www.ipspace.net/About_Ivan_PepelnjakLinks:ipSpace.net: https://ipspace.net TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by my friends at ThinkstCanary. Most companies find out way too late that they've been breached. ThinksCanary changes this and I love how they do it. Deploy canaries and canary tokens in minutes and then forget about them. What's great is the attackers tip their hand by touching them, giving you one alert, when it matters. I use it myself and I only remember this when I get the weekly update with a “we're still here, so you're aware” from them. It's glorious! There is zero admin overhead  to this, there are effectively no false positives unless I do something foolish. Canaries are deployed and loved on all seven continents. You can check out what people are saying at canary.love. And, their Kub config canary token is new and completely free as well. You can do an awful lot without paying them a dime, which is one of the things I love about them. It is useful stuff and not an, “ohh, I wish I had money.” It is speculator! Take a look; that's canary.love because it's genuinely rare to find a security product that people talk about in terms of love. It really is a unique thing to see. Canary.love. Thank you to ThinkstCanary for their support of my ridiculous, ridiculous non-sense.  Corey: Developers are responsible for more than ever these days. Not just the code they write, but also the containers and cloud infrastructure their apps run on. And a big part of that responsibility is app security — from code to cloud.That's where Snyk comes in. Snyk is a frictionless security platform that meets developers where they are, finding and fixing vulnerabilities right from the CLI, IDEs, repos, and pipelines. And Snyk integrates seamlessly with AWS offerings like CodePipeline, EKS, ECR, etc., etc., etc., you get the picture! Deploy on AWS. Secure with Snyk. Learn more at snyk.io/scream. That's S-N-Y-K-dot-I-O/scream. Because they have not yet purchased a vowel.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. I have an interesting and storied career path. I dabbled in security engineering slash InfoSec for a while before I realized that being crappy to people in the community wasn't really my thing; I was a grumpy Unix systems administrator because it's not like there's a second kind of those out there; and I dabbled ever so briefly in the wide world of network administration slash network engineering slash plugging the computers in to make them talk to one another, ideally correctly. But I was always a dabbler. When it comes time to have deep conversations about networking, I immediately tag out and look to an expert. My guest today is one such person. Ivan Pepelnjak is oh so many things. He's a CCIE emeritus, and well, let's start there. 
Ivan, welcome to the show.Ivan: Thanks for having me. And oh, by the way, I have to tell people that I was a VAX/VMS administrator in those days.Corey: Oh, yes the VAX/VMS world was fascinating. I talked—Ivan: Yes.Corey: —to a company that was finally emulating them on physical cards because that was the only way to get them there. Do you refer to them as VAXen, or VAXes, or how did you wind up referring—Ivan: VAXes.Corey: VAXes. Okay, I was on the other side of that with the inappropriately pluralizing anything that ends with an X with an en—‘boxen' and the rest. And that's why I had no friends for many years.Ivan: You do know what the first VAX was, right?Corey: I do not.Ivan: It was a Swedish Hoover company.Corey: Ooh.Ivan: And they had a trademark dispute with Digital over the name, and then they settled that.Corey: You describe yourself in your bio as a CCIE Emeritus, and you give the number—which is low—number 1354. Now, I've talked about certifications on this show in the context of the modern era, and whether it makes sense to get cloud certifications or not. But this is from a different time. Understand that for many listeners, these stories might be older than you are in some cases, and that's okay. But Cisco at one point, believe it or not, was a shining beacon of the industry, the kind of place that people wanted to work at, and their certification path was no joke.I got my CCNA from them—Cisco Certified Network Administrator—and that was basically a byproduct of learning how networks worked. There are several more tiers beyond that, culminating in the CCIE, which stands for Cisco Certified Internetworking Expert, or am I misremembering?Ivan: No, no, that's it.Corey: Perfect. And that was known as the doctorate of networking in many circles for many years. Back in those days, if you had a CCIE, you are guaranteed to be making an awful lot of money at basically any company you wanted to because you knew how networking—Ivan: In the US.Corey: —worked. Well, in the US. True. There's always the interesting stories of working in places that are trying to go with the lowest bidder for networking gear, and you wind up spending weeks on end trying to figure out why things are breaking intermittently, and only to find out at the end that someone saved 20 bucks by buying cheap patch cables. I digress, and I still have the scars from those.But it was fascinating in those days because there was a lab component of getting those tests. There were constant rumors that in the middle of the night, during the two-day certification exam, they would come in and mess with the lab and things you'd set up—Ivan: That's totally true.Corey: —you'd have to fix it the following day. That is true?Ivan: Yeah. So, in the good old days, when the lab was still physical, they would even turn the connectors around so that they would look like they would be plugged in, but obviously there was no signal coming through. And they would mess up the jumpers on the line cards and all that stuff. So, when you got your broken lab, you really had to work hard, you know, from the physical layer, from the jumpers, and they would mess up your config and everything else. It was, you know, the real deal. The thing you would experience in real world with, uh, underqualified technicians putting stuff together. 
Let's put it this way.Corey: I don't wish to besmirch our brethren working in the data centers, but having worked with folks who did some hilariously awful things with cabling, and how having been one of those people myself from time to time, it's hard to have sympathy when you just spent hours chasing it down. But to be clear, the CCIE is one of those things where in a certain era, if you're trying to have an argument on the internet with someone about how networks work and their responses, “Well, I'm a CCIE.” Yeah, the conversation was over at that point. I'm not one to appeal to authority on stuff like that very often, but it's the equivalent of arguing about medicine with a practicing doctor. It's the same type of story; it is someone where if they're wrong, it's going to be in the very fringes or the nuances, back in this era. Today, I cannot speak to the quality of CCIEs. I'm not attempting to besmirch any of them. But I'm also not endorsing that certification the way I once did.Ivan: Yeah, well, I totally agree with you. When this became, you know, a mass certification, the reason it became a mass certification is because reseller discounts are tied to reseller status, which is tied to the number of CCIEs they have, it became, you know, this, well, still high-end, but commodity that you simply had to get to remain employed because your employer needed the extra two point discount.Corey: It used to be that the prerequisite for getting the certification was beyond other certifications was, you spent five or six years working on things.Ivan: Well, that was what gave you the experience you needed because in those days, there were no boot camps. Today, you have [crosstalk 00:06:06]—Corey: Now, there's boot camp [crosstalk 00:06:07] things where it's we're going to train you for four straight weeks of nothing but this, teach to the test, and okay.Ivan: Yeah. No, it's even worse, there were rumors that some of these boot camps in some parts of the world that shall remain unnamed, were actually teaching you how to type in the commands from the actual lab.Corey: Even better.Ivan: Yeah. You don't have to think. You don't have to remember. You just have to type in the commands you've learned. You're done.Corey: There's an arc to the value of a certification. It comes out; no one knows what the hell it is. And suddenly it's, great, you can use that to really identify what's great and what isn't. And then it goes at some point down into the point where it becomes commoditized and you need it for partner requirements and the rest. And at that point, it is no longer something that is a reliable signal of anything other than that someone spent some time and/or money.Ivan: Well, are you talking about bachelor degree now?Corey: What—no, I don't have one of those either. I have—Ivan: [laugh].Corey: —an eighth grade education because I'm about as good of an academic as it probably sounds like I am. But the thing that really differentiated in my world, the difference between what I was doing in the network engineering sense, and the things that folks like you who were actually, you know, professionals rather than enthusiastic amateurs took into account was that I was always working inside of the LAN—Local Area Network—inside of a data center. Cool, everything here inside the cage, I can make a talk to each other, I can screw up the switching fabric, et cetera, et cetera. I didn't deal with any of the WAN—Wide Area Network—think ‘internet' in some cases. 
And at that point, we're talking about things like BGP, or OSPF in some parts of the world, or RIP. Or RIPv2 if you make terrible life choices.But BGP is the routing protocol that more or less powers the internet. At the time of this recording, we're a couple weeks past a BGP… kerfuffle that took Facebook down for a number of hours, during which time the internet was terrific. I wish they could do that more often, in fact; it was almost like a holiday. It was fantastic. I took my elderly relatives out and got them vaccinated. It was glorious.Now, we're back to having Facebook and, terrific. The problem I have whenever something like this happens is there's a whole bunch of crappy explainers out there of, “What is BGP and how might it work?” And people have angry opinions about all of these things. So instead, I prefer to talk to you. Given that you are a networking trainer, you have taught people about these things, you have written books, you have operated large—scale environments—Ivan: I even developed a BGP course for Cisco.Corey: You taught it for Cisco, of all places—Ivan: Yeah. [laugh].Corey: —back when that was impressive, and awesome and not a has-been. It's honestly, I feel like I could go there and still wind up going back in time, and still, it's the same Cisco in some respects: ‘evolve or die dinosaur,' and they got frozen in amber. But let's start at the very beginning. What is BGP?Ivan: Well, you know, when the internet was young, they figured out that we aren't all friends on the internet anymore. And I want to control what I tell you, and you want to control what you tell me. And furthermore, I want to control what I believe from what you're telling me. So, we needed a protocol that would implement policy, where I could say, “I will only announce my customers to you, but not what I've heard from Verizon.” And you will do the same.And then I would say, “Well, but I don't want to hear about that customer of yours because he's also my customer.” So, we need some sort of policy. And so they invented a protocol where you will tell me what you have, I will tell you what I have and then we would both choose what we want to believe and follow those paths to forward traffic. And so BGP was born.Corey: On some level, it seems like it's this faraway thing to people like me because I have a residential internet connection and I am not generally allowed to make my own BGP announcements to the greater world. Even when I was working in data centers, very often the BGP was handled by our upstream provider, or very occasionally by a router they would drop in with the easiest maintenance instructions in the world for me of, “Step one, make sure it has power. Step two, never touch it. Step three, we'd prefer if you don't even look at it and remain at least 20 feet away to keep from bringing your aura near anything we care about.” And that's basically how you should do with me in the context of hardware. So, it was always this arcane magic thing.Ivan: Well, it's not. You know, it's like power transmission: when you know enough about it, it stops being magic. It's technology, it's a bit more complicated than some other stuff. It's way less complicated than some other stuff, like quantum physics, but still, it's so rarely used that it gets this aura of being mysterious. 
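To make the policy idea concrete, here is a minimal Python sketch of the export rule Ivan describes: announce the prefixes learned from your customers to everyone, but never re-announce what you heard from a provider or a peer. The relationships and prefixes are invented for illustration; this is a toy model, not any router's actual configuration language.

```python
# Toy model of the export policy described above: announce routes learned from
# customers to everyone, but do not re-announce routes heard from a provider
# or a peer. All prefixes and relationships here are invented.

ROUTES = [
    # (prefix, relationship of the neighbor we learned it from)
    ("198.51.100.0/24", "customer"),
    ("203.0.113.0/24", "customer"),
    ("192.0.2.0/24", "provider"),  # e.g. heard from an upstream such as Verizon
    ("100.64.0.0/22", "peer"),
]

def export_to(neighbor_relationship):
    """Customers get every route we know; peers and providers only get
    the routes we learned from our own customers."""
    if neighbor_relationship == "customer":
        return [prefix for prefix, _ in ROUTES]
    return [prefix for prefix, learned in ROUTES if learned == "customer"]

print("announced to a peer:    ", export_to("peer"))
print("announced to a customer:", export_to("customer"))
```

In a real network this policy lives in the router's BGP configuration, typically as export filters or route maps, not in application code.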
And then of course, everyone starts getting their opinion, particularly the graduates of the Facebook Academy.And yes, it is true that usually BGP would be used between service providers, so whenever, you know, we are big enough to need policy, if you just need one uplink, there is no policy there. You either use the uplink or you don't use the uplink. If you want to have two different links to two different points of presence or to two different service providers, then you're already in the policy land. Do I prefer one provider over the other? Do I want to announce some things to one provider but other things to the other? Do I want to take local customers from both providers because I want to, you know, have lower latency because they are local customers? Or do I want to use one solely as the backup link because I paid so little for that link that I know it's shitty.So, you need all that policy stuff, and to do that, you really need BGP. There is no other routing protocol in the world where you could implement that sort of policy because everything else is concerned mostly with, let's figure out as fast as possible, what is reachable and how to get there. And BGP is like, “Hey, slow down. There's policy.”Corey: Yeah. In the context of someone whose primary interaction with networks is their home internet, where there's a single cable coming in from the outside world, you plug it into a device, maybe yours, maybe ISPs, maybe we don't care. That's sort of the end of it. But think in terms of large interchanges, where there are multiple redundant networks to get from here to somewhere else; which one should traffic go down at any given point in time? Which networks are reachable on the other end of various distant links? That's the sort of problem that BGP is very good at addressing and what it was built for. If you're running BGP internally, in a small network, consider not doing exactly that.Ivan: Well, I've seen two use cases—well, three use cases for people running BGP internally.Corey: Okay, this I want to hear because I was always told, “No touch ‘em.” But you know, I'm about to learn something. That's why I'm talking to you.Ivan: The first one was multinationals who needed policy.Corey: Yes. Many multi-site environments, large-scale companies that have redundant links, they're trying to run full mesh in some cases, or partial mesh where—between a bunch of facilities.Ivan: In this case, it was multiple continents and really expensive transcontinental links. And it was, I don't want to go from Europe to Sydney over US; I want to go over Middle East. And to implement that type of policy, you have to split, you know, the whole network into regions, and then each region is what BGP calls an autonomous system, so that it gets its stack, its autonomous system number and then you can do policy on that saying, “Well, I will not announce Asian routes to Europe through US, or I will make them less preferred so that if the Middle East region goes down, I can still reach Asia through US but preferably, I will not go there.”The second one is yet again, large networks where they had too many prefixes for something like OSPF to carry, and so their OSPF was breaking down and the only way to solve that was to go to something that was designed to scale better, which was BGP.And third one is if you want to implement some of the stuff that was designed for service providers, initially, like, VPNs, layer two or layer three, then BGP becomes this kitchen sink protocol. 
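The "do I prefer one provider over the other" question above is, at its core, a best-path decision. A rough sketch of that selection step follows, with made-up providers, local-preference values, and AS paths; it covers only a small slice of the real BGP decision process.

```python
# Rough sketch of preferring one provider over another: among candidate paths
# for the same prefix, pick the highest local preference, then the shortest
# AS path. The providers, preference values, and AS paths are made up, and
# this is only a small slice of the real BGP best-path algorithm.

CANDIDATE_PATHS = [
    # (provider, local_preference, as_path)
    ("cheap-backup-link", 50, [64501, 64510, 64520]),
    ("primary-provider", 200, [64502, 64520]),
]

def best_path(candidates):
    # Higher local-pref wins; a shorter AS path breaks ties.
    return max(candidates, key=lambda c: (c[1], -len(c[2])))

print("traffic leaves via:", best_path(CANDIDATE_PATHS)[0])
```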
You know, it's like using Route 53 as a database; we're using BGP to carry any information anyone ever wants to carry around. I'm just waiting for someone to design JSON in BGP RFC and then we are, you know… where we need to be.Corey: I feel on some level, like, BGP gets relatively unfair criticism because the only time it really intrudes on the general awareness is when something has happened and it breaks. This is sort of the quintessential network or systems—or, honestly, computer—type of issue. It's either invisible, or you're getting screamed at because something isn't working. It's almost like a utility. On some level. When you turn on a faucet, you don't wonder whether water is going to come out this time, but if it doesn't, there's hell to pay.Ivan: Unless it's brown.Corey: Well, there is that. Let's stay away from that particular direction; there's a beautiful metaphor, probably involving IBM, if we do. So, the challenge, too, when you look at it is that it's this weird, esoteric thing that isn't super well understood. And as soon as it breaks, everyone wants to know more about it. And then in full on charging to the wrong side of the Dunning-Kruger curve, it's, “Well, that doesn't sound hard. Why are they so bad at it? I would be able to run this better than they could.” I assure you, you can't. This stuff is complicated; it is nuanced; it's difficult. But the common question is, why is this so fragile and able to easily break? I'm going to turn that around. How is it that something that is this esoteric and touches so many different things works as well as it does?Ivan: Yeah, it's a miracle, particularly considering how crappy the things are configured around the world.Corey: There have been periodic outages of sites when some ISP sends out a bad BGP announcement and their upstream doesn't suppress it because hey, you misconfigured things, and suddenly half the internet believes oh, YouTube now lives in this tiny place halfway around the world rather than where it is currently being Anycasted from.Ivan: Called Pakistan, to be precise.Corey: Exact—there was an actual incident there; we are not dunking on Pakistan as an example of a faraway place. No, no, an Pakistani ISP wound up doing exactly this and taking YouTube down for an afternoon a while back. It's a common problem.Ivan: Yeah, the problem was that they tried to stop local users accessing YouTube. And they figured out that, you know, YouTube, is announcing this prefix and if they would announce to more specific prefixes, then you know, they would attract the traffic and the local users wouldn't be able to reach YouTube. Perfect. But that leaked.Corey: If you wind up saying that, all right, the entire internet is available on this interface, and a small network of 256 nodes available on the second interface, the most specific route always wins. That's why the default route or route of last resort is the entire internet. And if you don't know where to send it, throw it down this direction. That is usually, in most home environments, the gateway that then hands it up to your ISP, where they inspect it and do all kinds of fun things to sell ads to you, and then eventually get it to where it's going.This gets complicated at these higher levels. And I have sympathy for the technical aspects of what happened at Facebook; no sympathy whatsoever for the company itself because they basically do far more harm than they do good and I've been very upfront about that. 
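Corey's point that "the most specific route always wins" can be shown with the Python standard library alone. A minimal sketch, using an invented two-entry routing table with a default route as the route of last resort:

```python
# Longest-prefix match with the standard library: the most specific matching
# route wins, and 0.0.0.0/0 acts as the route of last resort. The two-entry
# routing table is invented for illustration.
import ipaddress

ROUTING_TABLE = {
    ipaddress.ip_network("0.0.0.0/0"): "default: hand it to the upstream ISP",
    ipaddress.ip_network("192.168.1.0/24"): "local LAN interface",
}

def next_hop(destination):
    addr = ipaddress.ip_address(destination)
    matches = [net for net in ROUTING_TABLE if addr in net]
    most_specific = max(matches, key=lambda net: net.prefixlen)
    return ROUTING_TABLE[most_specific]

print(next_hop("192.168.1.42"))   # -> local LAN interface
print(next_hop("203.0.113.99"))   # -> default: hand it to the upstream ISP
```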
But I want to talk to you as well about something that—people are going to be convinced I'm taking this in my database direction, but I assure you I'm not—DNS. What is the relationship between BGP and DNS? Which sounds like a strange question, sometimes.Ivan: There is none.Corey: Excellent.Ivan: It's just that different large-scale properties decided to implement the global load-balancing global optimal access to their servers in different ways. So, Cloudflare is a typical example of someone who is doing Anycast, they are announcing the same networks, the same prefixes, from hundreds locations around the world. So, BGP will take care that you always get to the close Cloudflare [unintelligible 00:18:46]. And that's it. That's how they work. No magic. Facebook didn't believe in the power of Anycast when they started designing their service. So, what they're doing is they have DNS servers around the world, and the DNS servers serve the local region, if you wish. And that DNS server then decides what facebook.com really stands for. So, if you query for facebook.com, you'll get a different answer in Europe than in US.Corey: Just a slight diversion on what Anycast is. If I ping Google's public resolver 8.8.8.8—easy to remember—from my computer right now, the packet gets there and back in about five milliseconds.Wherever you are listening to this, if you were to try that same thing you'd see something roughly similar. Now, one of two things is happening; either Google has found a way to break the laws of physics and get traffic to a central point faster than light for the 8.8.8.8 that I'm talking to and the one that you are talking to are not in fact the same computer.Ivan: Well, by the way, it's 13 milliseconds for me. And between you and me, it's 200 millisecond. So yes, they are cheating.Corey: Just a little bit. Or unless they tunneled through the earth rather than having to bounce it off of satellites and through cables.Ivan: No, even that wouldn't work.Corey: That's what the quantum computers are for. I always wondered. Now, we know.Ivan: Yeah. They're entangling the replies in advance, and that's how it works. Yeah, you're right.Corey: Please continue. I just wanted to clarify that point because I got that one hilariously wrong once upon a time and was extremely confused for about six months.Ivan: Yeah. It's something that no one ever thinks about unless, you know, you're really running large-scale DNS because honestly, root DNS servers were Anycasted for ages. You think they're like 12 different root DNS servers; in reality, there are, like, 300 instances hidden behind those 12 addresses.Corey: And fun trivia fact; the reason there are 12 addresses is because any more than that would no longer fit within the 512 byte limit of a UDP packet without truncating.Ivan: Thanks for that. I didn't know that.Corey: Of course. Now, EDNS extensions that you go out with a larger [unintelligible 00:21:03], but you can't guarantee that's going to hit. And what happens when you receive a UDP packet—when you receive a DNS result with a truncate flag set on the UDP packet? It is left to the client. 
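As the next line of the conversation notes, the client can either live with the partial result or retry the query over TCP. A hedged sketch of that fallback, assuming the third-party dnspython package is installed; the query name and the resolver address are examples only:

```python
# Client-side handling of a truncated UDP DNS reply: if the TC bit is set,
# retry the same query over TCP, which has no small-packet ceiling. Assumes
# the third-party dnspython package (pip install dnspython); the query name
# and the resolver address are examples only.
import dns.flags
import dns.message
import dns.query

def resolve(qname=".", rdtype="NS", server="8.8.8.8"):
    query = dns.message.make_query(qname, rdtype)
    response = dns.query.udp(query, server, timeout=3)
    if response.flags & dns.flags.TC:
        # Truncated answer: either accept the partial result or, as most
        # resolvers do, re-ask over TCP.
        response = dns.query.tcp(query, server, timeout=3)
    return [str(rrset) for rrset in response.answer]

print(resolve())  # the root NS set, the classic "fits in one packet" case
```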
It can either use the partial result, or it can try and re-establish over a TCP connection.That is one of those weird trivia questions they love to ask in sysadmin interviews, but it's yeah, fundamentally, if you're doing something that requires the root nameservers, you don't really want to start going down those arcane paths; you want it to just be something that fits in a single packet not require a whole bunch of computational overhead.Ivan: Yeah, and even within those 300 instances, there are multiple servers listening to the same IP address and… incoming packets are just sprayed across those servers, and whichever one gets the packet replies to it. And because it's UDP, it's one packet in one packet out. Problem solved. It all works. People thought that this doesn't work for TCP because, you know, you need a whole session, so you need to establish the session, you send the request, you get the reply, there are acknowledgements, all that stuff.Turns out that there is almost never two ways to get to a certain destination across the internet from you. So, people thought that, you know, this wouldn't work because half of your packets will end in San Francisco, and half of the packets will end in San Jose, for example. Doesn't work that way.Corey: Why not?Ivan: Well, because the global Internet is so diverse that you almost never get two equal cost paths to two different destinations because it would be San Francisco and San Jose announcing 8.8.8.8 and it would be a miracle if you would be sitting just in the middle so that the first packet would go to San Francisco, the second one would go to San Jose, and you know, back and forth. That never happens. That's why Cloudflare makes it work by analysing the same prefix throughout the world.Corey: So, I just learned something new about how routing announcements work, an aspect of BGP, and you a few minutes ago learned something about the UDP size limit and the root name servers. BGP and DNS are two of the oldest protocols in existence. You and I are also decades into our careers. If someone is starting out their career today, working in a cloud-y environment, there are very few network-centric roles because cloud providers handle a lot of this for us. Given these protocols are so foundational to what goes on and they're as old as they are, are we as an industry slash sector slash engineers losing the skills to effectively deploy and manage these things?Ivan: Yes. The same problem that you have in any other sufficiently developed technology area. How many people can build power lines? How many people can write a compiler? How many people can design a new CPU? How many people can design a new motherboard?I mean, when I was 18 years old, I was wire wrapping my own motherboard, with 8-bit processor. You can't do that today. You know, as the technology is evolving and maturing, it's no longer fun, it's no longer sexy, it stops being a hobby, and so it bifurcates into users and people who know about stuff. And it's really hard to bridge the gap from one to the other. So, in the end, you have, like, this 20 [graybeard 00:24:36] people who know everything about the technology, and the youngsters have no idea. And when these people die, don't ask me [laugh] how we'll get any further on.Corey: This episode is sponsored by our friends at CloudAcademy. That's right, they have a different lab challenge up for you called, “Code Red: Repair an AWS Environment with a Linux Bastion Host.” What does it do? 
Well, it's going to assess your ability to troubleshoot AWS networking and security issues in a production-like environment. Well, kind of, it's not quite like production because some exec is not standing over your shoulder, wetting themselves while screaming. But… ya know, you can pretend; in fact, I'm reasonably certain you can retain someone specifically for that purpose should you so choose. If you are the first prize winner who completes all four challenges with the fastest time, you'll win a thousand bucks. If you haven't started yet, you can still complete all four challenges between now and December 3rd to be eligible for the grand prize. There's only a few days left until the whole thing ends, so I would get on it now. Visit cloudacademy.com/corey. That's cloudacademy.com/C-O-R-E-Y, for god's sake don't drop the “E”, that drives me nuts, and thank you again to Cloud Academy for not only promoting my ridiculous nonsense but for continuing to help teach people how to work in this ridiculous environment.Corey: On some level, it feels like it's a bit of a down the stack analogy for what happened to me early in my career. My first systems administration job was running a large-scale email system. So, it was a hobby that I was interested in. I basically bluffed my way into working at a university for a year—thanks, Chapman; I appreciate that [laugh]—and it was great, but it was also pretty clear to me that with the rise of things like hosted email, Gmail, and whatnot, it was not going to be the future of what the present day at that point looked like, which was most large companies needed an email administrator. Those jobs were dwindling.Now, if you want to be an email systems administrator, there are maybe a dozen companies or so that can really use that skill set and everyone else just outsources it. That said, at those companies like Google and Microsoft, there are some incredibly gifted email administrators who are phenomenal at understanding every nuance of this. Do you think that is what we're going to see in the world of running BGP at large scale, where a few companies really need to know how this stuff works and everyone else just sort of smiles, nods and rolls with it?Ivan: Absolutely. We're already there. Because, you know, if I'm an end customer, and I need BGP because I have two uplinks to two ISPs, that's really easy. I mean, there are a few tricks you should follow and hopefully, some of the guardrails will be built into network operating systems so that you will really have to configure explicitly that you want to leak [unintelligible 00:26:15] between Verizon and AT&T, which is great fun if you have two low-speed links to both of them and now you're becoming transit between the two, which did happen to Verizon; that's why I'm mentioning them. Sorry, guys.Anyway, if you are a small guy and you just need two uplinks, and maybe do a bit of policy, that's easy and that's achievable, let's say with some Google and paste, and throwing spaghetti at the wall and seeing what sticks. On the other hand, what the large-scale providers—like for example Facebook because we were talking about them—are doing is, like, light years away. It's like comparing me turning on the light bulb and someone running, you know, nuclear reactor.Corey: Yeah, you kind of want the experts running some aspects on that. Honestly, in my case, you probably want someone more competent flipping the light switch, too. 
But that's why I have IoT devices here that power my lights, it on the one hand, keeps me from hurting myself on the other leads to a nice seasonal feel because my house is freaking haunted.Ivan: So, coming back to Facebook, they have these DNS servers all around the world and they don't want everyone else to freak out when one of these DNS servers goes away. So, that's why they're using the same IP address for all the DNS servers sitting anywhere in the world. So, the name server for facebook.com is the same worldwide. But it's different machines and they will give you different answers when you ask, “Where is facebook.com?”I will get a European answer, you will get a US answer, someone in Asia will get whatever. And so they're using BGP to advertise the DNS servers to the world so that everyone gets to the closest DNS server. And now it doesn't make sense, right, for the DNS server to say, “Hey, come to European Facebook,” if European Facebook tends to be down. So, if their DNS server discovers that it cannot reach the servers in the data center, it stops advertising itself with BGP.Why would BGP? Because that's the only thing it can do. That's the only protocol where I can tell you, “Hey, I know about this prefix. You really should send the traffic to me.” And that's what happened to Facebook.They bricked their backbone—whatever they did; they never told—and so their DNS server said, “Gee, I can't reach the data center. I better stop announcing that I'm a DNS server because obviously I am disconnected from the rest of Facebook.” And that happens to all DNS servers because, you know, the backbone was bricked. And so they just, you know, [unintelligible 00:29:03] from the internet, they've stopped advertising themselves, and so we thought that there was no DNS server for Facebook. Because no DNS server was able to reach their core, and so all DNS servers were like, “Gee, I better get off this because, you know, I have no clue what's going on.”So, everything was working fine. Everything was there. It's just that they didn't want to talk to us because they couldn't reach the backend servers. And of course, people blamed DNS first because the DNS servers weren't working. Of course they weren't. And then they blame the BGP because it must be BGP if it isn't DNS. But it's like, you know, you're blaming headache and muscle cramps and high fever, but in fact you have flu.Corey: For almost any other company that wasn't Facebook, this would have been a less severe outage just because most companies are interdependent on each other companies to run infrastructure. When Facebook itself has evolved the way that it has, everything that they use internally runs on the same systems, so they wound up almost with a bootstrapping problem. An example of this in more prosaic terms are okay, the data center had a power outage. Okay, now I need to power up all the systems again and the physical servers I'm trying to turn on need to talk to a DNS server to finish booting but the DNS server is a VM that lives on those physical servers. Uh-oh. Now, I'm in trouble. That is a overly simplified and real example of what Facebook encountered trying to get back into this, to my understanding.Ivan: Yes, so it was worse than that. It looks like, you know, even out-of-band management access didn't work, which to me would suggest that out-of-band management was using authentication servers that were down. 
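The withdraw-on-failure behavior Ivan walks through above is commonly built as a health-check process feeding a BGP daemon. A rough Python sketch of that pattern, in the spirit of ExaBGP-style health checks; the prefix, the backend address, and the exact announce/withdraw command strings are assumptions for illustration, not Facebook's actual setup:

```python
# Sketch of an edge DNS node that withdraws its anycast service prefix when it
# can no longer reach the backend, in the spirit of health-check helpers used
# with BGP daemons such as ExaBGP. The prefix, the backend address, and the
# announce/withdraw command strings are illustrative assumptions only.
import socket
import time

SERVICE_PREFIX = "192.0.2.0/24"   # hypothetical anycast prefix for the DNS service
BACKEND = ("203.0.113.10", 443)   # hypothetical backend this edge site must reach

def backend_reachable(timeout=2.0):
    try:
        with socket.create_connection(BACKEND, timeout=timeout):
            return True
    except OSError:
        return False

announced = False
while True:
    healthy = backend_reachable()
    if healthy and not announced:
        print(f"announce route {SERVICE_PREFIX} next-hop self", flush=True)
        announced = True
    elif not healthy and announced:
        # Cut off from the data center: stop attracting traffic, which is in
        # effect what Facebook's resolvers did during the October 2021 outage.
        print(f"withdraw route {SERVICE_PREFIX} next-hop self", flush=True)
        announced = False
    time.sleep(10)
```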
People couldn't even log to Zoom because Zoom was using single-sign-on based on facebook.com, and facebook.com was down so they couldn't even make Zoom calls or open Google Docs or whatever. There were rumors that there was a certain hardware tool with a rotating blade that was used to get into a data center and unbrick a box. But those rumors were vehemently denied, so who knows?Corey: The idea of having someone trying to physically break into a data center in order to power things back up is hilarious, but it does lead to an interesting question, which is in this world of cloud computing, there are a lot of people in the physical data centers themselves, but they don't have access, in most cases to log into any of the boxes. One of the most naive things I see all the time is, “Oh well, the cloud provider can read all of your data.” No, they can't. These things are audited. And yeah, theoretically, if they're lying outright, and somehow have falsified all of the third-party audit stuff that has been reported and are willing to completely destroy their business when it gets out—and I assure you, it would—yeah, theoretically, that's there. There is an element of trust here. But I've had to answer a couple of journalists questions recently of, “Oh, is AWS going to start scanning all customer content?” No, they physically cannot do it because there are many ways you can configure things where they cannot see it. And that's exactly what we want.Ivan: Yeah, like a disk encryption.Corey: Exactly. Disk encryption, KMS on some level, using—rolling your own, et cetera, et cetera. They use a lot of the same systems we do. The point being, though, is that people in the data centers do not even have logging rights to any of these nodes for the physical machines, in some cases, let alone the customer tenants on top of those things. So, on some level, you wind up with people building these systems that run on top of these computers, and they've never set foot in one of the data centers.That seems ridiculous to me as someone who came up visiting data centers because I had to know where things were when they were working so I could put them back that way when they broke later. But that's not necessary anymore.Ivan: Yeah. And that's the problem that Facebook was facing with that outage because you start believing that certain systems will always work. And when those systems break down, you're totally cut off. And then—oh, there was an article in ACM Queue long while ago where they were discussing, you know, the results of simulated failures, not real ones, and there were hilarious things like phone directory was offline because it wasn't on UPS and so they didn't know whom to call. Or alerts couldn't be diverted to a different data center because the management station for alert configuration was offline because it wasn't on UPS.Or, you know the one, right, where in New York, they placed the gas pump in the basement, and the diesel generators were on the top floor, and the hurricane came in and they had to carry gas manually, all the way up to the top floor because the gas pump in the basement just stopped working. It was flooded. So, they did everything right, just the fuel wouldn't come to the diesel generators.Corey: It's always the stuff that is under the hood on these things that you can't make sense of. One of the biggest things I did when I was evaluating data center sites was I'd get a one-line diagram—which is an electrical layout of the entire facility—great. I talked to the folks running it. 
Now, let's take a walk and tour it. Hmmm, okay. You show four transformers on your one-line diagram. I see two transformers and two empty concrete pads. It's an aspirational one-line diagram. It's a joke that makes it a one-liner diagram and it's not very funny. So it's, okay if I can't trust you for those little things, that's a problem.Ivan: Yeah, well, I have another funny story like that. We had two power feeds coming into the house plus the diesel generator, and it was, you know, the properly tested every month diesel generator. And then they were doing some maintenance and they told us in advance that they will cut both power feeds at 2 a.m. on a Sunday morning.And guess what? The diesel generator didn't start. Half an hour later UPS was empty, we were totally dead in water with quadruple redundancy because you can't get someone it's 2 a.m. on a Sunday morning to press that button on the diesel generator. In half an hour.Corey: That is unfortunate.Ivan: Yeah, but that's how the world works. [laugh].Corey: So, it's been fantastic reminding myself of some of the things I've forgotten because let's be clear, in working with cloud, a lot of this stuff is completely abstracted away. I don't have to care about most of these things anymore. Now, there's a small team of people that AWS who very much has to care; if they don't, I will say mean things to them on Twitter, if I let my HugOps position slip up just a smidgen. But they do such a good job at this that we don't have problems like this, almost ever, to the point where when it does happen, it's noteworthy. It's been fun talking to you about this just because it's a trip down a memory lane that is a lot more aligned with the things that are there and we tend not to think about them. It's almost a How it's Made episode.Ivan: Yeah. And don't be so relaxed regarding the cloud networking because, you know, if you don't go full serverless with nothing on-premises, you know what protocol you're running between on-premises and the cloud on direct connect? It's called BGP.Corey: Ah. You know, I did not know that. I've done some ridiculous IPsec pairings over those things, and was extremely unhappy for a while afterwards, but I never got to the BGP piece of it. Makes sense.Ivan: Yeah, even over IPsec if you want to have any dynamic failover, or multiple sites, or anything, it's [BP 00:36:56].Corey: I really want to thank you for taking the time to go through all this with me. If people want to learn more about how you view these things, learn more things from you, as I'd strongly recommend they should if they're even slightly interested by the conversation we've had, where can they find you?Ivan: Well, just go to ipspace.net and start exploring. There's the blog with thousands of blog entries, some of them snarkier than others. Then there are, like, 200 webinars, short snippets of a few hours of—Corey: It's like a one man version of re:Invent. My God.Ivan: Yeah, sort of. But I've been working on this for ten years, and they do it every year, so I can't produce the content at their speed. And then there are three different full-blown courses. Some of them are just, you know, the materials from the webinars, plus guest speakers plus hands-on exercises, plus I personally review all the stuff people submit, and they cover data centers, and automation, and public clouds.Corey: Fantastic. And we will, of course, put links to that into the [show notes 00:38:01]. Thank you so much for being so generous with your time. 
I appreciate it.Ivan: Oh, it's been such a huge pleasure. It's always great talking with you. Thank you.Corey: It really is. Thank you once again. Ivan Pepelnjak network architect and oh so much more. CCIE #1354 Emeritus. And read the bio; it's well worth it. I am Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice and a comment formatted as a RIPv2 announcement.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.

ANYCAST
Trailer: Der ANYCAST-Adventskalender 2021

ANYCAST

Play Episode Listen Later Nov 27, 2021 9:03


Starting December 1, 2021, there will be a new episode of our advent calendar to listen to on Telegram every day. And you'll find out in this trailer what we'll be tasting there!

ANYCAST
ANYCAST Kompakt

ANYCAST

Play Episode Listen Later Oct 30, 2021 2:46


Sorry, we'll continue next week!

kompakt anycast
ANYCAST
ANY112 – Auf der Wache (zu Gast: Angela Merkel)

ANYCAST

Play Episode Listen Later Sep 17, 2021 76:04


Ta-Tü-Ta-Ta, the ANYCAST is here! This is the big fire brigade special from the only specialist podcast with an ASMR component.

Thinking Elixir Podcast
56: Fly-ing Elixir Close to Users with Kurt Mackey

Thinking Elixir Podcast

Play Episode Listen Later Jul 13, 2021 59:32


We talk with Kurt Mackey, founder at Fly.io, about what makes the Fly platform unique and why hosting Elixir applications there makes a lot of sense. They started out looking to make a better CDN for developers and this pushed them to try deploying Full Stack applications closer to users, not just the static assets! We learn about the tech behind the networking, how databases can be moved closer to users, and how LiveView is even more awesome when it is close to users. Kurt also shares what he sees as the future for databases as the industry continues to move into globally distributed applications. Elixir Community News - https://github.com/elixir-lang/elixir/pull/11101 (https://github.com/elixir-lang/elixir/pull/11101) – Task.completed/1 function added - https://github.com/elixir-lang/elixir/releases/tag/v1.12.2 (https://github.com/elixir-lang/elixir/releases/tag/v1.12.2) – Elixir 1.12.2 minor release - https://github.com/phoenixframework/phoenixliveview/pull/1511 (https://github.com/phoenixframework/phoenix_live_view/pull/1511) – LV Lifecycle Hooks - https://github.com/phoenixframework/phoenixliveview/pull/1515 (https://github.com/phoenixframework/phoenix_live_view/pull/1515) – Add onmount option to livesession - https://github.com/phoenixframework/phoenixliveview/pull/1513 (https://github.com/phoenixframework/phoenix_live_view/pull/1513) – New formfor/4 - https://github.com/phoenixframework/phoenix/pull/4355 (https://github.com/phoenixframework/phoenix/pull/4355) – Add mailer generator for swoosh - https://github.com/akoutmos/prom_ex/blob/master/CHANGELOG.md (https://github.com/akoutmos/prom_ex/blob/master/CHANGELOG.md) – promex library has 1.3.0 release adding Absinthe GraphQL plugin and dashboard - https://fly.io/blog/monitoring-your-fly-io-apps-with-prometheus/ (https://fly.io/blog/monitoring-your-fly-io-apps-with-prometheus/) – Using promex on an Elixir project - https://github.com/wojtekmach/elixir-run (https://github.com/wojtekmach/elixir-run) – Wojtek Mach released a new project called "elixir-run" - https://github.com/nurturenature/elixir_actions#elixir-actions-for-github (https://github.com/nurturenature/elixir_actions#elixir-actions-for-github) – Elixir Github Actions CI example repo - https://twitter.com/evadne/status/1412083538988613636 (https://twitter.com/evadne/status/1412083538988613636) – Etso 0.1.6 was released adding "isnil" query support Do you have some Elixir news to share? 
Tell us at @ThinkingElixir (https://twitter.com/ThinkingElixir) or email at show@thinkingelixir.com (mailto:show@thinkingelixir.com) Discussion Resources - https://fly.io/ (https://fly.io/) - https://fly.io/docs/introduction/ (https://fly.io/docs/introduction/) - https://fly.io/blog/building-a-distributed-turn-based-game-system-in-elixir/ (https://fly.io/blog/building-a-distributed-turn-based-game-system-in-elixir/) - https://www.compose.com/ (https://www.compose.com/) - https://twitter.com/QuinnyPig (https://twitter.com/QuinnyPig) – Corey Quinn on Twitter - https://nats.io/ (https://nats.io/) - https://jamstack.org/ (https://jamstack.org/) - https://liveview-counter.fly.dev/ (https://liveview-counter.fly.dev/) - https://github.com/fly-apps/phoenix-liveview-cluster (https://github.com/fly-apps/phoenix-liveview-cluster) - https://en.wikipedia.org/wiki/Anycast (https://en.wikipedia.org/wiki/Anycast) - https://en.wikipedia.org/wiki/BorderGatewayProtocol (https://en.wikipedia.org/wiki/Border_Gateway_Protocol) - https://en.wikipedia.org/wiki/Contentdeliverynetwork (https://en.wikipedia.org/wiki/Content_delivery_network) - https://www.wireguard.com/ (https://www.wireguard.com/) - https://tailscale.com/ (https://tailscale.com/) - https://fly.io/blog/globally-distributed-postgres/ (https://fly.io/blog/globally-distributed-postgres/) - https://www.cockroachlabs.com/ (https://www.cockroachlabs.com/) - https://www.yugabyte.com/ (https://www.yugabyte.com/) - https://www.citusdata.com/ (https://www.citusdata.com/) - https://fly.io/blog/ (https://fly.io/blog/) - https://twitter.com/flydotio (https://twitter.com/flydotio) - https://en.wikipedia.org/wiki/JoeArmstrong(programmer) (https://en.wikipedia.org/wiki/Joe_Armstrong_(programmer)) Guest Information - https://twitter.com/mrkurt (https://twitter.com/mrkurt) – on Twitter - https://github.com/mrkurt (https://github.com/mrkurt) – on Github - https://fly.io/blog/ (https://fly.io/blog/) – Blog Find us online - Message the show - @ThinkingElixir (https://twitter.com/ThinkingElixir) - Email the show - show@thinkingelixir.com (mailto:show@thinkingelixir.com) - Mark Ericksen - @brainlid (https://twitter.com/brainlid) - David Bernheisel - @bernheisel (https://twitter.com/bernheisel) - Cade Ward - @cadebward (https://twitter.com/cadebward)

Dr. Bill.TV - Audio Netcasts
DrBill.TV #494 – Audio – The Microsoft AND Windows Are Evil Edition!

Dr. Bill.TV - Audio Netcasts

Play Episode Listen Later Jun 28, 2021 21:57


A first look at Windows 11, Duck Duck Go improves on-line privacy, GS0tW: USB Oblivion! Subscribe to our YouTube Channel, Microsoft steals an Open Source project for their updater, the June 24th Windows 11 Event, will YOUR PC run Windows 11? An overview of Anycast! (Jun 28, 2021) Links that pertain to this Netcast: TechPodcasts Network International Association of Internet [...] The post DrBill.TV #494 – Audio – The Microsoft AND Windows Are Evil Edition! appeared first on Dr. Bill | The Computer Curmudgeon.

Dr. Bill.TV - Video Netcasts
DrBill.TV #494 – Video – The Microsoft AND Windows Are Evil Edition!

Dr. Bill.TV - Video Netcasts

Play Episode Listen Later Jun 28, 2021 22:00


A first look at Windows 11, Duck Duck Go improves on-line privacy, GS0tW: USB Oblivion! Subscribe to our YouTube Channel, Microsoft steals an Open Source project for their updater, the June 24th Windows 11 Event, will YOUR PC run Windows 11? An overview of Anycast! (Jun 28, 2021) Links that pertain to this Netcast: TechPodcasts Network International Association of Internet [...] The post DrBill.TV #494 – Video – The Microsoft AND Windows Are Evil Edition! appeared first on Dr. Bill | The Computer Curmudgeon.

ANYCAST
ANY099 – ANYCAST.shop

ANYCAST

Play Episode Listen Later Jun 16, 2021 111:10


It took a little while, but here is the 99th episode of the only specialist podcast. Since 99 is only one away from 100, we use the opportunity to point out, really emphatically, this special episode, which will only be released on cassette. Apart from that, it's all about vaccination. Happy vaccine envy, you losers!

ANYCAST
ANYCAST Weihnachtsfeier 2020

ANYCAST

Play Episode Listen Later Dec 23, 2020 80:51


2020 is almost over and we're having a small remote Christmas party that is anything but a party. We talk about Corona, vaccinations, Christmas, and Christmas movies, and scroll through our photos from 2020. With that in mind: Merry Christmas and a Happy New Year!

ANYCAST
ANY090 – Einsatz in Pankow: Mit dem Dorfpodcast unterwegs

ANYCAST

Play Episode Listen Later May 23, 2020 59:25


First steps into post-Corona normality: we're recording together in one room again. At a distance. The past weeks gave us time to dig through old documentary treasures once more and wallow in them. We also track down a big scandal in the world of one-euro stores. A whole lot, and even less, today on ANYCAST.

Software Sessions
Bringing GeoCities Back with Kyle Drake

Software Sessions

Play Episode Listen Later Jan 15, 2020 93:53


Kyle Drake discusses what GeoCities was, why it failed, the technical and legal challenges of creating its spiritual successor Neocities, and how he's working to preserve and curate sites from the old web.

Cyber Help
DNS-2-Anycast-DNS

Cyber Help

Play Episode Listen Later Jan 3, 2020 8:43


Basic Information about Anycast DNS.

anycast
The Dan York Report
TDYR 370 - CDNs, Anycast, Containers, and the Disaggregation of Networking Models

The Dan York Report

Play Episode Listen Later Nov 7, 2019 10:53


Once upon a time, our mental model of how networking worked was rather simple. You had a device connected to a network (a "client") and a server to which that device was connecting. Each had a host name and an IP address. The server was typically a computer on a desk or in a rack. But that's not TODAY's network... now instead we have content delivery networks (CDNs), Anycast networks, containers and so much more.. it's a FASCINATING time to be in networking! I talk about all of that on this episode...

linkmeup. Подкаст про IT и про людей
telecom №77. IPv4 продолжают заканчиваться

linkmeup. Подкаст про IT и про людей

Play Episode Listen Later Jul 25, 2019


No sooner had linkmeup complained that there is no IPv4 left and almost nowhere to get it, than the American AMPR unsealed its 44/8 and sold a /10 block to Amazon. Three more /10s are left, come and get them! And considering how much was paid for 4 million addresses, the cover image takes on a special charm. Who: Artyom Gavrichenkov aka @ximaera. About what: Where to get IP addresses. The Channel Partners story: slashdot.org/story/355802. Amir Golestan used shell companies registered with forged documents to obtain about 757 thousand addresses for resale. Golestan made at least around 10 million dollars on this. He was caught because of suspicious documents attached to one of the transfer requests, after which ARIN started digging. Some of the addresses (not all!) were revoked. Is this a precedent? Or maybe today we really shouldn't deploy IPv6 at all, but just take everything and divide it up? How address management was actually organized in the late 80s and early 90s, and how that is connected to the development of IPv4, IPv6, and the dot-com crash. The 201 million IPv4 addresses of the US Department of Defense: why not redistribute them? (Spoiler: it makes no sense.) Following the same redundancy principle, a lot of networks were allocated for all sorts of odd things: multicast (which never took off before IPv6), loopback, experiments, and the half-baked BOOTP of the 80s. Why not hand them out for Anycast? Internet engineers have already thought about this too, but in the end it won't help us either. The Martinez and Friaças proposals to revoke LIRs for hijacking and abuse: a Spaniard and a Portuguese who advocate turning the RIPE NCC into the Internet police. They fight spam, and all their initiatives are roughly in that spirit. This year they finally found their courage, and our life has become much more interesting. Add the RSS feed to your podcast player. The podcast is available in iTunes. You can download all episodes of the podcast from Yandex Disk. Url podcast: https://dts.podtrac.com/redirect.mp3/fs.linkmeup.ru/podcasts/telecom/linkmeup-V077(2019-07).mp3

Anycast
2019 (Tax Everything!)

Anycast

Play Episode Listen Later Jan 3, 2019 43:49


The all new Anycast! 2019 is upon us and it's time to hear from you. Covering a few general New Year's resolutions to a main oratory bowel movement to TAX EVERYTHING!

Anycast
Jesus jams anycast

Anycast

Play Episode Listen Later Dec 13, 2018 44:56


a random episode with music thrown in

Anycast
The giveaway

Anycast

Play Episode Listen Later Nov 29, 2018 42:04


an anycast giveaway, children's services, and shatner!

Anycast
anycast sex (drunken freestyle)

Anycast

Play Episode Listen Later Nov 13, 2018 2:43


a sexy song just for our listeners.

Anycast
Premiere Episode

Anycast

Play Episode Listen Later Sep 27, 2018 42:08


Anything goes! From Religion to Video Games, maybe a genital joke here and there...

Gretchenfrage
GFST04 – Gretchenfrage Stammtisch: fundamentalistische Evangelikale und ein selbstbestimmtes Leben

Gretchenfrage

Play Episode Listen Later Oct 1, 2016 85:49


In this episode we talk with Cornelis Kater, podcaster behind the formats "Schöne Ecken", "Bahnhelden", and the "Anycast". We met him at Podcamp 2016 and realized right there that we urgently needed to record an episode together. We talk with him about his childhood and youth in an evangelical congregation that was shaped by fear of sin and hell, and about why he nevertheless thinks that everyone, and the world in general, still needs religion. Enjoy listening!

Collect
EP. 004 - Daemon Filmes

Collect

Play Episode Listen Later Apr 13, 2016 91:13


Hello, everyone. This is another episode of Anycast! Oops... Hold on... Due to a few mishaps we'll have to change the name. And today we premiere the new name... From today on, this is COLLECT! That same Collect you do after every project in After Effects... Collect as in collecting, gathering, making a little pile... Or Collect Cast? You tell us. In today's episode, the crew from Daemon Filmes and the venture of building something of your own, a business with your own personality. If you think about one day having your own production company, or want to know up close the details of how one works, join us for this conversation. Taking part in the conversation were Baboo Matusaki, partner and art director; Tom Strackle, 3D director; Totem Dias, partner and 2D director; and Andreia Barreto, partner and account manager. We had a special appearance by Vader and Leia, Tom's lovely cats. ---------- Who talked with us: DAEMON FILMES Web - https://www.daemonfilmes.tv FB - https://www.facebook.com/daemonfilmes Vimeo - https://vimeo.com/daemonfilmes Baboo Matsusaki - https://vimeo.com/matsusaki Tom Strackle - https://vimeo.com/tomstracke - www.tomstracke.com Andreia Barreto - contato@daemonfilmes.tv Totem Dias HOSTS Eloise Bento - elobento.com Rafa Piotto - http://moxo.tv/ ---------------------- Come talk to us! Facebook.com/collect.podast Twitter - https://twitter.com/CollectPodcast Sound cloud - https://soundcloud.com/collectpodcast --------------------- Links: Todd McFarlene - http://mcfarlane.com/ Crianças contra Zika - https://www.facebook.com/criancascontrazika/?fref=nf Twitter Guilhermo del Toro - https://twitter.com/RealGDT Itau Espaço Cinema - American independent film festival - http://www.itaucinemas.com.br/home/ Policia 24h - 4Cabezas - https://vimeo.com/160448837 cisma - https://vimeopro.com/cisma/reel Animation Domination High Def - https://www.youtube.com/user/FOXADHD

ANYCAST
ANY050 - 5.000 Jahre Sauerbraten

ANYCAST

Play Episode Listen Later May 8, 2014 109:48


Press conference (00:00:00); Opening (00:07:22); 5,000 years of domestication of the horse; "Without the horse no carriage, without the carriage no car, and without the car no accidents!" (Cornelis)
1st guest: Sebastian Fiebrig (00:10:20) @saumselig; Textilvergehen; FC Union Berlin; Faxe 10%; bang effect; "Mate from a can, the new future!" (Cornelis drinks mate); "Pfeffi" peppermint liqueur; Renke's car story; "That is a wonderful story with a bad punchline!" (Renke); Dennis wears black-censor-bar glasses; exclusive on anycast: the answer to the question "Who will be Uwe Neuhaus' successor?"; Renke scored a goal in training (applause); Mike Büskens; Greuther Fürth; Fortuna Düsseldorf; Friedhelm Funkel; Berti Vogts; Lothar Matthäus; listener meetup the following Sunday at the Union filling station; the Bayern curse (Union struggles to win against teams from Bavaria); misfire of Dennis' toy gun
2nd guest: Ralf Stockmann (00:24:33) @rstockm; "Is this actually a permanent thing with you?" (Ralf doubts the future of anyca.st); "Totally pathetic!" (Dennis fires his gun again); Ralf finds it bizarre that all of this costs so little; snacks (pumpernickel with Obazda, plus cherry tomatoes and apple) are passed around; waving cats are not a perpetual motion machine; Renke calls on people to carry the couch out of the Microsoft lounge; illegal assembly; Sascha Lobo; Dennis calls the re:publica audience "marketing scum"; "Oh my god, I'm so angry, I'm registering a domain!" (Ralf quotes a domain); Ralf undresses; Ralf is a Regierungsrat and had a T-shirt printed with "re:gierungsrat"; Christoph Schlingensief; "You're all too rational for me. That's no way to start a revolution!" (Ralf); Renke criticizes Sascha Lobo; Renke reads his tweet; it is claimed that re:publica is for posers; at the beginning, Dennis' belly was on display; stern glances at the front row to call the next guest onto the stage
3rd guest: Claudia Krell (00:45:30) @wortkomplex; Pantone; Claudia has a calendar that shows a new color every day; HKS; HKS 42, the color of the German Mittelstand (blue); offset printing; donation appeal for an anyca.st beer pong table; Renke explains the beer pong concept; Renke would transfer the money back if the total doesn't come together; European elections; postal voting station chair; poll workers' "refreshment money"; election advertising; Dennis wants a lighter that works at a distance of 5 meters; Claudia recommends an environment destroyer; Claudia was at Mondo Sardo on Winsstraße; Mondo Sardo; "Well, that used to be Germany too!" (Renke on Sardinia); the toy gun gets heavy use; Claudia says goodbye
4th guest: Sven Sedivy (01:06:27) @graphorama; Odol; LAN party; the rumor is spread that distilled water would be provided at LAN parties; electrolysis; dextrose; Dennis asked BP for crude oil and was sent a liter via UPS hazardous goods shipping; skewed worldviews; Rossmann; shownot.es gets a mention; Dennis has a white-and-yellow belt in judo; breaking news: someone danced naked through the train between Magdeburg and Braunschweig; the new anyca.st website; opening body parts; "opening things with body parts"; Sven would like canapés; "I spy with my little eye. A great idea for an audio podcast, too!" (Renke suggests a parlor game); ADN reading; "Starkbier-Man" (Sven); "My superhero power is special glue" (Cornelis); Renke and Cornelius had croquettes for breakfast; Sven wants to denounce Renke; "AH, MY NIPPLE!!!" (Renke was shot by Dennis with the toy gun); Graphorama grew his hair out for the Anycast; Dennis was disinherited by his grandma
5th guest: Martin Fischer (01:28:09) @nitramred; the Staatsbürgerkunde podcast; "That looks a bit like Wick Medinait." (Martin on the peppermint schnapps); the claim that Odol produces phenol; "My agent name is Double Zero Zero" (Dennis); "Nostril change!" (Dennis rustles with the headset); Martin plugs his podcast; Renke sends greetings to Martin's parents; Goldkrone and Nordhäuser Doppelkorn were the best-known schnapps in the GDR, explains Heiko Linke at the hall microphone; "It has to go bang, that's the only effect" (Dennis); alcohol was quite cheap in the GDR; beggars were considered "asocial" in the GDR; Renke's thesis: you always need someone to push the wheelbarrow; Allkauf; Ausbauhaus: in the GDR you could once buy a house in a shop
6th guest: Katrin Roenicke (01:40:02) @diekadda; Renke opens a beer for Katrin; the ADN reading; ADN: State of the Union; "Nothing is over until it's over"; Farewell (01:46:56); thanks to all the guests; The Internationale (01:48:28)

Gordon And Mike's ICT Podcast
Internet Protocol version 6 (IPv6) Details Podcast [32:30]

Gordon And Mike's ICT Podcast

Play Episode Listen Later Apr 3, 2008 32:30


Intro: Two weeks ago we gave an overview of IPv6. This week we take a look at some of the technical details of the protocol.

Mike: Gordon, a couple of weeks ago we discussed IPv6 - can you give us a quick review - what's the difference between IPv4 and IPv6?
The most obvious distinguishing feature of IPv6 is its use of much larger addresses. The size of an address in IPv6 is 128 bits, four times larger than an IPv4 address. A 32-bit address space allows for 2^32, or 4,294,967,296, possible addresses. A 128-bit address space allows for 2^128, or 340,282,366,920,938,463,463,374,607,431,768,211,456 (about 3.4 × 10^38), possible addresses. In the late 1970s, when the IPv4 address space was designed, it was unimaginable that it could be exhausted. However, due to changes in technology and an allocation practice that did not anticipate the explosion of hosts on the Internet, the IPv4 address space was consumed to the point that by 1992 it was clear a replacement would be necessary. With IPv6, it is hard to conceive of the address space ever being consumed.

Mike: It's not just to have more addresses though, is it?
It is important to remember that the decision to make the IPv6 address 128 bits in length was not so that every square inch of the Earth could have 4.3 × 10^20 addresses. Rather, the relatively large size of the IPv6 address is designed to be subdivided into hierarchical routing domains that reflect the topology of the modern-day Internet. The use of 128 bits allows for multiple levels of hierarchy and flexibility in designing hierarchical addressing and routing, something that is currently lacking on the IPv4-based Internet.

Mike: Is there a specific RFC for IPv6?
The IPv6 addressing architecture is described in RFC 2373.

Mike: I know there is some basic terminology associated with IPv6. Can you describe nodes and interfaces as they apply to IPv6?
A node is any device that implements IPv6. It can be a router, which is a device that forwards packets not directed specifically to it, or a host, which is a node that doesn't forward packets. An interface is the connection to a transmission medium through which IPv6 packets are sent.

Mike: How about some more IPv6 terminology - can you discuss links, neighbors, link MTUs, and link-layer addresses?
A link is the medium over which IPv6 is carried. Neighbors are nodes that are connected to the same link. A link maximum transmission unit (MTU) is the maximum packet size that can be carried over a given link medium, expressed in octets. A link-layer address is the "physical" address of an interface, such as the media access control (MAC) address for Ethernet links.

Mike: Can you give a brief outline of address syntax?
IPv4 addresses are represented in dotted-decimal format: the 32-bit address is divided along 8-bit boundaries, and each set of 8 bits is converted to its decimal equivalent and separated by periods. For IPv6, the 128-bit address is divided along 16-bit boundaries, and each 16-bit block is converted to a 4-digit hexadecimal number and separated by colons. The resulting representation is called colon-hexadecimal.
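As a quick cross-check of that grouping rule (this sketch is not from the episode, it simply uses Python's standard-library ipaddress module on the same 128-bit value that the hand-worked example below converts step by step; note that Python prints hex digits in lowercase):

import ipaddress

# The 128-bit value behind the worked example that follows.
raw = 0x21DA00D300002F3B02AA00FFFE289C5A

addr = ipaddress.IPv6Address(raw)
print(addr.exploded)     # 21da:00d3:0000:2f3b:02aa:00ff:fe28:9c5a  (full colon-hexadecimal form)
print(addr.compressed)   # 21da:d3:0:2f3b:2aa:ff:fe28:9c5a          (leading zeros suppressed)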
The following is an IPv6 address in binary form:

00100001110110100000000011010011000000000000000000101111001110110000001010101010000000001111111111111110001010001001110001011010

The 128-bit address is divided along 16-bit boundaries:

0010000111011010 0000000011010011 0000000000000000 0010111100111011 0000001010101010 0000000011111111 1111111000101000 1001110001011010

Each 16-bit block is converted to hexadecimal and delimited with colons. The result is: 21DA:00D3:0000:2F3B:02AA:00FF:FE28:9C5A

The IPv6 representation can be further simplified by removing the leading zeros within each 16-bit block, although each block must retain at least a single digit. With leading-zero suppression, the address becomes: 21DA:D3:0:2F3B:2AA:FF:FE28:9C5A

Mike: I know there are lots of zeros in IPv6 addresses - can you describe zero compression notation?
Some types of addresses contain long sequences of zeros. To further simplify the representation of IPv6 addresses, a contiguous sequence of 16-bit blocks set to 0 in colon-hexadecimal format can be compressed to "::", known as the double colon. For example, the link-local address FE80:0:0:0:2AA:FF:FE9A:4CA2 can be compressed to FE80::2AA:FF:FE9A:4CA2, and the multicast address FF02:0:0:0:0:0:0:2 can be compressed to FF02::2. Zero compression can only be used to compress a single contiguous series of 16-bit blocks; you cannot use it to compress part of a 16-bit block. For example, you cannot express FF02:30:0:0:0:0:0:5 as FF02:3::5; the correct representation is FF02:30::5. To determine how many 0 bits are represented by the "::", count the number of blocks in the compressed address, subtract this number from 8, and then multiply the result by 16. For example, in the address FF02::2 there are two blocks (the "FF02" block and the "2" block), so the number of bits expressed by the "::" is 96 (96 = (8 - 2) × 16). Zero compression can only be used once in a given address; otherwise, you could not determine the number of 0 bits represented by each instance of "::".

Mike: IPv4 addresses use subnet masks - do IPv6 addresses?
No - a subnet mask is not used for IPv6. Instead, prefix length notation is used. The prefix is the part of the address that indicates the bits that have fixed values or are the bits of the network identifier. Prefixes for IPv6 subnet identifiers, routes, and address ranges are expressed in the same way as Classless Inter-Domain Routing (CIDR) notation for IPv4. An IPv6 prefix is written in address/prefix-length notation. For example, 21DA:D3::/48 is a route prefix and 21DA:D3:0:2F3B::/64 is a subnet prefix.
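Both the zero-compression rule and prefix-length notation can be cross-checked with the same standard-library module; again, this is just an illustrative sketch rather than material from the episode:

import ipaddress

# Zero compression: FF02:30:0:0:0:0:0:5 must compress to FF02:30::5 (never FF02:3::5).
print(ipaddress.IPv6Address('FF02:30:0:0:0:0:0:5').compressed)   # ff02:30::5

# Prefix-length notation: the subnet prefix 21DA:D3:0:2F3B::/64 from the example above.
subnet = ipaddress.IPv6Network('21DA:D3:0:2F3B::/64')
print(subnet.prefixlen)       # 64
print(subnet.num_addresses)   # 18446744073709551616, i.e. 2**64 addresses in the /64
print(ipaddress.IPv6Address('21DA:D3:0:2F3B:2AA:FF:FE28:9C5A') in subnet)   # True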
Mike: I know there are three basic types of IPv6 addresses - can you give a brief description of each?

1. Unicast – a packet is sent to a particular interface. A unicast address identifies a single interface within the scope of the type of unicast address. With the appropriate unicast routing topology, packets addressed to a unicast address are delivered to a single interface. To accommodate load-balancing systems, RFC 2373 allows multiple interfaces to use the same address as long as they appear as a single interface to the IPv6 implementation on the host.

2. Multicast – a packet is sent to a set of interfaces, typically encompassing multiple nodes. A multicast address identifies multiple interfaces. With the appropriate multicast routing topology, packets addressed to a multicast address are delivered to all interfaces that are identified by the address.

3. Anycast – a packet, while identifying multiple interfaces (and typically multiple nodes), is sent only to the interface that is determined to be "nearest" to the sender. An anycast address identifies multiple interfaces. With the appropriate routing topology, packets addressed to an anycast address are delivered to a single interface: the nearest interface that is identified by the address, where "nearest" is defined in terms of routing distance.

A multicast address is used for one-to-many communication, with delivery to multiple interfaces. An anycast address is used for one-to-one-of-many communication, with delivery to a single interface. In all cases, IPv6 addresses identify interfaces, not nodes. A node is identified by any unicast address assigned to one of its interfaces.

Mike: What about broadcasting?
RFC 2373 does not define a broadcast address. All types of IPv4 broadcast addressing are performed in IPv6 using multicast addresses. For example, the subnet and limited broadcast addresses from IPv4 are replaced with the link-local scope all-nodes multicast address of FF02::1.

Mike: What about special addresses?
The following are special IPv6 addresses:

Unspecified address: The unspecified address (0:0:0:0:0:0:0:0, or ::) is only used to indicate the absence of an address. It is equivalent to the IPv4 unspecified address of 0.0.0.0. The unspecified address is typically used as a source address for packets attempting to verify the uniqueness of a tentative address. It is never assigned to an interface or used as a destination address.

Loopback address: The loopback address (0:0:0:0:0:0:0:1, or ::1) is used to identify a loopback interface, enabling a node to send packets to itself. It is equivalent to the IPv4 loopback address of 127.0.0.1. Packets addressed to the loopback address must never be sent on a link or forwarded by an IPv6 router.

Mike: How is DNS handled?
Enhancements to the Domain Name System (DNS) for IPv6 are described in RFC 1886 and consist of two new elements: the host address (AAAA) resource record, and the IP6.ARPA domain for reverse queries. Note: according to RFC 3152, Internet Engineering Task Force (IETF) consensus has been reached that the IP6.ARPA domain be used instead of IP6.INT as defined in RFC 1886. IP6.ARPA is the domain used by IPv6 for Windows Server 2003.

The host address (AAAA) resource record: A new DNS resource record type, AAAA (called "quad A"), is used for resolving a fully qualified domain name to an IPv6 address. It is comparable to the host address (A) resource record used with IPv4. The resource record type is named AAAA (type value 28) because 128-bit IPv6 addresses are four times as large as 32-bit IPv4 addresses. The following is an example of a AAAA resource record:

host1.microsoft.com    IN    AAAA   FEC0::2AA:FF:FE3F:2A1C

A host must specify either a AAAA query or a general query for a specific host name in order to receive IPv6 address resolution data in the DNS query answer sections.
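AAAA resolution can also be exercised from ordinary application code via the standard socket module. A minimal sketch; www.example.com is only a placeholder name here, and the output depends entirely on your resolver:

import socket

# Restricting getaddrinfo() to AF_INET6 surfaces the addresses published in the
# name's AAAA records (if any), exactly as an IPv6-capable application would see them.
for _family, _type, _proto, _canon, sockaddr in socket.getaddrinfo('www.example.com', 80, socket.AF_INET6):
    print(sockaddr[0])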
The IP6.ARPA domain: The IP6.ARPA domain has been created for IPv6 reverse queries. Also called pointer queries, reverse queries determine a host name based on the IP address. To create the namespace for reverse queries, each hexadecimal digit in the fully expressed 32-digit IPv6 address becomes a separate level, in inverse order, in the reverse domain hierarchy. For example, the reverse lookup domain name for the address FEC0::2AA:FF:FE3F:2A1C (fully expressed as FEC0:0000:0000:0000:02AA:00FF:FE3F:2A1C) is:

C.1.A.2.F.3.E.F.F.F.0.0.A.A.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.C.E.F.IP6.ARPA.

The DNS support described in RFC 1886 represents a simple way to both map host names to IPv6 addresses and provide reverse name resolution.

Mike: Can you discuss transition from IPv4 to IPv6?
Mechanisms for transitioning from IPv4 to IPv6 are defined in RFC 1933. The primary goal in the transition process is a successful coexistence of the two protocol versions until IPv4 can be retired, if indeed it is ever completely decommissioned. Transition plans fall into two primary categories: dual-stack implementation and IPv6-over-IPv4 tunneling.

Dual-stack implementation: The simplest method for providing IPv6 functionality allows the two IP versions to be implemented as a dual stack on each node. Nodes using the dual stack can communicate via either stack. While dual-stack nodes can use IPv6 and IPv4 addresses that are related to each other, this isn't a requirement of the implementation, so the two addresses can be totally disparate. These nodes can also tunnel IPv6 over IPv4. Because each stack is fully functional, the nodes can configure their IPv6 addresses via stateless autoconfiguration or DHCP for IPv6, while configuring their IPv4 addresses via any of the current configuration methods.

IPv6-over-IPv4 tunneling: The second method for implementing IPv6 in an IPv4 environment is tunneling IPv6 packets within IPv4 packets. These nodes can map an IPv4 address into an IPv4-compatible IPv6 address by preceding the IPv4 address with a 96-bit "0:0:0:0:0:0" prefix. Routers on a network don't need to be IPv6-enabled immediately if this approach is used, but Domain Name System (DNS) servers on a mixed-version network must be capable of supporting both versions of the protocol. To help achieve this goal, a new record type, AAAA, has been defined for IPv6 addresses. Because Windows 2000 DNS servers implement this record type as well as the IPv4 A record, IPv6 can be easily implemented in a Windows 2000 environment.

Mike: We've only touched on some of the IPv6 details - where can people get more information?
I'm hoping to run a session at our summer conference, July 28-31 in Austin, TX - we've currently got faculty fellowships available to cover the cost of the conference. See www.nctt.org for details.

References - content for this academic podcast from Microsoft sources:
All linked documents at Microsoft Internet Protocol Version 6 (note: excellent and free online resources): http://technet.microsoft.com/en-us/network/bb530961.aspx
Understanding IPv6, Joseph Davies, Microsoft Press, 2002, ISBN 0-7356-1245-5. Sample chapter at: http://www.microsoft.com/mspress/books/sampchap/4883.asp#SampleChapter
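A closing aside that is not from the episode: the reverse-lookup naming and the IPv4-compatible tunneling addresses discussed above can both be reproduced with Python's standard-library ipaddress module (192.0.2.1 is just a documentation placeholder address):

import ipaddress

# Build the IP6.ARPA reverse-lookup name for the AAAA example address.
addr = ipaddress.IPv6Address('FEC0::2AA:FF:FE3F:2A1C')
print(addr.reverse_pointer)
# c.1.a.2.f.3.e.f.f.f.0.0.a.a.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.c.e.f.ip6.arpa

# An IPv4-compatible IPv6 address: the IPv4 address behind a 96-bit zero prefix.
compat = ipaddress.IPv6Address('::192.0.2.1')
print(compat.exploded)   # 0000:0000:0000:0000:0000:0000:c000:0201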