Podcasts about Chainer

Machine learning software

  • 30 podcasts
  • 41 episodes
  • 51m average episode duration
  • Infrequent episodes
  • Latest episode: Mar 6, 2024

[Popularity chart, 2017-2024]



Latest podcast episodes about Chainer

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Speaker CFPs and Sponsor Guides are now available for AIE World's Fair — join us on June 25-27 for the biggest AI Engineer conference of 2024!

Soumith Chintala needs no introduction in the ML world — his insights are incredibly accessible across Twitter, LinkedIn, podcasts, and conference talks (in this pod we'll assume you've caught up on the History of PyTorch pod from last year, and cover different topics). He's well known as the creator of PyTorch, but more broadly he's the Engineering Lead on AI Infra, PyTorch, and Generative AI at Meta.

Soumith was one of the earliest supporters of Latent Space (and more recently AI News), and we were overjoyed to catch up with him on his latest SF visit for a braindump of the latest AI topics, reactions to some of our past guests, and why Open Source AI is personally so important to him.

Life in the GPU-Rich Lane

Back in January, Zuck went on Instagram to announce Meta's GPU wealth: by the end of 2024, Meta will have 350k H100s. Adding all their GPU clusters, you get to 600k H100-equivalents of compute. At FP16 precision, that's ~1,200,000 PFLOPS. If we use George Hotz's (previous guest!) "Person of Compute" measure, Meta now has 60k humans of compute in their clusters. Occasionally we get glimpses into the GPU-rich life; on a recent ThursdAI chat, swyx prompted PaLM tech lead Yi Tay to write down what he missed most from Google, and he commented that UL2 20B was trained by accidentally leaving the training job running for a month, because hardware failures are so rare in Google.
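The arithmetic behind those headline numbers is easy to reproduce. A quick sanity-check sketch, assuming NVIDIA's ~2 PFLOPS FP16-with-sparsity spec per H100 and Hotz's 20 PFLOPS per "Person of Compute" (both constants are our assumptions, not from the episode):

```python
# Back-of-envelope check of the GPU-rich numbers above.
# Assumptions: ~2 PFLOPS FP16 per H100 (the with-sparsity spec),
# 20 PFLOPS per "Person of Compute" (George Hotz's measure).
H100_FP16_PFLOPS = 2.0
PERSON_OF_COMPUTE_PFLOPS = 20.0

h100_equivalents = 600_000
total_pflops = h100_equivalents * H100_FP16_PFLOPS
print(f"Total compute: ~{total_pflops:,.0f} PFLOPS")                           # ~1,200,000
print(f"Persons of compute: ~{total_pflops / PERSON_OF_COMPUTE_PFLOPS:,.0f}")  # ~60,000
```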
Meta AI's Epic LLM Run

Before Llama broke the internet, Meta released an open source LLM in May 2022: OPT-175B, which was notable for how "open" it was - right down to the logbook! It was trained on 992 80GB A100 GPUs, and Soumith agrees that, with hindsight, it was likely under-trained for its parameter size.

In Feb 2023 (pre Latent Space pod), Llama was released, with a 7B version trained on 1T tokens alongside 33B and 65B versions trained on 1.4T tokens. The Llama authors included Guillaume Lample and Timothée Lacroix, who went on to start Mistral.

July 2023 was Llama 2 time (which we covered!): 3 model sizes, 7B, 13B, and 70B, all trained on 2T tokens. The three models accounted for a grand total of 3,311,616 GPU hours of pre-training work. Code Llama followed shortly after, a fine-tune of Llama 2 focused specifically on code generation use cases. The family had models in 7B, 13B, 34B, and 70B sizes, all trained with 500B extra tokens of code and code-related data, except for the 70B, which was trained on 1T.

All of this sits on top of other open sourced models like Segment Anything (one of our early hits!), Detectron, Detectron 2, DensePose, and Seamless. In one year, Meta transformed from a company people made fun of for its "metaverse" investments into one of the key players in the AI landscape, and its stock has almost tripled since (about $830B in market value created in the past year).

Why Open Source AI

The obvious question is why Meta would spend hundreds of millions on its AI efforts and then release them for free. Zuck has addressed this in public statements. But for Soumith, the motivation is even more personal:

"I'm irrationally interested in open source. I think open source has that fundamental way to distribute opportunity in a way that is very powerful. Like, I grew up in India… And knowledge was very centralized, but I saw that evolution of knowledge slowly getting decentralized. And that ended up helping me learn quicker and faster for like zero dollars. And I think that was a strong reason why I ended up where I am. So the open source side of things, I always push regardless of what I get paid for; I think I would do that as a passion project on the side…

…I think at a fundamental level, the most beneficial value of open source is that you make the distribution very wide. It's just available with no friction, and people can do transformative things in a way that's very accessible. Maybe it's open source, but it has a commercial license and I'm a student in India. I don't care about the license. I just don't even understand the license. But the fact that I can use it and do something with it is very transformative to me…

…Like, okay, I again always go back to: I'm a student in India with no money. What is my accessibility to any of these closed source models? At some scale I have to pay money. That makes it a non-starter and stuff. And there's also the control issue: I strongly believe if you want human aligned AI, you want all humans to give feedback. And you want all humans to have access to that technology in the first place. And I actually have seen, living in New York, whenever I come to Silicon Valley, I see a different cultural bubble."

We like the way Soumith put it last year: Closed AI "rate-limits against people's imaginations and needs"!

What It Takes For Open Source AI to Win

However, Soumith doesn't think Open Source will simply win by popular demand. There is a tremendous coordination problem with the decentralized nature of open source AI development right now: nobody is collecting the valuable human feedback in the way that OpenAI or Midjourney are doing.

"Open source in general always has a coordination problem. If there's a vertically integrated provider with more resources, they will just be better coordinated than open source. And so now open source has to figure out how to have coordinated benefits. And the reason you want coordinated benefits is because these models are getting better based on human feedback. And if you look at open source models, like on the /r/localllama subreddit, there are so many variations of models being produced from, say, Nous Research; so many variations built by so many people. And one common theme is they're all using fine-tuning or human preference datasets that are very limited and not sufficiently diverse. And you look at the other side, say front-ends like Oobabooga or Hugging Chat or Ollama: they don't really have feedback buttons. All the people using all these front-ends probably want to give feedback, but there's no way for them to give it… So we're just losing all of this feedback. Maybe open source models are being used as much as GPT is at this point, just in a very fragmented way; in aggregate, all the open source models together are probably used as much as GPT is, maybe close to that. But the amount of feedback that is driving back into the open source ecosystem is negligible, maybe less than 1% of the usage.
So the blueprint here, I think, is you'd want someone to create a sinkhole for the feedback… I think if we do that, if that actually happens, it probably has a real chance of the open source models having a runaway effect against OpenAI. I think there's a clear chance we can take at truly winning open source."

If you're working on solving open source coordination, please get in touch!

Show Notes

* Soumith Chintala Twitter
* History of PyTorch episode on Gradient Podcast
* The Llama Ecosystem
* Apple's MLX
* Neural ODEs (Ordinary Differential Equations)
* AlphaGo
* LMSys arena
* Dan Pink's "Drive"
* Robotics projects:
  * Dobb-E
  * OK Robot
* Yann LeCun
* Yangqing Jia of Lepton AI
* Ed Catmull
* George Hotz on Latent Space
* Chris Lattner on Latent Space
* Guillaume Lample
* Yannic Kilcher of OpenAssistant
* LMSys
* Alex Atallah of OpenRouter
* Carlo Sferrazza's 3D tactile research
* Alex Wiltschko of Osmo
* Tangent by Alex Wiltschko
* Lerrel Pinto - Robotics

Timestamps

* [00:00:00] Introductions
* [00:00:51] Extrinsic vs Intrinsic Success
* [00:02:40] Importance of Open Source and Its Impact
* [00:03:46] PyTorch vs TinyGrad
* [00:08:33] Why PyTorch is the Switzerland of frameworks
* [00:10:27] Modular's Mojo + PyTorch?
* [00:13:32] PyTorch vs Apple's MLX
* [00:16:27] FAIR / PyTorch Alumni
* [00:18:50] How can AI inference providers differentiate?
* [00:21:41] How to build good benchmarks and learnings from AnyScale's
* [00:25:28] Most interesting unexplored ideas
* [00:28:18] What people get wrong about synthetic data
* [00:35:57] Meta AI's evolution
* [00:38:42] How do you allocate 600,000 GPUs?
* [00:42:05] Even the GPU Rich are GPU Poor
* [00:47:31] Meta's MTIA silicon
* [00:50:09] Why we need open source
* [00:59:00] Open source's coordination problem for feedback gathering
* [01:08:59] Beyond text generation
* [01:15:37] Osmo and the Future of Smell Recognition Technology

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO in residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.

Swyx [00:00:15]: Hey, and today we have in the studio Soumith Chintala. Welcome.

Soumith [00:00:17]: Thanks for having me.

Swyx [00:00:18]: On one of your rare visits from New York, where you live. You got your start in computer vision at NYU with Yann LeCun. That was a very fortuitous start. I was actually listening to your interview on the Gradient podcast, so if people want to know more about the history of Soumith, history of PyTorch, they can go to that podcast. We won't spend that much time there, but I was just marveling at your luck, or I don't know if it's your luck or your drive, to find AI early and then find the right quality mentor, because I guess Yann really sort of introduced you to that world.

Soumith [00:00:51]: Yeah, I think you're talking about extrinsic success, right? A lot of people just have drive to do things that they think are fun, and a lot of those things might or might not be extrinsically perceived as good and successful. I think I just happened to like something that is now one of the coolest things in the world or whatever. But the first thing I tried to become was a 3D VFX artist, and I was really interested in doing that, but I turned out to be very bad at it. So I ended up not doing that further. But even if I was good at that, whatever, and I ended up going down that path, I probably would have been equally happy.
It's just that maybe the perception of, oh, is this person successful or not, might be different. I think after a baseline, your happiness is probably more correlated with your intrinsic stuff.

Swyx [00:01:44]: Yes. I think Dan Pink has this book on drive that I often refer to, about the power of intrinsic motivation versus extrinsic, and how long extrinsic lasts. It's not very long at all. But anyway, now you are an investor in Runway, so in a way you're working on VFX. Yes.

Soumith [00:02:01]: I mean, in a very convoluted way.

Swyx [00:02:03]: It reminds me of Ed Catmull. I don't know if you guys know, but he actually tried to become an animator in his early years and failed, or didn't get accepted by Disney, and then went and created Pixar, and then got bought by Disney and created Toy Story. So you joined Facebook in 2014 and eventually became a creator and maintainer of PyTorch. And there's this long story there you can refer to on the Gradient. I think maybe people don't know that you were also involved in more of the hardware and cluster decisions, and we can dive into more details there, because we're all about hardware this month. Yeah. And then finally, I don't know what else, like what else should people know about you on a personal side or professional side?

Soumith [00:02:40]: I think open source is definitely a big passion of mine and probably forms a little bit of my identity at this point. I'm irrationally interested in open source. I think open source has that fundamental way to distribute opportunity in a way that is very powerful. Like, I grew up in India. I didn't have internet for a while. In college, actually, I didn't have internet except for GPRS or whatever. And knowledge was very centralized, but I saw that evolution of knowledge slowly getting decentralized. And that ended up helping me learn quicker and faster for zero dollars. And I think that was a strong reason why I ended up where I am. So the open source side of things, I always push regardless of what I get paid for. I think I would do that as a passion project on the side.

Swyx [00:03:35]: Yeah, that's wonderful. Well, we'll talk about the challenges as well that open source has, open models versus closed models. Maybe you want to touch a little bit on PyTorch before we move on to Meta AI in general.

PyTorch vs TinyGrad tradeoffs

Alessio [00:03:46]: Yeah, we kind of touched on PyTorch in a lot of episodes. So we had George Hotz from TinyGrad. He called PyTorch a CISC and TinyGrad a RISC. I would love to get your thoughts on PyTorch design direction, as far as, I know you talk a lot about having a happy path to start with and then making complexity hidden away, but available to the end user. One of the things that George mentioned is, I think you have like 250 primitive operators in PyTorch; I think TinyGrad is four. So how do you think about some of the learnings that maybe he's going to run into that you already had in the past seven, eight years almost of running PyTorch?

Soumith [00:04:24]: Yeah, I think there are different models here, but I think there are two different models that people generally start with. Either they go like, I have a grand vision and I'm going to build a giant system that achieves this grand vision, and maybe one is super feature complete or whatever. Or other people say they will get incrementally ambitious, right?
And they say, oh, we'll start with something simple and then we'll slowly layer out complexity in a way that optimally applies Huffman coding or whatever. Where the density of users is and what they're using, I would want to keep in the easy, happy path, and for the more niche advanced use cases, I'll still want people to try them, but they need to take additional frictional steps. George, I think, just like we started with PyTorch, George started with the incrementally ambitious thing. I remember TinyGrad used to be limited to a thousand lines of code, and I think now it's at 5,000. So I think there is no real magic to why PyTorch has the kind of complexity it has. I think it's probably partly necessitated and partly because we built with the technology available to us at that time. PyTorch is like 190,000 lines of code or something at this point. I think if we had to rewrite it, we would probably think about ways to rewrite it in a vastly simplified way, for sure. But a lot of that complexity comes from the fact that, in a very simple, explainable way, you have memory hierarchies. A CPU has three levels of caches, and then you have DRAM and SSD, and then you have the network. Similarly, a GPU has several levels of memory, and then you have different levels of network hierarchies, NVLink plus InfiniBand or RoCE or something like that, right? And the flops available on your hardware are available in a certain way, and your computation is in a certain way, and you have to retrofit your computation onto both the memory hierarchy and the flops available. When you're doing this, it is actually a fairly hard mathematical problem to find the optimal setup, and what is optimal depends on the input variables themselves. So, okay, what is the shape of your input tensors, and what is the operation you're trying to do, and various things like that. Finding that optimal configuration and writing it down in code is not the same for every input configuration you have. For example, just as the shapes of the tensors change, let's say you have three input tensors going into a sparse matrix product or something like that. The shape of each of these input tensors will vastly change how you optimally place this operation onto the hardware in a way that will get you maximal throughput. So a lot of our complexity comes from writing out hundreds of configurations for each single PyTorch operator, templatizing these things, and symbolically generating the final CUDA code or CPU code. There's no way to avoid it, because mathematically we haven't found symbolic ways to do this that also keep compile time near zero. You can write a very simple framework, but then you also should be willing to eat the long compile time of searching for that optimal performance at runtime. That's the trade-off. I don't think, unless we have great breakthroughs, George's vision is achievable. He should be thinking about a narrower problem, such as: I'm only going to make this work for self-driving car convnets, or I'm only going to make this work for LLM transformers of the Llama style. If you start narrowing the problem down, you can make a vastly simpler framework. But if you don't, if you need the generality to power all of the AI research that is happening, and keep zero compile time, and all these other factors, I think it's not easy to avoid the complexity.
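To make the "hundreds of configurations per operator" point concrete, here is a deliberately toy sketch (not PyTorch internals; all tile numbers are made up): the same logical matmul picks a different kernel configuration depending on input shape, and it is this kind of table, multiplied across roughly a thousand operators and many hardware targets, that drives the line count Soumith describes.

```python
# Toy illustration of shape-dependent kernel configuration selection.
# Real libraries templatize hundreds of these per operator; this just
# shows why the choice depends on the input, not the operator alone.
from dataclasses import dataclass

@dataclass
class KernelConfig:
    block_m: int
    block_n: int
    num_warps: int

def pick_matmul_config(m: int, n: int, k: int) -> KernelConfig:
    """Pick a (made-up) tiling based on problem shape."""
    if m * n <= 128 * 128:             # tiny problem: small tiles, low launch overhead
        return KernelConfig(16, 16, 2)
    if k >= 4096 and m <= 64:          # skinny GEMM: tall-thin tiles
        return KernelConfig(32, 128, 4)
    return KernelConfig(128, 128, 8)   # large square-ish GEMM: big tiles

for shape in [(32, 32, 64), (16, 512, 8192), (4096, 4096, 4096)]:
    print(shape, "->", pick_matmul_config(*shape))
```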
PyTorch vs Mojo

Alessio [00:08:33]: That's interesting. And we kind of touched on this with Chris Lattner when he was on the podcast. If you think about frameworks, they have the model target, they have the hardware target, they have different things to think about. He mentioned when he was at Google, TensorFlow was trying to be optimized to make TPUs go brr, you know, and go as fast. I think George is trying to make especially the AMD stack better than ROCm. How come PyTorch has been such a Switzerland versus just making Meta hardware go brr?

Soumith [00:09:00]: First, Meta is not in the business of selling hardware. Meta is not in the business of cloud compute. The way Meta thinks about funding PyTorch is that it's net good for Meta to fund PyTorch, because PyTorch has become a standard and a big open source project, and generally it gives us a timeline edge. It gives us leverage and all that within our own work. So why is PyTorch more of a Switzerland rather than being opinionated? I think the way we think about it is not in terms of Switzerland or not. The way we articulate it to all the hardware vendors and software vendors who come to us wanting to build a backend in core for PyTorch and ship it by default is: we only look at the user side of things. If users are using a particular piece of hardware, then we want to support it. We very much don't want to kingmake the hardware side of things. So as the MacBooks got GPUs and that stuff started getting increasingly interesting, we pushed Apple to put some engineers on the MPS support, and we spent significant time from Meta-funded engineers on that as well, because a lot of people are using the Apple GPUs and there's demand. So we mostly look at it from the demand side. We never look at it from, oh, which hardware should we start taking opinions on.

Swyx [00:10:27]: Is there a future in which, because Mojo or Modular's Mojo is kind of a superset of Python, is there a future in which PyTorch might use Mojo features optionally?

Soumith [00:10:36]: I think it depends on how well integrated it is into the Python ecosystem. So if Mojo is like a pip install and it's readily available, and users feel like they can use Mojo so smoothly within their workflows in a way that is low friction, we would definitely look into that. In the same way PyTorch now depends on Triton, OpenAI's Triton, and we never had a conversation that was like, huh, that's a dependency. Should we just build a Triton of our own or should we use Triton? Those conversations don't really come up for us. The conversations are more: does Triton have 10,000 dependencies, and is it hard to install? We almost don't look at these things from a strategic leverage point of view. We look at these things from a user experience point of view: is it easy to install? Is it smoothly integrated, and does it give enough benefits for us to start depending on it? If so, yeah, we should consider it. That's how we think about it.

Swyx [00:11:37]: You're inclusive by default as long as it meets the minimum bar of, yeah, but maybe I phrased it wrongly.
Maybe it's more like, what problems would you look to solve that you have right now?

Soumith [00:11:48]: I think it depends on what problems Mojo will be useful at.

Swyx [00:11:52]: Mainly a performance pitch, some amount of cross compiling pitch.

Soumith [00:11:56]: Yeah, I think the performance pitch for Mojo was: we're going to be performant even if you have a lot of custom stuff. You're going to write arbitrary custom things, and we will be performant. And that value proposition is not clear to us from the PyTorch side to consider it for PyTorch. So PyTorch, it's actually not 250 operators, it's like a thousand operators. PyTorch exposes about a thousand operators, and people kind of write their ideas in the thousand operators of PyTorch. Mojo is like, well, maybe it's okay to completely sidestep those thousand operators of PyTorch and just write it in a more natural form. Just write raw Python, write for loops or whatever, right? So from the consideration of how we intersect PyTorch with Mojo, I can see one use case where you have custom stuff for some parts of your program, but mostly it's PyTorch. And so we can probably figure out how to make it easier for, say, torch.compile to smoothly also consume Mojo subgraphs, and, you know, the interoperability being actually usable, that I think is valuable. But Mojo as a fundamental front end would be replacing PyTorch, not augmenting PyTorch. So in that sense, I don't see a synergy in more deeply integrating Mojo.

PyTorch vs MLX

Swyx [00:13:21]: So call out to Mojo whenever they have written something in Mojo and there's some performance related thing going on. And then since you mentioned Apple, what should people think of PyTorch versus MLX?

Soumith [00:13:32]: I mean, MLX is early, and I know the folks well. Awni used to work at FAIR, and I used to chat with him all the time. He used to be based out of New York as well. The way I think about MLX is that MLX is specialized for Apple right now. It has a happy path because it's defined its product in a narrow way. At some point, MLX either says: we will only be supporting Apple, we will just focus on being the framework you use on your MacBook, but once you go server side or whatever, that's not my problem and I don't care. Or MLX enters the server-side set of things as well. One of these two things will happen, right? If the first thing happens, MLX's overall addressable market will be small, but it'll probably do well within that addressable market. If it enters the second phase, they're going to run into all the same complexities that we have to deal with. They will not have any magic wand, and they will have more complex work to do. They probably wouldn't be able to move as fast.

Swyx [00:14:44]: Like having to deal with distributed compute?

Soumith [00:14:48]: Distributed, NVIDIA and AMD GPUs, just having a generalization of the concept of a backend, how they treat compilation and its overheads. Right now they've deeply assumed the whole MPS graph thing. So they need to think about all these additional things if they end up expanding onto the server side, and they'll probably build something like PyTorch as well, right? Eventually that's where it will land. And I think there they will kind of fail on the lack of differentiation. It wouldn't be obvious to people why they would want to use it.

Swyx [00:15:24]: I mean, there are some cloud companies offering M1 and M2 chips on servers.
I feel like it might be interesting for Apple to pursue that market, but it's not their core strength.

Soumith [00:15:33]: Yeah. If Apple can figure out their interconnect story, maybe, then it can become a thing.

Swyx [00:15:40]: Honestly, that's more interesting than the cars. Yes.

Soumith [00:15:43]: I think the moat that NVIDIA has right now, I feel, is that they have the interconnect that no one else has. AMD GPUs are pretty good, and I'm sure there's various silicon that is not bad at all, but the interconnect, like NVLink, is uniquely awesome. I'm sure the other hardware providers are working on it, but-

Swyx [00:16:04]: I feel like when you say it's uniquely awesome, you have some appreciation of it that the rest of us don't. I mean, the rest of us just, you know, we hear marketing lines, but what do you mean when you say NVIDIA is very good at networking? Obviously they made the acquisition maybe like 15 years ago.

Soumith [00:16:15]: Just the bandwidth it offers and the latency it offers. I mean, TPUs also have a good interconnect, but you can't buy them. So you have to go to Google to use it.

PyTorch Mafia

Alessio [00:16:27]: Who are some of the other FAIR PyTorch alumni that are building cool companies? I know you have Fireworks AI, Lightning AI, Lepton, and Yangqing, whom you've known since college, when he was building Caffe?

Soumith [00:16:40]: Yeah, so Yangqing and I used to be framework rivals, Caffe and Torch. I mean, we were all a very small close-knit community back then: Caffe, Torch, Theano, Chainer, Keras, various frameworks. I mean, it used to be more like 20 frameworks. I can't remember all the names. CCV by Liu Liu, who is also based out of SF. And one of the ways it was interesting is you went into the framework guts and saw if someone wrote their own convolution kernel or they were just copying someone else's. There were four or five convolution kernels that were unique and interesting. There was one from this guy out of Russia, I forget the name, but I remember he was awesome enough to have written his own kernel. And at some point there, I built out these benchmarks called convnet-benchmarks. They were just benchmarking all the convolution kernels that were available at that time. It hilariously became big enough that, at that time, AI was getting important, but not important enough for industrial strength players to come in and do this kind of benchmarking and standardization, like we have MLPerf today. So a lot of the startups were using convnet-benchmarks in their pitch decks as, oh, you know, on convnet-benchmarks, this is how we fare, so you should fund us. I remember Nervana actually was at the top of the pack, because Scott Gray wrote amazingly fast convolution kernels at that time. Very interesting, but separate times. But to answer your question, Alessio, I think mainly Lepton and Fireworks are the two most obvious ones, but I'm sure the fingerprints are a lot wider. They're just people who worked within the PyTorch/Caffe2 cohort of things and now end up at various other places.

Swyx [00:18:50]: I think, both as an investor and as people looking to build on top of their services, it's an uncomfortable, I-don't-know-what-I-don't-know pitch. Because I've met Yangqing and I've met Lin Qiao. I've met these folks, and they're like, you know, we are deep in the PyTorch ecosystem, and we serve billions of inferences a day or whatever at Facebook, and now we can do it for you. And I'm like, okay, that's great.
Like, what should I be wary of or cautious of when these things happen? Because obviously this experience is extremely powerful and valuable. I just don't know what I don't know. What should people know about these sort of new inference-as-a-service companies?

Soumith [00:19:32]: I think at that point you would be investing in them for their expertise of one kind. So if they've been at a large company, but they've been doing amazing work, you would be thinking about it as: what these people bring to the table is that they're really good at GPU programming, or at understanding the complexity of serving models once it hits a certain scale, various expertise from the infra and AI and GPUs point of view. What you would obviously want to figure out is whether their understanding of the external markets is clear, whether they know and understand how to think about running a business, understanding how to be disciplined about making money, or, you know, various things like that.

Swyx [00:20:23]: Maybe I'll put it like, actually I will de-emphasize the investing bit and just more as a potential customer. Oh, okay. It's more like, okay, you know, you have PyTorch gods, of course. What else should I know?

Soumith [00:20:37]: I mean, I would not care about who's building something. If I'm trying to be a customer, I would care about whether...

Swyx [00:20:44]: Benchmarks.

Soumith [00:20:44]: Yeah, I'd use it, and its usability and reliability and speed, right?

Swyx [00:20:51]: Quality as well.

Soumith [00:20:51]: Yeah, if someone from some random unknown place came to me and said, our stuff is great, and I have the bandwidth, I probably will give it a shot. And if it turns out to be great, I'll just use it.

Benchmark drama

Swyx [00:21:07]: Okay, great. And then maybe one more thing about benchmarks, since we already brought it up and you brought up convnet-benchmarks. There was some recent drama around AnyScale. AnyScale released their own benchmarks, and obviously they look great on their own benchmarks, but maybe didn't give the other providers a fair shake. I feel there are two lines of criticism. One is that they didn't test apples-to-apples on the kind of endpoints that the other providers, that they are competitors with, offer in their benchmarks, and that is the due diligence baseline. And then the second would be more just optimizing for the right thing. You had some commentary on it. I'll just kind of let you riff.

Soumith [00:21:41]: Yeah, I mean, in summary, basically my criticism of that was: AnyScale built these benchmarks for end users to just understand what they should pick, right? And that's a very good thing to do. I think what they didn't do a good job of is give that end user a full understanding of what they should pick. They just gave them a very narrow slice of understanding. I think they just gave them latency numbers, and that's not sufficient, right? You need to understand your total cost of ownership at some reasonable scale. Not, oh, one API call is one cent, but a thousand API calls are 10 cents. People can misprice to cheat on those benchmarks. So you want to understand, okay, how much is it going to cost me if I actually subscribe to you and do, say, a million API calls a month? And then you want to understand the latency and reliability, not just from one call you made, but an aggregate of calls you've made over various times of the day and times of the week. And the nature of the workloads: is it just some generic single paragraph that you're sending that is cacheable? Or is it a test of a real-world workload? That kind of rigor in presenting the benchmark wasn't there. It was a much narrower sliver of what should have been a good benchmark. That was my main criticism. And I'm pretty sure that if, before they released it, they had shown it to the other stakeholders who would care about this benchmark, because they are present in it, those stakeholders would have easily pointed out these gaps. And I think they didn't do that, and they just released it. So I think those were the two main criticisms. I think they were fair, and Robert took it well.

Swyx [00:23:40]: And he took it very well. And we'll have him on at some point, and we'll discuss it. But I think it's important, the market maturing enough that people start caring and competing on these kinds of things means that we need to establish what best practice is, because otherwise everyone's going to play dirty.
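For readers who want the concrete version of the rigor Soumith is asking for, here is a toy sketch of a benchmark summary that reports aggregate latency percentiles and cost at realistic volume rather than a single cheap probe. All provider names and numbers are hypothetical:

```python
# Toy sketch of a fuller inference-benchmark summary (hypothetical data).
import math
import statistics

def summarize(provider: str, latencies_ms: list, price_per_1k_calls_usd: float,
              monthly_calls: int) -> None:
    lat = sorted(latencies_ms)
    p50 = statistics.median(lat)
    p95 = lat[min(len(lat) - 1, math.ceil(0.95 * len(lat)) - 1)]
    monthly_cost = price_per_1k_calls_usd * monthly_calls / 1000
    print(f"{provider}: p50={p50:.0f}ms p95={p95:.0f}ms "
          f"cost at {monthly_calls:,} calls/mo = ${monthly_cost:,.0f}")

# Latencies sampled across times of day and week, not one probe.
summarize("provider_a", [210, 250, 240, 900, 260, 230], 0.50, 1_000_000)
summarize("provider_b", [190, 200, 1800, 210, 205, 195], 0.30, 1_000_000)
```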
Soumith [00:23:55]: Yeah, absolutely. My view of the LLM inference market in general is that it's the laundromat model. The margins are going to drive down towards the bare minimum. It's going to be all kinds of arbitrage between how much you can get the hardware for, how much you sell the API for, and how much latency your customers are willing to let go. You need to figure out how to squeeze your margins. What is your unique thing here? I think Together and Fireworks and all these people are trying to build some faster CUDA kernels, and faster hardware kernels in general. But those moats only last for a month or two. These ideas quickly propagate.

Swyx [00:24:38]: Even if they're not published?

Soumith [00:24:39]: Even if they're not published, the idea space is small. So even if they're not published, the discovery rate is going to be pretty high. It's not like we're talking about a combinatorial thing that is really large. You're talking about Llama-style LLM models, and we're going to beat those to death on a few different hardware SKUs, right? It's not even like we have a huge diversity of hardware you're going to aim to run it on. Now, when you have such a narrow problem and you have a lot of people working on it, the rate at which these ideas are going to get figured out is going to be pretty rapid.

Swyx [00:25:15]: Is it a standard bag of tricks? Like the standard one that I know of is, you know, fusing operators and-

Soumith [00:25:22]: Yeah, it's the standard bag of tricks on figuring out how to improve your memory bandwidth and all that, yeah.
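As a concrete instance of the most standard of those tricks, operator fusion: several pointwise ops executed in one pass over memory instead of several. A minimal sketch using torch.compile (assumes PyTorch 2.x; illustrative, not any provider's actual kernels):

```python
# A chain of pointwise ops: in eager mode each op makes its own pass
# over memory; torch.compile can fuse them into one generated kernel,
# cutting memory traffic (the bandwidth win discussed above).
import torch

def bias_gelu_scale(x: torch.Tensor, bias: torch.Tensor) -> torch.Tensor:
    return torch.nn.functional.gelu(x + bias) * 0.5

fused = torch.compile(bias_gelu_scale)  # requires PyTorch 2.x

x, bias = torch.randn(4096, 4096), torch.randn(4096)
out = fused(x, bias)                    # first call compiles; later calls are fast
print(torch.allclose(out, bias_gelu_scale(x, bias), atol=1e-4))  # True
```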
Alessio [00:25:28]: Any ideas, instead, of things that are not being beaten to death that people should be paying more attention to?

Novel PyTorch Applications

Swyx [00:25:34]: One thing I was like, you know, you have a thousand operators, right? What's the most interesting usage of PyTorch that you're seeing, maybe outside of this little bubble?

Soumith [00:25:41]: So PyTorch, it's very interesting and scary at the same time, but basically it's used in a lot of exotic ways. From the ML angle, what kind of models are being built? You get all the way from state-space models and all of these things to nth-order differentiable models, like neural ODEs and stuff like that. So there's one set of interestingness factor from the ML side of things. And then there's the other set of interesting factor from the applications point of view. It's used in Mars rover simulations, in drug discovery, in Tesla cars. There's a huge diversity of applications in which it is used. So in terms of the most interesting application side of things, I think I'm scared at how many interesting things it is used in that are also very critical and really important. I think the scariest was when I went to visit CERN at some point, and they said they were using PyTorch, and they were using GANs at the same time for particle physics research. And I was scared more about the fact that they were using GANs than that they were using PyTorch, because at that time I was a researcher focusing on GANs. But the diversity is probably the most interesting: how many different things it is being used in. I think that's the most interesting to me from the applications perspective. From the models perspective, I think I've seen a lot of them. The really interesting ones to me are where we're starting to combine search and symbolic stuff with differentiable models; the whole AlphaGo style of models is one example. And then I think we're attempting to do it for LLMs as well, with various reward models and search. I mean, I don't think PyTorch is being used in this, but the whole AlphaGeometry thing was interesting, because again, it's an example of combining the symbolic models with the gradient based ones. But there is stuff like AlphaGeometry that PyTorch is used for, especially when you intersect biology and chemistry with ML. In those areas, you want stronger guarantees on the output. So yeah, maybe from the ML side, those things to me are very interesting right now.

Swyx [00:28:03]: Yeah. People are very excited about the AlphaGeometry thing. And it's kind of like, for me, it's theoretical. It's great. You can solve some Olympiad questions. I'm not sure how to make that bridge over into the real world applications, but I'm sure people smarter than me will figure it out.

Synthetic Data vs Symbolic Models

Soumith [00:28:18]: Let me give you an example of it. You know how the whole thing about synthetic data being the next rage in LLMs is a thing?

Swyx [00:28:27]: Already is a rage.

Soumith [00:28:28]: Which I think is fairly misplaced in how people perceive it. People think synthetic data is some kind of magic wand that you wave and it's going to be amazing. Synthetic data is useful in neural networks right now because we as humans have figured out a bunch of symbolic models of the world, or made up certain symbolic models because of human innate biases. So we've figured out how to ground particle physics in a 30-parameter model. And it's just very hard to compute, as in it takes a lot of flops to compute, but it only has 30 parameters or so. I mean, I'm not a physics expert, but it's a very low rank model. We built mathematics as a field that basically is very low rank. Language, a deep understanding of language, like the whole syntactic parse trees, and just understanding how language can be broken down into a formal symbolism, is something that we figured out. So we basically as humans have accumulated all this knowledge on these subjects: either synthetic, we created those subjects in our heads, or we grounded some real world phenomenon into a set of symbols. But we haven't figured out how to teach neural networks symbolic world models directly. The only way we have to teach them is generating a bunch of inputs and outputs and gradient descending over them. So in areas where we have the symbolic models and we need to teach all the knowledge we have that is better encoded in the symbolic models, what we're doing is generating a bunch of synthetic data, a bunch of input output pairs, and then giving that to the neural network and asking it to learn, via gradient descent and in a much more over-parameterized way, the same thing that we already have a better low rank model of. Outside of this, where we don't have good symbolic models, synthetic data obviously doesn't make any sense. So synthetic data is not a magic wand that will work in every case. It works where we as humans already have good symbolic models; we need to impart that knowledge to neural networks, and we figured out that synthetic data is a vehicle to impart this knowledge through. But people, maybe because they don't know enough about synthetic data as a notion, hear that the next wave of the data revolution is synthetic data, and they think it's some kind of magic where we just create a bunch of random data somehow. They don't think about how, and then they think that's just a revolution. And I think that's maybe a gap in understanding most people have in this hype cycle.
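A toy instance of what Soumith describes: arithmetic has an exact symbolic model, so we can generate unlimited input/output pairs from it, and the knowledge lives in the generator, not in any collected dataset. A minimal sketch:

```python
# Toy synthetic-data generator: distilling an exact symbolic model
# (integer arithmetic) into (prompt, target) pairs for gradient descent.
import random

def make_pair(rng: random.Random) -> tuple:
    a, b = rng.randint(0, 999), rng.randint(0, 999)
    op = rng.choice(["+", "-", "*"])
    answer = {"+": a + b, "-": a - b, "*": a * b}[op]  # the symbolic model
    return (f"{a} {op} {b} =", str(answer))

rng = random.Random(0)
dataset = [make_pair(rng) for _ in range(100_000)]  # as many pairs as we like
print(dataset[:3])
```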
Swyx [00:31:23]: Yeah, well, it's a relatively new concept, so. Oh, there are two more that I'll put in front of you, and then you can see how you respond. One is, you know, I have this joke that it's only synthetic data if it's from the Mistral region of France, otherwise it's just a sparkling distillation, which is what Nous Research is doing: they're distilling GPT-4 by creating synthetic data from GPT-4, creating mock textbooks inspired by Phi 2, and then fine tuning open source models like Llama. And so I don't know, I mean, I think that's, should we call that synthetic data? Should we call it something else? I don't know.

Soumith [00:31:57]: Yeah, I mean, the outputs of LLMs, are they synthetic data? They probably are, but I think it depends on the goal you have. If your goal is creating synthetic data with the aim of trying to distill GPT-4's superiority into another model, I guess you can call it synthetic data, but it also feels disingenuous, because your goal is: I need to copy the behavior of GPT-4 and-

Swyx [00:32:25]: It's also not just behavior, but the data set. So I've often thought of this as data set washing. You need one model at the top of the chain, you know, an unnamed French company that, you know, makes a model that has all the data in it that we don't know where it's from, but it's open source, hey, and then we distill from that and it's great. To be fair, they also use larger models as judges for preference ranking, right? So that is, I think, a very, very accepted use of synthetic.

Soumith [00:32:53]: Correct. I think it's a very interesting time where we don't really have good social models of what is acceptable, depending on how many bits of information you use from someone else, right? It's like, okay, you use one bit. Is that okay? Yeah, let's accept it to be okay. Okay, what about if you use 20 bits? Is that okay? I don't know. What if you use 200 bits? I don't think we as a society have ever been in this conundrum where we have to decide where the boundary of copyright is, or where the boundary of the socially accepted understanding of copying someone else is. We haven't been tested this mathematically before,
So yeah, I think this New York Times opening eye case is gonna go to the Supreme Court and we'll have to decide it because I think we never had to deal with it before. And then finally, for synthetic data, the thing that I'm personally exploring is solving this great stark paradigm difference between rag and fine tuning, where you can kind of create synthetic data off of your retrieved documents and then fine tune on that. That's kind of synthetic. All you need is variation or diversity of samples for you to fine tune on. And then you can fine tune new knowledge into your model. I don't know if you've seen that as a direction for synthetic data.Soumith [00:34:13]: I think you're basically trying to, what you're doing is you're saying, well, language, I know how to parametrize language to an extent. And I need to teach my model variations of this input data so that it's resilient or invariant to language uses of that data.Swyx [00:34:32]: Yeah, it doesn't overfit on the wrong source documents.Soumith [00:34:33]: So I think that's 100% synthetic. You understand, the key is you create variations of your documents and you know how to do that because you have a symbolic model or like some implicit symbolic model of language.Swyx [00:34:48]: Okay.Alessio [00:34:49]: Do you think the issue with symbolic models is just the architecture of the language models that we're building? I think maybe the thing that people grasp is the inability of transformers to deal with numbers because of the tokenizer. Is it a fundamental issue there too? And do you see alternative architectures that will be better with symbolic understanding?Soumith [00:35:09]: I am not sure if it's a fundamental issue or not. I think we just don't understand transformers enough. I don't even mean transformers as an architecture. I mean the use of transformers today, like combining the tokenizer and transformers and the dynamics of training, when you show math heavy questions versus not. I don't have a good calibration of whether I know the answer or not. I, you know, there's common criticisms that are, you know, transformers will just fail at X. But then when you scale them up to sufficient scale, they actually don't fail at that X. I think there's this entire subfield where they're trying to figure out these answers called like the science of deep learning or something. So we'll get to know more. I don't know the answer.Meta AI and Llama 2/3Swyx [00:35:57]: Got it. Let's touch a little bit on just Meta AI and you know, stuff that's going on there. Maybe, I don't know how deeply you're personally involved in it, but you're our first guest with Meta AI, which is really fantastic. And Llama 1 was, you know, you are such a believer in open source. Llama 1 was more or less the real breakthrough in open source AI. The most interesting thing for us covering on this, in this podcast was the death of Chinchilla, as people say. Any interesting insights there around the scaling models for open source models or smaller models or whatever that design decision was when you guys were doing it?Soumith [00:36:31]: So Llama 1 was Guillaume Lample and team. There was OPT before, which I think I'm also very proud of because we bridged the gap in understanding of how complex it is to train these models to the world. Like until then, no one really in gory detail published.Swyx [00:36:50]: The logs.Soumith [00:36:51]: Yeah. Like, why is it complex? And everyone says, oh, it's complex. But no one really talked about why it's complex. 
Alessio [00:34:49]: Do you think the issue with symbolic models is just the architecture of the language models that we're building? I think maybe the thing that people grasp is the inability of transformers to deal with numbers because of the tokenizer. Is it a fundamental issue there too? And do you see alternative architectures that will be better at symbolic understanding?

Soumith [00:35:09]: I am not sure if it's a fundamental issue or not. I think we just don't understand transformers enough. I don't even mean transformers as an architecture. I mean the use of transformers today, combining the tokenizer and transformers and the dynamics of training, when you show math-heavy questions versus not. I don't have a good calibration of whether I know the answer or not. There are common criticisms that transformers will just fail at X. But then, when you scale them up to sufficient scale, they actually don't fail at that X. I think there's this entire subfield trying to figure out these answers, called the science of deep learning or something. So we'll get to know more. I don't know the answer.

Meta AI and Llama 2/3

Swyx [00:35:57]: Got it. Let's touch a little bit on just Meta AI and, you know, stuff that's going on there. Maybe, I don't know how deeply you're personally involved in it, but you're our first guest from Meta AI, which is really fantastic. And Llama 1 was, you know, you are such a believer in open source, Llama 1 was more or less the real breakthrough in open source AI. The most interesting thing for us covering it on this podcast was the death of Chinchilla, as people say. Any interesting insights there around the scaling laws for open source models, or smaller models, or whatever that design decision was when you guys were doing it?

Soumith [00:36:31]: So Llama 1 was Guillaume Lample and team. There was OPT before, which I think I'm also very proud of, because we bridged the gap in understanding, for the world, of how complex it is to train these models. Until then, no one had really published it in gory detail.

Swyx [00:36:50]: The logs.

Soumith [00:36:51]: Yeah. Like, why is it complex? And everyone says, oh, it's complex. But no one really talked about why it's complex. I think OPT was cool.

Swyx [00:37:02]: I met Susan and she's very, very outspoken. Yeah.

Soumith [00:37:05]: We probably, I think, didn't train it for long enough, right? That's kind of obvious in retrospect.

Swyx [00:37:12]: For a 175B. Yeah. You trained it according to Chinchilla at the time, or?

Soumith [00:37:17]: I can't remember the details, but I think it's a commonly held belief at this point that if we had trained OPT longer, it would actually have ended up being better. Llama 1, I think, was Guillaume Lample and team. Guillaume is fantastic and went on to build Mistral. I wasn't too involved in that side of things, so I don't know what you're asking me, which is: how did they think about scaling laws and all of that? Llama 2, I was more closely involved in. I helped them a reasonable amount with their infrastructure needs and stuff. And Llama 2, I think, was more like, let's get to the evolution. At that point, we kind of understood what we were missing from the industry's understanding of LLMs. We needed more data, and we needed to train the models for longer. And we made, I think, a few tweaks to the architecture, and we scaled up more. And that was Llama 2. I think Llama 2, you can think of it as: after Guillaume left, the team kind of rebuilt their muscle around Llama 2. And Hugo, I think, who's the first author, is fantastic, and I think he did play a reasonably big role in Llama 1 as well. He overlaps between Llama 1 and 2. So in Llama 3, obviously, hopefully, it'll be awesome.

Alessio [00:38:42]: Just one question on Llama 2, and then we'll try and fish Llama 3 spoilers out of you. In the Llama 2 paper, the loss curves of the 34B and 70B parameter models still seem kind of steep, like they could go lower. From an infrastructure level, how do you allocate resources? Could they have just gone longer, or were you just, hey, this is all the GPUs that we can burn, let's just move on to Llama 3 and then make that one better?

Soumith [00:39:07]: Instead of answering specifically about that Llama 2 situation or whatever, I'll tell you how we think about things. Generally, I mean, Mark did release some numbers, right?

Swyx [00:39:20]: Yeah, so let's cite those things again. All I remember is like 600K GPUs.

Soumith [00:39:24]: That is by the end of this year, and 600K H100 equivalents. With 350K H100s, including all of our other GPU or accelerator stuff, it would be 600-and-something-K aggregate capacity.

Swyx [00:39:38]: That's a lot of GPUs.

Soumith [00:39:39]: We'll talk about that separately. But the way we think about it is: we have a train of models, right? Llama 1, 2, 3, 4. And we have a bunch of GPUs. I don't think we're short of GPUs. Like-

Swyx [00:39:54]: Yeah, no, I wouldn't say so. Yeah, so it's all a matter of time.

Soumith [00:39:56]: I think time is the biggest bottleneck. It's like, when do you stop training the previous one, and when do you start training the next one? And how do you make those decisions? The data: do you have net new data, better clean data, for the next one, in a way that it's not worth really focusing on the previous one? It's just a standard iterative product. You're like, when is the iPhone 1? When do you start working on iPhone 2? Where is the iPhone? And so on, right? So mostly the considerations are time and generation, rather than GPUs, in my opinion.

Alessio [00:40:31]: So one of the things with the scaling laws, like Chinchilla is optimal to balance training and inference costs.
I think at Meta's scale, you would rather pay a lot more at training and then save on inference. How do you think about that from an infrastructure perspective? I think in your tweet, you said people can try and guess how you're using these GPUs. Can you just give people a bit of understanding? Because I've already seen a lot of VCs say, Llama 3 has been trained on 600,000 GPUs, and that's obviously not true, I'm sure. How do you allocate between the research, FAIR and the Llama training, the inference on Instagram suggestions that get me to scroll, the AI-generated stickers on WhatsApp, and all of that?

Soumith [00:41:11]: Yeah, we haven't talked about any of this publicly, but as a broad stroke, it's how we would allocate resources of any other kind at any company. You run a VC portfolio: how do you allocate your investments between different companies or whatever? You make various trade-offs, and you decide: should I invest in this project or this other project, or how much should I invest in this project? It's very much a zero-sum set of trade-offs. And it also comes into play how your clusters are configured: overall, what you can fit of what size in what cluster, and so on. So broadly, there's no magic sauce here. I mean, I think the details would add more spice, but they also wouldn't add more understanding. It's just gonna be like, oh, okay, they just think about this as I would normally do.
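For context on the Chinchilla trade-off in Alessio's question: the Chinchilla paper's rule of thumb is roughly 20 training tokens per parameter for compute-optimal training, and Llama 2 deliberately trained well past that point to get a model that is cheaper to serve. A rough back-of-envelope check (the 20x heuristic is the standard reading of the paper, not a number from this episode):

```python
# Chinchilla rule of thumb: ~20 training tokens per parameter.
params = 70e9                     # Llama 2 70B
chinchilla_tokens = 20 * params   # ~1.4e12 tokens
actual_tokens = 2.0e12            # Llama 2 trained on 2T tokens
print(f"Chinchilla-optimal: ~{chinchilla_tokens / 1e12:.1f}T tokens")
print(f"Actual: {actual_tokens / 1e12:.1f}T, i.e. {actual_tokens / chinchilla_tokens:.1f}x over,")
print("trading extra training compute for a model that is cheaper to serve")
```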
Alessio [00:42:05]: So even the GPU rich run through the same struggles of having to decide where to allocate things.

Soumith [00:42:11]: Yeah, I mean, at some point, I forgot who said it, but you kind of fit your models to the amount of compute you have. If you don't have enough compute, you figure out how to make do with smaller models. But no one as of today, I think, would feel like they have enough compute. I don't think I've heard any company within the AI space be like, oh yeah, we feel like we have sufficient compute and we couldn't have done better. That conversation, I don't think I've heard from any of my friends at other companies.

Eleuther

Swyx [00:42:47]: Stella from Eleuther sometimes says that, because she has a lot of donated compute. She's trying to put it to interesting uses, but for some reason she's decided to stop making large models.

Soumith [00:42:57]: I mean, that's a cool, high conviction opinion that might pay out.

Swyx [00:43:01]: Why?

Soumith [00:43:02]: I mean, she's taking a path that most people don't care to take in this climate, and she probably will have very differentiated ideas. I mean, think about the correlation of ideas in AI right now. It's so bad, right? Everyone's fighting for the same pie. In some weird sense, that's partly why I don't really directly work on LLMs. I used to do image models and stuff, and I actually stopped doing GANs because GANs were getting so hot that I didn't have any calibration of whether my work would be useful or not, because, oh yeah, someone else did the same thing you did. There's so much to do, I don't understand why I need to fight for the same pie. So I think Stella's decision is very smart.

Making Bets

Alessio [00:43:53]: And how do you reconcile that with how we started the discussion, about intrinsic versus extrinsic accomplishment or success? How should people think about that, especially when they're doing a PhD or early in their career? I think at NeurIPS, I walked through a lot of the posters and whatnot, and there seems to be mode collapse in the research, in a way: a lot of people working on the same things. Is it worth it for a PhD student to not take a bet on something that is maybe not as interesting, just because of funding and visibility and whatnot? Or, yeah, what suggestions would you give?

Soumith [00:44:28]: I think there's a baseline level of compatibility you need to have with the field. Basically, you need to figure out if you will get paid enough to eat, right? Whatever reasonable, normal lifestyle you want to have as a baseline. So you at least have to pick a problem within the neighborhood of fundable. You wouldn't want to be doing something so obscure that people are like, I don't know, you can work on it.

Swyx [00:44:59]: Would a limit on fundability be, I'm just observing, something like three months of compute, right? That's the top line, that's the max that you can spend on any one project.

Soumith [00:45:09]: But I think that's very ill-specified, like how much compute, right? I think the notion of fundability is broader. It's more like, hey, is this family of models within the acceptable set of you're-not-crazy, or something, right? Even something like neural ODEs, which is a very boundary-pushing thing, or state-space models or whatever: all of these things, I think, are still in fundable territory. When you're talking about, I'm gonna do one of the neuromorphic models and then apply image classification to them or something, then it becomes a bit questionable. Again, it depends on your motivation. Maybe if you're a neuroscientist, it actually is feasible. But if you're an AI engineer, like the audience of these podcasts, then it's more questionable. The way I think about it is, you need to figure out how you can be at the baseline level of fundability, just so that you can live. And then after that, really focus on intrinsic motivation, and it depends on your strengths: how you can play to your strengths and your interests at the same time. I try to look at a bunch of ideas that are interesting to me, but also try to play to my strengths. I'm not gonna go work on theoretical ML. I'm interested in it, but when I want to work on something like that, I try to partner with someone who is actually a good theoretical ML person and see if I actually have any value to provide. And if they think I do, then I come in. So I think you'd want to find that intersection of ideas you like that also play to your strengths, and I'd go from there. Everything else, like actually finding extrinsic success and all of that, the way I think about it is somewhat immaterial. When you're talking about building ecosystems and stuff, slightly different considerations come into play, but that's a different conversation.

Swyx [00:47:06]: We're gonna pivot a little bit to just talking about open source AI. But one more thing I wanted to establish for Meta is this 600K number, just kind of rounding out the discussion. That's for all of Meta, so including your own inference needs, right? It's not just about training.

Soumith [00:47:19]: It's gonna be the number in our data centers for all of Meta, yeah.

Swyx [00:47:23]: Yeah, so there's a decent amount of workload serving Facebook and Instagram and whatever. And then is there interest in your own hardware?

MTIA

Soumith [00:47:31]: We already talked about our own hardware. It's called MTIA.
Our own silicon. I think we've even shown the standard photograph of you holding the chip that doesn't work. Like, as in the chip that you basically just get, like-

Swyx [00:47:51]: As a test, right?

Soumith [00:47:52]: Yeah, a test chip or whatever. So we are working on our silicon, and we'll probably talk more about it when the time is right, but-

Swyx [00:48:00]: What gaps do you have that the market doesn't offer?

Soumith [00:48:04]: Okay, I mean, this is easy to answer. So basically, remember how I told you about the memory hierarchy and the sweet spots and all of that? Fundamentally, when you build a piece of hardware, you make it general enough that a wide set of customers and a wide set of workloads can use it effectively, while trying to get the maximum level of performance they can. The more specialized you make the chip, the more hardware-efficient it's going to be, the more power-efficient it's going to be, and the easier it's going to be to write the software, like the kernels, that maps that one or two workloads to that hardware, and so on. So it's pretty well understood across the industry that if you have a sufficiently large volume, enough workload, you can specialize it and get some efficiency gains, like power gains and so on. So the way you can think about every large company building silicon, and I think a bunch of the other large companies are building their own silicon as well, is that each large company has a sufficiently large set of verticalized workloads that can be specialized, that have a pattern to them that a more generic accelerator like an NVIDIA or an AMD GPU does not exploit. So there is some level of power efficiency that you're leaving on the table by not exploiting that. And you have sufficient scale, and you have sufficient forecasted stability that those workloads will exist in the same form, that it's worth spending the time to build out a chip to exploit that sweet spot. Obviously, something like this is only useful if you hit a certain scale, and if your forecasted prediction that those kinds of workloads will stay specializable and exploitable in the same way holds true. So yeah, that's why we're building our own chips.

Swyx [00:50:08]: Awesome.

Open Source AI

Alessio [00:50:09]: Yeah, I know we've been talking a lot on a lot of different topics, and going back to open source, you had a very good tweet. You said that a single company's closed source effort rate-limits against people's imaginations and needs. How do you think about all the impact that some of the Meta AI work in open source has been having, and maybe directions for the whole open source AI space?

Soumith [00:50:32]: Yeah, in general, I think first it's worth talking about this in terms of open, and not just open source, because with the whole notion of model weights, no one even knows what source means for these things. But just for the discussion, when I say open source, you can assume I'm just talking about open. And then there's the whole notion of licensing and all that: commercial, non-commercial, commercial with clauses, and so on. I think at a fundamental level, the most beneficial value of open source is that you make the distribution very wide. It's just available with no friction, and people can do transformative things in a way that's very accessible. Maybe it's open source, but it has a commercial license, and I'm a student in India. I don't care about the license. I just don't even understand the license.
But like the fact that I can use it and do something with it is very transformative to me. Like I got this thing in a very accessible way. And then it's various degrees, right? And then if it's open source, but it's actually a commercial license, then a lot of companies are gonna benefit from gaining value that they didn't previously have, that they maybe had to pay a closed source company for. So open source is just a very interesting tool that you can use in various ways. So there's, again, two kinds of open source. One is some large company doing a lot of work and then open sourcing it. And that kind of effort is not really feasible by, say, a band of volunteers doing it the same way. So there's both a capital and operational expenditure that the large company just decided to ignore and give it away to the world for some benefits of some kind. They're not as tangible as direct revenue. So in that part, Meta has been doing incredibly good things. They fund a huge amount of the PyTorch development. They've open sourced Llama and that family of models and several other fairly transformative projects. Faiss is one, Segment Anything, Detectron, Detectron 2, DensePose. I mean, it's-

Swyx [00:52:52]: Seamless. Yeah, Seamless.

Soumith [00:52:53]: Like, the list is so long that we're not gonna cover it all. So I think Meta comes into that category where we spend a lot of CapEx and OpEx and we have a high talent density of great AI people and we open our stuff. And the thesis for that, I remember when FAIR was started, the common thing was like, wait, why would Meta wanna start an open AI lab? Like, what exactly is the benefit from a commercial perspective? And the thesis then was very simple. It was: AI is currently rate limiting Meta's ability to do things. Our ability to build various product integrations, moderation, various other factors. Like, AI was the limiting factor, and we just wanted AI to advance more, and we didn't care if the IP of the AI was uniquely in our possession or not. However the field advances, that accelerates Meta's ability to build a better product. So we just built an open AI lab and we said, if this helps accelerate the progress of AI, that's strictly great for us. Very easy, rational, right? Still the same to a large extent with the Llama stuff. And it's the same values, but the argument is a bit more nuanced. And then there's a second kind of open source, which is: oh, we built this project nights and weekends, and we're very smart people, and we open sourced it, and then we built a community around it. This is the Linux kernel and various software projects like that. So I think about open source as both of these things being beneficial and both of these things being different. They're different and beneficial in their own ways. The second one is really useful when there's an active arbitrage to be done. If someone's not really looking at a particular space because it's not commercially viable or whatever, a band of volunteers can just coordinate online and do something and then make that happen. And that's great.

Open Source LLMs

I wanna cover a little bit about open source LLMs maybe. So open source LLMs have been very interesting because I think we were trending towards an increase in open source in AI from 2010 all the way to 2017 or something, where more and more pressure within the community was to open source their stuff so that their methods and stuff get adopted.
And then the LLM revolution kind of had the opposite effect. OpenAI stopped open sourcing their stuff, and DeepMind kind of did too; like all the other cloud providers, they didn't open source their stuff. And it was not good, in the sense that, first, science done in isolation probably will just form its own bubble where people believe their own b******t or whatever. So there's that problem. And then there was the other problem, which was the accessibility part. Like, okay, I again always go back to: I'm a student in India with no money. What is my accessibility to any of these closed models? At some scale I have to pay money. That makes it a non-starter and stuff. And there's also the control thing. I strongly believe if you want human aligned stuff, you want all humans to give feedback. And you want all humans to have access to that technology in the first place. And I actually have seen, living in New York, whenever I come to Silicon Valley, I see a different cultural bubble. Like all the friends I hang out with talk about some random thing like Dyson Spheres or whatever, that's a thing. And most of the world doesn't know or care about any of this stuff. It's definitely a bubble, and bubbles can form very easily. And when you make a lot of decisions because you're in a bubble, they're probably not globally optimal decisions. So I think open source, the distribution of open source, powers a certain kind of non-falsifiability that I think is very important. I think on the open source models, it's going great in the fact that LoRA, I think, came out of the necessity of open source models needing to be fine-tunable in some way. Yeah, and I think DPO also came out of the academic open source side of things. So do any of the closed source labs, did any of them already have LoRA or DPO internally? Maybe, but that does not advance humanity in any way. It advances some company's probability of doing the winner-takes-all that I talked about earlier in the podcast.

Open Source and Trust

I don't know, it just feels fundamentally good. Like when people try to, you know, people are like, well, what are the ways in which it is not okay? I find most of these arguments, and this might be a little controversial, but I find a lot of arguments based on whether closed source models are safer or open source models are safer very much related to what kind of culture they grew up in, what kind of society they grew up in. If they grew up in a society that they trusted, then I think they take the closed source argument. And if they grew up in a society that they couldn't trust, where the norm was that you didn't trust your government, obviously it's corrupt or whatever, then I think the open source argument is what they take. I think there's a deep connection to people's innate biases from their childhood and their trust in society and governmental aspects that push them towards one opinion or the other. And I'm definitely in the camp of: open source is definitely going to actually have better outcomes for society. Closed source to me just means the centralization of power, which, you know, is really hard to trust. So I think it's going well.
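To make the LoRA reference above concrete, here is a minimal sketch of the idea in PyTorch: freeze a pretrained weight matrix and train only a low-rank update on top of it. The class name, rank, and dimensions below are illustrative assumptions, not code from any of the projects discussed.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear and adds a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pretrained weights stay frozen
        # Only these rank * (in_features + out_features) parameters are trained.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Frozen path plus scaled low-rank correction: W x + s * B (A x)
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768), rank=8)  # hypothetical dimensions

Because only A and B receive gradients, a fine-tune touches a tiny fraction of the model's parameters, which is what made the technique practical for people adapting open weights on modest hardware.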

Ars Moriendi
Ep.4 Né pour déchainer les enfers

Ars Moriendi

Play Episode Listen Later Dec 5, 2023 57:27


Our stories. Richard Speck: They say alcohol brings out our true personality, and what we keep well hidden deep inside surfaces, sometimes explosively. Blackouts can sometimes conceal unimaginable horrors... Sombre Dimanche: What if certain songs had the power to manipulate the mind? The sinister reputation surrounding this Hungarian song holds that it drives people to suicide. Featuring Maxime Paradis, Mathieu Niquette and Jean-Michel Berthiaume ArsMoriendiPodcast.ca

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
10 Best Open-Source Deep Learning Tools to Know in 2023; Will.i.am hails AI technology as ‘new renaissance' in music; Google says it'll scrape everything you post online for AI;

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

Play Episode Listen Later Jul 3, 2023 25:45


10 Best Open-Source Deep Learning Tools to Know in 2023: TensorFlow, PyTorch, Keras, MXNet, Caffe, Theano, Torch, Chainer, DeepLearning4j, Caffe2;
Google says it'll scrape everything you post online for AI;
Microsoft uses ChatGPT to instruct and interact with robots;
Will.i.am hails AI technology as 'new renaissance' in music;
Benchmarking LLMs searching scientific evidence;
MIT Unveils Revolutionary AI Tool: Enhancing Chart Interpretation and Accessibility with Adaptive, Detail-Rich Captions for Users of All Abilities;
Moonlander launches AI-based platform for immersive 3D game development;
Mozilla adds AI Help that does the opposite;
Panic about overhyped AI risk could lead to the wrong kind of regulation;
It only took five hours for an AI model to design a functional computer;
Daily AI Update: News from Microsoft, Humane, Nvidia, and Moonlander;
US Senator Believes AI Should Be Aligned With Democratic Values.
This podcast is generated using the Wondercraft AI platform, a tool that makes it super easy to start your own podcast, by enabling you to use hyper-realistic AI voices as your host. Like mine!
Attention AI Unraveled podcast listeners! Are you eager to expand your understanding of artificial intelligence? Look no further than the essential book "AI Unraveled: Demystifying Frequently Asked Questions on Artificial Intelligence," by Etienne Noumen, now available at Google, Apple and Amazon! This engaging read answers your burning questions and provides valuable insights into the captivating world of AI. Don't miss this opportunity to elevate your knowledge and stay ahead of the curve. Get your copy at Apple, Google, or Amazon today!
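The frameworks in that list popularized different programming styles; Chainer's contribution was define-by-run, where the network graph is built on the fly as Python code executes. A minimal sketch, assuming Chainer's standard Chain/Links API (the layer sizes here are arbitrary):

import chainer
import chainer.functions as F
import chainer.links as L

class MLP(chainer.Chain):
    def __init__(self, n_hidden=100, n_out=10):
        super().__init__()
        with self.init_scope():
            # Passing None lets Chainer infer the input size on the first call.
            self.l1 = L.Linear(None, n_hidden)
            self.l2 = L.Linear(n_hidden, n_out)

    def __call__(self, x):
        # The computational graph is recorded as these operations execute.
        return self.l2(F.relu(self.l1(x)))

PyTorch adopted the same define-by-run philosophy, which is part of why moving between the two frameworks is fairly mechanical.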

Half the Experience
Top Food Chainer's

Half the Experience

Play Episode Listen Later Jan 3, 2023 69:00


I was five years old, right before I turned six. When the people were free and traded shells for food, we understood that we were the top of the crop, ready to build air boats and water planes. Do not forget to share and subscribe. Thanks to all our other one other listeners! --- Support this podcast: https://anchor.fm/halfthe-experience/support

Eureka Street Crypto Podcast
Episode 38 - Hey man, are you a multi-chainer or a maxi?

Eureka Street Crypto Podcast

Play Episode Listen Later Dec 22, 2021 38:10


Good morning! This morning I bring up a topic that has been on my mind. Does the future include multiple layer 1 blockchains? Or will one blockchain rule them all? Many ETH maxis will tell you that Ethereum will be the base-layer for all of crypto and that the world will run on a variety of layer 2 solutions. They say that all the other layer 1 "monolithic" blockchains will fall to their doom. However, at the same time, you see lots of dapps like Aave using any blockchain that has sufficient liquidity and reaping handsome rewards for being open-minded to many other blockchain solutions. Which perspective will be the victor in the end? Sources: https://newsletter.defitimes.io/p/go-multi-chain-or-ngmi?utm_campaign=post&utm_medium=email https://uniswap.org/ https://aave.com/ https://ens.domains/

Commander Amateur
Budget-Deck: Chainer, Nightmare Adept

Commander Amateur

Play Episode Listen Later Sep 23, 2021 16:52


Recycling creatures from the graveyard is great. But our commander Chainer also gives all of them haste. Hear how we make mischief with these abilities in this budget deck guide. Decklist on Moxfield. PodRiders Discord server, Instagram, Twitter. PodRiders is equipped by Shure. Are you looking for high-quality audio equipment? Then follow this link: https://shu.re/3zdcJUV

Into the 99
Modern Horizons Two: Legendary Creature Lore

Into the 99

Play Episode Listen Later Jun 16, 2021 53:50


On this lore episode, Daniel, Evan and John talk about some of the other legendary creatures in Modern Horizons 2. We go over Garth One-Eye, Asmor, Chatterfang, Titania, Chainer, Tourach and Kaldra and their small but interesting back stories. Definitely tune in if you want to learn more about these exciting new legendary creatures! You can read more about them here:
https://mtg.fandom.com/wiki/Garth_One-Eye
https://mtg.fandom.com/wiki/Asmoranomardicadaistinaculdacar
https://mtg.fandom.com/wiki/Chainer
https://mtg.fandom.com/wiki/Tourach
https://mtg.fandom.com/wiki/Kaldra
https://mtg.fandom.com/wiki/Chatterfang
https://mtg.fandom.com/wiki/Titania
https://mtg.fandom.com/wiki/Grist
Want all our awesome content in one place? Check out our website: https://www.Intothe99.com
If you want to support the show make sure to check out our merch store and Patreon:
Patreon: https://www.patreon.com/Intothe99
Merch Store: https://teespring.com/stores/intothe99
Intro Music by:
Track: ROY KNOX - Lost In Sound
Music Provided By: Magic Records
Listen To The Original: https://youtu.be/bafd5CsNk0M
Fanlink: https://fanlink.to/lis
Outro Music by: https://www.purple-planet.com

Commander Theory
How To Be Popular - A Look Into the Top 20 Commanders

Commander Theory

Play Episode Listen Later Jan 8, 2021 43:34


The top 20 commanders over the last two years is one of the most interesting features on EDHREC. It's interesting to see it change over time and to see what commanders have the go-get-'em attitude needed to break into the top spots, but what can it teach us about design? In this episode, Nick and Zak look at the top 20 commanders on EDHREC and try to analyze what makes them popular. Why should Muldrotha succeed when Chainer, Nightmare Adept and other graveyard strategies don't even make it to the top 100 most popular commanders? Listen and find out! We especially want your feedback from this episode, so please let us know what you think, see, and feel about this kind of topic! What decks and cards do you see the most of? What have the beginners in your Magic playgroups been drawn towards? Next week will be our first Kaldheim spoiler episode, so stay tuned! It's gonna be big!
Follow Commander Theory on Tumblr, Twitter, and YouTube for more Commander content!
If you like our podcast, please support us on Patreon!
If you're planning on shopping with TCGPlayer, you can support the show by using our affiliate link. It costs you nothing and earns money for the show!
The opening theme is Lincoln Continental by Entrophy (now Nic Cage on Soundcloud).
Support the show (https://www.patreon.com/commandertheory)

Haunted Attraction Network
Postmortem: Chainer's Field of Screams in Botkins, Ohio

Haunted Attraction Network

Play Episode Listen Later Dec 4, 2020 12:16


Max gets a postmortem report from Pyle Hazard at Chainer's Field of Screams in Botkins, Ohio. Chainer's is a small family haunt, but they had a stellar year and raised a lot for charity. They're keeping involved by ringing for the Salvation Army and entering the local mascot competition. In the news, Darryl reports on: Black Friday Deals for Haunters, the fire at Dark Woods, Haunt Weekly's Anti-Christmas Music List, and a Halloween miracle.


Rebuild
275: Not-So-Smart Speaker (higepon)

Rebuild

Play Episode Listen Later Jul 20, 2020 97:15


We welcomed Taro Minowa as our guest and talked about GitHub, general-purpose AI, Alfred, Twitter, Netflix, sous-vide cooking, and more. Show Notes Cleanfeed Rebuild Search Groonga - An open-source fulltext search engine and column store GitHub Archive Program: the journey of the world's open source code to the Arctic Abstraction and Reasoning Challenge | Kaggle Conversational AI: The Science Behind the Alexa Prize The Myth of a Superhuman AI ELIZA The Migration Guide from Chainer to PyTorch François Chollet Affine maps Alfred - Productivity App for macOS blueutil: CLI for bluetooth on OSX Magnet Typora — a markdown editor, markdown reader. A hacker used Twitter’s own ‘admin’ tool to spread cryptocurrency scam Rebuild: 37: N Factor Auth (Naoki Hiroshima) Twitter: An update on our security incident How to Protect Your Phone Against a SIM Swap Attack The Dark Forest (Three-Body II, Vol. 1) | Cixin Liu Japan Sinks: 2020 Unsolved Mysteries BONIQ sous-vide cooker Instant Pot Riro-shi (@ly_rone) / Twitter Anker PowerWave Pad Alloy 15W
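The Chainer-to-PyTorch migration guide in the show notes reflects how closely the two APIs map onto each other. The correspondence below is our own illustrative sketch, not code from the guide:

# Rough Chainer -> PyTorch equivalents (illustrative):
#   chainer.Chain                    -> torch.nn.Module
#   chainer.links.Linear(None, 100)  -> torch.nn.LazyLinear(100)
#   chainer.functions.relu           -> torch.nn.functional.relu
#   chainer.optimizers.Adam          -> torch.optim.Adam
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    def __init__(self, n_hidden=100, n_out=10):
        super().__init__()
        self.l1 = nn.LazyLinear(n_hidden)  # infers input size, like Linear(None, ...)
        self.l2 = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        return self.l2(F.relu(self.l1(x)))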

Commander ad Populum
Commander ad Populum Ep 62 - Chainer, Nightmare Adept

Commander ad Populum

Play Episode Listen Later Jun 5, 2020 35:26


Welcome to another deck tech for the people, by the people, for the people! Today's list is a Patreon supporter list submitted by Anthony Bockley. It can be found here: https://archidekt.com/decks/314111#Got_me_in_CHAINS Huge thank you to all the supporters of the podcast! If you'd like to have a deck featured, or take a look at any of the other benefits of becoming a patron, head here: https://www.patreon.com/CadPopCast
Social media locations:
Twitter: @CadPopCast
Facebook: Facebook.com/CadPopCast
Patreon: Patreon.com/CadPopCast

Researchat.fm
52. Split into a row

Researchat.fm

Play Episode Listen Later Apr 11, 2020 69:52


We invited kotone as our guest and talked about deep learning, machine learning and its algorithms, and the machines used to run them. Show notes: Machine learning (Wikipedia); Deep learning (Wikipedia); Neural network (Wikipedia); Perceptron (Wikipedia); GPU (Wikipedia); Introduction to Deep Learning: the Chainer tutorial; Preferred Networks, Inc.; Chainer; NVIDIA CUDA-X GPU-Accelerated Libraries for AI and HPC; Preferred Networks announces the world's fastest deep learning training, on 1,024 GPUs; Naruto Shippuden; Naruto complete manga box set; ImageNet… an image dataset for training; Convolutional neural network (Wikipedia); THE MNIST DATABASE of handwritten digits… a dataset for handwritten character recognition; Backpropagation (Wikipedia); Data Parallelism VS Model Parallelism in Distributed Deep Learning Training… the difference between data parallelism and model parallelism; Extremely Large Minibatch SGD: Training ResNet-50 on ImageNet in 15 Minutes (arXiv)… the ultra-efficient data-parallel training achieved by PFN; One weird trick for parallelizing convolutional neural networks (arXiv)… model-parallelizing convolutional neural networks; Performance Analysis of a Pipelined Backpropagation Parallel Algorithm (IEEE)… a study of layer parallelism via pipelining; GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism (arXiv)… an implementation of layer parallelism via micro-batch pipelining; Decoupled Neural Interfaces using Synthetic Gradients (arXiv)… a training method that enables parallelism by synthesizing the gradients backpropagation would provide; Direct Feedback Alignment Provides Learning in Deep Neural Networks (arXiv)… a training method that needs no layer-by-layer backpropagation; Training Neural Networks with Local Error Signals (arXiv)… a training method that needs no global error signal; The K supercomputer is outrageously fast; The Fugaku supercomputer project; With the supercomputer removed, is it now "the station formerly in front of K"? The station name becomes a talking point; GUNSLINGER GIRL (15 volumes)… a manga recommended by kotone; Researchat.fm feedback form… Researchat.fm welcomes messages from listeners. Editorial notes: Thank you for having me on a podcast whose main topic is life science. The recording was a lot of fun, so I hope everyone enjoys it too. I'd love to get my revenge someday with the IDOLM@STER talk (kotone); The machine learning topic was a nice change from our usual themes (soh); Listening to stories from other fields is fun~~~ (tadasu); Thanks for coming on, kotone! Let's definitely revisit the IDOLM@STER talk whose audio got lost! (coela)
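The show notes contrast data parallelism (every worker holds a full model replica and processes a shard of the minibatch) with model parallelism (the model itself is split across devices). As a minimal sketch of the synchronous data-parallel step, here is a gradient all-reduce using torch.distributed; the function name and world-size handling are our own illustrative choices, and a launcher is assumed to have initialized the process group:

import torch.distributed as dist

def average_gradients(model, world_size):
    # Each worker has already run loss.backward() on its own shard of the
    # minibatch. Summing gradients across workers and dividing by the worker
    # count gives every replica the same averaged update, which is the core
    # of the large-minibatch training described in the PFN paper above.
    # Assumes dist.init_process_group() has already been called.
    for param in model.parameters():
        if param.grad is not None:
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            param.grad /= world_size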

Questfall
Episode 26 - Chain of Fools

Questfall

Play Episode Listen Later Apr 8, 2020 58:15


The Coven has honoured their deal with Sichora and moved to destroy Vizimir the Chainer, but the fight seems to be going badly. Can our heroes make it to the battle in time to save Nizog and Oreissa from the brutal chain devil?
Welcome to Questfall, adventurers! For more details on us and our world of the wicked and weird, follow us on Facebook, Instagram, Twitter, or Tumblr, or check out our website at questfall.ca
Music: “Questfall Theme” by Mimi Taylor and James Everett

tumblr coven chain of fools chainer
The Gravel Ride.  A cycling podcast
Gravel Bike 101 (2020) - A conversation with Randall from Thesis Bike focused on finding the right gravel bike for you.

The Gravel Ride. A cycling podcast

Play Episode Listen Later Mar 31, 2020 53:41


Sponsored by Cycle Oregon. Look for multiple great events this fall. This week we take another look at Gravel Bike 101 in a conversation with Randall Jacobs from Thesis Bike, with the goal of breaking down some of the considerations when purchasing a gravel bike. New way to support the pod! Buy me a Coffee. Randall Jacobs @ Thesis Bike Automated Transcript (please forgive the typos). Good day everyone, and welcome to The Gravel Ride podcast. I'm your host, Craig Dalton. This week's episode of the podcast is brought to you by our friends at Cycle Oregon. I introduced you to them last week, talking about the exciting gravel weekend they had planned in May, and wouldn't you know it, boom, pandemic. The guys up at Cycle Oregon are super bummed, but they're delaying this event until October, which is definitely the right thing to do. I know event organizers all over the world are struggling with what to do and where to get some time slots. Fortunately, as you guys know, Oregon is such a great place to ride in the fall. October is going to be a real neat time there in the Ti Valley, and I'm looking forward to the event. So go check out www.CycleOregon.com, and if you're interested in information, make sure to put TGR in your registration. I believe there's a team field or otherwise a note field where you can put TGR, just to let them know that you heard about it first here at The Gravel Ride, and definitely support them and all the other event organizers who are rejiggering their calendars to make sure that when it's safe to go out, when it's safe to congregate in groups, we have awesome events to look forward to. I don't know about you guys, but this pandemic has forced me to really think about what my calendar is going to look like. A lot of great events in the first half of the year have been postponed, and perhaps they'll come back later in the year, but it's definitely gonna be a fun-filled fall. I'm super optimistic and looking forward to it. I know, like me, everybody's struggling through this hard time, so let's just band together. Let's do what we can. Let's be kind to one another. Let's reach out to each other online. Let's keep those solo rides going so we can stay fit, and you know, we'll be back. Everybody's going to be back. So keep in mind that I record these podcasts maybe a couple of months in advance, so if any of the content seems to be inappropriate, like me calling for a group ride or anything like that, just keep in mind that the intros are more present, but the body of the recording is done typically a month or so in arrears, so again, forgive any gaps from that perspective. I'm super stoked on this episode. As with most episodes, I really wanted to revisit our Gravel Bike 101 episode we did early on in the podcast, because I think it's just a great starting point for new riders as well as riders who have been around for a while and are thinking about their equipment in different ways as they've learned how to ride and chosen the terrain that they've fallen in love with. I've asked my friend Randall, co-founder of Thesis Bike out of San Francisco, to join the podcast, and he actually had me over again. This was before the, so I was over at Thesis world headquarters over there in San Francisco and just enjoyed the conversation. It was a lot of fun to catch up with a buddy that I've been riding with now for a year plus. The Thesis bike is available at thesis.bike.
They've got some deals going over there right now, so hop on over and check out what they're doing, and say hi to Randall and the team. And note, they definitely like to interact with the community, so feel free to reach out with any questions about their bike and anything that's come up in relation to this podcast and Gravel Bike 101. I'm here for you as always, but I'm sure Randall would be game to answer any questions over social media or directly over email. So apologies for the long intro. Thanks again to our sponsor Cycle Oregon for stepping up for this episode and a few others. We look forward to seeing you in the fall at some of your great events. And with that, let's dive right in to Gravel Bike 101. All right, Randall, welcome to the show. Thank you very much. It's nice to be back. I appreciate you having me in your home, and where you work a lot for Thesis. It's a joy to be here. This is the global headquarters of a virtual company. Yeah, exactly. So for a while, I know on our bike rides I've been talking to you about my desire to kind of take a step back and do another Gravel Bike 101 episode. I did one back in 2018 with the goal of, you know, if you were thinking about getting into the sport, what do you need to think about? And I realized, I was in a bike shop, we kind of probably stepped ahead of where we should have even started, because a lot of people will stumble upon this podcast and just be asking themselves the question, is gravel cycling for me? So I thought it'd be great to just have a conversation about that today. Sure. And I think the answer to that question really depends on where you're coming from, right? So some of us are coming, you know, I was a former mountain biker, you know, racer. I trained a lot on the road. So I'm already kind of a dyed-in-the-wool cyclist. You know, this is what I do, it's my tribe. But then you have other people who are getting into this. Maybe this is their first serious bike, right? Maybe they had a bike in college, maybe they have, like, a commuter or something like that, and they see their friends having fun. And so I think in terms of how to think about a gravel bike, well, for the people who already have a stable, maybe they're thinking about this as an N-plus-one machine. And by that I mean, the optimal number of bikes for a cyclist is often said to be N plus one: I need one more. I'm not an adherent to that philosophy, but it's the idea of having a dedicated machine for going out on these long rambling rides on a mix of road and dirt and so on, being able to get lost and have adventures. The other philosophy, which is kind of my jam, is, you know, N minus one, or maybe even N minus two or N minus three, if you have that stable. So think of a bike and gravel riding as, like, you have a bike that can do all the things, right? It's a really good, say, endurance road bike if you put some slicks on it. You put some fat 650Bs on, it's a borderline mountain bike. You put a dropper and a flared bar on there, like, you have a better mountain bike than, you know, the people who invented mountain biking, not far from here. So you know, this idea of having a machine where you can go out on a ride on the road and be like, huh, I wonder where that trail goes.
And then just dive into it and explore, or somebody is hosting, you know, a mixed terrain ride and you just have the right machine for a variety of different experiences. The last one being, like, adventure. It was like, travel, bikepacking, touring. These bikes generally, oftentimes, have accommodations so you can put bag systems on and things like this, and you can really get out there. So I'm going to take a step back for my sister, who's constantly asking me, what the heck is this that you do? She knows mountain bikes and she knows the Tour de France. And so what I've said to her is, it's a drop-handlebar bike that you can ride off road. So it kind of looks like a road bike to many people, but it's actually capable and has a lot of design features, that we can get into later, that allow it to go anywhere, on road or off road. It can. Yeah. And there's kind of a spectrum, right? You have machines that are, you know, almost like cross bikes in terms of more limited tire clearance, and maybe the geo is a little bit more aggressive or something. And then you have others that are essentially drop-bar mountain bikes, right? And so the former is not going to be as capable on dirt. The latter is going to be kind of a pig on the road, and the steering will be a bit slower, and they're great for that dedicated purpose. But yeah, in terms of being able to go out and have this wide variety of adventures, you know, you want to be kind of mindful of getting a machine that'll cover all the bases. And I think that that's a gravel bike at its best: one that can do all the things. Yeah. And I mean, I think that's an interesting part of this exploding sector of the cycling industry, is that people are trying to figure out, well, what's my entry point? Is it a bike that can do all these things? Or is it a bike that does one end of the spectrum better than others? And you know, I often talk about road plus bikes as being sort of the basic entry point. If you're on the roadie side of the market, you're like, okay, now I can run a 32c tire in addition to my 25 or my 28 when I'm on the road, and when I'm running that 32c, I can go on a dirt road and feel comfortable. Well, this really gets down to, like, you know, let's get down to the brass tacks of what is the difference between all these different bikes. You have road bikes, and you have climbing road bikes and aero road bikes and endurance road bikes. You have cross bikes, and you have gravel bikes, you have, you know, bikepacking and touring rigs, and so on. And you know, there's this idea that every one of these is kind of purpose built for that experience. But we've had some key enabling technologies of late, one of which being tubeless tires, right? Run on wide rims. Another being, you know, dropper posts, you know, the trend towards slightly flared bars, and then materials like carbon fiber make it so that you can have a machine that's lightweight. You can have a machine that is, you know, very capable off road. Cause, oh, the last one being disc brakes, of course. You know, you can swap between wheelsets to have like a road or a dirt experience, if you want to go to the extremes.
And then with something like a dropper, you know, you're getting into mountain bike territory without suspension, because you're able to shift your weight back and keep your front wheel light, let it roll and kind of sail over terrain, and your butt's off the back and the bike's dancing, you know, underneath you as your legs are acting as suspension. Like, the capability of something like that is well into cross country territory. Yeah. Yeah. So let me, I'm going to step back for my sister's benefit again and say, why do we have tubeless tires? Okay. We used to have tubes in tires, and we still do on plenty of bikes, but tubes required us to run higher air pressure to avoid pinch flats. Yep. And probably many other reasons that I won't drill into. And now we have just the tire with some sealant inside that we pump up, and we can run lower or higher pressure, which gives us, we can talk about what it will do off road, but at sort of a simple level, it allows us to have a more comfortable riding tire, even better rolling resistance, and similar or potentially even slightly lighter system weight. It's actually benefits all around. You know, so with tubeless tires, one of the big risks that you have, especially as you go off road, is pinch flats. So basically, you know, you hit a bump, it pinches the tube between the ground and the rim, and you get a little snakebite sort of pattern on the tube. That goes away with tubeless. The manufacturing tolerances available within the bike industry have improved significantly, and you know, tire construction, all that stuff, that makes it so that you can get the tight tolerances needed for a tubeless system. The advent of sealants and so on makes it so that not only do you seal the casing properly, but if you get a little puncture, there's a good chance it's going to hold up. And so it's all benefit. The only downside may be road; some people will say, like, oh, tubeless road, it's a pain in the butt, the industry hasn't properly settled on standards and so on. That is actually mostly a problem with narrow rims and tires, and if you run wide rims and a 28-plus road tire, your pressures are low enough where a lot of the problems associated with high pressure systems go away. So if you're thinking tubeless: tubeless is an essential enabling technology of these experiences. Go tubeless, you won't look back. And that's all. And when you walk into the bike shop or you're shopping online, it's not going to look any different. It's just a wheel with a tire on it. If you're in the I'm-buying-my-first-gravel-bike camp, don't stress about that. But when someone says tubeless, two thumbs up from everybody here. It's super important to your enjoyment for a lot of different reasons. The second thing you mentioned that may be different looking than somebody's previous shopping experience with bikes are these disc brakes. And the only thing really you need to know about disc brakes is they stop a hell of a lot better than caliper brakes or anything that preceded them. And they're really a must-have for going off road. Yeah. And of course, people often, as you cited there, will cite the power of a disc brake as the primary benefit; a good caliper brake in the dry has plenty of braking force. Right. But it's the consistency of braking, like in the wet, in the dirt, and so on.
You're not grinding down your rims, so the rims are going to hold up. It changes rim construction as well, so you can have them lighter, stiffer, stronger, and not have to dissipate heat. But then also modulation. So, especially on dirt, you know, the difference between breaking traction and not breaking traction can be a tiny amount of force at the lever. And so being able to translate that, you know, at the end of the day, a human on a bike is a cyborg, right? And you're trying to create this melding of human and machine such that, you know, it's an extension of the animal on the machine. And so that modulation, I think, is actually arguably one of the greatest benefits. The last one being, and this one's quite critical for gravel, is you're no longer dependent on your rim and tire combo; your rim and tire combo doesn't affect your brake caliper clearance, cause you're not squeezing at the rim, you're doing it at the rotor. And so you can swap wheels. You can have, you know, a road set with a skinny slick, and you can have a big fat mountain bike tire on your other set, and it's gonna grab at the same point. And so that is where you see 650Bs come in. You're not gonna find a caliper that brakes well at the rim and can fit around a 40 mil tire anyways. Like, you know, cross bikes, notoriously they squeal, and so on. So that other component, of being able to fit a variety of different wheel-tire packages, is kind of another key piece that I think was essential in this big shift. Right? Yup. Yeah, exactly. So when you go in the bike shop, you're going to see something with drop handlebars and a little bit knobbier tire than potentially you've seen on a road bike in your past shopping experience. You're also going to see a wide variety of frame materials. So anybody who's shopping for a bike, like every other sector of the sport: you've got steel bikes, you've got aluminum bikes, you've got titanium bikes, and you've got carbon fiber bikes, and we don't need to drill into the minutia around these different materials, cause that's probably another podcast. Don't want to go deep nerd on this. Don't want to go deep on it, but let's just put it out there that, in general camps, these materials are going to have different feels, different weights, and different attributes. Right? Yeah. It's interesting. I actually just did a whole project researching, you know, titanium, and got deep in the weeds. And you know, I was at Specialized when they were doing Smartweld with the aluminum. There are some ideas, some misconceptions, around, say, aluminum being really stiff. That was the case back in the day. I'm probably going into the weeds, aren't I? Think of it this way: as far as a material that gives you really impressive stiffness to weight, that's highly tuneable for, you know, damping and various other characteristics that you want on the bike, you just can't beat carbon. Like, it is just a superior material. And I know that, you know, ti and steel have their acolytes, and I think that those bikes are beautiful. They have their merits. It's great for custom because you can just miter tubes and tack them together pretty easily.
But as far as, if you're in the kind of three K plus range, you know, a carbon frame has a lot of benefits, especially for this experience, where, you know, you otherwise might... well, maybe we cut this part out, because I'm kind of going into the weeds already. Yeah, no, that's okay, Randall. You know, I think we're going to go in the weeds and we're going to pull back. I think at a high level, again, if you're a new athlete shopping for a bike, if this is your first proper adventure bicycle, you're going to have some sort of basic things that you're going to get in front of. So here's maybe a good way to frame this. If you're on a budget, right, and you know, your budget's like 1500 bucks: if there's a $1,500 gravel bike out there, it probably is not going to have the best components, because a lot of the money went into the frame. And you can think, well, it's upgradable and so on. Well, by the time you upgrade all those components, it's like getting a Civic and boosting it, and then you fix the suspension, and you've all of a sudden spent Porsche money, but you still have a Civic. But if you're just getting into it and you're on a budget, steel and aluminum are really hard to beat. You can find really well-thought-out steel and aluminum frames and chassis that will perform well and kind of get you into the sport. And some of the better aluminum ones in particular perform at a rather high level, again, using, you know, the Cannondale aluminum road bikes and the Specialized, you know, Smartweld bikes as examples of aluminum that performs like carbon. But at the top of the heap, carbon for sure. Yeah. And I know we'll get some emails and some texts about titanium, which I'm a big fan of. I love the material. It's a different ballpark, and I think when you're ready for titanium, you will have gone through that thought process, if it's ultimately the material that makes sense for you. Well, what it comes down to with titanium specifically is you just can't accomplish the bottom bracket stiffness with titanium that you can with carbon fiber or even aluminum, just because of the limitations on tube shaping and, you know, how much space you have to weld things at the bottom bracket juncture and so on. So that's probably the biggest compromise that you have with titanium, that bottom bracket stiffness. But otherwise, yeah, they're beautiful, and you can have a beautiful machine with that material. The other thing that I learned personally was that, you know, it's hard to make the right choice right when you get into this sport. So I was riding an aluminum Niner, which was my first gravel bike, which is fully capable, but it had cable-actuated brakes, and I think it could max out at about a 36 or 38, and it turned out, for me, you know, how I ride, it just wasn't matching the aggression, if you will, of the descending that I wanted to explore with the gravel bike. And I think that is, you know, one of those things that I do encourage people to really think about: what tires will your bicycle run? Because it can be limiting, and you need to think about what your strengths are, what your concerns are, as you're coming into the sport.
I think our group ride this last weekend was illustrative, cause I was talking to some women from the Santa Rosa area who were incredible athletes, great climbers, and a lot of fun to ride with. But when we got on the hairball descents, you know, they had the narrower tires, and I feel like it was holding them back a little bit. Although to their credit, they powered through every section we threw at them. Oh, they were crushing it. Yeah. But yeah, I mean, there's really no reason at this point, if you're buying a new bike, to buy something that doesn't take 650Bs. Like, I just think that's... even if you're thinking that you're going to be riding it more kind of endurance road, or more, say, like a Belgian Waffle Ride, people show up on, you know, 32 mil slicks, right? Even if that's going to be more your jam, you're going to reach a point where you want to hit something a little bit gnarlier, and you're going to be tire limited. And you know, I've ridden 700 by 40. There are people who say, like, 700 by 40 is, you know, faster, or 700c is going to be faster. They're thinking about, you know, racing and so on. But inevitably you have compromises with that. Well, one, it's not necessarily faster, because if the terrain is undulating and you have lots of bumps and so on, that's all, you know, horizontal energy that you put in by pedaling that's getting dissipated as vertical energy. Basically, you're getting bounced around on the bike, and so a big fat tire will address that. But then also, you just have so much more ability to go, I wonder if I can ride that, right? Big fat tire, you're gonna have a much better chance of riding it, and you're going to have less issues with, you know, cracking rims and things like this, cause you're underbiked on terrain that really demands a more capable machine. Yeah. I'm a broken record, obviously, [inaudible] 650B wide tires, but that's my jam. I think I could be wrong, but I suspect that most bikes out there get specced with 700c wheels. What's your sense on that? I think it's great to have a 700c set so that you can put your road slicks on them. And as long as the frame fits 650B, you'll still be able to go out and have properly rowdy fun. But don't you get the sense that most bikes you see in a bike shop are advertised with 700c as a starting point? A lot of them, yeah; that's just a sense, I haven't checked. And to your point, like, you know, we've both ridden 700c wheels plenty around here in Marin, and I do spend a fair amount of time on 700 by 40. But I remember going out to SBT GRVL this year, and the guys at Panaracer were like, oh, you should ride like a 32, and I was like, oh my God, I can't even imagine putting that on my gravel bike. That said, for that particular course, it would have been fine for me. But with the forties, I did find, as usual, I was just rolling by people on the descents. Having the wider tire, and even on the small road sections on that course, the actual paved road sections, I didn't really feel like 40 was holding me back in any way. Well, so my take on this is that, you know, the folks who are trying to run the minimal tire on the course, you know, if we're talking racers, that whole mindset is going to go the way of, you know, the 700-by-23 roadie mindset, where it's like, I need a tire that feels, you know, that's as hard as possible.
I'm going to do 700 by 23, I'm going to run it at 120 PSI, and I'm gonna feel everything, and that's gonna make me feel fast, and that probably means I'm actually going faster. Well, no: the rolling resistance is higher. There's no aerodynamic benefit. Obviously, the tire shape is the same. You're literally just wasting energy and beating the hell out of your body. So I think that the gravel scene is going to migrate much more towards fat 650Bs. Unless you're doing, like, hard-packed dirt fire roads, you know, the fatter 650Bs are the way to go. And you can just, you know, again, you're out on that dirt fire road: where does that singletrack go? That is a wonderful part of this experience. Yeah. And I know we probably won't drill too far into the notion of suspension and the many ways in which that gets into a bike, but tire volume is suspension. Don't get it wrong, don't get it twisted, people. Well, and it's suspension that is extremely efficient, right? It's not sapping energy. And you know, what's beautiful too is, like, let's say your trailhead is an hour away. Like, I ride up, you know, from San Francisco to Fairfax and do Tamarancho, right? And it's a proper mountain bike trail. Well, I'll run a few PSI higher on the way there and then drop it a little bit. And then, you know, getting shredded on the singletrack, and it's a great time. Highly tuneable suspension, one-knob tuning, right from your tire valve. Okay. So there's, I mean, there's a few things for people to think about. We're getting people stoked on gravel. We encourage you to kind of look at, whatever your bike budget is, a bike that can run both 700c and 650B wheelsets. If you have the option of starting out with 650 [inaudible], I think it gives you all the benefits we've just been talking about, but then a margin of safety as a newer rider and a margin of comfort that you're not going to get in 700c wheelsets. That said, you know, if you fall in love with a 700c-wheelset bike, go for it. Hopefully it can go at least out to a 40. As Randall said, I think the evidence is clear that tire manufacturers are going bigger and bigger, even in the 700c size, at the end of the day. But those are a couple things to think about around these bikes. The other big thing to think about, I think, is just where you live. And you know, my bias always comes through, being someone who rides what is considered more mountain-bikey type terrain with my bike. So my setup tends that way, but I always try to take a step back and think, well, people in the Midwest or on the East Coast, they're talking about plenty of different terrain, and the mountain states, again, different terrain. That's gonna play a role in what bike's gonna make sense for you. Well, I would say, to a degree, I think it actually has more to do with what wheel-tire package makes the most sense for your specific terrain. But in terms of the bike itself, the basic principle of, like, make sure it fits 650Bs so that you always have that ability: there's really no downside to that. Doesn't affect geometry. There's no negative aspect of accommodating that tire. And you know, I've ridden all over the country. I'm from the Boston area. And you know, with my setup, like, the tires, you know, I've got a Byway in the front and a Venture in the rear, so like a file tread, a semi-slick, in the rear.
And with a file tread up front, I'm efficient on the road in Boston. Like, I would road ride to a local mountain bike group ride, and it was fast on the road, and then I could ride with those folks. And you know, I was a little bit underbiked, I had a great time, and then I could ride back. And really, the rolling efficiency is there with these tires, and the tire construction, and so on. So I still think getting a machine that is more capable than you think you need it to be, because you'll be bummed out when there are rides that you can't do cause your machine is just not up to it. Yeah. I've been surprised with my gravel bikes, just the idea that, as you said, you can roll up to a group ride on the road and hang in there in a way that you maybe wouldn't think. You're like, I've got this sort of burly machine, but the reality is it's not. These are kissing cousins of the road bikes. They're not that far off. Well, let's talk about the actual differences, right? So I mean, with the advent of hydraulic disc brakes for drop-bar bikes, right, the braking systems are the same. You know, the geometries: there are some gravel bikes that are really long and more biased towards stability. Some of them are even borderline drop-bar mountain bikes. But you can get a gravel bike that has an endurance road geo. Like, there's this overlapping point between, you know, endurance road and cyclocross and, you know, shreddy gravel riding. There's that sweet spot where you have a machine that, depending on the tires you put on it and maybe how aggressively you set up your handlebar, you can have different experiences with. Yeah. And that's the beauty of these things. I mean, we've talked, online on a number of occasions and offline on a number of occasions, just about how: put the road wheelset on this thing, it's a road sled, you can hit the group ride. It's all good. Put on the sort of tire setup that you just described, you can ride 20 miles of pavement, go hit a mountain bike trail system, and ride home. Get a knobbier setup, you can get pretty extreme with these bikes. Strap some bags on, all of a sudden it's this overnight rig. And I think that's... it's incredible, the versatility of these bikes. Well, it's essentially, so my thinking is, like, you know, if we could have one bike that really does everything, that would be the ideal. I think, given the current state of the art, you know, a gravel bike with two wheelsets, a road one and then like a 650B dirt one, covers everything from performance road riding to, you know, borderline cross country, bikepacking, like touring and so on, cyclocross. And then if you're into hardcore trails, get a dual-suspension trail sled; like, that is a different experience. These bikes are not going to be the most fun when it gets properly chundery and you're doing, you know, 20% gradients and what have you. But honestly, I used to be a mountain biker. I don't have the time. I don't own a car. You know, I don't want to load up a big machine and drive out to the trails. I want to ride the trails that I have out my door. And you know, fortunately here we have some really good ones. And the truth is, most people have some good trails near where they live.
They know where to look, especially if you can connect them with all these little road sections that are still fun to ride, because your bike is still fun on those roads. Yeah. I think for us, you know, in Marin, due to kind of trail access issues, we've got to get a bunch further north before you get into some real fun mountain biking. So these types of bikes, like, if you're living in San Francisco, being able to ride across the Golden Gate Bridge efficiently, then hit the dirt in the Headlands. Yeah, it's just really nice. I mean, I did that for years on a hardtail 29er, which was fine, but it really wasn't scratching the mountain bike itch, you know, cause I just wasn't getting into the technical terrain. Then all of a sudden I started riding drop bars, and some of those fire road descents are really fun, because you can sort of push the limits of technology and technique to try to ride them fast as if you're on a mountain bike, but without the sort of safety net of a suspension fork. So, should we get on a soapbox about dropper posts? I'm always game to get on that soapbox. I think I occupy it; my name's on it at this point. Yeah. So for the listener, a dropper post is simply a telescoping seatpost that can be actuated by a lever. It can sink down and get out of the way. So if you're a road cyclist, you've probably never experienced this to this date: you sort of set up your saddle height at your ideal pedaling sort of leg length, and you're good to go. With a dropper post, you've got any number of different adjustments you can make, from totally slammed out of the way to your perfect pedaling position. Well, and here's, you know, this is actually, I believe, after disc brakes and tubeless tires on wide rims, like, this is an essential enabling technology. And I think that dropper posts will be pretty ubiquitous before too long on this type of bike. You add, you know, 0.7 pounds, right? You know, ooh, the weight weenies in the crowd might not like that. But here's what you get. You now set up your saddle at the optimum position for power output, right? Because you don't have to compromise it to be able to scoot your butt off the back. And then when you get your butt off the back, your saddle is dropped down. So you really have, like, a lot of travel in your legs. The bike can be dancing underneath you, going up and down and side to side, and using all this body English to navigate the terrain. And you know, the bike is doing all this stuff and your body is taking a relatively smooth line through space. And so you can think of this as suspension without the slop, right? You don't get this big lumbering beast on the road where, you know, it's bobbing underneath you. But when you want it, it's there, and as you develop the skill around it, it just radically extends the capability of the machine. Yeah. It's interesting, you know, I think it's often occupied the space of, like, oh, a more advanced or experienced athlete comes to getting a dropper post. But the reality is it's so good for beginner riders, for even riding on the road, for God's sake. It's a good thing, because when you get up on those steeps, particularly with the drop-bar bikes, when you're steeply descending, you just feel like you're getting thrown over the handlebars, cause you are, because that seat is pitching you over the bars.
But with the dropper post, the saddle sinks right out of the way. You have such a large pocket underneath your undercarriage to kind of maneuver the bike around. So if you're going over a little rock or something and there's a little bit of a drop-off, you just have that room. Yeah. I think, and this is actually worth diving into, cause this is really where we get into cross country territory. So essentially, with the dropper, you can shift your hips back. So you kind of, like, exaggeratedly, you know, point your butt off the back of the bike; the saddle ends up somewhere around your tummy there. You're in the drops up front, which are more accessible because your body's lower, right, and those drops give you more leverage, especially if they're flared. Because you have more mass over the rear, you can use your rear for speed control, cause you have way more braking force, cause the mass is there. And then, you know, your front wheel is not being asked to both steer and brake, and so it can just roll. You can keep it light. Your upper body stays nice and light, and the front just kind of rolls over stuff, and the bike is kind of rocking back and forth going over rough terrain. Your legs are absorbing it. And you know, if you come up to a rut, or you come up to something sketchy, you're not going to pitch over the front, because your center of mass is so far back and you're not braking so much with the front. Just the physics of it are such that you're not going to be lawn-darting, you're not going to be, you know, high-siding over the front of the bike. Worst case scenario, your rear slides; that's controllable. In fact, when you start really becoming one with the bike, that's fun. You drift it. Like, that's part of the technique. Yeah. I feel like it's exponentially enhancing the safety and performance experience. And I see it time and time again. I ride with people who have the same sort of relative skill level as I do, but I can see they're constrained by being pitched up and over whenever we hit anything technical. Well, and another component of this, like you mentioned, on the road this being a game changer: there's something really delightful about being in, like, a bullet tuck with a dropper down, with 650Bs all covered in mud, and ripping past somebody pedaling down on a narrow-tired road bike. But another element is mobility, actually. So, you know, I talk to a lot of riders, cause our bikes are all custom, and it's like, you know, I have trouble getting on and off the bike; a dropper post makes it easier to get on and off the bike. And you know, that is significant. That is a meaningful improvement in accessibility. I think a lot of people, when they think about a dropper, it's like, oh, it's either high or low. But the interesting thing is, once you get used to it, it's infinite. So, you know, I was riding with someone who was out on a demo ride on one of your bikes the other weekend. And I was like, oh, did you drop your post, you know, a centimeter or an inch for this little traverse we were doing? He's like, no, I didn't think about it. I was like, well, you should, because look what happens. Like, I can now corner with a little bit more ease, because I have the ability to throw the bike around.
We're not in a max-power pedaling situation, so it's not required that I have it at that perfect height. So I might as well have that room so I can throw the bike around and make it more playful. I mean, the way this used to be done in the past, in the bad old days before dropper posts, is we used to drop the saddle on our mountain bikes three quarters of an inch so we'd have a little bit more maneuverability. Now you can just do a little micro-adjust, and then when you hit the flat section, you hit the road, you pop it back up and you're in pure power production mode. Absolutely. I'm going to be sharing my age with some listeners a little bit here by saying I actually rocked the Hite-Rite back in the day, which was this spring system that attached to your seatpost: you could throw your quick-release, slam it down, and then theoretically it would pop back up. The problem was it never popped back up straight, unlike today's dropper posts, where your saddle is always going to remain in the exact right position for you. We live in a golden age of equipment. The fact that you can go out and ride like we did the other day and stuff just works, and it's fun on all the different terrain; that's magical.

Yeah. I hope our shared enthusiasm for the sport is coming through in this podcast, because for anybody listening, these bikes have just given me the ability to ride wherever and whenever I want. I still do have that full-suspension sled, but it gets ridden rarely, if I'm doing a trip someplace where I'm going to hit some real nice mountain bike terrain, which I still completely love. But having a gravel bike in my life has just been reinvigorating for my passion and love of the sport. Yeah. I mean, this gets down to, let's get philosophical for a moment: why do we do this? What is the purpose? We are adults, right, spending money on this equipment so we can go out and ride in a giant circle. What is the point of this activity? For me it comes down to connection. You're on a machine, so you're connecting with the machine. You're connecting with your body: that syncing of your breathing, your heart rate, your cadence. You can get into a flow state, you can focus, you connect with yourself, you connect with the environment, you connect with community. How many people came out the other day, and they were just stoked to be there, to meet each other, and to have this experience? There were some riders who were really strong and some riders for whom it was their first big gravel group ride, and everybody got what they wanted out of that experience. I think that's something that's quite powerful about this particular type of riding. And if we take a bigger step back, this isn't just about cycling; this is about a life well lived, right? For me, this is the reason why I personally am so resonant with this experience, and why I care so much and why I try to share it: there's just so much there.
In terms of, you know, having an outlet for adults to play like children, to interact without all the hierarchies and all the identities that we have off the bike. What matters on the bike is that you're on the bike and you're friendly, and maybe if you're strong you get a little bit of credit. Really, generally people don't care that much. It's about having an adventure. Yeah, that resonates with me. I've found over the course of my life that I've got this sort of adventure bucket, and if I'm not filling it on a weekly basis, I tend to get depressed. And I've found that as much as I love cycling, and as many great road riding experiences as I've had, it's a smaller part of those road rides that filled my adventure bucket. But when I get off-road, particularly, I mean, we're so blessed here in the Bay Area that we can go out our door, get on these trails, and see no one. Even though there's a huge population around here, you can have mornings where you do a loop and see virtually no one. If you live in New York City, you can find this too. It's harder, but you can find that section of park at the right time of day where you get your peace, your tranquility. Yeah, same in Washington, DC, where I started my cycling. We just had these neighborhood trails where you had to know where the next entrance was, but you could get out there; the traffic was just around the corner, but all of a sudden you'd found this pocket of adventure. And another thing you were talking about that I think is unique to gravel riding, maybe shared with our mountain bike brethren, is this idea of riding a section and then grouping up afterwards and wanting to high-five people. Yeah. It's fun as a grown-ass man or grown-ass woman to giggle and high-five your friends. Yeah. Well, I think the fact that this is not the norm, that day-to-day joy and connection is not something we've built into our lives... now we're getting way into the philosophical realm, but what is the point of all this stuff we're doing, right? Are we our jobs? Are we our families? Are we our gender or race, or are we something greater than that? Is there more to life? I mean, of course there is the struggle, and we are in a privileged position to have the time and the resources to buy a machine like this and to be able to steal away. I would like to see these types of experiences be accessible to more people, because there's living, and then there's being alive, and that's where I think these experiences come in. Yeah, it's important to remember. So, circling back off our philosophical bandwagon, I think this should resonate with listeners: anybody who's ridden off-road, when they really think about it, is going to remember that it really is filling something inside them. So I guess going back to where we started, with Gravel Bike 101: one, get a gravel bike; it's going to be great for you. When you're looking for a gravel bike, obviously price points are going to be a concern. Get into the sport where you can afford it, and go out there and ride. We're not sitting here saying go buy expensive equipment.
By no means is that the only way to ride gravel, and I think gravel, more than any sector of the sport, has shown that it welcomes all comers. If you want to go out, ride trails, and have a good time, smile; everybody's welcome in this sport. And we've covered a lot of what to look for in equipment, but one other thing I think it's important to throw out there is gearing. I'm a huge fan of one-by drivetrains, and I'm a big fan of having way more low end than you think you need: a big old pie plate in the rear, so that when you hit that steep pitch you're going to be able to get up, or when you get in over your head on that 60-mile group ride and you're completely cooked with one last pitch to climb, you can spin up it. Yeah. So for the listener, let's talk about this: you've generally got an option of two chainrings up front and a cassette in the back, or one chainring up front and the cassette in the back. I grappled with this on my first two gravel bikes, and I originally decided on a two-by setup, because I was swayed by this idea that on the road I otherwise wasn't going to have the nuances and subtleties between the gears. But after spending a couple of years in the sport I was lusting after one-by, and on my present Thesis I'm on a one-by setup and couldn't be happier, because I personally don't miss any of those subtleties that were purported to exist. Yeah. And just to throw out some numbers: with something like a 10-42 or an 11-46 in the rear, you can get all the range that you get with the two-by out of that big old cassette. People will talk about the jumps, which is what you were alluding to, and yeah, the jumps are bigger; that's just math. But the fact is, a two-by eleven is really like a fourteen-speed, right? A lot of the gears overlap, so a one-by eleven is not going to have twice as big a jump. The second thing is that if you're fit properly to the bike, with the right crank length proportional to your inseam, and you're able to spin smoothly because you're dialed to the machine, you're going to be fine going from one gear to the other in terms of changing cadence. And the last thing is that on gravel the terrain is changing so much that you'll generally be grabbing two or three gears anyway, and one-by actually makes that easier. But the real point is there's nothing to think about. If you think about the experience you want, the bike is not the center of the action; you want the bike to disappear. If you're thinking about cross-chaining, chain drops, and all this other stuff, that's going to get in the way of you flowing in the environment. Yeah. When I was dabbling with one-by demo bikes, what I found right away was that it was just quieter, you know, with the clutch rear derailleur: no chain slap, no derailleur up front. The chain can be tighter, and everything seemed to just be quieter and felt more together. Yeah. I mean, there are good two-by drivetrains now with clutches, fortunately, and if you go electronic it takes away some of the cross-chaining and you can have it auto-shift the front and so on. But still, don't complicate things. One-by is super simple. It just works. It's cheaper up front and cheaper to maintain. Just get a one-by.
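
To put rough numbers behind that overlap claim, here is a minimal sketch in Python. The chainring and cassette sizes are hypothetical (typical of gravel builds, not any specific product's spec), and the 5% threshold for "feels like the same gear" is an assumption:

```python
# Rough comparison of a hypothetical 2x11 vs 1x11 gravel drivetrain.
# Ring and cog sizes are illustrative only.

def ratios(chainrings, cassette):
    """Every chainring/cog combination, as a sorted list of gear ratios."""
    return sorted(ring / cog for ring in chainrings for cog in cassette)

def distinct(rs, tolerance=0.05):
    """Collapse ratios within ~5% of each other; jumps that small
    feel like the same gear on mixed terrain."""
    kept = []
    for r in rs:
        if not kept or r / kept[-1] > 1 + tolerance:
            kept.append(r)
    return kept

two_by = ratios([46, 30], [11, 12, 13, 14, 16, 18, 20, 22, 25, 28, 32])
one_by = ratios([40], [10, 12, 14, 16, 18, 21, 24, 28, 32, 36, 42])

for name, rs in [("2x11", two_by), ("1x11", one_by)]:
    print(f"{name}: {len(rs)} combos, ~{len(distinct(rs))} distinct gears, "
          f"range {max(rs) / min(rs):.0%}")
```

With these made-up numbers, the two-by's 22 chainring/cog combinations collapse to roughly 14 to 16 meaningfully distinct gears, while the one-by covers nearly the same total range (a bit over 400%) with 11 cogs, which is the "really like a fourteen-speed" point in rough arithmetic.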
And if it's not the right gearing, you change the chainring: you know, 50 bucks, and you can always dial it to what you need. Yeah, absolutely. Well, I think this is all good stuff. Are there any key takeaways you would leave the listener with? Thinking from the mentality of, okay, someone's considering jumping into the sport and they've learned a little bit from us today; what do you want them to walk away with? I would target this response not just at the people interested in adding gravel to their repertoire, the ones who are already cyclists; those of us who are already cyclists are already getting our group rides, or we're out on the mountain bike or whatever. Especially for the newbie, this is an experience that's accessible. Find people in your community who are organizing group rides, who can give you some guidance on where to ride, on equipment choices, and so on. Don't be intimidated by some of the terrain you're on; go out and have adventures, push yourself, connect with people. And you will find, as I have, and I think a lot of us have, that this is really an experience that's part of a life well lived. Everything from, of course, the basics of just being fit and feeling healthy, but more importantly, mental health, right? You talk about being depressed when you don't ride; this is therapy. This is a way of self-care. So find the people who've learned how to get the most out of this and get their guidance on how to join, because it's a very accessible style of cycling to get into. Yeah, I think those are all great parting thoughts, and I would just add: don't be afraid of gravel. We're not talking about bringing you to Crankworx up at Whistler and sending you off a 40-foot jump. Dirt roads have been ridden since the dawn of the bicycle, and in its simplest incarnation you don't need anything special; you can ride a tiny road bike tire off-road and be enjoying gravel. As we've talked about earlier, as you make the right equipment choices and develop the skills, you can go explore further and further. One of the things I've personally enjoyed around here, and I sort of encourage for newer athletes, is to ride uphill off-road and ride downhill on the road. You don't have to do it all; you can go where your comfort level lies and you will still get some of those rewards. In the Bay Area that strategy is useful because, descending on the road even at a casual pace, you're going faster than most of the cars, and you forgive yourself for not yet knowing a lot of the technical skills for going downhill off-road that you'll learn over time. Well, and the thing is, by simple virtue of having what we're calling a gravel bike, this marketing term of gravel bike, these all-purpose machines: just ride it how you want to ride it. That is exactly the point. You can do all the things. Get the bike, do some exploring, find out what your jam is, and then do more of that.
And, you know, that's what's beautiful about this: you can find your terrain, the stuff you enjoy, and the community around that type of riding that you can join up with, which is arguably one of the best parts about this, the people you meet along the way. Yeah. I've obviously talked to a lot of event organizers on the podcast, and I think almost uniformly they are looking at creating distances and different categories of events, so that you can do a 25-mile starter gravel event. Because, as Randall alluded to in terms of the community, it's just great to travel to do these things; they're fun days out. Whether you're doing the 25-mile version or the hundred-mile version, you're all going to coalesce afterwards with a little bit of dirt on your bike and your body, and you're going to enjoy a shared meal and maybe a beer together. It's just great to get out there and do it. There's a term that's been coined in the Bay Area, I think attributable to Murphy Mack of the SuperPro series: the idea of a mullet ride, business in the front, party in the back. So show up; everyone starts together. It's a festival atmosphere, a party atmosphere. If you want to go out and race, go throw down. If you just want to slog through 60 miles, feel that sense of accomplishment, and meet people along the way, that experience is there too. That's kind of the general vibe around this. It's not winner-take-all crit racing on the weekends or something; this is, let's go have an adventure together and enjoy each other's company. Yeah, that's perfect. I think those are great closing thoughts, Randall, and I appreciate you having me over. I appreciate the conversation. I hope everybody listening is getting a little bit out of it; at minimum, I guarantee they're getting your enthusiasm and my enthusiasm for the sport. Yeah. Hopefully, if anyone is in the Bay Area, you'll come join us for a ride, and I'll be around the country later this year; we'd love to ride with some of you folks. Right on. So thanks again to Randall from Thesis for the time and the conversation. As I mentioned in the intro, obviously calling out group rides and things like that is not something we're condoning at this point, but definitely Randall and I love to get groups of people together here in the Bay Area, as I'm sure many of you do around the country, so let's keep looking forward to better times and getting together soon. In the meantime, I forgot to mention all the great feedback I got about bringing on board a sponsor and advertisers to the podcast. I really appreciated the kind words and the thumbs-up you guys gave me to say, hey, it's okay if you want to offset some of your costs; we know you're effectively a volunteer in doing this, so thanks so much. I also did set up a Buy Me a Coffee account at buymeacoffee.com/thegravelride, where you can simply buy me a cup of joe if you like what I'm doing. So anyway, guys, stay safe and stay healthy during this pandemic. As always, I appreciate your feedback; feel free to shoot me a note at craig@thegravelride.bike, or hit me up on Facebook or Instagram. Until next time, here's to finding some dirt under your wheels.

The Wizard's Staff - A Magic the Gathering EDH Podcast
The Wizard's Staff Episode 55: How to Write an EDH Primer 101

The Wizard's Staff - A Magic the Gathering EDH Podcast

Play Episode Listen Later Mar 16, 2020 59:48


Warning: Explicit Content
Are you passionate about a deck but don't know how to write an essay because you were asleep during English class? Well, look no further! We have the solution for you!
**Resources**
Parke's Chainer Primer: https://archidekt.com/decks/425545#Chainer,_Master_of_Resources_Primer
Parke's Kozilek Primer: https://archidekt.com/decks/427147#Kozilek's_Steam_Engine_-_Primer
Blake's Sigarda Primer: https://archidekt.com/decks/345518#PRIMER_Sigarda_Voltron_Enchantress
Guy's Purphoros Primer: http://tappedout.net/mtg-decks/purphoros-god-of-the-forge-groupsmack-primer/?cb=1583549841
How to Write Primers: https://www.reddit.com/r/CompetitiveEDH/comments/8rs9tj/a_primer_on_primers/ and https://www.reddit.com/r/CompetitiveEDH/comments/b0oga9/what_should_i_put_in_my_primer/
cEDH primers on Discord: https://www.youtube.com/watch?v=8djSFUQuPL4
#EDH #Commander #MagicTheGathering
"How it Begins" by Kevin MacLeod is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/...)
Source: http://incompetech.com/music/royalty-...
Artist: http://incompetech.com/
Follow us on Twitter: https://twitter.com/WizardsStaff101
Check out our iTunes: https://itunes.apple.com/us/podcast/t...
Check out our Spotify: https://open.spotify.com/show/16xTrhzzFFPe4uxJxAH5cl
Guy Gibboney: Artistic Management, Production, Account Manager
Blake Pintler: Head Note Curator, Assistant to Artistic Management, He Thought of the Name
Email: thewizardsstaff101@gmail.com
Support the show (https://twitter.com/WizardsStaff101)

Commander's Brew
Chainer & Syr Konrad - 219

Commander's Brew

Play Episode Listen Later Nov 12, 2019


You can find this and our other decks on TCGplayer! If you'd like to purchase any cards we've included in the deck, using our affiliate link below helps us out big time!
TCG Link: http://bit.ly/2JLsKKx
On top of American listeners getting our decks from TCGplayer, our Canadian listeners are better off using MTGCanada and the Wizard's Tower exclusive coupon code BREWELDRAINE to get 5% off your order and FREE Canadian shipping as long as you order $15 or more! Check it all out at www.mtgcanada.com/commandersbrew!
Sean puts together a wheeling, milling, discarding and recarding Rakdos deck featuring Chainer and Syr Konrad that will make your opponents wish they had way more graveyard hate than they do.
You can always help the show directly through www.patreon.com/commandersbrew and get access to our Discord to help us brew our decks, as well as other perks!
Visit www.commandersbrew.com for direct downloads and a streaming version of the podcast.
Like our kind of humour? Check out our Brews News comedy playlist on YouTube for real inside jokes for Magic players! https://www.youtube.com/playlist?list=PLLjcRrl68PLwQSmjTIRztlZNvIQWZSxlX
Follow us on Twitter at @commandersbrew; individually we are @seantabares and @andyhullbone.
"There It Is" by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 3.0
http://creativecommons.org/licenses/by/3.0

Random Facts Club
3. Machine learning that takes the offensive in research, development, and infrastructure (kuenishi)

Random Facts Club

Play Episode Listen Later Aug 17, 2019 65:34


Related links:
Preferred Networks
KDD 2019 | Chainer: a Deep Learning Framework for Accelerating the Research Cycle
chainer/chainerio
Jubatus: a distributed processing framework for online machine learning
PFN brings its third deep learning supercomputer online in July 2019, for a combined 200 petaFLOPS
Stochastic gradient descent
Lustre (file system)
C API libhdfs
Amazon CloudFront
Kerberos authentication
Hadoop and Kerberos
Apache Hadoop Ozone
Small files cause big problems: preventing and handling small files in Hadoop clusters
"Rook", which integrates the Ceph distributed storage system into Kubernetes, becomes an official CNCF project, with file, block, and S3-compatible object storage and multi-region support
Python bindings - Apache Arrow
Site Reliability Engineering: How Google Runs Production Systems (Japanese edition)
Autonomous Tidying-up Robot System
NSDI '19
OSDI '18
A tweet from tokoroten: "My department head asked me, 'So what do you actually do at the company? What kind of engineer are you again?' I do Hadoop, Hive, MySQL, machine learning, log analysis, natural language processing, VBA, and firefighting, but explaining all that is a pain, so I answered 'high-functioning odd-jobs.' That's mostly accurate."
Preferred Networks Careers
GTC Silicon Valley-2019: MagLev: A Production-grade AI Platform Running on GPU-enabled Kubernetes Clusters
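
For anyone landing on this episode from the podcast search rather than the deep learning world: the Chainer discussed here is Preferred Networks' define-by-run deep learning framework. As a minimal sketch of what that style looks like (layer sizes, data, and hyperparameters below are made up for illustration), here is a single SGD training step in Chainer:

```python
# A tiny Chainer model and one SGD training step (define-by-run style).
# Shapes and hyperparameters are illustrative only.
import numpy as np
import chainer
import chainer.functions as F
import chainer.links as L
from chainer import optimizers

class TinyMLP(chainer.Chain):
    def __init__(self):
        super().__init__()
        with self.init_scope():
            self.l1 = L.Linear(4, 16)  # 4 input features -> 16 hidden units
            self.l2 = L.Linear(16, 3)  # 16 hidden units -> 3 classes

    def forward(self, x):
        # Define-by-run: the computation graph is built as this code executes.
        return self.l2(F.relu(self.l1(x)))

model = TinyMLP()
optimizer = optimizers.SGD(lr=0.01)  # stochastic gradient descent, as linked above
optimizer.setup(model)

x = np.random.randn(8, 4).astype(np.float32)          # a batch of 8 examples
t = np.random.randint(0, 3, size=8).astype(np.int32)  # random class labels

loss = F.softmax_cross_entropy(model.forward(x), t)
model.cleargrads()  # zero out any accumulated gradients
loss.backward()     # backpropagate through the dynamically built graph
optimizer.update()  # apply one SGD step
print(float(loss.array))
```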

Commander Theory
Commander 2019 Set Review, Part 2

Commander Theory

Play Episode Listen Later Aug 15, 2019 101:05


In this episode, Nick, Zak, and Alex complete their discussion of which cards and commanders from the Commander 2019 precons are likely to make an impact on the format.
Decks discussed in this episode:
Chainer, Nightmare Adept
Greven, Predator Captain
Atla Palani, Nest Tender
Pramikon, Sky Rampart
Support the show (https://www.patreon.com/commandertheory)

Legendary Creature - Podcast
Commander 2019 Review | Faceless Menace | Merciless Rage

Legendary Creature - Podcast

Play Episode Listen Later Aug 15, 2019 156:30


It's Commander Christmas time. Or whatever your favorite gift-giving holiday is. We're here to review the new preconstructed Commander decks and all their juicy (is that too hopeful?) contents.   Join Andy and Kyle as they dive into two of the new decks: Faceless Menace and Merciless Rage.   -----------------------------------   Come watch us actually play Commander and check out other videos on our YouTube channel.    Find the podcast on Spotify, iTunes, Stitcher, Google Podcasts, or wherever you grab your RSS feeds. Also, ask your smart speakers to play the Legendary Creature Podcast.   Twitter: @legend_creature   -----------------------------------   The intro and outro music, as always, comes from one of the gracious, sexy, dank souls kind enough to let us use their beats. This episode's tunes are by Home. Check out all the dope musicians who let us use their beats and be sure to support them: Home, Protector 101, Silver Richards, Dan Terminus.    Big shout out to Mikey Patch for the logo artwork! Check his stuff out.

The Vorthos Cast
77 - Commander (2019 Edition) Flavor Gems

The Vorthos Cast

Play Episode Listen Later Aug 12, 2019 46:58


Every new Commander product brings flavorful new legends, new cards, and reprints to Magic players worldwide. They're good and meaty, perfect for digging into for a full flavor gems episode featuring flavorful new legends, new...well, you know. Also a baby crab! If you enjoy The Vorthos Cast, consider supporting us on Patreon at www.patreon.com/thevorthoscast! 02:27 – MTG Lore Advanced Story Search Link: http://mtglore.com/advanced-story-search/ 04:14 – Anje Falkenrath 05:29 – Atla Palani, Nest Tender 06:37 – Chainer, Nightmare Adept 07:22 – Elsha of the Infinite 08:14 – Gerrard, Weatherlight Hero 10:14 – Greven, Predator Captain 11:36 – Ghired, Conclave Exile 13:15 – K'rrik, Son of Yawgmoth 14:54 – Kadena, Slinking Sorcerer 16:40 – Marisi, Breaker of the Coil 20:00 – Rayami, First of the Fallen 21:38 – Sevinne, the Chronoclasm 22:51 – Tahngarth, First Mate Link: https://soundcloud.com/user-345643028/commander-2019-preview-moo-crew 23:27 – Volrath, the Shapestealer 25:19 – Mandate of Peace 26:30 – Thalia's Geistcaller 27:20 – Song of the Worldsoul 27:41 – Leadership Vacuum 27:59 – Bone Miser 29:51 – Backdraft Hellkite 30:57 – Dockside Extortionist 31:48 – Ohran Frostfang 32:35 – Idol of Oblivion Link: https://magic.wizards.com/en/articles/archive/magic-story/stirring-slumber-2015-05-13 35:18 – Chromeshell Crab 36:32 – Solemn Simulacrum 37:30 – Mimic Vat 39:10 – Final Thoughts

SUNDAY FUNDAY REAKTOR PODCAST
Sunday Funday #94: Recording Multiple Outputs REAKTOR BLOCKS in Reaktor 6.3

SUNDAY FUNDAY REAKTOR PODCAST

Play Episode Listen Later Jun 26, 2019 22:24


The ninety-fourth episode of the Sunday Funday podcast, hosted by Benjamin. Docking bay NINETY-FOUR!! Episode #94 finds us continuing to explore the Reaktor 6.3 update! Native Instruments have put out a MASSIVE update to their Reaktor software, enabling one of the most requested features: front-facing REAKTOR BLOCKS PATCHING! We RECORD a build of a sampling rack, using the CHAINER and ROUTING MATRIX HORIZONTAL blocks from the TOYBOX Sampling Pack collection. We illustrate how to route multiple outputs from Reaktor Blocks into a DAW, in this case Pro Tools. This podcast aims to uncover the magic and demystify Native Instruments Reaktor 6, and synthesis in general.
Blocks used:
TOYBOX FREE PACK: OSC-Drums; MIX-4 Ch Mixer (Stereo)
TOYBOX FLOOR SHAKERS: EFX-Reverb; DYN-Compressor; EFX-Distortion (Stereo)
TOYBOX SAMPLING PACK: SEQ-Sequence; SEQ-Routing Matrix Horizontal; SEQ-Chainer
YouTube: https://www.youtube.com/sundayfunday
TOTAL HUMAN OPTIMIZATION, workout equipment, supplements, education, and more at https://www.onnit.com
Soundcloud: https://soundcloud.com/sundayfundaypodcast
iTunes: https://itunes.apple.com/us/podcast/sunday-funday-reaktor-podcast/id1407932536?mt=2
Google: https://play.google.com/music/m/Ipibc65b22qvb33ziujbg2b2aue?t=SUNDAY_FUNDAY_REAKTOR_PODCAST
https://www.toyboxaudio.com/
https://www.toyboxaudio.com/pages/sampling-pack-details
https://www.toyboxaudio.com/pages/floor-shakers-pack-details
https://www.toyboxaudio.com/pages/free-pack-details
https://www.native-instruments.com/en/
https://www.native-instruments.com/en/products/komplete/synths/reaktor-6/
https://www.instagram.com/sundayfundaypodcast/

SUNDAY FUNDAY REAKTOR PODCAST
Sunday Funday #73: Recording Multiple Outputs REAKTOR BLOCKS in Reaktor 6.3

SUNDAY FUNDAY REAKTOR PODCAST

Play Episode Listen Later Jun 16, 2019 9:37


The seventy-third episode of the Sunday Funday podcast, hosted by Benjamin. Episode #73 finds us continuing to explore the Reaktor 6.3 update! Native Instruments have put out a MASSIVE update to their Reaktor software, enabling one of the most requested features: front-facing REAKTOR BLOCKS PATCHING! We RECORD a build of a sampling rack, using the CHAINER and ROUTING MATRIX HORIZONTAL blocks from the TOYBOX Sampling Pack collection. We illustrate how to route multiple outputs from Reaktor Blocks into a DAW, in this case Pro Tools. This podcast aims to uncover the magic and demystify Native Instruments Reaktor 6, and synthesis in general.
Blocks used:
TOYBOX FREE PACK: OSC-Drums; MIX-4 Ch Mixer (Stereo)
TOYBOX FLOOR SHAKERS: EFX-Reverb; DYN-Compressor; EFX-Distortion (Stereo)
TOYBOX SAMPLING PACK: SEQ-Sequence; SEQ-Routing Matrix Horizontal; SEQ-Chainer
YouTube: https://www.youtube.com/sundayfunday
Soundcloud: https://soundcloud.com/sundayfundaypodcast
iTunes: https://itunes.apple.com/us/podcast/sunday-funday-reaktor-podcast/id1407932536?mt=2
Google: https://play.google.com/music/m/Ipibc65b22qvb33ziujbg2b2aue?t=SUNDAY_FUNDAY_REAKTOR_PODCAST
https://www.toyboxaudio.com/
https://www.toyboxaudio.com/pages/sampling-pack-details
https://www.toyboxaudio.com/pages/floor-shakers-pack-details
https://www.toyboxaudio.com/pages/free-pack-details
https://www.native-instruments.com/en/
https://www.native-instruments.com/en/products/komplete/synths/reaktor-6/
https://www.instagram.com/sundayfundaypodcast/

SUNDAY FUNDAY REAKTOR PODCAST
Sunday Funday #72: Chainer BLOCK from TOYBOX Sampling Pack in Reaktor 6.3

SUNDAY FUNDAY REAKTOR PODCAST

Play Episode Listen Later Jun 16, 2019 22:25


The seventy-second episode of the Sunday Funday podcast, hosted by Benjamin. Episode #72 finds us continuing to explore the Reaktor 6.3 update! Native Instruments have put out a MASSIVE update to their Reaktor software, enabling one of the most requested features: front-facing REAKTOR BLOCKS PATCHING! We RECORD a build of a sampling rack, using the CHAINER and ROUTING MATRIX HORIZONTAL blocks from the TOYBOX Sampling Pack collection. We illustrate how to route multiple outputs from Reaktor Blocks into a DAW, in this case Pro Tools. This podcast aims to uncover the magic and demystify Native Instruments Reaktor 6, and synthesis in general.
Blocks used:
TOYBOX FREE PACK: OSC-Drums; MIX-4 Ch Mixer (Stereo)
TOYBOX FLOOR SHAKERS: EFX-Reverb; DYN-Compressor; EFX-Distortion (Stereo)
TOYBOX SAMPLING PACK: SEQ-Sequence; SEQ-Routing Matrix Horizontal; SEQ-Chainer
YouTube: https://www.youtube.com/sundayfunday
Soundcloud: https://soundcloud.com/sundayfundaypodcast
iTunes: https://itunes.apple.com/us/podcast/sunday-funday-reaktor-podcast/id1407932536?mt=2
Google: https://play.google.com/music/m/Ipibc65b22qvb33ziujbg2b2aue?t=SUNDAY_FUNDAY_REAKTOR_PODCAST
https://www.toyboxaudio.com/
https://www.toyboxaudio.com/pages/sampling-pack-details
https://www.toyboxaudio.com/pages/floor-shakers-pack-details
https://www.toyboxaudio.com/pages/free-pack-details
https://www.native-instruments.com/en/
https://www.native-instruments.com/en/products/komplete/synths/reaktor-6/
https://www.instagram.com/sundayfundaypodcast/

AWS Podcast
#314: May 2019 Update Show 2

AWS Podcast

Play Episode Listen Later May 26, 2019 32:09


Simon hosts an update show with lots of great new features and capabilities! Chapters: Developer Tools 0:26 Storage 3:02 Compute 5:10 Database 10:31 Networking 13:41 Analytics 16:38 IoT 18:23 End User Computing 20:19 Machine Learning 21:12 Application Integration 24:02 Management and Governance 24:23 Migration 26:05 Security 26:56 Training and Certification 29:57 Blockchain 30:27 Quickstarts 31:06 Shownotes: Topic || Developer Tools Announcing AWS X-Ray Analytics – An Interactive approach to Trace Analysis | https://aws.amazon.com/about-aws/whats-new/2019/04/aws_x_ray_interactive_approach_analyze_traces/ Quickly Search for Resources across Services in the AWS Developer Tools Console | https://aws.amazon.com/about-aws/whats-new/2019/05/search-resources-across-services-developer-tools-console/ AWS Amplify Console adds support for Incoming Webhooks | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-amplify-console-adds-support-for-incoming-webhooks/ AWS Amplify launches an online community for fullstack serverless app developers | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-amplify-launches-an-online-community-for-fullstack-serverless-app-developers/ AWS AppSync Now Enables More Visibility into Performance and Health of GraphQL Operations | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-appsync-now-enables-more-visibility-into-performance-and-hea/ AWS AppSync Now Supports Configuring Multiple Authorization Types for GraphQL APIs | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-appsync-now-supports-configuring-multiple-authorization-type/ Topic || Storage Amazon S3 Introduces S3 Batch Operations for Object Management | https://aws.amazon.com/about-aws/whats-new/2019/04/Amazon-S3-Introduces-S3-Batch-Operations-for-Object-Management/ AWS Snowball Edge adds block storage – Amazon Web Services | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-snowball-edge-adds-block-storage-for-edge-computing-workload/ Amazon FSx for Windows File Server Adds Support for File System Monitoring with Amazon CloudWatch | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-fsx-for-windows-file-server-adds-support-for-cloudwatch/ AWS Storage Gateway enhances access control for SMB shares to store and access objects in Amazon S3 buckets | https://aws.amazon.com/about-aws/whats-new/2019/05/AWS-Storage-Gateway-enhances-access-control-for-SMB-shares-to-access-objects-in-Amazon-s3/ Topic || Compute AWS Lambda adds support for Node.js v10 | https://aws.amazon.com/about-aws/whats-new/2019/05/aws_lambda_adds_support_for_node_js_v10/ AWS Serverless Application Model (SAM) supports IAM permissions and custom responses for Amazon API Gateway | https://aws.amazon.com/about-aws/whats-new/2019/aws_serverless_application_Model_support_IAM/ AWS Step Functions Adds Support for Workflow Execution Events | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-step-functions-adds-support-for-workflow-execution-events/ Amazon EC2 I3en instances, offering up to 60 TB of NVMe SSD instance storage, are now generally available | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-ec2-i3en-instances-are-now-generally-available/ Now Create Amazon EC2 On-Demand Capacity Reservations Through AWS CloudFormation | https://aws.amazon.com/about-aws/whats-new/2019/04/now-create-amazon-ec2-on-demand-capacity-reservations-through-aws-cloudformation/ Share encrypted AMIs across accounts to launch instances in a single step | 
https://aws.amazon.com/about-aws/whats-new/2019/05/share-encrypted-amis-across-accounts-to-launch-instances-in-a-single-step/ Launch encrypted EBS backed EC2 instances from unencrypted AMIs in a single step | https://aws.amazon.com/about-aws/whats-new/2019/05/launch-encrypted-ebs-backed-ec2-instances-from-unencrypted-amis-in-a-single-step/ Amazon EKS Releases Deep Learning Benchmarking Utility | https://aws.amazon.com/about-aws/whats-new/2019/05/-amazon-eks-releases-deep-learning-benchmarking-utility-/ Amazon EKS Adds Support for Public IP Addresses Within Cluster VPCs | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-eks-adds-support-for-public-ip-addresses-within-cluster-v/ Amazon EKS Simplifies Kubernetes Cluster Authentication | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-eks-simplifies-kubernetes-cluster-authentication/ Amazon ECS Console support for ECS-optimized Amazon Linux 2 AMI and Amazon EC2 A1 instance family now available | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-ecs-console-support-for-ecs-optimized-amazon-linux-2-ami-/ AWS Fargate PV1.3 now supports the Splunk log driver | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-fargate-pv1-3-now-supports-the-splunk-log-driver/ Topic || Databases Amazon Aurora Serverless Supports Capacity of 1 Unit and a New Scaling Option | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon_aurora_serverless_now_supports_a_minimum_capacity_of_1_unit_and_a_new_scaling_option/ Aurora Global Database Expands Availability to 14 AWS Regions | https://aws.amazon.com/about-aws/whats-new/2019/05/Aurora_Global_Database_Expands_Availability_to_14_AWS_Regions/ Amazon DocumentDB (with MongoDB compatibility) now supports per-second billing | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-documentdb-now-supports-per-second-billing/ Performance Insights is Generally Available on Amazon Aurora MySQL 5.7 | https://aws.amazon.com/about-aws/whats-new/2019/05/Performance-Insights-GA-Aurora-MySQL-57/ Performance Insights Supports Counter Metrics on Amazon RDS for Oracle | https://aws.amazon.com/about-aws/whats-new/2019/05/performance-insights-countermetrics-on-oracle/ Performance Insights Supports Amazon Aurora Global Database | https://aws.amazon.com/about-aws/whats-new/2019/05/performance-insights-global-datatabase/ Amazon ElastiCache for Redis adds support for Redis 5.0.4 | https://aws.amazon.com/about-aws/whats-new/2019/05/elasticache-redis-5-0-4/ Amazon RDS for MySQL Supports Password Validation | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-rds-for-mysql-supports-password-validation/ Amazon RDS for PostgreSQL Supports New Minor Versions 11.2, 10.7, 9.6.12, 9.5.16, and 9.4.21 | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-rds-postgresql-supports-minor-version-112/ Amazon RDS for Oracle now supports April Oracle Patch Set Updates (PSU) and Release Updates (RU) | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-rds-for-oracle-now-supports-april-oracle-patch-set-updates-psu-and-release-updates-ru/ Topic || Networking Elastic Fabric Adapter Is Now Generally Available | https://aws.amazon.com/about-aws/whats-new/2019/04/elastic-fabric-adapter-is-now-generally-available/ Migrate Your AWS Site-to-Site VPN Connections from a Virtual Private Gateway to an AWS Transit Gateway | https://aws.amazon.com/about-aws/whats-new/2019/04/migrate-your-aws-site-to-site-vpn-connections-from-a-virtual-private-gateway-to-an-aws-transit-gateway/ Announcing AWS Direct Connect Support for AWS 
Transit Gateway | https://aws.amazon.com/about-aws/whats-new/2019/04/announcing-aws-direct-connect-support-for-aws-transit-gateway/ Amazon CloudFront announces 11 new Edge locations in India, Japan, and the United States | https://aws.amazon.com/about-aws/whats-new/2019/05/cloudfront-11locations-7may2019/ Amazon VPC Endpoints Now Support Tagging for Gateway Endpoints, Interface Endpoints, and Endpoint Services | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-vpc-endpoints-now-support-tagging-for-gateway-endpoints-interface-endpoints-and-endpoint-services/ Topic || Analytics Amazon EMR announces Support for Multiple Master nodes to enable High Availability for EMR applications | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon-emr-announces-support-for-multiple-master-nodes-to-enable-high-availability-for-EMR-applications/ Amazon EMR now supports Multiple Master nodes to enable High Availability for HBase clusters | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-emr-now-supports-multiple-master-nodes-to-enable-high-availability-for-hbase-clusters/ Amazon EMR announces Support for Reconfiguring Applications on Running EMR Clusters | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-emr-announces-support-for-reconfiguring-applications-on-running-emr-clusters/ Amazon Kinesis Data Analytics now allows you to assign AWS resource tags to your real-time applications | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon_kinesis_data_analytics_now_allows_you_to_assign_aws_resource_tags_to_your_real_time_applications/ AWS Glue crawlers now support existing Data Catalog tables as sources | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-glue-crawlers-now-support-existing-data-catalog-tables-as-sources/ Topic || IoT AWS IoT Analytics Now Supports Faster SQL Data Set Refresh Intervals | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-iot-analytics-now-supports-faster-sql-data-set-refresh-intervals/ AWS IoT Greengrass Adds Support for Python 3.7, Node v8.10.0, and Expands Support for Elliptic-Curve Cryptography | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-iot-greengrass-adds-support-python-3-7-node-v-8-10-0-and-expands-support-elliptic-curve-cryptography/ AWS Releases Additional Preconfigured Examples for FreeRTOS on Armv8-M | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-releases-additional-freertos-preconfigured-examples-armv8m/ AWS IoT Device Defender supports monitoring behavior of unregistered devices | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-iot-device-defender-supports-monitoring-behavior-of-unregistered-devices/ AWS IoT Analytics Now Supports Data Set Content Delivery to Amazon S3 | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-iot-analytics-now-supports-data-set-content-delivery-to-amaz/ Topic || End User Computing Amazon AppStream 2.0 adds configurable timeouts for idle sessions | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-appstream-2-0-adds-configurable-timeouts-for-idle-session/ Monitor Emails in Your Workmail Organization Using Cloudwatch Metrics and Logs | https://aws.amazon.com/about-aws/whats-new/2019/05/monitor-emails-in-your-workmail-organization-using-cloudwatch-me/ You can now use custom chat bots with Amazon Chime | https://aws.amazon.com/about-aws/whats-new/2019/05/you-can-now-use-custom-chat-bots-with-amazon-chime/ Topic || Machine Learning Developers, start your engines! The AWS DeepRacer Virtual League kicks off today. 
| https://aws.amazon.com/about-aws/whats-new/2019/04/AWSDeepRacerVirtualLeague/ Amazon SageMaker announces new features to the built-in Object2Vec algorithm | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-sagemaker-announces-new-features-to-the-built-in-object2v/ Amazon SageMaker Ground Truth Now Supports Automated Email Notifications for Manual Data Labeling | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-sagemaker-ground-truth-now-supports-automated-email-notif/ Amazon Translate Adds Support for Hindi, Farsi, Malay, and Norwegian | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon_translate_support_hindi_farsi_malay_norwegian/ Amazon Transcribe now supports Hindi and Indian-accented English | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-transcribe-supports-hindi-indian-accented-english/ Amazon Comprehend batch jobs now supports Amazon Virtual Private Cloud | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-comprehend-batch-jobs-now-supports-amazon-virtual-private-cloud/ New in AWS Deep Learning AMIs: PyTorch 1.1, Chainer 5.4, and CUDA 10 support for MXNet | https://aws.amazon.com/about-aws/whats-new/2019/05/new-in-aws-deep-learning-amis-pytorch-1-1-chainer-5-4-cuda10-for-mxnet/ Topic || Application Integration Amazon MQ Now Supports Resource-Level and Tag-Based Permissions | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon-mq-now-supports-resource-level-and-tag-based-permissions/ Amazon SNS Adds Support for Cost Allocation Tags | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-sns-adds-support-for-cost-allocation-tags/ Topic || Management and Governance Reservation Expiration Alerts Now Available in AWS Cost Explorer | https://aws.amazon.com/about-aws/whats-new/2019/05/reservation-expiration-alerts-now-available-in-aws-cost-explorer/ AWS Systems Manager Patch Manager Supports Microsoft Application Patching | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-systems-manager-patch-manager-supports-microsoft-application-patching/ AWS OpsWorks for Chef Automate now supports Chef Automate 2 | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-opsworks-for-chef-automate-now-supports-chef-automate-2/ AWS Service Catalog Connector for ServiceNow supports CloudFormation StackSets | https://aws.amazon.com/about-aws/whats-new/2019/05/service-catalog-servicenow-connector-now-supports-stacksets/ Topic || Migration AWS Migration Hub EC2 Recommendations | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-migration-hub-ec2-recommendations/ Topic || Security Amazon GuardDuty Adds Two New Threat Detections | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-guardduty-adds-two-new-threat-detections/ AWS Security Token Service (STS) now supports enabling the global STS endpoint to issue session tokens compatible with all AWS Regions | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-security-token-service-sts-now-supports-enabling-the-global-sts-endpoint-to-issue-session-tokens-compatible-with-all-aws-regions/ AWS WAF Security Automations Now Supports Log Analysis | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-waf-security-automations-now-supports-log-analysis/ AWS Certificate Manager Private Certificate Authority Increases Certificate Limit To One Million | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-certificate-manager-private-certificate-authority-increases-certificate-limit-to-one-million/ Amazon Cognito launches enhanced user password reset API for administrators | 
https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-cognito-launches-enhanced-user-password-reset-api-for-administrators/ AWS Secrets Manager supports more client-side caching libraries to improve secrets availability and reduce cost | https://aws.amazon.com/about-aws/whats-new/2019/05/Secrets-Manager-Client-Side-Caching-Libraries-in-Python-NET-Go/ Create fine-grained session permissions using AWS Identity and Access Management (IAM) managed policies | https://aws.amazon.com/about-aws/whats-new/2019/05/session-permissions/ Topic || Training and Certification New VMware Cloud on AWS Navigate Track | https://aws.amazon.com/about-aws/whats-new/2019/04/vmware-navigate-track/ Topic || Blockchain Amazon Managed Blockchain What's New | https://aws.amazon.com/about-aws/whats-new/2019/04/introducing-amazon-managed-blockchain/ Topic || Quick Starts New Quick Start deploys SAP S/4HANA on AWS | https://aws.amazon.com/about-aws/whats-new/2019/05/new-quick-start-deploys-sap-s4-hana-on-aws/

AWS Podcast
#308: April 2019 Update Show

AWS Podcast

Play Episode Listen Later Apr 14, 2019 35:25


Simon and Nicki cover almost 100 updates! Check out the chapter timings to see where things of interest to you might be. Infrastructure 00:42 Storage 1:17 Databases 4:14 Analytics 8:28 Compute 9:52 IoT 15:17 End User Computing 17:40 Machine Learning 19:10 Networking 21:57 Developer Tools 23:21 Application Integration 25:42 Game Tech 26:29 Media 27:37 Management and Governance 28:11 Robotics 30:35 Security 31:30 Solutions 32:40 Topic || Infrastructure In the Works – AWS Region in Indonesia | https://aws.amazon.com/blogs/aws/in-the-works-aws-region-in-indonesia/ Topic || Storage New Amazon S3 Storage Class – Glacier Deep Archive | https://aws.amazon.com/blogs/aws/new-amazon-s3-storage-class-glacier-deep-archive/ File Gateway Supports Amazon S3 Object Lock - Amazon Web Services | https://aws.amazon.com/about-aws/whats-new/2019/03/file-gateway-supports-amazon-s3-object-lock/ AWS Storage Gateway Tape Gateway Deep Archive | https://aws.amazon.com/about-aws/whats-new/2019/03/aws-storage-gateway-service-integrates-tape-gateway-with-amazon-s3-glacier-deeparchive-storage-class/ AWS Transfer for SFTP supports AWS Privatelink – Amazon Web Services | https://aws.amazon.com/about-aws/whats-new/2019/03/aws-transfer-for-sftp-now-supports-aws-privatelink/ Amazon FSx for Lustre Now Supports Access from Amazon Linux | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-fsx-for-lustre-now-supports-access-from-amazon-linux/ AWS introduces CSI Drivers for Amazon EFS and Amazon FSx for Lustre | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-introduces-csi-drivers-for-amazon-efs-and-amazon-fsx-for-lus/ Topic || Databases Amazon DynamoDB drops the price of global tables by eliminating associated charges for DynamoDB Streams | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon-dynamodb-drops-the-price-of-global-tables-by-eliminating-associated-charges-for-dynamodb-streams/ Amazon ElastiCache for Redis 5.0.3 enhances I/O handling to boost performance | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-elasticache-for-redis-503-enhances-io-handling-to-boost-performance/ Amazon Redshift announces Concurrency Scaling: Consistently fast performance during bursts of user activity | https://aws.amazon.com/about-aws/whats-new/2019/03/AmazonRedshift-ConcurrencyScaling/ Performance Insights is Generally Available on Amazon RDS for MariaDB | https://aws.amazon.com/about-aws/whats-new/2019/03/performance-insights-is-generally-available-for-mariadb/ Amazon RDS adds support for MySQL Versions 5.7.25, 5.7.24, and MariaDB Version 10.2.21 | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-rds-mysql-minor-5725-5725-and-mariadb-10221/ Amazon Aurora with MySQL 5.7 Compatibility Supports GTID-Based Replication | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-aurora-with-mysql-5-7-compatibility-supports-gtid-based-replication/ PostgreSQL 11 now Supported in Amazon RDS | https://aws.amazon.com/about-aws/whats-new/2019/03/postgresql11-now-supported-in-amazon-rds/ Amazon Aurora with PostgreSQL Compatibility Supports Logical Replication | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-aurora-with-postgresql-compatibility-supports-logical-replication/ Restore an Encrypted Amazon Aurora PostgreSQL Database from an Unencrypted Snapshot | https://aws.amazon.com/about-aws/whats-new/2019/03/restore-an-encrypted-aurora-postgresql-database-from-an-unencrypted-snapshot/ Amazon RDS for Oracle Now Supports In-region Read Replicas with Active Data Guard for Read Scalability and Availability | 
https://aws.amazon.com/about-aws/whats-new/2019/03/Amazon-RDS-for-Oracle-Now-Supports-In-region-Read-Replicas-with-Active-Data-Guard-for-Read-Scalability-and-Availability/ AWS Schema Conversion Tool Adds Support for Migrating Oracle ETL Jobs to AWS Glue | https://aws.amazon.com/about-aws/whats-new/2019/03/aws-schema-conversion-tool-adds-support-for-migrating-oracle-etl/ AWS Schema Conversion Tool Adds New Conversion Features | https://aws.amazon.com/about-aws/whats-new/2019/03/aws-sct-adds-support-for-new-endpoints/ Amazon Neptune Announces 99.9% Service Level Agreement | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-neptune-announces-service-level-agreement/ Topic || Analytics Amazon QuickSight Announces General Availability of ML Insights | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon_quicksight_announced_general_availability_of_mL_insights/ AWS Glue enables running Apache Spark SQL queries | https://aws.amazon.com/about-aws/whats-new/2019/03/aws-glue-enables-running-apache-spark-sql-queries/ AWS Glue now supports resource tagging | https://aws.amazon.com/about-aws/whats-new/2019/03/aws-glue-now-supports-resource-tagging/ Amazon Kinesis Data Analytics Supports AWS CloudTrail Logging | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-kinesis-data-analytics-supports-aws-cloudtrail-logging/ Tag-on Create and Tag-Based IAM Application for Amazon Kinesis Data Firehose | https://aws.amazon.com/about-aws/whats-new/2019/03/tag-on-create-and-tag-based-iam-application-for-amazon-kinesis-data-firehose/ Topic || Compute Amazon EKS Introduces Kubernetes API Server Endpoint Access Control | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-eks-introduces-kubernetes-api-server-endpoint-access-cont/ Amazon EKS Opens Public Preview of Windows Container Support | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-eks-opens-public-preview-of-windows-container-support/ Amazon EKS now supports Kubernetes version 1.12 and Cluster Version Updates Via CloudFormation | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-eks-now-supports-kubernetes-version-1-12-and-cluster-vers/ New Local Testing Tools Now Available for Amazon ECS | https://aws.amazon.com/about-aws/whats-new/2019/03/new-local-testing-tools-now-available-for-amazon-ecs/ AWS Fargate and Amazon ECS Support External Deployment Controllers for ECS Services | https://aws.amazon.com/about-aws/whats-new/2019/03/aws-fargate-and-amazon-ecs-support-external-deployment-controlle/ AWS Fargate PV1.3 adds secrets and enhanced container dependency management | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-fargate-pv1-3-adds-secrets-and-enhanced-container-dependency/ AWS Event Fork Pipelines – Nested Applications for Event-Driven Serverless Architectures | https://aws.amazon.com/about-aws/whats-new/2019/03/introducing-aws-event-fork-pipelines-nested-applications-for-event-driven-serverless-architectures/ New Amazon EC2 M5ad and R5ad Featuring AMD EPYC Processors are Now Available | https://aws.amazon.com/about-aws/whats-new/2019/03/new-amazon-ec2-m5ad-and-r5ad-featuring-amd-epyc-processors-are-now-available/ Announcing the Ability to Pick the Time for Amazon EC2 Scheduled Events | https://aws.amazon.com/about-aws/whats-new/2019/03/announcing-the-ability-to-pick-the-time-for-amazon-ec2-scheduled-events/ Topic || IoT AWS IoT Analytics now supports Single Step Setup of IoT Analytics Resources | 
https://aws.amazon.com/about-aws/whats-new/2019/03/aws-iot-analytics-now-supports-single-step-setup-of-iot-analytic/
AWS IoT Greengrass Adds New Connector for AWS IoT Analytics, Support for AWS CloudFormation Templates, and Integration with Fleet Indexing | https://aws.amazon.com/about-aws/whats-new/2019/03/aws-iot-greengrass-adds-new-connector-aws-iot-analytics-support-aws-cloudformation-templates-integration-fleet-indexing/
AWS IoT Device Tester v1.1 is Now Available for AWS IoT Greengrass v1.8.0 | https://aws.amazon.com/about-aws/whats-new/2019/03/aws-iot-device-tester-now-available-aws-iot-greengrass-v180/
AWS IoT Core Now Supports HTTP REST APIs with X.509 Client Certificate-Based Authentication On Port 443 | https://aws.amazon.com/about-aws/whats-new/2019/03/aws-iot-core-now-supports-http-rest-apis-with-x509-client-certificate-based-authentication-on-port-443/
Generate Fleet Metrics with New Capabilities of AWS IoT Device Management | https://aws.amazon.com/about-aws/whats-new/2019/03/generate-fleet-metrics-with-new-capabilities-of-aws-iot-device-management/
Topic || End User Computing
Amazon AppStream 2.0 Now Supports iPad and Android Tablets and Touch Gestures | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-appstream-2-0-now-supports-ipad-and-android-tablets-and-t/
Amazon WorkDocs Drive now supports offline content and offline search | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-workdocs-drive-now-supports-offline-content-and-offline-s/
Introducing Amazon Chime Business Calling | https://aws.amazon.com/about-aws/whats-new/2019/03/introducing-amazon-chime-business-calling/
Introducing Amazon Chime Voice Connector | https://aws.amazon.com/about-aws/whats-new/2019/03/introducing-amazon-chime-voice-connector/
Alexa for Business now lets you create Alexa skills for your organization using Skill Blueprints | https://aws.amazon.com/about-aws/whats-new/2019/03/alexa-for-business-now-lets-you-create-alexa-skills-for-your-org/
Topic || Machine Learning
New AWS Deep Learning AMIs: Amazon Linux 2, TensorFlow 1.13.1, MXNet 1.4.0, and Chainer 5.3.0 | https://aws.amazon.com/about-aws/whats-new/2019/03/new-aws-deep-learning-amis-amazon-linux2-tensorflow-13-1-mxnet1-4-0-chainer5-3-0/
Introducing AWS Deep Learning Containers | https://aws.amazon.com/about-aws/whats-new/2019/03/introducing-aws-deep-learning-containers/
Amazon Transcribe now supports speech-to-text in German and Korean | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-transcribe-now-supports-speech-to-text-in-german-and-korean/
Amazon Transcribe enhances custom vocabulary with custom pronunciations and display forms | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-transcribe-enhances-custom-vocabulary-with-custom-pronunciations-and-display-forms/
Amazon Comprehend now supports AWS KMS Encryption | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-comprehend-now-supports-aws-kms-encryption/
New Setup Tool To Get Started Quickly with Amazon Elastic Inference | https://aws.amazon.com/about-aws/whats-new/2019/04/new-python-script-to-get-started-quickly-with-amazon-elastic-inference/
Topic || Networking
Application Load Balancers now Support Advanced Request Routing | https://aws.amazon.com/about-aws/whats-new/2019/03/application-load-balancers-now-support-advanced-request-routing/
Announcing Multi-Account Support for Direct Connect Gateway | https://aws.amazon.com/about-aws/whats-new/2019/03/announcing-multi-account-support-for-direct-connect-gateway/
Topic || Developer Tools
AWS App Mesh is now generally available | https://aws.amazon.com/about-aws/whats-new/2019/03/aws-app-mesh-is-now-generally-available/
The AWS Toolkit for IntelliJ is Now Generally Available | https://aws.amazon.com/about-aws/whats-new/2019/03/the-aws-toolkit-for-intellij-is-now-generally-available/
The AWS Toolkit for Visual Studio Code (Developer Preview) is Now Available for Download from the Visual Studio Marketplace | https://aws.amazon.com/about-aws/whats-new/2019/03/the-aws-toolkit-for-visual-studio-code--developer-preview--is-now-available-for-download-from-vs-marketplace/
AWS Cloud9 announces support for Ubuntu development environments | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-cloud9-announces-support-for-ubuntu-development-environments/
Amplify Framework Adds Enhancements to Authentication for iOS, Android, and React Native Developers | https://aws.amazon.com/about-aws/whats-new/2019/03/amplify-framework-adds-enhancements-to-authentication-for-ios-android-and-react-native-developers/
AWS CodePipeline Adds Action-Level Details to Pipeline Execution History | https://aws.amazon.com/about-aws/whats-new/2019/03/aws-codepipeline-adds-action-level-details-to-pipeline-execution-history/
Topic || Application Integration
Amazon API Gateway Improves API Publishing and Adds Features to Enhance User Experience | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-api-gateway-improves-api-publishing-and-adds-features/
Topic || Game Tech
Over 190 Updates Come to Lumberyard Beta 1.18, Available Now | https://aws.amazon.com/about-aws/whats-new/2019/03/over-190-updates-come-to-lumberyard-beta-118-available-now/
Amazon GameLift Realtime Servers Now in Preview | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-gamelift-realtime-servers-now-in-preview/
Topic || Media Services
Detailed Job Progress Status and Server-Side S3 Encryption Now Available with AWS Elemental MediaConvert | https://aws.amazon.com/about-aws/whats-new/2019/03/detailed-job-progress-status-and-server-side-s3-encryption-now-available-with-aws-elemental-mediaconvert/
Introducing Live Streaming with Automated Multi-Language Subtitling | https://aws.amazon.com/about-aws/whats-new/2019/03/introducing-live-streaming-with-automated-multi-language-subtitling/
Video on Demand Now Leverages AWS Elemental MediaConvert QVBR Mode | https://aws.amazon.com/about-aws/whats-new/2019/04/video-on-demand-now-leverages-aws-elemental-mediaconvert-qvbr-mode/
Topic || Management and Governance
Use AWS Config Rules to Remediate Noncompliant Resources | https://aws.amazon.com/about-aws/whats-new/2019/03/use-aws-config-to-remediate-noncompliant-resources/
AWS Config Now Supports Tagging of AWS Config Resources | https://aws.amazon.com/about-aws/whats-new/2019/03/aws-config-now-supports-tagging-of-aws-config-resources/
Now You Can Query Based on Resource Configuration Properties in AWS Config | https://aws.amazon.com/about-aws/whats-new/2019/03/now-you-can-query-based-on-resource-configuration-properties-in-aws-config/
AWS Config Adds Support for Amazon API Gateway | https://aws.amazon.com/about-aws/whats-new/2019/03/aws-config-adds-support-for-amazon-api-gateway/
Amazon Inspector adds support for Amazon EC2 A1 instances | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-inspector-adds-support-for-amazon-ec2-a1-instances/
Service control policies in AWS Organizations enable fine-grained permission controls | https://aws.amazon.com/about-aws/whats-new/2019/03/service-control-policies-enable-fine-grained-permission-controls/
You can now use resource level policies for Amazon CloudWatch Alarms | https://aws.amazon.com/about-aws/whats-new/2019/04/you-can-now-use-resource-level-permissions-for-amazon-cloudwatch/
Amazon CloudWatch Launches Search Expressions | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon-cloudwatch-launches-search-expressions/
AWS Systems Manager Announces 99.9% Service Level Agreement | https://aws.amazon.com/about-aws/whats-new/2019/03/aws-systems-manager-announces-service-level-agreement/
Topic || Robotics
AWS RoboMaker Announces 99.9% Service Level Agreement | https://aws.amazon.com/about-aws/whats-new/2019/03/aws-robomaker-announces-service-level-agreement/
AWS RoboMaker announces new build and bundle feature that makes it up to 10x faster to update a simulation job or a robot | https://aws.amazon.com/about-aws/whats-new/2019/03/robomaker-new-build-and-bundle/
Topic || Security
Announcing the renewal command for AWS Certificate Manager | https://aws.amazon.com/about-aws/whats-new/2019/03/Announcing-the-renewal-command-for-AWS-Certificate-Manager/
AWS Key Management Service Increases API Requests Per Second Limits | https://aws.amazon.com/about-aws/whats-new/2019/03/aws-key-management-service-increases-api-requests-per-second-limits/
Announcing AWS Firewall Manager Support For AWS Shield Advanced | https://aws.amazon.com/about-aws/whats-new/2019/03/announcing-aws-firewall-manager-support-for-aws-shield-advanced/
Topic || Solutions
New AWS SAP Navigate Track | https://aws.amazon.com/about-aws/whats-new/2019/03/sap-navigate-track/
Deploy Micro Focus PlateSpin Migrate on AWS with New Quick Start | https://aws.amazon.com/about-aws/whats-new/2019/03/deploy-micro-focus-platespin-migrate-on-aws-with-new-quick-start/

regonn&curry.fm
28. Chainer Meetup

regonn&curry.fm

Play Episode Listen Later Apr 7, 2019 65:32


Scrapbox topics. Listener question: "I've listened to every episode since the first one. Lately I've been wanting to start a podcast with a friend myself. It sounds like you two record while talking online; could you tell us about your recording setup? The audio is very clear, and I'd like to use it as a reference." At first we removed noise with Adobe Audition; these days we use Zoom. We publish via GitHub Pages, based on Yattecast, aiming for roughly 50 MB per file. On the podcast side, we have our eye on a service called Anchor. Spotify acquired it around this February, and since Spotify itself can now distribute podcasts, we made this show available there too. Anchor also offers free, unlimited hosting. It is not yet available in Japan, but overseas you can insert ads and earn money; Spotify has already entered Japan, so ads will likely become available here eventually, and we may move then. We shot a video: a project to build an independent VTuber that runs natural language processing on YouTube Live comments, classifies them as positive or negative, and changes color accordingly. It is English-only for now, since good Japanese training data is hard to find; eventually we would like it to reply as well. Chainer Meetup #09 report: the atmosphere was somewhere between a programming event and an academic conference. ChainerX talk: the parts that handle ndarrays and computation-graph processing in a numpy-like way are being reimplemented in C++; handing that work to the C++ side speeds things up and apparently makes the Python side easier to refactor too (a minimal sketch of this numpy-like API follows below). A talk on building Chainer in Ruby made me want to try a Julia implementation as well; in my own lightning talk I showed Chainer running from Julia via PyCall.jl. There was a talk on making profiling in Chainer a bit easier, and an NVIDIA engineer demonstrated GPU tuning methods, such as how to find bottlenecks in GPU processing. "Working hard on stock trading with ChainerRL (part 1)": someone is attempting stock trading, though the approach from another lightning talk, predicting from chart images with a VAE, might be more accurate? Turning a profit on what is essentially a random walk really is hard. People from AWS, GCP, and Azure spoke; it is hard for them to differentiate here, since each can run Jupyter notebooks on its own service. AWS makes ONNX (pronounced "onyx") easy to use, GCP lets you build a container-based development ecosystem with Kubeflow, and Azure is strong on IoT thanks to its Xbox know-how. We held a listener meetup: three of us, with one listener, on the day the Pet competition ended. This week's tips: how to receive an email when someone writes in a discussion. Even if a post is later deleted, the email shows what was written, so setting this up is reassuring; if the address is a Gmail one, search works well and old posts are easy to find. This week's Kaggle. Curry: finished the Pet competition 6th on the public leaderboard and is waiting for the final results, which take a long time to announce. Regonn: the pot ("tsubo") competition, writing the model in Chainer; handling multi-label classification is tricky. New Kaggle competitions: three kernel competitions have started, and the rules are not all the same, which makes them tricky. For the pot competition and Toxicity Classification it is enough if inference runs in a kernel, while Freesound Audio Tagging 2019 bans external data and pre-trained models.
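The ChainerX notes above are easier to see in code. Here is a minimal sketch, not from the episode, of ChainerX's numpy-like, graph-aware ndarray API; it assumes the chainerx module that ships with recent Chainer releases is installed.

```python
# A minimal ChainerX sketch (assumes the chainerx module bundled with
# Chainer v6+): numpy-style arrays whose graph lives on the C++ side.
import chainerx as chx

x = chx.array([[1.0, 2.0], [3.0, 4.0]])  # numpy-style construction
x.require_grad()                          # mark x for gradient tracking

y = (x * x).sum()                         # ops are recorded on the C++ side
chx.backward(y)                           # backprop also runs in C++

print(x.grad)                             # d(y)/dx == 2 * x
```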

Open Source Directions hosted by Quansight

Chainer is a powerful, flexible, and intuitive deep learning framework. Chainer supports CUDA computation: it requires only a few lines of code to leverage a GPU, and it runs on multiple GPUs with little effort. Chainer supports various network architectures, including feed-forward nets, convnets, recurrent nets, and recursive nets, as well as per-batch architectures. Forward computation can include any control flow statements of Python without sacrificing the ability to backpropagate, which makes code intuitive and easy to debug.
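That define-by-run idea is easiest to see in a few lines. The following is a minimal sketch, assuming only Chainer and NumPy are installed; the loop is ordinary Python, yet gradients flow back through it.

```python
# Minimal define-by-run sketch: plain Python control flow participates
# in the autodiff graph. Assumes `pip install chainer` (and numpy).
import numpy as np
import chainer.functions as F
from chainer import Variable

x = Variable(np.array([[1.0, 2.0]], dtype=np.float32))
y = x
for _ in range(3):        # any Python control flow is fine
    y = F.tanh(y)         # each call appends a node to the graph

loss = F.sum(y)
loss.backward()           # backprop traverses the recorded graph
print(x.grad)             # d(loss)/dx, shape (1, 2)
```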

Develpreneur: Become a Better Developer and Entrepreneur

The AWS machine learning services are more examples of the newer offerings. Nevertheless, they are growing fast and can help you embrace cutting-edge technology. Machine learning is a recent technology in general, so the time you spend understanding these services may help you land that next job. Amazon SageMaker This service provides a method for building, training, and deploying machine learning models at any scale. It is a great way to try out machine learning, and the time you spend here will look good on your next resume update. You do need to put some data on S3 to analyze, and then check out the use cases. There is a free tier for the first two months. Amazon Comprehend Quick and easy text analysis. Send your text to this service to analyze it for keywords, among many other kinds of analysis. There is a free tier you can use to try it out and find ways to organize and mine your content. Amazon Lex This service allows you to build voice bots and chatbots using the technology that drives Alexa. There are some templates, and the interface makes it easy to get started quickly. Amazon Polly If you want to create audio from your content, then this is the service for you. Try out the service a few thousand words at a time for free, and you can even download the audio in mp3 format. Amazon Rekognition The features that Comprehend provides for text are brought to the image and video world by Rekognition. This service analyzes video and can highlight or recognize people, objects, and other details you might search for in a stream. Amazon Translate This service provides a quick and easy way to translate text between two languages. Much like Google Translate, it is quick and provides an API that you can use to significantly increase your audience. Amazon Transcribe If you have ever wondered about transcribing audio notes (or a podcast), then this is the service for you. It is quick and easy to customize, even for highly technical terms. The accuracy varies based on the clarity of the audio and background noise. AWS DeepLens This service is best understood by working through the tutorials. It provides a way to analyze videos for objects, faces, and activities. An essential difference between this and the others is that DeepLens is a piece of hardware, not just a service: a camera with HD video and onboard analysis tools for real-time processing. AWS Deep Learning AMIs This service provides quick-start machine learning on EC2 through AMIs. Configuring a machine learning development environment can be tedious and time-consuming; these AMI options offer a shortcut to get working sooner. Apache MXNet on AWS This is a machine learning framework. Apache MXNet is a fast and scalable training and inference framework with an easy-to-use, concise API for machine learning. MXNet includes the Gluon interface, which allows developers of all skill levels to get started with deep learning on the cloud, on edge devices, and in mobile apps. In just a few lines of Gluon code, you can build linear regression, convolutional networks, and recurrent LSTMs for object detection, speech recognition, recommendation, and personalization. TensorFlow on AWS This is a machine learning framework on AWS. I think their description works best and avoids any ignorance about it on my end: "TensorFlow™ enables developers to quickly and easily get started with deep learning in the cloud.
The framework has broad support in the industry and has become a popular choice for deep learning research and application development, particularly in areas such as computer vision, natural language understanding, and speech translation. You can get started on AWS with a fully-managed TensorFlow experience with Amazon SageMaker, a platform to build, train, and deploy machine learning models at scale. Or, you can use the AWS Deep Learning AMIs to build custom environments and workflows with TensorFlow and other popular frameworks including Apache MXNet, PyTorch, Caffe, Caffe2, Chainer, Gluon, Keras, and Microsoft Cognitive Toolkit."
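To make the Comprehend and Translate entries above concrete, here is a hedged sketch using boto3, AWS's Python SDK; the region and sample text are assumptions, and credentials come from your own AWS configuration.

```python
# A sketch of calling Amazon Comprehend and Amazon Translate with boto3.
# Assumptions: region us-east-1, credentials already configured locally.
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")
translate = boto3.client("translate", region_name="us-east-1")

text = "Machine learning services can help you land that next job."

# Comprehend: pull key phrases and overall sentiment out of the text.
phrases = comprehend.detect_key_phrases(Text=text, LanguageCode="en")
sentiment = comprehend.detect_sentiment(Text=text, LanguageCode="en")

# Translate: convert the same text from English to German.
german = translate.translate_text(
    Text=text, SourceLanguageCode="en", TargetLanguageCode="de"
)

print([p["Text"] for p in phrases["KeyPhrases"]])
print(sentiment["Sentiment"])
print(german["TranslatedText"])
```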

Potentially Harmful Podcast
Torn Bare 03_Chainer Seraph

Potentially Harmful Podcast

Play Episode Listen Later Jun 22, 2018 18:57


Our good friend Chainer Seraph wanted to share a few of his thoughts with you all. After having received news of the passing of his Oma, he took some time to grieve and let his emotion-filled mind flow. Thanks again, love you all. Cheers! Follow us at these places: https://twitter.com/realmurderbus RoskaTyrant @ https://twitter.com/Roska https://www.youtube.com/user/RoskaTyrant https://www.twitch.tv/roskatyrant And HalcyonEyes @ https://twitter.com/astrozombies https://www.twitch.tv/dee_squared https://www.youtube.com/channel/UCyncRCXPmCV4wSUG9iITulA https://twitter.com/Endless_Reaper

Disrupting Japan: Startups and Innovation in Japan
This Startup Just Built Japan’s Most Powerful Supercomputer

Disrupting Japan: Startups and Innovation in Japan

Play Episode Listen Later May 14, 2018 45:03


Preferred Networks is making changes in Japan. Over the past few years, this AI startup has raised more than $130M in venture funding and grown to more than 130 people. If you live outside of Japan, you might not have heard of this team, but they are working with Toyota to create the next generation of driverless cars, working with Japan's most advanced industrial robot manufacturers to improve efficiency, and working with many financial institutions on fraud detection. Oh yes, and they also built Japan's most powerful commercial supercomputer. Today we sit down and talk with Daisuke Okanohara, the technical co-founder of Preferred Networks. Daisuke and I talk about the story behind Preferred Networks, and he shares his challenges and current strategies for maintaining the company's experimental and engineering culture as it grows larger and more structured. Daisuke also talks about his time at Google, how Japanese AI stacks up to China and the US, and why he's convinced that their biggest competition is going to come from somewhere you would never expect. It's a great discussion, and I think you'll enjoy it. Show Notes What edge-heavy computing is and why it's important How a Google internship changed Daisuke's outlook on AI The future of driverless cars at Toyota Why the team decided to build Japan's most powerful supercomputer Why you can't sell disruptive products to large companies How to keep a curious spirit even as your company grows Where the real competition in AI will come from Links from the Founder Everything you ever wanted to know about Preferred Networks Check out their Homepage Follow them on Twitter @PreferredNet Check out Chainer, Preferred Networks' free open-source AI library: the core Chainer project, PaintsChainer, and CuPy Transcript Welcome to Disrupting Japan, straight talk from Japan's most successful entrepreneurs. I'm Tim Romero, and thanks for joining me. Preferred Networks is without question the brightest star in the constellation of Japanese AI startups. It has attracted about 130 million in venture funding and grown to more than 130 people over the past few years. Of course, if you don't follow AI, you might not have heard about them at all, but they are the technology behind Toyota's driverless cars, some of FANUC's industrial robots, and many cutting-edge applications in other verticals, and as a side project, they also built Japan's most powerful commercial supercomputer. It's an interesting team to say the least, and today we sit down and talk with Daisuke Okanohara, Preferred Networks' technical cofounder. We talk about how Preferred Networks got started and got to scale, and he also shares his challenges and strategies for trying to maintain the company's experimental and engineering culture as it grows larger and monthly revenue pressures increase. Daisuke also talks about his time at Google, how Japanese AI stacks up to China and the US, and why he's convinced that their biggest competition is going to come from somewhere you would never expect it. But you know, Daisuke tells that story much better than I can, so let's get right to the interview. [Interview] Tim: So I'm sitting here with Daisuke Okanohara, the cofounder and Executive Vice President of Preferred Networks, Japan's leading and probably most innovative AI startup. So thanks for sitting down with me today.
Daisuke: Thank you very much. Tim: So Preferred Networks talks a lot about the importance of edge-heavy computing. Can you explain exactly what edge-heavy computing is and why it's important? Daisuke: Cloud computing is one of the most important trends in the IT area, and most people believe that most computations or operations sho...

Legendary Creature - Podcast
Chainer: There's a Mono-Black Player in Everyone

Legendary Creature - Podcast

Play Episode Listen Later Dec 19, 2017 58:01


Everything dies and is reborn, only to die again. You know that there's a part of you as a Magic player that wants to get everything back from the graveyard, just so you can use it and kill it again. Andy and Kyle bring in AP from their playgroup to talk about his Chainer, Dementia Master deck. He takes us for a midnight stroll through the graveyard and shares what he's planning on doing with death in his deck. If you're a player looking for a mono-black deck possibility, don't overlook Chainer. This oft-forgotten commander gets a lot of value and proves problematic for many opponents. Check out AP's deck and other decks featured on the podcast at http://tappedout.net/users/LegendaryCreature-Podcast/mtg-decks/. Music this episode is from Dan Terminus's new album Automated Refrains; the song is Fall of the Ancient World. Please support Dan Terminus. We think he's awesome! Big shout out to Mikey Patch for all our artwork!

Pirate Radio Podcasts™
Episode #73 - Coz The Shroom

Pirate Radio Podcasts™

Play Episode Listen Later Oct 13, 2017 108:36


Celebrating his 50th birthday, this week's guest is yet another one of MINDS.com's more dynamic personalities, "Coz the Shroom" https://www.minds.com/coztheshroom "i'm not turning 50, i'm turning 40/TEN."
INTRO - MINDS.com, clearing up brief sound issues - politics & the 2016 Presidential Election Cycle, FEEL THE BERN? HST, George Carlin
14min - WIZARDS remake?
17min - Visits from prominent wives within the "black" community, Muhammad Ali, Jesse Jackson, Black Panthers, political awakening, ALI's legacy & impact
18min - Flexibility, listening to others
20min - AmeriKa's private, for-profit prison system
23min - #WOH - War on Humanity, food supply, chem-trails, etc.
25min - Shroom the Hobbit, Wendy O. Williams, naturopathic healer
28min - Self-confessed "PSYCHONAUT", ethno-botany
31min - Bi-polar tripping in the mountains of New Mexico, VISION QUEST
35min - INDIE radio station, Operation Secret Santa 2017, WPRPN "profit-sharing" philosophy, URL sub-domain subscriptions https://www.fiverr.com/wprpn1/pay-finders-fees-4-subdomain-network-registrants-recruits
43min - Boosting issues with MINDS, Japhy the "lazy" Buddhist, Aristotle & criticism
45min - Jim Aquarian (the Source, the 60's commune "cult")?
48min - Mycology, RARE mushroom and plant hunting, "Backwash" punk rocker Lee Walstad "discovers" rare psychedelic
53min - Anthropology, New Mexico's rich ethnic mix, cops' targeting of natives
57min - Shroom's revolutionary American lineage, cross-dressing, the Bahai
1hr1min - Weaponized information https://www.youtube.com/channel/UCFfVwa-kJwcYRp9G0WXVlhg/videos
1hr8min - Punk rock & authenticity, Pirate Radio Podcasts & C2C AM, Lisa Suck Dogs, TYT
1hr13min - Technical issues via DISCORD platform, CTS drops out
1hr16min - Captain "Long John" Sinclair
1hr19min - Shroom's "Pirate" story, Thunderhead Poker w/ CHAOS Radio in Austin, TX, "Freebooting" RPG, cassette artistry
1hr24min - "Chainer KING" Dave, "Chainer" juice, Loose Cannons
1hr26min - Politics breeding corruption, UFO skepticism, ETs as myth, CTS drops off line
1hr30min - Setting out 2 Nebraska on an EPIC journey for the ECLIPSE; witnessing UFOs & missing time?
1hr40min - STRANGE PSYCHOSIS
1hr43min - HOPI's "BLUE vs. RED" prophecy
Closing remarks & URL links https://coztheshroom.bandcamp.com/ http://barbarianclan.com/main.html https://www.facebook.com/throatgorge

Potentially Harmful Podcast
Ep 06_6 Not Sex

Potentially Harmful Podcast

Play Episode Listen Later Jun 30, 2017 58:15


This month we kinda just "went with the flow," which is the best way to put it, for we were outdoors with alcoholic beverages and down to one mic due to computer magic?! Unfortunately, we were down a Halcyon, but we were able to scrounge up Chainer and Eagles Rising, whose ID is now PrinnyLord! And again, thanks for joining us. Till the next one, cheers! Follow us at these places: https://twitter.com/realmurderbus RoskaTyrant @ https://twitter.com/Roska https://www.youtube.com/user/RoskaTyrant https://www.twitch.tv/roskatyrant And HalcyonEyes @ https://twitter.com/astrozombies https://www.twitch.tv/dee_squared https://www.youtube.com/channel/UCyncRCXPmCV4wSUG9iITulA https://twitter.com/Endless_Reaper

Rebuild
181: UNK Reply Bot (higepon)

Rebuild

Play Episode Listen Later May 1, 2017 70:24


We welcomed Taro Minowa as a guest and talked about bots, machine learning, AI, and more. Show Notes Trying out a seq2seq chatbot in Japanese - Higepon's blog Higemi bot (@higepon_bot) Convolutional neural network Sequence-to-Sequence Models Deep Learning from Scratch: the theory and implementation of deep learning, learned with Python TensorFlow Keras Theano Chainer "I don't get it. Wouldn't it be better to just use Keras from the start? Typical Japanese: you like Chainer way too much." MeCab: Yet Another Part-of-Speech and Morphological Analyzer Rinna Twitter taught Microsoft's AI chatbot to be a racist asshole deepmind/sonnet: TensorFlow-based neural network library FaceApp apologizes for building a racist AI Google Photos labeled black people 'gorillas' Anime studio investigates "Everfilter" over unauthorized use of imagery from Makoto Shinkai's films Is Expensify using Mechanical Turk for reading my receipts? Introducing Echo Look - Hands-Free Camera and Style Assistant Your Samsung TV is eavesdropping on your private conversations Google Home now supports multiple users Google shuts down Burger King's cunning TV ad Facebook is developing a way to read your mind

Ars Moriendi
Épisode 4: Né pour déchainer les enfers

Ars Moriendi

Play Episode Listen Later Mar 30, 2017


Richard Speck: He hears the news of the massacre on the radio and tells the waitress at the bar that he hopes they catch the bastard. He is stunned to learn that he himself is wanted, suspected of being the one responsible... Sombre dimanche ("Gloomy Sunday"): this Hungarian song carries a grim reputation, that of driving its listeners to suicide...


O'Reilly Bots Podcast - O'Reilly Media Podcast
Richard Socher on the future of deep learning

O'Reilly Bots Podcast - O'Reilly Media Podcast

Play Episode Listen Later Dec 1, 2016 57:46


The O'Reilly Bots Podcast: Making neural networks more accessible. In this episode of the O'Reilly Bots Podcast, Pete Skomoroch and I talk with Richard Socher, chief scientist at Salesforce. He was previously the founder and CEO of MetaMind, a deep learning startup that Salesforce acquired in 2016. Socher also teaches the "Deep Learning for Natural Language Processing" course at Stanford University. Our conversation focuses on where deep learning and NLP are headed, and on interesting current and near-future applications. Discussion points: accessibility, in a couple of senses, meaning both making deep learning easier for computer scientists to implement and making its power available through intuitive applications; AI-enabled question answering systems and dynamic co-attention networks; the issue of interpretability, and progress in creating more interpretable models; why Socher believes that human-in-the-loop is the best solution for the current "fake news" controversy, the hottest topic in NLP now; and why Quasi-Recurrent Neural Networks (QRNNs) are an advancement over Long Short-Term Memory networks (LSTMs), the subject of a recent paper co-authored by Socher. Other links: the Stanford Question Answering Dataset; TensorFlow and Chainer, two frameworks for working with neural networks; and summaries of recent papers by the Salesforce research team.


Chain of Wealth - Debt, Investing, Entrepreneurship, Wealth & More
Greg McBride on investing for retirement, social security and IRAs

Chain of Wealth - Debt, Investing, Entrepreneurship, Wealth & More

Play Episode Listen Later 16:37


Original Show Notes: Greg McBride from Bankrate
Bankrate.com is a one-stop shop for financing. They offer services that include personal loans, automobile loans, mortgages, and credit cards. Today we are speaking with Greg McBride, Chief Financial Analyst and Senior Vice President at Bankrate.com and a graduate of the University of Florida. With two decades of experience in personal finance, he provides analysis, in-depth interpretation, and practical advice to listeners.
Greg has some awesome advice for people looking for information on investing for retirement. Greg, hopefully you can help our Chainers out with planning for home buying and retirement.
[0:59] Tell us a bit about yourself.
· Passion: to help people save more money
[1:35] How can a person decide if it's the right time to buy? What should be considered?
· Be in it for the long haul
· Have a long-term view
[2:33] When it comes to purchasing a home, what area do you see people getting hung up on most?
· Lack of savings
[3:48] How are you able to know how much house you can afford? Is there a formula for knowing what your budget can be?
· Find the housing calculator at Bankrate.com
[4:41] S&P Dow Jones Indices managing director David Blitzer said "Home prices have reached new all-time highs" on October 31, 2017. What are your thoughts about the housing market at the moment? Do you think the increase is sustainable, and do you foresee any potential correction in the coming months or years?
· Areas where prices have gone way up carry a risk
· In most of America, nothing equates to 2006
[6:11] We know that leaving large amounts of money sitting in a bank account is not the best idea. Where do you recommend young professionals put their money?
· Keep an emergency fund in a bank account
· Workplace 401(k)
· Roth IRA
· Teach yourself to save 10%-15% of your income
[7:12] Can you explain a bit about a Roth IRA and a traditional IRA? How do I know which is right for me? (See the sketch after these notes.)
· You can do both
· Roth IRA: paying taxes on the front end
· Traditional IRA: pay taxes when you take money out
· Snowball analogy
[8:28] What is a CD? Are there different types, and again, how do I know which one is right?
· Certificate of deposit
· Committing money for a set amount of time
· Terms range from a few months to years
· Make sure you don't need the money
[10:21] How do I know that I am saving and investing enough to retire in 40 or 50 years?
· Save 10%-15% of your income for retirement
[11:43] A claim that has been repeated for a long time is that millennials should not plan on receiving Social Security when they are older. How much truth is there to that?
· Do not plan on Social Security
· Save for yourself
Sponsors
[13:17] Chain of Wealth - If you're enjoying the podcast, don't forget to subscribe, rate, and review. Head over to Chain of Wealth. If you have an iPhone, subscribe in iTunes!
Value Link Round (VLR)
[13:56] Why do you think that people struggle to save and plan for the future?
· They don't prioritize saving
[14:20] Do you recommend any other podcasts or books?
· Books: The Millionaire Next Door
[14:53] What is the best advice that someone has ever given you?
· Think long term and save
[15:22] What is your favorite quote?
· "Don't save what's left over after spending; spend what's left over after saving"
[15:58] How can our listeners get in touch with you?
· Twitter: @bankrategreg
· Facebook: Money Masters
Support this podcast at https://redcircle.com/chain-of-wealth-debt-investing-entrepreneurship-wealth-and-more/donations. Want to advertise on this podcast? Go to https://redcircle.com/brands and sign up.
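The Roth-versus-traditional point at [7:12] comes down to when you pay the tax. Here is an illustrative calculation, not from the episode; the contribution amount, flat 22% tax rate, 7% growth, and 30-year horizon are all assumptions chosen to make the comparison visible.

```python
# Illustrative Roth vs. traditional IRA comparison under assumed numbers:
# a flat 22% tax rate now and in retirement, 7% annual growth, 30 years.
contribution = 6_000        # pre-tax dollars available to invest
tax_rate = 0.22             # assumed flat rate now and in retirement
growth = 1.07 ** 30         # compound growth factor over 30 years

# Roth: pay tax up front, withdrawals are tax-free.
roth = contribution * (1 - tax_rate) * growth

# Traditional: invest pre-tax, pay tax on withdrawal.
traditional = contribution * growth * (1 - tax_rate)

print(f"Roth: ${roth:,.0f}  Traditional: ${traditional:,.0f}")
# With identical tax rates, the two come out the same; the choice hinges
# on whether your rate is higher now (favoring traditional) or expected
# to be higher in retirement (favoring Roth).
```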