POPULARITY
I do what I do because of one company … IBM. Why? Because in the early 1980s, I got into computers, with a ZX81 (1 KB of RAM) and a Dragon 32 (32 KB of RAM). They were very much home computers: you would rush out and buy the latest computer magazine, and then spend a happy evening entering some BASIC code that made a cursor move across the screen using the IJLM keys. If you were very lucky, you would manage to save your program to a cassette (a simple program could take over ten minutes to save), only to get an error at the end. I was hooked!

But, at work, we had a DEC VAX minicomputer, which cost a fortune to buy and maintain (even in those days). This mini typically ran Pascal, and I remember running labs for students where they would all decide to compile their programs at the same time, and 30 minutes later, some of them would get their errors and have to compile again. Basically, every lab ended with me saying, “Sorry about that.” The VAX, though, was not designed to support 25 students compiling their programs at the same time … it was a batch-processing machine and wanted to be given jobs that it could run whenever it had time. It basically came from the days when you handed in your punch cards (containing either FORTRAN if you were an engineer or COBOL if you were more business-focused) to someone in a white coat, and then came back the next week for a printed output on green-lined paper.

But, just in time, the IBM PC arrived, and it was heavy but beautiful. So, while many in my department pushed for the VAX, I pushed for the PC for our labs. With their clock speed of 4.77 MHz and 640 KB of memory, I went ahead and bought a batch for a new PC lab. In those days there were no network switches, so the machines all connected with coaxial cable, with T-pieces joining each one to the shared Ethernet bus. My logic was that we were paying around £20K for maintenance on the VAX, and for the same cost we could buy 20 £1K PC clones. But we'd have to maintain them ourselves.
And, it worked. It freed us, and allowed us to run the classic Turbo Pascal (and Turbo C): our students could now bring in their 5.25-inch floppy disks and save their programs for later use. And the size of the hard disk? 20 MB! And, so, it is IBM that we must thank for starting the PC revolution. Today is the 100th anniversary of the IBM name, which was first adopted on 15 Feb 1924.
FlashAttention was first published by Tri Dao in May 2022, and it had a deep impact in the large language models space. Most open models you've heard of (RedPajama, MPT, LLaMA, Falcon, etc.) all leverage it for faster inference. Tri came on the podcast to chat about FlashAttention, the newly released FlashAttention-2, the research process at Hazy Research, and more. This is the first episode of our “Papers Explained” series, which will cover some of the foundational research in this space. Our Discord also hosts a weekly Paper Club, which you can sign up for here.

How does FlashAttention work?

The paper is titled “FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness”. There are a couple of keywords to call out:

* “Memory-Efficient”: standard attention memory usage is quadratic with sequence length (i.e. O(N^2)). FlashAttention is sub-quadratic at O(N).
* “Exact”: the opposite of “exact” in this case is “sparse”, as in “sparse networks” (see our episode with Jonathan Frankle for more). This means that you're not giving up any precision.
* The “IO” in “IO-Awareness” stands for “Input/Output” and hints at a write/read-related bottleneck.

Before we dive in, look at this simple GPU architecture diagram. The GPU has access to three memory stores at runtime:

* SRAM: this is on-chip memory co-located with the actual execution core. It's limited in size (~20MB on an A100 card) but extremely fast (19TB/s total bandwidth).
* HBM: this is off-chip but on-card memory, meaning it's in the GPU but not co-located with the core itself. An A100 has 40GB of HBM, but only 1.5TB/s of bandwidth.
* DRAM: this is your traditional CPU RAM. You can have TBs of this, but you can only get ~12.8GB/s of bandwidth, which is way too slow.

Now that you know what HBM is, look at how the standard Attention algorithm is implemented. As you can see, all 3 steps include a “write X to HBM” step and a “read from HBM” step.
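To make those HBM round trips concrete, here is a minimal NumPy sketch of standard (unfused) attention. The names are ours and the real kernels run on GPU, but each intermediate matrix below corresponds to one of the “write to HBM / read from HBM” steps above:

```python
import numpy as np

def standard_attention(Q, K, V):
    """Naive attention: materializes the full N x N score matrix.

    On a GPU, S and P below are each written out to HBM and read
    back in, which is exactly the memory traffic FlashAttention
    avoids by fusing all three steps into a single kernel.
    """
    d = Q.shape[-1]
    # step 1: N x N similarity scores ("write S to HBM")
    S = Q @ K.T / np.sqrt(d)
    # step 2: row-wise softmax ("read S, write P to HBM")
    P = np.exp(S - S.max(axis=-1, keepdims=True))
    P = P / P.sum(axis=-1, keepdims=True)
    # step 3: weighted sum of values ("read P, write O to HBM")
    return P @ V

# S and P grow as O(N^2) with sequence length N, so at long context
# lengths the intermediates dwarf the inputs Q, K, V (each O(N * d)):
N, d = 1024, 64
Q, K, V = (np.random.randn(N, d) for _ in range(3))
O = standard_attention(Q, K, V)
```

Note that the output `O` is only O(N·d); it is the throwaway intermediates that make the naive algorithm memory-bound.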
The core idea behind FlashAttention boils down to this: instead of storing each intermediate result, why don't we use kernel fusion and run every operation in a single kernel in order to avoid memory read/write overhead? (We also talked about kernel fusion in our episode with George Hotz, and how PyTorch / tinygrad take different approaches here.) The result is much faster, but much harder to read. As you can see, FlashAttention is a very meaningful speed improvement on traditional Attention, and it's easy to understand why it's becoming the standard for most models.

This should be enough of a primer before you dive into our episode! We talked about FlashAttention-2, how the Hazy Research group works, and some of the research being done on Transformer alternatives.

Show Notes:

* FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness (arXiv)
* FlashAttention-2
* Together AI
* From Deep Learning to Long Learning
* The Hardware Lottery by Sara Hooker
* Hazy Research
* Is Attention All You Need?
* Nvidia CUTLASS 3
* SRAM scaling slows
* Transformer alternatives:
  * S4
  * Hyena
  * Recurrent Neural Networks (RNNs)

Timestamps:

* Tri's background [00:00:00]
* FlashAttention deep dive [00:02:18]
* How the Hazy Research group collaborates across theory, systems, and applications [00:17:21]
* Evaluating models beyond raw performance [00:25:00]
* FlashAttention-2 [00:27:00]
* CUDA and The Hardware Lottery [00:30:00]
* Researching in a fast-changing market [00:35:00]
* Promising transformer alternatives like state space models and RNNs [00:37:30]
* The spectrum of openness in AI models [00:43:00]
* Practical impact of models like LLaMA 2 despite restrictions [00:47:12]
* Incentives for releasing open training datasets [00:49:43]
* Lightning Round [00:53:22]

Transcript:

Alessio: Hey everyone, welcome to the Latent Space podcast. This is Alessio, Partner and CTO-in-Residence at Decibel Partners. Today we have no Swyx, because he's in Singapore, so it's a one-on-one discussion with Tri Dao. Welcome!
[00:00:24]

Tri: Hi everyone. I'm Tri Dao, excited to be here. [00:00:27]

Alessio: Tri just completed his PhD at Stanford a month ago. You might not remember his name, but he's one of the main authors of the FlashAttention paper, which is one of the seminal works of the Transformers era. His interests span efficient transformer training and inference, long-range sequence models, a lot of interesting stuff. And now you're going to be an assistant professor in CS at Princeton next year. [00:00:51]

Tri: Yeah, that's right. [00:00:52]

Alessio: Yeah. And in the meantime, just to get, you know, a low-pressure thing, you're Chief Scientist at Together as well, which is the company behind RedPajama. [00:01:01]

Tri: Yeah. So I just joined this week actually, and it's been really exciting. [00:01:04]

Alessio: So what's something that is not on the internet that people should know about you? [00:01:09]

Tri: Let's see. When I started college, I was going to be an economist, so I was fully on board. I was going to major in economics, but the first week I was at Stanford undergrad, I took a few math classes and I immediately decided that I was going to be a math major. And that kind of changed the course of my career. So now I'm doing math, computer science, AI research. [00:01:32]

Alessio: I had a similar thing. I started with physics and then I took like a programming course and I was like, I got to do computer science. I don't want to do physics. So FlashAttention is definitely, everybody's using this. Everybody loves it. You just released FlashAttention-2 last week. [00:01:48]

Tri: Yeah. Early this week on Monday. Yeah. [00:01:53]

Alessio: You know, AI time. Things move fast. So maybe let's run through some of the FlashAttention highlights, some of the innovation there, and then we can dive into FlashAttention-2. So the core improvement in FlashAttention is that traditional attention is quadratic in sequence length.
FlashAttention, instead, is linear, which obviously helps with scaling some of these models. [00:02:18]

Tri: There are two factors there. So of course the goal has been to make attention go faster or more memory-efficient. And ever since attention became popular in 2017 with the Transformer paper, lots and lots of folks have been working on this. And a lot of approaches have been focused on approximating attention. The goal is you want to scale to longer sequences. There are tons of applications where you want to do that. But scaling to longer sequences is difficult because attention scales quadratically in sequence length on both runtime and memory, as you mentioned. So instead of trying to approximate attention, we were trying to figure out, can we do the same computation and maybe be more memory-efficient? So in the end, we ended up with memory that is linear in sequence length. In terms of computation, it's still quadratic, but we managed to make it much more hardware-friendly. And as a result, we do get a wall-clock speedup on the order of 2 to 4x, which really helps, because that just means that you'll be able to train with 2 to 4x longer sequence length for the same cost, without doing any approximations. As a result, lots of folks have been using this. The thing is available in a lot of libraries that do language model training or fine-tuning. [00:03:32]

Alessio: And the approximation thing is important because this is an exact thing versus a sparse. So maybe explain a little bit the difference there. [00:03:40]

Tri: For sure. So in attention, essentially you compute pairwise similarity between every single element in a sequence against each other. So there have been other approaches where, instead of doing all that pairwise computation, you only compute similarity for some pairs of elements in the sequence. So you don't do a quadratic number of comparisons. And this can be seen as some form of sparsity. Essentially you're ignoring some of the elements.
When you write down the matrix, you essentially say, OK, I'm going to pretend these are zero. So that has some benefits in terms of runtime and memory. But the trade-off is that it tends to do worse in terms of quality, because you're essentially approximating or ignoring some elements. And I personally have worked on this as well for a few years. But when we talked to practitioners who actually train models, especially at large scale, they said they tend not to use these approximate attention methods. Because it turns out, and this was surprising to me at the time, these approximation methods, even though they perform fewer computations, tend not to be faster in wall-clock time. So this was pretty surprising, because back then, I think, my background was more on the theoretical side. So I was thinking of, oh, how many FLOPS or floating point operations are you performing? And hopefully that correlates well with wall-clock time. But I realized that I was missing a bunch of ideas from the systems side, where FLOPS or floating point operations don't necessarily correlate with runtime. There are other factors like memory reading and writing, parallelism, and so on. So I learned a ton from just talking to systems people, because they kind of figured this stuff out a while ago. So that was really eye-opening. And then we ended up focusing a lot more on memory reading and writing, because that turned out to be the majority of the time when you're doing attention: reading and writing memory. [00:05:34]

Alessio: Yeah, the IO-awareness is probably one of the biggest innovations here. And the idea behind it is, like you mentioned, the FLOPS of the cards have been going up, but the memory bandwidth, not as much. So I think maybe that was one of the assumptions that the original attention paper had. So talk a bit about how that came to be as an idea.
It's one of those things that, like, in hindsight it's obvious: why are we rewriting to HBM every time, you know? And once you change it, it's clear. But what was that discovery process? [00:06:08]

Tri: Yeah, in hindsight, a lot of the ideas had already been there in the literature. And I would say it was somehow at the intersection of both machine learning and systems, and you kind of needed ideas from both sides. So on one hand, on the systems side, lots of systems folks have known that, oh, you know, kernel fusion is great. Kernel fusion just means that instead of loading an element, performing an operation, writing it down, loading it back up, and performing the second operation, you just load it once, perform two operations, and then write it down again. So that saves you the memory read and write in the middle there. So kernel fusion has been a classic. There have been other techniques from the systems side, like tiling, where you perform computations in blocks, again so that you can load them into a really fast memory. Think of it as a cache. And these are, again, classical computer science ideas, right? You want to use the cache. So the systems folks have been thinking about these ideas for a long time, and they apply to attention as well. But there were certain things in attention that made it difficult to do a complete kernel fusion. One of which is this softmax operation in the middle, which requires you to essentially sum across the row of the attention matrix. So there's this dependency, which makes it difficult to break things into blocks. So on the systems side, people had been thinking about these ideas, but it had been difficult to do kernel fusion for the entire operation. On the machine learning side, people had been thinking more algorithmically.
They say, okay, either we can approximate attention, or there's this trick called the online softmax trick, which says that because of the way softmax is written mathematically, you can actually break it up into smaller pieces, do some rescaling, and still get the right answer. So this online softmax trick has been around for a while. I think there was a paper from NVIDIA folks back in 2018 about this. And then there was a paper from Google: Markus Rabe and Charles Staats wrote a paper in late 2021 on using this online softmax trick to break attention up into smaller pieces. So a lot of the ideas were already there. But it turns out, you kind of need to combine ideas from both sides. So you need to understand that, hey, we want to do kernel fusion to reduce memory reads and writes. But we also need this online softmax trick to be able to break the softmax into smaller pieces, so that a lot of the systems tricks carry through. We saw that, and it was kind of a natural idea that we ended up using ideas from both sides, and it ended up working pretty well. Yeah. [00:08:57]

Alessio: Are there any downsides to kernel fusion? If I think about databases and the reasons why we have atomic operations, you know, it's like, you have observability and fallback in between them. How does that work with attention? Is there anything that we lose by fusing the operations? [00:09:13]

Tri: Yeah, I think mostly on the practical side is that you lose a little bit of flexibility, in the sense that, hey, now you have faster attention, but it's just a subroutine that you would call to do attention. But as a researcher, let's say you don't want that exact thing, right? You don't want just attention; let's say you want some modification to attention. You want to do, hey, I'm going to multiply the query and key, but then I'm going to do this extra thing before I carry on. So kernel fusion just means that, okay, we have a subroutine that does the entire thing.
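The online softmax trick Tri describes can be sketched in a few lines. Below is a minimal, illustrative NumPy version for a single attention row (the names and block size are our own, not from the FlashAttention code): it streams the scores block by block, keeping a running max, a running normalizer, and a running unnormalized output, and rescales all three whenever a new block raises the max, so the full softmax row is never materialized.

```python
import numpy as np

def blockwise_softmax_attention_row(scores, values, block=4):
    """One attention row, softmax(scores) @ values, computed block
    by block with the online softmax trick."""
    m = -np.inf                        # running max of scores seen so far
    l = 0.0                            # running sum of exp(score - m)
    acc = np.zeros(values.shape[-1])   # running unnormalized output
    for start in range(0, len(scores), block):
        s = scores[start:start + block]
        v = values[start:start + block]
        m_new = max(m, s.max())
        scale = np.exp(m - m_new)      # rescale old stats to the new max
        p = np.exp(s - m_new)          # this block's unnormalized weights
        l = l * scale + p.sum()
        acc = acc * scale + p @ v
        m = m_new
    return acc / l                     # normalize once at the end
```

FlashAttention applies this idea per tile of the score matrix inside one fused GPU kernel, which is what lets the softmax dependency be broken into blocks without ever writing the full N x N matrix to HBM.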
But if you want to experiment with things, you won't be able to use that fused kernel. And the answer is, can we have a compiler that then automatically does a lot of this kernel fusion? Lots of compiler folks are thinking about this, either with a new language, or you can embed it in PyTorch. The PyTorch folks have been working on this as well. So if you just write your code in PyTorch and they can capture the graph, can they generate code that will fuse everything together? That's still ongoing, and it works for some cases. But for attention, because of this softmax rewriting stuff, it's been a little bit more difficult. So maybe in a year or two, we'll have compilers that are able to do a lot of these optimizations for you, and you won't have to, for example, spend a couple of months writing CUDA to get this stuff to work. Awesome. [00:10:41]

Alessio: And just to make it clear for listeners, when we say we're not writing it to memory, we are storing it, but just in a faster memory. So instead of the HBM, we're putting it in the SRAM. Yeah. [00:10:53]

Tri: Yeah. [00:10:54]

Alessio: Maybe explain just a little bit the difference there. [00:10:56]

Tri: Yeah, for sure. This is kind of a caricature of how you think about accelerators, or GPUs in particular: they have a large pool of memory, usually called HBM, or high bandwidth memory. So this is what you think of as GPU memory. If you're using an A100 and you list the GPU memory, it's like 40 gigs or 80 gigs. So that's the HBM. And then when you perform any operation, you need to move data from the HBM to the compute unit, the actual hardware unit that does the computation. And next to these compute units, there is on-chip memory, or SRAM, which is much, much smaller than HBM, but much faster. The analogy there is, if you're familiar with, say, CPU and RAM and so on: you have a large pool of RAM, and then you have the CPU performing the computation.
But next to the CPU, you have L1 cache and L2 cache, which are much smaller than DRAM, but much faster. So you can think of SRAM as the small, fast cache that stays close to the compute unit. Physically, it's closer. There is some kind of asymmetry here: HBM is much larger, and SRAM is much smaller, but much faster. One way of thinking about it is, how can we design algorithms that take advantage of this asymmetric memory hierarchy? And of course, lots of folks have been thinking about this. These ideas are pretty old. I think back in the 1980s, one of the primary concerns was sorting: how can we sort numbers as efficiently as possible? And the motivating example was banks trying to sort their transactions, which needed to happen overnight so that everything would be ready the next day. The same idea applied: they had slow memory, which was hard disk, and fast memory, which was DRAM, and people had to design sorting algorithms that took advantage of this asymmetry. And it turns out these same ideas can apply today, just with different kinds of memory. [00:13:00]

Alessio: In your paper, you have the pyramid of memory. Just to give people an idea, when he says smaller, it's like HBM is 40 gigs and SRAM is 20 megabytes. So it's not a little smaller, it's much smaller. But the throughput on card is 1.5 terabytes a second for HBM and 19 terabytes a second for SRAM, which is a lot higher. How do you think that evolves? So TSMC said they hit the scaling limits for SRAM; they just cannot grow that much more. HBM keeps growing: HBM3 is going to be 2x faster than HBM2, and I think the latest NVIDIA hardware has HBM3. How do you think about the future of FlashAttention? Do you think HBM is going to get fast enough that maybe it's not as useful to use the SRAM? [00:13:49]

Tri: That's right. I think it comes down to physics. When you design hardware, SRAM literally stays very close to the compute units.
And so you don't have that much area to put the transistors, and you can't shrink these things too much. So just from physics, in terms of area, you don't have that much area for the SRAM. HBM is off-chip, so there is some kind of bus that transfers data from HBM to the compute unit, and you have more area to put these memory units. And so yeah, I think in the future SRAM probably won't get that much larger, because you don't have that much area. HBM will get larger and faster. And so I think it becomes more important to design algorithms that take advantage of this memory asymmetry. It's the same thing on CPU, where the cache is really small and the DRAM is growing larger and larger. DRAM could get to, I don't know, two terabytes, six terabytes, or something, whereas the cache stays at, I don't know, 15 megabytes or something like that. So I think algorithm design becomes more and more important. There are still ways to take advantage of this, I think. So FlashAttention right now is being used. I don't know if in the next couple of years some new architecture will come in and whatnot, but attention seems to be still important. For the next couple of years, I still expect some of these ideas to be useful. Not necessarily the exact code that's out there, but these ideas have kind of stood the test of time: ideas like IO-awareness from back in the 1980s, ideas like kernel fusion, tiling. These are classical ideas that have stood the test of time. So I think in the future, these ideas will become more and more important as we scale models to be larger, and as we have more kinds of devices where performance and efficiency become much, much more important. [00:15:40]

Alessio: Yeah, and we had Jonathan Frankle on the podcast, and if you go to issattentionallyouneed.com, he has an outstanding bet, and he does believe that attention will still be the state-of-the-art architecture in a few years.
Did you think FlashAttention would be this popular? I'm always curious on the research side: you publish a paper, and obviously you know it's great work, but sometimes it just kind of falls flat in the industry. Could you see everybody just starting to use this, or was that a surprise to you? [00:16:11]

Tri: Certainly, I didn't anticipate the level of popularity. Of course, we were extremely happy to have people using this stuff, giving us feedback and so on, and helping us improve things. I think when we were writing the paper, I remember sending an email to one of my advisors, like, hey, I'm excited about this paper, but I think the most important thing will be the artifact, which is the code. So I knew that the code would be valuable, and we focused a lot on the code and made sure that it was usable and as fast as it could be. Of course, the paper presents the ideas, explains them, and has experiments that validate them, but I knew that the artifact, the code, was also pretty important. And that turned out to be the right focus, which is, you know, we put out the paper, we released the code, and continued working on the code. So it's a team effort with my co-authors as well. [00:17:07]

Alessio: We've mentioned Hazy Research a bunch of times on the podcast before. I would love for you to spend five minutes just talking about how the group works. How do people get together? How do you bounce ideas off of each other? Yeah. [00:17:21]

Tri: So Hazy Research is a research group at Stanford led by one of my advisors, Chris Ré. I love the people there. It was one of the best experiences I had. They've made my PhD so much more enjoyable. And I think there are a couple of ways that the group has been working pretty well. So one is, I think, there's a diverse pool of people: some of them focus on algorithms and theory, some of them focus on building systems, some of them focus on applications.
And as a result, there is this flow of ideas. So as an example, some of us were working on more algorithms and theory, and then we could talk to the folks building systems and say, hey, let's try it out, let's put it in the systems and see how it does. And there you get feedback from the systems folks. They will say, hey, we implemented this, or we tried this and this is where it doesn't work, something like that. And once we put it in the systems, the application folks can use the algorithm or new methods or new models. And we again get great feedback from them, because the application folks, for example, some of my good friends, focus on medical imaging or seizure detection. And that is the problem they care about. And if your method doesn't work on the task they care about, they will tell you. Whereas I think a lot of people in machine learning are a little bit more flexible. So they will be like, hey, it doesn't work on seizure detection? Let's try some other task, right? But having that direct feedback of, hey, it doesn't work there, let's figure out why, allows us to do better work. And I think that kind of process of exchanging ideas, validating them in a real system so that application folks can try them out and give you feedback, that cycle has been very, very useful. So that's one: having a diverse group of people. The other one, and this is advice I really appreciated from Chris, was: try to understand the fundamentals, right? And he's happy letting me go off and read some textbooks and play with things, because I think a lot of research ideas come from understanding the old literature and seeing how it fits with the new landscape. And so if you just read new arXiv papers every day, that's great, but you also need to read textbooks. And that's one piece of advice I got from Chris: understand the fundamentals. And I think that allows us to do more impactful work.
[00:19:46]

Alessio: How do you think about academia versus industry? I feel like AI / Machine Learning has been an area where, up until three or four years ago, most of the cutting-edge work was being done in academia. And now there are all these big industry research labs. You're obviously going to Princeton, so you're an academia believer. How should people think about where to go? Say I'm doing my master's and I have to decide between doing a PhD and going to OpenAI or Anthropic. How should I decide? [00:20:15]

Tri: I think they kind of play complementary roles, in my opinion. Of course, I also considered different paths as well. So I think right now, scaling matters a lot, especially when you talk about language models and AI and so on. And that means that you need compute resources, you need infrastructure, and you need engineers' time. And so industry tends to have an advantage when it comes to scaling things. But a lot of the ideas actually came from academia. So let's take attention, which got popular with the Transformer in 2017. Attention has actually been around for a while. I think the first mention was in 2014, in a paper from Bahdanau and others with Yoshua Bengio, which came from academia. A lot of ideas did come from academia. And scaling things up, of course, I think OpenAI has been great at. That was the bet they made after, I think, GPT-2. They saw that scaling these things up (back then to 1.5 billion parameters) seemed to give you amazing capabilities, so they really committed to that. They really committed to scaling things, and that turned out to be a pretty successful bet. I think for academia, we're still trying to figure out exactly what we're doing in this shifting landscape. And so lots of folks have been focusing on, for example, evaluation. So I know the Stanford Center for Research on Foundation Models, led by Percy, has this benchmark called HELM, which is a holistic benchmark.
So they're trying to figure out, okay, how to characterize the landscape of different kinds of models, what people should evaluate, what people should measure, and things like that. So evaluation is one role. The other one is understanding. This has happened historically, where there's been some development in industry, and academia can play a role in explaining and understanding. Academics have the luxury of slowing down to try to understand stuff, right? So there are lots of papers on understanding what's really going on, probing these models, and so on. I'm not as familiar with the NLP literature, but my impression is there's a lot of that going on at the NLP conferences: understanding what these models are doing, what capabilities they have, and so on. And the third one I could see is that academia can take riskier bets, in the sense that we can work on stuff that is quite different from industry. My impression is that in industry, you have some objective. You're trying to say, hey, for this quarter, we want to scale the model in this particular way. Next quarter, we want the model to have these capabilities. You're choosing objectives that maybe have a 70% chance of working out, because they're important for the company's direction. I think for academia, the way things work is you have many, many researchers or PhD students, and they're pursuing independent directions. And they have a little bit more flexibility: hey, I'm going to try out this seemingly crazy idea, where let's say there's a 30% chance of success. And however you define success, for academia, a lot of the time, success just means, hey, we found something interesting. That could eventually go into industry through collaboration and so on. So I do see academia and industry playing complementary roles.
And as for someone choosing a career: I think, just generally, industry would probably be better in terms of compensation and probably work-life balance. But my biased perspective is that academia gives you a little bit more freedom to think and understand things. So it probably comes down to personal choice. I ended up choosing to be a professor next year at Princeton. But of course, I want to maintain a relationship with industry folks. I think industry folks can provide very valuable feedback on what we're doing in academia, so that we understand where the field is moving, because some of the directions are very much influenced by what, for example, OpenAI or Google is doing. So we want to understand where the field is moving and what some promising applications are, and try to anticipate: okay, if the field is moving like this, these applications are going to be popular, and these problems will be important in two or three years. And then we start thinking about those problems now, so that hopefully in two or three years we have some of the answers. Sometimes it works out, sometimes it doesn't. But as long as we do interesting things in academia, that's the goal. [00:25:03]

Alessio: And you mentioned the eval side. So we did a Benchmarks 101 episode, and one of the things we were seeing is that sometimes the benchmarks really influence the model development, because obviously, if you don't score well on the benchmarks, you're not going to get published and you're not going to get funded. How do you think about that? How do you think that's going to change now that a lot of the applications of these models, again, are in more narrow industry use cases? Do you think the goal of the academia eval system is to be very broad, and then industry can do their own evals? Or what's the relationship there? [00:25:40]

Tri: Yeah, so I think evaluation is important and often a little bit underrated.
So it's not as flashy as, oh, we have a new model that can do such and such. But essentially, what you don't measure, you can't make progress on. So industry folks, of course, have specific use cases that their models need to do well on, and that's what they care about. But it's not just academia; other groups as well. People do understand what some of the emerging use cases are. So for example, now one of the most popular use cases is chatbots. And I think folks from Berkeley, the LMSYS group, set up this Chatbot Arena to essentially benchmark different models. So people do understand what some of the emerging use cases are. People do contribute to evaluation and measurement. And as a whole, I think people try to contribute to the field and move the field forward, albeit in maybe slightly different directions. But we're making progress, and evaluation and measurement is definitely one of the ways you make progress. So I think going forward, there are still going to be more models and more evaluation, and we'll just have a better understanding of what these models are doing and what capabilities they have. [00:26:56]

Alessio: I like that your work has been focused not on making benchmarks better, but on, let's just make everything faster. So it's very horizontal. So FlashAttention-2: you just released that on Monday. I read in the blog post that a lot of the work was also related to some of the NVIDIA library updates. Maybe run us through some of those changes and some of the innovations there. Yeah, for sure. [00:27:19]

Tri: So FlashAttention-2 is something I've been working on for the past couple of months. The story is that the NVIDIA CUTLASS team released a new version of their library, which contains all these primitives that allow you to do matrix multiplies or memory loading on GPUs efficiently. It's a great library, and I built on that.
So they released their version 3 back in January, and I got really excited and wanted to play with that library. So as an excuse, I was just like, okay, I'm going to refactor my code and use this library. So that was kind of the start of the project. By the end, I just ended up working with the code a whole lot more, and I realized that, hey, there are these inefficiencies still in FlashAttention. We could change this way or that way and make it, in the end, twice as fast. But of course, building on the library that the NVIDIA folks released. So that was kind of a really fun exercise. Starting out, it was just an excuse for myself to play with the new library. What it ended up being was several months of improving FlashAttention, discovering new ideas. And in the end, we managed to make it 2x faster, and now it's pretty close to probably the efficiency of things like matrix multiply, which is probably the most optimized subroutine on the planet. So we're really happy about it. The NVIDIA CUTLASS team has been very supportive, and hopefully in the future we're going to collaborate more. [00:28:46]Alessio: And since it's an NVIDIA library, can you only run this on CUDA runtimes? Or could you use this and then run it on an AMD GPU? [00:28:56]Tri: Yeah, so it's an NVIDIA library. So right now, the code we release runs on NVIDIA GPUs, which is what most people are using to train models. Of course, other hardware is emerging as well. So the AMD folks did implement a version of FlashAttention, I think last year as well, and that's also available. I think there are some implementations on CPU as well. For example, there's this library, ggml, where they implemented the same idea running on Mac and CPU. So I think that, kind of broadly, the idea would apply. The current implementation ended up using NVIDIA's library or primitives, but I expect these ideas to be broadly applicable to different hardware.
I think the main idea is you have asymmetry in the memory hierarchy, which tends to be everywhere in a lot of accelerators. [00:29:46]Alessio: Yeah, it kind of reminds me of Sara Hooker's post, like the hardware lottery. There could be all these things that are much better, like architectures that are better, but they're not better on NVIDIA. So we're never going to know if they're actually improved. How does that play into some of the research that you all do too? [00:30:04]Tri: Yeah, so absolutely. Yeah, I think Sara Hooker, she wrote this piece on the hardware lottery, and I think she captured really well what a lot of people have been thinking about this. And I certainly think about the hardware lottery quite a bit, given that I do some of the work that's kind of really low level, at the level of, hey, we're optimizing for GPUs or NVIDIA GPUs and optimizing for attention itself. And at the same time, I also work on algorithms and methods and transformer alternatives. And we do see this effect in play, not just the hardware lottery, but also kind of a software framework lottery. You know, attention has been popular for six years now. And so many engineering hours have been spent on making it as easy and efficient as possible to run transformers, right? And there are libraries to do all kinds of tensor parallel, pipeline parallel, if you use a transformer. Let's say someone else developed alternatives, or let's just take recurrent neural nets, like LSTMs, GRUs. If we want to do that and run that efficiently on current hardware with current software frameworks, that's quite a bit harder. So in some sense, there is this feedback loop where somehow the model architectures that take advantage of hardware become popular. And the hardware will also kind of evolve to optimize a little bit for that kind of architecture, and the software frameworks will also evolve to optimize for that particular architecture. Right now, transformer is the dominant architecture.
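The memory-hierarchy asymmetry Tri mentions at the top of this exchange is easy to see with a back-of-envelope calculation. The sketch below (the sequence length, head dimension, and fp16 byte count are illustrative assumptions, not figures from the conversation) estimates how much slow-memory (HBM) traffic naive attention generates relative to its arithmetic, which is the bottleneck FlashAttention's tiling attacks:

```python
# Back-of-envelope: naive attention materializes an n x n score matrix in
# slow GPU memory (HBM), so bytes moved grow with n^2 while fast on-chip
# SRAM stays tiny. All numbers here are illustrative assumptions.

def naive_attention_hbm_bytes(n, d, dtype_bytes=2):
    qkv = 3 * n * d * dtype_bytes      # read Q, K, V
    scores = 2 * n * n * dtype_bytes   # write, then re-read, the n x n matrix
    out = n * d * dtype_bytes          # write the output
    return qkv + scores + out

def attention_flops(n, d):
    # QK^T and (softmax scores) @ V are each roughly 2 * n^2 * d FLOPs
    return 4 * n * n * d

n, d = 4096, 128  # assumed sequence length and head dimension
bytes_moved = naive_attention_hbm_bytes(n, d)
flops = attention_flops(n, d)
# Arithmetic intensity: FLOPs per byte of HBM traffic. GPUs need a high
# ratio to stay compute-bound; the n^2 score traffic drags it down.
print(f"HBM traffic: {bytes_moved / 1e6:.0f} MB")
print(f"intensity : {flops / bytes_moved:.0f} FLOP/byte")
```

Avoiding the round trip for the score matrix (by keeping tiles in SRAM) removes the dominant `2 * n * n` term, which is the rough intuition for why the tiled kernel gets closer to matrix-multiply efficiency.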
So yeah, I'm not sure if there is a good way out of this. Of course, there's a lot of development. Things like, I think compilers will play a role, because compilers allow you to maybe still be much more efficient across different kinds of hardware, because essentially you write the same code and the compiler will be able to make it run efficiently on different kinds of hardware. So for example, there's this language Mojo, they're compiler experts, right? And their bet is AI models will be running on different kinds of devices. So let's make sure that we have really good compilers with a good language that then the compiler can do a good job optimizing for all kinds of devices. So that's maybe one way that you can get out of this cycle. But yeah, I'm not sure of a good way. In my own research, I have to think about both the algorithm, the new model, and how it maps to hardware. So there are crazy ideas that seem really good but will be really, really difficult to run efficiently. And so as a result, for example, we can't really scale some of the architectures up simply because they're not hardware friendly. I have to think about both sides when I'm working on new models. [00:32:50]Alessio: Yeah. Have you spent any time looking at some of the new kind of AI chip companies, so to speak, like the Cerebras of the world? Like one of their innovations is co-locating everything on the chip. So you remove some of this memory bandwidth issue. How do you think about that? [00:33:07]Tri: Yeah, I think that's an interesting bet. I think Tesla also has this Dojo supercomputer where they try to have essentially as fast on-chip memory as possible and remove some of this data transfer back and forth. I think that's a promising direction. The issues I could see, you know, I'm definitely not a hardware expert. One issue is the on-chip memory tends to be really expensive to manufacture, much more expensive per gigabyte compared to off-chip memory.
So I talked to, you know, some of my friends at Cerebras and, you know, they have their own stack and compiler and so on, and they can make it work. The other kind of obstacle is, again, the compiler and software framework and so on. For example, if you can run PyTorch on this stuff, lots of people will be using it. But supporting all the operations in PyTorch will take a long time to implement. Of course, people are working on this. So I think, yeah, we kind of need these different bets on the hardware side as well. Hardware, my understanding is, has a kind of longer time scale. So you need to design hardware, you need to manufacture it, you know, maybe on the order of three to five years or something like that. So people are taking different bets, but the AI landscape is changing so fast that it's hard to predict, okay, what kind of models will be dominant in, let's say, three or five years. Or thinking back five years ago, would we have known that the transformer would be the dominant architecture? Maybe, maybe not, right? And so different people will make different bets on the hardware side. [00:34:39]Alessio: Does the pace of the industry and the research also influence the PhD research itself? For example, in your case, you're working on improving attention. It probably took you quite a while to write the paper and everything, but in the meantime, you could have had a new model architecture come out and then it's like nobody cares about attention anymore. How do people balance that? [00:35:02]Tri: Yeah, so I think it's tough. It's definitely tough for PhD students, for researchers. Given that the field is moving really, really fast, I think it comes down to understanding the fundamentals. Because that's essentially, for example, what the PhD allows you to do. You spend a couple of years understanding the fundamentals.
So for example, when I started my PhD, I was working on understanding matrix-vector multiply, a concept that's been around for hundreds of years. We were trying to characterize what kinds of matrices would have theoretically fast multiplication algorithms. That seems to have nothing to do with AI or anything. But I think that was the time when I developed mathematical maturity and research taste and research skill. The research topic at that point didn't have to be super trendy or anything; as long as I was developing skills as a researcher, I was making progress. And eventually, I've gotten quite a bit better in terms of research skills. And that allows, for example, PhD students later in their career to quickly develop solutions to whatever problems they're facing. So I think that's just the natural arc of how you're being trained as a researcher. For a lot of PhD students, I think given the pace is so fast, maybe it's harder to justify spending a lot of time on the fundamentals. And it's tough. It's this kind of explore-exploit dilemma. And I don't think there's a universal answer. So I personally spend some time doing this kind of exploration, reading random textbooks or lecture notes. And I spend some time keeping up with the latest architectures or methods and so on. I don't know if there's a right balance. It varies from person to person. But if you only spend 100% on one, either you only do exploration or only do exploitation, I think it probably won't work in the long term. It's probably going to have to be a mix, and you have to just experiment and kind of be introspective and say, hey, I tried this kind of mixture of, I don't know, one exploration paper and one exploitation paper. How did that work out for me? And have a conversation with, for example, my advisor about, like, hey, did that work out? Should I shift and focus more on one or the other? I think quickly adjusting and focusing on the process.
I think that's probably the right way. I don't have, like, a specific recommendation that, hey, you focus, I don't know, 60% on lecture notes and 40% on arXiv papers or anything like that. [00:37:35]Alessio: Let's talk about some transformer alternatives. You know, say Jonathan Frankle loses his bet and the transformer is not the state-of-the-art architecture. What are some of the candidates to take over? [00:37:49]Tri: Yeah, so this bet is quite fun. So my understanding is this bet is between Jonathan Frankle and Sasha Rush, right? I've talked to Sasha a bunch, and I think he recently gave an excellent tutorial on transformer alternatives as well. So I would recommend that. So just to quickly recap, I think there's been quite a bit of development more recently on transformer alternatives. So architectures that are not transformers, right? And the question is, can they do well on, for example, language modeling, which is kind of the application that a lot of people care about these days? So there are methods based on state space methods, which came out in 2021 from Albert Gu, Karan Goel, and Chris Ré, that presumably could do much better in terms of capturing long-range information while not scaling quadratically. They scale sub-quadratically in terms of sequence length. So potentially you could have a much more efficient architecture when sequence length gets really long. The others have been focusing more on recurrent neural nets, which is, again, an old idea, but adapting it to the new landscape. So things like RWKV. I've also personally worked in this space as well. So there have been some promising results. There have been some results here and there that show that, hey, these alternatives, either RNNs or state space methods, can match the performance of transformers on language modeling. So that's really exciting. And on the academic research side, we want to understand: do we really need attention?
I think that's a valuable kind of intellectual thing to understand. And maybe we do, maybe we don't. If we want to know, we need to spend serious effort on trying the alternatives. And there have been folks pushing in this direction. I think RWKV has scaled up; they have a model at 14 billion parameters that seems pretty competitive with transformers. So that's really exciting. That's kind of the intellectual thing: we want to figure out if attention is necessary. So that's one motivation. The other motivation is that transformer alternatives could have an advantage in practice in some of the use cases. So one use case is really long sequences. The other is really high generation throughput. So for really long sequences, when you train with a transformer, with FlashAttention and so on, the computation is still quadratic in the sequence length. So if your sequence length is on the order of, I don't know, 16K, 32K, 100K or something, and some of these models have sequence length 100K, then you do get significantly slower in terms of training, and also in terms of inference. So maybe these alternative architectures could scale better in terms of sequence length. I haven't seen actual validation of this. Let's say an RNN model released with context length, I don't know, 100K or something. I haven't really seen that. But the hope could be that as we scale to long sequences, these alternative architectures could be better suited. Not just text, but things like high-resolution images, audio, video, and so on, which are emerging applications. So that's one, long sequences. Number two is high-throughput generation, where I can imagine scenarios where the application isn't an interactive chatbot, but, let's say, a company wants to batch as many requests as possible on their server, or they're doing offline processing, generating stuff based on their internal documents that you need to process in batch.
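The quadratic-versus-sub-quadratic scaling Tri describes for long sequences can be illustrated with rough asymptotic cost counts. This is a toy sketch, not a benchmark: the cost formulas are the standard big-O approximations and the model dimension `d` is an arbitrary assumption.

```python
# Illustrative comparison of total sequence cost for attention (quadratic
# in context length n) versus a fixed-state RNN/SSM (linear in n).
# d is an assumed model dimension; all numbers are for illustration only.

def attention_cost(n, d):
    # attention over a full sequence costs on the order of n^2 * d
    return n * n * d

def recurrent_cost(n, d):
    # an RNN/SSM performs one fixed-size state update per token: ~n * d^2
    return n * d * d

d = 1024
for n in (2_048, 16_384, 131_072):  # 2K, 16K, 128K contexts
    ratio = attention_cost(n, d) / recurrent_cost(n, d)
    print(f"n={n:>7}: attention / recurrent cost ratio ~ {ratio:.0f}x")
```

The ratio simplifies to `n / d`, which is why the gap is negligible at 2K context but grows large at 100K-plus contexts, matching the point that alternatives mainly pay off once sequences get really long.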
And the issue with the transformer is that during generation, it essentially needs to keep around all the previous history. It's called the KV cache. And that can take a significant amount of memory, so you can't really batch too much because you run out of memory. I am personally bullish on RNNs. I think RNNs essentially summarize the past into a state vector that has fixed size, so the size doesn't grow with the history. So that means you don't need as much memory to keep around all the previous tokens. And as a result, I think you can scale to much higher batch sizes. And as a result, you can make much more efficient use of the GPUs or the accelerators, and you could have much higher generation throughput. Now, this, I don't think, has been validated at scale. So as a researcher, I'm bullish on this stuff because I think in the next couple of years, these are use cases where these alternatives could have an advantage. We'll just kind of have to wait and see if these things will happen. I am personally bullish on this stuff. At the same time, I also spend a bunch of time making attention as fast as possible. So maybe hedging and playing both sides. Ultimately, as researchers, we want to understand what works, and why do the models have these capabilities? And one way is, let's push attention to be as efficient as possible. On the other hand, let's push the other alternatives to be as efficient, and at as big a scale as possible, so that we can compare them and understand. Yeah, awesome. [00:43:01]Alessio: And I think as long as all of this work happens in the open, it's a net positive for everybody to explore all the paths. Yeah, let's talk about open-source AI. Obviously, Together, when Red Pajama came out, which was an open clone of the LLAMA1 pre-training dataset, it was a big thing in the industry. LLAMA2 came out on Tuesday, I forget.
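The KV-cache memory pressure Tri described just above can be made concrete with a rough sizing sketch. The model shape below is a LLaMA-7B-like assumption (32 layers, 32 heads, head dimension 128, fp16) chosen for illustration; none of these numbers come from the episode.

```python
# Sketch: a transformer must cache keys and values for every past token,
# while an RNN keeps one fixed-size state per sequence regardless of
# history length. Dimensions are LLaMA-7B-like assumptions, fp16 weights.

def kv_cache_bytes(batch, seq_len, layers=32, heads=32, head_dim=128,
                   dtype_bytes=2):
    # 2 tensors (K and V) per layer, each of shape
    # [batch, heads, seq_len, head_dim]
    return 2 * layers * batch * heads * seq_len * head_dim * dtype_bytes

def rnn_state_bytes(batch, state_dim=4096, layers=32, dtype_bytes=2):
    # one fixed-size state vector per layer, independent of history length
    return batch * layers * state_dim * dtype_bytes

batch, seq_len = 64, 4096
print(f"KV cache : {kv_cache_bytes(batch, seq_len) / 2**30:.1f} GiB")
print(f"RNN state: {rnn_state_bytes(batch) / 2**20:.1f} MiB")
```

Under these assumptions the KV cache at batch 64 runs to over a hundred GiB while the recurrent state stays in the tens of MiB, which is the intuition behind "you can scale to much higher batch sizes" with a fixed-size state.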
And this week, there have been a lot of things going on that they call open-source, but it's not really open-source. Actually, we wrote a post about it that was on the front page of Hacker News before this podcast, so I was frantically responding. How do you think about what open-source AI really is? In my mind, in open-source software, we have different levels of open. So there's free software, that's like the GPL license. There's open-source, which is Apache, MIT. And then there's kind of restricted open-source, which is the SSPL and some of these other licenses. In AI, you have the open models. So Red Pajama is an open model, because you have the pre-training dataset, you have the training runs and everything. And then there's obviously randomness that doesn't make it one-to-one if you retrain it. Then you have the open-weights models, kind of like StableLM, where the weights are open but the dataset is not open. And then you have LLAMA2, where the dataset is not open and the weights are restricted. It's kind of like not really open-source, but open enough. I think it's net positive because it's like $3 million of flops donated to the public. [00:44:32]Alessio: How do you think about that? And also, as you work at Together, what is your philosophy with open-source AI? [00:44:40]Tri: Right, right. Yeah, I think that's a great question. And I think about it in maybe more practical terms. So of course, Meta has done an amazing job training LLAMA1 and LLAMA2. And for LLAMA2, they made it much less restrictive compared to LLAMA1. Now you can use it for businesses, unless you have more than 700 million monthly active users, or something like that. I think just this change will have a very significant impact on the landscape of open-source AI, where now lots of businesses, lots of companies will be using, I expect, things like LLAMA2. They will fine-tune on their own datasets. They will be serving variants or derivatives of LLAMA2.
Whereas before, with LLAMA1, it was also a really good model, but businesses weren't allowed to do that. So I think in more practical terms, it's kind of shifting the balance between closed-source models, like OpenAI and Anthropic and Google, where you're making API calls, right? And maybe you don't understand as much of what the model is doing, how the model is changing, and so on. Versus now, we have a model with open weights that is pretty competitive, from what I've seen in terms of benchmarks, pretty competitive with GPT 3.5, right? And if you fine-tune it on your own data, maybe it's better suited for your own data. And I do see that's going to shift the balance. More and more folks are going to be using, let's say, derivatives of LLAMA2. More and more folks are going to fine-tune and serve their own models instead of calling an API. That shifting of balance is important because, in one way, we don't want just a concentration of decision-making power in the hands of a few companies. So I think that's a really positive development from Meta. Of course, training the model takes a couple of million dollars, and I'm sure the engineers spent tons of time trying many, many different things. So the actual cost is probably way more than that. And they make the weights available, and probably a lot of companies are going to be using this. So I think that's a really positive development. And we've also seen amazing progress from the open-source community, where they take these models and either fine-tune them on different kinds of datasets or even make changes to the model. So as an example, I think for LLAMA1, the context length was limited to 2K, and a bunch of folks figured out some really simple methods to scale it up to like 8K. [00:47:12]Alessio: Like the RoPE. [00:47:13]Tri: Yes. I think the open-source community is very creative, right? And lots of people.
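One of the "really simple methods" alluded to here is position interpolation for RoPE: instead of feeding the rotary embedding positions beyond the trained range, positions are rescaled so a longer context maps back into the trained one. The sketch below is a simplified illustration of that idea, not Meta's or any particular project's implementation; the dimensions and base are assumed defaults.

```python
# Minimal sketch of RoPE position interpolation for context extension.
# A model trained with 2K positions is asked to handle 8K: positions are
# scaled by train_len / target_len so they stay inside the trained range.

def rope_angles(position, dim=128, base=10000.0):
    # standard rotary frequencies: one angle per pair of channels
    return [position / base ** (2 * i / dim) for i in range(dim // 2)]

train_len, target_len = 2048, 8192
scale = train_len / target_len          # 0.25: squeeze 0..8191 into 0..2047.75
last_scaled = (target_len - 1) * scale  # the furthest position the model sees
angles = rope_angles(last_scaled)
print(f"max scaled position: {last_scaled}")  # stays within the trained range
```

The appeal of the method is that the rotary machinery itself is untouched; only the position indices change, which is why it could be applied to released checkpoints (often with a small amount of fine-tuning) rather than requiring retraining from scratch.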
LLAMA2 will, again, kind of accelerate this, where more people will try it out, more people will make tweaks to it and make contributions, and so on. So overall, I think I see that as still a very positive development for the field. And there have been lots of libraries that allow you to host or fine-tune these models, even with quantization and so on. Just a couple of hours after LLAMA2 was released, tons of companies were announcing that, hey, it's on our API or hosting and so on, and Together did the same. So it's a very fast-paced development, and just a model with available weights that businesses are allowed to use, I think that alone is already a very positive development. At the same time, yeah, we can do much better in terms of releasing datasets. Datasets tend to be... Somehow people are not incentivized to release datasets. So philosophically, yeah, you want to be as open as possible. But in practical terms, I think it's a little bit harder for companies to release datasets. Legal issues. The dataset releases tend to be not as eye-catching as the model releases. So maybe people are less incentivized to do that. We've seen quite a few companies releasing datasets. Together released the RedPajama dataset. I think Cerebras then worked on that, deduplicated and cleaned it up, and released SlimPajama, and so on. So we're also seeing positive development on that front, on the pre-training dataset side. So I do expect that to continue. And then on the fine-tuning or instruction-tuning dataset side, I think we now have quite a few open datasets for instruction tuning and fine-tuning. But these companies do pay for human labelers to annotate these instruction-tuning datasets. And that is expensive. And maybe they see that as their competitive advantage. And so it's harder to incentivize these companies to release these datasets.
So I think in practical terms, we're still going to make a lot of progress on open-source AI, on the model development, the model hosting, the pre-training datasets, and the fine-tuning datasets. Right now, maybe we don't have the perfect open-source model where all the datasets are available. Maybe we don't have such a thing yet, but we've seen very fast development on the open-source side. I think just maybe this time last year, there weren't as many models that are competitive with, let's say, ChatGPT. [00:49:43]Alessio: Yeah, I think the open datasets have so much more impact than open models. If you think about EleutherAI and the work that they've done, GPT-J was great, and the Pythia models are great, but the Pile and the Stack, everybody uses them. So hopefully we get more people to contribute time to work on datasets, instead of doing the 100th open model that performs worse than all the other ones, but they want to say they released a model. [00:50:14]Tri: Yeah, maybe the question is, how do we figure out an incentive structure so that companies are willing to release open datasets? And for example, I think some organizations are now doing this where they ask volunteers to annotate and so on. And maybe the Wikipedia model for datasets, especially for instruction tuning, could be interesting, where people actually volunteer their time and, instead of editing Wikipedia, add annotations. And somehow they are acknowledged and feel incentivized to do so. Hopefully we get to that kind of level where, in terms of data, it would be kind of like Wikipedia. And in terms of model development, it's kind of like Linux, where people are contributing patches and improving the model in some way. I don't know exactly how that's going to happen, but based on history, I think there is a way to get there.
[00:51:05]Alessio: Yeah, I think the Dolly-15K dataset is a good example of a company saying, let's do this smaller thing, just make sure we make it open. We had Mike Conover from Databricks on the podcast, and he was like, people just bought into it and leadership was bought into it. You have companies out there with 200,000, 300,000 employees. It's like, just put some of them to label some data. It's going to be helpful. So I'm curious to see how that evolves. What made you decide to join Together? [00:51:35]Tri: For Together, the focus has been a lot on open-source models. And I think that aligns quite well with what I care about, of course. I also know a bunch of people there that I know and trust, and I'm excited to work with them. Philosophically, I like a lot the way they've been really open with dataset and model releases. Personally, for the research that I've developed, for example, we also try to make the code available, free to use and modify and so on, contributing to the community. That has given us really valuable feedback from the community, improving our work. So philosophically, I like the way Together has been focusing on open-source models. And the nice thing is, we're also going to be at the forefront of research, and the kinds of research areas that I'm really excited about, things like efficient training and inference, align quite well with what the company is doing. We'll try our best to make things open and available to everyone. Yeah, but it's going to be fun being at the company, leading a team, doing research on the topics that I really care about, and hopefully we'll make things open to benefit the community. [00:52:45]Alessio: Awesome. Let's jump into the lightning round. Usually, I have two questions. So one is on acceleration, one on exploration, and then a takeaway. So the first one is, what's something that already happened in AI machine learning that you thought would take much longer than it has?
[00:53:01]Tri: I think understanding jokes. I didn't expect that to happen, but it turns out scaling models up and training on lots of data, the model can now understand jokes. Maybe it's a small thing, but that was amazing to me. [00:53:16]Alessio: What about the exploration side? What are some of the most interesting unsolved questions in the space? [00:53:22]Tri: I would say reasoning, in the broad sense. We don't really know how these models do it. Essentially, they do something that looks like reasoning. We don't know how they're doing it. We have some ideas. And in the future, I think we will need to design architectures that explicitly have some kind of reasoning module in them if we want to have much more capable models. [00:53:43]Alessio: What's one message you want everyone to remember today? [00:53:47]Tri: I would say try to understand both the algorithms and the systems that these algorithms run on. I think the intersection of machine learning and systems has been really exciting, and there have been a lot of amazing results at this intersection. And then when you scale models to large scale, both the machine learning side and the systems side really matter. [00:54:06]Alessio: Awesome. Well, thank you so much for coming on, Tri. [00:54:09]Tri: This was great. Yeah, this has been really fun. [00:54:11] Get full access to Latent Space at www.latent.space/subscribe
French journalist Guillaume Pitron discusses his book "The Dark Cloud: How the Digital World is Costing the Earth" with guest host Tim Hughes. The book explores the environmental impact of the digital world. Pitron delves into concerns about energy usage, e-waste, and the carbon footprint of the internet. The episode concludes with regular host Gene Tunny debriefing Tim on the conversation. Please get in touch with any questions, comments and suggestions by emailing us at contact@economicsexplored.com or sending a voice message via https://www.speakpipe.com/economicsexplored. About this episode's guest: Guillaume Pitron is a French journalist, author and filmmaker. He has written two books, published in some fifteen countries, about the natural resources needed for new technology. He has been invited to share his ideas in the French and international media (Le Figaro, BBC World Service, Bloomberg TV, El País, La Repubblica) and at international forums and institutions (Davos, IMF, European Commission, Unesco). Link to Guillaume's website: https://www.en-guillaumepitron.com/
What's covered in EP189
Introduction to this episode. (0:06)
What is the dark cloud? (1:27)
There is no digital life without rare earths. (3:54)
What is the real cost of digital technology? (8:06)
What's the cost to the environment? (13:07)
What can we do as individuals to make this better? (17:38)
Facebook's Lapland data center. (22:22)
Facebook uses hydro-electricity to run its servers. (24:25)
What happens if there's no water? (28:05)
What is the future of the internet going to look like in 10 years? (33:18)
Are there any governments around the world that are taking steps forward to regulate the internet? (41:02)
What can be done to address this issue? (43:59)
What were the main takeaways from the conversation?
(48:11)
Links relevant to the conversation
The Dark Cloud book: https://scribepublications.com.au/books-authors/books/the-dark-cloud-9781922585523
Digital Cleanup Day: https://www.digitalcleanupday.org/
Jevons paradox: https://en.wikipedia.org/wiki/Jevons_paradox
It appears the Amiga hard drive Gene's neighbour in the late 1980s had was a 20MB hard drive: https://bigbookofamigahardware.com/bboah/product.aspx?id=534
Thanks to Obsidian Productions for mixing the episode and to the show's sponsor, Gene's consultancy business www.adepteconomics.com.au. Full transcripts are available a few days after the episode is first published at www.economicsexplored.com. Economics Explored is available via Apple Podcasts, Google Podcasts, and other podcasting platforms.
If you grew up in the 60's, 70's, or 80's, you will love StarPodLog! On this super episode of StarPodLog, we consider the contents of Starlog magazine from 1981 in issues 45 and 46. Read along with your personal issue from your collection, or for free here: https://archive.org/details/starlog_magazine-045/mode/2up
Shocking Jon gives us the lowdown on Thom Christopher as Hawk, and on collecting props from Buck Rogers in the 25th Century. Follow him on the Shocking Things podcast! https://anchor.fm/shockingthings
Burt Bruce provides info on Outland!
Kirby Bartlett-Sloan describes what it was like to watch Doctor Who on PBS. Follow Kirby on the 20MB podcast to hear more Doctor Who goodness! https://youtu.be/XOehb9dBwOY
Edward German tells us how science fiction was represented in comic books of the '50s! Find out more about this era by listening to: https://podcasts.apple.com/us/podcast/the-1950s-science-fiction-podcast/id1530633890
Main Man Jamie joins in to talk about how science fiction influenced comics of the '60s & '70s!
Lou, Max, and Rich discuss details about The Planet of the Apes TV series! Subscribe to the MyMegoLike YouTube channel to hear more foolishness: https://youtube.com/channel
Steve Younis and Michael Bailey consider the awesomeness of Superman II! Find the Superman Homepage on YouTube: https://youtube.com/c/SupermanHomepageVideos
...and more on this episode of StarPodLog! We will be attending Music City Multicon this fall.
Join us! https://musiccitymulticon.com/
Join us at ShadowCon January 6-8! https://www.shadowcon.info/
Don't forget to join our StarPodLog Facebook group: https://m.facebook.com/profile.php?id=469912916856743&ref=content_filter
Love Starlog magazine? Join the Facebook group: https://m.facebook.com/profile.php?id=303578380105395&ref=content_filter
Subscribe to our YouTube channel "StarPodLog and StarPodTrek": https://www.youtube.com/channel/UCgE_kNBWqnvTPAQODKZA1Ug
Find us on Twitter and Instagram: @StarPodLog Reddit: u/StarPodTrek
Visit us on Blogger at https://starpodlogpodcast.blogspot.com/ or iTunes or Spotify or wherever you listen to fine podcasts!
Music used with permission by Checkpoint Charley. If you cannot see the audio controls, listen/download the audio file here: Download (right click, save as)
Love and Monsters: Adam, Debbie, Kirby, Kirsty and Mary talk about the first Doctor-lite story and pay tribute to the passing of a much loved 20MB legend.
Rachel Teichman was the researcher this week, and Paige Dempster was the guesser! Rachel talked about the environmental concerns of nature's Gushers, and then dived deep into the horrors and shockingly recent history of credit scores. Hosted and produced by Paige Dempster & Rachel Teichman
FB & IG: @ResearchRebuttalPodcast
Twitter: @ResearchRebutt
Sources:
https://www.irishtimes.com/life-and-style/which-detergent-is-better-for-the-environment-powder-liquid-or-sachets-1.4080329#:~:text=One%20Change%3A%20Pods%20are%20convenient,not%20the%20environmentally%2Dfriendly%20option&text=The%20outer%20packaging%20of%20pods,and%20causes%20tumours%20in%20rats.
https://www.earthisland.org/journal/index.php/articles/entry/not_only_using_detergents_simply_washing_clothes_is_bad_for_our_oceans/
https://en.wikipedia.org/wiki/Credit_score_in_the_United_States
https://time.com/3961676/history-credit-scores/
https://www.nerdwallet.com/blog/finance/origin-credit-score-history/
https://www.scientificamerican.com/article/explore-the-pop-in-popcorn/#:~:text=Once%20the%20pressure%20gets%20high,starch%20inside%20the%20popcorn%20kernel.
http://www.carrotmuseum.co.uk/whitecarrot.html#:~:text=Many%20people%20believe%20that%20the,Parsnip%20is%20pastinaca%20sativus.
https://www.healthline.com/nutrition/is-fish-meat#:~:text=By%20some%20definitions%2C%20fish%20is,by%20that%20definition%2C%20it's%20meat.&text=There%20are%20also%20several%20important,profiles%20and%20potential%20health%20benefits.
https://www.energystar.gov/products/lighting_fans/light_bulbs/learn_about_led_bulbs
https://www.sciencedirect.com/topics/engineering/floppy-disk#:~:text=Diskettes%20are%20of%20three%20standard,to%20be%20sent%20by%20post.
https://simple.wikipedia.org/wiki/Floppy_disk#:~:text=A%20normal%203%C2%BD%20inch%20disk,store%202.88%20MB%20of%20data.
https://www.mnopedia.org/thing/post-it-notes#:~:text=Introduced%20to%20the%20public%20in,to%20create%20a%20strong%20adhesive.
https://www.revivalvintage.co.uk/blog/post/vintage-clothing-history-guide-zips-and-zippers/#:~:text=In%201906%20Swedish%20born%20Gideon,problem%20called%20hookless%20fastener%20no.
See acast.com/privacy for privacy and opt-out information.
Introduction: João Paulo, 39 years old, an Electronics Engineer by training. Developer of embedded systems and backend software in C/C++ and Python. I have been interested in programming ever since I was 9, when I would "copy" and "paste" BASIC programs from the school's computing encyclopedia into a 286 with a 20MB hard drive. I went on to study Visual Basic as a teenager, was introduced to Pascal, C, and C++ in college, and soon realized this hobby would become something I'd carry for life. Today I have around 18 years of experience developing software of every kind: embedded (firmware), web, desktop, GUI, mobile, and backend. I work in the Netherlands for ASML, a lithography company — that is, it builds the machines that produce microchips — with customers such as Samsung, Intel, and many other chip manufacturers. Outside of programming, my hobby has been video games since I was about 7, when I started exploring fantastic worlds on an Atari 2600, followed by a long gaming life on computers and consoles: the NES (8-bit), SNES, PlayStation 2, 3, and 4, and I'm eagerly waiting to buy a PS5. I've accumulated roughly the hours of a technical degree in gaming time. More recently — late in life, already — friends introduced me to tabletop RPGs (D&D) and the card game Magic: The Gathering, and that became a hobby as important as video games. We also traveled a lot inside and outside Brazil before moving to the Netherlands. João Paulo's contact: LinkedIn page. Game picks — Offline: Dungeons & Dragons, collectible card games (TCG). PC: Neverwinter, SKYRIM, CounterStrike, CounterStrike_Global_Offensive, Baldurs_Gate_Enhanced_Edition. OsProgramadores: OsProgramadores website, OsProgramadores Telegram group, OsProgramadores YouTube channel, Marcelo Pinheiro's Twitter.
- Comparisons between devices. - Leaks of a second Xiaomi watch. - Apple will launch 6 devices in 2020. - The Samsung S11 may be renamed to S20. - A bug in the Pixel launcher makes app icons disappear from the home screen. - Oppo, Mi, and Vivo are working on a unified transfer technology that moves data at 20MB/s. - Huawei Mate 30 Pro shipments have reached 12 million units. - The Telegram app adds a bot to link your email to it.
When is a 20MB email to an external Gmail account dangerous? It all depends on context. Understanding what normal behavior is will reveal whether specific behavior is malicious or ordinary. We’ll walk you through how using Splunk’s Machine Learning Toolkit and Splunk Enterprise Security together provides actionable insight for analysts to improve security. We'll also detail how we caught insider threats in our environment with these tools. Speaker(s) Karthik Subramanian, Principal Senior Cybersecurity Engineer, SAIC Tyler Williams, Cybersecurity Data Analyst, SAIC Slides PDF link - https://conf.splunk.com/files/2019/slides/SEC1305.pdf?podcast=1577146228 Product: Splunk Enterprise, Splunk Enterprise Security, Splunk Machine Learning Toolkit, AI/ML Track: Security, Compliance and Fraud Level: Advanced
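The core idea in the talk — judge an event against that user's normal behavior rather than a fixed rule — can be sketched in a few lines. This is not the SAIC/Splunk implementation; the function, its name, and the z-score threshold are illustrative assumptions, standing in for the richer models in the Machine Learning Toolkit:

```python
# Illustrative sketch: flag an outgoing-email size that sits far outside
# the sender's historical baseline, using a simple z-score test.
import statistics

def is_anomalous(history_mb, new_size_mb, z_threshold=3.0):
    """Return True if new_size_mb is more than z_threshold standard
    deviations above the mean of this sender's past email sizes."""
    if len(history_mb) < 2:
        return False  # not enough baseline to judge
    mean = statistics.mean(history_mb)
    stdev = statistics.stdev(history_mb)
    if stdev == 0:
        return new_size_mb > mean
    return (new_size_mb - mean) / stdev > z_threshold

# A sender who normally mails a few hundred KB suddenly sends 20MB:
history = [0.1, 0.3, 0.2, 0.5, 0.4, 0.2]  # sizes in MB
is_anomalous(history, 20.0)   # far outside the baseline -> flagged
is_anomalous(history, 0.3)    # ordinary for this sender -> not flagged
```

The same 20MB email is malicious-looking for one sender and routine for another — the context is the baseline, which is the point the speakers make.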
Splunk [Security, Compliance and Fraud Track] 2019 .conf Videos w/ Slides
Hosts / Dr. Li, Uncle Gou. Post-production / Dr. Li. Recently, at GDC 2019, Google held a press event unveiling its cloud gaming platform, "Stadia". At the same time, Google announced a controller for the Stadia platform and founded a first-party game studio. All of this signals that a new giant has entered the long-quiet game-platform arena. In this episode you will hear: what cloud gaming is and how it differs from "cloud walkthroughs"; Google's approach and advantages this time; playing Assassin's Creed and DOOM in a browser; hands-on impressions of the various "cloud gaming" devices currently on the market; the bottlenecks of cloud gaming and the problems still unsolved; predictions about which studios and titles will partner with Stadia; and which genres and audiences suit this new platform. Just before this episode went live, on June 6, 2019, Google announced more Stadia details. The launch lineup covers a fairly broad range of Western titles: the nostalgic Baldur's Gate 3 and DOOM, recent classics like the Tomb Raider series, Final Fantasy XV, and Assassin's Creed Odyssey, and brand-new releases such as Borderlands 3 and Metro Exodus. On the hardware side you need a Chromecast set-top box and the controller, with phone support coming later. 720p gameplay requires 10Mb of bandwidth, 1080p requires 20Mb, and 4K requires 35Mb. On pricing, free users can play at 1080p, or pay $9.99/month for a Pro membership with 4K support, game-purchase discounts, and some free games. Google has also put a Founder's Edition bundle in retail stores — a Chromecast Ultra, a limited-edition controller, two three-month Stadia Pro passes, and other perks — for $129 (personally I think the bundle is good value). Stadia is expected to launch in November and does not, for now, include China. Sponsor: this episode is sponsored by Qiniu Cloud, a leading Chinese enterprise cloud provider centered on visual intelligence and data intelligence. Qiniu Cloud has launched a "CDN never sleeps" promotion: from midnight to 9 a.m. every day, CDN traffic is half price; visit the Qiniu Cloud website to purchase. Qiniu Cloud — continuously mining the unlimited value of massive data. Listener survey: to improve the show, we have launched a new round of listener surveys and invite you to spend one minute participating. Each week we will draw one participating listener to receive a mystery gift; the survey closes on July 31, 2019. Click here to take the survey. Tip the hosts: you are welcome to support this episode's guests and hosts with a tip; click here to tip, and once it goes through your name will be read out on the show. About [津津乐道播客]: a casual podcast founded by a group of IT professionals. Host Zhu Feng is a veteran internet entrepreneur; the other hosts and guests come from many cities and industries — cross-disciplinary thinking and a wide range of guests are our hallmarks. Frequent topics include life, new technology, travel, and industries you may not know. Updated every Sunday at 8:00 a.m. Website: https://jinjinledao.org/ WeChat official account: 津津乐道播客 Weibo: 津津乐道播客 Twitter: @jinjinledaofm Telegram Group: @htnpodcast Telegram Channel: 津津乐道播客 Zhihu column: 津津乐道 Email: hi@jinjinledao.org Copyright: unless otherwise noted, all works in this podcast are licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
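The bandwidth tiers quoted in the announcement (10Mb for 720p, 20Mb for 1080p, 35Mb for 4K) can be captured in a small helper. The numbers come from the episode notes; the function and its name are purely illustrative:

```python
# Stadia's stated minimum bandwidth per resolution tier (June 2019 announcement).
STADIA_BANDWIDTH_MBPS = {"720p": 10, "1080p": 20, "4k": 35}

def best_resolution(available_mbps):
    """Return the highest resolution tier whose stated requirement
    fits within the available bandwidth, or None if even 720p doesn't fit."""
    best = None
    for res, need in sorted(STADIA_BANDWIDTH_MBPS.items(), key=lambda kv: kv[1]):
        if available_mbps >= need:
            best = res
    return best

best_resolution(25)  # a 25Mb line clears the 1080p tier but not 4K
```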
Listen in as we talk about the Summer '19 release and how it affects you as a developer! Summer '19 release notes - full version; Summer '19 release notes by the Salesforce Reddit community - the abridged version. Our dev highlights: Maximum debug log size increased to 20MB (it's great, but we need enterprise-level LogStream functionality); Real-time event monitoring; Make long-running calls with Continuations; You can now use LWC in Aura apps, in Visualforce pages, and also add it to Lightning Out; Asynchronous Apex Triggers; Track Data Changes to External Objects More Easily; View Data Updates in Real Time with Live Controller (Beta); Named Credentials now support the JWT auth protocol; Lightning Flow: Apex building blocks with types in Flow. Link to SFXD Discord group. If you enjoyed (or didn't?) listening to this episode, please leave us a review on iTunes.
An airhacks.fm conversation with Stuart Douglas (@stuartwdouglas) about: starting with Visual Basic in high school, with the goal to build games, then quick transition to C++, creating Tetris from scratch in weeks in C++, building first commercial financial planning application with PHP, starting with Java 1.5 and annotations in 2007, Java is popular in Australia, building Seam applications with JBoss 4, contributing to Weld in spare time, improving the performance and startup performance of Weld, working for Red Hat as JBoss AS 7 developer, JBoss is more than the application server and the advent of WildFly, the branding cleanup, creating Undertow, WildFly was shipped with deactivated EJB pooling, too small EJB pools can cause performance issues, how to kill EJBs with CDI, EJB vs. CDI, interview with Pavel Samolysov and EJB vs. CDI performance comparison, quarkus is not using reflection for injection, a small team (8 people) started quarkus to leverage GraalVM, the goal of quarkus is to make a subset of Java EE natively compilable to an "unikernel", updating the cloud platform without recompiling the app, serverless with quarkus, serverless without the function disadvantage, 20MB self-contained native images, building Java EE / Jakarta EE unikernels, extremely fast start times for Java EE applications, native images run with a fraction of the RAM, at one point in time quarkus might be commercially supported by Red Hat, CDI portable extensions are not supported now, the quarkus wizard is not based on a Maven archetype - which is a great idea, Maven is only one of many possible entry points to quarkus, a really good developer experience was always the main goal, hot reload is a must, currently the classloader with the "business" part is just dropped and the app is reloaded, adding dependencies via CLI or pom.xml, quarkus ThinJARs are compatible with the ThinWAR idea, FatJAR builds have to be slower, packaging all dependencies into a single JAR, using the Chrome DevTools Protocol for hot reloading the browser, misusing quarkus for building command line tools, community extensions are on the roadmap, quarkus is going to be well integrated with OpenShift, quarkus forum. Stuart on twitter: @stuartwdouglas, and github.
In this episode we are talking about our remote work tools that enable our distributed team across the world to collaborate, design, and build software. Throughout the episode, Todd, Ken, and Jamon touch on their favorite tools—from Slack, Zoom, and Google Sheets—why they chose them, and the ways they have added custom features to really make the remote experience special. Show Links & Resources Slack Zoom G Suite BlueJeans Screenhero RealtimeBoard InVision Trello Airtable Shush Dropbox Bigscreen VR Taking the Pain Out of Video Conferences by Ken Miller Episode Transcript CHRIS MARTIN: The topic at hand today is remote tools, and all of the different ways that you have built a remote company. Where do you even start when you're thinking about what tools to pick when you're going remote? KEN MILLER: This is Ken Miller, by the way. It happened very organically for us. To be honest, I don't know that we could've done this company this way before Slack. Because the tools that came before, Hipchat and IRC and Yammer, even though I worked there. Sorry, Yam-fam. They just didn't quite do it. Right? They didn't quite create the online atmosphere that we need to work the way that we do. Does that sound accurate to you, Todd? I feel like once we found Slack, we were like, "Holy crap, this is epic!" TODD WERTH: I think there's a few alternatives. Hipchat, at the time, wasn't good enough. There were a few alternatives we investigated. I would like to mention at the beginning of this ... This is Todd Werth, by the way. I would like to mention at the beginning, I imagine that a lot of companies in this podcast will need to be paying us an advertising fee. Like Slack. JAMON HOLMGREN: We actually adopted Slack before we were remote. We had ... I think we were using Google Hangouts or something. Or whatever of the myriad Google chats there are out there. They have like 12 apps. 
We were using something else in person, and then we started using Slack organically right when it first came out. TODD: Sorry about that noise you all heard. That was me throwing up a little bit in my mouth when you said "Google Hangouts". (laughter) KEN: We'll talk about video-chat in a minute. JAMON: By the way, this is Jamon Holmgren. It was ... Initially, we jumped onboard. They did a really good job marketing themselves. We had used Hipchat a little bit, but it just wasn't what we expected. We started using Slack. That was in early 2014, I think it was? I don't think it's a coincidence that within a year and a half we ended up going remote. I think that was one of the enabling tools. We got used to it in the office, but it enabled remote work. TODD: To talk about chat apps or chat services is important, but on a more general standpoint, I would say how you approach it is actually try 'em and do it. A lot of companies seem to just use whatever is available and not look for optimum solutions. If trying three or four different chat systems is too onerous for you, that's probably the wrong attitude, in my opinion. KEN: You think, "don't settle". Don't assume that the first thing that you try is the only thing, and then conclude that remote isn't gonna work because the tool that you tried sucks. JAMON: We tried a lot of tools at ClearSight, before the merger. We tried ... I can't even name them all, to be honest. Part of it is because I like ... I'm a gadget guy, I like to try new things and see how it goes. There was actually a lot of skepticism around Slack because they're just yet another tool that they had to log into and pay attention to. "We already had the email, so do we really need this." It was kinda funny, when I went back and looked at our inner-company email, just tracked ... I think I used the "everyone@clearsightstudio.com" or something email address to track how often we were using it for company communications. 
It just dropped off a cliff after Slack. The amount of email, the volume of email that was flying around went way, way, way down. In fact, I remember we used to send GIFs in the email threads, and stuff. There were elements of the culture that we have today in Slack going on in email threads. Slack was just so much more well-suited to that. That actually came about very organically. We had tried a bunch of different things. We tried Slack, and it just picked up steam, picked up steam, picked up steam. TODD: I don't ... I'm not even exaggerating, I don't believe I've ever sent an email to anyone at Infinite Red internally. I don't think so. KEN: Unless it's a forward from someone external. TODD: Correct. I think there's people on our team who probably don't check their email very often because they don't have a lot of -- KEN: Yeah, if you don't do sales or any kind of external outreach -- TODD: Yeah. That was a sticking point a few times, when people were sending out the emails, and we had to ... They were wondering why people weren't responding, it's because the variety of people never check their email. JAMON: It is funny, because email does still, it is still a tool that we use for remote communication with outside clients, especially people first coming to us. But as soon as we can, we get them onto Slack because we've found that that level of communication is the least friction, it's very seamless. Slack is definitely featuring very centrally in our remote-tool story, for sure. TODD: Rather than just ... I'm sure a lot of people out there use Slack. If you don't, give it a try. But rather than just gushing on Slack, I do wanna say that the important part here is we did go through a lot of different chat services. You have to give 'em some time. At first, for example ... We do love Slack, but at first it didn't seem that different. 
There wasn't a bullet list that's like, "Oh, this has feature X", it was a bunch of little, subtle things that made it work especially well for us. KEN: Part of the meta-point there, is you have to treat your tools really seriously. Right? Google and Amazon and all these big companies, any well-funded start-up, whatever, they're gonna lavish a lot of attention on making an office that works for them. Right? TODD: Mm-hmm (affirmative). KEN: They're gonna create an office environment very thoughtfully. I've been to a lot of these offices. A lot of them are very thoughtfully considered. Right? They're designed to create a certain atmosphere. For example, I was at the Square offices once. Huge, cavernous room designed to create a sense of energy. That's the open-office mantra, that sense of energy. They had these little cubicle ... nicely designed cubicle things where you could go if you wanted quiet. Clearly, noise was the default. That architecture creates a culture. At least it reinforces a culture. As a remote company, your tools are your architecture. You either need to buy them from people who design them in a way that works for you, and Slack seems to work for a lot of people, or you build things that work for you, or you create norms about how they're used that do the same thing. We've done some things on Slack, we've done some things on Zoom, to create that sense of being together. Todd? TODD: I would like to add emphasis to what Ken just said. Imagine a time that someone puts into an office: architecture, the layout, the furniture. Rearranging it multiple times, placing stuff. Now think about the time that companies you've worked for put into remote tools. Anyone out there with their hands up saying they spent about 30 minutes on their remote tools -- KEN: Ever! TODD: Yeah. It's not surprising that one is superior to other in those organizations. 
I would pile on, like Ken said, and take the same amount of effort and consideration of your tools as a remote company as you did with everything else in the physical space if you're a commuter company. CHRIS: I'm interested, too, because as you're talking, you're talking about the difference between physical architecture and the architecture of your tools that allow you to do remote work, and if everyone's using Slack, and it looks and functions the same way, what brings the sense of uniqueness to a company that's using the same tools? TODD: Me. Just me being around makes everything unique, wonderful, and amazing. To answer the real question, you have to take Slack ... One of the great things about Slack, 'cause it's highly customizable, you can add plug-ins, you can add all sorts of integrations. We're gonna talk about other tools than Slack. They literally just pay us a crapload of money just to talk about this. JAMON: I wish. KEN: I wish. TODD: You don't take the vanilla. The point of a tool like that is you take it and you make it your own. JAMON: I did see someone tweeting about switching remote companies. They quit one company and they got hired by another. They did mention, actually, how similar it was. You go into the same place; you sit down at the same chair; you have the same computer in front of you; you log in to a different Slack, and you start working. Right? There is some level of consistency there. In a way, that's a very good thing. You can be comfortable very, very, very soon. There are plenty of things to learn about a new company without having to also learn new office layout, new office norms, policies about who can put their lunch in the fridge and who can't. I don't know what else. It's been so long since I've been in an office, I don't even know. I think there is some level of normalcy there because people do use similar tools. 
Like Todd said, you can customize Slack to work the way that your company needs to, and you can customize other tools as well. Since we're programmers, since our team has a lot of programming capability on it, we do actually build a lot of glue code in the scripts and things that will help tie all the tools together. KEN: In most organizations that have adopted chat tools, whether it's Slack or something else, they are usually billed as an internal supplement replacement for email. It is great at that, don't get me wrong, but I think something that gets lost in the way people talk about in the way we communicate now is that ... Let me tell a little story. I used to be a big fan of Roger Ebert. Rest in peace. Brilliant writer, right? Super enthusiastic. He was very critical of the way people write online. Very critical of things like emojis and emoticons. I think, while I respect him a lot, I think he completely missed the point on that. The point of that is, although, yes, we type to communicate online, it's not really writing. Not in the way our English teachers taught us. Right? It's typed speech, really. Right? It's a register of communication that's closer to the way that we talk than it is to the way that we would write if we're writing an essay or a blog post. One of the things that I really like about, Slack for example, is the rich way that you can communicate without it looking junky. It doesn't look like something awful or 4chan or some of the other really junky-looking message boards that have that level of expressiveness. It gives you the level of expressiveness so that you can substitute for the lack of facial expressions and body-language, but it's not writing. You don't write ... you don't type into Slack the same way you do. It's much closer to the way that you talk. 
For a remote organization, where we're not on Zoom all the time, although we are a lot, it's super important that you have that level of human expressiveness in your medium, in the medium that you're using to replace spoken word. TODD: Three comments. One: Zoom is the video conferencing tool we use, and we'll talk about that in a second. Two: I don't spend much time on 4chan, Ken, so I'll take your word on that one. (laughter) Three: just to give an example, talking about customization and you might be asking yourself, "Okay, Todd, I've used Slack. I've used chat. What're you talking about?" Just give you a few flavors. The simplest is creating your own channels that have some sort of cultural significance to your organization. One of ours is called "Rollcall", where we ... It's the digital equivalency of walking in and out of the office. "I'm here this morning." "I'm gonna go get my car worked on." "I'm back." It's not just status, it's also ... not just whether you're working or not, but it's a way to communicate basic, little life things in a short way. We have another one called "Kudos", where we give kudos to people. Which, at first, I thought, probably, wouldn't take off, but it actually did. It's where you give kudos to people for things that they did well, and I'm really shocked how many people give kudos and how many people respond. That's obviously just using the base tool and choosing what content to put on there, and how to organize. There's other things, too. Obviously there's things like code-repository integration, a code bug-reporting integration. We integrate with other companies' Slacks. They have a Slack channel, we have a Slack channel, and they connect so that we can do that with our clients. All the way to we have a custom Bot we wrote for Slack. Her name is Ava. She does a variety of internal processes for us. She's kind of ... 
In the old days, you'd have a database and you'd have a Windows app written to connect your database for your company, you'd do things in there. We have a lot of internet SaaS-tools. And then we have Ava that integrates a lot of them together. JAMON: Todd, can you give an example of something that Ava does for us? TODD: Yes. There's some basic things that a chatbot might do. For instance, you might wanna ask her where Jamon is, and she'll tell you the information she knows about Jamon. It's a lot of operational stuff. For instance, our Project Manager, Jed, has to produce weekly reports for clients. Ava produces those for him. Stuff like that. Stuff that you would normally do, like I said, in the old days, in a desktop app personally. JAMON: Todd came up with Ava quite a while ago, actually. It was sort of a toy to start with, just playing around with it. He had some ideas where it might go, but over time we've actually invested more and more resources into this internal chatbot and it's proven to be quite valuable. It's saved a lot of time, reduced the amount of overhead that we have to have tracking things because it's able to do a lot of process things. KEN: So far, she has not escaped and murdered us. (laughter) TODD: Not so far. I'm working on that. JAMON: That's a win. TODD: There's some tiny things. She's just a way for us, if we need to program something that we have a sticking point like, here's a very simple thing that took me five minutes to ruin. We do a lot of things on Mondays, and constantly wanna know what last Monday was, or Monday three weeks ago. You can literally just say, "Ava, what was Monday two weeks ago," and she'll tell you. That's a very tiny thing. Generating project PDFs or generating project reports is a bigger thing, obviously. JAMON: Another tool we use to communicate, non-verbally in Slack, is "Reactions". Someone'll post something and we react to it. 
I think this is pretty common in Slack teams and this is something that Slack did a good job of coming up with a cool idea. Usually you think of up-voting and down-voting, but when you have the whole range of emojis, including custom ones and animated ones and things like that, it can be a very cool thing. One interesting example of this: we have an integration with ... Ken, what's the service we use for Chain React tickets? KEN: Zapier. JAMON: Zavier. Zapier, yeah, and it connects with Eventbrite, and that basically will post any time someone buys a ticket to Chain React, which is our React Native conference, of course, happening in Portland in July. You should buy a ticket. (laughter) We get a notification, and it pops in there, says who's coming. When we're getting down there ... We were getting down to the last few advanced workshops that were available, someone started putting a number emoji underneath it. 10, 9, 8, 7, 6, like that. You can see then, at a glance, how many were left. It was very cool how we were all collaborating on that. When someone would buy the advanced workshop, Kevin VanGelder, who's our resident Windows guy, he would put a little Windows emoji on there because that's part of the advanced workshop. It was just a cool way to communicate and collaborate without even using words. TODD: I think the important part of using reactions or emojis or Slack Responses ... Reactions, if you're not familiar, Slack is ... It's simply, someone posts a message, and instead of responding to it, you can post a little image on it, like heart, or a thumbs up, or a vote-up, or whatever. Slack Response is an automatic system that, when you say X, it outputs Y into it. One Slack Response that Jamon hates is that when you say "I'm not a big fan", it posts this picture of this really, really small fan. It's hilarious. I love it. (laughter) JAMON: Really hilarious. TODD: Every time someone put ... 
We had some that we had to remove, 'cause they just came up too much. Every time you'd say "founders" it would show the Three Stooges, which is "Accurate", but... KEN: It was "founders' meeting". TODD: Oh, whatever. KEN: But still, yeah. TODD: It was accurate but a little too much noise. The point is, it's very important. We've probably added a huge number of Slack Responses, a huge number of our own emojis, and the emojis you can use for Responses. A lot of them have become very cultural. Just to give you a few examples: my cat, Calle, that's short for Calle Berry, I took a picture of her paw. And, of course, cats, if you just do the front part of their paw, it looks like they have four fingers instead of five because their fifth one's back further. We came up with this emoji and this thing where, if someone does a really great job, they get a "high-four", instead of high-five, and that's Calle's Response. JAMON: I didn't actually know that was Calle's paw. TODD: Oh, yeah, that's Calle's paw. JAMON: That's cool. TODD: So that's a cultural thing that I created one day, and it just kinda stuck. It became a "high-four"; it is an Infinite Red thing, you get a "high-four". We have other things like that, too, that are very specific to our culture, where you have to explain to people who come in what that means. I would definitely customize it, make it fun. We don't worry too much if clients see it. We're not doing anything inappropriate. At first, there was discussion, "Is it professional if they accidentally trigger one of the Slack Responses?" "No, but does that really matter?" "No," in my opinion. KEN: It depends on the Response. (laughter) TODD: Of course. KEN: There were some that were a little over the line and that, without context, could be a little startling. We removed those. TODD: Yeah, that's true. KEN: But for the most part, yeah, just something that's quirky. 
Hopefully, we all have clients that, at least the people who are in the Slack room are able to appreciate that. TODD: Another one that's totally part of our culture is, there was this early picture of me looking into the camera with a stern face. That became the "shame" emoji. That's been used ever since. Every time someone wants to throw shame upon someone, my face is there. I don't know if that's good or bad. JAMON: There's another one that's quite disturbing, of you, Todd. TODD: Oh! When you say yes "yis", Y, I, S, yes that is disturbing. JAMON: "Yis dream." TODD: You have to work here to ... KEN: You had to be there. KEN: Some of the things that came from my experience at Yammer, where a lot of the company was run internally on Yammer, there's a couple of really big advantages to that. Especially, at an all-remote company, where the vast majority of conversations happen there. One is that there's very much less pressure to include people in meetings just because, just in case they might have something to say about it. Because if you've having a conversation in Slack, you just pull 'em in. Right? After the fact, and they can catch up. But the other was, there was an ethos at Yammer that was, there was this pat question which was, "Why is this private?" "Why did you make this group private?" "Why is this in a private chat?" Making closed conversations justify themselves, rather than being the default. Particularly when we invite other people into Slack, I notice there's a little period of training, where people will instinctively start DMing, 'cause it's like "Well, I need to ask Ken this question." Say we brought our bookkeeper in, right? They would ask me 'cause I was the contact. I'm like, "Ask this question in Finance." Right? "Ask this question in the Finance channel." Which happens to be one of the private ones, for a variety of fairly obvious reasons. By asking in the channel, then the other people who might be interested can just observe. 
That's one of the ways that you compensate for the lack of that serendipitous, overheard conversation that people are so fond of in an office. CHRIS: In Episode Two, we talked about the philosophy of remote work. Todd, you actually made a comment that was really interesting to me. You said, "When the leadership uses the remote tools, they immediately get better." Why do you think that's the case? TODD: Human nature. I'll answer your question with a little story. I worked for a company ... This is circa 1999. I don't know. I didn't work for 'em; they were a client of ours. For many, many years they were very much a Microsoft shop. They had no interest in testing anything on other platforms like Mac or whatever. We worked for them for nine years, something like that. So this is all through the 2000s. It was frustrating for people who wanted to produce websites that were universal. If someone opened 'em on a Mac, it would actually look good and not look horrible. One day, one of the VPs who was above the software group bought an iPad. I think, about a year later, he bought a MacBook. Once he had that iPad, all of a sudden, it'd become very important that things look good on his iPad, which is funny and horrible at the same time. It is just human nature. If you use something, it's much more front of mind than if you don't. Even the best of people suffer this. If you have a mixed company, meaning you're part remote, part commuter, one of those groups is gonna be a second-class citizen. Period. If 10 people are in a meeting, and eight are remote and two are in the office, the two in the office are gonna be the second-class citizens. More often, it's the vice versa, right? Getting everyone on the same page gets rid of second-class citizens. If you wanna make the best remote environment, either getting the majority or getting the people who have more power in the remote situation will increase your tools' quality big time. JAMON: That's for sure. 
We've seen that internally at Infinite Red, as well. When we use the tools, which we do, the leadership team is probably the heaviest user of the remote tools in a lot of ways. There are situations where they're just not good enough, and we make sure that they get changed, for sure. Zoom is a good ... Zoom, the video chat, video call system, is really an interesting one because it has worked the best for us in terms of video calls. We've used a whole bunch of them. We've used everything from Google Hangouts, Skype, Appear.in, which is pretty decent. Pretty frictionless, actually. I like Appear.in for how fast it is to jump into it, but the quality is still a little bit sub-optimal. A few others as well. The nice thing about Zoom is that it allows you to put everybody into a grid pattern. It has a gallery view, which is really cool because then you feel like you're having a meeting and not doing a presentation. That's something that came out of us doing sales calls and internal meetings where we kinda felt like, "I don't wanna be the person on the big screen," right? You feel like you're giving a presentation. "I wanna feel like this is a meeting with everybody in an equal place." It makes people feel more comfortable. That was a situation where we were using the tools for various things and found the one that, I think, has worked the best 'cause, as a leadership team, we needed it. TODD: Yes, as far as video chat or video calls ... We actually need a name for that. What do you say if ... It's not really video chatting. JAMON: Video conferencing? TODD: I don't like ... KEN: It's not exactly "conferencing". TODD: I don't like the term. JAMON: Video meeting? KEN: Video meeting. TODD: Yeah, there needs to be a term for that. We need to coin a term for that, at least internally. CHRIS: Zooming. TODD: Zooming. Well that's ... That's not tool-specific. KEN: Slack as a tool is much stickier, in the long term, probably, than Zoom is. 
At the moment, Zoom is, by far, in our experience, the best quality. JAMON: Mm-hmm (affirmative). KEN: But that could change. Slack ... there's a lot we've invested in customizing it, and it would be harder, but ... Although, we have invested some in Zoom, which we can talk about a bit. TODD: I would say Zoom is our favorite for our situation. One of our clients is BlueJeans.net, which is not really a competitor, but they do video conferencing. BlueJeans is really great for many things. One thing is they do every platform well. KEN: Mm-hmm (affirmative), yep. TODD: Which Zoom and a lot of the other ones don't necessarily do. Now, we're all mostly on Macs, and it works really well on that, so that works out well. Also, BlueJeans.net has a lot of additional features, whereas we basically just need video conferencing, and Zoom is so superior at that. Google Hangouts is horrible. Please, please stop using Google Hangouts. KEN: Don't use Skype. Don't use Google Hangouts. TODD: Well, Skype -- KEN: Skype has gotten better, but -- TODD: Skype's quality is great, but it does a max of six people. We have 26 people. KEN: I disagree that their quality is great. TODD: I was being ni -- KEN: Even domestically, I've had problems with it. (laughter) JAMON: We have Microsoft people listening. TODD: I was being nice, Ken. JAMON: It crashes a lot on Mac. KEN: The point is, here, you should demand rock-solid video 99% of the time. TODD: Yeah. KEN: If that's not what you're getting, look at another tool. JAMON: This extends to the internet bandwidth that you have available at your place of work, too. Some people were really scraping by on 20Mb connections or something, and it was impacting video quality, and -- TODD: On what tool? KEN: No, their connection. JAMON: Their internet connection, yeah. That was something that we, over time, got everybody to upgrade to faster and faster internet. I think that was a success for, pretty much, everybody. 
They have pretty acceptable internet, now, at this point. TODD: Some aren't as much. We have a person who's a nomad and travels around. We have someone who's in extremely rural Canada, up above Toronto, Tor-on-toe, I'm told is the proper way to say that. Zoom does very well in bandwidth, so the people that do have limited bandwidth, that works very well. We actually have meetings, 26 people in Zoom, which before would have been crazy. Skype limits you to six, which I'm not sure how useful that is for most meetings, but good for you, Skype. KEN: The only thing it's not so great on is battery-life, if you're using a mobile device. JAMON: It sort of trades CPU time for bandwidth. KEN: It does, yeah. JAMON: One of the things that Zoom doesn't do, that we've sort of built a system on top of, is permanent conference rooms. We've found this to be very useful to say, "Hey, let's jump into this 'conference room A', or 'conference room B'." We have better names for it. We name them after rooms in the boardgame Clue. TODD: Trademark Milton Bradley. (laughter) JAMON: There's a billiard room, there's a conservatory, there's a study, kitchen, et cetera. We have different uses for those different rooms. Some are for sales calls; some are for ... One is called Kitchen, which we use for the kitchen table, it's basically where people just jump in there, and work together in relative quiet. It's a cool little concept. We actually built an online, like a website, as well as a desktop app that shows a Clue board with the different rooms that light up when people are in them, and then it puts avatars of who's in that room, including guests, which is very cool because I can go in there and say, "Hey, look! Chris and Todd are having a meeting over there. I'm gonna jump in and see what's going on." I can just click in there, and it opens a Zoom window, and I'm in their meeting. TODD: For example, currently, Chris, Jamon, Ken and I are in Study. 
We have Kevin and Ryan in Library, and we have Jed in the Billiard Room by himself. I'm not sure what that's about. Maybe playing a little pool. KEN: This goes back to the notion of tools as architecture. Consider the experience of being in an office, and you want a meeting. You say, "Hey, let's meet in Fisherman's Wharf." I was in an office where they named things after San Francisco neighborhoods. "Let's meet in Fisherman's Wharf." Everybody, after they've been oriented into the office, knows where that is and they just go. That's it, right? That's the experience, right? Furthermore, if you wanna know where somebody is, you walk around the building, look into the rooms, and see that so-and-so is in Fisherman's Wharf, so they're in a meeting, they're busy. Now let's look at what it's like to be remote, without a tool like this. "Where's the meeting? Okay, I gotta ask somebody. Oh, okay. Oh, did someone start the meeting? Oh, no, no, okay, somebody needs to start the meeting. Alright, gimme a second, I'm gonna start the meeting. Here's the Zoom URL." TODD: Oh, God! KEN: "Okay, you gotta invite somebody." "Do you remember the Zoom URL?" "I don't remember the Zoom URL." "Okay, hang on. Okay, I got it. Here you go." That's the UX, right now. JAMON: Yes. KEN: Of the base ... TODD: Oh, jeez. KEN: ... video conferencing tool, and it's no wonder people hate that! JAMON: Yep. KEN: Right? TODD: Can you imagine? KEN: Yeah. It turns out ... We've had to increase the number of rooms over the years, right? But how many do we have now? Eight? TODD: Eight. KEN: So we have eight rooms now? TODD: Eight current rooms. KEN: That's pretty much fine. TODD: Mm-hmm (affirmative). For a team our size, that works well. JAMON: We usually don't fill all of ... I think, yesterday, I looked in there and there were six in use, which was kind of an anomaly, but ... KEN: Unlike in an office, we can keep adding those as long as we need to. JAMON: That's right. 
KEN: This is a case where I think we've created something that is actually better than what people who have an office have. JAMON: Yeah. KEN: Right? Because you can, just at a glance, see where people are. Nobody has to even tell you what room they're in. They just say, "Hey, we're meeting." You go look at the Clue board, and you see where the people that you're meeting with are, and you join the room. JAMON: Yeah. KEN: It's just one more little piece of constant friction that we've eliminated. I love it. I think it's a fantastic tool. TODD: Yeah, I keep the Clue desktop app open all day long while I'm at work. It's also cool to see the little avatars and stuff. Makes me feel like I'm at work. When we first started, you did have to push ... This is a very common interaction. "Hey, Todd, I need your help with X." And I'm like, "Let's have a meeting" or "Let's jump in Zoom" or whatever. "Which one?" "I'm already there. I joined a room as soon as you said it." "Which one?" "Open Clue. (laughter) Look for my name. Click on it." JAMON: Yeah. TODD: That only took a few weeks, to be honest, of constantly just needling that to the point where, when someone says, "Hey, I wanna jump in a room," they look and they see where you jumped in. KEN: That brings back the importance of having the leadership on the tool. TODD: Yes. JAMON: That's right. This tool actually came out of a side-project. I think Gant and AJ, two of our engineers, came up with the idea and built a prototype, and put it out there. It was ... I remember being, initially, a little bit skeptical that it'd be useful and it's turned out to be a really key part of our remote experience. TODD: That's actually an important point. No one asked anyone to make that tool. No one asked for permission to make that tool. They made it. They turned it on. Now, we've had tools that people've made. For instance, my tool Ava, which, now, is very useful, originally was Dolores, which is from HBO's great TV show, "Westworld". 
Dolores never caught on. She didn't do enough important stuff, and so she just kinda died. Later I resurrected her as Ava, which is from the movie "Ex Machina". Excellent movie, by the way. KEN: It's still kind of a disturbing allusion, though. TODD: It is, but it's ... It's a great movie. And then the next movie he did, which was "Annihilation", was fantastic as well. Anyways, not important, obviously. The point is, no one needs to ask for permission. They can make tools. They do. They put 'em out there, and they live or die based on whether or not they're actually used. We do sunset things that just never really took off. CHRIS: You're mentioning a lot of tools that enable remote work, that enable productive work. What are some tools that you're thinking about or are in place that help with focus and eliminating distractions? 'Cause sometimes, people new to these environments can look at these tools going, "Man, there's so many distractions. How do I work?" JAMON: I actually think that's one of the biggest benefits of working remotely, which is kind of counter-intuitive. You think, "Oh, there's so many distractions when you're working remotely." Actually, you can turn off Slack. You can turn your screen to "do not disturb". You can shut off Zoom. You can turn off your email. You can close all of those applications and just have the app that you're doing the work in. Whether you're writing a blog post or writing code, you can just have that open. You can turn on a "do not disturb" mode in Slack that'll actually tell people that you're currently away. If you use the tools that are available, remote work can actually be much better, because what happens in an office? Someone can't get a hold of you on email or Slack, so what do they do? They hop up and they walk over to your office, and they're like, "Hey, did you get my email?" (laughter) "Okay, I will check my email, eventually, here. Is this really important?" One of the things that we do is ... 
This is kind of funny, but we'll actually say, "I'm going offline for three hours, 'cause I'm gonna focus on this thing. If it's really important, text me." Our phone numbers are there, right? Nobody's gonna text you, 'cause that just feels like a complete intrusion. Right? KEN: It does happen. Like, if it's a genuine emergency. JAMON: It does happen if it's like an emergency. But that is so rare. That is awesome, because you're adding a ton of friction, but you're still giving them some way to get to you. I think that's a good property of remote work, that you can actually focus more in those situations than you can in an office. TODD: Yeah, try to turn off all the noise in an open-concept office. Good luck! KEN: Yeah, an office is distracting by default. You have to use technology to get some focus. I can't think of any tool that we use just for focus. Right? It's about human habits around how they use the tools that are already there. TODD: I think there are some, Ken. I don't personally use them. KEN: Yeah, yeah. I mean there are things, but there's nothing we use as a company. TODD: No, but there are people here that use, for one thing, they'll use the various timer apps that tell them to stand up, or they set a timer for focus -- KEN: I've used the Pomodoro timer. TODD: Yeah, there are things. What's cool about remote work, as opposed to depressing cubicle work (laughter), is you can set up the environment -- KEN: Soul-crushing commute work. (laughter) TODD: Soul-crushing commute work, SCCW, I like it. In those situations, you have to go to the lowest common denominator. If 50% of the people are very productive and get focused with music, and 50% can't at all, you're gonna have no music. 
When you're sitting in your own environment, whatever that environment is, whether it's your home, or a café, or a co-working space, or whatever it is that you've chosen to be most efficient in, you can control it and make it perfect for you to be able to focus. Personally, if I'm doing design work or visual work, I play music. It gets me in the groove. If I'm programming, I cannot have any music. Or if I do have music, it can't have any lyrics in it. That's a focus thing. I tend to like to work more in the dark, strangely. I love light and I live in a very sunny place, and a very sunny house, but I have noticed that I tend to get more in the zone in the dark and often late at night, for me personally. CHRIS: I'm the same way, Todd. I have to fake my brain into thinking it's late at night by closing all the blinds and turning the lights off. And it actually helps productivity. TODD: Yeah, that's interesting. I used to have this problem at every company I worked at. Even, say, when I shared a room with four other people. One office, and four. I would wanna have all the lights off and have a desk lamp so I could see. No one liked this. Having the fluorescent lights on ... I didn't take cyanide, but I do believe I shopped online for cyanide, just saying. (laughter) KEN: So this is in your browser history, now, forever, man. (laughter) There's an FBI file on you. TODD: Oh, there's been an FBI file. Come on. If you don't have an FBI file on you, what are you doing with your life? (laughter) JAMON: At the old ClearSight office, we had some fluorescent lights, and one by one they would burn out. Nobody would tell the maintenance guy because they just liked that they were burning out. (laughter) Eventually it got quite dark in there and everybody, they just wouldn't even turn on the light. TODD: I would like to make a confession. I have purposely broken some lights in offices. KEN: "True Confessions with Todd Werth." 
(laughter) TODD: You don't want true ones. No, that actually -- CHRIS: That's Season Two of the podcast. (laughter) TODD: That actually is very true. Sometimes you just have to ... KEN: Civil disobedience? TODD: Yes, I like the way you phrased that. Makes things more noble and less selfish. (laughter) KEN: Yeah, right. Guerilla productivity. JAMON: We have some other tools to talk about, too, right? TODD: Oh, yeah, we have other tools to talk about. JAMON: Should we talk about some of them, or ... TODD: Yes. KEN: But enough about Todd. (laughter) TODD: I'll be here all week. Do not eat the veal. JAMON: One of the tools that has been really helpful for us is Google Sheets. Obviously, that's the spreadsheet program in Google Apps. We ... We were having trouble ... Again, this is pre-merger. We were having trouble figuring out how to schedule people. It was just a real pain. Eventually, my Project Manager at the time came up with a system that involved sticky notes on a board, where across the top were weeks, and down the left side were the names of people. We could just put up sticky notes. My wife went out and bought a whole bunch of different colored sticky notes. We'd put the same project as the same color across the board. You could, at a glance, see who was working on the same project. You could see how long it was going to be, as far as number of weeks, and every week we'd move 'em over to the left and add another column. That eventually migrated onto Google Sheets, 'cause, of course, that doesn't work so well when you're remote. The collaboration tools on Google Sheets are extremely good. It's very, very responsive to having multiple people on it. When we do our Friday scheduling meeting for the next week, and beyond, we'll all pull open the sheet, and we look at it, and we can all update it ... If we see something that's wrong, we can update it. We can change colors of the backgrounds. It's worked really well for, now, two and a half years. 
I think that's a remote tool that has actually been quite useful for us for quite some time. Not only does it give us forward-looking data, but it also gives us backward-looking data. We can look at previous years and see what projects we were working on at the time, who was working on what, all the way throughout. It's been a very cool tool. We're just repurposing Google Sheets to use as a scheduling tool. TODD: Another tool we used to use ... Jeez, I can't remember what it's called. What was the [inaudible 00:43:17] tool we used to use? JAMON: Screenhero. KEN: Screenhero? TODD: Screenhero, yes, of course. I remember when Screenhero was ... It was eventually bought by Slack and is being integrated into Slack. We used to use that a lot, but truthfully, the tools in Zoom for screensharing became superior, and so I think almost everyone pairs with each other Zooming. TODD: Another tool we use is RealtimeBoard, which is a sticky-note board analog; the designers -- KEN: Designers love it. TODD: The designers used it a lot, but we also use it in leadership, and the developers, I think, are starting to look into it. It's great for brainstorming. It's a real-time tool, kinda like Google Docs or Google Sheets, where everyone can use it at the same time, and you see everyone using it. That's been really great. The designers use the heck out of InVision, which is a wonderful tool for showing designs, getting notes, and collaborating with clients, collaborating with the rest of the team, and that kind of stuff. Another tool we use for project management a lot is Trello. If you're not familiar with it, it's a great project management tool. It's a Kanban board, if you're familiar with those. Not only do we use Trello, we also integrated ... Ava connects to Trello, produces reports from ... Ava connects to Airtable, which is another interesting mix between a database and a spreadsheet. We use Airtable and Trello. Those are some other tools we use. 
KEN: Something to mention, also, is that between Slack and Zoom we have some redundancy, because Zoom has rudimentary chat and Slack has video conferencing. It's not as good as Zoom's, but it's there, and we already have it. For example, when Slack is down, we have Zoom channels that we can all do basic communication in. That provides a certain amount of resiliency for the work environment, and that's very helpful. TODD: Yeah, it does go down every so often. It's funny because our company comes to a screeching halt when Slack goes down. KEN: Yeah, and that's a valid criticism, I think, of remote working. We do have the redundancy so that people can at least, basically, keep going. TODD: We all know now, if Slack's down ... It was, actually yesterday, coincidentally. JAMON: Yeah. TODD: If Slack is down, we go into Zoom chat. That took a while to get people ... It's funny 'cause we don't use email and stuff, and we use that so much. We could jump into a meeting. We've done that in the past, before we had this redundancy we would just jump into a meeting room and kinda like, "Hey, what do we do?" It was like the lights went out and everyone was confused at what to do. It's actually kind of amusing if you think about that. A bunch of virtual people wandering around in the dark wondering what to do. JAMON: We have a lot of redundancy of internet connection. Someone might be having internet issues, but not everybody is having internet issues. That's a pretty big deal. I remember the office internet would stop working and, even though we were all in the same place, yes we could collaborate, no we couldn't work 'cause we couldn't access -- KEN: Couldn't get to GitHub, can't get to... JAMON: ... Dropbox, whatever. Which, we do use GitHub, we use Dropbox. There's a little tool that I use that, I would say, about a third of the company also uses. We're on video calls a lot. 
When you're on a video call, sometimes it's nice to have a cough button: you hit a button and it mutes you for just a second, so you can cough or whatever. This one's called Shush. It's a Mac app. You can buy it for three bucks or something. It turns your function key into a mute button, so you just hit that button and it will mute you for a short amount of time. Or you can double-tap it and it turns into a push to talk button, which is nice when you're in a big group. TODD: Mm-hmm (affirmative). I don't use Shush, because I use a hardware version of that. I have quite a lot of audio equipment and video stuff. Pretty sure, in the remote podcast, we talked about the importance of having good equipment and spending a little money on good equipment. You cheap managers out there, stop doing that; you're horrible people. (laughter) JAMON: Also the background of your video call is really important. That was actually something Todd really emphasized when we first started. I will point out that he has the messiest background of all of us, right now. TODD: Well, to be clear, I have two cameras. One is a wide angle which I use for the team so I can move around and stuff; and I have a tighter angle I use for clients, in which case, what's behind me is very specifically chosen to be a background, and I keep that incredibly clean. JAMON: I just say that to tweak Todd, because he's the biggest champion of having a good background. TODD: Yes. Jamon's horizon, right now, is extremely tilted, and it's been driving me crazy the whole time, but I'll get over it. (laughter) KEN: I know. I can't unsee that. TODD: In my 46 years on this planet, I've learned not to mention that, even though I really, really want him to straighten his camera. KEN: It doesn't help, Jamon, you've still got a vertical line that is -- TODD: I'll tell you a funny story about backgrounds. Poor Ken. Ken had this very nice ... I don't know what it was. What was it, Ken? 
KEN: It's a bookcase, right, (laughter) but it's IKEA furniture, so it looks -- TODD: It's IKEA? KEN: It looks like a dresser. Yeah. TODD: This whole time it was IKEA? We thought it was important. We felt bad for making fun of it. 'Cause it looks like a dresser. It was right behind him, and it looked like Ken was sitting in bed (laughter) with his dresser behind him. KEN: Yes, reinforcing every stereotype about remote workers. (laughter) TODD: Right. We kept on bugging him, and he said, "It's a really nice bookcase." I didn't realize it was IKEA. KEN: I didn't say it was a really nice bookcase. I said it was a bookcase. (laughter) TODD: It looked like a dresser. JAMON: It really did, in fact. KEN: That's because it's IKEA furniture, so it looks like that. TODD: I guess the point is, how things appear is more important than what they actually are. This is something a lot of people aren't familiar with. We have different people with different levels of quality in what they produce, visually or audio-wise. I think the general takeaway is: take some time. You are almost doing a mini-television broadcast, and you wanna be ... I wouldn't say the word "professional", because it's not stuffy, it's fine if you're wearing your tie-dye and your shorts, but you should make it a pleasant experience for the viewers. KEN: Yeah. You should look inviting, and it should look intentional. TODD: Mm-hmm (affirmative). KEN: And kept. JAMON: We have some other tips for remote video meetings that, I think, are in a blog post that we created. Was that you, Ken, that wrote that post? KEN: Yeah. We could do a whole podcast, frankly, on how to have a good video meeting. JAMON: We can link to that in the show notes. KEN: We can link to that for now. TODD: That is a podcast I wanna do. I do wanna point out to the audience, who can't see us now, we're recording this for your listening "pleasure", and I put pleasure in quotation marks 'cause I don't wanna oversell it. 
But, we are actually on Zoom, so we can see each other. Jamon thankfully moved his camera so we can't see the horizon any more, which is crooked, but right over his left shoulder is a door-line that's incredibly crooked. I appreciate the effort, Jamon, but come on. Have some dignity. JAMON: I will point out that I'm moving out of this rental in a week because I had a house fire, Todd. (laughter) TODD: Oh, jeez. You can't pull a house fire out every time there's a criticism. KEN: The only thing in my background is my Harvard diploma (laughter) because it's all that anyone cares about. JAMON: Yes, exactly. Over my shoulder, I'm thinking about putting my not-Harvard diploma. KEN: "Narvard". JAMON: It'll just say, "Not Harvard." TODD: Sometimes we just invite Ken's Harvard diploma, instead of Ken, to meetings. (laughter) KEN: Yeah, I just put it in frame and then I walk out. (laughter) I'm like, "I'm just the janitor." CHRIS: I do have one final question, as we bring this episode to a close: Is there any tool that you use outside of remote work, or in your daily life, that you wish existed as a remote tool? KEN: Blow torch. (laughter) CHRIS: Elon's got that for ya. TODD: Not a tool, completely, but here's something ... I have ideas for tools that'd be cool in the future. We have the concept of the "kitchen table". This is a real quick story; please, bear with me. The three of us ... I don't know if Ken was, but there were multiple of us from the company who were speaking at a conference in Paris. We rented a large Airbnb apartment in Paris, and a bunch of us were staying there. It had a very large kitchen table. When we weren't doing stuff individually, we'd all sit around the kitchen table, and we'd work together. We would just sit there, like you would at a library in a university or something like that, and work. We wanted to recreate that ... virtually. 
The simple solution is we dedicated one of our Zoom rooms, the "Kitchen", to the "kitchen table" and you can't use that for anything else. If you just wanna be around people, but you're working, you're not really saying anything, as if you're in a library ... I guess we should do the library, but whatever ... you'd go in the kitchen table and just be around people. Sometimes people say things and have little conversations, like you would in an office, but typically you're just sitting there working together. That's cool. It's missing a few features which I'd love to see. For one is, if you're not ... Say there was a group of people working in an open office, and they're in the center and you're on the perimeter of the office. You see them working together there, the "kitchen table", now we have that, with our tool, we can see who's in the "kitchen table" and they're there. Great. But you can also, even if you're far away and they're dim enough ... not dim, but the volume's low enough that it's not disturbing, you can still hear them, and sometimes you'll pick up on little words that may interest you. They'll mention a project you're on, or they'll mention a personal interest that you're interested in or whatever, and you can choose then to go walk over and join them, because of that kind of low-noise but informational thing you're getting by being in the perimeter. I would love to somehow integrate that into our tool, where you could have a low-murmur of people in the background of the meetings that you're not in, and listen for things that might be interesting, something like that. KEN: I don't really know how to think about that question. TODD: I find it very interesting that none of us can really come up with a tool that we wish we had. That's a fantastic answer. KEN: I mean ... JAMON: I think there's probably tools that, eventually, we'll get that will be like, "How did we live without this?" But I don't ... I can't think of one. 
KEN: I can imagine in the future, basically, a VR setup. JAMON: Mm-hmm (affirmative). Yes. KEN: If VR gets to the point where it feels natural: it's comfortable to wear the equipment, it's not a burden just to have the stuff on your head, and the resolution is to the point where you could have a virtual monitor in space, and you can have that feeling of actually being next to people. Then you could, in theory, have the best of both worlds, where you can drop out and leave the space if you want to. You can also be in the space and be available for that. JAMON: Yeah. KEN: I think that would be pretty nice, but ... JAMON: There is a tool out there that's ... I think they're, maybe, in beta right now. It's called Bigscreen VR, and it's by a guy that I know, Darshan Shankar, who's on Twitter. I met him on Twitter. He's doing this Bigscreen VR system. It's very much what you described, Ken. Right now, it's only on Windows, and of course the VR headsets are still evolving. But apparently the new Oculus Go, or Oculus Now, or something, is apparently quite good -- KEN: Yeah, they're getting better. JAMON: It's also likely, they said, that within the next year it'll come to Mac, 'cause they're working on it. KEN: I think another threshold, though, is the quote-unquote "retina" threshold, where the resolution of the headsets is such that you can't, in terms of resolution anyway, tell the difference between that and something that you're looking at. JAMON: Yep. KEN: You could actually make a projected display without any compromise. JAMON: Yes. TODD: I agree, in the future that's gonna be wonderful. I do have some current ideas on how to add spatial stuff to our tools to give us proximity information about each other, virtually. Kind of what you would get if you were in a VR situation, but without having VR. Anyways, there's some interesting things there. KEN: Yeah, we've talked about making an ambient audio device, something like that, that can just sit there and ... 
Kind of like "kitchen table", but without the video. There's a bunch of things we've talked about, but not of them are things that exist today. They're just things that we've thought about creating or ... yeah.
One of the more common questions I get is, "Hey Kurt, what apps do you recommend?" Like any experienced Shopify Expert, I normally give a cryptic non-answer like, "I recommend installing nothing until you have a specific pain or problem that can be solved by a single app. Audit your apps regularly to make sure you still need them." This is not the answer you or anyone is looking to hear. It's not my fault; it's because I have PTSD from logging into client stores and discovering 40+ apps installed, resulting in a 20MB page size and more JavaScript errors than I can diagnose. That's not hyperbole either; it's happened repeatedly. So how can we navigate the app store effectively? What apps are recommended? Joining us to discuss it is Aaron Wadler from ShopPad. Aaron Wadler is co-founder and CEO of ShopPad, an Oakland, CA company built around serving Shopify merchants through App Store applications and custom services. Since its founding in 2012, ShopPad has become a Shopify Plus partner, and their applications are now used by over 70,000 brands and retailers across the globe. 
Subscribe:
- Subscribe to The Unofficial Shopify Podcast via Email
- Subscribe to The Unofficial Shopify Podcast on iTunes
- Subscribe to The Unofficial Shopify Podcast on Stitcher
- Subscribe to The Unofficial Shopify Podcast via RSS
- Join The Unofficial Shopify Podcast Facebook Group
- Work with Kurt

Learn:
- The technology shift that got ShopPad started
- Aaron's contribution to the Shopify app store 5 years ago
- How merchants can know which apps are right for their store
- Aaron's approach to support
- How you should analyze reviews
- How to audit apps for quality
- A methodology for safely testing apps
- What merchants should know about approaching an agency for custom development

Links mentioned (chronologically):
- Get a free 60-day trial of ShopPad apps
- Shopify New Apps Newsletter
- Kurt's Shopify Apps
- Don't Get Caught With Your App Down on Black Friday & Cyber Monday
- Jeremy Green, Rails Developer @ShopPad

Black Friday is around the corner. Are you prepared? Adding email marketing automation to your store can help make this your best holiday season ever: ☞ https://ethercycle.com/pricing/email-flow-automation/

Free Guide: I want to send you a sample chapter of Ecommerce Bootcamp, absolutely free. Tell me where to send your sample at ecommerce-bootcamp.com
A day spent in the delightful company of Dr Margaret Rainbird from Australia, who has set herself the challenge of walking a different labyrinth every day for a year. I managed to take her round 6 Scottish labyrinths during her time in the UK (27 mins, 20MB).
Photography vocabulary | Tilt-shift lenses | Camera buffer and backlighting. Hello and welcome, once again, to Aprender Fotografía. I'm Fran Valverde, joined by Pere Larrègula, fashion and advertising photographer and trainer. Today we answer some listener questions and continue our new section on photography vocabulary. As always, a reminder that Studio Lightroom is a rental space for amateur and professional photographers. We rent both the space itself, a 90 m² studio bathed in natural light, and photographic equipment, and we regularly run courses and workshops. Remember that using the word "podcast" gets you a 20% discount on all courses. Listener questions: Fetel: "Halfway through the podcast I can't resist saying, 'Dear Three Wise Men...' I wish you were in my city so I could rent a B2X for a wedding. I'm so envious; here in Castellón there's almost nothing. You've already made me want to look at the B1s. But I've discovered Innovafoto, and browsing their website I saw the Profoto Tuesdays. I'll stop by. And I'm signing up for the Valencia workshop right now! Let's see if you post the details so I can clear my schedule." Jon López: "Good morning! One note. Nico said on this podcast that the enemy of everything is the know-it-all brother-in-law, the 'cuñao'. Correction: the enemy of everything is MY 'cuñao'. A hug to all three of you and happy weekend. Thanks once again for the podcast." TILT-SHIFT lens: a lens that can shift relative to its axis, partially reproducing the movements of large-format cameras. These lenses are used mainly for perspective control and shift horizontally and vertically. In architecture and interior photography they correct vertical convergence, and they are also used in still-life work. Camera BUFFER: the camera's internal memory that holds digital photos before they are written to the memory card.
And the importance of card speed. These are read speeds; write speed is usually about a third lower:
- 100x = 15 MB/s
- 133x = 20 MB/s
- 200x = 30 MB/s
- 300x = 45 MB/s
- 400x = 60 MB/s
- 600x = 90 MB/s
- 800x = 120 MB/s
- 1000x = 160 MB/s

Carry your SD or CF memory cards protected against shocks, humidity, and so on. BACKLIGHT: natural or artificial light placed behind the subject. The high contrast that almost always characterizes it makes exposure calculation difficult. Since metering the whole scene tends to cause over- or underexposure, it is advisable to take a separate reading from the part of the subject you want rendered normally. Although backlighting creates problems, it has great creative value, letting you create silhouettes and halos around the hair, for example. We also invite you to take part in and attend our photography courses and workshops in Barcelona. Discount: use the word "podcast" in the shopping cart for a 20% discount. As always, we ask you to rate us with a 5-star review on iTunes and iVoox. Thank you so much for your feedback! And don't hesitate to write to us if you have any further questions.
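As a quick sanity check on those ratings: the "x" convention is 1x = 150 KB/s (the old CD-ROM base rate), which reproduces the listed read speeds almost exactly (strictly applied, 1000x comes out at 150 MB/s rather than the quoted 160 MB/s). A minimal sketch, assuming that convention:

```python
# Memory-card "x" speed ratings: 1x = 150 KB/s (the CD-ROM base rate).
# These are read speeds; write speed is roughly a third lower, per the
# rule of thumb in the notes above.

def x_rating_to_mb_per_s(x: int) -> float:
    """Convert an x rating to an approximate read speed in MB/s."""
    return x * 150 / 1000  # 1x = 150 KB/s = 0.15 MB/s

for rating in (100, 133, 200, 300, 400, 600, 800, 1000):
    read = x_rating_to_mb_per_s(rating)
    write = read * (2 / 3)  # approximate write speed
    print(f"{rating:>4}x -> read ~{read:.0f} MB/s, write ~{write:.0f} MB/s")
```

Note that manufacturers round these figures in marketing material, which is likely why the list quotes 160 MB/s for 1000x.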
[Mo-Voice Vol. 5] Our beginnings and endings with Magic: The Gathering's background stories. This time we tried something new on the technical side: besides the audio version that everyone complains can't be paused or scrubbed, there is now a pseudo-video version that can be scrubbed (but with worse sound). WeChat limits audio to 30 minutes and 30MB; for video the limit is 10 hours, but files over 20MB must be hosted externally on Tencent Video (which adds longer ads), so please tell us your preference in the poll at the end of the post. In this episode, we each look back at the spark that first drew us into Magic: The Gathering's lore, and at what Teacher Wuming finally said that poured cold water on Duanzhang's flame...

03:00 Duanzhang's introduction to MtG lore; what are the four official Chinese-translated novels?
05:00 Teacher Wuming actually calls himself Duanzhang's disciple! Duanzhang is too excited to speak!
07:20 A shared feeling: after the Odyssey block, the cards stopped telling much story. Do you agree?
08:30 Card-illiterate Duanzhang earnestly makes things up again: it was the flavor narration of "Gerrard's Verdict," but he called it "雪恨"
09:50 Tempest, Stronghold, and Exodus used 30-50 cards each to tell the story?!
14:10 The prototype of Uncharted Realms: the 20 short stories released alongside the Kamigawa block; look forward to the completed translations by 星辰太子@MTGCN.
15:30 From translating "列将传" to translating novels, Duanzhang jumps into an even deeper pit (and then runs away).
17:25 When Teacher Wuming got into the game remains a mystery; now he claims he started with Lorwyn!
19:10 MtG lore enters an era of innovation; fat packs no longer include the novels, and both hosts were forced to stop reading. (Duanzhang misspeaks again: "A Planeswalker's Guide to Alara" was actually priced at $16.99, not $26.99)
20:25 Overseas shopping sprees rekindle Duanzhang's spark and lay the foundation for the Boshidu (博识都) public account
21:45 Wizards of the Coast, we love you (in the Theros era): the MtG stories of that period had many formats, high quality, and good coordination. Duanzhang's silver age.
25:45 Teacher Wuming takes you to a higher level: Lin Yutang on fiction. After hearing it, Duanzhang doesn't want to run Boshidu anymore?

If you've listened to the episode and read the article but just want to see the cat, please let us know. Previous episodes (click to jump): Vol.1 Duel Decks: Mind vs. Might (Part 1): Storm vs. tribal, which is stronger? Duel Decks: Mind vs. Might (Part 2): Jhoira and her boyfriends. Vol.2 Nicol Bolas: a terrible game player. Vol.3 Gideon, still 28 and in his zodiac year. Vol.3.5 Introducing ourselves and our ideas. Vol.4 Bolas still needs a geologist or a kiln master. Click "Read the original" to jump to our page on Ximalaya FM.
At Spring Equinox 2016, we attended the last public event at Glasgow's Sighthill stone circle, constructed in 1979 by local astronomer and science-fiction author Duncan Lunan. The circle has been removed to make way for a housing development, but due to public outcry the Council have agreed to relocate it nearby once the housing construction is complete. Duncan answers many questions about the circle's design and construction, and we talk to Kenny Brophy, the Urban Prehistorian, and Kevin Andrew Morris, a local artist who is co-ordinating a rather interesting art project in the circle (29mins, 20MB).
Today’s podcast is recorded in Michele Neylon’s office at Blacknight HQ in Carlow. It’s our first show recorded there since we refurbished it with echo-absorbing acoustic panels. Michele jokes that I’ve put him in “a padded cell” – but the sound quality is hugely improved! Click on the player above to listen to the show, or download it here: 34:31; 20MB; […] Podcast from the Padded Cell [Audio] originally appeared on Technology.ie News & Views on Gadgets & Tech - A Podcast about technology
An interview with Matthew Dillon about the upcoming 4.0 release of DragonFly BSD. File info: 43 min, 20MB. Ogg link: https://archive.org/download/bsdtalk248/bsdtalk248.ogg
28 July 2015 Ivan McBeth is a Druid, geomancer, builder of stone circles, an amazing DJ, and all-round wonderful human being, originally from England but now living in Vermont. Ivan is the builder of many stone circles, and with his partner Fearn runs the Green Mountain Druid Order in Worcester, Vermont. We chat about his stone circle building, his music and ecstatic dancing, Druid training, and death and dying in modern Vermont, which has recently passed assisted dying legislation. (29min, 20MB). Sadly, Ivan himself passed unexpectedly in 2016. Green Mountain Druid Order
Epicenter - Learn about Blockchain, Ethereum, Bitcoin and Distributed Technologies
Whether the block size should be increased to 20MB has created more controversy than any other question in Bitcoin’s recent history. For some, it is an urgent and necessary step in Bitcoin’s evolution. Their view is that leaving the block size at 1MB would be irresponsible inaction with potentially catastrophic consequences. Others see increasing the block size as unnecessary and a dangerous first step down a slippery slope towards a more centralized Bitcoin. We were joined by Mike Hearn, who along with Gavin Andresen is the most outspoken supporter of a block size increase. He is also the creator of Bitcoin XT, a modified fork of Bitcoin Core that may become the vehicle for the push for bigger blocks if no agreement is reached regarding Bitcoin Core. Don’t miss this crucial conversation! Topics covered in this episode: What would happen if blocks started being consistently full Whether bigger blocks create a centralization risk Why Bitcoin Core development has become pervaded with toxic division What Bitcoin XT is and how it differs from Bitcoin Core The roadmap ahead and how a transition to Bitcoin XT would occur Update on Lighthouse Episode links: The Capacity Cliff Crash Landing Let's Talk Bitcoin! #217 The Bitcoin Block Size Discussion Gavin Andresen Moves Ahead with Push for Bigger Blocks Bitcoin Dev Mailing List Lighthouse This episode is hosted by Brian Fabian Crain and Sébastien Couture. Show notes and listening options: epicenter.tv/082
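For a sense of why the 1MB vs 20MB question matters: block size caps transaction throughput, since blocks arrive roughly every ten minutes on average. A back-of-the-envelope sketch; the 250-byte average transaction size is an illustrative assumption, not a figure from the episode:

```python
# Rough throughput estimate for the block-size debate described above.
# Assumptions (illustrative only): ~250-byte average transaction,
# one block every ~600 seconds on average.

AVG_TX_BYTES = 250
BLOCK_INTERVAL_S = 600

def max_tx_per_second(block_size_mb: float) -> float:
    """Upper bound on transactions per second for a given block size."""
    block_bytes = block_size_mb * 1_000_000
    return block_bytes / AVG_TX_BYTES / BLOCK_INTERVAL_S

for size_mb in (1, 20):
    print(f"{size_mb} MB blocks -> ~{max_tx_per_second(size_mb):.1f} tx/s")
```

Under these assumptions, 1MB blocks cap out around 7 tx/s while 20MB blocks allow roughly 130 tx/s, which is the capacity gap driving the controversy the episode discusses.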
All pictures from The Running of the Butt Mandle in Berchtesgaden, Germany, Dec 6th 2014. Photos property of Astonishing Legends/Scott Philbrook & Forrest Burgess 2014 - All Rights Reserved. Background: Christmas, Hanukkah, Kwanzaa, Krampus? Every part of the world has different traditions at the end of the year. Have you ever stopped to think about the origins of those traditions? Ever stop to think they might be Pagan in nature? Well, we not only stopped to think about it, we dove into it and came back with some amazing information, bolstered with an iPhone movie taken only days ago in Germany at one of the strangest, but most interesting, traditions of them all: The Krampus! Legend: 'Twas the night before Christmas and all through the house not a creature was stirring except for The Krampus on the front porch, waiting for the bad kids so he could kidnap them in the basket on his back and take them away. Stories: If you can't click on these links, visit our website. We've got some good stuff for this one. Chuck Jones, born in the regions of Mr. F.M. Burgess. Perry Futard (yes, I mispelled that on purpose to aggravate Forrest). A link to one of our inspirations, Jim Harold, and Scott's favorite show of his, Campfire. Friends of the show, George and Ashley, have agreed to share their video from The Running of the Butt Mandle, shot just days ago on December 6th, 2014. Clicking this link will put it in your downloads folder; don't worry, it's only 20MB and about 1 minute long. Kindle version of Mary Beth Crain's book, 'Haunted Christmas'. The Venture Brothers - A Very Venture Christmas. Credits: Episode 007 - 'A Krampus Christmas'. Produced by Scott Philbrook & Forrest Burgess, Ryan McCullough Sound Design, Scott Philbrook Editing. Copyright Scott Philbrook & Forrest Burgess 2014, All Rights Reserved. Photo sources: All photos by Astonishing Legends, Copyright 2014. All Rights Reserved.
BSC: The best comedy podcast in Brazil! Fun and entertainment from Bobos Sem Corte
It was about time the BSC podcast gathered all the jokes about the hometown of almost everyone in the group in one place, if only to spare the other episodes the same old jokes and comments. Also find out what residents of other states think about São Paulo. And of course, not forgetting the eternal rivalry with Rio de Janeiro. Duration: 30 min | Download: 20MB. Leave your comment below or send an email to contato@bobossemcorte.com. Follow upcoming episodes and download old ones: RSS Feed | Subscribe on iTunes | View on smartphone | Listen on Stitcher | Stitcher mobile: iOS - Android - Kindle Fire | Listen on TuneIn Radio | TuneIn Radio mobile: iOS - Android - Windows Phone - Blackberry. Related podcasts: BSC#69 - Coisa de Rei do Camarote; BSC#64 - Só me fodo Brasil com Vida de Fudido; BSC#59 - As Baladas Nossas de Cada Dia; BSC#51 - Manias com Banda Betterman. Also visit: Facebook, Twitter
The BSC podcast teams up with Luíz Yassuda of Brainstorm9 to discuss the literal orgy of new hookup social networks: Lulu, Tubby, Tinder, Badoo; the niche ones: Brenda, Grindr; and many more exotic ones, like the fetish network Fetlife. Can you believe all this? Find out which social network is best for getting you out of your dry spell and into the kind of relationship you prefer, and learn the risks and advantages of each one. Episode recorded at Estúdio Liverpool, the best studio in São Paulo, because everyone records naked with air conditioning on their behind! Duration: 29 min | Download: 20MB. Leave your comment below or send an email to contato@bobossemcorte.com. Follow upcoming episodes and download old ones: RSS Feed | Subscribe on iTunes | View on smartphone | Listen on Stitcher | Stitcher mobile: iOS - Android - Kindle Fire | Listen on TuneIn Radio | TuneIn Radio mobile: iOS - Android - Windows Phone - Blackberry. Related podcasts: BSC#59 - As Baladas Nossas de Cada Dia; BSC#56 - Videogame: O ritual com 99Vidas; BSC#55 - Reality People com Jean Massumi - BBB3; BSC#38 - Tecnobrega: Tecnologias Novas VS Antigas. Also visit: Facebook, Twitter. Check out Estúdio Liverpool!
[News] Neil, Lewis and Leon Cox from new podcast Cane & Rinse discuss the latest gaming headlines, including: I Am Alive is, er, alive. Ubisoft's long-in-development disaster action game skips retail and heads for XBLA and PSN. Is this a good move? Does anyone actually care about this game anymore? Vita 3G has 20MB download limit. Sony reveals that the download limit for the 3G version of its upcoming handheld will be the same as iOS devices. L.A. Noire Complete Edition due next month. For those who haven't played this summer's critically acclaimed but divisive blockbuster, you'll soon be able to pick it up with all the DLC. Dead Island movie confirmed. Remember that teaser trailer for Dead Island that everyone thought was just like a movie and made the game look soooo awesome, etc? Well, now it's inspired the latest attempt at a video game movie. Should be good, right? Skyrim vocal talent includes Captain Von Trapp and Wonder Woman. Christopher Plummer, Max Von Sydow, Lynda Carter, Joan Allen and more lend their voices to the upcoming Elder Scrolls game. Syndicate reboot dated and trailered. Let's face it - we just put this in to wind Lewis up. Revenge is sweet!
2011-04-24 礼拝メッセージ(20MB).m4a
12 November 2010 Down Under dowser and geomancer Alanna Moore joins us for a chat at our 2010 Conference, and tells us what she's been up to since her last visit to the UK. (29 mins, 20MB) visit Alanna's website: Geomantica.com
Hosted by Steve Cherubino and Scott Moulton of MyHardDriveDied.com. Topics discussed: Upcoming conferences: Feb 6, Washington, DC - ShmooCon 2010 security conference & data recovery (link to PDF presentation, 20MB); Feb 9-13, Atlanta, GA - hardware and software data recovery, how to start your own company, etc. Average forensic / data recovery job: civil cases / private investigators, $2,500 to […]
PODCAST LA ALDEA IRREDUCTIBLE - CHAPTER 19 - THE ARCHITECTS OF SAQQARA. The Aldea podcast returns, and this chapter 19 takes us to the mysterious sands of Egypt to discover the first pyramid: Saqqara. We travel 5,000 years back to meet one of the most brilliant architects in history: Imhotep, builder of the pyramid of Saqqara and of the temple and funerary complex attached to it. But time and sand left that marvel buried beneath the dunes. By the 1920s, only the solitary, abandoned silhouette of the Saqqara pyramid was visible. Nobody imagined that, buried below, a necropolis full of temples, courtyards and columns lay waiting. In 1926, a young architect named Jean-Philippe Lauer arrived at the Saqqara excavations. He knew nothing about archaeology or Egyptology... It was the first time he had set foot in the desert, and he came on a temporary eight-month contract... That contract would become an entire life devoted to Saqqara. DOWNLOAD THE PODCAST: 23MB direct download, .MP3 format | 20MB direct download, .OGG format | 11MB download, compressed .ZIP format | 23MB download via Megaupload | Download in other formats | Download on iTunes. The music used in this podcast is under Creative Commons licenses: Celestian Aeon Project, Butterfly Tea, Whiteyes, Greendjohn, David Ospina, and the song "Broken Stereo" by Sean Fournier. SUBSCRIBE TO THE HISTORY AND SCIENCE PODCAST LA ALDEA IRREDUCTIBLE
The fourth episode of our Journées de la Francophonie 2007 video coverage presents the scenes from the concert of the French jazz quartet Lady Bird Jazz’tet at the local jazz club RURA (24.03.2007). Video download (iPod video compatible): ladybird.mp4 (H.264 / AAC / 26.20MB / 4:38) Please use Quicktime 7 or VLC Media Player for [...]
Listen to Altered Reflection, Part Three A (20MB mp3)
Careful, it's over 20MB. Here we go again; to get you in the mood, a somewhat dusty video, in the usual "quality". ;)
Podsafe Music Network (PMN): The Argyle Pimps - "Argyle Pimpin" (see The Argyle Pimps PMN page and The Argyle Pimps website). Snow report: Friday, snow in the valleys; Saturday and Sunday, 30cm - 40cm - 50cm; Saturday morning, 10cm already; Milan airports closed; Turin-Bardonecchia motorway closed; out at 10, snow getting ever heavier; poor visibility above the tree line; 1300 - 2700
In Volume 4, DaveO reads from his own “Letters from Russia,” Leo Tolstoy’s epic “War and Peace” and a bit of French poetry while fumbling with pronunciation and enjoying a Guinness in the nighttime garden. https://archive.org/download/RussianEpicsFrenchPoemsAndIrishBeer-Postcard4/Russian%20Epics%2C%20French%20Poems%20and%20Irish%20Beer%20%E2%80%93%20Postcard%20%234.mp3 Wander along for Russian epics, French poems and Irish beer – Postcard #4 (19:22, 20MB, stereo, .mp3) Countryside, watercolour …