Happy holidays! We'll be sharing snippets from Latent Space LIVE! through the break, bringing you the best of 2024! We want to express our deepest appreciation to event sponsors AWS, Daylight Computer, Thoth.ai, StrongCompute, Notable Capital, and most of all, all our LS supporters who helped fund the gorgeous venue and A/V production!

For NeurIPS last year we did our standard conference podcast coverage interviewing selected papers (which we have now also done for ICLR and ICML); however, we felt that we could be doing more to help AI Engineers 1) get more industry-relevant content, and 2) recap the 2024 year in review from experts. As a result, we organized the first Latent Space LIVE!, our first in-person miniconference, at NeurIPS 2024 in Vancouver.

Of perennial interest, particularly at academic conferences, is scaled-up architecture research as people hunt for the next Attention Is All You Need. We have many names for these models: "efficient models", "retentive networks", "subquadratic attention" or "linear attention", but some of them don't even have any lineage with attention - one of the best papers of this NeurIPS was Sepp Hochreiter's xLSTM, which has a particularly poetic significance as one of the creators of the LSTM returning to update and challenge the OG language model architecture.

So, for lack of a better term, we decided to call this segment "the State of Post-Transformers", and fortunately everyone rolled with it.

We are fortunate to have two powerful friends of the pod to give us an update here:

* Together AI: with CEO Vipul Ved Prakash and CTO Ce Zhang joining us to talk about how they are building Together together as a quote-unquote full stack AI startup, from the lowest-level kernel and systems programming to the highest-level mathematical abstractions driving new model architectures and inference algorithms, with notable industry contributions from RedPajama v2, Flash Attention 3, Mamba 2, Mixture of Agents, BASED, Sequoia, Evo, Dragonfly, Dan Fu's ThunderKittens and many more research projects this year.
* Recursal AI: with CEO Eugene Cheah, who has helped lead the independent RWKV project while also running Featherless AI. This year, the team has shipped RWKV v5, codenamed Eagle, to 1.5 billion Windows 10 and Windows 11 machines worldwide, to support Microsoft's on-device, energy-usage-sensitive Windows Copilot use cases, and has launched the first updates on RWKV v6, codenamed Finch and GoldFinch. On the morning of Latent Space Live, they also announced QRWKV6, a Qwen 32B model modified with RWKV linear attention layers.

We were looking to host a debate between our speakers, but given that both of them were working on post-transformer alternatives, the session ended up as a joint state-of-the-field presentation rather than a debate.

Full Talk on Youtube

Please like and subscribe!

Links

All the models and papers they picked:

* Earlier Cited Work
* Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention
* Hungry hungry hippos: Towards language modeling with state space models
* Hyena hierarchy: Towards larger convolutional language models
* Mamba: Linear-Time Sequence Modeling with Selective State Spaces
* S4: Efficiently Modeling Long Sequences with Structured State Spaces
* Just Read Twice (Arora et al)
* Recurrent large language models that compete with Transformers in language modeling perplexity are emerging at a rapid rate (e.g., Mamba, RWKV). Excitingly, these architectures use a constant amount of memory during inference.
However, due to the limited memory, recurrent LMs cannot recall and use all the information in long contexts leading to brittle in-context learning (ICL) quality. A key challenge for efficient LMs is selecting what information to store versus discard. In this work, we observe the order in which information is shown to the LM impacts the selection difficulty.
* To formalize this, we show that the hardness of information recall reduces to the hardness of a problem called set disjointness (SD), a quintessential problem in communication complexity that requires a streaming algorithm (e.g., recurrent model) to decide whether inputted sets are disjoint. We empirically and theoretically show that the recurrent memory required to solve SD changes with set order, i.e., whether the smaller set appears first in-context.
* Our analysis suggests, to mitigate the reliance on data order, we can put information in the right order in-context or process prompts non-causally. Towards that end, we propose: (1) JRT-Prompt, where context gets repeated multiple times in the prompt, effectively showing the model all data orders. This gives 11.0±1.3 points of improvement, averaged across 16 recurrent LMs and the 6 ICL tasks, with 11.9× higher throughput than FlashAttention-2 for generation prefill (length 32k, batch size 16, NVidia H100). We then propose (2) JRT-RNN, which uses non-causal prefix-linear-attention to process prompts and provides 99% of Transformer quality at 360M params., 30B tokens and 96% at 1.3B params., 50B tokens on average across the tasks, with 19.2× higher throughput for prefill than FA2.
* Jamba: A 52B Hybrid Transformer-Mamba Language Model
* We present Jamba, a new base large language model based on a novel hybrid Transformer-Mamba mixture-of-experts (MoE) architecture.
* Specifically, Jamba interleaves blocks of Transformer and Mamba layers, enjoying the benefits of both model families. MoE is added in some of these layers to increase model capacity while keeping active parameter usage manageable.
* This flexible architecture allows resource- and objective-specific configurations. In the particular configuration we have implemented, we end up with a powerful model that fits in a single 80GB GPU.
* Built at large scale, Jamba provides high throughput and small memory footprint compared to vanilla Transformers, and at the same time state-of-the-art performance on standard language model benchmarks and long-context evaluations. Remarkably, the model presents strong results for up to 256K tokens context length.
* We study various architectural decisions, such as how to combine Transformer and Mamba layers, and how to mix experts, and show that some of them are crucial in large scale modeling. We also describe several interesting properties of these architectures which the training and evaluation of Jamba have revealed, and plan to release checkpoints from various ablation runs, to encourage further exploration of this novel architecture. We make the weights of our implementation of Jamba publicly available under a permissive license.
* SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformers
* We introduce Sana, a text-to-image framework that can efficiently generate images up to 4096×4096 resolution. Sana can synthesize high-resolution, high-quality images with strong text-image alignment at a remarkably fast speed, deployable on laptop GPU.
Core designs include:
* (1) Deep compression autoencoder: unlike traditional AEs, which compress images only 8×, we trained an AE that can compress images 32×, effectively reducing the number of latent tokens.
* (2) Linear DiT: we replace all vanilla attention in DiT with linear attention, which is more efficient at high resolutions without sacrificing quality.
* (3) Decoder-only text encoder: we replaced T5 with modern decoder-only small LLM as the text encoder and designed complex human instruction with in-context learning to enhance the image-text alignment.
* (4) Efficient training and sampling: we propose Flow-DPM-Solver to reduce sampling steps, with efficient caption labeling and selection to accelerate convergence.
* As a result, Sana-0.6B is very competitive with modern giant diffusion model (e.g. Flux-12B), being 20 times smaller and 100+ times faster in measured throughput. Moreover, Sana-0.6B can be deployed on a 16GB laptop GPU, taking less than 1 second to generate a 1024×1024 resolution image. Sana enables content creation at low cost.
* RWKV: Reinventing RNNs for the Transformer Era
* Transformers have revolutionized almost all natural language processing (NLP) tasks but suffer from memory and computational complexity that scales quadratically with sequence length. In contrast, recurrent neural networks (RNNs) exhibit linear scaling in memory and computational requirements but struggle to match the same performance as Transformers due to limitations in parallelization and scalability.
* We propose a novel model architecture, Receptance Weighted Key Value (RWKV), that combines the efficient parallelizable training of transformers with the efficient inference of RNNs.
* Our approach leverages a linear attention mechanism and allows us to formulate the model as either a Transformer or an RNN, thus parallelizing computations during training and maintains constant computational and memory complexity during inference.
* We scale our models as large as 14 billion parameters, by far the largest dense RNN ever trained, and find RWKV performs on par with similarly sized Transformers, suggesting future work can leverage this architecture to create more efficient models. This work presents a significant step towards reconciling trade-offs between computational efficiency and model performance in sequence processing tasks.
* LoLCATs: On Low-Rank Linearizing of Large Language Models
* Recent works show we can linearize large language models (LLMs) -- swapping the quadratic attentions of popular Transformer-based LLMs with subquadratic analogs, such as linear attention -- avoiding the expensive pretraining costs. However, linearizing LLMs often significantly degrades model quality, still requires training over billions of tokens, and remains limited to smaller 1.3B to 7B LLMs.
* We thus propose Low-rank Linear Conversion via Attention Transfer (LoLCATs), a simple two-step method that improves LLM linearizing quality with orders of magnitudes less memory and compute.
* We base these steps on two findings.
* First, we can replace an LLM's softmax attentions with closely-approximating linear attentions, simply by training the linear attentions to match their softmax counterparts with an output MSE loss ("attention transfer").
* Then, this enables adjusting for approximation errors and recovering LLM quality simply with low-rank adaptation (LoRA).
* LoLCATs significantly improves linearizing quality, training efficiency, and scalability.
We significantly reduce the linearizing quality gap and produce state-of-the-art subquadratic LLMs from Llama 3 8B and Mistral 7B v0.1, leading to 20+ points of improvement on 5-shot MMLU.
* Furthermore, LoLCATs does so with only 0.2% of past methods' model parameters and 0.4% of their training tokens.
* Finally, we apply LoLCATs to create the first linearized 70B and 405B LLMs (50x larger than prior work).
* When compared with prior approaches under the same compute budgets, LoLCATs significantly improves linearizing quality, closing the gap between linearized and original Llama 3.1 70B and 405B LLMs by 77.8% and 78.1% on 5-shot MMLU.

Timestamps

* [00:02:27] Intros
* [00:03:16] Why Scale Context Lengths? or work on Efficient Models
* [00:06:07] The Story of SSMs
* [00:09:33] Idea 1: Approximation -> Principled Modeling
* [00:12:14] Idea 3: Selection
* [00:15:07] Just Read Twice
* [00:16:51] Idea 4: Test Time Compute
* [00:17:32] Idea 2: Hardware & Kernel Support
* [00:19:49] RWKV vs SSMs
* [00:24:24] RWKV Arch
* [00:26:15] QRWKV6 launch
* [00:30:00] What's next
* [00:33:21] Hot Takes - does anyone really need long context?

Transcript

[00:00:00] AI Charlie: We're back at Latent Space Live, our first mini conference held at NeurIPS 2024 in Vancouver. This is Charlie, your AI co-host. As a special treat this week, we're recapping the best of 2024 going domain by domain. We sent out a survey to the over 900 of you who told us what you wanted, and then invited the best speakers in the Latent Space Network to cover each field.
[00:00:24] AI Charlie: 200 of you joined us in person throughout the day, with over 2200 watching live online. Thank you. Our next keynote covers the state of Transformer-alternative architectures, with a special joint presentation with Dan Fu of Together AI and Eugene Cheah of Recursal AI and Featherless AI. We've featured both Together and Recursal on the pod before, with CEO Vipul Ved Prakash introducing them.
[00:00:49] AI Charlie: And CTO Ce Zhang joining us to talk about how they are building Together together as a quote-unquote full stack AI startup, from the lowest-level kernel and systems programming to the highest-level mathematical abstractions driving new model architectures and inference algorithms, with notable industry contributions from RedPajama v2, Flash Attention 3, Mamba 2, Mixture of Agents,
[00:01:15] AI Charlie: BASED, Sequoia, Evo, Dragonfly, Dan Fu's ThunderKittens, and many more research projects this year. As for Recursal and Featherless, we were the first podcast to feature RWKV last year, and this year the team has shipped RWKV v5, codenamed Eagle, to 1.5 billion Windows 10 and Windows 11 machines worldwide to support Microsoft's on-device, energy-usage-sensitive Windows Copilot use cases, and has launched the first updates on RWKV v6, codenamed Finch and GoldFinch.
[00:01:53] AI Charlie: On the morning of Latent Space Live, they also announced QRWKV6, a Qwen 32B model modified with RWKV linear attention layers. Eugene has also written the single most popular guest post on the Latent Space blog this year (yes, we do take guest posts) on what he has discovered about the H100 GPU inference NeoCloud market since the successful launch of Featherless AI this year.
[00:02:20] AI Charlie: As always, don't forget to check the show notes for the YouTube link to their talk as well as their slides. Watch out and take care.

[00:02:27] Intros

[00:02:27] Dan Fu: Yeah, so thanks so much for having us.
So this is going to be a little bit of a two part presentation. My name is Dan. I'm at Together AI, and I'll be joining UCSD as faculty in about a year. And Eugene, you want to introduce yourself?[00:02:46] Eugene Cheah: Eugene, I lead the art activity team, and I, I'm CEO of Featherless, and we both work on this new post transformer architecture space.[00:02:55] Dan Fu: Yeah, so yeah, so today we're really excited to talk to you a little bit [00:03:00] about that. So first I'm going to give a broad overview of kind of the last few years of progress in non post transformer architectures. And then afterwards Eugene will tell us a little bit about the latest and the greatest and the latest frontier models in this space.[00:03:16] Why Scale Context Lengths? or work on Efficient Models[00:03:16] Dan Fu: So, the story starts with Scaling. So this is probably a figure or something like this that you've seen very recently. Over the last five to six years, we've seen models really scale up in parameter size, and that's brought with it a bunch of new capabilities, like the ability to talk to you and tell you sometimes how to use your Colab screens.[00:03:35] Dan Fu: But another place where we've seen scaling especially recently is scaling in context length. So this can mean Having more text inputs for your models, but it can also mean things like taking a lot of visual token inputs image inputs to your models or generating lots of outputs. And one thing that's been really exciting over the last few months or so is that we're, we're seeing scaling, not only during training time, but also [00:04:00] during test time.[00:04:00] Dan Fu: So this is one of the, the, this is the iconic image from the OpenAI 01 release. Not only are we starting to scale train time compute, but we're also starting to scale test time compute. Now if you're familiar with our attention and our transformer architectures today, this graph on the right might look a little bit scary.[00:04:19] Dan Fu: And one of the reasons is that the implications are a little bit Interesting. So what does it mean if we want to continue having smarter and smarter models? Do we just need to start building bigger, bigger data centers, spending more flops? Is this this little Dolly 3, we need more flops, guys? Is this going to be the future of all of AI?[00:04:39] Dan Fu: Or is there a better way, another path forward? Maybe we can get the same capabilities that we've gotten used to, But for a lot less compute, a lot less flops. And one of the things that we're going to talk about today is specifically looking at that core attention operator in some of these models.[00:04:57] Dan Fu: And the reason is that so this is just some, some [00:05:00] basic you know, scaling curves, but attention has compute that scales quadratically in the context length. So that means that if you're doing something like test time compute and you want to spend a bunch of tokens thinking about what comes next, the longer that that goes the, the, the more tokens you spend on that, that compute grows quadratically in that.[00:05:19] Dan Fu: One of the questions that we're interested in is, can we take that basic sequence model, that basic sequence primitive at the bottom, and get it to scale better? Can we scale in, let's say, n to the 3 halves or n log n? So in, in the first part of the talk, so we just went over the introduction. 
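To put rough numbers on the quadratic-versus-subquadratic question raised above, here is a tiny back-of-envelope sketch. It is illustrative only: it keeps just the dominant terms, and the model width is an arbitrary stand-in.

```python
def attention_cost(seq_len, d_model=64):
    # Softmax attention forms a (seq_len x seq_len) score matrix: ~O(T^2 * d) work.
    return seq_len ** 2 * d_model

def linear_attention_cost(seq_len, d_model=64):
    # A linear-attention / recurrent-state model carries a (d x d) state: ~O(T * d^2) work.
    return seq_len * d_model ** 2

for T in (1_000, 10_000, 100_000):
    ratio = attention_cost(T) / linear_attention_cost(T)
    print(f"T={T:>7,}: quadratic attention does ~{ratio:,.0f}x the work")
```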
What I'm gonna do over the next few slides is just talk about some of the key advances and ideas that have shown over the past few years since maybe early 2020 to, to now that shown promise that this might actually be possible.[00:05:48] Dan Fu: That you can actually get potentially the same quality that we want while scale, while scaling better. So to do that, we're and, and basically the, the story that we're gonna look is we're gonna start to see [00:06:00] how. So this is a basic graph of just the past couple years of progress of perplexity where that blue line, that dotted blue line, is attention.[00:06:07] The Story of SSMs[00:06:07] Dan Fu: It's your basic transformer, full dense attention. And then the dots coming down are some of the methods that you'll see in this presentation today. We're going to turn the clock back all the way to 2020. So this, this, this question of can we make attention subquadratic? Basically, as soon as we said attention is all you need, People started asking this question.[00:06:28] Dan Fu: So we have this quadratic attention operator. Can we do better? I'll briefly talk about why attention is quadratic. And the basic thing that happens, if you're not familiar, is that you have these inputs, these keys and queries. And what you do in this attention matrix, this S matrix over here, is that you're using, you're comparing every token in your input to every other token.[00:06:49] Dan Fu: So when I try to do something like upload a whole book to Gemini, what happens beyond the Maybe not Gemini, because we don't necessarily know what architecture is. But let's say we upload it to LLAMA, what happens beyond [00:07:00] the scenes, behind the scenes, is that it's going to take every single word in that book and compare it to every other word.[00:07:05] Dan Fu: And this has been a really, it's, it's led to some pretty impressive things. But it's kind of a brute forcing of the way that you would try to interpret a interpret something. And what attention does in particular is the, and then what attention, sorry, don't want to. Okay, no, no laser pointer. What, what attention does afterwards is that instead of always operating in this quadratic thing, it takes a row wise softmax over this matrix, and then multiplies it by this values matrix.[00:07:32] Dan Fu: So, one of the key points to notice is that the output size is always going to be the same as the inputs, at least in standard self attention. So one of the first things that folks tried to do around 2020 is this thing called linear attention, which is just, just noticing that if we take out this softmax from here, if we take out this non linearity in the middle of the attention operation, and then if you compute the keys and the values operation first, you actually never hit this quadratic bottleneck.[00:07:57] Dan Fu: So that, that's potentially a way [00:08:00] to get a lot more computationally efficient. And there are various ways to do this by basically using feature maps or try to approximate this overall attention computation. But some of this work sort of started to hit a wall in 2020. And the basic challenges were, were two.[00:08:16] Dan Fu: So one was quality. It was back then, it was kind of hard to, to get good quality with these linear attention operators. The other one was actually hardware efficiency. So these, this feature map that was just shown by a simplify simplify here. 
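To make the reordering described above concrete, here is a minimal non-causal linear-attention sketch in PyTorch, using the elu(x)+1 feature map from the "Transformers are RNNs" paper listed in the show notes. It is a toy illustration of the associativity trick, not any production kernel, and it skips causal masking for brevity.

```python
import torch
import torch.nn.functional as F

def softmax_attention(q, k, v):
    # Standard attention: the (T, T) score matrix is where the quadratic cost lives.
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    return torch.softmax(scores, dim=-1) @ v

def linear_attention(q, k, v):
    # Drop the softmax, apply a positive feature map phi, and reassociate:
    # (phi(Q) phi(K)^T) V == phi(Q) (phi(K)^T V), so the (T, T) matrix is never formed.
    phi_q, phi_k = F.elu(q) + 1, F.elu(k) + 1              # (T, d)
    kv = phi_k.transpose(-2, -1) @ v                        # (d, d_v) summary of keys/values
    normalizer = phi_q @ phi_k.sum(dim=-2).unsqueeze(-1)    # (T, 1)
    return (phi_q @ kv) / (normalizer + 1e-6)

q, k, v = (torch.randn(2048, 64) for _ in range(3))
out = linear_attention(q, k, v)   # O(T * d * d_v) instead of O(T^2 * d)
```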
That feature map actually ends up being quite computationally expensive if you just implement it naively.
[00:08:34] Dan Fu: So you started having these operators where not only are you not really sure if they have the same quality, but they're also actually just wall-clock slower. So you kind of end up getting the worst of both worlds. So that kind of sets the stage for four years ago.
[00:08:49] Dan Fu: Keep this in mind, because linear attention is actually going to come back in a few years once we have a better understanding. But one of the works that started kicking off this mini revolution in post-transformer architectures was this idea called state space models. So here the seminal work is S4, in 2022.
[00:09:09] Dan Fu: And this piece of work really brought together a few ideas from some long-running research lines of work. The first one, and this is really one of the keys to closing the gap in quality, was just using things that, if you talk to an electrical engineer off the street, they might know off the back of their hand.

[00:09:33] Idea 1: Approximation -> Principled Modeling

[00:09:33] Dan Fu: But taking some of those properties of how we model dynamical systems in signal processing and then using those ideas to model the inputs, the text tokens, in, for example, a transformer-like next-token-prediction architecture. So some of those early state space model papers were looking at this relatively simple recurrent update model that comes from maybe chapter one of a signal processing class.
[00:09:59] Dan Fu: But then using some principled theory about how you should do that recurrent update in order to really get the most that you can out of your hidden state, out of your sequence. So that was one key idea for quality. And when this was eventually realized, you started to see a bunch of benchmarks that had been pretty sticky for a few years.
[00:10:20] Dan Fu: Things like Long Range Arena, some long-sequence evaluation benchmarks, there was stuff in time series analysis. You started to see the quality tick up in meaningful ways. But the other key thing that's so influential about these state space models is that they also had a key idea about how you can compute these things efficiently.
[00:10:45] Dan Fu: So if you go back to your machine learning 101 class where you learned about RNNs, one thing that you may have learned is that they don't parallelize as well as attention, because if you just run them naively, you have to do this kind of sequential update to process new tokens, whereas in attention, you can process all the tokens in parallel at one time.
[00:11:04] Dan Fu: One of the key insights behind the S4 paper was that these recurrent models, you could take them and you could also formulate them as a convolution. And in particular, with a convolution, instead of using a PyTorch conv1d operation, you can compute that with the FFT. And that would give you n log n compute in the sequence length n with an operator that was relatively well optimized for modern hardware.
[00:11:28] Dan Fu: So those are really, I'd say, the two key ideas in 2022 that started allowing these breakthroughs to happen in these non-transformer architectures.
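For a concrete picture of that second idea - the same linear recurrence unrolled into a convolution kernel and applied with the FFT in O(n log n) - here is a toy sketch. The random state matrices below are stand-ins; real S4 uses structured (HiPPO-style) initialization and computes the kernel in closed form, which is where much of the quality comes from.

```python
import torch

def ssm_kernel(A, B, C, length):
    # Unroll the recurrence x_t = A x_{t-1} + B u_t, y_t = C x_t into the
    # explicit convolution kernel K = (CB, CAB, CA^2B, ...).
    K, state = [], B
    for _ in range(length):
        K.append((C @ state).squeeze())
        state = A @ state
    return torch.stack(K)                      # (length,)

def causal_conv_fft(u, K):
    # y = u * K via FFT: O(L log L) instead of a step-by-step sequential scan.
    L = u.shape[-1]
    n = 2 * L                                  # zero-pad so the circular conv is linear/causal
    y = torch.fft.irfft(torch.fft.rfft(u, n) * torch.fft.rfft(K, n), n)
    return y[..., :L]

d_state, L = 16, 512
A = 0.01 * torch.randn(d_state, d_state)       # toy stand-in for a structured state matrix
B = torch.randn(d_state, 1)
C = torch.randn(1, d_state)
u = torch.randn(L)                             # input sequence
y = causal_conv_fft(u, ssm_kernel(A, B, C, L)) # same y as running the recurrence token by token
```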
So, these ideas about how to principally model sorry, how to model the recurrent updates of a mo of, of a sequence in a principled way, and also these key ideas in how you can compute it efficiently by turning it into a convolution and then scaling it up with the FFT.[00:11:53] Dan Fu: Along those same lines, so afterwards we started putting out some work on specialized kernels, so just [00:12:00] like we have flash attention for transformers, we also have works like flash fft conf, and if you look at these lines of work oftentimes when, whenever you see a new architecture, you see a new primitive one of the, one of the table stakes now is, do you have an efficient kernel so that you can actually get wall clock speed up?[00:12:14] Idea 3: Selection[00:12:14] Dan Fu: So by 2022, We are starting to have these models that had promising quality primitives, but and, and also promising wall clocks. So you could actually see regimes where they were better than transformers in meaningful ways. That being said, there were, there's still sometimes a quality gap, particularly for language modeling.[00:12:33] Dan Fu: And because languages, It's so core to what we do in sequence modeling these days the, the next, the next key idea that I'm going to talk about is this idea of selection mechanisms. And this is basically an idea of, so you have this recurrent state that you're keeping around that just summarizes everything that, that came before.[00:12:50] Dan Fu: And to get a good sequence model, one of the things that you really need to be able to do is have the model learn what's the best way to pick out pieces from that recurrent [00:13:00] state. So one of the, one of the major ideas here in a line of work called H3, Hungry Hungry Hippos, and also these hyena models were One way you can do this is by just adding some simple element wise gates.[00:13:13] Dan Fu: So versions of these ideas have been around for decades. If you squint at the LSTM paper you, you can probably find, find this gating mechanism. But turns out you can take those old ideas, add them into these new. state space models, and then you can see quality start to pick up. If you've heard of the Mamba model, this also takes the selection to the next level by actually making some changes in that fundamental recurrent state space.[00:13:40] Dan Fu: So, it's not only just this gating that happens around the SSM layer, but also you can actually make The ABCD matrices of your state space model, you can make them data dependent, which will allow you to even better select out different pieces from your hidden state depending on what you're seeing. I'll also point out if you look at the [00:14:00] bottom right of this figure, there's this little triangle with a GPU SRAM, GPU HBM, and this, this is just continuing that trend of when you have a new architecture you, you, you also release it with a kernel to, to, to show that it is hardware efficient, that it, that it can be hardware efficient on modern hardware.[00:14:17] Dan Fu: The, the, one of the next cool things that happened is once we had this understanding of these are the basic pieces, these are the basic principles behind some of the sequence models linear attention actually started to come back. 
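As a cartoon of the selection idea described above - gates and decays computed from the current token deciding what a fixed-size state keeps or overwrites - here is a toy recurrence. It is purely illustrative: it is not the actual H3, Hyena, or Mamba parameterization, and real implementations replace this Python loop with parallel scans or convolutions plus fused kernels.

```python
import torch
import torch.nn as nn

class ToySelectiveRecurrence(nn.Module):
    def __init__(self, d_model):
        super().__init__()
        self.to_decay = nn.Linear(d_model, d_model)  # data-dependent "A" (what to keep)
        self.to_gate = nn.Linear(d_model, d_model)   # element-wise write gate (what to add)

    def forward(self, x):                            # x: (batch, seq_len, d_model)
        state = torch.zeros_like(x[:, 0])
        outputs = []
        for t in range(x.shape[1]):
            decay = torch.sigmoid(self.to_decay(x[:, t]))
            gate = torch.sigmoid(self.to_gate(x[:, t]))
            # The state selectively forgets (decay) and selectively writes (gate),
            # both as functions of the current input token.
            state = decay * state + (1 - decay) * (gate * x[:, t])
            outputs.append(state)
        return torch.stack(outputs, dim=1)

layer = ToySelectiveRecurrence(64)
y = layer(torch.randn(2, 128, 64))                   # (2, 128, 64)
```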
So in earlier this year, there was a model called BASED the, from Simran Arora and, and some other folks, that combined a more principled version of linear attention that basically the, the, the, the two second summary is that it used a Taylor approximation of the softmax attention, combined that with a simple sliding window attention and was starting to able, starting to be able to expand the Pareto frontier of how much data can you recall from your sequence, versus how small is your recurrent state size.[00:14:58] Dan Fu: So those orange dots [00:15:00] are, at the top there, are just showing smaller sequences that can recall more memory.[00:15:07] Just Read Twice[00:15:07] Dan Fu: And the last major idea I think that has been influential in this line of work and is very relatively late breaking just a few months ago, is just the basic idea that when you have these models that are fundamentally more efficient in the sequence length, you maybe don't want to prompt them or use them in exactly the same way.[00:15:26] Dan Fu: So this was a really cool paper called Just Read Twice, also from Simran. That basically said, hey, all these efficient models can process tokens so much more efficiently than transformers that they can sometimes have unfair advantages compared to a simple transformer token. So, or sorry, a simple transformer model.[00:15:44] Dan Fu: So take, for example the standard, the standard use case of you have some long document, you're going to pass it in as input, and then you're going to ask some question about it. One problem you might imagine for a recurrent model where you have a fixed state size is, let's say that [00:16:00] you're. Article is very long, and you're trying to ask about some really niche thing.[00:16:04] Dan Fu: You can imagine it might be hard for the model to know ahead of time what information to put into the hidden state. But these, these, these models are so much more efficient that you can do something really stupid, like, you can just put the document write down the document, write down the question, write down the document again, and then write down the question again, and then this time, the second time that you go over that document, you know exactly what to look for.[00:16:25] Dan Fu: And the cool thing about this is, so this is, And this this results in better quality, especially on these recall intensive tasks. But the other interesting thing is it really takes advantage of the more efficient architectures that, that we're having here. So one of the other, I think, influential ideas in this line of work is if you change the fundamental compute capabilities of your model and the way that it scales, you can actually start to query it at test time differently.[00:16:51] Idea 4: Test Time Compute[00:16:51] Dan Fu: And this actually, of course, goes back to those slides on test time compute. So while everybody's looking at, say, test time compute for big transformer models, [00:17:00] I think potentially a really interesting research question is, how can you take those and how does it change with this new next generation of models?[00:17:09] Dan Fu: So the, I'll just briefly summarize what some of those key ideas were and then talk and then show you briefly kind of what the state of the art is today. 
So the four key ideas are: instead of just doing a simple linear attention approximation, take ideas that we know from other fields like signal processing, and do a more principled approach to your modeling of the sequence.

[00:17:32] Idea 2: Hardware & Kernel Support

[00:17:32] Dan Fu: Another key idea throughout all these lines of work is you really want hardware and kernel support from day one. So even if your model is theoretically more efficient, if somebody goes and runs it and it's two times slower, one of the things that we've learned is that if you're in that situation, it's just gonna be dead on arrival.
[00:17:49] Dan Fu: So you want to be designing your architectures with that in mind. One of the key machine learning ideas that has been important for the quality is just making sure that you encode different ways that you can select from your hidden state, and really focus on that as a key decider of quality. And finally, I think one of the emerging new things for this line of work, and something that's quite interesting, is: what are the right test-time paradigms for these models?
[00:18:15] Dan Fu: How do they change relative to what you might do for a standard transformer? I'll briefly end this section. So I've labeled this slide "where we are yesterday" because Eugene is going to talk about some new models that he released literally this morning. But as of yesterday, some of the really cool results out of these efficient alternative models were: AI21 trained this hybrid MoE called Jamba.
[00:18:40] Dan Fu: That is currently the state of the art for these non-transformer architectures. NVIDIA and MIT put out this new diffusion model called SANA recently, and one of their key observations is that you can take a standard diffusion transformer model, replace the layers with linear attention, and then that lets you scale to much larger images, much larger sequences, more efficiently.
[00:19:07] Dan Fu: And one thing that I don't think anybody would have called a few years ago is that one of those gated SSMs, gated state space models, ended up on the cover of Science because a great group of folks went and trained some DNA models. So that's Michael Poli and Eric Nguyen from Stanford and the Arc Institute.
[00:19:26] Dan Fu: So we're really at an exciting time in 2024 where these non-transformer, post-transformer architectures are showing promise across a wide range of modalities, of applications, and of tasks. And with that, I'll pass it on to Eugene, who can tell you a little bit about the latest and greatest with RWKV.

[00:19:49] RWKV vs SSMs

[00:19:49] Eugene Cheah: So, that's useful? Yeah. You're talking to here. Oh, I'm talking to here. Okay. So, yeah, two streams. So, I think one common question that we tend to get asked is: what's the difference between RWKV and state space? So I think one of the key things to really understand, the difference between the two groups, is that we are actually more like an open source, random-internet-meets-academia kind of situation.
[00:20:11] Eugene Cheah: Like, most of us never wrote any paper, but we basically looked at RNNs and linear attention when Attention Is All You Need came out, and then we decided, hey, there is a quadratic scaling problem, why don't we try fixing that instead?
So we ended up developing our own branch, but we share ideas back and forth.
[00:20:30] Eugene Cheah: And we do all this actively in Discord, GitHub, etc. This was so bad for a few years that basically the average group's h-index was so close to zero that EleutherAI actually came in and helped us write our first paper. Great, now our h-index is three, apparently. But the thing is, a lot of these experiments led to results, and essentially we took the same ideas from linear attention and we built on them.
[00:21:01] Eugene Cheah: So, to take a step back into, like, how does RWKV handle its own attention mechanic and achieve the same goals of, like, O(N) compute, respectively, and in focus of our overall goal to make AI accessible to everyone, regardless of language, nation, or compute - that's our goal. We actually train our models primarily on over a hundred languages, which is another topic altogether.
[00:21:23] Eugene Cheah: And our goal is to train on even 200 languages to cover all languages in the world. But at the same time, we work on this architecture to lower the compute cost so that people can run it on Raspberry Pis and on anything. So, how did RWKV break the dependency of LSTM token flow? Because I think to understand the architecture, it's probably easier to understand it from the RNN lens.
[00:21:46] Eugene Cheah: Because that's where we built on. State space kind of tried to start anew and took lessons from that, so there's a little bit of divergence there. And this is, AKA, our version of linear attention. So to take a step back: all foundation models, be it transformers or non-transformers, at a very high level, pump in the tokens - I mean, text - turn things into embeddings, and go through a lot of layers.
[00:22:05] Eugene Cheah: They generate a lot of states, whether that's the QKV cache or RNN states or RWKV states, and they output an embedding - they are not the same thing. And we just take more layers and more embeddings, and somehow that magically works.
[00:22:23] Eugene Cheah: So, if you remember your ancient RNN lessons, the general idea is that you have the embedding information flowing all the way up, and you take that information and you flow it back down, and then you process it as part of your LSTM layers.
[00:22:41] Eugene Cheah: So, this is how it generally works. Karpathy is quoted saying that RNNs are actually unreasonably effective. The problem is this is not scalable. To start doing work on the second token, you need to wait for the first token. And likewise for the third token and fourth token, yada yada.
[00:22:55] Eugene Cheah: That is CPU land, not GPU land. So you can have an H100 and you can't even use 1 percent of it. So that's kind of why RNNs didn't really take off in the direction that we wanted, like, billions of parameters, when it comes to training. So, what did RWKV version 0 do? Boom. We just did the dumbest, lamest thing.
[00:23:13] Eugene Cheah: Sorry - this is the bottleneck for RNNs. We did the dumb thing of removing that line. And it kind of worked. It trained. It sucked, but it kind of worked. Then no one cared because the loss was crap, but how do we improve that?
And that's essentially where we moved forward, because if you see this kind of flow, you can actually get your GPU saturated quickly, where it essentially cascades respectively.
[00:23:41] Eugene Cheah: So I'm just waiting for this to loop again. So it's like, once you get your first layer, your token, computed and finished, you start to cascade your compute all the way until you're at, hey, I'm using 100 percent of the GPU. So we worked on it, and we started going along the principle that as long as we keep this general architecture, where we can cascade and be highly efficient, nothing is sacred in our architecture.
[00:24:06] Eugene Cheah: And we have done some crazy ideas. In fact, if you ask me to explain some things in the paper, officially in the paper I'll say we had this idea and we wrote it this way. The reality is someone came with the code, we tested it, it worked, and then we rationalized later.

[00:24:24] RWKV Arch

[00:24:24] Eugene Cheah: So the general idea behind RWKV is that we have two major blocks that we do.
[00:24:30] Eugene Cheah: We call them time mix and channel mix. Time mix generally handles long-term memory states, where essentially we apply the matrix multiplication and SiLU activation functions into processing an input embedding and an output embedding. I'm oversimplifying it because this calculation changed every version, and we have, like, version 7 right now.
[00:24:50] Eugene Cheah: Channel mix is similar to BASED in the sense that it does shorter-term attention, where it just looks at the sister token, or the token before it, because there's a shift in the token shift matrix. I don't really want to go too much into the papers themselves, because we do have three papers on this.
[00:25:09] Eugene Cheah: Basically: RWKV: Reinventing RNNs for the Transformer Era; Eagle and Finch: RWKV with Matrix-Valued States - that's the updated version 5 and version 6; and GoldFinch is our hybrid model, respectively. We are already writing the paper for version 7, RWKV-7, named Goose - our architectures are named by bird.
[00:25:30] Eugene Cheah: And I'm going to cover as well QRWKV, and mama100k, and RWKV, and - where did that lead to? Great! Because we are all GPU poor, and to be clear, most of this research is done only on a handful of H100s, which one Google researcher told me was, like, his experiment budget for a single researcher.
[00:25:48] Eugene Cheah: So our entire organization has less compute than a single researcher in Google. So one of the things that we explored was: how do we convert transformer models instead? Because someone already paid that billion dollars, a million dollars, on training, so why don't we take advantage of those weights?
[00:26:05] Eugene Cheah: And I believe Together AI worked on LoLCATs for the Llama side of things, and we took some ideas from there as well, and we essentially did that for RWKV.

[00:26:15] QRWKV6 launch

[00:26:15] Eugene Cheah: And that led to QRWKV6, which we just dropped today, a 32B Instruct preview model, where we took the Qwen 32B Instruct model, froze the feedforward layer, removed the QKV attention layer, and replaced it with RWKV linear layers.
[00:26:32] Eugene Cheah: So to be clear, this means we do not have the RWKV channel mix layer, we only have the time mix layer.
But once we do that, we train the RWKV layer. What's important is that the feedforward layer needs to be frozen, so the new attention can be learned. And then we unfreeze the feedforward layer, and train all the layers together with a custom learning rate schedule, so that they can learn how to work together.
[00:26:54] Eugene Cheah: The end result, surprisingly - and, to be honest, to the frustration of the RWKV MoE team, which ended up releasing their model on the same day - was that, with just a few hours of training on two nodes, we managed to get it to be on par, kind of, with the original Qwen 32B model. So, in fact, the first run completely confused us, and I was telling Daniel Goldstein (Smerky), who kind of leads most of our research coordination: when you pitched me this idea, you told me at best you'll get the same level of performance.
[00:27:26] Eugene Cheah: You didn't tell me the ARC challenge score and the Winogrande score would shoot up. I don't know what's happening there. But it did. The MMLU score dropping, that was expected. Because if you think about it, when we were training all the layers, we were essentially, like, Frankensteining this thing, and we did brain damage to the feedforward network layer too with the new RWKV layers.
[00:27:47] Eugene Cheah: But, 76%, hey, somehow it's retained, and we can probably further train this. We didn't even spend more than 3 days training this, so there's a lot more that can be done, hence the preview. This brings up a big question, because we are already now in the process of converting the 70B. This is actually extremely compute-efficient as a way to test our attention mechanic.
[00:28:10] Eugene Cheah: It's like, it becomes a shortcut. We are already planning to do our version 7 and our hybrid architecture with it, because we don't need to train from scratch, and we get a really good model out of it. And the other thing that is uncomfortable to say, because we are doing this right now on the 70B, is that if this scales correctly to 128k context length - I'm not even talking about a million, just 128k - the majority of enterprise workload today is just on 70B at under 32k context length.
[00:28:41] Eugene Cheah: That means if this works and the benchmarks match it, it means we can replace the vast majority of current AI workloads, unless you want super long context. And then, sorry, can someone give us more GPUs? Because we do need the VRAM for super long context, sadly. So yeah, that's what we are working on, and essentially we are excited about this and want to just push it further.
[00:29:02] Eugene Cheah: And this conversion process, to be clear, I don't think is going to be exclusive to RWKV. It probably will work for Mamba as well, I don't see why not. And we will probably see more ideas, or more experiments, or more hybrids. Yeah, like, one of the weirdest things that I wanted to say outright, and I confirmed this with the Black Mamba team and the Jamba team - because we did the GoldFinch hybrid model - is that none of us understand why a hard hybrid of a state-based model, be it RWKV or state space, and a transformer performs better than the baseline of both.
[00:29:28] Eugene Cheah: It's like, when you train one, and then you replace, you expect the same results. That's our pitch. That's our claim. But somehow when we jam both together, it outperforms both.
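A rough sketch of what the two-stage conversion recipe described above could look like in PyTorch. This is a hedged illustration, not the actual QRWKV6 code: `model.layers`, `layer.attn`, `make_linear_attn`, the step counts, and the learning rates are all hypothetical stand-ins, and the real run used a custom learning-rate schedule rather than two fixed optimizers.

```python
import torch

def convert_then_finetune(model, make_linear_attn, data_iter, loss_fn,
                          stage1_steps=1000, stage2_steps=1000):
    # Swap each softmax-attention block for a linear-attention (e.g. RWKV-style
    # time-mix) replacement built from the old block's shapes.
    for layer in model.layers:                       # hypothetical module layout
        layer.attn = make_linear_attn(layer.attn)

    # Stage 1: freeze everything except the new attention blocks, so they are
    # forced to learn to stand in for the old attention against a fixed backbone.
    for name, p in model.named_parameters():
        p.requires_grad = ".attn." in name
    opt = torch.optim.AdamW((p for p in model.parameters() if p.requires_grad), lr=1e-4)
    for _ in range(stage1_steps):
        loss = loss_fn(model(next(data_iter)))
        loss.backward(); opt.step(); opt.zero_grad()

    # Stage 2: unfreeze the feed-forward (and everything else) and fine-tune the
    # whole stack together at a lower learning rate so old and new parts co-adapt.
    for p in model.parameters():
        p.requires_grad = True
    opt = torch.optim.AdamW(model.parameters(), lr=1e-5)
    for _ in range(stage2_steps):
        loss = loss_fn(model(next(data_iter)))
        loss.backward(); opt.step(); opt.zero_grad()
    return model
```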
And that's like one area of emulation that, like, we only have four experiments, plus four teams, that a lot more needs to be done.[00:29:51] Eugene Cheah: But, but these are things that excite me, essentially, because that is what it's potentially we can move ahead for. Which brings us to what comes next.[00:30:00] What's next[00:30:00] [00:30:00][00:30:00] Dan Fu: So, this part is kind of just some, where we'll talk a little bit about stuff that, that we're excited about. Maybe have some wild speculation on, on what, what's, what's coming next.[00:30:12] Dan Fu: And, of course this is also the part that will be more open to questions. So, a couple things that, that I'm excited about is continued hardware model co design for, for these models. So one of the things that we've put out recently is this library called ThunderKittens. It's a CUDA library.[00:30:29] Dan Fu: And one of the things that, that we found frustrating is every time that we built one of these new architectures, and I'm sure you had the exact same experience, we'd have to go and spend two months in CUDA land, like writing these, these new efficient things. And. If we decided to change one thing in PyTorch, like one line of PyTorch code is like a week of CUDA code at least.[00:30:47] Dan Fu: So one of our goals with, with a library like Thunderkitten, so we, we just broke down what are the key principles, what are the key hardware things what are the key, Compute pieces that you get from the hardware. So for example on [00:31:00] H100 everything is really revolves around a warp group matrix multiply operation.[00:31:06] Dan Fu: So you really want your operation to be able to split into relatively small matrix, matrix multiply operations. So like multiplying two 64 by 64 matrices, for example. And so if you know that ahead of time when you're designing your model, that probably gives you you know, some information about how you set the state sizes, how you set the update, how you set the update function.[00:31:27] Dan Fu: So with Thunderkittens we basically built a whole library just around this basic idea that all your basic compute primitives should not be a float, but it should be a matrix, and everything should just be matrix compute. And we've been using that to, to try to both re implement some existing architectures, and also start to design code.[00:31:44] Dan Fu: Some new ones that are really designed with this core with a tensor core primitive in mind. Another thing that that we're, that at least I'm excited about is we, over the last four or five years, we've really been looking at language models as the next thing. But if you've been paying [00:32:00] attention to Twitter there's been a bunch of new next generation models that are coming out.[00:32:04] Dan Fu: So there, there are. So, video generation models that can run real time, that are supported by your mouse and your keyboard, that I'm told if you play with them that, you know, that they only have a few seconds of memory. Can we take that model, can we give it a very long context length so that you could actually maybe generate an entire game state at a time?[00:32:25] Dan Fu: What does that look like for the model? You're certainly not going to do a giant quadratic attention computation to try to run that. Maybe, maybe use some of these new models, or some of these new video generation models that came out. So Sora came out I don't know, two days ago now. 
But with super long queue times and super long generation times.[00:32:43] Dan Fu: So that's probably a quadratic attention operation at the, at the bottom of it. What if we could remove that and get the same quality, but a lot faster generation time? Or some of the demos that we saw from Paige earlier today. You know, if I have a super long conversation with my [00:33:00] Gemini bot, what if I wanted to remember everything that it's seen in the last week?[00:33:06] Dan Fu: I mean, maybe you don't for personal reasons, but what if I did, you know? What does that mean for the architecture? And I think, you know, that's certainly something I'm pretty excited about. I'm sure you're excited about it too. So, I think we were supposed to have some hot takes, but I honestly don't remember what our hot takes were.[00:33:21] Hot Takes - does anyone really need long context?[00:33:21] Eugene Cheah: Yeah, including the next slide. Hot takes, yes, these are our[00:33:25] Dan Fu: hot takes.[00:33:25] Eugene Cheah: I think the big one on Twitter that we saw, that we shared, was the question is like, is RAG relevant? In the case of, like, the future of, like, state based models?[00:33:38] Dan Fu: Let's see, I haven't played too much with RAG. But when I have. I'll say I found it was a little bit challenging to do research on it because we had this experience over and over again, where you could have any, an embedding model of any quality, so you could have a really, really bad embedding model, or you could have a really, really [00:34:00] good one, By any measure of good.[00:34:03] Dan Fu: And for the final RAG application, it kind of didn't matter. That's what I'll say about RAG while I'm being recorded. I know it doesn't actually answer the question, but[00:34:13] Eugene Cheah: Yeah, so I think a lot of folks are like, extremely excited of the idea of RWKB or State Space potentially having infinite context.[00:34:21] Eugene Cheah: But I think the reality is that when we say infinite context, we just mean a different kind of infinite context, or you, or as it's previously covered, you need to test the model differently. So, think of it more along the lines of the human. Like, I don't remember what I ate for breakfast yesterday.[00:34:37] Eugene Cheah: Yeah, that's the statement that I'll say. And And we humans are not quadratic transformers. If we did, if let's say we increased our brain size for every second we live, we would have exploded by the time we are 5 years old or something like that. And, and I think, I think basically fundamentally for us, right, be it whether we, regardless of whether RWKB, statespace, XLSTM, [00:35:00] etc, our general idea is that instead of that expanding state, that increase in computational cost, what if we have a fixed state size?[00:35:08] Eugene Cheah: And Information theory detects that that fixed state size will have a limit. Just how big of a limit is a question, like, we, like, RWKB is running at 40 megabytes for, for its state. Its future version might run into 400 megabytes. That is like millions of tokens in, if you're talking about mathematically, the maximum possibility.[00:35:29] Eugene Cheah: It's just that I guess we were all more inefficient about it, so maybe we hit 100, 000. And that's kind of like the work we are doing, trying to like push it and maximize it. And that's where the models will start differing, because it will choose to forget things, it will choose to remember things. 
And that's why I think that there might be some element of RAG, but it may not be the same RAG.
[00:35:49] Eugene Cheah: It may be that the model learns things, and it's like, hmm, I can't remember that article, let me do a database search. Just like us humans, when we can't remember the article in the company, we do a search on Notion.
[00:36:00] Dan Fu: I think something that would be really interesting is if you could have facts that are - so right now, the one intuition about language models is that all those parameters are around just to store random facts about the world.
[00:36:14] Dan Fu: And this intuition comes from the observation that if you take a really small language model, it can do things like talk to you, or it kind of has, like, the style of conversation, it can learn that, but where it will usually fall over compared to a much larger one is it'll just be a lot less factual about things that it knows or that it can do.
[00:36:32] Dan Fu: But that points to all those weights that we're spending, all that SGD that we're spending to train these models, just being used to store facts. And we have things like databases that are pretty good at storing facts. So I think one thing that would be really interesting is if we could actually have some sort of outside data store that a language model can look at, that maybe has some sort of gradient descent in it - but that would be quite interesting.
[00:36:58] Dan Fu: And then maybe you could edit it, delete facts, you know, change who's president, so that it doesn't get lost.
[00:37:04] Vibhu: Can we open up Q&A and hot takes for the audience? I have a hot take Q&A. Do these scale? When 405B state space models and RAG exist, and no one does long context, who's throwing in 2 million token questions? Hot takes?
[00:37:24] Dan Fu: The "who's throwing in 2 million token questions", I think, is a really good question. So I actually was going to offer that as a hot take. I mean, my hot take was going to be that long context doesn't matter. I know I just gave a whole talk about it, but, you know, what's the point of doing research if you can't play both sides?
[00:37:40] Dan Fu: But I think for both of us, the reason that we first got into this was just from the first-principles question of: there's this quadratic thing, clearly intelligence doesn't need to be quadratic, what is going on, can we understand it better? You know, since then it's kind of turned into a race, which has been exciting to watch, like, how much context you can take in.
[00:38:03] Dan Fu: But I think it's right. Nobody is actually putting in a two million token prompt into these models. And, you know, if they are, maybe we can go design a better model to do that particular thing. Yeah, what do you think about that? So you've also been working on this. Do you think long context matters?
[00:38:19] Eugene Cheah: So I'm going to burn a bit. How many of you remember the news of Google Gemini supporting 3 million context, right? Raise your hand.
[00:38:28] Vibhu: Yeah, 2 million.
[00:38:29] Eugene Cheah: Oh, it's 2 million.
[00:38:31] Eugene Cheah: Yeah, how many of you actually tried that? See?
[00:38:34] Vibhu: I use it a lot. You? You work for MindsTV.
I use it a lot.
[00:38:41] Eugene Cheah: So, for some people that have used it - and I think this is where my opinion starts to differ - I think the big labs may have a bigger role in this. Because, like, even for RWKV, even when we train long context, the reason why I say VRAM is a problem is that because we need to backprop against the states, we actually need to maintain the state in between the tokens, by the token length.
[00:39:05] Eugene Cheah: So that means we need to actually roll out the whole 1 million context if we are actually training 1 million. Which is the same for transformers, actually, but it just means we don't magically reduce the VRAM consumption in the training-time space. So that is one of the VRAM bottlenecks, and I'm neither OpenAI nor Google, so donate GPUs if you have too much of them.
[00:39:27] Eugene Cheah: But then, putting it back to another paradigm, I think o1-style reasoning might actually be pushing that direction downwards. My partial hot take is that, let's say you have a super big model, and let's say you have a 70B model that may take double the tokens but gets the same result.
[00:39:51] Eugene Cheah: Strictly speaking, the 70B - and this is even for transformer or non-transformer - will take less resources than that 400B model, even if it did double the amount of thinking. And if that's the case, and we are still all trying to figure this out, maybe the direction for us is really getting the sub-200B models to be as fast and efficient as possible,
[00:40:11] Eugene Cheah: with a very efficient architecture that some folks happen to be working on, to just reason it out over larger and larger context.
[00:40:20] Question: Yeah. One thing I'm super interested in is models that can watch forever. Obviously you cannot train something on infinite context length. How are y'all thinking about that, where you run on a much longer context length than is possible to train on?
[00:40:38] Dan Fu: Yeah, it's a great question. So I think you guys probably had tweets along these lines, too. When we first started doing these things, because these are all recurrent models, in theory you could just run it forever. You could just run it forever. And at the very least it won't, like, error out on you or crash.
[00:40:57] Dan Fu: There's another question of whether it can actually use what it's seen in that infinite context. And I think there, one place where the research on architectures probably ran faster than another line of research is actually the benchmarks for long context. So you turn it on forever - you want to do everything or watch everything.
[00:41:16] Dan Fu: What is it that you actually wanted to do? Can we actually build some benchmarks for that? Then measure what's happening. And then ask the question: can the models do it? Is there something else that they need? Yeah, I think that if I were to turn back the clock to 2022, that's probably one of the things I would have done differently, which would have been to actually get some long context benchmarks out at the same time as we started pushing context length on all these models.
[00:41:41] Eugene Cheah: I will also say the use case. So, like, I think we both agree that there's no infinite memory and the model needs to be able to learn and decide.
Eugene Cheah: I think what we have observed, and I think this also fits the state space models, is that one of the key advantages of this alternate attention mechanism that is not based on token position is that the model doesn't suddenly go crazy when you go past the 8k training context length, or a million context length.

[00:42:03] Eugene Cheah: It's actually still stable. It's still able to run, it's still able to rationalize. It just starts forgetting things. But some of these things are still there in latent memory, some of these things are still somewhat there. That's the whole point of why reading twice works, things like that. And one of the biggest pushes in this direction is that both state space models and RWKV have separate papers by other researchers where they use these architectures for time series data.

[00:42:26] Eugene Cheah: Weather modeling. So you are not asking what the weather was five days ago, you're asking what the weather will be tomorrow, based on an effectively infinite length, for as long as this Earth and the computer keep running. And they found that it is better than existing transformer and other architectures at modeling this weather data,

[00:42:47] Eugene Cheah: controlled for the param size and so on. I'm quite sure there are people with larger models. So there are future applications here, if your question is just what's next and not what was 10 years ago.

[00:42:59] Dan Fu: Thanks so much for having us.

Get full access to Latent Space at www.latent.space/subscribe
The snowdrop wranglers of Altamont Gardens in Co Carlow, Joanna Walsh on being an amateur online and the meaning of LOLcats, and novelist Niamh Mulvey takes a waterside tour of the Kilkenny City that seeps into her fiction.
Si vous avez Internet, vous en avez forcément déjà vu, ou même envoyé. Sous forme de GIF ou de photos avec des légendes humoristiques en majuscules, les mèmes ont toujours beaucoup de succès sur la Toile. Le facepalm du capitaine Picard, lolCats, “Shut up and take my money”, “J'suis pas venu pour souffrir”, John Travolta perdu…, les mèmes animent les conversations numériques en détournant tout ce qui passe, le plus souvent des sujets d'actualités ou des oeuvres culturelles. Cet humour viral ultra référencé, très second voire troisième degré, ne pouvait exister que sur Internet. Mais ce terme n'est pas tout récent. Il apparaît en 1976, dans "Le Gène égoïste" du biologiste britannique, Richard Dawkins, un essai sur le rôle de l'imitation dans la transmission culturelle. Et que veut dire ce mot ? Comment les mèmes ont-ils du succès ? Écoutez la suite de cet épisode de "Maintenant Vous Savez - Culture". Un podcast Bababam Originals, écrit et réalisé par Jonathan Aupart. À écouter aussi : Pourquoi dit-on "Silence, moteur et action" ? Comment l'Eurovision est-il redevenu tendance ? Pourquoi les stars de Disney Channel ont-elles porté une bague de pureté ? Retrouvez tous les épisodes de "Maintenant vous savez - Culture". Suivez Bababam sur Instagram. Learn more about your ad choices. Visit megaphone.fm/adchoices
Depuis les premières chaînes d'emails jusqu'aux LOLcats en passant par l'avènement des plateformes de streaming vidéo, Internet a largement contribué à faire évoluer notre rapport à l'humour. Que ce soit la façon de le pratiquer ou de le consommer, notre rapport à la satire, comment nous acceptons de rire des autres ou de nous-même, les réseaux sociaux et les plateformes ont radicalement changé la place du rire dans nos vies. Comment Internet a-t-il changé l'humour ?Les sources qui ont été utilisées pour écrire cet épisode :The New York Times - Sophomorically IncorrectThe Guardian - What effect has the internet had on comedy?The Ringer - Dying LaughingThe BBC - The jokes that have made people laugh for thousands of yearsLes Echos - TikTok, la nouvelle horloge du mondeThe Economist - Netflix is driving stand-up comedy's second boom Hébergé par Acast. Visitez acast.com/privacy pour plus d'informations.
In this episode, Emily and Dave continue their chat with Darko Mesaros, Senior Developer Advocate at Amazon Web Services. In part two of this two-part series, Darko talks about his command line setup, some of the tools he uses, and tips for a productive developer environment. Part 1 is available as Episode 037. Darko on Twitter: twitter.com/darkosubotica Emily on Twitter: twitter.com/editingemily Dave on Twitter: twitter.com/thedavedev Darko on LinkedIn: https://www.linkedin.com/in/darko-mesaro%C5%A1-02b66622/ Darko's Website: https://www.rup12.net/ Darko on GitHub: https://github.com/darko-mesaros Darko on Getting Started with AWS CLI: https://www.youtube.com/watch?v=9gg0AyLhEHM Darko on Getting and restoring a Sun Ultra 1 Workstation: https://www.rup12.net/posts/2021/adventures-with-sun-ultra-1-workstation/ Lolcats – rainbow output in terminal: https://www.tecmint.com/lolcat-command-to-output-rainbow-of-colors-in-linux-terminal/ Simple PlainText Presentation Tool: https://tools.suckless.org/sent/ Arch Linux: https://archlinux.org/ Zshell: https://zsh.sourceforge.io/ VIM: https://en.wikipedia.org/wiki/Vim_(text_editor) Atari 2600: https://en.wikipedia.org/wiki/Atari_2600 Commodore C64c: https://en.wikipedia.org/wiki/Commodore_64#Commodore_64C Hayes Command Sets: https://en.wikipedia.org/wiki/Hayes_command_set Hayes Smart Modem: https://en.wikipedia.org/wiki/Hayes_Microcomputer_Products#The_Smartmodem AWS CLI: https://aws.amazon.com/cli/ AWS Cloud Development Kit: https://aws.amazon.com/cdk/ AWS Cloud Formation: https://aws.amazon.com/cloudformation/ AWS SDK - Multiple Programming Languages: https://aws.amazon.com/getting-started/tools-sdks/ Subscribe: Amazon Music: https://music.amazon.com/podcasts/f8bf7630-2521-4b40-be90-c46a9222c159/aws-developers-podcast Apple Podcasts: https://podcasts.apple.com/us/podcast/aws-developers-podcast/id1574162669 Google Podcasts: https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5zb3VuZGNsb3VkLmNvbS91c2Vycy9zb3VuZGNsb3VkOnVzZXJzOjk5NDM2MzU0OS9zb3VuZHMucnNz Spotify: https://open.spotify.com/show/7rQjgnBvuyr18K03tnEHBI TuneIn: https://tunein.com/podcasts/Technology-Podcasts/AWS-Developers-Podcast-p1461814/ RSS Feed: https://feeds.soundcloud.com/users/soundcloud:users:994363549/sounds.rss
In this episode, Emily and Dave chat with Darko Mesaros, Senior Developer Advocate at Amazon Web Services. In part one of this two-part series, Darko talks about growing up in Serbia, his journey to the cloud, his love of vintage computers, and the power of coding within limited hardware constraints. Darko on Twitter: twitter.com/darkosubotica Emily on Twitter: twitter.com/editingemily Dave on Twitter: twitter.com/thedavedev Darko on LinkedIn: https://www.linkedin.com/in/darko-mesaro%C5%A1-02b66622/ Darko's Website: https://www.rup12.net/ Darko on GitHub: https://github.com/darko-mesaros Darko on Getting Started with AWS CLI: https://www.youtube.com/watch?v=9gg0AyLhEHM Darko on Getting and restoring a Sun Ultra 1 Workstation: https://www.rup12.net/posts/2021/adventures-with-sun-ultra-1-workstation/ Lolcats – rainbow output in terminal: https://www.tecmint.com/lolcat-command-to-output-rainbow-of-colors-in-linux-terminal/ Simple PlainText Presentation Tool: https://tools.suckless.org/sent/ Arch Linux: https://archlinux.org/ Zshell: https://zsh.sourceforge.io/ VIM: https://en.wikipedia.org/wiki/Vim_(text_editor) Atari 2600: https://en.wikipedia.org/wiki/Atari_2600 Commodore C64c: https://en.wikipedia.org/wiki/Commodore_64#Commodore_64C Hayes Command Sets: https://en.wikipedia.org/wiki/Hayes_command_set Hayes Smart Modem: https://en.wikipedia.org/wiki/Hayes_Microcomputer_Products#The_Smartmodem AWS CLI: https://aws.amazon.com/cli/ AWS Cloud Development Kit: https://aws.amazon.com/cdk/ AWS Cloud Formation: https://aws.amazon.com/cloudformation/ AWS SDK - Multiple Programming Languages: https://aws.amazon.com/getting-started/tools-sdks/ Subscribe: Amazon Music: https://music.amazon.com/podcasts/f8bf7630-2521-4b40-be90-c46a9222c159/aws-developers-podcast Apple Podcasts: https://podcasts.apple.com/us/podcast/aws-developers-podcast/id1574162669 Google Podcasts: https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5zb3VuZGNsb3VkLmNvbS91c2Vycy9zb3VuZGNsb3VkOnVzZXJzOjk5NDM2MzU0OS9zb3VuZHMucnNz Spotify: https://open.spotify.com/show/7rQjgnBvuyr18K03tnEHBI TuneIn: https://tunein.com/podcasts/Technology-Podcasts/AWS-Developers-Podcast-p1461814/ RSS Feed: https://feeds.soundcloud.com/users/soundcloud:users:994363549/sounds.rss
Jenny joins us for the fifth episode of the podcast. We talk about living in a constant state of change, her work at the intersection of art and design, her strategies and approaches to working with people, as well as struggles with trying to get a baby with IVF. Jenny also shares with us her work with Toolbox Toolbox, a meta website that curates the most interesting toolboxes, methodologies, and ways of working. Sources Studio Theolin Interview with Jenny Toolbox Toolbox Community on Design Education (Slack)
This week, Nate (@inthesedeserts) comes back on the Posts Pod to tell us about some internet history, and this little forum you might have heard of called "Something Awful". Following the death of Something Awful's founder, "Lowtax", Nate tells us how the forums shaped much of contemporary internet culture, ranging from terms like 'doxxing' and the ubiquity of trolls, to Lolcats, gifs, and uh, a lot of racism and Nazi insignia. But, it did also give us Dril, so who's to say if it was good or bad??? -------- If you want to hear more Nate, listen to his podcast, "A Hell Of A Way To Die" here: https://t.co/u0OcRMkTd1?amp=1 -------- Ten Thousand Posts is a show about how everything is posting. It is hosted by Hussein (@HKesvani), Phoebe (@PRHRoy) and produced by Devon (@Devon_onEarth). For weekly bonus episodes, subscribe to us on Patreon at : www.patreon.com/10kpostspodcast.
Family. Cars. Coronas. Cheeseburgers. Wrestling? Maybe. Geoff & Paul try to talk about Saturday night's Dynamite, and look ahead to a return to Wednesdays and "Normalcy." Follow us on Twitter
Charles, Alex, and Dan explore some of the finest music ever written about LOLcats, manga, and gaming. Fortune Kit on Patreon: https://www.patreon.com/fortunekit/posts
Did you notice when it suddenly became okay not to say goodbye at the end of a text message conversation? Have you responded to work emails solely using ?? Is ~ this ~ your favorite punctuation mark for conveying exactly just how much you just don’t care about something? Welcome, Internet Person—you’re using a different kind of English from the previous generation. But these conversational norms weren’t set on high, and how they evolved over the past decades of Internet usage tells us a lot about how language has always been created: collaboratively. Or, as Internet linguist Gretchen McCulloch puts it, “Language is humanity’s most spectacular open source project.” She joins us to analyze the language we use online and off—how it got this way, where it’s going, and why it’s a good thing that our words are changing so quickly. This episode originally aired in 2019.Go beyond the episode:Gretchen McCulloch’s Because InternetRead her Resident Linguist column at Wired, formerly at The Toast (you may remember reading about the grammar of doge, perhaps? Much wow) or catch up on the Lingthusiasm PodcastPhone calls have been supplanted by text messages—will voice texting be next? Or are the people using voice texting pointing out a fundamental lack, in language or keyboard support?Inevitably, Godwin’s Law states, “as an online discussion continues, the probability of a reference or comparison to Hitler or Nazis approaches 1.” Read creator Mike Godwin’s explanation for why he created his counter-meme, and why, in the case of actual fascists, calling someone a Nazi is well within the norms of discoursePeruse the LOLCat Bible or the Creepypasta Wiki, deemed worthy of archive by the Library of Congress (file under folklore)If all these memes confuse you, you can always find your footing at Know Your MemeTune in every week to catch interviews with the liveliest voices from literature, the arts, sciences, history, and public affairs; reports on cutting-edge works in progress; long-form narratives; and compelling excerpts from new books. Hosted by Stephanie Bastek. Follow us on Twitter @TheAmScho or on Facebook.Subscribe: iTunes •
Did you notice when it suddenly became okay not to say goodbye at the end of a text message conversation? Have you responded to work emails solely using ?? Is ~ this ~ your favorite punctuation mark for conveying exactly just how much you just don’t care about something? Welcome, Internet Person—you’re using a different kind of English from the previous generation. But these conversational norms weren’t set on high, and how they evolved over the past decades of Internet usage tells us a lot about how language has always been created: collaboratively. Or, as Internet linguist Gretchen McCulloch puts it, “Language is humanity’s most spectacular open source project.” She joins us to analyze the language we use online and off—how it got this way, where it’s going, and why it’s a good thing that our words are changing so quickly. This episode originally aired in 2019.Go beyond the episode:Gretchen McCulloch’s Because InternetRead her Resident Linguist column at Wired, formerly at The Toast (you may remember reading about the grammar of doge, perhaps? Much wow) or catch up on the Lingthusiasm PodcastPhone calls have been supplanted by text messages—will voice texting be next? Or are the people using voice texting pointing out a fundamental lack, in language or keyboard support?Inevitably, Godwin’s Law states, “as an online discussion continues, the probability of a reference or comparison to Hitler or Nazis approaches 1.” Read creator Mike Godwin’s explanation for why he created his counter-meme, and why, in the case of actual fascists, calling someone a Nazi is well within the norms of discoursePeruse the LOLCat Bible or the Creepypasta Wiki, deemed worthy of archive by the Library of Congress (file under folklore)If all these memes confuse you, you can always find your footing at Know Your MemeTune in every week to catch interviews with the liveliest voices from literature, the arts, sciences, history, and public affairs; reports on cutting-edge works in progress; long-form narratives; and compelling excerpts from new books. Hosted by Stephanie Bastek. Follow us on Twitter @TheAmScho or on Facebook.Subscribe: iTunes •
In today's episode, Mike learns how to pronounce MEME and LOLCATS; Jean teaches Mike how to pronounce MEME and LOLCATS; and we have a Fun Fact about an answer that appeared earlier in the week, ECOTONE. Download and enjoy!
On prête aux animaux des qualités humaines, des vies humainesCes animaux sont drôles et ils nous décomplexent, ils mettent du vidéo gag dans notre vie, on se prend d’amitié pour eux et ils nous font diablement rigoler.Autre particularité du texte des Lolcats : il est souvent mal écrit. (par et pour des anglophones) et cela a donné naissance à un langage spécial Lolcats : le « lolspeak ».D’ailleurs le tout premier Lolcat à s’être popularisé portait comme légende le texte « [I can has cheezburger]Mais d’ou ça vient les LOLCATS ?Figure toi que tout ça a commencé en Angleterre, en 1870 quasiment en même temps que la photographie . Harry Pointer pionnier de la photographie faisait poser ses chats dans son studio de Brighton dans des pauses cocasses. Puis de fil en aiguille ils sont arrivés dans le calendrier des posts de nos grands parents pour pour finir sur les réseaux sociaux. La place des animaux dans nos vies aujourd’huiXavier Niel et les live de réunions animales.Tiktok la plateforme des animaux par excellenceQuoi de mieux pour ceignes qui adorent passer du temps avec leurs animaux et leur apprendre des trucs fous que de regarder ce que font les autres avec leurs animaux.Et c’est très très chronophage.Les animaux c’est un peu comme les bébés de tout le monde.##pumbathelionSur TikTOk justement je suis tombé sur le compte de Pumba The Lion un Golden retrouver Français aux 40k abonnés. Et sur la plateforme son maîtresse donne à coeur joie .Mario challenge, 1 an après challenge, smiley challenge tout marche aussi bien avec un chien qu’avec un humain !Avènement du mème et du LOLanimals Nyan Cat est un mème Internet, consistant en un gif animé en 8-bits d’un chat volant gris avec le corps en Pop-Tart rose, avec un arc-en-ciel derrière lui ## Harley the cockatooRécemment, un client me demandait comment être certifié sur Instagram; Je répondais que c’était devenu quasiment mission impossible. Et bien Harley Le Cacatoes lui il l’est !On le retrouve également sur Youtube dans des vidéos allant jusqu’à 6,8Millions de vue. Je me rappelle cette vidéo qui est devenue virale il y a 2 ou 3 ans, on le voit perché sur son coffre à jouets et Hurlant dans un petit cube en plastique.Ce que Harley Adore aussi c’est détruire des trucs et comme ce n’est pas courant en ligne il est très populaire.## Scotty Hubs et son chienAlors lui il m’a vraiment fait beaucoup rire ces derniers temps.Scotty hubbart c’est avant tout un créateur de contenu génial, un homme simple qui a un petit chien prénommé Gracie. Je ne saurais même pas te dire la race . Et ce chien est très très docile.Il se laisse positionner et prendre en photo et en vidéo sous toutes les coutures.Côté Grille Instagram on le retrouve plutôt sage et élégant, mais alors côté reels là c’est l’éclate. Star du rap, dinosaure, cambrioleur, lion , bébé , le petit animal sait jouer tous les rôles à merveille et son naturel comique à lui est du à cette étonnante stoicité. Il ne bouge pas et n’a aucune expression du visage. Alors autant te prévenir aussi.. l’une de ses grandes spécialités c’est le twerk !## Funnycat117Le Super Daily est le podcast quotidien sur les réseaux sociaux. Il est fabriqué avec une pluie d'amour par les équipes de Supernatifs.Nous sommes une agence social media basée à Lyon : https://supernatifs.com/. Nous aidons les entreprises à créer des relations durables et rentables avec leurs audiences. Nous inventons, produisons et diffusons des contenus qui engagent vos collaborateurs, vos prospects et vos consommateurs.
Matti and English James discuss masks, carnival and more...Intro/Outro ATA Records - Planet Nine
WE HAVE RETURNED. That's right, we're back and we decided to tackle one of the most important and disgusting websites of all time, 4chan! In Part One, we talk about the humble beginnings of 4chan, back when memes were harmless. We look into LOLcats, the infamous Habbo Hotel raid and how to triforce. Protip: delete System32. References: Angela Nagel - Kill All Normies. Cole Stryker – Epic Win for Anonymous Fernando Alfonso’s articles on Daily Dot.
An hour and 8 minutes of Uninstall Media, English James, Matti & Lumberjack Greg....Intro - Stomp Your Feet, The Lewis ExpressOutro - Kaye Okay, ATA Records
Are you feeling flummoxed, bamboozled, and stupefied by communication and language online? And how exactly did LOLcats irreversibly change the course of the English language? Gretchen McCulloch is a Canadian linguist who studies the language of the internet, for the people of the internet. She is currently the Resident Linguist at WIRED. Goodreads: https://www.goodreads.com/book/show/51828723-because-internet (https://www.goodreads.com/book/show/51828723-because-internet) Audio production by Graham Stephenson Episode music: "Sneaky Snitch" Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/
Are you feeling flummoxed, bamboozled, and stupefied by communication and language online? And how exactly did LOLcats irreversibly change the course of the English language? Gretchen McCulloch is a Canadian linguist who studies the language of the internet, for the people of the internet. She is currently the Resident Linguist at WIRED. Goodreads: https://www.goodreads.com/book/show/51828723-because-internet Audio production by Graham Stephenson Episode music: "Sneaky Snitch" Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/
Live The Roll - A shorter audio with Matti, English, and Uninstall Media
Sold Out of Common Sense - With the usual gang
Mostly True: The Pre-Insanity Madness with Originalsimulant, English, Matti, TheNovaScotion, and Uninstall Media
Dr. Kate Miltner is a technology and society researcher examining the ways that technology, identity, and structural power intersect. Coming from a background in tech and advertising, Dr. Miltner conducts ethnographic research that digs into things we’re so close to, we may not even take notice. She’s taken a closer look at memes as cultural artifacts, in particular those cute but spelling-optional Lolcat memes, and is now examining coding boot camps and the “learn to code” movement and whether the hype around learning to code is really the solution many think it is. LINKS MENTIONED IN THIS EPISODE: Status Update: Celebrity, Publicity, and Branding in the Social Media Age by Alice E. Marwick Cheez Town Crier, the hub for Lolcats fans This Woman Getting a Master's Degree In LolCats Will Be Richer Than You by Adrian Chen, Gawker (with the Princess Bride-esque final line: “Meme culture is serious business these days. Anyone who tries to convince you otherwise wants to sell you something.” The World Made Meme: Public Conversations and Participatory Media, by Ryan Milner “One part politics, one part technology, one part history”: Racial representation in the Unicode 7.0 emoji set” - Kate’s article in New Media and Society Mar Hicks’s episode on Stayin’ Alive in Tech: “We Belong” April Wensel’s episode on Stayin’ Alive in Tech: “Better People” Nathan Ensmenger's book: The Computer Boys Take Over: Computers, Programmers, and the Politics of Technical Expertise (History of Computing) MUSICAL INSPIRATION FOR THIS EPISODE ON SPOTIFY: "School's Out" by Alice Cooper ABOUT THIS PODCAST Stayin' Alive in Tech is an oral history of Silicon Valley and technology. Melinda Byerley, the host, is a 20-year veteran of Silicon Valley and the founder of Timeshare CMO, a digital marketing intelligence firm, based in San Francisco. We really appreciate your reviews, shares on social media, and your recommendations for future guests. And check out our Spotify playlist for all the songs we refer to on our show.
Escape From The Prison Planet! We talk Prisons, "Mad" Mike Hughes, The lost art of Daredevilry...With: English James, Originalsimulant, TheNovaScotian, and Uninstall Media.Music: Handsome Boy Modeling School - "The Truth" (feat. Roisin and J. Live)
Alternative Conspiracy Idiots: Talkin’ Brexit, Art, and Tiny Money. With Uninstall Media, Matti, English, and Originalsimulant
Music 'N StuffRoyal Cxnts, etc..Outro tune: DJ Shadow “Building Steam With A Grain Of Salt”
New Year New War?Iranian Shenanigans, Impeachment (huh?) and Misc. Musings from EllisDee33, Matti, Original Simulant and Uninstall Media
Massaging the VariablesZeitgeist, Cults, The Bible, The Amish, ASMRcons, Virtual World, Alien Meth.With Originalsimulant, TheNovaScotian, Tasteoffreedom, and UninstallMedia
SANE Show: Eat More. Lose More. Smile More. with Jonathan Bailor
Silly Strong LOLCats #SANE with Tony Gentilcore & Jonathan Bailor
Épisode 296 : Dans cet épisode nous revenons sur les Mèmes, ce format grandissant ! Ils sont partout ! Avec leur design bien pixelisé, bien fait maison. Avec leur tonalité sarcastique et un brun désabusé. Les mèmes sont partout ! Ils ont envahi les réseaux sociaux et pour certaines marques ils constituent aujourd’hui une alternative aux influenceurs. Oui oui aux influenceurs ! Ce matin on décrypte avec vous la tendance du meme marketing et on va parler des meme-fluenceurs. Ces comptes de mèmes sur-puissants que vous intégrerez peut-être dans vos prochaines campagnes de marque. Définir 'mème' Inventé par le biologiste britannique Richard Dawkins en 1976 pour décrire une "chose imitée", la définition moderne d'un mème est très large. Une étude réalisée en 2017 par l'Université de Budapest caractérise un meme en tant qu'images, vidéo ou texte faisant référence à la culture populaire et conçu pour être réutilisé. Cela inclut les créations des consommateurs et des marques. EP169 : https://lesuperdaily.com/episode/plongee-au-coeur-de-la-culture-meme/ Les comptes de mème explosent ! Les comptes de memes compilent du texte et des images ou de courtes vidéos qui se moquent d'un symbole culturel ou d'une idée sociale. Ils sont drôles et rassemblent des millions de followers. — Le mème, un phénomène culturel, un language natif Il n’est donc pas surprenant que les mèmes soient devenus un phénomène culturel. Grumpy Cat, Good Guy Greg (GGG) et LOLcats ont tous une chose en commun: ce sont tous des personnages mèmes qui font maintenant partie de la culture pop Internet. Ils ont été utilisés de différentes manières pour diffuser diverses idées. Les médias sociaux ont inventé un nouveau language Il y a d’abord eu les infographies, puis les vidéos face caméra et maintenant les mèmes. — Le mème marketing Les marques proposent toujours de nouveaux moyens innovants de capter leur public. C’est comme ça, c’est comme dans la nature. Le Guépard est un prédateur qui court très vite et bien les marques c’est un peu pareil… De nombreuses marques de divers secteurs profitent de cette tendance en ligne pour attirer davantage l’attention sur les médias sociaux. Le marketing avec memes est une tendance discrète qui devient rapidement populaire. Les adolescents adorent les mèmes et les marques adorent les adolescents. 2 options : faire des mèmes ou s’acheter de la même-fluence ——— Un moyen pour parler à la génération Z et au millenials En tant que marque, si vous êtes en mesure de puiser dans ces mèmes vraiment d'actualité de manière authentique, je pense que c'est un outil vraiment puissant pour montrer à la génération Z et aux millenials ce que votre marque représente, et aussi que vous êtes une marque qui comprend eux et leur style de vie Les comptes de memes sont un moyen pour les marques de toucher un public puissant qui ne consomme pas les médias de la même manière que leurs parents et leurs grands-parents. La génération Z, qui a entre 7 et 22 ans, est la plus grande cohorte de consommateurs au monde, avec un pouvoir d' achat de plus de 143 milliards de dollars aux États-Unis seulement. — Après les influenceurs, les mème-fluenceurs ? Dans le passé, les entreprises qui essayaient de toucher les jeunes en ligne se tournaient vers les influenceurs d'Instagram. 
Selon une étude de Markets and Markets, alors que le marché des influenceurs est en croissance permanente, passant d'environ 5,5 milliards de dollars en 2019 à une hypothèse de 22 milliards de dollars en 2024, les influenceurs apparaissent aussi comme inauthentiques, en particulier à cause de la sur exploitation de posts sponsorisés. Des marques plus traditionnelles comme JetBlue Airways et Budweiser ont également acheté du contenu sponsorisé sur des comptes mèmes. Plus authentique Les mèmes, souvent sarcastiques et beaucoup moins raffinées que les publications d’influence, offrent une voix alternative précieuse pour s’adresser à une cible qui n’aime pas la publicité. Une tonalité très libre et sarcastique Alors que les marques qui travaillent avec des influenceurs gardent souvent le contrôle du contenu (jusqu’à écrire la légende au-dessous d’une photo), pour réussir avec les pages du meme, elles ont besoin de pouvoir se laisser aller et se moquer d’eux-mêmes Plus d’engagement Parler le même langage aide également les marques à obtenir un plus grand engagement de leur public cible. Le taux d’engagement sur les comptes mèmes flirt avec les 30% sur Facebook et Instagram quand le contenu d'influence ou de marque génère des taux d'environ 1% à 15%. Plus viral Alors que les influenceurs offrent un endossement plus personnalisé d'un produit, les annonces de mème sont généralement plus virales et plus susceptibles de se diffuser en dehors du compte. Un influenceur pourrait inciter un adepte à acheter un certain rouge à lèvres ou à partager ce message avec un ami qui aime le maquillage, a déclaré Alexander de Socialyte. Mais contrairement à un drôle de mème, "ses chances de partager cela avec 20 de ses amis sont plutôt minces". Attention au plagiat Cependant, le meme marketing comporte aussi des risques. certains comptes ont été accusés de republier les blagues de comédiens sans permission, et Instagram a réagi en interdisant certains comptes mèmes , y compris ceux qui comptent des millions d'adeptes. Un exercice d'écoute sociale Travailler avec des mèmes peut être moins une tactique marketing, mais plutôt un ensemble de compétences impliquant l'observation de la tendance des principaux consommateurs et l'alignement de ces sujets sur le message d'une marque. —— Une tendance française, le meme en franglais « My daronne every time I look at my phonetel more than 30 secondes » « When c’est la fin du mois » (Sandwich aux glaçons) http://www.topito.com/top-meilleur-meme-francais —— Exemple : Overheard über Uber s'est associé à Jesse Margolis, qui dirige les comptes populaires «Overheard LA» et « Overheard New York » avec des commentaires amusants sur la vie dans les villes, pour créer «Overheard Uber», qui publie des blagues sur les interactions humoristiques et parfois maladroites d'Uber. Exemple Gucci En 2017 déjà. Le hashtag TFWGucci, qui est l'abréviation de «That Feeling When Gucci», était une campagne conçue pour promouvoir leur nouvelle gamme de montres. Dans une série de mèmes associés, Gucci a utilisé des mèmes allant de l’absurde à l’hilare. Ils ont créé tous les memes en collaboration avec des artistes du monde entier. Exemple Netflix . . . Le Super Daily est le podcast quotidien sur les réseaux sociaux. Il est fabriqué avec une pluie d'amour par les équipes de Supernatifs. Nous sommes une agence social media basée à Lyon. Nous aidons les entreprises à créer des relations durables et rentables avec leurs audiences. 
Nous inventons, produisons et diffusons des contenus qui engagent vos collaborateurs, vos prospects et vos consommateurs.
LOLcats and starter packs. Evil Kermit and Pepe the Frog. One does not simply record a podcast episode on memes, but they can't stop all of us, let alone 30-50 feral hogs. In this mini-episode, Pippa and Karina talk memes past, present, and future to expand your brain. You love to see it.
AA Morris – The Ultimate Truth >Faithless – Crazy English Summer > Jim Morrison – The Futureof Music > Thievery Corporation – 33 > Eugenics and otherEvils, Chapter 2, The First Obstacles – G.K. Chesterton > NickelCreek – Destination > The Servile State, Sections 3 and 4 –Hillaire Belloc > Brandi Carlile – Raise Hell > Dayz of Noah– Reverse Culture Jam > Clint Richardson – Attack of the Nerds> Dr Hans and Jan Irvin on Miley Cyrus' Teeth > Chris Kendall –Hoax Busters-The Inalienable Constitutional Right to be a Human Torso
Episode 22 - released, by pure coincidence, on International Cat Day - features Elizabeth, Ben and resident Pratcat Asimov for a look at one of Pratchett's oddest books: 1989's humorous examination of all things feline, The Unadulterated Cat. Cats these days just aren't a patch on the ones you used to get: untameable aloof outdoor beasts who are more likely to trap you in a neighbours' house with a broken leg (long story) than to sit nicely on your lap and purr. The Campaign for Real Cats has had enough of modern, "fizzy keg" cats, with their bows and bells and posing. This is the Campaign's guide to identifying, understanding and appreciating honest-to-Bastet real cats. Pratchett teams up with cartoonist and illustrator Gray Jolliffe to give us a tongue-firm-in-furry-cheek guide to the world of cats in one of his rare non-fiction works. It's the kind of thing you buy the cat lover in your life for Christmas, full of chapters detailing the types of cats, their names, the games they play and "advice" on how to deal with them. Are you a cat lover? Did this ring true for you? We'd love to hear from you - and to hear your cat stories, and any real cats you've identified in fiction! Use the hashtag #Pratchat22 on social media to join the conversation. In September we return to the Discworld - and its most real of cats, Greebo - as we head to the opera for Maskerade, the 1994 book which brings the witches to Ankh-Morpork! Our guest will be teacher and opera singer Myf Coghill. We'd love your questions - send them to us via social media using the hashtag #Pratchat23. And as mentioned in this episode, we'll soon be releasing our first bonus episode just for subscribers! All bonus episodes will be available to anyone who subscribes, so if you're interested, jump over to our Support Us page for details. Show Notes and Errata: Asimov lives with Liz and is our resident "Pratcat". He was previously audible in the background of episode 10, We're Gonna Need a Bigger Broomstick, and episode 18, Sundog Gazillionaire. You can follow his adventures on Instagram at @asimovthecat.Best-selling humorous cat books include How to Tell if Your Cat is Planning to Kill You, several volumes dedicated to Internet sensations Grumpy Cat and the LOLcats of I Can Has Cheezburger?, and other books that draw on similar themes to The Unadulterated Cat, including Cats Are the Worst and Sorry I Barfed on Your Bed.Eric Ernest Jolliffe - the wrong Jolliffe - was an Australian cartoonist and illustrator who led an adventurous life, including work all over Australia and serving as a camouflage officer with the RAAF in World War II. He is best remembered for his magazine and newspaper strips Saltbush Bill and Sandy Blight, and his own magazine, Jolliffe's Outback.Gray Jolliffe's anthropomorphic penis character, Wicked Willie, was the star of both a series of comic books and also a straight-to-video series of animated shorts directed by Australian Bob Godfrey, best remembered for his work on the children's animated series Roobarb and Henry's Cat.Real Men Don't Eat Quiche is a satire of masculinity, originally subtitled "A Guidebook to All That Is Traditionally Masculine". It was written in 1982 by American humorist and screenwriter Bruce Feirstein and stayed on the New York Times bestseller list for over a year. Localised adaptations were subsequently written for the UK and Australia, the latter by Australian playwright and author Alex Buzo. Nathan W. 
Pyle's strange planet series of comics about aliens trying to understand life on Earth is available at his web site, nathanwpyle.art, and on his Instagram at @nathanwpyle. Pyle experienced some controversy in April 2019 over an old tweet, but his cartoons remain a delightful commentary on the absurdities of our world. Both the cat name cartoon and the vibrating cat cartoon are still on Instagram.Operant conditioning is a form of learning where a behaviour becomes more or less frequent because of posi...
Tophats & Tongue Twisters (part 1): With James, Matti, James, Russ, Mahatmacoat, and AdamMusic: Edan - "Beauty"
POISON, feat. Craigbot, Blessercize, Missy Elliot, EL-P “4$ Vic”, Aesop Rock “Mystery Fish”, Slim Goodbody, Ronnie Bosh “You Know”, Bill Nye, Moby, Neil DeGrasse Tyson, The Prodigy “Poison (Remaster)”, D-Styles, J-Magik “Your Sound”, Sebastian Horsley, DJ Krush “Code 4109 , Red One - Alive ’N Kickin(Origin Unknown RMX), Terrence Mckizzle, Kool Keith AKA Papa Large, MF Doom & The RZA “Books of War”, Bill Burr & Mia, Halloween, Deltron 3030 & Dan The Automator “Virus”, Westworld, Aesop Rock “TUFF”, Bill Hicks, Alex Jones (Goblins), Wutang C.R.E.A.M, Acosta + Epstein, Pish Posh “Corrupt Cops” (Evol Intent Rmx), Technical Itch - Elevation (VIP)
Artemis' Artifice - UMG Fire, New Moon Missions, Star Forts, Situationist Shenanigans, King Rollo etc. With: Mahatmacoat, Matti, Originalsimulant, and Uninstall Media(Song) The Magnificent Room - "Digging Up The Dirt"Situationist International DocumentaryKing Mob"Up Against The Wall Motherfucker!"
Amish Butter In The “Amazon” Jungle, Ideology vs. Individual, Mass Meaning, Language of Thought, Q War, MAGA / MAGUS, Magical Fetishism, Acceptable Abnormality, Weaponised Thumbnails. With Originalsimulant, James, Matti and Kelitobrigante.Sounds: Baraka - "I'll Be There", Boogie Beat Records 1995 & The Winstons - "Amen Brother"
Deracinated Elongated Skulls - QCult, Nanotech PPE, Neo-Nihiliberalism, Robots Fixing Robots, Family Stuff, EU etc... with Originalsimulant, Matti, James, and James
"You Can't Snort Coke With Bitcoin..."Matti's Tarot Reading DEBUNKS Lefty Climate Change predictionsOriginalsimulant reveals his obsession with L. Fletcher ProutyUninstall Media laughs too much...And much more! With Kelitobrigante, English James and Adam Miller.L. Fletcher Prouty - All-purpose Expert, Or Crackpot?
SONIC ACTS FESTIVAL 2019 – HEREAFTER Gregory Sholette – Can an Anti-Capitalist Avant-Garde Art Survive in a World of Lolcats, Doomsday Preppers and Xenophobic Frog Memes? Do We Have a Choice? 23 February – De Brakke Grond, Amsterdam, The Netherlands With an introduction by Ash Sarkar. As artistic activism becomes a signature attribute of contemporary high culture, a wave of museum boycotts, protests, occupations and labour unrest marks our current decade. Meanwhile, much of the post-2008, post-Occupy art generation abhors the multi-billion-euro capitalist art market, even as the very term art is radically shifting, twisting, inverting, if not undergoing an outright self-expulsion from itself as it moves from its familiar white cube dwelling places to occupy the public sphere at an ever-accelerating tempo. But as art joins the everyday social world, its status as a privileged and critical realm that is set apart from the ubiquitous materialistic pursuits of a consumer society is likewise receding from view, and in truth, most high cultural practitioners have yet to really face this new, ‘bare’ art world and what it represents. Nevertheless, as art sheds its centuries-old ideological privilege of autonomy, it is gaining a front-row seat in the contentious struggle to rethink society, as well as the expressive, imaginative and artistic value is generated, for whom, why and to what ends. Still, the question lingers, how will art, especially activist and anti-capitalist art, remain critically radical once fully submerged in a world of lolcats, doomsday preppers and xenophobic frog memes? Gregory Sholette is an artist, activist and writer. He was a founding member of several collectives such as Political Art Documentation and Distribution, REPOhistory and Gulf Labor. In his artistic work and seven books – including Art as Social Action (with Chloë Bass, 2018), Delirium and Resistance (2017), Dark Matter (2011), It’s The Political Economy, Stupid (with Oliver Ressler, 2012) – Sholette reflects upon decades of activist art that, for its ephemerality, politics and market resistance, might otherwise remain invisible. Sholette holds a PhD in History and Memory Studies from the University of Amsterdam (2017). He teaches Studio Art and co-directs the Social Practice Queens MFA program at Queens College, CUNY. He is an associate of the Art, Design and the Public Domain program of Harvard University’s Graduate School of Design.
The Seattle Public Library - Author Readings and Library Events
Climate Of ChangeOriginalsimulant, Uninstall Media, Matti, English, and LumberJackGreg discuss the Extinction Rebellion astro-turf movement, among other things.
Topics Technology, SnakeOil, Prison, IndustryCan We Vote Against Voting? "Rights", Gubberment, Guidestones, Homelessness, Prison Industrial Complex.Matti, Uninstall James, English James, Kelitobrigante, Originalsimulant, and Lumberjackgreg. Jacques Ellul - "The Technological Bluff"
Topics Constitution, Rights, Magic"Rights" - The Magic Scroll
Deep CutsUninstall Media, Matti, OriginalsimulantThe 23 Enigma
Topics Comedy, Immigration, ImmigrationOpinionated epic chat from the ACI gang. Gaia, Matti, Originalsimulant, Kelitobrigante, and Uninstall Media.Comedy, “Waking Up”, Immigration & Geopolitricks, Housing Markets, Stone Circles, etc.
Topics Media, PsyOps, Mind ControlAdam, Matti, and Uninstall Media carry on into the wee hours. Flicker Rates & Brain Entrainment, Deep Fake, Grandfathers of Conspiracy.
Topics VirtualReality, Technology, Society, Jacques Ellul Kelitobrigante, Matti, Uninstall Media and Adam discuss: Technological Society, Benevolent Progress, Virtual Empathy, among other things. Part 1 of 2Jacques Ellul playlist How Cooking Meth in Virtual Reality Can Bring People Closer TogetherCan Virtual Reality Bring World Peace?Using Virtual Reality To Help Kids With AutismHospital Uses VR To Make Shots Easier For Kids7 Ways To Overcome VR Motion Sickness
Topics History, Conspiracy, WWII, Media, PsyOps, TrumpHITLER IS DEAD! - Trump Induced Toxoplasmosis. With Deletetheelite, Gaia, Matti, Originalsimulant, and Uninstall Media.MUSIC: "Broke Ass Waltz" by the Yellow-Bellied Sapsusckers https://youtu.be/etsFX114C2k
Topics History, Media, Ancient Rome, PsyOps, OwlsMatti, Gaia, English, Uninstall Media and Originalsimulant convene to discuss all the hottest issues of the day. Will Trump build his wall? Is the "Roman Empire" a hoax? Psychedelia, memetics, and owls are cool.
Topics Media, PsyOps, LOLCatsPodchaos with Gaia, Matti, and Uninstall Media.Seth McFarlane 9/11 story, Germany, Antarctica, Trans-speculation, Synchronicity, Kraftwerk, Cats.
Topics Science, Moon, Earth, Space, Apollo Program, EliteLunacy! Gaia, Greg and Uninstall Media peruse Elite Sci/Tech articles for Elite Readers and Elite Listeners.Moon hoax / Punk-rock Sauropods / A.I. Cat Shelters
What is the single most effective - and least expensive marketing channel - available to all businesses? This week on The Inbound Success Podcast, BirdEye Head of Marketing Sam Mallikarjunan shares why your customers are your best marketing channel and how BirdEye is developing a platform designed to help businesses leverage trust - via customer evangelism - at scale. From his year's spent as "the face of HubSpot" to teaching marketing at Harvard to taking over marketing for BirdEye, a martech SaaS startup, Sam has gathered fascinating insights into what it takes to build a high growth business and the role that marketing plays in that process. Listen to the podcast to hear Sam's thoughts on leveraging customers for your marketing and to learn more about his plans for marketing BirdEye. Transcript Kathleen Booth (host): Welcome back to The Inbound Success podcast. I'm your host, Kathleen Booth and today, my guest is Sam Mallikarjunan, who is the Head of Marketing for BirdEye. Welcome, Sam. Sam Mallikarjunan (guest): Thanks for having me. Sam and I recording this episode Kathleen: I'm excited to speak with you. You told me that this is going to be your first podcast since joining BirdEye, so I'm really excited to dig in and learn a little bit more about it and share that with the audience, but also talk about some of the things you've learned throughout your career because you have a really interesting background with many years at HubSpot, and you're doing some teaching now. I have a lot of questions that I want to ask you! Sam: I'm looking forward to it. It's been a weird ride, so we can go in whatever direction you want. Kathleen: Great. Well, why don't we start by having you tell the audience a little bit about yourself, and your background, and how you wound up where you are today. Sam: Sure. So my name is Sam Mallikarjunan. If you can't pronounce it, you can Google anything even close to it and you'll generally find me. For seven years, I worked at HubSpot, which if your listeners don't know, is a software company based out of Boston. For the last three or so years, I was teaching the advanced digital marketing course at Harvard University. And then for all of last year, as we discussed before we started recording, I lived in a van, both teaching at Harvard, and then also I was HubSpot's full time speaker. So I spoke in 49 US states and about eight other countries last year on a range of topics: innovation, and innovation marketing management, et cetera, marketing strategy. (to learn more about Sam's adventures traveling the world and living in a van, check out the "Sam from the Van" Facebook page) So now, however, what people thought would never happen is happening. They used to joke that we could change my name to "Sam from HubSpot," so that people didn't have to say Mallikarjunan. But no, I have left. I have left and taken over as Head of Marketing at birdeye.com, which is based in Dallas. So I'm moving from Tampa to Dallas, and I'm really, really, really excited because it feels ... First of all, we share some board members with HubSpot, so it's kind of similar in that way. But second of all, it feels like HubSpot did back in the early days. So I'm very, very excited. Kathleen: Oh that's great. So true confession, both times I've heard you say, "I lived in a van," in my head what comes up is Chris Farley. And I want to say, "Was it down by the river?" Sam: Many times it was down by a river. We posted on Instagram, everybody got their joke, ha ha ha, very funny. 
Kathleen: I'm sure it's not the first time you've heard someone say that. I'm not super original in that. Sam: In fact, if you bust out, "Do you like green eggs and ham," based on my name, between those two jokes, you'll have hit about 50% of the recurring jokes that I've heard in my life. Kathleen: Oh, I didn't even think of that. Sam: Yeah. Kathleen: Alright. Well, fascinating kind of journey to where you are. Can you share what was it that prompted you to leave HubSpot after so many years? Because you were there for a long time, and I mean, when I hear what you've been doing - you were Head of Experimental Marketing, you were the full time speaker - I mean some of those gigs sound like dream jobs. What got you to move on? Sam: So here's the weird thing about dream jobs, is that once you do it long enough, it becomes work again. And then also, I had an enormous privilege being at HubSpot and getting to work with and under some incredible people. HubSpot was the same size when I joined it as BirdEye is now, but I always had Brian Halligan and Dharmesh Shah, the two co-founders. I had Mike Volpe, the former CMO, Kipp Bodnar, the current CMO ... I always had them to fall back on, right? It was never ... There was always a limit to how much damage I could actually do to the long term success of the company. HubSpot's huge now. I think it crossed the five billion dollar market cap rate, 2300 employees and something like seven or eight global offices. It's absolutely huge and to be honest, I could have spent the rest of my life at HubSpot and been absolutely happy. But what I wanted to see is if I could do it if I didn't have Volpe, and Kipp, and everybody else to fall back on. Kathleen: Yeah. Sam: So now I'm the Head of Marketing for a company that's the size that HubSpot was when I joined it, and if I fail I have nobody to blame but myself. HubSpot's always had this role where if you have good trust with your manager you should be able to tell your manager when you think it's time to move on. So Kipp, and Dharmesh, and everybody always said that to me, "If you eventually want to leave the company, let us know and we'll help you find something awesome." And so I did, about six months ago I told them that, "Hey, I really want to try and do this on my own." So I had a freelancer make me a list of 144 different start-ups in the U.S., post-Series-B, pre-IPO, either MarTech SaaS, blockchain or AI. I shortlisted those into three categories of pretty cool, really cool, and insanely cool. And then I got introductions, and feedback, and everything else from my bosses, from the people on the Executive Team. From those 144, I chose BirdEye. Kathleen: That's amazing actually. I mean, it says a lot, first of all, for HubSpot's culture that they've created an environment where you can go and feel safe saying basically, "I'm mentally getting ready to leave." That's a scary proposition for anybody, but I think it's wonderful that that environment exists there. Sam: It's good both ways, right? Because it's a good retention mechanism. So I have turned down two formal CMO offers in the last several years, and many, many more opportunities and it's because they've made me really snobby. I would look at it and I would be like, "I bet Brian, and Dharmesh, and Kipp, between us we could find something even better." So it was never a surprise to them, it always gave them an opportunity to move me internally. 
Almost every time you see a job in the last five years that I've moved internally at HubSpot on my LinkedIn profile it's because Sam was thinking about leaving, and we figured out a way to make it better for me to stay. And, obviously, it's good for the employee, right? Probably the most interesting opportunities in my professional career was a couple months ago. I'm literally sitting at breakfast with my boss, texting back and forth with my new boss negotiating comp. Most people hide the fact that they're looking for a new job from their boss? My boss helped me negotiate comp. Which is good, because I had never heard of things like single option triggers and stuff like that. Kathleen: Yeah. That's amazing and it's also really smart on the part of the employer because, especially if you're talking about key personnel. I mean, really in the technology space any personnel it seems like is key, but particularly someone like yourself who's been there so long. You're the kind of person who's hard to replace, and so having that ramp or that runway to know that you're ready for that departure as an employer is really great as well. Such an interesting process that you went through. What an incredible opportunity to get introductions - warm introductions - to all those companies. Now you have me dying to learn more about BirdEye because I want to know what it is about this company that made it the one, right? I feel like you were on The Bachelor and there are all these companies handing you roses and you chose this one. Sam: Yeah. So first off, you're right. They functionally got six month's notice, so it was a little sad actually, by the time I left they no longer needed me because they had a replacement. So I didn't have that ... you know. I don't know, it was both good and bad. Kathleen: Yeah. Sam: Yeah, so BirdEye. There was a couple of things I was looking for, right? One was I wanted to work for a company where solving the problem was meaningful. What I loved about HubSpot in the early days was inbound marketing felt right. You know? The way the world was was that you made money by pissing people off. I used to train ... Those annoying people in the mall who try and sell you cell phones? I used to train them, so that was my background. But it felt wrong. I was never happy about it, the work that I was doing. Inbound marketing felt right. You should be able to build a big, profitable business off of creating an experience that people love on the internet and in all of your market. What I love about BirdEye was that it felt right too, which is - the website we're still working on, clarifying our value propositions - but the way that I think about it is if you're a world class dentist, or a lawyer, or autobody repair shop, or whatever, you should not also have to be a world class internet marketing professional. You should be able to just be good at your job and empower your customers with a framework that's going to help you grow your business. Obviously the opposite is true, which is that if you ask your local mechanic how they feel about the local big dealerships, they're going to say the work is subpar and overpriced. Same thing if you asked most dentists, or lawyers, or whatever the small business is. So I loved that bit of it, where every day I come into work, my team comes into work, the better we do our jobs, the closer we are towards shifting the world of business the way that it should be. 
I also just like it too because I love things that are unfair advantages that really irritate large entrenched companies. So for a hundred years functionally, the business growth has been about, "Can my Sales and Marketing team beat up your Sales and Marketing team? Can we just sell better than you?" In this day and age, I think as we've seen with companies like United, right - great Sales and Marketing team at United - but if you piss off the customers there's no defense from that anymore. Kathleen: Oh yeah. Sam: Right? So it's not this marginal battle anymore. Companies like BirdEye came and flipped the table over and it says that, "My community of empowered community fans can just obliterate your Sales and Marketing team." That's what I loved about it. So it was the mission, it was the brand. I mean, it's a MarTech SaaS company with executives that I love and it's a very comfortable fit. But for me, I wanted to do what Brian and Dharmesh and Mike did for inbound marketing, which is create that movement. I wanted to do that for what I honestly think ... We haven't finished defining it yet, but this has got to be the next wave in growth, right? The only thing that matters about you is how empowered customers are that like you. Because you don't want the only empowered customers to be the ones that don't like you. Kathleen: You know, it really resonated because you talk about doctors, and dentists, and lawyers, and people like that. I owned an agency for 11 years and I had many of them as clients, and the best campaigns we did - in fact we won HubSpot's first ever Client Campaign of the Year award back in 2015 for work we did for a LASIK eye surgeon. The reason it was so successful is, it was kind of like what you're talking about mixed with a little dash of influencer marketing. We found a guy that happened to have a really strong Facebook presence, and out of nothing but dumb luck figured out that he wore glasses, would love to have LASIK. We paired him up with a doctor, they agreed to do the surgery at no cost if he would just blog and talk about his experience, good, bad, or otherwise, there was no requirement that it could only be positive. He had a great experience; he went and vlogged, and blogged, and just spoke to his audience about it and that campaign far and away crushed anything else we've ever done. Especially with things like healthcare and attorneys, you really trust your friends and those people in your network so much more than you trust an e-book, because we did plenty of those too. But it wasn't the e-book that killed it for us, it was this guy telling his story and personally endorsing the doctor and the procedure that was the lightening in a bottle. So I can totally see how that's so important. Sam: Yeah, now the question is, can you do that 100,000 times, right? Kathleen: Right? Sam: Especially for local marketing, there's not always local influencers who you go to to determine what dentist you go to. For dentists it's funny, it's the old joke, it's a cliché. It's, "What do you call the person who graduated last in their class in medical school? You call them doctor." Kathleen: Right. Sam: So the only way that I, as a patient, or whatever, can tell the difference between Dr. A and Dr. B is what their patients say about them online. And yeah, we trust them way more than what people say about themselves. I think the other thing that's changed is the passionate relationship we have with certain brands. It feels new. I don't have data on this, but it feels super new. 
I love using Uber as an example, because Uber in 2011 was banned by the state of Massachusetts for 23 hours. It's the fastest I've ever seen government move. And it's not because Uber had a bunch of lobbyists then like they do now, it's because ... We literally got a phone call from the mayor of Boston's office at the HubSpot office asking us to stop slamming them on Twitter. It was a decision by the governor's office, not the mayor's office, and we just didn't know that. Uber got hundreds of people to show up to the Cambridge City Council meeting, which is used to a dozen or so people showing up. When I see that and I see things like what happened with United, or I see things both good and bad, communities of customers rising to your defense, or communities of customers tearing you down, there's something there. Kathleen: Oh, it's incredibly powerful. I was going to say Uber is a study in and of itself of both dynamics, like how it can go well and how it can go not so well. You said a word that I think is so important, which is trust. You know, one of my colleagues at IMPACT is Marcus Sheridan. I've seen him speak numerous times and he has this one thing he always says that I find so powerful, which is that, "Every company is in the same business, whether you're Uber selling rides, or you're McDonald's selling hamburgers, or whether you're HubSpot selling software." When you boil it down, they're really selling trust, because if somebody can't trust you they're not going to buy from you. Just like my campaign, even though we had an influencer, it's really no different than if I go on Facebook and ask my friends. It's about who do I trust, who's opinion do I trust? So it sounds like what you're building is something that helps you leverage trust at scale. Sam: I like that, "Leverage trust at scale." Kathleen: There you go, you can put that on the website. Sam: When I teach at Harvard there's a metaphor I like to use, which is about how all economists, of which business is a subset, of which marketing is a subset, have physics envy, right? In physics, I can drop this pen a hundred times out of a hundred, and it's going to fall and hit the ground. I can stand in Harvard Square handing out a hundred $1 bills and at least 20 people will make the irrational decision, they'll call me a "chowda head" and keep walking, right? We work in a profession where it's not this simple, "If this, then that, zero in one binary value," marketing is a social science, economics and all of business is a social science and the definition of social science is, "A science about which we are very uncertain." Kathleen: Yeah. Sam: The most important variable, by far, is exactly what you said, which is that trust. That's what separates us from all of the other professional disciplines, is our dentists, or lawyers, right? Whatever, they know there's something objectively true that they can work against. We have to work in an environment where that's never the case, things are always changing. The one constant is it doesn't matter how compelling the argument is, or how cheap it is, or how cool it is, whatever, if there's no trust that's the deal breaker. Kathleen: Yeah. Sam: Fell out of your hand while I'm standing in the square. Kathleen: Yeah. So, I would love it if you could talk a little bit about how you see this playing out for companies, whether these are dental practices, law firms, any other type of company in terms of trying to leverage trust at scale. 
What does that really look like, and how does that manifest in terms of a company's marketing? And are you using that at all with BirdEye, or planning to? Sam: Yeah, well first of all, you should always drink your own champagne, eat your own dog food, whatever metaphor you want to use, so we definitely are ... That's really important to us because people want to buy from a company that sells to people like them. So we're not done with this yet, but you'll notice soon that if you come to the BirdEye website from one of our dental ad campaigns it's all going to show you reviews and stories of dentists rather than lawyers, right? That would be very different. I will say one of the cool things, again, about how this is like HubSpot was in the early days: you remember how easy blogging was back in 2011? 2010? Kathleen: Yeah. Sam: I mean, it was great. If you had a blog, you were light years ahead of the curve, right? If you were blogging frequently, you would win your market, right? I had a toenail fungus remover company, I had knee scooters, I had mortgage companies - if you just did the work, you'd be fine and absolutely crush it. Now that's really hard; growing your traffic, your acquisition engine, off of blogging is really, really hard because it's a very crowded space. The good thing about reputation marketing, reviews, and leveraging your customer base like that is that almost universally everyone is really bad at it. Large companies like T-Mobile send me an NPS survey, right, which is one way to begin the conversation about leaving a review, and whenever a company does it I always give them a zero, because I know I'm not going to mess with their data that badly. I want to see if there's follow-up. If I send you a zero ... If I send you a 10, right, yes, I'm absolutely going to recommend you, you should send me a link. Say, "Hey, here's an easy way to do that." Kathleen: Right. Sam: If I send you a zero, I would expect that a company would have that mentality of following up with me to find out why. Almost no one does. T-Mobile, Verizon... you know, as much as I hate to admit it, even at HubSpot it was still a very basic implementation of whether somebody who gave you a bad NPS score would get a follow-up. You know, if you do it at all, you're going to be in good shape. Asking your customers for reviews is still innovative, as weird as that sounds. We don't feel that way because we see everybody moving in this direction. You and I see lots of people talking about this sort of thing, but the vast majority of businesses and the vast majority of markets don't even ask their customers for reviews. If their customers say something negative, they don't follow up, and if their customers say something positive they don't use that in any way. They don't put it in their email. They don't put it on their website, they don't put it in their ads, so the- Kathleen: Why do you think that is? Sam: Well, you know, the bell curve of adoption, right? So you've always got the people who are the innovators and the early adopters who are going to try everything just because it's new, and they're worried about being second place, and you know, we just haven't gotten there with some of the technologies and behaviors that are new. Stuff like BirdEye is new. How important reviews are may not feel new, but it's relatively new to the world of business. It hasn't been around for 30 years. The underlying concepts have, but the websites - Yelp hasn't been around for 30 years, sort of thing. 
The other thing is that, you know, if you've read 'The Innovator's Dilemma' by Clayton Christensen - it's a really great book - I have a different concept of the innovator's dilemma, which is that it's really, really easy to be innovative when things are going well, because you have lots of breathing room. It's also really, really easy to be innovative when things are going really poorly. So like, when I first applied to HubSpot I didn't apply. I built hiremeHubSpot.com and ran ads targeting people who worked at HubSpot to register for the free webinar on why you should hire me. It's because I was a college dropout with no previous experience, so you know, when you have no chance of success it's easy to be innovative. It's the middle area, where things are going okay but if you mess up they could go off the rails really quickly, where it's hard to be innovative, and that's where most of the world of small business is right now. You know, if you're a dentist or a lawyer, an auto repair shop, whatever, you're running on pretty thin margins. You're having to fight pretty hard to get your customers. You're already behind the curve, because you don't know the highly technical things, like local SEO and PPC. You generally don't have a sophisticated understanding of the marketing engine behind that, and you don't have the luxury to be innovative, so that's, again, one of the things I loved about BirdEye - we try to take some of the hard work out of that and make it a little more attainable. Kathleen: So focusing on reviews for a second, because that seems like it's a big part of this: you want to get a customer to review you, and I've worked with different companies and talked to them about this, and sometimes it seems like they don't do it because they're just afraid to ask. Other times, they don't know how to ask, so can you talk about what is the right way to ask for a review? How do you navigate that process in a way that doesn't seem too pushy and doesn't seem like you're placing too much of a burden on the customer? Sam: I mean, so NPS, the net promoter score, is sort of an easy cheat, because it asks, on a scale of zero to 10, how likely are you to refer us to a friend or colleague. If they give you a zero through six you should follow up immediately, right? Sevens and eights are passives, and nines and 10s are promoters. You would really only tell the people who give you a nine or a 10, "Hey, that's awesome. I'm glad you were happy. Can you share your story with the world?" Then, everybody who's less than that, you would put them into a service remediation process, right? Just send a text message to the business owner, or whatever you want to do to follow up with this customer, because they're unhappy. I definitely think you're right that people are somewhat afraid of the answer, because it is, especially for small businesses, highly personal. This is ... I put my blood, my sweat, and my money, and my risk and everything into this business that I built, and then to actively solicit anybody to say anything negative about it is hard. It's a hard thing to do emotionally. There's a humility in that, which is that you've got to know that you're never going to be perfect, and as we say here, it's not about being the best. It's about being the best at getting better. We have a tool that tells you all of the things that your customers hate in a market. You can look at it just for your company or you can look at it for your entire industry. 
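For readers who want to see the mechanics Sam describes, here is a minimal sketch of that NPS triage in Python, assuming a simple, hypothetical survey-response shape: detractors (0 through 6) get an immediate service follow-up, passives (7 and 8) are only logged, and promoters (9 and 10) get a review request. The field names and routing actions are illustrative assumptions, not BirdEye's actual product or API.

```python
# Hypothetical sketch of the NPS triage described above; the response shape
# and routing actions are illustrative assumptions, not a real BirdEye API.
from dataclasses import dataclass


@dataclass
class SurveyResponse:
    customer_email: str
    score: int  # answer to "How likely are you to refer us?", 0-10


def triage(response: SurveyResponse) -> str:
    """Return the follow-up action for a single NPS response."""
    if response.score <= 6:
        # Detractor: route to service remediation, e.g. text the owner.
        return f"alert_owner:{response.customer_email}"
    if response.score <= 8:
        # Passive: no outreach, keep for trend reporting only.
        return "log_only"
    # Promoter: thank them and send an easy link to leave a public review.
    return f"send_review_link:{response.customer_email}"


if __name__ == "__main__":
    for r in (SurveyResponse("a@example.com", 2),
              SurveyResponse("b@example.com", 8),
              SurveyResponse("c@example.com", 10)):
        print(r.customer_email, "->", triage(r))
```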
Kathleen: Oh, that's really interesting. Like if you're a dentist, is it the anonymized aggregate feedback from all the dental- Sam: Yeah. The cool thing about our industry is that most of the data set we're working with is public, so I call it our blue ocean finder for the business strategy nerds who are listening to the podcast, because you can literally plot what's important to my customers and which competitors are bad at that, and adjust your strategy accordingly. Also, on the more micro level you can ask: what's important to my customers that I'm bad at? What's important to my customers that I'm good at? Then you make the decision. Do I fix the things that I'm bad at, or do I stop doing those things entirely, or what, right? It's the exact same process you'd follow going through a blue ocean strategy canvas. Yeah, it's about listening, not just hearing, right? It's actually listening and making changes based on that. Kathleen: And what industries do you currently have that for? Sam: So the really good ones for us so far, the people who have been willing to take a risk, are people like dentists and lawyers and auto body repair shops. We're working on our own buyer persona exercise right now, so you'll forgive me; I don't have a nice "Marketing Mary" to show you like we had at HubSpot. The key variables for us are businesses whose customers don't want to be their customers, like divorce lawyers, collision repair shops, etc. People for whom differentiation is very difficult, like dentists. And then people for whom the consequences of the decision are extremely severe, right? Kathleen: Surgeons. Sam: Surgeons. Well, wedding venues, that sort of thing, right? You mess that up, you can't get that back, right? Kathleen: Yeah. Sam: So those are generally the three psychographic categories of businesses that we're looking at right now. Kathleen: Interesting. So for example, if I were to go on and I wanted to get that industry-wide view of what customers are and are not happy with, could I get that right now for marketing agencies, for example, or is there a certain pick list I need to choose from? Sam: I don't know if we have marketing agencies ... We should. We have advertising and media as one of our categories in our database, but we're a startup, so you know exactly what that means- Kathleen: Oh yeah. Sam: -which is that odds are all of the data exists. It's just a question of whether anybody has asked that question before. That'd be a fun follow-up to do for the podcast. Kathleen: I mean, I have a feeling I know the answer, but you know, you can't assume. It would be interesting to look. I'd love to play around with that at some point, so if you ever want a beta tester for agencies, you know who to call. Sam: Absolutely. Yeah. Kathleen: I think that kind of competitive intelligence is really interesting, and one of the things you said really struck me, which is that it's not just about understanding how to change your messaging and your marketing. You could truly use that to make very fundamental decisions about your product offering, your service offering, what you want to do as a company, you know? Do we cut certain services because we're just never going to be great at them and they're a huge pain point? There's some really interesting potential in terms of how that data can be used. Sam: We haven't even begun to tap into this, but you're right. Otherwise it's lipstick on a pig. 
If you're changing your sales and marketing but not changing who you really are, in 2018 you're going to be found out, and you're going to be found out because your customers are going to sell you out hard. Kathleen: Yeah. Sam: They're going to hop on Google, Facebook, and everything else like that and tell people that your marketing does not match up with the customer experience. I will say, man, you're getting me excited here, because it is super fascinating. You know, when you think about the world of disruptive innovation - and forgive me for the Harvard jargon terms here, right - you think about things like the extendable core, which is: what's the thing that a business should lean on to survive the disruption of its market? The classic example here is, like, hotels, right? Have you ever stayed in an Airbnb? Kathleen: Oh yeah. Sam: Yeah, have you ever attended a conference in an Airbnb? Kathleen: No. Sam: Yeah, right? So there are some things that Airbnb simply can't do without adopting the same cost structure. Turns out they're really important. So business travelers - there's a reason Airbnb's never really nailed business travel. It's because of the standardization. You can look at what is important to the customers who are leaving me and what is important to the customers who are staying around. You can look at some of those mappings, and you know, if I'm the Marriott hotel group right now, I'm not actually worried about spending too much time solving for the destination vacation traveler, right? I'm really focused on events. I'm focused on business travel. I landed here in Palo Alto at 12:30 in the morning - didn't matter. I walked into the Sheraton. I know exactly what the lobby looks like even though I've never been to this hotel. That's what I value. I don't have to think about it. Kathleen: Yeah, yeah. Sam: So yeah, you're absolutely right. There's a lot of interesting data that can come from the fact that we now have the ability to listen to our customers at scale and make decisions. Kathleen: I'm always struck by how many companies have that information - like have it in their hands, not just have access to it, but have been given it - and don't do anything with it. Sam: Most of them. Kathleen: Yeah, it's kind of shocking actually. Sam: So this is going to sound super weird I guess, but I don't work at HubSpot anymore, so I'm allowed to say nice things about them. HubSpot was so humble, by the way, that we never felt comfortable bragging about ourselves. You know, in DC they have the beltway syndrome, right? Everybody in DC thinks everybody else in the world sees things the way people in DC do. At HubSpot we had "sprocket syndrome," which is we thought everybody in the world was just as sophisticated in their concepts of economics and growth and business as we were, which isn't true, right? You know, things are changing so fast. What was the Deloitte research? The average life span of a knowledge stock - a competitive piece of knowledge that you own - is down to something like five years. Whatever it is you own that you're basing your business on, much less your career on, you can expect it to be a differentiator for something like five years, as opposed to the days when we literally used to name our families after what we did. You were a Smith, you were a Wainwright, you made wagons, whatever. Now it's like you can't even name your company after what you do, right? You know, it's hard to even have a job title named after what you do, because everything changes so fast. 
The mechanisms for perpetual learning and keeping up with all of that - I just don't think most professionals, and definitely most businesses, have figured them out. Kathleen: Yeah, you know, it's so funny that you just said that about the pace of change, because as I was telling you before we started, I just came back from a two week vacation, and I'm going to fly my geek flag now. On vacation, I decided to read 'Becoming Steve Jobs'. There's probably a lot I could have read, but for some reason I was really into that. And you know, I lived through the whole evolution of Apple. I'm old enough that I was working pre-Apple, and yet I had forgotten how quickly all of that happened - how we went from not even having personal computers to "wow, we have a laptop," to "oh my gosh, now we have a little music player and iTunes," and then "we have phones that are full screen and tablets." I mean, reading it was really both exciting and also kind of frightening. I have an 11 year old, and all I could think as I read this book was, "Wow, I just have no idea what the future holds for him." It's true. When I think about any business - you know, the company that I used to own, we were EOS practitioners, the entrepreneurial operating system, and they talk about having your long term plan. I don't know how you could ever have more than a ... You could have a three year plan, but it's going to change dramatically, right? I don't even know how you could have a five year plan anymore. It used to be, when I graduated from business school, it was all about the rolling five year plan. I just think that would be a piece of fiction today if I created it. Sam: Yeah. There's somebody ... I don't remember who it is. They had this great graphic of the pace of change, and if you went back to 10,000 BC you could bring somebody forward in time to 5,000 BC before they saw something that fundamentally challenged their world view, and then 5,000 BC, okay, to 2,000 BC, and then 2,000 BC to the year zero. You're starting to see some innovation. Zero to about 1,000 AD, very different world. 1,000 AD to 1,500 - hugely different world, and now if you brought somebody from the early 1900s to just 100 years later it's nuts. If you brought somebody even just from the 60s or the 70s- Kathleen: Totally. Sam: -right, with no context, everything they saw would be new - this is dark magic, right? It's incredible. That pace of change is accelerating, and the virtue of planning is being replaced by the virtue of adaptability. Kathleen: Yeah. Sam: When I'm interviewing people, for example, it's not nearly as important to me for most roles whether or not you have deep domain experience. What matters to me is your ability to comprehend new concepts that you've never studied before and your ability to adapt to change, because you know, it's a cliché that the only constant is change, but that used to be true, and now it is not only true, it is the defining characteristic of what life is for all of us. If you can't be adaptable, if you can't wrap your mind around concepts that you've never even been presented with before, you're not going to survive - definitely not in the world of business. Kathleen: Yeah, and the other fascinating thing that came out of my reading that book was that Steve Jobs talked about how there's a difference between people who are focused on improving what already exists - which he kind of looked at as the Microsoft model - and people who see what doesn't exist but is fundamentally needed. 
That's obviously what he saw as the Apple model. It's a really interesting construct if you think about it, because if you're only working off of the existing reality and looking to improve it, you can only experience change so quickly, whereas if you kind of forget about the current reality and are able to think about what's not here that should be, all of a sudden you get these leaps and bounds that start to happen. That's a tough ask for a lot of people though. I don't think there's a large percentage of people who are comfortable in that realm. Sam: Yeah, I mean, if you do what everyone else does you get what everyone else gets, sort of thing, right? Again, it's one of the reasons I loved this company: for a century it's been sales and marketing team versus sales and marketing team, and now we're flipping the table and doing something new. I think part of that is the way that we grow up, right? We grow up not learning how to think but learning what to think. It's this graded progression, right? It's still amazing to me when people come out of college and they come into their first role and there are all these stereotypes about them needing positive feedback. That's because that's how they were raised, right? Like "I do the thing, and then I get this" - it's an "if this, then that" sort of world. Kathleen: Everyone gets a trophy. Sam: Yeah, and it's not just everybody getting a trophy - even the high performers, the exceptionally good people, were told that the way to be exceptionally good is: you study, you take the test, you get an A, and then the assumption was you get a job, which everybody who's graduated college in the last five years knows isn't true. You know, and now we live in a fundamentally different world where we have to take everybody who grew up in that universe and teach them something new. We also need to start teaching our kids and future generations that it is not about knowing the thing. It's about knowing the way to think, knowing new ways to think, and processing it that way. When I'm in an argument at a bar, it's not a question of whether or not I can figure out who was batting for the Red Sox in the 1986 World Series or something like that. I can just ask my phone that. What matters way more is that I know that I should ask that question and why that question's important. Some of this stuff, it's not as clear. It's not this logical, linear progression. Kathleen: Yeah, man, that makes parenting sound more intimidating. Sam: It is. I don't have kids, but good luck, right? Kathleen: I'm not convinced I'm doing a great job, so ... No. It's a lot to think about, and it's pretty overwhelming, but I love the philosophical bent that this conversation took, because this is all really important stuff, and it's easy to sink into just talking about tactics, because marketers love that, and it's easy to say, “Oh, give me a 10 point checklist of the things I should do to be successful,” but a lot of times the reality really is that it's not a 10 point checklist; it's take a step back and think differently. Sam: For everyone listening to this, if you ever come across a blog article that says "here's exactly what you need to do" - like "10 steps to do whatever" - that means it has been codified to the point that everybody else in your industry knows it too. Right? This is why it's valuable: because it's hard, because it's not clearly defined. I can't just write a roadmap for you - I don't even have a name for this movement yet. Right? 
What's my inbound marketing? We haven't figured that out yet, but I can tell you it's important, and you and I intuitively believe that it's important, and the people who are going to grow by leaps and bounds, 10-X, 100-X, are going to be the people who work with people like you and me to figure that out - not the people who wait. You know, AOL still makes, what, 20 million a year or something like that off of their dial-up internet subscriptions? Those sorts of people are not going to be the ones who figure this stuff out and make that big change. Kathleen: Unless everything old is new again, and dial-up comes back just like record players did. Kidding. You have all these years of really interesting experience at HubSpot. I mean, you were with other companies before that. You've been in marketing roles for a very long time, you taught marketing at Harvard. You're coming into this role at BirdEye, and I would love to just hear a little bit about what you're planning to do with BirdEye - what's in your roadmap that you think is going to really help you achieve the goals that you've set out? BirdEye's Marketing Roadmap Sam: Yeah. This isn't like the cool thing to say, but what matters most is the fundamental mechanics, right? We have to execute consistently over time. We have to build a team that's aligned very closely with an inside sales team. That's why I'm moving to Dallas, by the way - that's where most of the sales team is, even though we have a Palo Alto office. I'm building the marketing team where the sales team is. We've got to measure the right things. We've got to train and empower folks. We've got to build just a disciplined cadence. That sounds easy. That is not easy, right? Making sure that people are aligned. Making sure that people can execute. Making sure that the right people are on the bus, because there are some people at this company, and at all companies, who helped them get from zero dollars to the run rate they're at now. But the people who are going to help you get from $30 million to $300 million are not necessarily the same people, and the people who are going to help you get from $300 million to $3 billion are not necessarily the same people either. Making that transition smooth, making sure that you're recruiting people who are good fits - that's all the basics, right? The next thing that I want to do - this is a community play. We have to build a movement here. We have to build something like inbound marketing. It was such a moment of pride for me when, in 2015, the phrase "inbound marketing" exceeded the phrase "cold calling" on Google Trends. Kathleen: Oh, that's awesome. Sam: We won. It was great. We need to figure out what that is on our end. Again, this is the real innovator's dilemma: things aren't going badly, but we're also not, like, 10-Xing for no reason, so it's how do we make the time and make sure that everybody on my team is carving out the bandwidth to do the things that, for lack of a better term, are n plus one - they're innovative. Right? How do we have a podcast that tells the story of people's favorite customers? So I used to host an AM/FM talk radio show about cigars, right? Kathleen: I was sniffing around online, and I saw on your LinkedIn profile that you once worked for a company called cheaphumidors.com, is that right? Do I have that right? Sam: Yeah. This was before that, but yeah. 
Kathleen: I totally wanted to ask you about that, but we'll do that in a separate conversation. Sam: This was before that, but every cigar lounge in the country - and Cheap Humidors is another good example - has, I joke, somebody named Rex who remembers Cuba before the revolution. He's usually a great guy to talk to; you can sit down and have a great conversation, and what we are selling is that kernel, that relationship between the business owner and their favorite customer. That is just storytelling gold. Kathleen: Yeah. Sam: Right? We've really got to nail that. We've got to know the strategy better than everyone else. On Cheap Humidors, by the way, don't judge me, because back then exact match domains were really important, so if you googled cheap humidors ... Kathleen: I was going to say it's probably a domain a lot of people would like to own. Sam: Yeah. Now, I mean, with RankBrain and everything it's more about the conceptual topic extraction from the search engines- Kathleen: Right. Sam: And stuff like that. You could call yourself reallylowcosthumidors.com and if somebody googles really low cost humidors they're not necessarily going to find you. Kathleen: Yeah. Sam: Marketing - it's hard. It used to be easy. Well, it used to be way easier. The problem is, now we've got brilliant people whose minds are working against yours, and you're really fighting - you know, at least if you're following the old sales and marketing team versus sales and marketing team approach, you're playing this optimization game, this game of inches sort of thing, and it's hard. I can't do seven eCommerce applications of LOLcats anymore - it's one of my favorite articles I wrote. Kathleen: It's hard, but I've got to tell you, in some ways I think it's great for smaller businesses, because when it wasn't so hard, when you could game the search engines, you could basically buy your way to the top, and that favors people with deeper pockets. You could never compete against them. I feel like now, if you're willing to put in the elbow grease and really create awesome content, you have a shot, and that's just a matter of time. Granted, time is always at a premium for everybody, but in some funny ways there's a little more of an even playing field than before - but I could be wrong about that. Sam: Not to sound too self-promotional, but again there was a reason I chose to work for this company: the whole arc of history - business history at least - has bent towards doing the right thing being more profitable, right? You could never run a business model now based off of the horrible things that people used to do back in the day. The way they treated their workers, for example, much less the way they treated their customers or their competitors. The cool thing is companies like Google - whether we like to admit it or not - have forced us to do better marketing. Doing the right thing is now good business. Kathleen: Yeah. Sam: And that feels great, right? Because when I talk about T-Mobile - I could do that sales pitch in Spanish, even though I don't speak Spanish, right? Because it didn't matter. I didn't care what you were going to say back to me; you were either going to sign or you were going to walk away, so it didn't matter to me that I understood what I was saying. I didn't feel good about that, right? It was just the best way to make money at the time. 
Now, creating a good value-added inbound experience is the best way to make money, and that's again what I love about this company: the best way to make money should be being good at your job, serving customers well, and I think all of the weight and inertia of the history of business is driving us towards this point where, whether it's Google, whether it's Yelp, whether it's Facebook, or whatever, you're going to have to solve that bit or you're never going to succeed in business. Kathleen's Two Questions Kathleen: I want to ask you my favorite two questions that I ask everybody, because I think you've given me the perfect segue into it, and we've talked about how, to be successful in business these days, you have to do right by your customers. When you think about the world of companies, and brands, and even individual marketers out there, my usual question is, who do you think is doing inbound marketing really well, but I'm going to put a little twist on that and say, who do you think is doing inbound marketing really well by virtue of how they are kind of nurturing, and building, and leveraging that trust with the customer? Sam: Yeah. HubSpot does a good job, but that's way too softball of an answer. You know what I really love, and this is one of my favorite business models in the world, is Netflix, because Netflix has scaled the relationship. I've rented more than 900 movies through Netflix, and I do that because I know that every time I give them that information, they're going to listen and use it to make my experience better. If the internet is about bringing together some of these groups of people with similar interests, Netflix does that beautifully, because it figures out, "Hey, listen, you like Star Trek, I like Star Trek - people may not put the two of us next to each other on a demographics sheet, but Netflix will put us together." The more information we give it, the more valuable that relationship becomes. I actually couldn't leave Netflix now. Let's say you launched your own streaming service for $1.00 a month - I still wouldn't leave Netflix, because there's so much value in the history of that relationship that I have. They're probably my favorite from the customer delight and customer retention perspective. From the perspective of actually using your customers to grow, Apple is still amazing, because there are three things you can never talk about at a party or at the office, right? Politics, religion, and PC versus Mac, because no one can have a rational conversation about that, and - Kathleen: Or jiffy versus giffy, at least in our office. Sam: Whoa, that's true. You start talking about Mac, and the Mac fans will just ... they're so passionate, they're so ravenous. Right? And Apple actually does a pretty good job of leveraging those evangelists. So do companies like Uber. You know, Uber grew enormously fast because I told everybody to take Uber. Companies that did not have that, like Lyft - Lyft started about the same time, if not slightly before Uber - never nailed that customer evangelism piece, and that's why Uber managed to outgrow them. Those are some companies that I think do it right. Kathleen: Yeah. Those are great recommendations. You also touched on the fact that marketing is changing so quickly, and that you look for people who are able to keep pace with that change, and are able to embrace, and quickly learn and understand, new concepts. 
Given that pace of change, how do you personally stay up to date and educate yourself on everything that's happening in the world of digital marketing? Sam: Yeah. That is a difficult question, which unfortunately has a difficult answer, which is that we are, especially in this day and age, our own businesses. My father's generation, my grandfather's generation, could expect to work for one company their entire lives, get a pension, and move on. We have to think about ourselves as businesses. We're generally not going to stay with the same company for our entire lives and then get a pension and whatever, so we have to define ourselves that way. We have to start thinking about disruptive innovation the same way businesses do. There are a few core characteristics of that. One is getting ridiculously good at defining the value you bring. We call this the "jobs to be done" framework. Henry Ford has the most famous quote: if he'd asked his customers what they wanted, they would have said a faster horse. Obviously he didn't found the Ford Horse Breeding Corporation. He founded the Ford Motor Company. Kathleen: That goes back to the Steve Jobs thing- Sam: Yeah. Kathleen: Find the thing that's missing. Sam: Right now, if I asked my boss what he wants me to do, he's going to say, “Drive more leads for the sales team.” That's not really what it is. Right? That's not the value that I bring. The value that I bring is the coaching, the unique perspective, et cetera, so I have focused not on the tactics of marketing but on being ridiculously good at coaching and ridiculously good at strategy, and that's sort of self-disruption. That self-disruption is the next piece: once you define your value, you need to be really, really paranoid. The best companies - like HubSpot Labs, for example - are those who are continually investing in testing whether or not they can provide more value for their customers than the core model. Take the free version of HubSpot, for example: we knew somebody was going to do that eventually, and it might as well be us and not some random nerd out of MIT's basement. Don't fight it - it's uncomfortable, but don't fight the change. Lean into that change and get comfortable with change. The value that I'm adding to the business right now is probably not going to be, as you said, the value that I'm adding in five years; it's going to be something different. We have to be comfortable with that. Now, the flip side of that is adopting this mindset of continuous learning, which is ... I hate when people ask me for book recommendations, because very rarely do I feel you have to read the entire book to get the point. Kathleen: Yeah. Sam: And it's way more interesting to me to see specific blog articles - like, send me the three most interesting blog articles that you've read in the last six months on recruiting marketers. You could probably do that, and that would take less of my time and add more value than you telling me to read random books on hiring. That self-selection comes from joining communities - not from going and getting a degree, not from trying to read a book a day or something like that - from joining communities and asking those hard questions, and never being afraid to ask stupid questions. That is my greatest pet peeve. 
We saw this on inbound.org - I ran Labs, which built inbound.org, HubSpot's community site - and people never wanted to use the "Quora for Marketers" that we built because they were terrified of looking like they didn't already know the answer. Those are the people who are going to find it very hard to have long, successful careers. The fear of asking stupid questions is how companies are killed, and the fear of asking stupid questions is also how careers are killed. Where to Find Sam (and BirdEye) Online Kathleen: Yeah. That's great advice. Wow. There is so much to think about, and this was really fun. I'm so glad I got to be the first person to talk to you about BirdEye, and I'm excited to check it out myself and hopefully learn a little bit more about what people do and do not like about marketing agencies. If somebody has a question, wants to follow up with you, and learn more, what's the best way for them to connect with you online? Sam: Again, if you Google anything close to my name you will find my website, my Twitter, my LinkedIn. I answer every website inquiry, every tweet, every LinkedIn message. Before you do that, if you're going to ask me for an opinion on something, the one favor I would ask is that you go check out the BirdEye website and try to do something. I'm not trying to get you to buy here; what I want you to do, though, is play around with it, see what things break, see what things are interesting to you, and then let's talk about that, too. We're a startup just like HubSpot was back in the day. A startup is a temporary organization in search of a repeatable business model, so I want feedback from you all now that I don't have Kip and Volpe and Dharmesh and Halligan and everybody else to hide behind. Yeah. Definitely, please do that, and reach out to me if you want. I'd love to talk. Kathleen: All right. Awesome. I'm going to put all those links in the show notes, so that if people don't know how to spell your name they can just go to the show notes, click the link, and find it, but we'll also of course put links to BirdEye, so that they can go and try to find all the bugs, and expose the weaknesses, and then make that the platform for their conversation with you. Great. Thank you so much, Sam. I really appreciate it. If you are listening and you found some value in today's conversation, I would really appreciate it if you would consider giving the podcast a review on iTunes, or Stitcher, or whatever platform you choose to listen on, and if you know somebody doing kick-ass inbound marketing work, tweet me @workmommywork, because I would love to interview them. Thanks again, Sam. Sam: Thanks.
Do you remember the comedy dry zone? I’m talking about the barren, hardscrabble times when getting a free laugh from the comfort of your toilet wasn’t easy. Before Internet memes, before parody Twitter themes, before viral SNL skits, before ShowerThoughts subreddits, before LOLCats and even before giant email chats… … there was one man. The inimitable, indomitable, indefatigable Dave Barry. Beginning in 1983 and running for over twenty years, Dave Barry sent his syndicated humor column out to over 500 newspapers from his home base at The Miami Herald. Every single week his columns offered guaranteed laughs and a fresh, head-tilting way of seeing the world. Dave Barry poured perspective on political conventions, kicked socialites off soapboxes, cajoled critics into colonoscopies, and even popularized International Talk Like A Pirate Day. Together with MAD Magazines and Calvin and Hobbes cartoons, Dave Barry columns gave me and millions of others a drink … in the dry zone. I was beyond nervous to fly down to Miami and meet up with Dave at Books&Books in Coral Gables, Florida where he shared a fresh dose of his head-tilting way of looking at the world as only he sees it. Please enjoy my conversation with screenwriter, novelist, performing musician, Pulitzer Prize winner, and New York Times bestselling author of over thirty books… Dave Barry. WHAT YOU'LL LEARN: How is comedy changing these days? What is the "cue card" approach to developing a story? What are some key guidelines for giving introductions or appearing on a TV show? In Dave's view, who has the most unique voice in modern American comedy writing? How should every stand-up comedian end their performances? How do we balance ambition with contentment ... and feeling like you have “enough?” What’s the one thing anyone can do to improve their writing? Leave us a voicemail! Your message may be included in a future episode: 1-833-READ-A-LOT You can find show notes and more information by clicking here: https://www.3books.co/chapters/9 Sign up to receive podcast updates here: https://www.3books.co/email-list/
Abbey breaks down the biblical reference from the movie A Knight's Tale (when William tells a pretty girl about God stopping the sun to give Joshua more time to fight the Amorites). You remember that part of the movie, right? You weren't too busy staring at Heath Ledger... And Shannon has bought a new Bible! Abbey does not approve. Music by: Guitalele's Happy Place by Stefan Kartenberg (c) copyright 2017. Licensed under a Creative Commons Attribution (3.0) license. http://dig.ccmixter.org/files/JeffSpeed68/56194 Ft: Kara Square (mindmapthat)
Nina & Jose are in bed discussing & analyzing the line "It's like a traffic jam when you're already late" from the song "Ironic" by Alanis Morissette. This is a bonus episode where we talk about a lot of stuff including the origins of the melody of "Kirkos Is The One (The Kirkos Song)," Luciano Pavarotti, people lying about puns being intended, Scrooge McDuck, the first GIF, Mary Tyler Moore inventing LOLcats, and much more.
The Mummy - Where is the Vice Pharaoh during all this? Host Joe Rosensteel and Dan Sturm.
More at http://philosophytalk.org/shows/memes-viruses-mind. Gangnam style, Lolcats, and Chuck Norris’ superhuman feats are all memes – units of cultural transmission – that spread through the internet. But when the term was originally coined, memes were posited as vehicles of a kind of evolution, similar to genes and biological evolution. So are the memes that colonize our brains simply those that survive natural selection? Don’t we get any say in the viruses that populate our minds? What happens if the fittest memes are also the most detrimental to us? John and Ken spread ideas with Susan Blackmore from the University of Plymouth, author of "The Meme Machine."
In this episode of Bibliophiles Anonymous, Denise and Jess review Shadowbound, the latest book in the Shadow World series by Dianne Sylvan. This is such a good series if you are a fan of vampires, good urban fantasy, and snark. This latest installment doesn't disappoint, furthering the plot, bringing well-loved characters more into the forefront, exploring new concepts in the mythology, and introducing new characters who are all kinds of awesome. There are also new villains that no one expected, who will stop at nothing to bring our vampires down, even if it means sacrificing their own souls. For anyone who missed our episode on Queen of Shadows, the first book in this series, you can listen to it here. Also, just for giggles, here is a gem from Dianne Sylvan's official website - Queen of Shadows, as told by LOLcats. Just because it's funny. Next week's topic: our favorite characters who have come back from the dead. Be sure to let us know which ones you would like to see mentioned in the show. Feedback? Topic suggestions? Book recommendations? Email the show at bibliophiles.podcast@gmail.com, or find us on Facebook or Twitter. We would love to hear from you! Thanks for listening! Please rate, review and subscribe!
So this is a game that exists. That I bought at Gencon last year. And decided to run. And the other players agreed to play. This is now a thing. They play cats, who fight the Cthulhu mythos, because reasons. So I chose the scenario seed that involved LOLcat memes because I am a horrible monster. Anyway, enjoy an indie RPG about feline comedy/horror investigation!
This week Sorg and Chachi are joined by Wrestlefan to talk about ChachiPlays, Netflix and House of Cards, migraine help, Zombies!??!, LOLCats, self-driving cars, Surface Pro, and so much more! Join us live Tuesdays at 7:00 p.m. EST on live.sorgatronmedia.com! Join the AwesomeCast on Twitter and Facebook, and be sure to follow us on iTunes in both video and audio formats, as well as YouTube, Boxee, Roku, and Blip.tv! As always, you can chime in with news, thoughts, or comments at Contact@AwesomeCast.com or 724-25-A-CAST.
Dell is going private, iOS 6 finally jailbroken and we reveal why Nokia did not choose Android. All this and more on TECHGEEK Weekly. We also delve into the 10 years of failed Dell Devices, Spotify for Windows Phone 8 and LOLcats. Join Stewart Wilson, Chris Southcott and Terence Huynh on this 48 minute podcast! It’s short mainly due to Stewart needing to leave but it’s all part of our effort [...] The post TECHGEEK Weekly 117: Dell Inspirion by Microsoft appeared first on TechGeek.
Support us on Patreon.com/PodcastScience // Find us on PodcastScience.fm // Twitter: Twitter.com/PodcastScience // Facebook: Facebook.com/PodcastScience //
Through custom coding and modification, Melissa Barron has modified the classic Oregon Trail to use in-game text that’s a blend of l337, chatspeak, and LOLcats syntax. Learn about the process of hacking this game and see it in action on an Apple IIc. Learn more at http://melissabarron.net/ or see her similar presentation at Notacon 7 […]
Let’s take a look at the cutting edge of what the Web can do. It’s not just about LOLcats and static news anymore. Eric Shepherd demonstrates how to create dynamic web applications using the latest technologies, including WebSockets. And, for the non-programmers, a few fun demos of what the web can do that you might […]
Rue Brutalia is Jason Kalter and Jon Pack, a sketch duo that's been performing here in New York since 2006/2007. The duo met in one of Kevin Allison's sketch classes at The PIT and struck up a fast friendship and writing/performing relationship. Rue Brutalia's sketches are delightful and absurd and fun and playful, and the two have been picking up steam lately. They not only performed at NYC Sketchfest for their third year in 2010, but they also helped to produce the festival. Kalter and Pack are a fountain of knowledge for the sketch scene both in New York and nationally. We sat down with the duo at the end of August to dish about the nadir of sketch, how to write sketch, lolcats, Broadway conspiracies, and the death of Google Wave.
The Business: Clay Shirky, Facebook and Lolcats
Listen now or subscribe to the podcast feed! This week, equine shenanigans at the fancy lounge at the Natick Collection, feedback from a confused Nigerian "listener", Steve takes on the evil Omaha Steaks telemarketers, the mystery of Le Petit Bistro, and a love/hate review of The Simpsons Game. Links: LOLCats at I Can Has Cheezburger Music: Up Up Down Down Left Right Left Right B A Start "I Know You'll Find Out That I'm a Geek" (mp3) from "and Nothing is #1" (Steven Poponi) Intro Music: "Pocketbook" by Derek K Miller Outro Music: "Remember Hope" by Farewell Redemption Podcasts Mentioned: Barely Podcasting Love Long and Prosper Redboy Podcast Technorati Tags: Podcasts Boston Massachusetts New England Feedback: Feel free to e-mail us at WickedGoodPodcast|at|gmail.com or call us at 206-600-MASS(6277)!
Listen now or subscribe to the podcast feed! This week, a trip to obtain McDonalds Monopoly pieces results in fisticuffs, the worst pizza ever, whether teenagers know about LOLCats, Maureen nearly gets run off the Mass Pike, attack of the wild Turkeys, and hard-hitting analysis of the Red Sox World Series victory. Links: McDonalds As Pizza Toppings Music: "Draped In Blue" by Mieka Pauley, courtesy of the Podsafe Music Network Intro Music: "Pocketbook" by Derek K Miller Outro Music: "Remember Hope" by Farewell Redemption Podcasts Mentioned: Shelly's Podcast Extra Points Atomic Suburbia Mostly News Life On Tap Love Long and Prosper Technorati Tags: Podcasts Boston Massachusetts New England Feedback: Feel free to e-mail us at WickedGoodPodcast|at|gmail.com or call us at 206-600-MASS(6277)!