Podcasts about Cloudmark

  • 16 podcasts
  • 62 episodes
  • 1h 23m average duration
  • Infrequent episodes
  • Latest episode: Jul 22, 2024

Popularity (chart): 2017–2024


Best podcasts about Cloudmark

Latest podcast episodes about Cloudmark

Go To Market Grit
#200 CEO and Co-Founder Together AI, Vipul Ved Prakash w/ Bucky Moore: Super Cycle

Jul 22, 2024 · 55:29


Guests: Vipul Ved Prakash, CEO and co-founder of Together AI; and Bucky Moore, partner at Kleiner Perkins

No one knows for sure whether the future of AI will be driven more by research labs and AI-native companies, or by enterprises applying the technology to their own data sets. But one thing is for sure, says Together AI CEO and co-founder Vipul Ved Prakash: it's going to be a lot bigger. "If you look at the next 10 years or the next 20 years, we are doing maybe 0.1 percent of [the] AI that we'll be doing 10 years from now." In this episode, Vipul, Bucky, and Joubin discuss startup table stakes, Tri Dao, tentpole features, open-source AI, non-financial investors, Meta Llama, deep learning researchers, WeWork, "Attention Is All You Need," create vs. capture, Databricks, Docker, scaling laws, Ilya Sutskever, IRC, and Jordan Ritter and Napster.

Chapters:
(00:53) - Executive hiring
(04:40) - How Vipul and Bucky met
(06:54) - Six years at Apple
(08:19) - Together and the AI landscape
(12:47) - Apple's deal with OpenAI
(14:27) - Open vs. closed AI
(17:32) - Nvidia GPUs and capital expenditures
(22:48) - Fame and reputation
(24:17) - Planning for an uncertain future
(27:00) - Stress and attention
(30:18) - AI research
(34:58) - Challenges for AI businesses
(39:02) - Frequent disagreements
(43:05) - Vipul's first startups, Cloudmark and Topsy
(47:55) - Taking time off
(50:09) - The crypto-AI connection
(53:20) - Who Together AI is hiring
(54:37) - What "grit" means to Vipul

Links:
Connect with Vipul: Twitter, LinkedIn
Connect with Bucky: Twitter, LinkedIn
Connect with Joubin: Twitter, LinkedIn
Email: grit@kleinerperkins.com
Learn more about Kleiner Perkins

This episode was edited by Eric Johnson from LightningPod.fm

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Cloud Intelligence at the speed of 5000 tok/s - with Ce Zhang and Vipul Ved Prakash of Together AI

Feb 8, 2024 · 63:11


Our first ever demo day aimed for 15-20 people and ended up ballooning to >200 and covered in the news. We are now running the 2024 edition in SF on Feb 23: Latent Space Final Frontiers, a startup and research competition in "The Autonomous Workforce", "Beyond Transformers & GPUs", and "Embodied AI". RSVP here! You can find all LS online/IRL events on our new calendar. Super Early Bird tickets have just gone on sale for AI Engineer World's Fair, June 25-27!

Today we have the honor of hosting two of Together AI's co-founders: Ce Zhang (CTO) and Vipul Ved Prakash (CEO). This is a rare opportunity to recap the history of the company since our last check-in with Tri Dao (Chief Scientist), some of their big releases, and do a deep dive into the state of the AI inference market. Together has emerged as one of the most consequential new startups in the new AI summer, last announcing a ~$100m Series A raise in November (at a ~$360-565m valuation). But there are at least three Togethers - Together the Research Lab, Together the Fine Tuning & Inference platform, and Together the custom models service. As we clarify on the pod, the overarching philosophy of Together is the ability to improve on all these fronts simultaneously by being "full stack", from the lowest level kernel and systems programming to the highest level mathematical abstractions driving new model architectures and inference algorithms.

Bringing Research and Industry Together

In just one year, Together has been behind some of the most exciting research in AI:

* RedPajama, a fully open source dataset for model pre-training which mirrored the Llama 1 recipe. Then followed by RedPajama V2, a 30T token dataset of filtered and de-duplicated tokens.
* RedPajama-INCITE-3B and 7B, which were SOTA in a few benchmarks at the time of release.
* FlashAttention-2, developed by Together's Chief Scientist Tri Dao. We covered FA-2 in a previous episode with him.
* Mamba-3B, the most promising transformer-alternative model that they released in collaboration with Cartesia.
* StripedHyena, a SOTA graft of Hyena state space models and transformer models together.
* Medusa, an alternative to speculative decoding that lets you use multiple decoding heads instead of a draft model.
* MonarchMixer, which was one of the most popular orals at NeurIPS 2023. It's an approach to transformers that replaces many of its core parts with Monarch matrices for better computational efficiency.

And I'm sure we missed something! As Vipul reveals, almost 50% of Together staff are researchers, and two of their co-founders (Chris Ré and Percy Liang) are professors at Stanford, so we can expect a lot more here.

Bringing "Disaggregated" GPUs Together

On their cloud, they offer inference as a service, fine-tuning, pre-training, etc., but unlike other providers they think of themselves as a disaggregated cloud. Today, they have ~8,000 A100 and H100 GPUs on their platform (an exclusive revealed on the pod!) totaling over 20 exaflops of compute, but instead of just buying more, putting them in a cluster, and exposing a `us-east-1` option for customers, they are taking heterogeneous compute sources and adding a unified layer on top for developers to consume. Building on Ce's research, Together's GPU Clusters are taking on comparable AWS and GCP offerings in both cost and speed.

Take the Hessian AI center in Germany or the DoE's INCITE: they have GPUs that they want to share with researchers, but they lack the cloud layer over them. Similarly, there's starting to be more and more differentiation amongst types of GPUs: H100s, A100s, MI300s, etc. Each of them has different availability and performance depending on the task, and the end user shouldn't have to be a hardware expert to run inference on a model, so Together abstracts a lot of that away. A big theme of the Together inference stack, a "bag of 50 tricks" that we discuss on the pod, is also "hardware-aware" algorithms like FlashAttention and Mamba, which further emphasize the benefits of co-developing everything together.

Special Focus: Transformer Alternatives

As we mentioned above, they are also funding a lot of research in Transformer alternatives. To reiterate a few points on why they matter:

* Longer context is not the motivation for sub-quadratic architectures: Transformers don't inherently have hard limitations on context size, they just get extremely expensive. Sub-quadratic alternatives easily enable very long context, but that's not how you should compare them. Even at the same context size, inference and training are much cheaper on sub-quadratic architectures like Hyena.
* Emergence of hybrid architectures: a lot of early conversations have been around the "post-Transformers" era, but it might be more like "half-Transformers". Hybrid architectures could have split layers, with some transformer-based and some state-space ones. One of the challenges is that a lot of hardware kernels are optimized for transformer operations, so you'd lose a lot by moving away completely.
* Higher speed = higher GPU throughput: if we could reach the same benchmark performance on sub-quadratic architectures, it'd solve a lot of the GPU crunch. Today we peak at ~170 tok/s on inference in some open models; if we could reach 5,000 tok/s on the same card, you'd be able to serve ~30x more customers on the same hardware. As a cloud provider, you're obviously incentivized to get there.

We had a lot of fun chatting with the Together guys and we covered a lot of ground, so enjoy the conversation!

Note: This is the first episode of a "cloud providers mini-series".
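The ~30x figure above is simple throughput arithmetic; a quick sanity check, using the two token rates quoted in the notes:

```python
# Back-of-the-envelope: serving capacity gained if per-request inference
# throughput rises from ~170 tok/s to 5,000 tok/s on the same card.
current_tok_s = 170      # rough peak quoted for some open models today
target_tok_s = 5_000     # the target discussed on the pod

speedup = target_tok_s / current_tok_s
print(f"~{speedup:.0f}x more customers on the same hardware")
```

(5,000 / 170 ≈ 29.4, hence the rounded "30x".)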
We have Erik from Modal and Ben from Replicate coming up next!

Video Podcast

Join us to watch the video version of this pod on our snazzy YouTube!

Show Notes

* Together AI
* RedPajama Dataset v1 Announcement
* RedPajama Models v1 Announcement
* Together Embeddings
* StripedHyena-7B
* Mamba-3B-SlimPJ
* Vipul's X thread on Anyscale
* Vipul's Razor
* SemiAnalysis' "Inference Race to the Bottom" post
* Chris Ré
* Mike Conover's episode
* Slim Pajama by Cerebras
* Dolma by AI2
* Jina AI
* Tengyu's Voyage AI

Timestamps

* [00:00:00] Introductions
* [00:00:43] Origin and current state of Together.ai
* [00:02:15] Transition from Apple to Together and the vision for open AI
* [00:04:54] How Chris Ré introduced Ce and Vipul
* [00:08:43] How RedPajama came to be
* [00:13:34] Model training and Transformer alternatives
* [00:15:37] DSIR and the importance of data in LLMs
* [00:21:19] Inference vs Fine-tuning vs Pre-training usage on Together
* [00:23:20] Together's GPU stash
* [00:27:02] Why standardization of inference metrics is important
* [00:29:26] Building moats in AI inference
* [00:31:49] Federated vs disaggregated cloud computing
* [00:34:57] Opportunities for improvement in the inference stack
* [00:36:13] Anyscale benchmarking drama
* [00:41:27] Not just an inference platform
* [00:43:50] Together Embeddings and the future of embedding models
* [00:45:53] State space models and hybrid architectures
* [00:53:52] The need for 5,000 tokens/s speed in AI inference
* [01:00:23] What's the most interesting unsolved question in AI?

Transcript

Alessio [00:00:00]: Hey, everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.

Swyx [00:00:14]: Hey, and today we're together with Together. Welcome to the studio, guys.

Ce / Vipul [00:00:20]: Thank you.

Swyx [00:00:21]: I don't know how you typically give self intros, but does anyone want to go first?
How do we get our audience acquainted, especially to who's speaking, because it's unusual for us to do a four-person pod. Yeah.

Ce [00:00:33]: Hi, everyone. I'm Ce. I'm one of the co-founders of Together and the CTO, working with the team on technical things.

Vipul [00:00:40]: I'm Vipul Ved Prakash, co-founder and CEO of Together.

Swyx [00:00:43]: I always consider you guys as one of the sort of all-in-one companies. I always want to say labs, but I feel like you're not a lab. What is the sort of origin of Together, and then what is it today? I feel like it used to be Together.xyz, and then now you're Together.ai.

Vipul [00:01:00]: I think fundamentally, Together is about open and independent AI systems. We think this is one of the most consequential technologies of our time, and when we started the company in June 2022, our focus was to build a platform for open source, independent, user-owned AI systems. One way to think about it is big labs, frontier model labs, have built their own platforms for developer platforms for their models. We think of Together as a platform for everything else, whether these are open models, whether these are models being built by companies that are owned by them. Our sort of XYZ roots, we have a fairly deep decentralization and open ethos that kind of reflects in all our platform and strategy and business. And we also, the way we structure our cloud is by combining data centers around the world instead of, you know, we are today not located in hyperscalers, we have built a footprint of AI supercomputers in this sort of very disaggregated, decentralized manner.

Alessio [00:02:15]: I know before Together, you were at Apple, so you go from like the most walled garden, private, we don't say anything company, to we want everything to be open and everybody to know somebody.
What maybe did you learn from like the Apple way of being super close and polished and maybe what are you taking now to Together to make it open, but also a very nice developer experience?

Vipul [00:02:37]: Yeah, I would say, you know, one sort of my, you know, background has been in open source for a long time. One of the first things I created was a collaborative spam filter, you know, this was back in the day. It's called Vipul's Razor. And it became quite popular. And the first company I founded, called Cloudmark, was built around, you know, taking open source and building both an open side of it and a commercial product around it. I think Apple is sort of very focused on providing this amazing experience to its customers with, you know, most of the technology sort of hidden behind the product. And certainly the focus on fluidity and applying complex technology to make everyday things simple is something that Apple does really well. And, you know, that's been a sort of big part of how we think about our developer platforms. I think it informs it. The other thing is that during my years at Apple, we, you know, worked a lot on deep learning. And one of the things that was sort of very viscerally accessible to me was how well these systems worked. We, you know, we built an open domain Q&A system. This was based on Facebook's LSTM paper in 2016. And it was remarkable because we had a parallel system based on sort of information retrieval techniques, which is extremely complicated, didn't work that well. And you know, this thing we wrote in a week was just incredible performance. So I think some of those experiences, at least for me personally, sort of were creating this roadmap of how important and powerful this technology is. And you know, when the scaling laws paper was published, it was very clear to me, like it was in some ways something very profound. We've never had algorithms that improve in capabilities with scale out. So this is almost a new era of computing.
So that's been, I think, the influence of Apple, my years at Apple, really for me, like crystallized the value of what we are doing together.

Alessio [00:04:54]: And how did you decide to join forces? Because you did a postdoc with Chris Ré at Stanford. You know, we already had Tri Dao from Together and we talked about Hazy. What was like the meeting of the mind of, hey, I come from like the more technical postdoc assistant professor background and we've got yet a more product thing. What got you excited to like build this now?

Ce [00:05:15]: So we have been working on this together, Chris, in the essentially last like 10 years, right? So it was like a machine learning system 10 years ago was like probabilistic graphical models, right? And then convolutional neural networks and then all the foundation models that we see today. But if you look at this, I think that fundamentally the thing we are actually optimizing is actually not that different. It's always about data movement across essentially all the stacks, right? So when you do distributed like computing, it's about communication across different machines. When you do, for example, flash attention, it's about data movement at a different essentially memory hierarchy, right? So we have been doing this in the last 10 years and seeing the field start grow, grow, grow. So we kind of feel the current kind of this like wave of technology is actually the perfect time to actually bring all the research essentially into something real. And we are super lucky that we got introduced to Vipul, right? And then we hope to join forces and bring this to real world.

Swyx [00:06:10]: It's an unusual team of like sort of research and industry. Like you've been like a third or fourth time founder now. Third time founder, yeah. And so like what is your first order of business when you like set up together? Like how do you sort of put something like this together?
Oh my God, I'm going to use this word so much.

Vipul [00:06:27]: I feel AI companies are really kind of driven by research. And Chris and I had been talking about how to reduce the cost of building models. We felt that there aren't really big data moats around foundation models. They are built from a subset of the web. What is difficult is the cost of capital to build these. And one of the ways in which you can reduce this cost is by making more efficient systems. With that, it was really about finding the right set of co-founders and team. In fact, when Chris introduced me to Ce, and I think within the first five minutes of talking to Ce, I was like, we are starting this company. And our early focus was thinking about this more sort of disparate set of resources, you know, GPUs around the internet. Can we use those to build? And we really have to compress communication for, you know, when we do gradient averaging, there's just a lot of traffic. And if you can reduce that somehow, you sort of open up the possibility of using cheaper compute, you know, across the network. And Ce's research for a decade has been in that subject. You know, and from there, finding, you know, other folks in the network, I think there is generally a lot of excitement and philosophical alignment around what we are doing, which, you know, we publish papers, we publish open source libraries and code, we build open models. And I think the people in academia in, you know, machine learning and NLP, that's really what they want to do. So I think that's been really a kind of kernel for, you know, composition of the company. And we're lucky to have, you know, at this point, attracted some of the best researchers in the field. So I think that's the most important thing. And, you know, the rest of it is sort of driven by us. A couple of these philosophies around independent systems and decentralization and good developer interfaces, you want to make it accessible.
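A minimal sketch of the kind of communication compression Vipul describes for gradient averaging over distributed GPUs. This uses top-k sparsification, one standard technique from the literature; the episode doesn't say which method Together actually uses, and the sizes here are arbitrary:

```python
import numpy as np

def topk_sparsify(grad: np.ndarray, k: int):
    """Keep only the k largest-magnitude entries; transmit (indices, values)."""
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]

def densify(idx: np.ndarray, vals: np.ndarray, size: int) -> np.ndarray:
    """Reconstruct a dense (approximate) gradient on the receiving side."""
    out = np.zeros(size, dtype=vals.dtype)
    out[idx] = vals
    return out

rng = np.random.default_rng(0)
grad = rng.normal(size=1_000_000).astype(np.float32)  # one worker's gradient

idx, vals = topk_sparsify(grad, k=10_000)  # send ~1% of the entries
approx = densify(idx, vals, grad.size)
ratio = (idx.nbytes + vals.nbytes) / grad.nbytes
print(f"payload is {ratio:.0%} of the dense gradient")
```

In practice schemes like this are usually paired with error feedback (accumulating the dropped residual locally) so the averaged gradient stays accurate over time.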
That's, you know, just as important. And the rest follows from there, I think.

Alessio [00:08:43]: I want to try and fill in some of the blanks in the history of Together. I think people come on your website today and they say, you raised a hundred million dollars Series A. They're like, wow, these guys are like super legit company. But it feels like Red Pajama just came out a year ago. I remember we had Mike Conover in the studio, who had built Dolly at Databricks. And you announced it literally the morning we were recording. So we're like in the studio on our phones, looking at it. And it's like, wow, this is like the first time now there's like a good curated dataset to do open pre-training. So maybe let's start from there. Like, what was the motivation behind it? Why did you decide to do that? It's, datasets are one of the things that most people don't want to work on. They just want to do models, not datasets.

Ce [00:09:27]: Yeah. So, yeah, first one is not the first, right? So I think it's actually built on a whole bunch of amazing effort the community already have. For example, EleutherAI has The Pile, right? There's a whole bunch of amazing datasets they have, like C4, right, from Google, right? So I think really get inspired by the impact those like datasets have on the community, right? So I think when we did Red Pajama, it was a time that people are really fascinated by Llama, the model, like Llama 1, right? Which I feel like decades ago, right? But it's kind of, people are really excited about the quality, right? So that's really like a big shift in people how to think about open model. People start to see hope, right? So, but the one problem of Llama is the data recipe is being described in a pretty detailed way in the paper, but the data is actually not there.
So, and our original thinking is how about we take the recipe and we try to do our best effort reproduction and try to put it out, such that we can learn from our mistakes in the reproduction together, right? So that's essentially the original thinking behind Red Pajama. And we have been pretty happy and excited about what community have been kind of build on it. For example, there's a dataset called Slim Pajama, right? Which do deduplication over our data, right?

Swyx [00:10:38]: From Cerebras, did they talk to you before?

Ce [00:10:39]: Oh, yeah, yeah, yeah, yeah. So, yeah, so we are very good friends so we can discuss about technical perspective. We are pretty excited because I think it's kind of why we do Red Pajama in the first place is that people can actually build not only models, but also datasets essentially over that piece of artifact, right? So that's actually what inspired us to do the first version of Red Pajama dataset.

Swyx [00:11:01]: Yeah, and then you released V2 maybe two months ago.

Ce [00:11:04]: Yeah.

Swyx [00:11:05]: 30 trillion tokens.

Ce [00:11:06]: Yeah, 30 trillion tokens. So I think what's exciting about Red Pajama V2 is not only the number of tokens, but we start to kind of learn from Red Pajama V1. So one thing that we learned was that data quality is really the core, right? So you want to take this couple trillion token dataset and try to bring them down maybe to one trillion or two trillion, right? The way that you actually filter them, deduplicate them is not something that kind of pre-decided before you see the application, right? So you kind of want to have a modular framework to think about data quality, right? So like given application, let's automatically or maybe semi-automatically try to come up with a way to filter it down. So that's why in Red Pajama V2, we kind of overlay the dataset with like 40 different pre-computed quality signals, right?
If you want to reproduce your best effort, like C4 filter, it's kind of like 20 lines of code, right? And this open up this opportunity you can actually put different filter together, learn the combination of filter. We are very excited to see what community actually come up with using Red Pajama V2.

Swyx [00:12:11]: It was retrospectively so obvious that this is a good idea that I wonder how come more datasets don't do this. You release the dataset with all these toggles that you can turn on and off, right? And you can sort of tune up and down the quality in ways that you believe is important to you. Yeah, I just, it makes so much sense now in retrospect. Because everyone just publishes like their pipeline and then the end result. But what about all the intermediate stages? Yeah.

Ce [00:12:35]: Yeah, so I think, so there are multiple things there. I don't think we are the only one like doing that. For example, like Dolma from AI2, right? They have this very flexible format to actually put in those quality signals, right? Think like, we are actually calling them some, right? So you can actually load Red Pajama using their tool. That whole thing should work, right? So I think one fundamental thing that changed in the last year, essentially, in the beginning when people think about data, it's always like a byproduct of the model, right? You release the model, you also release the data, right? The dataset is there essentially to show people, ah, if you train on this data, you'll get a good model. But I think what started to change is when people started building more and more of those models, people started to realize like different subsets of the dataset are kind of valuable for different applications, right? The data becomes something to play with, right?
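To make the "C4-style filter in ~20 lines" idea above concrete, here is a toy sketch of filtering documents by precomputed quality signals. The signal names and thresholds are invented for illustration; they are not RedPajama V2's actual schema:

```python
# Toy example of filtering with precomputed quality signals, in the spirit
# of RedPajama V2. Field names and thresholds here are hypothetical.
docs = [
    {"text": "...", "signals": {"word_count": 850, "mean_word_len": 4.7, "frac_lines_end_punct": 0.9}},
    {"text": "...", "signals": {"word_count": 12,  "mean_word_len": 9.1, "frac_lines_end_punct": 0.1}},
]

def c4_style_filter(doc: dict) -> bool:
    s = doc["signals"]
    return (s["word_count"] >= 50                     # drop very short documents
            and 3.0 <= s["mean_word_len"] <= 10.0     # drop gibberish / code dumps
            and s["frac_lines_end_punct"] >= 0.5)     # keep mostly sentence-like lines

kept = [d for d in docs if c4_style_filter(d)]
print(len(kept))
```

Because the signals are precomputed and shipped with the data, swapping in a different filter, or learning a combination of filters, is just a change to this predicate rather than a re-crawl of the corpus.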
So I think we are kind of lucky that we happen to release Red Pajama right at that point that we get this opportunity to actually learn from that.

Alessio [00:13:34]: And you guys have a custom model training platform on Together, too. You have a bunch of stuff in there for data selection, like the DSIR and things like that. How did you decide to work on that versus, because you first started with like some of the fine tunes on Llama. Do you see a lot of interest there? And I know you've been doing a lot of research on state space models and other transformer alternatives. Like, do you also see that as something you'll keep working on this year and push more people towards?

Vipul [00:14:02]: Yeah, I mean, we, you know, we think of how to make training more efficient and building models more efficient. Part of that is being able to select the right dataset. This is why you have signals, DSIR. You can start with a small dataset and find similar documents, build models with that. So we think it's an important part of the kind of model build tooling that, you know, sort of widely useful for people building different kinds of models. Similarly, you know, we are running into the limits of how fast you can make transformers. And we want inference at 5,000 tokens per second. I don't think we will get there with transformers and we need to learn longer sequences. Data, again, becomes very, very expensive with transformers. So we work on state space models and all the research that we are doing there. And hopefully other labs will pick up on this and make it a kind of important target for optimization. But we think that, you know, open source is a great place for this. We can provide these recipes for data and for training to our customers who are building, you know, custom models themselves. And, you know, we are quite excited about the sort of progress we are seeing there.

Alessio [00:15:18]: Do you have some of these models available for inference on Together?
Can people play around with it directly, you know?

Swyx [00:15:25]: Yeah.

Vipul [00:15:25]: Yeah, they're available for inference on our serverless platform.

Swyx [00:15:29]: I always try to be the person who asks about acronyms in case, you know, people want to understand. Should we explain importance resampling, you know, that kind of stuff?

Ce [00:15:37]: Oh, yeah. So DSIR essentially, it's a fundamental idea. So it's one of the papers from Percy, right? So essentially, if you know what you are doing, you can actually use that as a very strong signal about what data to put into the training process, right? So that's essentially the fundamental idea, right? So, and then more concretely, right? So there are actually different versions of DSIR, right? So one version is like if you have a validation set, right? You can actually somehow measure the similarity between the validation set and also your pre-training corpus and essentially subset, like the subset. And often there's actually like less targeted version of DSIR where you'll say, yeah, maybe Wikipedia is actually a very good corpus. Let's try to find more Wikipedia, right? And you can think about it in two ways, either as a way to come up with different weights for different data slices. Yeah, so as like filter type of step. Yeah, for a data set, or think about that as like data augmentation. So that's how, yeah, that's how we think about DSIR.

Swyx [00:16:33]: That makes sense. I will have to read the paper to understand a little bit more. Because when you say things like, we have to know in advance what we were trying to do with the model, then we do importance resampling. That is against the principle of general intelligence, right? Like the point is to train AGI.

Ce [00:16:48]: Yeah, so it depends on what do you mean by being general or generic, right? So I think, I mean, you can always take a meta-learning perspective that we know the distribution of tasks that we care about, right?
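Ce's targeted flavor of DSIR, scoring pre-training documents by similarity to a validation set, can be sketched with hashed n-gram features, roughly in the spirit of the paper. The feature hashing, smoothing, and tiny corpora below are simplifications of mine, not the paper's exact method:

```python
import math
from collections import Counter

def ngram_counts(text: str, buckets: int = 1_000) -> Counter:
    """Hashed bigram counts: a crude bag-of-features representation."""
    toks = text.lower().split()
    return Counter(hash(g) % buckets for g in zip(toks, toks[1:]))

def importance_score(doc: str, target: Counter, raw: Counter) -> float:
    """Log-likelihood ratio of the doc under target vs. raw feature counts
    (add-one smoothing). Higher = more target-like; sample proportionally."""
    score = 0.0
    for feat, cnt in ngram_counts(doc).items():
        score += cnt * math.log((target[feat] + 1) / (raw[feat] + 1))
    return score

# Tiny stand-ins for a validation set and the raw web corpus.
target = ngram_counts("the encyclopedia entry describes the topic in formal prose")
raw = ngram_counts("lol idk random chat stuff goes here lol ok")

formal = "the entry describes the topic in careful formal prose"
chatty = "lol random chat ok idk"
print(importance_score(formal, target, raw) > importance_score(chatty, target, raw))
```

The same score works as either of Ce's two views: use it to resample documents (filtering) or to reweight data slices during training (augmentation).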
So you can always go kind of up in the ladder of how general the whole thing is, right? But also for many of the customers that we are actually talking to, right, they have kind of very targeted application, right? The benefit you can get out of that is you could build a better open model, often smaller, often easier to do inference, if you know what you want, right? So I think the whole trade-off would be, and the x-axis would be how generic the whole thing will be. The y-axis would be not only the top accuracy, but also a whole bunch of the deployment cost, right? The size of the model, right? The robustness of the model. So I think different people will navigate the space in different way. And we want to be the platform, essentially, whatever point that you want, we have a solution for you.

Swyx [00:17:43]: One more thing on data before we go deeper on state-space models. Are we running out of data? Can we go in order of magnitude? Can we go five orders of magnitude? How do both of you think about how much data we have and how much we need?

Ce [00:17:55]: Yeah, so I think that's a very, very good question. So I don't think we are running out of data on Earth.

Swyx [00:18:02]: Right, so think about it globally. Training data, training class data.

Ce [00:18:05]: Yeah, yeah, so I think, I mean, some of them are not accessible, right? But I do think there are many organizations in the world have enough data to actually train very, very good models, right? So, I mean, they are not publicly available, right? But there are people who actually have access to those, right? So I think in general, right? So if you think about the data in the open space, right? So I guess that was specifically that you actually mean whether we are running out of data. I do think there need to be some way, right? That people who are training open models get connected with essentially data that's not internet data.
So I think that channel need to be opened up for the open model to get more data, right? But I'm kind of on the optimistic side that the society will figure out a way that we can train open models that's beyond this internet data.

Swyx [00:18:57]: Beyond internet, meaning books?

Ce [00:19:00]: I mean, there are a lot of those, right?

Swyx [00:19:02]: Books, right?

Ce [00:19:02]: Transcripts, right? Videos, audios, right? So there are a whole bunch of data sources that we are not integrating into open datasets, right? So, and maybe they shouldn't be open, right? So I think the community need to figure out a way, yeah, like the best balance, yeah? Such that we can have open models, but on the other hand, also have a reasonable collection of data that we can actually use.

Swyx [00:19:29]: I think a lot of people think that, there's a theory that Whisper was released so that you could transcribe YouTube and then use that as a source of tokens. Then I talked to other researchers who are like, you know, YouTube has very low quality tokens. You know, do you want your model to talk like a live streamer from YouTube? Because that's what they're going to do. So it's not clear, like what the quality of this data could be.

Ce [00:19:53]: Yeah, I guess that depends on your application, right? So I think as a platform, right? So our goal is whatever application that you have, yeah, so we have a platform that you can actually achieve your goal, right? So there are definitely applications that kind of make sense to speak like YouTube, right? So, but there are probably also other application that kind of more on the formal side, right? So I think there are going to be a diverse collection of models, both open and closed, right? So, and we kind of want to be the engine that powers that.

Swyx [00:20:21]: There's a lot of people who own data sources who are doing the locally optimal thing and humanity as a whole is losing out.
So like New York Times is suing OpenAI, you know, Stack Overflow shut down their API, Reddit shut down their API, X, you know, made their own model, right? On Twitter data. We're just going to have all these like tiny little gardens of data that it would be useful in a general model, but everyone's just trying to make their own model. And it seems like globally suboptimal.

Vipul [00:20:47]: I think you need to have some kind of a marketplace for figuring out how to get this, you know, data into models and have, I think we'll increasingly see more of that. You know, I think there's a positive aspect to it too. There is an incentive for creators to participate in a system, which is sort of more fair relative to, you know, the capture of value by an AI company that's taking their data. But I agree. I think this is a big open problem that needs to be solved. And I hope there will be, you know, serious efforts around it.

Alessio [00:21:19]: Let's talk about the most precious resource on planet earth, GPUs. You have a lot of compute obviously, but you also have a lot of product pieces. You have inference, you have fine tuning, you have pre-training. What's the split in terms of usage? Do you see most people are just running inference on off the shelf models? Do you see maybe some last mile fine tuning?

Vipul [00:21:40]: I would say right now, the top five models on our inference stack are probably all fine-tuned versions of open models. And we've seen-

Swyx [00:21:51]: Who fine-tuned them? You fine-tuned them?

Vipul [00:21:52]: They were fine-tuned by our customers.

Swyx [00:21:54]: By your customers.

Vipul [00:21:55]: You know, either on our platform or off our platform. And we are generally seeing that, you know, that is the sort of trend where you can get better quality on your task by sort of now easily adapting these models to your data. We also have, I would say, over 20 big model builds happening on the platform, which are customer builds.
We see a lot of training and it's also somewhat surprisingly a more continuous kind of workload. We sort of imagined that this would be more episodic. You train a model and then you do inference. But what we find is, you know, we train a model and then they train the next version and then the next version, which sort of grows in scale. I would say training is still the bigger portion. In some ways, inference is superlinear in model quality. And as the models are getting better, there's more and more inference.Swyx [00:22:48]: Oh, because they're more useful. Yeah, they're more useful, yeah. So, okay, so training is bigger. This is actually consistent with what we've heard from Mosaic, that, you know, people think that training is sort of like a one-time deal. You do one big run and then you're done. It's never true. And so I'm interested in, like, putting some numbers on it, and I don't know what you have disclosed or what you want to disclose, but, like, how many GPUs do you have? What is the equivalent amount of compute that you have? Because I understand that your GPU setup is different than what people typically think of, like, a giant data center somewhere, right?Vipul [00:23:20]: I don't think we have shared this number publicly. It's, you know, so this will be the first time, I guess. Like, we have close to 7,000 to 8,000 GPUs today. It's growing monthly.Swyx [00:23:31]: What class of GPU are they?Vipul [00:23:32]: They're mostly A100s and H100s.Swyx [00:23:35]: Okay.Vipul [00:23:36]: And probably more, I think, split towards H100s now. You know, we'll be sort of building this best-of-class hardware. So as there are other versions of these coming out later this year, we plan to have those in the fleet as well.Alessio [00:23:53]: I know when we talked last year, you were also using some of the supercomputers by the Department of Energy. There was kind of like a lot of random GPU compute in the world. Have you seen that kind of getting timed out?
I think maybe a year ago, people were like, oh, yeah, you can use this GPU computer that is going to be end-of-life. Has the bar changed to give access to those resources?Ce [00:24:13]: From our perspective, it's actually getting better. Yeah, so from the community perspective, because many of the institutions in the world, they're actually investing in hardware, right? So for example, we are working with one of the institutes in Germany called Hessian AI, right, which gives us a lot of help on the compute side. So they start to have this very big GPU cluster, and they're actually sharing that with the community, right? And it's not super big, right, but also not a small one, right? So you start to see this, like, different lives that start to pop up, right? And because of the power of the community, they start to actually share that. So we actually find as a researcher today, it's probably easier for them to actually get a GPU than last year.Swyx [00:24:56]: Interesting.Alessio [00:24:56]: And then for you to buy them, what's the state of the market right now? Is it still extremely hard to get any? Do you have Jensen's phone number? Do you have like GM phone number? Do you guys get like the SDR because you're like under 10,000?Vipul [00:25:12]: NVIDIA is obviously motivated to help us, both as an investor and we are their customers. I would say the market is very tight still, and it's likely going to be this way for a while, is my sense that the demand for AI computing is just kind of ramped up very, very quickly, and it will take a while for supply to catch up.Swyx [00:25:37]: So how tight it is, and let's say compared to like a year ago, two years ago, what do you mean when you say tight? The things you want, you can't get?Vipul [00:25:42]: You can't get them immediately. They're sort of, you know, minimally like two to three months out. Any inventory that shows up tends to clear very, very rapidly. 
And, you know, we obviously sort of look at this in a very detailed and analytical way. There are four to five million GPUs that will be sold this year by NVIDIA and others. And if you think about 512 to 1,000 GPU clusters for a company, that's 4,000 to 8,000 companies, right? So it's in some ways a very small number. In other ways, the cost of the GPUs will be, you know, $80 to $100 billion, and then you layer servers and data center space and electricity on top of that, and that's, you know, close to $250 billion worth of kind of compute, which when you compare it to the cloud computing of today, you know, AWS's last year was $88 billion in revenue. So this is really kind of a build-out happening of AI hyperscalers. It is much more disaggregated, and it's very, very global. So, you know, we think that GPUs are going to be sort of a precious resource for a long time, and using them optimally is very valuable.Swyx [00:27:02]: Yeah.Alessio [00:27:02]: Our friend, Dylan Patel from Semianalysis, he wrote a post about the inference market recently and obviously mentioned you guys. In his post, he said, our model indicates that Together is better off using two A100 80-gig systems rather than an H100-based system. The temperature and performance testing also point to Together utilizing speculative decoding. Any thoughts? Is Dylan right? I don't know, what's-Swyx [00:27:26]: What is his model, man? What does he know that they don't know? Yeah, exactly.Alessio [00:27:30]: I wanna know, I guess like from the outside, and sometimes we even do it, we try and speculate on what people are actually doing. So for the first time, now we have a former guest writing about a current guest. So we wanna know what you guys thought and maybe what are some of the misconceptions that people from the outside have on what it takes to run like a GPU cloud today?Vipul [00:27:50]: Yeah, big fan of Dylan's, by the way. I religiously read Semianalysis. I think there were some errors in that analysis.
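Vipul's cluster arithmetic can be reproduced as a back-of-envelope check. A quick sketch, using only the rough figures quoted in the conversation (the shipment and cost numbers are estimates from the discussion, not real data):

```python
# Back-of-envelope reproduction of the GPU build-out math above. Every
# figure is a rough estimate quoted in the conversation, not real data.

gpus_sold = (4_000_000, 5_000_000)   # GPUs shipped this year (estimate)
cluster_size = (512, 1_000)          # GPUs in a company-scale cluster

# How many company-scale clusters can that supply support?
low = gpus_sold[0] // cluster_size[1]    # few GPUs, big clusters
high = gpus_sold[1] // cluster_size[0]   # many GPUs, small clusters
print(f"{low:,} to {high:,} clusters")   # brackets the 4,000-8,000 quoted

# Capital cost: ~$80-100B of GPUs; servers, data centers and power
# bring the build-out to roughly $250B, vs. AWS's ~$88B annual revenue.
gpu_capex_b = (80, 100)
total_capex_b = 250
aws_revenue_b = 88
print(f"GPUs ${gpu_capex_b[0]}-{gpu_capex_b[1]}B, "
      f"total ~${total_capex_b}B vs AWS ${aws_revenue_b}B/yr")
```

The range comes out slightly wider than the quoted 4,000 to 8,000 because the two endpoints pair the extremes of both estimates.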
In particular, we were trying to decode it and one of the things we noticed is that it assumed that input tokens weren't being priced. So I think that may have been an error in the model. I also don't think that there's this assumption that people are running this at a loss. I think it's very expensive. You can't do that for very long. And there are trade-offs in terms of batch sizes you use and the kind of tokens per second performance that are kind of system trade-offs. We've done a lot of work. This is one of the key areas of research for us. So our inference stack is a combination of 50 different sort of tricks and techniques and we think there's a lot of room for optimization here. So whichever hardware provides better performance, whether it's H100s or A100s or L40s, we can sort of measure price performance on particular hardware and we tend to use that for that model, or in some cases, certain customers have data streams which can be then optimized for a particular configuration regime. So we do fairly detailed work on how to make this more efficient and so it's hard, from the outside, to look at memory bandwidth and estimate what's actually happening.Alessio [00:29:26]: How many of these 50 tricks are you keeping to yourself and how many are you gonna open up? Because we have Tri now, obviously Flash Attention 2 is open source. He mentioned he'd love to come work together because of how much you care about open source. Yeah, how do you weigh that as a CEO and CTO?Vipul [00:29:43]: A lot of it is open, right? Flash Attention, Flash Decoding, et cetera. Anything that's very generally, universally useful and is going to produce better open source AI, we tend to publish as open source. I think on the inference stack, there are open source inference stacks which are pretty good and definitely today, it gives us a competitive advantage to have the best one. So we are not sort of rushing out to release everything about it.
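Dylan's post speculates that Together uses speculative decoding, one of the inference tricks discussed here; the control flow of that technique can be sketched in a few lines. This is a toy: `VOCAB`, `ACCEPT_PROB`, and both model functions are random stand-ins invented for illustration, not real LLMs.

```python
import random

random.seed(0)

# Toy speculative decoding: a cheap "draft" model proposes several
# tokens ahead, and the expensive "target" model verifies them in one
# pass, keeping the longest accepted prefix.

VOCAB = list("abcde")
ACCEPT_PROB = 0.7  # stand-in for the draft/target agreement rate

def draft_model(prefix, k=4):
    """Cheap proposer: quickly guesses the next k tokens."""
    return [random.choice(VOCAB) for _ in range(k)]

def target_accepts(prefix, token):
    """Expensive verifier: would the target model emit `token` here?
    Modeled as a coin flip to keep the control flow visible."""
    return random.random() < ACCEPT_PROB

def speculative_step(prefix, k=4):
    """One decode step: keep the verified prefix of the draft; on the
    first rejection the target emits one token itself, so every step
    produces at least one token (never slower than plain decoding)."""
    accepted = []
    for tok in draft_model(prefix, k):
        if not target_accepts(prefix + accepted, tok):
            break
        accepted.append(tok)
    if len(accepted) < k:  # a rejection happened: target supplies a token
        accepted.append(random.choice(VOCAB))
    return accepted

out = []
while len(out) < 20:
    out.extend(speculative_step(out))
print("".join(out[:20]))
```

When the draft agrees with the target often, each verification pass of the big model yields several tokens instead of one, which is where the speedup comes from.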
It's not overall that additive to open source out there and it is particularly useful as a business for us to provide best price performance. Yeah, we make these decisions. We have discussions. Anything that we keep closed, we generally talk about it quite a bit and decide like this is the piece that is closed for today and it may not be the case six months from now. It may not matter as much.Ce [00:30:40]: Yeah, so I think being open is kind of very important, right? So I think the whole company actually built on this idea that there's going to be ecosystem built on our open models, right? And that's also how we are really lucky to attract this top group of talents to actually join us because of the dream and the mission that we have on our side to really facilitate the open ecosystem, right? So I think in general, it's like I think all the ideas should be open. So that's why we publish papers, right? We actually talk about ideas, right? So I don't think it makes any sense to keep idea like close, right? So there are some software artifact that are kind of really deeply embedded into our kind of own kind of like stack. It kind of only useful when you're trying to build a disaggregated cloud, right? Maybe at some point that we're going to be open as people said, right? But at this moment, right? So we are kind of busy actually building it, right? So that's probably kind of getting to the picture about when that piece is going to be open, right? But I think on the research side, the ideas and for our people to publish things, I think that's really, really important, right? So I think that's how we get talent. That's how I think we as a company going to move the field forward.Swyx [00:31:49]: I noticed that you never used the word federated learning or inference. Is there a distinction that you draw?Ce [00:31:55]: So, I mean, it's definitely not intentional, but I think federated learning is, have been used in so many different ways by so many different people. 
It starts to lose a very precise meaning about what that really means, right? If you go back to the original Google paper on federated learning, I think that's very different from what people are talking about today when they say federated. Yeah, we kind of want to be really precise about it.Swyx [00:32:18]: And so your term is disaggregated.Ce [00:32:19]: Yeah, so as an infrastructure, right? So that's disaggregated.Swyx [00:32:22]: Aren't most clouds disaggregated? Like what's different about it?Ce [00:32:27]: So one way is that most of the clouds are disaggregated, but some of that is actually being exposed to the user, right? If you go to AWS, you do know which region you are in, right? So I think one thing that we are trying to do is you have this disaggregated cloud, not only about location or geographically where they are, but about the reliability and also the diversity of this infrastructure. So, and if we want to build a reliable, high-quality layer over that, the user actually doesn't know, right, what's actually happening under the cover, right? So I think that's one of the differences in the way that we are thinking about infrastructure.Swyx [00:33:06]: Yeah, a bit closer to Cloudflare than AWS. Yeah. Yeah. We have one question here, which we'll just throw out, it's kind of fun. So going back to this sort of inference stack piece, maybe if you had to pull out like a call for researchers or just like point out interesting areas of work that you're interested in, what pieces of the stack have the most opportunity for improvement?Ce [00:33:27]: Yeah, so the way we are thinking about the inference stack is, so there are multiple things that can happen, right? So you can do better algorithms, like speculative decoding, you can change the model architecture, you can go really crazy on the system side, right? And you can also co-design it with the hardware, right? So it's not really clear innovation on a single dimension will get you there.
So the key thesis on our side is, if you only push on one direction, you are going to reach diminishing returns really, really quickly. Yeah, there's only that much you can do on the system side, only that much you can do on the algorithm side. I think the only big thing that's going to happen is when you get all those dimensions to actually compound, right? So to have algorithm, model, and system all come together, so I think that's how we reach the next 10 times improvement on inference, right? So I don't think there's a single dimension that is particularly important, but looking at this space in a joint way, right? Trying to co-optimize jointly across multiple dimensions, I think that's going to be really important for the community to look at.Vipul [00:34:28]: Yeah, we often see, I see numbers from the team and you have these multiple methods, not all of them compound. So you mix these together, it's still similar results and some combination of them will have this incredible effect that is really, really super interesting. So it's very systems, you know, a kind of broad systems approach to it that's the most effective.Swyx [00:34:51]: I think I finally get the name of the company, like- Bring it together, yeah. Everything needs to be automated together.Alessio [00:34:57]: All right, just quickly, how does all this work change, just like some of the architectures change? I know with a mixture of experts, like, speculative decoding is a little less efficient because of memory bandwidth. How much of it do you invest when it's a maybe model-specific improvement versus more horizontal thing? Also, you're researching different architectures, so how much do you want to spend time optimizing what's state of the art today versus what's coming next?Vipul [00:35:24]: We do spend time on what's state of the art today as well as what's next.
You know, the value we get from doing specific optimization, even for, you know, what works well for a particular model on A100s with a particular bus versus H100s, it's a worthwhile investment for us. So we will go down fairly deep into a specific architecture and specific hardware. It does also inform what works better where, and you don't have to take the same approach for, you know, every model and every sort of hardware setup. We can take these different approaches and we do have these multiple systems now. We know that this, you know, system B is better for Mixtral and system C is going to be better for Stripe Hyena or Mamba.Alessio [00:36:13]: Before we move on from inference, we need to talk about the AnyScale drama. So we're actually having Sumit on the podcast tomorrow, who also talked about, kind of came to your guys' support about how, yeah, how important it's not just like, oh, Together saying this benchmark's not good because they look bad in it. How, I guess like, it's a hard question to ask, but like, why did you decide to just come out and say it? And how maybe does that also reflect the values that you guys have about open source and openness and kind of like being transparent about what's real and maybe hopes for standardizing some of these benchmarks to make it more clear?Ce [00:36:56]: So it's a great service AnyScale is doing for the community, right? I mean, it's very hard to do benchmarks. The moment you do a benchmark comparing N players, right, N minus one will be unhappy. If you have two tables, then maybe N of them will be unhappy, right? So it's a very great thing that they're doing. And in some of the work that we are doing, we actually use LLMPerf, right? So it's a great thing that they're actually doing. So I think one thing about benchmarks is, and probably the professor part of me is talking, a good benchmark should think about how it's going to incentivize the field to actually move forward, right?
So if the benchmark really becomes a kind of standard, how are people going to over-optimize to the benchmark if you are going to do that? And when people are doing that, what are we actually trying to incentivize, right? Will that move the world to a better place? Or will that essentially have every single player focus on marketing or spending time or money on something that actually does not matter on the technical side, right? It's very hard to actually strike a balance, right? So I think the reason we kind of tried to give feedback on the benchmark is we kind of want to open up the discussion about how the industry should come together and define maybe a common way that we compare with each other, right? So like how database people do TPC, right? Maybe we should have something actually similar, right? So we are trying to start some of the conversation. So it's not really that we jumped out to say it's not good, because there's no way we can have a perfect benchmark. That doesn't really exist, right? So we're just trying to kickstart a conversation that maybe we should come together and do something that the community agrees on and that aligns with the benefit a user is going to get, right? So just get the conversation started.Vipul [00:38:42]: I've spoken to the AnyScale team after that, and I think they had really great intentions. And partly, I think it felt very objective and everyone sort of had a reaction to it because it just didn't match their benchmarks that we've all run internally against different services. I think a common industry benchmark run by an independent party would be better than one run by one of the vendors.Swyx [00:39:04]: Is there one that you point to?Vipul [00:39:06]: I don't think one exists today. I think there should be. We're having some conversations about someone setting one up. And there's lots of interesting aspects of this. Time to first token is a function of where the test was run from.
There is different load on these services at different times of the day and weekday or weekend. So you have to measure that well. And I think if all of that were done very well by an independent source, that would be a very useful service to customers and to the services themselves.Swyx [00:39:39]: Yeah, I'll point people to artificialanalysis.ai, which is a new one that recently emerged. I don't know if they've done it right. It looks like a side project of a couple people. But I think it's in all the providers' interest to work with them. And ensure that there's an independent third party that's measuring these things, right? At least on the baseline. For me, what's worrying is more about what Ce was saying, which is, do these benchmarks skew things in ways that customers might not be mindful of? Like, what are these things overemphasizing that we might be missing? And I don't really know. It seems like a lot of these services bundle in a version of quantization as well. So that means there's performance trade-offs, right? You're not comparing apples to apples, the same model itself, even though it's like a Llama variant or whatever. So what do people trade off? They trade off latency, they trade off price. Obviously, those are the first two. But what else, right? What factors matter in an inference business?Ce [00:40:33]: Yeah, so I think there's also the throughput, right? So there's the time to first token, right? So, and then there are things that users do not often see, for example, the reliability, right? The capacity, right? So that also has an impact on user experience at a global scale. Maybe not a single query, right? But in aggregate, you can also see a whole bunch of things, like whether you are emphasizing P50, P95, right? So there's a whole bunch of things that you can actually play with. And of course, there's also quality.
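The metrics Ce lists here, time to first token, throughput, and the P50/P95 tails, are easy to make concrete. A minimal sketch with made-up request timings (the numbers are purely illustrative):

```python
import math
import statistics

# Toy benchmark summary over a handful of requests:
# (ttft_seconds, total_seconds, tokens_generated) per request.
requests = [
    (0.21, 2.4, 180), (0.35, 3.1, 220), (0.19, 2.0, 150),
    (0.80, 4.2, 240), (0.25, 2.6, 190), (0.30, 2.9, 200),
]

ttfts = sorted(r[0] for r in requests)
throughputs = [tokens / total for _, total, tokens in requests]

def percentile(sorted_vals, p):
    """Nearest-rank percentile: the value at position ceil(p/100 * n)."""
    idx = max(0, math.ceil(p / 100 * len(sorted_vals)) - 1)
    return sorted_vals[idx]

print(f"P50 TTFT: {percentile(ttfts, 50):.2f}s")   # median latency
print(f"P95 TTFT: {percentile(ttfts, 95):.2f}s")   # tail latency
print(f"mean throughput: {statistics.mean(throughputs):.0f} tok/s")
```

Note how the single slow request dominates P95 while barely moving P50, which is why emphasizing one or the other changes how a service looks on a leaderboard.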
So there are different ways to actually make the whole thing faster, speculation, quantization, or a combination of those, right? So yeah, so there are so many things to actually play with. So they probably need a benchmark where the protocol is transparent, to make sure, like, it's very clear what we are doing, and a whole bunch of checks on the quality to make sure we are putting the right group of models in the same table. So I think then essentially the user can actually navigate the space. So I think that's going to be good for everyone.Swyx [00:41:27]: Yeah, makes sense. It's a very important field and I think hopefully there's a good third party that emerges from this. So I just want to touch on one more piece, which is I think I'm appreciating from this discussion that fine tuning is a bigger part of your business than I thought. The other big player in fine tuning is Mosaic. Well, Mosaic is more training, but like there's a bunch of other players in the fine tuning space. If I was a prospective fine tuning customer, what do I come to you with? Do I come to you with my custom data and that's it? Do I also have to write the fine tuning code? What level of engagement do you do with your customers?Vipul [00:42:01]: I think across the spectrum, our customers are training models, pre-training models from scratch, and many of them will bring their data sets and use our infrastructure and training stack to train their models. There are others who have trained smaller models and want to scale up, scale up across infrastructure, scale up across data. So we'll sort of help them do that. We will have customers who sort of start out a little bit more consultative. They have a particular task and idea in mind and we will help them get from there to the data set and the right model to achieve that task. So it's a spectrum and, you know, our goal is, we're trying to productize as much of this as possible. So that the whole process can be fast and scalable.
I would say there is a lot more understanding around fine tuning now than even six months ago; there are, you know, open source tools, recipes, literature, podcasts, discord channels where people are figuring it out. And it really is in many ways one of the successes of open source: you have small collectives of, you know, engineers who are now creating the top models on open source leaderboards, and have tried out all sorts of different, you know, data recipes, creating synthetic data. Merging models. Merging models. So it's, that's really fun to see. And I think that sort of agency that exists now is exciting. And that is, we see a lot of that sort of being applied into products and, you know, more commercial models that people are deploying in their applications.Alessio [00:43:50]: And then just to, I guess, wrap up on Together, it's almost becoming like a platform as a service, because now you released Together embeddings. How did you get 92.5 accuracy on 32K retrieval? And do you think we're kind of done with embeddings, like we did everything that we could, you know, it's getting to the most optimized it's gonna get, and we should just focus on models and inference? Or do you think there's still room there to improve?Ce [00:44:17]: Oh, I don't think so. We haven't even gotten started on embeddings. Yeah. So I think there are so many things. So like embedding is really fundamental for many things, for example, RAG, right? So, deep in applications. So that's how people bring knowledge in. That's also the fundamental piece when you want to build a better model, right? So that gives you this understanding about what actually gets into the model. You can actually use that to actually build a better data set, get a better model, then get better embeddings, and you start this loop, right? Without the good embeddings, the loop is not closed, right?
So I think both on the quality side, how to embed more, like, dedicated semantics into those vectors, how to deal with negation, for example, right? So, and how can you make the whole thing really, really fast? So I think for the next couple years, yeah, we will see a whole bunch of new embeddings, maybe of different sizes and much, much faster than today. Yeah, so I think it's a very active research area. I think people should invest more, yeah.Swyx [00:45:14]: I was surprised to see, I think Jina or, yeah, there's Jina AI, and then there's another guy, Tengyu's Voyage. They are coming out as startups purely focused on embeddings.Ce [00:45:25]: Yeah. Yeah, so I think it's a very, very important piece of the system, right? So people haven't focused a lot on them before, and they should definitely start to do that.Swyx [00:45:36]: Yeah. Why are the Chinese universities so good at embeddings? You know what I mean, right? Like the BGE and- Yeah, yeah, yeah.Ce [00:45:44]: So I don't know. We just released our first embedding model, so we're still trying to learn how to build an embedding model. Yeah, so ask me again in six months.Swyx [00:45:53]: I'll probably have more insight about how to build a better one. I just noticed that ada-002 used to be at the top of the MTEB chart, and then it's just like sliding down and down and down, and all the new models are coming out of China for some reason. And I'm like, I don't know what's going on there. So we cannot leave this discussion without talking about state space models. But first of all, how much of the company is dedicated to research? Like it's obviously like not production quality yet, but-Vipul [00:46:17]: I would say it's like 40, 45% I was counting this morning. That's huge.Swyx [00:46:22]: Yeah, so that's the biggest- It's a big investment. Yeah. Okay, well, I mean, it looks like it's paying off, so.
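The embedding-powered retrieval loop Ce describes for RAG reduces, at its core, to ranking documents by vector similarity. A minimal sketch; the bag-of-characters `embed` function is a toy stand-in for a real embedding model:

```python
import math

# Minimal retrieval sketch: embed documents and a query, then rank
# documents by cosine similarity to the query vector.

def embed(text):
    """Toy bag-of-characters embedding; a real system would call an
    embedding model here instead."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

docs = ["gpu clusters and training", "speculative decoding tricks",
        "embedding models for retrieval"]
query = "retrieval with embeddings"

ranked = sorted(docs, key=lambda d: cosine(embed(d), embed(query)),
                reverse=True)
print(ranked[0])  # the most similar document comes out on top
```

Better embeddings tighten this ranking, which feeds better context into the model; that is the loop Ce says isn't closed without good embeddings.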
And then high level, I will confess or admit or mention for the listeners who are also similarly skeptical, I didn't use to care about long context because I was like, you know, 30K is enough, 100K is enough, right? I'm not, you know, modeling DNA sequences or anything like that. Why do I need long context? And I mean, first of all, I'll throw that open to you. But second of all, I think what Mamba did for me was change that perception, that it's only about long context, that the only reason you want sub-quadratic architectures is for long context. Actually, that's not true. And it's also just more efficient to train, period. Right? I'll just leave that open to you. Like what's the motivation that people should keep in their heads? There are multiple things, right?Ce [00:47:09]: So one thing is that, I mean, the moment a model can do long context well, it often means that it's kind of cheaper. Yeah, so I mean, that's why it's kind of cheap. I mean, in principle, a transformer can do long context. It's just very expensive. So I think what those state-based models are trying to do is push the size of the state, right, to be as small as possible. That's how you get long context, right? And try to kind of decouple this quadratic dependency, right? To make sure you can have a much better execution pattern. One direct consequence of that is you can do long context really cheaply, but on the other hand, it also introduces a whole bunch of benefits even when you are not doing long context. Right? So I think that's actually probably equally important. Because the state gets smaller, you can do a really large batch size, right? You can actually be much faster. Right? So yeah. And another thing is like, one of the hypotheses that we have is, like in Stripe Hyena, it starts to have a hybrid architecture, right? Part of it is like a state-based model and part of it is still the transformer.
So different components probably deal with different things better. So maybe by putting them together, by thinking about how information propagates over this whole horizon of the context, you can probably get an even better quality model than a transformer. Right? So I think that's why we kind of invest a lot in those models. Not only for the context, which is very important, but also for a whole bunch of benefits it could get.Swyx [00:48:42]: Yeah. How should people treat the distinction between Mamba and Stripe Hyena? Like what's the point of releasing these two as separate models? Is one like sort of the Together proprietary one and then the other is like the more open research one?Ce [00:48:53]: Yeah. So I think it's pretty much a different stage of exploration. So they kind of have different hypotheses behind them. Yeah. Like for instance, there are different views about state-based models. One is Hyena, another is like Mamba, right? They're actually different architectures. So when we built Stripe Hyena, right, the curiosity that we had is how good can we... So what is the highest quality non-transformer model we can ever build? The goal of Stripe Hyena is to try to see whether we can match Mistral. And by fine-tuning well, whether we can outperform that in some way, right? So it has a very, very strong baseline that we are trying to beat. So that's how the hybrid came into the picture, right? And for Mamba, it's kind of more... The curiosity was how far can we push a pure architecture? There we went very systematically from small to large, right? All the way to 3 billion, right? So the baseline was essentially the best 3 billion model. So I guess they're at different stages of exploration; at some point, I think they are going to converge. We actually learn different things when building different models.
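The "decouple the quadratic dependency" point from a few turns back can be made concrete: attention does work proportional to the entire prefix at every step, while a state-space-style recurrence carries a small fixed state forward. A toy sketch (the scalar recurrence is illustrative only, not the actual Mamba or Hyena operator):

```python
# Attention touches all previous tokens at each step: O(n) per token,
# O(n^2) total. A fixed-size recurrent state does O(1) work per token,
# O(n) total, which is why long context gets cheap and batches get big.

def attention_like_cost(n):
    # token t attends to t earlier positions: 1 + 2 + ... + n
    return n * (n + 1) // 2

def recurrent_cost(n):
    # constant work per token against a fixed-size state
    return n

def recurrent_scan(xs, a=0.9, b=0.1):
    """h_t = a*h_{t-1} + b*x_t: the state h summarizes the entire
    prefix in O(1) memory, so context length never grows the state."""
    h, outs = 0.0, []
    for x in xs:
        h = a * h + b * x
        outs.append(h)
    return outs

n = 4096
print(attention_like_cost(n) // recurrent_cost(n))  # ~2048x more work at 4K
print(recurrent_scan([1.0, 1.0, 1.0])[-1])
```

The gap widens linearly with context length, which is also why, as Ce notes, a smaller state frees memory for larger batch sizes even at short context.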
I think they are just at this intermediate stage in the exploration at different points.Alessio [00:50:02]: You mentioned the hybrid architecture. Is that the model grafting that you mentioned in the Stripe Hyena post, where you mentioned you can have transformers and non-transformers together? Like this is a concept that I hadn't heard before reading about this. So I think most people's mental model is like transformers OR something else; it's not transformers AND something else. How do you train a model that is hybrid? Is there any difference in like how you construct your datasets? Is there any difference in then how you run inference on it? How should people think about starting research in this field?Ce [00:50:36]: Yeah, so we were also very surprised. Yeah, when we came up with this hybrid architecture. So the way to think about it is like you have different layers in the neural network, right? So like the state-based model for some layers will already give you the benefit. Other layers could be transformers, right? They could give you this more global view of the sequence, but for other layers, you don't have to have that, right? I still can have all the other things that kick in, right? So we don't know what is the optimal mixture between different architectures. I mean, in principle, we can have Mamba, Hyena, and transformer, all those things coming together, right? And then you can see what makes sense. We have no idea what is optimal doing that. So what we are excited about is now the community has a whole bunch of building blocks that they can actually play with like Lego, right? So just put them together and see what happens, right? So we are kind of very excited about that. Yeah, we are in the process of trying to learn more about this architecture. And when we know what we are talking about, we will definitely share with the community how to do that in a systematic way.Swyx [00:51:41]: Cool. What are we still unsure about?
Like, why don't we just, you know, put all the money in the world into training these things now? Like what is left to figure out before we scale this thing?Ce [00:51:53]: So like if you look at how the transformer has been developed, right, in the last like five to 10 years, right? So people don't start from like, you have this Attention is All You Need paper and then let's put all the money in, right? It always starts from this very systematic understanding about the scaling, about data quality, about essentially the limits, right? I think for state-based models to go from the labs to the real world, you kind of need to go through the same process. But of course, the second time doing that is kind of easier, right? But I think there's no way we can get rid of this systematic step of studying scaling laws, studying what data to put in, right? So what's the impact of different data slices, yeah, on the final model quality?Swyx [00:52:33]: Do you expect that the data inputs will be different?Ce [00:52:37]: I don't know, but I wouldn't take for granted that they should be the same, right? So that's one of the hypotheses, so we have no opinion on that because I think that's the result of the study, not the assumption. Yeah, we do not need to assume that.Swyx [00:52:51]: Okay, scaling laws and data, anything else like architectural that we are not sure about? Because now you have this selection mechanism that you're pretty happy with.Ce [00:52:59]: Yeah, so, I mean, first of all, how to mix them, right? So, and second is what is the architecture? So if you look at the transformer, right, one very interesting piece there is people also optimize the hardware, yeah, to make sure that things run very fast, right? There are very efficient kernels, there's very efficient hardware. And then that adds another boost, right, for the transformer architecture, right? So that's something that should happen for state-based models.
Which architecture is easier to run on the hardware? If hosting goes faster, you can put in more data, and it adds another dimension to the scaling law. So I think we just need to plow through the whole space and be really systematic, from small models to 1 billion, 3 billion, 7 billion, and go all the way up. I wouldn't jump around in the space; I would be patient and systematic. I think we'll get there.

Swyx [00:53:52]: Yeah, well, I'm looking forward to more research from you guys to figure that out. One dimension we didn't talk about: we covered long context and efficiency, but speed is also very important. A good inference provider today delivers, let's say, 70 tokens per second, which is faster than less good inference providers at more like 30 tokens per second. That's the rough state-of-the-art range, and it's around human speaking speed; human reading speed is about 200 words per minute. Why do we need 5,000 tokens per second is my question back to Vipul. And is this something that is an emphasis for research as well, or is it more of an inference-only thing?

Vipul [00:54:29]: There are applications that consume the tokens produced from a model, so they're not necessarily being read or heard by humans. That's a place where we see that level of requirement today that really nobody can quite satisfy. And as intelligence grows, how do you increase the bandwidth and reduce the latency? If we can do 5,000 tokens a second, the throughput of the same card goes up significantly, and it can support more applications. So I think it's important from that perspective. And then it opens up new UX possibilities. Once you can get an immediate answer
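The "Lego" mixing of layer types that Ce describes can be illustrated with a toy sketch. The following is a minimal NumPy example, not the actual StripedHyena or Mamba implementation: the layer shapes, the single-head attention, the fixed decay constant, and all function names are illustrative assumptions. It interleaves a quadratic-cost attention layer (global view of the sequence) with a linear-cost recurrent state layer (fixed-size state, no growing KV cache):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_layer(x, rng):
    # Single-head self-attention: every position attends to every other,
    # giving the global view of the sequence, at O(L^2) cost in length L.
    d = x.shape[-1]
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = softmax(q @ k.T / np.sqrt(d))
    return x + scores @ v  # residual connection

def state_layer(x, rng, decay=0.9):
    # Toy state-based layer: a fixed-size recurrent state is updated once
    # per token, so cost is O(L) and inference needs no growing cache.
    L, d = x.shape
    B = rng.standard_normal((d, d)) / np.sqrt(d)
    state = np.zeros(d)
    out = np.empty_like(x)
    for t in range(L):
        state = decay * state + x[t] @ B
        out[t] = state
    return x + out

def hybrid_forward(x, layer_types, seed=0):
    # Interleave the two block types "like Lego". Which mixture is optimal
    # is exactly the open question discussed in the episode.
    rng = np.random.default_rng(seed)
    for kind in layer_types:
        x = attention_layer(x, rng) if kind == "attn" else state_layer(x, rng)
    return x

x = np.random.default_rng(1).standard_normal((16, 8))  # (seq_len, d_model)
y = hybrid_forward(x, ["state", "state", "attn", "state"])  # mostly-recurrent stack
print(y.shape)  # (16, 8)
```

Changing the `layer_types` list is the whole experiment here: an all-`"attn"` stack is a plain transformer, an all-`"state"` stack is a pure recurrent model, and anything in between is a hybrid.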

Startup Field Guide by Unusual Ventures: The Product Market Fit Podcast
How open source AI will find product market fit: A conversation with Databricks, and AI startup Together

Play Episode Listen Later Sep 25, 2023 46:33


Open source AI models have become key drivers of innovation and collaboration. An increasing number of developers and end users are leveraging open source technologies, and there is immense potential in the long-term impact of open source AI. In this episode, we are releasing a conversation on the future of open source AI between Wei Lien Dang (Unusual Ventures), Reynold Xin (Databricks), and Vipul Ved Prakash (Together). Join us as we discuss: 3:16 The rise of open source LLMs and foundation models; 7:23 Building open source AI platforms to serve customers; 10:35 Why Together and Databricks decided to build with open source; 13:33 LLMs and the need for standardization; 21:09 The role of academia in AI research and innovation; 26:57 Innovations in training data; 30:55 Making the decision to choose open source models; 36:52 Growing accessibility of machine learning with LLMs; 40:31 How the open source ecosystem will evolve in the future; 47:18 Best practices for parameterizing LLMs over time. Wei Lien Dang is a General Partner at Unusual Ventures and leads investments in infrastructure software, security, and developer tools. Wei was a co-founder of StackRox, a cloud-native security company, prior to its acquisition by Red Hat. He can be reached at wei@unusual.vc, on Twitter, and on LinkedIn. Vipul Ved Prakash is the co-founder and CEO of Together. He was also the founder of Topsy and Cloudmark. Reynold Xin is the co-founder of Databricks. Last valued at $43B, Databricks has been a juggernaut data infrastructure business built on the Apache Spark analytics engine. They recently launched multiple AI products, including Lakehouse AI and their own open source LLM, Dolly. Unusual Ventures is a seed-stage venture capital firm designed from the ground up to give a distinct advantage to founders building the next generation of software companies. Unusual has invested in category-defining companies like Webflow, Arctic Wolf Networks, Carta, Robinhood, and Harness.
Learn more about us at https://www.unusual.vc/. Further reading from Unusual Ventures: Why the future of AI-native infrastructure will be open; How good is your LLM? Nobody knows yet; What AI builders should know about data protection and privacy

Changemaker van de Week Podcast
TODAY'S CHANGEMAKERS 2023 #13: Finn McClain

Play Episode Listen Later May 31, 2023 39:06


Finn McClain has more than twenty years of business experience and has helped various tech companies around the world scale up. McClain worked for years in Silicon Valley in the United States for companies including Cloudmark and Trend Micro. He is now Chief Commercial Officer (CCO) of Seenons, an Amsterdam-based tech company that enables circular waste management. Today's Changemakers is powered by: Ebbinge, Renewi & Vattenfall

Mortgage Masterminds
The Power of Tech in the Mortgage Industry with Karl Jacob

Play Episode Listen Later Sep 8, 2022 18:50


Karl Jacob is a serial entrepreneur who has been building, advising, and investing in companies for the last 20 years. During his career, Karl has raised 23 rounds of financing from a wide range of investors, including True Ventures, Baseline Ventures, Richard Branson's Virgin Group, Microsoft, eBay, Integral Partners, Norwest Ventures, Greylock, Benchmark Capital, FT Ventures, Ignition Partners and Vulcan Ventures. Many of Karl's companies have been successfully acquired, including Dimension X, acquired by Microsoft; Keen/Ingenio, acquired by AT&T; Cloudmark, acquired by Proofpoint; and Coveroo, acquired by Zazzle. Across his various tenures as a start-up CEO, Karl has generated hundreds of millions of dollars in investor returns and up to $150 million in revenue per year. In 2005 he joined Facebook as one of its first advisors and has gone on to advise several other companies. Karl is also a prolific angel investor and mentor for start-up companies including Mayvenn, June, HealthTap, Everlane, Skillshare, Rescale, MemSQL, Haven, Shippo and many others. He holds a B.S. in Computer Science from the University of Southern California's Engineering School, where he sits on its board of counselors.

Untold Stories
Mortgage NFTs and the Bacon Protocol with Karl Jacob

Play Episode Listen Later Apr 26, 2022 54:41


Today, my guest is Karl Jacob, the co-founder and CEO of LoanSnap. Karl is a serial entrepreneur who has been building, advising, and investing in companies for the last 20 years. LoanSnap invented the world's first smart loan technology, which uses AI and machine learning to analyze a person's entire financial picture and show them how to benefit from a smart home loan. LoanSnap is the parent company that built the Bacon Protocol, a decentralized mortgage lending protocol created using smart contract technology on the Ethereum blockchain. Anyone with a Web3 wallet can lend money and earn interest, all while seeing exactly which homes they're lending against. Moreover, anyone with a home that meets the protocol's criteria can create an NFT and use it as collateral to borrow money. Karl's career has been focused on founding companies that solve big problems, and those companies have helped tens of millions of consumers. He has raised 23 rounds of financing from investors including True Ventures, Baseline Ventures, Richard Branson's Virgin Group, Microsoft, eBay, Integral Partners, Norwest Ventures, Greylock, Benchmark Capital, FT Ventures, Ignition Partners, and Vulcan Ventures. Many of his companies have had successful acquisitions, including Dimension X (acquired by Microsoft), Keen/Ingenio (acquired by AT&T), Cloudmark (acquired by Proofpoint), and Coveroo (acquired by Zazzle). As CEO, Jacob has generated hundreds of millions in returns to investors and over $150 million in revenue per year. He holds a B.S. in Computer Science from the University of Southern California Engineering School, where he sits on the board of counselors. We discuss a wide range of topics, including LoanSnap, the Bacon Protocol, the real estate industry, the benefits of blockchain, and much more. We begin our discussion by delving into why the banking and loan industry has been tough to disrupt.
Our conversation leads us to how he and his team finally found a way to disrupt the industry's value chain. Karl explains how the Bacon Protocol works and what makes it unique, and why they decided to build it on blockchain rails instead of on the traditional financial stack. He illustrates this by explaining how blockchain removes the barriers and overhead that plague the conventional loan and real estate industry. Karl discusses the fallacies that persist throughout the loan industry and how the Bacon Protocol democratizes access to affordable housing and to lending. We also discuss the socioeconomic impact of foreclosures on neighborhoods and how blockchain technology can be used to solve the misalignment between individuals and corporations. Karl also addresses the impediments that currently make it challenging to tokenize real-world assets and how they've designed the Bacon Protocol to avoid these issues. We finish our conversation by discussing how Karl envisions the entire loan industry being disrupted by blockchain technology. Please enjoy my conversation with Karl Jacob. -- This podcast is powered by Blockworks. For exclusive content and events that provide insights into the crypto and blockchain space, visit them at https://blockworks.co

WE DIGRESS - the podcast that's a hot mess
We're Back!! - Game Awards Recap Show

Play Episode Listen Later Dec 13, 2021 85:59


Support the show for as little as $0.99 a month: https://anchor.fm/wedigress/support Buy the Official WE DIGRESS merch here: https://www.wedigresspodcast.com/ Find Mike here: https://www.twitch.tv/mikemedran0 https://twitter.com/MikeMedran0 https://www.instagram.com/mikemedran0/ Find Mark here: https://www.twitch.tv/cloudmark27 https://twitter.com/CloudMark_ https://www.instagram.com/cloudmark_/ --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/wedigress/support

WE DIGRESS - the podcast that's a hot mess
Pokimane's RTS signs Mark?!

Play Episode Listen Later Nov 1, 2021 80:31


Support the show for as little as $0.99 a month: https://anchor.fm/wedigress/support Buy the Official WE DIGRESS merch here: https://www.wedigresspodcast.com/ Find Mike here: https://www.twitch.tv/mikemedran0 https://twitter.com/MikeMedran0 https://www.instagram.com/mikemedran0/ Find Mark here: https://www.twitch.tv/cloudmark27 https://twitter.com/CloudMark_ https://www.instagram.com/cloudmark_/ --- Support this podcast: https://anchor.fm/wedigress/support

WE DIGRESS - the podcast that's a hot mess
Mark didn't make the list :(

Play Episode Listen Later Oct 18, 2021 83:49


Support the show for as little as $0.99 a month: https://anchor.fm/wedigress/support Buy the Official WE DIGRESS merch here: https://www.wedigresspodcast.com/ Find Mike here: https://www.twitch.tv/mikemedran0 https://twitter.com/MikeMedran0 https://www.instagram.com/mikemedran0/ Find Mark here: https://www.twitch.tv/cloudmark27 https://twitter.com/CloudMark_ https://www.instagram.com/cloudmark_/ --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/wedigress/support

WE DIGRESS - the podcast that's a hot mess
It's SpOOooooOoooOoOky season!!!

Play Episode Listen Later Oct 4, 2021 84:24


Support the show for as little as $0.99 a month: https://anchor.fm/wedigress/support Buy the Official WE DIGRESS merch here: https://www.wedigresspodcast.com/ Find Mike here: https://www.twitch.tv/mikemedran0 https://twitter.com/MikeMedran0 https://www.instagram.com/mikemedran0/ Find Mark here: https://www.twitch.tv/cloudmark27 https://twitter.com/CloudMark_ https://www.instagram.com/cloudmark_/ --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/wedigress/support

Protecting People
Five Minute Forecast for the week of 9/27/2021

Play Episode Listen Later Sep 27, 2021 4:13 Transcription Available


Five Minute Forecast for the week of September 27th. All the cybersecurity news you need to stay ahead, from Proofpoint's Protecting People podcast. This week: no honor among thieves, as REvil is caught stealing from its own affiliates; America's food supply under attack from ransomware; and new mobile malware emerges in North America. Joining us is Adam McNeil, Senior Threat Researcher at Cloudmark, to discuss a new mobile malware emerging in the U.S. and Canada.

WE DIGRESS - the podcast that's a hot mess
PlayStation Showcase recap!

Play Episode Listen Later Sep 13, 2021 126:06


Support the show for as little as $0.99 a month: https://anchor.fm/wedigress/support Buy the Official WE DIGRESS merch here: https://www.wedigresspodcast.com/ Find Mike here: https://www.twitch.tv/mikemedran0 https://twitter.com/MikeMedran0 https://www.instagram.com/mikemedran0/ Find Mark here: https://www.twitch.tv/cloudmark27 https://twitter.com/CloudMark_ https://www.instagram.com/cloudmark_/ --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/wedigress/support

WE DIGRESS - the podcast that's a hot mess
The future of this podcast...Mike's internet still sucks

Play Episode Listen Later Sep 6, 2021 105:36


Support the show for as little as $0.99 a month: https://anchor.fm/wedigress/support Buy the Official WE DIGRESS merch here: https://www.wedigresspodcast.com/ Find Mike here: https://www.twitch.tv/mikemedran0 https://twitter.com/MikeMedran0 https://www.instagram.com/mikemedran0/ Find Mark here: https://www.twitch.tv/cloudmark27 https://twitter.com/CloudMark_ https://www.instagram.com/cloudmark_/ --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/wedigress/support

WE DIGRESS - the podcast that's a hot mess
Gamescom 2021 ONL Recap and Mark schools Mike about dinosaurs

Play Episode Listen Later Sep 1, 2021 108:34


Support the show for as little as $0.99 a month: https://anchor.fm/wedigress/support Buy the Official WE DIGRESS merch here: https://www.wedigresspodcast.com/ Find Mike here: https://www.twitch.tv/mikemedran0 https://twitter.com/MikeMedran0 https://www.instagram.com/mikemedran0/ Find Mark here: https://www.twitch.tv/cloudmark27 https://twitter.com/CloudMark_ https://www.instagram.com/cloudmark_/ --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/wedigress/support

WE DIGRESS - the podcast that's a hot mess
Y'all got anymore of that OnlyFans??

Play Episode Listen Later Aug 23, 2021 107:49


Support us on Patreon: https://www.patreon.com/wedigresspod Support the show for as little as $0.99 a month: https://anchor.fm/wedigress/support Buy the Official WE DIGRESS merch here: https://www.wedigresspodcast.com/ Find Mike here: https://www.twitch.tv/mikemedran0 https://twitter.com/MikeMedran0 https://www.instagram.com/mikemedran0/ Find Mark here: https://www.twitch.tv/cloudmark27 https://twitter.com/CloudMark_ https://www.instagram.com/cloudmark_/ --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/wedigress/support

WE DIGRESS - the podcast that's a hot mess
It keeps getting worse for Blizzard...

Play Episode Listen Later Aug 9, 2021 102:32


Support us on Patreon: https://www.patreon.com/wedigresspod Support the show for as little as $0.99 a month: https://anchor.fm/wedigress/support Buy the Official WE DIGRESS merch here: https://www.wedigresspodcast.com/ Find Mike here: https://www.twitch.tv/mikemedran0 https://twitter.com/MikeMedran0 https://www.instagram.com/mikemedran0/ Find Mark here: https://www.twitch.tv/cloudmark27 https://twitter.com/CloudMark_ https://www.instagram.com/cloudmark_/ --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/wedigress/support

WE DIGRESS - the podcast that's a hot mess
We need to talk about Blizzard

Play Episode Listen Later Aug 2, 2021 134:00


Support us on Patreon: https://www.patreon.com/wedigresspod Support the show for as little as $0.99 a month: https://anchor.fm/wedigress/support Buy the Official WE DIGRESS merch here: https://www.wedigresspodcast.com/ Find Mike here: https://www.twitch.tv/mikemedran0 https://twitter.com/MikeMedran0 https://www.instagram.com/mikemedran0/ Find Mark here: https://www.twitch.tv/cloudmark27 https://twitter.com/CloudMark_ https://www.instagram.com/cloudmark_/ --- Support this podcast: https://anchor.fm/wedigress/support

Success Made to Last
Success Made to Last Legends with Jerome Lecat, CEO of Scality

Play Episode Listen Later Jul 20, 2021 25:45


Success Made to Last Legends with Jerome Lecat, CEO of Scality. Jerome Lecat is a serial entrepreneur and business angel with 15 years of internet start-up experience. You can find him on Twitter: @jlecat. From 2003 to 2010, Jerome led Bizanga, the leading email MTA for service providers, which he founded with Olivier Lemarie, Marc Sheldon and Giorgio Regni. Bizanga achieved major market penetration worldwide with customers such as Comcast, Cox Communications, Telefonica and United Internet (1&1). Bizanga was successfully sold to Cloudmark in February 2010. In 2001, Jerome became Chairman of the Board of Data Center Technology (DCT), a Belgium-based start-up which developed a unique Content Addressable Storage (CAS) technology, especially for the backup market. After signing over 70 customers, DCT was sold to Veritas in 2005 with significant profit for its investors. In 1994, Jerome founded Internet-Way, the fourth ISP for enterprises to open in France, with Olivier Dauchot and Olivier Lemarie. As CEO, he built the company from a garage start-up to the second-largest ISP in France. In 1997, after the company had reached profitability, he sold it to UUNET, where he served as vice president of products for EMEA. Jerome has also been active as a business angel and board member in several leading technology companies, including Vision Objects, the world leader in handwriting recognition, which was sold to DoubleDay in 2009. Jerome holds an engineering degree from the École Nationale des Ponts et Chaussées and a research master's degree in Cognitive Science from Université Paris VII, and he attended the AMD program at INSEAD. Become a supporter of this podcast: https://www.spreaker.com/podcast/success-made-to-last-legends--4302039/support.

WE DIGRESS - the podcast that's a hot mess
We vibing......Plus, SGDQ and Steam Deck news!

Play Episode Listen Later Jul 19, 2021 85:04


Support us on Patreon: https://www.patreon.com/wedigresspod Support the show for as little as $0.99 a month: https://anchor.fm/wedigress/support Buy the Official WE DIGRESS merch here: https://streamlabs.com/mikemedran0/merch Find Mike here: https://www.twitch.tv/mikemedran0 https://twitter.com/MikeMedran0 https://www.instagram.com/mikemedran0/ Find Mark here: https://www.twitch.tv/cloudmark27 https://twitter.com/CloudMark_ https://www.instagram.com/cloudmark_/ --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/wedigress/support

WE DIGRESS - the podcast that's a hot mess
Nintendo did us dirty...

Play Episode Listen Later Jul 12, 2021 94:13


Support us on Patreon: https://www.patreon.com/wedigresspod Support the show for as little as $0.99 a month: https://anchor.fm/wedigress/support Buy the Official WE DIGRESS merch here: https://streamlabs.com/mikemedran0/merch Find Mike here: https://www.twitch.tv/mikemedran0 https://twitter.com/MikeMedran0 https://www.instagram.com/mikemedran0/ Find Mark here: https://www.twitch.tv/cloudmark27 https://twitter.com/CloudMark_ https://www.instagram.com/cloudmark_/ --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/wedigress/support

WE DIGRESS - the podcast that's a hot mess
Twitch and TOS integrity...Name a better duo.

Play Episode Listen Later Jul 6, 2021 109:06


Support us on Patreon: https://www.patreon.com/wedigresspod Support the show for as little as $0.99 a month: https://anchor.fm/wedigress/support Buy the Official WE DIGRESS merch here: https://streamlabs.com/mikemedran0/merch Find Mike here: https://www.twitch.tv/mikemedran0 https://twitter.com/MikeMedran0 https://www.instagram.com/mikemedran0/ Find Mark here: https://www.twitch.tv/cloudmark27 https://twitter.com/CloudMark_ https://www.instagram.com/cloudmark_/ --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/wedigress/support

WE DIGRESS - the podcast that's a hot mess
E3 Awards Show - LIVE REACTION

Play Episode Listen Later Jun 21, 2021 41:06


Support us on Patreon: https://www.patreon.com/wedigresspod Support the show for as little as $0.99 a month: https://anchor.fm/wedigress/support Buy the Official WE DIGRESS merch here: https://streamlabs.com/mikemedran0/merch Find Mike here: https://www.twitch.tv/mikemedran0 https://twitter.com/MikeMedran0 https://www.instagram.com/mikemedran0/ Find Mark here: https://www.twitch.tv/cloudmark27 https://twitter.com/CloudMark_ https://www.instagram.com/cloudmark_/ --- Support this podcast: https://anchor.fm/wedigress/support

WE DIGRESS - the podcast that's a hot mess
Thigh Guy Summer Game Fest - LIVE REACTION

Play Episode Listen Later Jun 14, 2021 126:42


Support us on Patreon: https://www.patreon.com/wedigresspod Support the show for as little as $0.99 a month: https://anchor.fm/wedigress/support Buy the Official WE DIGRESS merch here: https://streamlabs.com/mikemedran0/merch Find Mike here: https://www.twitch.tv/mikemedran0 https://twitter.com/MikeMedran0 https://www.instagram.com/mikemedran0/ Find Mark here: https://www.twitch.tv/cloudmark27 https://twitter.com/CloudMark_ https://www.instagram.com/cloudmark_/ --- Support this podcast: https://anchor.fm/wedigress/support

WE DIGRESS - the podcast that's a hot mess
Samsung's Virtual Assistant broke the Internet...and we're okay with that

Play Episode Listen Later Jun 7, 2021 83:19


Support us on Patreon: https://www.patreon.com/wedigresspod Support the show for as little as $0.99 a month: https://anchor.fm/wedigress/support Buy the Official WE DIGRESS merch here: https://streamlabs.com/mikemedran0/merch Find Mike here: https://www.twitch.tv/mikemedran0 https://twitter.com/MikeMedran0 https://www.instagram.com/mikemedran0/ Find Mark here: https://www.twitch.tv/cloudmark27 https://twitter.com/CloudMark_ https://www.instagram.com/cloudmark_/ --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/wedigress/support

WE DIGRESS - the podcast that's a hot mess
How to lose $42 million and alienate people.

Play Episode Listen Later May 31, 2021 108:52


Support us on Patreon: https://www.patreon.com/wedigresspod Support the show for as little as $0.99 a month: https://anchor.fm/wedigress/support Buy the Official WE DIGRESS merch here: https://streamlabs.com/mikemedran0/merch Find Mike here: https://www.twitch.tv/mikemedran0 https://twitter.com/MikeMedran0 https://www.instagram.com/mikemedran0/ Find Mark here: https://www.twitch.tv/cloudmark27 https://twitter.com/CloudMark_ https://www.instagram.com/cloudmark_/ --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/wedigress/support

WE DIGRESS - the podcast that's a hot mess
Twitch vs the Hot Tub Meta

Play Episode Listen Later May 27, 2021 101:02


Support us on Patreon: https://www.patreon.com/wedigresspod Support the show for as little as $0.99 a month: https://anchor.fm/wedigress/support Buy the Official WE DIGRESS merch here: https://streamlabs.com/mikemedran0/merch Find Mike here: https://www.twitch.tv/mikemedran0 https://twitter.com/MikeMedran0 https://www.instagram.com/mikemedran0/ Find Mark here: https://www.twitch.tv/cloudmark27 https://twitter.com/CloudMark_ https://www.instagram.com/cloudmark_/ --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/wedigress/support

WE DIGRESS - the podcast that's a hot mess
Another friendship episode???

Play Episode Listen Later May 27, 2021 109:50


Support us on Patreon: https://www.patreon.com/wedigresspod Support the show for as little as $0.99 a month: https://anchor.fm/wedigress/support Buy the Official WE DIGRESS merch here: https://streamlabs.com/mikemedran0/merch Find Mike here: https://www.twitch.tv/mikemedran0 https://twitter.com/MikeMedran0 https://www.instagram.com/mikemedran0/ Find Mark here: https://www.twitch.tv/cloudmark27 https://twitter.com/CloudMark_ https://www.instagram.com/cloudmark_/ --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/wedigress/support

WE DIGRESS - the podcast that's a hot mess
We forgot about the DICE Awards...

Play Episode Listen Later Apr 26, 2021 91:24


Support us on Patreon: https://www.patreon.com/wedigresspod Support the show for as little as $0.99 a month: https://anchor.fm/wedigress/support Buy the Official WE DIGRESS merch here: https://streamlabs.com/mikemedran0/merch Find Mike here: https://www.twitch.tv/mikemedran0 https://twitter.com/MikeMedran0 https://www.instagram.com/mikemedran0/ Find Mark here: https://www.twitch.tv/cloudmark27 https://twitter.com/CloudMark_ https://www.instagram.com/cloudmark_/ --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/wedigress/support

WE DIGRESS - the podcast that's a hot mess
All hail the new King of Twitch!!

Play Episode Listen Later Apr 19, 2021 87:16


Support us on Patreon: https://www.patreon.com/wedigresspod Support the show for as little as $0.99 a month: https://anchor.fm/wedigress/support Buy the Official WE DIGRESS merch here: https://streamlabs.com/mikemedran0/merch Find Mike here: https://www.twitch.tv/mikemedran0 https://twitter.com/MikeMedran0 https://www.instagram.com/mikemedran0/ Find Mark here: https://www.twitch.tv/cloudmark27 https://twitter.com/CloudMark_ https://www.instagram.com/cloudmark_/ --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/wedigress/support

WE DIGRESS - the podcast that's a hot mess
If you cheat in a game, if you cheat on your wife, if you cheat in real life, then &$!% you!!

Play Episode Listen Later Apr 12, 2021 105:33


Support us on Patreon: https://www.patreon.com/wedigresspod Support the show for as little as $0.99 a month: https://anchor.fm/wedigress/support Buy the Official WE DIGRESS merch here: https://streamlabs.com/mikemedran0/merch Find Mike here: https://www.twitch.tv/mikemedran0 https://twitter.com/MikeMedran0 https://www.instagram.com/mikemedran0/ Find Mark here: https://www.twitch.tv/cloudmark27 https://twitter.com/CloudMark_ https://www.instagram.com/cloudmark_/ --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/wedigress/support

WE DIGRESS - the podcast that's a hot mess
-the non-fungible token that's a hot mess-

Play Episode Listen Later Mar 29, 2021 106:29


Support us on Patreon: https://www.patreon.com/wedigresspod Support the show for as little as $0.99 a month: https://anchor.fm/wedigress/support Buy the Official WE DIGRESS merch here: https://streamlabs.com/mikemedran0/merch Find Mike here: https://www.twitch.tv/mikemedran0 https://twitter.com/MikeMedran0 https://www.instagram.com/mikemedran0/ Find Mark here: https://www.twitch.tv/cloudmark27 https://twitter.com/CloudMark_ https://www.instagram.com/cloudmark_/ --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/wedigress/support

WE DIGRESS - the podcast that's a hot mess
"If you see something, say something."

Play Episode Listen Later Mar 22, 2021 134:41


anti-asianviolenceresources.carrd.co Support us on Patreon: https://www.patreon.com/wedigresspod Support the show for as little as $0.99 a month: https://anchor.fm/wedigress/support Buy the Official WE DIGRESS merch here: https://streamlabs.com/mikemedran0/merch Find Mike here: https://www.twitch.tv/mikemedran0 https://twitter.com/MikeMedran0 https://www.instagram.com/mikemedran0/ Find Mark here: https://www.twitch.tv/cloudmark27 https://twitter.com/CloudMark_ https://www.instagram.com/cloudmark_/ --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/wedigress/support

WE DIGRESS - the podcast that's a hot mess
4K Nintendo Switch when??? Plus Pokemon Direct recap and Twitch's 2020 analytics

Play Episode Listen Later Mar 15, 2021 94:05


Support us on Patreon: https://www.patreon.com/wedigresspod Support the show for as little as $0.99 a month: https://anchor.fm/wedigress/support Buy the Official WE DIGRESS merch here: https://streamlabs.com/mikemedran0/merch Find Mike here: https://www.twitch.tv/mikemedran0 https://twitter.com/MikeMedran0 https://www.instagram.com/mikemedran0/ Find Mark here: https://www.twitch.tv/cloudmark27 https://twitter.com/CloudMark_ https://www.instagram.com/cloudmark_/ --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/wedigress/support

WE DIGRESS - the podcast that's a hot mess
HAS THE PS5 ALREADY WON 2021?? plus, Blizzconline recap and VCT Game Changers announcement

Play Episode Listen Later Mar 1, 2021 115:47

WE DIGRESS - the podcast that's a hot mess
Texas is FROZEN...Plus Nintendo Direct News

Play Episode Listen Later Feb 22, 2021 102:55

WE DIGRESS - the podcast that's a hot mess
Mandalorian X The Last of Us Crossover?? Plus Pokemon Scalpers are getting Ridiculous

Play Episode Listen Later Feb 21, 2021 92:29

WE DIGRESS - the podcast that's a hot mess
BUY LOW, SELL HIGH - Explaining Gamestop Stock...very badly

Play Episode Listen Later Feb 18, 2021 88:39

WE DIGRESS - the podcast that's a hot mess
Mark is an OFFICIAL Resident Evil Creator

Play Episode Listen Later Jan 25, 2021 78:05


WE DIGRESS - the podcast that's a hot mess
America is on fire 2: Electric Boogaloo

Play Episode Listen Later Jan 17, 2021 102:28

WE DIGRESS - the podcast that's a hot mess
Our most CHAOTIC episode yet. Goodbye 2020, hello 2021

Play Episode Listen Later Jan 6, 2021 99:46

WE DIGRESS - the podcast that's a hot mess
You can't say Simp on Twitch??

Play Episode Listen Later Dec 24, 2020 89:26

WE DIGRESS - the podcast that's a hot mess
The Game Awards Live Commentary Part 1

Play Episode Listen Later Dec 17, 2020 92:44

WE DIGRESS - the podcast that's a hot mess
The Game Awards Live Commentary Part 2

Play Episode Listen Later Dec 17, 2020 86:44

WE DIGRESS - the podcast that's a hot mess
Catching Up and Talking the Poki/Fed drama

Play Episode Listen Later Dec 10, 2020 115:55

WE DIGRESS - the podcast that's a hot mess
Happy Thanksgiving from WE DIGRESS

Play Episode Listen Later Nov 26, 2020 3:55

WE DIGRESS - the podcast that's a hot mess
Mark won't shut up about ANIME

Play Episode Listen Later Nov 23, 2020 125:31

WE DIGRESS - the podcast that's a hot mess
Twitch has a HUGE DMCA Problem

Play Episode Listen Later Nov 16, 2020 103:14


WE DIGRESS - the podcast that's a hot mess
Politics in the Games Industry, Google Stadia and Mark Reads a Scary Story

Play Episode Listen Later Oct 26, 2020 90:48

WE DIGRESS - the podcast that's a hot mess
More Ubisoft Misconduct News, Riot Dissolves OPL, and Mark Rages

Play Episode Listen Later Oct 12, 2020 121:16

WE DIGRESS - the podcast that's a hot mess
Microsoft buys Zenimax, Worlds 2020 Play-ins, and the Tokyo Game Show

Play Episode Listen Later Oct 5, 2020 92:44

Screaming in the Cloud
Managing Humans with Charity Majors


Play Episode Listen Later Sep 17, 2020 30:49


Charity Majors is the cofounder and CTO at Honeycomb.io, makers of an observability platform for engineers and DevOps teams. Before Honeycomb, Charity worked as a production engineering manager at Facebook, an infrastructure tech lead at Parse, a senior systems engineer at Cloudmark, and a systems engineer at shopkick, among other positions. She's also the co-author of Database Reliability Engineering: Designing and Operating Resilient Database Systems. Join Corey and Charity as they discuss how to manage teams effectively; how humans want autonomy and why managers need to understand that dynamic; how a manager's job is more like curating a team than actually managing people; why Charity believes companies don't actually exist but instead are created every day; why managers should be less like King George and more like the articles in the Constitution; why technology companies should focus on letting people do what they love instead of automatically encouraging them to climb the ladder and get into management; and more.

WE DIGRESS - the podcast that's a hot mess
FGC Grooming, OTV Drama, and Mental Health in Games

Play Episode Listen Later Jul 6, 2020 118:54

TRIGGER WARNING: Sexual Assault, Suicide. Join MikeMedran0 and Cloudmark as they discuss the week's latest gaming news.

WE DIGRESS - the podcast that's a hot mess
Mixer Shutdown, Twitch's Sexual Assault Problem and a Ridiculous Lawsuit

Play Episode Listen Later Jun 29, 2020 104:11

TRIGGER WARNING: Sexual Assault, Racism. Join MikeMedran0 and Cloudmark as they discuss everything and nothing at all!

How to Live to 200 Podcast
Nutrition and Bio-Hacking w/ Martin Tobias, CEO, Bulletproof Labs


Play Episode Listen Later Jul 24, 2018 52:27


Martin Tobias is the CEO of Bulletproof Labs, the CEO of Upgrade Labs and the co-founder of People For Cause, Element 8 Angels, and MGT Investments. He's served on the board of directors for Cloudmark and Tippr, and co-founded numerous startups, including Kashless, Imperium Renewables, and Loudeye Technologies. He was a partner at Ignition Partners, a venture capital firm in Seattle, and previously served as an executive at Microsoft. In addition to being an experienced technology executive, Martin is a health enthusiast and an advocate for people taking control of their own biology. Bulletproof Labs is a high-end health center with an emphasis on biohacking. Bulletproof Labs features the latest, cutting-edge workout equipment and experimental and emerging therapies, including cryotherapy, float tanks, and high-intensity interval training. In this episode, we answer questions on biohacking and discuss the latest trends in life extension. What are telomeres? Can you regrow your telomeres? What are NAD supplements? Could nicotine be part of a healthy diet? What is a vampire facial?

About Martin
bulletprooflabs.com
Martin Tobias Twitter
Martin's Personal Blog

Show Links
Palo Alto Longevity Prize
40 years of Zen
Telomere extension turns back aging clock in cultured human cells, study finds
Epithalon peptide induces telomerase activity and telomere elongation in human somatic cells
Nicotinamide adenine dinucleotide (NAD)
NAD+ and sirtuins in aging and disease, Shin-ichiro Imai, Leonard P. Guarente
BrainCheck
Martin's blog on NAD supplementation: DO THIS: Increase your NAD levels
TRU NIAGEN
Elysium Basis
Bulletproof Labs' Body Hacks
Bulletproof products
Will a Nicotine Patch Make You Smarter? [Excerpt]
Platelet-rich Plasma (PRP)
Oura
DO THIS: Morning Pages Hacks
DAVID Delight Pro
Muse

Inside Out Security
How Infosec Can Implement Diversity & Inclusion Programs to Address Workforce Shortage and Make More Money Too


Play Episode Listen Later Jun 15, 2018 18:26


Data breaches keep on happening, and information security professionals are in demand more than ever. Did you know that there is currently a shortage of one million infosec pros worldwide? But the solution to this "man-power" shortage may be right in front of and around us. Many believe we can find more qualified workers by investing in Diversity & Inclusion programs. According to Angela Knox, Engineering Director at Cloudmark, "We're missing out on 50% of the population if we don't let them [women] know about the job." For skeptics: creating a more diverse workplace isn't about window dressing. It makes your company more profitable, notes Ed Lazowska, a Professor of Computer Science and Engineering at the University of Washington-Seattle. "Engineering (particularly of software) is a hugely creative endeavor. Greater diversity — more points of view — yields a better result." According to research from the Center for Talent Innovation, companies with a diverse management and workforce are 45 percent more likely to report growing market share, and 70 percent likelier to report that their companies captured a new market. I wanted to learn more about the benefits of a D&I program, and especially how to create a successful one. So I called Allison F. Avery, Senior Organizational Development & Diversity Excellence Specialist at NYU Langone Medical Center, to get the details from a pro. She is responsible for providing organizational development consultation regarding issues such as diversity and inclusion, performance improvement, workforce engagement, leadership development, and conflict resolution. In part one of our interview, Ms. Avery sets the foundation for us by describing what a successful diversity & inclusion program looks like, explaining unconscious bias, and sharing her thoughts on hiring based on one's social network.

Transcript

Cindy Ng: Allison Avery is a senior organizational development and diversity specialist at NYU's medical center.
She is responsible for providing organizational development consultation regarding issues such as diversity and inclusion, workforce engagement, leadership development and conflict resolution. In our interview, Allison demystifies common misperceptions about diversity and inclusion, offers a successful framework and methodology to implement D&I and, yes, confirms that diverse organizations do make more money. Can you define for us what diversity and inclusion means?

Allison Avery: The way that I like to define, or the way that I'm going to talk about, diversity is really referring to the richness of human differences. And so, that can mean anything from socio-economic status, race, ethnicity, language, nationality, sexual orientation, religion, all the way to learning styles and life experiences. I know, for the context of this conversation, we're really going to target specifically race, ethnicity and gender, because those are really who's primarily underrepresented in the tech field. We're going to talk a lot about that, but diversity in and of itself primarily just means difference, and it's sort of a naturally-occurring phenomenon. And then, inclusion is the way in which we engage that diversity. So, it refers to active, intentional and ongoing engagement with that diversity. It's the way that we foster belonging, that we value and encourage engagement, and that we really connect individuals throughout, whether it's an organization or an institution, to leverage their excellence, leverage their skills and skill sets, and promote them to grow into the climate and the culture that we're trying to cultivate within an organization, within an institution and even within an industry. So, it's the way that we intentionally, ongoingly and actively engage the diversity at hand.

Cindy Ng: Describe for us the kinds of diversity and inclusion programs you've implemented and what has been successful.
Allison Avery: There are a couple of different arenas that I think diversity and inclusion programming gets parsed into. One is primarily along the lines of recruitment and retention. Now, in medical school, we tend to not have any general issue with retention, but that tends to be in the domain of professional development. And that's pervasive throughout any industry, and I see that within a lot of the articles I was reading about the tech industry. There are some initiatives going on through Google and Twitter of trying to recruit individuals from different industries to companies, and that's just a pervasive element. So, we do a lot of recruiting here at the medical school for students from the educational pipeline. We go to undergraduate institutions, and we have summer programs for students that are rising juniors and seniors to come and spend the summer doing basic science research, primarily targeted for Blacks and Latinos, because those targeted minority groups are underrepresented in medicine. Only about 6% of medical school matriculants are Black-identified and about 4% are Hispanic-identified in the country. About 56% of matriculants in medical school in 2014 were white-identified. So, there's a huge underrepresentation, and as we see the shifting demographics of the country over time, minorities will become the majority by 2050. That's kind of the projected year. So, we see a need for greater representation in medical school, and we do a lot of recruitment effort. NYU just matriculated its highest composition of diversity this past year or so. The entering class of 2014 was the most diverse ever, and so our efforts were quite rewarded in having a cultivated class of compositional diversity. That was a very successful effort, and it comes from everything from going to schools to having a very diverse group of individuals on the screening committee and on the interview committee.
We have multiple mini interviews, where individuals do not review the full record. When students come into interviews, we try to eliminate aspects of bias. So, there are trainings on unconscious bias for all the interviewers and for all the screeners. That's another effort that we do. So, recruitment is a really big, targeted effort with regard to any industry for trying to attract and recruit underrepresented minorities. Another area is educational enrichment. And so, there's a lot of effort to look at how we ameliorate and reduce health and healthcare disparities. That's basically looking at cultural competency training for all physicians, because rendering appropriate healthcare across different cultural lines is something that every physician needs to have the capacity for, especially when we're looking at the diversity in the pluralistic community of the patient population that all physicians need to have the capacity to serve. And I think that's also generalizable to the tech industry when you look at the shifting demographics of the country's users. There is a hugely pluralistic nation that we have, and people have different needs, and there are very different markets that can be targeted and marketed toward. Having different educational initiatives, looking at how we reduce health and healthcare disparities, and training students has been a very big initiative within the curriculum. So, how do we basically educate our entire population of students to be able to render care for a huge and diverse patient population? They need to know about things like health disparities and social determinants of health. They need to know about how bias might impact their decision-making on treating different types of patients of certain races, of certain genders, of certain sexual orientations.
And they need to know how, generally, socially disadvantaged groups tend to receive worse quality healthcare.

Cindy Ng: Earlier you mentioned unconscious bias. Can you define that term for us?

Allison Avery: Unconscious is pretty much anything that's outside of our conscious awareness, which is primarily the main way that we operate; it's estimated that about 90% of our mental processes and the way that we operate is outside of consciousness. So, the unconscious is pretty much any mental process that is inaccessible to consciousness, but it influences our judgments, our feelings and our behavior. It's pretty pervasive. And then, bias is really a neutral term. It gets a kind of negative rap, but it's something that we cannot do without, nor would we want to. Bias is just a tendency or an inclination, but it's one that prevents an unprejudiced consideration of a question. So, it has this sort of stigma to it, but bias is really a neutral thing. The way that we understand unconscious bias, and the way that we're talking about it, is in this arena of prejudice, social stereotypes and attitudes that we form about certain groups of people without our intention or our conscious awareness. And that's what we really mean when we're talking specifically about unconscious bias as it relates to certain groups of people and how it influences the way that we engage with people. That's how I'm using the term as it relates to D&I work in our workspace: how it might prevent the hiring of a person, and how it might impede diversity and inclusion efforts. It has been noted as one of the main contributing barriers to compositional diversity efforts. In hiring practices, in the recruitment phase, in the interview phase, in trying to have a very diverse workplace, unconscious bias has been targeted and denoted as a huge impediment to having the diversity that we would like to consciously see.
And I think it's really important to make the distinction: the distinction between what we consciously believe and how we actually behave. We might have these very consciously-held egalitarian views, which I believe we do if you look at social attitudes in this country over the past 40 years and how drastically they've grown, changed and evolved. It's more stigmatized now to be a racist in this country than probably almost anything else. However, when you look at some of our unconscious attitudes and some of our outcomes (some of our health outcomes, some of our housing outcomes, some of our actual behaviors), a lot has remained unchanged. So, like you were saying, in the tech industry there have been a lot of things that have remained unchanged for the past 15 years, or 10 years, or two years. It's that dichotomy between the way that we consciously believe and, sometimes, the way that our unconscious behaviors manifest and get played out. Bridging those two is the space of bias: trying to bring those two things a little bit more into alignment, a little bit closer together. So, we have pretty egalitarian conscious attitudes, but the outcomes don't really reflect that when you look at the composition of our workplaces, some of our health outcomes and the way that we hope to think of ourselves. You know, look at the composition of our prison system; look at the representation of women in the tech field.

Cindy Ng: It's popular in the tech field to hire based on one's social network. What's your opinion on that?

Allison Avery: I think on face value and on first flush, that seems like a good idea, but I don't think we've tracked the full ramifications of what that means.
And I think that, on first pass, it seems like a very respectable way to go about doing business, and on one level it is. But we need to do a deeper dive on what we mean by things like: how do we define culture fit? How do we define somebody who is aligned with our organization and the diversity that we want? And what are the actual ramifications of just pulling from our social networks? When we look at how people's social networks get created and cultivated, like you said, people tend to migrate toward people that are like them. And that tends to fall within similar social identity categories, socio-economic lines and class status, correct? So, on one level, if you don't dig any deeper, it seems like a very good idea. Somebody suggests a friend, that person comes into the organization, they probably do fit in very well and get along very well, and then you go forward without thinking much further. But then, when you look at the compositional diversity of who you attract, everybody seems to come from similar schools, so you're not getting a diversity of educational experiences, and from similar classes and, potentially, demographics. So, you might have a very similar composition of social identity categories. I was just reading an article called "What it's actually like to be a black employee in a tech company," and it cited some really interesting statistics that I think are very worthwhile to go over, because the Public Religion Research Institute has some statistics related to people's social networks. You know, white Americans have 91 times as many white friends as black friends. I think that's really important, because three-quarters of whites have entirely white social networks without any minority presence.
So, if that's where you're pulling from, what are the odds that you're going to have a large minority presence? Just from a statistical representation, very small, correct? But unless you know that, and unless you're thinking in those terms, it just seems like a very good idea on first pass. That's why a deeper dive is so necessary, and that's why I don't think there's an intentional evilness to people who are anti-diversity. They just don't tend to know, nor do they tend to dig, and there's this naiveté of, "Well, invite individuals from their social networks and things should be fine." But people think that their social networks are much more diverse than they actually are, and that's just not true. Once you know that, once you know, "Okay, if this is our structure, employees are actively encouraged to suggest friends or former colleagues," and you also know that your company is comprised of 57% of this, and that those individuals are going to be 91 times more likely to "blah, blah, blah," well, then you're going to rethink your methodology. But generally people don't have that type of statistical awareness or insight into how these social networks are formed or structured, so they don't understand all the nuance related to recruitment and why it's so difficult to achieve compositional diversity.

Cindy Ng: How would you reshape hiring practices?

Allison Avery: A couple of different things. One, I would have pervasive unconscious bias training, completely required for all hiring managers. That's just a given and an automatic. Number two, there are some things right at the outset that take people out of the running right away, like affiliate universities: pooling from similar universities that have a lower representation of underrepresented minorities.
So, you make partnerships with schools that serve very high proportions of either women or minorities, and those tend not to be the Berkeleys and Stanfords of the world. You can look at the compositional diversity of different institutions. I know at NYU we tend to partner with certain specific institutions that have very strong STEM programs, so they're doing a lot of rigorous scientific work with very high-quality students, and we make very strong partnerships with them so that we also know the quality and caliber of the student. So, as a hiring manager, you can make partnerships with a nonprofit or an undergraduate institution that serves a high proportion of minorities, but that you are also vetting with regard to quality, or whose quality you're investing in. You can help mentor them in the creation or co-creation of their program and have some sort of influence. That's another way: you develop these kinds of pipeline programs, and then you reward those elements. Having internships is another element, not just pooling people from your social network. Also, the more diverse your hiring system is, the better: we know that whatever kind of interview process you have, if you put five people in a room and that's the interview team, they are going to replicate themselves in who they hire. Whomever you want hired is how you compose your hiring team. If you would like a very diverse team hired, then you need a very diverse hiring team. The worst thing you can do is have just one hiring manager, because you're most likely going to have that person replicated in whomever they hire. You want as many people to weigh in as possible, and you want that team to be as diverse as possible. So, that's another recommendation that we make.
Those would be the first-pass things I would recommend, very quickly. Then there's taking words out of the job description. We know there's a lot of gender priming in job descriptions, things like "strong leader" and "aggressive manager," which are very gender-oriented. Or when people assume a lot of things about candidates at the very outset, like whether they're interested in relocating or not, or ask inappropriate questions that they wouldn't ask a man versus a woman; you need to be conscientious that none of that is present within any part of the onboarding. So that's looking at the job descriptions and really making sure they aren't gender- or racially-leaning. And making sure these postings are advertised and reach individuals in different pockets, so utilizing and leveraging people in-house, too. In reading some of these articles, there are a lot of informal and even formal professional networks within an organization or institution. We have the Black and Latino Student Association, and they belong to a professional association called the Student National Medical Association, which is primarily for black medical students. Then there's the NHMA, the National Hispanic Medical Association, which serves Hispanic medical affiliates. So there are a lot of affiliate groups, formal and informal. In one of the articles I was reading there was one at Twitter, called Blackbird, Twitter's internal group for black employees. So: leverage the internal group that serves, or represents the interests of, the underrepresented or underserved minorities you are targeting.
And be really intentional about saying that this is a priority, why it is, and why we're valuing a certain demographic that's extraordinarily underrepresented in this organization. Also, look at pay differentials, something that is very pervasive. Look at how people are staffed, at upper-level management and its composition, and at how the color changes as you go up the rungs. The American Institute for Economic Research has noted that employees of color are statistically paid less by a considerable margin, and that's substantiated by a lot of economic research examining pay differentials and trying to reconcile them, looking at how people are promoted and where they're staffed. Are the majority of black employees at the janitorial and security contractor level, or are they in middle management? How are people being staffed throughout the organization, and where, and what does that look like? You can be more intentional about that, and it's important.

Hire Power Radio
Jordan Ritter!: Hire For The 3 C’s In Your Company! Culture, Capacity For Mastery, And Craft

Hire Power Radio

Play Episode Listen Later Mar 8, 2018 30:58


A 5x tech entrepreneur shares his unique approach to hiring great people for his companies, using the 3 C's: Culture, Capacity for Mastery, and Craft. Utilizing this interview methodology can take your company from good to great! Episode highlights: what the 3 C's are and how to apply this methodology to hiring at your company. Culture: values, the evolution of culture, ethos (montrose), and traits. Capacity: critical thinking and problem-solving skills. Craft: skills (though candidates are not defined by them). How to apply this methodology: the narrative arc interview and the white space interview. Jordan Ritter is an accomplished entrepreneur and technologist, having co-founded several companies including music company Napster, messaging security platform Cloudmark, labor-as-a-service platform CloudCrowd and, most recently, personal digital search engine Atlas Informatics. He also served as the CTO of entertainment company Columbia Music Entertainment, as well as fan interaction platform Zivity. Jordan is also a regular open-source contributor, having authored free software commonly included in modern Linux distributions as well as Windows software licensed by Microsoft. Several of his projects have been featured in well-known publications and books, and incorporated into university-level curricula. His works have won numerous nominations and awards spanning Comdex, DEMO, SIIA, PC World, PC Magazine, and WIRED. Jordan speaks at technology conferences around the world on topics ranging across entrepreneurism, startup culture, AI, computer and messaging security, and the music industry. Check out the blog on the Stride Search, Inc. site for the supplementary "show recap" article with detailed takeaways and insights from the interview.

Internet History Podcast
139. The Napster Story with Jordan Ritter

Internet History Podcast

Play Episode Listen Later Apr 16, 2017 81:44


If you know the Napster story at all, then you know about the Shawn(Sean)s: Shawn Fanning and Sean Parker. But in my opinion, and in the opinion of a lot of other people, a name you should be just as familiar with is Jordan Ritter. Napster was an incredible phenomenon, reaching tens of millions of users at its height, and though Jordan Ritter didn't invent Napster, he was very much responsible for scaling it and turning it into the phenomenon it became. In today's episode, Jordan recounts the entire Napster story, from its gestation in the w00w00 hacker collective (people talk a lot about the PayPal mafia, but an argument can be made for a w00w00 mafia) all the way through Napster's legal descent into oblivion. You might know Jordan as the cofounder of Cloudmark and Servio, and at the end of the episode, he talks about the big problems he's working to solve today. See acast.com/privacy for privacy and opt-out information.

Black Hat Briefings, Las Vegas 2006 [Video] Presentations from the security conference

Social networking sites such as MySpace have recently been the target of XSS attacks, most notably the "samy is my hero" incident in late 2005. XSS affects a wide variety of sites and back-end web technologies, but there are perhaps no more interesting targets than massively popular sites with viral user-acquisition growth curves, which allow for exponential XSS worm propagation, as seen in samy's hack. Combine the reach of a wide and ever-widening audience with browser exploits (targeting the most common browsers with their broad "normal person" user base) that can affect more than just the browser, as we saw with WMF; an insertion and infection method based on transparent XSS; and payloads which can themselves round-trip the exploit code back into the same or other vulnerable sites, and you have a self-healing distributed worm propagation platform with extremely accelerated infection vectors. We investigate the possibilities using MySpace and other popular sites as case studies, along with the potential posed by both WMF and the Metasploit Project's recently released browser fuzzing tool, Hamachi, to own a site with self-replicating XSS containing a malicious browser-exploiting payload which itself will modify the browser to auto-exploit other sites, all transparently to the user. On top of this one could layer any additional functionality, some loud, some quiet, such as DDoS bots, keyloggers, other viral payloads, and more. Dan Moniz is an independent security consultant and a member of The Shmoo Group, a world-recognized affiliation of information security professionals. Mr. Moniz has spoken at a number of conferences, including Defcon, ShmooCon, and The Intelligence Summit, in addition to private audiences at Fortune 50 companies and universities. In 2003 he testified in front of the California State Senate in a hearing on the issues of RFID technology, privacy, and state legislation.
In the past, he has held positions with a variety of high-tech companies and organizations, including Alexa Internet (an Amazon.com company), the Electronic Frontier Foundation, Cloudmark, OpenCola, and Viasec. HD Moore is Director of Security Research at BreakingPoint Systems, where he focuses on the security testing features of the BreakingPoint product line. Prior to joining BreakingPoint, HD co-founded Digital Defense, a managed security services firm, where he developed the vulnerability assessment platform and led the security research team. HD is the founder of the Metasploit Project and one of the core developers of the Metasploit Framework, the leading open-source exploit development platform. In his spare time, HD searches for new vulnerabilities, develops security tools, and contributes to open-source security projects.
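The stored-XSS mechanism the talk describes hinges on one thing: user-supplied markup being echoed back into pages unescaped, so an injected script runs in every viewer's browser. A minimal defensive sketch of the fix, in Python (the function and markup here are hypothetical illustrations, not code from the talk):

```python
import html

def render_comment(comment: str) -> str:
    """Escape user-supplied text before echoing it into a page, so an
    injected <script> payload (the core of a samy-style stored-XSS worm)
    is rendered as inert text instead of being executed by the browser."""
    return '<div class="comment">' + html.escape(comment) + "</div>"

# A worm-style payload that would self-replicate if echoed verbatim;
# after escaping, the angle brackets become &lt; and &gt;, so the
# browser displays the payload as text rather than running it.
payload = "<script>/* copy myself into the viewer profile */</script>"
print(render_comment(payload))
```

The same principle (encode on output, for the specific context the data lands in) is what defeats the "round-trip" propagation described above, since the payload can no longer re-inject itself into the next page.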

Black Hat Briefings, Las Vegas 2006 [Audio] Presentations from the security conference

Social networking sites such as MySpace have recently been the target of XSS attacks, most notably the "samy is my hero" incident in late 2005. XSS affects a wide variety of sites and back-end web technologies, but there are perhaps no more interesting targets than massively popular sites with viral user-acquisition growth curves, which allow for exponential XSS worm propagation, as seen in samy's hack. Combine the reach of a wide and ever-widening audience with browser exploits (targeting the most common browsers with their broad "normal person" user base) that can affect more than just the browser, as we saw with WMF; an insertion and infection method based on transparent XSS; and payloads which can themselves round-trip the exploit code back into the same or other vulnerable sites, and you have a self-healing distributed worm propagation platform with extremely accelerated infection vectors. We investigate the possibilities using MySpace and other popular sites as case studies, along with the potential posed by both WMF and the Metasploit Project's recently released browser fuzzing tool, Hamachi, to own a site with self-replicating XSS containing a malicious browser-exploiting payload which itself will modify the browser to auto-exploit other sites, all transparently to the user. On top of this one could layer any additional functionality, some loud, some quiet, such as DDoS bots, keyloggers, other viral payloads, and more. Dan Moniz is an independent security consultant and a member of The Shmoo Group, a world-recognized affiliation of information security professionals. Mr. Moniz has spoken at a number of conferences, including Defcon, ShmooCon, and The Intelligence Summit, in addition to private audiences at Fortune 50 companies and universities. In 2003 he testified in front of the California State Senate in a hearing on the issues of RFID technology, privacy, and state legislation.
In the past, he has held positions with a variety of high-tech companies and organizations, including Alexa Internet (an Amazon.com company), the Electronic Frontier Foundation, Cloudmark, OpenCola, and Viasec. HD Moore is Director of Security Research at BreakingPoint Systems, where he focuses on the security testing features of the BreakingPoint product line. Prior to joining BreakingPoint, HD co-founded Digital Defense, a managed security services firm, where he developed the vulnerability assessment platform and led the security research team. HD is the founder of the Metasploit Project and one of the core developers of the Metasploit Framework, the leading open-source exploit development platform. In his spare time, HD searches for new vulnerabilities, develops security tools, and contributes to open-source security projects.