On the 103rd episode of the What is a Good Life? podcast, I am delighted to introduce our guest, Nathan (Nate) Kinch. Nate is a sociotechnology ethicist, organisational designer, and trust researcher. His work centres on the question: how might we best design trustworthy organisations, technologies, and systems that support a healthy and dignified life for all, within planetary boundaries? To do this, he draws on many branches of philosophy, systems science, the cognitive sciences, and related disciplines.

He is a Fellow of The Royal Society of the Arts, co-lead of the Responsible AI Network (RAIN), and Lead of Ethics in Action. He is also the Ethicist in Residence at CoLabs and guest lectures at several universities. Previously, Nate was an angel and impact investor and the founding CEO of a venture capital-backed startup. In everyday life, he's an urban rewilder, community convener, and 'inner developer,' bringing regenerative thinking, practice, and culture to his corner of Melbourne.

In this wonderful conversation, Nathan shares his exploration of expressing and feeling into the whole of himself. We discuss realisations he made through his psycho-physical inquiries, from suppressing emotions to crumbling worldviews, to the relief and healing of crying, and to expressing and experiencing more joy and gratitude in life.

For anyone on their own path of self-inquiry, this conversation contains many universal themes that I sense we all experience, which may make you feel more at home on your own path. Nate's beautiful anecdotes, transparency, and reflections make this as grounded and heartfelt as it is insightful.

For further content and information, check out the following:
- Nate's company: https://www.trustworthyby.design/
- Nate's LinkedIn: https://www.linkedin.com/in/nathankinch/
- The What is a Good Life? podcast's YouTube page: https://www.youtube.com/@whatisagoodlife/videos
- My newsletter: https://www.whatisagood.life/
- My LinkedIn: https://www.linkedin.com/in/mark-mccartney-14b0161b4/

Contact me at mark@whatisagood.life if you'd like to explore your own lines of self-inquiry through 1-on-1 coaching, take part in my Silent Conversation group courses, discuss experiences I create to stimulate greater trust, communication, and connection amongst your leadership teams, or if you simply want to get in touch.

00:00 Introduction
03:18 What is the whole of me?
09:40 Allowing us to see more of ourselves
11:48 Redeploying the soldier archetype
20:33 Realisations from a psycho-physical inquiry
29:33 Learning how to feel again
33:18 Realisations around crying and dancing
39:18 The containment of life
41:48 When our worldviews crumble
47:18 Acknowledging the whole of us
55:18 What is a good life for Nate?
Today's guest is André Barros. André is one of the partners at On the Nose, a communications agency that develops brand experiences, content creation, and multi-platform distribution. He has held executive roles, was the creator and founder of Desimpedidos, the largest football channel in Latin America, and was one of the founders of NWB, a network of sports channels now run by the SBF group. We'll talk about the future of sports content consumption, how the evolution of technology and social media will shape the role of influencers, influence vs. manipulation, brand experience, metrics, strategic collabs, and much more. Well worth a listen!
Hillbilly Horror Stories does a collaboration with Justin Rimmel of Mysterious Circumstances & Tim Dennis of Darkness Radio. You guys will love this one!
We would LOVE to hear what you think. Please drop us a line. This week we take a break from the interviews and talk about artists who have teamed up with other artists to make some music. We also give some brief details about them and play a snippet of a track they were involved with. Support the Show.
Pedcast is a fortnightly discussion roundtable hosted by SneakersBR, the first outlet in the world to cover sneaker culture in Portuguese, active since 2007.
A summer of sport and anime - you could call it the crossover of the year. In this episode, Gerard, Jocelyn and Kevin discuss their favourite crossovers and collabs in anime… and a little bit beyond.
This week, Ginny, Nicola and Tom shoot the electric breeze on a variety of subjects, including brand collabs, the big back Scenic and the Volvo EX30's troublesome introduction. The team also answer listeners' questions to help them find the perfect electric car. This podcast is also available on the Electrifying.com YouTube channel (https://www.youtube.com/channel/UC29JbxEwr7q5bP7ANJMSqAg) where you can leave comments and questions for the team. We can also be reached at info@electrifying.com. Hosted on Acast. See acast.com/privacy for more information.
44th episode of the Straight Faded Podcast. In this episode we have Ozzy from JefmoCBD. He talks about the collabs with his CBD brand and expanding into other cities and states. JefMoCBD.com: use code "StraightFaded1" for 20% off on all products. Make sure to follow us on IG: @lilfaded310 @Straightfadedpodcast @JefmoCBD #getfaded #TEAMNOZLEEP #TNZ #FADEDFRIDAY #STRAIGHTFADEDPODCAST #JefmoCBD #RadioVission http://www.lilfaded.com/podcast.html Equipment Set Up: @PHANTOMLOKZ310 @ta2galie @Radio_Vission Apple Music: #StraightFadedPodcastPlaylist https://music.apple.com/us/playlist/s... Lil Faded Merch, Music etc... http://www.lilfaded.com/
This Friday we're doing a special crossover event in SF with SemiAnalysis (previous guest!), and we will do a live podcast on site. RSVP here. Also join us on June 25-27 for the biggest AI Engineer conference of the year!

Replicate is one of the most popular AI inference providers, reporting over 2 million users as of their $40m Series B with a16z. But how did they get there?

The Definitive Replicate Story (warts and all)

Their overnight success took 5 years of building, and it all started with arXiv Vanity, a 2017 vacation project that scrapes arXiv PDFs and re-renders them into semantic web pages that reflow nicely with better typography and whitespace. From there, Ben and Andreas' idea was to build tools to make ML research more robust and reproducible by making it easy to share code artefacts alongside papers. They had previously created Fig, which made it easy to spin up dev environments; it was eventually acquired by Docker and turned into `docker-compose`, the industry-standard way to define services from containerized applications.

2019: Cog

The first iteration of Replicate was a Fig equivalent for ML workloads, which they called Cog; it made it easy for researchers to package all their work and share it with peers for review and reproducibility. But they found that researchers were terrible users: they'd do all this work for a paper, publish it, and then never return to it again.

“We talked to a bunch of researchers and they really wanted that.... But how the hell is this a business, you know, like how are we even going to make any money out of this? …So we went and talked to a bunch of companies trying to sell them something which didn't exist. So we're like, hey, do you want a way to share research inside your company so that other researchers or say like the product manager can test out the machine learning model? They're like, maybe. Do you want like a deployment platform for deploying models? Do you want a central place for versioning models?
We were trying to think of lots of different products we could sell that were related to this thing… So we then got halfway through our YC batch. We hadn't built a product. We had no users. We had no idea what our business was going to be because we couldn't get anybody to like buy something which didn't exist. And actually there was quite a way through our, I think it was like two thirds the way through our YC batch or something. And we're like, okay, well we're kind of screwed now because we don't have anything to show at demo day.”

The team graduated YCombinator with no customers, no product and nothing to demo - which was fine because demo day got canceled as the YC W'20 class graduated right into the pandemic. The team spent the next year exploring and building Covid tools.

2021: CLIP + GAN = PixRay

By 2021, OpenAI had released CLIP. Overnight, dozens of Discord servers got spun up to hack on CLIP + GANs. Unlike academic researchers, this community was constantly releasing new checkpoints and builds of models. PixRay was one of the first models being built on Replicate, and it quickly started taking over the community. Chris Dixon has a famous 2010 post titled “The next big thing will start out looking like a toy”; image generation would have definitely felt like a toy in 2021, but it gave Replicate its initial boost.

2022: Stable Diffusion

In August 2022 Stable Diffusion came out, and all the work they had been doing to build this infrastructure for CLIP / GAN models became the best way for people to share their Stable Diffusion fine-tunes:

And like the first week we saw people making animation models out of it. We saw people make game texture models that use circular convolutions to make repeatable textures. We saw a few weeks later, people were fine tuning it so you could put your face in these models and all of these other ways. […] So tons of product builders wanted to build stuff with it.
And we were just sitting in there in the middle, as the interface layer between all these people who wanted to build, and all these machine learning experts who were building cool models. And that's really where it took off. Incredible supply, incredible demand, and we were just in the middle.

(Stable Diffusion also spawned Latent Space as a newsletter.)

The landing page paved the cowpath for the intense interest in diffusion model APIs.

2023: Llama & other multimodal LLMs

By 2023, Replicate's growing visibility in the Stable Diffusion indie hacker community came from top AI hackers like Pieter Levels and Danny Postma, each making millions off their AI apps. Meta then released LLaMA 1 and 2 (our coverage of it), greatly pushing forward the SOTA open source model landscape. Demand for text LLMs and other modalities rose, and Replicate broadened its focus accordingly, culminating in an $18m Series A and $40m Series B from a16z (at a $350m valuation).

Building standards for the AI world

Now that the industry is evolving from toys to enterprise use cases, all these companies are working to set standards for their own space. We cover this at ~45 mins in the podcast. Some examples:

* LangChain has been trying to establish “chain” as the standard mental model when putting multiple prompts and models together, and the “LangChain Expression Language” to go with it. (Our episode with Harrison)
* LlamaHub for packaging RAG utilities. (Our episode with Jerry)
* Ollama's Modelfile to define runtimes for different model architectures. These are usually targeted at local inference.
* Cog (by Replicate) to create environments to which you can easily attach CUDA devices and make it easy to spin up inference on remote servers.
* GGUF as the filetype for ggml-based executors.

None of them have really broken out yet, but this is going to become a fiercer competition as the market matures.
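To make the Cog item above concrete, here is a minimal sketch of the kind of `cog.yaml` a model author writes; the specific package versions and file names are illustrative, not from the episode:

```yaml
# cog.yaml: pins the environment the model runs in
build:
  gpu: true
  python_version: "3.11"
  python_packages:
    - "torch==2.1.0"
# points Cog at the class that defines the model's API
predict: "predict.py:Predictor"
```

The referenced `predict.py` defines a `Predictor` class subclassing `cog.BasePredictor`, with a `setup()` that loads weights once and a `predict()` whose typed arguments become the model's inputs; `cog build` then bakes everything into a container with a matching CUDA base image, which is what makes these models shareable and runnable on remote servers.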
Full Video Podcast

As a reminder, all Latent Space pods now come in full video on our YouTube, with bonus content that we cut for time!

Show Notes
* Ben Firshman
* Replicate
* Free $10 credit for Latent Space readers
* Andreas Jansson (Ben's co-founder)
* Charlie Holtz (Replicate's Hacker in Residence)
* Fig (now Docker Compose)
* Command Line Interface Guidelines (clig)
* Apple Human Interface Guidelines
* arXiv Vanity
* Open Interpreter
* PixRay
* SF Compute
* Big Sleep by Advadnoun
* VQGAN-CLIP by Rivers Have Wings

Timestamps
* [00:00:00] Introductions
* [00:01:17] Low latency is all you need
* [00:04:08] Evolution of CLIs
* [00:05:59] How building ArxivVanity led to Replicate
* [00:11:37] Making ML research replicable with containers
* [00:17:22] Doing YC in 2020 and pivoting to tools for COVID
* [00:20:22] Launching the first version of Replicate
* [00:25:51] Embracing the generative image community
* [00:28:04] Getting reverse engineered into an API product
* [00:31:25] Growing to 2 million users
* [00:34:29] Indie vs Enterprise customers
* [00:37:09] How Unsplash uses Replicate
* [00:38:29] Learnings from Docker that went into Cog
* [00:45:25] Creating AI standards
* [00:50:05] Replicate's compute availability
* [00:53:55] Fixing GPU waste
* [01:00:39] What's open source AI?
* [01:04:46] Building for AI engineers
* [01:06:41] Hiring at Replicate

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.

Swyx [00:00:14]: Hey, and today we have Ben Firshman in the studio. Welcome Ben.

Ben [00:00:18]: Hey, good to be here.

Swyx [00:00:19]: Ben, you're a co-founder and CEO of Replicate. Before that, you were most notably founder of Fig, which became Docker Compose.
You also did a couple of other things before that, but that's what a lot of people know you for. What should people know about you that, you know, outside of your, your sort of LinkedIn profile?

Ben [00:00:35]: Yeah. Good question. I think I'm a builder and tinkerer, like in a very broad sense. And I love using my hands to make things. So like I work on, you know, things may be a bit closer to tech, like electronics. I also like build things out of wood and I like fix cars and I fix my bike and build bicycles and all this kind of stuff. And there's so much, I think I've learned from transferable skills, from just like working in the real world to building things, building things in software. And you know, it's so much about being a builder, both in real life and, and in software that crosses over.

Swyx [00:01:11]: Is there a real world analogy that you use often when you're thinking about like a code architecture or problem?

Ben [00:01:17]: I like to build software tools as if they were something real. So I wrote this thing called the command line interface guidelines, which was a bit like sort of the Mac human interface guidelines, but for command line interfaces. I did it with the guy I created Docker Compose with and a few other people. And I think something in there, I think I described that your command line interface should feel like a big iron machine where you pull a lever and it goes clunk and like things should respond within like 50 milliseconds as if it was like a real life thing. And like another analogy here is like in the real life, you know, when you press a button on an electronic device and it's like a soft switch and you press it and nothing happens and there's no physical feedback of anything happening, then like half a second later, something happens.
Like that's how a lot of software feels, but instead like software should feel more like something that's real where you touch, you pull a physical lever and the physical lever moves, you know, and I've taken that lesson of kind of human interface to, to software a ton. You know, it's all about kind of low latency of feeling, things feeling really solid and robust, both the command lines and, and user interfaces as well.

Swyx [00:02:22]: And how did you operationalize that for Fig or Docker?

Ben [00:02:27]: A lot of it's just low latency. Actually, we didn't do it very well for Fig in the first place. We used Python, which was a big mistake where Python's really hard to get booting up fast because you have to load up the whole Python runtime before it can run anything. Go is much better at this where like Go just instantly starts.

Swyx [00:02:45]: You have to be under 500 milliseconds to start up?

Ben [00:02:48]: Yeah, effectively. I mean, I mean, you know, perception of human things being immediate is, you know, something like a hundred milliseconds. So anything like that is, is yeah, good enough.

Swyx [00:02:57]: Yeah. Also, I should mention, since we're talking about your side projects, well, one thing is I am maybe one of a few fellow people who have actually written something about CLI design principles because I was in charge of the Netlify CLI back in the day and had many thoughts. One of my fun thoughts, I'll just share it in case you have thoughts, is I think CLIs are effectively starting points for scripts that are then run. And the moment one of the script's preconditions are not fulfilled, typically they end. So the CLI developer will just exit the program. And the way that I designed, I really wanted to create the Netlify dev workflow was for it to be kind of a state machine that would resolve itself.
If it detected a precondition wasn't fulfilled, it would actually delegate to a subprogram that would then fulfill that precondition, asking for more info or waiting until a condition is fulfilled. Then it would go back to the original flow and continue that. I don't know if that was ever tried or is there a more formal definition of it? Because I just came up with it randomly. But it felt like the beginnings of AI in the sense that when you run a CLI command, you have an intent to do something and you may not have given the CLI all the things that it needs to do, to execute that intent. So that was my two cents.

Ben [00:04:08]: Yeah, that reminds me of a thing we sort of thought about when writing the CLI guidelines, where CLIs were designed in a world where the CLI was really a programming environment and it's primarily designed for machines to use all of these commands and scripts. Whereas over time, the CLI has evolved to humans. It was back in a world where the primary way of using computers was writing shell scripts effectively. We've transitioned to a world where actually humans are using CLI programs much more than they used to. And the current sort of best practices about how Unix was designed, there's lots of design documents about Unix from the 70s and 80s, where they say things like, command line commands should not output anything on success. It should be completely silent, which makes sense if you're using it in a shell script. But if a user is using that, it just looks like it's broken. If you type copy and it just doesn't say anything, you assume that it didn't work as a new user.
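As a brief aside, the precondition-resolving state machine Swyx describes can be sketched in a few lines of Python; everything here (the `deploy` intent, the config and login steps) is invented for illustration, not Netlify's actual implementation:

```python
# Each precondition pairs a check with a "fixer" sub-step that can
# resolve it (prompt for input, run a login flow, etc.) instead of
# the CLI just exiting with an error.

def load_config(state):
    # stand-in for reading a config file or asking the user for one
    state["config"] = {"site": "demo"}

def log_in(state):
    # stand-in for an interactive login flow
    state["token"] = "fake-token"

PRECONDITIONS = [
    ("config loaded", lambda s: "config" in s, load_config),
    ("logged in",     lambda s: "token" in s,  log_in),
]

def run(state, intent):
    # The state machine: resolve any unmet preconditions, then
    # fall through and execute the user's original intent.
    for name, check, fix in PRECONDITIONS:
        if not check(state):
            print(f"resolving precondition: {name}")
            fix(state)  # delegate to the sub-step, then continue
    return f"{intent} site {state['config']['site']!r}"

print(run({}, "deploy"))
```

Run with an empty state, this resolves both preconditions before carrying out the `deploy` intent, which is the "go back to the original flow and continue" behaviour described above.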
I think what's really interesting about the CLI is that it's actually a really good, to your point, it's a really good user interface where it can be like a conversation, where it feels like, instead of just like you telling the computer to do this thing and it either silently succeeding or saying, no, you failed, it can guide you in the right direction and tell you what your intent might be, and that kind of thing in a way that's actually, it's almost more natural to a CLI than it is in a graphical user interface because it feels like this back and forth with the computer, almost funnily like a language model. So I think there's some interesting intersection of CLIs and language models actually being very sort of closely related and a good fit for each other.

Swyx [00:05:59]: Yeah, I'll say one of the surprises from last year, I worked on a coding agent, but I think the most successful coding agent of my cohort was Open Interpreter, which was a CLI implementation. And I have chronically, even as a CLI person, I have chronically underestimated the CLI as a useful interface. You also developed arXiv Vanity, which you recently retired after a glorious seven years.

Ben [00:06:22]: Something like that.

Swyx [00:06:23]: Which is nice, I guess, HTML PDFs.

Ben [00:06:27]: Yeah, that was actually the start of where Replicate came from. Okay, we can tell that story. So when I quit Docker, I got really interested in science infrastructure, just as like a problem area, because it is like science has created so much progress in the world. The fact that we're, you know, can talk to each other on a podcast and we use computers and the fact that we're alive is probably thanks to medical research, you know. But science is just like completely archaic and broken and it's like 19th century processes that just happen to be copied to the internet rather than take into account that, you know, we can transfer information at the speed of light now.
And the whole way science is funded and all this kind of thing is all kind of very broken. And there's just so much potential for making science work better. And I realized that I wasn't a scientist and I didn't really have any time to go and get a PhD and become a researcher, but I'm a tool builder and I could make existing scientists better at their job. And if I could make like a bunch of scientists a little bit better at their job, maybe that's the kind of equivalent of being a researcher. So one particular thing I dialed in on is just how science is disseminated, in that it's all of these PDFs, quite often behind paywalls, you know, on the internet.

Swyx [00:07:34]: And that's a whole thing because it's funded by national grants, government grants, then they're put behind paywalls. Yeah, exactly.

Ben [00:07:40]: That's like a whole, yeah, I could talk for hours about that. But the particular thing we got dialed in on was, interestingly, these PDFs are also, there's a bunch of open science that happens as well. So math, physics, computer science, machine learning, notably, is all published on arXiv, which is actually a surprisingly old institution.

Swyx [00:08:00]: Some random Cornell.

Ben [00:08:01]: Yeah, it was just like somebody in Cornell who started a mailing list in the 80s. And then when the web was invented, they built a web interface around it. Like it's super old.

Swyx [00:08:11]: And it's like kind of like a user group thing, right? That's why they're all these like numbers and stuff.

Ben [00:08:15]: Yeah, exactly. Like it's a bit like something, yeah. That's where basically all of math, physics and computer science happens. But it's still PDFs published to this thing. Yeah, which is just so infuriating. The web was invented at CERN, a physics institution, to share academic writing. Like there are figure tags, there are like author tags, there are heading tags, there are cite tags.
You know, hyperlinks are effectively citations because you want to link to another academic paper. But instead, you have to like copy and paste these things and try and get around paywalls. Like it's absurd, you know. And now we have like social media and things, but still like academic papers as PDFs, you know. This is not what the web was for. So anyway, I got really frustrated with that. And I went on vacation with my old friend Andreas. So we were, we used to work together in London at somebody else's startup. And we were just on vacation in Greece for fun. And he was like trying to read a machine learning paper on his phone, you know, like we had to like zoom in and like scroll line by line on the PDF. And he was like, this is f*****g stupid. So I was like, I know, like this is something, we discovered our mutual hatred for this, you know. And we spent our vacation sitting by the pool, like making LaTeX-to-HTML converters, making the first version of arXiv Vanity. Anyway, that then became a whole thing. And the story is, we shut it down recently because it caught the eye of arXiv. They were like, oh, this is great. We just haven't had the time to work on this. And what's tragic about arXiv, it's like this project of Cornell that's like, they can barely scrounge together enough money to survive. I think it might be better funded now than it was when we were, we were collaborating with them. And compared to these like scientific journals, it's just that this is actually where the work happens. But they just have a fraction of the money that like these big scientific journals have, which is just so tragic. But anyway, they were like, yeah, this is great. We can't afford to like do it, but do you want to like as a volunteer integrate arXiv Vanity into arXiv?

Swyx [00:10:05]: Oh, you did the work.

Ben [00:10:06]: We didn't do the work. We started doing the work. We did some.
I think we worked on this for like a few months to actually get it integrated into arXiv. And then we got like distracted by Replicate. So a guy called Dan picked up the work and made it happen. Like somebody who works on one of the, the pieces of the libraries that powers arXiv Vanity. Okay.

Swyx [00:10:26]: And the relationship with arXiv Sanity?

Ben [00:10:28]: None.

Swyx [00:10:30]: Did you predate them? I actually don't know the lineage.

Ben [00:10:32]: We were after, we both were both users of arXiv Sanity, which is like a sort of arXiv...

Ben [00:10:37]: Which is Andrej's RecSys on top of arXiv.

Ben [00:10:40]: Yeah. Yeah. And we were both users of that. And I think we were trying to come up with a working name and Andreas just like cracked a joke of like, oh, let's call it arXiv Vanity. Let's make the papers look nice. Yeah. Yeah. And that was the working name and it just stuck.

Swyx [00:10:52]: Got it.

Ben [00:10:53]: Got it.

Alessio [00:10:54]: Yeah. And then from there, tell us more about why you got distracted, right? So Replicate, maybe it feels like an overnight success to a lot of people, but you've been building this since 2019. Yeah.

Ben [00:11:04]: So what prompted the start?

Alessio [00:11:05]: And we've been collaborating for even longer.

Ben [00:11:07]: So we created arXiv Vanity in 2017. So in some sense, we've been doing this almost like six, seven years now, a classic seven year.

Swyx [00:11:16]: Overnight success.

Ben [00:11:17]: Yeah. Yes. We did arXiv Vanity and then worked on a bunch of like surrounding projects. I was still like really interested in science publishing at that point. And I'm trying to remember, because I tell a lot of like the condensed story to people because I can't really tell like a seven year history. So I'm trying to figure out like the right. Oh, we got room.
The right length.

Swyx [00:11:35]: We want to nail the definitive Replicate story here.

Ben [00:11:37]: One thing that's really interesting about these machine learning papers is that these machine learning papers are published on arXiv and a lot of them are actual fundamental research, so like should be like prose describing a theory. But a lot of them are just running pieces of software that like a machine learning researcher made that did something, you know, it was like an image classification model or something. And they managed to make an image classification model that was better than the existing state of the art. And they've made an actual running piece of software that does image segmentation. And then what they had to do is they then had to take that piece of software and write it up as prose and math in a PDF. And what's frustrating about that is like if you want to... So this was like Andreas, Andreas was a machine learning engineer at Spotify. And some of his job was like he did pure research as well. Like he did a PhD and he was doing a lot of stuff internally. But part of his job was also being an engineer and taking some of these existing things that people have made and published and trying to apply them to actual problems at Spotify. And he was like, you know, you get given a paper which like describes roughly how the model works. It's probably missing lots of crucial information. There's sometimes code on GitHub. More and more there's code on GitHub. But back then it was kind of relatively rare. But it's quite often just like scrappy research code and didn't actually run. And, you know, there was maybe the weights that were on Google Drive, but they accidentally deleted the weights off Google Drive, you know, and it was like really hard to like take this stuff and actually use it for real things. We just started talking together about like his problems at Spotify and I connected this back to my work at Docker as well.
I was like, oh, this is what we created containers for. You know, we solved this problem for normal software by putting the thing inside a container so you could ship it around and it kept on running. So we were sort of hypothesizing about like, hmm, what if we put machine learning models inside containers so they could actually be shipped around and they could be defined in like some production-ready format, and other researchers could run them to generate baselines, and people who wanted to actually apply them to real problems in the world could just pick up the container and run it, you know. And normally in this part of the story I skip forward to be like, and then we created Cog, this container stuff for machine learning models, and we created Replicate, the place for people to publish these machine learning models. But there's actually like two or three years between that. The thing we then got dialed into was, Andreas was like, what if there was a CI system for machine learning? It's like one of the things he really struggled with as a researcher is generating baselines. So when like he's writing a paper, he needs to like get like five other models that are existing work and get them running.

Swyx [00:14:21]: On the same evals.

Ben [00:14:22]: Exactly, on the same evals so you can compare apples to apples because you can't trust the numbers in the paper.

Swyx [00:14:26]: Or you can be Google and just publish them anyway.

Ben [00:14:31]: So I think this was coming from the thinking of like there should be containers for machine learning, but why are people going to use that? Okay, maybe we can create a supply of containers by like creating this useful tool for researchers.
And the useful tool was like, let's get researchers to package up their models and push them to the central place where we run a standard set of benchmarks across the models so that you can trust those results and you can compare these models apples to apples and for like a researcher for Andreas, like doing a new piece of research, he could trust those numbers and he could like pull down those models, confirm it on his machine, use the standard benchmark to then measure his model and you know, all this kind of stuff. And so we started building that. That's what we applied to YC with, got into YC and we started sort of building a prototype of this. And then this is like where it all starts to fall apart. We were like, okay, that sounds great. And we talked to a bunch of researchers and they really wanted that and that sounds brilliant. That's a great way to create a supply of like models on this research platform. But how the hell is this a business, you know, like how are we even going to make any money out of this? And we're like, oh s**t, that's like the, that's the real unknown here of like what the business is. So we thought it would be a really good idea to like, okay, before we get too deep into this, let's try and like reduce the risk of this turning into a business. So let's try and like research what the business could be for this research tool effectively. So we went and talked to a bunch of companies trying to sell them something which didn't exist. So we're like, hey, do you want a way to share research inside your company so that other researchers or say like the product manager can test out the machine learning model? They're like, maybe. And we were like, do you want like a deployment platform for deploying models? Like, do you want like a central place for versioning models? Like we're trying to think of like lots of different like products we could sell that were like related to this thing. And terrible idea. 
Like we're not sales people and like people don't want to buy something that doesn't exist. I think some people can pull this off, but we were just like, you know, a bunch of product people, products and engineer people, and we just like couldn't pull this off. So we then got halfway through our YC batch. We hadn't built a product. We had no users. We had no idea what our business was going to be because we couldn't get anybody to like buy something which didn't exist. And actually there was quite a way through our, I think it was like two thirds the way through our YC batch or something. And we're like, okay, well we're kind of screwed now because we don't have anything to show at demo day. And then we then like tried to figure out, okay, what can we build in like two weeks that'll be something. So we like desperately tried to, I can't remember what we tried to build at that point. And then two weeks before demo day, I just remember it was all, we were going down to Mountain View every week for dinners and we got called on to like an all hands Zoom call, which was super weird. We're like, what's going on? And they were like, don't come to dinner tomorrow. And we realized, we kind of looked at the news and we were like, oh, there's a pandemic going on. We were like so deep in our startup. We were just like completely oblivious to what was going on around us.

Swyx [00:17:20]: Was this Jan or Feb 2020?

Ben [00:17:22]: This was March 2020. March 2020.

Swyx [00:17:25]: Yeah. Because I remember Silicon Valley at the time was early to COVID. Like they started locking down a lot faster than the rest of the US.

Ben [00:17:32]: Yeah, exactly. And I remember, yeah, soon after that, like there was the San Francisco lockdowns and then like the YC batch just like stopped. There wasn't demo day and it was in a sense a blessing for us because we just kind of

Swyx [00:17:43]: In the normal course of events, you're actually allowed to defer to a future demo day.
Yeah.
Ben [00:17:51]: So we didn't even take any deferral, because it just kind of didn't happen.
Swyx [00:17:55]: So was YC helpful?
Ben [00:17:57]: Yes. We completely screwed up the batch and that was our fault. I think the thing that YC has become incredibly valuable for us has been after YC. I think there was a reason why we didn't need to do YC to start with, because we were quite experienced. We had done some startups before. We were kind of well connected with VCs, you know. It was relatively easy to raise money because we were a known quantity. You know, if you go to a VC and be like, hey, I made this piece of-
Swyx [00:18:24]: It's Docker Compose for AI.
Ben [00:18:26]: Exactly. Yeah. And, you know, people can pattern match like that, and they can have some trust that you know what you're doing. Whereas it's much harder for people straight out of college, and that's where YC's sweet spot is: helping people straight out of college who are super promising figure out how to do that.
Swyx [00:18:40]: No credentials.
Ben [00:18:41]: Yeah, exactly. We didn't need that. But the thing that's been incredibly useful for us since YC has been, this was actually, I think, so Docker was a YC company, and Solomon, the founder of Docker, I think told me this. He was like, a lot of people underestimate the value of YC after you finish the batch. And his biggest regret was not staying in touch with YC. I might be misattributing this, but I think it was him. And so we made a point of that. And we just stayed in touch with our batch partner, Jared, at YC, who has been fantastic.
Ben [00:19:10]: Jared Friedman. All of the team at YC, there was the growth team at YC when they were still there, and they've been super helpful. And two things have been super helpful about that: raising money, like they just know exactly how to raise money.
And they've been super helpful during that process in all of our rounds. Like we've done three rounds since we did YC, and they've been super helpful during the whole process. And also just reaching a ton of customers. So the magic of YC is that there's thousands of YC companies, on the order of thousands, I think. And they're all of your first customers. And they're super helpful, super receptive, really want to try out new things. You have a warm intro to every one of them, basically. And there's this mailing list where you can post about updates to your products, which is really receptive. And that's just been fantastic for us. Like we've just got so many of our users and customers through YC. Yeah.
Swyx [00:20:00]: Well, so the classic criticism, or the sort of, you know, pushback, is people don't buy you because you are both from YC. But at least they'll open the email. Right. Like that's the... Okay.
Ben [00:20:13]: Yeah. Yeah. Yeah.
Swyx [00:20:16]: So that's been a really, really positive experience for us. And sorry, I interrupted with the YC question. Like, you just made it out of YC, survived the pandemic.
Ben [00:20:22]: I'll try and condense this a little bit. Then we started building tools for COVID, weirdly. We were like, okay, we don't have a startup. We haven't figured out anything. What's the most useful thing we could be doing right now?
Swyx [00:20:32]: Save lives.
Ben [00:20:33]: So yeah, let's try and save lives. I think we failed at that as well. We had a bunch of products that didn't really go anywhere. We worked on, yeah, a bunch of stuff like contact tracing, which it turned out wasn't really a useful thing. Andreas worked on like a DoorDash for people delivering food to people who were vulnerable. What else did we do? The meta problem of helping people direct their efforts to what was most useful, and a few other things like that.
It didn't really go anywhere. So we're like, okay, this is not really working either. We were considering actually just doing work for COVID. We have this decision document early on in our company, which is like, should we become a government app contracting shop? We decided no.
Swyx [00:21:11]: Because you also did work for gov.uk. Yeah, exactly.
Ben [00:21:14]: We had experience doing some-
Swyx [00:21:17]: And the Guardian and all that.
Ben [00:21:18]: Yeah, for government stuff. And we were just really good at building stuff. We were just product people. Like, I was the front end, product side, and Andreas was the back end side. So we were just a product team. And we were working with a designer at the time, a guy called Mark, who did our early designs for Replicate. And we were like, hey, what if we just team up and build stuff? And yeah, we gave up on that in the end, I can't remember the details. So we went back to machine learning. And then we were like, well, we're not really sure if this is going to work. And one of my most painful experiences from previous startups is shutting them down. Like, when you realize it's not really working and having to shut it down, it's a ton of work and people hate you and it's just sort of, you know. So we were like, how can we make something we don't have to shut down? And even better, how can we make something that won't page us in the middle of the night? So we made an open source project. We made a thing which was an open source Weights and Biases, because we had this theory that people want open source tools. There should be an open source version control, experiment tracking kind of thing. And it was intuitive to us, and we're like, oh, we're software developers and we like command line tools. Everyone loves command line tools and open source stuff, but machine learning researchers just really didn't care.
Like, they just wanted to click on buttons. They didn't mind that it was a cloud service. It was all very visual as well; you need lots of graphs and charts and stuff like this. So it wasn't right. We were actually building something that Andreas had made at Spotify for just saving experiments to cloud storage automatically, but other people didn't really want this. So we kind of gave up on that. And that was actually originally called Replicate, and we renamed that out of the way. So it's now called Keepsake, and I think some people still use it. Then we sort of came back, we looped back to our original idea. So we were like, oh, maybe there was a thing in that thing we were originally thinking about, of researchers sharing their work and containers for machine learning models. So we just built that. And at that point we were kind of running out of the YC money. So we were like, okay, this feels good though. Let's give this a shot. So that was the point we raised a seed round. We raised it pre-launch and pre-team. It was basically just an idea and a little prototype. But we were like, okay, you know, bootstrapping this thing is getting hard, so let's actually raise some money. Then we made Cog and Replicate. It initially didn't have APIs, interestingly. It was just the bit that I was talking about before, of helping researchers share their work. So it was a way for researchers to put their work on a webpage such that other people could try it out and so that you could download the Docker container. We cut the benchmarks part of it because we thought that was just too complicated. But it had a Docker container that, like, Andreas in a past life could download and run with his benchmark, and you could compare all these models apples to apples. So that was the theory behind it. That kind of started to work.
It was still, you know, a long time pre-AI hype, and there was lots of interesting stuff going on, but it was very much in the classic deep learning era. So sort of image segmentation models and sentiment analysis and all these kinds of things that people were using deep learning models for. And we were very much building for research, because all of this stuff was happening in research institutions, you know, the sort of people who'd be publishing to arXiv. So we were creating accompanying material for their models, basically. You know, they wanted a demo for their models, and we were creating the accompanying material for it. What was funny about that is they were not very good users. They were doing great work, obviously, but the way that research worked is that they just made one thing every six months and it was fire and forget. They published this piece of paper and, done, I've published it. So they output it to Replicate and then they just stopped using Replicate. You know, they were once-every-six-months users, and that wasn't great for us. But we stumbled across this early community. This was early 2021, when OpenAI released CLIP and people started smushing CLIP and GANs together to produce image generation models. And this started with, you know, just a bunch of tinkerers on Discord, basically. There was an early model called Big Sleep by Advadnoun. And then there was VQGAN+CLIP, which was a bit more popular, by Rivers Have Wings. And it was all just people tinkering on stuff in Colabs, and it was very dynamic, and it was people just making copies of Colabs and playing around with things and forking them.
And to me, I saw this and I was like, oh, this feels like open source software, so much more than the research world where people are publishing these papers.
Swyx [00:25:48]: You don't know their real names and it's just like a Discord.
Ben [00:25:51]: Yeah, exactly. But crucially, people were tinkering and forking and things were moving really fast, and it just felt like this creative, dynamic, collaborative community in a way that research wasn't really. Like, it was still stuck in this kind of six month publication cycle. So we just kind of latched onto that and started building for this community. And you know, a lot of those early models were published on Replicate. I think the first one that was really primarily on Replicate was one called Pixray, which was sort of mid 2021, and it had a really cool pixel art output, but it also just produced general, you know... they weren't crisp images, but they were quite aesthetically pleasing, like some of these early image generation models. And you know, that was published primarily on Replicate, and then a few other models around that were published on Replicate. And that's where we really started to find our early community and where we really found, like, oh, we've actually built a thing that people want, and they were great users as well. And people really wanted to try out these models. Lots of people were running the models on Replicate. We still didn't have APIs though, interestingly, and this is another really complicated part of the story. We had no idea what our business model was still at this point. I don't think people could even pay for it. You know, it was just these web forms where people could run the model.
Swyx [00:27:06]: Just for historical interest, which Discords were they and how did you find them? Was this the LAION Discord? Yeah, LAION. This is Eleuther.
Ben [00:27:12]: Eleuther, yeah. It was the Eleuther one.
These two, right? There was a channel there where VQGAN+CLIP, this was early 2021, was set up as a Discord bot. I just remember being completely captivated by this thing. I was just playing around with it all afternoon, in Discord, and, oh s**t, it's 2am. You know, yeah.
Swyx [00:27:33]: This is the beginnings of Midjourney.
Ben [00:27:34]: Yeah, exactly. And Stability. It was the start of Midjourney. And you know, it's where that kind of user interface came from. What's beautiful about the user interface is you could see what other people were doing. And you could riff off other people's ideas. And it was just so much fun to play around with this in a channel full of a hundred people. And yeah, that just completely captivated me, and I'm like, okay, this is something, you know. So we should get these things on Replicate. Yeah, that's where that all came from.
Swyx [00:28:00]: And then you moved on to, so was it APIs next or was it Stable Diffusion next?
Ben [00:28:04]: It was APIs next. And the APIs happened because one of our users... our web form had an internal API for making the web form work, like an API that was called from JavaScript. And somebody reverse engineered that to start generating images with a script. You know, they did, like, Web Inspector, copy as cURL, figured out what the API request was. And it wasn't secured or anything.
Swyx [00:28:28]: Of course not.
Ben [00:28:29]: They started generating a bunch of images, and we got tons of traffic, like, what's going on? And I think the usual reaction to that would be like, hey, you're abusing our API, and to shut them down. And instead we were like, oh, this is interesting. People want to run these models. So we documented the API, our internal API, in a Notion document, and messaged this person being like, hey, you seem to have found our API.
Here's the documentation. That'll be a thousand bucks a month, please, with a Stripe form we just clicked some buttons to make. And they were like, sure, that sounds great. So that was our first customer.
Swyx [00:29:05]: A thousand bucks a month.
Ben [00:29:07]: It was a surprising amount of money. That's not casual. It was on the order of a thousand bucks a month.
Swyx [00:29:11]: So was it a business?
Ben [00:29:13]: It was the creator of Pixray. He generated NFT art. And so he made a bunch of art with these models and was, you know, selling these NFTs, effectively. And I think lots of people in his community were doing similar things. And he then referred us to other people who were also generating NFTs, and they joined us with models. We started our API business. Yeah. Then we made an official API and actually added some billing to it. So it wasn't just a fixed fee.
Swyx [00:29:40]: And now people think of you as the hosted models API business. Yeah, exactly.
Ben [00:29:44]: But that just turned out to be our business, you know. But what ended up being beautiful about this is it was really fulfilling. The original goal of what we wanted to do is that we wanted to make this research that people were making accessible to other people and for it to be used in the real world. And this was ultimately just the right way to do it, because all of these people making these generative models could publish them to Replicate, and they wanted a place to publish it. And software engineers, you know, like myself (I'm not a machine learning expert, but I want to use this stuff) could just run these models with a single line of code. And we thought, oh, maybe the Docker image is enough, but it's actually super hard to get the Docker image running on a GPU and stuff. So it really needed to be the hosted API for this to work and to make it accessible to software engineers. And we just wound our way to this.
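The value of that hosted-API layer can be sketched in a few lines. This is a hypothetical stand-in, not Replicate's actual API: the function and field names are invented for illustration, but the shape is the point. The model's interface is reduced to JSON in, JSON out, so the caller never touches Docker, CUDA, or PyTorch.

```python
# Hypothetical sketch of what a hosted "run this model" call boils down
# to for the caller. Names and schema format are invented, not Replicate's.

def validate_input(schema: dict, payload: dict) -> None:
    """Check a request against the model's declared input schema."""
    for name, spec in schema.items():
        if spec.get("required") and name not in payload:
            raise ValueError(f"missing required input: {name}")
        if name in payload and not isinstance(payload[name], spec["type"]):
            raise TypeError(f"input {name!r} should be {spec['type'].__name__}")

def create_prediction(model: str, schema: dict, payload: dict) -> dict:
    """Validate the input and return the prediction record the caller polls.

    In a real service this is where the model's container would be queued
    onto a GPU; here we only model the request/response shape.
    """
    validate_input(schema, payload)
    return {"model": model, "input": payload, "status": "starting"}

# The "single line of code" experience from the software engineer's side:
schema = {"prompt": {"type": str, "required": True}}
prediction = create_prediction("someuser/pixel-art", schema, {"prompt": "a castle"})
print(prediction["status"])
```

All of the gnarly parts (GPU scheduling, the inference server, the container) live behind that one call, which is what made the hosted API the accessible version of the Docker image.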
Yeah.
Swyx [00:30:30]: Two years to the first paying customer. Yeah, exactly.
Alessio [00:30:33]: Did you ever think about becoming Midjourney during that time? You had so much interest in image generation.
Swyx [00:30:38]: I mean, you're doing fine, for the record, but, you know, it was right there, you were playing with it.
Ben [00:30:46]: I don't think it was our expertise. I think our expertise was DevTools, whereas Midjourney is almost like a consumer product, you know? Yeah. So I don't think it was our expertise. It certainly occurred to us. I think at the time we were thinking about, oh, maybe we could hire some of these people in this community and make great models and stuff like this. But we ended up more being the tooling. Like I was saying before, I'm not really a researcher, I'm more the tool builder, the behind-the-scenes. And I think both me and Andreas are like that.
Swyx [00:31:09]: I think this is an illustration of the tool builder philosophy. Something you latch onto in DevTools, which is: when you see people behaving weird, it's not their fault, it's yours. And you want to pave the cow paths, is what they say, right? Like the unofficial paths that people are making, make it official and make it easy for them, and then maybe charge a bit of money.
Alessio [00:31:25]: And now fast forward a couple of years, you have 2 million developers using Replicate. Maybe more. That was the last public number that I found.
Ben [00:31:33]: It's 2 million users. Not all those people are developers, but a lot of them are developers, yeah.
Alessio [00:31:38]: And then 30,000 paying customers was the number. Latent Space runs on Replicate: we're a small podcaster and we host a Whisper diarization model on Replicate. And we're paying, so we're in the 30,000. You raised a $40 million Series B.
I would say that maybe the Stable Diffusion time, August '22, was really when the company started to break out. Tell us a bit about that and the community that came out of it, and I know now you're expanding beyond just image generation.
Ben [00:32:06]: Yeah, like, I think we kind of set ourselves... we saw there was this really interesting generative image world going on. So we were building the tools for that community already, really. And we knew Stable Diffusion was coming out. We knew it was a really exciting thing, you know, it was the best generative image model so far. I think the thing we underestimated was just what an inflection point it would be. I think Simon Willison put it this way, where he said something along the lines of: it was a model that was open source and tinkerable and just good enough, such that it just took off in a way that none of the models had before. And what was really neat about Stable Diffusion is it was open source, compared to DALL-E, for example, which was sort of equivalent quality. And in the first week we saw people making animation models out of it. We saw people make game texture models that used circular convolutions to make repeatable textures. We saw, a few weeks later, people fine-tuning it so you could put your face in these models, and all of these other-
Swyx [00:33:10]: Textual inversion.
Ben [00:33:11]: Yep. Yeah, exactly. That happened a bit before that. And all of this sort of innovation was happening all of a sudden. And people were publishing on Replicate because you could just publish arbitrary models on Replicate. So we had this supply of interesting stuff being built. But because it was a sufficiently good model, there was also just a ton of people building with it.
They were like, oh, we can build products with this thing. And this was about the time when people were starting to get really interested in AI. So tons of product builders wanted to build stuff with it. And we were just sitting there in the middle, as the interface layer between all these people who wanted to build and all these machine learning experts who were building cool models. And that's really where it took off. Incredible supply, incredible demand, and we were just in the middle. And then, yeah, since then, we've just kind of grown and grown, really. And we've been building a lot for the indie hacker community, these individual tinkerers, but also startups and a lot of large companies as well who are exploring and building AI things. Then kind of the same thing happened in the middle of last year with language models and Llama 2, where the same kind of Stable Diffusion effect happened with Llama. And Llama 2 was our biggest week of growth ever, because tons of people wanted to tinker with it and run it. And you know, since then we've just been seeing a ton of growth in language models as well as image models. Yeah. We're just kind of riding a lot of the interest that's going on in AI and all the people building in AI, you know. Yeah.
Swyx [00:34:29]: Kudos. Right place, right time. But also, you know, took a while to position for the right place before the wave came. I'm curious if you have any insights on these different markets. So Pieter Levels, notably very loud person, very picky about his tools. I wasn't sure actually if he used you. He does. You mentioned him in your Series B blog post, and Danny Postma as well, his competitor, all in that wave. What are their needs versus, you know, the more enterprise or B2B type needs?
Did you come to a decision point where you're like, okay, how serious are these indie hackers versus the actual businesses that are bigger and perhaps better customers because they're less churny?
Ben [00:35:04]: They're surprisingly similar, because I think a lot of people right now want to use and build with AI, but they're not AI experts and they're not infrastructure experts either. So they want to be able to use this stuff without having to figure out all the internals of the models and, you know, touch PyTorch and whatever. And they also don't want to be setting up and booting up servers. And that's the same all the way from indie hackers just getting started, because obviously you just want to get started as quickly as possible, all the way through to large companies who want to be able to use this stuff but don't have all of the experts on staff. You know, big companies like Google and so on do actually have a lot of experts on staff, but the vast majority of companies don't. And they're all software engineers who want to be able to use this AI stuff, but they just don't know how to use it. And you really need to be an expert, and it takes a long time to learn the skills to be able to use that. So they're surprisingly similar in that sense. I think it's also kind of unfair to the indie community: they're not churny or spiky, surprisingly. They're building real, established businesses, which is like, kudos to them, building these really large, sustainable businesses, often just as solo developers. And it's kind of remarkable how they can do that, actually, and it's a credit to a lot of their product skills. And you know, we're just there to help them, being their machine learning team, effectively, to help them use all of this stuff.
A lot of these indie hackers are some of our largest customers, alongside some of our biggest customers that you would think would be spending a lot more money than them, but yeah.
Swyx [00:36:35]: And we should name some of these. So you have them on your landing page: you have BuzzFeed, Unsplash, Character AI. What do they power? What can you say about their usage?
Ben [00:36:43]: Yeah, totally. It's various things.
Swyx [00:36:46]: Well, I mean, I'm naming them because they're on your landing page, so you have logo rights. It's useful for people, like, I'm not imaginative. It's monkey see, monkey do, right? If I see someone doing something that I want to do, then I'm like, okay, Replicate's great for that.
Ben [00:37:00]: Yeah, yeah, yeah.
Swyx [00:37:01]: So that's what I think about case studies on company landing pages, is that it's just a way of explaining, like, yep, this is something that we are good for. Yeah, totally.
Ben [00:37:09]: I mean, these companies are doing things all the way up and down the stack at different levels of sophistication. So Unsplash, for example, they actually publicly posted this story on Twitter where they're using BLIP to annotate all of the images in their catalog. So you know, they have lots of images in the catalog and they want to create a text description of each one so you can search for it. And they're annotating images with an off-the-shelf open source model. You know, we have this big library of open source models that you can run, and we've got lots of people running these open source models off the shelf. And then most of our larger customers are doing more sophisticated stuff. So they're fine-tuning the models, they're running completely custom models on us.
A lot of these larger companies are using us for a lot of their inference, you know, but it's a lot of custom models and them writing the Python themselves, because they've got machine learning experts on the team. And they're using us for their inference infrastructure, effectively. So it's lots of different levels of sophistication, where some people are using these off-the-shelf models. Some people are fine-tuning models. So Pieter Levels is a great example, where a lot of his products are based off fine-tuning image models, for example. And then we've also got larger customers who are just using us as infrastructure, effectively. So yeah, it's all things up and down the stack.
Alessio [00:38:29]: Let's talk a bit about Cog and the technical layer. So there are a lot of GPU clouds. I think people have different pricing points. And I think everybody tries to offer a different developer experience on top of it, which then lets you charge a premium. Why did you want to create Cog? You worked at Docker. What were some of the issues with traditional container runtimes? And maybe, yeah, what were you surprised by as you built it?
Ben [00:38:54]: Cog came right from the start, actually, when we were thinking about this, you know, evaluation, the sort of benchmarking system for machine learning researchers, where we wanted researchers to publish their models in a standard format that was guaranteed to keep on running, that you could replicate the results of. That's where the name came from. And we realized that we needed something like Docker to make that work, you know. And I think it was just natural from my point of view that, obviously, that should be open source, that we should try and create some kind of open standard here that people can share. Because if more people use this format, then that's great for everyone involved.
I think the magic of Docker is not really in the software. It's just the standard that people have agreed on: here are a bunch of keys for a JSON document, basically. And you know, that was the magic of the metaphor of real shipping containers as well. It's not the containers that are interesting. It's just the size and shape of the damn box, you know. And it's a similar thing here, where really we just wanted to get people to agree on: this is what a machine learning model is. This is how a prediction works. This is what the inputs are, this is what the outputs are. So Cog is really just a Docker container that attaches to a CUDA device if it needs a GPU, that has an OpenAPI specification as a label on the Docker image. And the OpenAPI specification defines the interface for the machine learning model, like the inputs and outputs, effectively, or the params, in machine learning terminology. And you know, we just wanted to get people to agree on this thing. And it's general purpose enough. Some of the existing things were at the graph level, but we really wanted something general purpose enough that you could just put anything inside this, and it was future compatible, and it was just arbitrary software. And you know, it'd be future compatible with future inference servers and future machine learning model formats and all this kind of stuff. So that was the intent behind it. It just came naturally that we wanted to define this format. And that's been really working for us. A bunch of people have been using Cog outside of Replicate, which is kind of our original intention, like, this should be how machine learning is packaged and how people should use it. It's common to use Cog in situations where maybe they can't use the SaaS service because, I don't know, they're in a big company and they're not allowed to use a SaaS service, but they can use Cog internally still.
And they can download the models from Replicate and run them internally in their org, which we've been seeing happen. And that works really well. People who want to build custom inference pipelines but don't want to reinvent the world can use Cog off the shelf and use it as a component in their inference pipelines. We've been seeing tons of usage like that, and it's just been happening organically. We haven't really been trying, you know, but it's there if people want it, and we've been seeing people use it. So that's great. Yeah. So a lot of it is just sort of philosophical, of, like, this is how it should work, from my experience at Docker, you know. And there's just a lot of value from the core being open, I think, and that other people can share it and it's an integration point. So, you know, if Replicate, for example, wanted to work with a testing system, like a CI system or whatever, we can just interface at the Cog level. That system just needs to support Cog models, and then you can test your models on that CI system before they get deployed to Replicate. And it's just a format that we can get everyone to agree on, you know.
Alessio [00:41:55]: What do you think, I guess, Docker got wrong? Because if I look at a Docker Compose and a Cog definition, first of all, Cog is kind of like the Dockerfile plus the Compose, versus in Docker Compose you're just exposing the services. And also, Docker Compose is very ports-driven, versus you have the actual, you know, predict, this is what you have to run.
Ben [00:42:16]: Yeah.
Alessio [00:42:17]: Any learnings and maybe tips for other people building container-based runtimes, like how much should you separate the API services versus the image building, or how much do you want to build them together?
Ben [00:42:29]: I think it was coming from two sides.
We were thinking about the design from the point of view of user needs, what their problems are and what problems we can solve for them, but also what the interface should be for a machine learning model. And it was the combination of those two things that led us to this design. So the thing I talked about before was a little bit of the interface around the machine learning model. We realized that we wanted it to be general purpose. We wanted to be at the JSON, human-readable level rather than the tensor level. So it was an OpenAPI specification that wrapped a Docker container. And that's where that design came from. And it's really just a wrapper around Docker, so we were kind of standing on shoulders there, but Docker is too low level. It's just arbitrary software. So we wanted to have an OpenAPI specification that defined the function, effectively, that is the machine learning model, but also how that function is written, how that function is run, which is all defined in code and stuff like that. So it's a bunch of abstraction on top of Docker to make that work. And that's where that design came from. But the core problems we were solving for users were that Docker is really hard to use and productionizing machine learning models is really hard. So on the first part of that, we knew we couldn't use Dockerfiles. Dockerfiles are hard enough for software developers to write. I'm saying this with love as somebody who worked on Docker and worked on Dockerfiles, but it's really hard to use. And you need to know a bunch about Linux, basically, because you're running a bunch of CLI commands. You need to know a bunch about Linux and best practices and how apt works and all this kind of stuff. So we were like, OK, we can't get to that level. We need something that machine learning researchers will be able to understand, people who are used to Colab notebooks.
And what they understand is: I need this version of Python, I need these Python packages, and somebody told me to apt-get install something. You know? If there was sudo in there, I don't really know what that means. So we tried to create a format at that level, and that's what cog.yaml is. We were really trying to imagine what that machine learning researcher is going to understand, and to build for them. Then the productionizing side is: how can we package up all the complexity of productionizing machine learning models, picking CUDA versions, hooking it up to GPUs, writing an inference server, defining a schema, doing batching, all of these really gnarly things that everyone does again and again, and provide that as a tool? That's where that side of it came from. So it's combining those user needs with, you know, the world's need for a common standard for what a machine learning model is. That's how we thought about the design. I don't know whether that answers the question.

Alessio [00:45:12]: Yeah. So your idea was like, hey, you really want what Docker stands for in terms of a standard, but you actually don't want people to do all the work that goes into Docker.

Ben [00:45:22]: It needs to be higher level, you know?

Swyx [00:45:25]: So I want to note, for the listener, you're not the only standard out there. As with any standard, there must be 14 of them. You are surprisingly friendly with Ollama, your former colleagues from Docker, who came out with the Modelfile. Mozilla came out with the Llamafile. And then, I don't know if this is even in the same category, but I'm just going to throw it in there: Hugging Face has the transformers and diffusers libraries, which are a way of disseminating models that people obviously use.
How would you compare and contrast your approach with Cog versus all of these?

Ben [00:45:53]: It's kind of complementary, actually, which is kind of neat, in that a lot of them, transformers for example, are lower level than Cog. It's effectively a Python library, but you still need to...

Swyx [00:46:04]: Expose them.

Ben [00:46:05]: Yeah. You still need to turn that into an inference server, you still need to install the Python packages, and that kind of thing. So lots of Replicate models are transformers models and diffusers models inside Cog, you know? That's the level where it sits, so it's very complementary in some sense. We're working on integration with Hugging Face such that you can deploy models from Hugging Face as Cog models on Replicate, and things like that. And some of these things, like Llamafile and what Ollama are working on, are also very complementary, in that they're doing a lot of the running-things-locally-on-laptops work, which is not something Cog does very well. Cog is really designed around servers, attaching to CUDA devices and NVIDIA GPUs, that kind of thing. So we're figuring out ways those things can be interoperable, because they should be, and they are quite complementary: you should be able to take a model on Replicate and run it on your local machine, and you should be able to take a model on your machine and run it in the cloud.

Swyx [00:47:02]: Is the base layer something like, is it at the GGUF level, which by the way I need to get a primer on, like the different formats that have emerged, or is it at the star-dot-file level, which is Modelfile, Llamafile, whatever, or is it at the Cog level? I don't know, to be honest.

Ben [00:47:16]: I think this is something we still have to figure out. There's a lot yet, like exactly where those lines are drawn. Don't know exactly.
I think this is something we're trying to figure out ourselves, but I think there's certainly a lot of promise in these systems interoperating. We just want things to work together, you know; we want to try and reduce the number of standards. So the more these things can interoperate and, you know
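The cog.yaml format Ben describes earlier, "this version of Python, these Python packages, an apt-get install", maps naturally onto a small declarative file. As a rough sketch, a cog.yaml for a hypothetical model might look like this (the field names follow Cog's documented format; the specific packages and versions are illustrative):

```yaml
build:
  gpu: true                  # Cog picks compatible CUDA base images for you
  python_version: "3.11"
  python_packages:
    - "torch==2.0.1"         # illustrative pins, not a recommendation
    - "transformers==4.35.0"
  system_packages:
    - "ffmpeg"               # "somebody told me to apt-get install something"
predict: "predict.py:Predictor"
```

Everything gnarlier, CUDA selection, the inference server, the schema, is generated from this plus the typed predict function, which is the "higher level than a Dockerfile" point.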
Welcome to The Nonlinear Library, where we use text-to-speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Intro to Superposition & Sparse Autoencoders (Colab exercises), published by CallumMcDougall on November 29, 2023 on The AI Alignment Forum. This is a linkpost for some exercises on sparse autoencoders, which I've recently finished working on as part of the upcoming ARENA 3.0 iteration. Having spoken to Neel Nanda and others in interpretability-related MATS streams, it seemed useful to make these exercises accessible outside the context of the rest of the ARENA curriculum. Links to Colabs: Exercises, Solutions. If you don't like working in Colabs, you can clone the repo, download the exercises & solutions Colabs as notebooks, and run them in the same directory. The exercises were built out from the Toy Models of Superposition exercises from the previous iteration, but now with new sparse autoencoder content. These exercises fall into 2 groups. SAEs in toy models: we take the toy models from Anthropic's Toy Models of Superposition paper (which there are also exercises for) and train sparse autoencoders on the representations learned by these toy models. These exercises culminate in using neuron resampling to successfully recover all the learned features from the toy model of bottleneck superposition. SAEs in real models: there are also exercises on interpreting an SAE trained on a transformer, where you can discover some cool learned features (e.g. a neuron exhibiting skip-trigram-like behaviour, which activates on left brackets following Django-related syntax, and predicts the completion (' -> django). You can either read through the Solutions Colab (which has all output displayed & explained), or go through the Exercises Colab and fill in the functions according to the specifications you are given, looking at the Solutions when you're stuck.
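For readers who want the gist before opening the Colabs, the basic SAE forward pass these exercises build can be sketched in a few lines. This is a pure-Python illustration with made-up dimensions and untrained random weights; the actual exercises use PyTorch and add a training loop with an L1 sparsity penalty:

```python
import random

def matvec(W, x):
    """Multiply matrix W (list of rows) by vector x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def relu(v):
    return [max(0.0, a) for a in v]

class SparseAutoencoder:
    """Toy SAE: an overcomplete hidden layer of ReLU 'features'.
    Training (not shown) minimises reconstruction error plus an
    L1 penalty on the hidden activations to encourage sparsity."""
    def __init__(self, d_in, d_hidden, seed=0):
        rng = random.Random(seed)
        self.W_enc = [[rng.gauss(0, 0.1) for _ in range(d_in)] for _ in range(d_hidden)]
        self.W_dec = [[rng.gauss(0, 0.1) for _ in range(d_hidden)] for _ in range(d_in)]
        self.b_enc = [0.0] * d_hidden
        self.b_dec = [0.0] * d_in

    def forward(self, x):
        # Encode: ReLU(W_enc @ (x - b_dec) + b_enc)
        centered = [xi - bi for xi, bi in zip(x, self.b_dec)]
        acts = relu([a + b for a, b in zip(matvec(self.W_enc, centered), self.b_enc)])
        # Decode: W_dec @ acts + b_dec
        recon = [a + b for a, b in zip(matvec(self.W_dec, acts), self.b_dec)]
        return acts, recon

sae = SparseAutoencoder(d_in=4, d_hidden=16)
acts, recon = sae.forward([1.0, 0.0, -1.0, 0.5])
print(len(acts), len(recon))
```

The hidden activations `acts` are the candidate interpretable features; the exercises train the weights so that only a few of them are active on any given input.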
Both Colabs come with test functions you can run to verify your solution works. List of all exercises: I've listed all the exercises here, along with prerequisites (although I expect most readers will only be interested in the sparse autoencoder exercises). Each set of exercises is labelled with its prerequisites. For instance, the label (1*, 3) means the first set of exercises is essential, and the third is recommended but not essential. Abbreviations: TMS = Toy Models of Superposition, SAE = Sparse Autoencoders.
1. TMS: Superposition in a Nonprivileged Basis
2. TMS: Correlated / Anticorrelated Features (1*)
3. TMS: Superposition in a Privileged Basis (1*)
4. TMS: Feature Geometry (1*)
5. SAEs in Toy Models (1*, 3)
6. SAEs in Real Models (1*, 5*, 3)
Please reach out to me if you have any questions or suggestions about these exercises (either by email at cal.s.mcdougall@gmail.com, or via a LessWrong private message / comment on this post). Happy coding! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Ten years ago, our country saw a flagrant intertwining of politics and the media. What has changed since then? And after the sale of Mafra, could another wild era be on the way? We recorded our 70th anniversary episode with a live audience at the Co.Labs club in Brno.
Louis & Gabe are fitness experts and hosts of the wildly successful NineToFive fitness podcast. They share all their secrets to their mind-blowing success and break down exactly what you need to do to create a healthier, better life and live your dreams. WATCH ON SPOTIFY: Click here WATCH ON YOUTUBE: Click here LISTEN ON APPLE: Click here .
Faisal is a Pakistani-American entrepreneur and investor. He is the Founder & General Partner of Zayn Venture Capital (formerly Zayn Capital), a leading US- and Cayman-domiciled venture capital fund investing in early-stage tech ventures in Pakistan. He is considered one of the most prolific investors in Pakistan, with the largest number of notable investment calls in the country. His track record includes: Haball, NayaPay, PostEX, Bookme.pk, Abhi Finance, AdalFi, Bazaar-tech, KTrade, SnappRetail, Laam.pk, Krave Mart, GrocerApp, Truck It In, Savyour, Roomy.pk, Bagallery, Trellis Housing Finance, Tazah, Zaraye, and Colabs. Faisal was also a founding partner of Lakson Venture Capital before creating Zayn VC. He built the Lakson Fund from the ground up as Managing Partner and Executive Director, attaining one of the highest TVPIs in Pakistan in just 3 years. Along with his early notable investments in Bitcoin and Ethereum, his substantial expertise in venture capital and private investing has given him a thorough grasp of blockchain technology and global macroeconomics, which he applies to his investment decisions. Faisal has an MBA from Oxford University and joint Bachelor's and Master's degrees from Michigan State University, USA. He has also been an investor in hedge funds and venture capital funds, and an active angel investor for over 15 years. Faisal is a limited partner of the Silicon Valley-based venture capital fund 500 Startups. LinkedIn: https://www.linkedin.com/in/faisal-aftab-93a8b3129/ --- Support this podcast: https://podcasters.spotify.com/pod/show/geeksofthevalley/support
Tío Noto visits Tío Métrica to talk about his beginnings on social media and trending content, and the "cazando fakes" series with Cesar Vega, Andres Hurtado, Melissa Klug, Samir Velazquez, Samahara Lobaton, and Ator Untela. We also discuss Farfán's deal with Nike, NERO LVIGI and Alianza Lima, Anuel's collaborations with Reebok, Alianza Lima's with Supreme, Bad Bunny's with Adidas, J BALVIN's with Jordan, Kanye West's falling-out with Yeezy, and his new sneaker store Kicks 4 All.
Craig and Isaac discuss new bikes from Throne and GT and continue to discuss the DirtyFest weekend.
Have you ever looked at your favourite brand and thought, "My work would look great on that?!" If you've ever imagined your art on the local cafe's coffee cups, as a mural, or on a dress by your favourite brand, then this episode, all about art licensing and brand collaborations, is for you. If you're curious about licensing your artwork or landing a brand collab, make sure you follow the podcast. In this episode I ask the very lovely and talented Jasmine Kroeze all the questions you want answered, from contracts to how she landed her collab with Kathmandu. We also reveal something super exciting towards the end, so keep your arty ears listening to the end! SHOWNOTES AND SIGN UP FOR THE BIG REVEAL!!
Welcome to Episode 7 of Ballsy with Marcus Aitken. Marcus is a contemporary artist living and working in South London. Known for his gestural paintings, Marcus uses a combination of layering, distressing, and blending to present a multifaceted surface in his work. His background in design has shaped his artistic style, creating cutting-edge abstract works. He has shown in exhibitions internationally and has collectors around the globe. His work has been featured by various publications including Schön! Magazine, Art Plugged, Saatchi Online, Soft Punk Magazine, Highsnobiety, Trebuchet, Condé Nast, and Culture Trip. He was also named one of Saatchi Art's top 20 emerging artists to watch in 2020. He makes more than art, doing lots of interesting fashion collabs, and we get deep into the way he approaches his business. Really great interview. You can find him on Instagram @marcusaitken_ and on his website https://www.marcusaitken.com/
So, this time we're doing a collab with Bang Hasnan Manik, who is deeply focused on parenting. What is a father's role for his children? How important is it? Stay tuned.
Horos is one of the most incredible names on the current RPG scene. With creativity, inventiveness, and passion, Horos draws on simple, heartfelt resources to capture the full evocative potential of RPGs, from their origins through the 80s to the present day. The conversation flows through topics like Dungeon Synth, zines, collabs, and video content. --- Send in a voice message: https://podcasters.spotify.com/pod/show/brainstorm-cast/message
David Schiesser (he/him) - Berlin - 2017 LOST TAPES @ds_008 For a number of years David would come up to Berlin, considering making the move from Offenbach (he had landed a huge old studio way outside of Berlin that was for painting, tattooing, and sleeping). I couldn't count how many times he's visited me by now (even when living in Berlin he would commute south and work here, to be in the city energy). These LOST TAPES have a video aspect from us sitting in the kitchen (one day there might be a good purpose for 'em). Thanks as always.
Be part of our community by joining our Facebook group: https://www.facebook.com/groups/thoughtbehindthings In conversation with tonight's guest, Fatima Mazhar. What has Fatima's journey been like? How did her career begin? What was her role? After this job ended, what happened to her? How did she land her first operational job? What brought her to Careem? What was it like to work for Careem? How did her career at Careem progress? Completing her MBA. How did she end up at KeepTruckin? What was it like relocating to Islamabad? What type of people did KeepTruckin hire back then? When and why did she leave? How did she start working on Dukan.pk, and what happened? How did she end up at COLABS? What do they do at COLABS? Why COLABS? How does she see the entire space? Is she certain that the expansion will continue? Why is there so much money spent on subsidies? How does she envision the Pakistan of 2050? Why aren't we more concerned about what's going on in the country? Catch this and much more in tonight's episode. Do not forget to subscribe and press the bell icon to catch on to some amazing conversations coming your way! Connect with us: • https://www.instagram.com/thoughtbehindthings • https://www.instagram.com/muzamilhasan Fatima's LinkedIn: https://www.linkedin.com/in/mazharfatima?originalSubdomain=pk COLABS Website: https://colabs.pk/ The Pakistan Pivot podcast: https://www.youtube.com/watch?v=FmB64PGzL_c&ab_channel=PakistanNow TBT shorts: https://www.youtube.com/channel/UC6akyz6EpkwyzBmKh0L2rSQ --- Support this podcast: https://anchor.fm/syed-muzamil-hasan-zaidi3/support
Deborah and Andrew talk about collaborations, meeting new people in the industry, and how we learn from each other in the retail space.
Fatima Mazhar is an entrepreneur, and in her current role as COO at COLABS, is working to redefine the future of work in Pakistan. Her story is not one of a typical entrepreneur, where we talk about building grand dreams. Fatima's story is about dealing with the conflict of wanting to do something exciting and impactful, while fighting against her cultural context. Fatima grew up in a Pakistani family where nearly every decision was made for her, where there was very little room for pursuing her dreams. She started to rebel, to do things that were considered ‘unexpected', unaccepted and often inappropriate. This eventually led to severe depression and self-neglect. It took Fatima time to acknowledge her mental health challenges, and start to make healthy changes. Specific topics covered in this episode include: Depression, self-neglect, and withdrawal from society; How our upbringing has a profound impact on the decisions we make in life; The manageable steps Fatima has taken to support her mental health. In addition, we discuss the following questions: How to deal with the internal conflict of wanting to achieve something when you're held back; How do you recognise the right time to seek help? How can you equip yourself to manage healthy relationships while running a business? Connect with Fatima: LinkedIn | Twitter | Instagram Reading materials: The Power of Now: A Guide to Spiritual Enlightenment | Eckhart Tolle How can you support the podcast? Tell your friends and share online. Subscribe & review: Please make sure to review, share comments and subscribe to the show on the various platforms (YouTube, Apple Podcasts, Spotify & Google Podcasts) Spread the word: Help grow our reach by sharing your enthusiasm for the podcast and/or your favourite episodes by posting it on social media. If you enjoy the podcast, please consider leaving a short review on Apple Podcasts. 
It takes less than 60 seconds, and it really makes a difference in helping to convince future guests to share their stories! For show notes and past guests, visit https://www.thefuturefarm.co/naked-podcast Don't forget to sign up for more! Sign up for The Future Farm's monthly newsletter (Pit-Stop) at https://www.thefuturefarm.co/newsletter Follow The Future Farm on LinkedIn, Twitter, Instagram and YouTube
Sven meets up with Sunny Ture in his Temple Studio to talk about his song "Vacation" off of the album "Hotline," the therapy that comes from creating, how the Push(soul) Collective began, and his favorite non-musical thing(s). SONG: Vacation ALBUM: Hotline (Sunny Ture & Solomon Grunge) Released 4/15/21 BANDS: Push(soul) Hip Hop Collective; collabs with Kelvah, Kivon Redd Favorite Treat: Grippos; banana, strawberry, Greek yogurt smoothie Photo Credit: photo by Kivon Redd of Push(soul)
In this second block, Tiago will tell us how companies are getting into data processing, and will also talk about value proposition generation, Fake Data, and Data Voice.
Today on Future Hacker we welcome the vice governor of the State of São Paulo, Rodrigo Garcia. A lawyer and businessman with almost 30 years in public life, he has presided over the Legislative Assembly of the State of São Paulo and served in several departments, from Debureaucratization for the City of São Paulo to Economic Development, Science, Technology, and Innovation. He will talk about the fundamentals of democracy, long-term multi-party planning, initiatives around innovation centers and technology parks, and the need for collaboration between the federal, state, and municipal governments. He will also discuss the Midia SP centers and their impact.
Today we talk with Marcelo Trevisani, a professional with more than 18 years of experience in Director and Head roles across Digital Marketing, Digital Transformation, Innovation, Branding, Communication, Sales Growth, and strategic planning at national and multinational companies. He also taught for more than 10 years in MBA and postgraduate Digital Marketing programs at ESPM, FGV Business School, and FIAP. Today he is CMO at IBM. He will talk about the evolution of data intelligence, business-focused labs, and collaborations.
One of Loula's stories with her friends. Aria wants to give cookies to her mother, but the cookies get burnt. Oh no! But Aria doesn't give up; she tries again and bakes a new batch.
Introduction: I happened across Tai Lake and the Hawaii Artists Collaboration by accident, and I am so grateful that I did. In this episode, Tai shares his perspectives on the creative power of collaboration and on bringing together masters of the craft to grow their skills and their vocabulary. I found his enthusiasm for collaboration more than a little infectious, and I am grateful that we were able to connect. Our conversation makes me think about the kind of people we bring into collaborative work and the level of maturity or mastery in an area of expertise that they represent. Tai talks about leaving your ego out and coming together to raise the state of the craft overall. I'm sure you will enjoy this conversation with a master furniture builder about the Hawaii Artists Collaboration.
During this episode we discuss: Getting to know Tai and his craft; Collaboration and creativity; Origins of the Hawaii Artists Collaboration; Who comes to a Colab?; Cross-pollinating across Colabs; The changing scene for artists; Last thoughts - one thing to know about collaboration.
Resources mentioned in this episode: Tai Lake Woodworking; Hawaii Artists Collaboration; Finnish-American architect Eero Saarinen: "Always design a thing by considering it in its next larger context - a chair in a room, a room in a house, a house in an environment, an environment in a city plan" - Eero Saarinen; Emma Lake Collaboration; CollaborationNZ; Jake James - Blacksmith; Henry Pomfret - there was no specific link for Henry, but I did find this link to a blacksmithing demonstration at the International Blacksmithing event in 2016; Lisa Geertsen - Blacksmith.
Now it's your turn: You should absolutely check out the Hawaii Artists Collaboration webpage and bookmark it so you can check back from time to time. At the time I published this, the links to the various studio and artist tours that Tai mentioned had not yet been posted. I am looking forward to seeing the stories behind some of the works these Colabs and artists put together.
Your comments and ratings in Apple Podcasts and other providers are really important, so be sure to subscribe to the podcast. Most importantly, suggest to your friends that they subscribe and share as well. Don't forget to sign up for other interesting collaboration tidbits at Collaboration Dynamics.
Samuel Suraphel is an entrepreneur, global program manager, and technology specialist. Since 2014, he has led Mansa Colabs, a company that supports the growth of early-stage companies and non-profit projects in both North America and Africa. A strong focus is also placed on entrepreneurship ecosystem building, particularly for African Diaspora businesses, and on the promotion of the Creative Sector as a source of entrepreneurial and employment opportunities. --- This episode is sponsored by Anchor: The easiest way to make a podcast. https://anchor.fm/app