This week we dive into the fascinating world of AI innovation and competition. We start by looking at Pieter Levels' AI-powered game, fly.pieter.com, examining its impressive solo creation and what its lifespan tells us about the early days of "vibe coding." Then, we discuss Canva's entry into the AI code generation space, sparking a conversation about the increasing homogeneity of AI tools. We highlight a thought-provoking take on the lack of unique problem-solving and the allure of imitation in the tech world. Plus, is Google's Gemini quietly leveling up its free tier "Deep Research"? We also touch on Alphabet's massive investment in AI infrastructure and the growing focus on smaller, task-specific AI models. Finally, we analyze the Bank of England's cautious approach to LLMs in finance. The breaking news: is OpenAI building its own social network to take on X? Tune in for a packed episode exploring the cutting edge of AI development and its societal impact. @dejavucoder, @schitzobabel, @Austen, @ai_for_success
Generative AI has made enormous leaps in recent years, especially in coding and the new trend of "vibe coding." In the GPT-3.5 era, vibe coding was still a somewhat elusive concept. Although AI could already program reasonably well, it often lacked the context and nuance to code truly intuitively. Today, with models like GPT-4 and Claude and applications like Cursor, vibe coding has become far more advanced and practical. The focus is no longer just on syntactic correctness but also on the "flow" and the feel the code should convey. Our guest is Daniel de Vos, head of data & AI at Triple and an enthusiastic vibe coder himself. An important debate within the community is whether it is safe to run code you don't fully understand yourself. On platforms like LinkedIn, programmers ask whether this is morally responsible: AI can offer powerful solutions, but it also introduces risks through bugs that are hard to trace. AI doesn't necessarily write "beautiful" code, although in many cases it works pragmatically. The generated code is often functionally correct, but sometimes lacks the elegance or efficiency an experienced programmer would strive for. Vibe coding also raises the question: can everyone program now? In a sense, yes, because tools like Cursor and Claude make it more accessible than ever. Still, it is not a full replacement for classic programming skills: the output can be unpredictable and still requires some technical background to recognize and fix problems. AI also makes different mistakes than humans do. Where people are often sloppy with syntax or consistency, AI is sometimes too rigid and misses creative solutions. Well-known examples of vibe coding come from makers like Pieter Levels, who shows on Twitter how quickly he builds applications with AI; he recently created a game with Cursor and Claude. Applications like these show that vibe coding is not just a gimmick but a fundamental shift in how apps are built. The concept goes beyond low-code/no-code, because it offers not only simplicity but also a new way of thinking about software development. Still, there are challenges. Although vibe coding lets you build apps quickly, you run into problems when the codebase becomes too complex or chaotic. The language model struggles with thousands of lines of spaghetti code, which means more advanced projects often still need a human touch. In the end, vibe coding is not a full replacement for traditional development, but it does offer new possibilities for makers who want to experiment with code intuitively. Guest: Daniel de Vos. Video: YouTube. Hosts: Ben van der Burg & Daniël Mol. Editing: Daniël Mol. See omnystudio.com/listener for privacy information.
This week we talk about Studio Ghibli, Andrej Karpathy, and OpenAI. We also discuss code abstraction, economic repercussions, and DOGE.
Recommended Book: How To Know a Person by David Brooks
Transcript
In late-November of 2022, OpenAI released a demo version of a product they didn't think would have much potential, because it was kind of buggy and not very impressive compared to the other things they were working on at the time. This product was a chatbot interface for a generative AI model they had been refining, called ChatGPT.
This was basically just a chatbot that users could interact with, as if they were texting another human being. And the results were good enough—both in the sense that the bot seemed kinda sorta human-like, but also in the sense that the bot could generate convincing-seeming text on all sorts of subjects—that people went absolutely gaga over it, and the company went full-bore on this category of products, dropping an enterprise version in August the following year, a search engine powered by the same general model in October of 2024, and by 2025, upgraded versions of their core models were widely available, alongside paid, enhanced tiers for those who wanted higher-level processing behind the scenes: that upgraded version basically tapping a model with more feedstock, a larger training library and more intensive and refined training, but also, in some cases, a model that thinks longer, that can reach out and use the internet to research stuff it doesn't already know, and, increasingly, that can produce other media, like images and videos.
During that time, this industry has absolutely exploded, and while OpenAI is generally considered to be one of the top dogs in this space, still, they've got enthusiastic and well-funded competition from pretty much everyone in the big tech world, like Google and Amazon and Meta, while also facing upstart competitors like Anthropic and Perplexity, alongside burgeoning Chinese competitors, like Deepseek, and established Chinese tech giants like Tencent and Baidu.
It's been somewhat boggling watching this space develop, as while there's a chance some of the valuations of AI-oriented companies are overblown, potentially leading to a correction or the popping of a valuation bubble at some point in the next few years, the underlying tech and the output of that tech really has been iterating rapidly, the state of the art in generative AI in particular producing just staggeringly complex and convincing images, videos, audio, and text, but the lower-tier stuff, which is available to anyone who wants it, for free, is also valuable and useable for all sorts of purposes.
Just recently, at the tail-end of March 2025, OpenAI announced new multimodal capabilities for its GPT-4o language model, which basically means this model, which could previously only generate text, can now produce images, as well.
And the model has been lauded as a sort of sea change in the industry, allowing users to produce remarkable photorealistic images just by prompting the AI—telling it what you want, basically—with usually accurate, high-quality text, which has been a problem for most image models up till this point.
It also boasts the capacity to adjust existing images in all sorts of ways.
Case in point, it's possible to use this feature to take a photo of your family on vacation and have it rendered in the style of a Studio Ghibli cartoon; Studio Ghibli being the Japanese animation studio behind legendary films like My Neighbor Totoro, Spirited Away, and Princess Mononoke, among others.
This is partly the result of better capabilities by this model, compared to its precursors, but it's also the result of OpenAI loosening its policies to allow folks to prompt these models in this way; previously they disallowed this sort of power, due to copyright concerns. And the implications here are interesting, as this suggests the company is now comfortable showing that their models have been trained on these films, which has all sorts of potential copyright implications, depending on how pending court cases turn out, but also that they're no longer being as precious with potential scandals related to how their models are used.
It's possible to apply all sorts of distinctive styles to existing images, then, including South Park and the Simpsons, but Studio Ghibli's style has become a meme since this new capability was deployed, and users have applied it to images ranging from existing memes to their own self-portrait avatars, to things like the planes crashing into the Twin Towers on 9/11, JFK's assassination, and famous mass-shootings and other murders.
It's also worth noting that the co-founder of Studio Ghibli, Hayao Miyazaki, has called AI-generated artwork "an insult to life itself." That so many people are using this kind of AI-generated filter on these images is a jarring sort of celebration, then, as the person behind that style probably wouldn't appreciate it; many people are using it because they love the style and the movies in which it was born so much, though. An odd moral quandary that's emerged as a result of these new AI-provided powers.
What I'd like to talk about today is another burgeoning controversy within the AI space that's perhaps even larger in implications, and which is landing on an unprepared culture and economy just as rapidly as these new image capabilities and memes.
—
In February of 2025, Andrej Karpathy, the former AI head at Tesla, a founding team member at OpenAI, and the founder of an impending new, education-focused project called Eureka Labs, coined the term "vibe coding" to refer to a trend he's noticed in himself and other developers, people who write code for a living, to develop new projects using code-assistant AI tools in a manner that essentially abstracts away the code, allowing the developer to rely more on vibes in order to get their project out the door, using plain English rather than code or even code-speak.
So while a developer would typically need to invest a fair bit of time writing the underlying code for a new app or website or video game, someone who's vibe coding might instead focus on a higher, more meta-level of the project, worrying less about the coding parts, and instead just telling their AI assistant what they want to do.
The AI then figures out the nuts and bolts, writes a bunch of code in seconds, and then the vibe coder can tweak the code, or have the AI tweak it for them, as they refine the concept, fix bugs, and get deeper into the nitty-gritty of things, all, again, in plain-spoken English.
There are now videos, posted in the usual places, all over YouTube and TikTok and such, where folks—some of whom are coders, some of whom are purely vibe coders, who wouldn't be able to program their way out of a cardboard box—produce entire functioning video games in a matter of minutes.
These games typically aren't very good, but they work. And reaching even that level of functionality would previously have taken days or weeks for an experienced, highly trained developer; now it takes mere minutes or moments, and can be achieved by the average, non-trained person who has a fundamental understanding of how to prompt AI to get what they want from these systems.
Ethan Mollick, who writes a fair bit on this subject and who keeps tabs on these sorts of developments in his newsletter, One Useful Thing, documented his attempts to make meaning from a pile of data he had sitting around, and which he hadn't made the time to dig through. Using plain English he was able to feed all that data to OpenAI's Deep Research model, interact with its findings, and further home in on meaningful directions suggested by the data.
He also built a simple game in which he drove a firetruck around a 3D city, trying to put out fires before a competing helicopter could do the same. He spent a total of about $13 in AI token fees to make the game, and he was able to do so despite not having any relevant coding expertise.
A guy named Pieter Levels, who's an experienced software engineer, was able to vibe-code a video game, which is a free-to-play, massively multiplayer online flying game, in just a month. Nearly all the code was written by Cursor and Grok 3, the former of which is a code-writing AI system, the latter a ChatGPT-like generalist AI agent, and he's been able to generate something like $100k per month in revenue from this game just 17 days post-launch.
Now an important caveat here is that, first, this game received a lot of publicity, because Levels is a well-known name in this space, and he made this game as part of a "Vibe Coding Game Jam," an event focused on exactly this type of AI-augmented programming, in which all of the entries had to be at least 80% AI-generated. But he's also a very skilled programmer and game-maker, so this isn't the sort of outcome the average person could expect from these sorts of tools.
That said, it's an interesting case study that suggests a few things about where this category of tools is taking us, even if it's not representative of all programming spaces and would-be programmers.
One prediction that's been percolating in this space for years, even before ChatGPT was released, but especially after generative AI tools hit the mainstream, is that many jobs will become redundant, and as a result many people, especially those in positions that are easily and convincingly replicated using such tools, will be fired. Because why would you pay twenty people $100,000 a year to do basic coding work when you can have one person working part-time with AI tools vibe-coding their way to approximately the same outcome?
It's a fair question, and it's one that pretty much every industry is asking itself right now.
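[Editor's note: to make the workflow described above concrete, here is a minimal sketch of a vibe-coding-style loop: a plain-English request goes in, generated code comes out, and any error is handed straight back for a fix. This is an illustration only, not code from any project mentioned in this episode; it assumes the OpenAI Python SDK and the GPT-4o model discussed earlier, and the toy request is made up.]

```python
# A minimal "vibe coding" loop (illustrative sketch, not from the episode).
# Assumes: `pip install openai` (v1+ SDK) and OPENAI_API_KEY set in the environment.
import subprocess
from openai import OpenAI

client = OpenAI()

def ask_for_code(request: str, previous_code: str = "", error: str = "") -> str:
    """Ask the model for a complete Python script, optionally including a failed attempt to fix."""
    prompt = request
    if error:
        prompt += (
            f"\n\nThe previous attempt:\n{previous_code}\n\nfailed with:\n{error}\n"
            "Return a corrected, complete script."
        )
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Reply with a single runnable Python script and nothing else."},
            {"role": "user", "content": prompt},
        ],
    )
    code = resp.choices[0].message.content
    # Strip a Markdown fence if the model added one.
    return code.strip().removeprefix("```python").removesuffix("```").strip()

request = "Write a terminal game where I guess a random number between 1 and 100."
code, error = "", ""
for _ in range(3):  # a human vibe coder would eyeball each attempt here
    code = ask_for_code(request, code, error)
    with open("game.py", "w") as f:
        f.write(code)
    # Cheap sanity check: does it at least compile? (Running it blindly is the risky part.)
    result = subprocess.run(["python", "-m", "py_compile", "game.py"], capture_output=True, text=True)
    if result.returncode == 0:
        print("game.py compiles; run it with: python game.py")
        break
    error = result.stderr  # hand the error straight back to the model
```

The essential move is the last line: instead of a human reading the traceback, the traceback is pasted back into the prompt, which is roughly the loop that tools like Cursor automate at much larger scale.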
And we've seen some early waves of firings based on this premise, most of which haven't gone great for the firing entity, as they've then had to backtrack and start hiring to fill those positions again—the software they expected to fill the gaps not quite there yet, and their offerings suffering as a consequence of that gambit.
Some are still convinced this is the way things are going, though, including people like Elon Musk, who, as part of his Department of Government Efficiency, or DOGE, efforts in the US government, is basically stripping things down to the bare minimum, in part to weaken agencies he doesn't like, but also, ostensibly at least, to reduce bloat and redundancy, the premise being that a lot of this work can be done by fewer people, and in some cases can be automated entirely using AI-based systems.
This was the premise of his mass firings at Twitter, now X, when he took over, and while there have been a lot of hiccups and issues resulting from that decision, the company is managing to operate, even if less optimally than before, with about 20% of the staff it had before he took over—something like 1,500 people compared to 7,500.
Now, there are different ways of looking at that outcome, and Musk's activities since that acquisition will probably color some of our perceptions of his ambitions and level of success with that job-culling, as well. But the underlying theory that a company can do even 90% as well as it did before with just a fifth of the workforce is a compelling argument to many people, and that includes folks running governments, but also those in charge of major companies with huge rosters of employees that make up the vast majority of their operating expenses.
A major concern about all this, though, is that even if this theory works in broader practice, and all these companies and governments can function well enough with a dramatically reduced staff using AI tools to augment their capabilities and output, we may find ourselves in a situation in which the folks using said tools are more and more commodified—they'll be less specialized and have less education and expertise in the relevant areas, so they can be paid less, basically, the tools doing more and the humans mostly being paid to prompt and manage them. And as a result we may find ourselves in a situation where these people don't know enough to recognize when the AI is doing something wrong or weird, and we may even reach a point where the abstraction is so complete that very few humans even know how this code works, which leaves us increasingly reliant on these tools, but also more vulnerable to problems should they fail at a basic level, at which point there may not be any humans left who are capable of figuring out what went wrong, since all the jobs that would incentivize the acquisition of such knowledge and skill will have long since disappeared.
As I mentioned in the intro, these tools are being applied to images, videos, music, and everything else, as well. Which means we could see vibe artists, vibe designers, vibe musicians and vibe filmmakers.
All of which is arguably good in the sense that these mediums become more accessible to more people, allowing more voices to communicate in more ways than ever before.
But it's also arguably worrying in the sense that more communication might be filtered through the capabilities of these tools—which, by the way, are predicated on previous artists and writers and filmmakers' work, arguably stealing their styles and ideas and regurgitating them, rather than doing anything truly original—and that could lead to less originality in these spaces, but also a similar situation in which people forget how to make their own films, their own art, their own writing; a capability drain that gets worse with each new generation of people who are incentivized to hand those responsibilities off to AI tools; we'll all become AI prompters, rather than all the things we are, currently.
This has been the case with many technologies over the years—how many blacksmiths do we have in 2025, after all? And how many people actually hand-code the 1s and 0s that all our coding languages eventually write for us, after we work at a higher, more human-optimized level of abstraction?
But because our existing economies are predicated on a certain type of labor and a certain number of people being employed to do said labor, even if those concerns ultimately don't end up being too big a deal, because the benefits are just that much more impactful than the downsides, and other incentives to develop these or similar skills and understandings arise, it's possible we could experience a moment, years or decades long, in which the whole of the employment market is disrupted, perhaps quite rapidly, leaving a lot of people without income and thus a lot fewer people who can afford the products and services that are generated more cheaply using these tools.
A situation that's ripe with potential for those in a position to take advantage of it, but also a situation that could be devastating to those reliant on the current state of employment and income—which is the vast, vast majority of human beings on the planet.
Show Notes
https://en.wikipedia.org/wiki/X_Corp
https://devclass.com/2025/03/26/the-paradox-of-vibe-coding-it-works-best-for-those-who-do-not-need-it/
https://www.wired.com/story/doge-rebuild-social-security-administration-cobol-benefits/
https://www.wired.com/story/anthropic-benevolent-artificial-intelligence/
https://arstechnica.com/tech-policy/2025/03/what-could-possibly-go-wrong-doge-to-rapidly-rebuild-social-security-codebase/
https://en.wikipedia.org/wiki/Vibe_coding
https://www.newscientist.com/article/2473993-what-is-vibe-coding-should-you-be-doing-it-and-does-it-matter/
https://nmn.gl/blog/dangers-vibe-coding
https://x.com/karpathy/status/1886192184808149383
https://simonwillison.net/2025/Mar/19/vibe-coding/
https://arstechnica.com/ai/2025/03/is-vibe-coding-with-ai-gnarly-or-reckless-maybe-some-of-both/
https://www.creativebloq.com/3d/video-game-design/what-is-vibe-coding-and-is-it-really-the-future-of-app-and-game-development
https://arstechnica.com/ai/2025/03/openais-new-ai-image-generator-is-potent-and-bound-to-provoke/
https://en.wikipedia.org/wiki/Studio_Ghibli
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit letsknowthings.substack.com/subscribe
Episode 51: Is it really possible to rebuild an entire website using A.I.? Matt Wolfe (https://x.com/mreflow) and Nathan Lands (https://x.com/NathanLands) dive into the evolving world of AI-driven development, sharing their insights on the latest buzzword, vibe coding. In this episode, Matt discusses how he is rebuilding the Future Tools website from scratch using AI, detailing the new business model emerging from these AI tools. They take listeners through the journey of leveraging tools like V0.dev, Cursor, and Windsurf to streamline the coding process, and explore how AI can help overcome challenges even with little to no prior coding experience. With AI taking center stage, the hosts delve into how it's revolutionizing their workflows, the concept of MCP, and the real-world application of vibe coding, like the successful venture of Pieter Levels' airplane game. Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd — Show Notes: (00:00) Vibe Coding & AI Business (04:50) Future Tools Rebuild and Updates (09:29) AI Over Human Workers: A Necessity (12:20) Deep Dive into Website Functions (14:08) Detailed Roadmap Integration Guide (17:01) Self-Coding vs. Developer Challenges (20:48) AI Empowering High Agency Mindset (24:52) Cursor MCPs: Enhanced Database Interaction (27:04) Automated Webpage Change Validation (32:03) Start with Existing Designs (34:56) Live Audience-Driven Development Trend (39:15) Iterative Community-Driven Product Development (41:39) Instant Video Transcription Tool (44:43) Subscribe for Future AI Episodes — Mentions: FutureTools: https://www.futuretools.io/ V0: https://v0.dev/ Cursor: https://www.cursor.com/ Windsurf: https://codeium.com/windsurf Pieter Levels: https://x.com/levelsio Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw — Check Out Matt's Stuff: • Future Tools - https://futuretools.beehiiv.com/ • Blog - https://www.mattwolfe.com/ • YouTube - https://www.youtube.com/@mreflow — Check Out Nathan's Stuff: Newsletter: https://news.lore.com/ Blog - https://lore.com/ The Next Wave is a HubSpot Original Podcast // Brought to you by Hubspot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano
ThePrimeagen (aka Michael Paulson) is a programmer who has educated, entertained, and inspired millions of people to build software and have fun doing it. Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep461-sc See below for timestamps, and to give feedback, submit questions, contact Lex, etc. CONTACT LEX: Feedback - give feedback to Lex: https://lexfridman.com/survey AMA - submit questions, videos or call-in: https://lexfridman.com/ama Hiring - join our team: https://lexfridman.com/hiring Other - other ways to get in touch: https://lexfridman.com/contact EPISODE LINKS: ThePrimeagen's X: https://twitter.com/ThePrimeagen ThePrimeagen's YouTube: https://youtube.com/ThePrimeTimeagen ThePrimeagen's Twitch: https://twitch.tv/ThePrimeagen ThePrimeagen's GitHub: https://github.com/theprimeagen ThePrimeagen's TikTok: https://tiktok.com/@theprimeagen ThePrimeagen's Coffee: https://www.terminal.shop/ SPONSORS: To support this podcast, check out our sponsors & get discounts: Invideo AI: AI video generator. Go to https://invideo.io/i/lexpod Shopify: Sell stuff online. Go to https://shopify.com/lex NetSuite: Business management software. Go to http://netsuite.com/lex BetterHelp: Online therapy and counseling. Go to https://betterhelp.com/lex AG1: All-in-one daily nutrition drinks. Go to https://drinkag1.com/lex OUTLINE: (00:00) - Introduction (10:27) - Love for programming (20:00) - Hardest part of programming (22:16) - Types of programming (29:54) - Life story (39:58) - Hardship (41:29) - High school (47:15) - Porn addiction (57:01) - God (1:12:44) - Perseverance (1:22:40) - Netflix (1:35:08) - Groovy (1:40:13) - Printf() debugging (1:46:35) - Falcor (1:56:05) - Breaking production (1:58:49) - Pieter Levels (2:03:19) - Netflix, Twitch, and YouTube infrastructure (2:15:22) - ThePrimeagen origin story (2:30:37) - Learning programming languages (2:39:40) - Best programming languages in 2025 (2:44:35) - Python (2:45:15) - HTML & CSS (2:46:05) - Bash (2:46:45) - FFmpeg (2:53:28) - Performance (2:56:00) - Rust (3:00:48) - Epic projects (3:14:12) - Asserts (3:23:26) - ADHD (3:31:34) - Productivity (3:35:58) - Programming setup (4:11:28) - Coffee (4:18:32) - Programming with AI (5:01:16) - Advice for young programmers (5:12:48) - Reddit questions (5:20:20) - God PODCAST LINKS: - Podcast Website: https://lexfridman.com/podcast - Apple Podcasts: https://apple.co/2lwqZIr - Spotify: https://spoti.fi/2nEwCF8 - RSS: https://lexfridman.com/feed/podcast/ - Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4 - Clips Channel: https://www.youtube.com/lexclips
Dutch programmer Pieter Levels, with no game development experience, created a multiplayer flight simulator in just 3 hours with the help of AI. Within a day thousands of people were playing the game, and after 17 days he is earning 80,000 euros per month from it. This is no exception – everywhere, people without programming knowledge are building impressive apps. Today we explore the AI tools that make this development revolution possible, how you can profit from them, and why experts warn about a possible loss of control over our creations. In other words: programming with AI. Knowledge Navigator: https://archive.org/details/knowledge-navigator Hyperland: https://archive.org/details/HyperlandBBSDouglasAdamsAndTomBaker1990 Braid Game: https://en.wikipedia.org/wiki/Braid_(video_game) If you'd like a talk on AI from Wietse or Alexander, that can be arranged. Email us at lezing@aireport.email. Want to stay up to date with the latest AI news and receive tips & tools twice a week to get the most out of AI (and join the webinar)? Then subscribe to our newsletter at aireport.email. Want to start using AI in your company today? Go to deptagency.com/aireport. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.aireport.email/subscribe
Episode 140: Alex breaks down Pieter Levels' AI-coded flying game, which hit $67,000 in monthly revenue in just 3 weeks. Here's what to expect: The stats & story behind Levels' flying game Key lessons to take from this business Understanding critiques of the game — Show Notes: (0:00) A note from our sponsor (2:26) Welcome back to Founder's Journal (3:09) Pieter Levels' AI-coded flying game (6:17) The power of AI in game development (8:00) The value of trusted distribution (13:24) Vibe marketing explained (15:58) Addressing critiques (20:15) Conclusion — Thanks to our presenting sponsor, Gusto. Head to www.gusto.com/alex — Episode Links: • Flying game - https://fly.pieter.com/ • Levels on X - https://x.com/levelsio • Andrej Karpathy - https://www.youtube.com/@AndrejKarpathy
Check Out Alex's Stuff: • storyarb - https://www.storyarb.com/ • CTA - https://www.creatortalentagency.co/ • X - https://x.com/businessbarista • Linkedin - https://www.linkedin.com/in/alex-lieberman/ Learn more about your ad choices. Visit megaphone.fm/adchoices
Camille Fournier is the author of The Manager's Path, which many consider the definitive guide for navigating one's career path in tech. Camille was previously the CTO of Rent the Runway, VP of Technology at Goldman Sachs, Head of Platform Engineering at Two Sigma, and Global Head of Engineering and Architecture at JPMorgan Chase. She is about to release her newest book, Platform Engineering: A Guide for Technical, Product, and People Leaders.
In our conversation, we discuss:
• What product managers do that annoys engineers
• Why major rewrites are a trap
• Why you should have fewer one-on-ones
• Strategies for organizing and working with platform teams
• Tips for new managers
• Advice for transitioning from individual contributor to manager
• Much more
—
Brought to you by:
• DX—A platform for measuring and improving developer productivity
• CommandBar—AI-powered user assistance for modern products and impatient users
• Coda—The all-in-one collaborative workspace
—
Find the transcript and show notes at: https://www.lennysnewsletter.com/p/engineering-leadership-camille-fournier
—
Where to find Camille Fournier:
• LinkedIn: https://www.linkedin.com/in/camille-fournier-9011812/
• Website: https://skamille.medium.com/
—
Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/
—
In this episode, we cover:
(00:00) Camille's background
(02:17) Common annoyances between PMs and engineers
(07:09) Avoiding the telephone game
(08:05) Hoarding ideas and over-engineering
(09:55) The importance of involving engineers in ideation
(11:37) The middle-person dilemma
(14:21) Rewriting systems: a big trap?
(20:40) Engineering leadership lessons
(36:02) Moving from IC to management
(40:32) One-on-one meetings
(45:10) Pushing beyond comfort zones
(45:27) Building a balanced work culture
(48:01) Effective time management strategies
(54:15) Advice for platform team success
(01:02:42) Platform team responsibilities
(01:04:43) When to form a platform team
(01:07:02) Thriving on a platform team
(01:12:48) AI corner
(01:17:03) Lightning round and final thoughts
—
Referenced:
• Platform Engineering: A Guide for Technical, Product, and People Leaders: https://www.amazon.com/Platform-Engineering-Technical-Product-Leaders/dp/1098153642/
• The Manager's Path: A Guide for Tech Leaders Navigating Growth and Change: https://www.amazon.com/Managers-Path-Leaders-Navigating-Growth/dp/1491973897
• 97 Things Every Engineering Manager Should Know: Collective Wisdom from the Experts: https://www.amazon.com/Things-Every-Engineering-Manager-Should/dp/1492050903
• Avoiding the Rewrite Trap: https://skamille.medium.com/avoiding-the-rewrite-trap-b1283b8dd39e
• Levelsio on X: https://x.com/levelsio
• Pieter Levels on the Lex Fridman Podcast: https://www.youtube.com/watch?v=oFtjKbXKqbg
• GraphQL: https://graphql.org/
• New Blue Sun by André 3000 on Spotify: https://open.spotify.com/album/33Ek6daAL3oXyQIV1uoItD
• Musk's 5 Steps to Cut Internal Bureaucracy at Tesla and SpaceX: https://icecreates.com/insight/musk-s-5-steps-to-cut-internal-bureaucracy-at-tesla-and-spacex-you-may-say-it-s-his-algorithm/
• Ian Nowland on LinkedIn: https://www.linkedin.com/in/inowland/
• Studio Pulls 'Megalopolis' Trailer Using Fake Quotes from Famed Movie Critics: https://www.huffpost.com/entry/studio-pulls-megalopolis-trailer-using-fake-quotes-from-famed-movie-critics_n_66c74046e4b0f1ca469413c7
• Claude 2: https://www.anthropic.com/news/claude-2
• What Got You Here Won't Get You There: How Successful People Become Even
More Successful: https://www.amazon.com/What-Got-Here-Wont-There/dp/1401301304
• When Things Fall Apart: Heart Advice for Difficult Times: https://www.amazon.com/When-Things-Fall-Apart-Difficult/dp/1611803438
• Alien: Romulus: https://www.imdb.com/title/tt18412256/
• Whoop: https://www.whoop.com
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.
—
Lenny may be an investor in the companies discussed. Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe
Well, what's been brewing? A lot is happening: Telegram's CEO arrested and then released, OnlyFans becomes a hot startup after its results, and indie hacker Pieter Levels blows up the DevOps paradigm with his VPS. In other words... another Brew, delivering, as usual, opinions, a review of events, and news in tech and beyond. Links:
Tiago Ferreira is the co-founder of Podsqueeze, an AI podcast tool that helps automate your podcast content. The tool, which helps you create show notes, newsletters, social posts and more, is currently doing $16k MRR and growing. You might also know Tiago from his podcast Wannabe Entrepreneur, where he's interviewed impressive founders including Pieter Levels.
Timestamps
00:00 - Intro
1:20 - Tiago Ferreira
01:06 - Tiago's background
02:25 - Lessons from failures
03:40 - Starting Podsqueeze - solving your own problem
05:53 - How Podsqueeze had a successful launch
06:40 - How to have a successful launch
08:10 - Growth tactics for Podsqueeze - SEO
13:03 - Future plans and exit
14:28 - Recommendations
Recommendations
Book: SaaS Playbook by Rob Walling
Podcast: Startups for the Rest of Us
Indie Hacker: Elston Baretto
My links
Twitter
Indie Bites Twitter
Indie Bites YouTube
Join the membership
Personal Website
2 Hour Podcast Course
PodPanda (hire me to edit your podcast)
This Indie Life Podcast
Sponsor - EmailOctopus
In Brick Talk's nineteenth episode, Braden Naquin and Melton Bell III discuss Coffee and Creatives, Midjourney's Web Interface, Pavel Durov Arrested, Telegram, and Lex Fridman's Interview with Pieter Levels. 00:00 - Intro 01:35 - Coffee and Creatives 06:40 - Midjourney's Web Interface 10:40 - Pavel Durov Arrested + Telegram 19:08 - Recommendations 23:35 - Lex Fridman's Interview with Pieter Levels 26:38 - Closing Remarks Brick By Brick brickbrick.us instagram.com/brickbrick.us Braden Naquin bradennaquin.com instagram.com/bradennaquin Melton Bell III meltonbell.com instagram.com/meltonbell3
Pieter Levels (aka levelsio on X) is a self-taught developer and entrepreneur who has designed, programmed, launched over 40 startups, many of which are highly successful. Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep440-sc See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc. Transcript: https://lexfridman.com/pieter-levels-transcript CONTACT LEX: Feedback - give feedback to Lex: https://lexfridman.com/survey AMA - submit questions, videos or call-in: https://lexfridman.com/ama Hiring - join our team: https://lexfridman.com/hiring Other - other ways to get in touch: https://lexfridman.com/contact EPISODE LINKS: Pieter's X: https://x.com/levelsio Pieter's Techno Optimist Shop: https://levelsio.com/ Indie Maker Handbook: https://readmake.com/ Nomad List: https://nomadlist.com Remote OK: https://remoteok.com Hoodmaps: https://hoodmaps.com SPONSORS: To support this podcast, check out our sponsors & get discounts: Shopify: Sell stuff online. Go to https://shopify.com/lex Motific: Generative ai deployment. Go to https://motific.ai AG1: All-in-one daily nutrition drinks. Go to https://drinkag1.com/lex MasterClass: Online classes from world-class experts. Go to https://masterclass.com/lexpod BetterHelp: Online therapy and counseling. Go to https://betterhelp.com/lex Eight Sleep: Temp-controlled smart mattress. Go to https://eightsleep.com/lex OUTLINE: (00:00) - Introduction (11:38) - Startup philosophy (19:09) - Low points (22:37) - 12 startups in 12 months (29:29) - Traveling and depression (42:08) - Indie hacking (46:11) - Photo AI (1:22:28) - How to learn AI (1:31:04) - Robots (1:39:21) - Hoodmaps (2:03:26) - Learning new programming languages (2:12:58) - Monetize your website (2:19:34) - Fighting SPAM (2:23:07) - Automation (2:34:33) - When to sell startup (2:37:26) - Coding solo (2:43:28) - Ship fast (2:52:13) - Best IDE for programming (3:01:43) - Andrej Karpathy (3:11:09) - Productivity (3:24:56) - Minimalism (3:33:41) - Emails (3:40:54) - Coffee (3:48:40) - E/acc (3:50:56) - Advice for young people PODCAST LINKS: - Podcast Website: https://lexfridman.com/podcast - Apple Podcasts: https://apple.co/2lwqZIr - Spotify: https://spoti.fi/2nEwCF8 - RSS: https://lexfridman.com/feed/podcast/ - Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4 - Clips Channel: https://www.youtube.com/lexclips
What does it take to build a successful company to $600K as an indie hacker? Jon Yongfook Cockle says successful founders need to have resilience. Jon Yongfook would know. In addition to attempting the well-known Indie Hackers challenge 12 Startups in 12 Months, he also went through 20-30 failed projects before striking the right product-market fit with Bannerbear, an image generation software. In this episode of Ahrefs Podcast, Jon Yongfook lays out: (0:00) Introduction (01:33) The state of the Indie Hackers community (03:19) Doing the 12 startups in 12 months challenge (09:24) Coming up with the idea for Bannerbear (14:06) Bannerbear design (17:02) Marketing week/coding week (19:44) Building in public (30:48) The current Bannerbear marketing strategy (34:18) Building free tools (41:04) Creating "vs" and "alternatives" pages (45:24) Modern growth hacking (48:53) Building and using ChatGPT wrappers (55:03) Using AI in marketing (56:37) Posting spicy takes on Twitter Where to find Jon Yongfook: X: https://x.com/yongfook Website: https://www.yongfook.com/ Where to find Tim: LinkedIn: https://www.linkedin.com/in/timsoulo/ X: https://x.com/timsoulo Website: https://www.timsoulo.com/ Referenced: Bannerbear: https://www.bannerbear.com/ Indie Hackers: https://www.indiehackers.com/ Pieter Levels: https://x.com/levelsio Danny Postma: https://x.com/dannypostmaa Ahrefs Backlink Checker: https://ahrefs.com/backlink-checker
Launching your podcast? Get one month of the Boost plan free at my podcast host Ausha (link: https://www.ausha.co/fr/) with the code FLAVIEXAUSHA1MOISBOOST. Dreaming of success as a solopreneur but don't know where to start?
Episode 582: Sam Parr ( https://twitter.com/theSamParr ) and Shaan Puri ( https://twitter.com/ShaanVP ) talk about the $1B idea that is basically inevitable, why stock exchanges are the smartest business model ever created and the female Pieter Levels who we're crowning our Hustler of the Week. Want to see Sam and Shaan's smiling faces? Head to the MFM YouTube Channel and subscribe - http://tinyurl.com/5n7ftsy5 — Show Notes: (0:00) Shaan's billion-dollar idea: Executive Check-ups (2:12) Mayo Clinic's $100M proof of concept (12:58) $15K/mo lead gen arbitrage (17:25) Pizza intelligence (19:41) Shaan's Mount Rushmore of indexes (24:54) The most profitable companies in the world (26:32) Why stock exchanges have the best business model (33:20) "Shoot your old idea in the head" - Eric Ries (41:20) Hustler of the week: The Female Pieter Levels (43:30) "Marketing is the tax you pay for an unremarkable product" — Links: • Prenuvo - https://www.prenuvo.com/ • Big Desk Energy playlist - https://shorturl.at/ajuG5 • Suno - http://suno.ai/ • ICE Index - https://www.ice.com/index • Long-term Stock Exchange - https://ltse.com/ • “Startup Lessons Learned” - https://www.startuplessonslearned.com/ • Branded Fruit - https://brandedfruit.com/ • Danielle Baskins - https://daniellebaskin.com/ • Get HubSpot's Free AI-Powered Sales Hub: enhance support, retention, and revenue all in one place https://clickhubspot.com/sym — Check Out Shaan's Stuff: Need to hire? You should use the same service Shaan uses to hire developers, designers, & Virtual Assistants → it's called Shepherd (tell ‘em Shaan sent you): https://bit.ly/SupportShepherd — Check Out Sam's Stuff: • Hampton - https://www.joinhampton.com/ • Ideation Bootcamp - https://www.ideationbootcamp.co/ • Copy That - https://copythat.com • Hampton Wealth Survey - https://joinhampton.com/wealth My First Million is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Arie Desormeaux // Editing by Ezra Bakker Trupiano
This Friday we're doing a special crossover event in SF with SemiAnalysis (previous guest!), and we will do a live podcast on site. RSVP here. Also join us on June 25-27 for the biggest AI Engineer conference of the year!
Replicate is one of the most popular AI inference providers, reporting over 2 million users as of their $40m Series B with a16z. But how did they get there?
The Definitive Replicate Story (warts and all)
Their overnight success took 5 years of building, and it all started with arXiv Vanity, which was a 2017 vacation project that scrapes arXiv PDFs and re-renders them into semantic web pages that reflow nicely with better typography and whitespace. From there, Ben and Andreas' idea was to build tools to make ML research more robust and reproducible by making it easy to share code artefacts alongside papers. They had previously created Fig, which made it easy to spin up dev environments; it was eventually acquired by Docker and turned into `docker-compose`, the industry standard way to define services from containerized applications.
2019: Cog
The first iteration of Replicate was a Fig-equivalent for ML workloads, which they called Cog; it made it easy for researchers to package all their work and share it with peers for review and reproducibility. But they found that researchers were terrible users: they'd do all this work for a paper, publish it, and then never return to it again.
"We talked to a bunch of researchers and they really wanted that.... But how the hell is this a business, you know, like how are we even going to make any money out of this? …So we went and talked to a bunch of companies trying to sell them something which didn't exist. So we're like, hey, do you want a way to share research inside your company so that other researchers or say like the product manager can test out the machine learning model? They're like, maybe. Do you want like a deployment platform for deploying models? Do you want a central place for versioning models? We were trying to think of lots of different products we could sell that were related to this thing…So we then got halfway through our YC batch. We hadn't built a product. We had no users. We had no idea what our business was going to be because we couldn't get anybody to like buy something which didn't exist. And actually there was quite a way through our, I think it was like two thirds the way through our YC batch or something. And we're like, okay, well we're kind of screwed now because we don't have anything to show at demo day."
The team graduated YCombinator with no customers, no product and nothing to demo - which was fine because demo day got canceled as the YC W'20 class graduated right into the pandemic. The team spent the next year exploring and building Covid tools.
2021: CLIP + GAN = PixRay
In 2021, OpenAI released CLIP. Overnight dozens of Discord servers got spun up to hack on CLIP + GANs. Unlike academic researchers, this community was constantly releasing new checkpoints and builds of models. PixRay was one of the first models being built on Replicate, and it quickly started taking over the community.
Chris Dixon has a famous 2010 post titled "The next big thing will start out looking like a toy"; image generation would have definitely felt like a toy in 2021, but it gave Replicate its initial boost.
2022: Stable Diffusion
In August 2022 Stable Diffusion came out, and all the work they had been doing to build this infrastructure for CLIP / GAN models became the best way for people to share their Stable Diffusion fine-tunes:
And like the first week we saw people making animation models out of it. We saw people make game texture models that use circular convolutions to make repeatable textures. We saw a few weeks later, people were fine tuning it so you could put your face in these models and all of these other ways. […] So tons of product builders wanted to build stuff with it. And we were just sitting in there in the middle, as the interface layer between all these people who wanted to build, and all these machine learning experts who were building cool models. And that's really where it took off. Incredible supply, incredible demand, and we were just in the middle.
(Stable Diffusion also spawned Latent Space as a newsletter)
The landing page paved the cowpath for the intense interest in diffusion model APIs.
2023: Llama & other multimodal LLMs
By 2023, Replicate's growing visibility in the Stable Diffusion indie hacker community came from top AI hackers like Pieter Levels and Danny Postma, each making millions off their AI apps:
Meta then released LLaMA 1 and 2 (our coverage of it), greatly pushing forward the SOTA open source model landscape. Demand for text LLMs and other modalities rose, and Replicate broadened its focus accordingly, culminating in an $18m Series A and $40m Series B from a16z (at a $350m valuation).
Building standards for the AI world
Now that the industry is evolving from toys to enterprise use cases, all these companies are working to set standards for their own space. We cover this at ~45 mins in the podcast. Some examples:
* LangChain has been trying to establish "chain" as the standard mental model when putting multiple prompts and models together, and the "LangChain Expression Language" to go with it. (Our episode with Harrison)
* LlamaHub for packaging RAG utilities. (Our episode with Jerry)
* Ollama's Modelfile to define runtimes for different model architectures. These are usually targeted at local inference.
* Cog (by Replicate) to create environments to which you can easily attach CUDA devices and make it easy to spin up inference on remote servers; a minimal example is sketched below.
* GGUF as the file type for ggml-based executors.
None of them have really broken out yet, but this is going to become a fiercer competition as the market matures.
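Since Cog comes up again in the interview below, here is roughly what a Cog-style predictor looks like. This is a simplified sketch based on Cog's public documentation rather than code from this episode, and the "model" is a trivial stand-in so the example stays self-contained.

```python
# predict.py -- simplified sketch of a Cog-style predictor (based on Cog's public docs).
# Together with a cog.yaml that pins the Python/CUDA environment, Cog builds this class
# into a container that exposes a standard HTTP prediction API.
from cog import BasePredictor, Input


class Predictor(BasePredictor):
    def setup(self):
        # Runs once when the container starts. A real predictor would load model
        # weights onto the GPU here; we fake it so the sketch stays self-contained.
        self.banner = "echoed by a stand-in model: "

    def predict(
        self,
        prompt: str = Input(description="Text prompt for the model"),
        repeats: int = Input(description="How many times to repeat the prompt", default=1, ge=1, le=10),
    ) -> str:
        # Runs on every request; the typed inputs/outputs are what give every model
        # packaged this way the same API (and auto-generated web form) on Replicate.
        return self.banner + " ".join([prompt] * repeats)
```

With a matching cog.yaml, something like `cog predict -i prompt="hello"` runs this in a container locally, and the same image can then be pushed to Replicate.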
Full Video Podcast
As a reminder, all Latent Space pods now come in full video on our YouTube, with bonus content that we cut for time!
Show Notes
* Ben Firshman
* Replicate
* Free $10 credit for Latent Space readers
* Andreas Jansson (Ben's co-founder)
* Charlie Holtz (Replicate's Hacker in Residence)
* Fig (now Docker Compose)
* Command Line Interface Guidelines (clig)
* Apple Human Interface Guidelines
* arXiv Vanity
* Open Interpreter
* PixRay
* SF Compute
* Big Sleep by Advadnoun
* VQGAN-CLIP by Rivers Have Wings
Timestamps
* [00:00:00] Introductions
* [00:01:17] Low latency is all you need
* [00:04:08] Evolution of CLIs
* [00:05:59] How building ArxivVanity led to Replicate
* [00:11:37] Making ML research replicable with containers
* [00:17:22] Doing YC in 2020 and pivoting to tools for COVID
* [00:20:22] Launching the first version of Replicate
* [00:25:51] Embracing the generative image community
* [00:28:04] Getting reverse engineered into an API product
* [00:31:25] Growing to 2 million users
* [00:34:29] Indie vs Enterprise customers
* [00:37:09] How Unsplash uses Replicate
* [00:38:29] Learnings from Docker that went into Cog
* [00:45:25] Creating AI standards
* [00:50:05] Replicate's compute availability
* [00:53:55] Fixing GPU waste
* [01:00:39] What's open source AI?
* [01:04:46] Building for AI engineers
* [01:06:41] Hiring at Replicate
This summary covers the full range of topics discussed throughout the episode, providing a comprehensive overview of the content and insights shared.
Transcript
Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.
Swyx [00:00:14]: Hey, and today we have Ben Firshman in the studio. Welcome Ben.
Ben [00:00:18]: Hey, good to be here.
Swyx [00:00:19]: Ben, you're a co-founder and CEO of Replicate. Before that, you were most notably founder of Fig, which became Docker Compose. You also did a couple of other things before that, but that's what a lot of people know you for. What should people know about you that, you know, outside of your, your sort of LinkedIn profile?
Ben [00:00:35]: Yeah. Good question. I think I'm a builder and tinkerer, like in a very broad sense. And I love using my hands to make things. So like I work on, you know, things may be a bit closer to tech, like electronics. I also like build things out of wood and I like fix cars and I fix my bike and build bicycles and all this kind of stuff. And there's so much, I think I've learned from transferable skills, from just like working in the real world to building things, building things in software. And you know, it's so much about being a builder, both in real life and, and in software that crosses over.
Swyx [00:01:11]: Is there a real world analogy that you use often when you're thinking about like a code architecture or problem?
Ben [00:01:17]: I like to build software tools as if they were something real. So I wrote this thing called the command line interface guidelines, which was a bit like sort of the Mac human interface guidelines, but for command line interfaces, I did it with the guy I created Docker Compose with and a few other people. And I think something in there, I think I described that your command line interface should feel like a big iron machine where you pull a lever and it goes clunk and like things should respond within like 50 milliseconds as if it was like a real life thing.
And like another analogy here is like in the real life, you know, when you press a button on an electronic device and it's like a soft switch and you press it and nothing happens and there's no physical feedback of anything happening, then like half a second later, something happens. Like that's how a lot of software feels, but instead like software should feel more like something that's real where you touch, you pull a physical lever and the physical lever moves, you know, and I've taken that lesson of kind of human interface to, to software a ton. You know, it's all about kind of low latency of feeling, things feeling really solid and robust, both the command lines and, and user interfaces as well.
Swyx [00:02:22]: And how did you operationalize that for Fig or Docker?
Ben [00:02:27]: A lot of it's just low latency. Actually, we didn't do it very well for Fig in the first place. We used Python, which was a big mistake where Python's really hard to get booting up fast because you have to load up the whole Python runtime before it can run anything. Okay. Go is much better at this where like Go just instantly starts.
Swyx [00:02:45]: You have to be under 500 milliseconds to start up?
Ben [00:02:48]: Yeah, effectively. I mean, I mean, you know, perception of human things being immediate is, you know, something like a hundred milliseconds. So anything like that is, is yeah, good enough.
Swyx [00:02:57]: Yeah. Also, I should mention, since we're talking about your side projects, well, one thing is I am maybe one of a few fellow people who have actually written something about CLI design principles because I was in charge of the Netlify CLI back in the day and had many thoughts. One of my fun thoughts, I'll just share it in case you have thoughts, is I think CLIs are effectively starting points for scripts that are then run. And the moment one of the script's preconditions are not fulfilled, typically they end. So the CLI developer will just exit the program. And the way that I designed, I really wanted to create the Netlify dev workflow was for it to be kind of a state machine that would resolve itself. If it detected a precondition wasn't fulfilled, it would actually delegate to a subprogram that would then fulfill that precondition, asking for more info or waiting until a condition is fulfilled. Then it would go back to the original flow and continue that. I don't know if that was ever tried or is there a more formal definition of it? Because I just came up with it randomly. But it felt like the beginnings of AI in the sense that when you run a CLI command, you have an intent to do something and you may not have given the CLI all the things that it needs to do, to execute that intent. So that was my two cents.
Ben [00:04:08]: Yeah, that reminds me of a thing we sort of thought about when writing the CLI guidelines, where CLIs were designed in a world where the CLI was really a programming environment and it's primarily designed for machines to use all of these commands and scripts. Whereas over time, the CLI has evolved to humans. It was back in a world where the primary way of using computers was writing shell scripts effectively. We've transitioned to a world where actually humans are using CLI programs much more than they used to. And the current sort of best practices about how Unix was designed, there's lots of design documents about Unix from the 70s and 80s, where they say things like, command line commands should not output anything on success.
It should be completely silent, which makes sense if you're using it in a shell script. But if a user is using that, it just looks like it's broken. If you type copy and it just doesn't say anything, you assume that it didn't work as a new user. I think what's really interesting about the CLI is that it's actually a really good, to your point, it's a really good user interface where it can be like a conversation, where it feels like you're, instead of just like you telling the computer to do this thing and either silently succeeding or saying, no, you did, failed, it can guide you in the right direction and tell you what your intent might be, and that kind of thing in a way that's actually, it's almost more natural to a CLI than it is in a graphical user interface because it feels like this back and forth with the computer, almost funnily like a language model. So I think there's some interesting intersection of CLIs and language models actually being very sort of closely related and a good fit for each other.
Swyx [00:05:59]: Yeah, I'll say one of the surprises from last year, I worked on a coding agent, but I think the most successful coding agent of my cohort was Open Interpreter, which was a CLI implementation. And I have chronically, even as a CLI person, I have chronically underestimated the CLI as a useful interface. You also developed arXiv Vanity, which you recently retired after a glorious seven years.
Ben [00:06:22]: Something like that.
Swyx [00:06:23]: Which is nice, I guess, HTML PDFs.
Ben [00:06:27]: Yeah, that was actually the start of where Replicate came from. Okay, we can tell that story. So when I quit Docker, I got really interested in science infrastructure, just as like a problem area, because it is like science has created so much progress in the world. The fact that we're, you know, can talk to each other on a podcast and we use computers and the fact that we're alive is probably thanks to medical research, you know. But science is just like completely archaic and broken and it's like 19th century processes that just happen to be copied to the internet rather than take into account that, you know, we can transfer information at the speed of light now. And the whole way science is funded and all this kind of thing is all kind of very broken. And there's just so much potential for making science work better. And I realized that I wasn't a scientist and I didn't really have any time to go and get a PhD and become a researcher, but I'm a tool builder and I could make existing scientists better at their job. And if I could make like a bunch of scientists a little bit better at their job, maybe that's the kind of equivalent of being a researcher. So one particular thing I dialed in on is just how science is disseminated in that all of these PDFs, quite often behind paywalls, you know, on the internet.
Swyx [00:07:34]: And that's a whole thing because it's funded by national grants, government grants, then they're put behind paywalls. Yeah, exactly.
Ben [00:07:40]: That's like a whole, yeah, I could talk for hours about that. But the particular thing we got dialed in on was, interestingly, these PDFs are also, there's a bunch of open science that happens as well. So math, physics, computer science, machine learning, notably, is all published on the archive, which is actually a surprisingly old institution.
Swyx [00:08:00]: Some random Cornell.
Ben [00:08:01]: Yeah, it was just like somebody in Cornell who started a mailing list in the 80s.
And then when the web was invented, they built a web interface around it. Like it's super old.
Swyx [00:08:11]: And it's like kind of like a user group thing, right? That's why they're all these like numbers and stuff.
Ben [00:08:15]: Yeah, exactly. Like it's a bit like something, yeah. That's where all basically all of math, physics and computer science happens. But it's still PDFs published to this thing. Yeah, which is just so infuriating. The web was invented at CERN, a physics institution, to share academic writing. Like there are figure tags, there are like author tags, there are heading tags, there are site tags. You know, hyperlinks are effectively citations because you want to link to another academic paper. But instead, you have to like copy and paste these things and try and get around paywalls. Like it's absurd, you know. And now we have like social media and things, but still like academic papers as PDFs, you know. This is not what the web was for. So anyway, I got really frustrated with that. And I went on vacation with my old friend Andreas. So we were, we used to work together in London on a startup, at somebody else's startup. And we were just on vacation in Greece for fun. And he was like trying to read a machine learning paper on his phone, you know, like we had to like zoom in and like scroll line by line on the PDF. And he was like, this is f*****g stupid. So I was like, I know, like this is something we discovered our mutual hatred for this, you know. And we spent our vacation sitting by the pool, like making latex to HTML, like converters, making the first version of Archive Vanity. Anyway, that then became a whole thing. And the story, we shut it down recently because they caught the eye of Archive. They were like, oh, this is great. We just haven't had the time to work on this. And what's tragic about the Archive, it's like this project of Cornell that's like, they can barely scrounge together enough money to survive. I think it might be better funded now than it was when we were, we were collaborating with them. And compared to these like scientific journals, it's just that this is actually where the work happens. But they just have a fraction of the money that like these big scientific journals have, which is just so tragic. But anyway, they were like, yeah, this is great. We can't afford to like do it, but do you want to like as a volunteer integrate arXiv Vanity into arXiv?
Swyx [00:10:05]: Oh, you did the work.
Ben [00:10:06]: We didn't do the work. We started doing the work. We did some. I think we worked on this for like a few months to actually get it integrated into arXiv. And then we got like distracted by Replicate. So a guy called Dan picked up the work and made it happen. Like somebody who works on one of the, the piece of the libraries that powers arXiv Vanity. Okay.
Swyx [00:10:26]: And the relationship with arXiv Sanity?
Ben [00:10:28]: None.
Swyx [00:10:30]: Did you predate them? I actually don't know the lineage.
Ben [00:10:32]: We were after, we both were both users of arXiv Sanity, which is like a sort of arXiv...
Ben [00:10:37]: Which is Andrej's RecSys on top of arXiv.
Ben [00:10:40]: Yeah. Yeah. And we were both users of that. And I think we were trying to come up with a working name for arXiv and Andreas just like cracked a joke of like, oh, let's call it arXiv Vanity. Let's make the papers look nice. Yeah. Yeah. And that was the working name and it just stuck.
Swyx [00:10:52]: Got it.
Ben [00:10:53]: Got it.
Alessio [00:10:54]: Yeah.
And then from there, tell us more about why you got distracted, right? So Replicate, maybe it feels like an overnight success to a lot of people, but you've been building this since 2019. Yeah.Ben [00:11:04]: So what prompted the start?Alessio [00:11:05]: And we've been collaborating for even longer.Ben [00:11:07]: So we created arXiv Vanity in 2017. So in some sense, we've been doing this almost six, seven years now, a classic seven year.Swyx [00:11:16]: Overnight success.Ben [00:11:17]: Yeah. Yes. We did arXiv Vanity and then worked on a bunch of surrounding projects. I was still really interested in science publishing at that point. And I'm trying to remember, because I tell a lot of the condensed story to people because I can't really tell a seven year history. So I'm trying to figure out the right. Oh, we got room. The right length.Swyx [00:11:35]: We want to nail the definitive Replicate story here.Ben [00:11:37]: One thing that's really interesting about these machine learning papers is that they're published on arXiv and a lot of them are actual fundamental research, so they should be prose describing a theory. But a lot of them are just running pieces of software that a machine learning researcher made that did something, you know. It was like an image classification model or something, and they managed to make an image classification model that was better than the existing state of the art. And they've made an actual running piece of software that does image segmentation. And then what they had to do is take that piece of software and write it up as prose and math in a PDF. And what's frustrating about that is, like, if you want to... So Andreas was a machine learning engineer at Spotify. And some of his job was pure research as well. Like he did a PhD and he was doing a lot of stuff internally. But part of his job was also being an engineer and taking some of these existing things that people have made and published and trying to apply them to actual problems at Spotify. And he was like, you know, you get given a paper which describes roughly how the model works. It's probably missing lots of crucial information. There's sometimes code on GitHub. More and more there's code on GitHub. But back then it was kind of relatively rare. But it's quite often just scrappy research code that didn't actually run. And, you know, there were maybe the weights that were on Google Drive, but they accidentally deleted the weights off Google Drive, you know, and it was really hard to take this stuff and actually use it for real things. We just started talking together about his problems at Spotify and I connected this back to my work at Docker as well. I was like, oh, this is what we created containers for. You know, we solved this problem for normal software by putting the thing inside a container so you could ship it around and it kept on running. So we were sort of hypothesizing about like, hmm, what if we put machine learning models inside containers so they could actually be shipped around and they could be defined in some production-ready format, and other researchers could run them to generate baselines, and people who wanted to actually apply them to real problems in the world could just pick up the container and run it, you know.
And we then thought... Normally in this part of the story I skip forward to, "and then we created Cog, this container stuff for machine learning models, and we created Replicate, the place for people to publish these machine learning models." But there's actually like two or three years between that. The thing we then got dialed into was, Andreas was like, what if there was a CI system for machine learning? One of the things he really struggled with as a researcher was generating baselines. So when he's writing a paper, he needs to get like five other models that are existing work and get them running.Swyx [00:14:21]: On the same evals.Ben [00:14:22]: Exactly, on the same evals so you can compare apples to apples, because you can't trust the numbers in the paper.Swyx [00:14:26]: So you can be Google and just publish them anyway.Ben [00:14:31]: So I think this was coming from the thinking of, there should be containers for machine learning, but why are people going to use that? Okay, maybe we can create a supply of containers by creating this useful tool for researchers. And the useful tool was: let's get researchers to package up their models and push them to a central place where we run a standard set of benchmarks across the models, so that you can trust those results and you can compare these models apples to apples. And for a researcher like Andreas, doing a new piece of research, he could trust those numbers and he could pull down those models, confirm it on his machine, use the standard benchmark to then measure his model, and you know, all this kind of stuff. And so we started building that. That's what we applied to YC with, got into YC and we started sort of building a prototype of this. And then this is where it all starts to fall apart. We were like, okay, that sounds great. And we talked to a bunch of researchers and they really wanted that and that sounds brilliant. That's a great way to create a supply of models on this research platform. But how the hell is this a business, you know? Like how are we even going to make any money out of this? And we're like, oh s**t, that's the real unknown here, like what the business is. So we thought it would be a really good idea to, like, okay, before we get too deep into this, let's try and reduce the risk of this turning into a business. So let's try and research what the business could be for this research tool, effectively. So we went and talked to a bunch of companies trying to sell them something which didn't exist. So we're like, hey, do you want a way to share research inside your company so that other researchers or say the product manager can test out the machine learning model? They're like, maybe. And we were like, do you want a deployment platform for deploying models? Do you want a central place for versioning models? Like we were trying to think of lots of different products we could sell that were related to this thing. And it was a terrible idea. Like we're not salespeople and people don't want to buy something that doesn't exist. I think some people can pull this off, but we were just, you know, a bunch of product and engineering people, and we just couldn't pull this off. So we then got halfway through our YC batch. We hadn't built a product. We had no users.
We had no idea what our business was going to be because we couldn't get anybody to like buy something which didn't exist. And actually there was quite a way through our, I think it was like two thirds the way through our YC batch or something. And we're like, okay, well we're kind of screwed now because we don't have anything to show at demo day. And then we then like tried to figure out, okay, what can we build in like two weeks that'll be something. So we like desperately tried to, I can't remember what we've tried to build at that point. And then two weeks before demo day, I just remember it was all, we were going down to Mountain View every week for dinners and we got called on to like an all hands Zoom call, which was super weird. We're like, what's going on? And they were like, don't come to dinner tomorrow. And we realized, we kind of looked at the news and we were like, oh, there's a pandemic going on. We were like so deep in our startup. We were just like completely oblivious to what was going on around us.Swyx [00:17:20]: Was this Jan or Feb 2020?Ben [00:17:22]: This was March 2020. March 2020. 2020.Swyx [00:17:25]: Yeah. Because I remember Silicon Valley at the time was early to COVID. Like they started locking down a lot faster than the rest of the US.Ben [00:17:32]: Yeah, exactly. And I remember, yeah, soon after that, like there was the San Francisco lockdowns and then like the YC batch just like stopped. There wasn't demo day and it was in a sense a blessing for us because we just kind ofSwyx [00:17:43]: In the normal course of events, you're actually allowed to defer to a future demo day. Yeah.Ben [00:17:51]: So we didn't even take any defer because it just kind of didn't happen.Swyx [00:17:55]: So was YC helpful?Ben [00:17:57]: Yes. We completely screwed up the batch and that was our fault. I think the thing that YC has become incredibly valuable for us has been after YC. I think there was a reason why we couldn't, didn't need to do YC to start with because we were quite experienced. We had done some startups before. We were kind of well connected with VCs, you know, it was relatively easy to raise money because we were like a known quantity. You know, if you go to a VC and be like, Hey, I made this piece of-Swyx [00:18:24]: It's Docker Compose for AI.Ben [00:18:26]: Exactly. Yeah. And like, you know, people can pattern match like that and they can have some trust, you know what you're doing. Whereas it's much harder for people straight out of college and that's where like YC sweet spot is like helping people straight out of college who are super promising, like figure out how to do that.Swyx [00:18:40]: No credentials.Ben [00:18:41]: Yeah, exactly. We don't need that. But the thing that's been incredibly useful for us since YC has been, this was actually, I think, so Docker was a YC company and Solomon, the founder of Docker, I think told me this. He was like, a lot of people underestimate the value of YC after you finish the batch. And his biggest regret was like not staying in touch with YC. I might be misattributing this, but I think it was him. And so we made a point of that. And we just stayed in touch with our batch partner, who Jared at YC has been fantastic.Ben [00:19:10]: Jared Friedman. All of like the team at YC, there was the growth team at YC when they were still there and they've been super helpful. And two things have been super helpful about that is like raising money, like they just know exactly how to raise money. 
And they've been super helpful during that process in all of our rounds, like we've done three rounds since we did YC and they've been super helpful during the whole process. And also just like reaching a ton of customers. So like the magic of YC is that you have all of, like there's thousands of YC companies, I think, on the order of thousands, I think. And they're all of your first customers. And they're like super helpful, super receptive, really want to like try out new things. You have like a warm intro to every one of them basically. And there's this mailing list where you can post about updates to your products, which is like really receptive. And that's just been fantastic for us. Like we've just like got so many of our users and customers through YC. Yeah.Swyx [00:20:00]: Well, so the classic criticism or the sort of, you know, pushback is people don't buy you because you are both from YC. But at least they'll open the email. Right. Like that's the... Okay.Ben [00:20:13]: Yeah. Yeah. Yeah.Swyx [00:20:16]: So that's been a really, really positive experience for us. And sorry, I interrupted with the YC question. Like you were, you make it, you just made it out of the YC, survived the pandemic.Ben [00:20:22]: I'll try and condense this a little bit. Then we started building tools for COVID weirdly. We were like, okay, we don't have a startup. We haven't figured out anything. What's the most useful thing we could be doing right now?Swyx [00:20:32]: Save lives.Ben [00:20:33]: So yeah. Let's try and save lives. I think we failed at that as well. We had a bunch of products that didn't really go anywhere. We kind of worked on, yeah, a bunch of stuff like contact tracing, which turned out didn't really be a useful thing. Sort of Andreas worked on like a door dash for like people delivering food to people who are vulnerable. What else did we do? The meta problem of like helping people direct their efforts to what was most useful and a few other things like that. It didn't really go anywhere. So we're like, okay, this is not really working either. We were considering actually just like doing like work for COVID. We have this decision document early on in our company, which is like, should we become a like government app contracting shop? We decided no.Swyx [00:21:11]: Because you also did work for the gov.uk. Yeah, exactly.Ben [00:21:14]: We had experience like doing some like-Swyx [00:21:17]: And the Guardian and all that.Ben [00:21:18]: Yeah. For like government stuff. And we were just like really good at building stuff. Like we were just like product people. Like I was like the front end product side and Andreas was the back end side. So we were just like a product. And we were working with a designer at the time, a guy called Mark, who did our early designs for Replicate. And we were like, hey, what if we just team up and like become and build stuff? And yeah, we gave up on that in the end for, I can't remember the details. So we went back to machine learning. And then we were like, well, we're not really sure if this is going to work. And one of my most painful experiences from previous startups is shutting them down. Like when you realize it's not really working and having to shut it down, it's like a ton of work and it's people hate you and it's just sort of, you know. So we were like, how can we make something we don't have to shut down? And even better, how can we make something that won't page us in the middle of the night? So we made an open source project. 
We made a thing which was an open source Weights and Biases, because we had this theory that like people want open source tools. There should be like an open source, like version control, experiment tracking like thing. And it was intuitive to us and we're like, oh, we're software developers and we like command line tools. Like everyone loves command line tools and open source stuff, but machine learning researchers just really didn't care. Like they just wanted to click on buttons. They didn't mind that it was a cloud service. It was all very visual as well, that you need lots of graphs and charts and stuff like this. So it wasn't right. Like it was right. We actually were building something that Andreas made at Spotify for just like saving experiments to cloud storage automatically, but other people didn't really want this. So we kind of gave up on that. And then that was actually originally called Replicate and we renamed that out of the way. So it's now called Keepsake and I think some people still use it. Then we sort of came back, we looped back to our original idea. So we were like, oh, maybe there was a thing in that thing we were originally sort of thinking about of like researchers sharing their work and containers for machine learning models. So we just built that. And at that point we were kind of running out of the YC money. So we were like, okay, this like feels good though. Let's like give this a shot. So that was the point we raised a seed round. We raised seed round. Pre-launch. We raised pre-launch and pre-team. It was an idea basically. We had a little prototype. It was just an idea and a team. But we were like, okay, like, you know, bootstrapping this thing is getting hard. So let's actually raise some money. Then we made Cog and Replicate. It initially didn't have APIs, interestingly. It was just the bit that I was talking about before of helping researchers share their work. So it was a way for researchers to put their work on a webpage such that other people could try it out and so that you could download the Docker container. We cut the benchmarks thing of it because we thought that was just like too complicated. But it had a Docker container that like, you know, Andreas in a past life could download and run with his benchmark and you could compare all these models apples to apples. So that was like the theory behind it. That kind of started to work. It was like still when like, you know, it was long time pre-AI hype and there was lots of interesting stuff going on, but it was very much in like the classic deep learning era. So sort of image segmentation models and sentiment analysis and all these kinds of things, you know, that people were using, that we're using deep learning models for. And we were very much building for research because all of this stuff was happening in research institutions, you know, the sort of people who'd be publishing to archive. So we were creating an accompanying material for their models, basically, you know, they wanted a demo for their models and we were creating a company material for it. What was funny about that is they were like not very good users. Like they were, they were doing great work obviously, but, but the way that research worked is that they, they just made like one thing every six months and they just fired and forget it, forgot it. Like they, they published this piece of paper and like, done, I've, I've published it. So they like output it to Replicate and then they just stopped using Replicate. 
You know, they were like once-every-six-months users and that wasn't great for us, but we stumbled across this early community. This was early 2021 when OpenAI created CLIP and people started smushing CLIP and GANs together to produce image generation models. And this started with, you know, just a bunch of tinkerers on Discord, basically. There was an early model called Big Sleep by Advadnoun. And then there was VQGAN+CLIP, which was a bit more popular, by Rivers Have Wings. And it was all just people tinkering on stuff in Colabs and it was very dynamic and it was people just making copies of Colabs and playing around with things and forking them. And to me, I saw this and I was like, oh, this feels like open source software, so much more than the research world where people are publishing these papers.Swyx [00:25:48]: You don't know their real names and it's just like a Discord.Ben [00:25:51]: Yeah, exactly. But crucially, it was like people were tinkering and forking and things were moving really fast and it just felt like this creative, dynamic, collaborative community in a way that research wasn't really, like it was still stuck in this kind of six month publication cycle. So we just kind of latched onto that and started building for this community. And you know, a lot of those early models were published on Replicate. I think the first one that was really primarily on Replicate was one called Pixray, which was sort of mid 2021 and it had a really cool pixel art output, but it also produced more general images, you know; they weren't crisp images, but they were quite aesthetically pleasing, like some of these early image generation models. And you know, that was published primarily on Replicate and then a few other models around that were published on Replicate. And that's where we really started to find our early community and where we really found like, oh, we've actually built a thing that people want, and they were great users as well. And people really wanted to try out these models. Lots of people were running the models on Replicate. We still didn't have APIs though, interestingly, and this is like another really complicated part of the story. We had no idea what a business model was still at this point. I don't think people could even pay for it. You know, it was just these web forms where people could run the model.Swyx [00:27:06]: Just for historical interest, which Discords were they and how did you find them? Was this the LAION Discord? Yeah, LAION. This is Eleuther.Ben [00:27:12]: Eleuther, yeah. It was the Eleuther one. These two, right? There was a channel where VQGAN+CLIP, this was early 2021, was set up as a Discord bot. I just remember being completely captivated by this thing. I was just playing around with it all afternoon in Discord, and then, oh s**t, it's 2am. You know, yeah.Swyx [00:27:33]: This is the beginnings of Midjourney.Ben [00:27:34]: Yeah, exactly. And Stability. It was the start of Midjourney. And you know, it's where that kind of user interface came from. Like what's beautiful about the user interface is you could see what other people were doing. And you could riff off other people's ideas. And it was just so much fun to play around with this in a channel full of a hundred people. And yeah, that just completely captivated me and I'm like, okay, this is something, you know.
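For readers who haven't seen the trick Ben is describing, here is a deliberately minimal sketch of CLIP guidance, assuming PyTorch and the OpenAI `clip` package are installed. Big Sleep and VQGAN+CLIP steered a GAN's latent with this loss; to keep the example self-contained it optimizes raw pixels instead, and the prompt, step count, and learning rate are placeholders rather than anything from those projects.

```python
# Minimal sketch of CLIP-guided image generation.
# Assumes: pip install torch pillow, plus OpenAI CLIP
# (pip install git+https://github.com/openai/CLIP.git).
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model = model.float()            # keep everything in fp32 for simplicity
model.requires_grad_(False)      # only the pixels are trainable

prompt = "a pixel art landscape at sunset"      # placeholder prompt
with torch.no_grad():
    text = model.encode_text(clip.tokenize([prompt]).to(device))
    text = text / text.norm(dim=-1, keepdim=True)

# Learnable image: logits squashed into [0, 1] RGB at CLIP's 224x224 input size.
pixels = torch.randn(1, 3, 224, 224, device=device, requires_grad=True)
mean = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
std = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)
optimizer = torch.optim.Adam([pixels], lr=0.05)

for step in range(300):
    image = torch.sigmoid(pixels)                       # keep values in [0, 1]
    feats = model.encode_image((image - mean) / std)    # CLIP's normalization
    feats = feats / feats.norm(dim=-1, keepdim=True)
    loss = -(feats * text).sum()                        # maximize cosine similarity
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if step % 50 == 0:
        print(f"step {step}: CLIP similarity {-loss.item():.3f}")

# Save whatever the optimization produced.
array = (torch.sigmoid(pixels)[0].permute(1, 2, 0).detach().cpu().numpy() * 255).astype("uint8")
Image.fromarray(array).save("clip_doodle.png")
```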
So like we should get these things on Replicate. Yeah, that's where that all came from.Swyx [00:28:00]: And then you moved on to, so was it APIs next or was it Stable Diffusion next?Ben [00:28:04]: It was APIs next. And the APIs happened because one of our users, our web form had an internal API for making the web form work, like an API that was called from JavaScript. And somebody reverse engineered that to start generating images with a script. You know, they did the Web Inspector, Copy as cURL thing and figured out what the API request was. And it wasn't secured or anything.Swyx [00:28:28]: Of course not.Ben [00:28:29]: They started generating a bunch of images and we got tons of traffic and, like, what's going on? And I think the usual reaction to that would be like, hey, you're abusing our API, and to shut them down. And instead we're like, oh, this is interesting. Like people want to run these models. So we documented the API, like our internal API, in a Notion document and messaged this person being like, hey, you seem to have found our API. Here's the documentation. That'll be a thousand bucks a month, please, with a Stripe form we just clicked some buttons to make. And they were like, sure, that sounds great. So that was our first customer.Swyx [00:29:05]: A thousand bucks a month.Ben [00:29:07]: It was a surprising amount of money. That's not casual. It was on the order of a thousand bucks a month.Swyx [00:29:11]: So was it a business?Ben [00:29:13]: It was the creator of Pixray. He generated NFT art. And so he made a bunch of art with these models and was, you know, selling these NFTs effectively. And I think lots of people in his community were doing similar things. And he then referred us to other people who were also generating NFTs with these models. We started our API business. Yeah. Then we made an official API and actually added some billing to it. So it wasn't just a fixed fee.Swyx [00:29:40]: And now people think of you as the hosted models API business. Yeah, exactly.Ben [00:29:44]: But that just turned out to be our business, you know. But what ended up being beautiful about this is it was really fulfilling. Like the original goal of what we wanted to do is that we wanted to make this research that people were making accessible to other people and for it to be used in the real world. And this was ultimately the right way to do it, because all of these people making these generative models could publish them to Replicate, and they wanted a place to publish it. And software engineers, you know, like myself, like I'm not a machine learning expert, but I want to use this stuff, could just run these models with a single line of code. And we thought, oh, maybe the Docker image is enough, but it's actually super hard to get the Docker image running on a GPU and stuff. So it really needed to be the hosted API for this to work and to make it accessible to software engineers. And we just wound our way to this. Yeah.Swyx [00:30:30]: Two years to the first paying customer. Yeah, exactly.Alessio [00:30:33]: Did you ever think about becoming Midjourney during that time? You have like so much interest in image generation.Swyx [00:30:38]: I mean, you're doing fine for the record, but, you know, it was right there, you were playing with it.Ben [00:30:46]: I don't think it was our expertise.
Like I think our expertise was DevTools rather than like Midjourney is almost like a consumer products, you know? Yeah. So I don't think it was our expertise. It certainly occurred to us. I think at the time we were thinking about like, oh, maybe we could hire some of these people in this community and make great models and stuff like this. But we ended up more being at the tooling. Like I think like before I was saying, like I'm not really a researcher, but I'm more like the tool builder, the behind the scenes. And I think both me and Andreas are like that.Swyx [00:31:09]: I think this is an illustration of the tool builder philosophy. Something where you latch on to in DevTools, which is when you see people behaving weird, it's not their fault, it's yours. And you want to pave the cow paths is what they say, right? Like the unofficial paths that people are making, like make it official and make it easy for them and then maybe charge a bit of money.Alessio [00:31:25]: And now fast forward a couple of years, you have 2 million developers using Replicate. Maybe more. That was the last public number that I found.Ben [00:31:33]: It's 2 million users. Not all those people are developers, but a lot of them are developers, yeah.Alessio [00:31:38]: And then 30,000 paying customers was the number late in space runs on Replicate. So we had a small podcaster and we host a whisper diarization on Replicate. And we're paying. So we're late in space in the 30,000. You raised a $40 million dollars, Series B. I would say that maybe the stable diffusion time, August 22, was like really when the company started to break out. Tell us a bit about that and the community that came out and I know now you're expanding beyond just image generation.Ben [00:32:06]: Yeah, like I think we kind of set ourselves, like we saw there was this really interesting image, generative image world going on. So we kind of, you know, like we're building the tools for that community already, really. And we knew stable diffusion was coming out. We knew it was a really exciting thing, you know, it was the best generative image model so far. I think the thing we underestimated was just like what an inflection point it would be, where it was, I think Simon Willison put it this way, where he said something along the lines of it was a model that was open source and tinkerable and like, you know, it was just good enough and open source and tinkerable such that it just kind of took off in a way that none of the models had before. And like what was really neat about stable diffusion is it was open source so you could like, compared to like Dali, for example, which was like sort of equivalent quality. And like the first week we saw like people making animation models out of it. We saw people make like game texture models that like use circular convolutions to make repeatable textures. We saw, you know, a few weeks later, like people were fine tuning it so you could make, put your face in these models and all of these other-Swyx [00:33:10]: Textual inversion.Ben [00:33:11]: Yep. Yeah, exactly. That happened a bit before that. And all of this sort of innovation was happening all of a sudden. And people were publishing on Replicate because you could just like publish arbitrary models on Replicate. So we had this sort of supply of like interesting stuff being built. But because it was a sufficiently good model, there was also just like a ton of people building with it. They were like, oh, we can build products with this thing. 
And this was like about the time where people were starting to get really interested in AI. So tons of product builders wanted to build stuff with it. And we were just sitting there in the middle, like the interface layer between all these people who wanted to build and all these machine learning experts who were building cool models. And that's really where it took off. There was just incredible supply, incredible demand, and we were just in the middle. And then, yeah, since then, we've just kind of grown and grown really. And we've been building a lot for the indie hacker community, these individual tinkerers, but also startups and a lot of large companies as well who are sort of exploring and building AI things. Then kind of the same thing happened in the middle of last year with language models and Llama 2, where the same kind of Stable Diffusion effect happened with Llama. And Llama 2 was our biggest week of growth ever because tons of people wanted to tinker with it and run it. And you know, since then we've just been seeing a ton of growth in language models as well as image models. Yeah. We're just kind of riding a lot of the interest that's going on in AI and all the people building in AI, you know. Yeah.Swyx [00:34:29]: Kudos. Right place, right time. But also, you know, took a while to position for the right place before the wave came. I'm curious if you have any insights on these different markets. So Pieter Levels, notably very loud person, very picky about his tools. I wasn't sure actually if he used you. He does. So you mentioned him in your Series B blog post, and Danny Postma as well, his competitor, all in that wave. What are their needs versus, you know, the more enterprise or B2B type needs? Did you come to a decision point where you're like, okay, you know, how serious are these indie hackers versus the actual businesses that are bigger and perhaps better customers because they're less churny?Ben [00:35:04]: They're surprisingly similar, because I think a lot of people right now want to use and build with AI, but they're not AI experts and they're not infrastructure experts either. So they want to be able to use this stuff without having to figure out all the internals of the models and, you know, touch PyTorch and whatever. And they also don't want to be setting up and booting up servers. And that's the same all the way from indie hackers just getting started, because obviously you just want to get started as quickly as possible, all the way through to large companies who want to be able to use this stuff but don't have all of the experts on staff. You know, big companies like Google and so on do actually have a lot of experts on staff, but the vast majority of companies don't. And they're all software engineers who want to be able to use this AI stuff, but they just don't know how to use it. And it's like, you really need to be an expert and it takes a long time to learn the skills to be able to use that. So they're surprisingly similar in that sense. I think it's also kind of unfair on the indie community: they're not churny or spiky, surprisingly. They're building real established businesses, which is like, kudos to them, building these really large, sustainable businesses, often just as solo developers.
And it's kind of remarkable how they can do that actually, and it's a credit to a lot of their product skills. And you know, we're just there to help them, being like their machine learning team effectively, to help them use all of this stuff. A lot of these indie hackers are some of our largest customers, alongside some of our biggest customers that you would think would be spending a lot more money than them, but yeah.Swyx [00:36:35]: And we should name some of these. So you have them on your landing page: you have BuzzFeed, you have Unsplash, Character AI. What do they power? What can you say about their usage?Ben [00:36:43]: Yeah, totally. It's kind of various things.Swyx [00:36:46]: Well, I mean, I'm naming them because they're on your landing page. So you have logo rights. It's useful for people to see. Like, I'm not imaginative. Monkey see, monkey do, right? Like if I see someone doing something that I want to do, then I'm like, okay, Replicate's great for that.Ben [00:37:00]: Yeah, yeah, yeah.Swyx [00:37:01]: So that's what I think about case studies on company landing pages, is that it's just a way of explaining like, yep, this is something that we are good for. Yeah, totally.Ben [00:37:09]: I mean, these companies are doing things all the way up and down the stack at different levels of sophistication. So Unsplash, for example, they actually publicly posted this story on Twitter where they're using BLIP to annotate all of the images in their catalog. So you know, they have lots of images in the catalog and they want to create a text description of them so you can search for them. And they're annotating images with, you know, an off-the-shelf open source model. You know, we have this big library of open source models that you can run, and we've got lots of people running these open source models off the shelf. And then most of our larger customers are doing more sophisticated stuff. So they're fine-tuning the models, they're running completely custom models on us. A lot of these larger companies are using us for a lot of their, you know, inference, but it's a lot of custom models and them writing the Python themselves because they've got machine learning experts on the team. And they're using us for their inference infrastructure, effectively. And so it's lots of different levels of sophistication, where some people are using these off-the-shelf models. Some people are fine-tuning models. So like, Pieter Levels is a great example, where a lot of his products are based off fine-tuning image models, for example. And then we've also got larger customers who are just using us as infrastructure, effectively. So yeah, it's all things up and down the stack.Alessio [00:38:29]: Let's talk a bit about Cog and the technical layer. So there are a lot of GPU clouds. I think people have different pricing points. And I think everybody tries to offer a different developer experience on top of it, which then lets you charge a premium. Why did you want to create Cog? You worked at Docker. What were some of the issues with traditional container runtimes?
And maybe yeah, what were you surprised with as you built it?Ben [00:38:54]: Cog came right from the start, actually, when we were thinking about this, you know, evaluation, the sort of benchmarking system for machine learning researchers, where we wanted researchers to publish their models in a standard format that was guaranteed to keep on running, that you could replicate the results of; that's where the name came from. And we realized that we needed something like Docker to make that work, you know. And I think it was just natural from my point of view that, obviously, that should be open source, that we should try and create some kind of open standard here that people can share. Because if more people use this format, then that's great for everyone involved. I think the magic of Docker is not really in the software. It's just the standard that people have agreed on: here are a bunch of keys for a JSON document, basically. And you know, that was the magic of the metaphor of real containerization as well. It's not the containers that are interesting. It's just the size and shape of the damn box, you know. And it's a similar thing here, where really we just wanted to get people to agree on, like, this is what a machine learning model is. This is how a prediction works. This is what the inputs are, this is what the outputs are. So Cog is really just a Docker container that attaches to a CUDA device, if it needs a GPU, and has an OpenAPI specification as a label on the Docker image. And the OpenAPI specification defines the interface for the machine learning model, like the inputs and outputs effectively, or the params in machine learning terminology. And you know, we just wanted to get people to kind of agree on this thing. And it's general purpose enough; like, some of the existing things were at the graph level, but we really wanted something general purpose enough that you could just put anything inside this, so it was future compatible and it was just arbitrary software. And you know, it'd be future compatible with future inference servers and future machine learning model formats and all this kind of stuff. So that was the intent behind it. It just came naturally that we wanted to define this format. And that's been really working for us. Like a bunch of people have been using Cog outside of Replicate, which was kind of our original intention, like this should be how machine learning is packaged and how people should use it. Like it's common to use Cog in situations where maybe they can't use the SaaS service because, I don't know, they're in a big company and they're not allowed to use a SaaS service, but they can use Cog internally still. And they can download the models from Replicate and run them internally in their org, which we've been seeing happen. And that works really well. People who want to build custom inference pipelines, but don't want to reinvent the world, can use Cog off the shelf and use it as a component in their inference pipelines. We've been seeing tons of usage like that and it's just been kind of happening organically. We haven't really been trying, you know, but it's there if people want it and we've been seeing people use it. So that's great. Yeah.
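To make the format Ben just described concrete: based on Cog's public documentation, a packaged model is roughly a `predict.py` with typed inputs and outputs plus a `cog.yaml` declaring the environment, which Cog builds into a Docker image whose OpenAPI schema describes the model's interface. A minimal sketch follows; the "model" logic here is a placeholder, not any particular Replicate model.

```python
# predict.py: a minimal Cog predictor, per Cog's documented interface.
# The accompanying cog.yaml would declare the environment, roughly:
#
#   build:
#     python_version: "3.11"
#     python_packages:
#       - "torch==2.1.0"
#   predict: "predict.py:Predictor"
#
# `cog build` turns this into a Docker image, and `cog predict -i prompt="hi"`
# runs it locally; the image carries an OpenAPI schema describing these
# inputs and outputs. The "model" below is a placeholder that echoes its input.
from cog import BasePredictor, Input


class Predictor(BasePredictor):
    def setup(self) -> None:
        # Runs once per container start; a real model would load weights here.
        self.prefix = "echo: "

    def predict(
        self,
        prompt: str = Input(description="Text to run through the model"),
        repeats: int = Input(description="How many times to repeat it", default=1, ge=1, le=10),
    ) -> str:
        # A real predictor would run inference; this stands in for it.
        return self.prefix + " ".join([prompt] * repeats)
```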
So a lot of it is just sort of philosophical, of like, this is how it should work from my experience at Docker, you know, and there's just a lot of value in the core being open, I think, and that other people can share it and it's an integration point. So, you know, if Replicate, for example, wanted to work with a testing system, like a CI system or whatever, we can just interface at the Cog level: that system just needs to take Cog models and then you can test your models on that CI system before they get deployed to Replicate. And it's just a format that we can get everyone to agree on, you know.Alessio [00:41:55]: What do you think, I guess, Docker got wrong? Because if I look at a Docker Compose and a Cog definition, first of all, Cog is kind of like the Dockerfile plus the Compose, versus in Docker Compose, you're just exposing the services. And also Docker Compose is very ports-driven, versus you have the actual, you know, predict, this is what you have to run.Ben [00:42:16]: Yeah.Alessio [00:42:17]: Any learnings and maybe tips for other people building container-based runtimes, like how much should you separate the API services versus the image building, or how much you want to build them together?Ben [00:42:29]: I think it was coming from two sides. We were thinking about the design from the point of view of user needs, what are their problems and what problems can we solve for them, but also what the interface should be for a machine learning model. And it was the combination of those two things that led us to this design. So the thing I talked about before was a little bit of the interface around the machine learning model. So we realized that we wanted it to be general purpose. We wanted to be at the JSON, human-readable level rather than the tensor level. So it was an OpenAPI specification that wrapped a Docker container. And that's where that design came from. And it's really just a wrapper around Docker. So we were kind of standing on shoulders there, but Docker is too low level. It's just arbitrary software. So we wanted to have an OpenAPI specification that defined the function, effectively, that is the machine learning model, but also how that function is written, how that function is run, which is all defined in code and stuff like that. So it's a bunch of abstraction on top of Docker to make that work. And that's where that design came from. But the core problems we were solving for users were that Docker is really hard to use and productionizing machine learning models is really hard. So on the first part of that, we knew we couldn't use Dockerfiles. Like Dockerfiles are hard enough for software developers to write. I'm saying this with love as somebody who worked on Docker and on Dockerfiles, but it's really hard to use. And you need to know a bunch about Linux, basically, because you're running a bunch of CLI commands. You need to know a bunch about Linux and best practices and how apt works and all this kind of stuff. So we're like, OK, we can't be at that level. We need something that machine learning researchers will be able to understand, like people who are used to Colab notebooks. And what they understand is: I need this version of Python. I need these Python packages. And somebody told me to apt-get install something. You know? If there was sudo in there, I don't really know what that means.
So we tried to create a format that was at that level, and that's what cog.yaml is. And we were really trying to imagine, what is that machine learning researcher going to understand, you know, and trying to build for them. Then the productionizing machine learning models thing is like, OK, how can we package up all of the complexity of productionizing machine learning models, like picking CUDA versions, hooking it up to GPUs, writing an inference server, defining a schema, doing batching, all of these really gnarly things that everyone does again and again, and just, you know, provide that as a tool. And that's where that side of it came from. So it's combining those user needs with, you know, the sort of world need for a common standard for what a machine learning model is. And that's how we thought about the design. I don't know whether that answers the question.Alessio [00:45:12]: Yeah. So your idea was like, hey, you really want what Docker stands for in terms of standard, but you actually don't want people to do all the work that goes into Docker.Ben [00:45:22]: It needs to be higher level, you know?Swyx [00:45:25]: So I want to, for the listener, you're not the only standard that is out there. As with any standard, there must be 14 of them. You are surprisingly friendly with Ollama, who are your former colleagues from Docker, who came out with the Modelfile. Mozilla came out with Llamafile. And then I don't know if this is in the same category even, but I'm just going to throw it in there: Hugging Face has the Transformers and Diffusers libraries, which are a way of disseminating models that obviously people use. How would you compare and contrast your approach with Cog versus all these?Ben [00:45:53]: It's kind of complementary, actually, which is kind of neat, in that a lot of Transformers, for example, is lower level than Cog. So it's a Python library effectively, but you still need to like...Swyx [00:46:04]: Expose them.Ben [00:46:05]: Yeah. You still need to turn that into an inference server. You still need to install the Python packages and that kind of thing. So lots of Replicate models are Transformers models and Diffusers models inside Cog, you know? So that's the level it sits at. So it's very complementary in some sense. We're kind of working on integration with Hugging Face such that you can deploy models from Hugging Face, as Cog models and stuff like that, to Replicate. And some of these things like Llamafile and what Ollama are working on are also very complementary, in that they're doing a lot of the sort of running these things locally on laptops, which is not a thing that works very well with Cog. Like Cog is really designed around servers and attaching to CUDA devices and NVIDIA GPUs and this kind of thing. So we're actually, you know, figuring out ways that those things can be interoperable, because, you know, they should be and they are quite complementary, and you should be able to take a model on Replicate and run it on your local machine. You should be able to take a model on your machine, you know, and run it in the cloud.Swyx [00:47:02]: Is the base layer something like, is it at the GGUF level, which by the way, I need to get a primer on the different formats that have emerged, or is it at the star dot file level, which is Modelfile, Llamafile, whatever, or is it at the Cog level?
I don't know, to be honest.Ben [00:47:16]: And I think this is something we still have to figure out. There's a lot yet, like exactly where those lines are drawn. Don't know exactly. I think this is something we're trying to figure out ourselves, but I think there's certainly a lot of promise about these systems interoperating. We just want things to work together. You know, we want to try and reduce the number of standards. So the more, the more these things can interoperate and, you know
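As a postscript to the conversation above: the "run these models with a single line of code" experience Ben described is roughly what Replicate's official Python client provides today. A minimal sketch, assuming the `replicate` package is installed and a `REPLICATE_API_TOKEN` is set in the environment; the model identifier and input below are placeholders, not a specific model.

```python
# Minimal sketch of calling a hosted model on Replicate with the official
# Python client (assumes: pip install replicate, REPLICATE_API_TOKEN set).
# The model identifier is a placeholder: swap in a real "owner/name:version"
# from replicate.com before running.
import replicate

output = replicate.run(
    "owner/some-image-model:version-id",              # placeholder identifier
    input={"prompt": "a pixel art landscape at sunset"},
)
print(output)
```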
Episode 544: Shaan Puri (https://twitter.com/ShaanVP) and Sam Parr (https://twitter.com/theSamParr) talk about Bryan Johnson's $2.4M dollar day, Pieter Levels investment portfolio, and the $500M exit no one saw coming. No more small boy spreadsheets, build your business on the free HubSpot CRM: https://mfmpod.link/hrd — Show Notes: (0:00) Intro (1:00) Follow Up Boss sold for $500M (4:00) Boring Mattress (17:00) Pieter Levels is insane (23:50) Bryan Johnson: Zero to $200M Hero (34:00) OpenAI's new App Store (38:00) Vertical Google for research papers (43:00) Shaan's season of intentional internet (50:30) Joe Speiser's hyper-growth to failure (53:30 Partnership agreements (59:00) Shaan's nighttime routine — Links: • Boring Mattress - http://boring.co/ • Pieter Levels portfolio - https://twitter.com/levelsio/status/1748713482692759647 • Bryan Johnson Blueprint - https://blueprint.bryanjohnson.com/ • Salary.com - http://salary.com/ • Consensus - https://consensus.app/ • Perplexity - http://perplexity.ai/ — Check Out Shaan's Stuff: • Try Shepherd Out - https://www.supportshepherd.com/ • Shaan's Personal Assistant System - http://shaanpuri.com/remoteassistant • Power Writing Course - https://maven.com/generalist/writing • Small Boy Newsletter - https://smallboy.co/ • Daily Newsletter - https://www.shaanpuri.com/ Check Out Sam's Stuff: • Hampton - https://www.joinhampton.com/ • Ideation Bootcamp - https://www.ideationbootcamp.co/ • Copy That - https://copythat.com • Hampton Wealth Survey - https://joinhampton.com/wealth Past guests on My First Million include Rob Dyrdek, Hasan Minhaj, Balaji Srinivasan, Jake Paul, Dr. Andrew Huberman, Gary Vee, Lance Armstrong, Sophia Amoruso, Ariel Helwani, Ramit Sethi, Stanley Druckenmiller, Peter Diamandis, Dharmesh Shah, Brian Halligan, Marc Lore, Jason Calacanis, Andrew Wilkinson, Julian Shapiro, Kat Cole, Codie Sanchez, Nader Al-Naji, Steph Smith, Trung Phan, Nick Huber, Anthony Pompliano, Ben Askren, Ramon Van Meer, Brianne Kimmel, Andrew Gazdecki, Scott Belsky, Moiz Ali, Dan Held, Elaine Zelby, Michael Saylor, Ryan Begelman, Jack Butcher, Reed Duchscher, Tai Lopez, Harley Finkelstein, Alexa von Tobel, Noah Kagan, Nick Bare, Greg Isenberg, James Altucher, Randy Hetrick and more. — Other episodes you might enjoy: • #224 Rob Dyrdek - How Tracking Every Second of His Life Took Rob Drydek from 0 to $405M in Exits • #209 Gary Vaynerchuk - Why NFTS Are the Future • #178 Balaji Srinivasan - Balaji on How to Fix the Media, Cloud Cities & Crypto • #169 - How One Man Started 5, Billion Dollar Companies, Dan Gilbert's Empire, & Talking With Warren Buffett • #218 - Why You Should Take a Think Week Like Bill Gates • Dave Portnoy vs The World, Extreme Body Monitoring, The Future of Apparel Retail, "How Much is Anthony Pompliano Worth?", and More • How Mr Beast Got 100M Views in Less Than 4 Days, The $25M Chrome Extension, and More
Indie Hacking Isn't Dead — It's Just Less Hacky. The transformation of Indie Hacking over the years creates new challenges and opportunities for Indie Hackers in today's competitive landscape. The shift from community-driven collaboration to a more competitive environment makes distribution and expertise more relevant than ever. Indie Hacking has evolved into a lifestyle that requires skill and strategic distribution. Discover the strategies and approaches reshaping the Indie Hacking landscape, and gain inspiration from seasoned entrepreneurs like Pieter Levels, who have successfully navigated these changes. The blog post: https://thebootstrappedfounder.com/indie-hacking-isnt-dead-its-just-less-hacky/ The podcast episode: https://share.transistor.fm/s/c70bfc66 The video: https://youtu.be/_9IBEXsUUaU You'll find my weekly article on my blog: https://thebootstrappedfounder.com Podcast: https://thebootstrappedfounder.com/podcast Newsletter: https://thebootstrappedfounder.com/newsletter My book Zero to Sold: https://zerotosold.com/ My book The Embedded Entrepreneur: https://embeddedentrepreneur.com/ My course Find your Following: https://findyourfollowing.com Find me on Twitter: https://twitter.com/arvidkahl/ This episode is sponsored by Acquire.com
My guest today, Pieter Levels, claims that "Indie Hacking is dead." Yet, Pieter runs several indie AI startups (and a few traditional ones), totaling $250,000 in revenue every month. So, how can he be a successful indie hacker while dismissing the foundations of his work? It turns out that Indie Hacking is very much alive. But it has changed significantly. And Pieter has been there from the early days. Today, Pieter and I talk about what it means to be an Indie Hacker in the age of AI tools, platforms, and businesses. From dependency risk to preparing software businesses for a potential exit, we tackle a wide variety of topics that every Indie Hacker has to deal with. We also dive deep into Pieter's personal journey: from digital nomadism to thoughts of teaming up with fellow makers. You'll get the full picture of an entrepreneur who tells it the way he sees it — which, as you will find, has made the world of social media a very interesting place for him to work in. With insights into the meme of bad coding, the Lindy Effect, the importance of social proof, and the generational divide regarding AI, this conversation with Pieter Levels is a must-listen for anyone interested in AI startups, indie hacking, and the future of digital entrepreneurship. Pieter on Twitter: https://twitter.com/levelsio/ Pieter's projects: https://levels.io/projects/ This episode is sponsored by Acquire.com The blog post: https://thebootstrappedfounder.com/pieter-levels-the-indie-hackers-guide-to-ai-startups/ The podcast episode: https://share.transistor.fm/s/db7c1d51 The video: https://www.youtube.com/watch?v=9Wjec3wh4p8 You'll find my weekly article on my blog: https://thebootstrappedfounder.com Podcast: https://thebootstrappedfounder.com/podcast Newsletter: https://thebootstrappedfounder.com/newsletter My book Zero to Sold: https://zerotosold.com/ My book The Embedded Entrepreneur: https://embeddedentrepreneur.com/ My course Find Your Following: https://findyourfollowing.com Here are a few tools I use. Using my affiliate links will support my work at no additional cost to you. - TweetHunter (for speedy scheduling and writing Tweets): http://tweethunter.io/?via=arvid - HypeFury (for massive Twitter analytics and scheduling): https://hypefury.com/?via=arvid60 - AudioPen (for taking voice notes and getting amazing summaries): https://audiopen.ai/?aff=PXErZ - Descript (for word-based video editing, subtitles, and clips): https://www.descript.com/?lmref=3cf39Q - ConvertKit (for email lists, newsletters, even finding sponsors): https://convertkit.com?lmref=bN9CZw (00:00) - Indie Hacking and AI Startups Evolution (07:17) - Dependency on Suppliers and Finding Alternatives (16:37) - Prioritizing Speed in Entrepreneurship Execution (27:11) - Future of AI With Positive Outlooks (40:11) - AI Business Challenges and Potential (44:52) - Future Work and Travel With AI (52:56) - Twitter Changes and the Need to Adapt (57:50) - Attention Economy and Communication Impact (01:01:39) - Navigating Controversy and Authenticity on Twitter
Last Friday we had a meeting that marks a milestone on our journey so far. The coming months bring new challenges (as usual), arising from the interviews we have been doing with potential customers. We reflect on how testing and analyzing is not only strategic but also lets you move on to the next thing with focus. Episode notes: Pieter Levels: https://twitter.com/levelsio About the authors: OTTER otter.es I Instagram @otter.es I Tik Tok @otter_es Maria Carvajal mariacarvajal.es I Instagram @soymariacarvajal I Twitter @MariaCarvajalC Mike San Román www.msanroman.io I Instagram: @mike.sanroman I Twitter @msanromanv
If you're listening to this episode, you probably want to learn to code. But, you might be overwhelmed and wondering how to learn or where to start. Thankfully, the best-articulated strategy we've come across for learning anything is dead simple: do the real thing. In this episode, we talk through the core tenets of this idea as well as approaches for applying it when learning to code. We originally covered this topic in episode #20. Links: Do the Real Thing: https://www.scotthyoung.com/blog/2020/05/04/do-the-real-thing/ Pieter Levels: https://twitter.com/levelsio/ Shameless Plugs: Junior to Senior, Parsity, dev30, Peter's YouTube channel
Danny Postma (@dannypostmaa) talks his rivalry with Pieter Levels, selling his AI startup, using SEO as a moat, how to be an AI first-mover, why he's not allowed to use ChatGPT, and passing $300k in revenue with Courtland (@csallen) and Channing (@ChanningAllen).
In this episode James announces his big news as he reflects on the past 5 years and how he got to where he is today. We also discuss B2C vs B2B and how people like Pieter Levels (@levelsio on Twitter) and Danny Postma (@dannypostmaa on Twitter) have managed to crack the code with B2C using AI to great advantage but is it worth the risk chasing that big win? Whilst mentioning Simon Høiberg (@SimonHoiberg on YouTube) we talk about how the best use of AI is fast becoming the role of an assistant in your app and so NoCoders are keenly jumping into the new ChatGPT API to see how they might add some killer features to their apps. James states how important AI will become in his SaaS product moving forwards with some really juicy early insights from his experimentation from which we can all learn. Another drop-the-mic moment from James comes in the form of Tiny Bird and how it has transformed James's ability to process vast amounts of data in his NoCode solution. Definitely one worth learning about for anyone recording 100's of 1000's, even millions of rows of data in a database, but one which you need to query in a performant manner. Kieran tells us about his experience with website/app roasting-as-a-service which he found very useful despite his reservations and so maybe more of us should be hiring roasters to help us to see the woods rather than the trees? And, naturally, Kieran has started to delve into ChatGPT too and is considering how he could leverage it inside Yep.so Glenn mentions an organisation called Big Change who are supplying grants to early stage ideas which are aiming to bring about transformation in the British education system and so, naturally, he has applied for recognition for his work he is doing with NoCode Kids. Let's see if they appreciate his ideas on teaching NoCode to kids in the school setting - watch this space. What we're working on Glenn's SaaS is NoCode Kids, a learning management system to teach kids about no-code, built on Webflow. Kieran's SaaS is Yep.so, a super fast landing page builder and idea validation tool, built on Bubble. James' SaaS is Userloop, a customer feedback tool for Shopify merchants, built on Bubble.
Pieter Levels (levelsio) is the digital nomad who took a leap of faith and became a multi-millionaire by building a portfolio of successful startups. From his humble beginnings as a teenager dabbling in programming to launching 7 mediocre startups, Pieter finally hit the jackpot with Nomad List and RemoteOK. With his philosophy of "learn by doing" and a focus on speed and simplicity, Pieter Levels has built a thriving empire that generates $3 million a year with high margins. Watch this inspiring story of how Pieter took control of his fate and achieved his entrepreneurial dreams. Pieter/levelsio info: * Nomad List - https://nomadlist.com * RemoteOK - https://remoteok.com * Pieter's Blog - https://levels.io/ * Pieter Twitter - https://twitter.com/levelsio This episode is available as a video on Spotify and YouTube.
Episode 402: Producer Ben breaks down his top 5 MFM interviews from 2022 ------ Show Notes: (00:45) - MrBeast and Hasan Minhaj (05:30) - Palmer Luckey (14:20) - Dharmesh Shah (26:00) - Pieter Levels (35:50) - Alex Hormozi ----- Past guests on My First Million include Rob Dyrdek, Hasan Minhaj, Balaji Srinivasan, Jake Paul, Dr. Andrew Huberman, Gary Vee, Lance Armstrong, Sophia Amoruso, Ariel Helwani, Ramit Sethi, Stanley Druckenmiller, Peter Diamandis, Dharmesh Shah, Brian Halligan, Marc Lore, Jason Calacanis, Andrew Wilkinson, Julian Shapiro, Kat Cole, Codie Sanchez, Nader Al-Naji, Steph Smith, Trung Phan, Nick Huber, Anthony Pompliano, Ben Askren, Ramon Van Meer, Brianne Kimmel, Andrew Gazdecki, Scott Belsky, Moiz Ali, Dan Held, Elaine Zelby, Michael Saylor, Ryan Begelman, Jack Butcher, Reed Duchscher, Tai Lopez, Harley Finkelstein, Alexa von Tobel, Noah Kagan, Nick Bare, Greg Isenberg, James Altucher, Randy Hetrick and more. ----- Additional episodes you might enjoy: • #224 Rob Dyrdek - How Tracking Every Second of His Life Took Rob Drydek from 0 to $405M in Exits • #209 Gary Vaynerchuk - Why NFTS Are the Future • #178 Balaji Srinivasan - Balaji on How to Fix the Media, Cloud Cities & Crypto * #169 - How One Man Started 5, Billion Dollar Companies, Dan Gilbert's Empire, & Talking With Warren Buffett • #218 - Why You Should Take a Think Week Like Bill Gates • Dave Portnoy vs The World, Extreme Body Monitoring, The Future of Apparel Retail, "How Much is Anthony Pompliano Worth?", and More • How Mr Beast Got 100M Views in Less Than 4 Days, The $25M Chrome Extension, and More
Episode 13: Today, hosts Alex Lieberman (@businessbarista), Sophia Amoruso (@sophiaamoruso), and Jesse Pujji (@jspujji) are talking solo-preneurship—aka running a business of one. What are the pros and cons of being a solo-preneur? And what types of businesses is solo-preneurship best suited for? Then, the team gets radically honest about how they spend their time each week—from self-care to deep work, Jesse, Alex, and Sophia break down how many hours per week they dedicate to work and play, and how they organize their schedules. And finally, the team wraps the last episode of the year with a brainstorm of how we can improve on the show. We'd love to hear your feedback too! Email us at thecrazyones@morningbrew.com. #TheCrazyOnes #Startups #Entrepreneur Listen to The Crazy Ones here: https://link.chtbl.com/OV4W93_W Subscribe to Morning Brew! Sign up for free today: https://bit.ly/morningbrewyt Follow The Brew! Instagram - https://www.instagram.com/morningbrew/ Twitter - https://twitter.com/MorningBrew Tik Tok - https://www.tiktok.com/@morningbrew Follow Our Hosts! Alex Lieberman (@businessbarista) Sophia Amoruso (@sophiaamoruso) Jesse Pujji (@jspujji) (00:34) - Intro (00:58) - The Rundown (02:35) - Why Jesse would hate running a business of one (05:07) - Sophia's point of view on running a business of one (07:01) - When running a business of one can become challenging (08:25) - The actual popularity of solo-preneurship (10:04) - The story of Pieter Levels attempting to start 12 startups in 12 months (17:29) - Start of the conversation about how the team spends their time (18:23) - How Sophia organizes her week (22:48) - How Sophia organizes her time to allow for the context-switching she has to do every week (24:09) - How Jesse approaches organizing his time (28:32) - The scheduling mistake Jesse sees early founders making (29:23) - How Jesse carves out time for deep work during the week (33:00) - How Alex spends his time (36:59) - The team brainstorm about how to improve the show
Meet Tiago Ferreira (@wbetiago), founder of the Wannabe Entrepreneur community. Tiago had dreamed of the day he'd be able to work for himself, but the moment he actually quit his day job, fear overcame him. As a way of calming himself and refocusing on his "why" for desiring an alternative lifestyle, Tiago decided to start a podcast — and that podcast became Wannabe Entrepreneur. On this episode of Zero to (point) One, Zach and Tiago discuss: How he convinced Pieter Levels — the king of indie hackers — to come on his podcast The network effects of podcasting And why he wants to build the YC for bootstrapped founders Follow Tiago on Twitter Discover the Wannabe Entrepreneur community About the Show Zero to (point) One is hosted by yours truly, Zach Busekrus (@zboozee). I'm an entrepreneur in residence at a marketing firm by day and building my own start up Sponstayneous by night. Sponstayneous brings subscribers the best last-minute deals and upcoming steals on Airbnb. My wife and I co-founded the company while living full-time on Airbnb last year. There's a lot of podcasts out there that feature entrepreneurs: once they've gone public once they've had an exit or even once they've made their first million. But not as many that feature the first-time Product Hunt launchers, the Indie Hacker that has yet to go viral on Twitter, or the micro influencer who wants to build their first niche product. And there's where this show comes in. Zero to (point) One features a collection of ideas that may go nowhere, or just might change the world, from people you've never heard of, who may just become household names. A nod to tech founder Peter Thiel's Zero to One framework, Zero to (point) One explores the beginnings of (very) early-stage creators and entrepreneurs whose first chapters are still being written. Each guest featured on the show has MRR of $500 - $10k (no more and no less) or has 10k - 100k followers on Instagram, TikTok, or Twitter. Come and meet the next generation of builders wherever you get your podcasts every Friday morning. If you'd like to be on the show or if you know someone who might be a good fit for the show, send me an email at zach@sponstayneous.com or DM me on Twitter @zboozee. Finally, be sure to subscribe — who knows? You may just meet your future co-founder, your next great investment, or the next Steve or Elon right here.
Today's guest is Enzo, the founder of June - a tool to visualize and measure the metrics for subscription businesses. In this episode, we'll discover how they're growing 10% month-over-month and why you should monetize as quickly as you can. Sponsor of today's episode: Grain - Record, transcribe, clip, and share the most important moments from your video meetings: https://grain.com/saaspirates Join the FB community group: https://www.facebook.com/groups/saaspirates. Follow Mike: https://twitter.com/mikeslaats/, https://instagram.com/mikeslaats, https://youtube.com/mikedotsaas [DISCOUNT OFFER] User feedback tool: https://upvoty.com (10% off with code 'PIRATES'). Named in this episode: Sergio Mattei (Founder of Makerlog), Slack, June, Pieter Levels (aka Levels io), Spotify, Intercom, Rework the book, Des Traynor.
Episode notes: https://elrincondeaquiles.com/podcast/vida-videojuego Community Telegram: https://t.me/elrincondeaquiles In this episode, starting from seeing life as a video game, you will learn how this mental model can change the way you approach life. We leave you with this quote from a famous entrepreneur to break the ice: "Life is like the best video game of all time: it has amazing graphics, an infinite number of levels, a huge map and a ridiculous amount of freedom. I just needed to pick up the controller." Pieter Levels. Seeing life as a video game is a mental model that has been coming up in our conversations, and with podcast guests, for a while now. On our website we already gave you a hint of how this mental model shapes the way we understand life through this podcast. It is about pirates and rubber men. Life is complex. Mental models help you understand it better and reduce its complexity. Before going on, I have to confess something: the mental model we are going to talk about in this episode is one of my favourites. It is like the meta mental model. One mental model to rule them all. A mental model that lets me make better decisions and savour the journey of life more. This mental model, which is not talked about much, is: life as a video game. In this episode we talk about: -The FIFA diamond -The different areas of life -Is the goal of the video game of life really to win? Find us at: Our website: https://elrincondeaquiles.com Instagram: https://www.instagram.com/elrincondeaquiles.es Twitter: https://twitter.com/RinconDeAquiles
Is it worth writing an IT reference book? There is at least one book for every piece of software and every IT topic. But what is it actually like to write such a book? What does a publisher do, and do you actually still need one these days? Does it make you rich, or does it stay a pittance? What kind of tech stack is behind a book? And how would you actually get started? We clear all of that up in this episode with Wolfgang, who published a book about MySQL with Rheinwerk Verlag (Galileo Press). Bonus: Why Wolfgang is more of a fan of talking, and why he really took so long to finish his doctorate. Feedback (voice messages welcome) Email: stehtisch@engineeringkiosk.dev Twitter: https://twitter.com/EngKiosk WhatsApp +49 15678 136776 We are also happy to cover your audio feedback in one of the next episodes; just send an audio file by email or as a WhatsApp voice message to +49 15678 136776. Links: Job interviews with Wolfgang Gassler, interview by Nils Langner: https://www.youtube.com/watch?v=-c3ZAp7MvTI MySQL: Das umfassende Handbuch: https://www.rheinwerk-verlag.de/mysql-das-umfassende-handbuch/ LaTeX: https://www.latex-project.org/ ct Magazin: https://www.heise.de/ct/ Rheinwerk Verlag (Galileo Press): https://www.rheinwerk-verlag.de/ Michael Kofler: https://kofler.info/ Book "Designing Data-Intensive Applications": https://dataintensive.net/ Book "MAKE" by Pieter Levels: https://readmake.com/ Use the Index, Luke (Markus Winand): https://use-the-index-luke.com/ Simon Sinek: Start with why: https://www.youtube.com/watch?v=u4ZoJKF_VuA Book "Observability Engineering": https://www.oreilly.com/library/view/observability-engineering/9781492076438/ Honeycomb: https://www.honeycomb.io/ LeanPub: https://leanpub.com/ Amazon Print on Demand: https://www.amazon.de/Buecher-Print-On-Demand/b?ie=UTF8&node=5445727031 Episode #35 Knowledge sharing, or the person who should never "leave"...: https://engineeringkiosk.dev/podcast/episode/35-knowledge-sharing-oder-die-person-die-nie-gehen-sollte/ Chapter markers: (00:00:00) Intro (00:00:40) Writing skills (00:02:42) Topic: writing IT reference books, MySQL, das umfassende Handbuch (00:03:39) How well do you know the MySQL space? (00:05:19) An overview of Wolfgang's MySQL book (00:09:34) How long did it take you to write the book? (00:11:24) Which tech stack was used to write the book? (00:13:43) Which tasks did the publisher take on? (00:18:54) How did you establish the connection to the publisher? (00:21:17) What do you earn with an IT reference book? (00:28:21) Why did you write a book?
What do you get out of writing a book? (00:30:27) What has the book led to for you professionally? (00:32:22) Why did you decline to write a new edition? (00:33:56) Imposter syndrome: how much do you know about the MySQL database topic after your research? (00:35:52) How many Amazon reviews did you buy? (00:38:39) Do you read IT reference books? (00:43:24) Would you write a book again? (00:46:32) Would you recommend that people write a book? (00:51:00) What would you recommend to people who want to write a book? (00:53:25) Would you write a book about a piece of software or a software version again? (00:55:50) Have you had any contact with a ghostwriter? (00:58:46) Is writing a book worth it in the medium and long term? (01:01:22) Self-publishing and print on demand (01:03:56) Audio rants, Reddit and outro Hosts: Wolfgang Gassler (https://twitter.com/schafele) Andy Grunwald (https://twitter.com/andygrunwald) Feedback (voice messages welcome) Email: stehtisch@engineeringkiosk.dev Twitter: https://twitter.com/EngKiosk WhatsApp +49 15678 136776
5 Ways Melissa Kwan Went from $0 to $500,000 in 3 Years. Melissa Kwan is the co-founder and CEO of eWebinar. eWebinar does all of your demos, onboarding, and training webinars for you so you don't have to. She built it from nothing to $500,000 in annual recurring revenue in just three years, but she says revenue is not easy. Getting people to take out their credit cards is getting harder and harder, retaining customers with unrealistic expectations is hard, and building a product and keeping the burn low is hard. In this podcast episode, she is going to break down how she did it… and will explain why she brought on a co-founder a year after she started the business. Melissa also shares how she got her first 100 customers, going into detail on these five points: 1. Wrote down every company in her network that she thought could benefit from her product. 2. Two weeks before launch, she went down the list to tell people about what she was doing to see if they wanted to try it out. 3. She scheduled onboarding calls where David (her CTO) and she would watch people sign up and create their first webinar so they could fix UI/UX issues. 4. She gave anyone who signed up in the first two months a 60-day free trial (mostly because Stripe wasn't integrated yet!). 5. She started charging when their trial was up - no exceptions. For any business founder, this is her advice: Automate what you can so you can do what you can't. Build a career around what makes you happiest. You can automate tasks and be there for your customers; those two things are not mutually exclusive. Check out eWebinar: https://ewebinar.com/ Follow Melissa on Twitter at @ewebinarlabs and @msskwan. For more on Pieter Levels making millions without employees: https://www.youtube.com/watch?v=V0ej29G7ZGg If you liked this episode, please subscribe to this podcast to hear more like it. Tell your colleagues, friends, and family… and don't forget to tap that subscribe button. For more, visit MediaMavenAndMore.com/podcast.
Dan Andrews is a founder of Tropical MBA, Dynamite Circle, and Dynamite Jobs. The Tropical MBA podcast, launched in 2009, was one of the original digital nomad podcasts. The Dynamite Circle was one of the first communities for location independent entrepreneurs.
Sam Parr (@TheSamParr) and Shaan Puri (@ShaanVP) talk with Producer Ben (@BenWilsonTweets) about his surprise phone call with YouTube sensation, Mr. Beast, the power of hypnosis, and ambitious people, ----- Links: * Mr. Beast * Intro.co * Grace Smith * Aviator Nation * Write Like Shaan * Do you love MFM and want to see Sam and Shaan's smiling faces? Subscribe to our Youtube channel. * Want more insights like MFM? Check out Shaan's newsletter. ----- Show Notes: (18:15) - Intro.co and Hypnosis (29:30) - Aviator Nation (38:05) - Shaan's mother's immigration story (48:30) - People who follow through (58:10) - Hanging out with ambitious people (01:02:35) - Thoughts on Pieter Levels podcast ----- Past guests on My First Million include Rob Dyrdek, Hasan Minhaj, Balaji Srinivasan, Jake Paul, Dr. Andrew Huberman, Gary Vee, Lance Armstrong, Sophia Amoruso, Ariel Helwani, Ramit Sethi, Stanley Druckenmiller, Peter Diamandis, Dharmesh Shah, Brian Halligan, Marc Lore, Jason Calacanis, Andrew Wilkinson, Julian Shapiro, Kat Cole, Codie Sanchez, Nader Al-Naji, Steph Smith, Trung Phan, Nick Huber, Anthony Pompliano, Ben Askren, Ramon Van Meer, Brianne Kimmel, Andrew Gazdecki, Scott Belsky, Moiz Ali, Dan Held, Elaine Zelby, Michael Saylor, Ryan Begelman, Jack Butcher, Reed Duchscher, Tai Lopez, Harley Finkelstein, Alexa von Tobel, Noah Kagan, Nick Bare, Greg Isenberg, James Altucher, Randy Hetrick and more. ----- Additional episodes you might enjoy: • #224 Rob Dyrdek - How Tracking Every Second of His Life Took Rob Drydek from 0 to $405M in Exits • #209 Gary Vaynerchuk - Why NFTS Are the Future • #178 Balaji Srinivasan - Balaji on How to Fix the Media, Cloud Cities & Crypto * #169 - How One Man Started 5, Billion Dollar Companies, Dan Gilbert's Empire, & Talking With Warren Buffett • #218 - Why You Should Take a Think Week Like Bill Gates • Dave Portnoy vs The World, Extreme Body Monitoring, The Future of Apparel Retail, "How Much is Anthony Pompliano Worth?", and More • How Mr Beast Got 100M Views in Less Than 4 Days, The $25M Chrome Extension, and More
Shaan Puri (@ShaanVP) and Sam Parr (@TheSamParr) talk to Pieter Levels (@levelsio) about being a solopreneur who makes nearly $3M a year with no employees while living a nomadic lifestyle. Also, want $5,000? Check out the My First Million Clips contest. ----- Links: * Nomad List * RemoteOK * BuyNothing * Derek Sivers * Do you love MFM and want to see Sam and Shaan's smiling faces? Subscribe to our Youtube channel. * Want more insights like MFM? Check out Shaan's newsletter. ----- Show Notes: (02:40) - How Sam gets in touch with people (05:50) - How much Pieter makes (13:25) - How to become a hit solopreneur (20:10) - What Pieter uses to build (23:45) - Building to earn vs. building to sell (29:55) - How movements grow (30:25) - Thinking big as a solopreneur/Indie Hacker (35:35) - Businesses that could earn you $20k a month (42:30) - How Pieter spends and invests his money (47:10) - Why Pieter believes in Asia (52:40) - What Pieter spends money on (01:01:15) - Who Peter admires ----- Past guests on My First Million include Rob Dyrdek, Hasan Minhaj, Balaji Srinivasan, Jake Paul, Dr. Andrew Huberman, Gary Vee, Lance Armstrong, Sophia Amoruso, Ariel Helwani, Ramit Sethi, Stanley Druckenmiller, Peter Diamandis, Dharmesh Shah, Brian Halligan, Marc Lore, Jason Calacanis, Andrew Wilkinson, Julian Shapiro, Kat Cole, Codie Sanchez, Nader Al-Naji, Steph Smith, Trung Phan, Nick Huber, Anthony Pompliano, Ben Askren, Ramon Van Meer, Brianne Kimmel, Andrew Gazdecki, Scott Belsky, Moiz Ali, Dan Held, Elaine Zelby, Michael Saylor, Ryan Begelman, Jack Butcher, Reed Duchscher, Tai Lopez, Harley Finkelstein, Alexa von Tobel, Noah Kagan, Nick Bare, Greg Isenberg, James Altucher, Randy Hetrick and more. ----- Additional episodes you might enjoy: • #224 Rob Dyrdek - How Tracking Every Second of His Life Took Rob Drydek from 0 to $405M in Exits • #209 Gary Vaynerchuk - Why NFTS Are the Future • #178 Balaji Srinivasan - Balaji on How to Fix the Media, Cloud Cities & Crypto #169 - How One Man Started 5, Billion Dollar Companies, Dan Gilbert's Empire, & Talking With Warren Buffett • #218 - Why You Should Take a Think Week Like Bill Gates • Dave Portnoy vs The World, Extreme Body Monitoring, The Future of Apparel Retail, "How Much is Anthony Pompliano Worth?", and More • How Mr Beast Got 100M Views in Less Than 4 Days, The $25M Chrome Extension, and More
This is a republished episode. Listen to Pieter Levels speak about his upbringing and what motivated him to become an entrepreneur. In this interview, we also try to pin down certain personal characteristics that contributed to Pieter's success in his bootstrapped startups and cover more philosophical topics like life purpose and religion. About Pieter: Twitter, Nomadlist, Rebase. About Wannabe Entrepreneur: Join our WBE Space, Buy WBE Merch, Follow me on Twitter. Background Music: https://www.chosic.com/free-music/all/
Hello and welcome to our conversation with Pieter Levels. Peter is the man behind NomadList.com, remoteOK.com, InflationChart.com, rebase.co and more.Pieter is hard to describe if you're after an old-world description. He's most certainly a business guy and a software developer guy but he works remotely, sometimes he charges for his creations, sometimes he doesn't. He practices radical honesty with himself and others. He's unafraid to experiment, to play and learned as a student that doing something different can have unexpected and very rewarding consequences.He works with a few trusted friends but creatively he's the man. Neil has been telling me for six months that a conversation with Pieter would be fun and interesting. He was right. Pieter is in charge of himself, he's not going with the flow unless it serves him. He's not short of money but doesn't own a home and his laptop seems to be as extravagant as it gets.He keeps things simple and for someone so successfully immersed in the world of digital, has a level of self-awareness that ensures he spends time IRLing. For the uninitiated (as I was before this conversation) IRL stands for In Real Life, which means no screens just doing stuff out there in the real world. Amen to that.Pieter seems to be on a quest to find the joy in life but fully understands that what brings joy today may not be what brings joy tomorrow. It's all an adventure. Enjoy - Pieter Levels - Thinking and doing for yourself
What are the possibilities of changing your citizenship as a remote worker? After being fully nomadic for years, Pieter Levels decided to relocate to Portugal. Many nomads and remote workers were interested in the process, so he founded Rebase - a relocation service for remote workers. Starting with immigration and tax services for Portugal, they are moving onto Mexico. In this episode, he shares insights from his relocation process and how Rebase can assist other remote workers with the same plan. Topics we discussed:The process and difficulty of building RebaseMajor trends in the nomad communityWorking on a big vs small company (indie projects vs VC funded startups)The rejection of creativity in modern society.Find the full transcript here. Stay tuned for the next episode with Per Borgen from Scrimba on building software remotely.
Are your team's virtual meetings effective?For guidance on how to structure meetings to optimize for efficiency, genuine connection and motivation, we reached out to Jordan Husney, the founder and CEO of Parabol - an app for effective remote meetings. Find the full transcript here. Stay tuned for the next episode with Pieter Levels from Rebase on the migration of the new wave of remote workers.
My guest today is Bianca Caruana. Bianca is a storyteller, podcast host, sustainability & equality advocate and writer at The Altruistic Traveller. Bianca and I had a serendipitous encounter in Aveiro (thanks to Pieter Levels' Nomadlist.com), and we immediately clicked because we shared similar backgrounds and spiritual growth journeys. In this episode, we discussed... How to be a sustainable traveller and leverage your skillsets to travel around the world without it costing a fortune? What's the wake-up call for us to trust and surrender to the universe instead of trying to control our life? How to do our own personal reverse engineering to find out who we are NOT in order to know who we really are on a deeper level? How to overcome the fear of judgment, transform from our old identities and embrace a new life? How does adapting to a digital nomad lifestyle help us find a sense of belonging and feel seen? Why should you write books and poetry to keep a record of your life and express your feelings? A couple of serendipitous stories about how life only makes sense when you look back and connect the dots. What is Vipassana meditation? What could you get from it? How to find the meditation method that suits you? The benefits of sacred plant medicine and what to pay attention to when experiencing psychedelic trips? How to cultivate empathy? Books/links mentioned in this episode: Bianca's blog The Altruistic Traveller Bianca's Podcast The Altruistic Traveller instagram Workaway Trusted House Sitters (get 25% off) The Invisible Third Culture Adult: A book about meaning and identity Chiwi Journal Vol 1&2 The Rookery Nook and Brontë Parsonage Vipassana Meditation National Anthem: Malta - L-Innu Malti Qigong Tao Te Ching I Ching Transcendental Meditation® Technique Understand Myself
In today's episode, I will tell you what I have learned recently about NFTs and how they might be a good alternative to ads monetization. I will also give you an update on our ongoing WBE Lab (CollabClub) and what might become the biggest problem of running the WBE Space. As "tips and tricks" I will share with you what I have learned with Pieter Levels this past week on Twitter. About the Episode: Join the WBE Space, Startup to Something Podcast, DM me on Twitter, Interview with Arvid (#164), Interview with Marc (#194). Background Music: The Loyalist – Lotus Lane by Preconceived Notions | https://soundcloud.com/preconceived-notions Music promoted on https://www.chosic.com/free-music/all/ Creative Commons Attribution 3.0 Unported (CC BY 3.0) https://creativecommons.org/licenses/by/3.0/
If you're listening to this episode, you probably want to learn to code. But, you might be overwhelmed and wondering how to learn or where to start. Thankfully, the best-articulated strategy I've come across for learning anything is dead simple: do the real thing. In this episode, we talk through the core tenets of this idea as well as approaches for applying it when learning to code. Links: Do the Real Thing: https://www.scotthyoung.com/blog/2020/05/04/do-the-real-thing/ Pieter Levels: https://twitter.com/levelsio/ Shameless Plugs: Peter's YouTube Channel: https://www.youtube.com/c/peterelbaum Aaron's Code School: https://parsity.io/ Aaron's Free JavaScript Course: https://dev30.xyz/
Indie makers and marketplace shakers, this episode is a must-listen for you! Joining the podcast today is the prolific innovator and creator Jakob Greenfeld. Over the last two years, Jakob has successfully launched over fifteen software products and is the host of The Business Brainstorms podcast and creator of The Opportunities newsletter. In this episode, we learn about Jakob's journey from becoming a physicist to testing out the road of entrepreneurship with his bootstrapped products. Jakob gives us great insight into the early days of his workings, framework, and product launching tactics. Jakob also shines a light on taking shots on goal, the importance of authenticity, sharing your origin story and how to increase your business visibility. Key points within this episode include: An introduction to Jakob Greenfeld. The importance of hooking customers with your origin story. Preparation and patience in business. Building an online audience. Market response and first product sales. The downfalls of a product launch. Why we should never underestimate newsletters. The benefits of growing a sustainable community. The power of authenticity and being personable. Increasing your online visibility. Jakob's advice on obtaining your first 10 customers. "In the long term, it all boils down to just putting your brain out there and hoping that like-minded people find you." – Jakob Greenfeld. Connect with Jakob Greenfeld: https://jakobgreenfeld.com/ https://twitter.com/jakobgreenfeld https://businessbrainstorms.com/ https://indieopportunities.com/ Connect with First 10 Podcast host Conor McCarthy: https://www.first10podcast.com https://twitter.com/TheFirst10Pod https://www.linkedin.com/in/comccart/ Resources: Book: How to be a capitalist without any capital by Nathan Latka https://book.nathanlatka.com/ Book: The E-Myth Revisited: Why Most Small Businesses Don't Work by Michael E. Gerber https://www.amazon.co.uk/Myth-Revisited-Small-Businesses Book: Superfans by Pat Flynn https://patflynn.com/book/superfans/ Trends.vc Newsletter https://trends.vc/ Andrew Wilkinson – Co-founder of Tiny https://twitter.com/awilkinson Harry Dry – The Kanye Story https://thekanyestory.com/ Pieter Levels – 12 start ups in 12 months https://levels.io/12-startups-12-months/ Produced in partnership with podlad.com == Check out my podcast partners! Buzzsprout, Otter.ai, Calendly
Today I'm continuing my conversation with Pieter Levels (@levelsio). In this episode we dig into what habits make Pieter so prolific as well as his thoughts on investing, crypto, and money in general.
Listen to how I prepared for my interview with Pieter Levels, what happened off the record, and some of my thoughts on what we talked about. I will also tell you the story of my first exit meeting with a company that was interested in acquiring Chageit. About this episode: Join our Remote Co-Working Space, Buy me a Coffee, Community Project Landing Page, Follow me on Twitter, Try out Indie Offices, Podcast Tool Auphonic, Pieter's book: Makebook.io. Background Music: Lo-Fi Lounge Chill Hip Hop CITY by Alex-Productions | https://www.youtube.com/channel/UCx0_M61F81Nfb-BRXE-SeVA Music promoted by https://www.chosic.com/free-music/all/ Creative Commons CC BY 3.0 https://creativecommons.org/licenses/by/3.0/
My guest today is Pieter Levels. Pieter is a serial founder of multiple products, such as NomadList.com, remoteOK.com and rebase.co. In this episode, we discussed: How did Pieter start his digital nomad life and create 12 startups in 12 months? The origin story of the birth of Nomadlist.com What are the common traits of digital nomads based on the 2022 State of Digital Nomads that might surprise people? Why do people need to switch their stereotypes about remote working and adapt to the new way of living? Why has Portugal become the indie hacker / nomad / remote worker / entrepreneur hub? What is so good about moving there? The story behind the immigration-as-a-service product: Rebase.co How to overcome depression and anxiety as a serial founder? Why can finding a therapist who speaks a language other than your native one work better in therapy sessions? Cliché question: how to balance time and energy as a serial founder? What does it feel like to be niche famous, and why should people be unreachable? The story about Pieter's early music career on his YouTube channel. What are the future trends of remote working and nomad lifestyles? What's the current dating market like for digital nomads? Books/links mentioned in this episode: Nomadlist.com 2022 State of Digital Nomads RemoteOk.com I'm Launching 12 Startups in 12 Months How is Making Friends Changing? Moving to Portugal with Rebase Cognitive behavioural therapy Why I'm unreachable and maybe you should be too The next frontier after remote work is async How to optimise your 24 hours? Fans and Haters 11 Reasons Not to Become Famous (or “A Few Lessons Learned Since 2007”)
Listen to Pieter Levels speak about his upbringing and what motivated him to become an entrepreneur. In this interview, we also try to pin down certain personal characteristics that contributed to Pieter's success in his bootstrapped startups and cover more philosophical topics like life purpose and religion. About Pieter: Twitter, Nomadlist, Rebase. About Wannabe Entrepreneur: Join our Remote Co-Working Space, Buy me a Coffee, Follow me on Twitter. Background Music: https://www.chosic.com/free-music/all/
Today I'm catching up with Pieter Levels after 4 years. The world has changed dramatically since we last spoke. Attitudes toward remote work and global travel are all completely different now. We'll talk about what headwinds and tailwinds these changes have meant for his projects. Follow Pieter on Twitter: https://twitter.com/levelsio/ Move to Portugal: https://rebase.co/portugal Work remotely: https://remoteok.com/
Pieter Levels agreed to join me for a WBE interview. In this episode, I will tell you the story of how I was able to get him on board and how I am preparing for the interview. I will also speak about my first B2B project that is now live on indieoffices.com. About this episode: Join our Twitter space tomorrow, Join our Remote Co-Working Space, Buy me a Coffee, Follow me on Twitter, Try out Indie Offices, My 500th follower: @kgrsajid. Background Music: Night in Venice by Kevin MacLeod | https://incompetech.com/ Music promoted by https://www.chosic.com/free-music/all/ Creative Commons: By Attribution 3.0 License http://creativecommons.org/licenses/by/3.0/
@soliton_ and @antinertia bring you the 51st episode of The Farmspot Podcast, in which they discuss interesting businesses & opportunities they discover on the internet. In this episode, we talk about: Intro. A look back at the winners of our NFT collection 1. Conceptio.bio: the deeptech that creates egg cells from a blood sample 2. An idea: create the GlobalBelly of merchandise derived from NFT collections 3. Rebase.co: Pieter Levels launches a new offer for digital nomads: relocation to Portugal, with his offer on rebase.co. A (rough) history of the birth of the notion of the State, and a thought experiment: seeing yourself as a customer of a State rather than as a citizen. + Other business ideas Our NFT collection to access the core of the Farmspot community: https://rarible.com/thefarmspot To follow us: https://twitter.com/soliton_ & https://twitter.com/antinertia Subscribe to our newsletter: https://farmspot.co/ Feel free to DM us directly on Twitter, LinkedIn or Instagram. Want us to advise you on growth? https://farmspot.co/#work-with-us We are also available on: - Spotify: https://open.spotify.com/show/2WoOdLj6fyOd1Nwb8h0iFq?si=WBjaTxpZQraI57vAcB0QgQ&dl_branch=1 - Apple Podcasts: https://podcasts.apple.com/fr/podcast/the-farmspot-podcast/id1541779163 - YouTube: https://www.youtube.com/channel/UCJ9pX5KJ5AW2RwVkIyrP_1w
Josef Strzibny is the author of Deployment from Scratch and a current Fedora contributor. He previously worked on the Developer Experience team at Red Hat. This episode originally aired on Software Engineering Radio. Links: Deployment from Scratch @strzibnyj systemd Introduction to Control Groups SELinux Fedora Rocky Linux Puma AppSignal Datadog Rollbar Skylight Bootstrapping a multiplayer server with Elixir at X-Plane StackExchange Performance Chruby Password Safe Vault Rails Custom Credentials Transcript: You can help edit this transcript on GitHub. [00:00:00] Jeremy: Today, I'm talking to Josef Strzibny. He's the author of the book Deployment from Scratch, a Fedora contributor, and he previously worked on the Developer Experience team at Red Hat. Josef, welcome to Software Engineering Radio. [00:00:13] Josef: Uh, thanks for having me. I'm really happy to be here. Jeremy: There are a lot of commercial services for hosting applications these days. One that's been around for quite a while is Heroku, but there's also services like Render and Netlify. Why should a developer learn how to deploy from scratch and why would a developer choose to self host an application? [00:00:37] Josef: I think that as web engineers and backend engineers, we should know a little bit more how we run our own applications that we write. But there is also a business case, right? For a lot of people, this could be, uh, saving money on hosting, especially with managed databases that can go high in price very quickly. And for people like me, that apart from a daily job have also some side project, some little project they want to start and maybe turn into a successful startup, you know, but it's at the beginning, so they don't want to spend too much money on it, you know? And I can deploy and serve my little projects from $5 virtual private servers in the cloud. So I think that's another reason to look into it. And business wise, if you are, let's say, a bigger team and you have the money, of course you can afford all these services. But then what happened to me when I was leading a startup, we were at somewhere (?) and people are coming and asking us, we need to self host their application. We don't trust the cloud. And then if you want to prepare this environment for them to host your application, then you also need to know how to do it. Right? I understand, completely get the point of not knowing it, because already backend development can be huge. You know, you can learn so many different databases, languages, whatever, and learning also operations and servers, it can be overwhelming. I want to say you don't have to do it all at once. Just, you know, learn a little bit, uh, and you can improve as you go. Uh, you will not learn everything in a day. [00:02:28] Jeremy: So it sounds like the very first reason might be to just have a better understanding of, of how your applications are, are running. Because even if you are using a service, ultimately that is going to be running on a bare machine somewhere or run on a virtual machine somewhere. So it could be helpful maybe for just troubleshooting or a better understanding how your application works. And then there's what you were talking about with some companies wanting to self-host, and just the cost aspect. [00:03:03] Josef: Yeah, for me, really, the primary reason would be to understand it because, you know, when I was starting programming, oh, well, first off there was PHP and I, I used some shared hosting thing, just some SFTP. Right. And they would host it for me. It was fine.
Then I switched to Ruby on Rails and at the time, uh, people were struggling with deploying it and I was asking myself, so, okay, so you ran rails s like for a server, right. It starts in development, but can you just do that on the server for, for your production? You know, can you just rails server and is that it, or is there more to it? Or when people were talking about, uh, Linux hardening, I was like, okay, but you know, your Linux distribution has some good defaults, right? So why do you need some further hardening? What does it mean? What to change? So for me, I really wanted to know, uh, the reason I wrote this book is that I wanted to like double down on my understanding that I got it right. [00:03:52] Jeremy: Yeah, I can definitely relate in the sense that I've also used Ruby and Ruby on rails as well. And there's this, this huge gap between just learning how to run it in a development environment on your computer versus deploying it onto a server and it's pretty overwhelming. So I think it's, it's really great that, that you're putting together a book that, that really goes into a lot of these things that I think that usually aren't talked about when people are just talking about learning a language. [00:04:39] Josef: you can imagine that a lot of components you can have in these applications, right? You have one database, maybe you have more databases. Maybe you have a redis key-value store. Uh, then you might have load balancers and all that jazz. And I just want to say that there's one thing I also say in the book, like try to keep it simple. If you can just deploy one server, if you don't need to fulfill some SLE (SLA) uh, uptime, just do the simplest thing first, because you will really understand it. And when there is an error you will know how to fix it, because when you make things complex for you, then it will be kind of lost, very quickly. So I try to really make things as simple as possible to stay on top of them. [00:05:25] Jeremy: I think one of the first decisions you have to make, when you're going to self host an application, is you have to decide which distribution you're going to use. And there's things like Red Hat and Ubuntu, and Debian and all these different distributions. And I'm wondering for somebody who just wants to deploy their application, whether that's rails, Django, or anything else, what are the key differences between them and, and how should they choose a distribution? [00:05:55] Josef: if you already know one particular distribution, there's no need to constantly be on the hunt for a more shiny thing, you know, uh, it's more important that you know it well and, uh, you are not lost. Uh, that said there are differences, you know, and there could be a long list, from goals and philosophy to who makes it, whether community or company, if it's a rolling distribution or not, the length of support, especially for security updates, uh, the kind of init system, uh, that is used, the kind of c library that is used, packaging format, package manager, and, for what I think most people will care about, number of packages and the quality or version, right? Because essentially the distribution is a distribution of software. So you care about the software. If you are putting your own stuff on top of it, you maybe don't care. You just care about it being a Linux distribution and that's it. That's fine.
But if you are using more things from the distribution, you might star, start caring a little bit more.You know, other thing is maybe a support for some mandatory access control or in the, you know, world of Docker, maybe the most minimal image you can get established because you will be building a lot of, a lot of times the, the Docker image from the Docker file. And I would say that two main family of systems that people probably know, uh, ones based on Fedora and those based on Debian, right from Fedora, you have, uh, Red Hat Enterprise Linux, CentOS, uh, Rocky Linux.And on the Debian side you have Ubuntu which is maybe the most popular cloud distribution right now. And, uh, of course as a Fedora packager I'm kind of, uh, in the fedora world. Right. But if I can, if I can mention two things that I think makes sense or like our advantage to fedora based systems. And I would say one is modular packages because it's traditional systems for a long time or for only one version of particular component like let's say postgresql, uh, or Ruby, uh, for one big version.So that means, uh, either it worked for you or it didn't, you know, with databases, maybe you could make it work. With ruby and python versions. usually you start looking at some version manager to compile their own version because the version was old or simply not the same, the one your application uses and with modular packages, this changed and now in fedora and RHEL and all this, We now have several options to install. There are like four different versions of postgresql for instance, you know, four different versions of redis, but also different versions of Ruby, python, of course still, you don't get all of the versions you want. So for some people, it still might not work, but I think it's a big step forward because even when I was working at Red Hat, we were working on a product called software collections.This was kind of trying to solve this thing for enterprise customers, but I don't think it was particularly a good solution. So I'm quite happy about this modularity effort, you know, and I think the modular packages, I look into them recently are, are very better, but I will say one thing don't expect to use them in a way you use your regular version manager for development.So, if you want to be switching between versions of different projects, that's not the use case for them, at least as I understand it, not for now, you know, but for server that's fine. And the second, second good advantage of Fedora based system, I think is good initial SELinux profile settings, you know, SE Linux is security enhanced Linux.What it really is, is a mandatory access control. So, on usual distribution, you have a discrete permissions that you set that user set themselves on their directories and files, you know, but this mandatory access control means that it's kind of a profile that is there beforehand, the administrators prepares. And, it's kind of orthogonal to those other security, uh, boundaries you have there. So that will help you to protect your most vulnerable, uh, processes because especially with SELinux, there are several modes. So there is, uh, MLS (?) mode for like that maybe an army would use, you know, but for what we use, what's like the default, uh, it's uh, something called targeted policy.And that means you are targeting the vulnerable processes. So that means your services that we are exposing to external world, like whether it's SSH, postgresql, nginx, all those things. So you have a special profile for them. 
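For a concrete picture of what that targeted policy looks like in practice, you can inspect the current mode and the confined domains with the standard SELinux tooling on a Fedora-family system. This is only an illustrative sketch; output will vary from machine to machine:

  # Check whether SELinux is enabled and in which mode (enforcing/permissive)
  getenforce
  sestatus

  # Show the SELinux context (domain) each process runs in;
  # confined services like sshd, nginx and postgres get their own domains
  ps -eZ | grep -E 'sshd|nginx|postgres'

  # Show the labels on files, for example the web root
  ls -Z /var/www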
And if someone, some, attacker takes over, of your one component, one process, they still cannot do much more than what the component was, uh, kind of prepared to do.I think it's really good that you have this high-quality settings already made because other distributions, they might actually be able to run with SELinux. But they don't necessarily provide you any starting points. You will have to do all your policies yourself. And SELinux is actually a quite complex system, you know, it's difficult.It's even difficult to use it as a user. Kind of, if you see some tutorials for CentOS, uh, you will see a lot of people mentioned SELinux maybe even turning it off, there's this struggle, you know, and that's why I also, use and write like one big chapter on SELinux to get people more familiar and less scared about using it and running with it.[00:12:00] Jeremy: So SELinux is, it sounds like it's basically something where you have these different profiles for different types of applications. You mentioned SSH, for example, um, maybe there could be one for nginx or, or one for Postgres. And they're basically these collections of permissions that a process should be able to have access to whether that's, network ports or, file system permissions, things like that.And they're, they're kind of all pre-packaged for you. So you're saying that if you are using a fedora based distribution, you could, you could say that, I want SSH to be allowed. So I'm going to turn on this profile, or I want nginx to be used on this system. So I'm going to turn on this profile and those permissions are just going to be applied to the process that that needs it is that is that correct?[00:12:54] Josef: Well, actually in the base system, there will be already a set of base settings that are loaded, you know, and you can make your own, uh, policy models that you can load. but essentially it works in a way that, uh, what's not really permitted and allowed is disallowed.that's why it can be a pain in the ass. And as you said, you are completely correct. You can imagine it as um nginx as a reverse proxy, communicating with Puma application server via Unix socket, right? And now nginx will need to have access to that socket to be even being able to write to a Unix socket and so on.So things like that. Uh, but luckily you don't have to know all these things, because it's really difficult, especially if you're starting up. Uh, so there are set of tools and utilities that will help you to use SELinux in a very convenient way. So what you, what you do, what I will suggest you to do is to run SELinux in a permissive mode, which means that, uh, it logs any kind of violations that application does against your base system policies, right?So you will have them in the log, but everything will work. Your application will work. So we don't have to worry about it. And after some time running your application, you've ran these utilities to analyze these logs and these violations, and they can even generate a profile for you. So you will know, okay, this is the profile I need.This is the access to things I need to add. once after you do that, if, if there will be some problems with your process, if, if some article will try to do something else, they will be denied.That action is simply not happening. Yeah. But because of the utilities, you can kind of almost automate how, how you make a profile and that way is much, much easier.Yeah. 
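The permissive-then-generate workflow Josef describes might look roughly like this on a Fedora or RHEL-style box. The module name "myapp" is a placeholder invented for the example, and audit2allow typically comes from the policycoreutils-python-utils package:

  # 1. Switch to permissive mode so violations are only logged, not blocked
  sudo setenforce 0

  # 2. Exercise the application for a while, then turn the logged AVC
  #    denials into a local policy module ("myapp" is just a placeholder)
  sudo ausearch -m AVC -ts recent | audit2allow -M myapp

  # 3. Review the generated myapp.te file, then load the compiled module
  sudo semodule -i myapp.pp

  # 4. Go back to enforcing mode
  sudo setenforce 1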
[00:14:54] Jeremy: So, basically the, the operating system, it comes with all these defaults of things that you're allowed to do and not allowed to do, you turn on this permissive flag and it logs all the things that it would have blocked if you were enforcing SELinux. And then you can basically go in and add the things that are, that are missing. [00:15:14] Josef: Yes exactly right. [00:15:16] Jeremy: the, next thing I'd like to go into is, one of the things you talk about in the book is about how your services, your, your application, how it runs, uh, as, as daemons. And I wonder if you could define what a daemon is? [00:15:33] Josef: Uh, you can think about them as a, as a background process, you know, something that continuously runs in the background. Even if the virtual machine goes down and you reboot, you just want them again to be restarted and just run at all times the system is running. [00:15:52] Jeremy: And for things like an application you write or for a database, should the application itself know how to run itself in the background or is that the responsibility of some operating system level process manager? [00:16:08] Josef: uh, every Linux operating system has actually, uh, a so-called init system, it's actually the second process after the Linux kernel that is started on the system, it has a process ID of one. And it's essentially the parent of all your processes, because on Linux you always have parents and children, because you use forking to make new, make new processes. And so this is your system process manager, but obviously systemd, if it's your system process manager, you already trust it with all the system services, you can also trust it with your application, right? I mean, who else would you trust? Even if you choose some other process manager, because there are many, essentially you would have to wrap up that process manager being a systemd service, because otherwise you wouldn't have this connection of systemd being a supreme supervisor of your application, right? When, uh, one of your services struggles, uh, you want it to be restarted and continue. So that's what systemd could do for you, if you, you kind of design everything as a systemd service. For base packages like base postgresql, they already come with systemd services, very easy to use. You just simply start it and it's running, you know, and then for your application, uh, you would write a systemd service, which is a little file. There are some directives, it's kind of very simple and straightforward, uh, because before, before systemd people were doing the services with bash and it was kind of error prone, but now with systemd it's quite simple. They're just a set of directives, uh, that you learn. You tell systemd, you know, under what user you should run, uh, what working directory you want it to be running with. Uh, is there an environment file? Is there a pidfile? And then, uh, a few other things, the most important being a directive called ExecStart, which tells systemd what process to start. It will start a process and it will simply oversee it and will look at errors and so on. [00:18:32] Jeremy: So in the past, I know there used to be applications that were written where the application itself would background itself. And basically that would allow you to run it in the background without something like a systemd.
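As a minimal sketch of the kind of unit file Josef has just described: the unit name, user, paths and the Puma command below are placeholders, not something taken from the episode. Saved as /etc/systemd/system/myapp.service, it could look like this:

  [Unit]
  Description=My application server (Puma)
  After=network.target

  [Service]
  Type=simple
  # Dedicated application user and working directory
  User=myapp
  WorkingDirectory=/srv/myapp
  # Optional file with environment variables
  EnvironmentFile=/srv/myapp/.env
  # The process to run in the foreground; the path depends on how Ruby was installed
  ExecStart=/usr/bin/bundle exec puma -C config/puma.rb
  # Let systemd restart the process when it crashes
  Restart=on-failure

  [Install]
  WantedBy=multi-user.target

After creating or editing the file, systemctl daemon-reload followed by systemctl enable --now myapp.service would register and start it; picking the right ExecStart path is exactly the question the conversation returns to a little later.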
And so it sounds like now, what you should do instead is have your application be built to just run in the foreground.and your process manager, like systemd can be configured to, um, handle restarting it, which user is running it. environment variables, all sorts of different things that in the past, you might've had to write in your own bash script or write into the application itself.[00:19:14] Josef: And there's also some. other niceties about systemd because for example, you can, you can define how reloading should work. So for instance, you've just changed some configuration and you've want to achieve some kind of zero downtime, ah, change, zero downtime deploy, you know, uh, you can tell systemd how this could be achieved with your process and if it cannot be achieved, uh, because for instance, uh, Puma application server.It can fork processes, and it can actually, it can restart those processes in a way that it will be zero downtime, but when you want to change to evolve (?) Puma process. So what do you do, right? And uh systemd have this nice uh thing called, uh, socket activation. And with system socket activation, you can make another unit.Uh, it's not a service unit. It's a socket unit there are many kinds of units in systemd. And, uh, you will basically make a socket unit that would listen to those connections and then pass them to the application. So while application is just starting and then it could be a completely normal restart, which means stopping, starting, uh, then it will keep the connections open, keep the sockets open and then pass them. when the application is ready to, to process them.[00:20:42] Jeremy: So it sounds like if, and the socket you're referring to these would be TCP sockets, for example, of someone trying to access a website.[00:20:53] Josef: Yes, but actually worked with Unix. Uh, socket as well. Okay. [00:20:58] Jeremy: so in, in that example, Um, let's say a user is trying to go to a website and your service is currently down. You can actually configure systemd to, let the user connect and, and wait for another application to come back up and then hand that connection off to the application once it's, once it's back up.[00:21:20] Josef: yes, exactly. That, yeah. [00:21:23] Jeremy: you're basically able to remove some of the complexity out of the applications themselves for some of these special cases and, and offload those to, to systemd.[00:21:34] Josef: because yeah, otherwise you would actually need a second server, right? Uh, you will have to, uh, start second server, move traffic there and upgrade or update your first server. And exchange them back and with systemd socket activation you can avoid doing that and still have this final effect of zero downtime deployment. [00:21:58] Jeremy: So the, this, this introduction of systemd as the process manager, I think there's, this happened a few years ago where a lot of Linux distributions moved to using systemd and there, there was some, I suppose, controversy around that. And I'm kind of wondering, um, if you have any perspective on, on why there's some people who, really didn't want that to happen, know, why, why that's something people should worry about or, or, or not.[00:22:30] Josef: Yeah. there were, I think there were few things, One one was for instance, the system logging that suddenly became a binary format and you need a special utility to, to read it. You know, I mean, it's more efficient, it's in a way better, but it's not plain text rich, all administrators prefer or are used to. 
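For reference, the socket activation Josef described a moment ago is usually a second unit that shares the service's name, for example a hypothetical myapp.socket alongside myapp.service. Whether the application server can actually inherit the socket depends on the server itself (Puma documents its own systemd integration), so treat this as a rough sketch:

  [Unit]
  Description=Socket for my application server

  [Socket]
  # systemd owns this socket and queues incoming connections while
  # myapp.service is stopped, starting, or being restarted
  ListenStream=/run/myapp.sock
  # A TCP port would also work, e.g.:
  # ListenStream=0.0.0.0:3000

  [Install]
  WantedBy=sockets.target

With the socket unit enabled (systemctl enable --now myapp.socket), connections keep being accepted even while the service restarts, which is the zero-downtime effect described above.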
So I understand the concern about the binary log format, you know, but it's kind of like, it's fine. You know, at least to me, it's fine. And the second, the second thing people consistently bring up is some kind of systemd creep, because uh systemd is trying to do more and more every year. So, some people say it's not the Unix way, uh, systemd should be very minimal in its scope and not do anything else. It's partially true, but at the same time, the things that systemd went into, you know, I think they are essentially easier and nicer to use. And for the system services, I can say I certainly prefer how it's done now. [00:23:39] Jeremy: Yeah. So it sounds like we've been talking about systemd as being this process manager, when the operating system first boots systemd starts, and then it's responsible for starting your applications or other applications running on the same machine. Uh, but then it's also doing all sorts of other things. Like you talked about that, that socket activation use case, there's logging. I think there's also scheduled jobs. There's like all sorts of other things that are part of systemd and that's where some people disagree on whether it should be one application that's handling all these things. [00:24:20] Josef: Yeah. Yeah. Uh, you're right with the scheduling job, like replacing Cron, you have now two ways how to do it. But, you can still pretty much choose what you use, I mean, I still use Cron, so I don't see a trouble there. We'll see. We'll see how it goes. [00:24:40] Jeremy: One of the things I remember I struggled with a little bit when I was learning to deploy applications is when you're working locally on your development machine, um, you have to install a language runtime in a lot of cases, whether that's for Ruby or Python, uh, Java, anything like that. And when someone is installing on their own machine, they often use something like a, a version manager, like for example, for Ruby there's rbenv and, for node, for example, there's, there's NVM, there's all sorts of ways of installing language runtimes and managing the versions. How should someone set up their language runtime on a server? Like, would they use the same tools they use on their development machine or is it something different? [00:25:32] Josef: Yeah. So there are several ways you can do it, as I mentioned before, with the modular packages, if you find the version there. I would actually recommend to try to do it with the modular package because, uh, the thing is it's so easy to install, you know, and it's kind of instant. It takes no time on your server. You just install it. It's a regular package. The same is true when building a Docker, uh, Docker image, because again, it will be really fast. So if you can use it, I would just use that because it's kind of convenient, but a lot of people will use some kind of version manager, you know. Technically speaking, they can only use the installer part, like for instance chruby with ruby-install to install new versions. Right. But then you would have to reference these full paths to your Ruby and it's very tedious. So what I personally do, uh, I just really set it up as if I am on a developer workstation, because for me, the mental model of that is very simple. I use the same thing, you know, and this is true, for instance, when you are referencing what to start in the ExecStart directive in systemd, you know, because you have several choices.
For instance, if you need to start Puma, you could reference the full path that sits in your user home, under .gem, the Ruby version number, then bin/puma. Or you can use the version manager; they might have something like chruby-exec to run with the right version of Ruby, and then you pass it the actual Puma part and it will start it for you. But what you can also do, and I think it's kind of beautiful, is that you can just start bash with a login shell, and then you just give it the bundle exec puma command that you would use normally after logging in. Because if you installed everything normally, you have something like a bash profile that will load the environment and put the right version of Ruby in place, and suddenly it works. And I find it very nice, because even when you later log in to your box as that application user, you have all the same environment, and it just starts things the way you are used to. No problem there.
[00:28:02] Jeremy: Yeah, something I've run into in the past is that when I would install a language runtime, like you were describing, I would have to type in the full path to get to the Ruby runtime or the Python runtime. And it sounds like what you're saying is: just install it like you would on your development machine, and then in the systemd configuration file you actually start a bash login shell and run your application from that shell, so it has access to all the same things you would have in an interactive login environment. Is that right?
[00:28:40] Josef: Yeah, that's exactly right. So it will be basically the same thing. And it's kind of easy to reason about; you can start with that and maybe change it later to something else, but it's a nice way to do it.
[00:28:54] Jeremy: So you mentioned having a user to run your application, and I'm wondering how you decide which Linux users should run your applications. Are you creating a separate user for each application you run? How are you making those decisions?
[00:29:16] Josef: Yes, I am actually making a new user for my application, or at least for the part of the application that is the application server and workers. So nginx might have its own user, PostgreSQL might have its own user; I'm not trying to consolidate all of that into one user. But in terms of the Rails application, whether I run Puma or whether I run Sidekiq, that will all be under the one application user. And I will set the right access to the directories appropriately, so it's isolated from everything else.
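As a side-by-side illustration of the choices Josef just walked through, the ExecStart line of such a unit (which would also carry User=myapp, the dedicated application user) could take any of these forms; every path, version number, and name here is hypothetical.

```
# Three ways to point ExecStart at the right Ruby, per the discussion above:

# 1. Hard-code the full path into the user's gem directory
ExecStart=/home/myapp/.gem/ruby/3.2.0/bin/puma -C config/puma.rb

# 2. Go through the version manager's exec helper (chruby-exec RUBY -- COMMAND)
ExecStart=/usr/local/bin/chruby-exec ruby-3.2.0 -- bundle exec puma -C config/puma.rb

# 3. Start a login shell so .bash_profile sets up Ruby exactly as it does over SSH
ExecStart=/bin/bash -lc 'bundle exec puma -C config/puma.rb'
```

The third form is the one he describes as easiest to reason about, since the service sees the same environment you get when you SSH in as the application user.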
[00:30:00] Jeremy: Something that I've seen also, when you are installing Ruby or some other language runtime, is that you have the libraries; in the case of Ruby there are gems. And when you're on your development machine and you install these gems, these packages, they go into the user's home directory, so you're able to install and use them without having, let's say, sudo or root access. Is that something that you carry over to your deployments as well, or do you store your libraries and your gems in some place that's accessible outside of that user? I'm just wondering how you approach it.
[00:30:49] Josef: I would actually keep it next to my application. This kind of touches on the question of where to put your application files on the system. So, there is something called the FHS, the Filesystem Hierarchy Standard, that Linux distributions use, of course with some little modifications here and there. This standard is basically followed by packagers and enforced in package repositories, but other than that it's kind of up to you; it could be a different path. It says where certain files should go: you have /home, we have /usr/bin for executables, /var for logs, and so on. And now, when you want to put your application files somewhere, you are thinking where to put them, right? You have essentially, I think, three options. For one, you can put them in home because, as we talked about, I set up a dedicated user for that application, so it could make sense to put them in home. Why I don't like putting them in home is because there is certain labeling in SELinux that kind of makes your life more difficult; the application is not really meant to be there. On a system without SELinux I think it works quite fine; I also did it before, you know, it's not like you cannot do it. Then you have your web server's default location, like /usr/share/nginx/html or /var/www, and these will be prepared for you with all the SELinux labeling, so when you put files there, things will mostly work, and I also saw a lot of people do that for this particular reason. What I don't like about it is that if nginx is just my reverse proxy, it's not that I am serving the files from there, so I don't like that location for this reason. If it's just a static website, absolutely put it there, that's the best location. Then you can put it in some arbitrary location, some new one that's not conflicting with anything else. If you want to follow the Filesystem Hierarchy Standard, you put it in /srv, and then maybe a directory with the name of the application, or your domain name or hostname; you can choose what you like. So that's what I do now; I simply deploy from scratch to this location, and as part of the SELinux setup I simply make a module, make a profile, and allow all these paths to work. And so, to answer your question of where I would put the gems: they would actually go to this directory too; it will be like /apps/gems, for instance.
[00:34:22] Jeremy: So there are a few different places people could put their application. They could put it in the user's home folder, but you were saying that because of the built-in SELinux rules, SELinux is going to basically fight you on that and prevent you from doing a lot of things in that folder. And what you've chosen to do is to create your own folder, which I guess you described as being somewhat arbitrary, just a folder that you consistently are going to use in all your projects. And then you're going to configure SELinux to allow you to run whatever you want to run from this custom folder that you've decided on.
[00:34:44] Josef: Yeah, and you could say that you do almost the same amount of work for home or some other location; I simply find it cleaner to do it this way, and in a way I even fulfilled the FHS suggestion by putting it in /srv. But yeah, it's completely arbitrary; you can choose anything else. Sysadmins choose www or whatever they like, and it's fine, it'll work, there's no problem there. And the gems, actually, they could be in home, you know, but I just instruct bundler to put them in that location next to my application.
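For what that bundler instruction might look like, here is a hypothetical sketch assuming an app living under /srv/myapp; the paths are illustrative, the SELinux relabel step stands in for whatever policy module you generate for your own layout, and `bundle config set --local path` is the Bundler 2.1+ spelling (older Bundlers use `bundle config path`).

```bash
# Hypothetical layout under /srv, owned by the dedicated application user
sudo mkdir -p /srv/myapp
sudo chown -R myapp:myapp /srv/myapp

# As the application user: keep gems next to the app instead of in ~/.gem
cd /srv/myapp
bundle config set --local path 'vendor/bundle'   # gems end up in /srv/myapp/vendor/bundle
bundle install

# On an SELinux system, relabel the tree once your policy for /srv/myapp is in place
sudo restorecon -Rv /srv/myapp
```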
[00:35:27] Jeremy: Okay. So rather than having a common folder for multiple applications to pull your libraries or your gems from, you have them installed in the same place as the application, and that just keeps all your dependencies in the same place.
[00:35:44] Josef: Yep.
[00:35:45] Jeremy: And in the example you're giving, you're putting everything in /srv and then maybe the name of your application. Is that right?
[00:35:55] Josef: Yeah.
[00:35:55] Jeremy: OK. Yeah, because just looking at different systems, I've noticed people install things into /opt, or install into /srv, and it can be kind of tricky as somebody who's starting out to know where you're supposed to put this stuff. So basically it sounds like: just pick a place, and at least if it's in /srv then sysadmins who are familiar with the standard filesystem hierarchy will know where to look.
[00:36:27] Josef: Yeah, /opt is also a common location, as you say. Or, you know, if it's actually a packaged web application, on Fedora it can even be in /usr/share. So it might not necessarily be in the locations we talked about before.
Jeremy: One of the things you cover in the book is setting up a deployment system, and you're using shell scripts in the case of the book. I was wondering how you decide when shell scripts are sufficient and when you should consider more specialized tools like Ansible, Chef, Puppet, things like that.
[00:37:07] Josef: Yeah, I chose bash in the book because you get to see things without abstractions. If I were using, let's say, Ansible, suddenly we are writing some YAML files, and you are using a lot of the Python modules that Ansible uses, and you don't really know what's going on at all times. And you learn to do things with Ansible 2.0, let's say, and then a new Ansible comes out and you have to revisit what you did, and I would have to rewrite the book. But the thing is that with just bash I can show literally just bash commands: okay, you run this and this happens. Another reason I use it is that you realize how simple something can be. You can have a typical cluster, with SSH and whatever, in maybe 20 bash commands, so it's not necessarily that difficult, and it's much easier to actually understand it if it's just those 20 bash commands. I also think that learning a little bit more about bash is actually quite beneficial, because you encounter it in various places. I mean, RPM spec files, how packages are built, that's bash. Language version managers like pyenv and rbenv, that's bash; if you want to tweak them, or you have a bug there, you might look into the source code and try to fix it, and it will be bash. Then Dockerfiles are essentially bash, and their entrypoint scripts might be bash. So it's not like you can avoid bash. So maybe learning just a little bit more than you know, and being a little bit more comfortable with it, can get you a long way, I think, because even I am not some bash programmer; I would never call myself that.
Also consider this: you can have a full-featured Rails application up and running somewhere in maybe 200 lines of bash code. You can understand it in an afternoon, so for a small deployment I think it's quite refreshing to use bash, and some people miss out by not just doing the first, simplest thing they can do. But obviously, when you get more team members, more complex applications, or a suite of applications, things get difficult very fast with bash, so most people will end up with some higher-level tool. It can be Ansible, it can be Chef, it might be Kubernetes. So my philosophy, again, is just to keep it simple. If I can do something with bash and it's like 100 lines, I will do it in bash, because when I come back to it after three years it will still work, and I can directly see what I have to fix. If there's a PostgreSQL update, a new location, whatever, I immediately know where to look and what to change. With high-level tooling you kind of have to stay on top of the new versions and updates. So yes, bash is very limited, but it's kind of refreshing for a very small deployment you want to do for your side project.
[00:40:29] Jeremy: Yeah. So it sounds like from a learning perspective it's beneficial, because you can see, line by line, code you wrote, and you know exactly what each thing does. But it also sounds like when you have a project that's relatively small, where maybe there aren't a lot of different servers or the deployment process isn't too complicated, you actually choose to start with bash, and then only move to something more complicated like Ansible, or even Kubernetes, once your project has gotten to a certain size.
[00:41:03] Josef: Yeah, and you'll see it in the book: I even explain a multiple-server deployment using bash, where you can actually keep your components kind of separate. So your database has its own life cycle, has its own deploy script, and your load balancer the same. And even when you have application servers, maybe you have more of them. The nice thing is that when you write your first script to provision and configure one server, you then simply write another supervising script that calls this single script in a loop, and you just change a server variable to change the IP address or something, and suddenly you can deploy to more. Of course it's very basic, it doesn't have any kind of parallelization to it or whatever, but if you have like three application servers, you can do it, and you understand it almost immediately. If you are already a software engineer, there's almost nothing to understand, and you can just start and keep going.
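A toy version of that supervising loop might look like the sketch below; the server addresses, the ssh user, and the name of the single-server script are invented for illustration, and the real scripts in the book are more involved.

```bash
#!/usr/bin/env bash
# deploy_all.sh -- hypothetical supervising script that runs a single-server
# deploy script against each application server in turn (no parallelism).
set -euo pipefail

SERVERS=("10.0.0.11" "10.0.0.12" "10.0.0.13")   # made-up addresses

for SERVER in "${SERVERS[@]}"; do
  echo "Deploying to ${SERVER}..."
  # Copy the per-server script over, then run it there.
  scp deploy_one.sh "deploy@${SERVER}:/tmp/deploy_one.sh"
  ssh "deploy@${SERVER}" 'bash /tmp/deploy_one.sh'
done
```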
[00:42:12] Jeremy: And when you're deploying to servers, a lot of times you're dealing with credentials, whether that's private keys, passwords, or keys to third-party APIs. When you're working with this self-hosted environment, working with bash scripts, I was wondering what you use to store your credentials and how those are managed.
[00:42:49] Josef: I use a desktop application called Password Safe that can save my passwords and whatever, and you can also put SSH keys there and so on. And then I can simply do a backup of these keys and passwords to some other secure physical location. But basically I don't use any online service for that. I mean, there are services for that, especially for teams and in the clouds; the big clouds might have their own services for that. But for me personally, again, I just keep it as simple as I can. It's just on my computer, maybe my hard disk, and that's it. It's nowhere else.
[00:43:23] Jeremy: So would this be a case where, on your local machine, for example, you might have a file that defines all the environment variables for each server? You don't check that into your source code repository, but when you run your bash scripts, maybe they read from that file and use it when deploying to the server?
[00:43:44] Josef: Yeah, generally speaking, yes. But I think with Rails there's a nice option to use their encrypted credentials. Basically, you can then commit all these secrets together with your app, and the only thing you need to keep to yourself is just one variable. So it's much easier to store it and keep it safe, because it's just one thing, and everything else you keep inside your repository. I know for sure there are other programs that work the same way and can be used with different stacks that don't have this baked in, because Rails has it baked in; if you are using Django, or Elixir, whatever, they don't have it. But I know there are some programs, I don't remember the names right now, that essentially allow you to do exactly the same thing: commit it to source control, but in a secure way, because it's encrypted.
[00:44:47] Jeremy: Yeah, that's an interesting solution, because you always hear about people checking passwords and keys into their source code repository and then, you know, it gets exposed online somehow. But in this case, like you said, it's encrypted and only your machine has the key, so that actually lets you use the source code repository to store all of that.
[00:45:12] Josef: Yeah. I think for teams, you know, for more complex deployments, there are various tools, from HashiCorp Vault to some cloud providers' offerings, but you can really start simple and keep it very, very simple.
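For readers who haven't seen it, the Rails encrypted-credentials workflow he is referring to looks roughly like this; the AWS key names are only an example of the kind of secret you might keep there, not anything from the book.

```bash
# Edit the encrypted file; Rails decrypts config/credentials.yml.enc with
# config/master.key, which is gitignored by default and never committed.
EDITOR=vim bin/rails credentials:edit

# On the server, the one secret you keep out of the repository is the master
# key, e.g. provided as an environment variable:
#   RAILS_MASTER_KEY=<contents of config/master.key>
#
# In application code the values are then read with something like:
#   Rails.application.credentials.dig(:aws, :secret_access_key)
```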
[00:45:27] Jeremy: For logging an application that you're self-hosting, there are a lot of different managed services that exist, but I was wondering what you use in a self-hosted environment, and whether your applications are logging to standard out or writing to files themselves. I was wondering how you typically approach that.
[00:45:47] Josef: Yeah, so there are lots of logs you can have: the system log, your web server log, application log, database log, whatever. And you somehow need to stay on top of them, because when you have one server it's quite fine to just log in and look around, but when there are more servers involved it's kind of a pain, so people start to look at some centralized logging system. I think when you are more mature you will look at things like Datadog, or you will build something of your own on the Elastic stack; that's what we do on the project I'm working on right now. But I think there's some upfront cost in setting it all up, and in terms of the Elastic stack, you are essentially building your own logging application, you could even say. It's a lot of work. I also want to say that you don't look into your logs all that often, especially if you set up proper error and performance monitoring, which is one of the first things I do on my projects. Those are services like Rollbar and Skylight, and there are some you can self-host, so if people want to self-host them, they can. But I find it easier, even though I'm self-hosting my application, to just rely on a hosted solution like Rollbar, Skylight, or AppSignal. And I have to say I've especially started to like AppSignal recently, because they kind of bundle everything together. When you have trouble with your self-hosting, the last thing you want is to find yourself in a situation where your self-hosted logs and error reporting went down as well and don't work.
[00:47:44] Josef: So although I like self-hosting my application, I kind of like to offload this responsibility to some hosted providers.
[00:47:50] Jeremy: Yeah. I think that in and of itself is an interesting topic to cover, because we've mostly been talking about self-hosting your applications, and you were just saying how logging might be something where it's actually better to use a managed service. I was wondering if there are other services, for example CDNs or other things, where it actually makes more sense to let somebody else host it rather than hosting it yourself.
[00:48:20] Josef: I think that depends. Logging, for me, is obvious. And then I think a lot of developers kind of fear databases, so they would rather have some kind of one-click database with replication and all that jazz, backups and so on. So I think a lot of people would go for a managed database, and although it may be one of those pricey services, it's also one that actually gives you peace of mind. Maybe I would just point out that even though you get all these automatic backups and so on, you should maybe still make your own backup, just to be sure. Even if someone promised you something, your data is usually the most valuable thing you have in your application, so you should not lose it. And some people will maybe go for a managed load balancer, because it may be easy to start; let's say on DigitalOcean you just click it and it's there. But if you go the opposite direction and, for instance, decide to self-host your load balancer, it can also give you more options for what to do with it, right? Because you can configure it differently. You can even configure it to be a backup server if all of your application servers go down, which could be an interesting use case, right? If you mess up and your application servers are not running because you are just messing with them, suddenly it's okay, because your load balancer just takes on the traffic. And you can do that if it's your load balancer; the hosted ones are sometimes limited. So I think it also comes down to this: even with the database, maybe you use some kind of extension that is simply not available.
That kind of makes you self-host something. But if they offer exactly what you want and it's really easy, then maybe you just do it. And that's why I think I kind of like deploying to virtual machines in the cloud: you can mix and match all the services, do what you want, and you can always change the configuration to meet your needs. I find that quite nice.
[00:50:39] Jeremy: One of the things you talk about near the end of your book is how you start with a single server: you have the database, the application, the web server, everything on the same machine. I wonder if you could talk a little bit about how far you can take that one server, and why people should consider starting with that approach.
[00:51:13] Josef: I'm not sure, it depends a lot on your application. For instance, I write applications that are quite simple in nature; I don't have so many SQL calls on one page and so on. But the applications I worked on before, sometimes they are quite heavy, and even with little traffic they suddenly need a beefier server. So it's a lot about the application, but there are certainly a lot of good examples out there. For instance, the team behind the X-Plane flight simulator just deploys to one server, the whole backend, all those flying players, because it's essentially simple. And they even use Elixir, which is based on the BEAM VM, which means it's great for concurrency, great for distributed systems, great for multiple servers, but it's still deployed to one because it's simple; they use a second one only when they do updates to the service, and otherwise they go back to one. Another one would maybe be Pieter Levels, a maker that already has like a $1 million business, and he has all of his projects on one server, because it's enough. Why do you need to make it complicated? You can run a very profitable service and never leave one server; it's not a problem. Another good example, I think, is Stack Overflow. They have a page where they show you exactly what servers they are running. They have multiple servers, but the thing is, they have only a few. So those are the examples that go against maybe the chant of spinning up hundreds of servers in the cloud, which you can do, and which is easier when you have to do auto-scaling, because you can just go little by little. But I don't see the point of having more servers; to me it means more work. If I can do it with one, I do it with one. But I would mention one thing to pay attention to: when you are on one server, you don't want your background workers to suddenly exhaust all the CPU so that your database cannot serve your queries anymore, right? So for that I recommend looking into control groups, or cgroups, on Linux. You create a simple slice, which is where you define how much CPU power and how much memory can be used, and then you attach it to some processes. And when we are talking about systemd services, they actually have this one directive where you specify your cgroup slice. Then, when you have this worker service, and maybe it even forks because it runs some utilities to process images or whatnot, it will all be contained within that cgroup. So it will not influence the other services you have, and you can say, okay, I give the worker service only 20% of my CPU power, because I don't care whether it finishes fast or not; that's not important. What's important is that every visitor still gets their page, and if they are waiting for some background process, they will wait, and your service is not going down.
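A minimal sketch of that kind of limit, assuming a hypothetical slice and worker service; the names, the 20% figure, and the Sidekiq command just echo the example above, and MemoryMax needs a reasonably recent systemd (MemoryLimit is the older spelling).

```
# /etc/systemd/system/workers.slice -- hypothetical slice with resource limits
[Slice]
CPUQuota=20%
MemoryMax=1G

# In the worker's service unit, attach the service (and anything it forks)
# to that slice:
[Service]
Slice=workers.slice
ExecStart=/bin/bash -lc 'bundle exec sidekiq'
```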
[00:54:34] Jeremy: Yeah. So it sort of sounds like the difference is: if you have a whole bunch of servers, then you have to have some way of managing all those servers, whether that's Kubernetes or something else. Whereas an alternative to that is having one server, or just a few servers, but going a little bit deeper into the capabilities of the operating system, like the cgroups you were referring to, where you can specify how much CPU, how much RAM, and so on, each service on that same machine gets to use. So I don't know if it's removing work, but it's changing the type of work you do.
[00:55:16] Josef: Yeah, you essentially have to think about it more, in the sense of splitting up the memory or CPU power. But it also enables you to use, for instance, Unix sockets instead of TCP sockets, and they are faster, so in a way it can also be an advantage in some cases to actually keep it on one server. And of course you don't have a network trip, so that's another saving; as long as it's running and there's no problem, that service will be faster. For high availability, yeah, it's obviously a problem if you have just one server. But you also have to think about it in a more complex way: to be highly available with all your components, from load balancers to databases, you suddenly have a lot of things to take care of, and that setup might be complex, might be fragile. Maybe you are better off with just one server that you can quickly spin up again. So for instance, if there's any problem with your server, you get an alert and you simply make a new one, and if you can configure it within 20 or 30 minutes, maybe it's not a problem; maybe you are even still fulfilling your service level agreement for uptime. So if I can go this way, I prefer it, simply because it's so much easier to think about it like that.
[00:56:47] Jeremy: This might be a little difficult to answer, but when you look at the projects where you've self-hosted them versus the projects where you've gone all in on, say, AWS, and you're trying to troubleshoot a problem, do you find that it's easier when you're troubleshooting things on a VM that you set up, or do you find it easier when you're working with something that's connecting a bunch of managed services?
[00:57:20] Josef: Oh, absolutely, I find it much easier to debug anything I set up myself, and especially with one server it's even easier. Simply the fact that you built it yourself means that you know how it works, and at any time you can go and fix your problem. This is where I see a problem with services like the DigitalOcean marketplace, I don't know what they call these self-hosted apps that you can one-click and have your Rails or Django app up and running. I actually used one back when I wasn't that skilled with Linux and all those things; I used another distribution called TurnKey Linux. It's the same idea.
You know, they pre-prepare the profile for you, and then you can just easily run it as if it were a completely hosted thing like Heroku, but actually it's your server and you have to pay attention. And I actually don't like it, because you didn't set it up. You don't know how it's set up; you don't know if it has some problems, some security issues. And especially the people that come for these services then end up running something they don't know. I believe they don't know, because when I was running it, I didn't know, right? So they don't even know what they are running. So if you really don't want to care about it, I think that's completely fine, there's nothing wrong with that, but then just go for Render or Heroku and make your life easier, you know?
[00:58:55] Jeremy: Yeah, it sounds like with the solutions where it's a one-click install on your own infrastructure, you get the bad parts of both: you get the bad parts of having this machine that you need to manage, but you didn't set it up, so you're not really sure how to manage it, and you don't have that team at Amazon who can fix something for you, because ultimately it's still your machine. So that could have some issues there.
[00:59:20] Josef: Yeah, exactly. I wouldn't recommend it, or if you really decide to do it, at least really look inside, try to understand it, try to learn it, and then it's fine. But to just spin it up and hope for the best, that's not the way to go.
[00:59:37] Jeremy: In the book you cover a few different things that you use, such as Ruby on Rails and nginx, Redis, Postgres. I'm assuming that for the things you choose for applications you build and self-host, you want them to have as little maintenance as possible, because you're the one who's responsible for all of it. I'm wondering if there are any other applications that you consider part of your default stack that you can depend on, where the maintenance burden is low.
[01:00:12] Josef: Yeah, exactly right. If I can, I would rather minimize the number of dependencies I have. So for instance, I would think twice about using, let's say, Elasticsearch, even though I've used it before and it's great for what it can do. If I can avoid it, maybe I will try to avoid it; you can have decent full-text search with Postgres today. So as long as that works, I would personally avoid it. I think one relational database and, let's say, Redis are kind of necessary. Although I've worked a lot with Elixir recently, and there we don't use Redis, for instance, so it's kind of nice that you can limit the number of dependencies by just choosing a different stack, although then you have to write your application in a little different way; but sometimes, in such circumstances, it could still be useful. I think it's not difficult to run, so I don't see a problem there. I would just say that services like Elasticsearch might not come with a good authentication option. For instance, I think Elasticsearch offers it, but not in the free version. So if you are deploying a component like that, be aware that you cannot just keep it completely open to the world. And if you don't want to pay for a version that has it, or the version you are using doesn't have it at all, you could maybe build just a tiny little proxy that would do the authentication and pass the requests back and forth. This is what you could do, you know. But just don't forget that you might be running something unauthenticated.
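One common way to build such a tiny proxy is an nginx server block with HTTP basic auth in front of the unauthenticated service. The sketch below assumes Elasticsearch listening only on localhost:9200 and uses made-up hostnames and certificate paths; it is purely illustrative, not something from the book.

```
# Hypothetical nginx vhost that puts basic auth in front of Elasticsearch
server {
    listen 443 ssl;
    server_name search.example.com;

    # placeholder certificate paths
    ssl_certificate     /etc/nginx/certs/search.example.com.crt;
    ssl_certificate_key /etc/nginx/certs/search.example.com.key;

    # password file created with: htpasswd -c /etc/nginx/.htpasswd someuser
    auth_basic           "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;

    location / {
        proxy_pass http://127.0.0.1:9200;
    }
}
```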
Jeremy: I was wondering if there are any other applications or capabilities where you would typically hand off to a managed service rather than trying to deal with it yourself.
[01:02:28] Josef: Oh, sending emails. Not because it's hard; it's actually surprisingly easy to start sending your own emails. The problem is the deliverability part, right? You want your emails to be delivered, and I think because of the amount of spam everybody's sending, it's very difficult to get into people's inboxes. You'll simply be flagged; you have some unknown address, and it will just not work. So actually building up a history for an IP address can take a while, it can be very annoying, and you don't even know how to debug it. You cannot really write to Google, "Hey, you know, I'm just this nice little server, so please consider me." You cannot do that. So it's kind of a trouble, and I would say email is another thing where you just go with a hosted option. You might still configure your server to send out emails; that could be useful, for instance, if you want to do some little thing like scanning your system log, and when you see some troublesome login or something that shouldn't happen, maybe you just want an email alert sent to you that something fishy is going on. So you can still set that up, even for your server and not just your main application, which might have a nice library for sending that email, but you will still need a so-called relay server to actually pass your email along. Because building up this trust in the email world, that's not something I would do, and I don't think that as an independent maker or developer you really have the resources to do something like that. So that would be a perfect example of something to hand off, yeah.
[01:04:22] Jeremy: Yeah, I think that's probably a good place to start wrapping up, but is there anything we missed that you think we should have talked about?
[01:04:31] Josef: I think we kind of covered it. Maybe we didn't talk much about containers, which a lot of people nowadays use. Maybe I would just like to point out one thing about containers: you can, again, take a very minimal approach to adopting them. You don't need to go all in on containers at all; you can just run a little service, maybe your workers, in a container. For example, if I want to run something as part of my application and the ops team or the developers that develop this one component already provide a Dockerfile, it's a very easy way to start, right? Because you just deploy their image and you run it, that's it. And you don't have to learn what kind of different stack it is, whether it's Java or Python, or how you would run it. So maybe you take care of your own application yourself, but when you have to take something that's already made and it has a Docker image, that's a nice way to start. And one more thing I would like to mention is that you also don't really need to use services like Docker Hub. Most people use it to host their artifacts, their built images, so they can quickly pull them and start them on many, many servers and so on. But if you have just one server like me and you want to use containers, I think you can just push the container over directly. Essentially, it's just an archive, and in that archive there are a few folders that represent the layers, the layers as you built them with the Dockerfile, and that's it. You can just move it around like that, and you don't need any external services to run your container for this little service.
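Shipping an image to a single server without any registry can be as simple as piping docker save over SSH; the image name and host below are made up for illustration.

```bash
# Build locally, then stream the image archive straight to the server
docker build -t myapp:latest .
docker save myapp:latest | gzip | ssh deploy@server1 'gunzip | docker load'

# Then start it on the server
ssh deploy@server1 'docker run -d --name myapp -p 3000:3000 myapp:latest'
```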
[01:06:18] Jeremy: Yeah, I think that's a good point, because a lot of times when you hear people talking about containers, it's within the context of Kubernetes, and that's a whole other thing you have to learn. You have to learn not only how containers work, but how to deploy Kubernetes, how to work with that. I think it's good to remind people that it is possible to just choose a few things and run them as containers. Like you said, you don't even need to run everything as containers; you can just try a few things.
[01:06:55] Josef: Yeah, exactly.
[01:06:57] Jeremy: Where can people check out the book, and where can they follow you and see what you're up to?
[01:07:04] Josef: They can just go to deploymentfromscratch.com; that's the homepage for the book. And if they want to follow up, they can find me on Twitter; that would be slash S T R Z I B N Y J, like J. I try to put updates there, but also some news from the Ruby, Elixir, and Linux world, so they can follow along.
[01:07:42] Jeremy: Yeah. I had a chance to read through the alpha version of the book, and there's a lot of really good information in there. I think it's something that I wish I had had when I was first starting out, because there's so much that's not really talked about. When you go look online for how to learn Django or Ruby on Rails or things like that, they teach you how to build the application and how to run it on your laptop, but there's this very large gap between what you're doing on your laptop and what you need to do to get it running on a server. So I think anybody who's interested in learning more about how to deploy their own application, or even just how it's done in general, will find the book really valuable.
[01:08:37] Josef: Okay, yeah, thank you for saying that, it makes me really happy. And as you say, that's the idea: I really packed kind of everything you need into that book. I just use bash so it's easier to follow and there aren't any abstractions, and then maybe you will learn some other tools and apply the concepts, but you can do whatever you want.
[01:09:02] Jeremy: All right. Well, Josef, thank you so much for talking to me today.
[01:09:05] Josef: Thank you, Jeremy.
Chris talks about overfitting to the public leaderboard (on Kaggle and in life), and Christian tries to figure out a direction for two different projects: a hierarchical book reader (or writer?), and his timetracking app: should he go more B2B, or be more like Scott's Cheap Flights or Nomad List? 00:00 Intro 02:48 Overfitting to the public leaderboard 06:48 Different approaches to life, business and problems 11:40 Buying a new computer? 20:04 SaaS Work in Progress and 30x500 23:03 Hot Take: hierarchical book reader 37:29 Hot Take: timetracker: B2B or start a cult? Timestamps created with https://clips.marketing by @cgenco
Miquel works and lives as a tireless programmer and is known on Twitter as Vivir En Remoto. He is also known for being a Spanish-flavored version of Pieter Levels, although with a few fewer zeros in the bank account (I feel you), but with the same drive for churning out little projects in series. #106 How to be […] The post Un programador adicto a crear was first published on Pau Ninja: el podcast.
Miquel works and lives as a tireless programmer and is known on Twitter as Vivir En Remoto. He is also known for being a Spanish-flavored version of Pieter Levels, although with a few fewer zeros in the bank account (I feel you), but with the same drive for churning out little projects in series. • Notes for this episode: https://podcast.pau.ninja/106 • Episodes by topic: https://podcast.pau.ninja/ • Subscribe to the podcast: https://podcast.pau.ninja/episodios • Podcast community: https://podcast.pau.ninja/comunidad • About me: - Blogs and projects: https://pau.ninja/ - Social media: @pau_ninja
Are all remote workers future nomads?As one of the founders of the modern remote work movement, and creator of Nomad List and RemoteOK, Pieter Levels offers his take on the future of the digital nomad lifestyle and its communities.Find the full transcript here!
Episode #68 with Dylan—Today he's talking to Steph Smith, author of Standing Out in 2020: Doing Content Right.
Dishin' out Smart Nonsense about:
Should you quit your job and learn how to code?
The beauty of remote work and a nomadic lifestyle
Creating content: What you must know
Links
Connect with Steph Smith: Website | Twitter | Trends
Steph Articles:
The Guide to Remote Work That Isn't Trying to Sell You Anything
How to Be Great? Just Be Good, Repeatably
Antifragility at Work: Change is the Only Constant
Don't Snooze on these Sleep Industry Trends: Why Sleep Could Be the Next Mindfulness Revolution
Standing Out in 2020: Doing Content Right by Steph Smith
The Big Business of Drug Patent Expirations | Trends
#114 with Steph Smith - The Head of Trends Talks Insider Trading and The Gen-Z Mafia | My First Million
Brandon Zhang: The Student Mindset, Growth Hacking, and Building Community | Smart Nonsense Pod #61
All-In with Chamath, Jason, Sacks & Friedberg
Pieter Levels makes $600k a year from Nomad List and Remote OK: https://www.nocsdegree.com/pieter-levels-learn-coding/
Automate the Boring Stuff with Python by Al Sweigart
Wait But Why by Tim Urban
Thinking in Bets by Annie Duke
The Hustle | Sam Parr
Nomad List | Pieter Levels
Ghost | John O'Nolan
Leave Me Alone | Danielle Johnson | James Ivings
Shaan Puri
Paul Jarvis
Tim Ferriss
Gary Vaynerchuk
Toptal
Gumroad
Here's the full Show Notes.
Watch on YouTube & Subscribe
Watch Henry's last YouTube video.
P.S. Toss us a 5-star review :)
On this episode Abadesi talks to Pieter Levels, founder of Nomad List, a global community of international travellers working around the world; RemoteOK, a job board for remote jobs; and Hoodmaps, a unique neighborhood map app. In this episode they talk about...
Bootstrapping versus VC, and why he doesn't want to build a team around his products
“I don't want to lose my skills. If I stop making stuff and become a manager, I'm going to learn new skills, but I'm not a business guy. I'm a creative person. I get happy from making stuff that works and people use.”
Pieter says that he originally thought about creating a venture-backed business, which was going to be a proto-Uber in Amsterdam, before he pivoted to bootstrapping businesses. They discuss the questionable ethics of big venture-backed businesses, which have often had to compromise on their values to get really big, really fast. He says that he works with one other person on his products but otherwise works on them all on his own, and he likes it that way. He explains why he doesn't want to become a manager and instead prefers to keep working on his current products, and potential new ones, on his own, instead of delegating them to someone else once they've become successful.
The difference between creating a website and building a community, and how to think about charging for your product
“It's psychologically difficult to charge people money.”
He talks about how Nomad List has evolved over time and the features he has added to the site. He explains how it transformed from a website into a community. He breaks down the benefits of a community in expanding the reach of a movement, and the intangibles that a community brings with it. He talks about how he got over the psychological barriers to charging money for access to a community, and says that at one time he explained to a member that he was even somewhat embarrassed to be charging, though now he hears from people all the time about the value that it brings to their lives. He also talks about the difficulty of managing and moderating a community.
What the future of remote work and the digital nomad lifestyle will look like
“You start off as a nomad thinking that you are going to travel the world forever, but you go insane if you travel too fast.”
Pieter talks about the evolution of the digital nomad lifestyle from its infancy to now, and why it's being talked about more than ever. He says that it was at one time a somewhat fringe movement and that he never expected it to expand like it has. He says that creating the community around the lifestyle has helped accelerate its acceptance in mainstream culture and has resulted in there being more resources than ever for digital nomads. He says that in time we won't be calling it nomadism anymore; it will just become a normal part of life as remote work gains more and more acceptance. He says that eventually "digital nomadism" will become a term like "netizens" (an early term for people who used the internet) that we don't use anymore, because it is so pervasive, just like the internet has become.
“I flew less than my Dutch friends last year. Travel's really fun, but it's more about finding a place where you feel better than where you were born and grew up.”
We'll be back next week, so be sure to subscribe on Apple Podcasts, Google Podcasts, Spotify, Breaker, Overcast, or wherever you listen to your favorite podcasts. Big thanks to Headspin Mobile for their support.
We go over Pieter Levels' methodology for turning an idea into a business and then selling it.
Many SaaS products have come out of imagination freed from delivery deadlines. Today we dedicate the episode to SaaS products that started as side projects and became real companies (Product Hunt, PixelMe, Gatsby…), how they serve as an escape valve from monotony, help avoid procrastination, and can even improve your finances. Recommendations: The Ultimate Guide for Working with Side Projects: https://medium.freecodecamp.org/the-ultimate-guide-for-side-projects-fdcc3531dfd5 Side projects: Small Investments: https://www.indiehackers.com/@kslambert/side-projects-small-investments-faaab0c8cc Turning side projects into profitable startups (Pieter Levels): https://youtu.be/6reLWfFNer0 Indie Hackers: https://www.indiehackers.com Gatsby: https://www.gatsbyjs.org/blog/2018-05-24-launching-new-gatsby-company/ PixelMe: https://medium.com/pixelme-blog/from-bali-to-our-first-10-paying-customers-854d710f8e81
A conversation with Eric Vieira based on Pieter Levels' video https://youtu.be/6reLWfFNer0 and the HackTalk with Henrique Bastos: https://youtu.be/VVh5ixNZ2dk
Blog / Podcast: Presentástico Podcast: Humor en público Book: Willpower Doesn't Work: Discover the Hidden Keys to Success by Benjamin Hardy. Podcast: Puro Mac Blog: Tim Ferriss Book: Armas de titanes: Los secretos, trucos y costumbres de aquellos que han alcanzado el éxito by Tim Ferriss (original version: Tools of Titans). Web: levels.io - Pieter Levels. Web: Nomad List - Pieter Levels. Twitter: Pieter Levels. Book: Factfulness TED: Nicholas Christakis: The hidden influence of social networks. Books: Conectados: El sorprendente poder de las redes sociales y cómo nos afecta by Nicholas A. Christakis and James H. Fowler. Article: How To Launch Your Next Project /W Product Hunt's Ryan Hoover Episode: ZT 104 Practicando Deep Work con meditación, lectura rápida y mejora contínua with Rick Téllez + "Extreme Ownership" by Jocko Willink App: Insight Timer – Meditation (Android, iOS) Episode: EB 33 Autónomos felices: Daniel Julià (extra ball) Episode: EB 39 Estoicismo para la vida moderna with Gonzalo Álvarez Marañón (extra ball) Episode: EB 40 Carta VI de Seneca (Seneca's sixth letter, on sharing knowledge) These are our Telegram groups: Somos zetatesters (general group). ZetaKids (for parents concerned about their children's education)
Starting an online business is scary. You're putting yourself out there and risking failure in front of thousands or even millions of people. Learn how Pieter Levels has not only faced his fears, but used them as motivation while building an empire of profitable businesses that cater to digital nomads. Transcript, speaker information, and more: https://www.indiehackers.com/podcast/043-pieter-levels-of-nomad-list
In this episode of Ik Weet Het Ook Niet I talk with Pieter Levels about digital nomads. The following topics come up: money, remote work, countries, that ATM with the goose, paralysis by analysis, travel rage, and pushing trends.
DNX - Digital Nomad Podcast with Marcus Meurer & Felicia Hargarten
DNX is intense input on digital nomadism, location-independent work, online entrepreneurship and personal development in all areas. More info about DNX at www.dnx-berlin.de. Write to me at marcus@lifehackz.co, leave a short review on iTunes and subscribe to the show! Also become part of the free DNX LIFE HACKZ community with thousands of like-minded lifehackers. A thousand thanks, yours Marcus. SHOWNOTES Pieter Levels Website
Michael has Joel on the show and they talk about life, health, business startups, mindset, and what it means to be a digital nomad. Show links Check out Joel here --> His site will direct you to his instagram, twitter, business, and everything else he's doing. Check out Pieter Levels, the guy that Joel talked about on the show, making 12 startups in one year. Be sure to like our Facebook page. Terry has a new book coming out in October. If you’re a pastor or Christian leader who teaches young people, you’ll want to pre-order TEACHING THE NEXT GENERATIONS. It’s designed as a textbook but it will help anyone learn more about how teaching and learning “work.” Another book that Terry has been devouring this summer is GRIT: THE POWER OF PASSION AND PERSEVERANCE by Angela Duckworth. It’s one of those game-changer books that leaders in various lines of work are recommending. Part of the clarifying process is to eliminate some nonessentials. We like the book ESSENTIALISM by Greg McKeown. If you want to start a blog or just relocate one to a better server, we can’t speak highly enough about our experience with Bluehost. 24/7 support and easy-to-get-started instructions make Bluehost the standard for folks with better things to do with their time. Links: Follow 37 the Podcast on Twitter. Follow Terry Linhart on Twitter or Michael Yoder on Twitter. 37thePodcast is a production of Truth Work Media. Truth Work Media creates monsters called podcasts. If you want to start a podcast, we can help. Thanks to J2 Marketing for providing studio space. Thanks to our friends at Bethel College (Indiana). For more about that vibrant growing Christian liberal arts college, visit BethelCollege.edu. The opinions expressed on this episode, unless the sources are otherwise given, are exclusively of participants on the podcast.
Recorded on the island of Koh Phangan in Thailand’s south, today’s episode is a discussion on what it’s like to be a ‘tropical digital nomad’, living and working quite literally on the beach. Guido and Marjet are really good friends of ours that we met during our time in Koh Lanta. An experienced developer and a translator, Guido and Marjet offer two very different perspectives on the location independent lifestyle and an important reality check for newbies. Discussion Points [1:31] Guido and Marjet attempt to ‘paint the picture' of what it's like to live and work on a tropical island [5:58] How Guido convinced his employer (a large bank in the Netherlands) to allow him to work from Thailand (hint: it wasn't hard) and why being a developer is probably the best career for working remotely. [8:43] How Guido & Marjet easily became a part of the community of nomads on Koh Phangan and a comparison of Koh Phangan to other Thai DN hotspots, Koh Lanta and Chiang Mai. [14:54] The beginnings of Guido & Marjet's digital nomad journey and how Marjet transitioned from bartending to becoming a location independent translator. [22:10] Guido and Marjet's top tips for newbie digital nomads and the realities of living the digital nomad lifestyle Links Mentioned * Beachub * KoHub * Nomadlist.com * The article on Pieter Levels that inspired my own DN journey * My Bali recommendations – Dojo and Hubud * UpWork.com * Fiverr.com
Around 2 years ago, Dylan Wolff made a decision to challenge himself and go after a completely new career as a remote web developer. An internal auditor for years, he decided that he wanted to learn a new skill and travel the world. Today, he works full-time as a location independent developer for a distributed Ruby on Rails consultancy. I've previously featured Dylan's story on my website in my inspiring nomads blog series and most recently on my YouTube channel, where we answered viewers' questions on becoming a remote web developer. In this week's podcast, we go deep into Dylan's story: how he learned web development, how he scored his first job and how he now travels the world, working remotely. Talking Points [01:40] How Dylan came to the realisation that he wanted to work remotely and travel the world and the steps he took to make it happen [5:38] Starting from square one and becoming inspired by Pieter Levels [11:16] Making the choice to pursue web development and finding a firm that supports remote workers [14:12] The resources Dylan used starting out and his top tips for finding the time to code while working full-time [23:49] Knowing when you're ready to look for work and his first experiences entering the workforce [34:49] The challenge of working solo [36:49] How Dylan managed to score a job that allowed him to work remotely from the get-go and what he would have done if it hadn't been that way. [42:59] Travel experiences as a digital nomad and plans moving forward [45:59] How Dylan balances work and travel and deals with multiple time zones [51:18] His favourite locations in the world to work from Links Mentioned Pieter Levels Hashtag Nomads NomadList.com Slack & Basecamp Treehouse, Code School, Code Academy Thinkful Michael Hartl - The Rails Tutorial Pimsleur My first public vlog - Moving Out to Travel the World Our video together - How to become a developer J Space - Jeju Island Nick's Blog Post on finding your own rhythm Dylanwolff.com
While most entrepreneurs in Chiang Mai are chasing passive income, Raphael is on a passionate mission to help young people with their gap year decisions. His startup, Années Sabbatiques (literally Gap Years in French), is an online portal for discovering gap year opportunities. Despite his dedication to Années Sabbatiques, Raphael has achieved his lifestyle goal of location independence and regular vacation time. Listen as Raph and I discuss his path to location independence and the experience of building a startup on the road. Show Notes [2:32] A life-changing gap year in Australia, catching the travel bug and making the choice to pursue travel after graduation [6:22] Taking the first steps towards location independence [7:52] How Raphael built his career and made it work within his ideal lifestyle [12:26] The importance of having a mission and working with companies who share it [17:16] How Raphael built the skills that allowed him to be location independent, his transition into working remotely and how he makes his work, work for him [23:31] Doing work that fulfils while still making money [24:46] Raphael's startup, Années Sabbatiques [28:37] The challenges of building a startup on the road, how Raphael went about gaining client work and the one client that ended up making all the difference [35:31] The advantages of finding work before leaving your home country [37:42] Raphael's travel journey so far and the transition from backpacker to digital nomad [41:36] Raphael's favourite places to work remotely [43:14] Raphael's top tips for those looking to become location independent Links Mentioned Love Affair Travel podcast Pieter Levels - 12 startups in 12 months Natalie Sisson - Suitcase Entrepreneur Années Sabbatiques - Anneessabitique.com Episode 5 with Chris Chui Dylan's Story
Pieter Levels, Founder of Levels.io, talks about how he’s building 12 startups in 12 months. Not only do we cover how he’s able to pull that off (and do it with awesome quality), but he has really interesting reasons why he’s approaching startups this way. Learn more about your ad choices. Visit megaphone.fm/adchoices