Podcasts about GitLab

open-source Git repository host

  • 873PODCASTS
  • 1,704EPISODES
  • 44mAVG DURATION
  • 5WEEKLY NEW EPISODES
  • May 27, 2025LATEST
GitLab

POPULARITY

20172018201920202021202220232024

Categories



Best podcasts about GitLab

Show all podcasts related to gitlab

Latest podcast episodes about GitLab

SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
SANS Stormcast Tuesday, May 27th 2025: SVG Steganography; Fortinet PoC; GitLab Duo Prompt Injection

SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast

Play Episode Listen Later May 27, 2025 7:13


SVG Steganography Steganography is not only limited to pixel-based images but can be used to embed messages into vector-based formats like SVG. https://isc.sans.edu/diary/SVG%20Steganography/31978 Fortinet Vulnerability Details CVE-2025-32756 Horizon3.ai shows how it was able to find the vulnerability in Fortinet s products, and how to possibly exploit this issue. The vulnerability is already being exploited in the wild and was patched May 13th https://horizon3.ai/attack-research/attack-blogs/cve-2025-32756-low-rise-jeans-are-back-and-so-are-buffer-overflows/ Remote Prompt Injection in GitLab Duo Leads to Source Code Theft An attacker may leave instructions (prompts) for GitLab Duo embedded in the source code. This could be used to exfiltrate source code and secrets or to inject malicious code into an application. https://www.legitsecurity.com/blog/remote-prompt-injection-in-gitlab-duo

Engineering Kiosk
#196 Star Wars auf GitHub: 4,5 Mio. Fake-Sterne entdeckt

Engineering Kiosk

Play Episode Listen Later May 20, 2025 61:33


Welchen Wert haben GitHub-Stars?GitHub selbst ist ein Social Network für Entwickler*innen. Ob du es wahrhaben willst oder nicht. Man interagiert miteinander, kann sich gegenseitig folgen und Likes werden in Form von Stars ausgedrückt. Das bringt mich zu der Frage: Welchen Wert haben eigentlich GitHub Stars? Denn Fraud in Social Networks, wie das Kaufen von Followern, ist so alt wie die Existenz solcher Plattformen.Wie sieht es also auf GitHub damit aus? In dieser Episode schauen wir uns eine wissenschaftliche Untersuchung zum Thema Fake Stars auf GitHub an. Was sind GitHub-Stars wert? Aus welcher Motivation heraus kaufen sich Leute eigentlich GitHub Stars? Welche Herausforderungen gibt es, Fake Stars zu erkennen? Wie werden GitHub Stars eigentlich genutzt?Aber bei der wissenschaftlichen Untersuchung bleibt es nicht. Wir haben die Community gefragt, welche Bedeutung GitHub Stars für sie haben, ob Stars ein guter Indikator für die Qualität eines Projekts sind, wie diese Entscheidungen beeinflussen und nach welchen Kriterien die Community Stars vergibt.Zwei kleine Sneak-Peaks:Einen GitHub Star kannst du auf dem Schwarzmarkt bereits für $0.10 kaufenDas Kaufen von GitHub Stars beeinflusst das organische Stars-Wachstum von Repositories innerhalb der ersten zwei Monate. Danach flacht es ab.Du willst mehr davon? Dann schalte jetzt ein.Bonus: GitHub als Social Network für Entwickler.Ein Dank an unsere Community-Mitglieder:Dario TignerSchepp Christian Schäfer Philipp WolframMoritz KaiserStefan BrandtSimon BrüggenMelanie PatrickMaxi KurzawskiStefan BetheTim GlabischHolger Große-PlankermannMirjam ZiselsbergerSimon LegnerUnsere aktuellen Werbepartner findest du auf https://engineeringkiosk.dev/partnersDas schnelle Feedback zur Episode:

AWS Podcast
#719: AWS News: Amazon Q Developer brings powerful new AI capabilities to GitLab Duo

AWS Podcast

Play Episode Listen Later May 5, 2025 26:12


Description: Learn how you can use the all new Amazon Q Developer integration with GitLab Duo to automate code generation and review, plus even more updates from AWS. 00:00:00 - Intro, 00:00:28 - SWE Holly Bench, 00:04:31 - Analytics, 00:06:49 - Application Integration, 00:07:14 - Artificial Intelligence, 00:08:53 - Amazon Bedrock Data Automation, 00:14:11 - AWS Health Omex, 00:14:21 - Compute, 00:16:37 - Contact Centers, 00:17:25 - Containers, 00:17:46 - Databases, 00:18:18 - Front end Web and Mobile, 00:18:59 - Management and Governance, 00:20:07 - Migration and Transfer, 00:20:17 - Networking and Content Delivery, 00:20:44 - Security Identity End Compliance, 00:23:24 - Serverless, 00:24:01 - Storage, 00:24:41 - Wrap up Shownotes: https://d29iemol7wxagg.cloudfront.net/719ExtendedShownotes.html

Tech Lead Journal
#215 - The Async First Playbook: Build Effective and Inclusive Teams with Less Meetings - Sumeet Moghe

Tech Lead Journal

Play Episode Listen Later May 5, 2025 63:03


(04:07) Brought to you by Swimm.io.⁠⁠⁠⁠⁠⁠Start modernizing your mainframe faster with ⁠Swimm⁠.Understand the what, why, and how of your mainframe code.Use AI to uncover critical code insights for seamless migration, refactoring, or system replacement.Are too many meetings killing your productivity and making your team less effective?Discover a new approach to work where meetings are no longer the default and deep work takes the center stage.In this episode, Sumeet Moghe, the author of the “Async-First Playbook”, shares actionable insights on building high-performing teams through async-first approach.Key topics discussed:The real reasons behind the return-to-office trend, and why remote and async work are far from deadHow async-first companies like GitLab, Shopify, and Automattic operate, and why it's not an all-or-nothing approachSurprising survey findings: Why most people want to work remotely, and how meetings and interruptions are damaging productivityThe async-first mindset: Making meetings the last resort, prioritizing written communication, and defining reasonable response lagsThe ConveRel Quadrants: A framework for deciding when to meet based on relationship strength and meeting purposeInclusion as a first-class responsibility: How async work empowers introverts, non-native speakers, parents, and diverse team membersThe “default to action” principle: How teams can move faster by embracing reversible decisions and reducing bottlenecksAsync-first leadership: Building trust, modeling the right behaviors, and creating systems that replace performative busynessPractical tips for better business writing and reading, plus how AI tools can supercharge your communicationThe future of work: Why top talent will continue to demand autonomy, and how AI and fractional work are shaping new collaboration modelsTune in to discover how to build high-performing, effective and inclusive teams with fewer meetings by adopting async-first.  Timestamps:(02:19) Career Turning Points(06:21) The Return to Office Trend(11:36) Companies Embracing Async-First(13:20) People's Working Style Preference(17:37) What is Async-First?(21:39) Team Handbook and Ways of Working(23:24) The ConveRel Quadrants(27:41) Inclusion as a First-Class Responsibility(32:14) Defaulting to Action(35:50) Async-First Leadership(40:38) Being Good in Written Communication(44:35) AI Usage in Written Communication(46:17) Time to Read and Reading Comprehension(51:14) The Future of Work(58:33) 3 Tech Lead Wisdom_____Sumeet Moghe's BioSumeet Gayathri Moghe is an Agile enthusiast, product manager, and design nerd at Thoughtworks. Sumeet has recently authored The Async-First Playbook. His practical recommendations for effective collaboration within remote and distributed teams stand for what he's learned from his colleagues, their successes, and their occasional misadventures.Sumeet kicked off “The async-first manifesto” , a set of principles he is co-creating with volunteer enthusiasts from around the world. He is also bringing async-work to life with stories of “Humans of remote work” .Follow Sumeet:LinkedIn – linkedin.com/in/sumeetmogheWebsite – asyncagile.org

The Future of Security Operations
GitLab's CISO Josh Lemos on the pros and cons of making security practices public

The Future of Security Operations

Play Episode Listen Later Apr 29, 2025 47:50


In this week's episode of The Future of Security Operations podcast, Thomas is joined by Josh Lemos, CISO at GitLab. Throughout his 15-year career in security, Josh has led teams at ServiceNow, Cylance, and Square. Known for his expertise in AI-driven security strategies, Josh is also a board member with HiddenLayer. He drives innovation at GitLab with a relentless focus on offensive security, identity management, and automation. In this episode: [02:05] His early career path from mechanic to electrical engineer to security leader [03:35] Josh's philosophy on hiring and mentoring, plus his tips for creating networking opportunities [05:30] How he applies technical foundations from his practitioner days to his work as CISO [07:40] Building product security at ServiceNow from the ground up [10:40] “Down and in” versus “up and out” - adopting a new leadership style as CISO at Square [12:17] Josh's experience as an early AI and security researcher at Cylance [16:15] What's surprised Josh most about the evolution of AI [18:50] Why Josh calls today's models “AI version 1.0” - and what he thinks it will take to upgrade to version 2.0 [22:45] The LLM security threats Josh is most worried about, as a board member with Hidden Layer [26:30] “Expressing exponential value” - what excited Josh most about becoming CISO at GitLab [27:45] Why GitLab prioritizes “intentional transparency” [32:45] How GitLab automates and orchestrates its Tier 1 and Tier 2 security processes [34:10] How GitLab's security team uses GitLab internally [37:35] The secret to recruiting, hiring, and managing a remote, global team [39:45] The importance of in-person collaboration for building trust and connection [41:45] Downsizing, bootstrapping, and problem-solving: Josh's predictions for the future of SecOps [46:10] Connect with Josh Where to find Josh: LinkedIn GitLab Where to find Thomas Kinsella: LinkedIn Tines Resources mentioned: GitLab's Security Handbook GitLab's GUARD Framework Netskope's security blog Jobs at GitLab Haroon Meer

Open at Intel
Evolving Software Deployment With GitLab

Open at Intel

Play Episode Listen Later Apr 24, 2025 20:55


In this episode, we sit down with Victor Nagy of GitLab to discuss his role and GitLab's initiatives. Victor details the transition from using a custom solution to integrating Flux for smoother application deployment. Victor also talks about GitLab's commitment to the open source community, contributions to Flux, and becoming a potential maintainer. We also touch on what makes developer tools great, developer experience, and developments in AI and security, highlighting the rapid pace of innovation in these fields. 00:00 Introduction and Guest Introduction 00:36 Key Open Source Projects: Flux and GitLab 01:17 Choosing Flux 03:42 Community Contributions and Future Plans 05:35 Deployment and Product Management 12:31 GitLab's Comprehensive Platform and Differentiators 18:38 Security and AI 19:43 Conclusion and Final Thoughts

SURFERS
Miniserie 13: ¿Adiós a la oficina?

SURFERS

Play Episode Listen Later Apr 15, 2025 23:05


Hoy, las oficinas físicas ya no definen dónde ni cómo generamos impacto. La nueva normalidad laboral está marcada por la autonomía, la confianza y la capacidad de tomar decisiones desde cualquier lugar del mundo. Pero este cambio también implica nuevos retos: desde cómo liderar equipos descentralizados hasta cómo protegernos en un entorno cada vez más digital y vulnerable.En este episodio, conversamos con dos líderes que están impulsando esta transformación desde diferentes perspectivas como Vincent Huguet, fundador y CEO de Malt, una de las plataformas más relevantes del trabajo freelance en Europa, con más de 500,000 freelancers registrados y 40,000 empresas clientes. Su visión: devolverle al talento la libertad de construir su carrera bajo sus propios términos. Y también Julián Garrido, CEO de MNEMO (hoy parte de Accenture) y referente en ciberseguridad, con una carrera que abarca firmas globales como McAfee y Cisco LATAM. Su experiencia ofrece una mirada estratégica sobre cómo proteger a las personas y organizaciones en esta nueva era del trabajo.Juntos, exploramos temas clave como: La evolución del trabajo independiente y las plataformas que están impulsando este modelo. Desde plataformas que impulsan la libertad profesional, hasta los desafíos invisibles de un entorno cada vez más digital e hiperconectado… Este episodio es una mirada honesta al presente y futuro del trabajo.. Conecta con nuestros invitados: : -Conecta con Roger Alarcón en: https://www.linkedin.com/in/roger-alarcon/ -Conecta con Vincent Huguet en: https://www.linkedin.com/in/vincenthuguet/ -Conecta con Julian Garrido en: https://www.linkedin.com/in/julian-garridomexico/ Escucha las entrevistas completas en: -Roger Alarcón : Short 2: GitLab y el trabajo remoto. https://open.spotify.com/episode/2qPLiCUcJpzn8SMM4IlOj4?si=_9iJ6aeMTpeqzHY56rf2cw  -Vincent Huguet : Backside 3: Del futuro del trabajo y contracultura. https://open.spotify.com/episode/4aGa0Gr2SGOZCGBPP4dLGx?si=AGSJpZXlQuWh1KH2RXSghQ -Julian Garrido: Backside 27: Ciberseguridad ¿Cuáles son las cosas que no puedes perder? https://open.spotify.com/episode/06eTHCRXLcA0RtbUrMLURv?si=89PvuWSSQSm2hNSTylSugQ ¡No te pierdas nuestras miniseries semanales  para conocer más acerca de todo eso que acerca a generar riqueza y abundancia en todos los ámbitos de tu vida! Suscríbete a nuestro canal en SpotifyVe la entrevista completa en YouTubeSigue negocioscool en todas nuestras redes Conecta con nosotros a través de LinkedIn

What the Dev?
303: How AI agents are transforming how software is built (with GitLab's Emilio Salvador)

What the Dev?

Play Episode Listen Later Apr 8, 2025 14:33


In this episode, Jenna Barron speaks with Emilio Salvador, vice president of strategy and developer relations at GitLab, about AI agents in software development. Key talking points include: How AI agents compare to traditional forms of automation in the software development processChallenges teams may face when implementing agentsThe impact AI agents will have on the role of software developers

Python Bytes
#427 Rise of the Python Lord

Python Bytes

Play Episode Listen Later Apr 7, 2025 36:31 Transcription Available


Topics covered in this episode: Git Town solves the problem that using the Git CLI correctly PEP 751 – A file format to record Python dependencies for installation reproducibility git-who and watchgha Share Python Scripts Like a Pro: uv and PEP 723 for Easy Deployment Extras Joke Watch on YouTube About the show Sponsored by Posit Package Manager: pythonbytes.fm/ppm Connect with the hosts Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky) Brian: @brianokken@fosstodon.org / @brianokken.bsky.social Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky) Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too. Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to our friends of the show list, we'll never share it. Michael #1: Git Town solves the problem that using the Git CLI correctly Git Town is a reusable implementation of Git workflows for common usage scenarios like contributing to a centralized code repository on platforms like GitHub, GitLab, or Gitea. Think of Git Town as your Bash scripts for Git, but fully engineered with rock-solid support for many use cases, edge cases, and error conditions. Keep using Git the way you do now, but with extra commands to create various branch types, keep them in sync, compress, review, and ship them efficiently. Basic workflow Commands to create, work on, and ship features. git town hack - create a new feature branch git town sync - update the current branch with all ongoing changes git town switch - switch between branches visually git town propose - propose to ship a branch git town ship - deliver a completed feature branch Additional workflow commands Commands to deal with edge cases. git town delete - delete a feature branch git town rename - rename a branch git town repo - view the Git repository in the browser Brian #2: PEP 751 – A file format to record Python dependencies for installation reproducibility Accepted From Brett Cannon “PEP 751 has been accepted! This means Python now has a lock file standard that can act as an export target for tools that can create some sort of lock file. And for some tools the format can act as their primary lock file format as well instead of some proprietary format.” File name: pylock.toml or at least something that starts with pylock and ends with .toml It's exciting to see the start of a standardized lock file Michael #3: git-who and watchgha git-who is a command-line tool for answering that eternal question: Who wrote this code?! Unlike git blame, which can tell you who wrote a line of code, git-who tells you the people responsible for entire components or subsystems in a codebase. You can think of git-who sort of like git blame but for file trees rather than individual files. And watchgha - Live display of current GitHub action runs by Ned Batchelder Brian #4: Share Python Scripts Like a Pro: uv and PEP 723 for Easy Deployment Dave Johnson Nice full tutorial discussing single file Python scripts using uv with external dependencies Starting with a script with dependencies. 
Using uv add --script [HTML_REMOVED] [HTML_REMOVED] to add a /// script block to the top Using uv run Adding #!/usr/bin/env -S uv run --script shebang Even some Windows advice Extras Brian: April 1 pranks done well BREAKING: Guido van Rossum Returns as Python's BDFL including Brett Cannon noted as “Famous Python Quotationist” Guido taking credit for “I came for the language but I stayed for the community” which was from Brett then Brett's title of “Famous Python Quotationist” is crossed out. Barry Warsaw asking Guido about releasing Python 2.8 Barry is the FLUFL, “Friendly Language Uncle For Life “ Mariatta can't get Guido to respond in chat until she addresses him as “my lord”. “… becoming one with whitespace.” “Indentation is Enlightenment” Upcoming new keyword: maybe Like “if” but more Pythonic as in Maybe: print("Python The Documentary - Coming This Summer!") I'm really hoping there is a documentary April 1 pranks done poorly Note: pytest-repeat works fine with Python 3.14, and never had any problems If you have to explain the joke, maybe it's not funny. The explanation pi, an irrational number, as in it cannot be expressed by a ratio of two integers, starts with 3.14159 and then keeps going, and never repeats. Python 3.14 is in alpha and people could be testing with it for packages Test & Code is doing a series on pytest plugins pytest-repeat is a pytest plugin, and it happened to not have any tests for 3.14 yet. Now the “joke”. I pretended that I had tried pytest-repeat with Python 3.14 and it didn't work. Test & Code: Python 3.14 won't repeat with pytest-repeat Thus, Python 3.14 won't repeat. Also I mentioned that there was no “rational” explanation. And pi is an irrational number. Michael: pysqlscribe v0.5.0 has the “parse create scripts” feature I suggested! Markdown follow up Prettier to format Markdown via Hugo Been using mdformat on some upcoming projects including the almost done Talk Python in Production book. Command I like is mdformat --number --wrap no ./ uv tool install --with is indeed the pipx inject equivalent, but requires multiple --with's: pipx inject mdformat mdformat-gfm mdformat-frontmatter mdformat-footnote mdformat-gfm-alerts uv tool install mdformat --with mdformat-gfm --with mdformat-frontmatter --with mdformat-footnote --with mdformat-gfm-alerts uv follow up From James Falcon As a fellow uv enthusiast, I was still holding out for a use case that uv hasn't solved. However, after last week's episode, you guys finally convinced me to switch over fully, so I figured I'd explain the use case and how I'm working around uv's limitations. I maintain a python library supported across multiple python versions and occasionally need to deal with bugs specific to a python version. Because of that, I have multiple virtualenvs for one project. E.g., mylib38 (for python 3.8), mylib313 (for python 3.13), etc. I don't want a bunch of .venv directories littering my project dir. For this, pyenv was fantastic. You could create the venv with pyenv virtualenv 3.13.2 mylib313, then either activate the venv with pyenv activate mylib313 and create a .python-version file containing mylib313 so I never had to manually activate the env I want to use by default on that project. uv doesn't have a great solution for this use case, but I switched to a workflow that works well enough for me: Define my own central location for venvs. 
For me that's ~/v Create venvs with something like uv venv --python 3.13 ~/v/mylib313 Add a simple function to my bashrc: `workon() { source ~/v/$1/bin/activate } so now I can run workon mylib313orworkon mylib38when I need to work in a specific environment. uv's.python-version` support works much differently than pyenv's, and that lack of support is my biggest frustration with this approach, but I am willing to live without it. Do you Firefox but not Zen? You can now make pure Firefox more like Zen's / Arc's layout. Joke: So here it will stay See the follow up thread too! Also: Guido as Lord Python via Nick Muoh

The Azure Podcast
Episode 516 - Digital Intelligence Architecture

The Azure Podcast

Play Episode Listen Later Apr 4, 2025


In this episode, Sujit D'Mello and Cynthia Kreng are joined by special guest Mike Becker, an Azure Architect at Microsoft, to discuss how various Azure services can be combined to create a complex solution. Sujit covers the latest enhancements in AKS, including Azure CNI, load balancer support, network isolated clusters, cost recommendations, and GPU driver options. Mike shares insights into a comprehensive Azure cloud solution for collecting and analyzing economic data and media feedback about companies, highlighting the use of Azure Data Factory, Databricks, Power BI, and OpenAI for sentiment analysis. The discussion delves into the architectural decisions, technical challenges, and practical applications of these technologies in delivering robust and secure solutions.   Media file: https://azpodcast.blob.core.windows.net/episodes/Episode516.mp3 YouTube: https://youtu.be/wG12eJymh54 Resources: ADF how it workshttps://learn.microsoft.com/en-us/azure/data-factory/introduction#how-does-it-work Azure Data Factory- Best Practiceshttps://learn.microsoft.com/en-us/answers/questions/1283307/azure-data-factory-best-practices Azure Data Bricks Medallion architecturehttps://learn.microsoft.com/en-us/azure/databricks/lakehouse/medallion Azure Data Bricks Best Practiceshttps://dzone.com/articles/azure-databricks-best-practices-for-a-developer Sentiment Analysis with Azure AI serviceshttps://learn.microsoft.com/en-us/azure/synapse-analytics/machine-learning/tutorial-cognitive-services-sentiment Power BI recommendationshttps://community.fabric.microsoft.com/t5/Desktop/Power-BI-Development-and-Best-Practices/m-p/4632985/highlight/true#M1386307 Improve Power BI model's performancehttps://powerbi.microsoft.com/en-us/blog/best-practice-rules-to-improve-your-models-performance/ GitLab best practices - if you cannot use Azure DevOpshttps://about.gitlab.com/topics/version-control/what-are-gitlab-flow-best-practices/

The Tech Blog Writer Podcast
3226: How Instabug Uses AI to Catch Bugs Before Users Do

The Tech Blog Writer Podcast

Play Episode Listen Later Mar 31, 2025 28:56


What if your mobile app could detect bugs, fix UI inconsistencies, and spot user frustration before a user ever reports it? In today's episode, recorded live at IGEL Now & Next, I sit down with Kenny Johnston, Chief Product Officer at Instabug, to explore how AI is reshaping the way developers build, test, and maintain mobile apps. Instabug is taking mobile observability to an entirely new level by developing what Kenny describes as “zero maintenance apps.” Powered by on-device AI models, their platform can now detect subtle UX breakdowns, visual design flaws, and even frustration signals that wouldn't normally trigger crash reports. Whether it's an unresponsive button, a layout shift, or a broken navigation path, Instabug flags the issue, often before a user ever notices. Kenny shares how Instabug's approach to AI is helping development teams move faster and smarter, particularly in high-stakes environments like retail and e-commerce where performance peaks during events like Black Friday or Valentine's Day. Through real-time crash reporting, automated UI analysis, and deep session insights, developers can spot and solve problems that would otherwise get lost in a backlog or surface in app store reviews. We also explore the unique pressures of mobile development. With no quick rollbacks and high user expectations, developers need tools tailored to the realities of app store approvals, device fragmentation, and version-specific bugs. Instabug's platform brings together observability, feedback, and issue reproduction in a way that simplifies the mobile stack and accelerates release cycles. Kenny draws on his experience at GitLab to reflect on the need to consolidate tools and workflows in mobile development. He offers valuable insights for product leaders and mobile engineers on how to navigate change, evolve their approach, and stay curious in the face of constant technical demands. So how can your team shift from reactive debugging to proactive experience design? And are you really seeing all the issues your users encounter or just the ones they report? It's time to find out.

Edge of NFT Podcast
Edge of Hot Topics: Hot Gemini 2.5, GitLab's AI Insights & Amazon's AI Startup Push

Edge of NFT Podcast

Play Episode Listen Later Mar 28, 2025 22:29


Join host Richard Hon on this episode of Hot Topics on the Edge of Show as we dive into the latest advancements in AI. Richard welcomes Addy Crezee, Founder & CEO /function1, FORKED, CREZEE and Victoria Neiman, Co-founder & COO, /function1 | FORKED | CREZEE to discuss Google's bold claims about Gemini 2.5, GitLab's latest insights on developers embracing AI, Amazon's Alexa Fund fueling AI startups, and a sneak peek at the upcoming Function One AI Conference in Dubai. Don't miss their take on the future of AI and its impact on industries across the board. Support us through our Sponsors! ☕

Tank Talks
Why Venture Capital Needs a Reboot—And How Villi Iltchev Is Doing It

Tank Talks

Play Episode Listen Later Mar 27, 2025 52:34


On this episode of Tank Talks, we welcome back Villi Iltchev, founder and managing partner of Category Ventures, for an unfiltered deep dive into the evolving venture capital landscape. From his early days at Salesforce Ventures to launching his solo $160M fund, Villi unpacks the seismic shifts happening in enterprise software, how AI is reshaping startup economics, and what today's founders need most from their investors.We get tactical about startup pricing models, founder-investor trust, and what it takes to build truly category-defining companies. Villi also shares what he learned from backing GitLab, why transparency builds long-term trust, and how he thinks about firm design as a solo GP.Whether you're an aspiring founder, current operator, or an emerging VC, this episode is a masterclass in strategic thinking and building with purpose.Inside the Mind of a Modern VC (00:01:00)* Villi's journey from tech banking to Salesforce Ventures* Why Salesforce's transformation into a platform company changed everything* The parallels between Salesforce and NVIDIA's ecosystem dominance* How being early at Salesforce shaped Villi's thesis around go-to-market and platform strategyScaling GitLab: Lessons from the Frontlines (00:15:00)* The inside story of GitLab's infamous database failure—and why live-streaming the crisis built trust* Why Villi pushed GitLab to sunset unscalable SKUs and simplify pricing* The power of bundling and setting an “aspirational” price point from day oneGoing Solo: Building Category Ventures (00:25:00)* Why Villi finally felt ready to start his own fund—and what changed* The biggest surprises (and reliefs) in raising as a solo GP* How LPs are getting more sophisticated and what they want from fund managers* Why venture needs a reset and what legacy firms are getting wrongThe New Rules of Early-Stage Investing (00:32:00)* Why founder/firm misalignment leads to orphaned startups* The real impact of mega-funds dabbling at seed and pre-seed* Why Category Ventures is built to be flexible—and fiercely focused on enterprise softwareAI, Startups & the Future of Enterprise (00:38:00)* Villi's hot take on AI-powered lean startups: “It's not the norm—and won't be.”* Why AI is a second-order unlock for vertical SaaS and back-office automation* The coming wave of software replacing the BPO industryLife, Adrenaline, and VC Energy (00:45:00)* What gets Villi's adrenaline pumping as a VC* Why endless internal meetings kill his vibe—and founder calls fuel him* How skiing and extreme adventure balance the chaos of ventureAs the venture landscape shifts under our feet, Villi Iltchev is proving that thoughtful investing, deep expertise, and founder-first empathy are more vital than ever. From GitLab board rooms to building Category VC, his journey is a blueprint for those looking to lead with clarity—and conviction.About Villi Iltchev:Villi Iltchev is the founder and managing partner of Category Ventures, a $160M early-stage venture firm focused exclusively on enterprise software. With a career spanning both operating and investing, Villi brings a rare blend of empathy and edge to the startups he backs—having sat on both sides of the table.He began his career in tech investment banking before transitioning into operating roles at companies like Hewlett-Packard, LifeLock, and Box. He later joined Salesforce Ventures at its inception, helping to build one of the most influential corporate venture arms in the world. 
During his time there, he led investments in category-defining companies like GitLab and HubSpot.Prior to launching Category Ventures, Villi was a partner at August Capital and Two Sigma Ventures, where he built a strong track record backing developer tools, infrastructure, and vertical SaaS startups. His investments are grounded in deep enterprise domain expertise, a keen sense for go-to-market strategy, and a relentless focus on founder empathy.A lifelong learner and backcountry skiing enthusiast, Villi draws creative energy from the outdoors and adrenaline-fueled adventures. He holds degrees in finance and philosophy and is driven by a singular belief: the best founders don't just build products—they redefine categories.Follow Villi Iltchev on LinkedIn: https://www.linkedin.com/in/villi04Visit the Category Ventures website: https://www.categoryvc.com/Follow Matt Cohen on LinkedIn: https://ca.linkedin.com/in/matt-cohen1Visit the Ripple Ventures website: https://www.rippleventures.com/ This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit tanktalks.substack.com

CFO Thought Leader
1083: Navigating the Go-To-Market Roadmap with Precision | Brian Robins, CFO, GitLab

CFO Thought Leader

Play Episode Listen Later Mar 26, 2025 46:35


It was a pivotal moment Brian Robbins tells us he'll never forget: stepping onto a makeshift stage to address some 400 employees just minutes before a key 8-K filing would publicly announce the potential sale of a major business unit. The room bristled with anxiety—people worried about their jobs and the future of the company. Robbins recalls that, instead of relying on scripted talking points, he spoke from the heart and vowed to keep everyone informed as events unfolded. By offering that openness, he reinforced his belief that finance isn't just about numbers, but about building trust and forging a clear path forward.Today, that spirit of transparent communication fuels Robbins's approach as CFO. Above all, he prioritizes strong relationships across every organizational function, from sales and marketing to product and engineering. This is why go-to-market execution, he explains, has become the centerpiece of his strategic leadership. Robbins embeds dedicated finance professionals alongside revenue-focused teams, helping to fine-tune territory splits, refine pricing, and calibrate product positioning based on real-time data. Now Listen

SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
SANS Stormcast Wednesday Mar 19th 2025: Python DLL Side Loading; Tomcast RCE Correction; SAML Roulette; Windows Shortcut 0-Day

SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast

Play Episode Listen Later Mar 19, 2025 7:18


Python Bot Delivered Through DLL Side-Loading A "normal", but vulnerable to DLL side-loading PDF reader may be used to launch additional exploit code https://isc.sans.edu/diary/Python%20Bot%20Delivered%20Through%20DLL%20Side-Loading/31778 Tomcat RCE Correction To exploit the Tomcat RCE I mentioned yesterday, two non-default configuration options must be selected by the victim. https://x.com/dkx02668274/status/1901893656316969308 SAML Roulette: The Hacker Always Wins This Portswigger blog explains in detail how to exploit the ruby-saml vulnerablity against GitLab. https://portswigger.net/research/saml-roulette-the-hacker-always-wins Windows Shortcut Zero Day Exploit Attackers are currently taking advantage of an unpatched vulnerability in how Windows displays Shortcut (.lnk file) details. Trendmicro explains how the attack works and provides PoC code. Microsoft is not planning to fix this issue https://www.trendmicro.com/en_us/research/25/c/windows-shortcut-zero-day-exploit.html

The Talent Tango
From Math to People Leadership

The Talent Tango

Play Episode Listen Later Mar 12, 2025 20:51


In this episode, we dive into Brittany Rohde's unconventional journey into the HR and people space. With a background in mathematics and a love for admin, Brittany has successfully navigated the world of people operations, compensation, and leadership. We explore how she transitioned from a math degree into HR, her philosophy on taking opportunities, and why she believes every people team should have an engineering component. Key Takeaways

PodRocket - A web development podcast from LogRocket
LLMs for web developers with Roy Derks

PodRocket - A web development podcast from LogRocket

Play Episode Listen Later Mar 6, 2025 28:45


Roy Derks, Developer Experience at IBM, talks about the integration of Large Language Models (LLMs) in web development. We explore practical applications such as building agents, automating QA testing, and the evolving role of AI frameworks in software development. Links https://www.linkedin.com/in/gethackteam https://www.youtube.com/@gethackteam https://x.com/gethackteam https://hackteam.io We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com (mailto:emily.kochanekketner@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understand where your users are struggling by trying it for free at [LogRocket.com]. Try LogRocket for free today.(https://logrocket.com/signup/?pdr) Special Guest: Roy Derks.

This Week in Startups
AI Agents & the Future of Work with LangChain's Harrison Chase | AI Basics with Google Cloud

This Week in Startups

Play Episode Listen Later Mar 4, 2025 19:58


In this episode: Jason sits down with Harrison Chase, CEO of LangChain, to explore how AI-powered agents are transforming the way startups operate. They discuss the shift from traditional entry-level roles to AI-driven automation, the importance of human-in-the-loop systems, and the future of AI-powered assistants in business. Harrison shares insights on how companies like Replit, Klarna, and GitLab are leveraging AI agents to streamline operations, plus a look ahead at what's next for AI-driven workflows. Brought to you in partnership with Google Cloud.*Timestamps:(0:00) Introduction to Startup Basics series & Importance of AI in startups(2:04) Partnership with Google Cloud & Introducing Harrison Chase from Langchain(4:38) Evolution of entry-level jobs & Examples of AI agents in startups(8:00) Challenges & Future of AI agents in startups(14:24) AI agents in collaborative spaces & Non-developers creating AI agents(18:40) Closing remarks and where to learn more*Uncover more valuable insights from AI leaders in Google Cloud's 'Future of AI: Perspectives for Startups' report. Discover what 23 AI industry leaders think about the future of AI—and how it impacts your business. Read their perspectives here: https://goo.gle/futureofai*Check out all of the Startup Basics episodes here: https://thisweekinstartups.com/basicsCheck out Google Cloud: https://cloud.google.com/Check out LangChain: https://www.langchain.com/*Follow Harrison:LinkedIn: https://www.linkedin.com/in/harrison-chase-961287118/X: https://x.com/hwchase17*Follow Jason:X: https://twitter.com/JasonLinkedIn: https://www.linkedin.com/in/jasoncalacanis*Follow TWiST:Twitter: https://twitter.com/TWiStartupsYouTube: https://www.youtube.com/thisweekinInstagram: https://www.instagram.com/thisweekinstartupsTikTok: https://www.tiktok.com/@thisweekinstartupsSubstack: https://twistartups.substack.com

Code for Thought
[EN] ByteSized RSE: Project Management with GitHub

Code for Thought

Play Episode Listen Later Mar 4, 2025 28:21


English Edition: How can repository services like GitHub or GitLab help you manage your project. Listen to my conversation with three guests, Gemma Turon (Ersilia), Ben Clifford (Parsl) and Mike Simpson (Uni Newcastle) how they use GitHub PM tools effectively in their work.Links:https://ersilia.io Ersilia project - Gemma Turonhttps://github.com/ersilia-os GitHub pages for Ersiliahttps://parsl-project.org - Parsl project - Ben Cliffordhttps://github.com/Parsl/parsl - Parsl on GitHubhttps://www.youtube.com/watch?v=uQYQ_F8auEQ - Mike Simpson's talk at RSE Con 2023 'Colouring Cities: from prototype to Platform'https://www.youtube.com/watch?v=vP9k8mAXod4 - a session on Project Management and people at RSE Con 2024 - also with Mikehttps://docs.github.com/en/issues/planning-and-tracking-with-projects/learning-about-projects/best-practices-for-projects GitHub guidelines for using Projects (GitHub tool)https://www.software.ac.uk/blog/task-management-humans-self-care blog post from MikeByteSized RSE is supported by the Universe-HPC project. Get in touchThank you for listening! Merci de votre écoute! Vielen Dank für´s Zuhören! Contact Details/ Coordonnées / Kontakt: Email mailto:peter@code4thought.org UK RSE Slack (ukrse.slack.com): @code4thought or @piddie US RSE Slack (usrse.slack.com): @Peter Schmidt Mastodon: https://fosstodon.org/@code4thought or @code4thought@fosstodon.org Bluesky: https://bsky.app/profile/code4thought.bsky.social LinkedIn: https://www.linkedin.com/in/pweschmidt/ (personal Profile)LinkedIn: https://www.linkedin.com/company/codeforthought/ (Code for Thought Profile) This podcast is licensed under the Creative Commons Licence: https://creativecommons.org/licenses/by-sa/4.0/

LINUX Unplugged
604: One Week Left

LINUX Unplugged

Play Episode Listen Later Mar 3, 2025 61:10 Transcription Available


We're pre-gaming two of the biggest Linux events of the year. Engineers, organizers, and surprise guests are dropping by to give us the scoop before it all begins.Sponsored By:Tailscale: Tailscale is a programmable networking software that is private and secure by default - get it free on up to 100 devices! 1Password Extended Access Management: 1Password Extended Access Management is a device trust solution for companies with Okta, and they ensure that if a device isn't trusted and secure, it can't log into your cloud apps. River: River is the most trusted place in the U.S. for individuals and businesses to buy, sell, send, and receive Bitcoin. Support LINUX UnpluggedLinks:

Linux User Space
Episode 5:07: Kernel Overload

Linux User Space

Play Episode Listen Later Feb 24, 2025 68:30


Coming up in this episode * We load up your tech toolbox * We settle the Kernel debate * and Thou shall not package OBS... that way. 0:00 Cold Open 1:39 We load up your tech toolbox 21:48 We settle the kernel debate 49:43 Thou shall not package OBS... that way. 1:03:32 Next Time 1:07:26 Stinger The Video Version (https://youtu.be/15zr84iGDHo) https://youtu.be/15zr84iGDHo IT-Tools (https://it-tools.tech/) Can be self hosted (https://github.com/CorentinTh/it-tools) Do you use IT-Tools? If so, which ones? If you don't, would you?

Create Like the Greats
Mastering Customer Success Stories with Anthropic

Create Like the Greats

Play Episode Listen Later Feb 21, 2025 10:04


In this episode of Create Like The Greats, we break down how Anthropic, a leading AI company behind Claude, is revolutionizing the use of customer success stories to drive measurable business results. Despite fierce competition from OpenAI and Google, Anthropic differentiates itself through clarity, simplicity, and strategic storytelling. Learn how they use short yet impactful case studies to elevate their brand, enhance credibility, and attract the right audience. Inspired by research from Ethan Crump at Foundation Labs, we take a deep dive into how Anthropic structures its case studies, why they work, and how you can apply these principles to your own business. Key Takeaways & Insights 1. The Power of Customer Success Stories Anthropic has ramped up investment in customer case studies, adding 67 new pages to their website. These stories increase trust and credibility, addressing the fragmented B2B buyer's journey. Featuring well-known clients, such as DuckDuckGo, GitLab, Brave, BCG, and Assembly AI, boosts authority in their industry. 2. The Business Impact of Case Studies The dedicated case studies subfolder attracts over 60,000 organic monthly visitors, generating what would cost $25,000 in paid traffic. SEO performance: 1,600 ranking keywords 4,600 backlinks from 231 domains 3. Anthropic's Winning Formula for Case Studies Short and Direct: Each case study stays under 1,000 words, challenging traditional long-form SEO norms. Clear Structure: Follows a problem → solution → outcome format. Intent Matching for SEO: Content is designed to match search intent, ensuring customers find relevant solutions quickly. Focus on Business Value: Emphasizes results (e.g., 30% improvement in user satisfaction, 80% cost reduction). Uses client and company expert quotes to strengthen credibility. Avoids overly technical details, making it accessible to decision-makers at all levels, including CEOs and finance executives. 4. Applying These Lessons to Your Business Prioritize clarity and simplicity when crafting customer success stories. Optimize stories for search intent so they're discoverable and relevant. Showcase tangible metrics and real results in your case studies. Structure content for easy readability while keeping it strategic and insightful.  Resources How Anthropic Drives 60K+ in Organic Traffic — With One Simple Strategy —

Scrum Master Toolbox Podcast
Substack Week: The Shared Ownership Challenge, Understanding Clear Accountability in Engineering Teams | Rafa Páez

Scrum Master Toolbox Podcast

Play Episode Listen Later Feb 20, 2025 32:20


Substack Week: The Shared Ownership Challenge, Understanding Clear Accountability in Engineering Teams With Rafa Páez Welcome to our Substack Week, where we interview thought leaders who publish newsletters on Substack to help you find inspiring voices that drive our community forward. In this episode, we explore the concept of shared ownership and its pitfalls with Rafa Páez, an experienced engineering leader with insights on creating clear accountability in teams. The Pitfalls of Shared Ownership In engineering teams, shared ownership often manifests as ambiguity in responsibility and accountability. Rafa shares a personal experience where assigning two engineers to lead an initiative resulted in nothing getting done, as each assumed the other would take action. This phenomenon highlights how shared ownership without clear accountability can lead to missed deadlines, poor quality deliverables, and team conflicts. "It might not be my fault because I thought the other person was available, I thought the other person had more time to actually work on that initiative." Understanding the Bystander Effect The bystander effect, a psychological phenomenon first identified by social psychologists, explains why people are less likely to take action when others are present. In a team setting, this manifests as members assuming someone else will take responsibility, leading to collective inaction. This effect can significantly impact team productivity and project outcomes. "Because there are more people there, someone thinks that someone else will take care of that thing, whether it's a project, initiative, or any other action." The DRI Framework: Creating Clear Ownership The Directly Responsible Individual (DRI) concept, popularized by Gitlab and Apple, addresses the accountability gap by ensuring one person is clearly responsible for each significant initiative. This framework emerged after a failed project launch where no clear ownership led to quality issues. The DRI approach creates clear lines of responsibility while maintaining collaborative team dynamics. "You can have multiple DRIs for different aspects, but at the end, it needs to be one responsible for the overall project." Implementing DRI Successfully For leaders implementing the DRI framework, several key considerations are crucial for success. DRIs should be assigned thoughtfully based on skills and experience, with senior team members often better suited for these roles. The framework must be supported by a culture that empowers DRIs to make decisions while maintaining team collaboration. "DRIs need to be empowered to make decisions. If they are not empowered to make decisions, this role is not going to work because they're going to feel frustrated." Avoiding Common Anti-patterns When implementing the DRI framework, leaders should be aware of potential anti-patterns that can emerge. These include DRIs becoming bottlenecks, erosion of team collaboration, and overuse of the framework for minor tasks. Success requires finding the right balance and ensuring the framework enhances rather than hinders team dynamics. "Another issue or anti-pattern is the erosion of collaboration - some people might get the wrong concept about DRIs and say 'I don't need to collaborate anymore.'" Building a Culture of Accountability Creating a successful culture of accountability requires clear communication about the DRI role and its implications. 
Leaders must ensure DRIs are supported while maintaining team collaboration and avoiding the framework becoming overly bureaucratic. The focus should be on enabling effective decision-making and clear ownership while preserving team dynamics. "Consider the skills when assigning DRIs, support people in this role, and remember that DRI is an organizational agnostic framework that adapts to the organizations we are within." Resources For Further Study The Gitlab handbook article about the DRI concept The book: Extreme Ownership by Jocko Willink The Engineering Leader newsletter by Rafa Páez   [The Scrum Master Toolbox Podcast Recommends]

Stock Market Today With IBD
Stocks Bounce In Inside Day; Axon Enterprise, GitLab, Freshpet In Focus

Stock Market Today With IBD

Play Episode Listen Later Feb 10, 2025 19:03


Alissa Coram and Ed Carson analyze Monday's market action and discuss key stocks to watch on Stock Market Today.

The Government Huddle with Brian Chidester
168: The One with the GitLab Federal CTO

The Government Huddle with Brian Chidester

Play Episode Listen Later Feb 7, 2025 34:44


Joel Krooswyk, Federal Chief Technology Officer at GitLab joins the show to discuss some of his predictions for the Federal government this year including his thoughts on the proliferation of Software Bill of Materials (SBOMs), AI in software development, and the renewed focus that agencies will place on cloud technology. We also discuss his thoughts on compliance as a strategic enabler and why DevSecOps will continue to grow in popularity. 

The Peel
Illegal Immigrant to $160m Fund 1: Inside Villi Iltchev's Journey Building Category Ventures

The Peel

Play Episode Listen Later Feb 6, 2025 109:19


Villi Iltchev is the founder of Category Ventures, where he invests early in enterprise software startups. And he's done it longer than almost anyone, building Salesforce's corporate venture arm and investing early in companies like Airtable, Zapier, GitLab, Remote, Hubspot, Gusto, and Box. Fresh off raising his $160m Fund 1, we get into the opportunity he saw to start Category, and how San Francisco and Silicon Valley have changed over the past 30 years. He also shares his story growing up as an illegal immigrant in Greece, moving to the US by himself in high school, the biggest mistake of his career, advice for founders selling their company, why unit economics and profitability always matters, how developer tools went from terrible to amazing businesses, the mistake that almost killed GitLab after he invested, and why you should raise your seed round from a seed fund. Timestamps: (00:00) Intro (03:22) Illegally immigrating from Bulgaria to Greece (05:14) Moving to the US by himself in high school (13:15) Moving to SF in the Dot Com Bubble (15:49) How SF changed over the last 25 years (22:27) Why HP fell from the top of Silicon Valley (25:36) Building Salesforce's corporate VC arm (30:29) Why SaaS was so transformative (34:35) Angel investing in Airtable (39:52) The biggest mistake of his career (42:13) Why unit economics always matter (47:20) Biggest mistake when selling a tech company (49:00) Almost starting a software PE firm and landing in VC (55:45) Lessons from August Capital + Evolution of venture (59:22) Early days of dev tools + Investing in GitLab (1:09:50) Why being contrarian is dumb (1:11:45) How GitLab almost died and emerged stronger (1:16:48) Villi's journey to starting Category (1:25:22) Category's thesis (1:30:48) Why startups always come in batches (1:31:57) The importance of track record in venture (1:35:32) Deciding a $160m fund size (1:39:26) Why you should raise seed rounds from seed firms (1:43:40) What Villi looks for in a startup Referenced Category VC: https://www.categoryvc.com/ Category's $160m Fund 1: https://www.forbes.com/sites/alexkonrad/2024/12/17/villi-iltchev-raises-160-million-debut-fund-category/ Aaron Levie (Box) on The Peel: https://youtu.be/cLn_tqPvNf4 GitLab's Recovery Stream: https://www.youtube.com/watch?v=v0TRHLvYGE0 Guy Podjarny (Snyk) on The Peel: https://youtu.be/BzKlZ_v4uCw Why SaaS won't consolidate: https://medium.com/@villispeaks/why-saas-consolidation-is-not-happening-2b9b722e0250 Follow Villi Twitter: https://x.com/villi LinkedIn: https://www.linkedin.com/in/villi04/ Follow Turner Twitter: https://twitter.com/TurnerNovak LinkedIn: https://www.linkedin.com/in/turnernovak Subscribe to my newsletter to get every episode + the transcript in your inbox every week: https://www.thespl.it/

Augmented - the industry 4.0 podcast
Scaling Open Source in Manufacturing with FlowFuse's ZJ van de Weg

Augmented - the industry 4.0 podcast

Play Episode Listen Later Feb 5, 2025 26:57


This week's guest is ZJ van de Weg (https://www.linkedin.com/in/zegerjan/), CEO of FlowFuse. ZJ shares his journey from an intern at GitLab to now leading FlowFuse, how open-source technology is transforming industrial operations, and why Node-RED has become the go-to platform for low-code manufacturing connectivity. He also takes a deep dive into the challenges of scaling open source solutions in enterprise environments, the value of an ‘open-core' business model, and the future of IT/OT collaboration. Augmented Ops is a podcast for industrial leaders, citizen developers, shop floor operators, and anyone else that cares about what the future of frontline operations will look like across industries. This show is presented by Tulip (https://tulip.co/), the Frontline Operations Platform. You can find more from us at Tulip.co/podcast (https://tulip.co/podcast) or by following the show on LinkedIn (https://www.linkedin.com/company/augmentedpod/). Special Guest: ZJ van de Weg.

Open Source Startup Podcast
E164: Taking on Auth0 with Open Source Zitadel

Open Source Startup Podcast

Play Episode Listen Later Feb 3, 2025 34:02


Florian Forster is Co-Founder & CEO of Zitadel, the cloud security platform aiming to build the future of identity and access management. Their open source project, also called zitadel, provides identity infrastructure and has 10K stars on GitHub. In this episode, we dig into: The benefits of having an open source auth vendor Authentication vs. authorization Building the "GitLab for identity" Why customization matters for an auth product Demand for self-hosting options for auth Appealing to developers and security teams

GOTO - Today, Tomorrow and the Future
"Looks Good to Me" Constructive Code Reviews • Adrienne Braganza Tacke & Paul Slaughter

GOTO - Today, Tomorrow and the Future

Play Episode Listen Later Jan 31, 2025 52:54 Transcription Available


This interview was recorded for the GOTO Book Club.http://gotopia.tech/bookclubRead the full transcription of the interview hereAdrienne Braganza Tacke - Senior Developer Advocate at Cisco & Author of "Looks Good To Me: Constructive Code Reviews"Paul Slaughter - Staff Fullstack Engineer at GitLab & Creator of Conventional CommentsRESOURCESAdriennehttps://x.com/AdrienneTackehttps://github.com/AdrienneTackehttps://www.linkedin.com/in/adriennetackehttps://www.instagram.com/adriennetackehttps://www.adrienne.iohttps://blog.adrienne.ioPaulhttps://x.com/souldzinhttps://github.com/souldzinhttps://gitlab.com/pslaughterhttps://gitlab.com/souldzinhttps://souldzin.comDESCRIPTIONPaul Slaughter and Adrienne Braganza Tacke delve into the critical role of communication in code reviews, emphasizing how soft skills can significantly enhance the engineering process. Adrienne, drawing insights from her upcoming book, explores the expectations for software engineers in code reviews, offers practical tips for improving communication, and shares her unique perspective on the parallels between writing and reviewing code.Their conversation highlights the importance of fostering a positive feedback culture and leading by example to create a collaborative environment within teams.RECOMMENDED BOOKSAdrienne Braganza Tacke • "Looks Good to Me": Constructive Code ReviewsAdrienne Braganza Tacke • Coding for KidsGrace Huang • Code Reviews in TechMartin Fowler • RefactoringMatthew Skelton & Manuel Pais • Team TopologiesDave Thomas & Andy Hunt • The Pragmatic ProgrammerBlueskyTwitterInstagramLinkedInFacebookCHANNEL MEMBERSHIP BONUSJoin this channel to get early access to videos & other perks:https://www.youtube.com/channel/UCs_tLP3AiwYKwdUHpltJPuA/joinLooking for a unique learning experience?Attend the next GOTO conference near you! Get your ticket: gotopia.techSUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted daily!

Cyber Security Headlines
Cybersecurity News: Tenable acquires Vulcan Cyber, Chinese and Iranian hackers are using U.S. AI, US Navy bans use of DeepSeek

Cyber Security Headlines

Play Episode Listen Later Jan 30, 2025 7:35


Tenable acquiring Israel's Vulcan Cyber in $150 million deal Tenable, a Nasdaq-listed cybersecurity company valued at $5.3 billion, is acquiring Israeli cybersecurity firm Vulcan Cyber for approximately $150 million, with the deal expected to close in Q1 of this year. The acquisition aims to enhance Tenable's security exposure management platform by integrating Vulcan Cyber's capabilities, unifying security visibility and risk mitigation. Vulcan Cyber was founded in 2018 and has raised $55 million and employs 100 people, though it is unclear how many will remain post-acquisition. (CalCalistech) Chinese and Iranian Hackers Are Using U.S. AI Products to Bolster Cyberattacks Hackers linked to China, Iran, Russia, and North Korea are using AI, including Google's Gemini chatbot, to enhance cyberattacks, according to U.S. officials and Google security research. These groups utilize AI for tasks like writing malicious code, identifying vulnerabilities, and researching targets rather than developing advanced hacking techniques. Meanwhile, China's DeepSeek AI has raised global concerns about Beijing's progress in the AI arms race, adding uncertainty to the technology's impact on security and warfare. (Wall Street Journal)   U.S. Navy bans use of DeepSeek due to ‘security and ethical concerns' The U.S. Navy has warned its members to avoid using China's DeepSeek AI due to security and ethical concerns, instructing them not to use it for work or personal tasks. DeepSeek's newly released AI model, R1, has drawn global attention for its capabilities, sparking concerns over China's AI advancements and impacting tech markets, with AI chipmakers like Nvidia and Broadcom losing $800 billion in market value. The warning comes amid growing U.S.-China AI competition, with figures like Trump and industry leaders emphasizing the urgency of maintaining American leadership in AI. (CNBC) South Africa's government-run weather service knocked offline by cyberattack A cyberattack has taken the South African Weather Service (SAWS) offline, disrupting critical services for aviation, marine, and agriculture, while forcing SAWS to share weather updates via social media. The breach, the second attempted attack in two days, has also impacted regional allies like Mozambique and Zambia, with efforts underway to restore systems. While no ransomware group has claimed responsibility, South Africa has faced a wave of cyberattacks in recent years, targeting public institutions, including its defense department, pension organization, and national lab service. (The Record) FBI seizes major cybercrime forums in coordinated domain takedown The FBI and international law enforcement have seized multiple cybercrime-linked platforms, including Cracked[.]io, Nulled[.]to, SellIX, and StarkRDP, in a major crackdown on digital marketplaces for stolen credentials and hacking tools. These sites have been criticized for enabling password theft, software piracy, and credential-stuffing attacks, but now redirect to FBI-controlled servers, effectively shutting them down. The operation, involving agencies from Australia, France, Germany, and others, marks another step in global efforts to dismantle cybercriminal networks.   (CyberScoop) North Koreans clone open source projects to plant backdoors, steal credentials North Korea's Lazarus Group carried out a large-scale supply chain attack, dubbed Phantom Circuit, compromising hundreds of victims by embedding backdoors in cloned open-source software, according to SecurityScorecard's latest report. 
The campaign began in late 2024 and targeted cryptocurrency developers and tech professionals by distributing malware-laced repositories on platforms like GitLab. Stolen data included credentials, authentication tokens, and system information, with the attackers using obfuscation techniques and VPNs.  (The Register)   Oasis Security Research Team Discovers Microsoft Azure MFA Bypass Oasis Security discovered a critical vulnerability in Microsoft's Multi-Factor Authentication (MFA), allowing attackers to bypass it and gain unauthorized access to Office 365 accounts, including Outlook, OneDrive, and Azure. The flaw exploited session creation and TOTP code tolerance, enabling attackers to brute-force MFA codes undetected within 70 minutes. Oasis reported the issue to Microsoft, which implemented a stricter rate limit, permanently fixing the vulnerability by October 2024. The research highlights the importance of strong MFA implementations and improved alerting mechanisms for failed second-factor attempts. (Cloud Security Alliance) SLAP and FLOP security flaws affect all current Apple devices, and many older ones Security researchers from The Georgia Institute of Technology have discovered two vulnerabilities, SLAP and FLOP, affecting all iPhones, iPads, and Macs with A15 and M2 chips or later. These flaws exploit speculative execution to access data from open web tabs, with SLAP affecting Safari and FLOP impacting both Safari and Chrome. While there's no evidence of exploitation in the wild, Apple has been working on fixes since mid-2024, stating there is no immediate risk to users. Until a patch is released, the best precaution is to be cautious of the websites you visit. (9to5Mac)   Security faces many problems. Asset inventory, patching automation, config management, and device administration are all perennial challenges. But how many of them are related to security specifically? That what we dig into on our latest episode of Defense in Depth. Look for “The Hardest Problems in Security Aren't “Security Problems”” wherever you get your podcasts. Huge thanks to our sponsor, Conveyor Ever wish you had a teammate that could handle the most annoying parts of customer security reviews? You know, chasing down SMEs for answers, updating systems, coordinating across teams—all the grunt work nobody wants to do. Plus, having to finish the dang questionnaire itself. Well. That teammate exists—Conveyor just launched Sue, the first AI Agent for Customer Trust. Sue really is the dream teammate. She never misses a deadline, answers every customer request from sales, completes every questionnaire and knocks out all the coordination in-between.  Sue handles it all so you don't have to. Learn more at www.conveyor.com.

Traction
5 Lessons From Selling to Millions of Developers

Traction

Play Episode Listen Later Jan 29, 2025 32:13


Think marketing to developers is all about hoodies and hackathons? Think again. As the developer-led economy is poised to grow to a TRILLION dollars, companies that offer products for developers must refocus to a business-to-developer (B2D) model as developers become not only users of products, but key purchase influencers. In this episode, Nnamdi Iregbulem, Partner at Lightspeed Venture Partners, Joyce Lin, Head of Developer Relations at Viam, and Caroline Lewko, CEO of Revere Communications, reveal actionable insights for building and scaling developer relations programs that drive adoption, loyalty, and growth in a developer-led economy. Specifically, you'll learn:How to get developers to adopt and build on your platform – from APIs and SDKs to advanced ML and DevOps tools.What it takes to deliver an exceptional Developer Experience (DX) – aligning marketing and product teams to meet the unique needs of developers.Key ingredients of a winning Developer Relations program – from onboarding to advocacy and retention.How to stand out through community engagement – turning developers into loyal evangelists for your platform.Resources Mentioned:Nnamdi Iregbulem -https://www.linkedin.com/in/nnamdiiregbulem/Joyce Lin -https://www.linkedin.com/in/joyce-lin/Caroline Lewko -https://www.linkedin.com/in/carolinelewko/Lightspeed Venture Partners | LinkedIn -https://www.linkedin.com/company/lightspeed-venture-partners/Lightspeed Venture Partners | Website -https://lsvp.com/Viam | LinkedIn -https://www.linkedin.com/company/viaminc/Viam | Website -https://www.viam.com/Revere Communications - https://www.reverecommunications.com/Tyler Jewell's research on developer-led markets -https://tylerjewell.substack.com/p/the-developer-led-landscape-20-08-28Stack Overflow Developer Survey -https://survey.stackoverflow.com/2024/GitLab's “Everyone Can Contribute” philosophy -https://gitlab.com/“Developer Relations: How to Build and Grow a Successful DevRel Program” by Caroline Lewko and James Parton - https://www.devrel.agency/bookThis episode is brought to you by:Leverage community-led growth to skyrocket your business. “From Grassroots to Greatness” by author Lloyed Lobo will help you master 13 game-changing rules from some of the most iconic brands in the world — like Apple, Atlassian, CrossFit, Harley-Davidson, HubSpot, Red Bull and many more — to attract superfans of your own that will propel you to new heights. Grab your copy today at FromGrassrootsToGreatness.com.Each year the US and Canadian governments provide more than $20 billion in R&D tax credits and innovation incentives to fund businesses. But the application process is cumbersome, prone to costly audits, and receiving the money can take as long as 16 months. Boast automates this process, enabling companies to get more money faster without the paperwork and audit risk. We don't get paid until you do! Find out if you qualify today at https://Boast.AI.Launch Academy is one of the top global tech hubs for international entrepreneurs and a designated organization for Canada's Startup Visa. Since 2012, Launch has worked with more than 6,000 entrepreneurs from over 100 countries, of which 300 have grown their startups to seed and Series A stage and raised over $2 billion in funding. To learn more about Launch's programs or the Canadian Startup Visa, visit https://LaunchAcademy.ca.Content Allies helps B2B companies build revenue-generating podcasts. We recommend them to any B2B company that is looking to launch or streamline its podcast production. 
Learn more at https://contentallies.com.#DeveloperRelations #BusinessToDeveloper #B2D #Product #Marketing #Innovation #Startup #GenerativeAI #AI

Product-Led Podcast
6 Steps To Launch A PLG Motion

Product-Led Podcast

Play Episode Listen Later Jan 28, 2025 54:31


Hila Qu is the director of growth at GitLab, a developer platform. GitLab offers a powerful platform that enables developers, engineers, and teams to build, release, and deploy very efficiently. The company started as an open-source product, but it became a PLG business as it has the criteria to be one. Due to its large free user base, GitLab was able to launch a PLG motion. Data on how users utilize the platform also allowed them to understand which features they use and what behaviors indicate that they are likely to convert to potential PQ. Hila provides details on how she created all these from scratch to grow GitLab and gives the six steps to launch a PLG motion. Show Notes [00:47] What GitLab is and how they started the PLG motion [08:44] How existing sales motion works before getting into the PLG side of things [17:18] Aligning on the customer journey and funnel design [31:00] Organize the right teams the right way [36:07] Recommendations for infrastructure and tool stack dependent on company size [40:45] How to identify the highest ROI focus area for PLG efforts [46:28] Anticipating common challenges and building the PLG culture [49:56] Hila's advice for starting a PLG motion  About Hila Qu Hila is a uniquely talented growth leader. Prior to her current role at GitLab, Hila worked at Acorns, a financial technology, and services company that specializes in micro-investing and robo-investing. At Acorns, she founded and developed the growth team into a 20+ member team, drove the customer base from 1M to over 4M, and launched two new product lines. Now at GitLab, she leads their growth product team that has since generated over $1.5M incremental ARR from growth product initiatives & experiments in just the first six months. Needless to say, Hila lives every day in the world of growth, retention, analytics, and products (some nights too). Link GitLab Profile Hila's Linkedin

Federal Tech Podcast: Listen and learn how successful companies get federal contracts

Connect to John Gilroy on LinkedIn   https://www.linkedin.com/in/john-gilroy/ Want to listen to other episodes? www.Federaltechpodcast.com A recent study showed that the federal government has identified 1700 use cases for Artificial Intelligence. Today, we examine some challenges and solutions for unlocking the power of AI represented in these examples.  Our guest, Joel Krooswyk from GitLab, examines Software Bills of Material, repatriation, and what efficiency might look like in the future. SBOM. For years, software developers have recommended using a Software Bill of Material. Today, its value has become so apparent that it is becoming mandatory. During the interview, Joel Krooswyk discusses the security benefits of mandating an SBOM policy for all federal software development. Fifteen years ago, Vivek Kundra coined the phrase “Cloud First.”  It took a while, but cloud adoption is pervasive by the federal government.  However, with this adoption, we have seen examples where cloud service providers may over-promise and under delivery. The interview provides guidelines for transitioning from the cloud back to the premises, which is increasingly called “repatriation.” Software development in the future will make compliance partner with DevSecOps in an automated process. This will reduce maintenance costs and provide real-time reporting.  Intelligent automation will be able to validate each step of the process.

The SaaS Revolution Show
How AI is Revolutionizing Productivity Measurement: Insights from Ashley Kramer, GitLab's Interim CRO, CSO, and CMO

The SaaS Revolution Show

Play Episode Listen Later Jan 23, 2025 30:25


Live from the SaaStock USA 2024 Scale Stage, Ashley Kramer, GitLab's Chief Strategy and Marketing Officer, sits down with AJ Eckstein, Founder & Creator at Creator Match and Fast Company. Together, they explore the imminent shift in how companies measure AI's impact on efficiency, questioning whether the productivity gains truly justify the risks and costs associated with AI adoption. Ashley argues that forward-thinking companies must move beyond traditional output metrics and focus on those that reflect real business value—such as enhanced software quality, faster time-to-market, consistent delivery, and, most importantly, improved customer satisfaction.Check out the other ways SaaStock is serving SaaS founders

The Bootstrapped Founder
371: Brian Sierakowski — Mastering Product Communication

The Bootstrapped Founder

Play Episode Listen Later Jan 22, 2025 65:11 Transcription Available


Brian Sierakowski (@bsierakowski) has been busy over the last year: he started working on ChangeBot and TRMNL, and both projects are taking off.If you ever wondered what a good changelog looks like or why you might need one, this episode is for you. This episode is sponsored by Paddle.com — if you're looking for a payment platform that works for you so you can focus on what matters, check them out.The blog post: https://thebootstrappedfounder.com/brian-sierakowski-mastering-product-communication/The podcast episode: https://tbf.fm/episodes/317-brian-sierakowski-mastering-product-communicationCheck out Podscan to get alerts when you're mentioned on podcasts: https://podscan.fmSend me a voicemail on Podline: https://podline.fm/arvidYou'll find my weekly article on my blog: https://thebootstrappedfounder.comPodcast: https://thebootstrappedfounder.com/podcastNewsletter: https://thebootstrappedfounder.com/newsletterMy book Zero to Sold: https://zerotosold.com/My book The Embedded Entrepreneur: https://embeddedentrepreneur.com/My course Find Your Following: https://findyourfollowing.comHere are a few tools I use. Using my affiliate links will support my work at no additional cost to you.- Notion (which I use to organize, write, coordinate, and archive my podcast + newsletter): https://affiliate.notion.so/465mv1536drx- Riverside.fm (that's what I recorded this episode with): https://riverside.fm/?via=arvid- TweetHunter (for speedy scheduling and writing Tweets): http://tweethunter.io/?via=arvid- HypeFury (for massive Twitter analytics and scheduling): https://hypefury.com/?via=arvid60- AudioPen (for taking voice notes and getting amazing summaries): https://audiopen.ai/?aff=PXErZ- Descript (for word-based video editing, subtitles, and clips): https://www.descript.com/?lmref=3cf39Q- ConvertKit (for email lists, newsletters, even finding sponsors): https://convertkit.com?lmref=bN9CZw

Lenny's Podcast: Product | Growth | Career
Behind the founder: Drew Houston (Dropbox)

Lenny's Podcast: Product | Growth | Career

Play Episode Listen Later Jan 9, 2025 97:36


Drew Houston is the co-founder and CEO of Dropbox. Under his leadership, Dropbox has grown from a simple idea to a service used by over 700 million registered users globally, with a valuation exceeding $9 billion. Drew has led Dropbox through multiple phases, from explosive viral growth, to battling all the tech giants at once, to reinventing the company for the future of work. In our conversation, he opens up about:• The three eras of Dropbox's growth and evolution• The challenges he's faced over the past 18 years• What he learned about himself• How he's been able to manage his psychology as a founder• The importance of maintaining your learning curve• Finding purpose beyond metrics and growth• The micro, macro, and meta aspects of building companies• Much more—Brought to you by:• Paragon—Ship every SaaS integration your customers want• Explo—Embed customer-facing analytics in your product• Vanta—Automate compliance. Simplify security—Find the transcript at: https://www.lennysnewsletter.com/p/behind-the-founder-drew-houston-dropbox—Where to find Drew Houston:• X: https://x.com/drewhouston• LinkedIn: https://www.linkedin.com/in/drewhouston/—Where to find Lenny:• Newsletter: https://www.lennysnewsletter.com• X: https://twitter.com/lennysan• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/—In this episode, we cover:(00:00) Introduction to Drew and Dropbox(04:44) The three eras of Dropbox(07:53) The first era: Viral growth and early success(14:19) The second era: Challenges and competition(20:49) Strategic shifts and refocusing(29:36) Personal reflections and leadership lessons(40:19) Unlocking mindfulness and building support systems(43:14) The Enneagram test(50:35) The challenges of being a founder CEO(58:11) The third era: Rebooting the team and core business(01:22:41) Lessons and advice for aspiring founders(01:27:46) Balancing personal and professional growth(01:42:38) Final reflections and future outlook—Referenced:• Dropbox: https://www.dropbox.com/• Y Combinator: https://www.ycombinator.com/• Paul Graham's website: https://www.paulgraham.com/• Hacker News: https://news.ycombinator.com/• Arash Ferdowsi on LinkedIn: https://www.linkedin.com/in/arashferdowsi/• Sequoia Capital: https://www.sequoiacap.com/• Pejman Nozad on LinkedIn: https://www.linkedin.com/in/pejman/• Mike Moritz on LinkedIn: https://www.linkedin.com/in/michaelmoritz/• TechCrunch Disrupt: https://techcrunch.com/events/tc-disrupt-2024/• Dropbox viral demo: https://youtu.be/7QmCUDHpNzE• Digg: https://digg.com/• Reddit: https://www.reddit.com/• Hadi and Ali Partovi: https://www.partovi.org/• Zynga: https://www.zynga.com/• Steve Jobs announces Apple's iCloud: https://www.youtube.com/watch?v=ilnfUa_-Rbc• Dropbox Carousel: https://en.wikipedia.org/wiki/Dropbox_Carousel• Dropbox Is Buying Mega-Hyped Email Startup Mailbox: https://www.businessinsider.com/dropbox-is-buying-mega-hyped-email-startup-mailbox-2013-3• 5 essential questions to craft a winning strategy | Roger Martin (author, advisor, speaker): https://www.lennysnewsletter.com/p/the-ultimate-guide-to-strategy-roger-martin• Intel: https://www.intel.com/• Gordon Moore: https://en.wikipedia.org/wiki/Gordon_Moore• Netscape: https://en.wikipedia.org/wiki/Netscape• Myspace: https://en.wikipedia.org/wiki/Myspace• Bill Campbell: https://en.wikipedia.org/wiki/Bill_Campbell_(business_executive)• Enneagram type descriptions: https://www.enneagraminstitute.com/type-descriptions/• The Myers-Briggs Type Indicator: 
https://www.themyersbriggs.com/en-US/Products-and-Services/Myers-Briggs• Brian Chesky's new playbook: https://www.lennysnewsletter.com/p/brian-cheskys-contrarian-approach• Ben Horowitz on X: https://x.com/bhorowitz• Why Read Peter Drucker?: https://hbr.org/2009/11/why-read-peter-drucker• GitLab: https://about.gitlab.com/• Automattic: https://automattic.com/• Dropbox Dash: https://www.dash.dropbox.com/• Welcome Command E to Dropbox: https://blog.dropbox.com/topics/company/welcome-command-e-to-dropbox-• StarCraft: https://en.wikipedia.org/wiki/StarCraft_(video_game)• Procter & Gamble and the Beauty of Small Wins: https://hbr.org/2009/10/the-beauty-of-small-wins• Teaching Smart People How to Learn: https://hbr.org/1991/05/teaching-smart-people-how-to-learn—Recommended books:• Guerrilla Marketing: Easy and Inexpensive Strategies for Making Big Profits from Your Small Business: https://www.amazon.com/Guerilla-Marketing-Inexpensive-Strategies-Business/dp/0618785914• Playing to Win: How Strategy Really Works: https://www.amazon.com/Playing-Win-Strategy-Really-Works/dp/142218739X• High Output Management: https://www.amazon.com/High-Output-Management-Andrew-Grove/dp/0679762884/• Only the Paranoid Survive: How to Exploit the Crisis Points That Challenge Every Company: https://www.amazon.com/Only-Paranoid-Survive-Exploit-Challenge/dp/0385483821• Zone to Win: Organizing to Compete in an Age of Disruption: https://www.amazon.com/Zone-Win-Organizing-Compete-Disruption/dp/1682302113• Warren Buffett's books: https://www.amazon.com/warren-buffett-Books/s?k=warren+buffett&rh=n%3A283155• Poor Charlie's Almanack: The Essential Wit and Wisdom of Charles T. Munger: https://www.amazon.com/Poor-Charlies-Almanack-Essential-Charles/dp/1953953239• Invent and Wander: The Collected Writings of Jeff Bezos: https://www.amazon.com/Invent-Wander-Collected-Writings-Introduction/dp/1647820715/• The 15 Commitments of Conscious Leadership: A New Paradigm for Sustainable: https://www.amazon.com/15-Commitments-Conscious-Leadership-Sustainable-ebook/dp/B00R3MHWUE—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.—Lenny may be an investor in the companies discussed. Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe

Open Source Startup Podcast
E162: The AI Code Editor War with Zed

Open Source Startup Podcast

Play Episode Listen Later Jan 7, 2025 30:04


Nathan Sobo is the Founder of Zed, the next-gen code editor that enables high-performance collaboration - powered by AI. Open source zed has 53K Stars on GitHub and is used by engineers at Vercel, Apple, Anthropic, and GitLab. Prior to founding Zed, Nathan created the editor Atom at GitHub which reached 1M+ active users. Zed has raised from investors including Redpoint and Root Ventures. In this episode, we dive into Nathan's deep history with code editors including the widely adopted GitHub editor Atom, the decision to make Zed open source (and the massive 10x growth that came from it), how AI changed their trajectory, why collaboration is core to becoming the defacto editor, anchoring on performance and responsiveness, how they're thinking about the commercial side of Zed & more!

Let's Talk Supply Chain
442: On The Margins - How to Thrive Through Cost-Cutting and Corporate Changes

Let's Talk Supply Chain

Play Episode Listen Later Dec 23, 2024 45:02


On The Margins: How procurement leaders can maintain supplier relationships, support teams, nurture trust and build resilience in volatile markets.     IN THIS EPISODE WE DISCUSS:   [06.24] An introduction to Michael van Keulen, and what he loves about travel and spending time with the supply chain community. “I love to connect, I take so much pride in what I get to do every day… Helping and playing a role in the community we've created keeps me energized.” [08.31] From the opportunities in technology to big macro challenges, the issues that are top of mind for the procurement community right now, and why collaboration remains crucial. “We're excited about technology, there's so much out there… Finding the right solution isn't easy, but there's a lot of attention now paid to technology in procurement.” [12.48] Coupa's Mind Your Business campaign. [14.08] The importance of talking about how to thrive through cost-cutting and corporate changes. [15.09] An introduction to Rendi Miller from GitLab, and what she loves about procurement. “Like many people, I fell into procurement. And it's served me so well because of the network of people I've met, friends that I've made. It's a really unique group.” [18.10] How to approach change and navigate transition, and Rendi's personal experience of managing big corporate transitions. “The one thing we can always count on is change. You need to be adaptable, and not be afraid of it… Have trust with your employees as a leader, and have a solid foundation built for your people, processes, and technology.” [21.48] Rendi's advice to her younger self for navigating change. “Every time I've been through some sort of change, it's really been for the better in the long run... You can't be shortsighted.” [25.14] The challenges Rendi faced, and lessons she learned, from managing corporate transitions. [27.40] Rendi's advice for procurement leaders to help maintain supplier relationships in the face of pressure. “The time when you need them to step in and help you with a reduction is not the time to start building a relationship! The time to build relationships is right from the beginning… Treat them as partners instead of just vendors that work for you.” [30.13] How leaders can support their teams emotionally during big changes. [34.18] What procurement leaders can do now to improve resilience for the future. [35.15] It's trivia time! Three questions stand between an audience member and a brand new pickleball set. [40.02] Coupa Inspire returns in 2025 – don't miss your chance to meet Sarah and Michael in Las Vegas.   RESOURCES AND LINKS MENTIONED:   If you enjoyed the show, there are plenty more episodes of On The Margins to explore, or check out 213: Manage Your Supply Chain Planning Smarter and Safer with Coupa.

Work In Progress
Turning AI into a valuable career tool

Work In Progress

Play Episode Listen Later Dec 17, 2024 12:04


In this episode of the Work in Progress podcast, GitLab Foundation president & CEO Ellie Bertani joins me to discuss whether AI will eliminate jobs or will AI unlock economic opportunity for workers and the human potential in all of us? The impact of AI on workers and business was a big part of the conversation at the Human Potential Summit in Denver earlier this fall. GitLab Foundation is on a mission to increase lifetime earnings for people through education, training and access to opportunities, says Bertani. When it comes to AI, the organization is looking at funding projects that can make a positive impact on the workforce and help workers thrive in today's economy. It is committed to unlocking access to new, high-paying roles in underserved communities. From AI-driven job-matching platforms for the Navajo Nation to smarter systems that help nonprofits maximize impact, GitLab's approach aims to make AI work for people, not against them. In the podcast, Bertani discusses common mistakes organizations make with AI, how to avoid them, and why clarity of purpose is essential when deploying AI solutions. You can listen to the entire conversation here, or wherever you get your podcasts. You can also find our podcasts on the Work in Progress YouTube channel. The conversation was part of the WorkingNation media partnership with the Human Potential Summit. Episode 344: Ellie Bertani, president & CEO, GitLab FoundationHost & Executive Producer: Ramona Schindelheim, Editor-in-Chief, WorkingNationProducer: Larry BuhlTheme Music: Composed by Lee Rosevere and licensed under CC by 4Transcript: Download the transcript for this episode hereWork in Progress Podcast: Catch up on previous episodes here

Software Defined Talk
Episode 497: Big Math

Software Defined Talk

Play Episode Listen Later Dec 13, 2024 74:54


This week, we discuss the 12 Days of OpenAI, the latest in quantum computing, and Nvidia's unique management style. Plus, Coté shares his thoughts on turkeys and BBQ. Watch the YouTube Live Recording of Episode (https://www.youtube.com/watch?v=kuxa1FqX4tw) 497 (https://www.youtube.com/watch?v=kuxa1FqX4tw) Runner-up Titles Go full-on American Multiple days of meat Advent of Meat Fowl Day ChopOps Fractured and Scattered Mr. Mahalo Rundown 12 Days of OpenAI (https://openai.com/12-days/) Update on ChatGPT's impressive growth (https://www.threads.net/@ociubotaru/post/DDLN7i2NKxT?xmt=AQGzdg4NWoYwDHDBtKHGLrqrlUrEWSELSfF5FjD3z-yo1g) The phony comforts of AI skepticism (https://www.platformer.news/ai-skeptics-gary-marcus-curve-conference/?ref=platformer-newsletter) Quantum Computing Meet Willow, our state-of-the-art quantum chip (https://blog.google/technology/research/google-willow-quantum-chip/) Google unveils 'mind-boggling' quantum computing chip (https://www.bbc.co.uk/news/articles/c791ng0zvl3o) Relevant to your Interests Intel's Search for a CEO (https://www.threads.net/@carnage4life/post/DDMgH0eJRPS?xmt=AQGzYswkrS4wmq1DZVn9YIyulb7odNRA717aPumWg0lRxg) Intel names two chip industry veterans to its board amid CEO search (https://www.reuters.com/technology/intel-names-two-chip-industry-veterans-its-board-amid-ceo-search-2024-12-05/) Why Gelsinger was wrong for Intel | The Observation Deck (https://bcantrill.dtrace.org/2024/12/08/why-gelsinger-was-wrong-for-intel/) Ousted Intel CEO asks people to “pray and fast” for staff (https://www.thestack.technology/intel-ceo-pray/?trk=feed_main-feed-card_feed-article-content) He Was Going to Save Intel. He Destroyed $150 Billion of Value Instead. (https://www.wsj.com/tech/intel-ceo-pat-gelsinger-chips-bd4d61f9) Sundar Pichai says Google Search will ‘change profoundly' in 2025 (https://www.theverge.com/2024/12/5/24314245/sundar-pichai-google-search-change-profoundly-2025) Broadcom reverses controversial plan in effort to cull VMware migrations (https://arstechnica.com/information-technology/2024/12/new-broadcom-sales-plan-may-be-insignificant-in-deterring-vmware-migrations/) Dev-Led Landscape: Transforming a Middle-Aged Startup (https://tylerjewell.substack.com/p/dev-led-landscape-transforming-a?trk=feed_main-feed-card_feed-article-content&utm_campaign=posts-open-in-app&triedRedirect=true) China launches antitrust probe into Nvidia (https://on.ft.com/4gtEvm1) Confidential computing at 1Password (https://blog.1password.com/confidential-computing/?ck_subscriber_id=512840665) The 50 best video games of 2024 (https://www.polygon.com/what-to-play/24078256/best-video-games-2024) Chiplet Market Growth Forecast to US$411 Billion by 2035 (https://www.idtechex.com/en/research-article/chiplet-market-growth-forecast-to-us-411-billion-by-2035/31905#:~:text=It%20predicts%20the%20market%20will,as%20data%20centers%20and%20AI) Cohesity completes its merger with Veritas; here's how they'll integrate (https://techcrunch.com/2024/12/10/cohesity-completes-its-merger-with-veritas-heres-how-theyll-integrate/) Automattic acquires WPAI, a startup that makes AI products for WordPress (https://techcrunch.com/2024/12/09/automattic-acquires-wpai-a-startup-that-creates-ai-solutions-for-wordpress/) The low-hanging fruit is gone; the hill is steeper now: Sundar Pichai on AI, Future of engg jobs (https://www.nextbigwhat.com/p/the-low-hanging-fruit-is-gone-the) What Analysts Are Saying After Oracle's Results Underwhelmed Investo 
(https://finance.yahoo.com/news/analysts-saying-oracles-results-underwhelmed-154416031.html)rs (https://finance.yahoo.com/news/analysts-saying-oracles-results-underwhelmed-154416031.html) Bill Staples on LinkedIn: I am honored and excited to lead GitLab into its next chapter as CEO (https://www.linkedin.com/posts/williamstaples_i-am-honored-and-excited-to-lead-gitlab-into-activity-7270549076067135488-2QeY) From where I left (https://antirez.com/news/144) (Redis) Amazon sued by DC attorney general for allegedly excluding neighborhoods from Prime delivery (https://www.cnbc.com/2024/12/04/amazon-sued-by-dc-ag-over-excluding-areas-from-prime-delivery.html) Nonsense Sinister USB-C connector (https://x.com/jonbruner/status/1864377520835174450?s=46&t=tKrY7ObmfMDBTim-ug3gOw) Listener Feedback Intel after Gelsinger / Oxide (https://oxide.computer/podcasts/oxide-and-friends/2218242) Git ingest (https://gitingest.com/) Conferences CfgMgmtCamp (https://cfgmgmtcamp.org/ghent2025/), February 2-5, 2025. DevOpsDayLA (https://www.socallinuxexpo.org/scale/22x/events/devopsday-la) at SCALE22x (https://www.socallinuxexpo.org/scale/22x), March 6-9, 2025, discount code DEVOP SDT News & Community Join our Slack community (https://softwaredefinedtalk.slack.com/join/shared_invite/zt-1hn55iv5d-UTfN7mVX1D9D5ExRt3ZJYQ#/shared-invite/email) Email the show: questions@softwaredefinedtalk.com (mailto:questions@softwaredefinedtalk.com) Free stickers: Email your address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) Follow us on social media: Twitter (https://twitter.com/softwaredeftalk), Threads (https://www.threads.net/@softwaredefinedtalk), Mastodon (https://hachyderm.io/@softwaredefinedtalk), LinkedIn (https://www.linkedin.com/company/software-defined-talk/), BlueSky (https://bsky.app/profile/softwaredefinedtalk.com) Watch us on: Twitch (https://www.twitch.tv/sdtpodcast), YouTube (https://www.youtube.com/channel/UCi3OJPV6h9tp-hbsGBLGsDQ/featured), Instagram (https://www.instagram.com/softwaredefinedtalk/), TikTok (https://www.tiktok.com/@softwaredefinedtalk) Book offer: Use code SDT for $20 off "Digital WTF" by Coté (https://leanpub.com/digitalwtf/c/sdt) Sponsor the show (https://www.softwaredefinedtalk.com/ads): ads@softwaredefinedtalk.com (mailto:ads@softwaredefinedtalk.com) Recommendations Brandon: Amazon Smart Plug (https://www.amazon.com/dp/B089DR29T6?tag=googhydr-20&hvadid=432178372710&hvpos=&hvnetw=g&hvrand=79947055925131898&hvpone=&hvptwo=&hvqmt=e&hvdev=c&hvdvcmdl=&hvlocint=&hvlocphy=9028322&hvtargid=kwd-353517962350&ref=pd_sl_6gaoxcyj5s_e&gbraid=0AAAAADl_c3InfPpEWgpti6uviRwBj5XdC&gclid=CjwKCAiAjeW6BhBAEiwAdKltMuUX3-S22ramZrYvArmqnelNy2kDLZQfLhEpMYA1AkNLryUqPusezRoCrSUQAvD_BwE) Reacting to the AWS re:Invent Keynote (https://www.thecloudcast.net/2024/12/reacting-to-aws-reinvent-keynote.html) Coté: Tanzu Platform 10 is GA'ed (https://blogs.vmware.com/tanzu/broadcom-announces-the-general-availability-of-vmware-tanzu-platform-10-making-it-easier-for-customers-to-build-and-launch-new-applications-in-the-private-cloud/) Photo Credits Header (https://unsplash.com/photos/a-laptop-computer-sitting-on-top-of-a-wooden-table-G_vWviqUCCg) Artwork (https://unsplash.com/photos/a-large-display-of-red-numbers-on-a-wall-tOjIx_NyzFo)

Develpreneur: Become a Better Developer and Entrepreneur
Developer Tools That Transform: Habits for Smarter Development

Develpreneur: Become a Better Developer and Entrepreneur

Play Episode Listen Later Dec 10, 2024 27:18


In the ever-evolving world of software development, the tools you use can either streamline your workflow or slow you down. Mastering the right developer tools isn't just about efficiency—it's about transforming how you approach challenges and fostering habits that drive smarter, more effective development. The Building Better Developers podcast dives deep into this topic, exploring how thoughtful tool selection and intentional habits can lead to meaningful growth and productivity. Let's explore how developer tools can be a catalyst for transformation in your work. Why Developer Tools Matter The podcast emphasizes that developer tools are not just about improving efficiency—they shape how we think and solve problems. Tools like integrated development environments (IDEs), task management software, and even simple utilities help bridge the gap between idea and execution. Choose tools that enhance, not complicate. When evaluating tools, prioritize simplicity and integration over complexity. As Rob Broadhead explains, “Avoid tools that add work. The app should improve your life, not make it harder.” For instance, while tools like QuickBooks Desktop streamline accounting, their online counterparts may introduce unnecessary complexity. Evaluating Developer Tools: A Framework The podcast introduces a structured approach to evaluating tools. Here's a summarized framework: Define Your Needs: Identify the problems the tool should solve. Is it for task tracking, bug fixing, or customer relationship management? Research: Use online comparisons or customer reviews. Google terms like “alternatives to [tool]” or “tools like [tool name]” to discover your options. Test the Tools: Take advantage of free trials or demos to assess usability and functionality. Measure ROI: Evaluate the time and effort saved versus the cost of the tool. By taking this methodical approach, you avoid the common trap of jumping into tools without a clear purpose. Common Pitfalls with Developer Tools Michael Meloche warns against several pitfalls, including: Over-complicating workflows: Switching between multiple tools can lead to inefficiency. Find one that meets most of your needs and stick with it. Time sinks: Developers often spend hours experimenting with tools that don't provide meaningful value. Set clear time limits for evaluating new software. Redundancy: Avoid using multiple tools for the same task. For example, don't use three bug trackers when one robust option like Jira will suffice. Remember, the goal isn't to try every tool but to find those that integrate seamlessly into your existing processes. Top Developer Tools Mentioned The podcast lists several essential categories of tools every developer should explore: Task Management: Tools like Jira, Asana, and Monday.com streamline task organization and collaboration. Version Control: Git remains the gold standard, with platforms like GitHub and GitLab offering enhanced collaboration features. Time Tracking: Tools like Toggl help track productivity and billable hours effectively. Communication: Slack and Microsoft Teams are ideal for keeping remote teams connected. The Seasonal Approach to Tool Mastery Rob proposes a seasonal approach to tool evaluation. Instead of randomly testing tools throughout the year, dedicate specific periods to exploring certain categories. For example, focus on marketing automation tools one season and customer relationship management tools the next. 
This method ensures you gain deep knowledge of tools relevant to your work without overwhelming yourself. Tips for Implementing New Tools Start Small: Test one feature at a time. For instance, if trying a new IDE, begin by configuring it for a small project. Involve the Team: Gather input from colleagues to ensure the tool works across the board. Track Impact: Use metrics to evaluate the tool's impact, like reduced project delays or improved code quality. Challenge for Developers The podcast ends with a challenge: spend seven days exploring a new category of tools. Here's how to get started: Day 1: Research tools in a specific category (e.g., bug tracking or time management). Days 2-6: Spend 10-15 minutes each day testing different tools. Day 7: Evaluate your findings and pick the one that fits best. This simple exercise sharpens your evaluation skills and helps you discover tools that genuinely improve your workflow. Final Thoughts Building better habits and mastering tools isn't about chasing every shiny new app. It's about intentional choices that align with your goals. As Rob Broadhead wisely concludes, “It's not about doing more; it's about doing what matters.” Take the time to evaluate your toolset, and you'll find yourself not just working harder but working smarter. Ready to embrace the challenge? Let us know your top tool picks! Stay Connected: Join the Develpreneur Community We invite you to join our community and share your coding journey with us. Whether you're a seasoned developer or just starting, there's always room to learn and grow together. Contact us at info@develpreneur.com with your questions, feedback, or suggestions for future episodes. Together, let's continue exploring the exciting world of software development. Additional Resources Updating Developer Tools: Keeping Your Tools Sharp and Efficient Tools to Separate Developers from Coders Building a Strong Developer Toolkit: Enhancing Skills and Productivity Developer Tools That Transform: Habits for Smarter Development Building Better Habits Videos – With Bonus Content

Sales Game Changers | Tip-Filled  Conversations with Sales Leaders About Their Successful Careers
The Importance of Technology by Policy with Rob Efrus and GitLab Federal CTO Joel Krooswyk

Sales Game Changers | Tip-Filled Conversations with Sales Leaders About Their Successful Careers

Play Episode Listen Later Dec 9, 2024 28:52


This is episode 719. Read the complete transcription on the Sales Game Changers Podcast website. The Sales Game Changers Podcast was recognized by YesWare as the top sales podcast. Read the announcement here. Read more about the Institute for Excellence in Sales Premier Women in Sales Employer (PWISE) designation and program here. Purchase Fred Diamond's best-sellers Love, Hope, Lyme: What Family Members, Partners, and Friends Who Love a Chronic Lyme Survivor Need to Know and Insights for Sales Game Changers now! Today's show focused on the concept of “Technology by Policy” and featured Federal Business Development and Marketing expert Rob Efrus and GitLab Federal CTO Joel Krooswyk. ROB'S TIP:  “Try and walk in the shoes of your end user, federal customers, understand the pressures that they are facing, agency reorganizations, budget cuts, etc. doing more with less, and align your value proposition to those challenges and the policies that are driving those challenges.” JOEL'S TIP: “Find some good sources of information and just subscribe so you're up to date. Don't let policy fly by as some of it's going to be really important. Find a few news sources you like. I'm a big fan of subscribing directly to the White House and to CISA. If you just get a few of those things coming in your inbox, you can at least keep up with policy changes as they're being published.”  

Closing Bell
Closing Bell Overtime: ServiceNow CEO On AI Strength; Rubrik CEO On Blowout Quarter 12/5/24

Closing Bell

Play Episode Listen Later Dec 5, 2024 43:04


The S&P 500 recorded its first down day of December as stocks pulled back from record levels—but it was another busy Overtime session with earnings from Asana, HPE, Ulta, Lululemon, Victoria's Secret, Rubrik, Docusign, Gitlab and Samsara. Oppenheimer analyst Brian Nagel breaks down why the “not that bad” quarter for the retailer is actually a positive sign. Rubrik CEO Bipul Sinha on the company's blowout quarter and its growth momentum. ServiceNow Bill McDermott talks its AI strategy and partnerships. Plus, Bill Holdings CEO Rene Lacerte on his outlook for the small businesses and his own company's stock run higher.  

Open Source Startup Podcast
E158: Open Source Diagramming and Charting with Mermaid Chart

Open Source Startup Podcast

Play Episode Listen Later Dec 5, 2024 40:47


Andrew Firestone is CEO and Knut Sveidqvist is CTO of Mermaid Chart, the open source text-based diagraming software platform. The mermaid project has over 70K stars on GitHub and is an open source diagramming and charting tool. Mermaid Chart has raised $7.5M from investors including Open Core Ventures. In this episode, we dig into the mermaid project's 8 year journey, going from side project to company, working with GitLab founder Sid Sijbrandij to bring Andrew in as CEO & more!

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Bolt.new, Flow Engineering for Code Agents, and >$8m ARR in 2 months as a Claude Wrapper

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Dec 2, 2024 98:39


The full schedule for Latent Space LIVE! at NeurIPS has been announced, featuring Best of 2024 overview talks for the AI Startup Landscape, Computer Vision, Open Models, Transformers Killers, Synthetic Data, Agents, and Scaling, and speakers from Sarah Guo of Conviction, Roboflow, AI2/Meta, Recursal/Together, HuggingFace, OpenHands and SemiAnalysis. Join us for the IRL event/Livestream! Alessio will also be holding a meetup at AWS Re:Invent in Las Vegas this Wednesday. See our new Events page for dates of AI Engineer Summit, Singapore, and World's Fair in 2025. LAST CALL for questions for our big 2024 recap episode! Submit questions and messages on Speakpipe here for a chance to appear on the show!When we first observed that GPT Wrappers are Good, Actually, we did not even have Bolt on our radar. Since we recorded our Anthropic episode discussing building Agents with the new Claude 3.5 Sonnet, Bolt.new (by Stackblitz) has easily cleared the $8m ARR bar, repeating and accelerating its initial $4m feat.There are very many AI code generators and VS Code forks out there, but Bolt probably broke through initially because of its incredible zero shot low effort app generation:But as we explain in the pod, Bolt also emphasized deploy (Netlify)/ backend (Supabase)/ fullstack capabilities on top of Stackblitz's existing WebContainer full-WASM-powered-developer-environment-in-the-browser tech. Since then, the team has been shipping like mad (with weekly office hours), with bugfixing, full screen, multi-device, long context, diff based edits (using speculative decoding like we covered in Inference, Fast and Slow).All of this has captured the imagination of low/no code builders like Greg Isenberg and many others on YouTube/TikTok/Reddit/X/Linkedin etc:Just as with Fireworks, our relationship with Bolt/Stackblitz goes a bit deeper than normal - swyx advised the launch and got a front row seat to this epic journey, as well as demoed it with Realtime Voice at the recent OpenAI Dev Day. So we are very proud to be the first/closest to tell the full open story of Bolt/Stackblitz!Flow Engineering + Qodo/AlphaCodium UpdateIn year 2 of the pod we have been on a roll getting former guests to return as guest cohosts (Harrison Chase, Aman Sanger, Jon Frankle), and it was a pleasure to catch Itamar Friedman back on the pod, giving us an update on all things Qodo and Testing Agents from our last catchup a year and a half ago:Qodo (they renamed in September) went viral in early January this year with AlphaCodium (paper here, code here) beating DeepMind's AlphaCode with high efficiency:With a simple problem solving code agent:* The first step is to have the model reason about the problem. They describe it using bullet points and focus on the goal, inputs, outputs, rules, constraints, and any other relevant details.* Then, they make the model reason about the public tests and come up with an explanation of why the input leads to that particular output. * The model generates two to three potential solutions in text and ranks them in terms of correctness, simplicity, and robustness. * Then, it generates more diverse tests for the problem, covering cases not part of the original public tests. * Iteratively, pick a solution, generate the code, and run it on a few test cases. 
* If the tests fail, improve the code and repeat the process until the code passes every test.swyx has previously written similar thoughts on types vs tests for putting bounds on program behavior, but AlphaCodium extends this to AI generated tests and code.More recently, Itamar has also shown that AlphaCodium's techniques also extend well to the o1 models:Making Flow Engineering a useful technique to improve code model performance on every model. This is something we see AI Engineers uniquely well positioned to do compared to ML Engineers/Researchers.Full Video PodcastLike and subscribe!Show Notes* Itamar* Qodo* First episode* Eric* Bolt* StackBlitz* Thinkster* AlphaCodium* WebContainersChapters* 00:00:00 Introductions & Updates* 00:06:01 Generic vs. Specific AI Agents* 00:07:40 Maintaining vs Creating with AI* 00:17:46 Human vs Agent Computer Interfaces* 00:20:15 Why Docker doesn't work for Bolt* 00:24:23 Creating Testing and Code Review Loops* 00:28:07 Bolt's Task Breakdown Flow* 00:31:04 AI in Complex Enterprise Environments* 00:41:43 AlphaCodium* 00:44:39 Strategies for Breaking Down Complex Tasks* 00:45:22 Building in Open Source* 00:50:35 Choosing a product as a founder* 00:59:03 Reflections on Bolt Success* 01:06:07 Building a B2C GTM* 01:18:11 AI Capabilities and Pricing Tiers* 01:20:28 What makes Bolt unique* 01:23:07 Future Growth and Product Development* 01:29:06 Competitive Landscape in AI Engineering* 01:30:01 Advice to Founders and Embracing AI* 01:32:20 Having a baby and completing an Iron ManTranscriptAlessio [00:00:00]: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.Swyx [00:00:12]: Hey, and today we're still in our sort of makeshift in-between studio, but we're very delighted to have a former returning guest host, Itamar. Welcome back.Itamar [00:00:21]: Great to be here after a year or more. Yeah, a year and a half.Swyx [00:00:24]: You're one of our earliest guests on Agents. Now you're CEO co-founder of Kodo. Right. Which has just been renamed. You also raised a $40 million Series A, and we can get caught up on everything, but we're also delighted to have our new guest, Eric. Welcome.Eric [00:00:42]: Thank you. Excited to be here. Should I say Bolt or StackBlitz?Swyx [00:00:45]: Like, is it like its own company now or?Eric [00:00:47]: Yeah. Bolt's definitely bolt.new. That's the thing that we're probably the most known for, I imagine, at this point.Swyx [00:00:54]: Which is ridiculous to say because you were working at StackBlitz for so long.Eric [00:00:57]: Yeah. I mean, within a week, we were doing like double the amount of traffic. And StackBlitz had been online for seven years, and we were like, what? But anyways, yeah. So we're StackBlitz, the company behind bolt.new. If you've heard of bolt.new, that's our stuff. Yeah.Swyx [00:01:12]: Yeah.Itamar [00:01:13]: Excellent. I see, by the way, that the founder mode, you need to know to capture opportunities. So kudos on doing that, right? You're working on some technology, and then suddenly you can exploit that to a new world. Yeah.Eric [00:01:24]: Totally. And I think, well, not to jump, but 100%, I mean, a couple of months ago, we had the idea for Bolt earlier this year, but we haven't really shared this too much publicly. 
But we actually had tried to build it with some of those state-of-the-art models back in January, February, you can kind of imagine which, and they just weren't good enough to actually do the code generation where the code was accurate and it was fast and whatever have you without a ton of like rag, but then there was like issues with that. So we put it on the shelf and then we got kind of a sneak peek of some of the new models that have come out in the past couple of months now. And so once we saw that, once we actually saw the code gen from it, we were like, oh my God, like, okay, we can build a product around this. And so that was really the impetus of us building the thing. But with that, it was StackBlitz, the core StackBlitz product the past seven years has been an IDE for developers. So the entire user experience flow we've built up just didn't make sense. And so when we kind of went out to build Bolt, we just thought, you know, if we were inventing our product today, what would the interface look like given what is now possible with the AI code gen? And so there's definitely a lot of conversations we had internally, but you know, just kind of when we logically laid it out, we were like, yeah, I think it makes sense to just greenfield a new thing and let's see what happens. If it works great, then we'll figure it out. If it doesn't work great, then it'll get deleted at some point. So that's kind of how it actually came to be.Swyx [00:02:49]: I'll mention your background a little bit. You were also founder of Thinkster before you started StackBlitz. So both of you are second time founders. Both of you have sort of re-founded your company recently. Yours was more of a rename. I think a slightly different direction as well. And then we can talk about both. Maybe just chronologically, should we get caught up on where Kodo is first and then you know, just like what people should know since the last pod? Sure.Itamar [00:03:12]: The last pod was two months after we launched and we basically had the vision that we talked about. The idea that software development is about specification, test and code, etc. We are more on the testing part as in essence, we think that if you solve testing, you solve software development. The beautiful chart that we'll put up on screen. And testing is a really big field, like there are many dimensions, unit testing, the level of the component, how big it is, how large it is. And then there is like different type of testing, is it regression or smoke or whatever. So back then we only had like one ID extension with unit tests as in focus. One and a half year later, first ID extension supports more type of testing as context aware. We index local, local repos, but also 10,000s of repos for Fortune 500 companies. We have another agent, another tool that is called, the pure agent is the open source and the commercial one is CodoMerge. And then we have another open source called CoverAgent, which is not yet a commercial product coming very soon. It's very impressive. It could be that already people are approving automated pull requests that they don't even aware in really big open sources. So once we have enough of these, we will also launch another agent. So for the first one and a half year, what we did is grew in our offering and mostly on the side of, does this code actually works, testing, code review, et cetera. And we believe that's the critical milestone that needs to be achieved to actually have the AI engineer for enterprise software. 
And then like for the first year was everything bottom up, getting to 1 million installation. 2024, that was 2023, 2024 was starting to monetize, to feel like how it is to make the first buck. So we did the teams offering, it went well with a thousand of teams, et cetera. And then we started like just a few months ago to do enterprise with everything you need, which is a lot of things that discussed in the last post that was just released by Codelm. So that's how we call it at Codelm. Just opening the brackets, our company name was Codelm AI, and we renamed to Codo and we call our models Codelm. So back to my point, so we started Enterprise Motion and already have multiple Fortune 100 companies. And then with that, we raised a series of $40 million. And what's exciting about it is that enables us to develop more agents. That's our focus. I think it's very different. We're not coming very soon with an ID or something like that.Swyx [00:06:01]: You don't want to fork this code?Itamar [00:06:03]: Maybe we'll fork JetBrains or something just to be different.Swyx [00:06:08]: I noticed that, you know, I think the promise of general purpose agents has kind of died. Like everyone is doing kind of what you're doing. There's Codogen, Codomerge, and then there's a third one. What's the name of it?Itamar [00:06:17]: Yeah. Codocover. Cover. Which is like a commercial version of a cover agent. It's coming soon.Swyx [00:06:23]: Yeah. It's very similar with factory AI, also doing like droids. They all have special purpose doing things, but people don't really want general purpose agents. Right. The last time you were here, we talked about AutoGBT, the biggest thing of 2023. This year, not really relevant anymore. And I think it's mostly just because when you give me a general purpose agent, I don't know what to do with it.Eric [00:06:42]: Yeah.Itamar [00:06:43]: I totally agree with that. We're seeing it for a while and I think it will stay like that despite the computer use, et cetera, that supposedly can just replace us. You can just like prompt it to be, hey, now be a QA or be a QA person or a developer. I still think that there's a few reasons why you see like a dedicated agent. Again, I'm a bit more focused, like my head is more on complex software for big teams and enterprise, et cetera. And even think about permissions and what are the data sources and just the same way you manage permissions for users. Developers, you probably want to have dedicated guardrails and dedicated approvals for agents. I intentionally like touched a point on not many people think about. And of course, then what you can think of, like maybe there's different tools, tool use, et cetera. But just the first point by itself is a good reason why you want to have different agents.Alessio [00:07:40]: Just to compare that with Bot.new, you're almost focused on like the application is very complex and now you need better tools to kind of manage it and build on top of it. On Bot.new, it's almost like I was using it the other day. There's basically like, hey, look, I'm just trying to get started. You know, I'm not very opinionated on like how you're going to implement this. Like this is what I want to do. And you build a beautiful app with it. 
What people ask as the next step, you know, going back to like the general versus like specific, have you had people say, hey, you know, this is great to start, but then I want a specific Bot.new dot whatever else to do a more vertical integration and kind of like development or what's the, what do people say?Eric [00:08:18]: Yeah. I think, I think you kind of hit the, hit it head on, which is, you know, kind of the way that we've, we've kind of talked about internally is it's like people are using Bolt to go from like 0.0 to 1.0, like that's like kind of the biggest unlock that Bolt has versus most other things out there. I mean, I think that's kind of what's, what's very unique about Bolt. I think the, you know, the working on like existing enterprise applications is, I mean, it's crazy important because, you know, there's a, you look, when you look at the fortune 500, I mean, these code bases, some of these have been around for 20, 30 plus years. And so it's important to be going from, you know, 101.3 to 101.4, et cetera. I think for us, so what's been actually pretty interesting is we see there's kind of two different users for us that are coming in and it's very distinct. It's like people that are developers already. And then there's people that have never really written software and more if they have, it's been very, very minimal. And so in the first camp, what these developers are doing, like to go from zero to one, they're coming to Bolt and then they're ejecting the thing to get up or just downloading it and, you know, opening cursor, like whatever to, to, you know, keep iterating on the thing. And sometimes they'll bring it back to Bolt to like add in a huge piece of functionality or something. Right. But for the people that don't know how to code, they're actually just, they, they live in this thing. And that was one of the weird things when we launched is, you know, within a day of us being online, one of the most popular YouTube videos, and there's been a ton since, which was, you know, there's like, oh, Bolt is the cursor killer. And I originally saw the headlines and I was like, thanks for the views. I mean, I don't know. This doesn't make sense to me. That's not, that's not what we kind of thought.Swyx [00:09:44]: It's how YouTubers talk to each other. Well, everything kills everything else.Eric [00:09:47]: Totally. But what blew my mind was that there was any comparison because it's like cursor is a, is a local IDE product. But when, when we actually kind of dug into it and we, and we have people that are using our product saying this, I'm not using cursor. And I was like, what? And it turns out there are hundreds of thousands of people that we have seen that we're using cursor and we're trying to build apps with that where they're not traditional software does, but we're heavily leaning on the AI. And as you can imagine, it is very complicated, right? To do that with cursor. So when Bolt came out, they're like, wow, this thing's amazing because it kind of inverts the complexity where it's like, you know, it's not an IDE, it's, it's a, it's a chat-based sort of interface that we have. So that's kind of the split, which is rather interesting. We've had like the first startups now launch off of Bolt entirely where this, you know, tomorrow I'm doing a live stream with this guy named Paul, who he's built an entire CRM using this thing and you know, with backend, et cetera. 
And people have made their first money on the internet period, you know, launching this with Stripe or whatever have you. So that's, that's kind of the two main, the two main categories of folks that we see using Bolt though.Itamar [00:10:51]: I agree that I don't understand the comparison. It doesn't make sense to me. I think like we have like two types of families of tools. One is like we re-imagine the software development. I think Bolt is there and I think like Cursor is more like an evolution of what we already have. It's like taking the IDE and it's, it's amazing and it's okay, let's, let's adapt the IDE to an era where LLMs can do a lot for us. And Bolt is more like, okay, let's rethink everything totally. And I think we see a few tools there, like maybe Vercel v0 and maybe Repl.it in that area. And then in the area of let's expedite, let's change, let's, let's progress with what we already have. You can see Cursor and Kodo, but we're different between ourselves, Cursor and Kodo, but definitely I think that comparison doesn't make sense.Alessio [00:11:42]: And just to set the context, this is not a Twitter demo. You've made $4 million in revenue in four weeks. So this is, this is actually working, you know, it's not a, what, what do you think that is? Like, there's been so many people demoing coding agents on Twitter and then it doesn't really work. And then you guys were just like, here you go, it's live, go use it, pay us for it. You know, is there anything in the development that was like interesting and maybe how that compares to building your own agents?Eric [00:12:08]: We had no idea, honestly, like we, we, we've been pretty blown away and, and things have just kind of continued to grow faster since then. We're like, oh, today is week six. So I, I kind of came back to the point you just made, right, where it's, you, you kind of outlined, it's like, there's kind of this new market of like kind of rethinking the software development and then there's heavily augmenting existing developers. I think that, you know, both of which are, you know, AI code gen being extremely good, it's allowing existing developers to crank out software far faster than they could have ever before, right? It's like the ultimate power tool for an existing developer. But this code gen stuff is now so good. And then, and we saw this over the past, you know, from the beginning of the year when we tried to first build, it's actually lowered the barrier to people that, that aren't traditionally software engineers. But the kind of the key thing is if you kind of think about it from, imagine you've never written software before, right? My co-founder and I, he and I grew up down the street from each other in Chicago. We learned how to code when we were 13 together and we've been building stuff ever since. And this is back in like the mid 2000s or whatever, you know, there was nothing for free online to learn how to code. For our 13th birthdays, we asked our parents for, you know, O'Reilly books cause you couldn't get this at the library, right? And so instead of like an Xbox, we got, you know, programming books. But the hardest part for everyone learning to code is getting an environment set up locally, you know? 
And so when we built StackBlitz, like kind of the key thesis, like seven years ago, the insight we had was that, Hey, it seems like the browser has a lot of new APIs like WebAssembly and service workers, et cetera, where you could actually write an operating system that ran inside the browser that could boot in milliseconds. And you, you know, basically there's this missing capability of the web. Like the web should be able to build apps for the web, right? You should be able to build the web on the web. Every other platform has that, Visual Studio for Windows, Xcode for Mac. The web has no built in primitive for this. And so just like our built in kind of like nerd instinct on this was like, that seems like a huge hole and it's, you know, it will be very valuable or like, you know, very valuable problem to solve. So if you want to set up that environments, you know, this is what we spent the past seven years doing. And the reality is existing developers have running locally. They already know how to set up that environment. So the problem isn't as acute for them. When we put Bolt online, we took that technology called WebContainer and married it with these, you know, state of the art frontier models. And the people that have the most pain with getting stuff set up locally is people that don't code. I think that's been, you know, really the big explosive reason is no one else has been trying to make dev environments work inside of a browser tab, you know, for the past if since ever, other than basically our company, largely because there wasn't an immediate demand or need. So I think we kind of find ourselves at the right place at the right time. And again, for this market of people that don't know how to write software, you would kind of expect that you should be able to do this without downloading something to your computer in the same way that, hey, I don't have to download Photoshop now to make designs because there's Figma. I don't have to download Word because there's, you know, Google Docs. They're kind of looking at this as that sort of thing, right? Which was kind of the, you know, our impetus and kind of vision from the get-go. But you know, the code gen, the AI code gen stuff that's come out has just been, you know, an order of magnitude multiplier on how magic that is, right? So that's kind of my best distillation of like, what is going on here, you know?Alessio [00:15:21]: And you can deploy too, right?Eric [00:15:22]: Yeah.Alessio [00:15:23]: Yeah.Eric [00:15:24]: And so that's, what's really cool is it's, you know, we have deployment built in with Netlify and this is actually, I think, Sean, you actually built this at Netlify when you were there. Yeah. It's one of the most brilliant integrations actually, because, you know, effectively the API that Sean built, maybe you can speak to it, but like as a provider, we can just effectively give files to Netlify without the user even logging in and they have a live website. And if they want to keep, hold onto it, they can click a link and claim it to their Netlify account. But it basically is just this really magic experience because when you come to Bolt, you say, I want a website. Like my mom, 70, 71 years old, made her first website, you know, on the internet two weeks ago, right? It was about her nursing days.Swyx [00:16:03]: Oh, that's fantastic though. It wouldn't have been made.Eric [00:16:06]: A hundred percent. 
Cause even in, you know, when we've had a lot of people building personal, like deeply personal stuff, like in the first week we launched this, the sales guy from the East Coast, you know, replied to a tweet of mine and he said, thank you so much for building this, to you and your team. His daughter has a medical condition and so for her to travel, she has to like line up donors or something, you know, so ahead of time. And so he actually used Bolt to make a website to do that, to actually go and send it to folks in the region she was going to travel to ahead of time. I was really touched by it, but I also thought like, why, you know, why didn't he use like Wix or Squarespace? Right? I mean, this is, this is a solved problem, quote unquote, right? And then when I thought, I actually use Squarespace for my, for my, uh, the wedding website for my wife and I, like back in 2021, so I'm familiar, you know, it was, it was faster. I know how to code. I was like, this is faster. Right. And I thought back and I was like, there's a whole interface you have to learn how to use. And it's actually not that simple. There's like a million things you can configure in that thing. When you come to Bolt, there's a, there's a text box. You just say, I need a, I need a wedding website. Here's the date. Here's where it is. And here's a photo of me and my wife, put it somewhere relevant. It's actually the simplest way. And that's what my, when my mom came, she said, uh, I'm Pat Simons. I was a nurse in the seventies, you know, and like, here's the things I did and a website came out. So coming back to why is this such a, I think, why are we seeing this sort of growth? It's, this is the simplest interface I think maybe ever created to actually build and deploy a website. And then that website, my mom made, she's like, okay, this looks great. And there's, there's one button, you just click it, deploy, and it's live and you can buy a domain name, attach it to it. And you know, it's as simple as it gets, it's getting even simpler with some of the stuff we're working on. But anyways, so that's, it's, it's, uh, it's been really interesting to see some of the usage like that.Swyx [00:17:46]: I can offer my perspective. So I, you know, I probably should have disclosed a little bit that, uh, I'm a, uh, StackBlitz investor.Alessio [00:17:53]: Canceled the episode. I know, I know. Don't play it now. Pause.Eric actually reached out to show me Bolt before the launch. And we, you know, we talked a lot about, like, the framing of what we were going to talk about, how we marketed the thing, but also, like, what Bolt was going to need, like a whole sort of infrastructure.swyx: Netlify, I was a maintainer but I won't take claim for the anonymous upload. That's actually the origin story of Netlify. We can have Matt Billman talk about it, but that was [00:18:00] how Netlify started. You could drag and drop your zip file or folder from your desktop onto a website, it would have a live URL with no sign in.swyx: And so that was the origin story of Netlify. And it just persists to today. And it's just like it's really nice, interesting that both Bolt and Cognition's Devin and a bunch of other sort of agent type startups, they all use Netlify to deploy because of this one feature. 
They don't really care about the other features.swyx: But, but just because it's easy for computers to use and talk to it, like if you build an interface for computers specifically, that it's easy for them to navigate, then they will be used in agents. And I think that's a learning that a lot of developer tools companies are having. That's my Bolt launch story, and now I've said all that stuff.swyx: And I just wanted to come back to, like, the WebContainers thing, right? Like, I think you put a lot of weight on the technical moats. I think you also are just like, very good at product. So you've, you've like, built a better agent than a lot of people, the rest of us, including myself, who have tried to build these things, and we didn't get as far as you did.swyx: Don't shortchange yourself on product. But I think specifically [00:19:00] on, on infra, on like the sandboxing, like this is a thing that people really want. Alessio has backed E2B, which we'll have on at some point, talking about like the sort of the serverful side. But yours is, you know, inside of the browser, serverless.swyx: It doesn't cost you anything to serve one person versus a million people. It doesn't, doesn't cost you anything. I think that's interesting. I think in theory, we should be able to like run tests because you can run the full backend. Like, you can run Git, you can run Node, you can run maybe Python someday.swyx: We talked about this. But ideally, you should be able to have a fully agentic loop, running code, seeing the errors, correcting code, and just kind of self healing, right? Like, I mean, isn't that the dream?Eric: Totally.swyx: Yeah,Eric: totally. At least in Bolt, we've got, we've got a good amount of that today. I mean, there's a lot more for us to do, but one of the nice things, because like in WebContainer, you know, there's a lot of kind of stuff you go Google like, you know, turn Docker container into Wasm.Eric: You'll find a lot of stuff out there that will do that. The problem is it's very big, it's slow, and that ruins the experience. And so what we ended up doing is just writing an operating system from [00:20:00] scratch that was just purpose built to, you know, run in a browser tab. And the reason being is, you know, the Docker-to-Wasm things will give you an image that's like 60 to 100 megabytes, you know, maybe more, you know, and our, our OS, you know, kind of clocks in, I think, I think we're in like a, maybe, maybe a megabyte or less or something like that.Eric: I mean, it's, it's, you know, really, really, you know, stripped down.swyx: This is basically, the task involved is, I understand, mapping every single Linux call to some kind of WebAssembly implementation,Eric: but more or less, and, and then there's a lot of things actually, like when you're looking at a dev environment, there's a lot of things that you don't need that a traditional OS is gonna have, right?Eric: Like, you know, audio drivers, there's just like, there's just tons of things. Oh, yeah. Right. Yeah. You can just kind of toss them. Or alternatively, what you can do is you can actually be the nice thing. 
And this is, this kind of comes back to the origins of browsers, which is, you know, they're, they're at the beginning of the web and, you know, the late nineties, there were two very different kinds of visions for the web, where Alan Kay vehemently [00:21:00] disagreed with the idea that it should be document based, which is, you know, Tim Berners-Lee's, and that's kind of what ended up winning: this document-based kind of browsing documents on the web thing.Eric: Alan Kay, he's got this like very famous quote where he said, you know, you want web browsers to be mini operating systems. They should download little mini binaries and execute with like a little mini virtualized operating system in there. And what's kind of interesting about the history, not to geek out on this aspect, what's kind of interesting about the history is both of those folks ended up being right.Eric: Documents were actually the pragmatic way that the web worked, was, you know, became the most ubiquitous platform in the world, to the degree now that this is why WebAssembly has been invented: we need to do more low level things in a browser, same thing with WebGPU, et cetera. And so all these APIs, you know, to build an operating system came to the browser.Eric: And that was actually the realization we had in 2017 was, holy heck, like you can actually, you know, service workers, which were designed for allowing your app to work offline. That was the kind of the key one where it was like, wait a second, you can actually now run web servers within a [00:22:00] browser, like you can run a server that you open up.Eric: That's wild. Like full Node.js. Full Node.js. Like that capability. Like, I can have a URL that's programmatically controlled by a web application itself, boom. Like the web can build the web. The primitive is there. Everyone at the time, like we talked to people that like worked on, you know, Chrome and V8 and they were like, uhhhh.Eric: You know, like I don't know. But it's one of those things you just kind of have to go do it to find out. So we spent a couple of years, you know, working on it and yeah. And, and, and got to work, and back in 2021 is when we kind of put the first like beta of WebContainer online. Butswyx: in partnership with Google, right?swyx: Like Google actually had to help you get over the finish line with stuff.Eric: A hundred percent, because well, you know, over the years of when we were doing the R&D on the thing, kind of the biggest challenge: the two ways that you can kind of test how powerful and capable a platform is, the two types of applications are, one, video games, right, because they're just very compute intensive, a lot of calculations that have to happen, right?Eric: The second one are IDEs, because you're talking about actually virtualizing the actual [00:23:00] runtime environment you are in to actually build apps on top of it, which requires sophisticated capabilities, a lot of access to data. You know, a good amount of compute power, right, to effectively, you know, build an app-in-app sort of thing.Eric: So those, those are the stress tests. So if your platform is missing stuff, those are the things where you find out. Those are, those are the people building games and IDEs. They're the ones filing bugs on operating system level stuff. And for us, browser level stuff.
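The primitive Eric is pointing at can be sketched in a few lines. This is only an illustration of what the service worker API makes possible, not StackBlitz's WebContainer code: a worker that answers requests for a URL entirely from inside the tab, so the page effectively hosts its own server.

```ts
// sw.ts — a minimal sketch, not WebContainer itself.
/// <reference lib="webworker" />
const sw = self as unknown as ServiceWorkerGlobalScope;

sw.addEventListener("fetch", (event) => {
  const url = new URL(event.request.url);
  if (url.pathname.startsWith("/virtual/")) {
    // The response is computed entirely inside the browser tab;
    // no network server is involved.
    event.respondWith(
      new Response(`<h1>Served from inside this tab: ${url.pathname}</h1>`, {
        headers: { "Content-Type": "text/html" },
      })
    );
  }
  // Anything else falls through to the real network.
});

// Page side (sketch): after navigator.serviceWorker.register("/sw.js"),
// fetching or navigating to /virtual/hello is answered by the code above.
```

WebContainer goes much further than this, per Eric's description (a purpose-built OS mapped onto browser APIs), but this is the "URL programmatically controlled by a web application itself" primitive he is referring to.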
And, and they were amazing because I mean, just making Chrome DevTools be able to debug, I mean, it's, it's not, it wasn't originally built right for debugging an operating system, right? They've been phenomenal working with us and just kind of really pushing the limits, but that it's a rising tide that's kind of lifted all boats because now there's a lot of different types of applications that you can debug with Chrome Dev Tools that are running a browser that runs more reliably because just the stress testing that, that we and, you know, games that are coming to the web are kind of pushing as well, but.Itamar [00:24:23]: That's awesome. About the testing, I think like most, let's say coding assistant from different kinds will need this loop of testing. And even I would add code review to some, to some extent that you mentioned. How is testing different from code review? Code review could be, for example, PR review, like a code review that is done at the point of when you want to merge branches. But I would say that code review, for example, checks best practices, maintainability, and so on. It's not just like CI, but more than CI. And testing is like a more like checking functionality, et cetera. So it's different. We call, by the way, all of these together code integrity, but that's a different story. Just to go back to the, to the testing and specifically. Yeah. It's, it's, it's since the first slide. Yeah. We're consistent. So if we go back to the testing, I think like, it's not surprising that for us testing is important and for Bolt it's testing important, but I want to shed some light on a different perspective of it. Like let's think about autonomous driving. Those startups that are doing autonomous driving for highway and autonomous driving for the city. And I think like we saw the autonomous of the highway much faster and reaching to a level, I don't know, four or so much faster than those in the city. Now, in both cases, you need testing and quote unquote testing, you know, verifying validation that you're doing the right thing on the road and you're reading and et cetera. But it's probably like so different in the city that it could be like actually different technology. And I claim that we're seeing something similar here. So when you're building the next Wix, and if I was them, I was like looking at you and being a bit scared. That's what you're disrupting, what you just said. Then basically, I would say that, for example, the UX UI is freaking important. And because you're you're more aiming for the end user. In this case, maybe it's an end user that doesn't know how to develop for developers. It's also important. But let alone those that do not know to develop, they need a slick UI UX. And I think like that's one reason, for example, I think Cursor have like really good technology. I don't know the underlying what's under the hood, but at least what they're saying. But I think also their UX UI is great. It's a lot because they did their own ID. While if you're aiming for the city AI, suddenly like there's a lot of testing and code review technology that it's not necessarily like that important. For example, let's talk about integration tests. Probably like a lot of what you're building involved at the moment is isolated applications. Maybe the vision or the end game is maybe like having one solution for everything. It could be that eventually the highway companies will go into the city and the other way around. But at the beginning, there is a difference. 
And integration tests are a good example. I guess they're a bit less important. And when you think about enterprise software, they're really important. So to recap, like I think like the idea of looping and verifying your test and verifying your code in different ways, testing or code review, et cetera, seems to be important in the highway AI and the city AI, but in different ways and different like critical for the city, even more and more variety. Actually, I was looking to ask you like what kind of loops you guys are doing. For example, when I'm using Bolt and I'm enjoying it a lot, then I do see like sometimes you're trying to catch the errors and fix them. And also, I noticed that you're breaking down tasks into smaller ones and then et cetera, which is already a common notion for a year ago. But it seems like you're doing it really well. So if you're willing to share anything about it.Eric [00:28:07]: Yeah, yeah. I realized I never actually hit the punchline of what I was saying before. I mentioned the point about us kind of writing an operating system from scratch because what ended up being important about that is that to your point, it's actually a very, like compared to like a, you know, if you're like running cursor on anyone's machine, you kind of don't know what you're dealing with, with the OS you're running on. There could be an error happens. It could be like a million different things, right? There could be some config. There could be, it could be God knows what, right? The thing with WebConnect is because we wrote the entire thing from scratch. It's actually a unified image basically. And we can instrument it at any level that we think is going to be useful, which is exactly what we did when we started building Bolt is we instrumented stuff at like the process level, at the runtime level, you know, et cetera, et cetera, et cetera. Stuff that would just be not impossible to do on local, but to do that in a way that works across any operating system, whatever is, I mean, would just be insanely, you know, insanely difficult to do right and reliably. And that's what you saw when you've used Bolt is that when an error actually will occur, whether it's in the build process or the actual web application itself is failing or anything kind of in between, you can actually capture those errors. And today it's a very primitive way of how we've implemented it largely because the product just didn't exist 90 days ago. So we're like, we got some work ahead of us and we got to hire some more a little bit, but basically we present and we say, Hey, this is, here's kind of the things that went wrong. There's a fix it button and then a ignore button, and then you can just hit fix it. And then we take all that telemetry through our agent, you run it through our agent and say, kind of, here's the state of the application. Here's kind of the errors that we got from Node.js or the browser or whatever, and like dah, dah, dah, dah. And it can take a crack at actually solving it. And it's actually pretty darn good at being able to do that. That's kind of been a, you know, closing the loop and having it be a reliable kind of base has seemed to be a pretty big upgrade over doing stuff locally, just because I think that's a pretty key ingredient of it. And yeah, I think breaking things down into smaller tasks, like that's, that's kind of a key part of our agent. I think like Claude did a really good job with artifacts. 
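The capture-and-repair loop Eric describes can be pictured roughly as follows. All names and data shapes here are invented for illustration; this is not Bolt's actual code, just the pattern of collecting runtime telemetry and handing it to the agent when the user clicks "Fix it".

```ts
// Sketch of a "fix it" loop: errors captured from the instrumented runtime
// are bundled with the project state, sent to the agent, and its proposed
// edits are applied and then re-checked. Names are hypothetical.

interface ErrorTelemetry {
  source: "build" | "runtime" | "browser"; // where the failure surfaced
  message: string;
  stack?: string;
}

interface ProjectSnapshot {
  files: Record<string, string>; // path -> contents
  errors: ErrorTelemetry[];
}

// Placeholder for the real model call: returns proposed file edits.
async function proposeFix(snapshot: ProjectSnapshot): Promise<Record<string, string>> {
  // e.g. prompt an LLM with snapshot.errors plus the relevant files
  return {};
}

async function onFixItClicked(
  snapshot: ProjectSnapshot,
  writeFile: (path: string, contents: string) => Promise<void>,
  rerun: () => Promise<ErrorTelemetry[]>
): Promise<void> {
  if (snapshot.errors.length === 0) return;
  const edits = await proposeFix(snapshot);
  for (const [path, contents] of Object.entries(edits)) {
    await writeFile(path, contents); // apply the agent's suggestion
  }
  const remaining = await rerun(); // the instrumented runtime reports back
  console.log(`errors after fix attempt: ${remaining.length}`);
}
```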
I think, you know, us and kind of everyone else has, has kind of taken their approach of like actually breaking out certain tasks in a certain order into, you know, kind of a concrete way. And, and so actually the core of Bolt, I know we actually made open source. So you can actually go and check out like the system prompts and et cetera, and you can run it locally and whatever have you. So anyone that's interested in this stuff, I'd highly recommend taking a look at. There's not a lot of like stuff that's like open source in this realm. It's, that was one of the fun things that we've we thought would be cool to do. And people, people seem to like it. I mean, there's a lot of forks and people adding different models and stuff. So it's been cool to see.Swyx [00:30:41]: Yeah. I'm happy to add, I added real-time voice for my opening day demo and it was really fun to hack with. So thank you for doing that. Yeah. Thank you. I'm going to steal your code.Eric [00:30:52]: Because I want that.Swyx [00:30:52]: It's funny because I built on top of the fork of Bolt.new that already has the multi LLM thing. And so you just told me you're going to merge that in. So then you're going to merge two layers of forks down into this thing. So it'll be fun.Eric [00:31:03]: Heck yeah.Alessio [00:31:04]: Just to touch on like the environment, Itamar, you maybe go into the most complicated environments that even the people that work there don't know how to run. How much of an impact does that have on your performance? Like, you know, it's most of the work you're doing actually figuring out environment and like the libraries, because I'm sure they're using outdated version of languages, they're using outdated libraries, they're using forks that have not been on the public internet before. How much of the work that you're doing is like there versus like at the LLM level?Itamar [00:31:32]: One of the reasons I was asking about, you know, what are the steps to break things down, because it really matters. Like, what's the tech stack? How complicated the software is? It's hard to figure it out when you're dealing with the real world, any environment of enterprise as a city, when I'm like, while maybe sometimes like, I think you do enable like in Bolt, like to install stuff, but it's quite a like controlled environment. And that's a good thing to do, because then you narrow down and it's easier to make things work. So definitely, there are two dimensions, I think, actually spaces. One is the fact just like installing our software without yet like doing anything, making it work, just installing it because we work with enterprise and Fortune 500, etc. Many of them want on prem solution.Swyx [00:32:22]: So you have how many deployment options?Itamar [00:32:24]: Basically, we had, we did a metric metrics, say 96 options, because, you know, they're different dimensions. Like, for example, one dimension, we connect to your code management system to your Git. So are you having like GitHub, GitLab? Subversion? Is it like on cloud or deployed on prem? Just an example. Which model agree to use its APIs or ours? Like we have our Is it TestGPT? Yeah, when we started with TestGPT, it was a huge mistake name. It was cool back then, but I don't think it's a good idea to name a model after someone else's model. Anyway, that's my opinion. 
So we gotSwyx [00:33:02]: I'm interested in these learnings, like things that you change your mind on.Itamar [00:33:06]: Eventually, when you're building a company, you're building a brand and you want to create your own brand. By the way, when I thought about Bolt.new, I also thought about if it's not a problem, because when I think about Bolt, I do think about like a couple of companies that are already called this way.Swyx [00:33:19]: Curse companies. You could call it Codium just to...Itamar [00:33:24]: Okay, thank you. Touche. Touche.Eric [00:33:27]: Yeah, you got to imagine the board meeting before we launched Bolt, one of our investors, you can imagine they're like, are you sure? Because from the investment side, it's kind of a famous, very notorious Bolt. And they're like, are you sure you want to go with that name? Oh, yeah. Yeah, absolutely.Itamar [00:33:43]: At this point, we have actually four models. There is a model for autocomplete. There's a model for the chat. There is a model dedicated for more for code review. And there is a model that is for code embedding. Actually, you might notice that there isn't a good code embedding model out there. Can you name one? Like dedicated for code?Swyx [00:34:04]: There's code indexing, and then you can do sort of like the hide for code. And then you can embed the descriptions of the code.Itamar [00:34:12]: Yeah, but you do see a lot of type of models that are dedicated for embedding and for different spaces, different fields, etc. And I'm not aware. And I know that if you go to the bedrock, try to find like there's a few code embedding models, but none of them are specialized for code.Swyx [00:34:31]: Is there a benchmark that you would tell us to pay attention to?Itamar [00:34:34]: Yeah, so it's coming. Wait for that. Anyway, we have our models. And just to go back to the 96 option of deployment. So I'm closing the brackets for us. So one is like dimensional, like what Git deployment you have, like what models do you agree to use? Dotter could be like if it's air-gapped completely, or you want VPC, and then you have Azure, GCP, and AWS, which is different. Do you use Kubernetes or do not? Because we want to exploit that. There are companies that do not do that, etc. I guess you know what I mean. So that's one thing. And considering that we are dealing with one of all four enterprises, we needed to deal with that. So you asked me about how complicated it is to solve that complex code. I said, it's just a deployment part. And then now to the software, we see a lot of different challenges. For example, some companies, they did actually a good job to build a lot of microservices. Let's not get to if it's good or not, but let's first assume that it is a good thing. A lot of microservices, each one of them has their own repo. And now you have tens of thousands of repos. And you as a developer want to develop something. And I remember me coming to a corporate for the first time. I don't know where to look at, like where to find things. So just doing a good indexing for that is like a challenge. And moreover, the regular indexing, the one that you can find, we wrote a few blogs on that. By the way, we also have some open source, different than yours, but actually three and growing. Then it doesn't work. You need to let the tech leads and the companies influence your indexing. For example, Mark with different repos with different colors. This is a high quality repo. This is a lower quality repo. This is a repo that we want to deprecate. 
This is a repo we want to grow, etc. And let that be part of your indexing. And only then things actually work for enterprise and they don't get to a fatigue of, oh, this is awesome. Oh, but now it's starting to annoy me. I think Copilot is an amazing tool, but I'm quoting others, meaning GitHub Copilot, that they see not so good retention of GitHub Copilot in enterprise. Ooh, spicy. Yeah. I saw snapshots of people and we have customers that are Copilot users as well. And also I saw research, some of it is public by the way, between 38 to 50% retention for users using Copilot in enterprise. So it's not so good. By the way, I don't think it's that bad, but it's not so good. So I think that's a reason because, yeah, it helps you auto-complete, and especially if you're working on your repo alone, but if it needs that context of remote repos in your codebase, that's hard. So to make things work, there's a lot of work on that, like giving the controllability for the tech leads, for the developer platform or developer experience department in the organization to influence how things are working. A short example, because if you have like really old legacy code, probably some of it is not so good anymore. If you just fine-tune on this code base, then there is a bias to repeat those mistakes or old practices, etc. So you need, for example, as I mentioned, to influence that. For example, in Coda, you can have a markdown of best practices by the tech leads and Coda will include that and relate to that and will not offer suggestions that are not according to the best practices, just as an example. So that's just a short list of things that you need to do in order to deal with, like you mentioned, the 100.1 to 100.2 version of software. I just want to say what you're doing is extremelyEric [00:38:32]: impressive because it's very difficult. I mean, the business of StackBlitz, kind of before Bolt came online, we sold a version of our IDE that went on-prem. So I understand what you're saying about the difficulty of getting stuff just working on-prem. Holy heck. I mean, that is extremely hard. I guess the question I have for you is, I mean, we were just doing that with kind of Kubernetes-based stuff, but the spread of Fortune 500 companies that you're working with, how are they doing the inference for this? Are you kind of plugging into Azure's OpenAI stuff and AWS's Bedrock, you know, Cloud stuff? Or are they just like running stuff on GPUs? Like, what is that? How are these folks approaching that? Because, man, what we saw on the enterprise side, I mean, I got to imagine that that's a huge challenge. Everything you said and more, like,Itamar [00:39:15]: for example, like someone could be, and I don't think any of these is bad. Like, they made their decision. Like, for example, some people, they're, I want only AWS and VPC on AWS, no matter what. And then they, some of them, like there is a subset, I will say, I'm willing to take models only from Bedrock and not ours. And we have a problem because there is no good code embedding model on Bedrock. And that's part of what we're doing now with AWS to solve that. We solve it in a different way. But if you are willing to run on AWS VPC, but run your models on GPUs or Inferentia, like the new version that's coming out, then our models can run on that. But everything you said is right. Like, we see like on-prem deployment where they have their own GPUs. We see Azure where you're using OpenAI Azure. 
We see cases where you're running on GCP and they want OpenAI. Like this cross-cloud kind of case, although there is Gemini, or even Sonnet I think is available on GCP, just an example. So all the options, that's part of the challenge. I admit that we thought about it, but it was even more complicated. And it took us a few months to actually, that matrix that I mentioned, to start clicking each one of the blocks there. A few months is impressive. I mean,Eric [00:40:35]: honestly, just that's okay. Every one of these enterprises is, their networking is different. Just everything's different. Every single one is different. I see you understand. Yeah. So that just cannot be understated. That it is, that's extremely impressive. Hats off.Itamar [00:40:50]: It could be, by the way, like, for example, oh, we're only AWS, but our GitHub Enterprise is on-prem. Oh, we forgot. So we need like a private link or whatever, like every time like that. It's not, and you do need to think about it if you want to work with an enterprise. And it's important. Like I understand like their, I respect their point of view.Swyx [00:41:10]: And this primarily impacts your architecture, your tech choices. Like you have to, you can't choose some vendors because...Itamar [00:41:15]: Yeah, definitely. To be frank, it makes it hard for a startup because it means that we want, we want everyone to enjoy all the variety of models. By the way, it was hard for us with our technology. I want to open a bracket, like a window. I guess you're familiar with our Alpha Codium, which is open source.Eric [00:41:33]: We got to go over that. Yeah. So I'll do that quickly.Itamar [00:41:36]: Yeah. A pin in that. Yeah. Actually, we didn't have it in the last episode. So, so, okay.Swyx [00:41:41]: Okay. We'll come back to that later, but let's talk about...Itamar [00:41:43]: Yeah. So, so just like shortly, and then we can double click on Alpha Codium. But Alpha Codium is an open source tool. You can go and try it, and it lets you compete on Codeforces, which is a website and a competition, and actually reach a master level, like 95%, with a click of a button. You don't need to do anything. And part of what we did there is taking a problem and breaking it to different, like smaller blocks. And then the models are doing a much better job. Like we all know it by now that taking small tasks and solving them, by the way, even O1, which is supposed to be able to do system two thinking like Greg from OpenAI like hinted, is doing better on these kinds of problems. But still, it's very useful to break it down for O1, despite O1 being able to think by itself. And that's what we presented like just a month ago. OpenAI released that now they are doing 93rd percentile with O1 on IOI, the International Olympiad in Informatics. Sorry, I forgot. Exactly. I told you I forgot. And we took their O1 preview with Alpha Codium and did better. Like it just shows like, and there is a big difference between the preview and the IOI. It shows like that these models are still not system two thinkers, and there is a big difference. So maybe they're not complete system two. Yeah, they need some guidance. I call them system 1.5. We can, we can have it. I thought about it. Like, you know, I care about this philosophy stuff. And I think like we didn't see it even close to a system two thinking. I can elaborate later. But closing the brackets, like we take Alpha Codium as our principle of thinking: we take tasks and break them down to smaller tasks. 
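One way to picture the decomposition Itamar is describing, loosely in the spirit of the AlphaCodium write-up; the stage names and prompts below are simplified illustrations, not the actual flow.

```ts
// Flow engineering sketch: instead of one giant prompt, the problem moves
// through small stages, each with a simple prompt, plus a bounded verify loop.

type LLM = (prompt: string) => Promise<string>;

async function solveWithFlow(problem: string, llm: LLM): Promise<string> {
  // 1. Reflect: restate goal and constraints in the model's own words.
  const reflection = await llm(`Summarize the goal and constraints:\n${problem}`);

  // 2. Plan: break the work into small, concrete steps.
  const plan = await llm(`List the smallest steps needed to solve:\n${reflection}`);

  // 3. Draft a solution that follows the plan.
  let solution = await llm(`Write code that follows this plan:\n${plan}`);

  // 4. Critique and repair; each turn is a small task, so the prompt for
  //    any single model stays simple.
  for (let i = 0; i < 3; i++) {
    const critique = await llm(`List concrete bugs in this code, or say NONE:\n${solution}`);
    if (critique.trim().toUpperCase().startsWith("NONE")) break;
    solution = await llm(`Fix these issues:\n${critique}\n\nCode:\n${solution}`);
  }
  return solution;
}
```

Because each call stays small, swapping the underlying model (O1, Sonnet, Gemini, or an on-prem one) changes the prompts less, which is the portability point he goes on to make.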
And then we want to exploit the best model to solve them. So I want to enable anyone to enjoy O1 and SONET and Gemini 1.5, etc. But at the same time, I need to develop my own models as well, because some of the Fortune 500 want to have all air gapped or whatever. So that's a challenge. Now you need to support so many models. And to some extent, I would say that the flow engineering, the breaking down to two different blocks is a necessity for us. Why? Because when you take a big block, a big problem, you need a very different prompt for each one of the models to actually work. But when you take a big problem and break it into small tasks, we can talk how we do that, then the prompt matters less. What I want to say, like all this, like as a startup trying to do different deployment, getting all the juice that you can get from models, etc. is a big problem. And one need to think about it. And one of our mitigation is that process of taking tasks and breaking them down. That's why I'm really interested to know how you guys are doing it. And part of what we do is also open source. So you can see.Swyx [00:44:39]: There's a lot in there. But yeah, flow over prompt. I do believe that that does make sense. I feel like there's a lot that both of you can sort of exchange notes on breaking down problems. And I just want you guys to just go for it. This is fun to watch.Eric [00:44:55]: Yeah. I mean, what's super interesting is the context you're working in is, because for us too with Bolt, we've started thinking because our kind of existing business line was going behind the firewall, right? We were like, how do we do this? Adding the inference aspect on, we're like, okay, how does... Because I mean, there's not a lot of prior art, right? I mean, this is all new. This is all new. So I definitely am going to have a lot of questions for you.Itamar [00:45:17]: I'm here. We're very open, by the way. We have a paper on a blog or like whatever.Swyx [00:45:22]: The Alphacodeum, GitHub, and we'll put all this in the show notes.Itamar [00:45:25]: Yeah. And even the new results of O1, we published it.Eric [00:45:29]: I love that. And I also just, I think spiritually, I like your approach of being transparent. Because I think there's a lot of hype-ium around AI stuff. And a lot of it is, it's just like, you have these companies that are just kind of keep their stuff closed source and then just max hype it, but then it's kind of nothing. And I think it kind of gives a bad rep to the incredible stuff that's actually happening here. And so I think it's stuff like what you're doing where, I mean, true merit and you're cracking open actual code for others to learn from and use. That strikes me as the right approach. And it's great to hear that you're making such incredible progress.Itamar [00:46:02]: I have something to share about the open source. Most of our tools are, we have an open source version and then a premium pro version. But it's not an easy decision to do that. I actually wanted to ask you about your strategy, but I think in your case, there is, in my opinion, relatively a good strategy where a lot of parts of open source, but then you have the deployment and the environment, which is not right if I get it correctly. And then there's a clear, almost hugging face model. Yeah, you can do that, but why should you try to deploy it yourself, deploy it with us? But in our case, and I'm not sure you're not going to hit also some competitors, and I guess you are. 
I wanted to ask you, for example, on some of them. In our case, one day we looked at one of our competitors that is doing code review. We're a platform. We have the code review, the testing, et cetera, spread over the IDE to Git. And in each agent, we have a few startups or big incumbents that are doing only that. So we noticed one of our competitors having not only a very similar UI to our open source, but actually even our typo. And you sit there and you're kind of like, yeah, we're not that good. We don't use enough Grammarly or whatever. And we had a couple of these and we saw it there. And then it's a challenge. And I want to ask you, Bolt is doing so well, and then you open source it. So I think I know what my answer was. I gave it before, but still interestingEric [00:47:29]: to hear what you think. GeoHot said back, I don't know what he was up to at this exact moment, but I think on Comma AI, all that stuff's open source. And someone had asked him, why is this open source? And he's like, if you're not actually confident that you can go and crush it and build the best thing, then yeah, you should probably keep your stuff closed source. He said something akin to that. I'm probably kind of butchering it, but I thought it was kind of a really good point. And that's not to say that you should just open source everything, because for obvious reasons, there's kind of strategic things you have to keep in mind. But I actually think a pretty liberal approach, as liberal as you kind of can be, it can really make a lot of sense. Because that is so validating that one of your competitors is taking your stuff and they're like, yeah, let's just kind of tweak the styles. I mean, clearly, right? I think it's kind of healthy because it keeps, I'm sure back at HQ that day when you saw that, you're like, oh, all right, well, we have to grind even harder to make sure we stay ahead. And so I think it's actually a very useful, motivating thing for the teams. Because you might feel this period of comfort. I think a lot of companies will have this period of comfort where they're not feeling the competition and one day they get disrupted. So kind of putting stuff out there and letting people push it forces you to face reality soon, right? And actually feel that incrementally so you can kind of adjust course. And that's for us, the open source version of Bolt has had a lot of features people have been begging us for, like persisting chat messages and checkpoints and stuff. Within the first week, that stuff was landed in the open source versions. And they're like, why can't you ship this? It's in the open, so people have forked it. And we're like, we're trying to keep our servers and GPUs online. But it's been great because the folks in the community did a great job, kept us on our toes. And we've got to know most of these folks too at this point that have been building these things. And so it actually was very instructive. Like, okay, well, if we're going to go kind of land this, there's some UX patterns we can kind of look at and the code is open source to this stuff. What's great about these, what's not. So anyways, net-net, I think it's awesome. I think from a competitive point of view for us, I think in particular, what's interesting is the core technology of WebContainer going. And I think that right now, there's really nothing that's kind of on par with that. And we also, we have a business of, because WebContainer runs in your browser, but to make it work, you have to install stuff from NPM. 
You have to make CORS bypass requests, like connecting to databases, which all require server-side proxying or acceleration. And so we actually sell WebContainer as a service. One of the core reasons we open-sourced kind of the core components of Bolt when we launched was that we think that there's going to be a lot more of these AI, in-your-browser AI codegen experiences, kind of like what Anthropic did with Artifacts and Claude. By the way, Artifacts uses WebContainers. Not yet. No, yeah. Should I strike that? I think that they've got their own thing at the moment, but there's been a lot of interest in WebContainers from folks doing things in that sort of realm and in the AI labs and startups and everything in between. So I think there'll be, I imagine, over the coming months, there'll be lots of things being announced with folks kind of adopting it. But yeah, I think effectively...Swyx [00:50:35]: Okay, I'll say this. If you're a large model lab and you want to build sandbox environments inside of your chat app, you should call Eric.Itamar [00:50:43]: But wait, wait, wait, wait, wait, wait. I have a question about that. I think OpenAI, they felt that people are not using their model as they would want to. So they built ChatGPT. But I would say that ChatGPT now defines OpenAI. I know they're doing a lot of business from their APIs, but still, is this how you think? Isn't Bolt.new your business now? Why don't you focus on that instead of the...Swyx [00:51:16]: What's your advice as a founder?Eric [00:51:18]: You're right. And so going into it, we, candidly, we were like, Bolt.new, this thing is super cool. We think people are stoked. We think people will be stoked. But we were like, maybe that's allowed. Best case scenario, after month one, we'd be mind blown if we added a couple hundred K of ARR or something. And we were like, but we think there's probably going to be an immediate huge business. Because there was some early pull from folks wanting to put WebContainer into their product offerings, kind of similar to what Bolt is doing or whatever. We were actually prepared for the inverse outcome here. But I mean, well, I guess we've seen pull on both. But I mean, what's happened with Bolt, and you're right, it's actually the same strategy as like OpenAI or Anthropic, where ChatGPT is to OpenAI's APIs as Bolt is to WebContainer. And so we've kind of taken that same approach. And we're seeing, I guess, some of the similar results, except right now, the revenue side is extremely lopsided to Bolt.
Most things, you have this kind of the TechCrunch launch of initiation and then the trough of sorrow. And if there's going to be a downtrend, it's just not coming yet. Now that we're kind of looking ahead, we're six weeks in. So now we're getting enough confidence in our convictions to go, okay, this se

Interviews: Tech and Business
What is DevSecOps? (explained by GitLab) | CXOTalk #861

Interviews: Tech and Business

Play Episode Listen Later Nov 26, 2024 24:46


AI is revolutionizing software development, but where do you start? Ashley Kramer, Chief Marketing & Strategy Officer and Interim Chief Revenue Officer at GitLab, joins CXOTalk to discuss how to integrate AI strategically into your DevSecOps process. Learn how to measure AI's ROI, increase developer productivity and happiness, and navigate the hype around AI-powered coding. Discover practical strategies for leveraging AI to build better software faster, improve security, and drive innovation.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Alessio will be at AWS re:Invent next week and hosting a casual coffee meetup on Wednesday, RSVP here! And subscribe to our calendar for our Singapore, NeurIPS, and all upcoming meetups!We are still taking questions for our next big recap episode! Submit questions and messages on Speakpipe here for a chance to appear on the show!If you've been following the AI agents space, you have heard of Lindy AI; while founder Flo Crivello is hesitant to call it "blowing up," when folks like Andrew Wilkinson start obsessing over your product, you're definitely onto something.In our latest episode, Flo walked us through Lindy's evolution from late 2022 to now, revealing some design choices about agent platform design that go against conventional wisdom in the space.The Great Reset: From Text Fields to RailsRemember late 2022? Everyone was "LLM-pilled," believing that if you just gave a language model enough context and tools, it could do anything. Lindy 1.0 followed this pattern:* Big prompt field ✅* Bunch of tools ✅* Prayer to the LLM gods ✅Fast forward to today, and Lindy 2.0 looks radically different. As Flo put it (~17:00 in the episode): "The more you can put your agent on rails, one, the more reliable it's going to be, obviously, but two, it's also going to be easier to use for the user."Instead of a giant, intimidating text field, users now build workflows visually:* Trigger (e.g., "Zendesk ticket received")* Required actions (e.g., "Check knowledge base")* Response generationThis isn't just a UI change - it's a fundamental rethinking of how to make AI agents reliable. As Swyx noted during our discussion: "Put Shoggoth in a box and make it a very small, minimal viable box. Everything else should be traditional if-this-then-that software."The Surprising Truth About Model LimitationsHere's something that might shock folks building in the space: with Claude 3.5 Sonnet, the model is no longer the bottleneck. Flo's exact words (~31:00): "It is actually shocking the extent to which the model is no longer the limit. It was the limit a year ago. It was too expensive. The context window was too small."Some context: Lindy started when context windows were 4K tokens. Today, their system prompt alone is larger than that. But what's really interesting is what this means for platform builders:* Raw capabilities aren't the constraint anymore* Integration quality matters more than model performance* User experience and workflow design are the new bottlenecksThe Search Engine Parallel: Why Horizontal Platforms Might WinOne of the spiciest takes from our conversation was Flo's thesis on horizontal vs. vertical agent platforms. He draws a fascinating parallel to search engines (~56:00):"I find it surprising the extent to which a horizontal search engine has won... You go through Google to search Reddit. You go through Google to search Wikipedia... search in each vertical has more in common with search than it does with each vertical."His argument: agent platforms might follow the same pattern because:* Agents across verticals share more commonalities than differences* There's value in having agents that can work together under one roof* The R&D cost of getting agents right is better amortized across use casesThis might explain why we're seeing early vertical AI companies starting to expand horizontally. 
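To make the "rails" idea concrete, here is one way such a constrained workflow could be written down. This is a sketch of the pattern described above (trigger, required step, model-generated response), not Lindy's actual configuration format; every name is illustrative.

```ts
// "Agent on rails" sketch: the trigger and the mandatory knowledge-base
// lookup are plain deterministic structure; only the final response step
// calls the model. Mirrors the Zendesk example above, with invented names.

interface SupportWorkflow {
  trigger: { source: "zendesk"; event: "ticket_received" };
  requiredSteps: ["search_knowledge_base"]; // always runs, never optional
  respond: (ticket: string, articles: string[]) => Promise<string>;
}

// Placeholder model call for illustration.
async function callModel(prompt: string): Promise<string> {
  return `Draft reply based on: ${prompt.slice(0, 80)}...`;
}

const supportAgent: SupportWorkflow = {
  trigger: { source: "zendesk", event: "ticket_received" },
  requiredSteps: ["search_knowledge_base"],
  respond: (ticket, articles) =>
    callModel(
      `Answer this ticket using only these articles:\n${articles.join("\n")}\n\nTicket:\n${ticket}`
    ),
};
```

Everything deterministic stays outside the model, which is the "Shoggoth in a box" point: the model is only trusted with the narrow step it is actually good at.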
The core agent capabilities - reliability, context management, tool integration - are universal needs.What This Means for BuildersIf you're building in the AI agents space, here are the key takeaways:* Constrain First: Rather than maximizing capabilities, focus on reliable execution within narrow bounds* Integration Quality Matters: With model capabilities plateauing, your competitive advantage lies in how well you integrate with existing tools* Memory Management is Key: Flo revealed they actively prune agent memories - even with larger context windows, not all memories are useful* Design for Discovery: Lindy's visual workflow builder shows how important interface design is for adoptionThe Meta LayerThere's a broader lesson here about AI product development. Just as Lindy evolved from "give the LLM everything" to "constrain intelligently," we might see similar evolution across the AI tooling space. The winners might not be those with the most powerful models, but those who best understand how to package AI capabilities in ways that solve real problems reliably.Full Video PodcastFlo's talk at AI Engineer SummitChapters* 00:00:00 Introductions * 00:04:05 AI engineering and deterministic software * 00:08:36 Lindys demo* 00:13:21 Memory management in AI agents * 00:18:48 Hierarchy and collaboration between Lindys * 00:21:19 Vertical vs. horizontal AI tools * 00:24:03 Community and user engagement strategies * 00:26:16 Rickrolling incident with Lindy * 00:28:12 Evals and quality control in AI systems * 00:31:52 Model capabilities and their impact on Lindy * 00:39:27 Competition and market positioning * 00:42:40 Relationship between Factorio and business strategy * 00:44:05 Remote work vs. in-person collaboration * 00:49:03 Europe vs US Tech* 00:58:59 Testing the Overton window and free speech * 01:04:20 Balancing AI safety concerns with business innovation Show Notes* Lindy.ai* Rick Rolling* Flo on X* TeamFlow* Andrew Wilkinson* Dust* Poolside.ai* SB1047* Gathertown* Sid Sijbrandij* Matt Mullenweg* Factorio* Seeing Like a StateTranscriptAlessio [00:00:00]: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.Swyx [00:00:12]: Hey, and today we're joined in the studio by Florent Crivello. Welcome.Flo [00:00:15]: Hey, yeah, thanks for having me.Swyx [00:00:17]: Also known as Altimore. I always wanted to ask, what is Altimore?Flo [00:00:21]: It was the name of my character when I was playing Dungeons & Dragons. Always. I was like 11 years old.Swyx [00:00:26]: What was your classes?Flo [00:00:27]: I was an elf. I was a magician elf.Swyx [00:00:30]: Well, you're still spinning magic. Right now, you're a solo founder and CEO of Lindy.ai. What is Lindy?Flo [00:00:36]: Yeah, we are a no-code platform letting you build your own AI agents easily. So you can think of we are to LangChain as Airtable is to MySQL. Like you can just pin up AI agents super easily by clicking around and no code required. You don't have to be an engineer and you can automate business workflows that you simply could not automate before in a few minutes.Swyx [00:00:55]: You've been in our orbit a few times. I think you spoke at our Latent Space anniversary. You spoke at my summit, the first summit, which was a really good keynote. And most recently, like we actually already scheduled this podcast before this happened. But Andrew Wilkinson was like, I'm obsessed by Lindy. He's just created a whole bunch of agents. 
So basically, why are you blowing up?Flo [00:01:16]: Well, thank you. I think we are having a little bit of a moment. I think it's a bit premature to say we're blowing up. But why are things going well? We revamped the product majorly. We called it Lindy 2.0. I would say we started working on that six months ago. We've actually not really announced it yet. It's just, I guess, I guess that's what we're doing now. And so we've basically been cooking for the last six months, like really rebuilding the product from scratch. I think I'll list you, actually, the last time you tried the product, it was still Lindy 1.0. Oh, yeah. If you log in now, the platform looks very different. There's like a ton more features. And I think one realization that we made, and I think a lot of folks in the agent space made the same realization, is that there is such a thing as too much of a good thing. I think many people, when they started working on agents, they were very LLM peeled and chat GPT peeled, right? They got ahead of themselves in a way, and us included, and they thought that agents were actually, and LLMs were actually more advanced than they actually were. And so the first version of Lindy was like just a giant prompt and a bunch of tools. And then the realization we had was like, hey, actually, the more you can put your agent on Rails, one, the more reliable it's going to be, obviously, but two, it's also going to be easier to use for the user, because you can really, as a user, you get, instead of just getting this big, giant, intimidating text field, and you type words in there, and you have no idea if you're typing the right word or not, here you can really click and select step by step, and tell your agent what to do, and really give as narrow or as wide a guardrail as you want for your agent. We started working on that. We called it Lindy on Rails about six months ago, and we started putting it into the hands of users over the last, I would say, two months or so, and I think things really started going pretty well at that point. The agent is way more reliable, way easier to set up, and we're already seeing a ton of new use cases pop up.Swyx [00:03:00]: Yeah, just a quick follow-up on that. You launched the first Lindy in November last year, and you were already talking about having a DSL, right? I remember having this discussion with you, and you were like, it's just much more reliable. Is this still the DSL under the hood? Is this a UI-level change, or is it a bigger rewrite?Flo [00:03:17]: No, it is a much bigger rewrite. I'll give you a concrete example. Suppose you want to have an agent that observes your Zendesk tickets, and it's like, hey, every time you receive a Zendesk ticket, I want you to check my knowledge base, so it's like a RAG module and whatnot, and then answer the ticket. The way it used to work with Lindy before was, you would type the prompt asking it to do that. You check my knowledge base, and so on and so forth. The problem with doing that is that it can always go wrong. You're praying the LLM gods that they will actually invoke your knowledge base, but I don't want to ask it. I want it to always, 100% of the time, consult the knowledge base after it receives a Zendesk ticket. And so with Lindy, you can actually have the trigger, which is Zendesk ticket received, have the knowledge base consult, which is always there, and then have the agent. 
So you can really set up your agent any way you want like that.Swyx [00:04:05]: This is something I think about for AI engineering as well, which is the big labs want you to hand over everything in the prompts, and only code of English, and then the smaller brains, the GPU pours, always want to write more code to make things more deterministic and reliable and controllable. One way I put it is put Shoggoth in a box and make it a very small, the minimal viable box. Everything else should be traditional, if this, then that software.Flo [00:04:29]: I love that characterization, put the Shoggoth in the box. Yeah, we talk about using as much AI as necessary and as little as possible.Alessio [00:04:37]: And what was the choosing between kind of like this drag and drop, low code, whatever, super code-driven, maybe like the Lang chains, auto-GPT of the world, and maybe the flip side of it, which you don't really do, it's like just text to agent, it's like build the workflow for me. Like what have you learned actually putting this in front of users and figuring out how much do they actually want to add it versus like how much, you know, kind of like Ruby on Rails instead of Lindy on Rails, it's kind of like, you know, defaults over configuration.Flo [00:05:06]: I actually used to dislike when people said, oh, text is not a great interface. I was like, ah, this is such a mid-take, I think text is awesome. And I've actually come around, I actually sort of agree now that text is really not great. I think for people like you and me, because we sort of have a mental model, okay, when I type a prompt into this text box, this is what it's going to do, it's going to map it to this kind of data structure under the hood and so forth. I guess it's a little bit blackmailing towards humans. You jump on these calls with humans and you're like, here's a text box, this is going to set up an agent for you, do it. And then they type words like, I want you to help me put order in my inbox. Oh, actually, this is a good one. This is actually a good one. What's a bad one? I would say 60 or 70% of the prompts that people type don't mean anything. Me as a human, as AGI, I don't understand what they mean. I don't know what they mean. It is actually, I think whenever you can have a GUI, it is better than to have just a pure text interface.Alessio [00:05:58]: And then how do you decide how much to expose? So even with the tools, you have Slack, you have Google Calendar, you have Gmail. Should people by default just turn over access to everything and then you help them figure out what to use? I think that's the question. When I tried to set up Slack, it was like, hey, give me access to all channels and everything, which for the average person probably makes sense because you don't want to re-prompt them every time you add new channels. But at the same time, for maybe the more sophisticated enterprise use cases, people are like, hey, I want to really limit what you have access to. How do you kind of thread that balance?Flo [00:06:35]: The general philosophy is we ask for the least amount of permissions needed at any given moment. I don't think Slack, I could be mistaken, but I don't think Slack lets you request permissions for just one channel. But for example, for Google, obviously there are hundreds of scopes that you could require for Google. There's a lot of scopes. And sometimes it's actually painful to set up your Lindy because you're going to have to ask Google and add scopes five or six times. 
We've had sessions like this. But that's what we do because, for example, the Lindy email drafter, she's going to ask you for your authorization once for, I need to be able to read your email so I can draft a reply, and then another time for I need to be able to write a draft for them. We just try to do it very incrementally like that.Alessio [00:07:15]: Do you think OAuth is just overall going to change? I think maybe before it was like, hey, we need to set up OAuth that humans only want to kind of do once. So we try to jam-pack things all at once versus what if you could on-demand get different permissions every time from different parts? Do you ever think about designing things knowing that maybe AI will use it instead of humans will use it? Yeah, for sure.Flo [00:07:37]: One pattern we've started to see is people provisioning accounts for their AI agents. And so, in particular, Google Workspace accounts. So, for example, Lindy can be used as a scheduling assistant. So you can just CC her on your emails when you're trying to find time with someone. And just like a human assistant, she's going to go back and forth and offer availabilities and so forth. Very often, people don't want the other party to know that it's an AI. So it's actually funny. They introduce delays. They ask the agent to wait before replying, so it's not too obvious that it's an AI. And they provision an account on Google Suite, which costs them like $10 a month or something like that. So we're seeing that pattern more and more. I think that does the job for now. I'm not optimistic on us actually patching OAuth. Because I agree with you, ultimately, we would want to patch OAuth because the new account thing is kind of a crutch. It's really a hack. You would want to patch OAuth to have more granular access control and really be able to put your Shoggoth in the box. I'm not optimistic on us doing that before AGI, I think. That's a very close timeline.Swyx [00:08:36]: I'm mindful of talking about a thing without showing it. And we already have the setup to show it. Why don't we jump into a screen share? For listeners, you can jump on the YouTube and like and subscribe. But also, let's have a look at how you show off Lindy. Yeah, absolutely.Flo [00:08:51]: I'll give an example of a very simple Lindy and then I'll graduate to a much more complicated one. A super simple Lindy that I have is, I unfortunately bought some investment properties in the south of France. It was a really, really bad idea. And I put them on a Holydew, which is like the French Airbnb, if you will. And so I received these emails from time to time telling me like, oh, hey, you made 200 bucks. Someone booked your place. When I receive these emails, I want to log this reservation in a spreadsheet. Doing this without an AI agent or without AI in general is a pain in the butt because you must write an HTML parser for this email. And so it's just hard. You may not be able to do it and it's going to break the moment the email changes. By contrast, the way it works with Lindy, it's really simple. It's two steps. It's like, okay, I receive an email. If it is a reservation confirmation, I have this filter here. Then I append a row to this spreadsheet. And so this is where you can see the AI part where the way this action is configured here, you see these purple fields on the right. Each of these fields is a prompt. And so I can say, okay, you extract from the email the day the reservation begins on. You extract the amount of the reservation.
You extract the number of travelers of the reservation. And now you can see when I look at the task history of this Lindy, it's really simple. It's like, okay, you do this and boom, appending this row to this spreadsheet. And this is the information extracted. So effectively, this node here, this append row node is a mini agent. It can see everything that just happened. It has context over the task and it's appending the row. And then it's going to send a reply to the thread. That's a very simple example of an agent.Swyx [00:10:34]: A quick follow-up question on this one while we're still on this page. Is that one call? Is that a structured output call? Yeah. Okay, nice. Yeah.Flo [00:10:41]: And you can see here for every node, you can configure which model you want to power the node. Here I use cloud. For this, I use GPT-4 Turbo. Much more complex example, my meeting recorder. It looks very complex because I've added to it over time, but at a high level, it's really simple. It's like when a meeting begins, you record the meeting. And after the meeting, you send me a summary and you send me coaching notes. So I receive, like my Lindy is constantly coaching me. And so you can see here in the prompt of the coaching notes, I've told it, hey, you know, was I unnecessarily confrontational at any point? I'm French, so I have to watch out for that. Or not confrontational enough. Should I have double-clicked on any issue, right? So I can really give it exactly the kind of coaching that I'm expecting. And then the interesting thing here is, like, you can see the agent here, after it sent me these coaching notes, moves on. And it does a bunch of other stuff. So it goes on Slack. It disseminates the notes on Slack. It does a bunch of other stuff. But it's actually able to backtrack and resume the automation at the coaching notes email if I responded to that email. So I'll give a super concrete example. This is an actual coaching feedback that I received from Lindy. She was like, hey, this was a sales call I had with a customer. And she was like, I found your explanation of Lindy too technical. And I was able to follow up and just ask a follow-up question in the thread here. And I was like, why did you find too technical about my explanation? And Lindy restored the context. And so she basically picked up the automation back up here in the tree. And she has all of the context of everything that happened, including the meeting in which I was. So she was like, oh, you used the words deterministic and context window and agent state. And that concept exists at every level for every channel and every action that Lindy takes. So another example here is, I mentioned she also disseminates the notes on Slack. So this was a meeting where I was not, right? So this was a teammate. He's an indie meeting recorder, posts the meeting notes in this customer discovery channel on Slack. So you can see, okay, this is the onboarding call we had. This was the use case. Look at the questions. How do I make Lindy slower? How do I add delays to make Lindy slower? And I was able, in the Slack thread, to ask follow-up questions like, oh, what did we answer to these questions? And it's really handy because I know I can have this sort of interactive Q&A with these meetings. It means that very often now, I don't go to meetings anymore. I just send my Lindy. And instead of going to like a 60-minute meeting, I have like a five-minute chat with my Lindy afterwards. And she just replied. 
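Circling back to the reservation-logging demo described above, the "each purple field is a prompt" pattern plus a structured-output call might look roughly like the sketch below. The `llm_extract_json` function is a placeholder (the real node presumably passes a JSON schema to the model and relies on constrained decoding to guarantee the shape of the result); everything around it is plain deterministic code, down to the spreadsheet append, which here is just a CSV write.

```python
# Sketch of per-field extraction prompts feeding one structured-output call,
# followed by a deterministic "append row" step. Field names and values are illustrative.
import csv
import json
from datetime import date

FIELD_PROMPTS = {
    "check_in":  "Extract the day the reservation begins on (ISO date).",
    "amount":    "Extract the amount of the reservation (number only).",
    "travelers": "Extract the number of travelers (integer).",
}

def llm_extract_json(email_text: str, field_prompts: dict) -> dict:
    # Placeholder for a real structured-output API call; returns canned values so the demo runs.
    return {"check_in": str(date.today()), "amount": 200.0, "travelers": 2}

def log_reservation(email_text: str, csv_path: str = "reservations.csv") -> dict:
    row = llm_extract_json(email_text, FIELD_PROMPTS)
    with open(csv_path, "a", newline="") as f:
        csv.DictWriter(f, fieldnames=row.keys()).writerow(row)  # deterministic append
    return row

if __name__ == "__main__":
    print(json.dumps(log_reservation("You made 200 bucks! Booking confirmed for 2 guests."), indent=2))
```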
She was like, well, this is what we replied to this customer. And I can just be like, okay, good job, Jack. Like, no notes about your answers. So that's the kind of use cases people have with Lindy. It's a lot of like, there's a lot of sales automations, customer support automations, and a lot of this, which is basically personal assistance automations, like meeting scheduling and so forth.Alessio [00:13:21]: Yeah, and I think the question that people might have is memory. So as you get coaching, how does it track whether or not you're improving? You know, if these are like mistakes you made in the past, like, how do you think about that?Flo [00:13:31]: Yeah, we have a memory module. So I'll show you my meeting scheduler, Lindy, which has a lot of memories because by now I've used her for so long. And so every time I talk to her, she saves a memory. If I tell her, you screwed up, please don't do this. So you can see here, oh, it's got a double memory here. This is the meeting link I have, or this is the address of the office. If I tell someone to meet me at home, this is the address of my place. This is the code. I guess we'll have to edit that out. This is not the code of my place. No dogs. Yeah, so Lindy can just manage her own memory and decide when she's remembering things between executions. Okay.Swyx [00:14:11]: I mean, I'm just going to take the opportunity to ask you, since you are the creator of this thing, how come there's so few memories, right? Like, if you've been using this for two years, there should be thousands and thousands of things. That is a good question.Flo [00:14:22]: Agents still get confused if they have too many memories, to my point earlier about that. So I just am out of a call with a member of the Llama team at Meta, and we were chatting about Lindy, and we were going into the system prompt that we sent to Lindy, and all of that stuff. And he was amazed, and he was like, it's a miracle that it's working, guys. He was like, this kind of system prompt, this does not exist, either pre-training or post-training. These models were never trained to do this kind of stuff. It's a miracle that they can be agents at all. And so what I do, I actually prune the memories. You know, it's actually something I've gotten into the habit of doing from back when we had GPT 3.5, being Lindy agents. I suspect it's probably not as necessary in the Claude 3.5 Sonnet days, but I prune the memories. Yeah, okay.Swyx [00:15:05]: The reason is because I have another assistant that also is recording and trying to come up with facts about me. It comes up with a lot of trivial, useless facts that I... So I spend most of my time pruning. Actually, it's not super useful. I'd much rather have high-quality facts that it accepts. Or maybe I was even thinking, were you ever tempted to add a wake word to only memorize this when I say memorize this? And otherwise, don't even bother.Flo [00:15:30]: I have a Lindy that does this. So this is my inbox processor, Lindy. It's kind of beefy because there's a lot of different emails. But somewhere in here,Swyx [00:15:38]: there is a rule where I'm like,Flo [00:15:39]: aha, I can email my inbox processor, Lindy. It's really handy. So she has her own email address. And so when I process my email inbox, I sometimes forward an email to her. And it's a newsletter, or it's like a cold outreach from a recruiter that I don't care about, or anything like that. And I can give her a rule. And I can be like, hey, this email I want you to archive, moving forward.
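Flo's habit of pruning memories by hand, and these "rules saved as memories," suggest what an automated version could look like: cap the number of memories and evict the stale, rarely used ones first. The toy store below is a guess at the general shape of such a module, not Lindy's actual memory system; the stored strings are made-up examples.

```python
# Toy memory store: hard cap on memories, drop the stalest, least-used ones first.
import time
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    created: float = field(default_factory=time.time)
    uses: int = 0

class MemoryStore:
    def __init__(self, cap: int = 50):
        self.cap = cap
        self.memories: list[Memory] = []

    def remember(self, text: str) -> None:
        self.memories.append(Memory(text))
        self._prune()

    def recall(self, keyword: str) -> list[str]:
        hits = [m for m in self.memories if keyword.lower() in m.text.lower()]
        for m in hits:
            m.uses += 1  # reward memories that keep proving useful
        return [m.text for m in hits]

    def _prune(self) -> None:
        if len(self.memories) <= self.cap:
            return
        # Score = usefulness minus age in days; keep only the top `cap` scorers.
        now = time.time()
        self.memories.sort(key=lambda m: m.uses - (now - m.created) / 86400, reverse=True)
        del self.memories[self.cap:]

store = MemoryStore(cap=2)
store.remember("Meeting link: https://example.com/room")          # hypothetical value
store.remember("If I say 'at home', use my home address")
store.remember("Archive newsletters from example.org, moving forward")
print(store.recall("home"))
```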
Or I want you to alert me on Slack when I have this kind of email. It's really important. And so you can see here, the prompt is, if I give you a rule about a kind of email, like archive emails from X, save it as a new memory. And I give it to the memory saving skill. And yeah.Swyx [00:16:13]: One thing that just occurred to me, so I'm a big fan of virtual mailboxes. I recommend that everybody have a virtual mailbox. You could set up a physical mail receive thing for Lindy. And so then Lindy can process your physical mail.Flo [00:16:26]: That's actually a good idea. I actually already have something like that. I use like health class mail. Yeah. So yeah, most likely, I can process my physical mail. Yeah.Swyx [00:16:35]: And then the other product's idea I have, looking at this thing, is people want to brag about the complexity of their Lindys. So this would be like a 65 point Lindy, right?Flo [00:16:43]: What's a 65 point?Swyx [00:16:44]: Complexity counting. Like how many nodes, how many things, how many conditions, right? Yeah.Flo [00:16:49]: This is not the most complex one. I have another one. This designer recruiter here is kind of beefy as well. Right, right, right. So I'm just saying,Swyx [00:16:56]: let people brag. Let people be super users. Oh, right.Flo [00:16:59]: Give them a score. Give them a score.Swyx [00:17:01]: Then they'll just be like, okay, how high can you make this score?Flo [00:17:04]: Yeah, that's a good point. And I think that's, again, the beauty of this on-rails phenomenon. It's like, think of the equivalent, the prompt equivalent of this Lindy here, for example, that we're looking at. It'd be monstrous. And the odds that it gets it right are so low. But here, because we're really holding the agent's hand step by step by step, it's actually super reliable. Yeah.Swyx [00:17:22]: And is it all structured output-based? Yeah. As far as possible? Basically. Like, there's no non-structured output?Flo [00:17:27]: There is. So, for example, here, this AI agent step, right, or this send message step, sometimes it gets to... That's just plain text.Swyx [00:17:35]: That's right.Flo [00:17:36]: Yeah. So I'll give you an example. Maybe it's TMI. I'm having blood pressure issues these days. And so this Lindy here, I give it my blood pressure readings, and it updates a log that I have of my blood pressure that it sends to my doctor.Swyx [00:17:49]: Oh, so every Lindy comes with a to-do list?Flo [00:17:52]: Yeah. Every Lindy has its own task history. Huh. Yeah. And so you can see here, this is my main Lindy, my personal assistant, and I've told it, where is this? There is a point where I'm like, if I am giving you a health-related fact, right here, I'm giving you health information, so then you update this log that I have in this Google Doc, and then you send me a message. And you can see, I've actually not configured this send message node. I haven't told it what to send me a message for. Right? And you can see, it's actually lecturing me. It's like, I'm giving it my blood pressure ratings. It's like, hey, it's a bit high. Here are some lifestyle changes you may want to consider.Alessio [00:18:27]: I think maybe this is the most confusing or new thing for people. So even I use Lindy and I didn't even know you could have multiple workflows in one Lindy. I think the mental model is kind of like the Zapier workflows. It starts and it ends. It doesn't choose between. How do you think about what's a Lindy versus what's a sub-function of a Lindy? 
Like, what's the hierarchy?Flo [00:18:48]: Yeah. Frankly, I think the line is a little arbitrary. It's kind of like when you code, like when do you start to create a new class versus when do you overload your current class. I think of it in terms of like jobs to be done and I think of it in terms of who is the Lindy serving. This Lindy is serving me personally. It's really my day-to-day Lindy. I give it a bunch of stuff, like very easy tasks. And so this is just the Lindy I go to. Sometimes when a task is really more specialized, so for example, I have this like summarizer Lindy or this designer recruiter Lindy. These tasks are really beefy. I wouldn't want to add this to my main Lindy, so I just created a separate Lindy for it. Or when it's a Lindy that serves another constituency, like our customer support Lindy, I don't want to add that to my personal assistant Lindy. These are two very different Lindys.Alessio [00:19:31]: And you can call a Lindy from within another Lindy. That's right. You can kind of chain them together.Flo [00:19:36]: Lindys can work together, absolutely.Swyx [00:19:38]: A couple more things for the video portion. I noticed you have a podcast follower. We have to ask about that. What is that?Flo [00:19:46]: So this one wakes me up every... So wakes herself up every week. And she sends me... So she woke up yesterday, actually. And she searches for Lenny's podcast. And she looks for like the latest episode on YouTube. And once she finds it, she transcribes the video and then she sends me the summary by email. I don't listen to podcasts as much anymore. I just like read these summaries. Yeah.Alessio [00:20:09]: We should make a Latent Space Lindy. Marketplace.Swyx [00:20:12]: Yeah. And then you have a whole bunch of connectors. I saw the list briefly. Any interesting one? Complicated one that you're proud of? Anything that you want to just share? Connector stories.Flo [00:20:23]: So many of our workflows are about meeting scheduling. So we had to build some very opinionated tools around meeting scheduling. So for example, one that is surprisingly hard is this find available times action. You would not believe... This is like a thousand lines of code or something. It's just a very beefy action. And you can pass it a bunch of parameters about how long is the meeting? When does it start? When does it end? What are the meetings? The weekdays in which I meet? How many time slots do you return? What's the buffer between my meetings? It's just a very, very, very complex action. I really like our GitHub action. So we have a Lindy PR reviewer. And it's really handy because anytime any bug happens... So the Lindy reads our guidelines on Google Docs. By now, the guidelines are like 40 pages long or something. And so every time any new kind of bug happens, we just go to the guideline and we add the lines. Like, hey, this has happened before. Please watch out for this category of bugs. And it's saving us so much time every day.Alessio [00:21:19]: There's companies doing PR reviews. Where does a Lindy start? When does a company start? Or maybe how do you think about the complexity of these tasks when it's going to be worth having kind of like a vertical standalone company versus just like, hey, a Lindy is going to do a good job 99% of the time?Flo [00:21:34]: That's a good question. We think about this one all the time. I can't say that we've really come up with a very crisp articulation of when do you want to use a vertical tool versus when do you want to use a horizontal tool.
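The "find available times" action Flo mentions above is a good example of why these connectors get beefy. A toy version of just the interval math, using a few of the parameters he lists (duration, buffer, working window) and ignoring time zones, recurring events, and multiple calendars (which is where the real thousand lines go), might look like this:

```python
# Toy "find available times": given busy intervals, return open slots of a given
# duration, separated from meetings by a buffer, within a working-hours window.
from datetime import datetime, timedelta

def find_available_times(busy, day_start, day_end, duration_min=30, buffer_min=15, max_slots=5):
    duration = timedelta(minutes=duration_min)
    buffer = timedelta(minutes=buffer_min)
    slots, cursor = [], day_start
    for start, end in sorted(busy):
        # Fill slots up to the next busy block, leaving the buffer before it.
        while cursor + duration <= start - buffer and len(slots) < max_slots:
            slots.append((cursor, cursor + duration))
            cursor += duration
        # Jump past the busy block, leaving the buffer after it.
        cursor = max(cursor, end + buffer)
    while cursor + duration <= day_end and len(slots) < max_slots:
        slots.append((cursor, cursor + duration))
        cursor += duration
    return slots

day = datetime(2024, 10, 24)
busy = [(day.replace(hour=10), day.replace(hour=11)),
        (day.replace(hour=13), day.replace(hour=14, minute=30))]
for s, e in find_available_times(busy, day.replace(hour=9), day.replace(hour=17)):
    print(s.strftime("%H:%M"), "-", e.strftime("%H:%M"))
```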
I think of it as very similar to the internet. I find it surprising the extent to which a horizontal search engine has won. But I think that Google, right? But I think the even more surprising fact is that the horizontal search engine has won in almost every vertical, right? You go through Google to search Reddit. You go through Google to search Wikipedia. I think maybe the biggest exception is e-commerce. Like you go to Amazon to search e-commerce, but otherwise you go through Google. And I think that the reason for that is because search in each vertical has more in common with search than it does with each vertical. And search is so expensive to get right. Like Google is a big company that it makes a lot of sense to aggregate all of these different use cases and to spread your R&D budget across all of these different use cases. I have a thesis, which is, it's a really cool thesis for Lindy, is that the same thing is true for agents. I think that by and large, in a lot of verticals, agents in each vertical have more in common with agents than they do with each vertical. I also think there are benefits in having a single agent platform because that way your agents can work together. They're all like under one roof. That way you only learn one platform and so you can create agents for everything that you want. And you don't have to like pay for like a bunch of different platforms and so forth. So I think ultimately, it is actually going to shake out in a way that is similar to search in that search is everywhere on the internet. Every website has a search box, right? So there's going to be a lot of vertical agents for everything. I think AI is going to completely penetrate every category of software. But then I also think there are going to be a few very, very, very big horizontal agents that serve a lot of functions for people.Swyx [00:23:14]: That is actually one of the questions that we had about the agent stuff. So I guess we can transition away from the screen and I'll just ask the follow-up, which is, that is a hot topic. You're basically saying that the current VC obsession of the day, which is vertical AI enabled SaaS, is mostly not going to work out. And then there are going to be some super giant horizontal SaaS.Flo [00:23:34]: Oh, no, I'm not saying it's either or. Like SaaS today, vertical SaaS is huge and there's also a lot of horizontal platforms. If you look at like Airtable or Notion, basically the entire no-code space is very horizontal. I mean, Loom and Zoom and Slack, there's a lot of very horizontal tools out there. Okay.Swyx [00:23:49]: I was just trying to get a reaction out of you for hot takes. Trying to get a hot take.Flo [00:23:54]: No, I also think it is natural for the vertical solutions to emerge first because it's just easier to build. It's just much, much, much harder to build something horizontal. Cool.Swyx [00:24:03]: Some more Lindy-specific questions. So we covered most of the top use cases and you have an academy. That was nice to see. I also see some other people doing it for you for free. So like Ben Spites is doing it and then there's some other guy who's also doing like lessons. Yeah. Which is kind of nice, right? Yeah, absolutely. You don't have to do any of that.Flo [00:24:20]: Oh, we've been seeing it more and more on like LinkedIn and Twitter, like people posting their Lindys and so forth.Swyx [00:24:24]: I think that's the flywheel that you built the platform where creators see value in allying themselves to you. 
And so then, you know, your incentive is to make them successful so that they can make other people successful and then it just drives more and more engagement. Like it's earned media. Like you don't have to do anything.Flo [00:24:39]: Yeah, yeah. I mean, community is everything.Swyx [00:24:41]: Are you doing anything special there? Any big wins?Flo [00:24:44]: We have a Slack community that's pretty active. I can't say we've invested much more than that so far.Swyx [00:24:49]: I would say from having, so I have some involvement in the no-code community. I would say that Webflow going very hard after no-code as a category got them a lot more allies than just the people using Webflow. So it helps you to grow the community beyond just Lindy. And I don't know what this is called. Maybe it's just no-code again. Maybe you want to call it something different. But there's definitely an appetite for this and you are one of a broad category, right? Like just before you, we had Dust and, you know, they're also kind of going after a similar market. Zapier obviously is not going to try to also compete with you. Yeah. There's no question there. It's just like a reaction about community. Like I think a lot about community. Latent Space is growing the community of AI engineers. And I think you have a slightly different audience of, I don't know what.Flo [00:25:33]: Yeah. I think the no-code tinkerers is the community. Yeah. It is going to be the same sort of community as what Webflow, Zapier, Airtable, Notion to some extent.Swyx [00:25:43]: Yeah. The framing can be different if you were, so I think tinkerers has this connotation of not serious or like small. And if you framed it to like no-code EA, we're exclusively only for CEOs with a certain budget, then you just have, you tap into a different budget.Flo [00:25:58]: That's true. The problem with EA is like, the CEO has no willingness to actually tinker and play with the platform.Swyx [00:26:05]: Maybe Andrew's doing that. Like a lot of your biggest advocates are CEOs, right?Flo [00:26:09]: A solopreneur, you know, small business owners, I think Andrew is an exception. Yeah. Yeah, yeah, he is.Swyx [00:26:14]: He's an exception in many ways. Yep.Alessio [00:26:16]: Just before we wrap on the use cases, is Rick rolling your customers? Like an officially supported use case or maybe tell that story?Flo [00:26:24]: It's one of the main jobs to be done, really. Yeah, we woke up recently, so we have a Lindy obviously doing our customer support and we do check after the Lindy. And so we caught this email exchange where someone was asking Lindy for video tutorials. And at the time, actually, we did not have video tutorials. We do now on the Lindy Academy. And Lindy responded to the email. It's like, oh, absolutely, here's a link. And we were like, what? Like, what kind of link did you send? And so we clicked on the link and it was a Rickroll. We actually reacted fast enough that the customer had not yet opened the email. And so we reacted immediately. Like, oh, hey, actually, sorry, this is the right link. And so the customer never reacted to the first link. And so, yeah, I tweeted about that. It went surprisingly viral. And I checked afterwards in the logs. We did like a database query and we found, I think, like three or four other instances of it having happened before.Swyx [00:27:12]: That's surprisingly low.Flo [00:27:13]: It is low.
And we fixed it across the board by just adding a line to the system prompt that's like, hey, don't Rickroll people, please don't Rickroll.Swyx [00:27:21]: Yeah, yeah, yeah. I mean, so, you know, you can explain it retroactively, right? Like, that YouTube slug has been pasted in so many different corpuses that obviously it learned to hallucinate that.Alessio [00:27:31]: And it pretended to be so many things. That's the thing.Swyx [00:27:34]: I wouldn't be surprised if that takes one token. Like, there's this one slug in the tokenizer and it's just one token.Flo [00:27:41]: That's the ID of a YouTube video.Swyx [00:27:43]: Because it's used so much, right? And you have to basically get it exactly correct. It's probably not. That's a long speech.Flo [00:27:52]: It would have been so good.Alessio [00:27:55]: So this is just a jump maybe into evals from here. How could you possibly come up with an eval that says, make sure my AI does not Rickroll my customer? I feel like when people are writing evals, that's not something that they come up with. So how do you think about evals when it's such like an open-ended problem space?Flo [00:28:12]: Yeah, it is tough. We built quite a bit of infrastructure for us to create evals in one click from any conversation history. So we can point to a conversation and we can be like, in one click we can turn it into effectively a unit test. It's like, this is a good conversation. This is how you're supposed to handle things like this. Or if it's a negative example, then we modify a little bit the conversation after generating the eval. So it's very easy for us to spin up this kind of eval.Alessio [00:28:36]: Do you use an off-the-shelf tool which is like Braintrust on the podcast? Or did you just build your own?Flo [00:28:41]: We unfortunately built our own. We're most likely going to switch to Braintrust. Well, when we built it, there was nothing. Like there was no eval tool, frankly. I mean, we started this project at the end of 2022. It was like, it was very, very, very early. I wouldn't recommend it to build your own eval tool. There's better solutions out there and our eval tool breaks all the time and it's a nightmare to maintain. And that's not something we want to be spending our time on.Swyx [00:29:04]: I was going to ask that basically because I think my first conversations with you about Lindy was that you had a strong opinion that everyone should build their own tools. And you were very proud of your evals. You're kind of showing off to me like how many evals you were running, right?Flo [00:29:16]: Yeah, I think that was before all of these tools came around. I think the ecosystem has matured a fair bit.Swyx [00:29:21]: What is one thing that Braintrust has nailed that you always struggled to do?Flo [00:29:25]: We're not using them yet, so I couldn't tell. But from what I've gathered from the conversations I've had, like they're doing what we do with our eval tool, but better.Swyx [00:29:33]: And like they do it, but also like 60 other companies do it, right? So I don't know how to shop apart from brand. Word of mouth.Flo [00:29:41]: Same here.Swyx [00:29:42]: Yeah, like evals or Lindys, there's two kinds of evals, right? Like in some way, you don't have to eval your system as much because you've constrained the language model so much. And you can rely on OpenAI to guarantee that the structured outputs are going to be good, right?
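The "one click from any conversation history to a unit test" idea can be pictured as freezing a logged exchange and replaying it against the current agent. The sketch below is not Lindy's eval infrastructure: `agent_reply` is a stand-in for the agent under test, and the checks are deliberately simple (no Rickroll video ID, and the reply still contains a link, as the logged good reply did). It only illustrates the shape of a conversation-derived regression test.

```python
# Rough sketch of turning a logged conversation into a regression test.
from dataclasses import dataclass

BANNED_SUBSTRINGS = ["dQw4w9WgXcQ"]  # the Rickroll video ID

@dataclass
class EvalCase:
    user_message: str
    reference_reply: str  # what the agent said in the logged "good" conversation

def agent_reply(message: str) -> str:
    # Placeholder for the real agent under test; the URL is hypothetical.
    return "You can find our video tutorials at https://academy.example.com."

def run_eval(case: EvalCase) -> bool:
    reply = agent_reply(case.user_message)
    no_banned = all(bad not in reply for bad in BANNED_SUBSTRINGS)
    mentions_link = "http" in reply  # the reference reply contained a link, so this one should too
    return no_banned and mentions_link

case = EvalCase(
    user_message="Do you have video tutorials?",
    reference_reply="Yes, here is a link to the Lindy Academy.",
)
print("PASS" if run_eval(case) else "FAIL")
```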
We had Michelle sit where you sit and she explained exactly how they do constraint grammar sampling and all that good stuff. So actually, I think it's more important for your customers to eval their Lindys than you evaling your Lindy platform because you just built the platform. You don't actually need to eval that much.Flo [00:30:14]: Yeah. In an ideal world, our customers don't need to care about this. And I think the bar is not like, look, it needs to be at 100%. I think the bar is it needs to be better than a human. And for most use cases we serve today, it is better than a human, especially if you put it on Rails.Swyx [00:30:30]: Is there a limiting factor of Lindy at the business? Like, is it adding new connectors? Is it adding new node types? Like how do you prioritize what is the most impactful to your company?Flo [00:30:41]: Yeah. The raw capabilities for sure are a big limit. It is actually shocking the extent to which the model is no longer the limit. It was the limit a year ago. It was too expensive. The context window was too small. It's kind of insane that we started building this when the context windows were like 4,000 tokens. Like today, our system prompt is more than 4,000 tokens. So yeah, the model is actually very much not a limit anymore. It almost gives me pause because I'm like, I want the model to be a limit. And so no, the integrations are ones, the core capabilities are ones. So for example, we are investing in a system that's basically, I call it like the, it's a J hack. Give me these names, like the poor man's RLHF. So you can turn on a toggle on any step of your Lindy workflow to be like, ask me for confirmation before you actually execute this step. So it's like, hey, I receive an email, you send a reply, ask me for confirmation before actually sending it. And so today you see the email that's about to get sent and you can either approve, deny, or change it and then approve. And we are making it so that when you make a change, we are then saving this change that you're making or embedding it in the vector database. And then we are retrieving these examples for future tasks and injecting them into the context window. So that's the kind of capability that makes a huge difference for users. That's the bottleneck today. It's really like good old engineering and product work.Swyx [00:31:52]: I assume you're hiring. We'll do a call for hiring at the end.Alessio [00:31:54]: Any other comments on the model side? When did you start feeling like the model was not a bottleneck anymore? Was it 4.0? Was it 3.5? 3.5.Flo [00:32:04]: 3.5 Sonnet, definitely. I think 4.0 is overhyped, frankly. We don't use 4.0. I don't think it's good for agentic behavior. Yeah, 3.5 Sonnet is when I started feeling that. And then with prompt caching with 3.5 Sonnet, like that fills the cost, cut the cost again. Just cut it in half. Yeah.Swyx [00:32:21]: Your prompts are... Some of the problems with agentic uses is that your prompts are kind of dynamic, right? Like from caching to work, you need the front prefix portion to be stable.Flo [00:32:32]: Yes, but we have this append-only ledger paradigm. So every node keeps appending to that ledger and every filled node inherits all the context built up by all the previous nodes. And so we can just decide, like, hey, every X thousand nodes, we trigger prompt caching again.Swyx [00:32:47]: Oh, so you do it like programmatically, not all the time.Flo [00:32:50]: No, sorry. Anthropic manages that for us. 
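Flo's "poor man's RLHF" (save the human's edit, embed it, and retrieve similar past edits as few-shot examples for future tasks) can be sketched in a few lines. This is only a bare-bones illustration of that loop, not Lindy's implementation: the hash-based `embed` is a deterministic toy rather than a real embedding model, and the storage is an in-memory list rather than a vector database.

```python
# Bare-bones "store corrections, retrieve the most similar ones as few-shot examples" loop.
import hashlib
import math

def embed(text: str, dim: int = 64) -> list[float]:
    # Toy embedding: NOT semantically meaningful, just keeps the demo self-contained.
    h = hashlib.sha256(text.lower().encode()).digest()
    return [h[i % len(h)] / 255.0 for i in range(dim)]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

corrections: list[dict] = []  # each entry: {"draft": ..., "edited": ..., "vec": ...}

def record_correction(draft: str, edited: str) -> None:
    corrections.append({"draft": draft, "edited": edited, "vec": embed(draft)})

def few_shot_examples(new_task: str, k: int = 2) -> str:
    ranked = sorted(corrections, key=lambda c: cosine(c["vec"], embed(new_task)), reverse=True)
    return "\n\n".join(
        f"Draft: {c['draft']}\nHuman-approved version: {c['edited']}" for c in ranked[:k]
    )

record_correction("Hi, thanks for reaching out!", "Hey! Thanks so much for reaching out.")
record_correction("Your ticket is resolved.", "Good news! Your ticket is resolved. Anything else I can help with?")
print("Examples to prepend to the next prompt:\n" + few_shot_examples("Reply to a new support email"))
```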
But basically, it's like, because we keep appending to the prompt, the prompt caching works pretty well.Alessio [00:32:57]: We have this small podcaster tool that I built for the podcast and I rewrote all of our prompts because I noticed, you know, I was inputting stuff early on. I wonder how much more money OpenAI and Anthropic are making just because people don't rewrite their prompts to be like static at the top and like dynamic at the bottom.Flo [00:33:13]: I think that's the remarkable thing about what we're having right now. It's insane that these companies are routinely cutting their costs by two, four, five. Like, they basically just apply constraints. They want people to take advantage of these innovations. Very good.Swyx [00:33:25]: Do you have any other competitive commentary? Commentary? Dust, WordWare, Gumloop, Zapier? If not, we can move on.Flo [00:33:31]: No comment.Alessio [00:33:32]: I think the market is,Flo [00:33:33]: look, I mean, AGI is coming. All right, that's what I'm talking about.Swyx [00:33:38]: I think you're helping. Like, you're paving the road to AGI.Flo [00:33:41]: I'm playing my small role. I'm adding my small brick to this giant, giant, giant castle. Yeah, look, when it's here, we are going to, this entire category of software is going to create, it's going to sound like an exaggeration, but it is a fact it is going to create trillions of dollars of value in a few years, right? It's going to, for the first time, we're actually having software directly replace human labor. I see it every day in sales calls. It's like, Lindy is today replacing, like, we talk to even small teams. It's like, oh, like, stop, this is a 12-people team here. I guess we'll set up this Lindy for one or two days, and then we'll have to decide what to do with this 12-people team. And so, yeah. To me, there's this immense uncapped market opportunity. It's just such a huge ocean, and there's like three sharks in the ocean. I'm focused on the ocean more than on the sharks.Swyx [00:34:25]: So we're moving on to hot topics, like, kind of broadening out from Lindy, but obviously informed by Lindy. What are the high-order bits of good agent design?Flo [00:34:31]: The model, the model, the model, the model. I think people fail to truly, and me included, they fail to truly internalize the bitter lesson. So for the listeners out there who don't know about it, it's basically like, you just scale the model. Like, GPUs go brr, it's all that matters. I think it also holds for the cognitive architecture. I used to be very cognitive architecture-pilled, and I was like, ah, and I was like a critic, and I was like a generator, and all this, and then it's just like, GPUs go brr, like, just like let the model do its job. I think we're seeing it a little bit right now with O1. I'm seeing some tweets that say that the new 3.5 Sonnet is as good as O1, but with none of all the crazy...Swyx [00:35:09]: It beats O1 on some measures. On some reasoning tasks. On AIME, it's still a lot lower. Like, it's like 14 on AIME versus O1, it's like 83.Flo [00:35:17]: Got it. Right. But even O1 is still the model. Yeah.Swyx [00:35:22]: Like, there's no cognitive architecture on top of it.Flo [00:35:23]: You can just wait for O1 to get better.Alessio [00:35:25]: And so, as a founder, how do you think about that, right? Because now, knowing this, wouldn't you just wait to start Lindy? You know, you start Lindy, it's like 4K context, the models are not that good.
It's like, but you're still kind of like going along and building and just like waiting for the models to get better. How do you today decide, again, what to build next, knowing that, hey, the models are going to get better, so maybe we just shouldn't focus on improving our prompt design and all that stuff and just build the connectors instead or whatever? Yeah.Flo [00:35:51]: I mean, that's exactly what we do. Like, all day, we always ask ourselves, oh, when we have a feature idea or a feature request, we ask ourselves, like, is this the kind of thing that just gets better while we sleep because models get better? I'm reminded, again, when we started this in 2022, we spent a lot of time because we had to around context pruning because 4,000 tokens is really nothing. You really can't do anything with 4,000 tokens. All that work was throwaway work. Like, now it's like it was for nothing, right? Now we just assume that infinite context windows are going to be here in a year or something, a year and a half, and infinitely cheap as well, and dynamic compute is going to be here. Like, we just assume all of these things are going to happen, and so we really focus, our job to be done in the industry is to provide the input and output to the model. I really compare it all the time to the PC and the CPU, right? Apple is busy all day. They're not like a CPU wrapper. They have a lot to build, but they don't, well, now actually they do build the CPU as well, but leaving that aside, they're busy building a laptop. It's just a lot of work to build these things. It's interesting because, like,Swyx [00:36:45]: for example, another person that we're close to, Mihaly from Repl.it, he often says that the biggest jump for him was having a multi-agent approach, like the critique thing that you just said that you don't need, and I wonder when, in what situations you do need that and what situations you don't. Obviously, the simple answer is for coding, it helps, and you're not coding, except for, are you still generating code? In Indy? Yeah.Flo [00:37:09]: No, we do. Oh, right. No, no, no, the cognitive architecture changed. We don't, yeah.Swyx [00:37:13]: Yeah, okay. For you, you're one shot, and you chain tools together, and that's it. And if the user really wantsFlo [00:37:18]: to have this kind of critique thing, you can also edit the prompt, you're welcome to. I have some of my Lindys, I've told them, like, hey, be careful, think step by step about what you're about to do, but that gives you a little bump for some use cases, but, yeah.Alessio [00:37:30]: What about unexpected model releases? So, Anthropic released computer use today. Yeah. I don't know if many people were expecting computer use to come out today. Do these things make you rethink how to design, like, your roadmap and things like that, or are you just like, hey, look, whatever, that's just, like, a small thing in their, like, AGI pursuit, that, like, maybe they're not even going to support, and, like, it's still better for us to build our own integrations into systems and things like that. Because maybe people will say, hey, look, why am I building all these API integrationsFlo [00:38:02]: when I can just do computer use and never go to the product? Yeah. No, I mean, we did take into account computer use. We were talking about this a year ago or something, like, we've been talking about it as part of our roadmap. 
It's been clear to us that it was coming, My philosophy about it is anything that can be done with an API must be done by an API or should be done by an API for a very long time. I think it is dangerous to be overly cavalier about improvements of model capabilities. I'm reminded of iOS versus Android. Android was built on the JVM. There was a garbage collector, and I can only assume that the conversation that went down in the engineering meeting room was, oh, who cares about the garbage collector? Anyway, Moore's law is here, and so that's all going to go to zero eventually. Sure, but in the meantime, you are operating on a 400 MHz CPU. It was like the first CPU on the iPhone 1, and it's really slow, and the garbage collector is introducing a tremendous overhead on top of that, especially a memory overhead. For the longest time, and it's really only been recently that Android caught up to iOS in terms of how smooth the interactions were, but for the longest time, Android phones were significantly slowerSwyx [00:39:07]: and laggierFlo [00:39:08]: and just not feeling as good as iOS devices. Look, when you're talking about modules and magnitude of differences in terms of performance and reliability, which is what we are talking about when we're talking about API use versus computer use, then you can't ignore that, right? And so I think we're going to be in an API use world for a while.Swyx [00:39:27]: O1 doesn't have API use today. It will have it at some point, and it's on the roadmap. There is a future in which OpenAI goes much harder after your business, your market, than it is today. Like, ChatGPT, it's its own business. All they need to do is add tools to the ChatGPT, and now they're suddenly competing with you. And by the way, they have a GPT store where a bunch of people have already configured their tools to fit with them. Is that a concern?Flo [00:39:56]: I think even the GPT store, in a way, like the way they architect it, for example, their plug-in systems are actually grateful because we can also use the plug-ins. It's very open. Now, again, I think it's going to be such a huge market. I think there's going to be a lot of different jobs to be done. I know they have a huge enterprise offering and stuff, but today, ChatGPT is a consumer app. And so, the sort of flow detail I showed you, this sort of workflow, this sort of use cases that we're going after, which is like, we're doing a lot of lead generation and lead outreach and all of that stuff. That's not something like meeting recording, like Lindy Today right now joins your Zoom meetings and takes notes, all of that stuff.Swyx [00:40:34]: I don't see that so farFlo [00:40:35]: on the OpenAI roadmap.Swyx [00:40:36]: Yeah, but they do have an enterprise team that we talk to You're hiring GMs?Flo [00:40:42]: We did.Swyx [00:40:43]: It's a fascinating way to build a business, right? Like, what should you, as CEO, be in charge of? And what should you basically hireFlo [00:40:52]: a mini CEO to do? Yeah, that's a good question. I think that's also something we're figuring out. The GM thing was inspired from my days at Uber, where we hired one GM per city or per major geo area. We had like all GMs, regional GMs and so forth. And yeah, Lindy is so horizontal that we thought it made sense to hire GMs to own each vertical and the go-to market of the vertical and the customization of the Lindy templates for these verticals and so forth. What should I own as a CEO? 
I mean, the canonical reply here is always going to be, you know, you own the fundraising, you own the culture, you own the... What's the rest of the canonical reply? The culture, the fundraising.Swyx [00:41:29]: I don't know,Flo [00:41:30]: products. Even that, eventually, you do have to hand out. Yes, the vision, the culture, and the foundation. Well, you've done your job as a CEO. In practice, obviously, yeah, I mean, all day, I do a lot of product work still and I want to keep doing product work for as long as possible.Swyx [00:41:48]: Obviously, like you're recording and managing the team. Yeah.Flo [00:41:52]: That one feels like the most automatable part of the job, the recruiting stuff.Swyx [00:41:56]: Well, yeah. You saw myFlo [00:41:59]: design your recruiter here. Relationship between Factorio and building Lindy. We actually very often talk about how the business of the future is like a game of Factorio. Yeah. So, in the instance, it's like Slack and you've got like 5,000 Lindys in the sidebar and your job is to somehow manage your 5,000 Lindys. And it's going to be very similar to company building because you're going to look for like the highest leverage way to understand what's going on in your AI company and understand what levels do you have to make impact in that company. So, I think it's going to be very similar to like a human company except it's going to go infinitely faster. Today, in a human company, you could have a meeting with your team and you're like, oh, I'm going to build a facility and, you know, now it's like, okay,Swyx [00:42:40]: boom, I'm going to spin up 50 designers. Yeah. Like, actually, it's more important that you can clone an existing designer that you know works because the hiring process, you cannot clone someone because every new person you bring in is going to have their own tweaksFlo [00:42:54]: and you don't want that. Yeah.Swyx [00:42:56]: That's true. You want an army of mindless dronesFlo [00:42:59]: that all work the same way.Swyx [00:43:00]: The reason I bring this, bring Factorio up as well is one, Factorio Space just came out. Apparently, a whole bunch of people stopped working. I tried out Factorio. I never really got that much into it. But the other thing was, you had a tweet recently about how the sort of intentional top-down design was not as effective as just build. Yeah. Just ship.Flo [00:43:21]: I think people read a little bit too much into that tweet. It went weirdly viral. I was like, I did not intend it as a giant statement online.Swyx [00:43:28]: I mean, you notice you have a pattern with this, right? Like, you've done this for eight years now.Flo [00:43:33]: You should know. I legit was just hearing an interesting story about the Factorio game I had. And everybody was like, oh my God, so deep. I guess this explains everything about life and companies. There is something to be said, certainly, about focusing on the constraint. And I think it is Patrick Collison who said, people underestimate the extent to which moonshots are just one pragmatic step taken after the other. And I think as long as you have some inductive bias about, like, some loose idea about where you want to go, I think it makes sense to follow a sort of greedy search along that path. I think planning and organizing is important. And having older is important.Swyx [00:44:05]: I'm wrestling with that. There's two ways I encountered it recently. One with Lindy. 
When I tried out one of your automation templates and one of them was quite big and I just didn't understand it, right? So, like, it was not as useful to me as a small one that I can just plug in and see all of. And then the other one was me using Cursor. I was very excited about O1 and I just up frontFlo [00:44:27]: stuffed everythingSwyx [00:44:28]: I wanted to do into my prompt and expected O1 to do everything. And it got itself into a huge jumbled mess and it was stuck. It was really... There was no amount... I wasted, like, two hours on just, like, trying to get out of that hole. So I threw away the code base, started small, switched to Clouds on it and build up something working and just add it over time and it just worked. And to me, that was the factorial sentiment, right? Maybe I'm one of those fanboys that's just, like, obsessing over the depth of something that you just randomly tweeted out. But I think it's true for company building, for Lindy building, for coding.Flo [00:45:02]: I don't know. I think it's fair and I think, like, you and I talked about there's the Tuft & Metal principle and there's this other... Yes, I love that. There's the... I forgot the name of this other blog post but it's basically about this book Seeing Like a State that talks about the need for legibility and people who optimize the system for its legibility and anytime you make a system... So legible is basically more understandable. Anytime you make a system more understandable from the top down, it performs less well from the bottom up. And it's fine but you should at least make this trade-off with your eyes wide open. You should know, I am sacrificing performance for understandability, for legibility. And in this case, for you, it makes sense. It's like you are actually optimizing for legibility. You do want to understand your code base but in some other cases it may not make sense. Sometimes it's better to leave the system alone and let it be its glorious, chaotic, organic self and just trust that it's going to perform well even though you don't understand it completely.Swyx [00:45:55]: It does remind me of a common managerial issue or dilemma which you experienced in the small scale of Lindy where, you know, do you want to organize your company by functional sections or by products or, you know, whatever the opposite of functional is. And you tried it one way and it was more legible to you as CEO but actually it stopped working at the small level. Yeah.Flo [00:46:17]: I mean, one very small example, again, at a small scale is we used to have everything on Notion. And for me, as founder, it was awesome because everything was there. The roadmap was there. The tasks were there. The postmortems were there. And so, the postmortem was linkedSwyx [00:46:31]: to its task.Flo [00:46:32]: It was optimized for you. Exactly. And so, I had this, like, one pane of glass and everything was on Notion. And then the team, one day,Swyx [00:46:39]: came to me with pitchforksFlo [00:46:40]: and they really wanted to implement Linear. And I had to bite my fist so hard. I was like, fine, do it. Implement Linear. Because I was like, at the end of the day, the team needs to be able to self-organize and pick their own tools.Alessio [00:46:51]: Yeah. But it did make the company slightly less legible for me. Another big change you had was going away from remote work, every other month. The discussion comes up again. What was that discussion like? How did your feelings change? 
Was there kind of like a threshold of employees and team size where you felt like, okay, maybe that worked. Now it doesn't work anymore. And how are you thinking about the future as you scale the team?Flo [00:47:12]: Yeah. So, for context, I used to have a business called TeamFlow. The business was about building a virtual office for remote teams. And so, being remote was not merely something we did. It was, I was banging the remote drum super hard and helping companies to go remote. And so, frankly, in a way, it's a bit embarrassing for me to do a 180 like that. But I guess, when the facts changed, I changed my mind. What happened? Well, I think at first, like everyone else, we went remote by necessity. It was like COVID and you've got to go remote. And on paper, the gains of remote are enormous. In particular, from a founder's standpoint, being able to hire from anywhere is huge. Saving on rent is huge. Saving on commute is huge for everyone and so forth. But then, look, we're all here. It's like, it is really making it much harder to work together. And I spent three years of my youth trying to build a solution for this. And my conclusion is, at least we couldn't figure it out and no one else could. Zoom didn't figure it out. We had like a bunch of competitors. Like, Gathertown was one of the bigger ones. We had dozens and dozens of competitors. No one figured it out. I don't know that software can actually solve this problem. The reality of it is, everyone just wants to get off the darn Zoom call. And it's not a good feeling to be in your home office if you're even going to have a home office all day. It's harder to build culture. It's harder to get in sync. I think software is peculiar because it's like an iceberg. It's like the vast majority of it is submerged underwater. And so, the quality of the software that you ship is a function of the alignment of your mental models about what is below that waterline. Can you actually get in sync about what it is exactly fundamentally that we're building? What is the soul of our product? And it is so much harder to get in sync about that when you're remote. And then you waste time in a thousand ways because people are offline and you can't get a hold of them or you can't share your screen. It's just like you feel like you're walking in molasses all day. And eventually, I was like, okay, this is it. We're not going to do this anymore.Swyx [00:49:03]: Yeah. I think that is the current builder San Francisco consensus here. Yeah. But I still have a big... One of my big heroes as a CEO is Sid Sijbrandij from GitLab.Flo [00:49:14]: Mm-hmm.Swyx [00:49:15]: Matt MullenwegFlo [00:49:16]: used to be a hero.Swyx [00:49:17]: But these people run thousand-person remote businesses. The main idea is that at some company

The CyberWire
UK's newest cybersecurity MVPs.

The CyberWire

Play Episode Listen Later Sep 12, 2024 34:29


The UK designates data centers as Critical National Infrastructure. Cisco releases patches for multiple vulnerabilities in its IOS XR network operating system. BYOD is a growing security risk. A Pennsylvania healthcare network has agreed to a $65 million settlement stemming from a 2023 data breach. Google Cloud introduces air-gapped backup vaults. TrickMo is a newly discovered Android banking malware. GitLab has released a critical security update. A $20 domain purchase highlights concerns over WHOIS trust and security. Our guest is Jon France, CISO at ISC2, with insights on Communicating Cyber Risk of New Technology to the Board. And, could Pikachu be a double-agent for Western intelligence agencies?

Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign-up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn.

CyberWire Guest

Our guest is Jon France, CISO at ISC2, sharing his take on "All on 'Board' for AI – Communicating Cyber Risk of New Technology to the Board." This is a session Jon presented at Black Hat USA 2024. You can check out his session's abstract. Also, N2K CyberWire is a partner of ISC2's Security Congress 2024. Learn more about the in-person and virtual event here.

Selected Reading

UK Recognizes Data Centers as Critical National Infrastructure (Infosecurity Magazine)
Cisco Patches High-Severity Vulnerabilities in Network Operating System (SecurityWeek)
BYOD Policies Fueling Security Risks (Security Boulevard)
Healthcare Provider to Pay $65M Settlement Following Ransomware Attack (SecurityWeek)
Google Unveils Air-gapped Backup Vaults to Protect Data from Ransomware Attacks (Cyber Security News)
New Android Banking Malware TrickMo Attacking Users To Steal Login Credentials (Cyber Security News)
GitLab Releases Critical Security Update, Urges Users to Patch Immediately (Cyber Security News)
Rogue WHOIS server gives researcher superpowers no one should ever have (Ars Technica)
Pokémon GO was an intelligence tool, claims Belarus military official (The Register)

Share your feedback. We want to ensure that you are getting the most out of the podcast. Please take a few minutes to share your thoughts with us by completing our brief listener survey as we continually work to improve the show.

Want to hear your company in the show? You too can reach the most influential security leaders in the industry. Learn more about our network sponsorship opportunities and build your brand where industry leaders get their daily news.

The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices