Cross-platform document-oriented database
Talk Python To Me - Python conversations for passionate developers
The folks over at Astral have made some big-time impacts in the Python space with uv and ruff. They are back with another amazing project named ty. You may have known it as Red Knot, but the first version is nearing release, and with that release comes a new official name: ty. We have Charlie Marsh and Carl Meyer on the show to tell us all about this new project. Episode sponsors Posit Auth0 Talk Python Courses Links from the show Talk Python's Rock Solid Python: Type Hints & Modern Tools (Pydantic, FastAPI, and More) Course: training.talkpython.fm Charlie Marsh on Twitter: @charliermarsh Charlie Marsh on Mastodon: @charliermarsh Carl Meyer: @carljm ty on Github: github.com/astral-sh/ty A Very Early Play with Astral's Red Knot Static Type Checker: app.daily.dev Will Red Knot be a drop-in replacement for mypy or pyright?: github.com Hacker News Announcement: news.ycombinator.com Early Explorations of Astral's Red Knot Type Checker: pydevtools.com Astral's Blog: astral.sh Rust Analyzer Salsa Docs: docs.rs Ruff Open Issues (label: red-knot): github.com Ruff Types: types.ruff.rs Ruff Docs (Astral): docs.astral.sh uv Repository: github.com Watch this episode on YouTube: youtube.com Episode transcripts: talkpython.fm --- Stay in touch with us --- Subscribe to Talk Python on YouTube: youtube.com Talk Python on Bluesky: @talkpython.fm at bsky.app Talk Python on Mastodon: talkpython Michael on Bluesky: @mkennedy.codes at bsky.app Michael on Mastodon: mkennedy
Is waiting months for employee feedback a thing of the past? Dive into the future of work with Shane McAllister and L10 founders, Zyrian Chung and Keith Chan. Discover how their AI-powered platform collects daily insights to transform workplace culture, empower managers, and address issues like silent quitting before they escalate. Learn how real-time feedback and AI can create more engaged and productive teams, and get a peek into the tech (including MongoDB) that makes it possible. Stop guessing, start understanding your team today!
Tags: #EmployeeEngagement, #AIinHR, #WorkplaceCulture, #FutureofWork, #EmployeeFeedback, #RealTimeInsights, #TeamManagement, #Leadership, #HRTechnology, #MongoDB, #L10, #ZyrianChung, #KeithChan, #SilentQuitting, #EmployeeMorale, #TeamProductivity, #PerformanceManagement, #AITools, #EmployeeSurveys, #PeopleAnalytics
In today's episode of "Alles auf Aktien", financial journalists Philipp Vetter and Holger Zschäpitz talk about two successful IPOs, another setback for Bayer, and a plunge at Tui. They also cover Etoro, Pfisterer Holding, Super Micro, AMD, Nvidia, Coreweave, Cisco, Eon, Daimler Truck, Brenntag, Renk, Hapag Lloyd, Baidu, WeRide, Uber, General Motors, Mercedes-Benz, BMW, Volkswagen, Pony.AI, Momenta Technology, Tesla, Alphabet, Archer Aviation, Marvell Technology, Broadcom, The Trade Desk, Datadog, MongoDB, Adobe, Diamondback Energy, Regeneron Pharmaceuticals, Warner Bros Discovery, Rheinmetall, and Siemens Energy. We welcome feedback at aaa@welt.de. You can find even more "Alles auf Aktien" at WELTplus and Apple Podcasts, including all of the hosts' articles and the AAA newsletter. [Here at WELT.](https://www.welt.de/podcasts/alles-auf-aktien/plus247399208/Boersen-Podcast-AAA-Bonus-Folgen-Jede-Woche-noch-mehr-Antworten-auf-Eure-Boersen-Fragen.html) [Here](https://open.spotify.com/playlist/6zxjyJpTMunyYCY6F7vHK1?si=8f6cTnkEQnmSrlMU8Vo6uQ) you can find the Saturday-episode classics playlist on Spotify! Disclaimer: The stocks and funds discussed in the podcast do not constitute specific buy or investment recommendations. The hosts and the publisher accept no liability for any losses arising from acting on the thoughts or ideas discussed. Listening tips: For everyone who wants to know even more: you can hear Holger Zschäpitz every week on the finance and business podcast "Deffner&Zschäpitz". Also at WELT: In the weekday podcast "Das bringt der Tag", we talk with WELT experts to give you the most important background on one top political topic of the day. +++ Advertising +++ Want to learn more about our advertising partners? [**You can find all the info & discounts here!**](https://linktr.ee/alles_auf_aktien) Imprint: https://www.welt.de/services/article7893735/Impressum.html Privacy policy: https://www.welt.de/services/article157550705/Datenschutzerklaerung-WELT-DIGITAL.html
Talk Python To Me - Python conversations for passionate developers
Python has many string formatting styles which have been added to the language over the years. Early Python used the % operator to inject formatted values into strings. And we have string.format() which offers several powerful styles. Both were verbose and indirect, so f-strings were added in Python 3.6. But these f-strings lacked security features (think Little Bobby Tables) and they manifested as fully-formed strings to runtime code. Today we talk about the next evolution of Python string formatting for advanced use-cases (SQL, HTML, DSLs, etc.): t-strings. We have Paul Everitt, David Peck, and Jim Baker on the show to introduce this upcoming new language feature. Episode sponsors Posit Auth0 Talk Python Courses Links from the show Guests: Paul on X: @paulweveritt Paul on Mastodon: @pauleveritt@fosstodon.org Dave Peck on Github: github.com Jim Baker: github.com PEP 750 – Template Strings: peps.python.org tdom - Placeholder for future library on PyPI using PEP 750 t-strings: github.com PEP 750: Tag Strings For Writing Domain-Specific Languages: discuss.python.org How To Teach This: peps.python.org PEP 501 – General purpose template literal strings: peps.python.org Python's new t-strings: davepeck.org PyFormat: Using % and .format() for great good!: pyformat.info flynt: A tool to automatically convert old string literal formatting to f-strings: github.com Examples of using t-strings as defined in PEP 750: github.com htm.py issue: github.com Exploits of a Mom: xkcd.com pyparsing: github.com Watch this episode on YouTube: youtube.com Episode transcripts: talkpython.fm --- Stay in touch with us --- Subscribe to Talk Python on YouTube: youtube.com Talk Python on Bluesky: @talkpython.fm at bsky.app Talk Python on Mastodon: talkpython Michael on Bluesky: @mkennedy.codes at bsky.app Michael on Mastodon: mkennedy
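As a quick refresher on the styles contrasted in this episode, here is a minimal sketch; the first three run on any modern Python, while the t-string line is shown only as a comment because PEP 750 is an upcoming feature and its final behavior may differ:

```python
name = "Robert'); DROP TABLE Students;--"  # the classic "Little Bobby Tables" payload

# 1. Old-school % interpolation
print("Hello, %s!" % name)

# 2. str.format()
print("Hello, {}!".format(name))

# 3. f-strings (Python 3.6+): evaluated eagerly into an ordinary str,
#    so downstream code cannot tell which parts were interpolated.
print(f"Hello, {name}!")

# 4. t-strings (PEP 750, upcoming; syntax illustrative only):
#    template = t"Hello, {name}!"
#    would produce a template object whose static and interpolated parts
#    a library (SQL, HTML, a DSL, ...) could inspect and escape before rendering.
```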
Google Cloud Next 2025 opens the hyperscaler conference season with Agentic AI
In this episode of the Insurtech Leadership Podcast, host Joshua Hollander speaks with Ali Azhar, Chief Business Development Officer at Hover, a leading prop-tech and insurtech company leveraging advanced AI and 3D spatial data to streamline property assessments and claims processes. Ali shares his remarkable personal journey—from childhood entrepreneurial efforts driven by "altruistic capitalism" to founding and scaling tech companies, including ScholarshipExperts.com, and his pivot into software sales following an impactful business failure. His candid reflections offer valuable insights into resilience, reinvention, and leadership. Ali also explains Hover's groundbreaking technology, initially developed from military-inspired photogrammetry, and details how contractors "Trojan-horsed" Hover into the insurance industry. With over 10 million properties modeled, Hover's innovative approach significantly improves accuracy, reduces manual errors, and supports digital transformation across insurance and construction. In This Episode: [02:19] Ali's entrepreneurial roots and how overcoming business failure led to career reinvention. [11:33] Hover's unique technology origin story and entry into the insurance market. [20:37] Why user experience and adjuster adoption are central to Hover's growth. [22:40] Hover's aggressive investment in innovation—allocating 50% of revenue to R&D. [25:33] The evolution of Hover's interior claims automation from challenging beginnings to industry-leading solution. [29:39] Critical advice for insurtech startups: Be an innovation partner, not just a vendor. Notable Quotes: “Insurance carriers are not technology businesses—they're risk aversion experts. We have to meet them where they are.” [32:49] “The thing that always helped separate me from anyone else was I was willing to do more than what was asked.” [00:08:55] “Our bread and butter has been the exterior scoping solution. Over 10 years, we've modeled over 10 million properties.” [00:23:14] About Our Guest: Ali Azhar is a sales and business development leader with a proven track record of driving exponential revenue growth at companies like HOVER, GoodData, and MongoDB. A serial entrepreneur, he co-founded ventures like ScholarshipExperts.com and Velocity Athletics. Known for scaling teams, adding thousands of customers, and executing high-impact strategies, Ali combines hands-on startup experience with corporate leadership. He holds a degree in Management and Marketing from the University of North Florida and champions "altruistic capitalism" in business. Resources: Ali Azhar https://www.linkedin.com/in/aazhar/ https://hover.to/ Josh Hollander https://www.linkedin.com/in/joshuarhollander/ https://www.horton-usa.com/ https://www.linkedin.com/showcase/insurtech-leadership-show/?viewAsMember=true https://www.insurtechassociation.org/ https://innsure.org/
Talk Python To Me - Python conversations for passionate developers
What trends and technologies should you be paying attention to today? Are there hot new database servers you should check out? Or will that just be a flash in the pan? I love these forward looking episodes and this one is super fun. I've put together an amazing panel: Gina Häußge, Ines Montani, Richard Campbell, and Calvin Hendryx-Parker. We dive into the recent Stack Overflow Developer survey results as a sounding board for our thoughts on rising and falling trends in the Python and broader developer space. Episode sponsors NordLayer Auth0 Talk Python Courses Links from the show The Stack Overflow Survey Results: survey.stackoverflow.co/2024 Panelists Gina Häußge: chaos.social/@foosel Ines Montani: ines.io Richard Campbell: about.me/richard.campbell Calvin Hendryx-Parker: github.com/calvinhp Explosion: explosion.ai spaCy: spacy.io OctoPrint: octoprint.org .NET Rocks: dotnetrocks.com Six Feet Up: sixfeetup.com Stack Overflow: stackoverflow.com Python.org: python.org GitHub Copilot: github.com OpenAI ChatGPT: chat.openai.com Claude: anthropic.com LM Studio: lmstudio.ai Hetzner: hetzner.com Docker: docker.com Aider Chat: github.com Goose AI: goose.ai IndyPy: indypy.org OctoPrint Community Forum: community.octoprint.org spaCy GitHub: github.com Hugging Face: huggingface.co Watch this episode on YouTube: youtube.com Episode transcripts: talkpython.fm --- Stay in touch with us --- Subscribe to Talk Python on YouTube: youtube.com Talk Python on Bluesky: @talkpython.fm at bsky.app Talk Python on Mastodon: talkpython Michael on Bluesky: @mkennedy.codes at bsky.app Michael on Mastodon: mkennedy
✨ Heads up! This episode features a demonstration of the SnapLogic UI and its AI Agent Creator towards the end. For the full visual experience, check out the video version on the Spotify app! ✨
(Episode Summary) Tired of tangled data spread across multiple clouds, on-premise systems, and the edge? In this episode, MongoDB's Shane McAllister sits down with Peter Ngai, Principal Architect at SnapLogic, to explore the future of data integration and management in today's complex tech landscape. Dive into the challenges and solutions surrounding modern data architecture, including:
- Navigating the complexities of multi-cloud and hybrid cloud environments.
- The secrets to building flexible, resilient data ecosystems that avoid vendor lock-in.
- Strategies for seamless data integration and connecting disparate applications using low-code/no-code platforms like SnapLogic.
- Meeting critical data compliance, security, and sovereignty demands (think GDPR, HIPAA, etc.).
- How AI is revolutionizing data automation and providing faster access to insights (featuring SnapLogic's Agent Creator).
- The powerful synergy between SnapLogic and MongoDB, leveraging MongoDB both internally and for customer integrations.
- Real-world applications, from IoT data processing to simplifying enterprise workflows.
Whether you're an IT leader, data engineer, business analyst, or simply curious about cloud strategy, iPaaS solutions, AI in business, or simplifying your data stack, Peter offers invaluable insights into making data connectivity a driver, not a barrier, for innovation.
Keywords: Data Integration, Multi-Cloud, Hybrid Cloud, Edge Computing, SnapLogic, MongoDB, AI, Artificial Intelligence, Data Automation, iPaaS, Low-Code, No-Code, Data Architecture, Data Management, Cloud Data, Enterprise Data, API Integration, Data Compliance, Data Sovereignty, Data Security, Business Automation, ETL, ELT, Tech Stack Simplification, Peter Ngai, Shane McAllister.
In this episode of the Revenue Builders Podcast, hosts John McMahon and John Kaplan are joined by Marcello Gallo, Chief Revenue Officer at Sigma Computing. The discussion dives into Marcello's extensive experience in enterprise sales leadership, including his non-traditional path, lessons from leading roles at various companies, and the importance of structure, mentorship, and continuous learning. Marcello shares valuable insights on transitioning from technical roles to sales, territory management, and the significance of aligning with customer needs to drive value. The conversation also emphasizes the importance of having a growth mindset, understanding customer environments, and leveraging product-market fit for sustained success.
ADDITIONAL RESOURCES
Learn more about Marcello Gallo: https://www.linkedin.com/in/gallomarcello/
Download the CRO Strategy Checklist: https://hubs.li/Q03f8LmX0
Read Force Management's Guide to Increasing Company Valuation: https://hubs.li/Q038n0jT0
Enjoying the podcast? Sign up to receive new episodes straight to your inbox: https://hubs.li/Q02R10xN0
HERE ARE SOME KEY SECTIONS TO CHECK OUT
[00:01:53] Marcello's Journey into Enterprise Sales
[00:08:13] The Importance of Structure in Sales
[00:28:37] Navigating Major Accounts and Complex Sales
[00:34:32] Understanding the Champion's Role in Sales
[00:35:15] Building Strong Relationships with Champions
[00:37:59] The Importance of Predicting and Preparing for Objections
[00:39:14] Role-Playing and Preparation Techniques
[00:40:05] Leadership and Helping Teams Get Unstuck
[00:42:03] Lessons from Climbing the Corporate Ladder
[00:43:21] The Value of Enablement and Territory Management
[00:46:20] Adapting to Market Changes and Customer Feedback
[00:53:59] Choosing the Right Opportunities and Taking Risks
[01:04:50] Sigma Computing's Growth and Opportunities
HIGHLIGHT QUOTES
“If you can't bet on yourself, who can you bet on?”
“Knowledge is courage.”
“You get delegated to those that you sound like.”
“Hire the people commensurate to the territory that you have open.”
“Don't confuse position with opportunity.”
In a new season of the Oracle University Podcast, Lois Houston and Nikita Abraham dive into the world of Oracle GoldenGate 23ai, a cutting-edge software solution for data management. They are joined by Nick Wagner, a seasoned expert in database replication, who provides a comprehensive overview of this powerful tool. Nick highlights GoldenGate's ability to ensure continuous operations by efficiently moving data between databases and platforms with minimal overhead. He emphasizes its role in enabling real-time analytics, enhancing data security, and reducing costs by offloading data to low-cost hardware. The discussion also covers GoldenGate's role in facilitating data sharing, improving operational efficiency, and reducing downtime during outages. Oracle GoldenGate 23ai: Fundamentals: https://mylearn.oracle.com/ou/course/oracle-goldengate-23ai-fundamentals/145884/237273 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. --------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Nikita: Welcome to the Oracle University Podcast! I'm Nikita Abraham, Team Lead: Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi everyone! Welcome to a new season of the podcast. This time, we're focusing on the fundamentals of Oracle GoldenGate. Oracle GoldenGate helps organizations manage and synchronize their data across diverse systems and databases in real time. And with the new Oracle GoldenGate 23ai release, we'll uncover the latest innovations and features that empower businesses to make the most of their data. Nikita: Taking us through this is Nick Wagner, Senior Director of Product Management for Oracle GoldenGate. He's been doing database replication for about 25 years and has been focused on GoldenGate on and off for about 20 of those years. 01:18 Lois: In today's episode, we'll ask Nick to give us a general overview of the product, along with some use cases and benefits. Hi Nick! To start with, why do customers need GoldenGate? Nick: Well, it delivers continuous operations, being able to continuously move data from one database to another database or data platform in an efficient and high-speed manner, and it does this with very low overhead. Almost all the GoldenGate environments use transaction logs to pull the data out of the system, so we're not creating any additional triggers or very little overhead on that source system. GoldenGate can also enable real-time analytics, being able to pull data from all these different databases and move them into your analytics system in real time can improve the value that those analytics systems provide. Being able to do real-time statistics and analysis of that data within those high-performance custom environments is really important. 02:13 Nikita: Does it offer any benefits in terms of cost? Nick: GoldenGate can also lower IT costs. A lot of times people run these massive OLTP databases, and they are running reporting in those same systems.
With GoldenGate, you can offload some of the data or all the data to a low-cost commodity hardware where you can then run the reports on that other system. So, this way, you can get back that performance on the OLTP system, while at the same time optimizing your reporting environment for those long running reports. You can improve efficiencies and reduce risks. Being able to reduce the amount of downtime during planned and unplanned outages can really make a big benefit to the overall operational efficiencies of your company. 02:54 Nikita: What about when it comes to data sharing and data security? Nick: You can also reduce barriers to data sharing. Being able to pull subsets of data, or just specific pieces of data out of a production database and move it to the team or to the group that needs that information in real time is very important. And it also protects the security of your data by only moving in the information that they need and not the entire database. It also provides extensibility and flexibility, being able to support multiple different replication topologies and architectures. 03:24 Lois: Can you tell us about some of the use cases of GoldenGate? Where does GoldenGate truly shine? Nick: Some of the more traditional use cases of GoldenGate include use within the multicloud fabric. Within a multicloud fabric, this essentially means that GoldenGate can replicate data between on-premise environments, within cloud environments, or hybrid, cloud to on-premise, on-premise to cloud, or even within multiple clouds. So, you can move data from AWS to Azure to OCI. You can also move between the systems themselves, so you don't have to use the same database in all the different clouds. For example, if you wanted to move data from AWS Postgres into Oracle running in OCI, you can do that using Oracle GoldenGate. We also support maximum availability architectures. And so, there's a lot of different use cases here, but primarily geared around reducing your recovery point objective and recovery time objective. 04:20 Lois: Ah, reducing RPO and RTO. That must have a significant advantage for the customer, right? Nick: So, reducing your RPO and RTO allows you to take advantage of some of the benefits of GoldenGate, being able to do active-active replication, being able to set up GoldenGate for high availability, real-time failover, and it can augment your active Data Guard and Data Guard configuration. So, a lot of times GoldenGate is used within Oracle's maximum availability architecture platinum tier level of replication, which means that at that point you've got lots of different capabilities within the Oracle Database itself. But to help eke out that last little bit of high availability, you want to set up an active-active environment with GoldenGate to really get true zero RPO and RTO. GoldenGate can also be used for data offloading and data hubs. Being able to pull data from one or more source systems and move it into a data hub, or into a data warehouse for your operational reporting. This could also be your analytics environment too. 05:22 Nikita: Does GoldenGate support online migrations? Nick: In fact, a lot of companies actually get started in GoldenGate by doing a migration from one platform to another. Now, these don't even have to be something as complex as going from one database like a DB2 on-premise into an Oracle on OCI, it could even be simple migrations. 
A lot of times doing something like a major application or a major database version upgrade is going to take downtime on that production system. You can use GoldenGate to eliminate that downtime. So this could be going from Oracle 19c to Oracle 23ai, or going from application version 1.0 to application version 2.0, because GoldenGate can do the transformation between the different application schemas. You can use GoldenGate to migrate your database from on premise into the cloud with no downtime as well. We also support real-time analytic feeds, being able to go from multiple databases, not only those on premise, but being able to pull information from different SaaS applications inside of OCI and move it to your different analytic systems. And then, of course, we also have the ability to stream events and analytics within GoldenGate itself. 06:34 Lois: Let's move on to the various topologies supported by GoldenGate. I know GoldenGate supports many different platforms and can be used with just about any database. Nick: This first layer of topologies is what we usually consider relational database topologies. And so this would be moving data from Oracle to Oracle, Postgres to Oracle, Sybase to SQL Server, a lot of different types of databases. So the first architecture would be unidirectional. This is replicating from one source to one target. You can do this for reporting. If I wanted to offload some reports into another server, I can go ahead and do that using GoldenGate. I can replicate the entire database or just a subset of tables. I can also set up GoldenGate for bidirectional, and this is what I want to set up GoldenGate for something like high availability. So in the event that one of the servers crashes, I can almost immediately reconnect my users to the other system. And that almost immediately depends on the amount of latency that GoldenGate has at that time. So a typical latency is anywhere from 3 to 6 seconds. So after that primary system fails, I can reconnect my users to the other system in 3 to 6 seconds. And I can do that because as GoldenGate's applying data into that target database, that target system is already open for read and write activity. GoldenGate is just another user connecting in issuing DML operations, and so it makes that failover time very low. 07:59 Nikita: Ok…If you can get it down to 3 to 6 seconds, can you bring it down to zero? Like zero failover time? Nick: That's the next topology, which is active-active. And in this scenario, all servers are read/write all at the same time and all available for user activity. And you can do multiple topologies with this as well. You can do a mesh architecture, which is where every server talks to every other server. This works really well for 2, 3, 4, maybe even 5 environments, but when you get beyond that, having every server communicate with every other server can get a little complex. And so at that point we start looking at doing what we call a hub and spoke architecture, where we have lots of different spokes. At the end of each spoke is a read/write database, and then those communicate with a hub. So any change that happens on one spoke gets sent into the hub, and then from the hub it gets sent out to all the other spokes. And through that architecture, it allows you to really scale up your environments. We have customers that are doing up to 150 spokes within that hub architecture. 
Within active-active replication as well, we can do conflict detection and resolution, which means that if two users modify the same row on two different systems, GoldenGate can actually determine that there was an issue with that and determine what user wins or which row change wins, which is extremely important when doing active-active replication. And this means that if one of those systems fails, there is no downtime when you switch your users to another active system because it's already available for activity and ready to go. 09:35 Lois: Wow, that's fantastic. Ok, tell us more about the topologies. Nick: GoldenGate can do other things like broadcast, sending data from one system to multiple systems, or many to one as far as consolidation. We can also do cascading replication, so when data moves from one environment that GoldenGate is replicating into another environment that GoldenGate is replicating. By default, we ignore all of our own transactions. But there's actually a toggle switch that you can flip that says, hey, GoldenGate, even though you wrote that data into that database, still push it on to the next system. And then of course, we can also do distribution of data, and this is more like moving data from a relational database into something like a Kafka topic or a JMS queue or into some messaging service. 10:24 Raise your game with the Oracle Cloud Applications skills challenge. Get free training on Oracle Fusion Cloud Applications, Oracle Modern Best Practice, and Oracle Cloud Success Navigator. Pass the free Oracle Fusion Cloud Foundations Associate exam to earn a Foundations Associate certification. Plus, there's a chance to win awards and prizes throughout the challenge! What are you waiting for? Join the challenge today by visiting oracle.com/education. 10:58 Nikita: Welcome back! Nick, does GoldenGate also have nonrelational capabilities? Nick: We have a number of nonrelational replication events and topologies as well. This includes things like data lake ingestion and streaming ingestion, being able to move data and data objects from these different relational database platforms into data lakes and into these streaming systems where you can run analytics on them and run reports. We can also do cloud ingestion, being able to move data from these databases into different cloud environments. And this is not only just moving it into relational databases with those clouds, but also their data lakes and data fabrics. 11:38 Lois: You mentioned a messaging service earlier. Can you tell us more about that? Nick: Messaging replication is also possible. So we can actually capture from things like messaging systems like Kafka Connect and JMS, replicate that into a relational data, or simply stream it into another environment. We also support NoSQL replication, being able to capture from MongoDB and replicate it onto another MongoDB for high availability or disaster recovery, or simply into any other system. 12:06 Nikita: I see. And is there any integration with a customer's SaaS applications? Nick: GoldenGate also supports a number of different OCI SaaS applications. And so a lot of these different applications like Oracle Financials Fusion, Oracle Transportation Management, they all have GoldenGate built under the covers and can be enabled with a flag that you can actually have that data sent out to your other GoldenGate environment. So you can actually subscribe to changes that are happening in these other systems with very little overhead.
And then of course, we have event processing and analytics, and this is the final topology or flexibility within GoldenGate itself. And this is being able to push data through data pipelines, doing data transformations. GoldenGate is not an ETL tool, but it can do row-level transformation and row-level filtering. 12:55 Lois: Are there integrations offered by Oracle GoldenGate in automation and artificial intelligence? Nick: We can do time series analysis and geofencing using the GoldenGate Stream Analytics product. It allows you to actually do real time analysis and time series analysis on data as it flows through the GoldenGate trails. And then that same product, the GoldenGate Stream Analytics, can then take the data and move it to predictive analytics, where you can run MML on it, or ONNX or other Spark-type technologies and do real-time analysis and AI on that information as it's flowing through. 13:29 Nikita: So, GoldenGate is extremely flexible. And given Oracle's focus on integrating AI into its product portfolio, what about GoldenGate? Does it offer any AI-related features, especially since the product name has “23ai” in it? Nick: With the advent of Oracle GoldenGate 23ai, it's one of the two products at this point that has the AI moniker at Oracle. Oracle Database 23ai also has it, and that means that we actually do stuff with AI. So the Oracle GoldenGate product can actually capture vectors from databases like MySQL HeatWave, Postgres using pgvector, which includes things like AlloyDB, Amazon RDS Postgres, Aurora Postgres. We can also replicate data into Elasticsearch and OpenSearch, or if the data is using vectors within OCI or the Oracle Database itself. So GoldenGate can be used for a number of things here. The first one is being able to migrate vectors into the Oracle Database. So if you're using something like Postgres, MySQL, and you want to migrate the vector information into the Oracle Database, you can. Now one thing to keep in mind here is a vector is oftentimes like a GPS coordinate. So if I need to know the GPS coordinates of Austin, Texas, I can put in a latitude and longitude and it will give me the GPS coordinates of a building within that city. But if I also need to know the altitude of that same building, well, that's going to be a different algorithm. And GoldenGate and replicating vectors is the same way. When you create a vector, it's essentially just creating a bunch of numbers under the screen, kind of like those same GPS coordinates. The dimension and the algorithm that you use to generate that vector can be different across different databases, but the actual meaning of that data will change. And so GoldenGate can replicate the vector data as long as the algorithm and the dimensions are the same. If the algorithm and the dimensions are not the same between the source and the target, then you'll actually want GoldenGate to replicate the base data that created that vector. And then once GoldenGate replicates the base data, it'll actually call the vector embedding technology to re-embed that data and produce that numerical formatting for you. 15:42 Lois: So, there are some nuances there… Nick: GoldenGate can also replicate and consolidate vector changes or even do the embedding API calls itself. This is really nice because it means that we can take changes from multiple systems and consolidate them into a single one. We can also do the reverse of that too. A lot of customers are still trying to find out which algorithms work best for them. 
How many dimensions? What's the optimal use? Well, you can now run those in different servers without impacting your actual AI system. Once you've identified which algorithm and dimension is going to be best for your data, you can then have GoldenGate replicate that into your production system and we'll start using that instead. So it's a nice way to switch algorithms without taking extensive downtime. 16:29 Nikita: What about in multicloud environments? Nick: GoldenGate can also do multicloud and N-way active-active Oracle replication between vectors. So if there's vectors in Oracle databases, in multiple clouds, or multiple on-premise databases, GoldenGate can synchronize them all up. And of course we can also stream changes from vector information, including text as well into different search engines. And that's where the integration with Elasticsearch and OpenSearch comes in. And then we can use things like NVIDIA and Cohere to actually do the AI on that data. 17:01 Lois: Using GoldenGate with AI in the database unlocks so many possibilities. Thanks for that detailed introduction to Oracle GoldenGate 23ai and its capabilities, Nick. Nikita: We've run out of time for today, but Nick will be back next week to talk about how GoldenGate has evolved over time and its latest features. And if you liked what you heard today, head over to mylearn.oracle.com and take a look at the Oracle GoldenGate 23ai Fundamentals course to learn more. Until next time, this is Nikita Abraham… Lois: And Lois Houston, signing off! 17:33 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Talk Python To Me - Python conversations for passionate developers
Pandas is at the core of virtually all data science done in Python, which is to say, virtually all data science. Since its beginning, Pandas has been based upon NumPy. But changes are afoot to update those internals and you can now optionally use PyArrow. PyArrow comes with a ton of benefits including its columnar format which makes answering analytical questions faster, support for a range of high-performance file formats, inter-machine data streaming, faster file IO and more. Reuven Lerner is here to give us the low-down on the PyArrow revolution. Episode sponsors NordLayer Auth0 Talk Python Courses Links from the show Reuven: github.com/reuven Apache Arrow: github.com Parquet: parquet.apache.org Feather format: arrow.apache.org Python Workout Book: manning.com Pandas Workout Book: manning.com Pandas: pandas.pydata.org PyArrow CSV docs: arrow.apache.org Future string inference in Pandas: pandas.pydata.org Pandas NA/nullable dtypes: pandas.pydata.org Pandas `.iloc` indexing: pandas.pydata.org DuckDB: duckdb.org Pandas user guide: pandas.pydata.org Pandas GitHub issues: github.com Watch this episode on YouTube: youtube.com Episode transcripts: talkpython.fm --- Stay in touch with us --- Subscribe to Talk Python on YouTube: youtube.com Talk Python on Bluesky: @talkpython.fm at bsky.app Talk Python on Mastodon: talkpython Michael on Bluesky: @mkennedy.codes at bsky.app Michael on Mastodon: mkennedy
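As a hedged illustration of the optional PyArrow backing discussed above, here is a minimal sketch (assuming pandas 2.x and the pyarrow package are installed; the file names are hypothetical):

```python
import pandas as pd

# Read a CSV with PyArrow's multithreaded parser and store the columns as
# Arrow-backed dtypes instead of the classic NumPy-backed ones.
df = pd.read_csv(
    "measurements.csv",        # hypothetical input file
    engine="pyarrow",          # PyArrow CSV reader
    dtype_backend="pyarrow",   # ArrowDtype columns, with proper missing-value support
)

print(df.dtypes)               # e.g. int64[pyarrow], string[pyarrow], ...

# Columnar Arrow data also pairs naturally with high-performance file formats.
df.to_parquet("measurements.parquet")   # requires pyarrow
```

The dtype_backend flag is the opt-in switch: leave it off and pandas behaves exactly as before on NumPy arrays.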
**(Note: Spotify listeners can also watch the screen sharing video accompanying the audio. Other podcast platforms offer the audio-only version.)**In this episode of MongoDB Podcast Live, host Shane McAllister is joined by Sachin Hejip from Dataworkz. Sachin will showcase “Dataworkz Agent Builder” which is built with MongoDB Atlas Vector Search, and demonstrate how it can use Natural Language to create Agents and in turn, automate and simplify the creation of Agentic RAG applications. Sachin will demo the MongoDB Leafy Portal Chatbot Agent, which combines operational data with unstructured data for personalised customer experience and support, built using Dataworkz and MongoDB.Struggling with millions of unstructured documents, legacy records, or scattered data formats? Discover how AI, Large Language Models (LLMs), and MongoDB are revolutionizing data management in this episode of the MongoDB Podcast.Join host Shane McAllister and the team as they delve into tackling complex data challenges using cutting-edge technology. Learn how MongoDB Atlas Vector Search enables powerful semantic search and Retrieval Augmented Generation (RAG) applications, transforming chaotic information into valuable insights. Explore integrations with popular frameworks like Langchain and Llama Index.Find out how to efficiently process and make sense of your unstructured data, potentially saving significant costs and unlocking new possibilities.Ready to dive deeper?#MongoDB #AI #LLM #LargeLanguageModels #VectorSearch #AtlasVectorSearch #UnstructuredData #Podcast #DataManagement #Dataworkz #Observability #Developer #BigData #RAG
Elon Musk's car maker Tesla had a weak first quarter. Nevertheless, shareholders can celebrate a gain in the share price. Rüdiger and Robert analyze the reasons. Stocks mentioned: Tesla, Uber, Nvidia, MongoDB, Google, Medpace, Hermes, Citigroup, Bank of America, Netflix, Allianz, Mercedes, Deutsche Telekom. You can also find all episodes on KURIER.at and kronehit.at. Find more podcasts at KURIER.at/podcasts Hosted on Acast. See acast.com/privacy for more information.
In this week's episode of The Future of Security Operations podcast, Thomas is joined by Mark Hillick, CISO at Brex. Mark's experience in the security industry spans more than two decades. He started out as a security engineer at Allied Irish Banks before advancing through companies like MongoDB to become Director and Head of Security at Riot Games. His book, The Security Path, features over 70 interviews with security professionals on their career journeys. In this episode: [02:06] His early career journey - from a mathematics background to building early online banking systems [03:32] What's kept Mark excited about security for over two decades [04:40] The compound benefits of growing within a company over time [07:20] Mark's leadership style - defined by transparency, directness, and genuine care for his teammates [12:45] Communicating the business trade-off between risk and return [16:45] Reflecting on the team's response to major incidents at Riot Games [21:00] The unique challenges of securing gaming platforms [26:30] How Mark approaches strategy and planning in the fintech space [28:08] The case for building strong, partnership-driven vendor relationships [31:13] Creating space for creativity - without spreading the team too thin [34:35] Empowering his team to speak openly - even if it means calling him out [36:35] The inspiration behind Mark's books Digital Safety for Parents and The Security Path [40:20] Connect with Mark Where to find Mark: LinkedIn Brex Where to find Thomas Kinsella: LinkedIn Tines Resources mentioned: The Security Path - click here to redeem a free copy for podcast listeners (first come, first serve) Digital Safety for Parents - click here to redeem a free copy for podcast listeners (first come, first serve) Mark's talk during his time at Riot Games in 2016
Talk Python To Me - Python conversations for passionate developers
Do you or your company need accounting software? Well, there are plenty of SaaS products out there that you can give your data to. But maybe you also really like Django and would rather have a foundation to build your own accounting system exactly as you need it for your company or your product. On this episode, we're diving into Django Ledger, created by Miguel Sanda, which can do just that. Episode sponsors Auth0 Talk Python Courses Links from the show Miguel Sanda on Twitter: @elarroba Miguel on Mastodon: @elarroba@fosstodon.org Miguel on GitHub: github.com Django Ledger on Github: github.com Django Ledger Discord: discord.gg Get Started with Django MongoDB Backend: mongodb.com Wagtail CMS: wagtail.org Watch this episode on YouTube: youtube.com Episode transcripts: talkpython.fm --- Stay in touch with us --- Subscribe to Talk Python on YouTube: youtube.com Talk Python on Bluesky: @talkpython.fm at bsky.app Talk Python on Mastodon: talkpython Michael on Bluesky: @mkennedy.codes at bsky.app Michael on Mastodon: mkennedy
Varun Mohan is the co-founder and CEO of Windsurf (formerly Codeium), an AI-powered development environment (IDE) that has been used by over 1 million developers in just four months and has quickly emerged as a leader in transforming how developers build software. Prior to finding success with Windsurf, the company pivoted twice—first from GPU virtualization infrastructure to an IDE plugin, and then to their own standalone IDE.In this conversation, you'll learn:1. Why Windsurf walked away from a profitable GPU infrastructure business and bet the company on helping engineers code2. The surprising UI discovery that tripled adoption rates overnight.3. The secret behind Windsurf's B2B enterprise plan, and why they invested early in an 80-person sales team despite conventional startup wisdom.4. How non-technical staff at Windsurf built their own custom tools instead of purchasing SaaS products, saving them over $500k in software costs5. Why Varun believes 90% of code will be AI-generated, but engineering jobs will actually increase6. How training on millions of incomplete code samples gives Windsurf an edge, and creates a moat long-term7. Why agency is the most undervalued and important skill in the AI era—Brought to you by:• Brex—The banking solution for startups• Productboard—Make products that matter• Coda—The all-in-one collaborative workspace—Where to find Varun Mohan:• X: https://x.com/_mohansolo• LinkedIn: https://www.linkedin.com/in/varunkmohan/—Where to find Lenny:• Newsletter: https://www.lennysnewsletter.com• X: https://twitter.com/lennysan• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/—In this episode, we cover:(00:00) Varun's background(03:57) Building and scaling Windsurf(12:58) Windsurf: The new purpose-built IDE to harness magic(17:11) The future of engineering and AI(21:30) Skills worth investing in(23:07) Hiring philosophy and company culture(35:22) Sales strategy and market position(39:37) JetBrains vs. 
VS Code: extensibility and enterprise adoption(41:20) Live demo: building an Airbnb for dogs with Windsurf(42:46) Tips for using Windsurf effectively(46:38) AI's role in code modification and review(48:56) Empowering non-developers to build custom software(54:03) Training Windsurf(01:00:43) Windsurf's unique team structure and product strategy(01:06:40) The importance of continuous innovation(01:08:57) Final thoughts and advice for aspiring developers—Referenced:• Windsurf: https://windsurf.com/• VS Code: https://code.visualstudio.com/• JetBrains: https://www.jetbrains.com/• Eclipse: https://eclipseide.org/• Visual Studio: https://visualstudio.microsoft.com/• Vim: https://www.vim.org/• Emacs: https://www.gnu.org/software/emacs/• Lessons from a two-time unicorn builder, 50-time startup advisor, and 20-time company board member | Uri Levine (co-founder of Waze): https://www.lennysnewsletter.com/p/lessons-from-uri-levine• IntelliJ: https://www.jetbrains.com/idea/• Julia: https://julialang.org/• Parallel computing: https://en.wikipedia.org/wiki/Parallel_computing• Douglas Chen on LinkedIn: https://www.linkedin.com/in/douglaspchen/• Carlos Delatorre on LinkedIn: https://www.linkedin.com/in/cadelatorre/• MongoDB: https://www.mongodb.com/• Cursor: https://www.cursor.com/• GitHub Copilot: https://github.com/features/copilot• Llama: https://www.llama.com/• Mistral: https://mistral.ai/• Building Lovable: $10M ARR in 60 days with 15 people | Anton Osika (CEO and co-founder): https://www.lennysnewsletter.com/p/building-lovable-anton-osika• Inside Bolt: From near-death to ~$40m ARR in 5 months—one of the fastest-growing products in history | Eric Simons (founder & CEO of StackBlitz): https://www.lennysnewsletter.com/p/inside-bolt-eric-simons• Behind the product: Replit | Amjad Masad (co-founder and CEO): https://www.lennysnewsletter.com/p/behind-the-product-replit-amjad-masad• React: https://react.dev/• Sonnet: https://www.anthropic.com/claude/sonnet• OpenAI: https://openai.com/• FedRamp: https://www.fedramp.gov/• Dario Amodei on LinkedIn: https://www.linkedin.com/in/dario-amodei-3934934/• Amdahl's law: https://en.wikipedia.org/wiki/Amdahl%27s_law• How to win in the AI era: Ship a feature every week, embrace technical debt, ruthlessly cut scope, and create magic your competitors can't copy | Gaurav Misra (CEO and co-founder of Captions): https://www.lennysnewsletter.com/p/how-to-win-in-the-ai-era-gaurav-misra—Recommended book:• Fall in Love with the Problem, Not the Solution: A Handbook for Entrepreneurs: https://www.amazon.com/Fall-Love-Problem-Solution-Entrepreneurs/dp/1637741987—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.—Lenny may be an investor in the companies discussed. Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe
Brandon talks with Rick Houlihan, Field CTO at MongoDB (https://fnf.dev/4ipjSI9), about why document databases are more important than ever, how MongoDB fits into modern app architectures, and where AI comes into play. Plus, Rick shares the story of how he helped deprecate 3,000 Oracle databases at Amazon. Show Links MongoDB (https://fnf.dev/4ipjSI9) Contact Rick Houlihan LinkedIn: Rick Houlihan (https://www.linkedin.com/in/rickhoulihan/) Twitter/X: @houlihan_rick (https://x.com/houlihan_rick) SDT News & Hype Join us in Slack (http://www.softwaredefinedtalk.com/slack). Get a SDT Sticker! Send your postal address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and we will send you free laptop stickers! Follow us: Twitch (https://www.twitch.tv/sdtpodcast), Twitter (https://twitter.com/softwaredeftalk), Instagram (https://www.instagram.com/softwaredefinedtalk/), Mastodon (https://hachyderm.io/@softwaredefinedtalk), BlueSky (https://bsky.app/profile/softwaredefinedtalk.com), LinkedIn (https://www.linkedin.com/company/software-defined-talk/), TikTok (https://www.tiktok.com/@softwaredefinedtalk), Threads (https://www.threads.net/@softwaredefinedtalk) and YouTube (https://www.youtube.com/channel/UCi3OJPV6h9tp-hbsGBLGsDQ/featured). Use the code SDT to get $20 off Coté's book, Digital WTF (https://leanpub.com/digitalwtf/c/sdt), so $5 total. Become a sponsor of Software Defined Talk (https://www.softwaredefinedtalk.com/ads)! Special Guest: Rick Houlihan.
**(Note: Spotify listeners can also watch the screen sharing video accompanying the audio. Other podcast platforms offer the audio-only version.)**Struggling with millions of unstructured documents, legacy records, or scattered data formats? Discover how AI, Large Language Models (LLMs), and MongoDB are revolutionizing data management in this episode of the MongoDB Podcast.Join host Shane McAllister and the team as they delve into tackling complex data challenges using cutting-edge technology. Learn how MongoDB Atlas Vector Search enables powerful semantic search and Retrieval Augmented Generation (RAG) applications, transforming chaotic information into valuable insights. Explore integrations with popular frameworks like Langchain and Llama Index.Find out how to efficiently process and make sense of your unstructured data, potentially saving significant costs and unlocking new possibilities.Ready to dive deeper?#MongoDB #AI #LLM #LargeLanguageModels #VectorSearch #AtlasVectorSearch #UnstructuredData #Podcast #DataManagement #RAG #SemanticSearch #Langchain #LlamaIndex #Developer #BigData
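For a sense of what an Atlas Vector Search query looks like from Python, here is a minimal sketch using PyMongo; the connection string, database, collection, index name, embedding field, and query vector are all placeholders for illustration rather than anything taken from the episode:

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")  # placeholder URI
collection = client["support"]["articles"]                                            # hypothetical db/collection

# In practice the query vector comes from an embedding model; shown here as a
# placeholder list whose length must match the dimension of the stored embeddings.
query_vector = [0.012, -0.087, 0.344]

pipeline = [
    {
        "$vectorSearch": {                 # Atlas Vector Search aggregation stage
            "index": "vector_index",       # name of the Atlas vector index (assumption)
            "path": "embedding",           # field holding the stored embeddings (assumption)
            "queryVector": query_vector,
            "numCandidates": 100,          # breadth of the approximate nearest-neighbor search
            "limit": 5,                    # top-k documents returned
        }
    },
    {"$project": {"title": 1, "score": {"$meta": "vectorSearchScore"}}},
]

for doc in collection.aggregate(pipeline):
    print(doc)
```

A RAG application would then feed the returned documents to an LLM as context, which is the pattern the episode describes.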
Talk Python To Me - Python conversations for passionate developers
Have you ever spent an afternoon wrestling with a Jupyter notebook, hoping that you ran the cells in just the right order, only to realize your outputs were completely out of sync? Today's guest has a fresh take on solving that exact problem. Akshay Agrawal is here to introduce Marimo, a reactive Python notebook that ensures your code and outputs always stay in lockstep. And that's just the start! We'll also dig into Akshay's background at Google Brain and Stanford, what it's like to work on the cutting edge of AI, and how Marimo is uniting the best of data science exploration and real software engineering. Episode sponsors Worth Search Talk Python Courses Links from the show Akshay Agrawal: akshayagrawal.com YouTube: youtube.com Source: github.com Docs: marimo.io Marimo: marimo.io Discord: marimo.io WASM playground: marimo.new Experimental generate notebooks with AI: marimo.app Pluto.jl: plutojl.org Observable JS: observablehq.com Watch this episode on YouTube: youtube.com Episode transcripts: talkpython.fm --- Stay in touch with us --- Subscribe to Talk Python on YouTube: youtube.com Talk Python on Bluesky: @talkpython.fm at bsky.app Talk Python on Mastodon: talkpython Michael on Bluesky: @mkennedy.codes at bsky.app Michael on Mastodon: mkennedy
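To make the "reactive notebook" idea concrete, here is a rough sketch of what a marimo notebook file looks like on disk; the cell names and values are illustrative, and the exact serialization details may differ between marimo versions. Each cell declares what it reads as function parameters and what it defines in its return value, which is how marimo builds the dependency graph that keeps outputs in sync:

```python
import marimo

app = marimo.App()

@app.cell
def _():
    base = 10          # editing this value re-runs every cell that depends on it
    return (base,)

@app.cell
def _(base):
    squared = base ** 2
    return (squared,)

@app.cell
def _(squared):
    print(f"squared = {squared}")
    return

if __name__ == "__main__":
    app.run()          # the same file also runs top-to-bottom as a plain script
```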
Talk Python To Me - Python conversations for passionate developers
We're sitting down with Eric Matthes, the educator, author, and developer behind Django Simple Deploy. If you've ever struggled with taking that final step of getting your Django app onto a live server (without spending days wrestling with DevOps complexities), then give Django Simple Deploy a look. Eric shares how Django Simple Deploy automates away the boilerplate parts of deployment, so you can focus on building features instead of deciphering endless configs. We'll talk about this new project's journey to 1.0, the range of hosting platforms it supports, and why it's not just for beginners. Episode sponsors Worth Search Talk Python Courses Links from the show django-simple-deploy documentation: readthedocs.io django-simple-deploy repository: github.com Python Crash Course book: ehmatthes.github.io Code Red: codered.cloud Docker: docker.com Caddy: caddyserver.com Bunny.net CDN: bunny.net Platform.sh: platform.sh fly.io: fly.io Heroku: heroku.com Watch this episode on YouTube: youtube.com Episode transcripts: talkpython.fm --- Stay in touch with us --- Subscribe to Talk Python on YouTube: youtube.com Talk Python on Bluesky: @talkpython.fm at bsky.app Talk Python on Mastodon: talkpython Michael on Bluesky: @mkennedy.codes at bsky.app Michael on Mastodon: mkennedy
MongoDB staff developer advocate Jesse Hall teams up with Storecraft creator Tomer Shalev to dissect how this open-source commerce platform leverages MongoDB for scalable, AI-driven workflows. Watch them live-code integrations (vector search, dynamic pricing, email automation) and explore real-world use cases—from custom discount logic to OpenAI-powered product recommendations.Perfect for developers building modular, future-proof commerce solutions.
Talk Python To Me - Python conversations for passionate developers
This episode is all about BeeWare, the project that is working towards true native apps built on Python, especially for iOS and Android. Russell's been at this for more than a decade, and the progress is now hitting critical mass. We'll talk about the Toga GUI toolkit, building and shipping your apps with Briefcase, the newly official support for iOS and Android in CPython, and so much more. I can't wait to explore how BeeWare opens up the entire mobile ecosystem for Python developers. Let's jump right in. Episode sponsors Posit Python in Production Talk Python Courses Links from the show Anaconda open source team: anaconda.com PEP 730 – Adding iOS: peps.python.org PEP 738 – Adding Android: peps.python.org Toga: beeware.org Briefcase: beeware.org emscripten: emscripten.org Russell Keith-Magee - Keynote - PyCon 2019: youtube.com Watch this episode on YouTube: youtube.com Episode transcripts: talkpython.fm --- Stay in touch with us --- Subscribe to Talk Python on YouTube: youtube.com Talk Python on Bluesky: @talkpython.fm at bsky.app Talk Python on Mastodon: talkpython Michael on Bluesky: @mkennedy.codes at bsky.app Michael on Mastodon: mkennedy
If you're in SF: Join us for the Claude Plays Pokemon hackathon this Sunday!If you're not: Fill out the 2025 State of AI Eng survey for $250 in Amazon cards!We are SO excited to share our conversation with Dharmesh Shah, co-founder of HubSpot and creator of Agent.ai.A particularly compelling concept we discussed is the idea of "hybrid teams" - the next evolution in workplace organization where human workers collaborate with AI agents as team members. Just as we previously saw hybrid teams emerge in terms of full-time vs. contract workers, or in-office vs. remote workers, Dharmesh predicts that the next frontier will be teams composed of both human and AI members. This raises interesting questions about team dynamics, trust, and how to effectively delegate tasks between human and AI team members.The discussion of business models in AI reveals an important distinction between Work as a Service (WaaS) and Results as a Service (RaaS), something Dharmesh has written extensively about. While RaaS has gained popularity, particularly in customer support applications where outcomes are easily measurable, Dharmesh argues that this model may be over-indexed. Not all AI applications have clearly definable outcomes or consistent economic value per transaction, making WaaS more appropriate in many cases. This insight is particularly relevant for businesses considering how to monetize AI capabilities.The technical challenges of implementing effective agent systems are also explored, particularly around memory and authentication. Shah emphasizes the importance of cross-agent memory sharing and the need for more granular control over data access. He envisions a future where users can selectively share parts of their data with different agents, similar to how OAuth works but with much finer control. 
This points to significant opportunities in developing infrastructure for secure and efficient agent-to-agent communication and data sharing.Other highlights from our conversation* The Evolution of AI-Powered Agents – Exploring how AI agents have evolved from simple chatbots to sophisticated multi-agent systems, and the role of MCPs in enabling that.* Hybrid Digital Teams and the Future of Work – How AI agents are becoming teammates rather than just tools, and what this means for business operations and knowledge work.* Memory in AI Agents – The importance of persistent memory in AI systems and how shared memory across agents could enhance collaboration and efficiency.* Business Models for AI Agents – Exploring the shift from software as a service (SaaS) to work as a service (WaaS) and results as a service (RaaS), and what this means for monetization.* The Role of Standards Like MCP – Why MCP has been widely adopted and how it enables agent collaboration, tool use, and discovery.* The Future of AI Code Generation and Software Engineering – How AI-assisted coding is changing the role of software engineers and what skills will matter most in the future.* Domain Investing and Efficient Markets – Dharmesh's approach to domain investing and how inefficiencies in digital asset markets create business opportunities.* The Philosophy of Saying No – Lessons from "Sorry, You Must Pass" and how prioritization leads to greater productivity and focus.Timestamps* 00:00 Introduction and Guest Welcome* 02:29 Dharmesh Shah's Journey into AI* 05:22 Defining AI Agents* 06:45 The Evolution and Future of AI Agents* 13:53 Graph Theory and Knowledge Representation* 20:02 Engineering Practices and Overengineering* 25:57 The Role of Junior Engineers in the AI Era* 28:20 Multi-Agent Systems and MCP Standards* 35:55 LinkedIn's Legal Battles and Data Scraping* 37:32 The Future of AI and Hybrid Teams* 39:19 Building Agent AI: A Professional Network for Agents* 40:43 Challenges and Innovations in Agent AI* 45:02 The Evolution of UI in AI Systems* 01:00:25 Business Models: Work as a Service vs. Results as a Service* 01:09:17 The Future Value of Engineers* 01:09:51 Exploring the Role of Agents* 01:10:28 The Importance of Memory in AI* 01:11:02 Challenges and Opportunities in AI Memory* 01:12:41 Selective Memory and Privacy Concerns* 01:13:27 The Evolution of AI Tools and Platforms* 01:18:23 Domain Names and AI Projects* 01:32:08 Balancing Work and Personal Life* 01:35:52 Final Thoughts and ReflectionsTranscriptAlessio [00:00:04]: Hey everyone, welcome back to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Small AI.swyx [00:00:12]: Hello, and today we're super excited to have Dharmesh Shah to join us. I guess your relevant title here is founder of Agent AI.Dharmesh [00:00:20]: Yeah, that's true for this. Yeah, creator of Agent.ai and co-founder of HubSpot.swyx [00:00:25]: Co-founder of HubSpot, which I followed for many years, I think 18 years now, gonna be 19 soon. And you caught, you know, people can catch up on your HubSpot story elsewhere. I should also thank Sean Puri, who I've chatted with back and forth, who's been, I guess, getting me in touch with your people. But also, I think like, just giving us a lot of context, because obviously, My First Million joined you guys, and they've been chatting with you guys a lot. So for the business side, we can talk about that, but I kind of wanted to engage your CTO, agent, engineer side of things. 
So how did you get agent religion?Dharmesh [00:01:00]: Let's see. So I've been working, I'll take like a half step back, a decade or so ago, even though actually more than that. So even before HubSpot, the company I was contemplating that I had named for was called Ingenisoft. And the idea behind Ingenisoft was a natural language interface to business software. Now realize this is 20 years ago, so that was a hard thing to do. But the actual use case that I had in mind was, you know, we had data sitting in business systems like a CRM or something like that. And my kind of what I thought clever at the time. Oh, what if we used email as the kind of interface to get to business software? And the motivation for using email is that it automatically works when you're offline. So imagine I'm getting on a plane or I'm on a plane. There was no internet on planes back then. It's like, oh, I'm going through business cards from an event I went to. I can just type things into an email just to have them all in the backlog. When it reconnects, it sends those emails to a processor that basically kind of parses effectively the commands and updates the software, sends you the file, whatever it is. And there was a handful of commands. I was a little bit ahead of the times in terms of what was actually possible. And I reattempted this natural language thing with a product called ChatSpot that I did back 20...swyx [00:02:12]: Yeah, this is your first post-ChatGPT project.Dharmesh [00:02:14]: I saw it come out. Yeah. And so I've always been kind of fascinated by this natural language interface to software. Because, you know, as software developers, myself included, we've always said, oh, we build intuitive, easy-to-use applications. And it's not intuitive at all, right? Because what we're doing is... We're taking the mental model that's in our head of what we're trying to accomplish with said piece of software and translating that into a series of touches and swipes and clicks and things like that. And there's nothing natural or intuitive about it. And so natural language interfaces, for the first time, you know, whatever the thought is you have in your head and expressed in whatever language that you normally use to talk to yourself in your head, you can just sort of emit that and have software do something. And I thought that was kind of a breakthrough, which it has been. And it's gone. So that's where I first started getting into the journey. I started because now it actually works, right? So once we got ChatGPT and you can take, even with a few-shot example, convert something into structured, even back in the ChatGP 3.5 days, it did a decent job in a few-shot example, convert something to structured text if you knew what kinds of intents you were going to have. And so that happened. And that ultimately became a HubSpot project. But then agents intrigued me because I'm like, okay, well, that's the next step here. So chat's great. Love Chat UX. But if we want to do something even more meaningful, it felt like the next kind of advancement is not this kind of, I'm chatting with some software in a kind of a synchronous back and forth model, is that software is going to do things for me in kind of a multi-step way to try and accomplish some goals. So, yeah, that's when I first got started. It's like, okay, what would that look like? Yeah. And I've been obsessed ever since, by the way.Alessio [00:03:55]: Which goes back to your first experience with it, which is like you're offline. Yeah. And you want to do a task. 
You don't need to do it right now. You just want to queue it up for somebody to do it for you. Yes. As you think about agents, like, let's start at the easy question, which is like, how do you define an agent? Maybe. You mean the hardest question in the universe? Is that what you mean?Dharmesh [00:04:12]: You said you have an irritating take. I do have an irritating take. I think, well, some number of people have been irritated, including within my own team. So I have a very broad definition for agents, which is it's AI-powered software that accomplishes a goal. Period. That's it. And what irritates people about it is like, well, that's so broad as to be completely non-useful. And I understand that. I understand the criticism. But in my mind, if you kind of fast forward months, I guess, in AI years, the implementation of it, and we're already starting to see this, and we'll talk about this, different kinds of agents, right? So I think in addition to having a usable definition, and I like yours, by the way, and we should talk more about that, that you just came out with, the classification of agents actually is also useful, which is, is it autonomous or non-autonomous? Does it have a deterministic workflow? Does it have a non-deterministic workflow? Is it working synchronously? Is it working asynchronously? Then you have the different kind of interaction modes. Is it a chat agent, kind of like a customer support agent would be? You're having this kind of back and forth. Is it a workflow agent that just does a discrete number of steps? So there's all these different flavors of agents. So if I were to draw it in a Venn diagram, I would draw a big circle that says, this is agents, and then I have a bunch of circles, some overlapping, because they're not mutually exclusive. And so I think that's what's interesting, and we're seeing development along a bunch of different paths, right? So if you look at the first implementation of agent frameworks, you look at Baby AGI and AutoGPT, I think it was, not AutoGen, that's the Microsoft one. They were way ahead of their time because they assumed this level of reasoning and execution and planning capability that just did not exist, right? So it was an interesting thought experiment, which is what it was. Even the guy that, I'm an investor in Yohei's fund that did Baby AGI. It wasn't ready, but it was a sign of what was to come. And so the question then is, when is it ready? And so lots of people talk about the state of the art when it comes to agents. I'm a pragmatist, so I think of the state of the practical. It's like, okay, well, what can I actually build that has commercial value or solves actually some discrete problem with some baseline of repeatability or verifiability?swyx [00:06:22]: There was a lot, and very, very interesting. I'm not irritated by it at all. Okay. As you know, I take a... There's a lot of anthropological view or linguistics view. And in linguistics, you don't want to be prescriptive. You want to be descriptive. Yeah. So you're a goals guy. That's the key word in your thing. And other people have other definitions that might involve like delegated trust or non-deterministic work, LLM in the loop, all that stuff. The other thing I was thinking about, just the comment on Baby AGI, AutoGPT. Yeah. In that piece that you just read, I was able to go through our backlog and just kind of track the winter of agents and then the summer now. Yeah. And it's... We can tell the whole story as an oral history, just following that thread. 
And it's really just like, I think, I tried to explain the why now, right? Like I had, there's better models, of course. There's better tool use with like, they're just more reliable. Yep. Better tools with MCP and all that stuff. And I'm sure you have opinions on that too. Business model shift, which you like a lot. I just heard you talk about RaaS with the MFM guys. Yep. Cost is dropping a lot. Yep. Inference is getting faster. There's more model diversity. Yep. Yep. I think it's a subtle point. It means that like, you have different models with different perspectives. You don't get stuck in the basin of performance of a single model. Sure. You can just get out of it by just switching models. Yep. Multi-agent research and RL fine tuning. So I just wanted to let you respond to like any of that.Dharmesh [00:07:44]: Yeah. A couple of things. Connecting the dots on the kind of the definition side of it. So we'll get the irritation out of the way completely. I have one more, even more irritating leap on the agent definition thing. So here's the way I think about it. By the way, the kind of word agent, I looked it up, like the English dictionary definition. The old school agent, yeah. Is when you have someone or something that does something on your behalf, like a travel agent or a real estate agent acts on your behalf. It's like proxy, which is a nice kind of general definition. So the other direction I'm sort of headed, and it's going to tie back to tool calling and MCP and things like that, is if you, and I'm not a biologist by any stretch of the imagination, but we have these single-celled organisms, right? Like the simplest possible form of what one would call life. But it's still life. It just happens to be single-celled. And then you can combine cells and then cells become specialized over time. And you have much more sophisticated organisms, you know, kind of further down the spectrum. In my mind, at the most fundamental level, you can almost think of having atomic agents. What is the simplest possible thing that's an agent that can still be called an agent? What is the equivalent of a kind of single-celled organism? And the reason I think that's useful is right now we're headed down the road, which I think is very exciting around tool use, right? That says, okay, the LLMs now can be provided a set of tools that it calls to accomplish whatever it needs to accomplish in the kind of furtherance of whatever goal it's trying to get done. And I'm not overly bothered by it, but if you think about it, if you just squint a little bit and say, well, what if everything was an agent? And what if tools were actually just atomic agents? Because then it's turtles all the way down, right? Then it's like, oh, well, all that's really happening with tool use is that we have a network of agents that know about each other through something like MCP and can kind of decompose a particular problem and say, oh, I'm going to delegate this to this set of agents. And why do we need to draw this distinction between tools, which are functions most of the time, and an actual agent? And so I'm going to write this irritating LinkedIn post, you know, proposing this. It's like, okay. And I'm not suggesting we should call even functions, you know, call them agents. But there is a certain amount of elegance that happens when you say, oh, we can just reduce it down to one primitive, which is an agent that you can combine in complicated ways to kind of raise the level of abstraction and accomplish higher order goals. 
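One way to read the "atomic agent" idea is that a tool is simply the smallest thing that satisfies the agent interface, and bigger agents are compositions that delegate. Here is a toy sketch; the interface and names are invented for illustration and are not agent.ai's or MCP's actual design.

```python
# A toy "everything is an agent" abstraction: a ToolAgent wraps one function (the
# single-celled case), and a CompositeAgent delegates goals to the agents it knows about.
from typing import Callable, Protocol


class Agent(Protocol):
    name: str
    def run(self, goal: str) -> str: ...


class ToolAgent:
    """An 'atomic' agent: one function, one capability."""
    def __init__(self, name: str, fn: Callable[[str], str]):
        self.name = name
        self.fn = fn

    def run(self, goal: str) -> str:
        return self.fn(goal)


class CompositeAgent:
    """Delegates to whichever registered agent claims the goal's keyword."""
    def __init__(self, name: str, registry: dict[str, Agent]):
        self.name = name
        self.registry = registry

    def run(self, goal: str) -> str:
        for keyword, agent in self.registry.items():
            if keyword in goal.lower():
                return agent.run(goal)
        return f"{self.name}: no agent found for {goal!r}"


echo = ToolAgent("echo", lambda g: f"echoing {g!r}")
shout = ToolAgent("shout", lambda g: g.upper())
team = CompositeAgent("team", {"echo": echo, "shout": shout})
print(team.run("please echo this"))
```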
Anyway, that's my answer. I'd say that's a success. Thank you for coming to my TED Talk on agent definitions.Alessio [00:09:54]: How do you define the minimum viable agent? Do you already have a definition for, like, where you draw the line between a cell and an atom? Yeah.Dharmesh [00:10:02]: So in my mind, it has to, at some level, use AI in order for it to—otherwise, it's just software. It's like, you know, we don't need another word for that. And so that's probably where I draw the line. So then the question, you know, the counterargument would be, well, if that's true, then lots of tools themselves are actually not agents because they're just doing a database call or a REST API call or whatever it is they're doing. And that does not necessarily qualify them, which is a fair counterargument. And I accept that. It's like a good argument. I still like to think about—because we'll talk about multi-agent systems, because I think—so we've accepted, which I think is true, lots of people have said it, and you've hopefully combined some of those clips of really smart people saying this is the year of agents, and I completely agree, it is the year of agents. But then shortly after that, it's going to be the year of multi-agent systems or multi-agent networks. I think that's where it's going to be headed next year. Yeah.swyx [00:10:54]: OpenAI's already on that. Yeah. My quick philosophical engagement with you on this. I often think about kind of the other spectrum, the other end of the cell spectrum. So single cell is life, multi-cell is life, and you clump a bunch of cells together in a more complex organism, they become organs, like an eye and a liver or whatever. And then obviously we consider ourselves one life form. There's not like a lot of lives within me. I'm just one life. And now, obviously, I know people don't really like to anthropomorphize agents and AI. Yeah. But we are extending our consciousness and our brain and our functionality out into machines. I just saw you were a Bee. Yeah. Which is, you know, it's nice. I have a Limitless pendant in my pocket.Dharmesh [00:11:37]: I got one of these boys. Yeah.swyx [00:11:39]: I'm testing it all out. You know, got to be early adopters. But like, we want to extend our personal memory into these things so that we can be good at the things that we're good at. And, you know, machines are good at it. Machines are there. So like, my definition of life is kind of like going outside of my own body now. I don't know if you've ever had like reflections on that. Like how yours. How our self is like actually being distributed outside of you. Yeah.Dharmesh [00:12:01]: I don't fancy myself a philosopher. But you went there. So yeah, I did go there. I'm fascinated by kind of graphs and graph theory and networks and have been for a long, long time. And to me, we're sort of all nodes in this kind of larger thing. It just so happens that we're looking at individual kind of life forms as they exist right now. But so the idea is when you put a podcast out there, there's these little kind of nodes you're putting out there of like, you know, conceptual ideas. Once again, you have varying kind of forms of those little nodes that are up there and are connected in varying and sundry ways. And so I just think of myself as being a node in a massive, massive network. And I'm producing more nodes as I put content or ideas. 
And, you know, you spend some portion of your life collecting dots, experiences, people, and some portion of your life then connecting dots from the ones that you've collected over time. And I found that really interesting things happen and you really can't know in advance how those dots are necessarily going to connect in the future. And that's, yeah. So that's my philosophical take. That's the, yes, exactly. Coming back.Alessio [00:13:04]: Yep. Do you like graph as an agent abstraction? That's been one of the hot topics with LangGraph and Pydantic and all that.Dharmesh [00:13:11]: I do. The thing I'm more interested in terms of use of graphs, and there's lots of work happening on that now, is graph data stores as an alternative in terms of knowledge stores and knowledge graphs. Yeah. Because, you know, so I've been in software now 30 plus years, right? So it's not 10,000 hours. It's like 100,000 hours that I've spent doing this stuff. And so I grew up with, so back in the day, you know, I started on mainframes. There was a product called IMS from IBM, which is basically an index database, what we'd call like a key value store today. Then we've had relational databases, right? We have tables and columns and foreign key relationships. We all know that. We have document databases like MongoDB, which is sort of a nested structure keyed by a specific index. We have vector stores, vector embedding database. And graphs are interesting for a couple of reasons. One is, so it's not classically structured in a relational way. When you say structured database, to most people, they're thinking tables and columns and in relational database and set theory and all that. Graphs still have structure, but it's not the tables and columns structure. And you could wonder, and people have made this case, that they are a better representation of knowledge for LLMs and for AI generally than other things. So that's kind of thing number one conceptually, and that might be true, I think is possibly true. And the other thing that I really like about that in the context of, you know, I've been in the context of data stores for RAG is, you know, RAG, you say, oh, I have a million documents, I'm going to build the vector embeddings, I'm going to come back with the top X based on the semantic match, and that's fine. All that's very, very useful. But the reality is something gets lost in the chunking process and the, okay, well, those tend, you know, like, you don't really get the whole picture, so to speak, and maybe not even the right set of dimensions on the kind of broader picture. And it makes intuitive sense to me that if we did capture it properly in a graph form, that maybe that feeding into a RAG pipeline will actually yield better results for some use cases, I don't know, but yeah.Alessio [00:15:03]: And do you feel like at the core of it, there's this difference between imperative and declarative programs? Because if you think about HubSpot, it's like, you know, people and graph kind of goes hand in hand, you know, but I think maybe the software before was more like primary foreign key based relationship, versus now the models can traverse through the graph more easily.Dharmesh [00:15:22]: Yes. So I like that representation. There's something. It's just conceptually elegant about graphs and just from the representation of it, they're much more discoverable, you can kind of see it, there's observability to it, versus kind of embeddings, which you can't really do much with as a human. 
You know, once they're in there, you can't pull stuff back out. But yeah, I like that kind of idea of it. And the other thing that's kind of, because I love graphs, I've been long obsessed with PageRank from back in the early days. And, you know, one of the kind of simplest algorithms in terms of coming up, you know, with a phone, everyone's been exposed to PageRank. And the idea is that, and so I had this other idea for a project, not a company, and I have hundreds of these, called NodeRank, is to be able to take the idea of PageRank and apply it to an arbitrary graph that says, okay, I'm going to define what authority looks like and say, okay, well, that's interesting to me, because then if you say, I'm going to take my knowledge store, and maybe this person that contributed some number of chunks to the graph data store has more authority on this particular use case or prompt that's being submitted than this other one that may, or maybe this one was more. popular, or maybe this one has, whatever it is, there should be a way for us to kind of rank nodes in a graph and sort them in some, some useful way. Yeah.swyx [00:16:34]: So I think that's generally useful for, for anything. I think the, the problem, like, so even though at my conferences, GraphRag is super popular and people are getting knowledge, graph religion, and I will say like, it's getting space, getting traction in two areas, conversation memory, and then also just rag in general, like the, the, the document data. Yeah. It's like a source. Most ML practitioners would say that knowledge graph is kind of like a dirty word. The graph database, people get graph religion, everything's a graph, and then they, they go really hard into it and then they get a, they get a graph that is too complex to navigate. Yes. And so like the, the, the simple way to put it is like you at running HubSpot, you know, the power of graphs, the way that Google has pitched them for many years, but I don't suspect that HubSpot itself uses a knowledge graph. No. Yeah.Dharmesh [00:17:26]: So when is it over engineering? Basically? It's a great question. I don't know. So the question now, like in AI land, right, is the, do we necessarily need to understand? So right now, LLMs for, for the most part are somewhat black boxes, right? We sort of understand how the, you know, the algorithm itself works, but we really don't know what's going on in there and, and how things come out. So if a graph data store is able to produce the outcomes we want, it's like, here's a set of queries I want to be able to submit and then it comes out with useful content. Maybe the underlying data store is as opaque as a vector embeddings or something like that, but maybe it's fine. Maybe we don't necessarily need to understand it to get utility out of it. And so maybe if it's messy, that's okay. Um, that's, it's just another form of lossy compression. Uh, it's just lossy in a way that we just don't completely understand in terms of, because it's going to grow organically. Uh, and it's not structured. It's like, ah, we're just gonna throw a bunch of stuff in there. Let the, the equivalent of the embedding algorithm, whatever they called in graph land. Um, so the one with the best results wins. I think so. Yeah.swyx [00:18:26]: Or is this the practical side of me is like, yeah, it's, if it's useful, we don't necessarilyDharmesh [00:18:30]: need to understand it.swyx [00:18:30]: I have, I mean, I'm happy to push back as long as you want. 
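The NodeRank idea Dharmesh sketches is essentially PageRank run over an arbitrary knowledge graph, with authority defined by whatever edges you choose. A minimal power-iteration version on an invented toy graph:

```python
# "NodeRank" reduced to plain PageRank: rank nodes in an arbitrary graph by how much
# authority flows into them. Node names and edges below are made up for illustration.
def pagerank(edges: dict[str, list[str]], damping: float = 0.85, iters: int = 50) -> dict[str, float]:
    nodes = set(edges) | {n for targets in edges.values() for n in targets}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new_rank = {n: (1 - damping) / len(nodes) for n in nodes}
        for src, targets in edges.items():
            if not targets:
                continue
            share = damping * rank[src] / len(targets)
            for dst in targets:
                new_rank[dst] += share
        rank = new_rank
    return rank


# Nodes could be chunks or contributors in a graph knowledge store; edges could be
# citations or "derived from" links. Higher rank ~ more authority for retrieval ordering.
knowledge_graph = {
    "chunk_a": ["chunk_b", "chunk_c"],
    "chunk_b": ["chunk_c"],
    "chunk_c": ["chunk_a"],
    "chunk_d": ["chunk_c"],
}
for node, score in sorted(pagerank(knowledge_graph).items(), key=lambda kv: -kv[1]):
    print(node, round(score, 3))
```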
Uh, it's not practical to evaluate like the 10 different options out there because it takes time. It takes people, it takes, you know, resources, right? Set. That's the first thing. Second thing is your evals are typically on small things and some things only work at scale. Yup. Like graphs. Yup.Dharmesh [00:18:46]: Yup. That's, yeah, no, that's fair. And I think this is one of the challenges in terms of implementation of graph databases is that the most common approach that I've seen developers do, I've done it myself, is that, oh, I've got a Postgres database or a MySQL or whatever. I can represent a graph with a very set of tables with a parent child thing or whatever. And that sort of gives me the ability, uh, why would I need anything more than that? And the answer is, well, if you don't need anything more than that, you don't need anything more than that. But there's a high chance that you're sort of missing out on the actual value that, uh, the graph representation gives you. Which is the ability to traverse the graph, uh, efficiently in ways that kind of going through the, uh, traversal in a relational database form, even though structurally you have the data, practically you're not gonna be able to pull it out in, in useful ways. Uh, so you wouldn't like represent a social graph, uh, in, in using that kind of relational table model. It just wouldn't scale. It wouldn't work.swyx [00:19:36]: Uh, yeah. Uh, I think we want to move on to MCP. Yeah. But I just want to, like, just engineering advice. Yeah. Uh, obviously you've, you've, you've run, uh, you've, you've had to do a lot of projects and run a lot of teams. Do you have a general rule for over-engineering or, you know, engineering ahead of time? You know, like, because people, we know premature engineering is the root of all evil. Yep. But also sometimes you just have to. Yep. When do you do it? Yes.Dharmesh [00:19:59]: It's a great question. This is, uh, a question as old as time almost, which is what's the right and wrong levels of abstraction. That's effectively what, uh, we're answering when we're trying to do engineering. I tend to be a pragmatist, right? So here's the thing. Um, lots of times doing something the right way. Yeah. It's like a marginal increased cost in those cases. Just do it the right way. And this is what makes a, uh, a great engineer or a good engineer better than, uh, a not so great one. It's like, okay, all things being equal. If it's going to take you, you know, roughly close to constant time anyway, might as well do it the right way. Like, so do things well, then the question is, okay, well, am I building a framework as the reusable library? To what degree, uh, what am I anticipating in terms of what's going to need to change in this thing? Uh, you know, along what dimension? And then I think like a business person in some ways, like what's the return on calories, right? So, uh, and you look at, um, energy, the expected value of it's like, okay, here are the five possible things that could happen, uh, try to assign probabilities like, okay, well, if there's a 50% chance that we're going to go down this particular path at some day, like, or one of these five things is going to happen and it costs you 10% more to engineer for that. It's basically, it's something that yields a kind of interest compounding value. Um, as you get closer to the time of, of needing that versus having to take on debt, which is when you under engineer it, you're taking on debt. 
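The "return on calories" calculus Dharmesh describes is an expected-value comparison, and a toy calculation (all numbers invented) shows how the answer flips with the probability that the anticipated need ever materializes:

```python
# Toy expected-value comparison: engineer ahead now versus pay tech-debt interest later.
def expected_cost(base: float, extra_now: float, p_need: float, debt: float) -> tuple[float, float]:
    over = base + extra_now       # pay the generalization cost up front, always
    under = base + p_need * debt  # pay the rework cost only if the need actually shows up
    return over, under


# 10 days of work, +1 day to engineer ahead, 3 days of rework if the need materializes.
for p in (0.2, 0.5, 0.8):
    over, under = expected_cost(10, 1, p, 3)
    winner = "under-engineer" if under < over else "over-engineer"
    print(f"p(need)={p:.1f}: over={over:.1f}d under={under:.1f}d -> {winner}")
```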
You're going to have to pay off when you do get to that eventuality where something happens. One thing as a pragmatist, uh, so I would rather under engineer something than over engineer it. If I were going to err on the side of something, and here's the reason is that when you under engineer it, uh, yes, you take on tech debt, uh, but the interest rate is relatively known and payoff is very, very possible, right? Which is, oh, I took a shortcut here as a result of which now this thing that should have taken me a week is now going to take me four weeks. Fine. But if that particular thing that you thought might happen, never actually, you never have that use case transpire or just doesn't, it's like, well, you just save yourself time, right? And that has value because you were able to do other things instead of, uh, kind of slightly over-engineering it away, over-engineering it. But there's no perfect answers in art form in terms of, uh, and yeah, we'll, we'll bring kind of this layers of abstraction back on the code generation conversation, which we'll, uh, I think I have later on, butAlessio [00:22:05]: I was going to ask, we can just jump ahead quickly. Yeah. Like, as you think about vibe coding and all that, how does the. Yeah. Percentage of potential usefulness change when I feel like we over-engineering a lot of times it's like the investment in syntax, it's less about the investment in like arc exacting. Yep. Yeah. How does that change your calculus?Dharmesh [00:22:22]: A couple of things, right? One is, um, so, you know, going back to that kind of ROI or a return on calories, kind of calculus or heuristic you think through, it's like, okay, well, what is it going to cost me to put this layer of abstraction above the code that I'm writing now, uh, in anticipating kind of future needs. If the cost of fixing, uh, or doing under engineering right now. Uh, we'll trend towards zero that says, okay, well, I don't have to get it right right now because even if I get it wrong, I'll run the thing for six hours instead of 60 minutes or whatever. It doesn't really matter, right? Like, because that's going to trend towards zero to be able, the ability to refactor a code. Um, and because we're going to not that long from now, we're going to have, you know, large code bases be able to exist, uh, you know, as, as context, uh, for a code generation or a code refactoring, uh, model. So I think it's going to make it, uh, make the case for under engineering, uh, even stronger. Which is why I take on that cost. You just pay the interest when you get there, it's not, um, just go on with your life vibe coded and, uh, come back when you need to. Yeah.Alessio [00:23:18]: Sometimes I feel like there's no decision-making in some things like, uh, today I built a autosave for like our internal notes platform and I literally just ask them cursor. Can you add autosave? Yeah. I don't know if it's over under engineer. Yep. I just vibe coded it. Yep. And I feel like at some point we're going to get to the point where the models kindDharmesh [00:23:36]: of decide where the right line is, but this is where the, like the, in my mind, the danger is, right? So there's two sides to this. One is the cost of kind of development and coding and things like that stuff that, you know, we talk about. 
But then like in your example, you know, one of the risks that we have is that because adding a feature, uh, like a save or whatever the feature might be to a product as that price tends towards zero, are we going to be less discriminant about what features we add as a result of making products more complicated, which has a negative impact on the user and a negative impact on the business. Um, and so that's the thing I worry about if it starts to become too easy, are we going to be too promiscuous in our, uh, kind of extension, adding product extensions and things like that. It's like, ah, why not add X, Y, Z or whatever? Back then it was like, oh, we only have so many engineering hours or story points or however you measure things. Uh, that at least kept us in check a little bit. Yeah.Alessio [00:24:22]: And then over engineering, you're like, yeah, it's kind of like you're putting that on yourself. Yeah. Like now it's like the models don't understand that if they add too much complexity, it's going to come back to bite them later. Yep. So they just do whatever they want to do. Yeah. And I'm curious where in the workflow that's going to be, where it's like, Hey, this is like the amount of complexity and over-engineering you can do before you got to ask me if we should actually do it versus like do something else.Dharmesh [00:24:45]: So you know, we've already, let's like, we're living this, uh, in the code generation world, this kind of compressed, um, cycle time. Right. It's like, okay, we went from auto-complete, uh, in GitHub Copilot to like, oh, finish this particular thing and hit tab to a, oh, I sort of know your file or whatever. I can write out a full function for you, to now I can like hold a bunch of the context in my head. Uh, so we can do app generation, which we have now with Lovable and Bolt and Replit Agent. Yeah. And other things. So then the question is, okay, well, where does it naturally go from here? So we're going to generate products. Makes sense. We might be able to generate platforms as though I want a platform for ERP that does this, whatever. And that includes the APIs, includes the product and the UI, and all the things that make for a platform. There's nothing that says we would stop, like, okay, can you generate an entire software company someday? Right. Uh, with the platform and the monetization and the go-to-market and the whatever. And you know, that that's interesting to me in terms of, uh, you know, what, when you take it to almost ludicrous levels of abstraction.swyx [00:25:39]: It's like, okay, turn it to 11. You mentioned vibe coding, so I have to, this is a blog post I haven't written, but I'm kind of exploring it. Is the junior engineer dead?Dharmesh [00:25:49]: I don't think so. I think what will happen is that the junior engineer will be able to, if all they're bringing to the table is the fact that they are a junior engineer, then yes, they're likely dead. But hopefully if they can communicate with carbon-based life forms, they can interact with product, if they're willing to talk to customers, they can take their kind of basic understanding of engineering and how kind of software works. I think that has value. So I have a 14-year-old right now who's taking Python programming class, and some people ask me, it's like, why is he learning coding? And my answer is, is because it's not about the syntax, it's not about the coding. What he's learning is like the fundamental thing of like how things work. 
And there's value in that. I think there's going to be timeless value in systems thinking and abstractions and what that means. And whether functions manifested as math, which he's going to get exposed to regardless, or there are some core primitives to the universe, I think, that the more you understand them, those are what I would kind of think of as like really large dots in your life that will have a higher gravitational pull and value to them that you'll then be able to. So I want him to collect those dots, and he's not resisting. So it's like, okay, while he's still listening to me, I'm going to have him do things that I think will be useful.swyx [00:26:59]: You know, part of one of the pitches that I evaluated for AI engineer is a term. And the term is that maybe the traditional interview path or career path of software engineer goes away, which is because what's the point of LeetCode? Yeah. And, you know, it actually matters more that you know how to work with AI and to implement the things that you want. Yep.Dharmesh [00:27:16]: That's one of the like interesting things that's happened with generative AI. You know, you go from machine learning and the models and just that underlying form, which is like true engineering, right? Like the actual, what I call real engineering. I don't think of myself as a real engineer, actually. I'm a developer. But now with generative AI. We call it AI and it's obviously got its roots in machine learning, but it just feels like fundamentally different to me. Like you have the vibe. It's like, okay, well, this is just a whole different approach to software development to so many different things. And so I'm wondering now, it's like an AI engineer is like, if you were like to draw the Venn diagram, it's interesting because the cross between like AI things, generative AI and what the tools are capable of, what the models do, and this whole new kind of body of knowledge that we're still building out, it's still very young, intersected with kind of classic engineering, software engineering. Yeah.swyx [00:28:04]: I just described the overlap as it separates out eventually until it's its own thing, but it's starting out as a software. Yeah.Alessio [00:28:11]: That makes sense. So to close the vibe coding loop, the other big hype now is MCPs. Obviously, I would say Claude Desktop and Cursor are like the two main drivers of MCP usage. I would say my favorite is the Sentry MCP. I can pull in errors and then you can just put the context in Cursor. How do you think about that abstraction layer? Does it feel... Does it feel almost too magical in a way? Do you think it's like you get enough? Because you don't really see how the server itself is then kind of like repackaging the information for you?Dharmesh [00:28:41]: I think MCP as a standard is one of the better things that's happened in the world of AI because a standard needed to exist and absent a standard, there was a set of things that just weren't possible. Now, we can argue whether it's the best possible manifestation of a standard or not. Does it do too much? Does it do too little? I get that, but it's just simple enough to both be useful and unobtrusive. It's understandable and adoptable by mere mortals, right? It's not overly complicated. You know, a reasonable engineer can stand up an MCP server relatively easily. The thing that has me excited about it is like, so I'm a big believer in multi-agent systems. And so that's going back to our kind of this idea of an atomic agent. 
So imagine the MCP server, like obviously it calls tools, but the way I think about it, so I'm working on my current passion project, which is agent.ai. And we'll talk more about that in a little bit. More about the, I think we should, because I think it's interesting not to promote the project at all, but there's some interesting ideas in there. One of which is around, we're going to need a mechanism for, if agents are going to collaborate and be able to delegate, there's going to need to be some form of discovery and we're going to need some standard way. It's like, okay, well, I just need to know what this thing over here is capable of. We're going to need a registry, which Anthropic's working on. I'm sure others will and have been doing directories of, and there's going to be a standard around that too. How do you build out a directory of MCP servers? I think that's going to unlock so many things just because, and we're already starting to see it. So I think MCP or something like it is going to be the next major unlock because it allows systems that don't know about each other, don't need to, it's that kind of decoupling of like Sentry and whatever tools someone else was building. And it's not just about, you know, Claude Desktop or things like, even on the client side, I think we're going to see very interesting consumers of MCP, MCP clients versus just the chatbot-y kind of things. Like, you know, Claude Desktop and Cursor and things like that. But yeah, I'm very excited about MCP in that general direction.swyx [00:30:39]: I think the typical cynical developer take, it's like, we have OpenAPI. Yeah. What's the new thing? I don't know if you have a, do you have a quick MCP versus everything else? Yeah.Dharmesh [00:30:49]: So it's, so I like OpenAPI, right? So just a descriptive thing. It's OpenAPI. OpenAPI. Yes, that's what I meant. So it's basically a self-documenting thing. We can do machine-generated, lots of things from that output. It's a structured definition of an API. I get that, love it. But MCPs sort of are kind of use case specific. They're perfect for exactly what we're trying to use them for around LLMs in terms of discovery. It's like, okay, I don't necessarily need to know kind of all this detail. And so right now we have, we'll talk more about like MCP server implementations, but We will? I think, I don't know. Maybe we won't. At least it's in my head. It's like a back processor. But I do think MCP adds value above OpenAPI. It's, yeah, just because it solves this particular thing. And if we had come to the world, which we have, like, it's like, hey, we already have OpenAPI. It's like, if that were good enough for the universe, the universe would have adopted it already. There's a reason why MCP is taking off, because it marginally adds something that was missing before and doesn't go too far. And so that's why the kind of rate of adoption, you folks have written about this and talked about it. Yeah, why MCP won. Yeah. And it won because the universe decided that this was useful and maybe it gets supplanted by something else. Yeah. And maybe we discover, oh, maybe OpenAPI was good enough the whole time. I doubt that.swyx [00:32:09]: The meta lesson, this is, I mean, he's an investor in DevTools companies. I work in developer experience at DevRel in DevTools companies. Yep. Everyone wants to own the standard. Yeah. I'm sure you guys have tried to launch your own standards. Actually, is HubSpot known for a standard? You know, obviously inbound marketing. 
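For reference, standing up a small MCP server really is a short exercise. Here is a sketch assuming the official MCP Python SDK's FastMCP helper (treat the exact API as illustrative and check current docs); the tool itself is a placeholder for whatever capability you expose.

```python
# Minimal MCP server sketch using the MCP Python SDK's FastMCP helper (pip install "mcp[cli]").
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("domain-tools")  # name shown to MCP clients such as Claude Desktop or Cursor


@mcp.tool()
def estimate_domain_value(domain: str) -> str:
    """Very rough placeholder valuation so the tool has something to return."""
    score = len(domain.split(".")[0])
    return f"{domain}: shorter names usually sell for more (length score: {score})"


if __name__ == "__main__":
    mcp.run()  # stdio transport by default, which desktop MCP clients speak
```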
But is there a standard or protocol that you ever tried to push? No.Dharmesh [00:32:30]: And there's a reason for this. Yeah. Is that? And I don't mean, need to mean, speak for the people of HubSpot, but I personally. You kind of do. I'm not smart enough. That's not the, like, I think I have a. You're smart. Not enough for that. I'm much better off understanding the standards that are out there. And I'm more on the composability side. Let's, like, take the pieces of technology that exist out there, combine them in creative, unique ways. And I like to consume standards. I don't like to, and that's not that I don't like to create them. I just don't think I have the, both the raw wattage or the credibility. It's like, okay, well, who the heck is Dharmesh, and why should we adopt a standard he created?swyx [00:33:07]: Yeah, I mean, there are people who don't monetize standards, like OpenTelemetry is a big standard, and LightStep never capitalized on that.Dharmesh [00:33:15]: So, okay, so if I were to do a standard, there's two things that have been in my head in the past. I was one around, a very, very basic one around, I don't even have the domain, I have a domain for everything, for open marketing. Because the issue we had in HubSpot grew up in the marketing space. There we go. There was no standard around data formats and things like that. It doesn't go anywhere. But the other one, and I did not mean to go here, but I'm going to go here. It's called OpenGraph. I know the term was already taken, but it hasn't been used for like 15 years now for its original purpose. But what I think should exist in the world is right now, our information, all of us, nodes are in the social graph at Meta or the professional graph at LinkedIn. Both of which are actually relatively closed in actually very annoying ways. Like very, very closed, right? Especially LinkedIn. Especially LinkedIn. I personally believe that if it's my data, and if I would get utility out of it being open, I should be able to make my data open or publish it in whatever forms that I choose, as long as I have control over it as opt-in. So the idea is around OpenGraph that says, here's a standard, here's a way to publish it. I should be able to go to OpenGraph.org slash Dharmesh dot JSON and get it back. And it's like, here's your stuff, right? And I can choose along the way and people can write to it and I can approve. And there can be an entire system. And if I were to do that, I would do it as a... Like a public benefit, non-profit-y kind of thing, as this is a contribution to society. I wouldn't try to commercialize that. Have you looked at AT Proto? What's that? AT Proto.swyx [00:34:43]: It's the protocol behind Bluesky. Okay. My good friend, Dan Abramov, who was the face of React for many, many years, now works there. And he actually did a talk that I can send you, which basically kind of tries to articulate what you just said. But he does, he loves doing these like really great analogies, which I think you'll like. Like, you know, a lot of our data is behind a handle, behind a domain. Yep. So he's like, all right, what if we flip that? What if it was like our handle and then the domain? Yep. So, and that's really like your data should belong to you. Yep. And I should not have to wait 30 days for my Twitter data to export. Yep.Dharmesh [00:35:19]: you should at least be able to automate it or do like, yes, I should be able to plug it into an agentic thing. Yeah. Yes. I think we're... Because so much of our data is... 
Locked up. I think the trick here isn't that standard. It is getting the normies to care.swyx [00:35:37]: Yeah. Because normies don't care.Dharmesh [00:35:38]: That's true. But building on that, normies don't care. So, you know, privacy is a really hot topic and an easy word to use, but it's not a binary thing. Like there are use cases where, and we make these choices all the time, that I will trade, not all privacy, but I will trade some privacy for some productivity gain or some benefit to me that says, oh, I don't care about that particular data being online if it gives me this in return, or I don't mind sharing this information with this company.Alessio [00:36:02]: If I'm getting, you know, this in return, but that sort of should be my option. I think now with computer use, you can actually automate some of the exports. Yes. Like something we've been doing internally is like everybody exports their LinkedIn connections. Yep. And then internally, we kind of merge them together to see how we can connect our companies to customers or things like that.Dharmesh [00:36:21]: And not to pick on LinkedIn, but since we're talking about it, but they feel strongly enough on the, you know, do not take LinkedIn data that they will block even browser use kind of things or whatever. They go to great, great lengths, even to see patterns of usage. And it says, oh, there's no way you could have, you know, gotten that particular thing or whatever without, and it's, so it's, there's...swyx [00:36:42]: Wasn't there a Supreme Court case that they lost? Yeah.Dharmesh [00:36:45]: So the one they lost was around someone that was scraping public data that was on the public internet. And that particular company had not signed any terms of service or whatever. It's like, oh, I'm just taking data that's on, there was no, and so that's why they won. But now, you know, the question is around, can LinkedIn... I think they can. Like, when you use, as a user, you use LinkedIn, you are signing up for their terms of service. And if they say, well, this kind of use of your LinkedIn account that violates our terms of service, they can shut your account down, right? They can. And they, yeah, so, you know, we don't need to make this a discussion. By the way, I love the company, don't get me wrong. I'm an avid user of the product. You know, I've got... Yeah, I mean, you've got over a million followers on LinkedIn, I think. Yeah, I do. And I've known people there for a long, long time, right? And I have lots of respect. And I understand even where the mindset originally came from of this kind of members-first approach to, you know, a privacy-first. I sort of get that. But sometimes you sort of have to wonder, it's like, okay, well, that was 15, 20 years ago. There's likely some controlled ways to expose some data on some member's behalf and not just completely be a binary. It's like, no, thou shalt not have the data.swyx [00:37:54]: Well, just pay for sales navigator.Alessio [00:37:57]: Before we move to the next layer of instruction, anything else on MCP you mentioned? Let's move back and then I'll tie it back to MCPs.Dharmesh [00:38:05]: So I think the... Open this with agent. Okay, so I'll start with... Here's my kind of running thesis, is that as AI and agents evolve, which they're doing very, very quickly, we're going to look at them more and more. I don't like to anthropomorphize. We'll talk about why this is not that. Less as just like raw tools and more like teammates. They'll still be software. 
They should self-disclose as being software. I'm totally cool with that. But I think what's going to happen is that in the same way you might collaborate with a team member on Slack or Teams or whatever you use, you can imagine a series of agents that do specific things just like a team member might do, that you can delegate things to. You can collaborate. You can say, hey, can you take a look at this? Can you proofread that? Can you try this? You can... Whatever it happens to be. So I think it is... I will go so far as to say it's inevitable that we're going to have hybrid teams someday. And what I mean by hybrid teams... So back in the day, hybrid teams were, oh, well, you have some full-time employees and some contractors. Then it was like hybrid teams are some people that are in the office and some that are remote. That's the kind of form of hybrid. The next form of hybrid is like the carbon-based life forms and agents and AI and some form of software. So let's say we temporarily stipulate that I'm right about that over some time horizon that eventually we're going to have these kind of digitally hybrid teams. So if that's true, then the question you sort of ask yourself is that then what needs to exist in order for us to get the full value of that new model? It's like, okay, well... You sort of need to... It's like, okay, well, how do I... If I'm building a digital team, like, how do I... Just in the same way, if I'm interviewing for an engineer or a designer or a PM, whatever, it's like, well, that's why we have professional networks, right? It's like, oh, they have a presence on likely LinkedIn. I can go through that semi-structured, structured form, and I can see the experience of whatever, you know, self-disclosed. But, okay, well, agents are going to need that someday. And so I'm like, okay, well, this seems like a thread that's worth pulling on. That says, okay. So I... So agent.ai is out there. And it's LinkedIn for agents. It's LinkedIn for agents. It's a professional network for agents. And the more I pull on that thread, it's like, okay, well, if that's true, like, what happens, right? It's like, oh, well, they have a profile just like anyone else, just like a human would. It's going to be a graph underneath, just like a professional network would be. It's just that... And you can have its, you know, connections and follows, and agents should be able to post. That's maybe how they do release notes. Like, oh, I have this new version. Whatever they decide to post, it should just be able to... Behave as a node on the network of a professional network. As it turns out, the more I think about that and pull on that thread, the more and more things, like, start to make sense to me. So it may be more than just a pure professional network. So my original thought was, okay, well, it's a professional network and agents as they exist out there, which I think there's going to be more and more of, will kind of exist on this network and have the profile. But then, and this is always dangerous, I'm like, okay, I want to see a world where thousands of agents are out there in order for the... Because those digital employees, the digital workers don't exist yet in any meaningful way. And so then I'm like, oh, can I make that easier for, like... And so I have, as one does, it's like, oh, I'll build a low-code platform for building agents. How hard could that be, right? Like, very hard, as it turns out. But it's been fun. So now, agent.ai has 1.3 million users. 
3,000 people have actually, you know, built some variation of an agent, sometimes just for their own personal productivity. About 1,000 of which have been published. And the reason this comes back to MCP for me, so imagine that and other networks, since I know agent.ai. So right now, we have an MCP server for agent.ai that exposes all the internally built agents that we have that do, like, super useful things. Like, you know, I have access to a Twitter API that I can subsidize the cost. And I can say, you know, if you're looking to build something for social media, these kinds of things, with a single API key, and it's all completely free right now, I'm funding it. That's a useful way for it to work. And then we have a developer to say, oh, I have this idea. I don't have to worry about open AI. I don't have to worry about, now, you know, this particular model is better. It has access to all the models with one key. And we proxy it kind of behind the scenes. And then expose it. So then we get this kind of community effect, right? That says, oh, well, someone else may have built an agent to do X. Like, I have an agent right now that I built for myself to do domain valuation for website domains because I'm obsessed with domains, right? And, like, there's no efficient market for domains. There's no Zillow for domains right now that tells you, oh, here are what houses in your neighborhood sold for. It's like, well, why doesn't that exist? We should be able to solve that problem. And, yes, you're still guessing. Fine. There should be some simple heuristic. So I built that. It's like, okay, well, let me go look for past transactions. You say, okay, I'm going to type in agent.ai, agent.com, whatever domain. What's it actually worth? I'm looking at buying it. It can go and say, oh, which is what it does. It's like, I'm going to go look at are there any published domain transactions recently that are similar, either use the same word, same top-level domain, whatever it is. And it comes back with an approximate value, and it comes back with its kind of rationale for why it picked the value and comparable transactions. Oh, by the way, this domain sold for published. Okay. So that agent now, let's say, existed on the web, on agent.ai. Then imagine someone else says, oh, you know, I want to build a brand-building agent for startups and entrepreneurs to come up with names for their startup. Like a common problem, every startup is like, ah, I don't know what to call it. And so they type in five random words that kind of define whatever their startup is. And you can do all manner of things, one of which is like, oh, well, I need to find the domain for it. What are possible choices? Now it's like, okay, well, it would be nice to know if there's an aftermarket price for it, if it's listed for sale. Awesome. Then imagine calling this valuation agent. It's like, okay, well, I want to find where the arbitrage is, where the agent valuation tool says this thing is worth $25,000. It's listed on GoDaddy for $5,000. It's close enough. Let's go do that. Right? And that's a kind of composition use case that in my future state. Thousands of agents on the network, all discoverable through something like MCP. And then you as a developer of agents have access to all these kind of Lego building blocks based on what you're trying to solve. Then you blend in orchestration, which is getting better and better with the reasoning models now. Just describe the problem that you have. 
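The valuation agent's approach, find comparable published sales and reason from them, reduces to a simple heuristic. A toy sketch with made-up sale data, not the actual agent.ai implementation:

```python
# Toy "Zillow for domains" heuristic: average the published sales that share a keyword
# or TLD with the target domain, and return the comparables as the rationale.
COMPARABLE_SALES = {
    "agenthub.com": 12_000,
    "agentworks.ai": 8_500,
    "datahub.io": 4_000,
    "promptagent.ai": 9_000,
}


def estimate(domain: str):
    name, _, tld = domain.partition(".")
    comps = [
        (d, price) for d, price in COMPARABLE_SALES.items()
        if d.endswith("." + tld) or name[:5] in d   # same TLD or shared keyword
    ]
    if not comps:
        return None, []
    rationale = [f"{d} sold for ${p:,}" for d, p in comps]
    return sum(p for _, p in comps) / len(comps), rationale


value, why = estimate("agent.ai")
print(f"estimate: ${value:,.0f}" if value else "no comparables found")
print("\n".join(why))
```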
Now, the next layer that we're all contending with is that how many tools can you actually give an LLM before the LLM breaks? That number used to be like 15 or 20 before you kind of started to vary dramatically. And so that's the thing I'm thinking about now. It's like, okay, if I want to... If I want to expose 1,000 of these agents to a given LLM, obviously I can't give it all 1,000. Is there some intermediate layer that says, based on your prompt, I'm going to make a best guess at which agents might be able to be helpful for this particular thing? Yeah.Alessio [00:44:37]: Yeah, like RAG for tools. Yep. I did build the Latent Space Researcher on agent.ai. Okay. Nice. Yeah, that seems like, you know, then there's going to be a Latent Space Scheduler. And then once I schedule a research, you know, and you build all of these things. By the way, my apologies for the user experience. You realize I'm an engineer. It's pretty good.swyx [00:44:56]: I think it's a normie-friendly thing. Yeah. That's your magic. HubSpot does the same thing.Alessio [00:45:01]: Yeah, just to like quickly run through it. You can basically create all these different steps. And these steps are like, you know, static versus like variable-driven things. How did you decide between this kind of like low-code-ish versus doing, you know, low-code with code backend versus like not exposing that at all? Any fun design decisions? Yeah. And this is, I think...Dharmesh [00:45:22]: I think lots of people are likely sitting in exactly my position right now, coming through the choosing between deterministic. Like if you're like in a business or building, you know, some sort of agentic thing, do you decide to do a deterministic thing? Or do you go non-deterministic and just let the alum handle it, right, with the reasoning models? The original idea and the reason I took the low-code stepwise, a very deterministic approach. A, the reasoning models did not exist at that time. That's thing number one. Thing number two is if you can get... If you know in your head... If you know in your head what the actual steps are to accomplish whatever goal, why would you leave that to chance? There's no upside. There's literally no upside. Just tell me, like, what steps do you need executed? So right now what I'm playing with... So one thing we haven't talked about yet, and people don't talk about UI and agents. Right now, the primary interaction model... Or they don't talk enough about it. I know some people have. But it's like, okay, so we're used to the chatbot back and forth. Fine. I get that. But I think we're going to move to a blend of... Some of those things are going to be synchronous as they are now. But some are going to be... Some are going to be async. It's just going to put it in a queue, just like... And this goes back to my... Man, I talk fast. But I have this... I only have one other speed. It's even faster. So imagine it's like if you're working... So back to my, oh, we're going to have these hybrid digital teams. Like, you would not go to a co-worker and say, I'm going to ask you to do this thing, and then sit there and wait for them to go do it. Like, that's not how the world works. So it's nice to be able to just, like, hand something off to someone. It's like, okay, well, maybe I expect a response in an hour or a day or something like that.Dharmesh [00:46:52]: In terms of when things need to happen. So the UI around agents. 
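The intermediate layer mentioned above, picking which agents to expose based on the prompt, is essentially retrieval over tool descriptions: shortlist a few likely-relevant agents before the model ever sees them. A sketch using bag-of-words similarity as a stand-in for real embeddings; the agent names are invented.

```python
# "RAG for tools": score each agent's description against the prompt and only expose
# the top-k to the LLM, instead of handing it hundreds of tools at once.
from collections import Counter
import math

AGENTS = {
    "domain-valuation": "estimate the market value of a website domain name",
    "startup-namer": "suggest brand and startup names from a few keywords",
    "latent-space-researcher": "research and summarize recent AI papers and posts",
    "tweet-writer": "draft social media posts for a product launch",
}


def _vec(text: str) -> Counter:
    return Counter(text.lower().split())


def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def shortlist(prompt: str, k: int = 2) -> list[str]:
    q = _vec(prompt)
    ranked = sorted(AGENTS, key=lambda name: _cosine(q, _vec(AGENTS[name])), reverse=True)
    return ranked[:k]  # only these get exposed to the LLM as tools


print(shortlist("what is this domain name worth if I want to buy it"))
```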
So if you look at the output of agent.ai agents right now, they are the simplest possible manifestation of a UI, right? That says, oh, we have inputs of, like, four different types. Like, we've got a dropdown, we've got multi-select, all the things. It's like back in HTML, the original HTML 1.0 days, right? Like, you're the smallest possible set of primitives for a UI. And it just says, okay, because we need to collect some information from the user, and then we go do steps and do things. And generate some output in HTML or markup are the two primary examples. So the thing I've been asking myself, if I keep going down that path. So people ask me, I get requests all the time. It's like, oh, can you make the UI sort of boring? I need to be able to do this, right? And if I keep pulling on that, it's like, okay, well, now I've built an entire UI builder thing. Where does this end? And so I think the right answer, and this is what I'm going to be backcoding once I get done here, is around injecting a code generation UI generation into, the agent.ai flow, right? As a builder, you're like, okay, I'm going to describe the thing that I want, much like you would do in a vibe coding world. But instead of generating the entire app, it's going to generate the UI that exists at some point in either that deterministic flow or something like that. It says, oh, here's the thing I'm trying to do. Go generate the UI for me. And I can go through some iterations. And what I think of it as a, so it's like, I'm going to generate the code, generate the code, tweak it, go through this kind of prompt style, like we do with vibe coding now. And at some point, I'm going to be happy with it. And I'm going to hit save. And that's going to become the action in that particular step. It's like a caching of the generated code that I can then, like incur any inference time costs. It's just the actual code at that point.Alessio [00:48:29]: Yeah, I invested in a company called E2B, which does code sandbox. And they powered the LM arena web arena. So it's basically the, just like you do LMS, like text to text, they do the same for like UI generation. So if you're asking a model, how do you do it? But yeah, I think that's kind of where.Dharmesh [00:48:45]: That's the thing I'm really fascinated by. So the early LLM, you know, we're understandably, but laughably bad at simple arithmetic, right? That's the thing like my wife, Normies would ask us, like, you call this AI, like it can't, my son would be like, it's just stupid. It can't even do like simple arithmetic. And then like we've discovered over time that, and there's a reason for this, right? It's like, it's a large, there's, you know, the word language is in there for a reason in terms of what it's been trained on. It's not meant to do math, but now it's like, okay, well, the fact that it has access to a Python interpreter that I can actually call at runtime, that solves an entire body of problems that it wasn't trained to do. And it's basically a form of delegation. And so the thought that's kind of rattling around in my head is that that's great. So it's, it's like took the arithmetic problem and took it first. Now, like anything that's solvable through a relatively concrete Python program, it's able to do a bunch of things that I couldn't do before. Can we get to the same place with UI? I don't know what the future of UI looks like in a agentic AI world, but maybe let the LLM handle it, but not in the classic sense. 
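The "generate the UI once, iterate, then cache the result as the step's action" pattern described above is straightforward to sketch: key the generated code on the step description so runtime never pays inference again. The generate_ui_with_llm call below is a placeholder, not agent.ai's API.

```python
# Cache generated UI code keyed by a hash of the step description: pay inference once
# at build time, then serve the saved code on every subsequent run of the flow.
import hashlib
from pathlib import Path

CACHE_DIR = Path(".ui_cache")
CACHE_DIR.mkdir(exist_ok=True)


def generate_ui_with_llm(step_description: str) -> str:
    # Placeholder: imagine a model call that returns HTML/JS for this step.
    return f"<form><!-- generated UI for: {step_description} --></form>"


def ui_for_step(step_description: str) -> str:
    key = hashlib.sha256(step_description.encode()).hexdigest()[:16]
    cached = CACHE_DIR / f"{key}.html"
    if cached.exists():                                 # cache hit: serve the saved code
        return cached.read_text()
    html = generate_ui_with_llm(step_description)       # cache miss: generate and save once
    cached.write_text(html)
    return html


print(ui_for_step("collect a domain name and a budget from the user"))
```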
Maybe it generates it on the fly, or maybe we go through some iterations and hit cache or something like that. So it's a little bit more predictable. Uh, I don't know, but yeah.
Alessio [00:49:48]: And especially when is the human supposed to intervene? So, especially if you're composing them, most of them should not have a UI because then they're just webhooking to somewhere else. I just want to touch back. I don't know if you have more comments on this.
swyx [00:50:01]: I was just going to ask when you, you said you got, you're going to go back to code. What
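For readers who want to see the "RAG for tools" idea above in code, here is a minimal sketch: embed each agent's description once, embed the incoming prompt, and hand the LLM only the top-k best matches instead of all 1,000 agents. The embedding function, the agent registry, and every agent name other than the Latent Space Researcher are made-up placeholders for illustration, not agent.ai internals.

```python
# Sketch only: "RAG for tools" -- retrieve a small, relevant subset of agents/tools
# for each prompt instead of exposing the whole catalog to the LLM at once.
# embed(), AGENT_REGISTRY, and the top_k default are illustrative assumptions.
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy embedding (hashed bag-of-words). Stands in for a real embedding model."""
    v = np.zeros(dim)
    for token in text.lower().split():
        v[hash(token) % dim] += 1.0
    return v / (np.linalg.norm(v) or 1.0)

AGENT_REGISTRY = {
    "latent_space_researcher": "Researches AI topics and summarizes recent papers.",
    "meeting_scheduler": "Finds free slots and books meetings on a calendar.",
    "domain_name_wizard": "Suggests and checks availability of domain names.",
    # ... hundreds more agents in a real catalog ...
}

# Embed every agent description once, up front.
AGENT_VECTORS = {name: embed(desc) for name, desc in AGENT_REGISTRY.items()}

def select_agents(prompt: str, top_k: int = 10) -> list[str]:
    """Return the names of the top_k agents whose descriptions best match the prompt."""
    q = embed(prompt)

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / ((np.linalg.norm(a) * np.linalg.norm(b)) or 1.0))

    ranked = sorted(AGENT_VECTORS, key=lambda name: cosine(q, AGENT_VECTORS[name]), reverse=True)
    return ranked[:top_k]

# Only the selected agents would then be registered as tools for this one LLM call.
print(select_agents("summarize recent research on AI agents"))
```

In practice the toy embedding would be replaced by a real model and the ranking by a vector index, but the shape of the intermediate layer is the same.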
Ever wondered how companies like Amazon or Pinterest deliver lightning-fast image search? Dive into this episode of MongoDB Podcast Live with Shane McAllister and Nenad, a MongoDB Champion, as they unravel the magic of semantic image search powered by MongoDB Atlas Vector Search!
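For a rough idea of what a semantic image search like the one discussed here can look like in code, below is a hedged PyMongo sketch of an Atlas Vector Search aggregation. The connection string, database, collection, index, and field names are assumptions for illustration; only the $vectorSearch stage and the vectorSearchScore metadata come from Atlas itself.

```python
# Sketch: semantic image search with MongoDB Atlas Vector Search via PyMongo.
# Connection string, db/collection names, index name, and field names are assumptions.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")
images = client["gallery"]["images"]

def search_similar(query_embedding: list[float], limit: int = 5):
    """Return the images whose stored embeddings are closest to query_embedding."""
    pipeline = [
        {
            "$vectorSearch": {
                "index": "image_embedding_index",  # Atlas Vector Search index (assumed name)
                "path": "embedding",               # field holding the image embedding
                "queryVector": query_embedding,    # embedding of the query text or image
                "numCandidates": 100,
                "limit": limit,
            }
        },
        {"$project": {"image_url": 1, "caption": 1,
                      "score": {"$meta": "vectorSearchScore"}}},
    ]
    return list(images.aggregate(pipeline))
```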
ABOUT ELIOT HOROWITZ
Eliot Horowitz is the Founder and CEO of Viam, an engineering platform unlocking AI, automation, and data for devices in the physical world. With a deep commitment to advancing technology, Eliot leads Viam in helping companies build solutions across robotics, food and beverage, climate, marine, industrial manufacturing, and more. A career software developer and technology leader, Eliot co-founded MongoDB in 2007, writing the core code base for the pioneering database and leading the engineering and product teams for 13 years as CTO. MongoDB, which went public in 2017, has since reached a market cap of over $20 billion. Before MongoDB, he co-founded the ecommerce company ShopWiki and served as CTO, and he began his career in software development in the R&D group of adtech firm DoubleClick. Eliot is passionate about using technology to address pressing societal issues, including working with WAVS to protect marine life in the North Atlantic and supporting Billion Oyster Project's work to help restore New York Harbor's ecosystem.
SHOW NOTES:
The origin story of founding Viam (2:56)
How Viam can be a game-changing platform, accelerating robotics software & hardware 10x to 100x (4:33)
The ideation journey behind Viam: Building a platform that simplifies the integration of hardware and software development (6:11)
Solving challenges with seamless APIs, a modular system, the right abstraction layers, and a comprehensive platform (9:54)
Key questions for identifying the right abstraction layers at Viam (11:32)
Optimizing your platform for flexibility and ease of use (13:32)
The evolution of product building, from first-hand experience to customer-driven (16:33)
How Eliot's MongoDB experience shaped Viam's user-centric approach, open-source strategy, business model & ecosystem approach (18:48)
Cultivating developer communities & leveraging community insights at MongoDB & Viam (23:01)
Frameworks for deciding on your business model & pricing (24:52)
Eliot's approach to building developer tools & products used by engineers (26:23)
Aligning your eng team & stakeholders on the product vision (29:51)
What it means to deeply understand engineers and how they interact with your product (31:10)
Strategies for eng leaders to better connect with customers (34:38)
Viam's real-world applications & what's next (36:31)
Rapid fire questions (39:31)
LINKS AND RESOURCES
Viam - At Viam, we believe in the power of technology to make our world smarter, happier, and more sustainable. We're building a revolutionary engineering platform for problem-solving in the physical world, so that innovators from all disciplines can address humanity's most complex challenges with practical solutions. Together with our partners, we're committed to making a lasting positive impact on industries, communities, and the planet.
This episode wouldn't have been possible without the help of our incredible production team:
Patrick Gallagher - Producer & Co-Host
Jerry Li - Co-Host
Noah Olberding - Associate Producer, Audio & Video Editor https://www.linkedin.com/in/noah-olberding/
Dan Overheim - Audio Engineer, Dan's also an avid 3D printer - https://www.bnd3d.com/
Ellie Coggins Angus - Copywriter, Check out her other work at https://elliecoggins.com/about/
Talk Python To Me - Python conversations for passionate developers
In this episode, we welcome back Will McGugan, the creator of the wildly popular Rich library and founder of Textualize. We'll dive into Will's latest article on "Algorithms for High Performance Terminal Apps" and explore how he's quietly revolutionizing what's possible in the terminal, from smooth animations and dynamic widgets to full-on TUI (or should we say GUI?) frameworks. Whether you're looking to supercharge your command-line tools or just curious how Python can push the limits of text-based UIs, you'll love hearing how Will's taking a modern, web-inspired approach to old-school terminals. Episode sponsors Posit Python in Production Talk Python Courses Links from the show Algorithms for high performance terminal apps post: textual.textualize.io Textual Demo: github.com Textual: textualize.io ZeroVer: 0ver.org memray: github.com Posting app: posting.sh Bulma CSS framework: bulma.io JP Term: davidbrochart.github.io Rich: github.com btop: github.com starship: starship.rs Watch this episode on YouTube: youtube.com Episode transcripts: talkpython.fm --- Stay in touch with us --- Subscribe to Talk Python on YouTube: youtube.com Talk Python on Bluesky: @talkpython.fm at bsky.app Talk Python on Mastodon: talkpython Michael on Bluesky: @mkennedy.codes at bsky.app Michael on Mastodon: mkennedy
Talk Python To Me - Python conversations for passionate developers
Have you ever wondered why certain data points stand out so dramatically? They might hold the key to everything from fraud detection to groundbreaking discoveries. This week on Talk Python to Me, we dive into the world of outlier detection with Python with Brett Kennedy. You'll learn how outliers can signal errors, highlight novel insights, or even reveal hidden patterns lurking in the data you thought you understood. We'll explore fresh research developments, practical use cases, and how outlier detection compares to other core data science tasks like prediction and clustering. If you're ready to spot those game-changing anomalies in your own projects, stay tuned. Episode sponsors Posit Python in Production Talk Python Courses Links from the show Data-morph: github.com PyOD: github.com Prophet: github.com Episode transcripts: talkpython.fm --- Stay in touch with us --- Subscribe to Talk Python on YouTube: youtube.com Talk Python on Bluesky: @talkpython.fm at bsky.app Talk Python on Mastodon: talkpython Michael on Bluesky: @mkennedy.codes at bsky.app Michael on Mastodon: mkennedy
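Since PyOD appears in the show links, here is a small, hedged sketch of the kind of outlier detection workflow discussed in the episode, using an Isolation Forest detector on synthetic data; the data and the contamination rate are made up for illustration.

```python
# Sketch: flagging outliers with PyOD's Isolation Forest detector.
# The synthetic data and contamination value are illustrative assumptions.
import numpy as np
from pyod.models.iforest import IForest

rng = np.random.default_rng(42)
inliers = rng.normal(loc=0.0, scale=1.0, size=(300, 2))
outliers = rng.uniform(low=6.0, high=9.0, size=(10, 2))
X = np.vstack([inliers, outliers])

detector = IForest(contamination=0.05, random_state=42)
detector.fit(X)

labels = detector.labels_           # 0 = inlier, 1 = outlier
scores = detector.decision_scores_  # higher = more anomalous
print(f"Flagged {labels.sum()} of {len(X)} points as outliers")
```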
Talk Python To Me - Python conversations for passionate developers
Today we explore the wild world of Python deployment with my friend, Calvin Hendryx-Parker from Six Feet Up. We'll tackle some of the biggest challenges in taking a Python app from “it works on my machine” to production, covering inconsistent environments, conflicting dependencies, and sneaky security pitfalls. Along the way, Calvin shares how containerization with Docker and Kubernetes can both simplify and complicate deployments, especially for smaller teams. Finally, we'll introduce Scaf, a powerful project blueprint designed to give developers a rock-solid start on Python web projects of all sizes. Get notified when the Talk Python in Production book goes live and read the first third online right now. Episode sponsors Posit Python in Production Talk Python Courses Links from the show Calvin Hendryx-Parker: github.com Scaf on GitHub: github.com "Deploy the Dream" song: deploy-the-dream-talk-python.mp3 CloudDevEngineering YouTube Channel: youtube.com TechWorld with Nana YouTube Channel: youtube.com Tilt (Kubernetes Dev Tool): tilt.dev Talos (Minimal OS for Kubernetes): talos.dev Traefik Reverse Proxy: traefik.io Sealed Secrets on GitHub: github.com Argo CD Documentation: readthedocs.io MailHog on GitHub: github.com Next.js: nextjs.org Cloud Custodian: cloudcustodian.io Valkey (Redis Replacement): valkey.io “The ‘Works on My Machine' Certification Program” (Coding Horror): blog.codinghorror.com NVIDIA's First Desktop AI PC (Ars Technica): arstechnica.com Kind (Kubernetes in Docker): kind.sigs.k8s.io Updated Effective PyCharm Course: training.talkpython.fm Talk Python in Production book: talkpython.fm/books/python-in-production Watch this episode on YouTube: youtube.com Episode transcripts: talkpython.fm --- Stay in touch with us --- Subscribe to Talk Python on YouTube: youtube.com Talk Python on Bluesky: @talkpython.fm at bsky.app Talk Python on Mastodon: talkpython Michael on Bluesky: @mkennedy.codes at bsky.app Michael on Mastodon: mkennedy
► Get a free share! This show is sponsored by Trading 212! To get free fractional shares worth up to 100 EUR / GBP, you can open an account with Trading 212 through this link https://www.trading212.com/Jdsfj/FTSE. Terms apply. When investing, your capital is at risk and you may get back less than invested. Past performance doesn't guarantee future results.
► Get 15% OFF Finchat.io: Huge thanks to our sponsor, FinChat.io, the best investing toolkit we've discovered! Get 15% off your subscription with code below and unlock powerful tools to analyze stocks, discover hidden gems, and build income streams. Check them out at FinChat.io! https://finchat.io/playingftse/?lmref=iQl2VQ
► Episode Notes: What website is Steve W about to go on? Find out on this week's PlayingFTSE Show! It's been a tough week in the stock market, especially in the US. And neither Steve D nor Steve W has been having a good time of things. CrowdStrike shares have been falling recently. Could this be the result of last year's outage coming back around to haunt the company? The stock is roughly back where it was before the big drop, and Steve D has been taking a look. There's still a very good business with some strong customer retention here… FTSE 100 distribution company Bunzl has been catching Steve W's eye. Revenues for 2024 are down, but this was known about and the share price has fallen on the latest news. There's around 7% of the market cap in free cash to deploy each year. And if it can't be used for growth, it's coming back as dividends and buybacks. MongoDB is a stock the PlayingFTSE Show has been looking at for a while. And it took an almighty hit this week, with shares down over 30%. The reason is a weak outlook for the next three months, but the company has been known to guide low and then work higher before. Steve D has been checking this one out. Despite full year revenues being up 11%, shares in Greggs fell sharply this week. This doesn't seem to make sense, but Steve W thinks he can see what's going on. The latest news is that trading conditions are tough right now. But with the company set to increase its store count by 5% this year, could it be a bargain at today's prices? Transmedics has been the subject of a short report recently and the stock is down further after its results for the last year. But the business is well ahead of the competition. Steve D has been on this one for a while and is impressed by the company's response to the allegations from Scorpion Capital. So could this be his moment to buy?
► Support the show: Appreciate the show and want to offer your support? You could always buy us a coffee at: https://ko-fi.com/playingftse (All proceeds reinvested into the show and not to coffee!) There are many ways to help support the show, liking, commenting and sharing our episodes with friends! You can also check out our clothing merch store: https://playingftse.teemill.com/ We get a small cut of anything you buy which will be reinvested back into the show...
► Timestamps: 0:00 INTRO & OUR WEEKS 4:20 CROWDSTRIKE 16:42 BUNZL 27:26 MONGODB 43:36 GREGGS 57:27 TRANSMEDICS
► Show Notes: What's been going on in the financial world and why should anyone care? Find out as we dive into the latest news and try to figure out what any of it means. We talk about stocks, markets, politics, and loads of other things in a way that's accessible, light-hearted and (we hope) entertaining. For the people who know nothing, by the people who know even less. Enjoy
► Wanna get in contact? Got a question for us?
Drop it in the comments below or reach out to us on Twitter: https://twitter.com/playingftseshow Or on Instagram: https://www.instagram.com/playing_ftse/
► Enquiries: Please email - playingftsepodcast@gmail(dot)com
► Disclaimer: This information is for entertainment purposes only and does not constitute financial advice. Always consult with a qualified financial professional before making any investment decisions.
Welcome to the MongoDB Podcast with host Shane McAllister! In this engaging episode, we dive into the transformative realm of AI in revenue operations, featuring our special guest, Fabio, Senior Software Development Engineer at MongoDB. Discover how AI is revolutionizing rev ops through Mops AI, a tool born from innovative hackathons. Fabio shares insights on leveraging MongoDB's own tools and advanced technologies like OpenAI and Atlas Vector Search to optimize tedious processes and enhance data utilization. Perfect for developers, AI enthusiasts, and tech innovators, this episode offers deep dives into practical AI applications, internal tool creation, and the future of generative AI in optimizing business processes. Tune in for a compelling look at how AI-powered innovation is shaping the way we handle data and drive revenue!
Kevin Rose wants another go at it with Digg. We are looking forward to Klarna's S-1 filing. Quarterly results from On Running, CrowdStrike, and MongoDB. Support our podcast and discover our advertising partners' offers at doppelgaenger.io/werbung. Thank you! Philipp Glöckler and Philipp Klöckner talk today about: (00:00:00) OpenAI (00:03:05) Nvidia smugglers (00:11:40) TSMC (00:15:30) Mistral (00:21:00) Scale AI (00:24:00) Digg (00:34:00) Larry Page (00:39:10) Klarna (00:41:40) CoreWeave (00:44:20) Russian disinformation (00:48:25) On Running (00:53:30) Puma (00:55:30) CrowdStrike (00:57:10) MongoDB (01:02:10) Schmuddelecke
Shownotes:
Airbnb distances itself from the "personal views" of co-founder Joe Gebbia (Skift)
Grok estimates with 75%-85% certainty that Trump is a Putin-compromised asset (Twitter)
Judge rejects Musk's attempt to block OpenAI from becoming a for-profit company (CNBC)
Three men charged with fraud in cases tied to alleged diversions of Nvidia chips (CNA)
"It was chaotic": federal employees must return to offices without desks, Wi-Fi, and lights (CNN)
Kevin Rose revives Digg with Reddit co-founder Alexis Ohanian (New York Times)
Kevin Horner's Chart of the Day is MongoDB (MDB) after earnings. He gives levels to watch on the short & longer-term with shares under pressure Thursday morning.======== Schwab Network ========Empowering every investor and trader, every market day.Subscribe to the Market Minute newsletter - https://schwabnetwork.com/subscribeDownload the iOS app - https://apps.apple.com/us/app/schwab-network/id1460719185Download the Amazon Fire Tv App - https://www.amazon.com/TD-Ameritrade-Network/dp/B07KRD76C7Watch on Sling - https://watch.sling.com/1/asset/191928615bd8d47686f94682aefaa007/watchWatch on Vizio - https://www.vizio.com/en/watchfreeplus-exploreWatch on DistroTV - https://www.distro.tv/live/schwab-network/Follow us on X – https://twitter.com/schwabnetworkFollow us on Facebook – https://www.facebook.com/schwabnetworkFollow us on LinkedIn - https://www.linkedin.com/company/schwab-network/About Schwab Network - https://schwabnetwork.com/about
Nikita Shamgunov is the founder of Neon, an open-source serverless Postgres company. Before Neon, Nikita co-founded MemSQL, now SingleStore, which is valued at over a billion dollars. He has also worked as a VC at Khosla Ventures and held engineering roles at Meta and Microsoft. Nikita is known for his strategic thinking and transparency about his decision-making process.We discuss:The importance of storytelling and providing a clear narrative for your companyWhen to introduce a sales team and how to build a sales and marketing "machine"Pricing strategies, including pricing for storage and compute in the data and analytics spaceThe evolution of revenue models in DevTools: from selling seats and storage/compute to selling tokensLessons learned from hiring MongoDB's VP of Engineering, focusing on improving reliability and building strong team management processesThe benefits of using a high-quality recruiting firm and avoiding the pitfalls of bad hiresBalancing competitiveness with respect for competitors to maintain credibility, particularly in the developer tools marketThe idea of “developing your taste” in product development, inspired by Guillermo Rauch from VercelHow modern dev tools can monetize through seats, storage/compute, or tokens, with tokens currently being the most profitableWhy Nikita advises DevTools founders to understand the business model framework and align it with their strategyThis episode is brought to you by WorkOS. If you're thinking about selling to enterprise customers, WorkOS can help you add enterprise features like Single Sign On and audit logs. Links:NeonSingleStore Khosla Ventures Fusion Talent
Former Ford CEO Mark Fields weighs in on what the one-month delay in auto tariffs means for the stocks. Former Boston Fed President Eric Rosengren breaks down the Beige Book, jobs, and Fed policy after the latest economic data. Vital Knowledge's Adam Crisafulli and Wilmington Trust's Meghan Shue analyze the market landscape, and we cover key earnings from Marvell, MongoDB, Victoria's Secret, and Zscaler. Plus, Christopher Rolland of Susquehanna on Marvell's earnings.
In this episode of MongoDB TV, join Shane McAllister along with MongoDB experts Sabina Friden and Frank Sun as they explore the powerful observability suite within MongoDB Atlas. Discover how these tools can help you optimize database performance, reduce costs, and ensure reliability for your applications. From customizable alerts and query insights to performance advisors and seamless integrations with enterprise tools like Datadog and Prometheus, this episode covers it all. Whether you're a developer, database administrator, or just getting started with MongoDB, learn how to leverage these observability tools to gain deep insights into your database operations and improve your application's efficiency. Tune in for a live demo showcasing how MongoDB's observability suite can transform your database management experience. Perfect for anyone looking to enhance their MongoDB skills and take their database performance to the next level.
In this episode of Flying High with Flutter, we're joined by Arek Borucki, author of MongoDB in Action, Third Edition and a seasoned Principal Database Engineer. Arek shares his journey with MongoDB, discusses running databases on Kubernetes, and compares MongoDB to other databases. We also explore MongoDB 8's latest features, ACID compliance, and when MongoDB might not be the right choice. Plus, Arek dives into MongoDB Atlas, Atlas CLI, and how to get started with these powerful tools.
This week, Ben and Andrew dive into the (surprisingly?) complex world of calculator apps, analyze how AI is revolutionizing the technical interview, and dissect the emerging “two-tier” economy around AI. What side of the curve does your org fall on? Then, the conversation goes on site to San Francisco, where host Dan Lines sits down with Rob Zuber (CTO, CircleCI) and Tara Hernandez (VP of Dev Productivity at MongoDB) for a discussion of LinearB's 2025 Software Engineering Benchmarks Report. We unpack the report's surprising findings on the PR lifecycle, project management hygiene, DORA metrics, code quality, and predictability, with key takeaways for optimizing your engineering team's performance. Be sure to grab your copy of the report to follow along with Dan, Rob & Tara.
Check out:
2025 Software Engineering Benchmarks Report
Beyond the DORA Frameworks
Introducing AI-Powered Code Review with gitStream
Follow the hosts: Follow Ben, Follow Andrew
Follow today's guest(s): Rob Zuber, Tara Hernandez
Referenced in today's show:
"A calculator app? Anyone could make that."
‘Two-tier' AI economy is emerging between startups and corporations, with large organizations falling behind, AWS EMEA chief says
AI Killed The Tech Interview. Now What? | Kane Narraway
New Junior Developers Can't Actually Code
Support the show: Subscribe to our Substack, Leave us a review, Subscribe on YouTube, Follow us on Twitter or LinkedIn
Offers: Learn about Continuous Merge with gitStream, Get your DORA Metrics free forever
Talk Python To Me - Python conversations for passionate developers
On this episode, I'm joined by Dr. Geoff Boeing, an assistant professor at the University of Southern California whose research spans urban planning, spatial analysis, and data science. We explore why OpenStreetMap is such a powerful source of global map data—and how Geoff's Python library, OSMnx, makes that data easier to download, model, and visualize. Along the way, we talk about what shapes city streets around the world, how urban design influences everything from daily commutes to disaster resilience, and why turning open data into accessible tools can open up completely new ways of understanding our cities. If you've ever wondered how to build or analyze your own digital maps in Python, or what it takes to manage a project that transforms raw geographic data into meaningful research, you won't want to miss this conversation. Episode sponsors Posit Podcast Later Talk Python Courses Links from the show City Street Orientations World: geoffboeing.com OSMnx Documentation: readthedocs.io OSMnx GitHub: github.com OpenStreetMap: openstreetmap.org Open Database License: opendatacommons.org ID Editor (Web Editor): wiki.openstreetmap.org Planet OSM: planet.openstreetmap.org Overpass API: wiki.openstreetmap.org GeoPandas: geopandas.org NetworkX: networkx.org Shapely: shapely.readthedocs.io Watch this episode on YouTube: youtube.com Episode transcripts: talkpython.fm --- Stay in touch with us --- Subscribe to Talk Python on YouTube: youtube.com Talk Python on Bluesky: @talkpython.fm at bsky.app Talk Python on Mastodon: talkpython Michael on Bluesky: @mkennedy.codes at bsky.app Michael on Mastodon: mkennedy
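To make the "download, model, and visualize" workflow concrete, here is a short, hedged OSMnx sketch. The place name is arbitrary, the calls follow OSMnx's long-standing high-level API, and exact function locations or signatures can differ between OSMnx releases.

```python
# Sketch: download, model, and visualize a drivable street network with OSMnx.
# Requires network access; the place name is an arbitrary example.
import osmnx as ox

# Download OpenStreetMap data for a named place and model it as a street-network graph.
G = ox.graph_from_place("Piedmont, California, USA", network_type="drive")

# Render the network; returns a matplotlib figure and axes.
fig, ax = ox.plot_graph(G, node_size=5)
```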
Talk Python To Me - Python conversations for passionate developers
As Python developers, we're incredibly lucky to have over half a million packages that we can use to build our applications, over at PyPI. However, when it comes to choosing a UI framework, the options get narrowed down very quickly. Intersect those choices with the ones that work on mobile, and you have a very short list. Flutter is a UI framework for building desktop and mobile applications, and is in fact the one that we used to build the Talk Python courses app, which you'll find at talkpython.fm/apps. That's why I'm so excited about Flet. Flet is a Python UI framework that is distributed and executed on the Flutter framework, making it possible to build mobile apps and desktop apps with Python. We have Feodor Fitsner back on the show after he launched his project a couple years ago to give us an update on how close they are to a full-featured mobile app framework in Python. Episode sponsors Posit Podcast Later Talk Python Courses Links from the show Flet: flet.dev Flet on Github: github.com Packaging apps with Flet: flet.dev/docs/publish Flutter: flutter.dev React vs. Flutter: trends.stackoverflow.co Kivy: kivy.org Beeware: beeware.org Mobile forge from Beeware: github.com The list of built-in binary wheels: flet.dev/docs/publish/android#binary-python-packages Difference between dynamic and static Flet web apps: flet.dev/docs/publish/web Integrating Flutter packages: flet.dev/docs/extend/integrating-existing-flutter-packages serious_python: pub.dev/packages/serious_python Watch this episode on YouTube: youtube.com Episode transcripts: talkpython.fm --- Stay in touch with us --- Subscribe to Talk Python on YouTube: youtube.com Talk Python on Bluesky: @talkpython.fm at bsky.app Talk Python on Mastodon: talkpython Michael on Bluesky: @mkennedy.codes at bsky.app Michael on Mastodon: mkennedy
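To give a flavor of what a Flet app looks like, here is a minimal, hedged counter-style example. The control names follow Flet's documented API, though details can shift between releases; it is a sketch rather than code from the episode.

```python
# Sketch: a minimal Flet app -- Python UI rendered by the Flutter engine.
import flet as ft

def main(page: ft.Page):
    page.title = "Hello from Flet"
    counter = ft.Text("0", size=40)

    def increment(e):
        # Update the text value and re-render the page.
        counter.value = str(int(counter.value) + 1)
        page.update()

    page.add(counter, ft.ElevatedButton("Increment", on_click=increment))

# Runs as a desktop window by default; the same code can target web and mobile builds.
ft.app(target=main)
```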
In this episode of the DevOps Toolchain Podcast, host Joe Colantonio sits down with Richmond Alake, a developer advocate at MongoDB and an AI/ML practitioner with a strong background in computer vision, robotics, and machine learning. With years of experience in software development, Richmond has authored over 200 technical articles and taught numerous AI/ML courses. Together, they dive into the evolving landscape of AI, machine learning, and multimodal AI, discussing how MongoDB shapes the future of AI-powered applications. Richmond shares insights on how vector embeddings, Retrieval-Augmented Generation (RAG), and agentic systems transform data storage and retrieval for AI-driven development. They also explore the impact of generative AI, the rise of multimodal AI, and how MongoDB serves as the memory provider for intelligent systems. If you're a developer looking to build AI-powered applications efficiently, streamline your data management, or understand where AI is headed next, this is an episode you don't want to miss!
Topics covered in this episode: PEP 772 – Packaging governance process Official Django MongoDB Backend Now Available in Public Preview Developer Philosophy Python 3.13.2 released Extras Joke Watch on YouTube About the show Sponsored by us! Support our work through: Our courses at Talk Python Training The Complete pytest Course Patreon Supporters Connect with the hosts Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky) Brian: @brianokken@fosstodon.org / @brianokken.bsky.social Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky) Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too. Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to our friends of the show list, we'll never share it. Brian #1: PEP 772 – Packaging governance process draft, created 21-Jan, by Barry Warsaw, Deb Nicholson, Pradyun Gedam “As Python packaging has matured, several interrelated problems with the current way of managing the technical development, decision making and processes have become apparent.” “This PEP proposes a Python Packaging Council with broad authority over packaging standards, tools, and implementations. Like the Python Steering Council, the Packaging Council seeks to exercise this authority as rarely as possible; instead, they use this power to establish standard processes.” PEP discusses PyPA, Packaging-WG, Interoperability Standards, Python Steering Council, and Expectations of an elected Packaging Council A specification with Composition: 5 people Mandate, Responsibilities, Delegations, Process, Terms, etc. Michael #2: Official Django MongoDB Backend Now Available in Public Preview Over the last few years, Django developers have increasingly used MongoDB, presenting an opportunity for an official MongoDB-built Python package to make integrating both technologies as painless as possible. Features The ability to use Django models with confidence. Developers can use Django models to represent MongoDB documents, with support for Django forms, validations, and authentication. Django admin support. The package allows users to fire up the Django admin page as they normally would, with full support for migrations and database schema history. Native connecting from settings.py. Just as with any other database provider, developers can customize the database engine in settings.py to get MongoDB up and running. MongoDB-specific querying optimizations. Field lookups have been replaced with aggregation calls (aggregation stages and aggregate operators), JOIN operations are represented through $lookup, and it's possible to build indexes right from Python. Limited advanced functionality. While still in development, the package already has support for time series, projections, and XOR operations. Aggregation pipeline support. Raw querying allows aggregation pipeline operators. Since aggregation is a superset of what traditional MongoDB Query API methods provide, it gives developers more functionality. Brian #3: Developer Philosophy by qntm Intended as “advice for junior developers about personal dev philosophy”, I think these are just great tips to keep in mind. The items Avoid, at all costs, arriving at a scenario where the ground-up rewrite starts to look attractive This is less about “don't do rewrites”, but about noticing the warning signs ahead of time. 
Aim to be 90% done in 50% of the available time Great quote: “The first 90% of the job takes 90% of the time. The last 10% of the job takes the other 90% of the time.” Automate good practices Think about pathological data “Nobody cares about the golden path. Edge cases are our entire job.” Brian's note: But also think about the happy path. Documenting and testing what you think of as the happy path is a testing start and helps others understand your idea of how things are supposed to work. There's usually a simpler way to write it Write code to be testable It is insufficient for code to be provably correct; it should be obviously, visibly, trivially correct Brian's note: Even if it's obviously, visibly, trivially correct, it will still break. So test it anyway. Michael #4: Python 3.13.2 released Python 3.13's second maintenance release. About 250 changes went into this update Also Python 3.12.9, Python 3.12's ninth maintenance release already. Just 180 changes for 3.12, but it's still worth upgrading. For us, it's simply rebuilding our Docker base (i.e. --no-cache) with these lines:
RUN curl -LsSf https://astral.sh/uv/install.sh | sh
RUN --mount=type=cache,target=/root/.cache uv venv --python 3.13 /venv
Extras Brian: Still thinking about pytest plugins a lot. The top pytest plugin list Has been updated for Feb Is starting to include things without “pytest” in the name, like Hypothesis and Syrupy. Eventually I'll have to add “looking at trove classifiers” as part of the search, but for now, let me know if your favorite is missing. Includes T&C podcast episode links if I've covered it on the show. There's 2 so far Michael: There's a new release of PyScript out. All the details are here: Highlight is new PyGame-CE support. Go play! PEP 2026 – Calendar versioning for Python rejected. :( PEP 759 – External Wheel Hosting withdrawn Joke: Pride Versioning
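As a rough illustration of the "native connecting from settings.py" point in the Django MongoDB Backend item above, here is a hedged sketch of what the DATABASES block might look like. The engine string, option names, and connection URI are assumptions for illustration; check the official package documentation for the exact values.

```python
# settings.py (sketch only). The ENGINE value and connection details below are
# assumptions for illustration -- consult the official django-mongodb-backend docs.
DATABASES = {
    "default": {
        "ENGINE": "django_mongodb_backend",                     # assumed engine module name
        "NAME": "my_app_db",                                     # MongoDB database name
        "HOST": "mongodb+srv://cluster0.example.mongodb.net",    # hypothetical Atlas URI
        "USER": "app_user",
        "PASSWORD": "change-me",
    }
}
```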
Nikolay and Michael are joined by Franck Pachot to discuss SQL vs NoSQL — did Franck change teams by joining MongoDB, normalisation vs denormalisation, developer experience, NULLs, and more! Here are some links to things they mentioned:Franck Pachot https://postgres.fm/people/franck-pachotFranck's workshop at PGConf India https://pgconf.in/conferences/pgconfin2025/program/proposals/958 PostgreSQL Conference Germany https://2025.pgconf.de"Schema Later" Considered Harmful by Michael Stonebraker and Álvaro Hernández https://www.enterprisedb.com/blog/schema-later-considered-harmfulComparison of JOINS by Michael Stonebraker and Álvaro Hernández https://www.enterprisedb.com/blog/comparison-joins-mongodb-vs-postgresql Franck's post about why he joined MongoDB https://www.linkedin.com/pulse/2025-im-joining-mongodb-franck-pachot-e4shfEdgeDB https://www.edgedb.comNikolay's tweet about a recent issue with NULLs https://x.com/samokhvalov/status/1889078097124999272PartiQL https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ql-reference.htmlFerretDB https://www.ferretdb.comDocumentDB https://github.com/microsoft/documentdb~~~What did you like or not like? What should we discuss next time? Let us know via a YouTube comment, on social media, or by commenting on our Google doc!~~~Postgres FM is produced by:Michael Christofides, founder of pgMustardNikolay Samokhvalov, founder of Postgres.aiWith special thanks to:Jessie Draws for the elephant artwork
MongoDB product leader Sahir Azam explains how vector databases have evolved from semantic search to become the essential memory and state layer for AI applications. He describes his view of how AI is transforming software development generally, and how combining vectors, graphs and traditional data structures enables high-quality retrieval needed for mission-critical enterprise AI use cases. Drawing from MongoDB's successful cloud transformation, Azam shares his vision for democratizing AI development by making sophisticated capabilities accessible to mainstream developers through integrated tools and abstractions. Hosted by: Sonya Huang and Pat Grady, Sequoia Capital Mentioned in this episode: Introducing ambient agents: Blog post by Langchain on a new UX pattern where AI agents can listen to an event stream and act on it Google Gemini Deep Research: Sahir enjoys its amazing product experience Perplexity: AI search app that Sahir admires for its product craft Snipd: AI powered podcast app Sahir likes
Talk Python To Me - Python conversations for passionate developers
In this episode, I'm joined by JJ Allaire, founder and executive chairman at Posit, and Carlos Scheidegger, a software engineer at Posit, to explore Quarto, an open-source tool revolutionizing technical publishing. We discuss how Quarto empowers users to seamlessly transform Jupyter notebooks into polished reports, dashboards, e-books, websites, and more. JJ shares his journey from creating RStudio to developing Quarto as a versatile, multi-language tool, while Carlos delves into its roots in reproducibility and the challenges of academic publishing. Don't miss this deep dive into a tool that's shaping the future of data-driven storytelling! Episode sponsors Talk Python Courses DigitalOcean Links from the show JJ Allaire JJ on LinkedIn: linkedin.com JJ on GitHub: github.com Carlos Scheidegger Personal site: cscheid.net Mastodon: @scheidegger Fast AI: fast.ai nbdev: nbdev.fast.ai nbsanity - Share Notebooks as Polished Web Pages in Seconds: answer.ai Pandoc: pandoc.org Observable: github.com Quarto Pub: quartopub.com Deno: deno.com Real World Data Science site: realworlddatascience.net Typst: typst.app Github Actions for Quarto: github.com Watch this episode on YouTube: youtube.com Episode transcripts: talkpython.fm --- Stay in touch with us --- Subscribe to Talk Python on YouTube: youtube.com Talk Python on Bluesky: @talkpython.fm at bsky.app Talk Python on Mastodon: talkpython Michael on Bluesky: @mkennedy.codes at bsky.app Michael on Mastodon: mkennedy
In this episode of the Product Thinking Podcast, Melissa Perri talks with Andrew Davidson, Senior Vice President of Products at MongoDB. Andrew has been instrumental in transforming MongoDB from a traditional database company into a comprehensive developer data platform. As MongoDB evolves, Andrew shares insights into managing technical products and enhancing user experiences, crucial areas for product managers working with complex platforms. Join us as we explore the nuances of product management in the world of databases and developer tools. Andrew sheds light on maintaining a balance between technical excellence and strategic management, offering valuable lessons for product managers. Tune in to gain actionable insights on how to approach product management for highly technical products like databases and learn how this can impact broader business outcomes. You'll hear us talk about: 09:29 The Spark Behind MongoDB Andrew explains the motivation for creating MongoDB and how it challenged traditional relational databases, emphasizing the need for a system that aligns better with modern computing demands. 24:32 User Experience in Technical Products We delve into the importance of user experience within technical products, highlighting the layers of user and developer experiences that product managers must consider. 35:50 Balancing Internal Innovation and Customer Feedback Andrew discusses the importance of balancing internal innovation with customer feedback, stressing the necessity for product managers to engage directly with users to refine the product. Episode Resources: Andrew Davidson on LinkedIn: https://www.linkedin.com/in/andrewad/ Learn more about MongoDB: https://www.mongodb.com/ Sign up for a free Liveblocks account: https://liveblocks.io/ Timestamps: 00:00 Episode Preview 01:25 Introduction 02:57 Liveblocks 03:40 Dear Melissa 07:56 Early Days in Databases 15:39 MongoDB's Longevity 21:13 From Database to Developer Data Platform 32:21 Ensuring Product Strategy in Technical Tools 39:51 The Rise of MongoDB Atlas 45:04 Navigating Market Perception and Growth 52:16 Balancing Speed and Stability in Tech
Talk Python To Me - Python conversations for passionate developers
Join me as I chat with Rich Iannone and Michael Chow from Posit where we explore the transformative power of data tables with the Great Tables library. We'll cover practical applications of Great Tables, showcasing how thoughtful design and advanced formatting can elevate your data presentations. And you'll learn about innovative features like nano plots and interactive elements and the importance of structure, format, and style in crafting tables that both inform and inspire. Whether you're a seasoned data scientist or just starting out, this episode is packed with valuable tips and inspiring examples to enhance your data storytelling. Episode sponsors Talk Python Courses DigitalOcean Links from the show Michael Chow: github.com/machow Richard Iannone: github.com/rich-iannone Episode Deep Dives Writeup: talkpython.fm/blog Great Tables: github.com Making Beautiful, Publication Quality Tables PyCon talk: youtube.com Andrew Weatherman's Visualization Gallery: aweatherman.com Bureau of the Census Manual of Tabular Presentation: census.gov Table Contest: posit.co Watch this episode on YouTube: youtube.com Episode transcripts: talkpython.fm --- Stay in touch with us --- Subscribe to Talk Python on YouTube: youtube.com Talk Python on Bluesky: @talkpython.fm at bsky.app Talk Python on Mastodon: talkpython Michael on Bluesky: @mkennedy.codes at bsky.app Michael on Mastodon: mkennedy
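As a taste of the structure, format, and style workflow Rich and Michael describe, here is a short, hedged Great Tables example using the library's bundled exibble sample data; the column and formatting choices are arbitrary.

```python
# Sketch: building a small presentation-ready table with Great Tables.
# Uses the bundled `exibble` sample dataset; column choices are arbitrary.
from great_tables import GT, exibble

table = (
    GT(exibble)
    .tab_header(title="Great Tables demo", subtitle="exibble sample data")
    .fmt_number(columns="num", decimals=2)
    .fmt_currency(columns="currency", currency="USD")
)

# Renders to an HTML string; in a notebook the table also displays inline.
html = table.as_raw_html()
```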
In this podcast, Apoorva Joshi, Senior AI Developer Advocate at MongoDB, discusses how to evaluate software applications that use Large Language Models, or LLMs, and how to improve the performance of LLM-based applications. Read a transcript of this interview: https://bit.ly/3WEppT6 Subscribe to the Software Architects' Newsletter for your monthly guide to the essential news and experience from industry peers on emerging patterns and technologies: https://www.infoq.com/software-architects-newsletter Upcoming Events: QCon London (April 7-9, 2025) Discover new ideas and insights from senior practitioners driving change and innovation in software development. https://qconlondon.com/ InfoQ Dev Summit Boston (June 9-10, 2025) Actionable insights on today's critical dev priorities. devsummit.infoq.com/conference/boston2025 InfoQ Dev Summit Munich (Save the date - October 2025) QCon San Francisco 2025 (17-21, 2025) Get practical inspiration and best practices on emerging software trends directly from senior software developers at early adopter companies. https://qconsf.com/ InfoQ Dev Summit New York (Save the date - December 2025) The InfoQ Podcasts: Weekly inspiration to drive innovation and build great teams from senior software leaders. Listen to all our podcasts and read interview transcripts: - The InfoQ Podcast https://www.infoq.com/podcasts/ - Engineering Culture Podcast by InfoQ https://www.infoq.com/podcasts/#engineering_culture - Generally AI: https://www.infoq.com/generally-ai-podcast/ Follow InfoQ: - Mastodon: https://techhub.social/@infoq - Twitter: twitter.com/InfoQ - LinkedIn: www.linkedin.com/company/infoq - Facebook: bit.ly/2jmlyG8 - Instagram: @infoqdotcom - Youtube: www.youtube.com/infoq Write for InfoQ: Learn and share the changes and innovations in professional software development. - Join a community of experts. - Increase your visibility. - Grow your career. https://www.infoq.com/write-for-infoq
CTO Series: Sergey's Leadership Insights—Bridging Innovation and Strategy as CTO of CrateDB In this BONUS episode, we sit down with Sergey, the forward-thinking CTO of CrateDB, to unpack his journey from Nokia to CrateDB and his leadership philosophy that blends technical expertise with strategic foresight. We dive into the key moments that shaped his career, the challenges of scaling technology in a competitive market, and how Sergey aligns his team's efforts with broader business goals while staying adaptable in an ever-evolving tech landscape. The Defining Moment in Sergey's Leadership Journey “Being a cheerleader, servant, and strategist for your team creates an environment where innovation can thrive.” Sergey shares how working at Nokia with an inspiring people manager, Sotiris, influenced his leadership approach. Sotiris embodied servant leadership and made strategic thinking a team-wide responsibility. Sergey reflects on how this mindset helped him approach his current role at CrateDB, emphasizing the importance of not only building great products but ensuring they resonate in the market through thoughtful sales and marketing alignment. “The best leaders help their teams see what's next—not just solve today's problems.” Navigating Product-Market Fit for Technical Products “For technical products, adoption is not just about features—it's about connecting with both developers and decision-makers.” Sergey breaks down the challenges of achieving product-market fit for developer-centric solutions like CrateDB. He explains the dual approach of engaging both top-down decision-makers, like CTOs, and bottom-up developer communities. By drawing from his startup experience, Sergey underscores the importance of building trust and delivering a developer experience that wins over early adopters. “The real challenge is bridging the gap between leadership adoption and the developers who use the product every day.” The Impact of AI on Developer Experience “AI's true transformation lies in how it enhances the products we already use, often invisibly.” When asked about AI's current role, Sergey reflects on the potential of AI-powered tools to transform workflows over the next few years. While not yet life-changing for his daily routine, he anticipates that AI's influence will soon be felt through the optimization of background processes in everyday tools and databases. “The future isn't about flashy AI features—it's about smarter tools that simplify complex workflows.” Aligning Tech Strategy with Business Goals “A strong strategy needs to be a story that teams can rally around and imagine themselves in.” Sergey details CrateDB's unique approach to strategic planning, inspired by open-source RFCs (Request for Comments). Instead of rigid OKRs, they craft stories that clarify priorities and invite feedback from across the organization. He highlights the importance of quarterly check-ins and building checkpoints to validate assumptions along the way. Key tips in this segment: Document the assumptions behind the strategy. Break initiatives into steps to test their feasibility. Avoid deadline-driven development; focus on value-driven milestones. Fostering Collaboration Between Tech and Business Units “Collaboration thrives when both sides understand the trade-offs involved in strategic decisions.” Sergey explains how collaboration between engineering and business leaders is fostered through transparency and communication. 
Product managers and engineering leads play key roles in advocating for priorities and ensuring alignment across teams. Sergey emphasizes the value of making trade-offs explicit to avoid silos. “The best partnerships between tech and business come from mutual understanding—not just of goals, but of constraints.” Staying Ahead with Strategic Roadmapping “A good strategy diagnoses the situation, sets guiding policies, and outlines coherent actions.” Sergey highlights the importance of competitive intelligence in staying ahead of market trends without reacting impulsively. In the world of databases, long adoption cycles offer the advantage of thoughtful strategic planning. He references the book Good Strategy/Bad Strategy and describes how CrateDB maintains an evergreen list of initiatives that can be prioritized when needed. “Don't just chase trends—create a strategy that withstands change by focusing on long-term coherence.” Overcoming the Challenges of the CTO Role “The CTO role is often ambiguous—define it based on your organization's needs.” Sergey candidly discusses the challenge of imposter syndrome and the ambiguity that comes with the CTO title. He outlines two common archetypes: the technical expert versus the team builder and cultural leader. He stresses the importance of adjusting the role to the organization's maturity and goals. “Your leadership role isn't static—adapt your approach to meet your organization where it is.” Books That Shaped Sergey's Leadership Approach “Most tech problems are people problems disguised as engineering issues.” Sergey shares the books that influenced his leadership style: Peopleware by Tom DeMarco: Reinforces the idea that technical challenges often stem from team dynamics. Drive by Daniel Pink: Highlights the importance of autonomy, mastery, and purpose in motivating teams. Good to Great by Jim Collins: Explores what makes some companies thrive while others stagnate. About Sergey Gerasimenko Sergey is the innovative CTO of CrateDB, leading the charge in real-time analytics and hybrid search. Previously, he was VP of Engineering at MongoDB, shaping the edge device strategy, and at Realm, a leading open-source mobile/embedded database acquired by MongoDB in 2019. With a career spanning groundbreaking roles at Brainly and Nokia, Sergey co-founded two companies and holds a patent. His leadership continues to push the boundaries of tech innovation. You can link with Sergey Gerasimenko on LinkedIn.
Talk Python To Me - Python conversations for passionate developers
Join me for an insightful conversation with Alex Monahan, who works on documentation, tutorials, and training at DuckDB Labs. We explore why DuckDB is gaining momentum among Python and data enthusiasts, from its in-process database design to its blazingly fast, columnar architecture. We also dive into indexing strategies, concurrency considerations, and the fascinating way MotherDuck (the cloud companion to DuckDB) handles large-scale data seamlessly. Don't miss this chance to learn how a single pip install could totally transform your Python data workflow! Episode sponsors Sentry Error Monitoring, Code TALKPYTHON Data Citizens Podcast Talk Python Courses Links from the show Alex on Mastodon: @__Alex__ DuckDB: duckdb.org MotherDuck: motherduck.com SQLite: sqlite.org Moka-Py: github.com PostgreSQL: www.postgresql.org MySQL: www.mysql.com Redis: redis.io Apache Parquet: parquet.apache.org Apache Arrow: arrow.apache.org Pandas: pandas.pydata.org Polars: pola.rs Pyodide: pyodide.org DB-API (PEP 249): peps.python.org/pep-0249 Flask: flask.palletsprojects.com Gunicorn: gunicorn.org MinIO: min.io Amazon S3: aws.amazon.com/s3 Azure Blob Storage: azure.microsoft.com/products/storage Google Cloud Storage: cloud.google.com/storage DigitalOcean: www.digitalocean.com Linode: www.linode.com Hetzner: www.hetzner.com BigQuery: cloud.google.com/bigquery DBT (Data Build Tool): docs.getdbt.com Mode: mode.com Hex: hex.tech Python: www.python.org Node.js: nodejs.org Rust: www.rust-lang.org Go: go.dev .NET: dotnet.microsoft.com Watch this episode on YouTube: youtube.com Episode transcripts: talkpython.fm --- Stay in touch with us --- Subscribe to Talk Python on YouTube: youtube.com Talk Python on Bluesky: @talkpython.fm at bsky.app Talk Python on Mastodon: talkpython Michael on Bluesky: @mkennedy.codes at bsky.app Michael on Mastodon: mkennedy
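For a concrete sense of the "single pip install" workflow the episode describes, here is a tiny, hedged example of DuckDB running in-process and querying a pandas DataFrame directly; the data is made up for illustration.

```python
# Sketch: in-process analytics with DuckDB (pip install duckdb pandas).
# The DataFrame contents are made up for illustration.
import duckdb
import pandas as pd

orders = pd.DataFrame({
    "customer": ["a", "b", "a", "c", "b", "a"],
    "amount": [10.0, 25.5, 7.25, 40.0, 3.5, 12.0],
})

# DuckDB can query the in-memory DataFrame by name -- no server process required.
result = duckdb.sql("""
    SELECT customer, SUM(amount) AS total
    FROM orders
    GROUP BY customer
    ORDER BY total DESC
""").df()

print(result)
```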