Podcasts about graphs

  • 1,657 podcasts
  • 3,454 episodes
  • 40m average episode duration
  • 5 new episodes weekly
  • Latest episode: May 22, 2025

Best podcasts about graphs


Latest podcast episodes about graphs

Homebrewed Christianity Podcast
Ryan Burge: The 2024 Election & Religion Post-Mortem

May 22, 2025 · 103:20


Well nerds, buckle up for this one. My buddy Ryan Burge has returned with his latest graphs about religion and the 2024 election, and let me tell you - it was zesty. We started talking about minor league baseball, chicken raising, and somehow ended up dissecting why 83% of white evangelicals voted for Trump (spoiler: it's not shocking). Ryan breaks down the real story of the 2024 election - how non-white evangelicals are now 50/50, why mainline Protestants aren't actually that liberal, and the fascinating shifts happening in the Catholic vote. We dive into the data that shows education and church attendance create some pretty stark political divides, and why Democrats might want to rethink their approach to people of faith.

But this is us, so we also talked about LeBron's hair transplants, whether 100 men could take down a silverback gorilla, why online gambling is destroying America, and Ryan's ongoing campaign to get academics to eat at steakhouses instead of Sweetgreen. Plus, Ryan explains why Mark Driscoll might be the godfather of the manosphere, and we debate whether Joe Scarborough and Mika have the worst work schedule in television. Oh, and we somehow got into a deep discussion about Mayor Pete's beard and why Democrats need to learn how to talk about their faith without sounding like they're apologizing for it. Because apparently that's where our brains go.

Want the full conversation? This is just a taste of what we covered in over two hours of completely unhinged discussion. If you're a member of either Graphs About Religion (Ryan's Substack) or Process This (mine), you get access to the entire unedited conversation, plus invitations to join us live for future streams where things get even more zesty - and yes, I'm using that word in the Whitehead sense, not the Gen Z sense.
Previous visits from Ryan Burge:
  • Distrust & Denominations
  • Trust, Religion, & a Functioning Democracy
  • What it's like to close a church
  • The Future of Christian Education & Ministry in Charts
  • The Sky is Falling & the Charts are Popping!
  • Graphs about Religion & Politics w/ Spicy Banter
  • A Year in Religion (in Graphs)
  • Evangelical Jews, Educated Church-Goers, & other bits of dizzying data
  • 5 Religion Graphs w/ a side of Hot Takes
  • Myths about Religion & Politics

Ryan P. Burge is an assistant professor of political science at Eastern Illinois University. Author of numerous journal articles, he is the co-founder of and a frequent contributor to Religion in Public, a forum for scholars of religion and politics to make their work accessible to a general audience. Burge is a pastor in the American Baptist Church.

Upcoming online class: Rediscovering the Spirit: Hand-Raisers, Han, & the Holy Ghost is an open online course exploring the dynamic, often overlooked third person of the Trinity. Based on Grace Ji-Sun Kim's groundbreaking work on the Holy Spirit, this class takes participants on a journey through biblical foundations, historical developments, diverse cultural perspectives, and practical applications of Spirit theology. As always, this class is donation-based, including $0. To get class info and sign up, head over here.

Hang with 40+ scholars & podcasts and 600 people at Theology Beer Camp 2025 (Oct. 16-18) in St. Paul, MN. This podcast is a Homebrewed Christianity production. Follow the Homebrewed Christianity, Theology Nerd Throwdown, & The Rise of Bonhoeffer podcasts for more theological goodness for your earbuds.

Join over 80,000 other people by joining our Substack - Process This! Get instant access to over 45 classes at www.TheologyClass.com. Follow the podcast, drop a review, send feedback/questions, or become a member of the HBC Community. Learn more about your ad choices. Visit megaphone.fm/adchoices

Wax Museum: A Basketball Card Podcast
Episode 322: Playoff Weekend in Indy — Games, Graphs, and a Giant Card Show

May 15, 2025 · 24:15


On this week's episode, Kyle recaps his playoff weekend in Indianapolis — from attending Games 3 and 4 of Pacers/Cavs to exploring a massive 200-table card show. He shares game-day impressions, autograph stories, and some key card pickups along the way.

TubeTalk: Your YouTube How-To Guide
Navigating the Maze of YouTube Retention Graphs

May 15, 2025 · 41:28 · Transcription available


Get the vidIQ plugin for FREE: https://vidiq.ink/3yvoc7r
Join Discord: https://www.vidiq.com/discord
Want a 1-on-1 coach? https://vidiq.ink/theboost1on1
Check out the video version here: https://youtu.be/0HPucaCMwrQ

Audience engagement metrics on YouTube aren't always what they seem, and retention graphs can be misleading if viewed in isolation. We explore why "good" retention varies drastically based on content length, traffic sources, and viewer behavior across different platforms.

  • TikTok has reportedly surpassed Twitch to become the second most streamed platform after YouTube
  • Retention graphs look different depending on video length, with shorts typically exceeding 100% while long-form content rarely does
  • High-view videos often have lower retention percentages than creators expect
  • Search traffic typically has shorter view duration than browse or suggested traffic
  • Breaking down retention by traffic source and subscriber status reveals more useful insights than aggregate data
  • YouTube doesn't currently track "hover behavior" when viewers preview but don't click
  • Platform differences significantly impact engagement behaviors (TV vs. mobile vs. desktop)
  • Mr. Beast has set unrealistic retention expectations for many creators
  • Viewer disclaimers and overexplaining opinions have become increasingly common in content

If you enjoyed this episode, please leave us a five-star review and join us next time when we'll be sharing more about ourselves as creators.
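The retention arithmetic behind several of those points can be sketched with hypothetical numbers: average retention is total watch time divided by views times video length, so looped rewatches of a short push it past 100%, while the same long video scores very differently depending on traffic source.

```python
def avg_retention(watch_seconds, views, video_seconds):
    """Average percentage of the video watched per view."""
    return 100 * sum(watch_seconds) / (views * video_seconds)

# A 30-second short that some viewers loop: retention tops 100%.
short = avg_retention([45, 30, 60, 30], views=4, video_seconds=30)

# A 20-minute video reached via search: viewers grab the answer and leave.
search = avg_retention([240, 300, 180], views=3, video_seconds=1200)

# The same video reached via browse/suggested: viewers settle in longer.
browse = avg_retention([900, 700, 1100], views=3, video_seconds=1200)

print(f"short: {short:.1f}%")    # 137.5
print(f"search: {search:.1f}%")  # 20.0
print(f"browse: {browse:.1f}%")  # 75.0
```

This is why comparing the aggregate number across videos of different lengths, or against a creator like Mr. Beast, says very little: the same viewer behavior produces wildly different percentages.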

AWS Health Innovation Podcast
#123, Decoding Immune Cell Behavior to Transform Treatment Discovery with Gregory Vladimer from Graph Therapeutics

May 13, 2025 · 19:45


Graph Therapeutics combines AI-driven perturbation modeling with multi-omics data to develop targeted therapies for patients with complex immune-mediated diseases who currently lack effective treatment options.

Hacker Public Radio
HPR4376: Re-research

May 12, 2025


This show has been flagged as Explicit by the host.

Research Tools
  • Harvard Referencing - https://en.wikipedia.org/wiki/Parenthetical_referencing#Author%E2%80%93date_(Harvard_referencing)
  • Google NotebookLM - https://notebooklm.google/
  • Google Scholar - https://scholar.google.co.uk/
  • Connected Papers - https://www.connectedpapers.com/
  • Zotero - https://www.zotero.org/

Databases
  • SQL Databases - https://en.wikipedia.org/wiki/Relational_database
  • NoSQL Databases - https://en.wikipedia.org/wiki/NoSQL
  • Graph Databases - https://en.wikipedia.org/wiki/Graph_database

Misc
  • Borland Graphics Interface - https://en.wikipedia.org/wiki/Borland_Graphics_Interface
  • Hough Transform - https://en.wikipedia.org/wiki/Hough_transform
  • Joplin - https://joplinapp.org/

Provide feedback on this episode.

Effective Altruism Forum Podcast
“Doing Prioritization Better” by arvomm, David_Moss, Hayley Clatterbuck, Laura Duffy, Derek Shiller, Bob Fischer

May 10, 2025 · 75:04


Or: on the types of prioritization, their strengths, pitfalls, and how EA should balance them.

The cause prioritization landscape in EA is changing. Prominent groups have shut down, others have been founded, and everyone is trying to figure out how to prepare for AI. This is the first in a series of posts examining the state of cause prioritization and proposing strategies for moving forward.

Executive Summary: Performing prioritization work has been one of the main tasks, and arguably achievements, of EA. We highlight three types of prioritization: Cause Prioritization, Within-Cause (Intervention) Prioritization, and Cross-Cause (Intervention) Prioritization. We ask how much of EA prioritization work falls in each of these categories: Our estimates suggest that, for the organizations we investigated, the current split is 89% within-cause work, 2% cross-cause, and 9% cause prioritization. We then explore strengths and potential pitfalls of each level: Cause [...]

Outline:
(00:37) Executive Summary
(03:09) Introduction: Why prioritize? Have we got it right?
(05:18) The types of prioritization
(06:54) A snapshot of EA
(16:45) The Types of Prioritization Evaluated
(16:57) Cause Prioritization
(20:56) Within-Cause Prioritization
(25:12) Cross-Cause Prioritization
(30:07) Summary Table
(30:53) What factors should push us towards one or another?
(37:27) Possible Next Steps
(39:44) Conclusion
(40:58) Acknowledgements
(41:55) Appendix: Strengths and Pitfalls of Each Type
(42:07) Within-Cause Prioritization Strengths
(42:12) Decision-Making Support
(42:37) Comparability of Outputs
(44:18) Disciplinarity Advantages
(45:45) Responsiveness to Evidence
(46:48) Movement Building
(48:06) Within-Cause Prioritization Weaknesses and Potential Pitfalls
(48:12) Responsiveness to Evidence
(50:54) Decision-Making Support
(52:45) Cross-Cause Prioritization Strengths
(53:06) Decision-Making Support
(54:49) Responsiveness to Evidence
(56:08) Movement Building
(56:22) Comparability of Outputs
(56:45) Decision-Making Support
(57:14) Cross-Cause Prioritization Weaknesses and Potential Pitfalls
(57:20) Comparability of Outputs
(58:01) Disciplinarity Advantages
(58:41) Movement Building
(59:09) Decision-Making Support
(01:00:27) Cause Prioritization Strengths
(01:00:32) Decision-Making Support
(01:02:01) Responsiveness to Evidence
(01:02:52) Movement Building
(01:03:28) Cause Prioritization Weaknesses and Potential Pitfalls
(01:04:28) Decision-Making Support
(01:06:08) Responsiveness to Evidence

The original text contained 23 footnotes which were omitted from this narration.

First published: April 16th, 2025
Source: https://forum.effectivealtruism.org/posts/ZPdZv8sHuYndD8xhJ/doing-prioritization-better-2
Narrated by TYPE III AUDIO.

The Real Python Podcast
Experiments With Gen AI, Knowledge Graphs, Workflows, and Python

May 9, 2025 · 59:18


Are you looking for some projects where you can practice your Python skills? Would you like to experiment with building a generative AI app or an automated knowledge graph sentiment analysis tool? This week on the show, we speak with Raymond Camden about his journey into Python, his work in developer relations, and the Python projects featured on his blog.

Earley AI Podcast
Earley AI Podcast Episode 66: Reengineering Knowledge for the AI Era

May 9, 2025 · 29:49


In this episode of the Earley AI Podcast, host Seth Earley sits down with industry analyst and advisor Tony Baer, a seasoned expert in data, cloud, and analytics. With decades of experience guiding global tech leaders like AWS and Oracle, Tony brings a nuanced perspective on how knowledge engineering is evolving, and why context is the missing link in many enterprise AI initiatives.

Together, Seth and Tony explore the shift from static data models to dynamic knowledge frameworks, the renewed importance of governance, and how graph databases and generative AI are reshaping enterprise intelligence. This is a conversation packed with hard-earned lessons and actionable insight for data, IT, and transformation leaders aiming to make AI work in the real world.

Key Takeaways:
  • Knowledge engineering today is about dynamic, adaptive structures, not static ontologies or rigid models.
  • The role of the knowledge engineer is shifting: it's less about technical mastery and more about bridging data, business, and domain expertise.
  • Context is foundational. The five W's (Who, What, When, Where, Why, and How) unlock meaningful, actionable intelligence.
  • Graph databases and AI are enabling real-time connections across data, turning static information into living knowledge.
  • Generative AI delivers the most value when rooted in organizational context. RAG strategies demand clean data and strong information architecture.
  • Successful AI initiatives are focused. Start with well-bounded, high-impact processes; avoid boiling the ocean.
  • Core principles from previous data waves still apply. It's about evolving governance, stewardship, and architecture for the AI era.
  • Sustainable value comes from feedback loops, iteration, and alignment, not silver bullets.

Tune in to discover how to make AI practical, actionable, and intelligent for your organization.

Quote of the Show: "Just because something is old does not make it wrong. There are a lot of disciplines we've built up over the years, governance, data stewardship, that still matter. The principle was right. We just adapt it and use our learnings from each cycle to become more knowledgeable and proficient." (Tony Baer)

Links:
  • LinkedIn: https://www.linkedin.com/in/dbinsight/
  • Website: https://www.dbinsight.io

Thanks to our sponsors: VKTR, Earley Information Science, AI Powered Enterprise Book

Data Skeptic
Unveiling Graph Datasets

May 8, 2025 · 44:12


Thoroughbred Racing Radio Network
Thursday Del Mar Ship & Win ATR from Iroquois Steeplechase-Part 2: Patrick Lewis/Keri Brion/Declan Carroll, Tony Black, Win Using Thoro-Graph w/ Jeff Franklin

May 8, 2025


Oracle University Podcast
Oracle GoldenGate 23ai: New Features & Product Family

May 6, 2025 · 17:39


In this episode, Lois Houston and Nikita Abraham continue their deep dive into Oracle GoldenGate 23ai, focusing on its evolution and the extensive features it offers. They are joined once again by Nick Wagner, who provides valuable insights into the product's journey.   Nick talks about the various iterations of Oracle GoldenGate, highlighting the significant advancements from version 12c to the latest 23ai release. The discussion then shifts to the extensive new features in 23ai, including AI-related capabilities, UI enhancements, and database function integration.   Oracle GoldenGate 23ai: Fundamentals: https://mylearn.oracle.com/ou/course/oracle-goldengate-23ai-fundamentals/145884/237273 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.   -----------------------------------------------------------------   Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services.  Nikita: Hi everyone! Last week, we introduced Oracle GoldenGate and its capabilities, and also spoke about GoldenGate 23ai. In today's episode, we'll talk about the various iterations of Oracle GoldenGate since its inception. And we'll also take a look at some new features and the Oracle GoldenGate product family. 00:57 Lois: And we have Nick Wagner back with us. Nick is a Senior Director of Product Management for GoldenGate at Oracle. Hi Nick! 
I think the last time we had an Oracle University course was when Oracle GoldenGate 12c was out. I'm sure there's been a lot of advancements since then. Can you walk us through those? Nick: GoldenGate 12.3 introduced the microservices architecture. GoldenGate 18c introduced support for Oracle Autonomous Data Warehouse and Autonomous Transaction Processing Databases. In GoldenGate 19c, we added the ability to do cross-endian remote capture for Oracle, making it easier to set up the GoldenGate OCI service to capture from environments like Solaris, SPARC, and HP-UX and replicate into the Cloud. Also, GoldenGate 19c introduced a simpler process for upgrades and installation of GoldenGate where we released something called a unified build. This means that when you install GoldenGate for a particular database, you don't need to worry about the database version when you install GoldenGate. Prior to this, you would have to install a version-specific and database-specific version of GoldenGate. So this really simplified that whole process. In GoldenGate 23ai, which is where we are now, this really is a huge release.  02:16 Nikita: Yeah, we covered some of the distributed AI features and high availability environments in our last episode. But can you give us an overview of everything that's in the 23ai release? I know there's a lot to get into but maybe you could highlight just the major ones? Nick: Within the AI and streaming environments, we've got interoperability for database vector types, heterogeneous capture and apply as well. Again, this is not just replication between Oracle-to-Oracle vector or Postgres to Postgres vector, it is heterogeneous just like the rest of GoldenGate. The entire UI has been redesigned and optimized for high speed. And so we have a lot of customers that have dozens and dozens of extracts and replicats and processes running and it was taking a long time for the UI to refresh those and to show what's going on within those systems.
So the UI has been optimized to be able to handle those environments much better. We now have the ability to call database functions directly from call map. And so when you do transformation with GoldenGate, we have about 50 or 60 built-in transformation routines for string conversion, arithmetic operation, date manipulation. But we never had the ability to directly call a database function. 03:28 Lois: And now we do? Nick: So now you can actually call that database function, database stored procedure, database package, return a value and that can be used for transformation within GoldenGate. We have integration with identity providers, being able to use token-based authentication and integrate in with things like Azure Active Directory and your other single sign-on for the GoldenGate product itself. Within Oracle 23ai, there's a number of new features. One of those cool features is something called lock-free reservation columns. So this allows you to have a row, a single row within a table and you can identify a column within that row that's like an inventory column. And you can have multiple different users and multiple different transactions all updating that column within that same exact row at that same time. So you no longer have row-level locking for these reservation columns. And it allows you to do things like shopping carts very easily. If I have 500 widgets to sell, I'm going to let any number of transactions come in and subtract from that inventory column. And then once it gets below a certain point, then I'll start enforcing that row-level locking. 04:43 Lois: That's really cool… Nick: The one key thing that I wanted to mention here is that because of the way that the lock-free reservations work, you can have multiple transactions open on the same row. This is only supported for Oracle to Oracle. You need to have that same lock-free reservation data type and availability on that target system if GoldenGate is going to replicate into it. 
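The lock-free reservation behavior Nick describes can be sketched in plain Python. This is an illustrative model only, not Oracle syntax or the actual implementation: each open transaction records a pending reservation against the shared inventory value instead of taking a row-level lock, and a reservation is accepted as long as committed inventory minus everything already promised covers the request.

```python
class InventoryRow:
    """Toy model of one database row with a 'reservable' inventory column."""

    def __init__(self, widgets):
        self.widgets = widgets   # committed inventory value
        self.pending = {}        # open transactions' uncommitted reservations

    def reserve(self, txn_id, amount):
        # No row-level lock: just check that committed inventory minus
        # everything promised to still-open transactions covers the request.
        available = self.widgets - sum(self.pending.values())
        if amount > available:
            return False         # would oversell, so reject this reservation
        self.pending[txn_id] = self.pending.get(txn_id, 0) + amount
        return True

    def commit(self, txn_id):
        # Only at commit does the reservation hit the committed value.
        self.widgets -= self.pending.pop(txn_id, 0)

row = InventoryRow(widgets=500)
# Several transactions are open against the same row at the same time.
assert row.reserve("t1", 200)
assert row.reserve("t2", 250)
assert not row.reserve("t3", 100)  # only 50 widgets remain unreserved
row.commit("t1")
row.commit("t2")
print(row.widgets)  # 50
```

The point of the model is the one Nick makes: multiple transactions stay open against the same row simultaneously, and enforcement only bites when the unreserved quantity runs low.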
05:05 Nikita: Are there any new features related to the diagnosability and observability of GoldenGate?  Nick: We've improved the AWR reports in Oracle 23ai. There's now seven sections that are specific to Oracle GoldenGate to allow you to really go in and see exactly what the GoldenGate processes are doing and how they're behaving inside the database itself. And there's a Replication Performance Advisor package inside that database, and that's been integrated into the Web UI as well. So now you can actually get information out of the replication advisor package in Oracle directly from the UI without having to log into the database and try to run any database procedures to get it. We've also added the ability to support a per-PDB Extract.  So in the past, when GoldenGate would run on a multitenant database, a multitenant database in Oracle, all the redo data from any pluggable database gets sent to that one redo stream. And so you would have to configure GoldenGate at the container or root level and it would be able to access anything at any PDB. Now, there's better security and better performance by doing what we call per-PDB Extract. And this means that for a single pluggable database, I can have an extract that runs at that database level that's going to capture information just from that pluggable database. 06:22 Lois And what about non-Oracle environments, Nick? Nick: We've also enhanced the non-Oracle environments as well. For example, in Postgres, we've added support for precise instantiation using Postgres snapshots. This eliminates the need to handle collisions when you're doing Postgres to Postgres replication and initial instantiation. On the GoldenGate for big data side, we've renamed that product more aptly to distributed applications in analytics, which is really what it does, and we've added a whole bunch of new features here too. The ability to move data into Databricks, doing Google Pub/Sub delivery. 
We now have support for XAG within the GoldenGate for distributed applications and analytics. What that means is that now you can follow all of our MAA best practices for GoldenGate for Oracle, but it also works for the DAA product as well, meaning that if it's running on one node of a cluster and that node fails, it'll restart itself on another node in the cluster. We've also added the ability to deliver data to Redis, Google BigQuery, stage and merge functionality for better performance into the BigQuery product. And then we've added a completely new feature, and this is something called streaming data and apps and we're calling it AsyncAPI and CloudEvent data streaming. It's a long name, but what that means is that we now have the ability to publish changes from a GoldenGate trail file out to end users. And so this allows through the Web UI or through the REST API, you can now come into GoldenGate and through the distributed applications and analytics product, actually set up a subscription to a GoldenGate trail file. And so this allows us to push data into messaging environments, or you can simply subscribe to changes and it doesn't have to be the whole trail file, it can just be a subset. You can specify exactly which tables and you can put filters on that. You can also set up your topologies as well. So, it's a really cool feature that we've added here. 08:26 Nikita: Ok, you've given us a lot of updates about what GoldenGate can support. But can we also get some specifics? Nick: So as far as what we have, on the Oracle Database side, there's a ton of different Oracle databases we support, including the Autonomous Databases and all the different flavors of them, your Oracle Database Appliance, your Base Database Service within OCI, your of course, Standard and Enterprise Edition, as well as all the different flavors of Exadata, are all supported with GoldenGate. This is all for capture and delivery. And this is all versions as well. 
GoldenGate supports Oracle 23ai and below. We also have a ton of non-Oracle databases in different Cloud stores. On the non-Oracle side, we support everything from application-specific databases like FairCom DB, all the way to more advanced applications like Snowflake, which there's a vast user base for that. We also support a lot of different cloud stores and these again, are non-Oracle, nonrelational systems, or they can be relational databases. We also support a lot of big data platforms and this is part of the distributed applications and analytics side of things where you have the ability to replicate to different Apache environments, different Cloudera environments. We also support a number of open-source systems, including things like Apache Cassandra, MySQL Community Edition, a lot of different Postgres open source databases along with MariaDB. And then we have a bunch of streaming event products, NoSQL data stores, and even Oracle applications that we support. So there's absolutely a ton of different environments that GoldenGate supports. There are additional Oracle databases that we support and this includes the Oracle Metadata Service, as well as Oracle MySQL, including MySQL HeatWave. Oracle also has Oracle NoSQL, Spatial and Graph, and TimesTen products, which again are all supported by GoldenGate. 10:23 Lois: Wow, that's a lot of information! Nick: One of the things that we didn't really cover was the different SaaS applications, which we've got like Cerner, Fusion Cloud, Hospitality, Retail, MICROS, Oracle Transportation, JD Edwards, Siebel, and on and on and on.  And again, because of the nature of GoldenGate, it's heterogeneous. Any source can talk to any target. And so it doesn't have to be, oh, I'm pulling from Oracle Fusion Cloud, that means I have to go to an Oracle Database on the target, not necessarily.  10:51 Lois: So, there's really a massive amount of flexibility built into the system.
11:00 Unlock the power of AI Vector Search with our new course and certification. Get more accurate search results, handle complex datasets easily, and supercharge your data-driven decisions. From now through May 15, 2025, we are waiving the certification exam fee (valued at $245). Visit mylearn.oracle.com to enroll. 11:26 Nikita: Welcome back! Now that we've gone through the base product, what other features or products are in the GoldenGate family itself, Nick? Nick: So we have quite a few. We've kind of touched already on GoldenGate for Oracle databases and non-Oracle databases. We also have something called GoldenGate for Mainframe, which right now is covered under the GoldenGate for non-Oracle, but there is a licensing difference there. So that's something to be aware of. We also have the OCI GoldenGate product. We are announcing and we have announced that OCI GoldenGate will also be made available as part of the Oracle Database@Azure and Oracle Database@Google Cloud partnerships.  And then you'll be able to use that vendor's cloud credits to actually pay for the OCI GoldenGate product. One of the cool things about this is it will have full feature parity with OCI GoldenGate running in OCI. So all the same features, all the same sources and targets, all the same topologies, and you'll be able to migrate data in and out of those clouds at will, just like you do with OCI GoldenGate today running in OCI.  We have Oracle GoldenGate Free.  This is a completely free edition of GoldenGate to use. It is limited on the number of platforms that it supports as far as sources and targets and the size of the database.
So in order to publish the GoldenGate trail files or allow people to subscribe to them, it would be covered under the Oracle GoldenGate Distributed Applications and Analytics license. We also have OCI GoldenGate Marketplace, which allows you to run essentially the on-premises version of GoldenGate but within OCI. So a little bit more flexibility there. It also has a hub architecture. So if you need that 99.99% availability, you can get it within the OCI Marketplace environment. We have GoldenGate for Oracle Enterprise Manager Cloud Control, which used to be called Oracle Enterprise Manager. And this allows you to use Enterprise Manager Cloud Control to get all the statistics and details about GoldenGate. So all the reporting information, all the analytics, all the statistics, how fast GoldenGate is replicating, what's the lag, what's the performance of each of the processes, how much data am I sending across a network. All that's available within the plug-in. We also have Oracle GoldenGate Veridata. This is a nice utility and tool that allows you to compare two databases, whether or not GoldenGate is running between them and actually tell you, hey, these two systems are out of sync. And if they are out of sync, it actually allows you to repair the data too. 14:25 Nikita: That's really valuable…. Nick: And it does this comparison without locking the source or the target tables. The other really cool thing about Veridata is it does this while there's data in flight. So let's say that the GoldenGate lag is 15 or 20 seconds and I want to compare this table that has 10 million rows in it. The Veridata product will go out, run its comparison once. Once that comparison is done the first time, it's then going to have a list of rows that are potentially out of sync. Well, some of those rows could have been moved over or could have been modified during that 10 to 15 second window. And so the next time you run Veridata, it's actually going to go through. 
It's going to check just those rows that were potentially out of sync to see if they're really out of sync or not. And if it comes back and says, hey, out of those potential rows, there's two out of sync, it'll actually produce a script that allows you to resynchronize those systems and repair them. So it's a very cool product.  15:19 Nikita: What about GoldenGate Stream Analytics? I know you mentioned it in the last episode, but in the context of this discussion, can you tell us a little more about it?  Nick: This is the ability to essentially stream data from a GoldenGate trail file, and they do a real time analytics on it. And also things like geofencing or real-time series analysis of it.  15:40 Lois: Could you give us an example of this? Nick: If I'm working in tracking stock market information and stocks, it's not really that important on how much or how far down a stock goes. What's really important is how quickly did that stock rise or how quickly did that stock fall. And that's something that GoldenGate Stream Analytics product can do. Another thing that it's very valuable for is the geofencing. I can have an application on my phone and I can track where the user is based on that application and all that information goes into a database. I can then use the geofencing tool to say that, hey, if one of those users on that app gets within a certain distance of one of my brick-and-mortar stores, I can actually send them a push notification to say, hey, come on in and you can order your favorite drink just by clicking Yes, and we'll have it ready for you. And so there's a lot of things that you can do there to help upsell your customers and to get more revenue just through GoldenGate itself. And then we also have a GoldenGate Migration Utility, which allows customers to migrate from the classic architecture into the microservices architecture. 16:44 Nikita: Thanks Nick for that comprehensive overview.  
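The two-pass Veridata comparison Nick describes earlier can be sketched with toy data (plain Python with hypothetical rows, not the actual product logic): pass one diffs whole tables and records rows that look out of sync, and pass two re-checks only those suspects, since replication lag may have resolved some of them in the meantime.

```python
def diff_keys(source, target):
    """Return primary keys whose values differ between two tables."""
    keys = set(source) | set(target)
    return {k for k in keys if source.get(k) != target.get(k)}

source = {1: "a", 2: "b", 3: "c", 4: "d"}
target = {1: "a", 2: "B", 3: "?", 4: "d"}

# Pass one: compare whole tables, collect potentially out-of-sync rows.
suspects = diff_keys(source, target)        # rows 2 and 3 look wrong

# Between passes, replication catches up on row 3; it was only in flight.
target[3] = "c"

# Pass two: re-check just the suspect rows, not the whole table.
still_bad = {k for k in suspects if source.get(k) != target.get(k)}
print(sorted(still_bad))  # [2] -> only row 2 needs a repair script
```

The second pass is cheap because it touches only the handful of suspect rows, which is how the comparison can run without locking tables while data is still moving.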
Lois: In our next episode, we'll have Nick back with us to talk about commonly used terminology and the GoldenGate architecture. And if you want to learn more about what we discussed today, visit mylearn.oracle.com and take a look at the Oracle GoldenGate 23ai Fundamentals course. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 17:10 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
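The two-pass, in-flight comparison Nick describes for Veridata can be sketched in miniature. This is an illustrative toy only, not Veridata's actual implementation: tables are stand-in dictionaries, and the point is simply the idea of re-checking only candidate rows after the replication lag window has passed.

```python
def compare(source, target):
    """Return keys whose values differ (or that exist on only one side)."""
    keys = set(source) | set(target)
    return {k for k in keys if source.get(k) != target.get(k)}

# Pass 1: full comparison while replication is still in flight.
source = {1: "a", 2: "b", 3: "c", 4: "d"}
target = {1: "a", 2: "old", 3: "c"}     # row 2 stale, row 4 not yet applied
candidates = compare(source, target)    # {2, 4} -- some of these may just be lag

# ...replication catches up on row 4, but row 2 is genuinely out of sync...
target[4] = "d"

# Pass 2: re-check only the candidates, not all ten million rows.
out_of_sync = {k for k in candidates if source.get(k) != target.get(k)}
print(out_of_sync)  # {2}
```

A repair step would then emit statements for just those keys, which mirrors the resynchronization script the episode mentions.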

Thoroughbred Racing Radio Network
Horseshoe Indy “Thurby” ATR from Churchill Downs-Part 2: Saratoga Special's Tom Law & Sean Clancy, Twin Spires' Nick Tammaro & HRN's Ed DeRosa, Win Using Thoro-Graph w/ Jeff Franklin

Thoroughbred Racing Radio Network

Play Episode Listen Later May 1, 2025


The Tech Blog Writer Podcast
3263: How Neo4j and Graph Databases Help Enterprises Make Smarter Decisions

The Tech Blog Writer Podcast

Play Episode Listen Later Apr 30, 2025 30:48


How do you uncover misinformation and financial fraud hidden in plain sight across thousands of digital platforms during a global election cycle? In this episode, I spoke with Jim Webber, Chief Scientist at Neo4j, to explore how graph database technology is being used to expose coordinated disinformation campaigns, empower AI systems, and help enterprises manage the complexity of modern data. At the heart of our conversation is the story of the ElectionGraph Project, where Syracuse University used Neo4j's graph technology to investigate political ad spend on Meta platforms. What they discovered was not just political messaging, but sophisticated scams disguised as legitimate campaigns. These efforts, targeting civically engaged users, used merchandise giveaways as a front to harvest credit card details and enroll victims in recurring billing traps. Traditional analytics would have struggled to trace these relationships, but graph databases allowed researchers to map and understand the deeper connections between thousands of entities. We also unpack how graph technology goes far beyond fraud detection. Jim explains why graph databases are now foundational for businesses building AI systems, particularly those using Retrieval-Augmented Generation (RAG) to reduce hallucinations and improve decision making. Whether it's helping enterprises respond to customer needs or enabling AI agents to take action in real time, graphs provide the structure and context needed for reliable outcomes. Jim also shares the backstory behind Klarna's data transformation, where the company embraced knowledge graphs at the core of its operations and replaced major systems, including parts of Salesforce. It's a striking example of what becomes possible when a business commits to connected data as a strategic asset. From misinformation to intelligent automation, this episode dives into the real-world value of graph technology in 2025. 
Are you thinking critically about how your data infrastructure supports your AI ambitions?
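The ElectionGraph idea above, that superficially unrelated ad pages share back-end infrastructure, can be shown with a tiny hand-rolled graph. This is a hypothetical miniature, not the actual ElectionGraph data or any Neo4j code: all page and domain names are invented, and a real investigation would use a graph database rather than a Python dictionary.

```python
from collections import defaultdict

# Invented "page advertises via asset" observations: pages that share a
# payment or billing domain are likely run by the same operation.
ads = [
    ("PatriotDealsUSA", "pay.merch-checkout.example"),
    ("FreedomGearShop", "pay.merch-checkout.example"),
    ("LibertyTees2024", "billing.tshirt-funnel.example"),
    ("FreedomGearShop", "billing.tshirt-funnel.example"),
    ("LocalTownHallNews", "donate.realcampaign.example"),
]

# Undirected bipartite graph: pages and shared assets are both nodes.
graph = defaultdict(set)
for page, asset in ads:
    graph[page].add(asset)
    graph[asset].add(page)

def component(node):
    """Collect every node reachable from `node`: one coordinated cluster."""
    seen, stack = set(), [node]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(graph[n])
    return seen

cluster = component("PatriotDealsUSA")
pages = sorted(n for n in cluster if n in {p for p, _ in ads})
print(pages)  # ['FreedomGearShop', 'LibertyTees2024', 'PatriotDealsUSA']
```

Row-oriented analytics would need a self-join per hop to find these links; the connected-component walk surfaces the whole cluster at once, which is the relationship-first advantage the episode describes.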

MLOps.community
GraphBI: Expanding Analytics to All Data Through the Combination of GenAI, Graph, & Visual Analytics // Paco Nathan & Weidong Yang // #310

MLOps.community

Play Episode Listen Later Apr 29, 2025 74:01


GraphBI: Expanding Analytics to All Data Through the Combination of GenAI, Graph, & Visual Analytics // MLOps Podcast #310 with Paco Nathan, Principal DevRel Engineer at Senzing, & Weidong Yang, CEO of Kineviz.
Join the Community: https://go.mlops.community/YTJoin
Get the newsletter: https://go.mlops.community/YTNewsletter

// Abstract
Existing BI and big data solutions depend largely on structured data, which makes up only about 20% of all available information, leaving the vast majority untapped. In this talk, we introduce GraphBI, which aims to address this challenge by combining GenAI, graph technology, and visual analytics to unlock the full potential of enterprise data.
Recent technologies like RAG (Retrieval-Augmented Generation) and GraphRAG leverage GenAI for tasks such as summarization and Q&A, but they often function as black boxes, making verification challenging. In contrast, GraphBI uses GenAI for data pre-processing—converting unstructured data into a graph-based format—enabling a transparent, step-by-step analytics process that ensures reliability.
We will walk through the GraphBI workflow, exploring best practices and challenges in each step of the process: managing both structured and unstructured data, data pre-processing with GenAI, iterative analytics using a BI-focused graph grammar, and final insight presentation. This approach uniquely surfaces business insights by effectively incorporating all types of data.

// Bio
Paco Nathan
Paco Nathan is a "player/coach" who excels in data science, machine learning, and natural language, with 40 years of industry experience. He leads DevRel for the Entity Resolved Knowledge Graph practice area at Senzing.com, advises Argilla.io, Kurve.ai, KungFu.ai, and DataSpartan.co.uk, and is lead committer for the pytextrank and kglab open source projects. Formerly: Director of the Learning Group at O'Reilly Media, and Director of Community Evangelism at Databricks.

Weidong Yang
Weidong Yang, Ph.D., is the founder and CEO of Kineviz, a San Francisco-based company that develops interactive visual-analytics-based solutions to address complex big data problems. His expertise spans physics, computer science, and performing art, with significant contributions to the semiconductor industry and quantum dot research at UC Berkeley and in Silicon Valley. Yang also leads Kinetech Arts, a 501(c) non-profit blending dance, science, and technology. An eloquent public speaker and performer, he holds 11 US patents, including the groundbreaking Diffraction-Based Overlay technology, vital for sub-10-nm semiconductor production.

// Related Links
Website: https://www.kineviz.com/
Blog: https://medium.com/kineviz
Website: https://derwen.ai/paco
https://huggingface.co/pacoid
https://github.com/ceteri
https://neo4j.com/developer-blog/entity-resolved-knowledge-graphs/

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter @mlopscommunity (https://x.com/mlopscommunity) or LinkedIn (https://go.mlops.community/linkedin)
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Weidong on LinkedIn: /yangweidong/
Connect with Paco on LinkedIn: /ceteri/

This American Life
859: Chaos Graph

This American Life

Play Episode Listen Later Apr 27, 2025 67:29


People immersed in chaos try to solve for what it all adds up to. Visit thisamericanlife.org/lifepartners to sign up for our premium subscription.
Prologue: A scientist who is used to organizing data starts tracking scientific meetings that seem to exist only on paper—meetings that might decide the fate of years of research. The NIH website shows one reality; the empty conference rooms tell another story. She graphs the chaos. (9 minutes)
Act One: American doctors returning from Gaza compare notes and start to see a pattern. (28 minutes)
Act Two: A woman watches her partner get taken in handcuffs with no explanation. Days later, she spots him in the most unexpected place. The coordinates of her life suddenly don't make sense as she navigates the bewildering map of the US immigration system. (23 minutes)
Transcripts are available at thisamericanlife.org
This American Life privacy policy.
Learn more about sponsor message choices.

CNN News Briefing
5 Good Things: Why Saying No Brings Eva Longoria Joy

CNN News Briefing

Play Episode Listen Later Apr 26, 2025 16:41


The host of CNN's "Searching for Spain" shares why Americans should try to live life like the Spaniards do. A rare record collector reunites a woman with a Voice-o-Graph she recorded 70 years ago. How this record-setting rodent is saving lives with his sense of smell. From deception to acceptance, a female magician's decades-long journey into the world's most prestigious magic club. Plus, scientists may have found the first signs of life on a planet outside our solar system. Learn more about your ad choices. Visit podcastchoices.com/adchoices

CNN 5 Good Things
Why Saying No Brings Eva Longoria Joy

CNN 5 Good Things

Play Episode Listen Later Apr 26, 2025 17:11


The host of CNN's "Searching for Spain" shares why Americans should try to live life like the Spaniards do. A rare record collector reunites a woman with a Voice-o-Graph she recorded 70 years ago. How this record-setting rodent is saving lives with his sense of smell. From deception to acceptance, a female magician's decades-long journey into the world's most prestigious magic club. Plus, scientists may have found the first signs of life on a planet outside our solar system. Learn more about your ad choices. Visit podcastchoices.com/adchoices

Effective Altruism Forum Podcast
“Why you can justify almost anything using historical social movements” by JamesÖz

Effective Altruism Forum Podcast

Play Episode Listen Later Apr 25, 2025 9:11


[Cross-posted from my Substack here] If you spend time with people trying to change the world, you'll come to an interesting conundrum: various advocacy groups reference previous successful social movements as evidence that their chosen strategy is the most important one. Yet these groups often follow wildly different strategies from each other to achieve social change. So, which one of them is right? The answer is all of them and none of them. This is because many people use research and historical movements to justify their pre-existing beliefs about how social change happens. Simply put, you can find a case study to fit most plausible theories of how social change happens. For example, the groups might say: repeated nonviolent disruption is the key to social change, citing the Freedom Riders from the civil rights movement or ACT UP from the gay rights movement. Technological progress is what drives improvements [...] The original text contained 1 footnote which was omitted from this narration.
---
First published: April 24th, 2025
Source: https://forum.effectivealtruism.org/posts/kACcdhLDdWb9ZPG9L/why-you-can-justify-almost-anything-using-historical-social
---
Narrated by TYPE III AUDIO.
---
Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

Effective Altruism Forum Podcast
[Linkpost] “Scaling Our Pilot Early-Warning System” by Jeff Kaufman

Effective Altruism Forum Podcast

Play Episode Listen Later Apr 25, 2025 5:34


This is a link post. Summary: The NAO will increase our sequencing significantly over the next few months, funded by a $3M grant from Open Philanthropy. This will allow us to scale our early-warning system to where we could flag many engineered pathogens early enough to mitigate their worst impacts, and also generate large amounts of data to develop, tune, and evaluate our detection systems. One of the biological threats the NAO is most concerned with is a 'stealth' pathogen, such as a virus with the profile of a faster-spreading HIV. This could cause a devastating pandemic, and early detection would be critical to mitigate the worst impacts. If such a pathogen were to spread, however, we wouldn't be able to monitor it with traditional approaches because we wouldn't know what to look for. Instead, we have invested in metagenomic sequencing for pathogen-agnostic detection. This doesn't require deciding what [...]
---
First published: April 2nd, 2025
Source: https://forum.effectivealtruism.org/posts/AJ8bd2sz8tF7cxJff/scaling-our-pilot-early-warning-system
Linkpost URL: https://naobservatory.org/blog/scaling-our-early-warning-system/
---
Narrated by TYPE III AUDIO.
---
Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

SAP Cloud Platform Podcast
Episode 118: Become a SaaS provider of multi-tenant applications with SAP BTP

SAP Cloud Platform Podcast

Play Episode Listen Later Apr 25, 2025 28:35 Transcription Available


In the April 2025 episode of SAP BTP Talk, we explore how existing and potential SAP partners can build, run, and integrate scalable full-stack cloud applications using the SAP Cloud Application Programming Model (CAP), adhering to the development recommendations set out in the SAP BTP Developer's Guide. We also touch on adopting an ERP-agnostic design while developing the solution, which lets you deliver your application as a side-by-side extension to consumers using any SAP solution, such as SAP S/4HANA Cloud, SAP Business One, and SAP Business ByDesign.

Thoroughbred Racing Radio Network
Thursday Woodbine '25 ATR from Churchill Downs-Part 2: Hall Of Fame's Brien Bouyea, Smarty Jones owner Pat Chapman, Seth Merrow, Win Using Thoro-Graph's Jeff Franklin

Thoroughbred Racing Radio Network

Play Episode Listen Later Apr 24, 2025


The Joe Reis Show
Juan Sequeda & Jesus Barrasa - Unlocking Knowledge with Graphs

The Joe Reis Show

Play Episode Listen Later Apr 24, 2025 55:14


Juan Sequeda and Jesus Barrasa are among the world's top experts on graphs. In this episode, we chat about definitions of semantics and ontologies, the differences between RDF and property graphs, and more. We also talk about how AI is giving graphs a new surge of interest.

Thoroughbred Racing Radio Network
Thursday Santa Anita Park Hollywood Meet ATR-Part 2: NYRA's Pat McKenna, Steeplechase Weekend w/ Joe Clancy, Win Using Thoro-Graph w/ Jeff Franklin

Thoroughbred Racing Radio Network

Play Episode Listen Later Apr 17, 2025


Effective Altruism Forum Podcast
“Cost-effectiveness of Anima International Poland” by saulius

Effective Altruism Forum Podcast

Play Episode Listen Later Apr 17, 2025 39:24


Summary
In this article, I estimate the cost-effectiveness of five Anima International programs in Poland: improving cage-free and broiler welfare, blocking new factory farms, banning fur farming, and encouraging retailers to sell more plant-based protein. I estimate that together, these programs help roughly 136 animals—or 32 years of farmed animal life—per dollar spent. Animal years affected per dollar spent was within an order of magnitude for all five evaluated interventions. I also tried to estimate how much suffering each program alleviates. Using SADs (Suffering-Adjusted Days)—a metric developed by Ambitious Impact (AIM) that accounts for species differences and pain intensity—Anima's programs appear highly cost-effective, even compared to charities recommended by Animal Charity Evaluators. However, I also ran a small informal survey to understand how people intuitively weigh different categories of pain defined by the Welfare Footprint Institute. The results suggested that SADs may heavily underweight brief but intense suffering. Based [...]
---
Outline:
(02:16) Background
(02:46) Results
(05:57) Explanations of the programs
(08:59) Why these estimates are very uncertain
(13:48) Animal welfare metric
(16:42) Comparison to SADs
(19:42) Comparison to other charities
(19:47) Comparisons of SADs estimates
(20:54) Comparisons of cage-free estimates
(24:26) For how many years do reforms have an impact?
(25:21) Cage-free
(29:45) Broilers
(31:18) Stop the farms
(32:57) Fur farms
The original text contained 8 footnotes which were omitted from this narration.
---
First published: April 10th, 2025
Source: https://forum.effectivealtruism.org/posts/sLYSa7MyuDKxreN5h/cost-effectiveness-of-anima-international-poland-1
---
Narrated by TYPE III AUDIO.
---
Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

Engineering Kiosk
#191 Graphdatenbanken: von GraphRAG bis Cypher mit Michael Hunger von Neo4j

Engineering Kiosk

Play Episode Listen Later Apr 14, 2025 72:16


Of Edges and Nodes: An Introduction to Graph Databases
The relationships between the individual records in your database can play a role in deciding which kind of database is the best fit. If you have independent records with no relations to each other, or frequent one-to-many relations, relational databases are well suited. But if you have a great many many-to-many relations, one kind of database plays to its strengths: graph databases.
A good example is social networks like LinkedIn or Facebook, where events, people, companies, and posts with comments are all continuously related to one another, also known as the social graph. Of course, all of this can be stored in a relational database too, but questions like "Give me all the people I'm connected to in the third degree who come from Germany and have worked at Aldi" are hard to answer there. For graph databases, they're a breeze. Reason enough to give this topic a stage, and that's what this episode is about.
In the interview with expert Michael Hunger, we clarify what a graph database is, which use cases it models better than, say, relational databases, what the origins of graph databases are, what the difference is between the property graph model and the triple store model, how to extract data from a graph using languages like Cypher, SPARQL, and Datalog, for which use cases it may not be the right data structure, and we take a look at knowledge graphs, LLMs, and GraphRAG.
Bonus: what the movie The Matrix has to do with graph databases.
You can find our current advertising partners at https://engineeringkiosk.dev/partners
Quick feedback on the episode:
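The "third-degree connections from Germany who worked at Aldi" question from the episode description can be sketched with a plain breadth-first search over a toy in-memory graph. Everything here (names, countries, employers, the adjacency list) is invented for illustration; a graph database answers the same question declaratively instead of procedurally.

```python
from collections import deque

# Toy social graph: adjacency list plus per-person attributes (all invented).
edges = {
    "me": ["anna", "ben"],
    "anna": ["me", "carla"],
    "ben": ["me", "dora"],
    "carla": ["anna", "emil"],
    "dora": ["ben"],
    "emil": ["carla"],
}
people = {
    "anna": {"country": "Germany", "employers": ["Aldi"]},
    "ben": {"country": "France", "employers": ["Carrefour"]},
    "carla": {"country": "Germany", "employers": ["Siemens"]},
    "dora": {"country": "Germany", "employers": ["Aldi", "Lidl"]},
    "emil": {"country": "Germany", "employers": ["Aldi"]},
}

def connections_within(graph, start, max_depth):
    """BFS returning {node: depth} for every node reachable from
    `start` in at most `max_depth` hops (excluding `start` itself)."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if seen[node] == max_depth:
            continue
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen[neighbor] = seen[node] + 1
                queue.append(neighbor)
    seen.pop(start)
    return seen

# "Everyone within 3 degrees of me, from Germany, who has worked at Aldi."
matches = sorted(
    name
    for name in connections_within(edges, "me", 3)
    if people[name]["country"] == "Germany"
    and "Aldi" in people[name]["employers"]
)
print(matches)  # ['anna', 'dora', 'emil']
```

In Cypher this would read roughly `MATCH (me:Person {name: 'me'})-[:KNOWS*1..3]-(p:Person) WHERE p.country = 'Germany' AND 'Aldi' IN p.employers RETURN DISTINCT p.name` (a sketch; the labels and properties are the same invented ones as above), which is the "breeze" the episode description refers to.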

Thoroughbred Racing Radio Network
Thursday Horseshoe Indianapolis ATR-Part 2: Tom Law, KEE w/ Jeremy Plonk, Win Using Thoro-Graph w/ Jeff Franklin

Thoroughbred Racing Radio Network

Play Episode Listen Later Apr 10, 2025


School Counseling Simplified Podcast
243. Data strategies every counselor needs with Patti Hoelzle

School Counseling Simplified Podcast

Play Episode Listen Later Apr 8, 2025 26:25


Welcome back to another episode of School Counseling Simplified! All April long, I'm sitting down with amazing guest experts to bring you insight, encouragement, and practical tools for your school counseling practice. Today's guest is the incredible Patti Hoelzle from Rooted Well, and we're talking all about something many counselors shy away from… data. But don't worry—Patti breaks it down in a way that's simple, empowering, and exciting!

Patti Hoelzle is the owner of Rooted WELL and a National Board Certified School Counselor with a passion for building proactive, equitable systems of student support. She trains and consults on mindfulness in schools, trauma-informed practices, tiered interventions, and PBIS, working with educators and families nationwide. A sought-after speaker, Patti has presented at local and national conferences and teaches as an adjunct professor in a school counseling graduate program. Previously, she led social-emotional learning and MTSS efforts in a school district and has spent 18 years dedicated to being a professional school counselor. Recognized as Washington's 2021 School Counselor Advocate of the Year, Patti is dedicated to ensuring every student gets the whole-child support they deserve.

Why Data Matters in School Counseling
School counselors are in a unique position—we have to do the job, prove our impact, and often justify our position for the following school year. The good news? Data can do all three. Using data allows you to:
- Advocate for your role and time
- Communicate impact to stakeholders, families, and administration
- Support budget decisions and staffing
- Build confidence in your work

Time Tracking as a Starting Point
Patti recommends starting with one of the simplest tools: a time tracker. She's created an Excel spreadsheet workbook that allows counselors to track:
- Time spent on individual students
- Tasks completed throughout the day
- Graphs and charts that automatically populate from your entries
This is perfect for sharing with admin, staying accountable, and noticing patterns in how your time is spent. You can find this resource in Patti's Teachers Pay Teachers store (linked in the show notes below).

Using Google Tools for Easy Data Collection
Another strategy Patti loves: Google Forms + the Google Suite. These tools are powerful for:
- Progress monitoring
- Sending surveys to students, teachers, and caregivers
- Collecting ongoing data during small groups
- Tracking changes in student behavior or academic progress
And bonus—sending forms to caregivers via email often leads to higher participation rates than paper handouts.

Advice for New Counselors
Start small. Patti suggests:
- Begin with tracking your time, since it's something you're already doing
- Add in pre/post assessments once you're in the groove
- Use tools that already exist—no need to reinvent the wheel

A Mindset Shift: The Slow Cooker Analogy
"Our work is like a slow cooker, not a microwave." Counselors often wish for a quick fix, but real change takes time. Don't be discouraged if you don't see growth right away. If your data isn't showing growth:
- Don't take it personally—there are many factors at play
- Use it as a learning opportunity
- Be willing to adapt and try new approaches
- Track student growth over time, especially with Tier 2 or Tier 3 students

This conversation was such a great reminder that data doesn't have to be intimidating—it can actually empower us to better serve our students and advocate for ourselves. You can connect with Patti and find her time tracker and other amazing resources linked below in the show notes. Thanks for listening, and I'll see you next week on School Counseling Simplified!

Resources mentioned:
Join my school counselor membership IMPACT here!
If you are enjoying School Counseling Simplified please follow and leave us a review on Apple Podcasts!

Connect with Rachel:
- TpT Store
- Blog
- Instagram
- Facebook Page
- Facebook Group
- Pinterest
- Youtube

Connect with Patti:
- rootedwellcoaching.com
- TpT Store
- TikTok
- Instagram

More About School Counseling Simplified:
School Counseling Simplified is a podcast offering easy to implement strategies for busy school counselors. The host, Rachel Davis from Bright Futures Counseling, shares tips and tricks she has learned from her years of experience as a school counselor both in the US and at an international school in Costa Rica. You can listen to School Counseling Simplified on Apple Podcasts, Spotify, Google Podcasts, and more!

Beyond the Code
Eva Beylin at #BB25, on The Graph, Optimism and Ethereum.

Beyond the Code

Play Episode Listen Later Apr 7, 2025 21:23


On March 26, 2025, Collider.VC hosted Building Blocks 2025 as part of ETH TLV. Our host, Yitzy Hammer, was invited to come and interview guests and speakers. Live from Building Blocks at Jaffa Port in Tel Aviv, Yitzy Hammer sits down with Eva Beylin, Board Member of Optimism and former Director of The Graph Foundation. They explore Eva's journey from management consulting to the crypto world, her experiences with NFTs, and her role at The Graph. They discuss the evolution of the Ethereum ecosystem, the importance of leadership honesty, and the current state of the crypto market, emphasizing the need for a return to fundamentals and realistic expectations in the industry.

Thoroughbred Racing Radio Network
Thursday Succeed ATR from Santa Anita-Part 2: Dylan Donnelly, Win Using Thoro-Graph with Jeff Franklin

Thoroughbred Racing Radio Network

Play Episode Listen Later Apr 3, 2025


The Azure Security Podcast
Episode 110: Securing GenAI Applications with Entra (3 of 4): Monitoring and More

The Azure Security Podcast

Play Episode Listen Later Apr 1, 2025 40:14 Transcription Available


In this episode, Michael and Gladys talk to Sharon Chahal, a Principal Program Manager on the Identity team at Microsoft, about monitoring and auditing when building GenAI applications. We also cover other related topics. Michael and Gladys cover the latest security news about API Security Posture Management, Azure Key Vault in China, the Azure Data Studio retirement, new least-privilege permissions in Graph, and more. https://aka.ms/azsecpod

SAP Cloud Platform Podcast
Episode 117: Discovering Knowledge Graph within SAP HANA Cloud

SAP Cloud Platform Podcast

Play Episode Listen Later Apr 1, 2025 44:26 Transcription Available


In this new episode, Niklas Siemer, Product Specialist for SAP Business Technology Platform, talks to Shabana Samsudheen, Senior Product Manager for SAP HANA Cloud. We take a deep dive into the new Knowledge Graph engine of SAP HANA Cloud, talking about what graphs are, what they're used for, typical use cases for graphs, and how to use them in SAP HANA Cloud.

Catalog & Cocktails
What Do Data Modeling, Data Vault and Knowledge Graphs Have in Common?

Catalog & Cocktails

Play Episode Listen Later Mar 28, 2025 70:55


Patrick Cuba, Snowflake Architect, explores the fundamental connections between Data Modeling, Data Vault, and Knowledge Graphs—revealing how these approaches all center on the same core elements: business entities, their relationships, and their historical states. Patrick unpacks why, despite the AI revolution, human expertise remains irreplaceable for accountability and real business value. If you're wrestling with cognitive overload in the face of data explosion or wondering how different modeling disciplines can complement each other, this episode delivers the practical insights you need.

Catalog & Cocktails
TAKEAWAYS - What Do Data Modeling, Data Vault and Knowledge Graphs Have in Common?

Catalog & Cocktails

Play Episode Listen Later Mar 28, 2025 6:33


Patrick Cuba, Snowflake Architect, explores the fundamental connections between Data Modeling, Data Vault, and Knowledge Graphs—revealing how these approaches all center on the same core elements: business entities, their relationships, and their historical states. Patrick unpacks why, despite the AI revolution, human expertise remains irreplaceable for accountability and real business value. If you're wrestling with cognitive overload in the face of data explosion or wondering how different modeling disciplines can complement each other, this episode delivers the practical insights you need.

GRTiQ Podcast
Michaela (Mickey) Negus - Senior Engineering Manager, Engineering Operations & Customer Success at Edge & Node

GRTiQ Podcast

Play Episode Listen Later Mar 28, 2025 42:16


Leave feedback!
Today I am speaking with Mickey Negus, the Senior Engineering Manager at Edge & Node who oversees Engineering Operations and Customer Success. This is Mickey's second appearance on the GRTiQ Podcast, having first joined us in November 2023 (Ep. 143), where she shared her journey into web3 and her initial work at Edge & Node.
After joining Edge & Node in 2022, Mickey has focused on revolutionizing how decentralized protocols provide technical support – transforming what was once a seven-day response window into a remarkably efficient three-minute median response time. Her approach goes beyond traditional support models, employing highly skilled technical experts in frontline roles and coordinating support efforts across multiple organizations, time zones, and communication channels to meet the unique needs of a decentralized network.
During our conversation, Mickey shares insights about The Graph's successful transition to Sunrise (moving all traffic from hosted services to the decentralized network), explains the critical differences between web2 and web3 support paradigms, and outlines her three-pillar support playbook for decentralized protocols. She also discusses how they've implemented AI tools for sentiment analysis, documentation improvement, and streamlining support operations across hundreds of concurrent conversations.
Show Notes and Transcripts
The GRTiQ Podcast takes listeners inside web3 and The Graph (GRT) by interviewing members of the ecosystem. Please help support this project and build the community by subscribing and leaving a review.
Twitter: GRT_iQ
www.GRTiQ.com

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

If you're in SF: Join us for the Claude Plays Pokemon hackathon this Sunday! If you're not: Fill out the 2025 State of AI Eng survey for $250 in Amazon cards!
We are SO excited to share our conversation with Dharmesh Shah, co-founder of HubSpot and creator of Agent.ai.
A particularly compelling concept we discussed is the idea of "hybrid teams" - the next evolution in workplace organization where human workers collaborate with AI agents as team members. Just as we previously saw hybrid teams emerge in terms of full-time vs. contract workers, or in-office vs. remote workers, Dharmesh predicts that the next frontier will be teams composed of both human and AI members. This raises interesting questions about team dynamics, trust, and how to effectively delegate tasks between human and AI team members.
The discussion of business models in AI reveals an important distinction between Work as a Service (WaaS) and Results as a Service (RaaS), something Dharmesh has written extensively about. While RaaS has gained popularity, particularly in customer support applications where outcomes are easily measurable, Dharmesh argues that this model may be over-indexed. Not all AI applications have clearly definable outcomes or consistent economic value per transaction, making WaaS more appropriate in many cases. This insight is particularly relevant for businesses considering how to monetize AI capabilities.
The technical challenges of implementing effective agent systems are also explored, particularly around memory and authentication. Shah emphasizes the importance of cross-agent memory sharing and the need for more granular control over data access. He envisions a future where users can selectively share parts of their data with different agents, similar to how OAuth works but with much finer control. This points to significant opportunities in developing infrastructure for secure and efficient agent-to-agent communication and data sharing.
Other highlights from our conversation
* The Evolution of AI-Powered Agents – Exploring how AI agents have evolved from simple chatbots to sophisticated multi-agent systems, and the role of MCPs in enabling that.
* Hybrid Digital Teams and the Future of Work – How AI agents are becoming teammates rather than just tools, and what this means for business operations and knowledge work.
* Memory in AI Agents – The importance of persistent memory in AI systems and how shared memory across agents could enhance collaboration and efficiency.
* Business Models for AI Agents – Exploring the shift from software as a service (SaaS) to work as a service (WaaS) and results as a service (RaaS), and what this means for monetization.
* The Role of Standards Like MCP – Why MCP has been widely adopted and how it enables agent collaboration, tool use, and discovery.
* The Future of AI Code Generation and Software Engineering – How AI-assisted coding is changing the role of software engineers and what skills will matter most in the future.
* Domain Investing and Efficient Markets – Dharmesh's approach to domain investing and how inefficiencies in digital asset markets create business opportunities.
* The Philosophy of Saying No – Lessons from "Sorry, You Must Pass" and how prioritization leads to greater productivity and focus.
Timestamps
* 00:00 Introduction and Guest Welcome
* 02:29 Dharmesh Shah's Journey into AI
* 05:22 Defining AI Agents
* 06:45 The Evolution and Future of AI Agents
* 13:53 Graph Theory and Knowledge Representation
* 20:02 Engineering Practices and Overengineering
* 25:57 The Role of Junior Engineers in the AI Era
* 28:20 Multi-Agent Systems and MCP Standards
* 35:55 LinkedIn's Legal Battles and Data Scraping
* 37:32 The Future of AI and Hybrid Teams
* 39:19 Building Agent AI: A Professional Network for Agents
* 40:43 Challenges and Innovations in Agent AI
* 45:02 The Evolution of UI in AI Systems
* 01:00:25 Business Models: Work as a Service vs. Results as a Service
* 01:09:17 The Future Value of Engineers
* 01:09:51 Exploring the Role of Agents
* 01:10:28 The Importance of Memory in AI
* 01:11:02 Challenges and Opportunities in AI Memory
* 01:12:41 Selective Memory and Privacy Concerns
* 01:13:27 The Evolution of AI Tools and Platforms
* 01:18:23 Domain Names and AI Projects
* 01:32:08 Balancing Work and Personal Life
* 01:35:52 Final Thoughts and Reflections
Transcript
Alessio [00:00:04]: Hey everyone, welcome back to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Small AI.
swyx [00:00:12]: Hello, and today we're super excited to have Dharmesh Shah to join us. I guess your relevant title here is founder of Agent.ai.
Dharmesh [00:00:20]: Yeah, that's true for this. Yeah, creator of Agent.ai and co-founder of HubSpot.
swyx [00:00:25]: Co-founder of HubSpot, which I followed for many years, I think 18 years now, gonna be 19 soon. And you caught, you know, people can catch up on your HubSpot story elsewhere. I should also thank Sean Puri, who I've chatted with back and forth, who's been, I guess, getting me in touch with your people. But also, I think like, just giving us a lot of context, because obviously, My First Million joined you guys, and they've been chatting with you guys a lot. So for the business side, we can talk about that, but I kind of wanted to engage your CTO, agent, engineer side of things. So how did you get agent religion?
Dharmesh [00:01:00]: Let's see. So I've been working, I'll take like a half step back, a decade or so ago, even though actually more than that. So even before HubSpot, the company I was contemplating that I had named for was called Ingenisoft. And the idea behind Ingenisoft was a natural language interface to business software. Now realize this is 20 years ago, so that was a hard thing to do.
But the actual use case that I had in mind was, you know, we had data sitting in business systems like a CRM or something like that. And my kind of what-I-thought-clever idea at the time: oh, what if we used email as the kind of interface to get to business software? And the motivation for using email is that it automatically works when you're offline. So imagine I'm getting on a plane or I'm on a plane. There was no internet on planes back then. It's like, oh, I'm going through business cards from an event I went to. I can just type things into an email just to have them all in the backlog. When it reconnects, it sends those emails to a processor that basically kind of parses effectively the commands and updates the software, sends you the file, whatever it is. And there was a handful of commands. I was a little bit ahead of the times in terms of what was actually possible. And I reattempted this natural language thing with a product called ChatSpot that I did back 20...

swyx [00:02:12]: Yeah, this is your first post-ChatGPT project.

Dharmesh [00:02:14]: I saw it come out. Yeah. And so I've always been kind of fascinated by this natural language interface to software. Because, you know, as software developers, myself included, we've always said, oh, we build intuitive, easy-to-use applications. And it's not intuitive at all, right? Because what we're doing is... We're taking the mental model that's in our head of what we're trying to accomplish with said piece of software and translating that into a series of touches and swipes and clicks and things like that. And there's nothing natural or intuitive about it. And so natural language interfaces, for the first time, you know, whatever the thought is you have in your head and expressed in whatever language that you normally use to talk to yourself in your head, you can just sort of emit that and have software do something. And I thought that was kind of a breakthrough, which it has been. And it's gone.
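The offline email-to-CRM idea described here, queue plain-text commands in an email body and parse them into structured updates when the connection returns, can be sketched roughly as follows. The command grammar and field names are invented for illustration; the original Ingenisoft command set is not documented here.

```python
import re

# Toy patterns for an email-command interface. The grammar
# ("add contact <name> <email>", "note <name>: <text>") is hypothetical.
COMMAND_PATTERNS = [
    ("add_contact", re.compile(r"^add contact (?P<name>[\w ]+?) <(?P<email>[^>]+)>$")),
    ("add_note",    re.compile(r"^note (?P<name>[\w ]+?): (?P<text>.+)$")),
]

def parse_email_body(body: str) -> list[dict]:
    """Turn each line of an email body into a structured CRM command."""
    commands = []
    for line in body.strip().splitlines():
        line = line.strip()
        if not line:
            continue
        for action, pattern in COMMAND_PATTERNS:
            m = pattern.match(line)
            if m:
                commands.append({"action": action, **m.groupdict()})
                break
        else:
            # Unrecognized lines are kept so nothing queued offline is lost.
            commands.append({"action": "unknown", "raw": line})
    return commands

cmds = parse_email_body("""
add contact Ada Lovelace <ada@example.com>
note Ada Lovelace: met at the conference
""")
```

The queue-then-sync property falls out for free: the parser only runs once the mail is delivered, so the email outbox acts as the offline backlog.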
So that's where I first started getting into the journey. I started because now it actually works, right? So once we got ChatGPT and you can take, even with a few-shot example, convert something into structured, even back in the GPT-3.5 days, it did a decent job in a few-shot example, convert something to structured text if you knew what kinds of intents you were going to have. And so that happened. And that ultimately became a HubSpot project. But then agents intrigued me because I'm like, okay, well, that's the next step here. So chat's great. Love Chat UX. But if we want to do something even more meaningful, it felt like the next kind of advancement is not this kind of, I'm chatting with some software in a kind of a synchronous back and forth model, is that software is going to do things for me in kind of a multi-step way to try and accomplish some goals. So, yeah, that's when I first got started. It's like, okay, what would that look like? Yeah. And I've been obsessed ever since, by the way.

Alessio [00:03:55]: Which goes back to your first experience with it, which is like you're offline. Yeah. And you want to do a task. You don't need to do it right now. You just want to queue it up for somebody to do it for you. Yes. As you think about agents, like, let's start at the easy question, which is like, how do you define an agent? Maybe. You mean the hardest question in the universe? Is that what you mean?

Dharmesh [00:04:12]: You said you have an irritating take. I do have an irritating take. I think, well, some number of people have been irritated, including within my own team. So I have a very broad definition for agents, which is it's AI-powered software that accomplishes a goal. Period. That's it. And what irritates people about it is like, well, that's so broad as to be completely non-useful. And I understand that. I understand the criticism.
But in my mind, if you kind of fast forward months, I guess, in AI years, the implementation of it, and we're already starting to see this, and we'll talk about this, different kinds of agents, right? So I think in addition to having a usable definition, and I like yours, by the way, and we should talk more about that, that you just came out with, the classification of agents actually is also useful, which is, is it autonomous or non-autonomous? Does it have a deterministic workflow? Does it have a non-deterministic workflow? Is it working synchronously? Is it working asynchronously? Then you have the different kind of interaction modes. Is it a chat agent, kind of like a customer support agent would be? You're having this kind of back and forth. Is it a workflow agent that just does a discrete number of steps? So there's all these different flavors of agents. So if I were to draw it in a Venn diagram, I would draw a big circle that says, this is agents, and then I have a bunch of circles, some overlapping, because they're not mutually exclusive. And so I think that's what's interesting, and we're seeing development along a bunch of different paths, right? So if you look at the first implementation of agent frameworks, you look at Baby AGI and AutoGPT, I think it was, not AutoGen, that's the Microsoft one. They were way ahead of their time because they assumed this level of reasoning and execution and planning capability that just did not exist, right? So it was an interesting thought experiment, which is what it was. Even the guy that, I'm an investor in Yohei's fund that did Baby AGI. It wasn't ready, but it was a sign of what was to come. And so the question then is, when is it ready? And so lots of people talk about the state of the art when it comes to agents. I'm a pragmatist, so I think of the state of the practical.
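The classification axes Dharmesh lists (autonomy, workflow determinism, sync vs. async, interaction mode) can be captured as a small descriptor type. This is a sketch of our own, not a framework API; only the axis names come from the conversation.

```python
from dataclasses import dataclass
from enum import Enum

class InteractionMode(Enum):
    CHAT = "chat"          # back-and-forth, e.g. a customer support agent
    WORKFLOW = "workflow"  # a discrete series of steps

@dataclass(frozen=True)
class AgentProfile:
    """One point in the 'Venn diagram' of agent flavors described above."""
    autonomous: bool
    deterministic_workflow: bool
    synchronous: bool
    mode: InteractionMode

# Two example points in the diagram (hypothetical agents):
support_bot = AgentProfile(autonomous=False, deterministic_workflow=True,
                           synchronous=True, mode=InteractionMode.CHAT)
background_researcher = AgentProfile(autonomous=True, deterministic_workflow=False,
                                     synchronous=False, mode=InteractionMode.WORKFLOW)
```

Because the axes are independent booleans plus a mode, the circles in the Venn diagram overlap exactly as described: no combination is ruled out.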
It's like, okay, well, what can I actually build that has commercial value or solves actually some discrete problem with some baseline of repeatability or verifiability?

swyx [00:06:22]: There was a lot, and very, very interesting. I'm not irritated by it at all. Okay. As you know, I take a... There's a lot of anthropological view or linguistics view. And in linguistics, you don't want to be prescriptive. You want to be descriptive. Yeah. So you're a goals guy. That's the key word in your thing. And other people have other definitions that might involve like delegated trust or non-deterministic work, LLM in the loop, all that stuff. The other thing I was thinking about, just the comment on Baby AGI, AutoGPT. Yeah. In that piece that you just read, I was able to go through our backlog and just kind of track the winter of agents and then the summer now. Yeah. And it's... We can tell the whole story as an oral history, just following that thread. And it's really just like, I think, I tried to explain the why now, right? Like I had, there's better models, of course. There's better tool use with like, they're just more reliable. Yep. Better tools with MCP and all that stuff. And I'm sure you have opinions on that too. Business model shift, which you like a lot. I just heard you talk about RaaS with the MFM guys. Yep. Cost is dropping a lot. Yep. Inference is getting faster. There's more model diversity. Yep. Yep. I think it's a subtle point. It means that like, you have different models with different perspectives. You don't get stuck in the basin of performance of a single model. Sure. You can just get out of it by just switching models. Yep. Multi-agent research and RL fine tuning. So I just wanted to let you respond to like any of that.

Dharmesh [00:07:44]: Yeah. A couple of things. Connecting the dots on the kind of the definition side of it. So we'll get the irritation out of the way completely. I have one more, even more irritating leap on the agent definition thing.
So here's the way I think about it. By the way, the kind of word agent, I looked it up, like the English dictionary definition. The old school agent, yeah. Is when you have someone or something that does something on your behalf, like a travel agent or a real estate agent acts on your behalf. It's like proxy, which is a nice kind of general definition. So the other direction I'm sort of headed, and it's going to tie back to tool calling and MCP and things like that, is if you, and I'm not a biologist by any stretch of the imagination, but we have these single-celled organisms, right? Like the simplest possible form of what one would call life. But it's still life. It just happens to be single-celled. And then you can combine cells and then cells become specialized over time. And you have much more sophisticated organisms, you know, kind of further down the spectrum. In my mind, at the most fundamental level, you can almost think of having atomic agents. What is the simplest possible thing that's an agent that can still be called an agent? What is the equivalent of a kind of single-celled organism? And the reason I think that's useful is right now we're headed down the road, which I think is very exciting around tool use, right? That says, okay, the LLMs now can be provided a set of tools that it calls to accomplish whatever it needs to accomplish in the kind of furtherance of whatever goal it's trying to get done. And I'm not overly bothered by it, but if you think about it, if you just squint a little bit and say, well, what if everything was an agent? And what if tools were actually just atomic agents? Because then it's turtles all the way down, right? Then it's like, oh, well, all that's really happening with tool use is that we have a network of agents that know about each other through something like an MCP and can kind of decompose a particular problem and say, oh, I'm going to delegate this to this set of agents.
And why do we need to draw this distinction between tools, which are functions most of the time? And an actual agent. And so I'm going to write this irritating LinkedIn post, you know, proposing this. It's like, okay. And I'm not suggesting we should call even functions, you know, call them agents. But there is a certain amount of elegance that happens when you say, oh, we can just reduce it down to one primitive, which is an agent that you can combine in complicated ways to kind of raise the level of abstraction and accomplish higher order goals. Anyway, that's my answer. I'd say that's a success. Thank you for coming to my TED Talk on agent definitions.

Alessio [00:09:54]: How do you define the minimum viable agent? Do you already have a definition for, like, where you draw the line between a cell and an atom? Yeah.

Dharmesh [00:10:02]: So in my mind, it has to, at some level, use AI in order for it to—otherwise, it's just software. It's like, you know, we don't need another word for that. And so that's probably where I draw the line. So then the question, you know, the counterargument would be, well, if that's true, then lots of tools themselves are actually not agents because they're just doing a database call or a REST API call or whatever it is they're doing. And that does not necessarily qualify them, which is a fair counterargument. And I accept that. It's like a good argument. I still like to think about—because we'll talk about multi-agent systems, because I think—so we've accepted, which I think is true, lots of people have said it, and you've hopefully combined some of those clips of really smart people saying this is the year of agents, and I completely agree, it is the year of agents. But then shortly after that, it's going to be the year of multi-agent systems or multi-agent networks. I think that's where it's going to be headed next year. Yeah.

swyx [00:10:54]: OpenAI's already on that. Yeah. My quick philosophical engagement with you on this.
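The "turtles all the way down" framing, tools as atomic agents composed into higher-order agents, can be sketched with a single primitive. The names and the fixed pipeline plan here are our own simplification, not an actual framework; real delegation would be decided by a model, not a hard-coded list.

```python
from typing import Callable

class Agent:
    """The single primitive: anything that accomplishes a goal.
    An 'atomic' agent wraps one function; a composite delegates to sub-agents."""
    def __init__(self, name: str, run: Callable[[str], str]):
        self.name = name
        self.run = run

def atomic(name: str, fn: Callable[[str], str]) -> Agent:
    return Agent(name, fn)

def composite(name: str, plan: list[Agent]) -> Agent:
    # Naive fixed plan: pipe the goal through each sub-agent in order.
    def run(goal: str) -> str:
        result = goal
        for sub in plan:
            result = sub.run(result)
        return result
    return Agent(name, run)

# "Tools" are just the smallest agents; composition raises the abstraction level.
upper = atomic("upper", str.upper)
exclaim = atomic("exclaim", lambda s: s + "!")
shouter = composite("shouter", [upper, exclaim])
```

The elegance Dharmesh points at is visible in the types: `composite` returns the same `Agent` type it consumes, so networks of agents nest arbitrarily deep.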
I often think about kind of the other spectrum, the other end of the cell spectrum. So single cell is life, multi-cell is life, and you clump a bunch of cells together in a more complex organism, they become organs, like an eye and a liver or whatever. And then obviously we consider ourselves one life form. There's not like a lot of lives within me. I'm just one life. And now, obviously, I don't think people don't really like to anthropomorphize agents and AI. Yeah. But we are extending our consciousness and our brain and our functionality out into machines. I just saw you were a Bee. Yeah. Which is, you know, it's nice. I have a Limitless pendant in my pocket.

Dharmesh [00:11:37]: I got one of these boys. Yeah.

swyx [00:11:39]: I'm testing it all out. You know, got to be early adopters. But like, we want to extend our personal memory into these things so that we can be good at the things that we're good at. And, you know, machines are good at it. Machines are there. So like, my definition of life is kind of like going outside of my own body now. I don't know if you've ever had like reflections on that. Like how yours. How our self is like actually being distributed outside of you. Yeah.

Dharmesh [00:12:01]: I don't fancy myself a philosopher. But you went there. So yeah, I did go there. I'm fascinated by kind of graphs and graph theory and networks and have been for a long, long time. And to me, we're sort of all nodes in this kind of larger thing. It just so happens that we're looking at individual kind of life forms as they exist right now. But so the idea is when you put a podcast out there, there's these little kind of nodes you're putting out there of like, you know, conceptual ideas. Once again, you have varying kind of forms of those little nodes that are up there and are connected in varying and sundry ways. And so I just think of myself as being a node in a massive, massive network. And I'm producing more nodes as I put content or ideas.
And, you know, you spend some portion of your life collecting dots, experiences, people, and some portion of your life then connecting dots from the ones that you've collected over time. And I found that really interesting things happen and you really can't know in advance how those dots are necessarily going to connect in the future. And that's, yeah. So that's my philosophical take. That's the, yes, exactly. Coming back.

Alessio [00:13:04]: Yep. Do you like graph as an agent abstraction? That's been one of the hot topics with LangGraph and Pydantic and all that.

Dharmesh [00:13:11]: I do. The thing I'm more interested in terms of use of graphs, and there's lots of work happening on that now, is graph data stores as an alternative in terms of knowledge stores and knowledge graphs. Yeah. Because, you know, so I've been in software now 30 plus years, right? So it's not 10,000 hours. It's like 100,000 hours that I've spent doing this stuff. And I grew up with, so back in the day, you know, I started on mainframes. There was a product called IMS from IBM, which is basically an index database, what we'd call like a key value store today. Then we've had relational databases, right? We have tables and columns and foreign key relationships. We all know that. We have document databases like MongoDB, which is sort of a nested structure keyed by a specific index. We have vector stores, vector embedding database. And graphs are interesting for a couple of reasons. One is, so it's not classically structured in a relational way. When you say structured database, to most people, they're thinking tables and columns and in relational database and set theory and all that. Graphs still have structure, but it's not the tables and columns structure. And you could wonder, and people have made this case, that they are a better representation of knowledge for LLMs and for AI generally than other things.
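The knowledge-graph-as-store idea discussed here can be illustrated with a toy triple store: retrieval pulls not just a matching node but its neighborhood, preserving relationships that a flat chunk would lose. The schema and sample facts are invented for illustration.

```python
from collections import defaultdict

# A toy knowledge graph as (subject, relation, object) triples.
triples = [
    ("HubSpot", "founded_by", "Dharmesh Shah"),
    ("Dharmesh Shah", "created", "Agent.ai"),
    ("Agent.ai", "is_a", "professional network for agents"),
]

adjacency = defaultdict(list)
for s, r, o in triples:
    adjacency[s].append((r, o))

def neighborhood(entity: str, depth: int = 2) -> list[tuple[str, str, str]]:
    """Collect facts reachable from an entity within `depth` hops."""
    facts, frontier = [], [entity]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for rel, obj in adjacency[node]:
                facts.append((node, rel, obj))
                next_frontier.append(obj)
        frontier = next_frontier
    return facts

facts = neighborhood("HubSpot")
```

Two hops from "HubSpot" already surfaces the founder's other project, a connection that semantic similarity over independent chunks would not reliably recover.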
So that's kind of thing number one conceptually, and that might be true, I think is possibly true. And the other thing that I really like about that in the context of, you know, I've been in the context of data stores for RAG is, you know, RAG, you say, oh, I have a million documents, I'm going to build the vector embeddings, I'm going to come back with the top X based on the semantic match, and that's fine. All that's very, very useful. But the reality is something gets lost in the chunking process and the, okay, well, those tend, you know, like, you don't really get the whole picture, so to speak, and maybe not even the right set of dimensions on the kind of broader picture. And it makes intuitive sense to me that if we did capture it properly in a graph form, that maybe that feeding into a RAG pipeline will actually yield better results for some use cases, I don't know, but yeah.

Alessio [00:15:03]: And do you feel like at the core of it, there's this difference between imperative and declarative programs? Because if you think about HubSpot, it's like, you know, people and graph kind of goes hand in hand, you know, but I think maybe the software before was more like primary foreign key based relationship, versus now the models can traverse through the graph more easily.

Dharmesh [00:15:22]: Yes. So I like that representation. There's something. It's just conceptually elegant about graphs and just from the representation of it, they're much more discoverable, you can kind of see it, there's observability to it, versus kind of embeddings, which you can't really do much with as a human. You know, once they're in there, you can't pull stuff back out. But yeah, I like that kind of idea of it. And the other thing that's kind of, because I love graphs, I've been long obsessed with PageRank from back in the early days. And, you know, one of the kind of simplest algorithms in terms of coming up, you know, with a phone, everyone's been exposed to PageRank.
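The NodeRank idea that follows, PageRank applied to an arbitrary graph with a custom notion of authority, reduces to the standard power-iteration loop. A minimal version, with uniform damping and no personalization:

```python
def pagerank(edges: dict[str, list[str]], damping: float = 0.85,
             iterations: int = 50) -> dict[str, float]:
    """Plain power-iteration PageRank over an adjacency-list graph."""
    nodes = set(edges) | {t for targets in edges.values() for t in targets}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for src, targets in edges.items():
            if targets:
                share = damping * rank[src] / len(targets)
                for t in targets:
                    new[t] += share
            else:  # dangling node: spread its rank evenly
                for n in nodes:
                    new[n] += damping * rank[src] / len(nodes)
        rank = new
    return rank

ranks = pagerank({"a": ["b"], "b": ["c"], "c": ["a", "b"]})
```

Swapping the uniform teleport term for a weighted one (e.g. by a contributor's track record on similar prompts) is exactly the "define what authority looks like" step; the iteration itself does not change.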
And the idea is that, and so I had this other idea for a project, not a company, and I have hundreds of these, called NodeRank, is to be able to take the idea of PageRank and apply it to an arbitrary graph that says, okay, I'm going to define what authority looks like and say, okay, well, that's interesting to me, because then if you say, I'm going to take my knowledge store, and maybe this person that contributed some number of chunks to the graph data store has more authority on this particular use case or prompt that's being submitted than this other one that may, or maybe this one was more popular, or maybe this one has, whatever it is, there should be a way for us to kind of rank nodes in a graph and sort them in some, some useful way. Yeah.

swyx [00:16:34]: So I think that's generally useful for, for anything. I think the, the problem, like, so even though at my conferences, GraphRAG is super popular and people are getting knowledge graph religion, and I will say like, it's getting space, getting traction in two areas, conversation memory, and then also just RAG in general, like the, the, the document data. Yeah. It's like a source. Most ML practitioners would say that knowledge graph is kind of like a dirty word. The graph database, people get graph religion, everything's a graph, and then they, they go really hard into it and then they get a, they get a graph that is too complex to navigate. Yes. And so like the, the, the simple way to put it is like you at running HubSpot, you know, the power of graphs, the way that Google has pitched them for many years, but I don't suspect that HubSpot itself uses a knowledge graph. No. Yeah.

Dharmesh [00:17:26]: So when is it over engineering? Basically? It's a great question. I don't know. So the question now, like in AI land, right, is the, do we necessarily need to understand? So right now, LLMs for, for the most part are somewhat black boxes, right?
We sort of understand how the, you know, the algorithm itself works, but we really don't know what's going on in there and, and how things come out. So if a graph data store is able to produce the outcomes we want, it's like, here's a set of queries I want to be able to submit and then it comes out with useful content. Maybe the underlying data store is as opaque as a vector embeddings or something like that, but maybe it's fine. Maybe we don't necessarily need to understand it to get utility out of it. And so maybe if it's messy, that's okay. Um, that's, it's just another form of lossy compression. Uh, it's just lossy in a way that we just don't completely understand in terms of, because it's going to grow organically. Uh, and it's not structured. It's like, ah, we're just gonna throw a bunch of stuff in there. Let the, the equivalent of the embedding algorithm, whatever they called in graph land. Um, so the one with the best results wins. I think so. Yeah.

swyx [00:18:26]: Or is this the practical side of me is like, yeah, it's, if it's useful, we don't necessarily

Dharmesh [00:18:30]: need to understand it.

swyx [00:18:30]: I have, I mean, I'm happy to push back as long as you want. Uh, it's not practical to evaluate like the 10 different options out there because it takes time. It takes people, it takes, you know, resources, right? Set. That's the first thing. Second thing is your evals are typically on small things and some things only work at scale. Yup. Like graphs. Yup.

Dharmesh [00:18:46]: Yup. That's, yeah, no, that's fair. And I think this is one of the challenges in terms of implementation of graph databases is that the most common approach that I've seen developers do, I've done it myself, is that, oh, I've got a Postgres database or a MySQL or whatever. I can represent a graph with a very set of tables with a parent child thing or whatever. And that sort of gives me the ability, uh, why would I need anything more than that?
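The parent-child-table approach just mentioned does work up to a point; in SQL, traversal takes a recursive CTE. A sketch with SQLite (the table and data are invented for illustration):

```python
import sqlite3

# A graph stored the "relational way": one edge table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE edges (parent TEXT, child TEXT)")
db.executemany("INSERT INTO edges VALUES (?, ?)",
               [("a", "b"), ("b", "c"), ("c", "d")])

# Traversal requires a recursive CTE: expressible, but this is the kind of
# query that native graph stores optimize and relational engines merely tolerate.
rows = db.execute("""
    WITH RECURSIVE reachable(node) AS (
        SELECT 'a'
        UNION
        SELECT e.child FROM edges e JOIN reachable r ON e.parent = r.node
    )
    SELECT node FROM reachable
""").fetchall()
```

This answers "why would I need anything more": the representation holds, but each hop is a join, which is exactly the cost profile that makes a social-graph workload impractical in this model at scale.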
And the answer is, well, if you don't need anything more than that, you don't need anything more than that. But there's a high chance that you're sort of missing out on the actual value that, uh, the graph representation gives you. Which is the ability to traverse the graph, uh, efficiently in ways that kind of going through the, uh, traversal in a relational database form, even though structurally you have the data, practically you're not gonna be able to pull it out in, in useful ways. Uh, so you wouldn't like represent a social graph, uh, in, in using that kind of relational table model. It just wouldn't scale. It wouldn't work.

swyx [00:19:36]: Uh, yeah. Uh, I think we want to move on to MCP. Yeah. But I just want to, like, just engineering advice. Yeah. Uh, obviously you've, you've, you've run, uh, you've, you've had to do a lot of projects and run a lot of teams. Do you have a general rule for over-engineering or, you know, engineering ahead of time? You know, like, because people, we know premature engineering is the root of all evil. Yep. But also sometimes you just have to. Yep. When do you do it? Yes.

Dharmesh [00:19:59]: It's a great question. This is, uh, a question as old as time almost, which is what's the right and wrong levels of abstraction. That's effectively what, uh, we're answering when we're trying to do engineering. I tend to be a pragmatist, right? So here's the thing. Um, lots of times doing something the right way. Yeah. It's like a marginal increased cost in those cases. Just do it the right way. And this is what makes a, uh, a great engineer or a good engineer better than, uh, a not so great one. It's like, okay, all things being equal. If it's going to take you, you know, roughly close to constant time anyway, might as well do it the right way. Like, so do things well, then the question is, okay, well, am I building a framework as the reusable library?
To what degree, uh, what am I anticipating in terms of what's going to need to change in this thing? Uh, you know, along what dimension? And then I think like a business person in some ways, like what's the return on calories, right? So, uh, and you look at, um, the expected value of it: it's like, okay, here are the five possible things that could happen, uh, try to assign probabilities like, okay, well, if there's a 50% chance that we're going to go down this particular path at some day, like, or one of these five things is going to happen and it costs you 10% more to engineer for that. It's basically, it's something that yields a kind of interest compounding value. Um, as you get closer to the time of, of needing that versus having to take on debt, which is when you under engineer it, you're taking on debt. You're going to have to pay off when you do get to that eventuality where something happens. One thing as a pragmatist, uh, so I would rather under engineer something than over engineer it. If I were going to err on the side of something, and here's the reason is that when you under engineer it, uh, yes, you take on tech debt, uh, but the interest rate is relatively known and payoff is very, very possible, right? Which is, oh, I took a shortcut here as a result of which now this thing that should have taken me a week is now going to take me four weeks. Fine. But if that particular thing that you thought might happen, never actually, you never have that use case transpire or just doesn't, it's like, well, you just save yourself time, right? And that has value because you were able to do other things instead of, uh, kind of slightly over-engineering it away, over-engineering it. But there's no perfect answers in art form in terms of, uh, and yeah, we'll, we'll bring kind of this layers of abstraction back on the code generation conversation, which we'll, uh, I think I have later on, but

Alessio [00:22:05]: I was going to ask, we can just jump ahead quickly.
Yeah. Like, as you think about vibe coding and all that, how does the. Yeah. Percentage of potential usefulness change when I feel like we over-engineering a lot of times it's like the investment in syntax, it's less about the investment in like architecting. Yep. Yeah. How does that change your calculus?

Dharmesh [00:22:22]: A couple of things, right? One is, um, so, you know, going back to that kind of ROI or a return on calories, kind of calculus or heuristic you think through, it's like, okay, well, what is it going to cost me to put this layer of abstraction above the code that I'm writing now, uh, in anticipating kind of future needs. If the cost of fixing, uh, or doing under engineering right now, uh, will trend towards zero, that says, okay, well, I don't have to get it right right now because even if I get it wrong, I'll run the thing for six hours instead of 60 minutes or whatever. It doesn't really matter, right? Like, because that's going to trend towards zero to be able, the ability to refactor a code. Um, and because we're going to not that long from now, we're going to have, you know, large code bases be able to exist, uh, you know, as, as context, uh, for a code generation or a code refactoring, uh, model. So I think it's going to make it, uh, make the case for under engineering, uh, even stronger. Which is why I take on that cost. You just pay the interest when you get there, it's not, um, just go on with your life vibe coded and, uh, come back when you need to. Yeah.

Alessio [00:23:18]: Sometimes I feel like there's no decision-making in some things like, uh, today I built a autosave for like our internal notes platform and I literally just asked Cursor. Can you add autosave? Yeah. I don't know if it's over under engineer. Yep. I just vibe coded it. Yep.
And I feel like at some point we're going to get to the point where the models kind of decide where the right line is.

Dharmesh [00:23:36]: But this is where the, like the, in my mind, the danger is, right? So there's two sides to this. One is the cost of kind of development and coding and things like that stuff that, you know, we talk about. But then like in your example, you know, one of the risks that we have is that because adding a feature, uh, like a save or whatever the feature might be to a product as that price tends towards zero, are we going to be less discriminant about what features we add as a result of making products more complicated, which has a negative impact on the user and a negative impact on the business. Um, and so that's the thing I worry about if it starts to become too easy, are we going to be. Too promiscuous in our, uh, kind of extension, adding product extensions and things like that. It's like, ah, why not add X, Y, Z or whatever back then it was like, oh, we only have so many engineering hours or story points or however you measure things. Uh, that at least kept us in check a little bit. Yeah.

Alessio [00:24:22]: And then over engineering, you're like, yeah, it's kind of like you're putting that on yourself. Yeah. Like now it's like the models don't understand that if they add too much complexity, it's going to come back to bite them later. Yep. So they just do whatever they want to do. Yeah. And I'm curious where in the workflow that's going to be, where it's like, Hey, this is like the amount of complexity and over-engineering you can do before you got to ask me if we should actually do it versus like do something else.

Dharmesh [00:24:45]: So you know, we've already, let's like, we're leaving this, uh, in the code generation world, this kind of compressed, um, cycle time. Right.
It's like, okay, we went from auto-complete, uh, in GitHub Copilot to like, oh, finish this particular thing and hit tab to a, oh, I sort of know your file or whatever. I can write out a full function to you to now I can like hold a bunch of the context in my head. Uh, so we can do app generation, which we have now with Lovable and Bolt and Replit Agent and other things. So then the question is, okay, well, where does it naturally go from here? So we're going to generate products. Make sense. We might be able to generate platforms as though I want a platform for ERP that does this, whatever. And that includes the APIs, includes the product and the UI, and all the things that make for a platform. There's nothing that says we would stop like, okay, can you generate an entire software company someday? Right. Uh, with the platform and the monetization and the go-to-market and the whatever. And you know, that that's interesting to me in terms of, uh, you know, what, when you take it to almost ludicrous levels of abstraction.

swyx [00:25:39]: It's like, okay, turn it to 11. You mentioned vibe coding, so I have to, this is a blog post I haven't written, but I'm kind of exploring it. Is the junior engineer dead?

Dharmesh [00:25:49]: I don't think so. I think what will happen is that the junior engineer will be able to, if all they're bringing to the table is the fact that they are a junior engineer, then yes, they're likely dead. But hopefully if they can communicate with carbon-based life forms, they can interact with product, if they're willing to talk to customers, they can take their kind of basic understanding of engineering and how kind of software works. I think that has value. So I have a 14-year-old right now who's taking Python programming class, and some people ask me, it's like, why is he learning coding? And my answer is, is because it's not about the syntax, it's not about the coding.
What he's learning is like the fundamental thing of like how things work. And there's value in that. I think there's going to be timeless value in systems thinking and abstractions and what that means. And whether functions manifested as math, which he's going to get exposed to regardless, or there are some core primitives to the universe, I think, that the more you understand them, those are what I would kind of think of as like really large dots in your life that will have a higher gravitational pull and value to them that you'll then be able to build on. So I want him to collect those dots, and he's not resisting. So it's like, okay, while he's still listening to me, I'm going to have him do things that I think will be useful.swyx [00:26:59]: You know, part of one of the pitches that I evaluated for AI engineer as a term. And the term is that maybe the traditional interview path or career path of software engineer goes away, which is because what's the point of LeetCode? Yeah. And, you know, it actually matters more that you know how to work with AI and to implement the things that you want. Yep.Dharmesh [00:27:16]: That's one of the like interesting things that's happened with generative AI. You know, you go from machine learning and the models and just that underlying form, which is like true engineering, right? Like the actual, what I call real engineering. I don't think of myself as a real engineer, actually. I'm a developer. But now with generative AI. We call it AI and it's obviously got its roots in machine learning, but it just feels like fundamentally different to me. Like you have the vibe. It's like, okay, well, this is just a whole different approach to software development to so many different things.
And so I'm wondering now, it's like an AI engineer is like, if you were to draw the Venn diagram, it's interesting because the cross between like AI things, generative AI and what the tools are capable of, what the models do, and this whole new kind of body of knowledge that we're still building out, it's still very young, intersected with kind of classic engineering, software engineering. Yeah.swyx [00:28:04]: I just described the overlap as it separates out eventually until it's its own thing, but it's starting out as software. Yeah.Alessio [00:28:11]: That makes sense. So to close the vibe coding loop, the other big hype now is MCPs. Obviously, I would say Claude Desktop and Cursor are like the two main drivers of MCP usage. I would say my favorite is the Sentry MCP. I can pull in errors and then you can just put the context in Cursor. How do you think about that abstraction layer? Does it feel... Does it feel almost too magical in a way? Do you think it's like you get enough? Because you don't really see how the server itself is then kind of like repackaging the information for you?Dharmesh [00:28:41]: I think MCP as a standard is one of the better things that's happened in the world of AI because a standard needed to exist and absent a standard, there was a set of things that just weren't possible. Now, we can argue whether it's the best possible manifestation of a standard or not. Does it do too much? Does it do too little? I get that, but it's just simple enough to both be useful and unobtrusive. It's understandable and adoptable by mere mortals, right? It's not overly complicated. You know, a reasonable engineer can stand up an MCP server relatively easily. The thing that has me excited about it is like, so I'm a big believer in multi-agent systems. And so that's going back to our kind of this idea of an atomic agent.
So imagine the MCP server, like obviously it calls tools, but the way I think about it, so I'm working on my current passion project, which is agent.ai. And we'll talk more about that in a little bit. More about the, I think we should, because I think it's interesting not to promote the project at all, but there's some interesting ideas in there. One of which is around, we're going to need a mechanism for, if agents are going to collaborate and be able to delegate, there's going to need to be some form of discovery and we're going to need some standard way. It's like, okay, well, I just need to know what this thing over here is capable of. We're going to need a registry, which Anthropic's working on. I'm sure others will and have been doing directories of, and there's going to be a standard around that too. How do you build out a directory of MCP servers? I think that's going to unlock so many things just because, and we're already starting to see it. So I think MCP or something like it is going to be the next major unlock because it allows systems that don't know about each other, don't need to, it's that kind of decoupling of like Sentry and whatever tools someone else was building. And it's not just about, you know, Claude Desktop or things like, even on the client side, I think we're going to see very interesting consumers of MCP, MCP clients versus just the chatbot-y kind of things. Like, you know, Claude Desktop and Cursor and things like that. But yeah, I'm very excited about MCP in that general direction.swyx [00:30:39]: I think the typical cynical developer take, it's like, we have OpenAPI. Yeah. What's the new thing? I don't know if you have a, do you have a quick MCP versus everything else? Yeah.Dharmesh [00:30:49]: So it's, so I like OpenAPI, right? So just a descriptive thing. It's OpenAPI. OpenAPI. Yes, that's what I meant. So it's basically a self-documenting thing. We can do machine-generated, lots of things from that output.
It's a structured definition of an API. I get that, love it. But MCPs sort of are kind of use-case specific. They're perfect for exactly what we're trying to use them for around LLMs in terms of discovery. It's like, okay, I don't necessarily need to know kind of all this detail. And so right now we have, we'll talk more about like MCP server implementations, but We will? I think, I don't know. Maybe we won't. At least it's in my head. It's like a back processor. But I do think MCP adds value above OpenAPI. It's, yeah, just because it solves this particular thing. And if we had come to the world, which we have, like, it's like, hey, we already have OpenAPI. It's like, if that were good enough for the universe, the universe would have adopted it already. There's a reason why MCP is taking off: because it marginally adds something that was missing before and doesn't go too far. And so that's why the kind of rate of adoption, you folks have written about this and talked about it. Yeah, why MCP won. Yeah. And it won because the universe decided that this was useful and maybe it gets supplanted by something else. Yeah. And maybe we discover, oh, maybe OpenAPI was good enough the whole time. I doubt that.swyx [00:32:09]: The meta lesson, this is, I mean, he's an investor in DevTools companies. I work in developer experience at DevRel in DevTools companies. Yep. Everyone wants to own the standard. Yeah. I'm sure you guys have tried to launch your own standards. Actually, isn't HubSpot known for a standard, you know, obviously inbound marketing. But is there a standard or protocol that you ever tried to push? No.Dharmesh [00:32:30]: And there's a reason for this. Yeah. Is that? And I don't mean, need to mean, speak for the people of HubSpot, but I personally. You kind of do. I'm not smart enough. That's not the, like, I think I have a. You're smart. Not enough for that. I'm much better off understanding the standards that are out there.
And I'm more on the composability side. Let's, like, take the pieces of technology that exist out there, combine them in creative, unique ways. And I like to consume standards. I don't like to, and that's not that I don't like to create them. I just don't think I have the, both the raw wattage or the credibility. It's like, okay, well, who the heck is Dharmesh, and why should we adopt a standard he created?swyx [00:33:07]: Yeah, I mean, there are people who don't monetize standards, like OpenTelemetry is a big standard, and LightStep never capitalized on that.Dharmesh [00:33:15]: So, okay, so if I were to do a standard, there's two things that have been in my head in the past. I was one around, a very, very basic one around, I don't even have the domain, I have a domain for everything, for open marketing. Because the issue we had in HubSpot grew up in the marketing space. There we go. There was no standard around data formats and things like that. It doesn't go anywhere. But the other one, and I did not mean to go here, but I'm going to go here. It's called OpenGraph. I know the term was already taken, but it hasn't been used for like 15 years now for its original purpose. But what I think should exist in the world is right now, our information, all of us, nodes are in the social graph at Meta or the professional graph at LinkedIn. Both of which are actually relatively closed in actually very annoying ways. Like very, very closed, right? Especially LinkedIn. Especially LinkedIn. I personally believe that if it's my data, and if I would get utility out of it being open, I should be able to make my data open or publish it in whatever forms that I choose, as long as I have control over it as opt-in. So the idea is around OpenGraph that says, here's a standard, here's a way to publish it. I should be able to go to OpenGraph.org slash Dharmesh dot JSON and get it back. And it's like, here's your stuff, right? 
And I can choose along the way and people can write to it and I can approve. And there can be an entire system. And if I were to do that, I would do it as a... Like a public benefit, non-profit-y kind of thing, as this is a contribution to society. I wouldn't try to commercialize that. Have you looked at ATProto? What's that? ATProto.swyx [00:34:43]: It's the protocol behind Bluesky. Okay. My good friend, Dan Abramov, who was the face of React for many, many years, now works there. And he actually did a talk that I can send you, which basically kind of tries to articulate what you just said. But he does, he loves doing these like really great analogies, which I think you'll like. Like, you know, a lot of our data is behind a handle, behind a domain. Yep. So he's like, all right, what if we flip that? What if it was like our handle and then the domain? Yep. So, and that's really like your data should belong to you. Yep. And I should not have to wait 30 days for my Twitter data to export. Yep.Dharmesh [00:35:19]: You should at least be able to automate it or do like, yes, I should be able to plug it into an agentic thing. Yeah. Yes. I think we're... Because so much of our data is... Locked up. I think the trick here isn't that standard. It is getting the normies to care.swyx [00:35:37]: Yeah. Because normies don't care.Dharmesh [00:35:38]: That's true. But building on that, normies don't care. So, you know, privacy is a really hot topic and an easy word to use, but it's not a binary thing. Like there are use cases where, and we make these choices all the time, that I will trade, not all privacy, but I will trade some privacy for some productivity gain or some benefit to me that says, oh, I don't care about that particular data being online if it gives me this in return, or I don't mind sharing this information with this company.Alessio [00:36:02]: If I'm getting, you know, this in return, but that sort of should be my option.
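The OpenGraph idea, publishing your own data at a well-known URL with per-field opt-in, could be sketched roughly like this. The schema, handle, and field names below are invented for illustration; no such spec exists yet:

```python
import json

# Hypothetical OpenGraph-style profile: the owner opts in field by field.
PROFILE = {
    "handle": "dharmesh",
    "fields": {
        "name": {"value": "Dharmesh Shah", "public": True},
        "email": {"value": "redacted@example.com", "public": False},
        "interests": {"value": ["domains", "agents"], "public": True},
    },
}

def published_view(profile):
    """Return only the fields the owner chose to make public."""
    return {
        "handle": profile["handle"],
        **{k: f["value"] for k, f in profile["fields"].items() if f["public"]},
    }

# This is what a GET of opengraph.example/dharmesh.json might serve.
print(json.dumps(published_view(PROFILE)))
```

The private email never appears in the published view; flipping a `public` flag is the opt-in control Dharmesh describes.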
I think now with computer use, you can actually automate some of the exports. Yes. Like something we've been doing internally is like everybody exports their LinkedIn connections. Yep. And then internally, we kind of merge them together to see how we can connect our companies to customers or things like that.Dharmesh [00:36:21]: And not to pick on LinkedIn, but since we're talking about it, but they feel strongly enough on the, you know, do not take LinkedIn data that they will block even browser use kind of things or whatever. They go to great, great lengths, even to see patterns of usage. And it says, oh, there's no way you could have, you know, gotten that particular thing or whatever without, and it's, so it's, there's...swyx [00:36:42]: Wasn't there a Supreme Court case that they lost? Yeah.Dharmesh [00:36:45]: So the one they lost was around someone that was scraping public data that was on the public internet. And that particular company had not signed any terms of service or whatever. It's like, oh, I'm just taking data that's on, there was no, and so that's why they won. But now, you know, the question is around, can LinkedIn... I think they can. Like, when you use, as a user, you use LinkedIn, you are signing up for their terms of service. And if they say, well, this kind of use of your LinkedIn account that violates our terms of service, they can shut your account down, right? They can. And they, yeah, so, you know, we don't need to make this a discussion. By the way, I love the company, don't get me wrong. I'm an avid user of the product. You know, I've got... Yeah, I mean, you've got over a million followers on LinkedIn, I think. Yeah, I do. And I've known people there for a long, long time, right? And I have lots of respect. And I understand even where the mindset originally came from of this kind of members-first approach to, you know, a privacy-first. I sort of get that. 
But sometimes you sort of have to wonder, it's like, okay, well, that was 15, 20 years ago. There's likely some controlled ways to expose some data on some member's behalf and not just completely be a binary. It's like, no, thou shalt not have the data.swyx [00:37:54]: Well, just pay for Sales Navigator.Alessio [00:37:57]: Before we move to the next layer of instruction, anything else on MCP you mentioned? Let's move back and then I'll tie it back to MCPs.Dharmesh [00:38:05]: So I think the... Let me open this with agents. Okay, so I'll start with... Here's my kind of running thesis, is that as AI and agents evolve, which they're doing very, very quickly, we're going to look at them more and more. I don't like to anthropomorphize. We'll talk about why this is not that. Less as just like raw tools and more like teammates. They'll still be software. They should self-disclose as being software. I'm totally cool with that. But I think what's going to happen is that in the same way you might collaborate with a team member on Slack or Teams or whatever you use, you can imagine a series of agents that do specific things just like a team member might do, that you can delegate things to. You can collaborate. You can say, hey, can you take a look at this? Can you proofread that? Can you try this? You can... Whatever it happens to be. So I think it is... I will go so far as to say it's inevitable that we're going to have hybrid teams someday. And what I mean by hybrid teams... So back in the day, hybrid teams were, oh, well, you have some full-time employees and some contractors. Then it was like hybrid teams are some people that are in the office and some that are remote. That's the kind of form of hybrid. The next form of hybrid is like the carbon-based life forms and agents and AI and some form of software. So let's say we temporarily stipulate that I'm right about that over some time horizon, that eventually we're going to have these kind of digitally hybrid teams.
So if that's true, then the question you sort of ask yourself is that then what needs to exist in order for us to get the full value of that new model? It's like, okay, well... You sort of need to... It's like, okay, well, how do I... If I'm building a digital team, like, how do I... Just in the same way, if I'm interviewing for an engineer or a designer or a PM, whatever, it's like, well, that's why we have professional networks, right? It's like, oh, they have a presence on likely LinkedIn. I can go through that semi-structured, structured form, and I can see the experience of whatever, you know, self-disclosed. But, okay, well, agents are going to need that someday. And so I'm like, okay, well, this seems like a thread that's worth pulling on. That says, okay. So I... So agent.ai is out there. And it's LinkedIn for agents. It's LinkedIn for agents. It's a professional network for agents. And the more I pull on that thread, it's like, okay, well, if that's true, like, what happens, right? It's like, oh, well, they have a profile just like anyone else, just like a human would. It's going to be a graph underneath, just like a professional network would be. It's just that... And you can have its, you know, connections and follows, and agents should be able to post. That's maybe how they do release notes. Like, oh, I have this new version. Whatever they decide to post, it should just be able to... Behave as a node on the network of a professional network. As it turns out, the more I think about that and pull on that thread, the more and more things, like, start to make sense to me. So it may be more than just a pure professional network. So my original thought was, okay, well, it's a professional network and agents as they exist out there, which I think there's going to be more and more of, will kind of exist on this network and have the profile. 
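The "professional network for agents" idea, a profile plus discovery, can be sketched as a toy registry. The `AgentCard` fields and agent names here are hypothetical, not agent.ai's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class AgentCard:
    """Self-description an agent publishes to the network, like a profile."""
    name: str
    description: str
    capabilities: list = field(default_factory=list)

class AgentRegistry:
    """Minimal in-memory directory: agents register, callers discover."""
    def __init__(self):
        self._agents = {}

    def register(self, card: AgentCard):
        self._agents[card.name] = card

    def discover(self, capability: str):
        """Return every agent that advertises the given capability."""
        return [c for c in self._agents.values() if capability in c.capabilities]

registry = AgentRegistry()
registry.register(AgentCard("sentry-agent", "Pulls error reports", ["errors", "logs"]))
registry.register(AgentCard("domain-agent", "Values web domains", ["domains"]))

matches = registry.discover("domains")
print([c.name for c in matches])  # ['domain-agent']
```

In a real MCP-style directory the cards would be served over the network, but the discover-by-capability shape is the same.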
But then, and this is always dangerous, I'm like, okay, I want to see a world where thousands of agents are out there in order for the... Because those digital employees, the digital workers don't exist yet in any meaningful way. And so then I'm like, oh, can I make that easier for, like... And so I have, as one does, it's like, oh, I'll build a low-code platform for building agents. How hard could that be, right? Like, very hard, as it turns out. But it's been fun. So now, agent.ai has 1.3 million users. 3,000 people have actually, you know, built some variation of an agent, sometimes just for their own personal productivity. About 1,000 of which have been published. And the reason this comes back to MCP for me, so imagine that and other networks, since I know agent.ai. So right now, we have an MCP server for agent.ai that exposes all the internally built agents that we have that do, like, super useful things. Like, you know, I have access to a Twitter API that I can subsidize the cost. And I can say, you know, if you're looking to build something for social media, these kinds of things, with a single API key, and it's all completely free right now, I'm funding it. That's a useful way for it to work. And then we have a developer to say, oh, I have this idea. I don't have to worry about OpenAI. I don't have to worry about, now, you know, this particular model is better. It has access to all the models with one key. And we proxy it kind of behind the scenes. And then expose it. So then we get this kind of community effect, right? That says, oh, well, someone else may have built an agent to do X. Like, I have an agent right now that I built for myself to do domain valuation for website domains because I'm obsessed with domains, right? And, like, there's no efficient market for domains. There's no Zillow for domains right now that tells you, oh, here are what houses in your neighborhood sold for. It's like, well, why doesn't that exist?
We should be able to solve that problem. And, yes, you're still guessing. Fine. There should be some simple heuristic. So I built that. It's like, okay, well, let me go look for past transactions. You say, okay, I'm going to type in agent.ai, agent.com, whatever domain. What's it actually worth? I'm looking at buying it. It can go and say, oh, which is what it does. It's like, I'm going to go look at are there any published domain transactions recently that are similar, either use the same word, same top-level domain, whatever it is. And it comes back with an approximate value, and it comes back with its kind of rationale for why it picked the value and comparable transactions. Oh, by the way, this domain sold for published. Okay. So that agent now, let's say, existed on the web, on agent.ai. Then imagine someone else says, oh, you know, I want to build a brand-building agent for startups and entrepreneurs to come up with names for their startup. Like a common problem, every startup is like, ah, I don't know what to call it. And so they type in five random words that kind of define whatever their startup is. And you can do all manner of things, one of which is like, oh, well, I need to find the domain for it. What are possible choices? Now it's like, okay, well, it would be nice to know if there's an aftermarket price for it, if it's listed for sale. Awesome. Then imagine calling this valuation agent. It's like, okay, well, I want to find where the arbitrage is, where the agent valuation tool says this thing is worth $25,000. It's listed on GoDaddy for $5,000. It's close enough. Let's go do that. Right? And that's a kind of composition use case that in my future state. Thousands of agents on the network, all discoverable through something like MCP. And then you as a developer of agents have access to all these kind of Lego building blocks based on what you're trying to solve. 
Then you blend in orchestration, which is getting better and better with the reasoning models now. Just describe the problem that you have. Now, the next layer that we're all contending with is that how many tools can you actually give an LLM before the LLM breaks? That number used to be like 15 or 20 before quality kind of started to vary dramatically. And so that's the thing I'm thinking about now. It's like, okay, if I want to... If I want to expose 1,000 of these agents to a given LLM, obviously I can't give it all 1,000. Is there some intermediate layer that says, based on your prompt, I'm going to make a best guess at which agents might be able to be helpful for this particular thing? Yeah.Alessio [00:44:37]: Yeah, like RAG for tools. Yep. I did build the Latent Space Researcher on agent.ai. Okay. Nice. Yeah, that seems like, you know, then there's going to be a Latent Space Scheduler. And then once I schedule a research, you know, and you build all of these things. By the way, my apologies for the user experience. You realize I'm an engineer. It's pretty good.swyx [00:44:56]: I think it's a normie-friendly thing. Yeah. That's your magic. HubSpot does the same thing.Alessio [00:45:01]: Yeah, just to like quickly run through it. You can basically create all these different steps. And these steps are like, you know, static versus like variable-driven things. How did you decide between this kind of like low-code-ish versus doing, you know, low-code with code backend versus like not exposing that at all? Any fun design decisions? Yeah. And this is, I think...Dharmesh [00:45:22]: I think lots of people are likely sitting in exactly my position right now, working through the choice between deterministic and non-deterministic. Like if you're like in a business or building, you know, some sort of agentic thing, do you decide to do a deterministic thing? Or do you go non-deterministic and just let the LLM handle it, right, with the reasoning models?
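The "RAG for tools" idea, shortlisting a few agents out of a large directory based on the prompt, might look roughly like this. A real system would use embeddings; a bag-of-words cosine similarity just shows the shape, and the agent names and descriptions are invented:

```python
import math
from collections import Counter

def vec(text):
    """Crude bag-of-words vector; a real system would embed the text."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# A stand-in for a directory of 1,000 published agents.
AGENTS = {
    "domain-valuation": "estimate the market value of a website domain name",
    "brand-namer": "generate startup brand names from keywords",
    "tweet-writer": "draft social media posts for twitter",
}

def shortlist(prompt, k=2):
    """Return the k agents most similar to the prompt, to hand the LLM."""
    q = vec(prompt)
    ranked = sorted(AGENTS, key=lambda n: cosine(q, vec(AGENTS[n])), reverse=True)
    return ranked[:k]

print(shortlist("what is this domain name worth on the market"))  # 'domain-valuation' ranks first
```

The intermediate layer never shows the LLM all 1,000 tools; it only exposes the shortlist, which keeps the tool count under the point where quality degrades.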
The original idea and the reason I took the low-code stepwise, a very deterministic approach. A, the reasoning models did not exist at that time. That's thing number one. Thing number two is if you can get... If you know in your head... If you know in your head what the actual steps are to accomplish whatever goal, why would you leave that to chance? There's no upside. There's literally no upside. Just tell me, like, what steps do you need executed? So right now what I'm playing with... So one thing we haven't talked about yet, and people don't talk about UI and agents. Right now, the primary interaction model... Or they don't talk enough about it. I know some people have. But it's like, okay, so we're used to the chatbot back and forth. Fine. I get that. But I think we're going to move to a blend of... Some of those things are going to be synchronous as they are now. But some are going to be... Some are going to be async. It's just going to put it in a queue, just like... And this goes back to my... Man, I talk fast. But I have this... I only have one other speed. It's even faster. So imagine it's like if you're working... So back to my, oh, we're going to have these hybrid digital teams. Like, you would not go to a co-worker and say, I'm going to ask you to do this thing, and then sit there and wait for them to go do it. Like, that's not how the world works. So it's nice to be able to just, like, hand something off to someone. It's like, okay, well, maybe I expect a response in an hour or a day or something like that.Dharmesh [00:46:52]: In terms of when things need to happen. So the UI around agents. So if you look at the output of agent.ai agents right now, they are the simplest possible manifestation of a UI, right? That says, oh, we have inputs of, like, four different types. Like, we've got a dropdown, we've got multi-select, all the things. It's like back in HTML, the original HTML 1.0 days, right? 
Like, you're the smallest possible set of primitives for a UI. And it just says, okay, because we need to collect some information from the user, and then we go do steps and do things. And generate some output, in HTML or Markdown, which are the two primary examples. So the thing I've been asking myself, if I keep going down that path. So people ask me, I get requests all the time. It's like, oh, can you make the UI do this? I need to be able to do this, right? And if I keep pulling on that, it's like, okay, well, now I've built an entire UI builder thing. Where does this end? And so I think the right answer, and this is what I'm going to be back to coding once I get done here, is around injecting code generation, UI generation, into the agent.ai flow, right? As a builder, you're like, okay, I'm going to describe the thing that I want, much like you would do in a vibe coding world. But instead of generating the entire app, it's going to generate the UI that exists at some point in either that deterministic flow or something like that. It says, oh, here's the thing I'm trying to do. Go generate the UI for me. And I can go through some iterations. And what I think of it as a, so it's like, I'm going to generate the code, generate the code, tweak it, go through this kind of prompt style, like we do with vibe coding now. And at some point, I'm going to be happy with it. And I'm going to hit save. And that's going to become the action in that particular step. It's like a caching of the generated code so that I don't incur any inference-time costs. It's just the actual code at that point.Alessio [00:48:29]: Yeah, I invested in a company called E2B, which does code sandboxes. And they powered the LM Arena web arena. So it's basically the, just like you do LMSYS, like text to text, they do the same for like UI generation. So if you're asking a model, how do you do it? But yeah, I think that's kind of where.Dharmesh [00:48:45]: That's the thing I'm really fascinated by.
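The "hit save and cache the generated code" step could be sketched like so, with a stub standing in for the LLM call. Everything here is hypothetical, not agent.ai's implementation:

```python
import hashlib

_UI_CACHE = {}

def generate_ui(prompt):
    """Stand-in for an LLM call that emits UI code (hypothetical)."""
    return f"<form><!-- generated for: {prompt} --></form>"

def ui_for_step(prompt):
    """Generate once, then serve the saved code with no further inference cost."""
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key not in _UI_CACHE:
        _UI_CACHE[key] = generate_ui(prompt)  # pay the inference cost once, on "save"
    return _UI_CACHE[key]

first = ui_for_step("collect a domain name and a budget")
second = ui_for_step("collect a domain name and a budget")
print(first is second)  # True: the second call is a cache hit
```

After the builder hits save, every subsequent run executes the stored code directly; only a changed prompt would trigger a fresh generation.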
So the early LLMs, you know, were understandably, but laughably, bad at simple arithmetic, right? That's the thing normies, like my wife, would ask us: like, you call this AI, like it can't, my son would be like, it's just stupid. It can't even do like simple arithmetic. And then like we've discovered over time that, and there's a reason for this, right? It's like, it's a large, there's, you know, the word language is in there for a reason in terms of what it's been trained on. It's not meant to do math, but now it's like, okay, well, the fact that it has access to a Python interpreter that I can actually call at runtime, that solves an entire body of problems that it wasn't trained to do. And it's basically a form of delegation. And so the thought that's kind of rattling around in my head is that that's great. So it's, it's like it took the arithmetic problem and solved it first. Now, like anything that's solvable through a relatively concrete Python program, it's able to do a bunch of things that it couldn't do before. Can we get to the same place with UI? I don't know what the future of UI looks like in an agentic AI world, but maybe let the LLM handle it, but not in the classic sense. Maybe it generates it on the fly, or maybe we go through some iterations and hit cache or something like that. So it's a little bit more predictable. Uh, I don't know, but yeah.Alessio [00:49:48]: And especially when is the human supposed to intervene? So, especially if you're composing them, most of them should not have a UI because then they're just web hooking to somewhere else. I just want to touch back. I don't know if you have more comments on this.swyx [00:50:01]: I was just going to ask when you, you said you got, you're going to go back to code. What
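The delegation point, letting the model hand arithmetic to a real interpreter instead of guessing, can be illustrated with a small, safe expression evaluator of the kind a tool call might route to (a sketch, not any particular framework's tool API):

```python
import ast
import operator

# Whitelisted operations for a small, safe arithmetic evaluator.
OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def calc(expr):
    """Evaluate an arithmetic expression by walking its AST, no eval()."""
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

print(calc("12 * (3 + 4) - 5"))  # 79
```

A production tool would sandbox a full interpreter; whitelisting AST nodes is the minimal version of the same delegation idea.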

Thoroughbred Racing Radio Network
Thursday Arkansas Derby Wknd ATR from Oaklawn Park-Part 2: Rich Migliore, NCAA's w/ Dave Hill, Win Using Thoro-Graph w/ Jeff Franklin

Thoroughbred Racing Radio Network

Play Episode Listen Later Mar 27, 2025


Steve Talks Books
Analyzing Night Fever by Ed Brubaker and Sean Phillips

Steve Talks Books

Play Episode Listen Later Mar 26, 2025 69:38


In this conversation, the hosts discuss the graphic novel 'Night Fever' by Ed Brubaker and Sean Phillips, exploring its themes of male fantasy, midlife crisis, and the blurred lines between reality and fantasy. They delve into the protagonist's journey, the art style, and the relationships depicted in the story, ultimately reflecting on the deeper messages about life and self-awareness. The conversation delves into various themes surrounding human behavior, decision-making, and the complexities of life choices. It explores the instinctual reactions in life-threatening situations, the critique of the literary world, and the nature of satisfaction in life. The discussion also touches on the fantasy of living without consequences and the rational thought that often guides our decisions. Additionally, the participants share their personal journeys into graphic novels and comics, highlighting their experiences and recommendations.

Send us a message
Support the show
Film Chewing Podcast: https://www.buzzsprout.com/2235582/follow
Lens Chewing on YouTube: https://www.youtube.com/@lenschewing
Speculative Speculations: https://creators.spotify.com/pod/show/speculative-speculations
Support the podcast: https://www.paypal.com/ncp/payment/7EQ7XWFUP6K9E
Join Riverside.fm: https://riverside.fm/?via=steve-l

Software Engineering Daily
Knowledge Graphs as Agentic Memory with Daniel Chalef

Software Engineering Daily

Play Episode Listen Later Mar 25, 2025 53:39


Contextual memory in AI is a major challenge because current models struggle to retain and recall relevant information over time. While humans can build long-term semantic relationships, AI systems often rely on fixed context windows, leading to loss of important past interactions. Zep is a startup that's developing a memory layer for AI agents using knowledge graphs.

The Vance Crowe Podcast
Up the Graph: Ideas and technology that will be everywhere in 2 years

The Vance Crowe Podcast

Play Episode Listen Later Mar 25, 2025 62:28 Transcription Available


In this episode, Vance Crowe shares a talk delivered at the American Farm Bureau Fusion Conference, focusing on the rapid integration of technology in agriculture and society. Vance delves into the concept of 'up the graph,' explaining how ideas and technologies spread and become commonplace. Drawing from his experience at Monsanto, he discusses the importance of engaging with emerging technologies early to leverage their potential before they become mainstream. He also highlights the role of AI in transforming policy analysis, communication, and legislative drafting, emphasizing its inevitable ubiquity in the near future.

Additionally, Vance explores the potential of Bitcoin and the Lightning Network in revolutionizing financial transactions for farmers, particularly in reducing transaction fees and enabling direct-to-consumer sales. He introduces the concept of value for value in podcasting and other creative industries, and discusses the emerging decentralized social media platform, Nostr, which offers an alternative to traditional platforms by allowing users to retain their audience across different services. Throughout the talk, Vance encourages listeners to engage with these technologies through experimentation and play, to better understand and harness their capabilities.

Legacy Interviews - A service that records individuals and couples telling their life stories so that future generations can know their family history. https://www.legacyinterviews.com/experience
River.com - Invest in Bitcoin with Confidence https://river.com/signup?r=OAB5SKTP

Breaking Badness
From ValleyRAT to Silver Fox: How Graph-Based Threat Intel is Changing the Game

Breaking Badness

Play Episode Listen Later Mar 24, 2025 57:53


In this episode of Breaking Badness, host Kali Fencl welcomes Wes Young of CSIRT Gadgets and Daniel Schwalbe, CISO and head of investigations at DomainTools, who dive into a recent DomainTools Investigations (DTI) analysis involving ValleyRAT and Silver Fox, and into how new tools are enabling faster, more accessible analysis for junior and seasoned analysts alike. Whether you're a threat intel veteran or an aspiring analyst, this episode is packed with hard-earned lessons, technical insights, and future-forward thinking. They also unpack the evolution of threat intelligence from the early higher-ed days of wiki-scraped Snort rules to today's graph-powered AI analysis. Wes shares the origin story behind his platform AlphaHunt, how it's being used to automate and enhance threat detection, and why community sharing remains essential even in an era of advanced tooling.
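The DTI tooling itself isn't reproduced in the episode notes; as a rough illustration of the pivoting idea behind graph-based threat intel, a toy indicator graph (every indicator value below is invented) can be walked with a bounded breadth-first search to surface related infrastructure.

```python
from collections import defaultdict, deque

# Toy graph-based threat intel: indicators (samples, domains, IPs) as nodes,
# observed relationships as edges. All indicator values are fabricated examples.
edges = [
    ("valleyrat-sample.exe", "resolves-via", "update-check.example"),
    ("update-check.example", "hosted-on", "203.0.113.7"),
    ("203.0.113.7", "also-hosts", "silverfox-c2.example"),
]

graph = defaultdict(list)
for src, rel, dst in edges:
    graph[src].append((rel, dst))
    graph[dst].append((rel, src))  # undirected, so analysts can pivot either way

def pivot(start, max_hops=2):
    """BFS out from one indicator to find related infrastructure within max_hops."""
    seen, queue, related = {start}, deque([(start, 0)]), []
    while queue:
        node, hops = queue.popleft()
        if hops == max_hops:
            continue
        for rel, neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                related.append(neighbor)
                queue.append((neighbor, hops + 1))
    return related

print(pivot("valleyrat-sample.exe"))  # two hops: the domain and its hosting IP
```

The hop limit matters in practice: shared hosting makes unbounded pivots explode into unrelated infrastructure, so real platforms weight or cap traversals much like `max_hops` does here.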

TeamClearCoat - An Automotive Enthusiast Podcast by Two Car Nerds

Whew! Despite a few technical difficulties (it's all computers, after all), this episode felt like a return to form. Dave found Ian a new job, and Ian isn't too fond of Dave's F1 race-watching method. We love you!

Thoroughbred Racing Radio Network
Thursday NY Thoroughbred Breeders ATR-Part 2: Chase Season w/ Sean Clancy, NCAA's w/ Dave Hill, Win Using Thoro-Graph w/ Jeff Franklin

Thoroughbred Racing Radio Network

Play Episode Listen Later Mar 20, 2025


Pro Church Tools with Brady Shearer
Standout Church Trends of the 2020s

Pro Church Tools with Brady Shearer

Play Episode Listen Later Mar 19, 2025 32:31


It's time to check in on the state of religion and Christianity. We've gathered key data-backed trends shaping this decade, including the one denomination that's growing, the strongest force in American religion, and the most common words in new church names. Let's dive in.   ============================= Table of Contents: ============================= 0:00 - Intro 1:30 - Trend #1: Church Names 9:40 - Trend #2: The One Denomination That's Growing 16:07 - Trend #3: Young Men Are Now As Religious As Young Women 27:19 - Trend #4: The Strongest Force In American Religion   IMPORTANT LINKS - Ryan Burge: https://www.drburge.com/ - Ryan Burge | Twitter/X: https://twitter.com/ryanburge - Graphs about Religion: https://www.graphsaboutreligion.com/ - What's In a Name? Trends in Non-Denominational Church Branding: https://bit.ly/4iJfQL9 - The Assemblies of God: A Denomination That May Be Growing: https://bit.ly/43JmYD1 - Are Young Women Leaving Religion Faster than Young Men?: https://bit.ly/4iLkk3N - Non-Denominationalism Is the Strongest Force in American Religion: https://bit.ly/41Zvj47 - The American Religious Landscape: Facts, Trends, and the Future: https://amzn.to/3XMZ1a2   THE 167 NEWSLETTER

The Presentation Podcast
e218 - Navigating the Data Visualization Landscape: Tools, Tips, and Techniques with Ann K. Emery

The Presentation Podcast

Play Episode Listen Later Mar 18, 2025 82:21


Episode #218 - Data visualization is an essential skill in today's data-driven world. It transforms raw data into visual formats like charts, graphs, and maps, making complex information understandable and engaging. In this podcast episode, Troy, Sandy and Nolan talk with Ann K. Emery of Depict Data Studio about the nuances of data visualization tools and best practices - especially for presentations. Listen now! Full Episode Show Notes https://thepresentationpodcast.com/2025/e218 Show Suggestions? Questions for your Hosts? Email us at: info@thepresentationpodcast.com Listen and review on iTunes. Thanks! http://apple.co/1ROGCUq New Episodes 1st and 3rd Tuesday Every Month

Crazy Wisdom
Episode #444: The Hidden Frameworks of the Internet: Knowledge Graphs, Ontologies, and Who Controls Truth

Crazy Wisdom

Play Episode Listen Later Mar 17, 2025 60:23


On this episode of the Crazy Wisdom Podcast, host Stewart Alsop welcomes Jessica Talisman, a senior information architect deeply immersed in the worlds of taxonomy, ontology, and knowledge management. The conversation spans the evolution of libraries, the shifting nature of public and private access to knowledge, and the role of institutions like the Internet Archive in preserving digital history. They also explore the fragility of information in the digital age, the ongoing battle over access to knowledge, and how AI is shaping—and being shaped by—structured data and knowledge graphs. To connect with Jessica Talisman, you can reach her via LinkedIn. Check out this GPT we trained on the conversation!

Timestamps
00:05 – Libraries, Democracy, Public vs. Private Knowledge: Jessica explains how libraries have historically shifted between public and private control, shaping access to knowledge and democracy.
00:10 – Internet Archive, Cyberattacks, Digital Preservation: Stewart describes visiting the Internet Archive post-cyberattack, sparking a discussion on threats to digital preservation and free information.
00:15 – AI, Structured Data, Ontologies, NIH, PubMed: Jessica breaks down how AI trains on structured data from sources like NIH and PubMed but often lacks alignment with authoritative knowledge.
00:20 – Linked Data, Knowledge Graphs, Semantic Web, Tim Berners-Lee: They explore how linked data enables machines to understand connections between knowledge, referencing the vision behind the semantic web.
00:25 – Entity Management, Cataloging, Provenance, Authority: Jessica explains how libraries are transitioning from cataloging books to managing entities, ensuring provenance and verifiable knowledge.
00:30 – Digital Dark Ages, Knowledge Loss, Corporate Control: Stewart compares today's deletion of digital content to historical knowledge loss, warning about the fragility of digital memory.
00:35 – War on Truth, Book Bans, Algorithmic Bias, Censorship: They discuss how knowledge suppression—from book bans to algorithmic censorship—threatens free access to information.
00:40 – AI, Search Engines, Metadata, Schema.org, RDF: Jessica highlights how AI and search engines depend on structured metadata but often fail to prioritize authoritative sources.
00:45 – Power Over Knowledge, Open vs. Closed Systems, AI Ethics: They debate the battle between corporations, governments, and open-source efforts to control how knowledge is structured and accessed.
00:50 – Librarians, AI Misinformation, Knowledge Organization: Jessica emphasizes that librarians and structured knowledge systems are essential in combating misinformation in AI.
00:55 – Future of Digital Memory, AI, Ethics, Information Access: They reflect on whether AI and linked data will expand knowledge access or accelerate digital decay and misinformation.

Key Insights
The Evolution of Libraries Reflects Power Struggles Over Knowledge: Libraries have historically oscillated between being public and private institutions, reflecting broader societal shifts in who controls access to knowledge. Jessica Talisman highlights how figures like Andrew Carnegie helped establish the modern public library system, reinforcing libraries as democratic spaces where information is accessible to all. However, she also notes that as knowledge becomes digitized, new battles emerge over who owns and controls digital information.
The Internet Archive Faces Systematic Attacks on Knowledge: Stewart Alsop shares his firsthand experience visiting the Internet Archive just after it had suffered a major cyberattack. This incident is part of a larger trend in which libraries and knowledge repositories worldwide, including those in Canada, have been targeted. The conversation raises concerns that these attacks are not random but part of a broader, well-funded effort to undermine access to information.
AI and Knowledge Graphs Are Deeply Intertwined: AI systems, particularly large language models (LLMs), rely on structured data sources such as knowledge graphs, ontologies, and linked data. Talisman explains how institutions like the NIH and PubMed provide openly available, structured knowledge that AI systems train on. Yet, she points out a critical gap—AI often lacks alignment with real-world, authoritative sources, which leads to inaccuracies in machine-generated knowledge.
Libraries Are Moving From Cataloging to Entity Management: Traditional library systems were built around cataloging books and documents, but modern libraries are transitioning toward entity management, which organizes knowledge in a way that allows for more dynamic connections. Linked data and knowledge graphs enable this shift, making it easier to navigate vast repositories of information while maintaining provenance and authority.
The War on Truth and Information Is Accelerating: The episode touches on the increasing threats to truth and reliable information, from book bans to algorithmic suppression of knowledge. Talisman underscores the crucial role librarians play in preserving access to primary sources and maintaining records of historical truth. As AI becomes more prominent in knowledge dissemination, the need for robust, verifiable sources becomes even more urgent.
Linked Data is the Foundation of Digital Knowledge: The conversation explores how linked data protocols, such as those championed by Tim Berners-Lee, allow machines and AI to interpret and connect information across the web. Talisman explains that institutions like NIH publish their taxonomies in RDF format, making them accessible as structured, authoritative sources. However, many organizations fail to leverage this interconnected data, leading to inefficiencies in knowledge management.
Preserving Digital Memory is a Civilization-Defining Challenge: In the digital age, the loss of information is more severe than ever. Alsop compares the current state of digital impermanence to the Dark Ages, where crucial knowledge risks disappearing due to corporate decisions, cyberattacks, and lack of preservation infrastructure. Talisman agrees, emphasizing that digital archives like the Internet Archive, WorldCat, and Wikimedia are foundational to maintaining a collective human memory.
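The RDF taxonomies the episode describes aren't quoted here; as a minimal sketch of the linked-data idea, statements can use URIs for both subjects and objects, so a taxonomy browser can follow links between them. The URIs and the SKOS-style predicate strings below are invented for illustration.

```python
# Toy linked-data graph: subjects and objects are URIs, so statements from
# different sources can interlink. All URIs here are fabricated examples.
triples = {
    ("https://example.org/term/Aspirin", "rdf:type", "https://example.org/class/Drug"),
    ("https://example.org/term/Aspirin", "skos:broader", "https://example.org/term/NSAID"),
    ("https://example.org/term/NSAID", "skos:prefLabel", "Non-steroidal anti-inflammatory drug"),
}

def describe(uri):
    """Collect every statement where the URI appears as subject."""
    return {(p, o) for s, p, o in triples if s == uri}

def broader_chain(uri):
    """Follow skos:broader links upward, as a taxonomy browser would."""
    chain = []
    while True:
        parents = [o for p, o in describe(uri) if p == "skos:broader"]
        if not parents:
            return chain
        uri = parents[0]
        chain.append(uri)

print(broader_chain("https://example.org/term/Aspirin"))
```

Because every node is addressable by URI, a second dataset could attach its own statements to `https://example.org/term/NSAID` without coordinating schemas in advance, which is the interoperability point Talisman makes about RDF publishing.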

Homebrewed Christianity Podcast
Ryan Burge: Distrust & Denominations

Homebrewed Christianity Podcast

Play Episode Listen Later Mar 13, 2025 71:40


In this episode, I am joined by political scientist Ryan Burge for an engaging conversation about his fascinating data on religious decline and the rise of the 'Nones' and non-denominational Christianity. We discuss the implications of denominational decline, growing distrust in institutional religion, and the explosive growth of non-denominational churches. This episode features in-depth analysis, intriguing graphs, lively discussions, and insights from prominent social philosophers.

*** If you want access to the entire 2-hour conversation and invites to join us live in the future, all you have to do is become a member of either (or both) of our SubStacks — Graphs about Religion & Process This. ***

Ryan P. Burge is an assistant professor of political science at Eastern Illinois University. Author of numerous journal articles, he is the co-founder of and a frequent contributor to Religion in Public, a forum for scholars of religion and politics to make their work accessible to a general audience. Burge is a pastor in the American Baptist Church.

Previous Visits from Ryan Burge:
Trust, Religion, & a Functioning Democracy
What it's like to close a church
The Future of Christian Education & Ministry in Charts
The Sky is Falling & the Charts are Popping!
Graphs about Religion & Politics w/ Spicy Banter
a Year in Religion (in Graphs)
Evangelical Jews, Educated Church-Goers, & other bits of dizzying data
5 Religion Graphs w/ a side of Hot Takes
Myths about Religion & Politics

Theology Beer Camp | St. Paul, MN | October 16-18, 2025: 3 Days of Craft Nerdiness with 50+ Theologians & God-Pods and 600 new friends.

A Five-Week Online Lenten Class w/ John Dominic Crossan: Join us for a transformative 5-week Lenten journey on "Paul the Pharisee: Faith and Politics in a Divided World." This course examines the Apostle Paul as a Pharisee deeply engaged with the turbulent political and religious landscape of his time. Through the lens of his letters and historical context, we will explore Paul's understanding of Jesus' Life-Vision, his interpretation of the Execution-and-Resurrection, and their implications for nonviolence and faithful resistance against empire. Each week, we will delve into a specific aspect of Paul's theology and legacy, reflecting on its relevance for our own age of autocracy and political turmoil. For details and to sign up for any donation, including $0, head over here.
_____________________
Hang with 40+ Scholars & Podcasts and 600 people at Theology Beer Camp 2025 (Oct. 16-18) in St. Paul, MN. This podcast is a Homebrewed Christianity production. Follow the Homebrewed Christianity, Theology Nerd Throwdown, & The Rise of Bonhoeffer podcasts for more theological goodness for your earbuds. Join over 80,000 other people by joining our Substack — Process This! Get instant access to over 45 classes at www.TheologyClass.com Follow the podcast, drop a review, send feedback/questions or become a member of the HBC Community. Learn more about your ad choices. Visit megaphone.fm/adchoices

Thoroughbred Racing Radio Network
Thursday NTRA NHC '25 ATR from Horseshoe Vegas-Part 2: Keith Chamblin, Brian Chenvert, Equine Edge's Scotty McKeever & David Harrison, Thoro-Graph's Jeff Franklin

Thoroughbred Racing Radio Network

Play Episode Listen Later Mar 13, 2025


Data Skeptic
Graph Bugs

Data Skeptic

Play Episode Listen Later Mar 10, 2025 29:01


In this episode, today's guest, Celine Wüst, a master's student at ETH Zurich specializing in secure and reliable systems, shares her work on automated software testing for graph databases. Celine shows how fuzzing—the process of automatically generating complex queries—helps uncover hidden bugs in graph database management systems like Neo4j, FalconDB, and Apache AGE. Key insights include how state-aware query generation can detect critical issues like buffer overflows and crashes, the challenges of debugging complex database behaviors, and the importance of security-focused software testing. We'll also find out which graph DB company offers swag for finding bugs in its software and get Celine's advice about which graph DB to use. ------------------------------- Want to listen ad-free? Try our Graphs Course? Join Data Skeptic+ for $5 / month or $50 / year https://plus.dataskeptic.com
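Celine's actual harness isn't reproduced in the episode notes, but the core fuzzing idea — mechanically generating many syntactically varied queries and throwing them at the database — can be sketched. The labels, properties, and query shape below are invented; a real fuzzer like the one discussed would also track database state to generate queries that reference existing data.

```python
import random

# Minimal grammar-based fuzzer for Cypher-style queries. The vocabulary is
# a fabricated example; the point is cheap generation of many query variants.
LABELS = ["Person", "Movie", "City"]
PROPS = ["name", "age", "year"]
OPS = ["=", "<", ">", "<>"]

def random_query(rng):
    label = rng.choice(LABELS)
    prop = rng.choice(PROPS)
    op = rng.choice(OPS)
    value = rng.randint(-2**31, 2**31 - 1)  # extreme integers stress parsers
    limit = rng.randint(0, 100)
    return f"MATCH (n:{label}) WHERE n.{prop} {op} {value} RETURN n LIMIT {limit}"

rng = random.Random(42)  # seeded, so a crashing query can be reproduced
for _ in range(3):
    print(random_query(rng))
```

Each generated query would then be executed against the target database inside a crash monitor; seeding the RNG is what makes a discovered crash replayable for the bug report.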

Thoroughbred Racing Radio Network
Thursday Oaklawn Park ATR from Tampa Bay Downs-Part 2: David Johansen appreciations w/ Dan Delgado & Dani Eden, Thoro-Graph/Jeff Franklin

Thoroughbred Racing Radio Network

Play Episode Listen Later Mar 6, 2025


Thoroughbred Racing Radio Network
Thursday Santa Anita Park ATR from Gulfstream Park-Part 2: Steve Haskin's Derby Dozen, Andy Simoff, Win Using Thoro-Graph w/ Crafty Jeff Franklin

Thoroughbred Racing Radio Network

Play Episode Listen Later Feb 27, 2025