Podcasts about verifiable

  • 179 podcasts
  • 226 episodes
  • 40m avg duration
  • 1 weekly episode
  • Latest: May 5, 2025

POPULARITY (2017–2024)


Best podcasts about verifiable

Latest podcast episodes about verifiable

DeFi Slate
What The Author of ‘Blockchain Revolution' Got Right (and Wrong) with Alex Tapscott

May 5, 2025 · 55:08


In today's episode, we chat with Alex Tapscott, co-author of "Blockchain Revolution" – the book that kicked off so many crypto journeys back in 2016. Alex shares how a ski trip conversation with his father evolved into pioneering work that shaped the industry.

We explore crypto's path dependency through major events and why Trump's election could be the final domino for Bitcoin adoption. Alex breaks down institutional strategies and the shift happening now: how regulatory changes, tech advancements, and evolving business mindsets are signalling a crucial inflection point for the space.

Looking ahead, Alex points to tokenization and AI integration as the real game-changers, moving crypto beyond speculation into something that's actually transforming the world.

Let's get into it.

---

Newton is the trust layer for autonomous finance. Smart. Secure. Verifiable. Built for a future where AI agents replace apps and interfaces. Learn more here: https://www.magicnewton.xyz

---

Join The Rollup Edge: https://members.therollup.co
Website: https://therollup.co/
Spotify: https://open.spotify.com/show/1P6ZeYd..
Podcast: https://therollup.co/category/podcast
Follow us on X: https://www.x.com/therollupco
Follow Rob on X: https://www.x.com/robbie_rollup
Follow Andy on X: https://www.x.com/ayyyeandy
Join our TG group: https://t.me/+8ARkR_YZixE5YjBh
The Rollup Disclosures: https://therollup.co/the-rollup-discl

DeFi Slate
The Truth About What Went Wrong with Crypto AI Agents with Tarun Chitra and Wei Dai

Apr 30, 2025 · 59:54


AI and crypto are evolving together, but not without a few missteps along the way.

In today's episode, we chat with Wei Dai from 1kx and Tarun Chitra from Gauntlet about what went wrong with the first generation of crypto AI agents and why reasoning models are bringing about a shift.

We explore DeepSeek's revolutionary approach to cost optimization, the potential of MCP as "HTTP for agents," and why crypto's experience with security is so crucial here. The discussion spans from the distant horizon of quantum computing to the very real opportunities in compute economics.

Our guests explain how AI's cost revolution parallels crypto's disruption of financial services and why decentralized computing might be the next big challenge. Let's dive into this conversation at the intersection of two transformative technologies.

---

Newton is the trust layer for autonomous finance. Smart. Secure. Verifiable. Built for a future where AI agents replace apps and interfaces. Learn more here: https://www.magicnewton.xyz

---

Join The Rollup Edge: https://members.therollup.co
Website: https://therollup.co/
Spotify: https://open.spotify.com/show/1P6ZeYd..
Podcast: https://therollup.co/category/podcast
Follow us on X: https://www.x.com/therollupco
Follow Rob on X: https://www.x.com/robbie_rollup
Follow Andy on X: https://www.x.com/ayyyeandy
Join our TG group: https://t.me/+8ARkR_YZixE5YjBh
The Rollup Disclosures: https://therollup.co/the-rollup-discl

CryptoNews Podcast
#434: Daniel Marin, Founder of Nexus, on Enabling the Verifiable Internet, Aggregating Unused Compute Power, ZK Tech, and Verifiable AI

Apr 28, 2025 · 34:59


Daniel Marin is the Founder and Chief Executive Officer of Nexus. Daniel founded Nexus in 2022 while he was at Stanford with the mission to enable the Verifiable Internet, which will redefine digital trust and create a more transparent, secure, and efficient world. To achieve this mission, Nexus is building a globally distributed Layer-1 blockchain powered by a zkVM engine.

Daniel earned a Bachelor of Science in Computer Science from Stanford University. He was named to Forbes' '30 Under 30' list in 2025, and earned Bronze medals at the International Physics Olympiad in 2018 and 2019.

In this conversation, we discuss:
- Are we back?
- Enabling the Verifiable Internet
- Parallels between AI and ZK
- Aggregating unused compute power
- Verifiable AI
- Solving critical issues around privacy, trust, and security
- 2.1 million users and 3.6 million nodes already connected to the network
- With Nexus, more nodes = faster blockchain
- Verifiable computation will impact many markets; blockchain is just one example
- The power of the zkVM
- The future of AI & Blockchain

Nexus
Website: nexus.xyz
X: @NexusLabs
Discord: discord.gg/nexus-xyz

Daniel Marin
X: @danielmarinq
LinkedIn: Daniel Marin

---------------------------------------------------------------------------------

This episode is brought to you by PrimeXBT.

PrimeXBT offers a robust trading system for both beginners and professional traders that demand highly reliable market data and performance. Traders of all experience levels can easily design and customize layouts and widgets to best fit their trading style. PrimeXBT is always offering innovative products and professional trading conditions to all customers.

PrimeXBT is running an exclusive promotion for listeners of the podcast. After making your first deposit, 50% of that first deposit will be credited to your account as a bonus that can be used as additional collateral to open positions.

Code: CRYPTONEWS50

This promotion is available for a month after activation.

Click the link below: PrimeXBT x CRYPTONEWS50

DeFi Slate
Global Instability Is Fueling the Greatest Crypto Boom Yet with Arthur Hayes and Mike Silagadze

Apr 28, 2025 · 52:23


Trump, Powell, and global tariffs are reshaping the financial system, and crypto is right in the crosshairs. Arthur Hayes and Mike Silagadze explain how these shifts could spark an even bigger DeFi boom. Arthur explains how Trump's walking back of tariffs and Fed stability commitments signal we've hit the bottom, setting up Bitcoin for a massive rally. Meanwhile, Mike claims ETH has finally bottomed and shares how ether.fi is building toward generating a billion dollars in annual revenue.

We explore treasury buybacks as the new QE, why central banks choose gold over US treasuries, and how crypto products with real cash flow are shifting the space toward fundamentals. We also discuss ether.fi's DeFi bank vision and how their card is delivering crypto functionality with Visa-level usability.

This conversation connects macro trends, token economics, and DeFi's future in ways that reshape our understanding. Let's get into it.

Tech Path Podcast
Nvidia Tariffs vs Bitcoin Mining + Ethereum's GPU Killer!

Apr 17, 2025 · 17:13


Technology stocks plunged as the chipmaking sector warned of ongoing uncertainty and higher costs from President Donald Trump's tariff plans. Meanwhile, Fabric has revealed its new processing unit, known as the "verifiable processing unit," or VPU, which is tailored to handle Ethereum cryptography and rivals Nvidia GPUs.

~This episode is sponsored by Uphold~
Uphold: Get $20 in Bitcoin - Signup & Verify and trade at least $100 of any crypto within your first 30 days ➜ https://bit.ly/pbnuphold

Guest: Sue Ennis, VP at Hut 8
Hut 8 Website ➜ https://hut8.com/

00:00 intro
00:14 Sponsor: Uphold
00:48 Nvidia Stock Crashing
01:30 Tariffs Hit Nvidia
02:29 Nvidia China Exposure
04:48 U.S. AI Leadership
06:25 Trump's American Bitcoin x Hut 8
08:50 Sustainable Energy
10:24 Ethereum Exposure To Nvidia GPUs
11:52 Ethereum falling behind?
12:55 Ethereum VPU vs Nvidia GPU
14:39 VPU Stockpile?
16:16 outro

#Crypto #Nvidia #Bitcoin

Becker’s Payer Issues Podcast
Legacy Systems vs. Patient Access: Transforming Payer-Provider Alliances

Apr 15, 2025 · 13:20


In this episode of the Becker's Healthcare Podcast, Brook and Jocelyne from Verifiable dive into the critical connection between provider network growth, payer collaboration, and the modernization of credentialing systems. They explore how outdated legacy systems hinder patient access and provider onboarding, and share actionable strategies for healthcare leaders to improve compliance, reduce delays, and prepare for upcoming NCQA changes. With real-world examples — including Midi Health's rapid nationwide expansion — this discussion offers a forward-looking roadmap for building more efficient, scalable, and patient-centered networks.This episode is sponsored by Verifiable.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

If you're in SF: Join us for the Claude Plays Pokemon hackathon this Sunday! If you're not: Fill out the 2025 State of AI Eng survey for $250 in Amazon cards!

We are SO excited to share our conversation with Dharmesh Shah, co-founder of HubSpot and creator of Agent.ai.

A particularly compelling concept we discussed is the idea of "hybrid teams" - the next evolution in workplace organization where human workers collaborate with AI agents as team members. Just as we previously saw hybrid teams emerge in terms of full-time vs. contract workers, or in-office vs. remote workers, Dharmesh predicts that the next frontier will be teams composed of both human and AI members. This raises interesting questions about team dynamics, trust, and how to effectively delegate tasks between human and AI team members.

The discussion of business models in AI reveals an important distinction between Work as a Service (WaaS) and Results as a Service (RaaS), something Dharmesh has written extensively about. While RaaS has gained popularity, particularly in customer support applications where outcomes are easily measurable, Dharmesh argues that this model may be over-indexed. Not all AI applications have clearly definable outcomes or consistent economic value per transaction, making WaaS more appropriate in many cases. This insight is particularly relevant for businesses considering how to monetize AI capabilities.

The technical challenges of implementing effective agent systems are also explored, particularly around memory and authentication. Shah emphasizes the importance of cross-agent memory sharing and the need for more granular control over data access. He envisions a future where users can selectively share parts of their data with different agents, similar to how OAuth works but with much finer control.
This points to significant opportunities in developing infrastructure for secure and efficient agent-to-agent communication and data sharing.

Other highlights from our conversation:

* The Evolution of AI-Powered Agents – Exploring how AI agents have evolved from simple chatbots to sophisticated multi-agent systems, and the role of MCPs in enabling that.
* Hybrid Digital Teams and the Future of Work – How AI agents are becoming teammates rather than just tools, and what this means for business operations and knowledge work.
* Memory in AI Agents – The importance of persistent memory in AI systems and how shared memory across agents could enhance collaboration and efficiency.
* Business Models for AI Agents – Exploring the shift from software as a service (SaaS) to work as a service (WaaS) and results as a service (RaaS), and what this means for monetization.
* The Role of Standards Like MCP – Why MCP has been widely adopted and how it enables agent collaboration, tool use, and discovery.
* The Future of AI Code Generation and Software Engineering – How AI-assisted coding is changing the role of software engineers and what skills will matter most in the future.
* Domain Investing and Efficient Markets – Dharmesh's approach to domain investing and how inefficiencies in digital asset markets create business opportunities.
* The Philosophy of Saying No – Lessons from "Sorry, You Must Pass" and how prioritization leads to greater productivity and focus.

Timestamps:

* 00:00 Introduction and Guest Welcome
* 02:29 Dharmesh Shah's Journey into AI
* 05:22 Defining AI Agents
* 06:45 The Evolution and Future of AI Agents
* 13:53 Graph Theory and Knowledge Representation
* 20:02 Engineering Practices and Overengineering
* 25:57 The Role of Junior Engineers in the AI Era
* 28:20 Multi-Agent Systems and MCP Standards
* 35:55 LinkedIn's Legal Battles and Data Scraping
* 37:32 The Future of AI and Hybrid Teams
* 39:19 Building Agent AI: A Professional Network for Agents
* 40:43 Challenges and Innovations in Agent AI
* 45:02 The Evolution of UI in AI Systems
* 01:00:25 Business Models: Work as a Service vs. Results as a Service
* 01:09:17 The Future Value of Engineers
* 01:09:51 Exploring the Role of Agents
* 01:10:28 The Importance of Memory in AI
* 01:11:02 Challenges and Opportunities in AI Memory
* 01:12:41 Selective Memory and Privacy Concerns
* 01:13:27 The Evolution of AI Tools and Platforms
* 01:18:23 Domain Names and AI Projects
* 01:32:08 Balancing Work and Personal Life
* 01:35:52 Final Thoughts and Reflections

Transcript

Alessio [00:00:04]: Hey everyone, welcome back to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Small AI.

swyx [00:00:12]: Hello, and today we're super excited to have Dharmesh Shah to join us. I guess your relevant title here is founder of Agent AI.

Dharmesh [00:00:20]: Yeah, that's true for this. Yeah, creator of Agent.ai and co-founder of HubSpot.

swyx [00:00:25]: Co-founder of HubSpot, which I followed for many years, I think 18 years now, gonna be 19 soon. And you caught, you know, people can catch up on your HubSpot story elsewhere. I should also thank Sean Puri, who I've chatted with back and forth, who's been, I guess, getting me in touch with your people. But also, I think like, just giving us a lot of context, because obviously, My First Million joined you guys, and they've been chatting with you guys a lot. So for the business side, we can talk about that, but I kind of wanted to engage your CTO, agent, engineer side of things. So how did you get agent religion?

Dharmesh [00:01:00]: Let's see. So I've been working, I'll take like a half step back, a decade or so ago, even though actually more than that. So even before HubSpot, the company I was contemplating that I had named for was called Ingenisoft. And the idea behind Ingenisoft was a natural language interface to business software. Now realize this is 20 years ago, so that was a hard thing to do.
But the actual use case that I had in mind was, you know, we had data sitting in business systems like a CRM or something like that. And my kind of what I thought clever at the time. Oh, what if we used email as the kind of interface to get to business software? And the motivation for using email is that it automatically works when you're offline. So imagine I'm getting on a plane or I'm on a plane. There was no internet on planes back then. It's like, oh, I'm going through business cards from an event I went to. I can just type things into an email just to have them all in the backlog. When it reconnects, it sends those emails to a processor that basically kind of parses effectively the commands and updates the software, sends you the file, whatever it is. And there was a handful of commands. I was a little bit ahead of the times in terms of what was actually possible. And I reattempted this natural language thing with a product called ChatSpot that I did back 20...

swyx [00:02:12]: Yeah, this is your first post-ChatGPT project.

Dharmesh [00:02:14]: I saw it come out. Yeah. And so I've always been kind of fascinated by this natural language interface to software. Because, you know, as software developers, myself included, we've always said, oh, we build intuitive, easy-to-use applications. And it's not intuitive at all, right? Because what we're doing is... We're taking the mental model that's in our head of what we're trying to accomplish with said piece of software and translating that into a series of touches and swipes and clicks and things like that. And there's nothing natural or intuitive about it. And so natural language interfaces, for the first time, you know, whatever the thought is you have in your head and expressed in whatever language that you normally use to talk to yourself in your head, you can just sort of emit that and have software do something. And I thought that was kind of a breakthrough, which it has been. And it's gone.
So that's where I first started getting into the journey. I started because now it actually works, right? So once we got ChatGPT and you can take, even with a few-shot example, convert something into structured, even back in the GPT-3.5 days, it did a decent job in a few-shot example, convert something to structured text if you knew what kinds of intents you were going to have. And so that happened. And that ultimately became a HubSpot project. But then agents intrigued me because I'm like, okay, well, that's the next step here. So chat's great. Love Chat UX. But if we want to do something even more meaningful, it felt like the next kind of advancement is not this kind of, I'm chatting with some software in a kind of a synchronous back and forth model, is that software is going to do things for me in kind of a multi-step way to try and accomplish some goals. So, yeah, that's when I first got started. It's like, okay, what would that look like? Yeah. And I've been obsessed ever since, by the way.

Alessio [00:03:55]: Which goes back to your first experience with it, which is like you're offline. Yeah. And you want to do a task. You don't need to do it right now. You just want to queue it up for somebody to do it for you. Yes. As you think about agents, like, let's start at the easy question, which is like, how do you define an agent? Maybe. You mean the hardest question in the universe? Is that what you mean?

Dharmesh [00:04:12]: You said you have an irritating take. I do have an irritating take. I think, well, some number of people have been irritated, including within my own team. So I have a very broad definition for agents, which is it's AI-powered software that accomplishes a goal. Period. That's it. And what irritates people about it is like, well, that's so broad as to be completely non-useful. And I understand that. I understand the criticism.
But in my mind, if you kind of fast forward months, I guess, in AI years, the implementation of it, and we're already starting to see this, and we'll talk about this, different kinds of agents, right? So I think in addition to having a usable definition, and I like yours, by the way, and we should talk more about that, that you just came out with, the classification of agents actually is also useful, which is, is it autonomous or non-autonomous? Does it have a deterministic workflow? Does it have a non-deterministic workflow? Is it working synchronously? Is it working asynchronously? Then you have the different kind of interaction modes. Is it a chat agent, kind of like a customer support agent would be? You're having this kind of back and forth. Is it a workflow agent that just does a discrete number of steps? So there's all these different flavors of agents. So if I were to draw it in a Venn diagram, I would draw a big circle that says, this is agents, and then I have a bunch of circles, some overlapping, because they're not mutually exclusive. And so I think that's what's interesting, and we're seeing development along a bunch of different paths, right? So if you look at the first implementation of agent frameworks, you look at Baby AGI and AutoGPT, I think it was, not Autogen, that's the Microsoft one. They were way ahead of their time because they assumed this level of reasoning and execution and planning capability that just did not exist, right? So it was an interesting thought experiment, which is what it was. Even the guy that, I'm an investor in Yohei's fund that did Baby AGI. It wasn't ready, but it was a sign of what was to come. And so the question then is, when is it ready? And so lots of people talk about the state of the art when it comes to agents. I'm a pragmatist, so I think of the state of the practical.
It's like, okay, well, what can I actually build that has commercial value or solves actually some discrete problem with some baseline of repeatability or verifiability?

swyx [00:06:22]: There was a lot, and very, very interesting. I'm not irritated by it at all. Okay. As you know, I take a... There's a lot of anthropological view or linguistics view. And in linguistics, you don't want to be prescriptive. You want to be descriptive. Yeah. So you're a goals guy. That's the key word in your thing. And other people have other definitions that might involve like delegated trust or non-deterministic work, LLM in the loop, all that stuff. The other thing I was thinking about, just the comment on Baby AGI, AutoGPT. Yeah. In that piece that you just read, I was able to go through our backlog and just kind of track the winter of agents and then the summer now. Yeah. And it's... We can tell the whole story as an oral history, just following that thread. And it's really just like, I think, I tried to explain the why now, right? Like I had, there's better models, of course. There's better tool use with like, they're just more reliable. Yep. Better tools with MCP and all that stuff. And I'm sure you have opinions on that too. Business model shift, which you like a lot. I just heard you talk about RaaS with MFM guys. Yep. Cost is dropping a lot. Yep. Inference is getting faster. There's more model diversity. Yep. Yep. I think it's a subtle point. It means that like, you have different models with different perspectives. You don't get stuck in the basin of performance of a single model. Sure. You can just get out of it by just switching models. Yep. Multi-agent research and RL fine tuning. So I just wanted to let you respond to like any of that.

Dharmesh [00:07:44]: Yeah. A couple of things. Connecting the dots on the kind of the definition side of it. So we'll get the irritation out of the way completely. I have one more, even more irritating leap on the agent definition thing.
So here's the way I think about it. By the way, the kind of word agent, I looked it up, like the English dictionary definition. The old school agent, yeah. Is when you have someone or something that does something on your behalf, like a travel agent or a real estate agent acts on your behalf. It's like proxy, which is a nice kind of general definition. So the other direction I'm sort of headed, and it's going to tie back to tool calling and MCP and things like that, is if you, and I'm not a biologist by any stretch of the imagination, but we have these single-celled organisms, right? Like the simplest possible form of what one would call life. But it's still life. It just happens to be single-celled. And then you can combine cells and then cells become specialized over time. And you have much more sophisticated organisms, you know, kind of further down the spectrum. In my mind, at the most fundamental level, you can almost think of having atomic agents. What is the simplest possible thing that's an agent that can still be called an agent? What is the equivalent of a kind of single-celled organism? And the reason I think that's useful is right now we're headed down the road, which I think is very exciting around tool use, right? That says, okay, the LLMs now can be provided a set of tools that it calls to accomplish whatever it needs to accomplish in the kind of furtherance of whatever goal it's trying to get done. And I'm not overly bothered by it, but if you think about it, if you just squint a little bit and say, well, what if everything was an agent? And what if tools were actually just atomic agents? Because then it's turtles all the way down, right? Then it's like, oh, well, all that's really happening with tool use is that we have a network of agents that know about each other through something like MCP and can kind of decompose a particular problem and say, oh, I'm going to delegate this to this set of agents.
And why do we need to draw this distinction between tools, which are functions most of the time? And an actual agent. And so I'm going to write this irritating LinkedIn post, you know, proposing this. It's like, okay. And I'm not suggesting we should call even functions, you know, call them agents. But there is a certain amount of elegance that happens when you say, oh, we can just reduce it down to one primitive, which is an agent that you can combine in complicated ways to kind of raise the level of abstraction and accomplish higher order goals. Anyway, that's my answer. I'd say that's a success. Thank you for coming to my TED Talk on agent definitions.

Alessio [00:09:54]: How do you define the minimum viable agent? Do you already have a definition for, like, where you draw the line between a cell and an atom? Yeah.

Dharmesh [00:10:02]: So in my mind, it has to, at some level, use AI in order for it to—otherwise, it's just software. It's like, you know, we don't need another word for that. And so that's probably where I draw the line. So then the question, you know, the counterargument would be, well, if that's true, then lots of tools themselves are actually not agents because they're just doing a database call or a REST API call or whatever it is they're doing. And that does not necessarily qualify them, which is a fair counterargument. And I accept that. It's like a good argument. I still like to think about—because we'll talk about multi-agent systems, because I think—so we've accepted, which I think is true, lots of people have said it, and you've hopefully combined some of those clips of really smart people saying this is the year of agents, and I completely agree, it is the year of agents. But then shortly after that, it's going to be the year of multi-agent systems or multi-agent networks. I think that's where it's going to be headed next year. Yeah.

swyx [00:10:54]: OpenAI's already on that. Yeah. My quick philosophical engagement with you on this.
I often think about kind of the other spectrum, the other end of the cell spectrum. So single cell is life, multi-cell is life, and you clump a bunch of cells together in a more complex organism, they become organs, like an eye and a liver or whatever. And then obviously we consider ourselves one life form. There's not like a lot of lives within me. I'm just one life. And now, obviously, I don't think people don't really like to anthropomorphize agents and AI. Yeah. But we are extending our consciousness and our brain and our functionality out into machines. I just saw you were a Bee. Yeah. Which is, you know, it's nice. I have a limitless pendant in my pocket.

Dharmesh [00:11:37]: I got one of these boys. Yeah.

swyx [00:11:39]: I'm testing it all out. You know, got to be early adopters. But like, we want to extend our personal memory into these things so that we can be good at the things that we're good at. And, you know, machines are good at it. Machines are there. So like, my definition of life is kind of like going outside of my own body now. I don't know if you've ever had like reflections on that. Like how yours. How our self is like actually being distributed outside of you. Yeah.

Dharmesh [00:12:01]: I don't fancy myself a philosopher. But you went there. So yeah, I did go there. I'm fascinated by kind of graphs and graph theory and networks and have been for a long, long time. And to me, we're sort of all nodes in this kind of larger thing. It just so happens that we're looking at individual kind of life forms as they exist right now. But so the idea is when you put a podcast out there, there's these little kind of nodes you're putting out there of like, you know, conceptual ideas. Once again, you have varying kind of forms of those little nodes that are up there and are connected in varying and sundry ways. And so I just think of myself as being a node in a massive, massive network. And I'm producing more nodes as I put content or ideas.
And, you know, you spend some portion of your life collecting dots, experiences, people, and some portion of your life then connecting dots from the ones that you've collected over time. And I found that really interesting things happen and you really can't know in advance how those dots are necessarily going to connect in the future. And that's, yeah. So that's my philosophical take. That's the, yes, exactly. Coming back.

Alessio [00:13:04]: Yep. Do you like graph as an agent? Abstraction? That's been one of the hot topics with LangGraph and Pydantic and all that.

Dharmesh [00:13:11]: I do. The thing I'm more interested in terms of use of graphs, and there's lots of work happening on that now, is graph data stores as an alternative in terms of knowledge stores and knowledge graphs. Yeah. Because, you know, so I've been in software now 30 plus years, right? So it's not 10,000 hours. It's like 100,000 hours that I've spent doing this stuff. And so I've grew up with, so back in the day, you know, I started on mainframes. There was a product called IMS from IBM, which is basically an index database, what we'd call like a key value store today. Then we've had relational databases, right? We have tables and columns and foreign key relationships. We all know that. We have document databases like MongoDB, which is sort of a nested structure keyed by a specific index. We have vector stores, vector embedding database. And graphs are interesting for a couple of reasons. One is, so it's not classically structured in a relational way. When you say structured database, to most people, they're thinking tables and columns and in relational database and set theory and all that. Graphs still have structure, but it's not the tables and columns structure. And you could wonder, and people have made this case, that they are a better representation of knowledge for LLMs and for AI generally than other things.
So that's kind of thing number one conceptually, and that might be true, I think is possibly true. And the other thing that I really like about that in the context of, you know, I've been in the context of data stores for RAG is, you know, RAG, you say, oh, I have a million documents, I'm going to build the vector embeddings, I'm going to come back with the top X based on the semantic match, and that's fine. All that's very, very useful. But the reality is something gets lost in the chunking process and the, okay, well, those tend, you know, like, you don't really get the whole picture, so to speak, and maybe not even the right set of dimensions on the kind of broader picture. And it makes intuitive sense to me that if we did capture it properly in a graph form, that maybe that feeding into a RAG pipeline will actually yield better results for some use cases, I don't know, but yeah.

Alessio [00:15:03]: And do you feel like at the core of it, there's this difference between imperative and declarative programs? Because if you think about HubSpot, it's like, you know, people and graph kind of goes hand in hand, you know, but I think maybe the software before was more like primary foreign key based relationship, versus now the models can traverse through the graph more easily.

Dharmesh [00:15:22]: Yes. So I like that representation. There's something. It's just conceptually elegant about graphs and just from the representation of it, they're much more discoverable, you can kind of see it, there's observability to it, versus kind of embeddings, which you can't really do much with as a human. You know, once they're in there, you can't pull stuff back out. But yeah, I like that kind of idea of it. And the other thing that's kind of, because I love graphs, I've been long obsessed with PageRank from back in the early days. And, you know, one of the kind of simplest algorithms in terms of coming up, you know, with a phone, everyone's been exposed to PageRank.
And the idea is that, and so I had this other idea for a project, not a company, and I have hundreds of these, called NodeRank, is to be able to take the idea of PageRank and apply it to an arbitrary graph that says, okay, I'm going to define what authority looks like and say, okay, well, that's interesting to me, because then if you say, I'm going to take my knowledge store, and maybe this person that contributed some number of chunks to the graph data store has more authority on this particular use case or prompt that's being submitted than this other one, or maybe this one was more popular, or maybe this one has, whatever it is, there should be a way for us to kind of rank nodes in a graph and sort them in some useful way. Yeah.swyx [00:16:34]: So I think that's generally useful for anything. I think the problem, like, so even though at my conferences, GraphRAG is super popular and people are getting knowledge graph religion, and I will say, like, it's getting traction in two areas, conversation memory, and then also just RAG in general, like the document data. Yeah. It's like a source. Most ML practitioners would say that knowledge graph is kind of like a dirty word. The graph database, people get graph religion, everything's a graph, and then they go really hard into it and then they get a graph that is too complex to navigate. Yes. And so the simple way to put it is like you, running HubSpot, you know the power of graphs, the way that Google has pitched them for many years, but I don't suspect that HubSpot itself uses a knowledge graph. No. Yeah.Dharmesh [00:17:26]: So when is it over engineering? Basically? It's a great question. I don't know. So the question now, like in AI land, right, is, do we necessarily need to understand? So right now, LLMs for the most part are somewhat black boxes, right? 
We sort of understand how the, you know, the algorithm itself works, but we really don't know what's going on in there and how things come out. So if a graph data store is able to produce the outcomes we want, it's like, here's a set of queries I want to be able to submit and then it comes out with useful content. Maybe the underlying data store is as opaque as vector embeddings or something like that, but maybe it's fine. Maybe we don't necessarily need to understand it to get utility out of it. And so maybe if it's messy, that's okay. Um, it's just another form of lossy compression. Uh, it's just lossy in a way that we just don't completely understand, because it's going to grow organically. Uh, and it's not structured. It's like, ah, we're just gonna throw a bunch of stuff in there. Let the equivalent of the embedding algorithm, whatever they call it in graph land, sort it out. Um, so the one with the best results wins. I think so. Yeah.swyx [00:18:26]: Or is this the practical side of me is like, yeah, if it's useful, we don't necessarilyDharmesh [00:18:30]: need to understand it.swyx [00:18:30]: I have, I mean, I'm happy to push back as long as you want. Uh, it's not practical to evaluate like the 10 different options out there because it takes time. It takes people, it takes, you know, resources, right? So that's the first thing. Second thing is your evals are typically on small things and some things only work at scale. Yup. Like graphs. Yup.Dharmesh [00:18:46]: Yup. That's, yeah, no, that's fair. And I think this is one of the challenges in terms of implementation of graph databases is that the most common approach that I've seen developers do, I've done it myself, is that, oh, I've got a Postgres database or a MySQL or whatever. I can represent a graph with a simple set of tables with a parent child thing or whatever. And that sort of gives me the ability, uh, why would I need anything more than that? 
And the answer is, well, if you don't need anything more than that, you don't need anything more than that. But there's a high chance that you're sort of missing out on the actual value that, uh, the graph representation gives you. Which is the ability to traverse the graph, uh, efficiently in ways that, kind of going through the traversal in a relational database form, even though structurally you have the data, practically you're not gonna be able to pull it out in useful ways. Uh, so you wouldn't represent a social graph, uh, using that kind of relational table model. It just wouldn't scale. It wouldn't work.swyx [00:19:36]: Uh, yeah. Uh, I think we want to move on to MCP. Yeah. But I just want to, like, get some engineering advice. Yeah. Uh, obviously you've run a lot of projects and run a lot of teams. Do you have a general rule for over-engineering or, you know, engineering ahead of time? You know, because people, we know premature engineering is the root of all evil. Yep. But also sometimes you just have to. Yep. When do you do it? Yes.Dharmesh [00:19:59]: It's a great question. This is, uh, a question as old as time almost, which is what's the right and wrong levels of abstraction. That's effectively what, uh, we're answering when we're trying to do engineering. I tend to be a pragmatist, right? So here's the thing. Um, lots of times doing something the right way is like a marginal increased cost. In those cases, just do it the right way. And this is what makes a, uh, a great engineer or a good engineer better than, uh, a not so great one. It's like, okay, all things being equal, if it's going to take you, you know, roughly close to constant time anyway, might as well do it the right way. Like, so do things well. Then the question is, okay, well, am I building a framework as a reusable library? 
To what degree, uh, what am I anticipating in terms of what's going to need to change in this thing? Uh, you know, along what dimension? And then I think like a business person in some ways, like what's the return on calories, right? So, uh, and you look at, um, kind of the expected value of it. It's like, okay, here are the five possible things that could happen, uh, try to assign probabilities, like, okay, well, if there's a 50% chance that we're going to go down this particular path some day, like, or one of these five things is going to happen and it costs you 10% more to engineer for that. It's basically something that yields a kind of interest compounding value, um, as you get closer to the time of needing that, versus having to take on debt, which is when you under engineer it, you're taking on debt. You're going to have to pay it off when you do get to that eventuality where something happens. One thing as a pragmatist, uh, so I would rather under engineer something than over engineer it, if I were going to err on the side of something, and here's the reason: when you under engineer it, uh, yes, you take on tech debt, uh, but the interest rate is relatively known and payoff is very, very possible, right? Which is, oh, I took a shortcut here, as a result of which now this thing that should have taken me a week is now going to take me four weeks. Fine. But if that particular thing that you thought might happen never actually, you never have that use case transpire, or it just doesn't, it's like, well, you just saved yourself time, right? And that has value because you were able to do other things instead of, uh, kind of slightly over-engineering it. But there's no perfect answers; it's an art form in terms of, uh, and yeah, we'll bring kind of this layers of abstraction back on the code generation conversation, which I think I have later on, butAlessio [00:22:05]: I was going to ask, we can just jump ahead quickly. 
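Dharmesh's "return on calories" heuristic can be made concrete with a toy expected-value comparison. The numbers below are made up purely for illustration; they are not figures from the conversation.

```python
# Toy expected-value comparison of over- vs under-engineering,
# with invented numbers (engineer-days) purely for illustration.

def expected_cost(p_needed, build_now_extra, later_rework_cost):
    """Compare paying an abstraction cost up front vs. paying
    'tech-debt interest' only if the need actually materializes."""
    over_engineer = build_now_extra                # paid regardless
    under_engineer = p_needed * later_rework_cost  # paid only if needed
    return over_engineer, under_engineer

# 50% chance the flexibility is ever needed; the rework, if it
# happens, costs 4 days vs. 3 certain days of up-front abstraction.
over, under = expected_cost(p_needed=0.5, build_now_extra=3, later_rework_cost=4)
```

Under these assumptions under-engineering wins in expectation, which matches his point: the "interest rate" on known tech debt is often cheaper than paying for flexibility that may never be used.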
Yeah. Like, as you think about vibe coding and all that, how does the, yeah, percentage of potential usefulness change? Because I feel like when we over-engineer, a lot of times it's the investment in syntax, it's less about the investment in, like, architecting. Yep. Yeah. How does that change your calculus?Dharmesh [00:22:22]: A couple of things, right? One is, um, so, you know, going back to that kind of ROI or return on calories kind of calculus or heuristic you think through, it's like, okay, well, what is it going to cost me to put this layer of abstraction above the code that I'm writing now, uh, in anticipating kind of future needs. If the cost of fixing, uh, or redoing under-engineering right now will trend towards zero, that says, okay, well, I don't have to get it right right now, because even if I get it wrong, I'll run the thing for six hours instead of 60 minutes or whatever. It doesn't really matter, right? Like, because the ability to refactor code is going to trend towards zero in cost. Um, and because, not that long from now, we're going to have, you know, large code bases be able to exist, uh, you know, as context, uh, for a code generation or a code refactoring, uh, model. So I think it's going to make the case for under engineering, uh, even stronger. Which is why I take on that cost. You just pay the interest when you get there. Um, just go on with your life, vibe code it, and, uh, come back when you need to. Yeah.Alessio [00:23:18]: Sometimes I feel like there's no decision-making in some things. Like, uh, today I built an autosave for, like, our internal notes platform and I literally just asked Cursor, can you add autosave? Yeah. I don't know if it's over or under engineered. Yep. I just vibe coded it. Yep. 
And I feel like at some point we're going to get to the point where the models kindDharmesh [00:23:36]: of decide where the right line is, but this is where, like, in my mind, the danger is, right? So there's two sides to this. One is the cost of kind of development and coding and things like that stuff that, you know, we talk about. But then, like in your example, you know, one of the risks that we have is that because adding a feature, uh, like a save or whatever the feature might be, to a product, as that price tends towards zero, are we going to be less discriminant about what features we add, as a result making products more complicated, which has a negative impact on the user and a negative impact on the business. Um, and so that's the thing I worry about if it starts to become too easy, are we going to be too promiscuous in our, uh, kind of adding product extensions and things like that. It's like, ah, why not add X, Y, Z or whatever. Back then it was like, oh, we only have so many engineering hours or story points or however you measure things. Uh, that at least kept us in check a little bit. Yeah.Alessio [00:24:22]: And then over engineering, you're like, yeah, it's kind of like you're putting that on yourself. Yeah. Like now it's like the models don't understand that if they add too much complexity, it's going to come back to bite them later. Yep. So they just do whatever they want to do. Yeah. And I'm curious where in the workflow that's going to be, where it's like, hey, this is like the amount of complexity and over-engineering you can do before you got to ask me if we should actually do it versus like do something else.Dharmesh [00:24:45]: So you know, we've already, like, we're living this, uh, in the code generation world, this kind of compressed, um, cycle time. Right. 
It's like, okay, we went from auto-complete, uh, in GitHub Copilot, to, like, oh, finish this particular thing and hit tab, to, oh, I sort of know your file or whatever, I can write out a full function for you, to now I can, like, hold a bunch of the context in my head, uh, so we can do app generation, which we have now with Lovable and Bolt and Replit Agent. Yeah. And other things. So then the question is, okay, well, where does it naturally go from here? So we're going to generate products. Makes sense. We might be able to generate platforms, as though I want a platform for ERP that does this, whatever. And that includes the APIs, includes the product and the UI, and all the things that make for a platform. There's nothing that says we would stop. Like, okay, can you generate an entire software company someday? Right. Uh, with the platform and the monetization and the go-to-market and the whatever. And you know, that's interesting to me in terms of, uh, you know, what, when you take it to almost ludicrous levels of abstraction.swyx [00:25:39]: It's like, okay, turn it to 11. You mentioned vibe coding, so I have to, this is a blog post I haven't written, but I'm kind of exploring it. Is the junior engineer dead?Dharmesh [00:25:49]: I don't think so. I think what will happen is that the junior engineer will be able to, if all they're bringing to the table is the fact that they are a junior engineer, then yes, they're likely dead. But hopefully if they can communicate with carbon-based life forms, they can interact with product, if they're willing to talk to customers, they can take their kind of basic understanding of engineering and how kind of software works. I think that has value. So I have a 14-year-old right now who's taking a Python programming class, and some people ask me, it's like, why is he learning coding? And my answer is, is because it's not about the syntax, it's not about the coding. 
What he's learning is like the fundamental thing of like how things work. And there's value in that. I think there's going to be timeless value in systems thinking and abstractions and what that means. And whether functions manifested as math, which he's going to get exposed to regardless, or there are some core primitives to the universe, I think, that the more you understand them, those are what I would kind of think of as like really large dots in your life that will have a higher gravitational pull and value to them that you'll then be able to. So I want him to collect those dots, and he's not resisting. So it's like, okay, while he's still listening to me, I'm going to have him do things that I think will be useful.swyx [00:26:59]: You know, part of one of the pitches that I evaluated for AI engineer is a term. And the term is that maybe the traditional interview path or career path of software engineer goes away, which is because what's the point of LeetCode? Yeah. And, you know, it actually matters more that you know how to work with AI and to implement the things that you want. Yep.Dharmesh [00:27:16]: That's one of the like interesting things that's happened with generative AI. You know, you go from machine learning and the models and just that underlying form, which is like true engineering, right? Like the actual, what I call real engineering. I don't think of myself as a real engineer, actually. I'm a developer. But now with generative AI. We call it AI and it's obviously got its roots in machine learning, but it just feels like fundamentally different to me. Like you have the vibe. It's like, okay, well, this is just a whole different approach to software development to so many different things. 
And so I'm wondering now, it's like an AI engineer is like, if you were like to draw the Venn diagram, it's interesting because the cross between like AI things, generative AI and what the tools are capable of, what the models do, and this whole new kind of body of knowledge that we're still building out, it's still very young, intersected with kind of classic engineering, software engineering. Yeah.swyx [00:28:04]: I just described the overlap as it separates out eventually until it's its own thing, but it's starting out as a software. Yeah.Alessio [00:28:11]: That makes sense. So to close the vibe coding loop, the other big hype now is MCPs. Obviously, I would say Claude Desktop and Cursor are like the two main drivers of MCP usage. I would say my favorite is the Sentry MCP. I can pull in errors and then you can just put the context in Cursor. How do you think about that abstraction layer? Does it feel... Does it feel almost too magical in a way? Do you think it's like you get enough? Because you don't really see how the server itself is then kind of like repackaging theDharmesh [00:28:41]: information for you? I think MCP as a standard is one of the better things that's happened in the world of AI because a standard needed to exist and absent a standard, there was a set of things that just weren't possible. Now, we can argue whether it's the best possible manifestation of a standard or not. Does it do too much? Does it do too little? I get that, but it's just simple enough to both be useful and unobtrusive. It's understandable and adoptable by mere mortals, right? It's not overly complicated. You know, a reasonable engineer can stand up an MCP server relatively easily. The thing that has me excited about it is like, so I'm a big believer in multi-agent systems. And so that's going back to our kind of this idea of an atomic agent. 
So imagine the MCP server, like obviously it calls tools, but the way I think about it, so I'm working on my current passion project, which is agent.ai. And we'll talk more about that in a little bit. More about the, I think we should, because I think it's interesting not to promote the project at all, but there's some interesting ideas in there. One of which is around, we're going to need a mechanism for, if agents are going to collaborate and be able to delegate, there's going to need to be some form of discovery and we're going to need some standard way. It's like, okay, well, I just need to know what this thing over here is capable of. We're going to need a registry, which Anthropic's working on. I'm sure others will and have been doing directories of, and there's going to be a standard around that too. How do you build out a directory of MCP servers? I think that's going to unlock so many things just because, and we're already starting to see it. So I think MCP or something like it is going to be the next major unlock because it allows systems that don't know about each other, don't need to, it's that kind of decoupling of like Sentry and whatever tools someone else was building. And it's not just about, you know, Claude Desktop or things like, even on the client side, I think we're going to see very interesting consumers of MCP, MCP clients versus just the chat body kind of things. Like, you know, Claude Desktop and Cursor and things like that. But yeah, I'm very excited about MCP in that general direction.swyx [00:30:39]: I think the typical cynical developer take, it's like, we have OpenAPI. Yeah. What's the new thing? I don't know if you have a, do you have a quick MCP versus everything else? Yeah.Dharmesh [00:30:49]: So it's, so I like OpenAPI, right? So just a descriptive thing. It's OpenAPI. OpenAPI. Yes, that's what I meant. So it's basically a self-documenting thing. We can do machine-generated, lots of things from that output. 
It's a structured definition of an API. I get that, love it. But MCPs sort of are kind of use case specific. They're perfect for exactly what we're trying to use them for around LLMs in terms of discovery. It's like, okay, I don't necessarily need to know kind of all this detail. And so right now we have, we'll talk more about like MCP server implementations, but We will? I think, I don't know. Maybe we won't. At least it's in my head. It's like a back processor. But I do think MCP adds value above OpenAPI. It's, yeah, just because it solves this particular thing. And if we had come to the world, which we have, like, it's like, hey, we already have OpenAPI. It's like, if that were good enough for the universe, the universe would have adopted it already. There's a reason why MCP is taking off: because it marginally adds something that was missing before and doesn't go too far. And so that's why the kind of rate of adoption, you folks have written about this and talked about it. Yeah, why MCP won. Yeah. And it won because the universe decided that this was useful and maybe it gets supplanted by something else. Yeah. And maybe we discover, oh, maybe OpenAPI was good enough the whole time. I doubt that.swyx [00:32:09]: The meta lesson, this is, I mean, he's an investor in DevTools companies. I work in developer experience and DevRel in DevTools companies. Yep. Everyone wants to own the standard. Yeah. I'm sure you guys have tried to launch your own standards. Actually, is HubSpot known for a standard? You know, obviously inbound marketing. But is there a standard or protocol that you ever tried to push? No.Dharmesh [00:32:30]: And there's a reason for this. Yeah. Is that? And I don't mean, need to mean, speak for the people of HubSpot, but I personally. You kind of do. I'm not smart enough. That's not the, like, I think I have a. You're smart. Not enough for that. I'm much better off understanding the standards that are out there. 
And I'm more on the composability side. Let's, like, take the pieces of technology that exist out there, combine them in creative, unique ways. And I like to consume standards. I don't like to, and that's not that I don't like to create them. I just don't think I have the, both the raw wattage or the credibility. It's like, okay, well, who the heck is Dharmesh, and why should we adopt a standard he created?swyx [00:33:07]: Yeah, I mean, there are people who don't monetize standards, like OpenTelemetry is a big standard, and LightStep never capitalized on that.Dharmesh [00:33:15]: So, okay, so if I were to do a standard, there's two things that have been in my head in the past. I was one around, a very, very basic one around, I don't even have the domain, I have a domain for everything, for open marketing. Because the issue we had in HubSpot grew up in the marketing space. There we go. There was no standard around data formats and things like that. It doesn't go anywhere. But the other one, and I did not mean to go here, but I'm going to go here. It's called OpenGraph. I know the term was already taken, but it hasn't been used for like 15 years now for its original purpose. But what I think should exist in the world is right now, our information, all of us, nodes are in the social graph at Meta or the professional graph at LinkedIn. Both of which are actually relatively closed in actually very annoying ways. Like very, very closed, right? Especially LinkedIn. Especially LinkedIn. I personally believe that if it's my data, and if I would get utility out of it being open, I should be able to make my data open or publish it in whatever forms that I choose, as long as I have control over it as opt-in. So the idea is around OpenGraph that says, here's a standard, here's a way to publish it. I should be able to go to OpenGraph.org slash Dharmesh dot JSON and get it back. And it's like, here's your stuff, right? 
And I can choose along the way and people can write to it and I can approve. And there can be an entire system. And if I were to do that, I would do it as a... Like a public benefit, non-profit-y kind of thing, as this is a contribution to society. I wouldn't try to commercialize that. Have you looked at ATProto? What's that? ATProto.swyx [00:34:43]: It's the protocol behind Bluesky. Okay. My good friend, Dan Abramov, who was the face of React for many, many years, now works there. And he actually did a talk that I can send you, which basically kind of tries to articulate what you just said. But he does, he loves doing these like really great analogies, which I think you'll like. Like, you know, a lot of our data is behind a handle, behind a domain. Yep. So he's like, all right, what if we flip that? What if it was like our handle and then the domain? Yep. So, and that's really like your data should belong to you. Yep. And I should not have to wait 30 days for my Twitter data to export. Yep.Dharmesh [00:35:19]: you should be able to at least be able to automate it or do like, yes, I should be able to plug it into an agentic thing. Yeah. Yes. I think we're... Because so much of our data is... Locked up. I think the trick here isn't that standard. It is getting the normies to care.swyx [00:35:37]: Yeah. Because normies don't care.Dharmesh [00:35:38]: That's true. But building on that, normies don't care. So, you know, privacy is a really hot topic and an easy word to use, but it's not a binary thing. Like there are use cases where, and we make these choices all the time, that I will trade, not all privacy, but I will trade some privacy for some productivity gain or some benefit to me that says, oh, I don't care about that particular data being online if it gives me this in return, or I don't mind sharing this information with this company.Alessio [00:36:02]: If I'm getting, you know, this in return, but that sort of should be my option. 
I think now with computer use, you can actually automate some of the exports. Yes. Like something we've been doing internally is like everybody exports their LinkedIn connections. Yep. And then internally, we kind of merge them together to see how we can connect our companies to customers or things like that.Dharmesh [00:36:21]: And not to pick on LinkedIn, but since we're talking about it, but they feel strongly enough on the, you know, do not take LinkedIn data that they will block even browser use kind of things or whatever. They go to great, great lengths, even to see patterns of usage. And it says, oh, there's no way you could have, you know, gotten that particular thing or whatever without, and it's, so it's, there's...swyx [00:36:42]: Wasn't there a Supreme Court case that they lost? Yeah.Dharmesh [00:36:45]: So the one they lost was around someone that was scraping public data that was on the public internet. And that particular company had not signed any terms of service or whatever. It's like, oh, I'm just taking data that's on, there was no, and so that's why they won. But now, you know, the question is around, can LinkedIn... I think they can. Like, when you use, as a user, you use LinkedIn, you are signing up for their terms of service. And if they say, well, this kind of use of your LinkedIn account that violates our terms of service, they can shut your account down, right? They can. And they, yeah, so, you know, we don't need to make this a discussion. By the way, I love the company, don't get me wrong. I'm an avid user of the product. You know, I've got... Yeah, I mean, you've got over a million followers on LinkedIn, I think. Yeah, I do. And I've known people there for a long, long time, right? And I have lots of respect. And I understand even where the mindset originally came from of this kind of members-first approach to, you know, a privacy-first. I sort of get that. 
But sometimes you sort of have to wonder, it's like, okay, well, that was 15, 20 years ago. There's likely some controlled ways to expose some data on some member's behalf and not just completely be a binary. It's like, no, thou shalt not have the data.swyx [00:37:54]: Well, just pay for sales navigator.Alessio [00:37:57]: Before we move to the next layer of instruction, anything else on MCP you mentioned? Let's move back and then I'll tie it back to MCPs.Dharmesh [00:38:05]: So I think the... Open this with agent. Okay, so I'll start with... Here's my kind of running thesis, is that as AI and agents evolve, which they're doing very, very quickly, we're going to look at them more and more. I don't like to anthropomorphize. We'll talk about why this is not that. Less as just like raw tools and more like teammates. They'll still be software. They should self-disclose as being software. I'm totally cool with that. But I think what's going to happen is that in the same way you might collaborate with a team member on Slack or Teams or whatever you use, you can imagine a series of agents that do specific things just like a team member might do, that you can delegate things to. You can collaborate. You can say, hey, can you take a look at this? Can you proofread that? Can you try this? You can... Whatever it happens to be. So I think it is... I will go so far as to say it's inevitable that we're going to have hybrid teams someday. And what I mean by hybrid teams... So back in the day, hybrid teams were, oh, well, you have some full-time employees and some contractors. Then it was like hybrid teams are some people that are in the office and some that are remote. That's the kind of form of hybrid. The next form of hybrid is like the carbon-based life forms and agents and AI and some form of software. So let's say we temporarily stipulate that I'm right about that over some time horizon that eventually we're going to have these kind of digitally hybrid teams. 
So if that's true, then the question you sort of ask yourself is that then what needs to exist in order for us to get the full value of that new model? It's like, okay, well... You sort of need to... It's like, okay, well, how do I... If I'm building a digital team, like, how do I... Just in the same way, if I'm interviewing for an engineer or a designer or a PM, whatever, it's like, well, that's why we have professional networks, right? It's like, oh, they have a presence on likely LinkedIn. I can go through that semi-structured, structured form, and I can see the experience of whatever, you know, self-disclosed. But, okay, well, agents are going to need that someday. And so I'm like, okay, well, this seems like a thread that's worth pulling on. That says, okay. So I... So agent.ai is out there. And it's LinkedIn for agents. It's LinkedIn for agents. It's a professional network for agents. And the more I pull on that thread, it's like, okay, well, if that's true, like, what happens, right? It's like, oh, well, they have a profile just like anyone else, just like a human would. It's going to be a graph underneath, just like a professional network would be. It's just that... And you can have its, you know, connections and follows, and agents should be able to post. That's maybe how they do release notes. Like, oh, I have this new version. Whatever they decide to post, it should just be able to... Behave as a node on the network of a professional network. As it turns out, the more I think about that and pull on that thread, the more and more things, like, start to make sense to me. So it may be more than just a pure professional network. So my original thought was, okay, well, it's a professional network and agents as they exist out there, which I think there's going to be more and more of, will kind of exist on this network and have the profile. 
But then, and this is always dangerous, I'm like, okay, I want to see a world where thousands of agents are out there in order for the... Because those digital employees, the digital workers don't exist yet in any meaningful way. And so then I'm like, oh, can I make that easier for, like... And so I have, as one does, it's like, oh, I'll build a low-code platform for building agents. How hard could that be, right? Like, very hard, as it turns out. But it's been fun. So now, agent.ai has 1.3 million users. 3,000 people have actually, you know, built some variation of an agent, sometimes just for their own personal productivity. About 1,000 of which have been published. And the reason this comes back to MCP for me, so imagine that and other networks, since I know agent.ai. So right now, we have an MCP server for agent.ai that exposes all the internally built agents that we have that do, like, super useful things. Like, you know, I have access to a Twitter API that I can subsidize the cost. And I can say, you know, if you're looking to build something for social media, these kinds of things, with a single API key, and it's all completely free right now, I'm funding it. That's a useful way for it to work. And then we have a developer to say, oh, I have this idea. I don't have to worry about open AI. I don't have to worry about, now, you know, this particular model is better. It has access to all the models with one key. And we proxy it kind of behind the scenes. And then expose it. So then we get this kind of community effect, right? That says, oh, well, someone else may have built an agent to do X. Like, I have an agent right now that I built for myself to do domain valuation for website domains because I'm obsessed with domains, right? And, like, there's no efficient market for domains. There's no Zillow for domains right now that tells you, oh, here are what houses in your neighborhood sold for. It's like, well, why doesn't that exist? 
We should be able to solve that problem. And, yes, you're still guessing. Fine. There should be some simple heuristic. So I built that. It's like, okay, well, let me go look for past transactions. You say, okay, I'm going to type in agent.ai, agent.com, whatever domain. What's it actually worth? I'm looking at buying it. It can go and say, oh, which is what it does. It's like, I'm going to go look at are there any published domain transactions recently that are similar, either use the same word, same top-level domain, whatever it is. And it comes back with an approximate value, and it comes back with its kind of rationale for why it picked the value and comparable transactions. Oh, by the way, this domain sold for published. Okay. So that agent now, let's say, existed on the web, on agent.ai. Then imagine someone else says, oh, you know, I want to build a brand-building agent for startups and entrepreneurs to come up with names for their startup. Like a common problem, every startup is like, ah, I don't know what to call it. And so they type in five random words that kind of define whatever their startup is. And you can do all manner of things, one of which is like, oh, well, I need to find the domain for it. What are possible choices? Now it's like, okay, well, it would be nice to know if there's an aftermarket price for it, if it's listed for sale. Awesome. Then imagine calling this valuation agent. It's like, okay, well, I want to find where the arbitrage is, where the agent valuation tool says this thing is worth $25,000. It's listed on GoDaddy for $5,000. It's close enough. Let's go do that. Right? And that's a kind of composition use case that in my future state. Thousands of agents on the network, all discoverable through something like MCP. And then you as a developer of agents have access to all these kind of Lego building blocks based on what you're trying to solve. 
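The composition use case described here — a brand-naming agent calling a domain-valuation agent and looking for arbitrage against listed prices — can be sketched roughly as follows. All functions, domains, and prices are hypothetical stand-ins, not agent.ai's actual agents or data:

```python
# Hypothetical sketch of agent composition: a brand-naming agent calls a
# domain-valuation agent and keeps candidates where the estimated value
# beats the aftermarket asking price (the "arbitrage" described above).

def valuation_agent(domain: str) -> int:
    """Pretend appraisal based on comparable published sales."""
    comps = {"agent.ai": 25_000, "example.io": 3_000}
    return comps.get(domain, 1_000)

def listing_agent(domain: str) -> int:
    """Pretend aftermarket asking price (e.g. a marketplace listing)."""
    listings = {"agent.ai": 5_000, "example.io": 4_500}
    return listings.get(domain, 10_000)

def naming_agent(candidates: list[str]) -> list[str]:
    """Compose the two agents: keep domains where value > asking price."""
    return [d for d in candidates if valuation_agent(d) > listing_agent(d)]
```

With these toy numbers, `naming_agent(["agent.ai", "example.io"])` keeps only `"agent.ai"`, since its estimated value exceeds its asking price; the point is just the control flow of one agent delegating to another, discoverable building block.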
Then you blend in orchestration, which is getting better and better with the reasoning models now. Just describe the problem that you have. Now, the next layer that we're all contending with is how many tools you can actually give an LLM before the LLM breaks. That number used to be like 15 or 20 before results kind of started to vary dramatically. And so that's the thing I'm thinking about now. It's like, okay, if I want to expose 1,000 of these agents to a given LLM, obviously I can't give it all 1,000. Is there some intermediate layer that says, based on your prompt, I'm going to make a best guess at which agents might be able to be helpful for this particular thing? Yeah.
Alessio [00:44:37]: Yeah, like RAG for tools. Yep. I did build the Latent Space Researcher on agent.ai. Okay. Nice. Yeah, that seems like, you know, then there's going to be a Latent Space Scheduler. And then once I schedule a research, you know, and you build all of these things. By the way, my apologies for the user experience. You realize I'm an engineer. It's pretty good.
swyx [00:44:56]: I think it's a normie-friendly thing. Yeah. That's your magic. HubSpot does the same thing.
Alessio [00:45:01]: Yeah, just to like quickly run through it. You can basically create all these different steps. And these steps are like, you know, static versus like variable-driven things. How did you decide between this kind of like low-code-ish versus doing, you know, low-code with a code backend versus not exposing that at all? Any fun design decisions? Yeah. And this is, I think...
Dharmesh [00:45:22]: I think lots of people are likely sitting in exactly my position right now, choosing between deterministic and non-deterministic. Like if you're in a business or building, you know, some sort of agentic thing, do you decide to do a deterministic thing? Or do you go non-deterministic and just let the LLM handle it, right, with the reasoning models?
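The "RAG for tools" intermediate layer mentioned above — score each agent's description against the user's prompt and expose only the top-k matches to the LLM instead of all 1,000 — might look like this minimal sketch. Real systems would use vector embeddings; plain word overlap keeps the sketch dependency-free, and every tool name here is hypothetical:

```python
# Minimal "RAG for tools" sketch: rank tool descriptions by word overlap
# with the prompt, then hand only the best k to the LLM.

def top_k_tools(prompt: str, tools: dict[str, str], k: int = 3) -> list[str]:
    """Rank tools by word overlap between the prompt and each description."""
    prompt_words = set(prompt.lower().split())

    def overlap(name: str) -> int:
        return len(prompt_words & set(tools[name].lower().split()))

    # Python's sort is stable, so ties keep their original (dict) order.
    return sorted(tools, key=overlap, reverse=True)[:k]

TOOLS = {
    "domain_valuation": "estimate the value of a website domain name",
    "tweet_writer": "draft social media posts for twitter",
    "brand_namer": "suggest startup names and check domain availability",
}
```

Calling `top_k_tools("what is this domain worth", TOOLS, k=2)` shortlists the domain-related tools, so the LLM's tool list stays small regardless of how many agents exist on the network.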
The original idea, and the reason I took the low-code, stepwise, very deterministic approach: A, the reasoning models did not exist at that time. That's thing number one. Thing number two is, if you know in your head what the actual steps are to accomplish whatever goal, why would you leave that to chance? There's no upside. There's literally no upside. Just tell me, like, what steps do you need executed? So right now what I'm playing with... So one thing we haven't talked about yet, and people don't talk enough about, is UI and agents. Right now, the primary interaction model... I know some people have talked about it. But it's like, okay, so we're used to the chatbot back and forth. Fine. I get that. But I think we're going to move to a blend of... Some of those things are going to be synchronous as they are now. But some are going to be async. It's just going to put it in a queue, just like... And this goes back to my... Man, I talk fast. But I only have one other speed. It's even faster. So imagine if you're working... So back to my, oh, we're going to have these hybrid digital teams. Like, you would not go to a co-worker and say, I'm going to ask you to do this thing, and then sit there and wait for them to go do it. Like, that's not how the world works. So it's nice to be able to just, like, hand something off to someone. It's like, okay, well, maybe I expect a response in an hour or a day or something like that.
Dharmesh [00:46:52]: In terms of when things need to happen. So the UI around agents. So if you look at the output of agent.ai agents right now, they are the simplest possible manifestation of a UI, right? That says, oh, we have inputs of, like, four different types. Like, we've got a dropdown, we've got multi-select, all the things. It's like back in the original HTML 1.0 days, right?
Like, you're down to the smallest possible set of primitives for a UI. And it just says, okay, we need to collect some information from the user, and then we go do steps and do things, and generate some output; HTML or markdown are the two primary examples. So the thing I've been asking myself, if I keep going down that path... So people ask me, I get requests all the time. It's like, oh, can you make the UI sort of boring? I need to be able to do this, right? And if I keep pulling on that, it's like, okay, well, now I've built an entire UI builder thing. Where does this end? And so I think the right answer, and this is what I'm going to be coding once I get done here, is around injecting code generation, UI generation, into the agent.ai flow, right? As a builder, you're like, okay, I'm going to describe the thing that I want, much like you would do in a vibe coding world. But instead of generating the entire app, it's going to generate the UI that exists at some point in either that deterministic flow or something like that. It says, oh, here's the thing I'm trying to do. Go generate the UI for me. And I can go through some iterations. And what I think of it as, so it's like, I'm going to generate the code, tweak it, go through this kind of prompt style, like we do with vibe coding now. And at some point, I'm going to be happy with it. And I'm going to hit save. And that's going to become the action in that particular step. It's like a caching of the generated code so that I don't incur any inference-time costs. It's just the actual code at that point.
Alessio [00:48:29]: Yeah, I invested in a company called E2B, which does code sandboxes. And they powered the LM Arena web arena. So it's basically the, just like you do LLMs, like text to text, they do the same for like UI generation. So if you're asking a model, how do you do it? But yeah, I think that's kind of where.
Dharmesh [00:48:45]: That's the thing I'm really fascinated by.
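The generate-iterate-save pattern described here — generated UI code is frozen at save time so no inference cost is paid at run time — can be sketched like this. The generator is a stub standing in for an LLM call, and all names are illustrative, not agent.ai's actual API:

```python
# Sketch of generate-iterate-save: the builder iterates on generated UI
# code, and hitting save freezes that code against the step, so rendering
# at run time is a cache lookup rather than a model call.

ui_cache: dict[str, str] = {}

def generate_ui(step_id: str, description: str) -> str:
    """Stand-in for an LLM call that emits UI markup from a description."""
    return f"<form data-step='{step_id}'><!-- {description} --></form>"

def save_step(step_id: str, description: str) -> str:
    """The 'hit save' moment: freeze the generated code for this step."""
    ui_cache[step_id] = generate_ui(step_id, description)
    return ui_cache[step_id]

def render_step(step_id: str) -> str:
    """At run time, serve the cached code; no model call is made."""
    return ui_cache[step_id]
```

The design choice is the same one the transcript lands on: inference happens during authoring, and the saved artifact is plain code thereafter.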
So the early LLMs, you know, were understandably, but laughably, bad at simple arithmetic, right? That's the thing, like, my wife, normies, would ask us, like, you call this AI? My son would be like, it's just stupid. It can't even do, like, simple arithmetic. And then, like, we've discovered over time that, and there's a reason for this, right? It's like, there's, you know, the word language is in there for a reason in terms of what it's been trained on. It's not meant to do math, but now it's like, okay, well, the fact that it has access to a Python interpreter that I can actually call at runtime, that solves an entire body of problems that it wasn't trained to do. And it's basically a form of delegation. And so the thought that's kind of rattling around in my head is that that's great. It took the arithmetic problem first. Now, anything that's solvable through a relatively concrete Python program, it's able to do, a bunch of things that it couldn't do before. Can we get to the same place with UI? I don't know what the future of UI looks like in an agentic AI world, but maybe let the LLM handle it, but not in the classic sense. Maybe it generates it on the fly, or maybe we go through some iterations and hit cache or something like that. So it's a little bit more predictable. Uh, I don't know, but yeah.
Alessio [00:49:48]: And especially, when is the human supposed to intervene? So, especially if you're composing them, most of them should not have a UI because then they're just webhooking to somewhere else. I just want to touch back. I don't know if you have more comments on this.
swyx [00:50:01]: I was just going to ask when you, you said you got, you're going to go back to code. What

BlockHash: Exploring the Blockchain
Ep. 499 Charlie Hu | Making Bitcoin Verifiable with Bitlayer

BlockHash: Exploring the Blockchain

Play Episode Listen Later Mar 19, 2025 46:39


For episode 499, Co-founder Charlie Hu joins Brandon Zemp to dive into Bitlayer, the first Bitcoin Layer 2 solution built on the BitVM paradigm to scale Bitcoin without compromising its security. Bitlayer delivers Bitcoin-equivalent security, trust-minimized BTC bridging, EVM compatibility, and unlimited throughput. Innovations such as the Finality Stack, a dedicated verification layer for Bitcoin, and RtEVM, a high-performance transaction processing engine, empower secure, scalable, and efficient transaction execution, enabling advanced decentralized applications within the Bitcoin ecosystem.

BlockHash: Exploring the Blockchain
Ep. 497 Catherine Daly | Verifiable Data Warehouse with Space and Time

BlockHash: Exploring the Blockchain

Play Episode Listen Later Mar 17, 2025 34:01


For episode 497, Head of Marketing Catherine Daly joins Brandon Zemp to talk about Space and Time, an AI-driven Web3 data warehouse. SxT replaces blockchain indexing, databases, and API servers with a decentralized solution. Catherine is a senior marketing strategist with a passion for building community around emerging technology. Prior to Space and Time, Catherine managed full-funnel marketing for both startups and established global organizations in the semiconductor industry. She is accomplished in developing data-driven integrated communications strategies to accelerate growth for businesses across the Web3 technology ecosystem.

The Commercial Break
TCB is Verifiable!

The Commercial Break

Play Episode Listen Later Mar 13, 2025 76:38


Episode #711: Bryan & Krissy discuss getting verified on Insta. How does it feel seeing your friend's kids turn the same age you were when you met? Plus, the dentist office is turning into a sales showroom and the gang isn't about it. Then, Tool fans are suing Tool in The Sand and a fussy couple is suing an airline for putting a dead passenger in the seat next to them. TCB Bit: It's time for College Corner on WSHIT! Professor Hungebuckle gives her advice to Springer Breakers on how to have a good time and stay safe!
Watch episode #712 on Youtube
Text us or leave us a voicemail: +1 (212) 433-3TCB
FOLLOW US:
Instagram: @thecommercialbreak
Youtube: youtube.com/thecommercialbreak
TikTok: @tcbpodcast
Website: www.tcbpodcast.com
CREDITS:
Hosts: Bryan Green & Krissy Hoadley
Executive Producer: Bryan Green
Producer: Astrid B. Green
Voice Over: Rachel McGrath
"TCB Bits" are all written, performed and produced by Bryan Green
To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy
Learn more about your ad choices. Visit https://podcastchoices.com/adchoices

Noah's Window
Is Jesus' Resurrection A Verifiable Fact? | February 14, 2025

Noah's Window

Play Episode Listen Later Feb 14, 2025 15:58


Key Verse: John 20:29
We have eyewitness accounts from many people who saw the resurrected Jesus. Many even gave their lives because they would not recant what they knew to be true.

Becker’s Healthcare Podcast
Navigating Network Compliance in Healthcare: Trends, Challenges, and Digital Transformation Insights

Becker’s Healthcare Podcast

Play Episode Listen Later Feb 5, 2025 19:00


In this episode of the Becker's Healthcare Podcast, host Jakob Emerson is joined by Gianni Aiello, VP of Product at Verifiable, to explore the evolving landscape of network compliance in healthcare. They discuss the core components of successful compliance programs, barriers to effective implementation, and the role of digital transformation in addressing regulatory challenges. This episode is sponsored by Verifiable.

Enginears
Phylax is Building Verifiable, Embedded Hack Prevention | Enginears Podcast

Enginears

Play Episode Listen Later Jan 29, 2025 41:35


If you're keen to share your story, please reach out to us!
Guest: https://x.com/odysseas_eth/ | https://jobs.ashbyhq.com/phylax/
Coinbase 'Nomad Bridge' article: https://www.coinbase.com/en-gb/blog/nomad-bridge-incident-analysis/
Powered by Artifeks! https://www.linkedin.com/company/artifeksrecruitment | https://www.artifeks.co.uk | https://www.linkedin.com/in/agilerecruiter
LinkedIn: https://www.linkedin.com/company/enginearsio
Twitter: https://x.com/Enginearsio
All Podcast Platforms: https://smartlink.ausha.co/enginears
00:00 - Enginears Intro.
00:48 - Odysseas Intro.
04:57 - The Nomad Bridge incident; what was the cause?
05:52 - Phylax Intro.
10:03 - What is the current state of security in the crypto space?
13:01 - What are Odysseas' biggest challenges that he sees in protocol security?
16:27 - Who is auditing the protocol engineering and why does it cost so much?
22:15 - Hack prevention(s) and how they work?
33:50 - Social engineering hacking.
38:07 - Phylax's growth plans for the next 12 months.
39:51 - Odysseas & Phylax Outro.
40:54 - Enginears Outro.
Edited by: hunterdigital.co.uk
Hosted by Ausha. See ausha.co/privacy-policy for more information.

Fearless Practice
Mark Pioro: Ontario Psychotherapy Rules and Regulations | Ep 148

Fearless Practice

Play Episode Listen Later Jan 8, 2025 30:45


Are you a Registered Psychotherapist? Are you licensed through the College of Registered Psychotherapists of Ontario (CRPO)? Do you know what you're allowed and not allowed to do in your private practice? Today's guest is Mark, the Deputy Registrar and General Counsel at the College of Registered Psychotherapists of Ontario. We discuss rules and regulations that pertain to registered psychotherapists in private practice.
MEET MARK
Mark Pioro is the Deputy Registrar & General Counsel at the College of Registered Psychotherapists of Ontario (CRPO). CRPO is the regulator, established by the government, which sets the standards for RPs. CRPO acts in the interest of the public, striving to ensure the competent and ethical practice of the profession. Learn more about Mark and the CRPO on the CRPO website.
In this episode:
- What is the CRPO?
- Certification and residency
- The CRPO and running private practices
- Going from graduation into private practice
- Do's and don'ts in advertising
- Managing fees and rates ethically
- Handling complaints
What is the CRPO?
The College of Registered Psychotherapists of Ontario (CRPO) is one of the regulatory bodies for psychotherapy in Ontario. This means that only individuals registered with the CRPO are legally permitted to call themselves Registered Psychotherapists (RPs). Other mental health professionals may be regulated to provide psychotherapy services by different colleges. In 2007, the Ontario government decided to allow the following professionals to provide psychotherapy services:
- Nurses
- Occupational therapists
- Physicians
- Psychologists
- Social workers
- Registered Psychotherapists
Certification and residency
You can be a registered RP without having to be in Ontario. You can also easily become an RP if you are registered with another regulated province as a Registered Counselling Therapist.
The CRPO and running private practices
The College of Registered Psychotherapists of Ontario (CRPO) offers resources and information regarding regulations and laws related to the practice of psychotherapy. However, the CRPO does not provide training or guidance on the business aspects of running a private practice, including areas such as tax implications, insurance requirements, or other specific business-related advice.
Going from graduation into private practice
Essentially, it depends on the therapist. In Ontario, if the student has completed the coursework and feels confident enough to run a private practice, they could start a private practice while finishing their degree. However, the psychotherapist would still need to have supervision and oversight. Remember that every provincial college may have different regulations for when a therapist can start private practice!
Do's and don'ts in advertising
Make sure your advertising is:
- Truthful
- Accurate
- Verifiable
Some inappropriate advertising may include:
- Promising results
- Using comparisons or superlatives
- Concealing advertising
- Advertising in a regulated province where you are not licensed
Be clear and honest with your advertising. Be ethical, and don't try to sell your services as a fix-all to potential clients in order to get more business. Talk with a Canadian consultant to make sure that you remain ethical while still effective.
Managing fees and rates ethically
An RP cannot lure in a client with a low rate and then suddenly increase it once the therapeutic relationship has been established. However, you can raise your fees and rates ethically, since your expertise level may increase and inflation is a factor.
Handling complaints
- Check your insurance policies, since some insurance companies may offer you a lawyer for the process while others might not
- Have a network of support for this challenging time
- The most serious complaints may go public, but those are very rare and may need evidence and legal findings
- Most complaints and investigations are resolved without a disciplinary hearing, which has to go public
Connect with me: Instagram | Website
Resources mentioned and useful links:
- Ep 147: Encore episode
- Learn more about the tools and deals that I love and use for my Canadian private practice
- Sign up for my free e-course on How to Start an Online Canadian Private Practice
- Jane App (use code FEARLESS for one month free)
- Learn more about Mark and the CRPO on the CRPO website
Rate, review, and subscribe to this podcast on Apple Podcasts, Spotify, Amazon, and TuneIn

Becker’s Payer Issues Podcast
Navigating Network Compliance in Healthcare: Trends, Challenges, and Digital Transformation Insights

Becker’s Payer Issues Podcast

Play Episode Listen Later Dec 20, 2024 19:00


In this episode of the Becker's Healthcare Podcast, host Jakob Emerson is joined by Gianni Aiello, VP of Product at Verifiable, to explore the evolving landscape of network compliance in healthcare. They discuss the core components of successful compliance programs, barriers to effective implementation, and the role of digital transformation in addressing regulatory challenges. This episode is sponsored by Verifiable.

The SSI Orbit Podcast – Self-Sovereign Identity, Decentralization and Web3
#78 - The United Nations Transparency Protocol (UNTP) (with Steve Capell)

The SSI Orbit Podcast – Self-Sovereign Identity, Decentralization and Web3

Play Episode Listen Later Dec 6, 2024 68:55


Are you confident in the environmental and social claims about your products?
In this episode of The SSI Orbit Podcast, host Mathieu Glaude sits down with Steve Capell, Vice Chair of UN/CEFACT and Project Lead of the United Nations Transparency Protocol (UNTP), to explore how transparency and traceability are being revolutionized in global value chains. Together, they unpack the challenges of greenwashing, the urgency of compliance with new regulations, and the transformative potential of a global transparency protocol. Steve shares real-world examples, such as the impact of carbon border adjustments and digital product passports, highlighting how regulatory frameworks and technological innovation intersect. The conversation also addresses the role of decentralized identifiers and verifiable credentials in ensuring the integrity of sustainability claims.
Key Insights:
- Greenwashing is widespread, with over 50% of product claims being misleading or false.
- The UNTP offers a standards-based approach to ensure transparency and interoperability in value chains.
- Verifiable credentials are essential for decentralized trust, linking data to trusted sources while ensuring integrity.
- Regulations like carbon border adjustments and product passports are reshaping trade by enforcing sustainability disclosures.
- The UN's role as a neutral body provides a trusted space for creating global standards and recommendations.
Tune in to this episode to learn how the UNTP is driving a shift from marketing-led sustainability claims to evidence-backed transparency and why this transformation is critical for regulatory compliance and strategic business differentiation. Don't miss this deep dive into the future of transparent global trade!
Chapters:
00:00 - Why is the UN pursuing the development of a new protocol to help solve transparency in sustainability disclosures?
09:17 - How to ensure the integrity of claims being made is the de facto standard?
16:32 - How did the UNTP think through the proper technical and governance architecture to support all transparency use cases?
32:36 - What will become the catalyst for the mass uptake of the UNTP?
40:32 - What makes the UN a good home for the definition of a transparency protocol?
51:03 - Does all data that interacts with the UNTP need to be public?
58:40 - Is there an opportunity for registrars to create value using the UNTP?
Resources:
- UNTP Overview: A detailed introduction to the United Nations Transparency Protocol (UNTP).
- UN/CEFACT: Learn more about the United Nations Centre for Trade Facilitation and Electronic Business (UN/CEFACT).
- The digital product passport and its technical implementation.
- Green claims - European Commission - Environment: New criteria to stop companies from making misleading claims about the environmental merits of their products and services.
- Carbon Border Adjustment Mechanism (CBAM): EU regulation on carbon emissions and global trade.

InfoBlips
Why verifiable factual information matter

InfoBlips

Play Episode Listen Later Nov 20, 2024 2:07


Why verifiable factual information matter

Chain Reaction
Verifiable Inference: Don't Trust, Verify | Crypto x AI Event

Chain Reaction

Play Episode Listen Later Oct 15, 2024 66:54


In this engaging debate for Crypto x AI Month, join Luke Saunders as he moderates a conversation on Verifiable Inference—a critical technology that ensures trustless AI by verifying the correctness of outputs without revealing internal workings. He is joined by leading founders from the crypto AI space, including Colin Gagich of Inference Labs, Ryan McNutt of SphereOne, Jeremy from Aizel Network, and Travis Good of Ambient, to explore: ► What verifiable inference is and why it's essential in the age of AI ► How decentralized models can offer solutions to centralized AI control ► Real-world use cases of verifiable inference across blockchain and AI applications This panel digs into the technical approaches—from ZK proofs to trusted execution environments (TEEs)—and discusses how the future of AI and crypto requires trustless verification to ensure security, transparency, and privacy. Watch more sessions from Crypto x AI Month here: https://delphidigital.io/crypto-ai --- Crypto x AI Month is the largest virtual event dedicated to the intersection of crypto and AI, featuring 40+ top builders, investors, and practitioners. Over the course of three weeks, this event brings together panels, debates, and discussions with the brightest minds in the space, presented by Delphi Digital. 
Crypto x AI Month is free and open to everyone thanks to the support from our sponsors: https://olas.network/ https://venice.ai/ https://near.org/ https://mira.foundation/ https://www.theoriq.ai/
---
Follow the Speakers:
- Luke Saunders on Twitter/X ► https://x.com/lukedelphi
- Travis Good on Twitter/X ► https://x.com/IridiumEagle
- Jeremy on Twitter/X ► https://x.com/immorriv
- Colin Gagich on Twitter/X ► https://x.com/colingagich
- Ryan McNutt on Twitter/X ► https://x.com/ryanmcnutty33
---
Chapters
00:00 Introduction to Verifiable Inference
03:46 Defining Verifiable Inference
05:13 Use Cases for Verifiable Inference
10:01 Real-World Applications and Innovations
16:38 Different Approaches to Verification
24:18 Exploring Zero-Knowledge Proofs
28:08 Proof of Logics and Its Implications
34:02 Multi-Agent Systems and Transaction Verification
37:46 Challenges in Optimistic Approaches
41:36 Determinism and Model Reproducibility
45:30 The Balance of Open and Closed Source Models
50:25 The Future of Edge Computing and Inference
57:49 Decentralization and Government Control of AI
Disclaimer
All statements and/or opinions expressed in this interview are the personal opinions and responsibility of the respective guests, who may personally hold material positions in companies or assets mentioned or discussed. The content does not necessarily reflect the opinion of Delphi Citadel Partners, LLC or its affiliates (collectively, “Delphi Ventures”), which makes no representations or warranties of any kind in connection with the contained subject matter. Delphi Ventures may hold investments in assets or protocols mentioned or discussed in this interview. This content is provided for informational purposes only and should not be misconstrued for investment advice or as a recommendation to purchase or sell any token or to use any protocol.

Crypto Altruism Podcast
Episode 175 - Open Forest Protocol - Redefining the Value of Nature with Verifiable Onchain Environmental Assets

Crypto Altruism Podcast

Play Episode Listen Later Oct 8, 2024 36:37


In episode 175, we're excited to welcome Michael Kelly, Chief Product Officer at Open Forest Protocol, an organization on a mission to create the highest-integrity and most verifiable nature-based carbon credits, at scale. We discuss the Afforestation/Reforestation methodology behind OFP's carbon credits, the role of blockchain in addressing the critical challenges in environmental asset markets, and how OFP actively involves local and Indigenous communities in climate projects. We also take a glimpse into the century-long vision that drives Open Forest Protocol's work.
--Three Key Takeaways--
There are about 1.5 billion hectares of land around the world that would be eligible for some type of environmental asset methodology, such as afforestation, reforestation, or biodiversity conservation. This provides exciting opportunities for growth in the sector, and for local communities to focus on positive environmental actions instead of extraction.
OFP's approach focuses on enhanced data reporting which is stored and verified on-chain. This data collection is driven by local communities who collect conservation data at regular intervals. This ensures all credits issued through OFP's protocol can be verified, while also enabling local communities to start projects without having to go through extensive evaluation from an outside auditor. They simply begin by uploading data, getting their project verified, and turning their project into credits.
"The nature of value is changing the value of nature." Traditionally, extractive approaches have been much more profitable than conservation. This is due to incentive structures that place a greater value on extracting resources from land than on conserving it.
Giving communities the power to generate income through positive environmental actions can help shift the balance and make conservation as profitable as extraction.
--Full shownotes with links available at--
https://www.cryptoaltruism.org/blog/crypto-altruism-podcast-episode-175-open-forest-protocol-redefining-the-value-of-nature-with-verifiable-onchain-environmental-assets

Grace to You on Oneplace.com
The Bible Verifiable by Miracles

Grace to You on Oneplace.com

Play Episode Listen Later Sep 20, 2024 28:55


Here's a question for you: How would you explain to someone what it means that the Bible is the Word of God? And if a friend doubted the accuracy of Scripture . . . what would you tell him? To support this ministry financially, visit: https://www.oneplace.com/donate/85/29

Grace to You on Oneplace.com
The Bible Verifiable by Miracles

Grace to You on Oneplace.com

Play Episode Listen Later Sep 19, 2024 28:55


What's the big deal if you believe that only some parts of the Bible are useful today . . . but other parts are too dated to be helpful for modern problems? Seems logical that a two-thousand-year-old book would be hard to apply to life today . . . right? To support this ministry financially, visit: https://www.oneplace.com/donate/85/29

Proof of Coverage
Verifiable AI Agents with Axal

Proof of Coverage

Play Episode Listen Later Sep 16, 2024 48:20


In today's episode, we welcome back co-host John Wu and introduce Ash Ahmed, a recent Harvard College graduate and co-founder of Axal. Ash shares his fascinating journey from being a competitive debater to the world of startups and cryptocurrency. We discuss the unique bond between Ash and his twin brother, who is also making waves in the venture space with a non-invasive Neuralink startup. Ash discusses the entrepreneurial spirit fostered at Harvard, the challenges of building a company while still in college, and the importance of reflection in his decision-making process. Ash explains how Axal aims to create a network for verifiable autonomous agents, making tasks easier and more accessible for users. He highlights the intersection of AI and blockchain, the utility of Axal's token, and the vision for the future of crypto and task fulfillment.
00:00 - Introduction
01:06 - Introduction of Guests
01:57 - Sibling Rivalry and Competition: Ash's Twin Brother
04:19 - Different Paths: Ash and His Brother's Journeys
06:17 - Entrepreneurship in College
09:18 - Finding Funding and Building Axal
10:42 - Advice for Mentors in Startup Accelerators
12:26 - The Importance of Ambition and Hustle
16:46 - Reflections on Personal Growth and Maturity
17:12 - Introducing Axal: The Platform Overview
21:00 - Exploring Other Vertical Opportunities for Axal
23:23 - The Role of UI/UX in Product Development
24:34 - Comparing Computer Science Education Across Universities
26:10 - The Entrepreneurial Spirit at Harvard
27:25 - Token Utility and Ecosystem Dynamics
30:57 - Future Vision: Axal's Role in the Crypto Landscape
33:11 - Advice for Aspiring Entrepreneurs in College
36:06 - The Pressure of Early Career Decisions
37:32 - The Value of Banking Experience for Entrepreneurs
Disclaimer: The hosts and the firms they represent may hold stakes in the companies mentioned in this podcast. None of this is financial advice.

Nuclear Hotseat hosted by Libbe HaLevy
NH #689: Depleted Uranium Weapons in Russia, Ukraine… Israel? Jack Cohen-Joppa of the Nuclear Resister with Verifiable Info

Play Episode Listen Later Sep 4, 2024 60:01


This Week's Featured Interview:
Links from the Interview:
Nuclear Hotseat Hot Story with Linda Pentz Gunter: Russia's war in Ukraine has put another nuclear power plant in danger: their own.
The ICAN Update with Alistair Burnett: Monthly update on issues and actions regarding the United Nations Treaty on the Prohibition of Nuclear Weapons from ICAN –...

Citadel Dispatch
NOSTRIGA: VERIFIABLE REPUTATION - WEBS OF TRUST

Play Episode Listen Later Sep 3, 2024 42:27 Transcription Available


Discussion on the power of key reputation systems and webs of trust using nostr in front of a live audience at Nostriga in Riga.
Video: https://www.youtube.com/watch?v=LE731vXoUOU
ODELL on Nostr: https://primal.net/odell
Pablo on Nostr: https://primal.net/pablof7z
Stuart Bowman on Nostr: https://primal.net/p/npub1lunaq893u4hmtpvqxpk8hfmtkqmm7ggutdtnc4hyuux2skr4ttcqr827lj
hzrd on Nostr: https://primal.net/p/npub1ye5ptcxfyyxl5vjvdjar2ua3f0hynkjzpx552mu5snj3qmx5pzjscpknpr
pip on Nostr: https://primal.net/p/npub176p7sup477k5738qhxx0hk2n0cty2k5je5uvalzvkvwmw4tltmeqw7vgup
website: https://citadeldispatch.com
nostr live chat: https://citadeldispatch.com/stream
nostr account: https://primal.net/odell
youtube: https://www.youtube.com/@citadeldispatch
stream sats to the show: https://www.fountain.fm/
(00:00) Introduction and Setting the Stage
(00:41) Webs of Trust: Concept and Importance
(04:02) Social Graph vs. Web of Trust
(10:12) Challenges and Practical Applications
(18:03) Building Trust in Nostr Clients
(24:02) Bootstrapping Trust for New Users
(29:08) Blossom and Media Authenticity
(37:01) Final Thoughts and Future Outlook

Unbelievable?
Are all religions HISTORICALLY verifiable?

Play Episode Listen Later Aug 9, 2024 45:42


Welcome to today's Unbelievable Debate, recorded live at St Michael's Aylesbury, where two distinguished scholars, Robert Scott and Muhammad Yasir Al-Hanafi, engage in a thought-provoking debate on the historical verifiability, truth, and societal contributions of their respective faiths: Christianity and Islam. Join us as we delve into these complex topics with respect and curiosity, discuss the different truth claims of each religion, and ask whether we can collaboratively embrace our differences toward the common social and human good. Scott and Yasir explore deep and challenging questions, such as: Are the core beliefs of Christianity and Islam historically verifiable? What contributions do these religions make to society? Is belief in God necessary for the world to function? Today's debate also features a lively Q&A session, where students and attendees pose difficult and often misunderstood questions, including: "Someone told me that in Islam, if you kill an unbeliever (an 'infidel'), then 72 virgins are waiting for you in heaven. Is that true?" "How does the chain of transmission work in Islam?" "If everything in the world has a cause and God is the cause of everything, who caused God?" ***We have another Christian vs Islam dialogue coming up soon, so if you still have questions -- we want to hear from you -- please do drop us your questions in the comments or email us at unbelievable@premier.org.uk • Subscribe to the Unbelievable? podcast: https://pod.link/267142101 • More shows, free eBook & newsletter: https://premierunbelievable.com • For live events: http://www.unbelievable.live • For online learning: https://www.premierunbelievable.com/training • Support us in the USA: http://www.premierinsight.org/unbelievableshow • Support us in the rest of the world: https://www.premierunbelievable.com/donate

Zero Knowledge
Episode 333: Verifiable SQL, Reckle Trees and ZK Coprocessing with Lagrange Labs

Play Episode Listen Later Jul 24, 2024 64:18


Summary
In this week's episode Anna (https://x.com/AnnaRRose) chats with Ismael Hishon-Rezaizadeh (https://x.com/ismael_h_r), Founder and CEO at Lagrange Labs (https://www.lagrange.dev/) and Charalampos (Babis) Papamanthou (https://x.com/chbpap), Head of Research at Lagrange and Co-Director of the Applied Cryptography Lab at Yale University. They revisit the concepts of zk-powered coprocessors and dive into the work that Charalampos did before joining Lagrange on Verifiable SQL. They then explore how this is incorporated into the Lagrange coprocessor system, the work they are doing on Reckle Trees, future work and what all this enables for dApp developers. They discuss their new prover marketplace, the general state of infrastructure and how they are keen to bring more concepts from general computing into decentralized blockchain systems.
Here are some additional links for this episode:
13:07 Protocols for Public Key Cryptosystems by Ralph C. Merkle (https://www.ralphmerkle.com/papers/Protocols.pdf)
14:08 Episode 57: Merklize this! Merkle Trees & Patricia Tries (https://zeroknowledge.fm/57-2/)
26:32 Episode 327: Proof Aggregation with Shumo and Yi from NEBRA (https://zeroknowledge.fm/327-2/)
36:57 Reckle Trees: Updatable Merkle Batch Proofs with Applications by Papamanthou, Srinivasan, Gailly, Hishon-Rezaizadeh, Salumets and Golemac (https://eprint.iacr.org/2024/493.pdf)
36:57 Lagrange Labs GitHub on Reckle Trees (https://github.com/Lagrange-Labs/reckle-trees)
The Web3 Summit is back! The next edition will be happening in Berlin from Aug 19-21; you can head over to web3summit.com (http://web3summit.com/) to apply, learn more and grab your tickets today.
Episode Sponsors
Launching soon, Namada (https://namada.net/) is a proof-of-stake L1 blockchain focused on multichain, asset-agnostic privacy, via a unified shielded set. Namada is natively interoperable with fast-finality chains via IBC, and with Ethereum using a trust-minimized bridge.
Follow Namada on Twitter @namada (https://twitter.com/namada) for more information and join the community on Discord (http://discord.gg/namada).
Aleo (http://aleo.org/) is a new Layer-1 blockchain that achieves the programmability of Ethereum, the privacy of Zcash, and the scalability of a rollup. As Aleo is gearing up for their mainnet launch in Q1, this is an invitation to be part of a transformational ZK journey. Dive deeper and discover more about Aleo at http://aleo.org/.
If you like what we do:
* Find all our links here! @ZeroKnowledge | Linktree (https://linktr.ee/zeroknowledge)
* Subscribe to our podcast newsletter (https://zeroknowledge.substack.com)
* Follow us on Twitter @zeroknowledgefm (https://twitter.com/zeroknowledgefm)
* Join us on Telegram (https://zeroknowledge.fm/telegram)
* Catch us on YouTube (https://www.youtube.com/channel/UCYWsYz5cKw4wZ9Mpe4kuM_g)
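The Merkle-tree lineage this episode traces (from Merkle's 1979 paper through Reckle Trees) rests on one primitive: proving that a single item belongs to a committed dataset without shipping the whole dataset. Below is a minimal, purely illustrative sketch of that primitive; it is not Lagrange's Reckle-tree code, and every name in it is invented for the example.

```python
import hashlib

# Toy Merkle tree: a verifier holding only the root can check one leaf's
# membership from a logarithmic-size proof instead of re-reading all leaves.

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves: list[bytes]) -> list[list[bytes]]:
    # levels[0] = hashed leaves, levels[-1] = [root]
    levels = [[h(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        level = levels[-1]
        if len(level) % 2:                  # duplicate last node on odd levels
            level = level + [level[-1]]
        levels.append([h(level[i] + level[i + 1]) for i in range(0, len(level), 2)])
    return levels

def prove(levels, index: int) -> list[tuple[bytes, bool]]:
    # Collect sibling hashes bottom-up; the bool means "sibling is on the right".
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index ^ 1
        proof.append((level[sibling], sibling > index))
        index //= 2
    return proof

def verify(root: bytes, leaf: bytes, proof) -> bool:
    node = h(leaf)
    for sibling, sibling_is_right in proof:
        node = h(node + sibling) if sibling_is_right else h(sibling + node)
    return node == root

leaves = [b"row-%d" % i for i in range(5)]
levels = build_tree(leaves)
root = levels[-1][0]
assert verify(root, b"row-3", prove(levels, 3))
assert not verify(root, b"row-999", prove(levels, 3))
```

Reckle Trees, as discussed in the episode, extend this idea toward efficiently updatable batch proofs; the toy above covers only single-leaf membership.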

Harvest Church
A Verifiable Faith (Acts 13:13-43)

Play Episode Listen Later Jul 8, 2024 40:51


A Verifiable Faith (Acts 13:13-43) by Harvest Church

Papers Read on AI
Autonomous LLM-driven research from data to human-verifiable research papers

Play Episode Listen Later May 13, 2024 31:11


As AI promises to accelerate scientific discovery, it remains unclear whether fully AI-driven research is possible and whether it can adhere to key scientific values, such as transparency, traceability and verifiability. Mimicking human scientific practices, we built data-to-paper, an automation platform that guides interacting LLM agents through a complete stepwise research process, while programmatically back-tracing information flow and allowing human oversight and interactions. In autopilot mode, provided with annotated data alone, data-to-paper raised hypotheses, designed research plans, wrote and debugged analysis codes, generated and interpreted results, and created complete and information-traceable research papers. Even though research novelty was relatively limited, the process demonstrated autonomous generation of de novo quantitative insights from data. For simple research goals, a fully autonomous cycle can create manuscripts which recapitulate peer-reviewed publications without major errors in about 80-90% of cases, yet as goal complexity increases, human co-piloting becomes critical for assuring accuracy. Beyond the process itself, the created manuscripts too are inherently verifiable, as information-tracing makes it possible to programmatically chain results, methods and data. Our work thereby demonstrates a potential for AI-driven acceleration of scientific discovery while enhancing, rather than jeopardizing, traceability, transparency and verifiability. 2024: Tal Ifargan, Lukas Hafner, Maor Kern, Ori Alcalay, Roy Kishony https://arxiv.org/pdf/2404.17605
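The information-tracing idea in the abstract above, chaining each result back to the methods and data that produced it, can be sketched with plain content hashes. This is an illustrative toy, not the authors' data-to-paper platform; every artifact name and value below is invented for the example.

```python
import hashlib
import json

# Each artifact records the hashes of the artifacts it was derived from,
# so a result can be chained back to methods and data programmatically.

def fingerprint(payload: dict) -> str:
    # Deterministic content hash of an artifact.
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_artifact(kind: str, content: str, parents: list[dict]) -> dict:
    return {
        "kind": kind,
        "content": content,
        "parents": [fingerprint(p) for p in parents],
    }

def verify_chain(artifact: dict, claimed_parents: list[dict]) -> bool:
    # An artifact is traceable iff its recorded parent hashes match
    # the artifacts it claims to be derived from.
    return artifact["parents"] == [fingerprint(p) for p in claimed_parents]

data = make_artifact("data", "annotated_measurements.csv", [])
code = make_artifact("analysis_code", "fit_model()", [data])
result = make_artifact("result", "reported effect size", [data, code])

assert verify_chain(result, [data, code])
tampered = dict(data, content="edited_measurements.csv")
assert not verify_chain(result, [tampered, code])
```

The point of the sketch: once any upstream artifact changes, every downstream fingerprint check fails, which is what makes a manuscript's claims mechanically traceable to its methods and data.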

Into the Bytecode
#31 – Sreeram Kannan: building the verifiable cloud

Play Episode Listen Later May 8, 2024 71:00


This is my conversation with Sreeram Kannan, founder at EigenLayer.
Timestamps:
- 00:00:00 intro
- 00:01:21 sponsor: Optimism
- 00:02:42 the AVS economy
- 00:05:24 blockchains separate trust and innovation
- 00:16:53 sponsor: Optimism
- 00:18:02 specialized services and SaaS on EigenLayer
- 00:24:50 rollups are open verifiable web servers
- 00:41:35 rollup economics and business models
- 00:55:14 the transition from academic to builder/operator
- 01:06:38 impact per unit action
- 01:10:26 outro
Links:
Sreeram Kannan: https://twitter.com/sreeramkannan
EigenLayer: https://twitter.com/eigenlayer
Thank you to our sponsors for making this podcast possible:
Optimism - https://optimism.io
Privy - https://privy.io
Into the Bytecode:
Twitter - https://twitter.com/sinahab
Farcaster - https://warpcast.com/sinahab
Other episodes - https://intothebytecode.com
Disclaimer: this podcast is for informational purposes only. It is not financial advice or a recommendation to buy or sell securities. The host and guests may hold positions in the projects discussed.

The Fintech Blueprint
Building Artificial Intelligence we can trust using ZK Proofs, with EZKL CEO Jason Morton

Play Episode Listen Later May 7, 2024 41:01


Lex chats with Jason Morton, CEO of EZKL. EZKL is a technology that helps people build around artificial intelligence using zero-knowledge proofs (ZK proofs). Jason explains that ZK proofs are a way to prove that a computation has been executed correctly without revealing any sensitive information. The proofs can be used to verify the execution of AI models, statistical models, and other computations on blockchain networks. EZKL provides a command line tool, a backend proving service, and a Python library for developers to use. The company's economic model involves licensing the components of the system and running backend services. In the future, EZKL aims to provide a more complete solution for interacting with the technology. The demand for verifiable AI and ZK proofs is expected to come from both Web2 and Web3 companies, as well as enterprise clients.
MENTIONED IN THE CONVERSATION
EZKL's Website: https://bit.ly/4dx1wUb
Jason's Twitter: https://bit.ly/4boqumY
Topics: fintech, machine learning, ai, artificial intelligence, llm, zk-proofs, zero-knowledge proofs, blockchain, web3
Companies: EZKL, Ethereum, Bitcoin, DARPA
ABOUT THE FINTECH BLUEPRINT

WAGMI Ventures Podcast
Building the Verifiable Compute Layer for AI x Crypto, with Scott Dykstra (Space and Time)

Play Episode Listen Later May 6, 2024 27:45


Scott Dykstra is the Co-Founder & CTO @ Space and Time (https://www.spaceandtime.io). Backed by M12, Framework, & more, Space and Time is the first decentralized data warehouse that delivers sub-second ZK proofs against onchain and offchain data to power the future of AI x blockchain. In this episode we talk about why trust in analytics through cryptographic verification matters, possible surprises or future trajectories for the intersection of AI x crypto, insights from his founder journey that may be useful for new founders in the space, & much more. Recorded Wednesday April 24, 2024.

Tokyo Fresh
Verifiable Clout

Play Episode Listen Later Apr 23, 2024 95:11


This week Jordan and David discuss lying about your job in a group of people who can easily verify what your job is.
Discord invite
Contact Us: Email
Twitter: @tokyofreshpod
Instagram: @tokyofreshpodcast @afroinjapan @zyrell
MERCH: JPN | USA/EU/WORLD
--- Send in a voice message: https://podcasters.spotify.com/pod/show/tokyofresh/message

Founders of Web 3
Establishing Trust in a Digital Age through Verifiable AI, with Fraser Edwards of Cheqd

Play Episode Listen Later Apr 8, 2024 33:44


In this episode of the Metaverse Podcast, our host Jamie Burke welcomes Fraser Edwards, Co-Founder and CEO at Cheqd. Cheqd is building the payment infrastructure and the trust layer that enables the creation of marketplaces for Trusted Data.
> Can digital identities be truly self-sovereign?
If you're interested in any of these topics, tune in!
- Fraser's background
- Cheqd's mission statement
- Self-sovereign identity
- Deep fakes and trust
- Biometrics databases dangers
- Diffuse social signals
- AI training data bias
- Artisanal AI content value
- Decentralized identity management
- Reputation systems importance
- Agent-based systems promise
- Hyper-personalization potential
- Commercial participation drivers
- Regulation shaping conditions
- Digital identity wallets
- EU ID initiative impact
- Cheqd's roadmap
- Partnering for solutions
- Proof of personhood integration
#AI #data #identity
------
Whether you're a founder, investor, developer, or just have an interest in the future of the Open Metaverse, we invite you to hear from the people supporting its growth. Outlier Ventures is the Open Metaverse accelerator, helping over 100 Web3 startups a year. You can apply for startup funding here - https://ov.click/pddsbcq122
Questions? Join our community:
Twitter - https://ov.click/pddssotwq122
LinkedIn - https://ov.click/pddssoliq122
For further Open Metaverse content:
Listen to The Metaverse Podcast - https://ov.click/pddsmcq122
Check out our portfolio - https://ov.click/pddspfq122
Thanks for listening!

Becker’s Payer Issues Podcast
Unlocking Growth: How Insourcing Credentialing Drove Surprising Benefits for Humana

Play Episode Listen Later Apr 4, 2024 16:00


Tune in to this episode featuring Rehan Mirza, Chief Growth Officer at Verifiable, and Dara McDaniel, Associate Director of Credentialing at Humana. Dara and Rehan share insights on credentialing beyond patient safety and compliance, new approaches to leveraging technology, and much more. This episode is sponsored by Verifiable.

JeffMara Paranormal Podcast
VERIFIABLE Near Death Experience! | Woman Visits Living Sister While Dead!

Play Episode Listen Later Mar 28, 2024 26:55


Today's near-death experience guest is Dianna Herold, who was embraced by angels during her near-death experience. --- Send in a voice message: https://podcasters.spotify.com/pod/show/jeffrey-s-reynolds/message Support this podcast: https://podcasters.spotify.com/pod/show/jeffrey-s-reynolds/support

The Crypto Conversation
Randcast - On-Chain Verifiable Randomness

Play Episode Listen Later Mar 15, 2024 42:55


Felix Xu is the Co-Founder of ARPA Network, a decentralized computation network based on threshold cryptography. He joins to discuss the network and its use cases.
Why you should listen
Randcast is ARPA's first product, an on-chain verifiable random number generation service with an easy-to-use Smart Contract SDK that can be directly integrated into DApps to provide out-of-the-box functionalities like rolling dice, shuffling an array, generating in-game item attributes based on probability, generating random in-game maps or dungeons, and deciding the probability outcomes of a lottery.
Using a random number in a smart contract is a common requirement. For example, a game may need to generate a random number to determine the lottery winner. However, the blockchain is deterministic, and the result of a smart contract is determined by its input. That is to say, a blockchain is a decentralized and trustless environment where random number generation can easily be manipulated by any party in the network. For example, if we use a block hash or timestamp as the source of randomness, miners can manipulate it to their advantage by either withholding the block or manipulating the timestamp. The solution is to use an external source of randomness.
Randcast is a service that generates random numbers through a decentralized network and provides them to smart contracts. The process of generating randomness is both transparent and verifiable. It is facilitated by a group of nodes that utilize the threshold BLS signature scheme. Before the requested random number is returned to the user's smart contract, the randomness is verified on-chain by the Randcast Adapter smart contract.
Randcast and the ARPA Network also solve another major issue with random number generation: randomness generated by a single off-chain entity could potentially be tampered with or manipulated by that entity. Randcast solves this by using multiple nodes in the ARPA Network to generate randomness through BLS threshold signature tasks, which means that no single node has the ability to manipulate the final randomness result.
Supporting links
Bitget
Bitget VIP Link with BONUS 1000 USDT
Bitget Academy
Bitget Research
Bitget Wallet
ARPA Network
Dear Game
Andy on Twitter
Brave New Coin on Twitter
Brave New Coin
If you enjoyed the show please subscribe to the Crypto Conversation and give us a 5-star rating and a positive review in whatever podcast app you are using.
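The failure mode and fix described in this episode summary can be sketched in a few lines. This is a simplified illustration, not ARPA's implementation: a real deployment verifies a BLS threshold signature against a group public key on-chain, whereas this toy substitutes an HMAC (so the verifier here shares a secret, which an actual public-key scheme avoids). All names and values are hypothetical.

```python
import hashlib
import hmac

# Stand-in for a BLS group public key; purely illustrative.
GROUP_KEY = b"demo-group-key"

def naive_randomness(block_hash: str) -> int:
    # Manipulable pattern: whoever produces the block controls this value,
    # e.g. by withholding blocks until a favorable hash appears.
    return int(hashlib.sha256(block_hash.encode()).hexdigest(), 16) % 6 + 1

def sign_request(request_id: bytes) -> bytes:
    # Done off-chain by the node group; deterministic per request, so the
    # requester cannot grind for a favorable outcome either.
    return hmac.new(GROUP_KEY, request_id, hashlib.sha256).digest()

def verify_and_derive(request_id: bytes, signature: bytes) -> int:
    # The "adapter" step: check the proof before using the randomness.
    expected = hmac.new(GROUP_KEY, request_id, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        raise ValueError("invalid randomness proof")
    return int.from_bytes(signature, "big") % 6 + 1  # e.g. a dice roll

sig = sign_request(b"lottery-42")
roll = verify_and_derive(b"lottery-42", sig)
assert 1 <= roll <= 6
```

The design point the episode makes survives the simplification: the consumer of the randomness never trusts a bare number, only a number whose accompanying proof verifies.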

The Last American Vagabond
The Entirely Preventable Impending Rafah Massacre, War/Border Bill & Blood Libel Or Verifiable Fact?

Play Episode Listen Later Feb 15, 2024 199:41


Welcome to The Daily Wrap Up, a concise show dedicated to bringing you the most relevant independent news, as we see it, from the last 24 hours (2/15/24). As always, take the information discussed in the video below and research it for yourself, and come to your own conclusions. Anyone telling you what the truth is, or claiming they have the answer, is likely leading you astray, for one reason or another. Stay Vigilant.
Video Source Links (In Chronological Order):
Week 2 of the #FluorideLawsuit: EPA Rests Their Case, Admits Harm Related to Fluoride Exposure
Derrick Broze on X: "Look who's speaking at the World Government Summit 2024 https://t.co/DVL3C21khA" / X
Taylor Hudak on X: "Twitter has locked Syrian Girl @Partisangirl out of her account. Twitter continues to market itself as a free speech platform, but locks its users out of their accounts! https://t.co/9VQx499PE0" / X
Mohamad Safa on X: "Elon Musk accused us that our followers are bots, which is why I constantly lose tens of thousands of followers and get censored here. If you're not a bot, reply with

Circular Economy Podcast
123 Topolytics: making waste visible, verifiable and valuable

Play Episode Listen Later Feb 11, 2024 53:18


We explore why it's important for business to map, and understand, their waste flows: what it is, specifically; where it comes from and goes to; how much there is, and why; and to understand the opportunities for wasting less and circulating more value. Topolytics is a data analytics business that is making the world's waste visible, verifiable and valuable. Michael Groves and Fleur Ruckley explain how data analytics, mapping and machine learning can make waste and resource management more transparent, efficient and effective, both commercially and environmentally. Founder and CEO Michael Groves is a geographer with a PhD in aerial and satellite earth observation. Michael has over 20 years' experience in environmental management and sustainability reporting. Fleur Ruckley is Topolytics' Head of Implementation, using Topolytics' WasteMap® platform to generate actionable waste and resources analytics for clients and their supply chains. Fleur has a degree in Natural Sciences and a Masters in Environmental Management, and has worked in the charity, public and private sector supporting organisations, communities and schools to develop and implement sustainable and circular policies and practices. Fleur is a Chartered Waste Manager and is a member of the Circular Economy Steering Group for the Institute for Environmental Management & Assessment. Leveraging Topolytics' waste map means companies can identify areas for improvement, such as preventing or reducing the waste or re-designing processes and products, to support reuse and to achieve more efficient and sustainable outcomes. Mike explains how those sectors with significant waste generation are showing increasing interest in this. Businesses that understand what materials they produce and consume can then make better decisions about recovery, reuse and recycling, and geospatial analysis can help reduce waste by identifying material flow and leakage.
Fleur tells us how companies are starting to see the benefits of using data and modeling to reduce waste in their supply chains, with improvements in ESG reporting, supplier management, and overall performance. Mike also highlights the potential for industrial symbiosis, using unwanted materials to create resources for another organisation – in other words, new by-products and value opportunities!

The Other Side NDE (Near Death Experiences)
Man Dies; Meets His Grandmother And Brother He Never Knew In Heaven (Verifiable Moment) (NDE)

Play Episode Listen Later Feb 10, 2024 29:23


⭐ Today's NDE afterlife experience is from Ray Feurstein. Ray was told to convey a message to a family member about things he couldn't possibly know, and that family member validated everything he said. It was all quite matter-of-fact, yet the magic of the universe never fails to amaze. After two near-death experiences, Ray lives a life liberated from the fear of death and hopes to encourage others that there is a beautiful life after life.
▶ SUBMIT AN NDE: https://othersidende.com/submit-your-story/
▶ Listen to Skyline on Spotify

The Refuge United Pentecostal Church
Evangelist Ethan Hagan | Revival Day 1 | February 2nd, 2024

The Cold-Case Christianity Podcast
Is the Christian Faith Forensically Verifiable?

Play Episode Listen Later Dec 27, 2023 50:39


Can the Christian worldview answer objections? Is it evidentially verifiable? Can we make a case? In this episode of the podcast, J. Warner talks with Bill Arnold about the strength of our position as Christians.

People in Transition
95. Thea Kelley -Job Search & Interview Coach

Play Episode Listen Later Dec 24, 2023 32:08


Thea Kelley has over 15 years of experience as a job search and interview coach, catering to job seekers across the nation. Her book, "Get That Job! The Quick and Complete Guide to a Winning Interview," achieved best-seller status on Amazon and received glowing praise as "Excellent" from Forbes.com. With a focus on mock interviews, personalized feedback, and expert tips, Thea equips individuals to distinguish themselves in the job market and secure their desired positions.
Her coaching aims to achieve the following for aspiring candidates:
- Foster confidence: Transform the interview process from nerve-wracking to empowering.
- Articulate unique value: Clearly communicate what sets you apart.
- Strategically handle tough questions with authentic responses.
- Develop compelling resumes, social media profiles, and career documents that compel employers to take notice.
For valuable job search insights and a complimentary gift, subscribing to Thea's blog at https://jobsearchandinterviewcoach.com is a smart move. From there, access one-on-one interview coaching, assistance with resumes and social media profiles, and tailored job search strategies: proven methods to expedite your journey to landing a fantastic job opportunity.
In this episode, we discussed several crucial job search topics:
- Understanding your REV points (Relevant, Exceptional, and Verifiable) and leveraging them to stand out during interviews.
- Emphasizing the significance of leaving a lasting impression on hiring managers through compelling stories that reinforce your suitability for the job.
- Demonstrating, as a candidate, your ability to contribute solutions to the company's challenges and pain points.
- Understanding the importance of nonverbal cues, particularly on platforms like Zoom, such as maintaining eye contact, smiling, and maintaining good posture.
- Exploring and debunking the five detrimental myths associated with references, while learning how to turn them into an advantage.
- Thea's final advice for job seekers: preparation, authenticity, and specificity are key elements.
Thea presented many insightful job search strategies and recommendations. Her clients consistently commend her for her intelligence, thoughtfulness, and genuine commitment to helping them find their next great job opportunities.
Don't miss out: subscribe to this podcast, leave a rating, and share it with your friends to spread the wealth of valuable advice.

Notes To My (Legal) Self
Season 6, Episode 7: Generative AI That's Instantly Verifiable and Court-ready (with Jacqueline Schafer)

Play Episode Listen Later Dec 19, 2023 40:32


Jacqueline Schafer is a career appellate litigator and the founder and CEO of Clearbrief, winner of Litigation Technology Product of the Year at Legalweek 2023. Schafer was named to the American Bar Association's "2022 Women of Legal Tech" list, the 2022 Fastcase 50, honoring 'Innovators, Techies, Visionaries and Leaders' in Law, and also received the 2021 Washington State Bar APEX Award for Legal Innovation for founding Clearbrief as well as for her 2020 law review article ("Harnessing AI for Struggling Families"). In this episode, Jacqueline Schafer will share why Clearbrief's AI platform in Word is used by hundreds of firms, including Biglaw, as well as courts and government agencies. She'll share the design thinking behind Clearbrief's generative AI features that produce instant hyperlinked timelines from discovery docs, score your opponent's accuracy, and more.

Near Death Experience
Man Dies Of Burst Appendix; Meets Ancestors And Has Amazing Verifiable Moment!

Play Episode Listen Later Dec 17, 2023 10:19


Man Dies Of Burst Appendix; Meets Ancestors And Has Amazing Verifiable Moment! #NearDeathExperience #SurvivalStory #CloseCall #BrushWithDeath #LifeAfterDeath #MiracleMoments #BeyondTheBrink #SecondChance #NDEjourney #CheatedDeath #GuardianAngels #NearMissChronicles #UnbelievableEscape #HeartStoppingMoments #NearDeathEncounter #FateIntervention #LuckyToBeAlive #OutoftheAbyss #NearFatalExperience #ResilienceStories --- Support this podcast: https://podcasters.spotify.com/pod/show/ndeworld/support

The Future of Identity
Harrison Tang: Spokeo's Vision of a “People Search Engine” Powered by Verifiable Credentials

Play Episode Listen Later Nov 15, 2023 34:10


In this episode, we sit down with Harrison Tang, Co-founder & CEO of Spokeo, which is a "people search engine". Spokeo aggregates many sources of data about people and sells that data to verifiers. So at first blush, Harrison is an unlikely person to be a massive advocate for SSI and co-chair of the W3C Credentials Community Group. We dig into why Spokeo cares about verifiable credentials, how verifiable credentials represent the opportunity to deliver more trust to Spokeo's customers, and how they allow Spokeo to participate in the next wave of digital identity innovation. We also explore complex concepts like negative reputation and how AI will impact the personal data landscape. We finish out by talking about what current IDtech companies can learn from Spokeo's early go-to-market and what it will take to get verifiable credential adoption. To learn more about Spokeo visit https://www.spokeo.com/ or follow Spokeo on social media (@Spokeo). You can find Harrison on social media as well (@tang_talks) and keep up with his latest thinking through his podcast 'Tang Talks'. Reach out to Riley (@rileyphughes) and Trinsic (@trinsic_id) on Twitter. We'd love to hear from you.

Untold Stories
Unleashed: Bitcoin, Yokai & Collab.land

Play Episode Listen Later Sep 12, 2023 61:27


Enjoy this episode! I loved recording it. Please leave reviews and subscribe! Anjali Young is the Co-Founder and CCO of Abridged Inc, and a DAO Officer for Collab.Land. Ross Plummer is the Chief Executive Officer of Yōkai. [00:01:03] Bitcoin Upgrade in Soft Fork. [00:03:18] Yokai and Collab.Land [00:07:23] Scaling cybersecurity teams. [00:11:05] DAO fixing inefficiencies and insecurity. [00:14:52] Why humans are needed. [00:17:06] Harnessing distributed and federation technologies. [00:20:45] Single dashboard for security personnel. [00:24:40] Roles in cybersecurity industry. [00:27:09] Chronometry and timestamping. [00:31:48] Triage in film production. [00:35:59] The best thing to do is just do nothing at all. [00:37:17] You have to be a yokai in this world. [00:41:20] Online and offline communities. [00:43:27] Moderating online communities. [00:48:01] Building community culture without token gating. [00:51:33] Blending play and work together. [00:54:33] The future of online communities. [00:58:07] Verifiable credentials and POAPs. [01:01:16] Find that fucking edge.

Paul's Security Weekly
Identity and Verifiable Credentials in Cars - Eve Maler - ASW #249

Play Episode Listen Later Aug 1, 2023 73:46


Identity isn't new, but we do have new ways of presenting and protecting identity with things like payment wallets and verifiable credentials. But we also have identity in surprising places -- like cars. We'll answer some questions like:
- Why do we even have identities in cars?
- What else is your car connected to?
- How should devs be thinking about security in this space?
In the news segment: Zenbleed in AMD, Google's TAG sees a drop in zero-days, a new security testing handbook from Trail of Bits, Phil Venables' advice on public speaking, a car battery monitor that monitors location (!?), and more news on TETRA.
Visit https://securityweekly.com/asw for all the latest episodes!
Follow us on Mastodon: https://infosec.exchange/@AppSecWeekly
Follow us on Twitter: https://www.twitter.com/secweekly
Like us on Facebook: https://www.facebook.com/secweekly
Show Notes: https://securityweekly.com/asw-249