Podcasts about LSP

  • 264 PODCASTS
  • 669 EPISODES
  • 56m AVG DURATION
  • 1 EPISODE EVERY OTHER WEEK
  • Jan 29, 2026 LATEST

POPULARITY

[Popularity chart, 2019–2026]


Best podcasts about LSP


Latest podcast episodes about LSP

Louisiana Unfiltered
Guard Turned Predator: The Tyler Holliday Scandal at Louisiana State Penitentiary

Louisiana Unfiltered

Jan 29, 2026 · 45:29 · Transcription available


In this episode, Attorney Joe Long joins Kiran to explore a past case at Louisiana State Penitentiary (Angola) involving Tony Johnson, an inmate there, who alleges he was sexually assaulted by LSP lieutenant Tyler Holliday.

Timestamps:
08:23 Understanding Countermeasures
14:51 The Assault Details
17:06 Reporting the Abuse
25:13 The Civil Case Outcome
27:32 The DNA Revelation
31:41 Finding Credible Witnesses
34:34 The Appeal Process

Local sponsors for this episode include: Neighbors Federal Credit Union, Another Chance Bail Bonds, Dudley DeBosier Injury Lawyers, and Family Worship Center Church. Sound and editing for this audio podcast by Envision Podcast Production.

Python Bytes
#463 2025 is @wrapped

Python Bytes

Dec 22, 2025 · 43:19 · Transcription available


Topics covered in this episode: Has the cost of building software just dropped 90%?; More on Deprecation Warnings; How FOSS Won and Why It Matters; Should I be looking for a GitHub alternative?; Extras; Joke. Watch on YouTube.

About the show: Sponsored by us! Support our work through our courses at Talk Python Training, The Complete pytest Course, and our Patreon supporters. Connect with the hosts: Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky); Brian: @brianokken@fosstodon.org / @brianokken.bsky.social; Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky). Join us on YouTube at pythonbytes.fm/live to be part of the audience, usually Monday at 10am PT; older video versions are available there too. Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list; we'll never share it. HEADS UP: We are taking next week off. Happy holidays, everyone.

Michael #1: Has the cost of building software just dropped 90%? by Martin Alderson. Agentic coding tools are collapsing "implementation time," so the cost curve of shipping software may be shifting sharply. Recent programming advancements (cloud, TDD, microservices, complex frontends, Kubernetes, etc.) haven't been that great a true benefit. Agentic AI's big savings come not just from code generation but from reduced coordination overhead: fewer handoffs, fewer meetings, fewer blocks. Thinking, product clarity, and domain decisions stay hard, while typing and scaffolding get cheap. Is it the end of software dev? Not really; see the Jevons paradox: when production gets cheaper, total demand can rise rather than spending simply falling. (Historically, more efficient coal use led to increased coal consumption.) Alderson pushes back on "only good for greenfield" by arguing agents also help with legacy code comprehension and bug fixing. I 100% agree. Legacy code for the win.

Brian #2: More on Deprecation Warnings. How are people ignoring them? Yep, it's right in the Python docs: -W ignore::DeprecationWarning. Don't do that! Perhaps the docs should give the example of emitting them only once: -W once::DeprecationWarning. See also -X dev mode, which sets -W default along with some other runtime checks. Don't use warn(); use the @warnings.deprecated decorator instead (thanks to John Hagen for pointing this out). It emits a warning and is understood by type checkers, so editors visually warn you, and you can pass in your own custom UserWarning with the category argument. mypy also has a command-line option and a setting for this: --enable-error-code deprecated, or in [tool.mypy]: enable_error_code = ["deprecated"]. My recommendation: use @deprecated with your own custom warning and test with pytest -W error.

Michael #3: How FOSS Won and Why It Matters by Thomas Depierre. Companies are not cheap; companies optimize cost control. They do this by making purchasing slow and painful. FOSS is (and was) a major unlock hack to skip procurement, legal, etc.; one example took months just to start using a paid "Add to calendar" widget! It works both ways: the same bypass lowers the barrier for maintainers too, with no need for a legal entity, lawyers, liability insurance, or a sales motion. Proposals that "fix FOSS" by reintroducing supply-chain-style controls (he name-checks SBOMs and mandated processes) risk being rejected or gamed, because they restore the very friction FOSS sidesteps.

Brian #4: Should I be looking for a GitHub alternative? Pricing changes for GitHub Actions: the self-hosted runner pricing change caused a kerfuffle. It has been postponed, but if you were to look around, maybe pay attention to "These 4 GitHub alternatives are just as good—or better": Codeberg, Bitbucket, GitLab, and Gitea, plus a new-ish entry, Tangled.

Extras. Brian: End-of-year sale for The Complete pytest Course; use code XMAS2025 for 50% off before Dec 31. Writing work on the Lean TDD book is on hold for the holidays and will pick up again in January. Michael: PyCharm has better Ruff support now out of the box, via Daniel Molnar. From the release notes of 2025.3: "PyCharm 2025.3 expands its LSP integration with support for Ruff, ty, Pyright, and Pyrefly." If you check out the LSP section it will land you on this page, and you can go to Ruff. The Ruff doc site was also updated. Previously it was only available via external tools and a third-party plugin; this feels like a big step. Fun quote I saw on ExTwitter: "May your bug tracker be forever empty."

Joke: Try/Catch/Stack Overflow. Create a super annoying LinkedIn profile, from Tim Kellogg, submitted by archtoad.
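Brian's deprecation-warning recommendation can be sketched with the stdlib warnings machinery. This is a minimal illustration, not code from the show: the @warnings.deprecated decorator itself needs Python 3.13+ (or typing_extensions.deprecated on older versions), so the sketch uses warnings.warn with a hypothetical custom category (MyLibDeprecationWarning and old_api are made-up names) to show the same promote-to-error behavior that pytest -W error gives you.

```python
import warnings

# Hypothetical custom warning category, as the episode recommends,
# so callers can filter *your* deprecations separately from everyone else's.
class MyLibDeprecationWarning(DeprecationWarning):
    pass

def old_api():
    """A function slated for removal; warns on every call."""
    warnings.warn(
        "old_api() is deprecated; use new_api() instead",
        category=MyLibDeprecationWarning,
        stacklevel=2,  # point the warning at the caller, not this line
    )
    return 42

# Equivalent of running `pytest -W error`: promote matching warnings
# to exceptions so a deprecated call fails the test suite loudly.
with warnings.catch_warnings():
    warnings.simplefilter("error", MyLibDeprecationWarning)
    try:
        old_api()
        raised = False
    except MyLibDeprecationWarning:
        raised = True

print(raised)  # True: the deprecation surfaced as a hard failure
```

With the default filters the call merely prints a warning and still returns its value; only the explicit "error" filter (or pytest -W error) turns it into a failure.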

Ultimate Guide to Partnering™
281 – Why SHI's Audacious Transformation is Mastering Agentic AI

Ultimate Guide to Partnering™

Dec 21, 2025 · 22:33


Welcome back to the Ultimate Guide to Partnering® Podcast. AI agents are your next customers. Subscribe to our newsletter: https://theultimatepartner.com/ebook-subscribe/ Check out UPX: https://theultimatepartner.com/experience/

In this episode, Vince Menzione sits down with SHI leaders Joseph Bellian and Stefanie Dunn, alongside Microsoft's Marcus Jewett, to dissect SHI's massive evolution from a traditional Large Account Reseller (LAR) to a strategic Global Systems Integrator (GSI). They explore the cultural and operational shifts required to move from a transaction-heavy model to a services-led approach, highlighting their alignment with Microsoft's MSEM methodology, the implementation of the Entrepreneurial Operating System (EOS), and their cutting-edge work with AI Labs and agentic AI.

Key Takeaways:
SHI has evolved from a transactional powerhouse into a Global Systems Integrator (GSI) focused on services and outcomes.
The organization implemented the Entrepreneurial Operating System (EOS) to align vision, people, and data across sales and delivery.
SHI serves as “Customer Zero” for Microsoft AI, implementing Copilot internally to better guide customers.
The partnership mirrors Microsoft's MSEM methodology to ensure seamless co-selling and customer success lifecycles.
SHI's AI Labs in New Jersey provides a secure environment for clients to build and test custom AI solutions.
The shift requires moving from a “Hulk” (strength/sales) mindset to a “Tony Stark” (brainpower/strategy) mindset.

Key Tags: SHI International, global systems integrator, Microsoft services, Joseph Bellian, Stefanie Dunn, Marcus Jewett, AI labs, agentic AI, MSEM methodology, entrepreneurial operating system, digital transformation, customer zero, copilot implementation, solution provider, cloud migration, data governance, services led growth.

Ultimate Partner is the independent community for technology leaders navigating the tectonic shifts in cloud, AI, marketplaces, and co-selling.
Through live events, UPX membership, advisory, and the Ultimate Guide to Partnering® podcast, we help organizations align with hyperscalers, accelerate growth, and achieve their greatest results through successful partnering.

Transcript: Joseph Bellian – Stefanie Dunn – Marcus Jewett WORKFILE AUDIO [00:00:00] Vince Menzione: We’ve got it. So it is interesting how these sessions kind of follow each other. Hopefully you’re seeing kind of a flow from marketplaces and the conversation about how to be a really great ISV to how an ISV took and built a channel strategy and how they integrated alliances and channels together. [00:00:16] Vince Menzione: Well, we have an, we have another really great example here to talk through. I have this, uh, incredible like background. Like I’m a hundred years old, basically. I don’t even want to tell anybody that. But, uh, I got to work with this organization way back in my days at Microsoft. They are, they were and are one of the top, I’ll call them, they were classically a reseller company. [00:00:40] Vince Menzione: They one of the largest, we call ’em large account resellers back in the day. Uh, their leader built a multi-billion dollar organization. I’m gonna let them talk through who they are today, but we have an opportunity to talk about transformation. From that lens now too, like how does an organization that’s really good at doing one thing evolve, transform and take advantage of these tectonic shifts we’re seeing? [00:01:03] Vince Menzione: So, uh, we’ve got some incredible leaders. I’m gonna have them come up on stage. And everybody introduced themselves from SHI and also from Microsoft. And we’re gonna have a really great conversation today. Great to have you. [00:01:26] Vince Menzione: So I’m gonna let, I’m gonna let you guys introduce yourselves because, uh, everybody knows you as DJ Marco Polo. So we’re gonna, we’ll start with you over in the far end, Marcus. Okay.
Vince, I, [00:01:36] Marcus Jewett: I’ll try to be shy. [00:01:37] Vince Menzione: No, [00:01:37] Marcus Jewett: uh, hi everyone, my name is Marcus Jewett, I am the Global Partner Development Manager for the SHI partnership. [00:01:43] Marcus Jewett: Uh, I have been overseeing this partnership for just under 12 years. Wow. So I have seen the evolutional journey of this partner and really proud of where they, uh, have matured their business and the partnership with Microsoft. [00:01:57] Stefanie Dunn: Thank you. Oh. [00:01:58] Marcus Jewett: Is there, is yours on? Oh, [00:02:00] Vince Menzione: mines [00:02:00] Stefanie Dunn: on. Hi, I am Stefanie Dunn, a director of Microsoft Services at SHI. [00:02:07] Stefanie Dunn: And it is an, it’s a pleasure to be here. It’s a pleasure to have Marcus as our PDM and, uh, Joe and Vince, uh, very, very happy to be here. Um, and I lead our Microsoft Services sales, uh, area. So across, uh, cloud AI business transformation and, uh. And, uh, data and ai. [00:02:28] Joseph Bellian: Great, great to have you, Stefanie. Thank you. [00:02:30] Joseph Bellian: Joe. Joe Bellian. I’m the VP of Microsoft Alliances and programs. Uh, I’ve been here at SHI for about eight months now, but been in and around the partner ecosystem for about a decade. Uh, I think of my organization of like kind of two aspects. So leading the charge around alliances, aligning our field sellers and specialists with Microsoft, as well as the, the programs backend incentives and operations. [00:02:51] Joseph Bellian: But, um, the real focus is driving the go to market strategy here at SHI. [00:02:55] Vince Menzione: Yeah. So great. So I started to allude to this earlier about like traditional, one of the top three or four companies actually. And we used to use the term, uh, LSP back in the day, or LAR, we’ve got several iterations. Microsoft’s gone through several iterations of that name. [00:03:11] Vince Menzione: Marcus knows all of them probably by heart.
Tell us what was the impetus to change the organization? Become more like a ser, a services led company as opposed to a transaction led organization? [00:03:21] Joseph Bellian: Yeah, absolutely. Throw one more acronym. SSP. SSP, that was another one. So, uh, solution provider. Um, but, uh, yeah, I, I’d say probably a couple things. [00:03:29] Joseph Bellian: Um, one, the big one, no news to anybody in the room and online as well. The shift with EAs, director of Microsoft, as well as, uh, the whole CSP hero motion. So we do recognize that opportunity, uh, to have services attached, to engage with our clients as well as our joint partnerships with Microsoft, uh, with services out in the field. [00:03:48] Joseph Bellian: Uh, the second one, probably the biggest one is our clients. Hearing out our clients that shift. Um, we’re talking about ai, ai, everything, AI services. Uh, we’re now in the whole era of agentic ai. What does that mean? How do you take advantage of those offerings? And so we recognize that, that our clients are spending millions of dollars with the Microsoft products, but how do you take advantage of that investment and maximize it in their environment? [00:04:13] Joseph Bellian: And so having services to help navigate those complex solutions, that’s where we’re, we’re leaning in. [00:04:18] Vince Menzione: So what did it take to change? Transformation doesn’t come easy. There’s mindset. There’s all these cultural changes that need to happen. From your perspective, both of your perspectives, what did it take internally for this change to happen? [00:04:31] Joseph Bellian: Yeah. Um, so if you, if you heard of the entrepreneurial operating system EOS Yes. And we’ve adopted that internally. Um, if you’re not familiar, it kind of comprises of six components. So vision, people, data, um, process. Issues and, um, uh, traction. So I apologize, that’s, uh, but take, take that model and put it into our business of what we did. 
[00:04:57] Joseph Bellian: Um, so two kind of twofold. One, moving our entire services practice organization under one, one operating rhythm, um, under Jordan Ello, our CTO. So pre-sales and delivery. So looking at that, the how we go to market with our services, single vision. Uh, single process. So it’s consistent as we’re engaging not only through our partners, but through our clients, but then also on the other side of the house, our Microsoft practice, having all of our resources under one roof so that it’s a single way we go to market. [00:05:28] Joseph Bellian: Aligning our go to market strategy, one-to-one with Microsoft. Why it, it does two things. One, it allows us to be very clear of how we are going to market to our clients, but it allows us to partner even better with our Microsoft counterparts. Yeah, when, when Microsoft, it’s always ever changing. You’re familiar, every six months to a year solution plays and the go-to-market strategy changes, uh, we’re there at the forefront in ensuring that we have our solutions mapped a hundred percent so that we can just co-sell together. [00:05:58] Joseph Bellian: Break down those walls. Let’s do more together. [00:06:00] Vince Menzione: And, uh, geographically you were sep, your teams were separated. You have a big operation in Texas. You also have a big New Jersey operation, which was where the company was founded, in fact. So I’d love to get the perspective on this, Marcus. From your perspective, like what did it do, what was it like before and what did it become? [00:06:17] Marcus Jewett: Oh yeah, let’s go back in the way back machine to 12 years ago. Um, it was a different partner, a different operating model, uh, in those early days. And this is really when we started to move customers from on-premises to more cloud-based subscription technologies. Uh, SHI was always just an incredible selling machine. [00:06:36] Marcus Jewett: If they could not do anything, they could always sell. 
And for any of you who are familiar with the Marvel movies, um. I, I, I, I use a reference internally with them. SHI was always like the Hulk root for strength. You know, you tell ’em to go sell something, Hulk Smash, they can knock that out. Well, as we really needed these partners to evolve and really help our customers with their technologies, whether it’s driving adoption, monthly active usage, consumption. [00:07:02] Marcus Jewett: We needed them to be more like Tony Stark, right? We needed the brain power, and so over the last, let’s call it five or six years, SHI has continued to invest in their Microsoft practice. They went from an organization that was really focused on management of EA acquisition of new Microsoft logo. To continuing to develop that muscle, but also investing in ways to help customers through their managed services, through their professional services. [00:07:28] Marcus Jewett: And it’s been a, a journey. Right? SHI is a large organization. For a long time they were Microsoft’s largest partner. And from a transactional build revenue perspective, and they still are in many ways, but we really needed them to demonstrate that they could help our, their customers, our shared customers take full advantage of all of the entitlements and the technology they, that they’ve purchased from us. [00:07:50] Marcus Jewett: And that’s really where the evolution has been with SHI when I first started, uh, this is like, God, 12 years ago, there were 20 people that were Microsoft centric resources that really were focused on. Customer acquisition and net new logos. And today that organization from a sales perspective is over 150 sellers. [00:08:09] Marcus Jewett: Wow. That are just focused on Microsoft. So that CSP, they, they fill the top of the funnel for services to help drive program utilization. And that’s not even talking about the dedicated services resources that works under Stephanie. So it’s been. An incredible journey. 
Microsoft has invested in SHI and in turn, SHI has invested into Microsoft. [00:08:31] Marcus Jewett: They’ve basically taken their approach in terms of how they go to market with Microsoft, and they’ve mirrored that almost like how Joe and I are wearing the same jacket. That’s really how they’ve aligned their, their go to market strategy, really making it a mirror where they take it. They’ve taken our Microsoft MSEM methodology. [00:08:50] Marcus Jewett: And they’ve essentially adopted it and made it their own. So now when our sellers are talking with SHI sellers, they’re speaking the same language. [00:08:58] Vince Menzione: You’re teeing it up beautifully for your conversation with Stefanie here. Stefanie, I want to hear like how you’ve done all those things. ’cause it’s really your organization that’s focused on this, right? [00:09:06] Stefanie Dunn: Yeah, absolutely. So for us it’s all about shared outcomes. We’re listening to the customer. We’re listening to Microsoft and we’ve really taken that to heart. Uh, the customer is at the center of every single thing that we do. I know all of us as partners. That’s really our vision, likely, and the reason why we’re here is our customers. [00:09:26] Stefanie Dunn: But really understanding how to take advantage of that partnership and build something incredible. And it is transformative. Uh, you know, we started as a licensing powerhouse, as Marcus alluded to, and now we’re going deep into services. So we’re aligning to co-sell motions. We’re aligning to the, the industries. [00:09:46] Stefanie Dunn: Uh, we’re creating marketplace offers. We’ve got our programs, uh, tied to all of our services offerings. And so when we look at the broader ecosystem, we see the vision of Microsoft. Uh, we’ve hired the right people, we’ve put the right processes into place, and we have the technology expertise in-house to really share. [00:10:08] Stefanie Dunn: In the journey with our customers and leading them.
[00:10:11] Vince Menzione: And you know, you talk about like solution plays. You talked about industry. People don’t always recognize this when you talk to Microsoft sellers. They’re very focused on the industry they’re in, and you have to have those conversations that, this came up earlier, but we never got into this. [00:10:25] Vince Menzione: But you’re aligning your solution plays, you’re aligning your conversations to be very like healthcare and education, all those different markets, right? [00:10:32] Stefanie Dunn: We are. We are, which is very new for SHI in the services industry, and so you know, we’re taking our CSP plays. Um, our licensing plays and really saying, well, what can you do with that? [00:10:43] Stefanie Dunn: Right. You know, how can we advise you? And then we, we dig into the actual industry verticals to, to get tactical with them. You know, it’s, it’s about providing the strategy. It’s about providing the extra hands. They all need extra hands. They, you know, our, our customers need us. As an extension of their team. [00:11:01] Stefanie Dunn: And so for us it’s really important to dig into that and, and be, and be that, that listening ear and you know, that expert in the room for them, uh, from advisory standpoint. And so all of our se services sellers are advisors as well. They’re not selling a product, they’re not selling, uh, something individual. [00:11:19] Stefanie Dunn: We are selling to. Fill and fulfill their goals and business outcomes, which is extremely unique, I will say, because we do have that end to end. So it does start with the licensing. It starts with assessing what you really have, meeting with those advisors, and then putting together a roadmap to help them. [00:11:37] Stefanie Dunn: Understand. Okay, well this is what it’s gonna take to get you here. Here’s our, uh, we love reverse timelines at SHI and so, um, it’s d minus din and so this is where you wanna go and this is when you wanna get there. 
So this is how we’re gonna help you, uh, along that roadmap. [00:11:53] Vince Menzione: I am gonna put you on the spot here with MSEM. [00:11:55] Vince Menzione: ’cause I think Microsoft finally laid out a process a couple years ago for you to like line up to, ’cause you were doing one piece of it before. Do you want to talk about how MSEM plays in here and how SHI is leveraging it? [00:12:07] Marcus Jewett: Right. So, uh, across our MSEM stages, there are five different stages, and this is the customer journey from these, you know, pre-sales, scoping, uh, engagements with customers all the way through delivery. [00:12:19] Marcus Jewett: And then of course, like that customer success lifecycle and managed services. Again, this was not a language or a way that SHI really approached their business. Again, it was very much like, let’s. Get the customer to purchase on an EA or let’s renew the customer. And then once that cycle was complete, then it, it was almost like adding fries. [00:12:38] Marcus Jewett: Would you like some services with your EA? Right. And, uh, it took a, it took a while, right? Some very, uh, difficult conversations, but we were able to find, finally get the right people in the room to make the right investments. And now when you think about how SHI goes to market, they don’t necessarily leverage the term MSEM internally, but. [00:12:59] Marcus Jewett: All of their customer methodologies or their sales methodologies in terms of how they service their customers aligns perfectly. Even when we get into the descriptive part of building out our, uh, partner business plan, we did that across every stage of the MSEM methodology. So that we can ensure that the teams at SHI are in perfect alignment with the teams at Microsoft. [00:13:20] Marcus Jewett: So, uh, I’m, I’m really excited about how we’ve been able to mature the practice and how SHI is now 100% aligned with Microsoft across all of our solution areas, whether it’s.
Security, you know, cloud and infrastructure or AI business solutions. There’s a very mirrored approach to how we support customers. [00:13:39] Marcus Jewett: Yeah. I want [00:13:40] Vince Menzione: to double click on the AI component. You know, we were up here earlier, Irwin and I were up here talking about being a frontier firm, and I’ll open it up to all, all of you to individually answer this. I know, Marcus, you have some insights here about the ai. You mentioned AI already. But also to Stefanie and Joe about how you’re taking AI and modern work and workplace and, and, and, and addressing this market specifically. [00:14:07] Vince Menzione: Where, where, where do we wanna start there? [00:14:09] Joseph Bellian: Yeah. One big one. Um, if you’re not familiar, we have ai, an AI labs, um, onsite, uh, lab, and based out of Jersey, one of our headquarters. So on the forefront of the AI technology, but the real focus there is being able to meet with our clients and obviously joint partnerships, um, to build and develop solutions safe, um, offline in a safe, secure environment. [00:14:33] Joseph Bellian: Because let’s be honest, I mean, ai, it’s moving fast and, and we, we, we need to ensure that our data’s secure. Um, and there’s a lot of risk out there. And so we are partnering, um, um, out there with Nvidia and other other providers, um, but specifically with Microsoft in the cloud, um, and securing that environment. [00:14:51] Joseph Bellian: So AI Labs, bringing our clients in, building custom solutions, the era of agentic AI is here. It’s [00:14:57] Vince Menzione: there. It is here. Yeah, it is here, Stefanie.
Creating that secure foundation. So we’ve got a lot, you know, we do a lot around, uh, just full M 365 migrations and then into understanding the identity and the security baseline under that, making sure that that’s correct. [00:15:29] Stefanie Dunn: And then we can start journeying into some of these other conversations. Data governance, data engineering, uh, all that is extremely important. We have an entire dedicated team, uh, within services sales. Pre-sales with essays or solution architects and delivery, uh, as well as just the project management. [00:15:48] Stefanie Dunn: And, and it’s just this full life cycle to understand where are you and we need to make sure that, that your structure’s built correctly or else it’s never gonna succeed. So a little bit, we take it back to the foundation level, I’ll just say from a customer, uh, engagement perspective to make sure that what they wanna do, they can do securely. [00:16:06] Marcus Jewett: Very cool. I, I’d like to add one other piece there. Um, you know, obviously to Joe’s point earlier, like if anyone says they know exactly what the AI journey will look like for most customers in six months, they’re probably not telling you the truth. Right? This is, we’re, we’re building the plane in the air. [00:16:22] Marcus Jewett: But, uh, one thing Microsoft has really built a foundation on is looking at our partners. And the ones who have adopted AI internally, especially Microsoft Technologies, and we call it Customer zero, right? Ensuring working with partners who have invested in their internal usage of Microsoft AI technology. [00:16:41] Marcus Jewett: So it’s all the various flavors of copilot. Rolling it out and implementing it across their organizations and building their own internal use cases, which they can go in turn and use to go help drive successful engagements with their end customers. So SHI has also been one of our, uh, brightest partners when it comes to that customer Zero journey. 
[00:17:01] Marcus Jewett: Uh, and it’s something I’m very, very proud of to see. Uh, we’re leveraging the, the use cases and the learnings our SHI is to really go out there and help customers navigate through their own. Uh, complexities of their AI journey as well. So, uh, my kudos to SHI as Customer Zero. Very proud of you and opera feels great. [00:17:20] Marcus Jewett: And you’re [00:17:20] Vince Menzione: providing support engineering, organ organization that supports this function? [00:17:24] Marcus Jewett: Oh, absolutely. As a globally managed partner, I mean, we’re, we’re gonna always be there to help our partners through the journey, right? So whether they need internal readiness or technical support, uh, whether it’s workshops, however we can help the partners best. [00:17:38] Marcus Jewett: Uh, position and posture themselves to go help customers with these, uh, AI engagements. Uh, we’re, we’re there to invest. Uh, we’ve invested in SHI for the last several years across, uh, ai, and we will continue to do so. [00:17:52] Vince Menzione: So what’s the message for the partner community, Joe, that, that, like, how should they perceive you? [00:17:57] Vince Menzione: How should they think about you? Should they, how should they think about engaging with you? Okay. [00:18:02] Joseph Bellian: Yeah, so I mean, obviously we’re an SSP, we’re never gonna, we’re never gonna, um, lose that, that accreditation with Microsoft. But the, the real focus of what we wanna be recognized as is a GSI, a global systems integrator, um, being able to engage our clients jointly, co-selling together and meeting them where they’re at across their digital journey. [00:18:21] Joseph Bellian: Uh, we have the capabilities to handle their licensing and understanding the complex matrix in their environment, their IT infrastructure. But being able to have a solution for every part of the journey of where they’re at, because every client’s in a different situation. Yeah.
So, so in reality, it’s a GSI, a global systems integrator, being able to engage across their journey. [00:18:42] Vince Menzione: So that’s a, did everybody hear that? ’cause I, I heard that for the first time. That’s a very different perception of the, of the previous organization and getting there. Uh, and you also, I remember this from the transactional side of the business. You were at the very type, at the top of the pyramid, right? [00:18:56] Vince Menzione: Yeah. You handled some of the largest corporations in the, in the world. Yeah. And you know companies as well as organizations like government, governmental organizations across different markets as well. [00:19:07] Joseph Bellian: Yep. A hundred percent. [00:19:08] Vince Menzione: Yeah. So GS. Yeah. [00:19:11] Marcus Jewett: And it’s really important to, for SHI to, to develop that GSI muscle. [00:19:15] Marcus Jewett: Uh, you mentioned at the beginning, Joe, that Microsoft, uh, we have various routes to market. Uh, one of those routes to market, uh, especially in the enterprise space or in our strategic space, is for customers to procure direct. Uh, SHI has longstanding relationships with those customers, and as these customers renew their agreements into a direct model with Microsoft, the way they stay engaged and add value to these prop, uh, to these customers is through their services, their professional services, their managed services. [00:19:42] Marcus Jewett: So going back to Joe’s point around really defining themselves as a, uh, a GSI that is also an SSP has been paramount to their overall transformational journey and their overall success. [00:19:55] Vince Menzione: And you also work, so I would assume you work with some of the ISVs in the room too. Yeah, I would think there’s some really great relationships or synergies. [00:20:01] Vince Menzione: Is that, is that an area of muscle you’ve been building out or, yeah, it’s battle, it’s an opportunity.
[00:20:06] Joseph Bellian: I mean, I, I believe you have a segment coming up as well on it, um, around MPO. Um, and so there’s a, there’s a play in every motion from services, play services attached through ISVs, your SaaS offers. Um, we do recognize that that’s an opportunity. [00:20:18] Joseph Bellian: Uh, we’re having great success when you look at the marketplace, um, through the multiparty private offers. Um, it allows us to expand our footprint and take, uh, take advantage of those relationships and co-sell together. So, absolutely. Wow. [00:20:30] Vince Menzione: Very cool. So you’re gonna be around most of the day today? Yes. I hope. [00:20:34] Vince Menzione: Mm-hmm. So for the partners that are in the room, I think that great conversations with both of you, Stefanie and Joe, and, uh, great conversation. Is there anything else we wanna share with everyone? [00:20:46] Marcus Jewett: Uh, no. It’s just, I would, I would leave you all with the fact that, again, uh, for every partner. Uh, make certain that you, you’re finding a way to differentiate yourself and tell your story. [00:20:57] Marcus Jewett: Uh, you may be doing some amazing work, uh, but if you’re not finding ways to, to tell that story and make certain your customers, and for me, Microsoft, make certain that, that the Microsoft teams you’re working with have very clear understanding of what your capabilities are today, then you may be missing the mark. [00:21:13] Marcus Jewett: I, I, I use this analogy all the time. Uh, the largest retailer on the planet. Who is it? Come on, help me out. I’m sorry. Largest retailer. Box Box. Walmart. Walmart, that’s right. You can turn on a television on any given day and you will still see a Walmart commercial. So yes, tell your story. Yes, very [00:21:34] Joseph Bellian: smart move.
Um, us partnering together, bringing the partner ecosystem together. Um, in reality, we’re not competing together. We should be collaborating together and working together, um, in our client’s joint environments. [00:21:52] Joseph Bellian: Microsoft says it well, it’s that one Microsoft story. It’s that better together story and the more we can work together, the more success we’ll have together. [00:22:00] Vince Menzione: Awesome. I want to thank you so much for your sponsorship and for being here. Uh, big news here, I think it should be like on the front page of the partner ecosystem journal that you’re now, you’re now a GSI. I think that that says quite, that says volumes to, to the community out there. [00:22:15] Joseph Bellian: Yeah. [00:22:15] Vince Menzione: Thank you. [00:22:15] Joseph Bellian: Absolutely. [00:22:16] Vince Menzione: Yeah. Thank you. Thank you both for joining us. So great to have you both. Thank you. Thank you, Marcus, to have you as well. Thank you. Thank you, Jeff. Thank you very much Stephanie. So great. So great to spend time with you. Thank you. And this.

Jim Reeves
#544 The Jim Reeves Christmas Show (Including Jim's RCA Victor album 12 Songs Of Christmas, LSP-2758)

Jim Reeves

Play Episode Listen Later Dec 19, 2025 58:17


#544 The Jim Reeves Christmas Show (Including Jim's RCA Victor album 12 Songs Of Christmas, LSP-2758) by Jim Reeves

Python Bytes
#461 This episdoe has a typo

Python Bytes

Play Episode Listen Later Dec 9, 2025 28:50 Transcription Available


Topics covered in this episode: PEP 798: Unpacking in Comprehensions, Pandas 3.0.0rc0, typos, A couple testing topics, Extras, Joke. Watch on YouTube.
About the show: Sponsored by us! Support our work through: Our courses at Talk Python Training, The Complete pytest Course, Patreon Supporters. Connect with the hosts: Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky); Brian: @brianokken@fosstodon.org / @brianokken.bsky.social; Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky). Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too. Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to our friends of the show list, we'll never share it.
Michael #1: PEP 798: Unpacking in Comprehensions
After careful deliberation, the Python Steering Council is pleased to accept PEP 798 – Unpacking in Comprehensions. Examples:
[*it for it in its] # list with the concatenation of iterables in 'its'
{*it for it in its} # set with the union of iterables in 'its'
{**d for d in dicts} # dict with the combination of dicts in 'dicts'
(*it for it in its) # generator of the concatenation of iterables in 'its'
Also: The Steering Council is happy to unanimously accept “PEP 810, Explicit lazy imports”
Brian #2: Pandas 3.0.0rc0
Pandas 3.0.0 will be released soon, and we're on Release candidate 0. Here's What's new in Pandas 3.0.0:
Dedicated string data type by default: Inferred by default for string data (instead of object dtype). The str dtype can only hold strings (or missing values), in contrast to object dtype (setitem with a non-string fails). The missing value sentinel is always NaN (np.nan) and follows the same missing value semantics as the other default dtypes.
Copy-on-Write: The result of any indexing operation (subsetting a DataFrame or Series in any way, i.e. 
including accessing a DataFrame column as a Series) or any method returning a new DataFrame or Series, always behaves as if it were a copy in terms of user API. As a consequence, if you want to modify an object (DataFrame or Series), the only way to do this is to directly modify that object itself.
pd.col syntax can now be used in DataFrame.assign() and DataFrame.loc(). You can now do this: df.assign(c = pd.col('a') + pd.col('b'))
New Deprecation Policy. Plus more -
Michael #3: typos
You've heard about codespell … what about typos? VSCode extension and OpenVSX extension. From Sky Kasko: Like codespell, typos checks for known misspellings instead of only allowing words from a dictionary. But typos has some extra features I really appreciate, like finding spelling mistakes inside snake_case or camelCase words. For example, if you have the line: connecton_string = "sqlite:///my.db" codespell won't find the misspelling, but typos will. It gave me the output: error: `connecton` should be `connection`, `connector` ╭▸ ./main.py:1:1 │1 │ connecton_string = "sqlite:///my.db" ╰╴━━━━━━━━━ But the main advantage for me is that typos has an LSP that supports editor integrations like a VS Code extension. As far as I can tell, codespell doesn't support editor integration. (Note that the popular Code Spell Checker VS Code extension is an unrelated project that uses a traditional dictionary approach.) For more on the differences between codespell and typos, here's a comparison table I found in the typos repo: https://github.com/crate-ci/typos/blob/master/docs/comparison.md By the way, though it's not mentioned in the installation instructions, typos is published on PyPI and can be installed with uv tool install typos, for example. That said, I don't bother installing it, I just use the VS Code extension and run it as a pre-commit hook. (By the way, I'm using prek instead of pre-commit now; thanks for the tip on episode #448!) 
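Since no released interpreter implements PEP 798 yet, the four new comprehension forms listed above can be approximated in today's Python with itertools.chain; a rough sketch (the its and dicts sample values are made up):

```python
from itertools import chain

its = [[1, 2], [3], [4, 5]]
dicts = [{"a": 1}, {"b": 2}, {"a": 3}]

# [*it for it in its] -> list with the concatenation of the iterables
flat_list = list(chain.from_iterable(its))

# {*it for it in its} -> set with the union of the iterables
union_set = set(chain.from_iterable(its))

# {**d for d in dicts} -> dict combining the dicts (later keys win)
merged = {k: v for d in dicts for k, v in d.items()}

# (*it for it in its) -> lazy generator over the concatenation
lazy = chain.from_iterable(its)

print(flat_list)  # [1, 2, 3, 4, 5]
print(merged)     # {'a': 3, 'b': 2}
```

Once PEP 798 ships, each left-hand comment above becomes literal syntax.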
It looks like typos also publishes a GitHub action, though I haven't used it.
Brian #4: A couple testing topics
slowlify (suggested by Brian Skinn): Simulate slow, overloaded, or resource-constrained machines to reproduce CI failures and hunt flaky tests. Requires Linux with cgroups v2.
Why your mock breaks later, by Ned Batchelder: Ned's taught us before to “Mock where the object is used, not where it's defined.” To be more explicit, but probably more confusing to mock-newbies, “don't mock things that get imported, mock the object in the file it got imported to.” See? That's probably worse. Anyway, read Ned's post. If my project myproduct has user.py that uses the system builtin open() and we want to patch it:
DON'T DO THIS: @patch("builtins.open") (this patches open() for the whole system)
DO THIS: @patch("myproduct.user.open") (this patches open() for just the user.py file, which is what we want)
Apparently this issue is common and is mucking up using coverage.py.
Extras
Brian: The Rise and Rise of FastAPI - mini documentary; “Building on Lean” chapter of LeanTDD is out; The next chapter I'm working on is “Finding Waste in TDD”; Notes to delete before end of show: I'm not on track for an end of year completion of the first pass, so pushing goal to 1/31/26; As requested by a reader, I'm releasing both the full-so-far versions and most-recent-chapter
Michael: My Vanishing Gradients episode is out; Django 6 is out
Joke: tabloid - A minimal programming language inspired by clickbait headlines
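Ned's patch-where-it's-used advice above can be sketched end-to-end; here the user module is built in memory purely so the snippet is self-contained (in a real test suite you would patch the actual "myproduct.user.open" string):

```python
import sys
import types
from unittest import mock

# Stand-in for myproduct/user.py, created in memory so this runs alone.
user = types.ModuleType("user")
exec(
    "def read_user_name(path):\n"
    "    with open(path) as f:\n"
    "        return f.read().strip()\n",
    user.__dict__,
)
sys.modules["user"] = user  # make the module resolvable by mock.patch

# Patch open() where it is *used* (the user module's namespace), not
# builtins.open, which would swap open() for every caller in the process.
with mock.patch("user.open", mock.mock_open(read_data="alice\n"), create=True):
    name = user.read_user_name("users.txt")  # no real file is opened

print(name)  # alice
```

Outside the with block, user.read_user_name goes back to hitting the real filesystem, which is exactly the scoping @patch("myproduct.user.open") gives a test function.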

Oh My Glob! An Adventure Time Podcast
Season 9 - Episodes 7-9 ("Elements" Parts 6-8)

Oh My Glob! An Adventure Time Podcast

Play Episode Listen Later Nov 17, 2025 59:56


We have reached the end of Adventure Time's "Elements" miniseries, and it's been a fun ride. Join the Oh My Glob crew as we get into LSP's sass, Matt's trivia gripes and a whole bunch o' other stuff. Rate us on Apple Podcasts! itunes.apple.com/us/podcast/oh-my-glob-an-adventure-time-podcast/id1434343477?mt=2 Contact us: ohmyglobpodcast@gmail.com Instagram: @ohmyglobpod Trivia Theme by Adrian C.

Land Stewardship Project's Ear to the Ground
Ear to the Ground 388: On-time Delivery

Land Stewardship Project's Ear to the Ground

Play Episode Listen Later Nov 10, 2025 36:04


Scientist Kris Nichols finds the disconnect between food production and soil health “terrifying” and says the stakes are too high not to mend that break. So what are we waiting for? (2 of 2 parts) More Information • Ear to the Ground No. 387 Interview with Kris Nichols (1 of 2 parts) • LSP's Soil…  Read More → Source

Let´s Snacka Plast
#166 "En LÅNG paus senare"

Let´s Snacka Plast

Play Episode Listen Later Oct 29, 2025 85:23


LSP is back with a piping-hot fresh episode! We catch you up on the latest in our lives, ponder how many aces Ted can reasonably hit in his lifetime, and offer a little tribute to Kastaplast and all their magical discs! We also talk a bit about our speculations on the future of disc golf, and a whole lot more! Listen in on your podcast platform of choice, or join us when we stream live on YouTube! Anders and Ted, over and out ;)

Moneda Moves
How This Founder Builds Wealth Wellness For All with Margarita Quiñones Peña, Founder, Latina Sweat Project

Moneda Moves

Play Episode Listen Later Oct 8, 2025 36:52


Welcome to an evolved era of the Moneda Moves podcast. As the environment changes around us and our communities face palpable threats to livelihoods and built wealth, we too are realigning, and you'll notice that our episodes are going to sound a little different moving forward. We are extending the definition of what capital looks like for our community. Now on the Moneda Moves podcast, we're not just talking about assets and cash, we're talking about capital in all of its forms: financial, social, political, and cultural. We are an incredibly powerful and resilient community. There's no better time to pull all our levers of power to not just survive but continue to thrive in any economy and administration. From entrepreneurs to innovators, we are rewriting what growth looks like. One of the topics we plan to explore on Moneda Moves is how we can support our community with our capital. We as a community have access to capital in our everyday lives. Via supporting value-aligned and good businesses, grants, community funding, and more, businesses can balance accessibility and sustainability. By tapping into these resources, we can support the most vulnerable people in our community. Today, we are highlighting the Latina Sweat Project, a Chicago-based wellness nonprofit dedicated to making yoga and holistic health accessible to underserved communities. I am also highlighting LSP as I also now sit as board co-chair, as I fully believe in their mission and how health becomes a fundamental pillar to building complete wealth. Margarita Quiñones Peña is the Founder and Executive Director of the Chicago-based nonprofit. A first-generation Mexican immigrant, Margarita's journey crossing the U.S.-Mexico border as a child shaped her lifelong commitment to equity, healing, and representation. She is also the author of Homecoming: El Viaje a Mi Hogar, a children's book that uplifts the voices of migrant youth. 
Through Latina Sweat, she creates community-centered yoga classes, yoga teacher trainings, and wellness programs that empower women and families to reclaim their health, culture, and leadership. In this week's episode, Margarita and I discuss how founding the Latina Sweat Project is building holistic wellness for entire communities. By making her classes financially accessible, the most vulnerable people in Chicago neighborhoods can participate in classes ranging from yoga to strength training. The Latina Sweat Project has grown from having to operate pop-up style to finally having its own studio, which launched earlier this fall. They plan to continue providing access to wellness for underserved communities while also growing as a thriving Latina-owned business. Follow Margarita on Instagram @mquino4. Follow The Latina Sweat Project on Instagram @latinasweatproject and on their website. Follow Moneda Moves on Instagram: @MonedaMoves. Follow your host Lyanne Alfaro on Instagram: @LyanneAlfaro. Main podcast theme song from Premium Beat. Our music is from Epidemic Sound. Podcast production for this episode was provided by CCST, an Afro-Latina-owned boutique podcast production and copywriting studio. 

Land Stewardship Project's Ear to the Ground
Ear to the Ground 382: No Offseason

Land Stewardship Project's Ear to the Ground

Play Episode Listen Later Sep 29, 2025 38:09


LSP’s Local Democracy Challenge is a reminder that we can’t allow democracy to grow dormant between elections. More Information • LSP’s Local Democracy Challenge Web Page • Land Stewardship Action • Ear to the Ground Podcast 341 Featuring Land Stewardship Action You can find LSP Ear to the Ground podcast episodes on Spotify, Pandora, iTunes, YouTube,…  Read More → Source

Legally Speaking Podcast - Powered by Kissoon Carr
Special Minisode: The Legally Speaking Podcast Live Lounge - Powered by Lost Rhythms

Legally Speaking Podcast - Powered by Kissoon Carr

Play Episode Listen Later Sep 19, 2025 13:47


In this special minisode of the Legally Speaking Podcast, we are joined by DJs Frankie Gioia and Adam Guy (aka Guyza) from Lost Rhythms. They discuss their journey in the music industry, the challenges of balancing demanding careers with their passion for DJing, and their exciting future plans, including upcoming gigs and music releases. This minisode is also a teaser for more exciting things to come for the Live Lounge, which is now a weekly event at the Legally Speaking Podcast. Key points from the discussion: how this collaboration was formed in Ibiza on the Lawyers Retreat; Lost Rhythms bringing back older tunes; how to balance work and passion; DJing, a passion for over 30 years; how energy breeds energy in the music scene. Find out more on the official LSP website

Never Ending Adventure: An Adventure Time Podcast
#167 - Something Old, Something OOO

Never Ending Adventure: An Adventure Time Podcast

Play Episode Listen Later Sep 9, 2025 47:09


S5E44 - We got us TreeTrunks tying the knot, uncovering some of her creepy past with her exes, her momma just done saying some inappropriate stuff, wedding crasher LSP, AND the King of OOO just being a weirdo. Things.....just be weird sometimes. 

Bitcoin Optech Podcast
Bitcoin Optech: Newsletter #368 Recap

Bitcoin Optech Podcast

Play Episode Listen Later Sep 8, 2025 67:15


Mark “Murch” Erhardt and Mike Schmidt discuss Newsletter #368.
News: ● Draft BIP for block template sharing (0:30) ● Trusted delegation of script evaluation (28:07)
Changes to services and client software: ● ZEUS v0.11.3 released (33:07) ● Rust Utreexo resources (33:25) ● Peer-observer tooling and call to action (34:11) ● Bitcoin Core Kernel-based node announced (37:22) ● SimplicityHL released (38:23) ● LSP plugin for BTCPay Server (39:17) ● Proto mining hardware and software announced (39:42) ● Oracle resolution demo using CSFS (40:46) ● Relai adds taproot support (41:11)
Releases and release candidates: ● LND v0.19.3-beta (43:09) ● Bitcoin Core 29.1rc1 (43:29) ● Core Lightning v25.09rc2 (43:55)
Notable code and documentation changes: ● Bitcoin Core #32896 (44:33) ● Bitcoin Core #33106 (46:57) ● Core Lightning #8467 (1:02:49) ● Core Lightning #8354 (1:03:26) ● Eclair #3103 (1:04:07) ● Eclair #3134 (1:04:43) ● LDK #3897 (1:05:56)

Fernschuss - Der Kickbase & Bundesliga Podcast

Hey friends, we're doing a little spontaneous episode during the international break and talking about everything that's still Kickbase-relevant. We take a close look at the teams' schedules after the international break (LSP) and put together a cool squad with you for the upcoming matchdays! Learn more about your ad choices. Visit podcastchoices.com/adchoices

Fernschuss - Der Kickbase & Bundesliga Podcast
Kömür muss mit Axel Tape spielen

Fernschuss - Der Kickbase & Bundesliga Podcast

Play Episode Listen Later Sep 3, 2025 81:55


International break. Argh. But friends, we'll make sure YOU can use the break (LSP) to your advantage! We go through all the in-form and out-of-form teams and tell you who to keep and who to let go! Learn more about your ad choices. Visit podcastchoices.com/adchoices

YourTechReport
Zoho Just Built Its Own LLM—Here's Why It Matters

YourTechReport

Play Episode Listen Later Aug 6, 2025 22:01


Zoho launches its own large language model—Zia LLM—built in India, designed for business, and powered by privacy-first AI agents that redefine what digital employees can do. Zoho is taking a bold step into the AI future with the launch of its own large language model (LLM) and a suite of enterprise-ready AI agents, all developed in-house—not in Silicon Valley, but in India. In this conversation, Zoho executive Chandrasekhar “LSP” joins Your Tech Report to unpack what makes Zoho's approach to AI different—and why it could reshape how businesses automate, analyze, and serve customers. With its own infrastructure, private data policies, and “no AI tax” pricing model, Zoho aims to give businesses control over their data, their automation, and their outcomes. LSP explains how Zoho's custom-built LLMs are trained on licensed datasets, operate within customer firewalls, and are tailored to specific business contexts—unlike consumer LLMs from OpenAI or Google. We also dive into Zoho's digital employee framework, the Zoho Directory's access guardrails, and the new Zia agent marketplace, which enables developers to create and monetize AI agents. From speech recognition to interoperability across platforms, this episode offers a deep look into Zoho's vision for AI—one grounded in privacy, performance, and purpose. 0:00 – Zoho's Big AI Announcement 3:25 – Why Zoho Built Its Own LLM from Scratch 8:40 – Privacy by Design: No Data Sharing, No AI Tax 12:20 – Digital Employees vs Traditional Agents 16:10 – Zoho Directory & Enterprise Guardrails 21:15 – Zia Marketplace and Multi-Agent Workflows 27:10 – Speech Recognition and Low-Resource Language Support 31:00 – Staying Grounded Through the AI Hype 35:45 – Zoho's Vision for Accessible, Affordable AI 38:00 – Zoholics Conference Preview #ZohoZia #AIPrivacy #LLM #DigitalEmployees #EnterpriseAI #YourTechReport #Zoholics Learn more about your ad choices. Visit megaphone.fm/adchoices

Land Stewardship Project's Ear to the Ground
Ear to the Ground 378: Dumping the Doubts

Land Stewardship Project's Ear to the Ground

Play Episode Listen Later Jul 26, 2025 30:00


Noreen Thomas got into organic crop farming almost three decades ago as a way to produce healthy food and survive economically. Today, she’s the mentor she never had. More Information • Register for LSP's “Bringing Small Grains Back to Minnesota” Networking Meeting on Aug. 2, 2025, in Madison, Minn. • Doubting Thomas Farms • 2025…  Read More → Source

SUBJECT TO INTERPRETATION
Interpreters, AI & Advocacy – Safe AI Taskforce (part 2) [EP 86]

SUBJECT TO INTERPRETATION

Play Episode Listen Later Jul 25, 2025 69:25


In this episode of Subject to Interpretation, host Maria Ceballos sits down with Ludmila Golovine (CEO of MasterWord Services) and Dr. Bill Rivers (Principal at WP Rivers & Associates), members of the Safe AI Taskforce, to continue the very important conversation on the impacts of AI in the language services field. Tune in to better understand how language professionals can respond to emerging technologies, learn the contexts where a human presence will continue or, perhaps, be even more necessary moving forward—and why every interpreter should stay informed and involved. Click here to watch the 1st part of this must-listen conversation with the Safe AI Taskforce: https://www.youtube.com/watch?v=P4gZASietjg Visit the Safe AI Taskforce website: https://safeaitf.org/ Ludmila Golovine is the President and CEO of MasterWord Services, Inc., a top-ranked LSP globally. She has dedicated over 30 years to the language services industry, and for the past 15 years has been an international speaker/advocate for language rights and social justice. She is the Strategic Partnerships Manager for the Global Community Programs of Women in Localization, a founding member of the Global Coalition of Language Rights, member of TBAT (Texas Business Against Human Trafficking), active participant in the UN Global Compact Initiative, and chairs the Advisory Subcommittee for the Translation and Interpretation Program at the Houston Community College. Her work has been recognized by numerous awards, including California Healthcare Interpreting Association (CHIA) Trainer of the Year Award 2021, Houston Business Journal's Women Who Mean Business Award, and Congressional Recognition G7 “Excellence in International Service” award.
A former Russian/English translator and interpreter, Russian teacher, academic researcher and administrator, and for-profit and non-profit executive, he has more than 30 years' experience in language advocacy and capacity at the national level, with significant experience in culture and language for economic development and national security in the Intelligence Community, private and academic sectors, and publications in second and third language acquisition research, proficiency assessment, program evaluation, and language policy development and advocacy. His company is contracted by the ALC for advocacy support.

Land Stewardship Project's Ear to the Ground
Ear to the Ground 377: Flour Power

Land Stewardship Project's Ear to the Ground

Play Episode Listen Later Jul 23, 2025 43:44


When Peter and Brittany Haugen sought to diversify their western Minnesota crop farm, they realized there was little infrastructure available to support small grains. So they forged their own link in the food chain by launching Sandhill Mill. More Information • Register for LSP’s “Bringing Small Grains Back to Minnesota” Networking Meeting on Aug. 2,…  Read More → Source

Smart Software with SmartLogic
Set Theoretic Types in Elixir with José Valim

Smart Software with SmartLogic

Play Episode Listen Later Jul 10, 2025 45:40


Elixir creator José Valim returns to the podcast to unpack the latest developments in Elixir's set-theoretic type system and how it is slotting into existing code without requiring annotations. We discuss familiar compiler warnings, new warnings based on inferred types, a phased rollout in v1.19/v1.20 that preserves backward compatibility, performance profiling the type checks across large codebases, and precise typing for maps as both records and dictionaries. José also touches on CNRS academic collaborations, upcoming LSP/tooling enhancements, and future possibilities like optional annotations and guard-clause typing, all while keeping Elixir's dynamic, developer-friendly experience front and center. Key topics discussed in this episode: Set-theoretic typing (union, intersection, difference) Compiler-driven inference with zero annotations Phased rollout strategy in 1.19 and 1.20 Performance profiling for large codebases Map typing as records and dictionaries Exhaustivity checks and behavioral typing in GenServers Language Server Protocol & tooling updates Future optional annotations and guard-clause typing CNRS collaboration for theoretical foundations Clear error messages and false-positive reduction Community-driven feedback and iterative improvements Links mentioned: https://github.com/elixir-nx https://livebook.dev/ https://hexdocs.pm/phoenixliveview/Phoenix.LiveView.html https://hexdocs.pm/elixir/main/gradual-set-theoretic-types.html https://hexdocs.pm/dialyxir/0.4.0/readme.html https://remote.com/ Draw the Owl meme: https://i.imgur.com/rCr9A.png https://dashbit.co/blog/data-evolution-with-set-theoretic-types https://hexdocs.pm/ecto/Ecto.html https://github.com/elixir-lsp/elixir-ls Special Guest: José Valim.
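For intuition only, the union/intersection/difference operations José describes can be mimicked with Python frozensets; this toy tag-set model is my own illustration and nothing like Elixir's actual type-checker internals:

```python
# Toy model: a "type" is a frozenset of primitive type tags, so the
# set-theoretic operators are literally Python's set operators.
Integer = frozenset({"integer"})
Float = frozenset({"float"})
Atom = frozenset({"atom"})

Number = Integer | Float  # union type: integer() or float()

def is_subtype(a, b):
    """a is a subtype of b if every value of a is also a value of b."""
    return a <= b

checks = [
    is_subtype(Integer, Number),      # integer() is a number()
    not is_subtype(Number, Integer),  # but not vice versa
    Number & Integer == Integer,      # intersection
    Number - Float == Integer,        # difference: number() and not float()
    Integer & Atom == frozenset(),    # disjoint types: empty intersection
]
print(all(checks))  # True
```

The appeal of the set-theoretic approach is visible even in the toy: subtyping, negation, and exhaustivity checks all reduce to ordinary set algebra.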

The KTS Success Factor™ (a Podcast for Women)
The Language of Success from an Interpreter's Perspective with Theresa Slater

The KTS Success Factor™ (a Podcast for Women)

Play Episode Listen Later Jul 2, 2025 27:36


Scattered doubts and self-criticism can hinder women from achieving their full potential in business. Without action, these feelings can lead to missed opportunities and unfulfilled dreams. By taking small steps and seeking support, women can overcome obstacles and thrive in their entrepreneurial journeys. Theresa Slater, known as Terry, is the president of Empire Interpreting Services, which she founded in 2003. With over 300 interpreters and a range of customer-centric services, her company has become an award-winning organization. Terry is also a speaker, author, and advisor to new entrepreneurs. Her new book, The Language of Success: An Interpreter's Entrepreneurial Journey, is part autobiography and part how-to guide for aspiring business owners. In this episode, Terry shares her inspiring journey from a challenging upbringing to becoming a successful entrepreneur. She discusses the importance of overcoming imposter syndrome, the value of self-care, and the necessity of hiring help to grow both personally and professionally.   What you will learn from this episode: Understand the impact of imposter syndrome and how to overcome it. Discover the importance of self-care and physical strength in empowering women. Gain insights into the significance of listening to your gut in decision-making.   “Stop caring what other people think. Become physically stronger. And start to care about being respected and not being liked.” – Theresa Slater   Valuable Free Resource: Check out Theresa's book, The Language of Success, for insights and strategies on entrepreneurship.   
Topics Covered: 01:48 - Understanding Empire Interpreting Services, Deep dive into what a language service provider (LSP) actually does and why it matters 02:48 - Terry's Transformational Journey into Business 05:48 - Mastering Sign Language and Expanding Services, Terry's journey learning sign language and building comprehensive language solutions 07:04 - Overcoming Critical Business-Building Challenges, Conquering personal obstacles, including the hidden enemy of imposter syndrome 10:19 - The Strategic Self-Improvement Journey, Why self-care isn't selfish — it's an essential business strategy 13:54 - The Story Behind Writing the Book, What inspired and motivated Terry to finally share her entrepreneurial blueprint 16:22 - Mastering the Art of Strategic Hiring, Critical importance of hiring support for exponential business growth and personal freedom 20:02 - Self-Care as Competitive Advantage, The devastating impact of neglecting self-care on health, performance, and success 23:08 - Continuous Growth and Adaptation, Why ongoing learning and adaptation are non-negotiable in today's business landscape 24:54 - Essential Advice for Women Entrepreneurs, The power of listening to your gut instincts for better decision-making   Key Takeaways: “You have to applaud how far you've come, especially when you come from a challenging background.” – Theresa Slater “Self-care is not selfish; it's essential for your well-being and success.” – Theresa Slater “Listening to your gut can save you from making poor decisions in business and life.” – Theresa Slater   Ways to Connect with Theresa Slater: Website: https://www.empireinterpreting.com/  Email: tslater@empireinterpreting.com   Ways to Connect with Sarah E. Brown: Website: https://www.sarahebrown.com Facebook: https://www.facebook.com/DrSarahEBrown LinkedIn: https://www.linkedin.com/in/sarahebrownphd To speak with her: bookachatwithsarahebrown.com

Light 'Em Up
Uncharted Waters, Unprecedented Times: Will Your Hard-won Civil Liberties be Lost? The Trump DOJ Green-lights Police Brutality. The Push to Pardon George Floyd's Killer. Will America's Experiment in Self-Government Survive the Slide into Tyrann

Light 'Em Up

Play Episode Listen Later Jun 21, 2025 71:05


Welcome to this educational and explosive, brand-new edition of Light ‘Em Up! Share us with a friend! We are now being actively downloaded in 131 countries!
We continue our intense focus on how the Rule of law and democracy are being endangered. Democracy hangs in the balance and is under constant daily attack — threatened on every front. What better example than the current Department of Justice (DOJ) ordering its civil rights division to halt the majority of its functions, including a freeze on pursuing any: new cases, indictments, or consent decree settlements.
For civil rights this is a crisis! It has only been 59 years since the Voting Rights Act of 1965 was passed. This was a landmark piece of legislation that helped to dismantle many discriminatory barriers and enforce the voting rights of African Americans. Imagine having that office shut down during the LBJ Administration! The KKK would have won!
In a democracy, the majority can wield immense power, potentially leading to the suppression of dissenting voices and the marginalization of minority groups. You had better begin to ask yourself the tough question: are you okay with your civil rights being suspended until 2028 and maybe beyond? White people, too, can have their civil rights violated. Are you ready for that?
Will the police be able to simply continue to brutalize people and get away with it as the Louisiana State Police did on May 10th, 2019, with Ronald Greene? Greene was an unarmed 49-year-old black man whom, on a dark night in Monroe, Louisiana, 6 members of the LSP “goon squad” tazed, punched, kicked, pepper sprayed, and dragged face down on the concrete, only to place him in a chokehold until he died. Good night and good luck! 
Under this current Trump administration your civil rights will be “enforced” like his were. We are staring in the face of “soft despotism” or “soft tyranny.” This occurs when a powerful, centralized state, while not overtly oppressive, gradually takes over the responsibilities and decision-making of individuals and communities. The state becomes like a benevolent but overbearing parent, providing for citizens' needs and ensuring their well-being, but in doing so, it diminishes their capacity for independent thought and action. We've arrived there; stop fooling yourself otherwise.
We'll discuss and analyze the current push from the ultra-conservative talk-show host Ben Shapiro to petition the adjudicated felon Donald Trump to federally pardon Derek Chauvin, the felon and former police officer — who drove his knee into the neck of George Floyd for more than 9 minutes, hastening his death on May 25th, 2020. We have passed the 5-year mark of this deadly encounter on the streets of Minneapolis, MN, and tell me, what has changed for the better? Shapiro clearly sees this as an opportunity to continue to support his white, racist agenda as it gins up his base of white nationalist followers. MAGA-folk and beyond!
We ask out loud: Could a president do that? What would it matter, since Chauvin also is in prison on state charges?
And we'll wrap things up looking at what happens to democracy when police regularly brutalize citizens, as the “politics of policing” has changed drastically since George Floyd's death.
The truth is under attack! The truth is worth defending! Tune in for all of the explosive details. Justice comes to those that fight, not those that cry! Without fear or favor we follow the facts and tackle the topics that touch your lives. Follow our sponsors: Newsly & Feedspot. We want to hear from you!

Land Stewardship Project's Ear to the Ground
Ear to the Ground 374: The Power of Being Heard

Land Stewardship Project's Ear to the Ground

Play Episode Listen Later Jun 12, 2025 36:42


LSP members and allies spoke out during the 2025 session of the Minnesota Legislature. It paid off in the form of support for local food markets, farmland access, and soil health. More Information • Blog: Wrap-up of 2025 Legislative Session • LSP’s State Policy Web Page You can find LSP Ear to the Ground podcast…  Read More → Source

Ask Jim Miller
The Best Way to Communicate Market Stats ✍️ | Monday Morning Pep Talk 281 #realestatepodcast

Ask Jim Miller

Play Episode Listen Later Jun 1, 2025 13:20


If you want to stand out as a real estate advisor in 2025, you need more than market knowledge—you need to communicate that data clearly, confidently, and consistently across every platform you touch. In Episode #281 of Monday Morning Pep Talk, Jim Miller shows you exactly how to translate raw market stats into compelling narratives—using AI tools like ChatGPT to write better listing presentations, Instagram captions, LinkedIn posts, and newsletter updates. You'll learn how to turn metrics like months of supply, days on market (DOM), and list-to-sale price ratio (LSP) into trust-building insights that set the tone for every buyer and seller conversation.
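For context, the three metrics Jim names have standard textbook definitions; a quick sketch with made-up sample numbers (not figures from the episode):

```python
from datetime import date

# Months of supply: how long the active inventory would last
# at the current monthly sales pace.
active_listings = 120
closed_sales_per_month = 40
months_of_supply = active_listings / closed_sales_per_month  # 3.0 months

# Days on market (DOM): days from listing to going under contract.
dom = (date(2025, 5, 20) - date(2025, 5, 1)).days  # 19 days

# List-to-sale price ratio (LSP): what sellers actually receive
# relative to the asking price.
list_price, sale_price = 500_000, 490_000
lsp_ratio = sale_price / list_price * 100  # 98.0 percent

print(months_of_supply, dom, round(lsp_ratio, 1))
```

Turning these raw numbers into a narrative (e.g. "at 3 months of supply, sellers still hold the leverage") is the communication skill the episode is about.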

On The Brink
Episode 418: Jolynn Ledgerwood

On The Brink

Play Episode Listen Later May 27, 2025 62:27


Jolynn D. Ledgerwood has over 25 years of experience in Learning and Development. Her experience spans Hospitality, Consumer Goods, Professional Services, IT, and Cyber Security. She has worked with several large companies including PepsiCo, Brinker International, Frito-Lay, Critical Start, and Toyota Motors. While she enjoyed her work in the large corporate setting, she was discouraged by the methodologies for Team Building and for allowing ALL members a voice. When she found LEGO®️ Serious Play®️, she was drawn to its familiarity and plentiful application opportunities. (LSP has over 15,000 facilitators in European countries, and only 100+ in the US.) She added it to her list of Certifications, including Bob Goff's Dream Big, The Primal Question, and Gallup StrengthsFinder.

SlatorPod
#249 How to Expand in AI Data Services with DATAmundi CEO Véronique Özkaya

SlatorPod

Play Episode Listen Later May 6, 2025 37:12


Véronique Özkaya, Co-CEO of DATAmundi, returns to SlatorPod for round 2 to talk about the company's strategic rebrand and how it is positioning itself as a key player in the data-for-AI space. Véronique details her journey to leading DATAmundi, formerly known as Summa Linguae, where she now drives a strategic shift from traditional language services to AI-focused data enablement. The Co-CEO explains that their LSP background makes them well-suited to offer fine-tuning services for AI, especially in multilingual and domain-specific contexts. However, she cautions that language expertise alone isn't enough; deep tech infrastructure, data science capabilities, and the ability to quickly build custom workflows are also essential. While many companies still rely on crowd-sourced, basic annotation, DATAmundi targets higher-complexity projects requiring domain experts and linguists. Véronique notes the market for data-for-AI is growing significantly faster than traditional LSP work and sees a second wave of demand from enterprises needing to adapt pre-trained models. Véronique highlights data scarcity, hallucination, and bias as core AI challenges that DATAmundi tackles through technical solutions and expert guidance, helping enterprises as they face pressure to implement AI despite legacy systems and unclear strategies. Looking ahead, DATAmundi plans to expand its consultative services through further acquisitions, focusing not on tech per se, but on organizations that deepen its expertise in data application and AI deployment.

SlatorPod
#248 DeepL Plants Flag on iPhone, RWS Stock Puzzle

SlatorPod

Play Episode Listen Later May 2, 2025 29:38


Florian and Esther discuss the language industry news of the week, with DeepL becoming the first third-party translation app users can set as default on the iPhone, a position gained by navigating Apple's developer requirements that others like Google Translate have yet to meet. Florian and Esther examine RWS's mid-year trading update, which triggered a steep 40% share price drop despite stable revenue, healthy profits, and manageable debt. On the partnerships front, the duo covers multiple collaborations: Acclaro and Phrase co-funded a new Solutions Architect role, Unbabel entered a strategic partnership with Acclaro, and Phrase partnered with Clearly Local in Shanghai. Also, KUDO expanded its network with new partners, while Deepdub was featured in an AWS case study for its work with Paramount. Wistia partnered with HeyGen to launch translation and AI-dubbing features, and Synthesia joined forces with DeepL, further cementing the trend of avatar-based multilingual video content. In Esther's M&A corner, MotionPoint acquired GetGloby to enhance multilingual marketing capabilities, while OXO and Powerling merged to form a transatlantic LSP leader. TransPerfect deepened its media footprint with two studio acquisitions from Technicolor, and Magna Legal Services continued its acquisition spree with Basye Santiago Reporting. Meanwhile, in funding, Linguana, an AI dubbing startup targeted at YouTube creators, raised USD 8.5m, and pyannoteAI secured EUR 8m to enhance multilingual voice tech using speaker diarization. The episode concluded with speculation about DeepL's rumored IPO, which could have broader implications for capital markets.

The Art of SBA Lending
The Model Has Changed: Rethinking SBA Operations ft. Mike Breckheimer, Brian Carlson & Chris Kwiatkowski | Ep. 178

The Art of SBA Lending

Play Episode Listen Later May 1, 2025 50:00


This week on The Art of SBA Lending, we're confronting the hard truth: the traditional SBA lending model no longer works. Rising overhead, margin compression, and tighter audits have flipped the economics, and now, shops across the country are closing or scrambling to restructure. So what's the new model? And how do you scale without losing control? Ray Drew is joined by three SBA leaders navigating this shift in real time. Mike Breckheimer, Brian Carlson, and Chris Kwiatkowski are coming together to unpack the numbers, the pitfalls, and the path forward.

Land Stewardship Project's Ear to the Ground
Ear to the Ground 369: Emerging Agrarians

Land Stewardship Project's Ear to the Ground

Play Episode Listen Later Apr 11, 2025 37:52


Ka Zoua Berry says supporting a future generation of farmers who don't fit the traditional Midwestern stereotype isn't just about building a resilient farm and food system. It's also about building resilient communities. More Information • Big River Farms • Emerging Farmers Conference • Farmland Access Hub • LSP Farmland Clearinghouse You can find LSP…  Read More → Source

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

We are happy to announce that there will be a dedicated MCP track at the 2025 AI Engineer World's Fair, taking place Jun 3rd to 5th in San Francisco, where the MCP core team and major contributors and builders will be meeting. Join us and apply to speak or sponsor! When we first wrote Why MCP Won, we had no idea how quickly it was about to win. In the past 4 weeks, OpenAI and now Google have announced MCP support, effectively confirming our prediction that MCP was the presumptive winner of the agent standard wars. MCP has now overtaken OpenAPI, the incumbent option and most direct alternative, in GitHub stars (3 months ahead of the conservative trendline). We have explored the state of MCP at AIE (now the first-ever >100k-view workshop). And since then, we've added a 7th reason why MCP won: this team acts very quickly on feedback, with the 2025-03-26 spec update adding support for stateless/resumable/streamable HTTP transports, and comprehensive authz capabilities based on OAuth 2.1. This bodes very well for the future of the community and project. For protocol and history nerds, we also asked David and Justin to tell the origin story of MCP, which we leave to the reader to enjoy (you can also skim the transcripts, or the changelogs of a certain favored IDE).
It's incredible the impact that individual engineers solving their own problems can have on an entire industry.

Full video episode
Like and subscribe on YouTube!

Show Links
* David
* Justin
* MCP
* Why MCP Won

Timestamps
* 00:00 Introduction and Guest Welcome
* 00:37 What is MCP?
* 02:00 The Origin Story of MCP
* 05:18 Development Challenges and Solutions
* 08:06 Technical Details and Inspirations
* 29:45 MCP vs Open API
* 32:48 Building MCP Servers
* 40:39 Exploring Model Independence in LLMs
* 41:36 Building Richer Systems with MCP
* 43:13 Understanding Agents in MCP
* 45:45 Nesting and Tool Confusion in MCP
* 49:11 Client Control and Tool Invocation
* 52:08 Authorization and Trust in MCP Servers
* 01:01:34 Future Roadmap and Stateless Servers
* 01:10:07 Open Source Governance and Community Involvement
* 01:18:12 Wishlist and Closing Remarks

Transcript

Alessio [00:00:02]: Hey, everyone. Welcome back to Latent Space. This is Alessio, partner and CTO at Decibel, and I'm joined by my co-host Swyx, founder of Small AI.

swyx [00:00:10]: Hey, morning. And today we have a remote recording, I guess, with David and Justin from Anthropic over in London. Welcome. Hey, good. You guys have created a storm of hype because of MCP, and I'm really glad to have you on. Thanks for making the time. What is MCP? Let's start with a crisp definition from the horse's mouth, and then we'll go into the origin story. But let's start off right off the bat. What is MCP?

Justin/David [00:00:43]: Yeah, sure. So Model Context Protocol, or MCP for short, is basically something we've designed to help AI applications extend themselves or integrate with an ecosystem of plugins, basically. The terminology is a bit different. We use this client-server terminology, and we can talk about why that is and where that came from. But at the end of the day, it really is that.
It's like extending and enhancing the functionality of AI application.swyx [00:01:05]: David, would you add anything?Justin/David [00:01:07]: Yeah, I think that's actually a good description. I think there's like a lot of different ways for how people are trying to explain it. But at the core, I think what Justin said is like extending AI applications is really what this is about. And I think the interesting bit here that I want to highlight, it's AI applications and not models themselves that this is focused on. That's a common misconception that we can talk about a bit later. But yeah. Another version that we've used and gotten to like is like MCP is kind of like the USB-C port of AI applications and that it's meant to be this universal connector to a whole ecosystem of things.swyx [00:01:44]: Yeah. Specifically, an interesting feature is, like you said, the client and server. And it's a sort of two-way, right? Like in the same way that said a USB-C is two-way, which could be super interesting. Yeah, let's go into a little bit of the origin story. There's many people who've tried to make statistics. There's many people who've tried to build open source. I think there's an overall, also, my sense is that Anthropic is going hard after developers in the way that other labs are not. And so I'm also curious if there was any external influence or was it just you two guys just in a room somewhere riffing?Justin/David [00:02:18]: It is actually mostly like us two guys in a room riffing. So this is not part of a big strategy. You know, if you roll back time a little bit and go into like July 2024. I was like, started. I started at Anthropic like three months earlier or two months earlier. And I was mostly working on internal developer tooling, which is what I've been doing for like years and years before. 
And as part of that, I think there was an effort of like, how do I empower more like employees at Anthropic to use, you know, to integrate really deeply with the models we have? Because we've seen these, like, how good it is, how amazing it will become even in the future. And of course, you know, just dogfoot your own model as much as you can. And as part of that. From my development tooling background, I quickly got frustrated by the idea that, you know, on one hand side, I have Cloud Desktop, which is this amazing tool with artifacts, which I really enjoyed. But it was very limited to exactly that feature set. And it was there was no way to extend it. And on the other hand side, I like work in IDEs, which could greatly like act on like the file system and a bunch of other things. But then they don't have artifacts or something like that. And so what I constantly did was just copy. Things back and forth on between Cloud Desktop and the IDE, and that quickly got me, honestly, just very frustrated. And part of that frustration wasn't like, how do I go and fix this? What, what do we need? And back to like this development developer, like focus that I have, I really thought about like, well, I know how to build all these integrations, but what do I need to do to let these applications let me do this? And so it's very quickly that you see that this is clearly like an M times N problem. Like you have multiple like applications. And multiple integrations you want to build and like, what that is better there to fix this than using a protocol. And at the same time, I was actually working on an LSP related thing internally that didn't go anywhere. But you put these things together in someone's brain and let them wait for like a few weeks. And out of that comes like the idea of like, let's build some, some protocol. And so back to like this little room, like it was literally just me going to a room with Justin and go like, I think we should build something like this. 
Uh, this is a good idea. And Justin. Lucky for me, just really took an interest in the idea, um, and, and took it from there to like, to, to build something, together with me, that's really the inception story is like, it's us to, from then on, just going and building it over, over the course of like, like a month and a half of like building the protocol, building the first integration, like Justin did a lot of the, like the heavy lifting of the first integrations in cloud desktop. I did a lot of the first, um, proof of concept of how this can look like in an IDE. And if you, we could talk about like some of. All the tidbits you can find way before the inception of like before the official release, if you were looking at the right repositories at the right time, but there you go. That's like some of the, the rough story.Alessio [00:05:12]: Uh, what was the timeline when, I know November 25th was like the official announcement date. When did you guys start working on it?Justin/David [00:05:19]: Justin, when did we start working on that? I think it, I think it was around July. I think, yeah, I, as soon as David pitched this initial idea, I got excited pretty quickly and we started working on it, I think. I think almost immediately after that conversation and then, I don't know, it was a couple, maybe a few months of, uh, building the really unrewarding bits, if we're being honest, because for, for establishing something that's like this communication protocol has clients and servers and like SDKs everywhere, there's just like a lot of like laying the groundwork that you have to do. So it was a pretty, uh, that was a pretty slow couple of months. But then afterward, once you get some things talking over that wire, it really starts to get exciting and you can start building. All sorts of crazy things. And I think this really came to a head. 
And I don't remember exactly when it was, maybe like approximately a month before release, there was an internal hackathon where some folks really got excited about MCP and started building all sorts of crazy applications. I think the coolest one of which was like an MCP server that can control a 3d printer or something. And so like, suddenly people are feeling this power of like cloud connecting to the outside world in a really tangible way. And that, that really added some, uh, some juice to us and to the release.Alessio [00:06:32]: Yeah. And we'll go into the technical details, but I just want to wrap up here. You mentioned you could have seen some things coming if you were looking in the right places. We always want to know what are the places to get alpha, how, how, how to find MCP early.Justin/David [00:06:44]: I'm a big Zed user. I liked the Zed editor. The first MCP implementation on an IDE was in Zed. It was written by me and it was there like a month and a half before the official release. Just because we needed to do it in the open because it's an open source project. Um, and so it was, it was not, it was named slightly differently because we. We were not set on the name yet, but it was there.swyx [00:07:05]: I'm happy to go a little bit. Anthropic also had some preview of a model with Zed, right? Some kind of fast editing, uh, model. Um, uh, I, I'm con I confess, you know, I'm a cursor windsurf user. Haven't tried Zed. Uh, what's, what's your, you know, unrelated or, you know, unsolicited two second pitch for, for Zed. That's a good question.Justin/David [00:07:28]: I, it really depends what you value in editors. For me. I, I wouldn't even say I like, I love Zed more than others. I like them all like complimentary in, in a way or another, like I do use windsurf. I do use Zed. 
Um, but I think my, my main pitch for Zed is low latency, super smooth experience editor with a decent enough AI integration.swyx [00:07:51]: I mean, and maybe, you know, I think that's, that's all it is for a lot of people. Uh, I think a lot of people obviously very tied to the VS code paradigm and the extensions that come along with it. Okay. So I wanted to go back a little bit. You know, on, on, on some of the things that you mentioned, Justin, uh, which was building MCP on paper, you know, obviously we only see the end result. It just seems inspired by LSP. And I, I think both of you have acknowledged that. So how much is there to build? And when you say build, is it a lot of code or a lot of design? Cause I felt like it's a lot of design, right? Like you're picking JSON RPC, like how much did you base off of LSP and, and, you know, what, what, what was the sort of hard, hard parts?Justin/David [00:08:29]: Yeah, absolutely. I mean, uh, we, we definitely did take heavy inspiration from LSP. David had much more prior experience with it than I did working on developer tools. So, you know, I've mostly worked on products or, or sort of infrastructural things. LSP was new to me. But as a, as a, like, or from design principles, it really makes a ton of sense because it does solve this M times N problem that David referred to where, you know, in the world before LSP, you had all these different IDEs and editors, and then all these different languages that each wants to support or that their users want them to support. And then everyone's just building like one. And so, like, you use Vim and you might have really great support for, like, honestly, I don't know, C or something, and then, like, you switch over to JetBrains and you have the Java support, but then, like, you don't get to use the great JetBrains Java support in Vim and you don't get to use the great C support in JetBrains or something like that. 
So LSP largely, I think, solved this problem by creating this common language that they could all speak and that, you know, you can have some people focus on really robust language server implementations, and then the IDE developers can really focus on that side. And they both benefit. So that was, like, our key takeaway for MCP is, like, that same principle and that same problem in the space of AI applications and extensions to AI applications. But in terms of, like, concrete particulars, I mean, we did take JSON RPC and we took this idea of bidirectionality, but I think we quickly took it down a different route after that. I guess there is one other principle from LSP that we try to stick to today, which is, like, this focus on how features manifest. More than. The semantics of things, if that makes sense. David refers to it as being presentation focused, where, like, basically thinking and, like, offering different primitives, not because necessarily the semantics of them are very different, but because you want them to show up in the application differently. Like, that was a key sort of insight about how LSP was developed. And that's also something we try to apply to MCP. But like I said, then from there, like, yeah, we spent a lot of time, really a lot of time, and we could go into this more separately, like, thinking about each of the primitives that we want to offer in MCP. And why they should be different, like, why we want to have all these different concepts. That was a significant amount of work. That was the design work, as you allude to. But then also already out of the gate, we had three different languages that we wanted to at least support to some degree. That was TypeScript, Python, and then for the Z integration, it was Rust. So there was some SDK building work in those languages, a mixture of clients and servers to build out to try to create this, like, internal ecosystem that we could start playing with. 
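The M times N point David and Justin keep returning to can be made concrete with a toy count: without a shared protocol, every application-integration pair needs its own adapter; with one, each side implements the protocol once. The client and integration names below are hypothetical examples:

```python
# Toy illustration of the M x N problem a shared protocol collapses to M + N.
# The specific client and integration names are hypothetical examples.
clients = ["Claude Desktop", "Zed", "Cursor"]             # M applications
integrations = ["GitHub", "Postgres", "Sentry", "Slack"]  # N integrations

without_protocol = len(clients) * len(integrations)  # one adapter per pair
with_protocol = len(clients) + len(integrations)     # one implementation per side

print(without_protocol, with_protocol)  # 12 vs. 7
```

The gap widens quickly: at 20 clients and 100 integrations, that is 2,000 adapters versus 120 implementations.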
And then, yeah, I guess just trying to make everything, like, robust over, like, I don't know, this whole, like, concept that we have for local MCP, where you, like, launch subprocesses and stuff and making that robust took some time as well. Yeah, maybe adding to that, I think the LSP inference goes even a little bit further. Like, we did take actually quite a look at criticisms on LSP, like, things that LSP didn't do right and things that people felt they would love to have different and really took that to heart to, like, see, you know, what are some of the things. that we wish, you know, we should do better. We took a, you know, like, a lengthy, like, look at, like, their very unique approach to JSON RPC, I may say, and then we decided that this is not what we do. And so there's, like, these differences, but it's clearly very, very inspired. Because I think when you're trying to build and focus, if you're trying to build something like MCP, you kind of want to pick the areas you want to innovate in, but you kind of want to be boring about the other parts in pattern matching LSP. So the problem allows you to be boring in a lot of the core pieces that you want to be boring in. Like, the choice of JSON RPC is very non-controversial to us because it's just, like, it doesn't matter at all, like, what the action, like, bites on the bar that you're speaking. It makes no difference to us. The innovation is on the primitives you choose and these type of things. And so there's way more focus on that that we wanted to do. So having some prior art is good there, basically.swyx [00:12:26]: It does. I wanted to double click. I mean, there's so many things you can go into. Obviously, I am passionate about protocol design. I wanted to show you guys this. I mean, I think you guys know, but, you know, you already referred to the M times N problem. 
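For the curious, the "boring" transport choice looks roughly like this: a hand-rolled sketch of MCP-style JSON-RPC 2.0 framing using only the standard library. The `tools/call` method and field shapes follow the published MCP spec, but treat the exact details here as illustrative rather than normative, and the tool name as a made-up example:

```python
import json

# Sketch of MCP's JSON-RPC 2.0 framing. "get_weather" is a hypothetical tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "London"}},
}

# The reply echoes the request id; tool results come back as model-ready
# content blocks rather than schema-bound structured data.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "Rainy, 12 degrees C"}]},
}

wire = json.dumps(request)  # what actually crosses the transport
print(wire)
```

Because the framing is this plain, the interesting design work shifts to the primitives carried over it, which is exactly the point made above.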
And I can just share my screen here about anyone working in developer tools has faced this exact issue where you see the God box, basically. Like, the fundamental problem and solution of all infrastructure engineering is you have things going to N things, and then you put the God box and they'll all be better, right? So here is one problem for Uber. One problem for... GraphQL, one problem for Temporal, where I used to work at, and this is from React. And I was just kind of curious, like, you know, did you solve N times N problems at Facebook? Like, it sounds like, David, you did that for a living, right? Like, this is just N times N for a living.Justin/David [00:13:16]: David Pérez- Yeah, yeah. To some degree, for sure. I did. God, what a good example of this, but like, I did a bunch of this kind of work on like source control systems and these type of things. And so there were a bunch of these type of problems. And so you just shove them into something that everyone can read from and everyone can write to, and you build your God box somewhere, and it works. But yeah, it's just in developer tooling, you're absolutely right. In developer tooling, this is everywhere, right?swyx [00:13:47]: And that, you know, it shows up everywhere. And what was interesting is I think everyone who makes the God box then has the same set of problems, which is also you now have like composability off and remotes versus local. So, you know, there's this very common set of problems. So I kind of want to take a meta lesson on how to do the God box, but, you know, we can talk about the sort of development stuff later. I wanted to double click on, again, the presentation that Justin mentioned of like how features manifest and how you said some things are the same, but you just want to reify some concepts so they show up differently. And I had that sense, you know, when I was looking at the MCP docs, I'm like, why do these two things need to be the difference in other? 
I think a lot of people treat tool calling as the solution to everything, right? And sometimes you can actually sort of view kinds of different kinds of tool calls as different things. And sometimes they're resources. Sometimes they're actually taking actions. Sometimes they're something else that I don't really know yet. But I just want to see, like, what are some things that you sort of mentally group as adjacent concepts and why were they important to you to emphasize?Justin/David [00:14:58]: Yeah, I can chat about this a bit. I think fundamentally we every sort of primitive that we thought through, we thought from the perspective of the application developer first, like if I'm building an application, whether it is an IDE or, you know, call a desktop or some agent interface or whatever the case may be, what are the different things that I would want to receive from like an integration? And I think once you take that lens, it becomes quite clear that that tool calling is necessary, but very insufficient. Like there are many other things you would want to do besides just get tools. And plug them into the model and you want to have some way of differentiating what those different things are. So the kind of core primitives that we started MCP with, we've since added a couple more, but the core ones are really tools, which we've already talked about. It's like adding, adding tools directly to the model or function calling is sometimes called resources, which is basically like bits of data or context that you might want to add to the context. So excuse me, to the, to the model context. And this, this is the first primitive where it's like, we, we. Decided this could be like application controlled, like maybe you want a model to automatically search through and, and find relevant resources and bring them into context. 
But maybe you also want that to be an explicit UI affordance in the application where the user can like, you know, pick through a dropdown or like a paperclip menu or whatever, and find specific things and tag them in. And then that becomes part of like their message to the LLM. Like those are both use cases for resources. And then the third one is prompts. Which are deliberately meant to be like user initiated or. Like. User substituted. Text or messages. So like the analogy here would be like, if you're an editor, like a slash command or something like that, or like an at, you know, auto completion type thing where it's like, I have this kind of macro effectively that I want to drop in and use. And we have sort of expressed opinions through MCP about the different ways that these things could manifest, but ultimately it is for application developers to decide, okay, you, you get these different concepts expressed differently. Um, and it's very useful as an application developer because you can decide. The appropriate experience for each, and actually this can be a point of differentiation to, like, we were also thinking, you know, from the application developer perspective, they, you know, application developers don't want to be commoditized. They don't want the application to end up the same as every other AI application. So like, what are the unique things that they could do to like create the best user experience even while connecting up to this big open ecosystem of integration? I, yeah. And I think to add to that, the, I think there are two, two aspects to that, that I want to. I want to mention the first one is that interestingly enough, like while nowadays tool calling is obviously like probably like 95% plus of the integrations, and I wish there would be, you know, more clients doing tool resources, doing prompts. The, the very first implementation in that is actually a prompt implementation. It doesn't deal with tools. 
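The three primitives Justin walks through, split by who initiates each one, can be summarized in a toy registry. The class and field names below are illustrative only, not the actual MCP SDK API:

```python
from dataclasses import dataclass, field

# Toy models of MCP's three core primitives, keyed by who initiates them.
# Class names and fields are illustrative, not the SDK's real API.

@dataclass
class Tool:       # model-controlled: the LLM decides when to invoke it
    name: str
    description: str
    input_schema: dict

@dataclass
class Resource:   # application-controlled: surfaced via UI menus or search
    uri: str      # every resource is uniquely identified by a URI
    name: str
    mime_type: str = "text/plain"

@dataclass
class Prompt:     # user-controlled: a slash-command-style macro
    name: str
    messages: list = field(default_factory=list)  # may be a multi-step chain

server_capabilities = {
    "tools": [Tool("query_db", "Run a read-only SQL query", {"type": "object"})],
    "resources": [Resource("schema://main/users", "users table schema")],
    "prompts": [Prompt("summarize-crash", ["Fetch the latest backtrace and explain it"])],
}
```

The split matters because, as discussed, it lets an application give each primitive its own UI affordance instead of funneling everything through tool calling.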
And, and it, we found this actually quite useful because what it allows you to do is, for example, build an MCP server that takes like a backtrack. So it's, it's not necessarily like a tool that literally just like rawizes from Sentry or any other like online platform that, that tracks your, your crashes. And just lets you pull this into the context window beforehand. And so it's quite nice that way that it's like a user driven interaction that you does the user decide when to pull this in and don't have to wait for the model to do it. And so it's a great way to craft the prompt in a way. And I think similarly, you know, I wish, you know, more MCP servers today would bring prompts as examples of, like how to even use the tools. Yeah. at the same time. The resources bits are quite interesting as well. And I wish we would see more usage there because it's very easy to envision, but yet nobody has really implemented it. A system where like an MCP server exposes, you know, a set of documents that you have, your database, whatever you might want to as a set of resources. And then like a client application would build a full rack index around this, right? This is definitely an application use case we had in mind as to why these are exposed in such a way that they're not model driven, because you might want to have way more resource content than is, you know, realistically usable in a context window. And so I think, you know, I wish applications and I hope applications will do this in the next few months, use these primitives, you know, way better, because I think there's way more rich experiences to be created that way. Yeah, completely agree with that. And I would also add that I would go into it if I haven't.Alessio [00:19:30]: I think that's a great point. And everybody just, you know, has a hammer and wants to do tool calling on everything. I think a lot of people do tool calling to do a database query. They don't use resources for it. 
What are like the, I guess, maybe like pros and cons or like when people should use a tool versus a resource, especially when it comes to like things that do have an API interface, like for a database, you can do a tool that does a SQL query versus when should you do that or a resource instead with the data? Yeah.Justin/David [00:20:00]: The way we separate these is like tools are always meant to be initiated by the model. It's sort of like at the model's discretion that it will like find the right tool and apply it. So if that's the interaction you want as a server developer, where it's like, okay, this, you know, suddenly I've given the LLM the ability to run a SQL queries, for example, that makes sense as a tool. But resources are more flexible, basically. And I think, to be completely honest, the story here is practically a bit complicated today. Because many clients don't support resources yet. But like, I think in an ideal world where all these concepts are fully realized, and there's like full ecosystem support, you would do resources for things like the schemas of your database tables and stuff like that, as a way to like either allow the user to say like, okay, now, you know, cloud, I want to talk to you about this database table. Here it is. Let's have this conversation. Or maybe the particular AI application that you're using, like, you know, could be something agentic, like cloud code. is able to just like agentically look up resources and find the right schema of the database table you're talking about, like both those interactions are possible. But I think like, anytime you have this sort of like, you want to list a bunch of entities, and then read any of them, that makes sense to model as resources. Resources are also, they're uniquely identified by a URI, always. 
And so you can also think of them as like, you know, sort of general purpose transformers, even like, if you want to support an interaction where a user just like drops a URI in, and then you like automatically figure out how to interpret that, you could use MCP servers to do that interpretation. One of the interesting side notes here, back to the Z example of resources, is that has like a prompt library that you can do, that people can interact with. And we just exposed a set of default prompts that we want everyone to have as part of that prompt library. Yeah, resources for a while so that like, you boot up Zed and Zed will just populate the prompt library from an MCP server, which was quite a cool interaction. And that was, again, a very specific, like, both sides needed to agree upon the URI format and the underlying data format. And but that was a nice and kind of like neat little application of resources. There's also going back to that perspective of like, as an application developer, what are the things that I would want? Yeah. We also applied this thinking to like, you know, like, we can do this, we can do this, we can do this, we can do this. Like what existing features of applications could conceivably be kind of like factored out into MCP servers if you were to take that approach today. And so like basically any IDE where you have like an attachment menu that I think naturally models as resources. It's just, you know, those implementations already existed.swyx [00:22:49]: Yeah, I think the immediate like, you know, when you introduced it for cloud desktop and I saw the at sign there, I was like, oh, yeah, that's what Cursor has. But this is for everyone else. And, you know, I think like that that is a really good design target because it's something that already exists and people can map on pretty neatly. I was actually featuring this chart from Mahesh's workshop that presumably you guys agreed on. 
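The tool-versus-resource split for a database, as discussed above, might look like this minimal sketch: the table schema is exposed as a URI-addressed resource (something the user or application pulls into context), while the query itself is a model-invoked tool. The in-memory SQLite database, the URI scheme, and the handler names are assumptions for illustration:

```python
import sqlite3

# Illustrative in-memory database; the URI scheme and handler names are
# assumptions, not part of any real MCP server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES ('ada@example.com')")

def read_resource(uri: str) -> str:
    """Resource handler: 'schema://users' returns the table's DDL."""
    table = uri.removeprefix("schema://")
    row = conn.execute(
        "SELECT sql FROM sqlite_master WHERE name = ?", (table,)
    ).fetchone()
    return row[0]

def query_tool(sql: str) -> str:
    """Tool handler: runs a query and returns plain text for the model."""
    rows = conn.execute(sql).fetchall()
    return "\n".join(str(r) for r in rows)

print(read_resource("schema://users"))           # schema, user/app-initiated
print(query_tool("SELECT email FROM users"))     # query, model-initiated
```

The resource side supports the "list entities, then read any of them" pattern described above; the tool side hands the model free-form text results to sift through.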
I think this is so useful that it should be on the front page of the docs. It probably should be. I think that's a good suggestion.

Justin/David [00:23:19]: Do you want to do a PR for this? I'd love it.

swyx [00:23:21]: Yeah, I'll do a PR. I've done a PR for just Mahesh's workshop in general.

SPEAKER_03 [00:23:28]: I approve. Yeah.

swyx [00:23:30]: Thank you. As a developer relations person, I always insist on having a map for people: here are all the main things you have to understand, and we'll spend the next two hours going through them. So one image that covers all of this is pretty helpful. And I like your emphasis on prompts. It's interesting that in the early days of ChatGPT and Claude, a lot of people started things like GitHub-for-prompts and prompt manager libraries, and those never really took off. I think something like this is helpful and important. I've also seen .prompt files from Humanloop as another way to standardize how people share prompts. But I agree that there should be more innovation here, and people probably want some dynamism, which you allow for. And I like that you have multi-step prompts. This was the main thing that made me think, these guys really get it. You've published some research showing that sometimes, to get the model behaving the right way, you have to do multi-step prompting, or jailbreaking, to get it to behave the way that you want. So prompts are not just single conversations. They're sometimes chains of conversations.
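The multi-step idea can be sketched in plain Python. This is a hypothetical prompts/get-style result, not real SDK code; the prompt name, wording, and structure are invented, though the message shape loosely mirrors MCP's role/content convention.

```python
def code_review_prompt(language: str, code: str) -> dict:
    """Hypothetical prompt expansion: a prompt is not a single user turn but
    a pre-seeded chain of messages that sets up a multi-step interaction."""
    return {
        "description": "Walk the model through a structured code review.",
        "messages": [
            {"role": "user", "content": {"type": "text",
             "text": f"You will review some {language} code. "
                     "First restate what it does before critiquing it."}},
            # A seeded assistant turn steers the model into the desired flow.
            {"role": "assistant", "content": {"type": "text",
             "text": "Understood. Share the code and I will summarize it "
                     "before giving feedback."}},
            {"role": "user", "content": {"type": "text", "text": code}},
        ],
    }

if __name__ == "__main__":
    expanded = code_review_prompt("Python", "print('hello')")
    for msg in expanded["messages"]:
        print(msg["role"], "->", msg["content"]["text"][:60])
```

The seeded assistant message in the middle is what makes this "multi-step": the server author ships a whole conversational setup, not just a template string.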
Yeah.

Alessio [00:25:05]: Another question that I had when looking at some server implementations: the server builders decide what data eventually gets returned, especially for tool calls. For example, the Google Maps one. If you look through it, they decide which attributes get returned, and the user cannot override that if one is missing. That has always been my gripe with SDKs in general: when people build API wrapper SDKs and miss one parameter that maybe is new, I cannot use it. How do you think about that? How much should the user be able to intervene versus just letting the server designer do all the work?

Justin/David [00:25:41]: I think we probably bear responsibility for the Google Maps one, because that's one of the reference servers we released. In general, for tool results in particular, we've made the deliberate decision, at least thus far, for tool results to be not structured JSON data matching a schema, but text or images, basically messages that you would pass into the LLM directly. And the corollary is that you really should just return the whole jumble of data and trust the LLM to sort through it and extract the information it cares about, because that's exactly what models excel at. We really try to think about how to use LLMs to their full potential, and not over-specify and end up with something that doesn't scale as LLMs themselves get better and better. So yeah, I suppose something different should be happening in this example server, and again, pull requests are welcome.
It's that if all these result types were literally just passed through from the API the server is calling, then new fields would be passed through automatically.

Alessio [00:27:19]: These are hard design decisions on where to draw the line.

Justin/David [00:27:22]: I'll maybe throw AI under the bus a little bit here and say that Claude wrote a lot of these example servers. No surprise at all. But I do think there's an interesting point in this: people at the moment mostly still apply their normal software engineering API approaches. We still need a bit more relearning of how to build for LLMs and trust them, particularly as they are getting significantly better year to year. Two years ago, maybe that approach would have been very valid. But nowadays, just throwing data at the thing that is really good at dealing with data is a good approach to this problem. It's unlearning twenty, thirty, forty years of software engineering practices to some degree. If I could add to that real quickly, one framing for MCP is thinking about how crazily fast AI is advancing. It's exciting; it's also scary. We think the biggest bottleneck to the next wave of capabilities for models might actually be their ability to interact with the outside world: to read data from outside data sources or take stateful actions. Working at Anthropic, we absolutely care about doing that safely, with the right control and alignment measures in place. But also, as AI gets better, people will want that. Being able to connect models up to all those things will be key to becoming productive with AI.
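The raw pass-through approach described above can be sketched as follows. This is a toy in plain Python, not SDK code: the handler name and the example payload are invented, and the content-block shape only loosely mirrors MCP's text content convention.

```python
import json

def passthrough_tool_result(api_response: dict) -> dict:
    """Instead of cherry-picking fields from an upstream API response,
    serialize the whole payload as one text content block and let the
    model extract whatever it needs. New upstream fields flow through
    automatically, with no server-side schema to update."""
    return {
        "content": [
            {"type": "text", "text": json.dumps(api_response)}
        ]
    }

if __name__ == "__main__":
    # Pretend this came from a geocoding API; fields are made up.
    upstream = {"lat": 48.8584, "lng": 2.2945, "plus_code": "V75V+8Q"}
    print(passthrough_tool_result(upstream))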
So MCP is also a bet on the future, on where this is all going and how important that will be.

Alessio [00:29:05]: Yeah, I would say any API attribute that starts with formatted_ should be gone, and we should just get the raw data for all of them. Why are you formatting? The model is definitely smart enough to format an address, so that should go to the end user.

swyx [00:29:23]: I think Alessio is about to move on to server implementation, but we're still talking about MCP's design, goals, and intentions, and we've indirectly identified some problems MCP is really trying to address. But I wanted to give you the spot to directly take on MCP versus OpenAPI, because this is obviously a top question. I wanted to recap everything we just talked about and give people a nice little segment they can treat as the definitive answer on MCP versus OpenAPI.

Justin/David [00:29:56]: Yeah. Fundamentally, OpenAPI specifications are a great tool, and I've used them a lot in developing APIs and consumers of APIs. But we think they're just too granular for what you want to do with LLMs. They don't express higher-level, AI-specific concepts like the whole mental model we've talked about with the primitives of MCP, and thinking from the perspective of the application developer. You don't get any of that when you encode this information into an OpenAPI specification. So we believe models will benefit more from purpose-built, purpose-designed tools, resources, prompts, and the other primitives than from just, here's our REST API, go wild. There's another aspect, too. I'm not an OpenAPI expert, so everything I say might not be perfectly accurate.
But I do think, and we can talk about this a bit more later, that there's a deliberate design decision to make the protocol somewhat stateful, because we really believe that AI applications and AI interactions will become inherently more stateful. The current need for statelessness is more a temporary point in time; to some degree it will always exist, but statefulness will become increasingly popular, particularly when you think about additional modalities beyond pure text-based interactions with models: video, audio, whatever other modalities are out there already. So having something a bit more stateful is just inherently useful in this interaction pattern. I also think OpenAPI and MCP are more complementary than people want to make out. People look for the A-versus-B framing, as if the developers of these things should go into a room and fistfight it out, but that's rarely what's going on. They're actually very complementary, and each has its space where it's very strong. Just use the best tool for the job. If you want a rich interaction between an AI application and a server, MCP is probably the right choice. And if you want an API spec somewhere that a model can easily read and interpret, and that works for you, then OpenAPI is the way to go. One more thing to add here is that we've already seen people in the community, and this happened very early, build bridges between the two.
So if what you have is an OpenAPI specification and no one is building a custom MCP server for it, there are already translators that will take it and re-expose it as MCP. And you can do the other direction too.

Alessio [00:32:43]: Awesome. There's another side of MCP that people don't talk about as much, because it doesn't go viral, which is building the servers. Everybody tweets about connecting Claude Desktop to some MCP server and how amazing it is. How would you suggest people start building servers? The spec allows so many things that it's almost like, how do you draw the line between being very descriptive as a server developer versus, going back to our discussion before, just passing the data through and letting the model manipulate it later? Do you have any suggestions for people?

Justin/David [00:33:16]: I have a few suggestions. One of the best things about MCP, and something we got right very early, is that it's very, very easy to build something simple. It might not be amazing, but it's good enough, because models are very good, and you can get it going within half an hour. So: pick the language that you love the most, pick the SDK for it if there's an SDK for it, and just go build a tool for the thing that matters to you personally, the thing you want to see the model interact with. Build the server, throw the tool in, don't even worry too much about the description just yet. Write a little description as you think about it, give it over the stdio transport to an application that you like, and see it do things.
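For a sense of how little code "a server in half an hour" really is, here is a minimal, hypothetical stdio-style sketch in plain Python. It is not the official SDK: just a newline-delimited JSON-RPC loop handling tools/list and tools/call for a single made-up echo tool, to show the overall shape.

```python
import json
import sys

# One invented tool; field names loosely mirror MCP's tools/list shape.
TOOLS = {
    "echo": {
        "description": "Echo the input text back to the caller.",
        "inputSchema": {
            "type": "object",
            "properties": {"text": {"type": "string"}},
            "required": ["text"],
        },
    }
}

def handle(request: dict) -> dict:
    """Dispatch a single JSON-RPC request (only the two tool methods here)."""
    method = request.get("method")
    if method == "tools/list":
        result = {"tools": [{"name": n, **spec} for n, spec in TOOLS.items()]}
    elif method == "tools/call":
        # Only "echo" exists, so just reflect the argument back as text.
        args = request["params"]["arguments"]
        result = {"content": [{"type": "text", "text": args["text"]}]}
    else:
        return {"jsonrpc": "2.0", "id": request.get("id"),
                "error": {"code": -32601, "message": f"unknown method {method}"}}
    return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}

def serve_stdio() -> None:
    """Read newline-delimited JSON requests on stdin, answer on stdout."""
    for line in sys.stdin:
        if line.strip():
            sys.stdout.write(json.dumps(handle(json.loads(line))) + "\n")
            sys.stdout.flush()

if __name__ == "__main__":
    serve_stdio()
```

A real server would also handle initialization and use a proper SDK, but the core loop genuinely is this small, which is why dropping the SDK into a model's context and asking it to one-shot a server works so well.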
And I think that's part of the magic, the empowerment for developers: you get so quickly to something the model does that you care about. That really gets you going and into the flow of, okay, this thing can do cool things. Now I can expand on it and really think about which tools I want, which resources and prompts I want. Then: what do my evals look like for how I want this to go? How do I optimize my prompts against those evals? There's infinite depth you can go into. But just start as simple as possible: build a server in half an hour, in the language of your choice, and watch the model interact with the things that matter to you. That's where the fun is. A lot of what makes MCP great is that it adds a lot of fun to development, getting models to do things quickly. I'm also quite partial, again, to using AI to help me do the coding. Even during the initial development process, we realized it was quite easy to take all the SDK code, again, pick the language you care about and then pick the SDK, literally drop the whole SDK into an LLM's context window, and say, okay, now that you know MCP, build me a server that does this, this, this. The results, I think, are astounding. It might not be perfect around every single corner, and you can refine it over time, but it's a great way to one-shot something that basically does what you want, and then iterate from there.
And like David said, there has been a big emphasis from the beginning on making servers as easy and simple to build as possible, which certainly helps with LLMs building them too. We often find that getting started is 100 to 200 lines of code. It's really quite easy. And if you don't have an SDK, give the subset of the spec that you care about to the model, along with another language's SDK, and have it build you an SDK. It usually works for that subset. Building a full SDK is a different story, but getting a model to tool-call in Haskell or whatever language you like is probably pretty straightforward.

swyx [00:36:32]: Yeah. Sorry.

Alessio [00:36:34]: No, I was gonna say, I co-hosted a hackathon at AGI House on personal agents, and one of the personal agents somebody built was an MCP server builder agent: you put in the URL of an API spec, and it builds an MCP server for it. Do you see that today as how it is, most servers just being a layer on top of an existing API without much opinion? And do you think that's how it's going to be going forward, just AI-generated exposure of APIs that already exist? Or are we going to see net-new MCP experiences that you couldn't do before?

Justin/David [00:37:10]: I think both. There will always be value in, I have my data over here, and I want to use some connector to bring it into my application over here. That use case will certainly remain. This goes back to the point that a lot of things today default to tool use when some of the other primitives would be more appropriate over time. And so it could still be that connector.
It could still just be that sort of adapter layer, but one that actually adapts onto different primitives, which is one way to add more value. But I also think there's plenty of opportunity for MCP servers that do interesting things in and of themselves and aren't just adapters. Some of the earliest examples of this were the memory MCP server, which gives the LLM the ability to remember things across conversations, or, someone who's a close coworker built the... I shouldn't have said that, not a close coworker. Someone built the sequential thinking MCP server, which gives a model the ability to really think step by step and get better at its reasoning capabilities. That one really isn't integrating with anything external. It's just providing a way of thinking for a model.

Justin/David [00:38:27]: Either way though, I think AI authorship of the servers is totally possible. I've had a lot of success just prompting, hey, I want to build an MCP server that does this thing. Even if that thing is not adapting some other API but doing something completely original, it's usually able to figure that out too. To add to that, I do think a good part of what MCP servers will be is these plain API wrappers to some degree, and that's going to be valid, because it works and it gets you very far. But we're just very early in exploring what you can do. And as client support for certain primitives gets better, like sampling, my favorite topic and greatest frustration at the same time, you can very easily see way, way richer experiences, and we have built them internally as prototypes.
And you see some of that in the community already. There are things like a "summarize my favorite subreddits for the morning" MCP server that nobody has built yet, but it's very easy to envision, and the protocol can totally do it. These are slightly richer experiences. As people move away from "I just want to hook up the things that matter to me to the LLM" toward wanting a real workflow, a richer experience exposed to the model, you'll see these things pop up. But again, there's a bit of a chicken-and-egg problem at the moment between what clients support and what server authors want to do.

Alessio [00:40:10]: That's kind of my next question, on composability. How do you see that? Do you have plans for it? What's the "import" of MCPs, so to speak, into another MCP? If I want to build the subreddit one, there's probably going to be the Reddit API MCP and then the summarization MCP. How do I make a super-MCP?

Justin/David [00:40:33]: Yeah, this is an interesting topic, and there are two aspects to it. One aspect is: how can I build something agentic that requires an LLM call in one form or fashion, like for summarization, while staying model-independent? That's where part of this bidirectionality comes in, this richer experience where we have a facility for the server author to ask the client, who owns the LLM interaction (think of Cursor, which runs the loop with the LLM for you), for a completion.
And basically have it summarize something for the server and return it back. So now which model does the summarization depends on which one you have selected in Cursor, not on what the author brings. The author doesn't bring an SDK or an API key. It's completely model-independent. That's one aspect. The second aspect of building richer systems with MCP is that you can easily envision an MCP server that serves something to Cursor or Windsurf or Claude Desktop, but is at the same time also an MCP client and can itself use MCP servers to create a rich experience. Now you have a recursive property, which we quite carefully tried to retain in the design principles; you see it all over the place, in authorization and other aspects of the spec. So you can take these little bundles that are both a server and a client, add them in chains, and build basically graphs, DAGs, out of MCP servers that richly interact with each other. An agentic MCP server can also use the whole ecosystem of MCP servers available to it. That's a really cool thing you can do, and people have experimented with it. Hopefully you'll see more of this, particularly when you think about auto-selecting and auto-installing; there's a bunch of things you can do there that make for a really fun experience. Practically, there are some niceties we still need to add to the SDKs to make this really simple and easy to execute on: this kind of recursive MCP server that is also a client, or multiplexing together the behaviors of multiple MCP servers into one host, as we call it. These are things we definitely want to add.
We haven't been able to yet, but I think that would go some way to showcasing things we know are already possible but not taken up that much yet.

swyx [00:43:08]: This is very exciting, and I'm sure a lot of people will get ideas and inspiration from it. Is an MCP server that is also a client an agent?

Justin/David [00:43:19]: What's an agent? There are a lot of definitions of agents.

swyx [00:43:22]: Because in some ways you're requesting something and it's going off and doing stuff that you don't necessarily know about. There's a layer of abstraction between you and the ultimate raw source of the data. You could dispute that. I just don't know if you have a hot take on agents.

Justin/David [00:43:35]: I do think you can build an agent that way. For me, you need to define the difference between an MCP server plus client that is just a proxy versus an agent. And that difference might be in, for example, using a sampling loop to create a richer experience, to have a model call tools inside that MCP server through those clients. Then you have an actual agent. So yes, I think it's very simple to build agents that way. There are maybe a few paths here. It definitely feels like there's some relationship between MCP and agents. One possible version is that MCP is a great way to represent agents; maybe there are some features or specific things missing that would make the ergonomics better, and we should make those part of MCP. That's one possibility. Another is that MCP makes sense as a kind of foundational communication layer for agents to compose with other agents. Or there could be other possibilities entirely.
Maybe MCP should specialize and narrowly focus on the AI application side, and not as much on the agent side. It's a very live question, and there are trade-offs in every direction, going back to the analogy of the God box. One thing we have to be very careful about in designing a protocol and curating or shepherding an ecosystem is trying to do too much. You don't want a protocol that tries to do absolutely everything under the sun, because then it'll be bad at everything too. So the key question, which is still unresolved, is: to what degree do agents really naturally fit into this existing model and paradigm, and to what degree is it basically orthogonal?

swyx [00:45:17]: I think once you enable two-way communication, and once you enable client and server to be the same thing with delegation of work to another MCP server, it's definitely more agentic than not. But I appreciate that you keep simplicity in mind and aren't trying to solve every problem under the sun. Cool, I'm happy to move on. I'm going to double-click on a couple of things I marked because they coincide with things we wanted to ask you anyway. The first one is simple: how many MCP tools can one implementation support? This is the wide-versus-deep question, and it's directly relevant to the nesting of MCPs we just talked about. In April 2024, when Claude was launching one of its first long-context examples, they said you can support 250 tools, and in a lot of cases you can't do that. To me, that's "wide" in the sense that you don't have tools that call tools; you just have the model and a flat hierarchy of tools. But then, obviously, you have tool confusion.
When the tools are adjacent and you call the wrong tool, you're going to get bad results, right? Do you have a recommendation for a maximum number of MCP servers enabled at any given time?

Justin/David [00:46:32]: To be honest, there's not one answer to this, because to some extent it depends on the model that you're using, and to some extent it depends on how well the tools are named and described, to avoid confusion. The dream is certainly that you just furnish all this information to the LLM and it can make sense of everything. This goes back to the future we envision with MCP: all this information is brought to the model, and it decides what to do with it. But today the practicalities might mean that in your client application, the AI application, you do some filtering over the tool set. Maybe you run a faster, smaller LLM to filter down to what's most relevant and only pass those tools to the bigger model. Or you could use an MCP server that is a proxy to other MCP servers and does some filtering at that level. I think hundreds, as you referenced, is still a fairly safe bet, at least for Claude. I can't speak to the other models. But over time we should expect this to get better, so we're wary of constraining anything and preventing that sort of long-term improvement. And obviously it highly depends on the overlap of the descriptions. If you have very separate servers that do very separate things, and the tools have clear, unique names and well-written descriptions, your mileage might be higher than if you have a GitLab and a GitHub server in your context at the same time.
Then the overlap is quite significant, because they look very similar to the model and confusion becomes easier. There are different considerations too, depending on the AI application. If you're trying to build something very agentic, maybe you're trying to minimize the number of times you need to go back to the user with a question, or minimize the amount of configurability in your interface. But if you're building other applications, an IDE or a chat application, I think it's totally reasonable to have affordances that allow the user to say, at this moment I want this feature set, and at this different moment I want this different feature set, and maybe not treat the full list as always on all the time.

swyx [00:48:42]: That's where I think the concepts of resources and tools start to blend a little bit, right? Because now you're saying you want some degree of user control, or application control, and other times you want the model to control it. So now we're choosing just subsets of tools. I don't know.

Justin/David [00:49:00]: Yeah, I think it's a fair concern. The way I think about this, and this is a core MCP design principle, is that ultimately the client application, and by extension the user, should be in full control of absolutely everything that's happening via MCP. When we say that tools are model-controlled, what we really mean is that tools should only be invoked by the model. There really shouldn't be an application or user interaction where, as a user, I say, now use this tool. Occasionally you might do that for prompting reasons, but I don't think that should be a UI affordance.
But the client application or the user deciding to filter out things that MCP servers are offering is totally reasonable, or even transforming them. You could imagine a client application that takes tool descriptions from an MCP server and enriches them, makes them better. We really want client applications to have full control in the MCP paradigm. In addition, though, one thing that's very early in my thinking is that there might be an addition to the protocol where you give the server author the ability to logically group certain primitives together, to inform that, because they might know some of these logical groupings better. That could encompass prompts, resources, and tools at the same time. We can have a design discussion there. Personally, my take would be that those should be separate MCP servers, and then the user should be able to compose them together. But we can figure it out.

Alessio [00:50:31]: Is there going to be an MCP standard library, so to speak: hey, these are the canonical servers, do not build these, we're going to take care of them, and they can be the building blocks that people compose? Or do you expect people to just rebuild their own MCP servers for a lot of things?

Justin/David [00:50:49]: I think we will not be prescriptive in that sense. Let me phrase it this way: I have a long history in open source, and I feel the bazaar approach to this problem is somewhat useful, so that the best and most interesting option wins. I don't think we want to be very prescriptive.
I definitely foresee, and this already exists, that there will be 25 GitHub servers and 25 Postgres servers and whatnot. That's all cool, and they all add value in their own way. But effectively, over months or years, the ecosystem will converge on a set of very widely used ones. I don't know if you'd call it winning, but those will be the most used ones, and that's completely fine, because being prescriptive about this isn't of any use. I do think, of course, that there will be MCP servers, and you see them already, that are driven by companies for their products, and they will inherently be the canonical implementation. If you want to work with Cloudflare Workers and use an MCP server for that, you'll probably want to use the one developed by Cloudflare. There's maybe a related thing here, too, one big thing worth thinking about where we don't have any solutions completely ready to go: the question of trust, or vetting is maybe a better word. How do you determine which MCP servers are the good and safe ones to use? Having many implementations of GitHub MCP servers could be totally fine, but you want to make sure you're not using ones that are really sus, right? So we're thinking about how to endow reputation: if, hypothetically, Anthropic says, we've vetted this, it meets our criteria for secure coding, how can that be reflected in this open model where everyone in the ecosystem can benefit? I don't really know the answer yet, but it's very much top of mind.

Alessio [00:52:49]: I think that points to a great design choice of MCP, which is that it's language-agnostic.
Like already, there's not, to my knowledge, an official Anthropic Ruby SDK, nor an OpenAI one. And Alex Rudall does a great job building those. But now with MCP, you don't actually have to translate an SDK to all these languages. You just do one interface and kind of bless that interface as Anthropic. So yeah, that was, that was nice.swyx [00:53:18]: I have a quick answer to this thing. So like, obviously there's like five or six different registries already popped up. You guys announced your official registry that's on the way. And a registry is very tempting to offer download counts, likes, reviews, and some kind of trust thing. I think it's kind of brittle. Like no matter what kind of social proof or other thing you can offer, the next update can compromise a trusted package. And actually that's the one that does the most damage, right? So setting up a trust system creates the damage that comes from abusing the trust system. And so I actually want to encourage people to try out MCP Inspector, because all you got to do is actually just look at the traffic. And like, I think that goes for a lot of security issues.Justin/David [00:54:03]: Yeah, absolutely. Cool. And I think that's a very classic supply chain problem that like all registries effectively have. And, you know, there are different approaches to this problem. Like you can take the Apple approach and like vet things and like have an army of both automated systems and review teams to do this. And then you effectively build an app store, right? That's one approach to this type of problem. It kind of works in, you know, a certain set of ways. But I don't think it works in an open source kind of ecosystem, for which you always have a registry kind of approach, like npm and packages and PyPI.swyx [00:54:36]: And they all inherently have these supply chain attack problems, right?
Yeah, yeah, totally. Quick time check. I think we're going to go for another like 20, 25 minutes. Is that okay for you guys? Okay, awesome. Cool. We previewed a little bit on like the future coming stuff, so I want to leave that to the end, like the registry, the stateless servers and remote servers, all the other stuff. But I wanted to double click a little bit more on the launch, the core servers that are part of the official repo. And some of them are special ones, like the ones we already talked about. So let me just pull them up. So for example, you mentioned memory, you mentioned sequential thinking. And I really, really encourage people to look at these, what I call special servers. They're not normal servers in the sense of wrapping some API so that it's just easier to interact with them than to work with the APIs directly. And so I'll highlight the memory one first, just because like, I think there are a few memory startups, but actually you don't need them if you just use this one. It's also like 200 lines of code. It's super simple. And obviously then if you need to scale it up, you should probably do some more battle tested thing. But if you're interested, if you're just introducing memory, I think this is a really good implementation. I don't know if there's like special stories that you want to highlight with some of these.Justin/David [00:56:00]: No, I don't think there's special stories. I think a lot of these, not all of them, but a lot of them originated from that hackathon that I mentioned before, where folks got excited about the idea of MCP. People internally inside Anthropic who wanted to have memory or like wanted to play around with the idea could quickly now prototype something using MCP in a way that wasn't possible before.
You don't have to become the end-to-end expert. You don't have to have access to this private, you know, proprietary code base. You can just now extend Claude with this memory capability. So that's how a lot of these came about. And then also just thinking about like, you know, what is the breadth of functionality that we want to demonstrate at launch?swyx [00:56:47]: Totally. And I think that is partially why your launch was successful, because you launched with a sufficiently spanning set of examples, and then people just copy, paste, and expand from there. I would also highlight
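For readers curious what a "special server" like the memory one looks like inside, here is a rough sketch of the knowledge-graph idea it implements. This is a Python toy, not the reference implementation from the servers repo; the class and method names are invented for illustration.

```python
# Toy sketch of a knowledge-graph "memory" store, in the spirit of the
# ~200-line reference memory server discussed above. NOT the official
# implementation; names here are invented.

class MemoryGraph:
    """Entities with observations, plus typed relations between them."""

    def __init__(self):
        self.entities = {}   # name -> list of observation strings
        self.relations = []  # (source, relation_type, target) triples

    def create_entity(self, name, observations=()):
        self.entities.setdefault(name, []).extend(observations)

    def create_relation(self, source, relation_type, target):
        self.relations.append((source, relation_type, target))

    def search(self, query):
        """Return entity names whose name or observations mention the query."""
        q = query.lower()
        return [
            name
            for name, obs in self.entities.items()
            if q in name.lower() or any(q in o.lower() for o in obs)
        ]

graph = MemoryGraph()
graph.create_entity("Alice", ["prefers dark mode", "works on MCP servers"])
graph.create_entity("Bob", ["maintains a Postgres MCP server"])
graph.create_relation("Alice", "works_with", "Bob")

print(graph.search("mcp"))  # both entities mention MCP
```

An MCP server would expose these operations as tools so the model can read and write memories between turns; the point of the transcript is that the whole thing fits in a couple hundred lines.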

Stephan Livera Podcast
How Lightning Builders Can Improve Bitcoin Wallets with Nick Slaney | SLP640

Stephan Livera Podcast

Play Episode Listen Later Mar 3, 2025 60:43


In this episode, Stephan speaks with Nick Slaney about the current state and future of the Lightning Network. They discuss the misconceptions surrounding Lightning adoption, the legal challenges faced by developers, and the opportunities for Lightning Service Providers (LSPs). Nick shares insights on hosted channels, liquidity management, and the user experience of Lightning, emphasizing the importance of understanding the costs associated with using the network, and the conversation highlights the potential for growth and innovation in the Lightning ecosystem as it continues to evolve. Stephan and Nick also delve into the intricacies of Bitcoin fees and the role of stablecoins in the crypto ecosystem, the real-world user experience with Bitcoin and Lightning, the importance of understanding user needs, and the misconceptions prevalent in online discussions. They close on the implications of Taproot Assets for the Lightning Network and the future of Bitcoin development, highlighting the need for better user experiences and broader adoption.
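On the "understanding costs" point: each Lightning routing hop charges a base fee (quoted in millisatoshis) plus a proportional fee quoted in parts-per-million. A minimal sketch of that arithmetic, with invented fee values, and simplified in that real routing compounds fees over the forwarded amount hop by hop:

```python
# Lightning routing-fee arithmetic: base fee (msat) plus a proportional
# fee in parts-per-million (ppm). The fee rates below are invented for
# illustration; real nodes advertise their own values.

def hop_fee_msat(amount_msat: int, base_fee_msat: int, fee_rate_ppm: int) -> int:
    return base_fee_msat + (amount_msat * fee_rate_ppm) // 1_000_000

# A 100,000-sat payment (10^8 msat) through two hypothetical hops:
amount = 100_000 * 1_000
route = [
    {"base_fee_msat": 1_000, "fee_rate_ppm": 100},  # hop 1
    {"base_fee_msat": 0, "fee_rate_ppm": 500},      # hop 2
]

total_fee = sum(
    hop_fee_msat(amount, h["base_fee_msat"], h["fee_rate_ppm"]) for h in route
)
print(total_fee, "msat")  # 11_000 + 50_000 = 61_000 msat (61 sats)
```

Even at these made-up rates the fee is a small fraction of a percent, which is the kind of cost trade-off the episode walks through.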

Land Stewardship Project's Ear to the Ground
Ear to the Ground 365: Perennial Pivot

Land Stewardship Project's Ear to the Ground

Play Episode Listen Later Feb 13, 2025 21:08


When Sogn Valley Farm transitioned out of intensive production of vegetable crops, it opened up opportunities to utilize a unique cousin of wheat as a way to steward the land. More Information • Sogn Valley Farm • Forever Green Initiative • Ear to the Ground 229: Kernza’s Continuous Cover • Wrap-Up of LSP’s 2025 Small…

Bitcoin Takeover Podcast
S16 E6: Super Testnet on Monero vs Lightning Network Privacy

Bitcoin Takeover Podcast

Play Episode Listen Later Feb 4, 2025 149:39


Bitcoin developer Super Testnet argues that the Lightning Network is more private – and therefore better suited for darknet markets than Monero. In this episode, he breaks down all the nuances involved and defines good financial privacy. Time stamps: Introducing Super Testnet (00:00:48) Lightning on Dark Web Markets (00:01:08) Lightning Network Privacy Features (00:01:40) Analysis of Sender and Receiver Privacy (00:02:02) Onion Routing Explanation (00:03:07) Invoice Privacy Comparison (00:04:36) Transaction Visibility in Monero? (00:06:08) Information Storage in Lightning (00:07:12) Liquidity and Large Transactions (00:08:10) Amount Privacy in Lightning (00:09:34) Private Channels in Lightning (00:11:25) Routing Nodes and Privacy (00:13:59) How Monero Transactions Work (00:15:08) Encryption Standards in Monero (00:16:01) Recipient Privacy in Monero (00:17:54) Privacy Tech (00:18:52) Network Level Privacy (00:19:02) Tor Usage in Lightning Network (00:19:44) Routing Node Configuration (00:20:07) Dandelion++ (00:21:00) IP Address Association in Lightning (00:21:22) Encryption in Lightning Transactions (00:22:50) Monero's Network Privacy by Default (00:23:18) Chainalysis Video Reference (00:23:40) Remote Procedure Call Limitations (00:24:38) Custodial Solutions and Privacy (00:26:31) Privacy Advantages of Mints (00:28:08) Full Chain Membership Proofs (00:29:53) Encrypted Senders in Lightning (00:31:52) Comparison with Zcash (00:32:30) Barriers for Lightning Network Adoption (00:34:05) Exploring XMR Bazaar (00:35:02) SideShift (00:36:03) Paul Sztorc's Core Untouched Soft Work (00:37:14) Drivechains Activation (00:38:27) Ossification of Bitcoin (00:41:09) Concerns About Ossification (00:41:51) ZK Rollups Discussion (00:42:35) Citrea's Zero Knowledge Proof Rollup (00:45:05) Community Concerns on Lightning Network (00:48:34) Chainalysis and Dandelion Protocol (00:50:23) LSP and KYC Privacy Issues (00:52:39) Receiver Privacy in Lightning Network (00:53:28) Phoenix Wallet 
Setup (00:54:15) Sender Privacy Concerns (00:55:30) View Key and Monero (00:57:10) Chainalysis and Lightning Network (01:02:08) Monero Tracing Capabilities (01:06:01) User Input Error in Privacy (01:07:02) The Lightning Network vs. Monero Privacy (01:10:56) Conference Plans in Romania (01:12:00) Monero Payment Channel Network (01:14:36) Full Chain Membership Proofs (01:15:21) Lightning Network and Sender Encryption (01:15:32) Stablecoins and Lightning Network (01:16:22) Monero Transaction Validation (01:18:05) Zero Knowledge Proofs in Monero (01:18:56) Bitcoin's Zero Knowledge Rollups (01:20:31) Rollups and Bitcoin Scalability (01:21:04) Trojan Horse Concept in Bitcoin (01:23:46) Tornado Cash vs. Coinjoin (01:25:36) Coin Pool on Bitcoin (01:27:34) Darknet Market Listings (01:29:13) Nostr and Classified Ads (01:29:34) Privacy in Darknet Transactions (01:30:50) Risks of Direct Payments (01:31:54) Exploring Shopstr Listings (01:32:43) Comparing Shopstr and XMR Bazaar (01:35:00) Privacy Improvements in Shopstr (01:37:13) Lightning Network Developments (01:44:42) KYC and Banking Issues (01:48:24) Introduction to Bank Privacy Issues (01:48:59) Financial Regulations in Romania (01:49:48) Advice on Relocation for Financial Privacy (01:50:12) Intrusiveness of Banking Regulations (01:51:03) Personal Experience with Banking Scrutiny (01:51:34) Living Arrangements (01:52:13) Lightning Network Implementations Privacy (01:53:16) Privacy Implications of Lightning Wallets (01:54:03) User-Friendliness of Lightning Wallets (01:54:57) BOLT 12 and Privacy Claims (01:56:29) Improvements in BOLT 12 (01:57:09) Critique of BOLT 12's Privacy Features (01:59:30) Super Testnet's Current Projects and Work Focus (02:00:15) Development of Mint Market Cap Tool (02:01:30) Title Transfer App and State Chains (02:02:30) Ensuring Security in State Chains (02:03:32) Nostr Wallet Connect Protocol (02:04:26) Creation of Faucet Generator (02:05:40) Creating a Testnet (02:06:36) State Chains Discussion 
(02:07:12) Prediction Market Concept (02:08:14) Project Backlog Overview (02:10:14) Super Testnet's Music Career (02:12:29) Upcoming Conferences (02:14:39) Coin Pools Advantages (02:15:41) Planning Conference Attendance (02:17:25) Workshops and Commitments (02:17:56) Health and Fitness Journey (02:19:08) Should Bitcoin Increase the Block Size? (02:20:13) Soft Fork Proposal (02:21:04) Market Value of Transactions (02:22:47) Workshop Availability (02:24:02) Social Media Presence (02:25:14) Scams and Fake Accounts (02:26:02) Social Engineering Tactics (02:26:39) Money Requests Clarification (02:27:34) Social Links and Resources (02:27:58) Audience Engagement (02:28:28) Closing Remarks (02:29:01)
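As background for the onion-routing segment (00:03:07): each relay in a Lightning route learns only its predecessor and its successor, never the full path. A toy model of that property, with invented node names; real Lightning uses the Sphinx per-hop encryption construction, which this sketch deliberately omits in favor of plain nesting:

```python
# Toy model of onion routing: the sender wraps the route in layers, and
# each hop "peels" exactly one layer, learning only the next hop. Real
# Lightning encrypts each layer (the Sphinx packet format); this sketch
# only models the information each node sees, not the cryptography.

def build_onion(route, payload):
    onion = {"deliver": payload}
    for hop in reversed(route[1:]):       # innermost layer is the final hop
        onion = {"forward_to": hop, "inner": onion}
    return onion

def peel(onion):
    """What one relay sees: either a forwarding instruction or the payload."""
    if "forward_to" in onion:
        return onion["forward_to"], onion["inner"]
    return None, onion["deliver"]

route = ["node_a", "node_b", "node_c"]
onion = build_onion(route, "invoice payment")

# node_a peels one layer: it learns only that node_b is next.
next_hop, inner = peel(onion)
print(next_hop)  # node_b
```

The privacy argument in the episode turns on exactly this: an intermediate node sees neither the original sender nor the final recipient, only the adjacent hops.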

FreightCasts
WHAT THE TRUCK?!? EP799 Mexico tariffs see their shadow: delayed; are load boards listening to truckers?

FreightCasts

Play Episode Listen Later Feb 3, 2025 45:35


On episode 799 of WHAT THE TRUCK?!? Dooner is joined by GenLog's CEO Ryan Joyce to talk about their $14.6M Series A. We'll find out how this will help them in their fight against freight theft and fraud. Owner-operator Jayme Anderson says that load boards like DAT aren't doing enough to prevent scammers. He'll talk about his heated debates with DAT and will tell us why he doesn't think they're listening to owner-operators. It's also A1 vs Heinz 57 as Jayme and I chug our favorite steak sauces. Counteract's Simon Martin and Daniel LeBlanc are all about tire maxing. They say that when it comes to tires it's all about balance. BlueYonder's Ann Marie Jonkman shares LSP strategy. Over the weekend the world was flipped on its head with tariffs against Canada, Mexico and China. Where do we move from here? Catch new shows live at noon EDT Mondays, Wednesdays and Fridays on FreightWaves LinkedIn, Facebook, X or YouTube, or on demand by looking up WHAT THE TRUCK?!? on your favorite podcast player and at 5 p.m. Eastern on SiriusXM's Road Dog Trucking Channel 146. Watch on YouTube Check out the WTT merch store Subscribe to the WTT newsletter Apple Podcasts Spotify More FreightWaves Podcasts #WHATTHETRUCK #FreightNews #supplychain Learn more about your ad choices. Visit megaphone.fm/adchoices

WWL First News with Tommy Tucker
Driving can still be treacherous right now

WWL First News with Tommy Tucker

Play Episode Listen Later Jan 24, 2025 3:44


Tommy talks with Jacob Pucheu of the Louisiana State Police (LSP) about driving safely.

Atareao con Linux
ATA 664 Vi, Vim o Neovim ¿Cual es el mejor?

Atareao con Linux

Play Episode Listen Later Jan 23, 2025 22:43


#vi #vim #neovim Which of the three #linux editors is the best? Which one should you choose? What are the differences between them? Where should you use each one? Lately, both in the Telegram group and on the YouTube channel, one question keeps coming up: what are the differences between Vim and Neovim, and which one should you pick for each situation? That gave me the idea for this episode, and it required some research, of course. I also wanted to include the venerable Vi, so the comparison would be as thorough as possible and you can tell which option suits you best in each case. In my own case, when I decided to enter the world of Vi, I went straight to Vim, and I have to confess it took me a while to make the jump from Vim to Neovim. I made that jump for basically two reasons that mattered to me: first, LSP, the Language Server Protocol, and second, Neovim's plugins, which, by using Lua as the scripting language, are much easier to write. So in this episode I try to clarify the differences between Vi, Vim, and Neovim, when to choose each one, and why. More information and links in the episode notes.

Reversim Podcast
487 Bumpers 85

Reversim Podcast

Play Episode Listen Later Dec 31, 2024


Episode 487 of Reversim ("Reverse with Platform") - Bumpers 85: Ran, Dotan, and Alon in the virtual studio with a series of short items that caught our attention recently: interesting blogs, things from GitHub, and all sorts of interesting projects or nice things we saw on the internet and thought we'd collect and bring to you. And, as has become tradition lately, also quite a bit of AI, because that's what the young folks are talking about these days.

Bitcoin Magazine
The Bitcoin Treasury Wave w/ LQWD Tech and Shane Stuart

Bitcoin Magazine

Play Episode Listen Later Dec 22, 2024 48:49


Step into the future of corporate Bitcoin adoption with Liquid Technologies' groundbreaking approach to integrating Bitcoin and Lightning Network infrastructure. In this exclusive interview, CEO Shane Stuart shares insights into how Liquid became one of Canada's top five publicly traded companies for Bitcoin per share, while pioneering Lightning Network innovation as a leading Lightning Service Provider (LSP). Host: Allen Helm Guest: @LQWDTech & Shane Stuart Lower your time preference and lock in your Bitcoin 2025 conference tickets today! Use promo code BM10 for 10% off! Click Here: http://b.tc/conference/2025 #Bitcoin #LightningNetwork #CorporateBitcoin #BitcoinStrategy #CryptoAdoption #BitcoinTreasury #BitcoinBusiness #BitcoinInnovation #CorporateStrategy #BitcoinInfrastructure #LightningInnovation #BitcoinPayments #CorporateCrypto #BitcoinTechnology #BitcoinDevelopment #BlockchainTechnology #FinTech #PaymentInnovation #BitcoinFuture #CryptoInfrastructure

The Art of SBA Lending
Largest SBA Loan Service Provider Gets Acquired (again) feat. Mike Breckheimer | Ep. 169

The Art of SBA Lending

Play Episode Listen Later Dec 12, 2024 51:19


In this episode of The Art of SBA Lending, Ray sits down with Mike Breckheimer at NAGGL 2024 to discuss the ins and outs of starting a lender service provider (LSP) business and navigating the SBA ecosystem. Mike shares his extensive experience building turnkey solutions for banks, credit unions, and other institutions looking to outsource SBA loan operations. He dives into the intricate balance of compliance, relationship building, and patience required to thrive in this niche industry. Key Highlights:
• Starting an LSP: Learn the essential steps to establish yourself as a trusted SBA lender service provider, including the necessary regulatory steps with OCRM.
• Sales Strategies for LSPs: Mike explains the long sales cycle when working with banks and credit unions and how patience was the key during the process.
• Navigating SBA Oversight: Discover the role of OCRM in reviewing lender service provider agreements and maintaining oversight in the SBA ecosystem.
• Tech & Innovation in SBA Lending: Explore how emerging technology and data analytics are reshaping the SBA lending landscape.
Whether you're an aspiring LSP entrepreneur or a seasoned professional curious about industry trends, this episode is packed with valuable insights on the current state of SBA lending. Don't miss out on our exclusive NAGGL interview series—subscribe now to catch every episode! This episode is sponsored by: Lumos Data Lumos empowers your small business lending growth with cutting-edge analytics and streamlined applications that optimize your performance. If you're ready to take your small business lending to the next level with cutting edge analytics visit lumosdata.com.   Rapid Business Plans Rapid Business Plans is the go-to provider of business plans and feasibility studies for government guaranteed small business lenders. For more information, or to set up a Get Acquainted call go to http://www.rapidbusinessplans.com/art-of-sba   SBA Jobs Board Hiring for your SBA department? 
We've got you covered! SBA Jobs Board is here to bridge the gap between you and top SBA talent. Our Art of SBA Lending audience is packed with experts ready for their next career move. List your openings with us to connect with the best in the industry and find the right fit for your team. Live now on our new Art of SBA Website | https://www.artofsba.com/job-board   BDOs… let's start your week strong! Sign up for our weekly sales advice series, Sales Ammo. Every Monday morning, wake up to a piece of Ray's sales advice in your inbox to help you rise to the top. Subscribe here: https://www.artofsba.com/army-of-bdos   Loving The Art of SBA Lending episodes? Make sure to follow along with our sister shows, The BDO Show and SBA Today, each week with the links below! https://www.youtube.com/@TheBDOShow http://www.youtube.com/@SBAToday   Head to http://www.artofsba.com for more information and to sign up for our must-read monthly newsletter to stay up to date with The Art of SBA Lending.

Stephan Livera Podcast
The Evolution of Alby with Michael Bumann | SLP622

Stephan Livera Podcast

Play Episode Listen Later Dec 5, 2024 60:22


Bumi & Stephan explore the evolution of Alby from a browser extension to a self-custodial Lightning wallet, Alby Hub. The conversation delves into the integration of Nostr for self-sovereign digital identity, security considerations for browser extensions, and the role of LSPs in channel management. Bumi explains the architecture of Alby Hub, its user experience, and pricing models, emphasizing the importance of integrating Bitcoin into various applications. They also discuss the cost structures associated with Bitcoin services, the optimization of Lightning channels, and the challenges of on-chain payments. The conversation highlights the importance of merchant adoption and the innovative Nostr Wallet Connect (NWC) protocol, which decouples wallets from applications, making it easier for developers. They introduce Alby Go, a mobile application designed for seamless payments, and explore the future of self-custodial solutions in the cryptocurrency space.
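For context on the NWC protocol mentioned above: a wallet and an app pair by exchanging a connection URI that carries the wallet service's pubkey, a relay, and a secret (per the NIP-47 spec). A hedged sketch of parsing one, with fabricated example values:

```python
# Sketch of parsing a Nostr Wallet Connect (NIP-47) pairing URI, which is
# how apps obtain a wallet connection. The pubkey, relay, and secret below
# are fabricated; consult the NIP-47 spec for the authoritative format.
from urllib.parse import urlsplit, parse_qs

def parse_nwc_uri(uri: str) -> dict:
    parts = urlsplit(uri)
    assert parts.scheme == "nostr+walletconnect"
    params = parse_qs(parts.query)
    return {
        "wallet_pubkey": parts.netloc or parts.path.lstrip("/"),
        "relay": params["relay"][0],
        "secret": params["secret"][0],
    }

uri = "nostr+walletconnect://b889fakepubkey0123?relay=wss://relay.example.com&secret=deadbeef"
conn = parse_nwc_uri(uri)
print(conn["relay"])  # wss://relay.example.com
```

The decoupling the episode describes falls out of this design: the app only holds the URI, and all wallet commands travel as encrypted Nostr events over the named relay, so any NWC-speaking wallet can back any NWC-speaking app.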

Thinking Elixir Podcast
227: Oban Web Goes Open Source?

Thinking Elixir Podcast

Play Episode Listen Later Nov 5, 2024 29:35


News includes Oban Web going open source, making it more accessible for startups, a new community resource featuring over 80 Phoenix LiveView components, interesting insights from a frontend technology survey highlighting Phoenix's potential, the introduction of Klife, a high-performance Elixir + Kafka client, and more! Show Notes online - http://podcast.thinkingelixir.com/227 (http://podcast.thinkingelixir.com/227) Elixir Community News https://www.youtube.com/shorts/mKp30PNM_Q4 (https://www.youtube.com/shorts/mKp30PNM_Q4?utm_source=thinkingelixir&utm_medium=shownotes) – Parker Selbert announced that the Oban Web dashboard will be open sourced. https://github.com/rails/solid_queue/ (https://github.com/rails/solid_queue/?utm_source=thinkingelixir&utm_medium=shownotes) – The Rails community is working on a database-backed job queue called "Solid Queue". Mark shares a personal story about the significance of Oban Web being open sourced for startups. https://x.com/shahryar_tbiz/status/1850844469307785274 (https://x.com/shahryar_tbiz/status/1850844469307785274?utm_source=thinkingelixir&utm_medium=shownotes) – An announcement of an open source project with more than 80 Phoenix LiveView components. https://github.com/mishka-group/mishka_chelekom (https://github.com/mishka-group/mishka_chelekom?utm_source=thinkingelixir&utm_medium=shownotes) – GitHub repository for the open source project with Phoenix LiveView components. https://mishka.tools/chelekom/docs/ (https://mishka.tools/chelekom/docs/?utm_source=thinkingelixir&utm_medium=shownotes) – Documentation and interactive examples for the Phoenix LiveView components. https://x.com/ZachSDaniel1/status/1850882330249875883 (https://x.com/ZachSDaniel1/status/1850882330249875883?utm_source=thinkingelixir&utm_medium=shownotes) – Zach Daniel mentions that Igniter is effectively used for installing components. 
https://www.youtube.com/live/bHoCMMk2ksc (https://www.youtube.com/live/bHoCMMk2ksc?utm_source=thinkingelixir&utm_medium=shownotes) – Dave Lucia will live-stream coding an Igniter installer for OpenTelemetry. https://fluxonui.com/getting-started/introduction (https://fluxonui.com/getting-started/introduction?utm_source=thinkingelixir&utm_medium=shownotes) – Introduction to Fluxon UI, a paid resource with Phoenix LiveView components. https://tsh.io/state-of-frontend/#frameworks (https://tsh.io/state-of-frontend/#frameworks?utm_source=thinkingelixir&utm_medium=shownotes) – Results of a frontend technology survey where Phoenix is mentioned. https://www.youtube.com/playlist?list=PLSk21zn8fFZAa5UdY76ASWAwyu_xWFR6u (https://www.youtube.com/playlist?list=PLSk21zn8fFZAa5UdY76ASWAwyu_xWFR6u?utm_source=thinkingelixir&utm_medium=shownotes) – YouTube playlist of Elixir Stream Week presentations. https://elixirforum.com/t/2024-10-21-elixir-stream-week-five-days-five-streams-five-elixir-experts-online/66482/17 (https://elixirforum.com/t/2024-10-21-elixir-stream-week-five-days-five-streams-five-elixir-experts-online/66482/17?utm_source=thinkingelixir&utm_medium=shownotes) – Forum post about Elixir Stream Week featuring presentations and streams. https://elixirforum.com/t/klife-a-kafka-client-with-performance-gains-over-10x/67040 (https://elixirforum.com/t/klife-a-kafka-client-with-performance-gains-over-10x/67040?utm_source=thinkingelixir&utm_medium=shownotes) – Introduction of Klife, a new Elixir + Kafka client with improved performance. https://github.com/oliveigah/klife (https://github.com/oliveigah/klife?utm_source=thinkingelixir&utm_medium=shownotes) – GitHub repository for the Klife Kafka client in Elixir. https://github.com/BeaconCMS/beacon/blob/main/ROADMAP.md (https://github.com/BeaconCMS/beacon/blob/main/ROADMAP.md?utm_source=thinkingelixir&utm_medium=shownotes) – Roadmap for the BeaconCMS project. 
https://x.com/josevalim/status/1850106541887689133?s=12&t=ZvCKMAXrZFtDX8pfjW14Lw (https://x.com/josevalim/status/1850106541887689133?s=12&t=ZvCKMAXrZFtDX8pfjW14Lw?utm_source=thinkingelixir&utm_medium=shownotes) – José Valim clarifies that Elixir and LSP remain separate projects with independent release schedules. https://flutterfoundation.dev/blog/posts/we-are-forking-flutter-this-is-why/ (https://flutterfoundation.dev/blog/posts/we-are-forking-flutter-this-is-why/?utm_source=thinkingelixir&utm_medium=shownotes) – Blog post about Flutter forking into Flock to promote open-source community development. Do you have some Elixir news to share? Tell us at @ThinkingElixir (https://twitter.com/ThinkingElixir) or email at show@thinkingelixir.com (mailto:show@thinkingelixir.com) Find us online - Message the show - @ThinkingElixir (https://twitter.com/ThinkingElixir) - Message the show on Fediverse - @ThinkingElixir@genserver.social (https://genserver.social/ThinkingElixir) - Email the show - show@thinkingelixir.com (mailto:show@thinkingelixir.com) - Mark Ericksen - @brainlid (https://twitter.com/brainlid) - Mark Ericksen on Fediverse - @brainlid@genserver.social (https://genserver.social/brainlid) - David Bernheisel - @bernheisel (https://twitter.com/bernheisel) - David Bernheisel on Fediverse - @dbern@genserver.social (https://genserver.social/dbern)

Land Stewardship Project's Ear to the Ground
Ear to the Ground 356: First Things First

Land Stewardship Project's Ear to the Ground

Play Episode Listen Later Nov 4, 2024 27:50


Thinking of applying for NRCS funds? First, advises vegetable and livestock farmer Klaus Zimmermann-Mayo, figure out what kind of farming you want to do and how you want to do it. More Information • Whetstone Farm • Go Farm Connect • NRCS Environmental Quality Incentives Program • NRCS Service Center Locator You can find LSP…

Land Stewardship Project's Ear to the Ground
Ear to the Ground 355: Silver Buckshot

Land Stewardship Project's Ear to the Ground

Play Episode Listen Later Oct 31, 2024 26:54


Father-son team Joe and Matthew Fitzgerald are quite willing to share their insights with other farmers on how to get started in organic crop production. First piece of advice: sell your fishing boat. More Information • Fitzgerald Organic Mad Agriculture Video • Organic Agronomy Training Service • LSP Soil Health Web Page You can find LSP…

The Translation Company Talk
S05E11: Building & Scaling a Global LSP Team

The Translation Company Talk

Play Episode Listen Later Oct 28, 2024 51:53


In this episode of The Translation Company Talk, we welcome back Jordan Evans, CEO of Language Network and Managing Partner of HireGlobo, an industry expert with deep insights into scaling Language Service Providers (LSPs) and building global teams. We dive into how LSPs are scaling up today, touching on virtual teams' structure, business enablement, and the challenges faced with managing hybrid teams across global locations. Jordan also discusses the technological hurdles that come with operating a global, hybrid team and shares strategies for overcoming them. As we explore the intricacies of global expansion, Jordan provides valuable advice on how LSPs can structure their businesses to fully benefit from a global presence, while carefully balancing the risks of scaling too quickly. He shares insights on how to manage cultural shifts within a growing team and explains how scaling impacts not just operations, but also the supply chain, including the linguist community. We also delve into the financial aspects of scaling, such as preparing for investment costs, evaluating different geographical jurisdictions, and the pros and cons of mergers and acquisitions as a growth strategy. Whether you're part of an LSP or another industry, this episode is full of practical advice on building and sustaining a thriving global team. Subscribe to the Translation Company Talk podcast on Apple Podcasts, iTunes, Spotify, Audible or your platform of choice. This episode of the Translation Company Talk podcast is brought to you by Hybrid Lynx.

Land Stewardship Project's Ear to the Ground
Ear to the Ground 353: 7 Years Later

Land Stewardship Project's Ear to the Ground

Play Episode Listen Later Oct 27, 2024 20:49


Jon and Carin Stevens farm unforgiving land that leaves little room for mistakes. But thanks to a system based on no-till, cover cropping, and reintegrating livestock, a “victory year” has finally emerged from the ashes of failure. More Information • LSP Soil Health Web Page • Maple Grove Farms YouTube Page You can find LSP…

Thinking Elixir Podcast
225: A BeaconCMS of Hope

Thinking Elixir Podcast

Play Episode Listen Later Oct 22, 2024 21:28


News includes coming info on new features in Elixir v1.18, the release of Beacon CMS v0.1 with new tools for developers, German Velasco's insightful video on the origins of Phoenix contexts, Alex Koutmos sharing his sql_fmt tool for cleaner SQL code in Ecto, an exciting new tool for the Mastodon community called MastodonBotEx, and more! Show Notes online - http://podcast.thinkingelixir.com/225 (http://podcast.thinkingelixir.com/225) Elixir Community News https://x.com/josevalim/status/1846109246116536567 (https://x.com/josevalim/status/1846109246116536567?utm_source=thinkingelixir&utm_medium=shownotes) – José Valim updated his Elixir Stream Week presentation to talk about Elixir v1.18. https://x.com/NickGnd/status/1846103330352697455 (https://x.com/NickGnd/status/1846103330352697455?utm_source=thinkingelixir&utm_medium=shownotes) – Discussion about the new LSP server for Elixir v1.18. https://github.com/elixir-webrtc/ex_webrtc (https://github.com/elixir-webrtc/ex_webrtc?utm_source=thinkingelixir&utm_medium=shownotes) – ExWebRTC library for Elixir mentioned in the context of Elixir Stream Week. https://x.com/BeaconCMS/status/1844089765572026611 (https://x.com/BeaconCMS/status/1844089765572026611?utm_source=thinkingelixir&utm_medium=shownotes) – Announcement of Beacon CMS v0.1 release. https://www.youtube.com/watch?v=JBLOd9Oxwpc (https://www.youtube.com/watch?v=JBLOd9Oxwpc?utm_source=thinkingelixir&utm_medium=shownotes) – Hype video for the new Beacon CMS release. https://github.com/BeaconCMS/beacon (https://github.com/BeaconCMS/beacon?utm_source=thinkingelixir&utm_medium=shownotes) – The GitHub repository for Beacon CMS, an open-source CMS built with Phoenix LiveView. https://www.youtube.com/live/c2TLDiFv8ZI (https://www.youtube.com/live/c2TLDiFv8ZI?utm_source=thinkingelixir&utm_medium=shownotes) – Zach Daniel and Leandro paired programming session on Beacon CMS Igniter task. 
https://github.com/BeaconCMS/beacon_demo (https://github.com/BeaconCMS/beacon_demo?utm_source=thinkingelixir&utm_medium=shownotes) – Beacon_demo project helps users try Beacon CMS locally. https://www.youtube.com/watch?v=5jk0fIJOFuc (https://www.youtube.com/watch?v=5jk0fIJOFuc?utm_source=thinkingelixir&utm_medium=shownotes) – ElixirConf video related to Beacon CMS development. Hexdeck.pm is a new community tool for browsing multiple HexDocs pages at once. https://hexdeck.pm/ (https://hexdeck.pm/?utm_source=thinkingelixir&utm_medium=shownotes) – Website for hexdeck.pm, a documentation aggregator. https://github.com/hayleigh-dot-dev/hexdeck (https://github.com/hayleigh-dot-dev/hexdeck?utm_source=thinkingelixir&utm_medium=shownotes) – GitHub repository for hexdeck.pm, created by Hayleigh from the Gleam team. https://github.com/elixir-lsp/elixir-ls/releases/tag/v0.24.1 (https://github.com/elixir-lsp/elixir-ls/releases/tag/v0.24.1?utm_source=thinkingelixir&utm_medium=shownotes) – Update to ElixirLS, fixing several crashes. German Velasco created a stream video explaining the origins of Phoenix "contexts". https://x.com/germsvel/status/1846137519508787644 (https://x.com/germsvel/status/1846137519508787644?utm_source=thinkingelixir&utm_medium=shownotes) – Tweet about German Velasco's stream video on Phoenix contexts. https://www.elixirstreams.com/tips/why-phoenix-contexts (https://www.elixirstreams.com/tips/why-phoenix-contexts?utm_source=thinkingelixir&utm_medium=shownotes) – German explains the history of Phoenix Contexts. https://www.youtube.com/watch?v=tMO28ar0lW8 (https://www.youtube.com/watch?v=tMO28ar0lW8?utm_source=thinkingelixir&utm_medium=shownotes) – Chris McCord's keynote on Phoenix 1.3 at Lonestar ElixirConf 2017. https://phoenixframework.org/blog/phoenix-1-3-0-released (https://phoenixframework.org/blog/phoenix-1-3-0-released?utm_source=thinkingelixir&utm_medium=shownotes) – Blog post on Phoenix 1.3 release. 
https://x.com/akoutmos/status/1843706957267656969 (https://x.com/akoutmos/status/1843706957267656969?utm_source=thinkingelixir&utm_medium=shownotes) – Alex Koutmos' announcement of sql_fmt version 0.2.0 with support for the ~SQL sigil and a Mix Formatter plugin. https://github.com/akoutmos/sql_fmt (https://github.com/akoutmos/sql_fmt?utm_source=thinkingelixir&utm_medium=shownotes) – GitHub repository for sql_fmt, a SQL formatting tool. https://github.com/akoutmos/ecto_dbg (https://github.com/akoutmos/ecto_dbg?utm_source=thinkingelixir&utm_medium=shownotes) – GitHub page for ecto_dbg, which uses sql_fmt for debugging Ecto SQL queries. https://mastodon.kaiman.uk/@neojet/113284100323613786 (https://mastodon.kaiman.uk/@neojet/113284100323613786?utm_source=thinkingelixir&utm_medium=shownotes) – MastodonBotEx simplifies interacting with the Mastodon API. https://github.com/kaimanhub/MastodonBot.ex (https://github.com/kaimanhub/MastodonBot.ex?utm_source=thinkingelixir&utm_medium=shownotes) – GitHub repository for MastodonBotEx designed for Mastodon API interactions. https://codebeamnyc.com/#schedule (https://codebeamnyc.com/#schedule?utm_source=thinkingelixir&utm_medium=shownotes) – Details about the schedule for CodeBEAM NYC Lite on November 15, 2024. https://elixirfriends.transistor.fm/episodes/friend-3-tyler-young (https://elixirfriends.transistor.fm/episodes/friend-3-tyler-young?utm_source=thinkingelixir&utm_medium=shownotes) – Elixir Friends podcast episode with Tyler Young discussing marketing and technology topics. https://elixirfriends.transistor.fm/episodes/friend-2-david-bernheisel (https://elixirfriends.transistor.fm/episodes/friend-2-david-bernheisel?utm_source=thinkingelixir&utm_medium=shownotes) – Previous Elixir Friends podcast episode with David Bernheisel. Do you have some Elixir news to share? 
Tell us at @ThinkingElixir (https://twitter.com/ThinkingElixir) or email at show@thinkingelixir.com (mailto:show@thinkingelixir.com) Find us online - Message the show - @ThinkingElixir (https://twitter.com/ThinkingElixir) - Message the show on Fediverse - @ThinkingElixir@genserver.social (https://genserver.social/ThinkingElixir) - Email the show - show@thinkingelixir.com (mailto:show@thinkingelixir.com) - Mark Ericksen - @brainlid (https://twitter.com/brainlid) - Mark Ericksen on Fediverse - @brainlid@genserver.social (https://genserver.social/brainlid) - David Bernheisel - @bernheisel (https://twitter.com/bernheisel) - David Bernheisel on Fediverse - @dbern@genserver.social (https://genserver.social/dbern)

Never Ending Adventure: An Adventure Time Podcast
#148 - Not so Sweet on the Candy Streets

Never Ending Adventure: An Adventure Time Podcast

Sep 24, 2024 52:05


S5E25 - Finn and Jake play detective, tracking down what is expected to be the masked villain of an LSP disaster... turns out they ain't as good as Joshua and Margaret! 

Land Stewardship Project's Ear to the Ground
Ear to the Ground 347: Bite-by-Bite

Land Stewardship Project's Ear to the Ground

Sep 5, 2024 35:49


Mapping a rural region's "community food assets" reveals isolated islands of opportunity in a sea of corn and soybeans. LSP's Scott DeMuth says now is the time to connect the dots and create a new relationship between farmers, eaters, and the places they live in. More Information • LSP's Community-Based Food Systems Web Page • Report: …

Oh My Glob! An Adventure Time Podcast
Season 6 - Episodes 13, 14 (Thanks for the Crabapples, Giuseppe!, Princess Day)

Oh My Glob! An Adventure Time Podcast

Aug 19, 2024 58:05


Amy and Matt discuss fan-favorite Adventure Time episode "Thanks for the Crabapples, Giuseppe!" and then get into the Marceline and LSP-centric "Princess Day". It's a pretty dang swell time. A pretty dang swell time indeed. For Amy's episode predictions, we present... Caroline's Handy Dandy Grading Rubric:
- Does the prediction contain the same characters as the actual episode?
- If I worked at A.T. corp., would I produce this episode idea?
- How much creative effort was put forth while coming up w/ this prediction?
- Do the prediction and the actual ep. follow the same archetype (i.e. love & loss, heroic adventure, self-discovery, etc.)?
- Would this story aid in the development of the overall plot and/or character development?
- Do the events of the story seem plausible in regard to character traits (i.e. it would not be plausible for Finn to do something evil)?
- Does a similar story line occur at some later point in the show?
- Has a similar story line already occurred in a previously reviewed episode?
Rate us on Apple Podcasts! itunes.apple.com/us/podcast/oh-my-glob-an-adventure-time-podcast/id1434343477?mt=2 Facebook: facebook.com/ohmyglobpodcast Contact us: ohmyglobpodcast@gmail.com And that Twitter thing: https://twitter.com/ohmyglobpodcast Amy: https://twitter.com/moxiespeaks Trivia Theme by Adrian C.