Business of Tech


Each day brings a flood of technology news. In an industry that is always changing, those who deliver technology services need to focus on the information that matters to them. The Business of Tech podcast focuses on the news you need to know. Covering both the story and why it matters to the way s…

MSP Radio


    • Latest episode: Apr 10, 2026
    • New episodes: weekdays
    • Average duration: 12m
    • Episodes: 1,943

    Ivy Insights

    The Business of Tech podcast is an exceptional show that offers valuable insights into the world of technology. Featuring some of the brightest minds in the industry as guests, this podcast provides a window into their thoughts and predictions for the future. The bite-sized episodes are perfect for my morning commute, offering just the right amount of information to start my day.

    What sets this podcast apart is its ability to captivate listeners with engaging topics and expert guests. There was never a moment where I felt lost or disengaged during an episode. The discussions are well-structured, informative, and empowering. The host's sense of humor adds a touch of entertainment and ensures that each episode is anything but dull.

    Additionally, the podcast covers a wide range of tech-related subjects, giving listeners fresh perspectives on various aspects of the industry. The interviews provide a deep dive into current trends, challenges, and opportunities in tech. The host's ability to break down complex concepts into easily understandable language makes this podcast accessible to both tech enthusiasts and those new to the field.

    One downside is that the episodes can sometimes feel outdated as they are not regularly updated. It would be great to have more recent content to stay up-to-date with the latest developments in technology. However, this does not detract from the overall value provided by the podcast's extensive archive.

    In conclusion, The Business of Tech podcast is an excellent resource for anyone interested in technology and its impact on our lives. The show's informative and entertaining format, coupled with its impressive lineup of guests, makes it a must-listen for anyone working in or passionate about the tech industry. Despite occasional dated content, this podcast remains highly recommended for its ability to deliver valuable insights in an engaging manner.




    Latest episodes from Business of Tech

    Why Remediation Capacity, Not Detection, Now Defines MSP Accountability

    Apr 10, 2026 • 12:12


    The episode identifies a structural shift in the MSP business model: security is no longer a discrete service or line item but has become the organizing principle for operations and accountability. This is driven by an industry-wide trend toward increased automation in both attack and defense, and by a shift in liability and accountability from vendors to the MSPs themselves. Companies such as Acronis and Anthropic are highlighted for introducing tools that increase the rate and automation of threat discovery, while research and market analysis by WatchGuard and Jay McBain indicate that the capacity to remediate, rather than discover, security threats now forms the operational bottleneck.

    The most consequential development referenced is the acceleration of security automation and vulnerability discovery, specifically through Anthropic's Project Glasswing and WatchGuard's reporting of a 1,500% surge in new endpoint malware variants. Anthropic's approach—limiting broad release of its model due to potential misuse for rapid exploitation—was supported by partnerships with cloud and technology firms including AWS, Apple, Google, and Microsoft, backed by up to $100 million in usage credits. WatchGuard's data demonstrates that while threat discovery is increasing, the rate of remediation has not kept pace, creating a supply-demand imbalance in skilled security operations. Reinforcing this trend, Acronis has promoted a 24x7x365 Managed Detection and Response (MDR) tool positioned to let MSPs deliver always-on monitoring without managing a full security operations center, while broader channel and delivery ecosystem analysis by Jay McBain emphasizes that partners, rather than platform vendors, bear primary responsibility for steady-state customer environments. This confluence of developments shifts the value—and the risk—onto the operational capabilities and governance structures of MSPs. Other referenced solutions, such as Zero Networks' microsegmentation, underscore that containing damage, not just preventing access, is a new business imperative.

    The operational implication for MSPs and IT providers is a shift from measuring security by tools deployed to measuring and pricing security by demonstrated remediation throughput. Service contracts will need to specify not only what solutions are deployed, but also explicit commitments on response times, closure rates, and SLA-backed operating motions; a lack of clear remediation commitments leaves unpriced liability as discovery rates outpace closure capacity. Providers are encouraged to separate vulnerability discovery reporting from remediation progress, build reporting layers that highlight closure rates, and reconsider flat-fee models that do not account for increased operational workloads and accountability risks.

    00:00 Closure Is Finite
    04:10 Close the Gap
    06:32 Govern or Absorb
    08:57 Why Do We Care?

    Supported by: Zero Networks and ScalePad
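    The closure-rate reporting described above can be made concrete with a small sketch. This is a hypothetical illustration, not from the episode: the ticket structure, the 30-day window, and the formula (tickets closed within the window divided by tickets opened in it) are all assumptions.

```python
from datetime import datetime, timedelta

# Hypothetical vulnerability tickets: (opened, closed_or_None).
tickets = [
    (datetime(2026, 4, 1), datetime(2026, 4, 3)),
    (datetime(2026, 4, 2), None),                  # discovered, not yet remediated
    (datetime(2026, 4, 5), datetime(2026, 4, 6)),
    (datetime(2026, 4, 8), None),                  # discovered, not yet remediated
]

def closure_rate(tickets, window_days=30, now=datetime(2026, 4, 10)):
    """Fraction of tickets opened in the window that are already closed."""
    start = now - timedelta(days=window_days)
    opened = [t for t in tickets if t[0] >= start]
    closed = [t for t in opened if t[1] is not None and t[1] <= now]
    return len(closed) / len(opened) if opened else 1.0

print(closure_rate(tickets))  # 0.5: discovery is outpacing closure
```

    Reporting this figure separately from raw discovery counts is the distinction the episode draws between discovery volume and remediation throughput.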

    AI Monetization Remains Out of Reach for Most MSPs, Say GTIA's Carolyn April and CompTIA's Seth Robinson

    Apr 9, 2026 • 35:12


    The central structural shift examined is the widening disconnect between the vendor-driven narrative of rapid AI monetization and the operational reality faced by MSPs, as exposed by recent research from GTIA and CompTIA. Despite pervasive messaging from technology vendors that AI features are ready for seamless integration and immediate profitability, survey data indicates that most MSPs remain in early adoption stages, lack tangible processes to operationalize AI, and are stymied by workforce and workflow constraints. Supporting evidence is drawn from CompTIA's data showing that 70% of businesses are still in early AI adoption stages, and only 55% of MSPs expect to turn a profit on AI initiatives in the near term—up from 34%, but well below vendor promises. The majority of current AI activity remains at the individual user level rather than embedded in business-wide workflows, restricting quantifiable ROI and limiting the visibility of productivity gains. Both guests emphasized that most MSPs do not yet have the organizational capability or maturity to move beyond experimentation to operational deployment and monetization.

    Related developments further illustrate this operational gap. Research cited in the episode highlights that only a subset of larger MSPs with more resources have been able to achieve early success with AI, while most are still grappling with process integration, pricing strategies, and talent acquisition. Both the GTIA and CompTIA reports suggest that optimism among firms about AI's potential is running ahead of genuine structural change, with workforce shortages, undefined internal governance, and difficulties in business model adaptation acting as durable barriers. Market sentiment remains positive, but actual organizational transition lags significantly, especially among smaller MSPs.

    Operationally, this environment introduces heightened risk for MSPs who overcommit on vendor promises without aligning internal processes, workforce strategy, and governance. Dependencies on vendor-supplied AI tools expose firms to pricing uncertainty and potential margin compression, especially as clients begin questioning the value proposition when human roles are replaced by automation. Without formalized internal AI governance and skill development, most MSPs face mounting challenges in demonstrating measurable ROI, adapting delivery models, and sustaining service margins. The implication for decision-makers is the need for prudent, phased adoption—prioritizing internal process maturity and realistic expectations over rapid adoption in response to vendor pressure.

    Supported by: CometBackup and TimeZest

    AI Governance Moves Center Stage: Why Audits and Policy Now Define MSP Risk

    Apr 8, 2026 • 12:59


    The episode identifies a structural shift in the evaluation and deployment of AI within organizations: decision-making is now driven by governance, control, and auditability rather than by the features or capabilities of AI tools. This mechanism is anchored in the need for defendable practices amid heightened scrutiny from institutions, regulators, and insurers. The change is observable in companies such as Anthropic and OpenAI, as well as in regulatory and procurement activities tracked by outlets like The New York Times and Business Insider, signaling that market adoption is tightly coupled to liability, enforcement, and institutional risk visibility.

    A primary area of evidence is cybersecurity, where state-sponsored attackers have leveraged AI to automate infiltration attempts, according to reporting on Anthropic's disclosures concerning Chinese actors targeting dozens of companies and agencies. The same sources note that Anthropic's AI identified over 500 previously unknown zero-day vulnerabilities in open-source software, demonstrating increased operational tempo and automation on both sides of the cybersecurity equation. In procurement, declining app download metrics for Claude, following its involvement in U.S. security policy narratives, show how reputational and geopolitical risk can quickly alter adoption patterns.

    Additional developments reinforce this trend. Machine learning conferences have systematically audited and penalized the use of AI-generated peer review, leading to hundreds of paper rejections and mass article retractions, according to Semafor and Nature. On the hardware front, HP, AMD, and Intel are collaborating to address BitLocker vulnerabilities via an industry standard rather than proprietary features, illustrating how vendors are responding to systemic risk through structural controls and standards. Channelholic's references to workforce limitations underscore that automation's workload cannot be absorbed by labor alone.

    For MSPs and IT service providers, these developments mean the core value proposition shifts from offering AI tools to governing their use, ensuring full documentation, traceability, and defensibility. Failure to treat this as a governance issue leads to underpricing, overlooked controls, and transfer of liability for autonomously executed actions. Providers must now develop acceptable use policies, audit AI agent activity logs, and systematically vet vendors on audit trail, policy, and breach notification—otherwise risking exclusion from regulated deals and exposure to contractual and compliance penalties.

    00:00 The Visibility Problem
    03:45 Platform Lock-In
    06:30 Governed or Liable
    09:35 Why Do We Care?

    Supported by: CometBackup and TimeZest

    AI Deployment Exposes Workflow Gaps—MSPs Face Increased Liability and Coordination Demands

    Apr 7, 2026 • 13:06


    Automation and AI are shifting the pricing and accountability models for managed service providers, with risk increasingly centered on governance, workflow coherence, and outcome measurement rather than tool deployment. Evidence from studies like Fixify's, reports from ChannelLive, and real-world cases such as the City of Seattle's pause on its Microsoft Copilot rollout highlight that technology adoption is now gated less by access to solutions and more by readiness to govern, coordinate, and prove outcomes across fragmented processes. Automation exposes underlying coordination debt, moving the client focus from paying for labor time to demanding measurable outcomes and managed exceptions.

    Fixify's analysis of more than 50,000 support tickets from 30+ organizations showed that tickets with at least 75% automation saw average resolution in 4.4 hours, versus roughly three days for non-automated tickets. Data cited from OpenAI found that 93% of London SMBs use AI tools, but readiness and uptake are highly uneven within the UK. In Seattle, more than 450 labor hours per week were reported saved during the Copilot pilot, yet adoption was paused due to concerns over data governance and accountability for errors, not tool capability. According to coverage in GeekWire and IT Pro, these dynamics are shifting buyer expectations and vendor liabilities.

    Supporting developments include security concerns outlined in Kaseya's INKY report, which highlights the normalization of AI-generated phishing and changes in attack formats, forcing defenders to rethink detection and response. The operational surface of automation—where AI reshapes data, not just moves it—means standard controls and classic alerts are increasingly bypassed. Reports from InformationWeek and experts such as Dan Lohrmann emphasize that accountability for exceptions, shadow AI usage, and data exposure is shifting by default onto providers, whether or not contracts address these risks.

    These trends mean MSPs face direct operational and contract exposure: clients and auditors are demanding proof of how AI touches data, how exceptions are handled, and where logs and controls exist. Pricing based on seats or tickets is becoming harder to defend as automation compresses labor and raises expectations for accountability. Providers must reconsider SLAs, explicitly define automation boundaries, charge for governance activities, and move toward outcome-based pricing models if they want to avoid absorbing unpriced liability and operational complexity.

    00:00 Automation Divide
    04:27 Coordination Debt
    06:01 Automation Liability
    09:18 Why Do We Care?

    Supported by: JumpCloud and HaloPSA
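    The Fixify comparison cited in this episode (4.4-hour average resolution for heavily automated tickets versus roughly three days otherwise) is the kind of metric a provider can recompute from its own PSA export. A minimal sketch, assuming invented ticket fields and borrowing the study's 75% automation threshold:

```python
# Hypothetical PSA export rows: (automation_fraction, resolution_hours).
tickets = [
    (0.90, 3.5), (0.80, 5.0), (0.75, 4.5),     # tickets with >=75% automation
    (0.20, 70.0), (0.00, 75.0), (0.40, 68.0),  # mostly manual tickets
]

def mean(xs):
    return sum(xs) / len(xs)

# Split resolution times at the automation threshold and compare averages.
automated = mean([h for a, h in tickets if a >= 0.75])
manual = mean([h for a, h in tickets if a < 0.75])

print(f"automated: {automated:.1f}h, manual: {manual:.1f}h")  # automated: 4.3h, manual: 71.0h
```

    Publishing a split like this to clients is one way to ground outcome-based pricing in measured resolution throughput rather than seat counts.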

    Richard Luna: MSPs Risk Margin Erosion by Relying on Rented Stacks and App Reselling

    Apr 6, 2026 • 26:40


    The episode exposes a structural shift in the MSP sector toward increased commoditization and infrastructure dependence, with an industry trend favoring outsourced, app-focused service delivery over internal technical depth. Protected Harbor, led by Richard Luna, is presented as a counterpoint—running its own infrastructure and software, and prioritizing ownership of the technical stack rather than relying extensively on third-party platforms. Luna argues this industry-wide movement has created a market where low entry barriers and rented, commoditized solutions undermine differentiation and inflate operational risk.

    Central to the discussion is the declining emphasis on technical generalists within MSP organizations, replaced by hyper-specialization and a proliferation of app resale as a service model. Luna attributes industry-wide declines in service quality and net promoter scores (typically ranging from 30–38 for MSPs) to these trends, suggesting the loss of generalist skills erodes problem-solving capacity and increases reliance on external vendors for core functions. He states that running owned infrastructure and open-source tools allows for tighter cost controls, standardization, and faster response to operational events—a contrast to MSP models that outsource most functions.

    Supporting developments include a detailed critique of the risk dynamics associated with using hyperscale vendors for client-facing services. Luna distinguishes between utility-grade services like power, which can be outsourced without significantly affecting the customer relationship, and services closer to the client experience (e.g., remote access, help desk, data workflows) that, if outsourced, reduce both control and differentiation. Additional risk surfaces are highlighted with the integration of AI and automation, especially when MSPs use large public models that may ingest sensitive client data and create potential information leakage or competitive exposure.

    The operational implications for MSPs and IT leaders include heightened vendor dependency, expanding contract risk, and declining service quality when organizations prioritize app resale and specialization over in-house competency and direct infrastructure management. To mitigate these risks, the episode suggests MSPs should reassess which functions to control internally versus outsource, invest in developing technical generalists, and scrutinize the downstream effects of workflow automation and AI adoption—especially regarding client data privacy, model training, and real-time operational accountability.

    AI Moves MSPs From Tool Support to Operational Liability as Hybrid Platforms Expand

    Apr 3, 2026 • 10:49


    The episode highlights the increased operational complexity and governance burden resulting from the fragmented adoption of AI and hybrid, multi-platform environments in IT service delivery. Companies such as Proton (with Proton Workspace) and governance platforms like KiloClaw represent the expanding landscape of tools requiring oversight, while core productivity platforms continue to diversify. Research from Westcon-Comstor, Forrester, and Gartner, as reported by Dave Sobel, demonstrates that AI is not a turnkey solution but introduces a new operational surface area that must be actively managed.

    Channel Dive's Westcon-Comstor survey of 500 MSP and cloud decision-makers found that almost a quarter see cloud migration and management as their main revenue opportunity, but over 30% identify cross-platform data management as the top challenge. Security and governance pressures follow closely. Forrester data shows only a marginal increase in prompt engineering proficiency, while most employees report that AI increases workloads rather than reducing them, indicating persistent process fragmentation and unclear roles. VentureBeat cited Intuit's observation that successful AI adoption is characterized not by autonomy, but by controlled execution where humans maintain accountability for judgment and exception handling.

    Supporting this, products like Proton Workspace are fragmenting the core productivity stack, and the emergence of "shadow AI" (where personal AI agents operate outside formal governance) is driving organizations to deploy governance tools such as KiloClaw. According to research cited from Front, 93% of companies are using AI in customer operations, yet 71% report significant AI-related issues in the past three months, indicating that poorly governed automation increases handoffs, exceptions, and escalations, which often default to MSPs to resolve.

    For MSPs and IT service providers, these trends translate into an expanded responsibility for governing the automation and AI layers within client environments. When MSP contracts and service definitions fail to specify the scope of coordination, exception handling, and governance for AI and automation tools, the provider risks absorbing significant unmetered labor and liability. The episode emphasizes that governance tooling should be viewed as temporary infrastructure and not a core component of an MSP practice. Providers should audit client environments for AI exposure, review contract terms, and prepare to offer explicit, separately priced control layers as customer demand for governance outcomes increases.

    00:00 Stack Fragmentation
    02:56 Human-Bounded AI
    04:25 Coordination Tax
    07:18 Why Do We Care?

    Supported by: CometBackup and HaloPSA

    AI Agents Shift MSP Accountability: Howard Cohen on Liability Beyond IT Infrastructure

    Apr 2, 2026 • 34:52


    The episode highlights a structural shift from MSPs managing infrastructure to supplying, designing, and maintaining AI-driven agents, raising new questions of accountability and operational risk. As AI agents evolve from assistive chatbots to supervised and potentially autonomous systems, the channel faces liability transfer, governance gaps, and an increased need for systems architecture competence. Companies referenced include Klarna, which serves as a cautionary tale for poor AI design, and vendors such as OpenAI, Anthropic, and Microsoft, all of whom are moving the market toward agent-based operations.

    The most consequential development detailed is the shifting liability for AI-driven outcomes: agent builders and MSPs become responsible for unintended actions, errors, or hallucinations produced by deployed agents. Clarifying accountability is necessary because incidents—such as email mishandling or unauthorized decisions by AI agents—do not absolve the MSP of responsibility. Recent discussions indicate few cases where foundational technology vendors are held liable; usually, the burden falls on those who deploy and support AI agents for clients. The episode cites Klarna's experience as a failure of design thinking, emphasizing that the design of agents—beginning with the end in mind—is key to mitigating risk.

    Supporting developments include the segmentation of AI solutions across SMB, mid-market, and enterprise clients, with complexity scaling as MSPs attempt to transition from simple assistive AI to supervised and fully autonomous agents. The episode notes that fewer than 5% of deployed agents are fully automated, and security vendors are increasingly involved in AI governance, risk, and compliance (GRC) due to the importance of data governance in AI projects. Regulatory coverage and insurance gaps are recognized, with advice for MSPs to re-examine their E&O policies and move toward frameworks for AI trust and transparency.

    Operational implications for MSPs and IT service providers are concrete: providers must reconsider contract exposure, review insurance coverage, and invest in AI governance mechanisms such as agent oversight and auditing. Price-to-value methods are recommended over simplistic per-agent or per-hour billing, requiring sophisticated project scoping and market analysis. The episode underscores that MSPs cannot rely solely on vendor solutions for risk mitigation—service providers are ultimately accountable for AI outcomes delivered to clients, necessitating operational safeguards and human-in-the-loop design wherever possible.

    Supported by: ScalePad and Zero Networks

    Control Layer Becomes Essential: Clients Trust AI Outputs Less, MSPs Must Provide Audit Trails

    Apr 1, 2026 • 11:48


    The dominant structural shift highlighted is the movement of value from AI-driven features to the ownership and governance of the control plane—specifically, entities that set boundaries, maintain proof, and keep automated workflows within defined limits. This shift is evidenced by workforce polling from Quinnipiac University, business formation trends tracked by the Bank of America Institute and Census Bureau data, and product launches from vendors like TeamViewer and KnowBe4. These developments underscore a growing reliance on automation where traditional human oversight is minimized and technology increasingly assumes direct control over work execution.

    The episode details workforce sentiment, citing a Quinnipiac University poll in which only 15% of respondents expressed willingness to work for an AI boss, and 70% anticipated AI would reduce job opportunities. Bank of America Institute data notes a 15% year-over-year increase in high-propensity businesses—those likely to launch—while businesses planning to hire have fallen by 4%. TeamViewer has introduced TIA Reporting, which generates dashboards via natural language prompts, reducing specialist requirements. KnowBe4's ADA Orchestration automates security awareness scheduling and execution, reportedly shortening setup times from hours to seconds. These examples show how vendors are deploying AI tools that replace specific manual oversight with algorithmic management.

    Supporting developments reinforce the governance gap. According to a CIO Dive report, 96% of C-suite leaders expect productivity gains from AI, yet 77% of employees report increased workloads, signaling misalignment between leadership intent and actual outcomes. Tech Bullion reveals 60% of organizations have AI integrated in at least one core function, with 65% using generative AI regularly, but fewer than a quarter have operationalized ethical AI frameworks. The Verge covers enhancements to Anthropic's tools that embed guardrails where organizational controls are lacking. Additional survey data from TechCrunch shows that usage of AI is growing while trust in its outputs remains weak; only 24% of respondents trust AI most of the time.

    Operationally, the implication is clear for MSPs and IT leaders: as organizations reduce human oversight and delegate more work to automation, the auditability, accountability, and control of automated workflows become direct contractual risk. Control layers—such as logging, exception handling, and approval thresholds—must be productized and priced, not treated as informal advisory work. Liability for automation failures must be clearly assigned and managed through contractual terms, with automation incident response separated from standard support. Without enforceable governance and evidence of control, MSPs risk absorbing unpaid remediation work as clients expect both automation benefits and assurance of outcome.

    00:00 Bossless Workforce
    03:22 AI, No Guardrails
    05:45 Govern or Absorb
    08:41 Why Do We Care?

    Supported by: Nerdio and HaloPSA
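    One way to read "approval thresholds" concretely: automated actions below a blast-radius limit proceed and are logged, while larger ones are held for human sign-off. This is a hypothetical sketch, not a product's actual control layer; the threshold value, action names, and return values are invented for illustration.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("control-layer")

APPROVAL_THRESHOLD = 100  # invented limit: actions touching more records need sign-off

def execute_action(action, records_affected, approved=False):
    """Gate an automated action behind an approval threshold, logging both paths."""
    if records_affected > APPROVAL_THRESHOLD and not approved:
        log.warning("held %s (%d records) pending approval", action, records_affected)
        return "pending_approval"
    log.info("executed %s (%d records)", action, records_affected)
    return "executed"

execute_action("mailbox-cleanup", 12)  # small change runs and is logged
execute_action("bulk-delete", 5000)    # large change is held for a human
```

    The log entries double as the audit trail clients increasingly ask for, which is what makes a layer like this priceable rather than informal advisory work.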

    Why Stack Complexity, Not Automation, Drives MSP Margin Volatility

    Mar 31, 2026 • 11:07


    Margin volatility driven by operational complexity and governance gaps is reshaping the economic landscape for MSPs and IT service providers. Evidence shows that the effectiveness of automation now depends less on deployment volume and more on whether it reduces complexity and enforces coherence across client environments, as highlighted by the host referencing reports from TechCentral, Avik, and vendors such as VMware, Broadcom, Microsoft, and Apple. The key structural shift is that clients and technology vendors are consolidating platforms and workflows to restore operational clarity, which fundamentally alters how MSPs structure service offerings and pricing.

    The most consequential development cited is Broadcom's transition of VMware users toward Cloud Foundation 9, with half of surveyed organizations (n=450 across 14 countries, each with 500+ employees) stating an intent to reduce their VMware footprint by 2028 in response to bundled offerings deemed too costly or complex, according to The Register. This reduction in adoption signals accelerated migration efforts, downsizing of virtual machine fleets, and movement toward alternative platforms, indicating margin pressure and uncertainty for MSPs supporting heterogeneous environments.

    Supporting developments reinforce this shift. Apple's introduction of Apple Business—a unified platform encompassing device management, email, calendar, directory services, and marketing tools—demonstrates a move toward environments with fewer moving parts and less operational ambiguity. Microsoft's Copilot Cowork for Microsoft 365 similarly embeds AI directly within core workflows, with enterprise guardrails and coherence at its center, rather than simply layering on new tools. Reports from Avik and Forrester underscore persistent gaps between leadership intent and frontline capability, especially around fragmented visibility and unaddressed governance requirements, amplifying the consequences of unmanaged complexity and AI misalignment.

    For MSPs and technology leaders, the operational takeaway is a need to prioritize the reduction of client environment complexity and establish explicit controls around AI and automation. Auditing fixed-fee agreements for AI work clauses, defining coverage for remediation and exception handling, and building enforceable governance layers are critical to avoid absorbing unpriced risk and free labor. Stack simplification is now paramount, since automation on top of complexity increases volatility and cost. Service contracts are trending toward bifurcation, with standardized platform offerings at lower rates and non-standard exception handling priced separately, shifting where profit and risk reside.

    00:00 Consolidation Wave
    03:08 Coherence Gap
    04:59 Margin Leak
    08:14 Why Do We Care?

    Supported by: ScalePad and Zero Networks

    Howard Rubin: Why Tech Spending Benchmarks Often Mislead Operators

    Mar 30, 2026 • 20:12


    A persistent structural challenge highlighted in this episode is the disconnect between technology investment and demonstrable business outcomes, which fuels operational inefficiency and accountability gaps in technology spending. As articulated by technology economist Dr. Howard Rubin, a common industry tendency is to measure IT success based on technology adoption or budget size rather than objective business results. This pattern is not limited to large enterprises but affects small and mid-sized organizations, many of which feel compelled to maintain "current" technology without clear evidence of operational or financial return.

    Primary evidence centers on the inadequacy of current macroeconomic indicators—such as the Consumer Price Index (CPI) and Gross Domestic Product (GDP)—for assessing technology value and risk in smaller organizations. Dr. Rubin noted that official statistics and classic economic telemetry do not track the true inflation or productivity impact of technology stacks, particularly as hyperscalers invest trillions in infrastructure. The transcript highlights that price increases or capital recovery pressures in services like Microsoft Office or cloud platforms are likely to affect smaller organizations first, exacerbating operational risk and cost unpredictability.

    Supporting developments include analysis of flawed benchmarking practices, such as using IT spend as a fixed ratio to revenue or operating expense without examining enabling value or efficiency outcomes. Failure to contextualize technology investments can lead to counterproductive decisions, like arbitrary cost-cutting when IT as a percentage of expenses rises, ignoring possible operational savings or revenue lift driven by technology. Dr. Rubin advocates for pattern recognition and bespoke analysis over reliance on aggregated industry numbers, pointing out that mass market vendor investments and macroeconomic policy often obscure direct impacts at the SMB and MSP level.

    For MSPs and technology decision-makers, the operational implication is a heightened need to create internal technology inflation indices and track category-specific price pressures. Rather than relying on aggregate industry benchmarks or public economic data, service providers should establish tailored metrics to capture their own cost structures, labor pressures, and technology value. The discussion points toward the need for more deliberate accountability and ongoing evaluation—especially given that upstream price increases from hyperscalers and SaaS vendors are set to impact providers and their clients, with limited ability to negotiate at smaller scale.
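    An internal technology inflation index of the kind Rubin describes can be sketched as a weighted average of category price relatives (a Laspeyres-style index over base-year spend weights). The categories, weights, and prices below are invented for illustration; the episode does not prescribe a specific formula.

```python
# Hypothetical spend categories: name -> (weight in base-year spend, base unit price).
base = {
    "saas_licenses": (0.5, 100.0),
    "cloud_compute": (0.3, 80.0),
    "security_tooling": (0.2, 60.0),
}
current_prices = {"saas_licenses": 115.0, "cloud_compute": 92.0, "security_tooling": 63.0}

def inflation_index(base, current):
    """Weighted average of price relatives; 1.0 means no internal inflation."""
    return sum(w * (current[k] / p0) for k, (w, p0) in base.items())

idx = inflation_index(base, current_prices)
print(f"internal tech inflation: {(idx - 1) * 100:.1f}%")  # internal tech inflation: 13.0%
```

    Tracking this per category, rather than leaning on CPI or aggregate IT-spend ratios, is the kind of tailored telemetry the discussion argues smaller providers need.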

    Draup Data Shows Cybersecurity Hiring Pressure Will Persist Into 2028 - Vijay Swaminathan

    Mar 27, 2026 • 21:13


    The core structural shift highlighted involves a skills convergence and expanded role definition across technology and business functions. Draup's Global Tech Talent Report and commentary by Vijay Swaminathan underscore the rising complexity and blending of job expectations, particularly as artificial intelligence (AI) and automation penetrate workflows. Companies are reorganizing hiring strategies and role definitions, prioritizing adaptable expertise over traditional IT job titles, and emphasizing domain specialization. Service providers are observing a move from specialized roles toward hybrid positions that demand broader understanding of business operations, compliance, security, and AI. The most consequential development is the persistent and intensifying shortage of cybersecurity professionals, as referenced in Draup's report. According to Vijay Swaminathan, the gap between open cybersecurity positions and qualified candidates is projected to continue through at least 2028, driven by accelerated adoption of AI/ML, IoT, and cloud technologies. Job requirements have shifted, with a 25–30% increase in skill expectations for roles in engineering, security, and product management. This expansion of necessary competencies outpaces traditional training and hiring channels, further complicating workforce planning for the sector. Additional developments reinforce these structural stressors. The report asserts that 40% of current core tech skills will be partially obsolete by 2027 due to ongoing skill fusion and AI-enabled workflows, not just layoffs. Companies are also recruiting for new categories such as “builders,” “orchestrators,” and “synthesizers,” whose duties blend technical and business intelligence. Vijay Swaminathan points out an emerging need for deep domain expertise, process documentation, and AI governance, as evolving data collection and product experience initiatives redefine value creation across verticals like retail and hospitality. 
For MSPs, IT service providers, and technology leaders, these changes increase operational complexity and demand more investment in continuous upskilling, industry-specific hiring, and governance. Maintaining domain specialization and robust compliance documentation will become baseline requirements for winning and retaining business, but these add overhead and require strategic selection of verticals. The evolving tech stack and expansion of hybrid workflows drive greater dependency on creative, adaptable talent—exposing firms to increased risk if reskilling and governance fall behind the pace of automation and regulatory scrutiny. Supported by: Nerdio, HaloPSA

    How AI and Vendor Defaults Are Redefining MSP Liability: Brad Gross on Contract Risk

    Play Episode Listen Later Mar 26, 2026 42:21


    The episode identifies risk allocation and governance gaps in managed service provider (MSP) contracts as the prevailing structural challenge driven by the rapid deployment of AI solutions and evolving vendor models. This shift is characterized by increased pressure from both upstream vendors—including Microsoft, Anthropic, and OpenAI—and end clients, who demand swift adoption of AI-enabled productivity features without corresponding updates to underlying agreements or clarity on responsibility. These market developments have introduced new liability exposures for MSPs, as legacy contract language is ill-suited for environments where MSPs rely on, or are required to implement, external or agentic technologies. The discussion details how aggressive marketing and client demand for AI solutions outpace both technical maturity and customer readiness for governance. According to Speaker B, this urgency often pressures MSPs to deploy AI features—such as automated recommendations for firewall settings or configuration changes—without comprehensive risk disclosure or client policy alignment. The transcript notes a pattern in which clients insist on operational changes based on AI system outputs, even when technical staff advise caution, resulting in disputes over responsibility when these interventions lead to adverse outcomes. The episode further highlights operational risk endemic to the shift toward consumption-based pricing and increasing default configurations set by upstream vendors. For instance, Microsoft's move toward extended service term (EST) pricing and other consumption models is cited as a driver that transfers variable cost risk directly to MSP clients. The lack of customer engagement in quarterly business reviews and misalignment in expectations around true-up processes were presented as reinforcing issues, potentially leaving service providers solely accountable for the financial and operational impact of unexpected platform behavior or AI incidents.
For MSP operators, the immediate operational implications include the necessity for explicit contract revisions, detailed service descriptions, and targeted AI-specific policies referenced at the quoting and onboarding stages. Providers are advised to distinguish clearly between services, tools, and outcomes within agreements and establish client buy-in through formal documentation and regular communication. Without disciplined governance procedures, written allocation of AI-related risks, and enforced business reviews, MSPs face elevated exposure to liability inherited from vendor defaults and unaddressed gaps in legacy contract frameworks. Supported by: Rythmz, ABC Solutions, LLC

    Agentic AI Shifts Liability to MSPs: The Operational Risks Behind Autonomous Agents

    Play Episode Listen Later Mar 25, 2026 11:14


    The core structural shift addressed centers on the transition from AI as an assistive tool to agentic AI operating autonomously within business systems, thereby moving risk and control issues to the forefront. Agentic AI—characterized by the ability to independently execute actions within user interfaces, browsers, and systems of record—is changing the dynamics of accountability and operational authority. Companies like Meta are experiencing incidents where AI systems can enact changes or publish guidance inside live environments, making the question less about feature innovation and more about containment, permission, and the allocation of responsibility. A key development cited is a security-related incident at Meta, where an AI-generated and published security directive resulted in a real operational consequence without direct execution rights. This illustrates the growing risk, as agentic AIs are now capable of operating through the same channels as human users while accessing sensitive data and functions. Vendors such as Anthropic are enabling agentic capabilities, including control over full user workflows and system access, while security vendors and platforms like Microsoft are shifting towards identity frameworks and policies specifically designed to constrain agent autonomy and protect operational environments. Additional developments reinforcing this shift include the expansion of agentic AI into mainstream products, such as Perplexity's browser embedding AI assistants directly in everyday workflows, and the increasing integration of AI agents into databases and enterprise platforms. As these agents mature, the risk profile shifts from theoretical to operational, with vendors updating contracts to transfer liability downstream to service operators. This emphasizes that risk is no longer contained by traditional permissions and access control, and audit trails and proactive governance must become new priorities for service providers. 
These dynamics demand that MSPs and IT leaders re-examine operational and contractual practices. Agent deployment without properly scoped permissions, logging, and defined ownership of outcomes exposes operators to unpriced liabilities rather than incremental value. Practical requirements now include explicit service agreements covering agent actions, comprehensive permission reviews, and client-facing agent readiness assessments to establish due diligence. Failure to provide evidence of agent governance can result in being treated as an uninsurable risk, pushing governance standards from optional best practice to commercial necessity. 00:00 AI Acts Now 02:57 Who Owns It? 05:13 Trust Breaks Here 08:01 Why Do We Care? Supported by: CometBackup, HaloPSA

    Government Policy Moves Vendor Choice from Preference to Proof for MSPs

    Play Episode Listen Later Mar 24, 2026 11:42


    The structural mechanism highlighted in this episode is the shift of government policy from serving as a regulatory guardrail to acting as a direct steering function in technology selection, shifting liability boundaries and procurement decisions onto MSPs and their contracts. Federal agencies, including the FCC and the White House, are no longer just prescribing security outcomes but are increasingly specifying acceptable inputs such as specific routers, AI contract terms, and cloud platforms, converting technology choices into explicit compliance obligations. A consequential development supporting this shift is the FCC's move to ban imports of consumer-grade routers manufactured outside the United States, a policy change that directly impacts not only residential but also business environments such as home offices and smaller hybrid setups. Additionally, the White House's push for a unified national AI governance framework, rather than a patchwork of state-based rules, further codifies what vendors and MSPs must document and justify in both procurement and ongoing service delivery. Contractual requirements—such as the GSA's draft AI clause—are moving compliance from best practice guidance to enforceable terms, influencing which vendors can bid for federal contracts and what they must attest to regarding AI-enabled services. Related stories underscore the tightening of enforcement through procurement and certification gates. The transcript cites the FedRAMP system as an example, where conditional approvals and review backlogs highlight operational challenges and reinforce how authorization is less about technical sufficiency and more about meeting buyer and audit expectations. The trend toward requiring supply chain and AI attestations by default in master service agreements is consolidating vendor choice around those that can produce defensible documentation, while increasing burdens for those unable to do so. 
For MSPs and IT providers, the practical implications are increased operational complexity and contract risk. Vendor selection now carries liability exposure that extends beyond technical performance to proving decisions in audits, insurance reviews, and contract disputes. Maintaining evidence-ready reports for backup, recovery, and AI governance is no longer optional, as the inability to produce such proof can result in being excluded from regulated verticals. The expected tradeoff is a consolidation of vendors and solutions, weighted toward those who offer prepackaged compliance and attestation capabilities, but with an accompanying risk of over-dependence and concentration. 00:00 Contract Conditions 02:53 Gates, Not Laws 04:34 Compliance Consolidates 07:30 Why Do We Care? Supported by: ScalePad, Nerdio

    AI Cost Volatility Forces MSPs to Rethink Cloud, On-Prem, and Hybrid Governance

    Play Episode Listen Later Mar 23, 2026 12:07


    The episode outlines a structural shift in the managed services landscape, moving from technology stack standardization toward continuous governance as the primary product. Increasing AI adoption is driving volatility in both hardware costs and cloud billing, expanding the complexity and risk profile that MSPs must manage. Companies such as Microsoft, OpenAI, and Akamai are actively shaping this shift by revising product rollouts and pushing for workload placement strategies that prioritize cost, control, and risk mitigation rather than platform ideology. The core evidence highlighted is that volatile AI-related costs are directly impacting endpoint and cloud spend, undermining the traditional set-it-and-forget-it approach. IDC has revised global PC shipment expectations downward by 11.3% for 2026, citing memory shortages and supply chain disruptions, which is driving up hardware refresh costs and complicating standardization efforts. Wasabi reports that 48% of cloud storage budgets are being consumed by fees instead of capacity, while 72% of organizations now operate with hybrid storage strategies. These developments are increasing the need for contractual controls and workload governance to protect MSP margins. Supporting developments reinforce the market's pivot toward governance. Microsoft's rollback of Copilot integration and the US government's warnings after the Stryker incident emphasize the operational risk of rapid or unmanaged AI deployments. Akamai's expansion of AI inference to thousands of edge locations and OpenAI's launch of smaller, cost-targeted models underscore the growing significance of workload placement and model selection as ongoing operational decisions. According to a Westcon-Comstor survey, nearly a third of MSPs are already repositioning themselves as hybrid advisors, reflecting this market adjustment. 
For MSPs and IT leaders, the implications are clear: traditional fixed-fee models that bundle variable costs are now a liability, absorbing unpriced volatility as AI usage increases. Sustainable operation requires MSPs to separate governance from consumption within contracts and clearly define policies for workload placement, spend guardrails, and permission controls. The episode indicates that successful providers will be those who document, enforce, and price for governance, while those who treat hybrid as a generic technology support issue will face margin erosion and increased risk exposure. 00:00 AI Cost Shock 03:18 Placement Is Strategy 06:06 Margin Splits Here 08:54 Why Do We Care? Supported by: JumpCloud, HaloPSA

    How MSPs Are Reshaping Staffing With AI and Automation: Insights From Peter Kujawa

    Play Episode Listen Later Mar 22, 2026 37:12


    The structural shift explored centers on the reconfiguration of labor dynamics within the MSP sector, driven by slowing wage inflation, increased automation, and the early adoption of AI. This mechanism is documented in the Service Leadership Annual IT Solution Provider Compensation Report, which highlights how top-performing MSPs are leveraging automation and AI for productivity improvements rather than aggressive hiring strategies. The report, as referenced by Service Leadership (a ConnectWise company), provides direct benchmarking on compensation and operational models, underscoring a pivot from pure labor-intensive growth to efficiency and automation as profit drivers. According to the report, wage inflation in the MSP space peaked in 2021–2022, with MSPs facing cost increases as high as 10–14%, but pressures have since gradually eased. Despite this moderation, labor represents 75–80% of cost of goods sold, and wages continue to rise at nearly twice the rate of the consumer price index, the report finds. Best-in-class MSPs have achieved higher margins per employee by both slowing headcount growth and integrating automation and AI, rather than through blanket budget cuts or wage freezes. Notably, these more productive MSPs employ a higher proportion of junior (level 1) technicians, maintain lower average compensation per employee, and tie greater proportions of total pay to performance-based incentives, unlike the bottom quartile. The episode also references broader MSP market forces including security concerns amplified by AI adoption, persistent vendor support gaps such as those with Microsoft, and instability illustrated by OpenAI's controversial government contracts and resulting user boycotts. These developments demonstrate how increasing automation and agent-based AI can pose new governance requirements, business continuity risks, and ethical dilemmas. 
Commentary from the SMB Community Podcast reinforces that industry consolidation, vendor reliability, and the balance between productivity and customer satisfaction will remain ongoing concerns for operators. For MSPs and IT service leaders, the implication is not a simple outsourcing of operational burden to technology, but an increase in vendor dependency, a requirement for ongoing process redesign, and a heightened need for accountability in compensation, automation, and security policy. Adopting automation and AI is likely to shift job mixes and compensation frameworks, reducing reliance on senior technical labor but requiring rigorous performance-based structures and clear governance for emerging technologies. The trend also signals a need for careful vendor selection and data management, as operational resiliency becomes increasingly tied to the stability and support capacity of automation and AI infrastructure providers. Supported by: Rythmz, ABC Solutions, LLC

    Flat-Fee Dispute Resolution: Rich Lee Examines the Operational Shift for IT Service Providers

    Play Episode Listen Later Mar 22, 2026 21:19


    A significant shift addressed in this episode is the reconfiguration of business dispute resolution away from traditional litigation toward digital arbitration infrastructure. New Era ADR exemplifies this mechanism by providing a cloud-based, tech-enabled platform designed to compress legal dispute timelines and costs, fundamentally altering the risk structure for businesses that face contract enforcement issues and litigation exposure. The most consequential development is New Era ADR's assertion that its system resolves typical business disputes in approximately 100 days—up to 90% faster than court litigation—using digital workflows, AI-assisted processes, and a flat-fee pricing model. According to New Era ADR's leadership, the core platform includes end-to-end case management, digital document exchange, and process automation. The platform is positioned as enforceable under the Federal Arbitration Act, enabling mutual agreement for digital arbitration in contractual clauses and establishing predictable resolution timelines versus the uncertainty and duration common in court proceedings. Additional details reinforce this structural shift: the adoption mechanism leverages standard contract language, enabling businesses to designate New Era ADR as their default dispute forum with minimal operational friction. Safeguards are designed around deliberate limits on automation and AI deployment, with a focus on maintaining user trust and compliance with legal standards. Rules and procedures are engineered to prevent process abuse and to align the incentives of mediators and arbitrators, with both service providers and neutral parties subject to flat fees. Early customer adoption, including organizations in regulated sectors and high-profile enterprises, provides social proof for the model. Operational implications for MSPs and IT leaders include reduced contract risk exposure from protracted litigation and improved cost predictability. 
Shifting dispute resolution to digital arbitration platforms requires careful consideration of contract language, arbitration enforceability, and process transparency. Flat-fee models transfer focus from hourly billing to procedure-driven controls, which may impact how MSPs structure their own agreements, vendor relationships, and liability management. Dependence on third-party arbitration platforms adds a new governance dimension, mandating ongoing evaluation of compliance, automation boundaries, and audit trails to mitigate bias and unintended outcomes.

    AI Adoption Is Funding Itself Through Labor Cuts, Not Productivity Gains

    Play Episode Listen Later Mar 20, 2026 11:03


    The deployment of artificial intelligence across the business sector is introducing structural margin pressure rather than delivering the promised productivity dividend. Rather than self-funding through measurable efficiency gains, AI investments are currently being financed through compensation cuts, organizational tightening, and heightened performance expectations, as evidenced by data from ActivTrak, Gallup, Novoresume, and ResumeBuilder. This shift positions AI less as a driver of output and more as a cost-cutting measure embedded in software spending. Concrete developments show that, according to ActivTrak analysis, time spent on email and messaging has increased after AI adoption, while uninterrupted focus time has declined. Gallup data confirms that about 40% of employees use AI tools, though only a fraction leverage them effectively. Novoresume's survey reveals that although half of AI users report completing tasks more quickly, much of the saved time is not reinvested in productive output, and over half of respondents believe they could perform their roles at a similar level without AI involvement. Supporting evidence from Jobs for the Future identifies significant worker skepticism and low readiness, with only 36% of employees feeling equipped to use AI effectively and 44% viewing AI as a net negative for jobs and quality of life. Further, Snowflake's findings indicate that organizations are adjusting headcount to fill new skill gaps while eliminating overlapping functions. Inside the channel, ConnectWise observes that larger MSPs and VARs are curtailing compensation increases and relying on AI as a headcount management lever, exacerbating delivery expectations as evidenced in the Resume Builder findings. The operational consequences for MSPs and IT service providers are clear: organizations can no longer treat AI as a simple add-on. 
Providers face heightened expectations to deliver measurable outcomes—such as enhanced ticket resolution or lower escalation rates—despite constrained labor resources and ongoing workflow disruption. Without system-level productivity proof, procurement may preemptively reduce service spend. Effective risk management now requires auditing AI deployments for verifiable workflow changes, embedding measurable AI outcomes in QBRs, and treating workflow redesign and user training not as optional extras but as necessary, billable services. 00:00 Busier With AI 03:05 AI Outpaces Workers 05:33 MSP Squeeze 07:46 Why Do We Care? Supported by: Nerdio, HaloPSA

    How Insurers Like CyberWrite Are Shifting Cyber Risk and Claims Accountability for MSPs – Nir Perry

    Play Episode Listen Later Mar 19, 2026 22:09


    The episode highlights a structural shift in the cyber insurance market, marked by increasing reliance on risk analytics and automation for underwriting and claims management. Companies like CyberWrite and its CyGPT platform exemplify this move, leveraging artificial intelligence and large language models (LLMs) to support decisions around risk evaluation, policy underwriting, and post-incident analysis. The discussion points to a broader trend where insurers, seeking profitability and efficiency amidst rising cyber threats, increasingly depend on technical risk scoring and automated assessment rather than deep operational understanding of client environments. A key development is the heightened use of pre-breach and post-breach data collection by insurers for client evaluation. According to Nir Perry, insurance companies deploy platforms that scan client attack surfaces, dark web exposure, and implemented security measures, supplemented by questionnaires often completed by MSPs or IT managers. For larger clients or more significant coverage, insurers require more detailed controls and evidence, but the overall business remains highly profitable, with loss ratios generally favorable except in brief harder-market phases. The industry's underwriting models, as outlined by Nir Perry, prioritize statistical risk reduction based on historical breach data, not bespoke knowledge of each MSP's operational reality. Secondary factors reinforcing this shift include tension between checklist-based compliance approaches and practical security management, as well as the growing expectation that AI-enabled tools will speed up risk assessments and ROI modeling for security investments. Nir Perry notes that modern LLM-driven systems can rapidly extract and interpret risk information from technical documentation, enabling faster, data-driven recommendations for both insurers and MSPs. 
However, the episode also covers gaps in accountability when large software vendors shift the risk of vulnerabilities onto customers—a contrast to physical world liability frameworks—indicating persistent governance gaps in cyber risk assignment. For MSPs and IT leaders, increased dependency on insurer-driven checklists and risk models means that decision-making must closely track evolving carrier requirements, not merely technical best practices. Contractual and evidentiary risk arises if controls asserted during underwriting are not maintained, with some carriers declining coverage where documentation is inaccurate or solutions are misrepresented. Providers must account for operational delays during incidents, as insurer processes may prioritize forensics and evidence over immediate restoration. The proliferation of AI tools for risk analysis can help justify investments to business stakeholders but also increases the need for transparent and auditable decision records.

    Vendor Consolidation Shifts MSP Value from Tool Management to Proof of Enforcement

    Play Episode Listen Later Mar 18, 2026 11:59


    The dominant structural mechanism identified is the consolidation of security operations from individual point tools to integrated control planes that automate enforcement and provide continuous assurance. This shift, highlighted through developments at companies such as Huntress, NinjaOne, CrowdStrike, and NVIDIA, is driven by increased complexity in client environments and the acceleration of AI adoption outpacing internal governance frameworks. The trend forces MSPs away from tool management and toward delivering evidence-based assurance within unified operational models. A core evidence point is the visibility and skills gap in AI deployment across enterprises. The Pentera Benchmark study cited in the episode found that two-thirds of CISOs report limited visibility into AI use within their organizations, with none claiming complete oversight. Most respondents named lack of internal expertise as the main barrier, and many are extending legacy security controls to cover AI systems despite unclear ownership and governance. The market response—such as Check Point's introduction of an AI advisory service—is aimed at closing the governance deficit created by rapid, unregulated AI adoption. Supporting developments reinforce this consolidation trend. Huntress now offers managed endpoint and identity posture services that automate security enforcement, while NinjaOne integrates vulnerability identification, patching, and remediation workflows to minimize operator error and reduce tool sprawl. CrowdStrike and NVIDIA are embedding security controls directly into the AI runtime environment, tying governance and observability into the stack rather than layering it on later. These actions illustrate and accelerate the power shift to platform vendors capable of centralized, automated control.
For MSPs and IT service leaders, the operational impact includes increased vendor dependency, pressure to clearly define and prove enforceable outcomes in contracts, and greater risk exposure if platforms control key client data or proof artifacts. The move toward orchestration layers raises switching costs and pushes MSPs to build their own proof and reporting layers to maintain client value. Failure to adapt risks relegating providers to low-margin, commoditized contracts dependent on external vendors for both delivery and accountability. Three things to know today: 00:00 Attackers Adapt 03:11 Platform Takeover 05:34 MSP Reckoning 09:01 Why Do We Care? Supported by: ScalePad, Nerdio

    Margin Pressure for MSPs: How Microsoft Autopatch Moves Governance Upstream

    Play Episode Listen Later Mar 17, 2026 11:39


    The episode reveals a structural shift in the managed services market, where the value proposition for MSPs and IT service providers is moving away from “running the tools” to delivering governance, risk management, and outcome-driven services. This shift is catalyzed by the increasing commoditization of tool-centric operations, as platforms and vendors such as Microsoft (Autopatch), Atera (autonomous agents), Summit Holdings (MSP as a service), and Ruest (RoboRoosty AI Workflow Builder) push standardized automation, workflow tools, and backend service packaging into the market. Cisco's Global State of Security report underscores this trend, identifying tool maintenance and fragmentation as primary sources of inefficiency. Evidence from Cisco shows 59% of security leaders pointing to tool maintenance as the chief inefficiency, with 78% citing tool dispersion and lack of integration. For MSPs, this results in growing unbillable labor spent on connecting systems, onboarding, retraining, and managing exceptions. The report indicates that the cost to deliver services is escalating faster than the value captured in contracts, exposing a margin squeeze and highlighting the risk that unmanaged operational complexity poses to profitability. Secondary developments reinforce the structural shift. Atera's no-ticket operational model and Microsoft's implementation of security updates through Intune and Autopatch transfer control and cadence of IT operations upstream, leaving MSPs responsible for policy exceptions and business risk translation rather than day-to-day execution. Summit Holdings' “MSP as a service” and D&H's expansion into enablement and training further commoditize backend functions, reducing differentiation for providers who fail to retain independent client intelligence and risk management. 
Operationally, the implications for MSPs and IT leaders are clear: dependency on vendor platforms and wholesale backend solutions increases, making risk ownership and client-specific intelligence the remaining sources of defensible value. Providers unable to price or document governance and exception management risk seeing margins erode as they absorb unbillable labor and liability. Future operational strategy will require clear mapping of tools to billable outcomes, explicit governance layers, and careful evaluation of which client insights remain uniquely held versus replicated across standardized platforms. Three things to know today: 00:00 Tools vs Outcomes 02:50 Delivery Gets Packaged 05:17 Defaults Have Costs 07:42 Why Do We Care? Supported by: TimeZest, Small Biz Thoughts Community

    Pentagon AI Model Ban Shifts Control from Vendors to Procurement Authorities

    Play Episode Listen Later Mar 16, 2026 9:00


    The episode details a structural shift in the technology landscape: AI models are increasingly being treated as commodity components, with operational control and procurement decisions moving to the orchestration layer. This change is illustrated by government procurement actions, specifically the Pentagon's designation of Anthropic's Claude model as a supply chain risk and the subsequent shift in model eligibility requirements. Policymaking authorities are now directly dictating which models can be used within national security supply chains, reconfiguring where power, liability, and decision-making sit. The primary development is the Department of Defense's recent disqualification of Anthropic's Claude from eligible contracts, leading to both contract cancellations and legal disputes. Anthropic has responded with lawsuits contesting its supply chain risk designation, while Microsoft has sought court intervention to block the Pentagon's ban, asserting this would prevent disruption to military AI workflows. The State Department has also moved its internal chatbot infrastructure from Claude Sonnet 4.5 to OpenAI's GPT-4.1, aligning with the President's compliance directive. Supporting developments include Google's deployment of Gemini-powered AI agents within the Department of Defense, and the emergence of tools such as Perplexity's APIs, which aim to simplify workflow construction across multiple models. The episode emphasizes that model swaps by agencies are not merely technical updates, but policy-driven control decisions. These actions underscore a climate in which model eligibility and operational portability are shaped by compliance and procurement authorities rather than technical teams or vendors. Operational implications for MSPs and IT providers are profound. Single-model dependencies now present measurable contract risk, especially for clients in defense, healthcare, or finance sectors.
Swapping models requires revalidation of prompts, outputs, and integrations, rather than simple API repointing. Providers are advised to audit workflows for reliance on any one model, prioritize abstraction layers that enable smooth transitions, and position model-agnostic architectures as proactive risk management. In a landscape defined by commodity models and policy-driven eligibility, model diversification now represents continuity planning rather than an engineering preference. Three things to know today: 00:00 Pentagon vs. Anthropic 02:19 Beyond the Model 05:07 Why Do We Care? Supported by: ScalePad, Small Biz Thoughts Community

    RAM Shortages Reshape Channel Economics: Interview with Howard Davies

    Play Episode Listen Later Mar 15, 2026 21:50


    The episode centers on sustained component shortages in the IT channel, specifically RAM, which are expected to last for approximately two years. Dave Sobel and Contextworld CEO Howard Davies review the immediate and projected impacts, citing that shortages are driving manufacturers to allocate available components to higher-priced machines, hollowing out mid-range offerings. The result is a decline in unit sales, particularly in the consumer segment, offset by increases in average selling prices. Vendors may see overall revenue growth despite fewer units sold, but questions remain about whether increased margins will benefit distributors and resellers or be absorbed by vendors. Supporting data includes projections for the European market: unit sales are anticipated to decline by around 7%, while average selling prices may rise by approximately 14%, yielding a potential 6% net increase in vendor revenues. There is a distinction between business and consumer purchasing behaviors; business buyers are expected to maintain higher levels of spending due to operational requirements and perceived advantages from new hardware, especially AI-enabled devices, while consumer demand is forecast to soften due to price sensitivity. Adjacent topics include shifts in purchasing habits and technology adoption. Contextworld's sales data indicate increased demand for in-person retail, particularly in Europe and the UK, attributed to consumer interest in hands-on evaluation of new technologies, such as AI-capable PCs. While AI as a concept seldom drives purchasing decisions directly, named features like Copilot PCs are recognized as influencing consumer choices. The conversation also highlights Apple's expanding focus on business markets, with optimism for its forthcoming AI capabilities, and the emergence of vendors like Anthropic targeting enterprises with security and social responsibility as differentiators.
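The European projection above is simple compounding: revenue scales with units times average selling price, so a 7% unit decline and a 14% price rise net out to roughly a 6% revenue gain. A minimal sanity check (the percentages come from the episode; the function name is illustrative):

```python
# Sanity check of the episode's European market projections:
# ~7% fewer units sold, ~14% higher average selling price (ASP).
# Revenue = units x ASP, so the two fractional changes compound.

def revenue_change(unit_change: float, asp_change: float) -> float:
    """Return the net fractional revenue change given fractional
    changes in unit volume and average selling price."""
    return (1 + unit_change) * (1 + asp_change) - 1

net = revenue_change(-0.07, 0.14)
print(f"Net vendor revenue change: {net:+.1%}")  # roughly +6%
```

Note that the gain is multiplicative, not additive: 0.93 × 1.14 ≈ 1.060, slightly less than the naive 14% − 7% = 7%.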
For MSPs and IT leaders, the primary operational implications include the need to adapt to a competitive landscape marked by supply constraints, price volatility, and evolving buyer behavior. The channel may be strengthened by integrating new value-added services, such as cybersecurity and managed services, yet risk remains regarding margin capture and vendor strategies. Providers are advised to monitor shifts toward ecosystem-driven AI solutions and evolving market programs, as well as opportunities in "declining" market segments that may still offer profitability for those able to meet residual demand efficiently.
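The European projection above is simple compounding: revenue scales with units times price, so the two percentage changes multiply rather than add. A minimal sanity check of that arithmetic, using the episode's rounded figures:

```python
# Sanity check of the European market projection cited in the episode:
# unit sales down ~7%, average selling prices up ~14%.
unit_change = -0.07   # projected decline in units sold
asp_change = 0.14     # projected rise in average selling price

# Revenue = units x price, so the net change is the product of the two factors.
revenue_factor = (1 + unit_change) * (1 + asp_change)
net_change_pct = (revenue_factor - 1) * 100

print(f"Net vendor revenue change: {net_change_pct:+.1f}%")
```

The product works out to 0.93 × 1.14 ≈ 1.060, i.e. roughly a 6% net revenue gain, consistent with the projection cited in the episode.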

    Microsoft and Anthropic Reshape MSP Partner Control Through Ecosystem Lock-In

    Play Episode Listen Later Mar 13, 2026 9:10


    The episode identifies a fundamental structural shift in the MSP and IT services landscape: vendor channel consolidation and ecosystem dependency are increasingly determining who controls customer relationships, margins, and access to recurring revenue streams. Companies such as Microsoft, Anthropic, and Huntress are actively reshaping the ecosystem by investing significant resources in partner programs and platform strategies that dictate operational baselines and restrict neutrality. This realignment is driving MSPs to deliberately choose platform alignments, as attempting to remain neutral increasingly results in a loss of relevance and market access. Central to this shift is Anthropic's $100 million investment in launching the Claude Partner Network for 2026, which creates certification and co-sell incentives for firms capable of implementing Claude within enterprise environments. According to Dave Sobel, this is not long-range product development but a concentrated customer acquisition cost to rapidly build channel coverage. In parallel, Microsoft is embedding Anthropic models within Copilot, shifting to a multi-model approach that retains flexibility at the AI model layer while keeping Azure as the entrenched operational platform. Supporting developments reinforce these channel and ecosystem pressures. Huntress's move to expand its partner program to value-added resellers (VARs) dilutes its previously MSP-exclusive channel, removing some of the distribution advantages MSPs may have relied upon. Sonomi's positioning of third-party risk management as an MSP revenue opportunity comes amid rising supply chain risk, as supported by ConnectWise's 2026 MSP Threat Report highlighting increased identity abuse and supply chain attacks. Simultaneously, declining PC shipments—especially for budget devices—are shifting the economic emphasis from hardware projects to operational service engagements such as identity governance and lifecycle management. 
The operational implications for MSPs are clear: partner program frameworks have become the gatekeepers of pricing, leads, and ongoing service annuities, reducing the room for independent strategy or procurement-driven decisions. Ecosystem alignment must be intentional and based on a realistic assessment of program timelines, certification windows, and revenue structure. As hardware refresh cycles slow and vendors consolidate services and identity requirements, MSPs face increased dependency risk, potential margin erosion, and diminished negotiating leverage. Those failing to anticipate or adapt to these shifts risk being relegated to subcontractor roles without control over customer relationships or recurring revenues. Three things to know today 00:00 AI Channel War 02:27 Identity Baseline Shift 03:43 Refresh Revenue Shift 04:46 Why Do We Care?  Supported by:  Small Biz Thoughts Community   

    Drop in Search Clicks and Rise in AI Distribution Channels Shift Value Away from Traditional MSPs

    Play Episode Listen Later Mar 12, 2026 11:29


    AI deployment is compressing margins and altering the economic structure of the IT services market, with digital platforms and private equity–backed consulting now determining who controls distribution, interfaces, and downstream value capture. As referenced by Dave Sobel, developments such as large language models reshaping search, IT distributors repositioning as digital marketplaces, and private equity standardizing AI consulting are reducing the role of traditional MSPs to commoditized implementation labor. Concrete market evidence includes the Global Technology Distribution Council's report citing that 80% of vendors see partner ecosystem growth as key, while 86% are using or testing digital platforms to drive cloud and AI services. Examples such as Anthropic's discussions to create AI consulting joint ventures with Blackstone and Hellman & Friedman, as well as OpenAI's partnerships with Thrive Holdings and Shield Technology Partners, show that operational models are being standardized and consolidated. Meanwhile, AI-powered search is reducing clicks to original content by up to 89%, transferring value to whoever controls the user interface. Supporting data from surveys conducted by the SMB Group, Pegasystems, and Atlassian highlights that 53% of SMBs are using AI, but only 3% of organizations report measurable business transformation despite a 33% productivity boost. Consumers show distrust in AI-driven customer service, and employee burnout and reduced confidence indicate that MSPs are absorbing increased operational complexity and support burdens even as margins compress. These developments reinforce the channel consolidation and margin repricing mechanisms described above. For MSPs and IT leaders, the practical risks include growing dependency on distributor and vendor digital marketplaces, narrowing ability to influence platform economics, and the transfer of governance obligations without matching margin.
Priority areas are building defensible, repeatable governance frameworks around AI, owning escalation and validation paths, and repositioning services toward process redesign engagements—not commoditized tool deployment. Failing to establish an IP or governance wedge may result in MSPs being locked into subcontractor roles with little leverage over pricing or client outcomes. Three things to know today: 00:00 Channel Bypassed 02:26 Delivery Commoditized 04:15 MSPs Left Holding 07:12 Why Do We Care? Supported by: ScalePad, Small Biz Thoughts Community

    AI Risk Goes Downstream: Why MSPs Are Inheriting Liability from Vendors and Policy Gaps

    Play Episode Listen Later Mar 11, 2026 9:35


    The dominant structural mechanism highlighted is the industry-wide shift toward liability transfer and governance gaps in AI procurement, deployment, and incident response. According to Dave Sobel, both vendors and organizations are accelerating AI adoption without corresponding investments in oversight, training, or clear accountability structures. This is reflected across multiple sectors, from software vendors such as Grammarly, Eightfold.ai, Cohesity, and Rubrik, to business leaders and policymakers, where risk is systematically deferred downstream rather than managed at the point of adoption. The most consequential evidence is the quantitative disconnect between stated AI priorities and functional oversight. Research cited by Dave Sobel from Economist Impact and HR Dive found that while 38% of organizations budget for AI and 86% of executives rate AI as essential, only 16% offer internal training and over half of department-level AI initiatives lack formal oversight (Ernst & Young). Additionally, 88% of AI vendors limit their liability, and only 17% align with regulatory compliance, per cited surveys, leaving substantial legal and operational risk for end users and service providers. Supporting this trend, Dave Sobel points to Grammarly's opt-out identity usage in new features and a class action lawsuit against Eightfold.ai regarding AI-driven employment decisions. Vendors such as Cohesity, Rubrik, ServiceNow, and Datadog are responding by building tools focused on remediation and recovery from AI-driven incidents, underscoring a shift from preventive governance to reactive containment. Policy moves—such as expanded operational cyber roles for the private sector—further offload accountability without addressing contractual and insurance exposure. 
For MSPs and technology leaders, these developments create practical risks: unclear service scope around AI tool usage in contracts, increased exposure to billable incidents and legal action, and rising labor costs for incident recovery. Service providers must audit agreements for AI-specific language, distinguish AI-related incidents from standard SLAs, and treat AI governance as a managed risk service. The pressure will increasingly fall on MSPs to account for training gaps, audit trails, compliance attestations, and recovery procedures—not simply the technology itself. Three things to know today 00:00 ROI Reality Check 02:12 Governance Gap Widens 03:14 Cleanup Economy Rises 05:45 Why Do We Care?  Supported by:  CometBackup 

    Microsoft and OpenAI Expand AI Agents While Shifting Governance Costs to MSPs

    Play Episode Listen Later Mar 10, 2026 9:50


    A structural shift is occurring in the managed IT services landscape as AI capabilities are rapidly embedded across enterprise applications, with oversight and risk management functions increasingly separated out and monetized as add-on services. Vendors, including Microsoft and OpenAI, are deploying AI agents in essential tools such as Outlook, Teams, and Excel, then selling governance, security, and compliance capabilities as additional paid layers. The core mechanism is the transfer of operational and liability risk downstream to IT service providers and their clients, while ownership of the control plane and margin on risk mitigation remain with the vendors. The episode highlights consequential findings regarding AI reliability and adoption. A Nature Medicine study found that OpenAI's ChatGPT Health underestimated emergency severity in 51.6% of cases, prompting concerns about overreliance on AI for critical decisions. Additionally, Confluent's UK executive survey indicated that 62% of organizations are already shifting decision-making to AI, but only 7% have a company-wide AI strategy, and fewer than half of executives and employees agree on actual daily AI usage. Most leaders receive little formal AI training yet are second-guessing their own judgment in favor of AI output. Further reinforcing the governance gap, Microsoft is launching Agent 365 and new enterprise security tiers, while OpenAI's acquisition of Promptfoo signals a focus on AI reliability testing and compliance monitoring. Funding for GRC platforms like IntelliGRC demonstrates capital flowing into third-party oversight solutions. The recurring pattern is vendors first pushing broad agent adoption, then introducing and monetizing governance as a discrete add-on, often outside the default package. Operationally, MSPs and IT leaders face increased liability exposure if they rely on vendor-native governance without independent audit or measurement capability. 
The absence of industry-standard reliability metrics for AI, combined with the perception and usage gaps inside organizations, calls for MSPs to lead in auditing, documenting, and independently measuring AI usage and performance. Failing to proactively manage these controls can result in silent risk absorption and unfavorable positioning as vendors bundle compliance and pass residual risk downstream to service providers. Three things to know today 00:00 AI vs. Judgment 02:35 Agents vs. Oversight 04:04 AI Reliability Gap 05:15 Why Do We Care? Supported by: ScalePad

    AI Remediation Without Governance: How MSPs Face Rising Liability and Cost Exposure

    Play Episode Listen Later Mar 9, 2026 14:20


    The dominant structural shift identified centers on liability allocation and governance in the context of agentic AI deployment across IT and managed services. The episode underscores how automation is moving beyond content generation to direct operational and security actions, referencing technology from OpenAI (GPT-5.3 Instant), Anthropic (Claude Marketplace), Google Workspace CLI, Microsoft's SharePoint AI features, and Hexnode's Genie AI. Vendors are embedding AI deeper into productivity and endpoint infrastructure, increasing both operational efficiency and the risk footprint—making governance, reliability, and accountability the new competitive differentiators. The most consequential development highlighted is the industry-wide disconnect between rapid AI remediation adoption and lagging governance. According to Omdia, 88% of organizations are using AI-driven remediation, but only 44% have implemented it for most exposure types, and nearly half (49%) of security teams lack trust in these systems. IBM data shows that 63% of organizations lack formal AI incident response policies, meaning deployment often outpaces the development of auditability and risk management. This creates a landscape where automated decisions are taken at scale without clear accountability structures or incident protocols. Supporting developments reinforce these governance and risk concerns. Reports of cognitive fatigue—termed “AI brain fry”—affecting over 14% of users (Boston Consulting Group/UC Riverside) and a 39% increase in error rates among those affected, point to compounding human and system risk when automation outpaces oversight. Market analysis from Accenture, Wharton, and the Dallas Fed notes that AI has shifted skill demand, displaced younger tech workers, and pressured traditional fixed-fee business models. Meanwhile, vendors are migrating from predictable per-seat pricing to variable token-based consumption, passing operational uncertainty onto MSPs and their clients. 
For MSPs, IT service providers, and technology leaders, the practical implications are clear. Failure to implement explicit governance, contract clauses, and incident protocols exposes providers to unpredictable liability. Passing through ungoverned consumption costs under fixed contracts damages margins as AI use expands. The increasing cognitive load on staff supervising partially trusted automation further compounds operational risk. As the pricing model shifts, providers must negotiate new contract terms, institute AI incident playbooks, audit tool autonomy, and manage the blast radius of AI with the same rigor as legacy security controls. 00:00 Platform Land Grab 03:56 Who Owns Failure 07:27 Skills Over Titles 09:52 Why Do We Care? Supported by: JumpCloud

    AI Integration Raises Data Governance Demands for MSPs — Colin Blair

    Play Episode Listen Later Mar 8, 2026 19:56


    The episode centers on D&H's strategic approach to vendor selection, AI program development, and partner enablement within the evolving landscape for MSPs and IT solution providers. Colin Blair, Executive Vice President for cybersecurity at D&H, details a governance-driven process for curating vendor relationships, with emphasis on aligning with Gartner quadrant leaders, peer insight metrics, and channel-partner readiness. D&H's focus remains on SMB and mid-market segments where complexity is increasing, especially around compliance, data governance, and cybersecurity. Supporting this curated model, Colin Blair notes that D&H maintains onboarding rigor but rarely offboards vendors within its advanced solutions group, citing ongoing hyper-growth and the need to continuously add value for partners. The vendor evaluation emphasizes data-driven benchmarks and sustained relationship-building at industry events. The company is prioritizing supply chain strength for MSPs, driven by measurable factors such as profitability, cultural compatibility, and proven channel strategies. The conversation also highlights the expansion of the Go Big AI program, which aims to increase AI literacy among both partners and end customers. Training initiatives reached over 5,000 partners, focusing on foundational applications like Microsoft Copilot and AI PCs, while acknowledging that project success is heavily dependent on data quality and governance. Use cases where implementations see traction are typically well-defined, such as Vision AI for video analytics in healthcare and security verticals. The need for tailored, consultative conversations is cited as significant, as end customers and partners often lack clarity on automation priorities or AI readiness. The implications for MSPs and IT leaders are pragmatic: sustainable advantage is less about technology adoption and more about managing operational complexity, ensuring data governance, and enhancing cybersecurity postures. 
Decision-makers are cautioned to assess both the maturity and applicability of AI solutions, invest in targeted literacy and consultation, and anchor their vendor relationships in measurable business value. The focus should be on careful risk management, transparent partnership evaluation, and supporting clients through consultative, outcome-driven initiatives rather than broad or speculative technology bets.

    The Decline of Core MSP Services: Surviving the Shift to AI-Driven Differentiation with Anurag Agarwal

    Play Episode Listen Later Mar 7, 2026 43:48


    Research presented by Dave Sobel and Anurag Agarwal highlights a steep decline in profitability for core MSP services, driven by heightened commoditization and vendor-led automation of basic offerings such as endpoint management and help desk operations. According to Techaisle's 2026 data, the traditional labor-plus-license model is no longer sustainable, as shrinking margins force service providers to reconsider foundational strategies. The central message underscores an urgent need for MSPs to prioritize proprietary intellectual property (IP) and vertical-specific solutions—not for incremental growth, but as a matter of operational survival. Supporting this assessment, the discussion details how market demand has shifted: MSPs can no longer depend on generic solutions but must differentiate with specialized, repeatable offerings that address the financial optimization and liability concerns of business clients. The data indicates that SMBs are increasingly unwilling to invest in pilots or “all-you-can-eat” AI models without visible ROI and demand concrete solutions linked to business outcomes. Vendors and MSPs alike are being tasked with providing smaller, outcome-focused wins and developing skillsets in agentic orchestration, where AI-enabled digital agents and human technicians operate as co-equal components of the workforce. A related trend explored is the shift toward agentic AI and “zero-touch” MSP models, featuring automation of routine IT tasks and focus on workflow engineering rather than manual services. However, the episode notes that most providers are unprepared for the new set of risks and governance liabilities: as clients increasingly utilize AI agents, accountability for errors and regulatory compliance will rest heavily with MSPs, especially in sensitive geographies such as Europe where contractual governance is becoming standard. 
Conversations on whether to “build or buy” new capabilities reflect a split market, with only the top tier capable of meaningful in-house development, and the majority relying on third-party platforms with limited differentiation. For MSPs, IT service firms, and decision-makers, the core implication is the need to rapidly develop operational and governance maturity around automation, AI orchestration, and packaged offerings. Clinging to traditional models or treating AI as a mere add-on introduces significant risk, including shrinking margins, increased liability, and potential obsolescence. Providers are advised to narrow focus, specialize in vertical solutions, invest in internal competency with AI-enabled platforms, and shift toward packaged IP to avoid falling behind as both client expectations and regulatory requirements escalate.

    MSPWell Launch Reveals Governance Gaps in Channel's Mental Health Initiatives

    Play Episode Listen Later Mar 6, 2026 12:46


    The episode centers on a structural governance gap within the managed services industry as it attempts to address mental health using relationship-driven models typical of event and community management. This approach is exemplified by the launch of MSPWell, a not-for-profit mental wellness initiative incorporated in Ontario, Canada, targeting participants in the IT channel. The initiative operates as a live community—particularly via Discord—without formalized clinical oversight or published operational guardrails such as moderation standards, crisis escalation protocols, or sponsor influence controls. Evidence for an urgent governance concern is provided by industry data and operational decisions. MSPWell cites burnout figures affecting large portions of the workforce: an 82% burnout risk from a Mercer report and 66% from separate research. Despite the recurrence of staffing challenges in the MSP industry, MSPWell is building out its infrastructure, with industry-event participation and vendor sponsorship already secured, while formal governance documentation remains incomplete. The initiative explicitly confirms the absence of licensed mental health professionals in published leadership or advisory roles, positioning its support as peer-led. Supporting developments highlight how rapid community launch and sponsor-driven funding amplify risks when core protections are missing. Early coverage focused on recognizable names and event presence, while Dave Sobel emphasizes that, in mental health-adjacent contexts, moderation, privacy, and escalation protocols are not only differentiators but essential safeguards. At present, MSPWell's Discord community operates without visible guidelines or documented procedures, which exposes participants to predictable failure modes such as oversharing, privacy breaches, and harmful peer advice.
Operationally, MSPs and IT service providers face heightened liability when participating in or supporting such initiatives without robust controls. Dave Sobel advises operators to request moderation, crisis, and data retention policies before endorsing participation, to treat involvement as networking rather than clinical support, and to monitor for the integration of licensed professionals into governance. The absence of enforceable governance exposes both individuals and sponsoring vendors to reputational and legal risk, and sets problematic precedent for future wellness platforms in the industry. 00:00 MSPWell Builds Mental-Health Platform on Sponsor-Funded Community Model 03:21 Guardrails, Guidelines, and Moderation  06:15 The Consequences 08:09 Why Do We Care? & What to Consider Supported by:  TimeZest   

    Margin Redistribution Forces MSP Service Restructuring in Memory-Constrained Markets

    Play Episode Listen Later Mar 5, 2026 11:44


    Market segmentation driven by rising memory costs is actively restructuring the endpoint device landscape, leading to margin redistribution across the technology stack. Apple exemplified this bifurcation strategy by launching an entry-level MacBook Neo at $599 built on the A18 Pro iPhone chip, while simultaneously increasing prices on other MacBook Air and Pro models by $100 to $400 in response to global memory shortages. This deliberate move separates high-margin premium hardware from low-cost devices, effectively diminishing the traditional mid-tier device segment where most SMB and MSP standards have typically been positioned. Supporting data highlights the broader industry impact: 62% of small businesses report ongoing supply chain disruption, affecting pricing, timing, and availability, according to recent NFIB survey data. Component suppliers such as Broadcom are capturing upstream value, with a reported 29% year-over-year revenue increase driven by concentrated AI infrastructure demand. Omdia's forecast anticipates a significant smartphone shipment decline in 2026, primarily attributed to rising memory costs, with an uneven impact that disproportionately squeezes entry-level devices while preserving premium margins. A parallel challenge emerges within organizational governance and service delivery. The Logicalis Global CIO Report 2026 found over half of CIOs believe AI adoption is outpacing their management capabilities, with 90% of organizations lacking internal technical expertise yet 72% planning further AI investment. This gap between ambition and readiness, combined with traditional ticket-based operating models, means unmanaged risk increases as businesses prioritize speed over structured governance. Internal IT builds are increasingly abandoned, with 71% of IT and security leaders reporting failure to meet on-time and budget targets, signaling that velocity and accountability, not just ticket closure, are becoming core client expectations.
Implications for MSPs and IT service providers are immediate and operational. Service models must account for hardware segmentation by incorporating differentiated support structures for entry-level versus premium devices. Increased complexity and support demands from constrained hardware will compress margins unless properly priced and standardized. MSPs are positioned closest to liability accumulation as clients face both hardware refresh and AI adoption without sufficient internal expertise. Advisory frameworks should address total cost of ownership, memory shortage context, and governance gaps, productizing assessments and redesigning service delivery for speed with explicit controls to manage risk. Three things to know today 00:00 Memory Costs Squeeze Entry-Level Hardware as Suppliers Capture Margin Upstream 02:24 Apple's $599 MacBook Neo Signals a Split Hardware Strategy, Not a Budget Play 04:22 IT Service Models Built on Approvals Are Losing to Speed-First Competitors 06:57 Why Do We Care?  Supported by:

    Risk Moves Upstream: How Embedded Governance and Insurance Set New MSP Constraints

    Play Episode Listen Later Mar 4, 2026 11:11


    The MSP market is undergoing a critical shift toward risk management as the central value proposition, with operational accountability now defined by the ability to produce defensible documentation and deliver rapid incident response. According to Dave Sobel, MSPs are no longer primarily offering stack management, but are increasingly brokering risk through cyber warranties, insurance underwriting, incident retainers, and AI governance frameworks. Those unable to support their claims with evidence and formal processes risk becoming mere facilitators for third-party terms and losing control over their margins. Recent developments reinforce this shift. A Splunk report finds that nearly all CISOs now view AI governance and risk management as their responsibility, citing threat actor sophistication as a primary driver. AI is assisting with event triage and data correlation, but verification—especially around AI-generated content—is unreliable, with detection tools struggling against advanced fakes. Insurance mechanisms are becoming productized with prioritized incident response, and legal intelligence is being embedded into MSP workflows. Vendors like N-able, Monjur, SentinelOne, and DocuSign are directly integrating financial, legal, and governance functions into their offerings, fundamentally altering client and vendor relationships. Adjacent stories illustrate volatility in traditional safeguards and the operational reality of adaptive threats. CISA leadership changes indicate instability in public response institutions. AI-powered malware exemplifies the challenge: ESET's PromptSpy uses Gemini to continuously adapt its persistence, outpacing static detection models. Insurance underwriters are increasingly demanding machine-verifiable evidence of controls, using detailed questionnaires to distinguish autonomous AI from marketing claims. The risk is no longer just technical; it is structural. 
For MSPs and IT leaders, operational posture is now shaped by an ecosystem of embedded warranties, legal terms, governance requirements, and adaptive threats. The ability to document, defend, and productize risk controls becomes a baseline for credibility and insurance eligibility. Failure to build evidence pipelines and clarify vendor-imposed liabilities exposes service providers to compounded risk. The practical implication is a necessity for MSPs to treat governance and detection as measurable, documented capabilities—not assumptions or routine paperwork. Three things to know today: 00:00 CISOs Own Governance, Detectors Lag Fakes, Response Gets Contracted — Accountability Follows 03:14 N-able, SentinelOne, DocuSign Move Risk Management Into the Stack — MSP Terms Follow 05:10 CISOs Want Agentic AI, But Insurers and Adaptive Malware Are Forcing the Timeline 07:32 Why Do We Care? Supported by: CometBackup, Small Biz Thoughts Community

    Supply Chain Risk Designations Are Reshaping Federal AI Procurement

    Play Episode Listen Later Mar 3, 2026 13:41


    The episode centers on the federal government's evolving approach to AI vendor governance, underscored by the recent directive from President Donald Trump for federal agencies to halt the use of Anthropic's AI technology. This shift follows the Pentagon's termination of its relationship with Anthropic over the company's refusal to relax contract restrictions around citizen data and autonomous weapons, ultimately resulting in Anthropic being designated as a “supply chain risk” by Defense Secretary Pete Hegseth. For MSPs and IT providers serving federal and SLED clients, this designation functions as an immediate procurement barrier rather than a negotiable label, directly impacting vendor eligibility and contract continuity. Contextually, 70% of federal agencies are reassessing their use of AI tools amid fluid regulations and heightened concerns around transparency and accountability, according to recent reports. The National Institute of Standards and Technology (NIST) has launched the AI Agent Standards Initiative, but enforcement is several years away, with only a request for information planned by March 2026. In parallel, a diplomatic initiative led by Secretary of State Marco Rubio opposes international regulations on foreign data handling, though this stance does not supersede foreign law, creating a complex compliance landscape, especially for multinationals. Meanwhile, the U.S. Supreme Court's refusal to hear an AI copyright case reaffirms the lack of copyright protection for purely AI-generated works. The episode also discusses OpenAI's agreement with the Pentagon, described by CEO Sam Altman as "rushed," and criticized for permitting domestic surveillance under flexible legal interpretations. Public and employee backlash prompted OpenAI to revise contract terms, but critics argue essential permission structures remain. 
Anthropic's rollout of an AI migration feature during this period is flagged as a compliance event, raising risk when transferring data histories across vendor boundaries without audit or logging. Notably, consumer responses to AI vendor practices—evidenced by surges in Claude signups and ChatGPT uninstalls—are now influencing enterprise technology procurement as values-based purchasing enters the operational conversation for service providers. Operationally, the lack of a stable legislative or regulatory framework means MSPs and their clients face rapidly shifting governance through contract terms and procurement policy rather than law. The episode cautions that vendor selection cannot be guided by assumptions of ethical safeguards in provider policies or by default transitions to alternative vendors such as OpenAI, whose legal standing remains unsettled. Key recommendations include auditing client environments for exposure to designated supply chain risks, refraining from rigid vendor integrations, updating contractual IP language in light of the absence of AI copyright, and maintaining ongoing awareness of governance developments. Multi-vendor strategies and adaptable compliance positions are identified as essential risk mitigation practices in an environment marked by administrative fiat and reactive vendor positions. Three things to know today 00:00 Anthropic Blacklisted After Rejecting Pentagon's Autonomous Weapons Data Demands 04:58 OpenAI Wins Federal AI Contract Anthropic Refused, Then Rewrites It Under Pressure 07:38 Anthropic Outages Hit as Claude Sign-Ups Quadruple, ChatGPT Uninstalls Surge 295% Supported by: ScalePad, Small Biz Thoughts Community

    Hardware Cost Volatility Forces MSPs to Reprice Contracts and Restructure Service Models

    Play Episode Listen Later Mar 2, 2026 12:49


    Enterprise IT spending is projected to reach $4.5 trillion by 2026, but this growth is concentrated in software, cloud services, and AI infrastructure for large organizations, according to HG Insights and Omdia research cited by Dave Sobel. The system integration market is positioned to approach $950 billion in 2025, with enterprises working with an average of 6.3 technology partners. A substantial surge in AI-optimized server sales, as reflected in Dell Technologies' reported 342% year-over-year increase in revenue for those systems, is reshaping supply chains and vendor dynamics, leading to shortages of DRAM, SSDs, and hard drives. Underlying this development are volatile component costs. DRAM prices have doubled quarter over quarter, and both Micron Technology and Western Digital have indicated they are sold out for 2026. HP reports that RAM now constitutes 35% of new PC materials costs, up dramatically from 18% the previous quarter. Such cost shifts are creating downstream risks for managed service providers (MSPs) with fixed-price agreements, as the economic assumptions underpinning many contracts—stable hardware prices and predictable cloud costs—no longer hold. The episode also highlights an increase in application sprawl and a widening gap between IT budgets and other operational costs. A Torii report shows large enterprises use an average of more than 2,191 applications, with more than 61% bypassing formal IT approvals, resulting in unmanaged security and compliance exposure. Additionally, 80% of small businesses report rising energy costs that directly compete with IT budget allocations. Industry analysis from Jefferies and Boston Consulting Group signals that AI and automation are not viewed uniformly as productivity boosters and may compress revenue models in both Indian and domestic IT services sectors. 
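The margin math behind those cost shifts can be sketched in a few lines. The figures below are invented for illustration (the episode reports RAM's share of materials cost, not per-device prices), but they show how a doubling of one component's cost erodes the margin on a fixed-price hardware refresh:

```python
# Hypothetical illustration: margin erosion on a fixed-price hardware refresh
# when a component's cost rises mid-contract. All numbers are invented.

def contract_margin(fixed_price: float, unit_cost: float, units: int) -> float:
    """Return gross margin percentage for a fixed-price hardware contract."""
    cost = unit_cost * units
    return (fixed_price - cost) / fixed_price * 100

units = 50
fixed_price = 75_000.0        # price quoted to the client, locked in at signing
baseline_unit_cost = 1_100.0  # per-device cost when the contract was signed

# Suppose RAM was 18% of the device bill of materials and its price doubles,
# pushing RAM toward the ~35%-of-materials share the episode describes.
ram_share = 0.18
ram_inflation = 2.0
new_unit_cost = baseline_unit_cost * (1 - ram_share + ram_share * ram_inflation)

before = contract_margin(fixed_price, baseline_unit_cost, units)
after = contract_margin(fixed_price, new_unit_cost, units)
print(f"margin before: {before:.1f}%  after: {after:.1f}%")
# → margin before: 26.7%  after: 13.5%
```

The point is not the specific numbers but the structure: with the price fixed and costs floating, the provider silently absorbs the entire component-price move, which is why the episode pushes repricing clauses and documented cost reviews.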
The practical implication for MSPs is the urgent need to audit and reprice contracts related to hardware procurement and refresh cycles, clearly documenting and communicating current cost realities with clients. Dave Sobel stresses reframing device lifecycle extensions as a security risk rather than a cost-saving measure and warns against selling clients on speculative AI market projections. The advice is to focus on specific, scoped use cases and to structure agreements that accurately reflect volatility in component costs and the operational burden of application sprawl, ensuring financial and legal accountability as the IT services landscape evolves.

00:00 $4.96T IT Spend Surge Bypasses SMBs as AI Infrastructure Captures Enterprise Budgets
03:58 Dell's $43B AI Server Backlog Triggers DRAM Shortage, Repricing Downstream Hardware
05:52 AI Shrinks IT Services Revenue Model; MSPs Face Contested Implementation Role

This is the Business of Tech.

Supported by:

    Cybersecurity Distribution and Shared Risk Models: Interview with Jason Beal of Exclusive Networks

    Play Episode Listen Later Mar 1, 2026 19:15


    The episode centers on the evolving responsibility and risk allocation within cybersecurity distribution, with particular focus on Exclusive Networks' approach. Jason Beal, as president of Exclusive Networks North America, outlines their emphasis on a technical workforce, maintaining a 1:3 ratio of engineers to sales representatives. This structure is positioned to address the increasing complexity of cybersecurity and the demands faced by service provider partners, aiming to support solution integration and customer needs while clarifying each party's liability. Supporting this structure, Jason Beal identifies the role of the distributor as both an extension and enabler for MSPs and IT services companies. Distributors are expected to supplement partners' capabilities—whether technical, financial, or operational—without assuming technology failure risk, which remains with the original technology vendors. Discussion of shared responsibility models also distinguishes between sales success (customer adoption, retention) and risk management. Recent developments in cyber insurance are cited as having reduced the direct risk burden on MSPs, shifting much of the liability away from service providers toward technology creators, albeit within contractually defined limits. Adjacent to cybersecurity, the conversation addresses skill and adoption gaps prompted by rapid technical innovation, specifically referencing artificial intelligence (AI). Jason Beal quantifies educational efforts by highlighting a collaboration with Cal Poly San Luis Obispo, which has seen 100 students engaged to help address workforce shortfalls in cybersecurity and AI. Additionally, academic experience informs the importance of modernizing IT operations curricula to better reflect current business challenges, such as cloud, AI, and global supply chain impacts. 
For MSPs and IT service providers, implications include the growing necessity to audit core competencies and allocate resources strategically, leveraging distributors not just for sourcing products but for specialized expertise, integration, and operational support. Risk mitigation remains tied to understanding contract language, vendor accountability, and developments in cyber insurance. The pace of AI and other technology adoption requires continuous education and careful evaluation of both operational risk and the practical limitations of solutions promoted by the channel and distribution partners.

    Anthropic Refuses Pentagon AI Demands; Burger King's AI Monitoring Raises Privacy Risks

    Play Episode Listen Later Feb 27, 2026 14:08


    Anthropic's refusal to remove safeguards against mass domestic surveillance and fully autonomous weapons in its interactions with the Department of Defense establishes an explicit boundary on the use of AI in federal contracts. The company cited specific civic and legal risks, emphasizing that current AI systems are not reliable enough for autonomous weapon deployment and warning that government pressure on vendors to bypass statutory constraints poses broader accountability issues. This underscores a shift in liability for MSPs and IT providers—any weakening of safeguards under contract does not eliminate risk but instead transfers possible exposure down the technology supply chain. This position is reinforced by the lack of unconditional trust in military oversight, as highlighted by the Pentagon CTO's remarks, and by clear legal challenges, including violations of the Fourth Amendment and Department of Defense Directive 3000.09. Dave Sobel asserts that professional liability and cyber policies do not typically cover actions undertaken solely at government request where legal limits are breached. This increases the necessity for MSPs and IT leaders to verify that contract language explicitly defines acceptable AI use and to ensure written documentation before government or enterprise client demands arise. Additional analysis includes operational deployments of AI in service and workplace environments. Burger King's AI chatbot, Patty, and ServiceNow's autonomous request resolution underscore the friction between efficiency claims and trust gaps, as evidenced by a YouGov survey that found 68% of consumers lack confidence in AI customer service. Dave Sobel notes that MSP benchmarks tied to vendor ticket closure rates may not reflect real client satisfaction or risk, especially when legal requirements for monitoring and consent are not met. 
The episode further covers market reactions to speculative reports on AI-driven job displacement, studies demonstrating AI's failure to maintain human-like restraint in conflict scenarios, and IBM's valuation drop due to AI modernization tools. For MSPs and IT decision-makers, the practical takeaway is the need for documented governance, explicit contractual safeguards, and ongoing risk assessments when deploying or recommending AI solutions—particularly in environments where trust, human oversight, and insurability are not yet aligned with technical capability.

Three things to know today:
00:00 Anthropic Refuses Pentagon Demands on Surveillance and Autonomous Weapons, Risks Contract
03:40 AI Hits the Human Layer — and Governance, Consent, and Trust Infrastructure Aren't Ready
07:37 AI Moves Markets, Escalates Wars, and Splits Partner Ecosystems — In One Week

This is the Business of Tech.

Supported by: IT Service Provider University

    Pentagon Pressures Anthropic for AI Access; VMware Exit Costs and Compliance Risks for MSPs

    Play Episode Listen Later Feb 26, 2026 13:58


    The episode's central development is the ongoing dispute between the U.S. Department of Defense and Anthropic regarding Pentagon demands for unrestricted access to Claude, Anthropic's AI model. According to Dave Sobel, the Pentagon has threatened to sever ties or invoke the Defense Production Act if the company does not comply, seeking capabilities that Anthropic argues may be illegal—specifically mass surveillance without warrants and autonomous weapons systems without human control. This move exposes Managed Service Providers (MSPs) serving defense contractors to unpredictable legal, operational, and compliance risks embedded in their AI workflows. The analysis highlights that a commercial AI provider's acceptable use policy now intersects directly with national security policy, and even partial vendor compliance can trigger regulatory or legal instability for dependent organizations. For MSPs, this means that building service offerings on AI infrastructures without clear fallback strategies or documented policy change clauses can lead to unmanageable risk and liability in the event of provider or legal regime shifts. Dave Sobel stresses that failing to address policy volatility as part of a managed service amounts to underwriting geopolitical risk without compensation. Other notable developments include the passage of the Small Business Artificial Intelligence Advancement Act, federal cybersecurity resource contraction as CISA operates with 38% staffing after layoffs, and heightened uncertainty around cloud infrastructure due to Microsoft's Azure Local “air-gapped” offering not wholly mitigating U.S. CLOUD Act exposure. 
Vendor news covered new AI-powered compliance features from Compliance Scorecard (version 10) and Beachhead Solutions (ComplianceEZ 2.0), Apple's accelerated retirement of Rosetta 2 translation technology, a Microsoft 365 Copilot DLP change, and continued fallout from VMware's acquisition by Broadcom, which has led to ongoing cost and trust challenges for cloud and infrastructure partners. The episode's clear implications for MSPs and IT providers are operational. Service catalogs and statements of work should actively address AI provider liability, dependency exit planning, and degraded federal cybersecurity support. Without scheduled and documented compatibility and risk reviews, MSPs absorb hidden exposure into their margins. Vendor stability can no longer be assumed, and proactive policy, renewal intelligence, and transparent advisory sessions are now required to avoid unplanned liability, budget crises, and damaged client trust.

Four things to know today:
00:00 Pentagon Threatens Anthropic Over Claude Access, Demands Autonomous Weapons Use
04:31 CISA Cuts, Azure Sovereignty Push Signal End of Federal MSP Safety Net
06:56 AI Compliance Tools Flood Market as MSPs Face Validation Gap
09:54 86% of Firms Cutting VMware Ties as Broadcom Renewal Costs Loom

This is the Business of Tech.

Supported by: Small Biz Thoughts Community

    Goldman Sachs Reports $700B AI Spend Yields No US GDP Growth; 40% of AI Projects Face Cancellation

    Play Episode Listen Later Feb 25, 2026 14:50


    Recent analysis from Goldman Sachs indicates that $700 billion in AI investment during 2025 resulted in no measurable U.S. GDP growth, with most AI equipment imports negating domestic benefits and 80% of surveyed firms reporting no productivity or employment improvements. This pattern suggests that AI-related spending has primarily shifted margins from enterprise IT budgets to a small number of infrastructure vendors rather than delivering distributed value. Internal concerns are rising, with 90% of IT leaders questioning AI's return on investment, and 80% citing fragmented data as a primary challenge to measuring outcomes. Further context reveals that agentic AI initiatives face operational headwinds: Gartner expects 40% of such projects to be cancelled by 2027, and S&P Global found nearly half are abandoned before production, most often due to inadequate planning and data foundations. Margin erosion is widespread, attributed to AI implementation costs, and attempts to scale AI agents into production remain limited by inference costs and insufficient infrastructure. Despite increased adoption efforts, sustainable value delivery from AI platforms remains elusive for most organizations. Enterprise AI access is becoming increasingly concentrated. OpenAI's partnership with consulting firms such as BCG, McKinsey, Accenture, and Capgemini consolidates control of the enterprise distribution layer, narrowing competitive opportunities for smaller providers. Meanwhile, Amazon's 13-hour AWS outage, linked to the misconfiguration of an internal AI tool, underscores the liability ambiguity in agentic systems—where vendors may attribute autonomous actions to user error, complicating risk assignment. Additional updates from vendors such as Anthropic, Cloudflare, and New Relic address incremental technical capabilities, with a distinct focus on cost, operational governance, and policy enforcement. 
The prevailing themes for MSPs and IT leaders are increased scrutiny of AI value, heightened exposure to cost and accountability risk, and the emergence of managed service opportunities around data governance, cost instrumentation, and liability management. With enterprise market channels consolidating and risk shifting toward service providers, integrating robust contractual definitions for autonomy, incident attribution, and financial boundaries is essential to limit harm and clarify responsibility before incidents occur.

Four things to know today:
00:00 Goldman: $700B AI Spend Delivered Near-Zero U.S. GDP Growth in 2025
03:49 OpenAI Enlists BCG, McKinsey, Accenture to Distribute Enterprise AI Agents
06:44 Report: Amazon's Own Engineers Prefer Claude Over Its Mandated Internal Tools
08:56 AI Inference Costs Are Falling — But Governance Gaps Are Growing

This is the Business of Tech.

Supported by: CometBackup, Small Biz Thoughts Community

    Remote Monitoring Tool Abuse Surges, Microsoft Copilot Control Failures, and AI's Channel Impact

    Play Episode Listen Later Feb 24, 2026 14:11


    Cybercrime's escalation has reached a projected $12.2 trillion annual impact by 2031, with a notable surge in remote monitoring and management (RMM) tool abuse—up 277% year-over-year, according to Huntress and supporting vendor reports. Attackers utilize legitimate IT tools to facilitate stealthier ransomware and phishing campaigns, amplifying structural vulnerabilities within MSP technology stacks. Key metrics from Acronis, WatchGuard, and Vectra AI indicate a shift to smaller, more evasive malware campaigns, longer times to ransomware deployment (averaging 20 hours), and widespread unaddressed security alerts, raising questions about the adequacy of current defenses and incident response practices. Vendor-supplied threat intelligence further shows that MSPs' reliance on signature-based platforms and insufficient visibility leaves them exposed to evolving attack techniques. Data reviewed suggests phishing footholds can quickly compromise cross-client environments, and legal ramifications heavily fall on the service provider when RMM or monitoring tools act as entry points. Notably, only about 58-60% of organizations report full visibility across their systems, with a majority of alerts remaining unaddressed, underscoring gaps in operational maturity and preparedness. Adjacent coverage highlighted Microsoft Copilot's repeated security control failures within regulated environments, specifically its inability to enforce sensitivity labels and boundaries across emails—most recently affecting the UK's National Health Service. The lack of vendor-announced architectural changes calls into question the viability of deploying AI tools in compliance-driven contexts. Separately, political and public backlash against surveillance technologies (such as Flock cameras) demonstrates that unchecked data collection is no longer a manageable passive risk, as data becomes increasingly actionable and retains liability beyond technical considerations. 
The practical takeaway for MSPs and IT leaders is a need to prioritize audit, documentation, and enforcement of controls within their technology stacks, especially where vendor tools or AI-driven automation intersect with compliance and client trust. Preserving operational optionality and scrutinizing vendor terms—particularly data sharing and architectural enforcement—are essential to reduce exposure. Waiting for vendor patches, disregarding documented control failures, or underestimating public scrutiny elevates liability across legal, reputational, and client relationship domains.

Four things to know today:
00:00 Vendor Threat Reports Converge on One Risk MSPs Can't Outsource: The RMM as Breach Vector
05:11 Copilot Failed Compliance Controls Twice in Eight Months — A Patch Won't Fix That
07:03 Flock Backlash Exposes the Liability Hidden in Every Vendor Data-Sharing Contract
09:42 GTDC Summit: Distributors Pitch AI On-Ramp as Hyperscalers Compress Their Margin

Sponsored by:

    IT Salary Compression, AI Trust Decline, and Vendor Consolidation Impact MSP Strategies

    Play Episode Listen Later Feb 23, 2026 14:15


    Recent data highlights a growing disconnect between technology spending and measurable business outcomes, with small business optimism softening and widespread skepticism about the benefits of artificial intelligence. The transcript cites an 80% rate of firms seeing no noticeable AI-driven productivity improvements, while trust in technology companies, particularly AI vendors, has declined globally according to the Edelman report. For MSPs, this presents a risk of credibility gaps, especially for those selling AI solutions without corresponding outcome data, as client trust and spending habits grow more discerning in the face of unfulfilled promises. Further context is provided by economic indicators showing a resilient U.S. economy, yet persistent challenges for small businesses. The NFIB Small Business Optimism Index has dropped slightly to 99.3, with insurance costs and labor quality as major pain points; only 16% of business owners expect higher sales. At the same time, IT professionals face salary compression—median IT salaries fell from $145,000 in 2023 to $115,000 in 2024—despite a severe shortage of skilled cloud, AI, and infrastructure talent, as less than 10% of hiring managers are confident in filling in-demand roles. Additional market pressures include rising technology budgets—three-quarters of CFOs anticipate larger tech allocations, but headcount increases are slowing and tech spending faces a widening affordability gap due to sector-specific inflation outpacing budget growth. Vendor-specific developments, such as Western Digital exhausting hard drive capacity for 2026 and N-able reporting 12.8% revenue growth alongside ongoing losses and a 65% stock decline since 2021, illustrate structural risks. Vendor rationalization and strategic uncertainty are likely outcomes for MSPs relying heavily on underperforming partners. 
Key takeaways for service providers and IT leaders include the need for caution in messaging and solution positioning: outcome data and defensible value propositions are essential when advocating AI or cloud services. Salary data should be weighed against demand-side evidence to avoid retention failures. Finally, dependency on vendors with deteriorating financial outlooks heightens operational risk; providers should proactively assess alternatives and align with financially sustainable partners to reduce exposure during vendor consolidation cycles or market restructures.

Four things to know today:
00:00 AI Productivity Gap Widens as Trust Drops — MSPs Selling Outcomes They Can't Measure Face CFO Audits
04:51 IT Median Salary Dropped 20% in 2024, But Only 7% of Hiring Managers Can Fill AI and Cloud Roles
07:26 IT Inflation Hits 6.9% as CFOs Concentrate Spend; Western Digital Fully Booked Through 2026
10:28 N-Able Beats Revenue, Misses Earnings as 2026 Growth Guidance Drops to 8–9%

Sponsored by: CometBackup, Small Biz Thoughts Community

    Jessica Yeck on AI Project Challenges and Partner Strategies at TD SYNNEX

    Play Episode Listen Later Feb 21, 2026 12:39


    The discussion centers on the implementation challenges and partner enablement strategies for artificial intelligence (AI) within the technology channel. According to TD Synnex's AI Accelerator program, only a small portion of AI projects achieve active deployment and measurable ROI, with widespread difficulties cited in scaling complex AI use cases. Jessica Yeck, SVP of Vendor Solutions at TD Synnex, highlights that progress is contingent upon engaging partners at their current state of AI readiness and aligning support resources accordingly. The evidence reflects a move away from one-size-fits-all approaches toward tailored frameworks that focus on tangible business outcomes and repeatable processes. TD Synnex's revised strategy prioritizes meeting partners “where they are,” using assessment frameworks that differentiate between partners with defined AI strategies and those seeking foundational guidance. Jessica Yeck references leveraging the broader technology ecosystem—including vendors, ISVs, and hyperscalers—to deliver solutions with multi-party input. This approach enables partners to identify actionable opportunities and develop pipelines, but demands cross-functional collaboration and technical-specialist engagement, particularly as customization—rather than rigid standardization—is required for effective deployment. The episode also addresses the evolving role of technology distribution in supporting partners beyond logistics. There is explicit recognition of the importance of financial mechanisms, marketplace access, and consultative guidance for services. Jessica Yeck underscores the interconnectedness of relationship-building, competency focus, and ecosystem utilization, noting that partners do not need exhaustive in-house technical skills if they can identify and collaborate with relevant specialists. This points to a strategic shift in what services and value partners can realistically deliver. 
For MSPs and IT service providers, the key implications involve re-evaluating approaches to AI enablement and partner relations. Instead of prioritizing technical uniformity or attempting to master every subsystem, providers should invest in relationship management and focused competency development while leveraging broader ecosystem resources. Adoption risk is reduced when partners clearly understand their customers' primary objectives and are prepared to orchestrate service delivery with targeted technical and financial support from their distribution networks. The episode reiterates that risk and accountability in AI projects hinge on practical readiness, process discipline, and honest assessment of operational capabilities, rather than technology enthusiasm or over-reliance on standardized templates.

    Creative AI Go-to-Market Strategies for MSPs in 2026: SMB Community Podcast

    Play Episode Listen Later Feb 19, 2026 23:05


    Welcome to a feed drop of the SMB Community Podcast, the longest-running MSP-focused podcast in the industry. Hosts James Kernan and Amy Babinchak dive deep into AI go-to-market strategies for 2026, inspired by insights from Amy Babinchak's recent AI class for MSPs.

They open with the latest news on Microsoft Copilot and Anthropic's integration, highlighting new privacy and security features for Office apps. Then, they explore how MSPs can not only adopt AI internally but also create new, innovative service offerings for their clients—like custom AI grant-writing agents for nonprofits, real-world business demonstrations, and the integration of AI readiness assessments.

Pricing strategies, project sales versus monthly recurring revenue, and the importance of meaningful quarterly business reviews also come under the spotlight. Throughout the conversation, Amy Babinchak and James Kernan share practical examples, discuss industry challenges, and encourage listeners to rethink and monetize their approach to AI as we move toward 2026.

Tune in for fresh ideas, actionable strategies, and a glimpse into the real-world experiences of MSPs shaping the future with AI, and find it on your favorite podcast player.

Links at https://smbcommunitypodcast.com

    Managed Services and AI Integration: Interview with Brian Harmison on Corsica Technologies' Strategy

    Play Episode Listen Later Feb 17, 2026 22:47


    Corsica Technologies' reported 105% year-over-year growth in managed services bookings stands out as the primary development, indicating heightened demand for flexible service models among businesses with existing IT functions. According to Brian Harmison, CEO of Corsica, this growth is attributed to the company's focus on operational integration, automation, and data-centric managed services that supplement, rather than replace, in-house IT capabilities. The significance for MSPs is not the expansion itself, but the operational choices that enable sustained trust and differentiated engagement in a competitive landscape. Supporting details clarify Corsica's operational strategy: instead of automating or deploying AI indiscriminately, Harmison emphasizes that automation and AI are only effective atop an already “operationally excellent” MSP framework. Practical deployments cited include user onboarding/offboarding workflows, which demand both internal process clarity and integration with client HR systems. The company positions data integration and workflow consulting as integral to MSP-client relationships, not as add-on projects. Corsica's contracts reportedly reduce friction and avoid asset-tracking or incremental billing, seeking to foster longer-term trust over short-term revenue optimization. The episode also addresses the implications of Corsica's acquisition of Accountability IT. Harmison cites alignment in operating models and targeted capabilities—especially in Microsoft security and AI expertise—as central to the integration's value, rather than generic synergies. He notes that continuity of client relationships and careful preservation of existing service structures were prioritized in the first 90 days, even at the expense of speed, to mitigate operational risk and maintain client trust. The discussion highlights the risk tradeoffs between scaling for broader capability and maintaining agility for specialized client needs. 
For MSPs and IT leaders, the takeaway is to focus on risk reduction through operational excellence and trusted client relationships. Embracing automation and AI is not a universal solution; process maturity and readiness in both the provider and customer are preconditions for any meaningful implementation. Acquisitions require careful cultural and operational integration, with an emphasis on continuity and incremental capability, rather than immediate consolidation or scale. The episode frames operational clarity and trust—not rapid expansion or technology adoption—as critical determinants of long-term viability and resilience in managed services.

    Deploying Agentic AI at Scale: Infrastructure, Reliability, and Risk with Ran Aroussi

    Play Episode Listen Later Feb 16, 2026 23:03


    Agentic AI is being deployed as production infrastructure in enterprise settings, but prevailing frameworks remain unreliable for mission-critical operations. Dave Sobel and Ran Aroussi from Muxie underscored that while AI agents are functional—especially in non-deterministic contexts like customer support—expectations of deterministic, workflow-based reliability are not met. The move from demonstration agents to production-scale tools brings heightened attention to issues of reliability, observability, and especially risk of vendor lock-in for Managed Service Providers (MSPs) and their clients.

Operational deployment of AI agents currently gravitates toward roles with minimal operational risk, such as customer-facing chatbots or internal chief-of-staff assistants. Aroussi explained that while such agents can automate initial support tiers and internal daily briefings, their unpredictability and potential for error limit their use in processes demanding strict oversight and accountability. He identified two core use cases—external (customer support) and internal (personalized information management)—explicitly noting that agents are best positioned to augment rather than fully automate complex workflows at this stage.

A critical risk for MSPs lies in attempting to retrofit existing software frameworks to support agents, which introduces integration complexity and increases the likelihood of operational failures. Purpose-built infrastructure for agentic AI offers better alignment between AI capabilities and production requirements, with Aroussi citing drastically reduced hallucination rates and improved oversight when using native tools. 
Open source is identified as a foundational element for AI development, but it incurs its own risks, particularly around third-party code quality and the long-term sustainability of community-driven projects.

The practical implication for MSPs and IT service providers is clear: a cautious, incremental adoption approach focused on low-risk use cases, coupled with rigorous controls on agent permissions and robust audit trails, is essential. Decision-makers should avoid assuming agents operate with the reliability or accountability of traditional software, prioritize operational transparency, and ensure that responsibilities for agent actions are clearly defined and enforced at the implementation level. Vendor lock-in and software provenance remain significant governance concerns as agentic AI moves from experiment to infrastructure.

    AI Spending Impact, Channel Share Decline, and MSP Growth Strategies With Jay McBain

    Play Episode Listen Later Feb 15, 2026 43:55


    The central development addressed is the disconnect between rising overall IT spending and the declining channel share for MSPs and IT partners. Dave Sobel, in discussion with channel analyst Jay McBain, highlights a reduction in indirect channel participation—from over 75% to a projected 66.7% in 2026—primarily due to the concentration of AI infrastructure investment among the largest technology firms. These hyperscalers and their associated CapEx do not translate into traditional channel opportunities, restricting partner involvement to areas outside large-scale AI data center buildouts.

Supporting data point to a technology industry projected to reach $6.07 trillion in customer spend, growing at 10.2%, compared to significantly lower world GDP growth. However, almost none of the rapid AI-related CapEx from companies like Nvidia and Google flows down to channel partners, who instead rely on client-facing managed services, advisory, and security service work. The increasing complexity of customer demand—such as the shift toward managed security (15% growth) and AI services (35.3% compounded growth)—further pushes MSPs to focus on services surrounding the core product, rather than on direct product resale or thin-margin opportunities.

A significant operational shift within the channel also emerges: the distinction between “influence” and “execution” partners. Vendor programs increasingly recognize partner contributions outside of transactional resale, such as co-selling, advisory contributions, and services attached before or after the point of sale. This trend is reinforced as platforms move toward “point systems” and indirect revenue attribution, redefining how MSPs measure channel health and partner value in a more complex, multi-partner environment.

For MSPs, IT providers, and decision-makers, the key operational implications are clear. 
Traditional growth through seat expansion is less reliable as hiring softens, and managed services must focus on multiplier opportunities—profitable service revenue attached to each dollar of product sold. Capturing value requires adapting to changing program structures, emphasizing trusted advisor roles, and collaborating effectively with adjacent partners. Near-term investment in understanding and building pre-sales AI and security services, and tracking evolving vendor economics, is essential for navigating the new realities of partner participation, risk allocation, and long-term business health.
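    The "growing market, shrinking share" dynamic above can be made concrete with back-of-the-envelope arithmetic. A minimal Python sketch, using the figures cited in the episode ($6.07T customer spend, 10.2% growth, channel share falling from over 75% to a projected 66.7%) and assuming the 75% share applied to the prior year's spend, shows that channel-addressable dollars can shrink in absolute terms even while total spend grows:

    ```python
    # Back-of-the-envelope sketch of the episode's channel-share math.
    # Figures come from the episode summary; treating "over 75%" as the
    # prior-year baseline share is an assumption for illustration.

    total_spend_2026 = 6.07e12   # projected customer spend, USD
    growth_rate = 0.102          # 10.2% annual growth
    share_prior = 0.75           # assumed prior-year indirect channel share
    share_2026 = 0.667           # projected 2026 indirect channel share

    # Back out the prior year's total spend from the growth rate.
    total_spend_prior = total_spend_2026 / (1 + growth_rate)

    channel_prior = total_spend_prior * share_prior
    channel_2026 = total_spend_2026 * share_2026

    print(f"Prior-year channel spend: ${channel_prior / 1e12:.2f}T")
    print(f"2026 channel spend:       ${channel_2026 / 1e12:.2f}T")
    ```

    Under these assumptions, prior-year channel dollars (roughly $4.13T) slightly exceed the projected 2026 figure (roughly $4.05T), which is the core tension the episode describes: a double-digit growth market in which the channel's absolute take does not grow.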

    Generative AI Drives Tech Spend Shift as Channel Margins Face Pressure

    Play Episode Listen Later Feb 13, 2026 14:40


    Global technology spending is projected to reach $5.6 trillion by 2026, with nearly two-thirds of this investment directed toward software and computer equipment, particularly servers, according to Forrester. Generative AI is cited as a primary driver of this increase, shifting the balance of power toward cloud providers such as AWS and Azure. This escalation has implications for operational margins and the position of IT service providers, as businesses increasingly migrate complex workloads to cloud infrastructure ecosystems.

    Supporting data shows a disconnect between tech employment trends and hiring activity. In January 2026, technology companies cut approximately 20,155 jobs, mainly in telecommunications, while job postings for tech positions rose 13% over the prior month, based on CompTIA analysis. Dave Sobel interprets this as a shift away from permanent IT headcount toward project-based, AI-focused engagements. This development places pressure on service providers, who must adapt as buyers reallocate spend from traditional staffing models to short-term, outcome-oriented contracts.

    Adjacent discussion covered two press releases: VirtuaCare launched a support offering for Windows-based MSPs needing Apple expertise, delivering an externally verifiable, Apple-certified service. In contrast, Mizo announced a roadmap for an autonomous AI L1 technician but did not substantiate its claims with deliverables or customer data. Dave Sobel emphasized the need for MSPs to demand pilots, outcome metrics, and auditable product maturity, warning against reliance on unproven AI solutions and noting that outsourcing is only a temporary fix.

    The core implication for MSPs and IT providers is a need for tactical negotiation and operational risk management. Dave Sobel recommends using AI first to reduce internal labor costs before introducing it as a client offering, prioritizing outcome-based pricing, and adjusting contracts to retain value from efficiency gains. Providers should avoid becoming displaced labor, rigorously test new technologies before adoption, and remain vigilant about vendor claims. The emphasis remains on capturing and defending margins through accountable operations and contract governance rather than chasing speculative innovation.

    Three things to know today:

    00:00 Tech Spending Hits $5.6T but MSPs Face Margin Squeeze Without AI Pricing Reset
    05:31 VirtuaCare Ships Apple Support; Mizo Announces Roadmap—One's Testable Today
    08:17 MSPs Must Capture AI Efficiency Value or Face Margin Compression

    This is the Business of Tech.

    Supported by: Small Biz Thoughts Community
    Check out Killing IT

    AI Operational Risk, Sovereign Cloud Mandates, and MSP Compliance Liabilities Examined

    Play Episode Listen Later Feb 12, 2026 14:13


    Mid-market organizations are transitioning from pilot projects to operationalizing generative AI and agentic workflows, according to a TechEYE article and Tech Isle survey cited by Dave Sobel. This shift centers on outcome-driven automation but exposes providers to new liability concerns, mainly due to fragmented, unreliable data and shadow AI usage: employees using unauthorized tools outside official controls. The primary risk is that MSPs may be blamed for incidents where contract boundaries and technical controls do not cover browser-based generative AI use, making forensic evidence and documented enforcement essential for defending accountability.

    Supporting data from Tech Isle found that over 5,000 companies are pursuing structured approaches to AI-enabled growth but face persistent issues in data trust, governance, and user fatigue. Additionally, European investment in sovereign cloud infrastructure is projected to triple between 2025 and 2027, driven by regulatory demands and concerns about U.S. data sovereignty. MSPs managing split architectures—sovereign providers for regulated data and hyperscalers for everything else—encounter API mismatches, operational complexity, and margin pressure. The recommendation is to standardize policy enforcement, identity management, and residency mapping while prioritizing audit-ready reporting and exception handling.

    AI-driven cyberattacks have increased, with reports from LevelBlue and Check Point Research highlighting a surge in both attack volume and sophistication. Only 53% of CISOs feel prepared for AI threats, despite 45% expecting to be impacted within a year. Browser-based generative AI use introduces visibility gaps, raising the risk of negligence claims when service providers cannot demonstrate governance or forensic readiness. Reauthorization of the Cybersecurity Information Sharing Act (CISA 2015) underscores that voluntary data sharing is inadequate, with CIRCIA now requiring mandatory 72-hour incident reporting for critical infrastructure.

    The key takeaways for MSPs and IT leaders are to proactively define AI coverage and governance in contracts, enforce acceptable use policies, and instrument monitoring to close visibility gaps. Providers who can deliver forensic-grade telemetry, managed compliance programs, and operational readiness for incident reporting will be better positioned to defend against penalties, retain higher-value accounts, and offer meaningful differentiation. These structural challenges—fragmented control planes, increased compliance costs, and permanent risk friction—necessitate a strategic shift toward governance-led service models.

    Three things to know today:

    00:00 Midmarket Shifts to Agentic AI as Europe Triples Sovereign Cloud Spending by 2027
    06:08 Most Security Chiefs Say They're Not Ready for AI-Powered Cyberattacks Coming This Year
    09:46 CISA 2015 Reauthorized Through 2026; CIRCIA Mandates Expose Voluntary Sharing Failure

    This is the Business of Tech.

    Supported by: TimeZest and IT Service Provider University

    AI Raises Workloads and Burnout: HBR Study, Medical Risk, and New Governance for MSPs

    Play Episode Listen Later Feb 11, 2026 13:33


    Artificial intelligence (AI) is intensifying workloads rather than alleviating them, leading to increased burnout and declining decision quality, according to findings published in the Harvard Business Review and cited by Dave Sobel. The episode underscores that AI lowers the cost of producing outputs such as drafts and summaries but raises throughput targets and introduces new verification burdens. Economic gains from AI remain concentrated where capital and skilled labor already exist, while negative impacts—like displacement and wage pressure—are felt locally. These dynamics highlight the need for robust governance, particularly for managed service providers (MSPs) who deploy AI solutions.

    Supporting studies referenced include the International AI Safety Report, which details heightened uncertainty around AI development and its risks, as well as research from Oxford documenting the unreliability of AI chatbots in real-world medical decision-making. Experts warn that rapid automation without corresponding improvements in control systems creates structural constraints, making traditional software governance frameworks inadequate for unpredictable AI behaviors. Without proactive measures, these gaps risk exacerbating economic inequality and liability in regulated environments.

    Additional developments include OpenAI's release of upgraded agent features—such as GPT-5.2, improved context retention, managed shell containers, and a new skills standard—presented as operational enhancements but raising concerns about black-box context handling, auditability, and dependency risk. T-Mobile's AI-powered live translation service offers greater convenience but eliminates audit trails, shifting compliance risk to customers and preventing independent verification. Cork Cyber's launch of an internal cyber risk score introduces further complexity, as the scoring methodology is embedded within a financial product structure and lacks transparent validation.

    For MSPs and IT service leaders, the key takeaway is to treat new AI features and risk metrics as tools with significant tradeoffs. AI deployments should focus on governance layers that include workload caps, quality gates, and measurable outcomes rather than simply accelerating output. New features should be used for low-stakes workflows and avoided in high-risk or regulated contexts unless auditable controls and deterministic checkpoints are established. Vendor-managed risk scores and warranties require independent validation before being positioned as client-facing standards of truth.

    Four things to know today:

    00:00 Harvard, Oxford Studies Find AI Raises Workload, Delivers Inadequate Medical Advice
    05:01 OpenAI Updates Deep Research and Adds New Agent Runtime Capabilities
    07:33 T-Mobile Tests Real-Time Call Translation Built Into Its Network
    09:17 Cork Cyber Rolls Out New Risk Score for Managed Service Providers

    This is the Business of Tech.

    Supported by: ScalePad and Small Biz Thoughts Community

    OpenAI Introduces ChatGPT Ads and Enterprise Agent Platform; Anthropic Releases Opus 4.6

    Play Episode Listen Later Feb 10, 2026 14:52


    OpenAI's twin initiatives—monetizing ChatGPT's free tier through ads and launching the Frontier enterprise agent platform—represent a shift in the AI provider's business model, with substantial implications for compliance and operational governance. Free and low-cost ChatGPT users will now see sponsored links unless they opt for reduced daily usage; only customers paying $20 or more per month retain an ad-free experience. OpenAI is concurrently marketing Frontier to enterprise clients such as HP, Intuit, and Uber, offering AI agent orchestration and deploying a team of consultants to support custom AI applications. The company projects that enterprise revenue will constitute 50% of its income by year-end, up from 40% the prior month.

    Operating in both the consumer funnel and the enterprise layer, OpenAI combines top-of-funnel data monetization with vertical integration of services. The ad-supported free tier raises compliance concerns, as user interactions become subject to additional data collection and monetization. For organizations, this means making enforcement decisions about whether and how employees may use free AI tools in regulated or sensitive environments. The more consequential development, however, is the introduction of enterprise agent orchestration through Frontier, where questions persist regarding liability, governance, production stability, and how organizations are protected from errors committed by autonomous agents.

    Related market movements include Anthropic's release of Claude Opus 4.6—which enables multi-agent collaboration with context windows up to 1 million tokens—and Microsoft's planned shift for Windows to a signed-by-default trust model. Anthropic's enhancements to agent functionality remain constrained by key gaps, such as conflict arbitration mechanisms, rollback procedures, and documented cost models, and the expanded context remains limited to beta testers. Microsoft's strategy to enforce signed apps by default mirrors iOS's approach to application trust, but its operational viability depends on how override mechanisms are managed by both users and IT administrators. Additional developments in backup, asset management, and AI governance (as seen with NinjaOne, JumpCloud, and Zoom) reflect a general trend toward increased integration and platform consolidation, though with ongoing gaps in security and compliance as AI adoption accelerates.

    The practical takeaway for MSPs and IT service leaders is the need to re-evaluate policies around free AI tool usage, invest in governance and auditability for enterprise AI, and prepare operational systems for stricter software trust and exception management requirements. Structural changes in software security and AI orchestration are transferring costs and risks from incident response to ongoing policy enforcement and exception handling. Those offering AI services should prioritize model-agnostic governance and avoid reliance on a single vendor's automation layer, as vertical integration by platform providers is reducing the defensibility of narrow service offerings.

    Four things to know today:

    00:00 OpenAI Adds Ads to Free ChatGPT; Launches Frontier Platform for Enterprise Agents
    04:07 Anthropic Ships Opus 4.6 Agent Teams; Model Found 500 Zero-Days in Testing
    06:43 Microsoft Announces Signed-App-Only Mode for Windows 11; Phased Rollout Planned
    10:19 NinjaOne Adds Asset Management; Zoom Launches AI Workspace Tool; JumpCloud Opens VC Arm

    This is the Business of Tech.

    Supported by: CometBackup and IT Service Provider University

    IT Spending Rises but Channel Share Falls; AI Arms Race and Shrinking Jobs Impact MSPs

    Play Episode Listen Later Feb 9, 2026 12:56


    IT spending continues to expand, with North America projected to lead a 12.6% increase to $2.6 trillion, primarily due to hyperscaler investments in AI infrastructure. However, the proportion of technology spending funneled through channel partners is declining, now at 61% compared to over 70% four years ago, according to a survey by Omnia. This shift signals that while the market is growing, traditional margin and resale opportunities for MSPs are narrowing as vendors redirect a larger share of revenue to direct sales while still relying on partners for implementation, support, and customer operations.

    Data from Salesforce underscores a near-universal trend toward partner involvement in sales, with 94% of surveyed global salespeople leveraging partners to close deals and 90% using tools to manage those relationships. Despite this, Dave Sobel draws a distinction between involvement and compensation, highlighting that partner influence on deals does not guarantee economic participation at previous levels. These dynamics reinforce that MSPs must adapt to a reality where their role in the value chain is being separated into influence and execution, with the middle tier facing increasing pressure.

    Additional analysis draws attention to labor market changes and technology commoditization. U.S. job openings have fallen to their lowest point in over five years, undermining MSP growth strategies dependent on seat expansion. Simultaneously, the AI market is fragmenting at the application layer—with Google's Gemini app, Grok, and OpenAI's ChatGPT shifting market shares rapidly—while hyperscalers like Alphabet (Google) commit unprecedented capital expenditures, fueling an infrastructure arms race even as front-end AI tools become more interchangeable.

    The practical implication for MSPs and IT service providers is increased pressure to re-evaluate business models, operationalize AI offerings, and focus on defensible, productized services. Reliance on a single vendor or on seat-based growth forecasts presents heightened risk. Successful adaptation will require a shift toward managed services around AI operations, governance, and productivity—emphasizing accountability, optionality, and measurable ROI—rather than assuming historic revenue models will persist.

    Three things to know today:

    00:00 Partners Essential to Sales but Losing Economic Share, Survey Shows
    05:44 US Job Market Shows Low Hiring, Low Firing Despite Falling Openings
    08:00 Alphabet Plans $180B AI Capex as Gemini Hits 750M Users

    This is the Business of Tech.

    Supported by: Small Biz Thoughts Community
