Podcasts about story links

  • 46 PODCASTS
  • 339 EPISODES
  • 26m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • Mar 17, 2026 LATEST

POPULARITY

[Popularity trend chart, 2019–2026]


Best podcasts about story links

Latest podcast episodes about story links

Business of Tech
Margin Pressure for MSPs: How Microsoft Autopatch Moves Governance Upstream

Mar 17, 2026 · 11:39


The episode reveals a structural shift in the managed services market, where the value proposition for MSPs and IT service providers is moving away from “running the tools” to delivering governance, risk management, and outcome-driven services. This shift is catalyzed by the increasing commoditization of tool-centric operations, as platforms and vendors such as Microsoft (Autopatch), Atera (autonomous agents), Summit Holdings (MSP as a service), and Ruest (RoboRoosty AI Workflow Builder) push standardized automation, workflow tools, and backend service packaging into the market. Cisco's Global State of Security report underscores this trend, identifying tool maintenance and fragmentation as primary sources of inefficiency. Evidence from Cisco shows 59% of security leaders pointing to tool maintenance as the chief inefficiency, with 78% citing tool dispersion and lack of integration. For MSPs, this results in growing unbillable labor spent on connecting systems, onboarding, retraining, and managing exceptions. The report indicates that the cost to deliver services is escalating faster than the value captured in contracts, exposing a margin squeeze and highlighting the risk that unmanaged operational complexity poses to profitability. Secondary developments reinforce the structural shift. Atera's no-ticket operational model and Microsoft's implementation of security updates through Intune and Autopatch transfer control and cadence of IT operations upstream, leaving MSPs responsible for policy exceptions and business risk translation rather than day-to-day execution. Summit Holdings' “MSP as a service” and D&H's expansion into enablement and training further commoditize backend functions, reducing differentiation for providers who fail to retain independent client intelligence and risk management. 
Operationally, the implications for MSPs and IT leaders are clear: dependency on vendor platforms and wholesale backend solutions increases, making risk ownership and client-specific intelligence the remaining sources of defensible value. Providers unable to price or document governance and exception management risk seeing margins erode as they absorb unbillable labor and liability. Future operational strategy will require clear mapping of tools to billable outcomes, explicit governance layers, and careful evaluation of which client insights remain uniquely held versus replicated across standardized platforms. Three things to know today 00:00 Tools vs Outcomes 02:50 Delivery Gets Packaged 05:17 Defaults Have Costs 07:42 Why Do We Care?  Supported by:  TimeZest Small Biz Thoughts Community

Business of Tech
Pentagon AI Model Ban Shifts Control from Vendors to Procurement Authorities

Mar 16, 2026 · 9:00


The episode details a structural shift in the technology landscape: AI models are increasingly being treated as commodity components, with operational control and procurement decisions moving to the orchestration layer. This change is illustrated by government procurement actions, specifically the Pentagon's designation of Anthropic's Claude model as a supply chain risk and the subsequent shift in model eligibility requirements. Policymaking authorities are now directly dictating which models can be used within national security supply chains, reconfiguring where power, liability, and decision-making sit. The primary development is the Department of Defense's recent disqualification of Anthropic's Claude from eligible contracts, leading to both contract cancellations and legal disputes. Anthropic has responded with lawsuits contesting its supply chain risk designation, while Microsoft has sought court intervention to block the Pentagon's ban, asserting this would prevent disruption to military AI workflows. The State Department has also moved its internal chatbot infrastructure from Claude Sonnet 4.5 to OpenAI's GPT-4.1, aligning with the President's compliance directive. Supporting developments include Google's deployment of Gemini-powered AI agents within the Department of Defense, and the emergence of tools such as Perplexity's APIs, which aim to simplify workflow construction across multiple models. The episode emphasizes that model swaps by agencies are not merely technical updates, but policy-driven control decisions. These actions underscore a climate in which model eligibility and operational portability are shaped by compliance and procurement authorities rather than technical teams or vendors. Operational implications for MSPs and IT providers are profound. Single-model dependencies now present measurable contract risk, especially for clients in defense, healthcare, or finance sectors.
Swapping models requires revalidation of prompts, outputs, and integrations, rather than simple API repointing. Providers are advised to audit workflows for reliance on any one model, prioritize abstraction layers that enable smooth transitions, and position model-agnostic architectures as proactive risk management. In a landscape defined by commodity models and policy-driven eligibility, model diversification now represents continuity planning rather than an engineering preference. Three things to know today: 00:00 Pentagon vs. Anthropic 02:19 Beyond the Model 05:07 Why Do We Care?  Supported by:  ScalePad, Small Biz Thoughts Community
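The abstraction-layer advice can be made concrete with a minimal sketch. Nothing below comes from the episode: the class names, the `complete` signature, and the registry are hypothetical stand-ins for whatever vendor SDKs a provider actually wraps.

```python
# Minimal sketch of a model-agnostic abstraction layer: workflows depend on a
# single interface, so an eligibility-driven model swap becomes a registry
# change plus revalidation, not a rewrite. Backends here are hypothetical stubs.

from abc import ABC, abstractmethod

class ModelBackend(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class StubBackendA(ModelBackend):      # stand-in for one vendor's model SDK
    def complete(self, prompt: str) -> str:
        return f"[backend-a] {prompt}"

class StubBackendB(ModelBackend):      # stand-in for a replacement model
    def complete(self, prompt: str) -> str:
        return f"[backend-b] {prompt}"

REGISTRY: dict[str, ModelBackend] = {
    "default": StubBackendA(),
    "fallback": StubBackendB(),
}

def run_workflow(prompt: str, backend: str = "default") -> str:
    """Workflows name a slot, not a vendor; a policy change swaps the slot."""
    return REGISTRY[backend].complete(prompt)

print(run_workflow("summarize contract risk"))              # routed to backend A
print(run_workflow("summarize contract risk", "fallback"))  # policy-driven swap
```

The point of the sketch is that only the registry knows vendor identities; prompts and outputs still need revalidation after a swap, but no workflow code changes.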

Business of Tech
RAM Shortages Reshape Channel Economics: Interview with Howard Davies

Mar 15, 2026 · 21:50


The episode centers on sustained component shortages in the IT channel, specifically RAM, which are expected to last for approximately two years. Dave Sobel and Contextworld CEO Howard Davies review the immediate and projected impacts, citing that shortages are driving manufacturers to allocate available components to higher-priced machines, hollowing out mid-range offerings. The result is a decline in unit sales, particularly in the consumer segment, offset by increases in average selling prices. Vendors may see overall revenue growth despite fewer units sold, but questions remain about whether increased margins will benefit distributors and resellers or be absorbed by vendors. Supporting data includes projections for the European market: unit sales are anticipated to decline by around 7%, while average selling prices may rise by approximately 14%, yielding a potential 6% net increase in vendor revenues. There is a distinction between business and consumer purchasing behaviors; business buyers are expected to maintain higher levels of spending due to operational requirements and perceived advantages from new hardware, especially AI-enabled devices, while consumer demand is forecast to soften due to price sensitivity. Adjacent topics include shifts in purchasing habits and technology adoption. Contextworld's sales data indicate increased demand for in-person retail, particularly in Europe and the UK, attributed to consumer interest in hands-on evaluation of new technologies, such as AI-capable PCs. While AI as a concept seldom drives purchasing decisions directly, named features like Copilot PCs are recognized as influencing consumer choices. The conversation also highlights Apple's expanding focus on business markets, with optimism for its forthcoming AI capabilities, and the emergence of vendors like Anthropic targeting enterprises with security and social responsibility as differentiators.
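The European projection is simple compounding arithmetic: a roughly 7% unit decline combined with a roughly 14% ASP rise nets out to about a 6% revenue gain. The 7%/14% figures are the episode's; the calculation below is ours.

```python
# Revenue effect of selling fewer units at higher average selling prices (ASPs).
# The net change compounds the two factors:
#   revenue_change = (1 + unit_change) * (1 + asp_change) - 1

def revenue_change(unit_change: float, asp_change: float) -> float:
    """Net fractional revenue change from fractional unit and ASP changes."""
    return (1 + unit_change) * (1 + asp_change) - 1

# ~7% unit decline combined with ~14% ASP rise -> ~6% net revenue increase
change = revenue_change(-0.07, 0.14)
print(f"{change:+.1%}")  # +6.0%
```

This is why vendor revenue can grow even as the unit market shrinks: the ASP increase more than offsets the volume decline.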
For MSPs and IT leaders, the primary operational implications include the need to adapt to a competitive landscape marked by supply constraints, price volatility, and evolving buyer behavior. The channel may be strengthened by integrating new value-added services, such as cybersecurity and managed services, yet risk remains regarding margin capture and vendor strategies. Providers are advised to monitor shifts toward ecosystem-driven AI solutions and evolving market programs, as well as opportunities in "declining" market segments that may still offer profitability for those able to meet residual demand efficiently.

Business of Tech
Microsoft and Anthropic Reshape MSP Partner Control Through Ecosystem Lock-In

Mar 13, 2026 · 9:10


The episode identifies a fundamental structural shift in the MSP and IT services landscape: vendor channel consolidation and ecosystem dependency are increasingly determining who controls customer relationships, margins, and access to recurring revenue streams. Companies such as Microsoft, Anthropic, and Huntress are actively reshaping the ecosystem by investing significant resources in partner programs and platform strategies that dictate operational baselines and restrict neutrality. This realignment is driving MSPs to deliberately choose platform alignments, as attempting to remain neutral increasingly results in a loss of relevance and market access. Central to this shift is Anthropic's $100 million investment in launching the Claude Partner Network for 2026, which creates certification and co-sell incentives for firms capable of implementing Claude within enterprise environments. According to Dave Sobel, this is not long-range product development but a concentrated customer acquisition cost to rapidly build channel coverage. In parallel, Microsoft is embedding Anthropic models within Copilot, shifting to a multi-model approach that retains flexibility at the AI model layer while keeping Azure as the entrenched operational platform. Supporting developments reinforce these channel and ecosystem pressures. Huntress's move to expand its partner program to value-added resellers (VARs) dilutes its previously MSP-exclusive channel, removing some of the distribution advantages MSPs may have relied upon. Sonomi's positioning of third-party risk management as an MSP revenue opportunity comes amid rising supply chain risk, as supported by ConnectWise's 2026 MSP Threat Report highlighting increased identity abuse and supply chain attacks. Simultaneously, declining PC shipments—especially for budget devices—are shifting the economic emphasis from hardware projects to operational service engagements such as identity governance and lifecycle management. 
The operational implications for MSPs are clear: partner program frameworks have become the gatekeepers of pricing, leads, and ongoing service annuities, reducing the room for independent strategy or procurement-driven decisions. Ecosystem alignment must be intentional and based on a realistic assessment of program timelines, certification windows, and revenue structure. As hardware refresh cycles slow and vendors consolidate services and identity requirements, MSPs face increased dependency risk, potential margin erosion, and diminished negotiating leverage. Those failing to anticipate or adapt to these shifts risk being relegated to subcontractor roles without control over customer relationships or recurring revenues. Three things to know today 00:00 AI Channel War 02:27 Identity Baseline Shift 03:43 Refresh Revenue Shift 04:46 Why Do We Care?  Supported by:  Small Biz Thoughts Community   

Business of Tech
Drop in Search Clicks and Rise in AI Distribution Channels Shift Value Away from Traditional MSPs

Mar 12, 2026 · 11:29


AI deployment is compressing margins and altering the economic structure of the IT services market, with digital platforms and private equity–backed consulting now determining who controls distribution, interfaces, and downstream value capture. As referenced by Dave Sobel, developments such as large language models reshaping search, IT distributors repositioning as digital marketplaces, and private equity standardizing AI consulting are reducing the role of traditional MSPs to commoditized implementation labor. Concrete market evidence includes the Global Technology Distribution Council's report citing that 80% of vendors see partner ecosystem growth as key, while 86% are using or testing digital platforms to drive cloud and AI services. Examples such as Anthropic's discussions to create AI consulting joint ventures with Blackstone and Hellman & Friedman, as well as OpenAI's partnerships with Thrive Holdings and Shield Technology Partners, show that operational models are being standardized and consolidated. Meanwhile, AI-powered search is reducing clicks to original content by up to 89%, transferring value to whoever controls the user interface. Supporting data from surveys conducted by the SMB Group, Pegasystems, and Atlassian highlight that 53% of SMBs are using AI, but only 3% of organizations report measurable business transformation despite a 33% productivity boost. Consumers show distrust in AI-driven customer service, and employee burnout and reduced confidence indicate that MSPs are absorbing increased operational complexity and support burdens even as margins compress. These developments reinforce the channel consolidation and margin repricing mechanisms described above. For MSPs and IT leaders, the practical risks include growing dependency on distributor and vendor digital marketplaces, narrowing ability to influence platform economics, and the transfer of governance obligations without matching margin.
Priority areas are building defensible, repeatable governance frameworks around AI, owning escalation and validation paths, and repositioning services toward process redesign engagements—not commoditized tool deployment. Failing to establish an IP or governance wedge may result in MSPs being locked into subcontractor roles with little leverage over pricing or client outcomes. Three things to know today: 00:00 Channel Bypassed 02:26 Delivery Commoditized 04:15 MSPs Left Holding 07:12 Why Do We Care? Supported by: ScalePad, Small Biz Thoughts Community

Business of Tech
AI Risk Goes Downstream: Why MSPs Are Inheriting Liability from Vendors and Policy Gaps

Mar 11, 2026 · 9:35


The dominant structural mechanism highlighted is the industry-wide shift toward liability transfer and governance gaps in AI procurement, deployment, and incident response. According to Dave Sobel, both vendors and organizations are accelerating AI adoption without corresponding investments in oversight, training, or clear accountability structures. This is reflected across multiple sectors, from software vendors such as Grammarly, Eightfold.ai, Cohesity, and Rubrik, to business leaders and policymakers, where risk is systematically deferred downstream rather than managed at the point of adoption. The most consequential evidence is the quantitative disconnect between stated AI priorities and functional oversight. Research cited by Dave Sobel from Economist Impact and HR Dive found that while 38% of organizations budget for AI and 86% of executives rate AI as essential, only 16% offer internal training and over half of department-level AI initiatives lack formal oversight (Ernst & Young). Additionally, 88% of AI vendors limit their liability, and only 17% align with regulatory compliance, per cited surveys, leaving substantial legal and operational risk for end users and service providers. Supporting this trend, Dave Sobel points to Grammarly's opt-out identity usage in new features and a class action lawsuit against Eightfold.ai regarding AI-driven employment decisions. Vendors such as Cohesity, Rubrik, ServiceNow, and Datadog are responding by building tools focused on remediation and recovery from AI-driven incidents, underscoring a shift from preventive governance to reactive containment. Policy moves—such as expanded operational cyber roles for the private sector—further offload accountability without addressing contractual and insurance exposure. 
For MSPs and technology leaders, these developments create practical risks: unclear service scope around AI tool usage in contracts, increased exposure to billable incidents and legal action, and rising labor costs for incident recovery. Service providers must audit agreements for AI-specific language, distinguish AI-related incidents from standard SLAs, and treat AI governance as a managed risk service. The pressure will increasingly fall on MSPs to account for training gaps, audit trails, compliance attestations, and recovery procedures—not simply the technology itself. Three things to know today 00:00 ROI Reality Check 02:12 Governance Gap Widens 03:14 Cleanup Economy Rises 05:45 Why Do We Care?  Supported by:  CometBackup 

Business of Tech
Microsoft and OpenAI Expand AI Agents While Shifting Governance Costs to MSPs

Mar 10, 2026 · 9:50


A structural shift is occurring in the managed IT services landscape as AI capabilities are rapidly embedded across enterprise applications, with oversight and risk management functions increasingly separated out and monetized as add-on services. Vendors, including Microsoft and OpenAI, are deploying AI agents in essential tools such as Outlook, Teams, and Excel, then selling governance, security, and compliance capabilities as additional paid layers. The core mechanism is the transfer of operational and liability risk downstream to IT service providers and their clients, while ownership of the control plane and margin on risk mitigation remain with the vendors. The episode highlights consequential findings regarding AI reliability and adoption. A Nature Medicine study found that OpenAI's ChatGPT Health underestimated emergency severity in 51.6% of cases, prompting concerns about overreliance on AI for critical decisions. Additionally, Confluent's UK executive survey indicated that 62% of organizations are already shifting decision-making to AI, but only 7% have a company-wide AI strategy, and fewer than half of executives and employees agree on actual daily AI usage. Most leaders receive little formal AI training yet are second-guessing their own judgment in favor of AI output. Further reinforcing the governance gap, Microsoft is launching Agent 365 and new enterprise security tiers, while OpenAI's acquisition of Promptfoo signals a focus on AI reliability testing and compliance monitoring. Funding for GRC platforms like IntelliGRC demonstrates capital flowing into third-party oversight solutions. The recurring pattern is vendors first pushing broad agent adoption, then introducing and monetizing governance as a discrete add-on, often outside the default package. Operationally, MSPs and IT leaders face increased liability exposure if they rely on vendor-native governance without independent audit or measurement capability. 
The absence of industry-standard reliability metrics for AI, combined with the perception and usage gaps inside organizations, calls for MSPs to lead in auditing, documenting, and independently measuring AI usage and performance. Failing to proactively manage these controls can result in silent risk absorption and unfavorable positioning as vendors bundle compliance and pass residual risk downstream to service providers. Three things to know today 00:00 AI vs. Judgment 02:35 Agents vs. Oversight 04:04 AI Reliability Gap 05:15 Why Do We Care? Supported by: ScalePad
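As a sketch of what independent usage measurement might look like in practice: the decorator, tool names, and log fields below are illustrative assumptions, not anything the episode prescribes.

```python
# Minimal sketch of independent AI-usage measurement: every model call goes
# through a wrapper that appends to the provider's own audit trail instead of
# relying solely on vendor-native dashboards. All names are illustrative.

import time
from typing import Callable

AUDIT_LOG: list[dict] = []   # stand-in for an append-only audit store

def audited(tool_name: str) -> Callable:
    """Decorator recording which AI tool ran, when, and with what I/O sizes."""
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        def inner(prompt: str) -> str:
            result = fn(prompt)
            AUDIT_LOG.append({
                "tool": tool_name,
                "ts": time.time(),
                "prompt_chars": len(prompt),
                "result_chars": len(result),
            })
            return result
        return inner
    return wrap

@audited("summarizer")
def summarize(prompt: str) -> str:   # stub standing in for a real model call
    return prompt[:20]

summarize("Quarterly incident report for client A")
print(AUDIT_LOG[0]["tool"], AUDIT_LOG[0]["prompt_chars"])  # summarizer 38
```

The design choice matters: because the log lives on the provider's side of the call, it remains a usable evidence trail even if the vendor's own reporting changes or disappears.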

Business of Tech
AI Remediation Without Governance: How MSPs Face Rising Liability and Cost Exposure

Mar 9, 2026 · 14:20


The dominant structural shift identified centers on liability allocation and governance in the context of agentic AI deployment across IT and managed services. The episode underscores how automation is moving beyond content generation to direct operational and security actions, referencing technology from OpenAI (GPT-5.3 Instant), Anthropic (Claude Marketplace), Google Workspace CLI, Microsoft's SharePoint AI features, and Hexnode's Genie AI. Vendors are embedding AI deeper into productivity and endpoint infrastructure, increasing both operational efficiency and the risk footprint—making governance, reliability, and accountability the new competitive differentiators. The most consequential development highlighted is the industry-wide disconnect between rapid AI remediation adoption and lagging governance. According to Omdia, 88% of organizations are using AI-driven remediation, but only 44% have implemented it for most exposure types, and nearly half (49%) of security teams lack trust in these systems. IBM data shows that 63% of organizations lack formal AI incident response policies, meaning deployment often outpaces the development of auditability and risk management. This creates a landscape where automated decisions are taken at scale without clear accountability structures or incident protocols. Supporting developments reinforce these governance and risk concerns. Reports of cognitive fatigue—termed “AI brain fry”—affecting over 14% of users (Boston Consulting Group/UC Riverside) and a 39% increase in error rates among those affected, point to compounding human and system risk when automation outpaces oversight. Market analysis from Accenture, Wharton, and the Dallas Fed notes that AI has shifted skill demand, displaced younger tech workers, and pressured traditional fixed-fee business models. Meanwhile, vendors are migrating from predictable per-seat pricing to variable token-based consumption, passing operational uncertainty onto MSPs and their clients. 
For MSPs, IT service providers, and technology leaders, the practical implications are clear. Failure to implement explicit governance, contract clauses, and incident protocols exposes providers to unpredictable liability. Passing through ungoverned consumption costs under fixed-fee contracts damages margins as AI use expands. The increasing cognitive load on staff supervising partially trusted automation further compounds operational risk. As the pricing model shifts, providers must negotiate new contract terms, institute AI incident playbooks, audit tool autonomy, and manage the blast radius of AI with the same rigor as legacy security controls. 00:00 Platform Land Grab 03:56 Who Owns Failure 07:27 Skills Over Titles 09:52 Why Do We Care? Supported by: JumpCloud
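The per-seat versus token-based pricing shift can be made concrete with a toy comparison. Every price and usage figure below is an invented assumption for illustration, not data from the episode.

```python
# Toy comparison of per-seat vs token-based pricing under a fixed-fee contract.
# All prices and usage figures are illustrative assumptions, not episode data.

def per_seat_cost(seats: int, price_per_seat: float) -> float:
    return seats * price_per_seat              # predictable, usage-independent

def token_cost(tokens_used: int, price_per_1k: float) -> float:
    return tokens_used / 1000 * price_per_1k   # varies with actual consumption

seats, seat_price = 50, 12.00                  # $12/seat/month (assumed)
quiet_month, busy_month = 400_000, 2_400_000   # tokens consumed (assumed)
rate = 0.75                                    # $ per 1k tokens (assumed)

print(f"per-seat:    ${per_seat_cost(seats, seat_price):,.2f} every month")
print(f"token quiet: ${token_cost(quiet_month, rate):,.2f}")
print(f"token busy:  ${token_cost(busy_month, rate):,.2f}")
# Under a fixed monthly fee, the 6x swing in token cost is absorbed by the MSP.
```

The illustration shows why the episode frames this as transferred uncertainty: the vendor's revenue now tracks consumption, while a fixed-fee provider eats the variance unless contracts are renegotiated.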

Business of Tech
AI Integration Raises Data Governance Demands for MSPs — Colin Blair

Mar 8, 2026 · 19:56


The episode centers on D&H's strategic approach to vendor selection, AI program development, and partner enablement within the evolving landscape for MSPs and IT solution providers. Colin Blair, Executive Vice President for cybersecurity at D&H, details a governance-driven process for curating vendor relationships, with emphasis on aligning with Gartner quadrant leaders, peer insight metrics, and channel-partner readiness. D&H's focus remains on SMB and mid-market segments where complexity is increasing, especially around compliance, data governance, and cybersecurity. Supporting this curated model, Colin Blair notes that D&H maintains onboarding rigor but rarely offboards vendors within its advanced solutions group, citing ongoing hyper-growth and the need to continuously add value for partners. The vendor evaluation emphasizes data-driven benchmarks and sustained relationship-building at industry events. The company is prioritizing supply chain strength for MSPs, driven by measurable factors such as profitability, cultural compatibility, and proven channel strategies. The conversation also highlights the expansion of the Go Big AI program, which aims to increase AI literacy among both partners and end customers. Training initiatives reached over 5,000 partners, focusing on foundational applications like Microsoft Copilot and AI PCs, while acknowledging that project success is heavily dependent on data quality and governance. Use cases where implementations see traction are typically well-defined, such as Vision AI for video analytics in healthcare and security verticals. The need for tailored, consultative conversations is cited as significant, as end customers and partners often lack clarity on automation priorities or AI readiness. The implications for MSPs and IT leaders are pragmatic: sustainable advantage is less about technology adoption and more about managing operational complexity, ensuring data governance, and enhancing cybersecurity postures. 
Decision-makers are cautioned to assess both the maturity and applicability of AI solutions, invest in targeted literacy and consultation, and anchor their vendor relationships in measurable business value. The focus should be on careful risk management, transparent partnership evaluation, and supporting clients through consultative, outcome-driven initiatives rather than broad or speculative technology bets.

Business of Tech
The Decline of Core MSP Services: Surviving the Shift to AI-Driven Differentiation with Anurag Agarwal

Mar 7, 2026 · 43:48


Research presented by Dave Sobel and Anurag Agarwal highlights a steep decline in profitability for core MSP services, driven by heightened commoditization and vendor-led automation of basic offerings such as endpoint management and help desk operations. According to Techaisle's 2026 data, the traditional labor-plus-license model is no longer sustainable, as shrinking margins force service providers to reconsider foundational strategies. The central message underscores an urgent need for MSPs to prioritize proprietary intellectual property (IP) and vertical-specific solutions—not for incremental growth, but as a matter of operational survival. Supporting this assessment, the discussion details how market demand has shifted: MSPs can no longer depend on generic solutions but must differentiate with specialized, repeatable offerings that address the financial optimization and liability concerns of business clients. The data indicates that SMBs are increasingly unwilling to invest in pilots or “all-you-can-eat” AI models without visible ROI and demand concrete solutions linked to business outcomes. Vendors and MSPs alike are being tasked with providing smaller, outcome-focused wins and developing skillsets in agentic orchestration, where AI-enabled digital agents and human technicians operate as co-equal components of the workforce. A related trend explored is the shift toward agentic AI and “zero-touch” MSP models, featuring automation of routine IT tasks and focus on workflow engineering rather than manual services. However, the episode notes that most providers are unprepared for the new set of risks and governance liabilities: as clients increasingly utilize AI agents, accountability for errors and regulatory compliance will rest heavily with MSPs, especially in sensitive geographies such as Europe where contractual governance is becoming standard. 
Conversations on whether to “build or buy” new capabilities reflect a split market, with only the top tier capable of meaningful in-house development, and the majority relying on third-party platforms with limited differentiation. For MSPs, IT service firms, and decision-makers, the core implication is the need to rapidly develop operational and governance maturity around automation, AI orchestration, and packaged offerings. Clinging to traditional models or treating AI as a mere add-on introduces significant risk, including shrinking margins, increased liability, and potential obsolescence. Providers are advised to narrow focus, specialize in vertical solutions, invest in internal competency with AI-enabled platforms, and shift toward packaged IP to avoid falling behind as both client expectations and regulatory requirements escalate.

Business of Tech
MSPWell Launch Reveals Governance Gaps in Channel's Mental Health Initiatives

Mar 6, 2026 · 12:46


The episode centers on a structural governance gap within the managed services industry as it attempts to address mental health using relationship-driven models typical of event and community management. This approach is exemplified by the launch of MSPWell, a not-for-profit mental wellness initiative incorporated in Ontario, Canada, targeting participants in the IT channel. The initiative operates as a live community—particularly via Discord—without formalized clinical oversight or published operational guardrails such as moderation standards, crisis escalation protocols, or sponsor influence controls. Evidence for an urgent governance concern is provided by industry data and operational decisions. According to MSPWell, burnout affects significant percentages of the workforce—citing an 82% burnout risk from a Mercer report and 66% from separate research. Despite the recurrence of staffing challenges in the MSP industry, MSPWell's infrastructure is underway with participation at industry events and vendor sponsorship, but formal governance documentation remains incomplete. The initiative explicitly confirms the absence of licensed mental health professionals in published leadership or advisory roles, positioning its support as peer-led. Supporting developments highlight how rapid community launch and sponsor-driven funding amplify risks when core protections are missing. Early coverage focused on recognizable names and event presence, while Dave Sobel emphasizes that, in mental health-adjacent contexts, moderation, privacy, and escalation protocols are not only differentiators but essential safeguards. At present, MSPWell's Discord community operates without visible guidelines or documented procedures, which exposes participants to predictable failure modes such as oversharing, privacy breaches, and harmful peer advice. 
Operationally, MSPs and IT service providers face heightened liability when participating in or supporting such initiatives without robust controls. Dave Sobel advises operators to request moderation, crisis, and data retention policies before endorsing participation, to treat involvement as networking rather than clinical support, and to monitor for the integration of licensed professionals into governance. The absence of enforceable governance exposes both individuals and sponsoring vendors to reputational and legal risk, and sets problematic precedent for future wellness platforms in the industry. 00:00 MSPWell Builds Mental-Health Platform on Sponsor-Funded Community Model 03:21 Guardrails, Guidelines, and Moderation  06:15 The Consequences 08:09 Why Do We Care? & What to Consider Supported by:  TimeZest   

Business of Tech
Margin Redistribution Forces MSP Service Restructuring in Memory-Constrained Markets

Mar 5, 2026 · 11:44


Market segmentation driven by rising memory costs is actively restructuring the endpoint device landscape, leading to margin redistribution across the technology stack. Apple exemplified this bifurcation strategy by launching an entry-level MacBook Neo at $599 built on the A18 Pro iPhone chip, while simultaneously increasing prices on other MacBook Air and Pro models by $100 to $400 in response to global memory shortages. This deliberate move separates high-margin premium hardware from low-cost devices, effectively diminishing the traditional mid-tier device segment where most SMB and MSP standards have typically been positioned. Supporting data highlights the broader industry impact: 62% of small businesses report ongoing supply chain disruption, affecting pricing, timing, and availability, according to recent NFIB survey data. Component suppliers such as Broadcom are capturing upstream value, with a reported 29% year-over-year revenue increase driven by concentrated AI infrastructure demand. Omdia's forecast anticipates a significant smartphone shipment decline in 2026, attributed primarily to rising memory costs, with the impact falling unevenly: entry-level devices are squeezed while premium margins are preserved. A parallel challenge emerges within organizational governance and service delivery. The Logicalis Global CIO Report 2026 found over half of CIOs believe AI adoption is outpacing their management capabilities, with 90% of organizations lacking internal technical expertise yet 72% planning further AI investment. This gap between ambition and readiness, combined with traditional ticket-based operating models, means unmanaged risk increases as businesses prioritize speed over structured governance. Internal IT builds are increasingly abandoned, with 71% of IT and security leaders reporting failure to meet on-time and budget targets, signaling that velocity and accountability, not just ticket closure, are becoming core client expectations.
Implications for MSPs and IT service providers are immediate and operational. Service models must account for hardware segmentation by incorporating differentiated support structures for entry-level versus premium devices. Increased complexity and support demands from constrained hardware will compress margins unless properly priced and standardized. MSPs are positioned closest to liability accumulation as clients face both hardware refresh and AI adoption without sufficient internal expertise. Advisory frameworks should address total cost of ownership, memory shortage context, and governance gaps, productizing assessments and redesigning service delivery for speed with explicit controls to manage risk.

Three things to know today
00:00 Memory Costs Squeeze Entry-Level Hardware as Suppliers Capture Margin Upstream
02:24 Apple's $599 MacBook Neo Signals a Split Hardware Strategy, Not a Budget Play
04:22 IT Service Models Built on Approvals Are Losing to Speed-First Competitors
06:57 Why Do We Care?

Supported by:

Business of Tech
Risk Moves Upstream: How Embedded Governance and Insurance Set New MSP Constraints

Business of Tech

Play Episode Listen Later Mar 4, 2026 11:11


The MSP market is undergoing a critical shift toward risk management as the central value proposition, with operational accountability now defined by the ability to produce defensible documentation and deliver rapid incident response. According to Dave Sobel, MSPs are no longer primarily offering stack management, but are increasingly brokering risk through cyber warranties, insurance underwriting, incident retainers, and AI governance frameworks. Those unable to support their claims with evidence and formal processes risk becoming mere facilitators for third-party terms and losing control over their margins.

Recent developments reinforce this shift. A Splunk report finds that nearly all CISOs now view AI governance and risk management as their responsibility, citing threat actor sophistication as a primary driver. AI is assisting with event triage and data correlation, but verification—especially around AI-generated content—is unreliable, with detection tools struggling against advanced fakes. Insurance mechanisms are becoming productized with prioritized incident response, and legal intelligence is being embedded into MSP workflows. Vendors like N-able, Monjur, SentinelOne, and DocuSign are directly integrating financial, legal, and governance functions into their offerings, fundamentally altering client and vendor relationships.

Adjacent stories illustrate volatility in traditional safeguards and the operational reality of adaptive threats. CISA leadership changes indicate instability in public response institutions. AI-powered malware exemplifies the challenge: ESET's PromptSpy uses Gemini to continuously adapt its persistence, outpacing static detection models. Insurance underwriters are increasingly demanding machine-verifiable evidence of controls, using detailed questionnaires to distinguish autonomous AI from marketing claims. The risk is no longer just technical; it is structural.
For MSPs and IT leaders, operational posture is now shaped by an ecosystem of embedded warranties, legal terms, governance requirements, and adaptive threats. The ability to document, defend, and productize risk controls becomes a baseline for credibility and insurance eligibility. Failure to build evidence pipelines and clarify vendor-imposed liabilities exposes service providers to compounded risk. The practical implication is a necessity for MSPs to treat governance and detection as measurable, documented capabilities—not assumptions or routine paperwork.

Three things to know today:
00:00 CISOs Own Governance, Detectors Lag Fakes, Response Gets Contracted — Accountability Follows
03:14 N-able, SentinelOne, DocuSign Move Risk Management Into the Stack — MSP Terms Follow
05:10 CISOs Want Agentic AI, But Insurers and Adaptive Malware Are Forcing the Timeline
07:32 Why Do We Care?

Supported by: CometBackup, Small Biz Thoughts Community

Business of Tech
Supply Chain Risk Designations Are Reshaping Federal AI Procurement

Business of Tech

Play Episode Listen Later Mar 3, 2026 13:41


The episode centers on the federal government's evolving approach to AI vendor governance, underscored by the recent directive from President Donald Trump for federal agencies to halt the use of Anthropic's AI technology. This shift follows the Pentagon's termination of its relationship with Anthropic over the company's refusal to relax contract restrictions around citizen data and autonomous weapons, ultimately resulting in Anthropic being designated as a “supply chain risk” by Defense Secretary Pete Hegseth. For MSPs and IT providers serving federal and SLED clients, this designation functions as an immediate procurement barrier rather than a negotiable label, directly impacting vendor eligibility and contract continuity.

Contextually, 70% of federal agencies are reassessing their use of AI tools amid fluid regulations and heightened concerns around transparency and accountability, according to recent reports. The National Institute of Standards and Technology (NIST) has launched the AI Agent Standards Initiative, but enforcement is several years away, with only a request for information planned by March 2026. In parallel, a diplomatic initiative led by Secretary of State Marco Rubio opposes international regulations on foreign data handling, though this stance does not supersede foreign law, creating a complex compliance landscape, especially for multinationals. Meanwhile, the U.S. Supreme Court's refusal to hear an AI copyright case reaffirms the lack of copyright protection for purely AI-generated works.

The episode also discusses OpenAI's agreement with the Pentagon, described by CEO Sam Altman as "rushed," and criticized for permitting domestic surveillance under flexible legal interpretations. Public and employee backlash prompted OpenAI to revise contract terms, but critics argue essential permission structures remain.
Anthropic's rollout of an AI migration feature during this period is flagged as a compliance event, raising risk when transferring data histories across vendor boundaries without audit or logging. Notably, consumer responses to AI vendor practices—evidenced by surges in Claude signups and ChatGPT uninstalls—are now influencing enterprise technology procurement as values-based purchasing enters the operational conversation for service providers.

Operationally, the lack of a stable legislative or regulatory framework means MSPs and their clients face rapidly shifting governance through contract terms and procurement policy rather than law. The episode cautions that vendor selection cannot be guided by assumptions of ethical safeguards in provider policies or by default transitions to alternative vendors such as OpenAI, whose legal standing remains unsettled. Key recommendations include auditing client environments for exposure to designated supply chain risks, refraining from rigid vendor integrations, updating contractual IP language in light of the absence of AI copyright, and maintaining ongoing awareness of governance developments. Multi-vendor strategies and adaptable compliance positions are identified as essential risk mitigation practices in an environment marked by administrative fiat and reactive vendor positions.

Three things to know today
00:00 Anthropic Blacklisted After Rejecting Pentagon's Autonomous Weapons Data Demands
04:58 OpenAI Wins Federal AI Contract Anthropic Refused, Then Rewrites It Under Pressure
07:38 Anthropic Outages Hit as Claude Sign-Ups Quadruple, ChatGPT Uninstalls Surge 295%

Supported by: ScalePad, Small Biz Thoughts Community

Business of Tech
Hardware Cost Volatility Forces MSPs to Reprice Contracts and Restructure Service Models

Business of Tech

Play Episode Listen Later Mar 2, 2026 12:49


Enterprise IT spending is projected to reach $4.5 trillion by 2026, but this growth is concentrated in software, cloud services, and AI infrastructure for large organizations, according to HG Insights and Omdia research cited by Dave Sobel. The system integration market is positioned to approach $950 billion in 2025, with enterprises working with an average of 6.3 technology partners. A substantial surge in AI-optimized server sales, as reflected in Dell Technologies' reported 342% year-over-year increase in revenue for those systems, is reshaping supply chains and vendor dynamics, leading to shortages of DRAM, SSDs, and hard drives.

Underlying this development are volatile component costs. DRAM prices have doubled quarter over quarter, and both Micron Technology and Western Digital have indicated they are sold out for 2026. HP reports that RAM now constitutes 35% of new PC materials costs, up dramatically from 18% the previous quarter. Such cost shifts are creating downstream risks for managed service providers (MSPs) with fixed-price agreements, as the economic assumptions underpinning many contracts—stable hardware prices and predictable cloud costs—no longer hold.

The episode also highlights an increase in application sprawl and a widening gap between IT budgets and other operational costs. A Torii report shows large enterprises use an average of more than 2,191 applications, with more than 61% bypassing formal IT approvals, resulting in unmanaged security and compliance exposure. Additionally, 80% of small businesses report rising energy costs that directly compete with IT budget allocations. Industry analysis from Jefferies and Boston Consulting Group signals that AI and automation are not viewed uniformly as productivity boosters and may compress revenue models in both Indian and domestic IT services sectors.
The practical implication for MSPs is the urgent need to audit and reprice contracts related to hardware procurement and refresh cycles, clearly documenting and communicating current cost realities with clients. Dave Sobel stresses reframing device lifecycle extensions as a security risk rather than a cost-saving measure and warns against selling clients on speculative AI market projections. The advice is to focus on specific, scoped use cases and to structure agreements that accurately reflect volatility in component costs and the operational burden of application sprawl, ensuring financial and legal accountability as the IT services landscape evolves.

00:00 $4.96T IT Spend Surge Bypasses SMBs as AI Infrastructure Captures Enterprise Budgets
03:58 Dell's $43B AI Server Backlog Triggers DRAM Shortage, Repricing Downstream Hardware
05:52 AI Shrinks IT Services Revenue Model; MSPs Face Contested Implementation Role

This is the Business of Tech.

Supported by:

Business of Tech
Cybersecurity Distribution and Shared Risk Models: Interview with Jason Beal of Exclusive Networks

Business of Tech

Play Episode Listen Later Mar 1, 2026 19:15


The episode centers on the evolving responsibility and risk allocation within cybersecurity distribution, with particular focus on Exclusive Networks' approach. Jason Beal, president of Exclusive Networks North America, outlines their emphasis on a technical workforce, maintaining a 1:3 ratio of engineers to sales representatives. This structure is positioned to address the increasing complexity of cybersecurity and the demands faced by service provider partners, aiming to support solution integration and customer needs while clarifying each party's liability.

Supporting this structure, Jason Beal identifies the role of the distributor as both an extension and enabler for MSPs and IT services companies. Distributors are expected to supplement partners' capabilities—whether technical, financial, or operational—without assuming technology failure risk, which remains with the original technology vendors. Discussion of shared responsibility models also distinguishes between sales success (customer adoption, retention) and risk management. Recent developments in cyber insurance are cited as having reduced the direct risk burden on MSPs, shifting much of the liability away from service providers toward technology creators, albeit within contractually defined limits.

Adjacent to cybersecurity, the conversation addresses skill and adoption gaps prompted by rapid technical innovation, specifically referencing artificial intelligence (AI). Jason Beal quantifies educational efforts by highlighting a collaboration with Cal Poly San Luis Obispo, which has engaged 100 students to help address workforce shortfalls in cybersecurity and AI. Beal's academic experience also informs the importance he places on modernizing IT operations curricula to better reflect current business challenges, such as cloud, AI, and global supply chain impacts.
For MSPs and IT service providers, implications include the growing necessity to audit core competencies and allocate resources strategically, leveraging distributors not just for sourcing products but for specialized expertise, integration, and operational support. Risk mitigation remains tied to understanding contract language, vendor accountability, and developments in cyber insurance. The pace of AI and other technology adoption requires continuous education and careful evaluation of both operational risk and the practical limitations of solutions promoted by the channel and distribution partners.

Business of Tech
Anthropic Refuses Pentagon AI Demands; Burger King's AI Monitoring Raises Privacy Risks

Business of Tech

Play Episode Listen Later Feb 27, 2026 14:08


Anthropic's refusal to remove safeguards against mass domestic surveillance and fully autonomous weapons in its interactions with the Department of Defense establishes an explicit boundary on the use of AI in federal contracts. The company cited specific civic and legal risks, emphasizing that current AI systems are not reliable enough for autonomous weapon deployment and warning that government pressure on vendors to bypass statutory constraints poses broader accountability issues. This underscores a shift in liability for MSPs and IT providers—any weakening of safeguards under contract does not eliminate risk but instead transfers possible exposure down the technology supply chain.

This position is reinforced by the lack of unconditional trust in military oversight, as highlighted by the Pentagon CTO's remarks, and by clear legal challenges, including violations of the Fourth Amendment and Department of Defense Directive 3000.09. Dave Sobel asserts that professional liability and cyber policies do not typically cover actions undertaken solely at government request where legal limits are breached. This increases the necessity for MSPs and IT leaders to verify that contract language explicitly defines acceptable AI use and to ensure written documentation before government or enterprise client demands arise.

Additional analysis includes operational deployments of AI in service and workplace environments. Burger King's AI chatbot, Patty, and ServiceNow's autonomous request resolution underscore the friction between efficiency claims and trust gaps, as evidenced by a YouGov survey that found 68% of consumers lack confidence in AI customer service. Dave Sobel notes that MSP benchmarks tied to vendor ticket closure rates may not reflect real client satisfaction or risk, especially when legal requirements for monitoring and consent are not met.
The episode further covers market reactions to speculative reports on AI-driven job displacement, studies demonstrating AI's failure to maintain human-like restraint in conflict scenarios, and IBM's valuation drop due to AI modernization tools. For MSPs and IT decision-makers, the practical takeaway is the need for documented governance, explicit contractual safeguards, and ongoing risk assessments when deploying or recommending AI solutions—particularly in environments where trust, human oversight, and insurability are not yet aligned with technical capability.

Three things to know today:
00:00 Anthropic Refuses Pentagon Demands on Surveillance and Autonomous Weapons, Risks Contract
03:40 AI Hits the Human Layer — and Governance, Consent, and Trust Infrastructure Aren't Ready
07:37 AI Moves Markets, Escalates Wars, and Splits Partner Ecosystems — In One Week

This is the Business of Tech.

Supported by: IT Service Provider University

Business of Tech
Pentagon Pressures Anthropic for AI Access; VMware Exit Costs and Compliance Risks for MSPs

Business of Tech

Play Episode Listen Later Feb 26, 2026 13:58


The episode's central development is the ongoing dispute between the U.S. Department of Defense and Anthropic regarding Pentagon demands for unrestricted access to Claude, Anthropic's AI model. According to Dave Sobel, the Pentagon has threatened to sever ties or invoke the Defense Production Act if the company does not comply, seeking capabilities that Anthropic argues may be illegal—specifically mass surveillance without warrants and autonomous weapons systems without human control. This move exposes Managed Service Providers (MSPs) serving defense contractors to unpredictable legal, operational, and compliance risks embedded in their AI workflows.

The analysis highlights that a commercial AI provider's acceptable use policy now intersects directly with national security policy, and even partial vendor compliance can trigger regulatory or legal instability for dependent organizations. For MSPs, this means that building service offerings on AI infrastructures without clear fallback strategies or documented policy change clauses can lead to unmanageable risk and liability in the event of provider or legal regime shifts. Dave Sobel stresses that failing to address policy volatility as part of a managed service amounts to underwriting geopolitical risk without compensation.

Other notable developments include the passage of the Small Business Artificial Intelligence Advancement Act, federal cybersecurity resource contraction as CISA operates with 38% staffing after layoffs, and heightened uncertainty around cloud infrastructure due to Microsoft's Azure Local “air-gapped” offering not wholly mitigating U.S. CLOUD Act exposure.
Vendor news covered new AI-powered compliance features from Compliance Scorecard (version 10) and Beachhead Solutions (ComplianceEZ 2.0), Apple's accelerated retirement of Rosetta 2 translation technology, a Microsoft 365 Copilot DLP change, and continued fallout from VMware's acquisition by Broadcom, which has led to ongoing cost and trust challenges for cloud and infrastructure partners.

The episode's clear implications for MSPs and IT providers are operational. Service catalogs and statements of work should actively address AI provider liability, dependency exit planning, and degraded federal cybersecurity support. Without scheduled and documented compatibility and risk reviews, MSPs absorb hidden exposure into their margins. Vendor stability can no longer be assumed, and proactive policy, renewal intelligence, and transparent advisory sessions are now required to avoid unplanned liability, budget crises, and damaged client trust.

Four things to know today
00:00 Pentagon Threatens Anthropic Over Claude Access, Demands Autonomous Weapons Use
04:31 CISA Cuts, Azure Sovereignty Push Signal End of Federal MSP Safety Net
06:56 AI Compliance Tools Flood Market as MSPs Face Validation Gap
09:54 86% of Firms Cutting VMware Ties as Broadcom Renewal Costs Loom

This is the Business of Tech.

Supported by: Small Biz Thoughts Community

Business of Tech
Goldman Sachs Reports $700B AI Spend Yields No US GDP Growth; 40% of AI Projects Face Cancellation

Business of Tech

Play Episode Listen Later Feb 25, 2026 14:50


Recent analysis from Goldman Sachs indicates that $700 billion in AI investment during 2025 resulted in no measurable U.S. GDP growth, with most AI equipment imports negating domestic benefits and 80% of surveyed firms reporting no productivity or employment improvements. This pattern suggests that AI-related spending has primarily shifted margins from enterprise IT budgets to a small number of infrastructure vendors rather than delivering distributed value. Internal concerns are rising, with 90% of IT leaders questioning AI's return on investment, and 80% citing fragmented data as a primary challenge to measuring outcomes.

Further context reveals that agentic AI initiatives face operational headwinds: Gartner expects 40% of such projects to be cancelled by 2027, and S&P Global found nearly half are abandoned before production, most often due to inadequate planning and data foundations. Margin erosion is widespread, attributed to AI implementation costs, and attempts to scale AI agents into production remain limited by inference costs and insufficient infrastructure. Despite increased adoption efforts, sustainable value delivery from AI platforms remains elusive for most organizations.

Enterprise AI access is becoming increasingly concentrated. OpenAI's partnership with consulting firms such as BCG, McKinsey, Accenture, and Capgemini consolidates control of the enterprise distribution layer, narrowing competitive opportunities for smaller providers. Meanwhile, Amazon's 13-hour AWS outage, linked to the misconfiguration of an internal AI tool, underscores the liability ambiguity in agentic systems—where vendors may attribute autonomous actions to user error, complicating risk assignment. Additional updates from vendors such as Anthropic, Cloudflare, and New Relic address incremental technical capabilities, with a distinct focus on cost, operational governance, and policy enforcement.
The prevailing themes for MSPs and IT leaders are increased scrutiny of AI value, heightened exposure to cost and accountability risk, and the emergence of managed service opportunities around data governance, cost instrumentation, and liability management. With enterprise market channels consolidating and risk shifting toward service providers, integrating robust contractual definitions for autonomy, incident attribution, and financial boundaries is essential to limit harm and clarify responsibility before incidents occur.

Four things to know today
00:00 Goldman: $700B AI Spend Delivered Near-Zero U.S. GDP Growth in 2025
03:49 OpenAI Enlists BCG, McKinsey, Accenture to Distribute Enterprise AI Agents
06:44 Report: Amazon's Own Engineers Prefer Claude Over Its Mandated Internal Tools
08:56 AI Inference Costs Are Falling — But Governance Gaps Are Growing

This is the Business of Tech.

Supported by: CometBackup, Small Biz Thoughts Community

Business of Tech
Remote Monitoring Tool Abuse Surges, Microsoft Copilot Control Failures, and AI's Channel Impact

Business of Tech

Play Episode Listen Later Feb 24, 2026 14:11


Cybercrime is projected to reach a $12.2 trillion annual impact by 2031, with a notable surge in remote monitoring and management (RMM) tool abuse—up 277% year-over-year, according to Huntress and supporting vendor reports. Attackers utilize legitimate IT tools to facilitate stealthier ransomware and phishing campaigns, amplifying structural vulnerabilities within MSP technology stacks. Key metrics from Acronis, WatchGuard, and Vectra AI indicate a shift to smaller, more evasive malware campaigns, longer times to ransomware deployment (averaging 20 hours), and widespread unaddressed security alerts, raising questions about the adequacy of current defenses and incident response practices.

Vendor-supplied threat intelligence further shows that MSPs' reliance on signature-based platforms and insufficient visibility leaves them exposed to evolving attack techniques. Data reviewed suggests phishing footholds can quickly compromise cross-client environments, and legal ramifications fall heavily on the service provider when RMM or monitoring tools act as entry points. Notably, only about 58-60% of organizations report full visibility across their systems, with a majority of alerts remaining unaddressed, underscoring gaps in operational maturity and preparedness.

Adjacent coverage highlighted Microsoft Copilot's repeated security control failures within regulated environments, specifically its inability to enforce sensitivity labels and boundaries across emails—most recently affecting the UK's National Health Service. The lack of vendor-announced architectural changes calls into question the viability of deploying AI tools in compliance-driven contexts. Separately, political and public backlash against surveillance technologies (such as Flock cameras) demonstrates that unchecked data collection is no longer a manageable passive risk, as data becomes increasingly actionable and retains liability beyond technical considerations.
The practical takeaway for MSPs and IT leaders is a need to prioritize audit, documentation, and enforcement of controls within their technology stacks, especially where vendor tools or AI-driven automation intersect with compliance and client trust. Preserving operational optionality and scrutinizing vendor terms—particularly data sharing and architectural enforcement—are essential to reduce exposure. Waiting for vendor patches, disregarding documented control failures, or underestimating public scrutiny elevates liability across legal, reputational, and client relationship domains.

Four things to know today:
00:00 Vendor Threat Reports Converge on One Risk MSPs Can't Outsource: The RMM as Breach Vector
05:11 Copilot Failed Compliance Controls Twice in Eight Months — A Patch Won't Fix That
07:03 Flock Backlash Exposes the Liability Hidden in Every Vendor Data-Sharing Contract
09:42 GTDC Summit: Distributors Pitch AI On-Ramp as Hyperscalers Compress Their Margin

Sponsored by:

Business of Tech
IT Salary Compression, AI Trust Decline, and Vendor Consolidation Impact MSP Strategies

Business of Tech

Play Episode Listen Later Feb 23, 2026 14:15


Recent data highlights a growing disconnect between technology spending and measurable business outcomes, with small business optimism softening and widespread skepticism about the benefits of artificial intelligence. The transcript cites that 80% of firms see no noticeable AI-driven productivity improvements, while trust in technology companies, particularly AI vendors, has declined globally according to the Edelman report. For MSPs, this presents a risk of credibility gaps, especially for those selling AI solutions without corresponding outcome data, as client trust and spending habits grow more discerning in the face of unfulfilled promises.

Further context is provided by economic indicators showing a resilient U.S. economy, yet persistent challenges for small businesses. The NFIB Small Business Optimism Index has dropped slightly to 99.3, with insurance costs and labor quality as major pain points; only 16% of business owners expect higher sales. At the same time, IT professionals face salary compression—median IT salaries fell from $145,000 in 2023 to $115,000 in 2024—despite a severe shortage of skilled cloud, AI, and infrastructure talent, as less than 10% of hiring managers are confident in filling in-demand roles.

Additional market pressures include rising technology budgets—three-quarters of CFOs anticipate larger tech allocations, but headcount increases are slowing and tech spending faces a widening affordability gap due to sector-specific inflation outpacing budget growth. Vendor-specific developments, such as Western Digital exhausting hard drive capacity for 2026 and N-able reporting 12.8% revenue growth alongside ongoing losses and a 65% stock decline since 2021, illustrate structural risks. Vendor rationalization and strategic uncertainty are likely outcomes for MSPs relying heavily on underperforming partners.
Key takeaways for service providers and IT leaders include the need for caution in messaging and solution positioning: outcome data and defensible value propositions are essential when advocating AI or cloud services. Salary data should be weighed against demand-side evidence to avoid retention failures. Finally, dependency on vendors with deteriorating financial outlooks heightens operational risk; providers should proactively assess alternatives and align with financially sustainable partners to reduce exposure during vendor consolidation cycles or market restructures.

Four things to know today
00:00 AI Productivity Gap Widens as Trust Drops — MSPs Selling Outcomes They Can't Measure Face CFO Audits
04:51 IT Median Salary Dropped 20% in 2024, But Only 7% of Hiring Managers Can Fill AI and Cloud Roles
07:26 IT Inflation Hits 6.9% as CFOs Concentrate Spend; Western Digital Fully Booked Through 2026
10:28 N-Able Beats Revenue, Misses Earnings as 2026 Growth Guidance Drops to 8–9%

Sponsored by: CometBackup, Small Biz Thoughts Community

Business of Tech
Jessica Yeck on AI Project Challenges and Partner Strategies at TD SYNNEX

Business of Tech

Play Episode Listen Later Feb 21, 2026 12:39


The discussion centers on the implementation challenges and partner enablement strategies for artificial intelligence (AI) within the technology channel. According to TD Synnex's AI Accelerator program, only a small portion of AI projects achieve active deployment and measurable ROI, with widespread difficulties cited in scaling complex AI use cases. Jessica Yeck, SVP of Vendor Solutions at TD Synnex, highlights that progress is contingent upon engaging partners at their current state of AI readiness and aligning support resources accordingly. The evidence reflects a move away from one-size-fits-all approaches toward tailored frameworks that focus on tangible business outcomes and repeatable processes.

TD Synnex's revised strategy prioritizes meeting partners “where they are,” using assessment frameworks that differentiate between partners with defined AI strategies and those seeking foundational guidance. Jessica Yeck references leveraging the broader technology ecosystem—including vendors, ISVs, and hyperscalers—to deliver solutions with multi-party input. This approach enables partners to identify actionable opportunities and develop pipelines, but demands cross-functional collaboration and technical-specialist engagement, particularly as customization—rather than rigid standardization—is required for effective deployment.

The episode also addresses the evolving role of technology distribution in supporting partners beyond logistics. There is explicit recognition of the importance of financial mechanisms, marketplace access, and consultative guidance for services. Jessica Yeck underscores the interconnectedness of relationship-building, competency focus, and ecosystem utilization, noting that partners do not need exhaustive in-house technical skills if they can identify and collaborate with relevant specialists. This points to a strategic shift in what services and value partners can realistically deliver.
For MSPs and IT service providers, the key implications involve re-evaluating approaches to AI enablement and partner relations. Instead of prioritizing technical uniformity or attempting to master every subsystem, providers should invest in relationship management and focused competency development while leveraging broader ecosystem resources. Adoption risk is reduced when partners clearly understand their customers' primary objectives and are prepared to orchestrate service delivery with targeted technical and financial support from their distribution networks. The episode reiterates that risk and accountability in AI projects hinge on practical readiness, process discipline, and honest assessment of operational capabilities, rather than technology enthusiasm or over-reliance on standardized templates.

Business of Tech
Creative AI Go-to-Market Strategies for MSPs in 2026: SMB Community Podcast

Business of Tech

Play Episode Listen Later Feb 19, 2026 23:05


Welcome to a feed drop of the SMB Community Podcast, the longest-running MSP-focused podcast in the industry. Hosts James Kernan and Amy Babinchak dive deep into AI go-to-market strategies for 2026, inspired by insights from Amy Babinchak's recent AI class for MSPs.

They open with the latest news on Microsoft Copilot and Anthropic's integration, highlighting new privacy and security features for Office apps. Then, they explore how MSPs can not only adopt AI internally but also create new, innovative service offerings for their clients—like custom AI grant-writing agents for nonprofits, real-world business demonstrations, and the integration of AI readiness assessments. Pricing strategies, project sales versus monthly recurring revenue, and the importance of meaningful quarterly business reviews also come under the spotlight.

Throughout the conversation, Amy Babinchak and James Kernan share practical examples, discuss industry challenges, and encourage listeners to rethink and monetize their approach to AI as we move toward 2026. Tune in for fresh ideas, actionable strategies, and a glimpse into the real-world experiences of MSPs shaping the future with AI, and find it on your favorite podcast player.

Links at https://smbcommunitypodcast.com

Business of Tech
Managed Services and AI Integration: Interview with Brian Harmison on Corsica Technologies' Strategy

Business of Tech

Play Episode Listen Later Feb 17, 2026 22:47


Corsica Technologies' reported 105% year-over-year growth in managed services bookings stands out as the primary development, indicating heightened demand for flexible service models among businesses with existing IT functions. According to Brian Harmison, CEO of Corsica, this growth is attributed to the company's focus on operational integration, automation, and data-centric managed services that supplement, rather than replace, in-house IT capabilities. The significance for MSPs is not the expansion itself, but the operational choices that enable sustained trust and differentiated engagement in a competitive landscape. Supporting details clarify Corsica's operational strategy: instead of automating or deploying AI indiscriminately, Harmison emphasizes that automation and AI are only effective atop an already “operationally excellent” MSP framework. Practical deployments cited include user onboarding/offboarding workflows, which demand both internal process clarity and integration with client HR systems. The company positions data integration and workflow consulting as integral to MSP-client relationships, not as add-on projects. Corsica's contracts reportedly reduce friction and avoid asset-tracking or incremental billing, seeking to foster longer-term trust over short-term revenue optimization. The episode also addresses the implications of Corsica's acquisition of Accountability IT. Harmison cites alignment in operating models and targeted capabilities—especially in Microsoft security and AI expertise—as central to the integration's value, rather than generic synergies. He notes that continuity of client relationships and careful preservation of existing service structures were prioritized in the first 90 days, even at the expense of speed, to mitigate operational risk and maintain client trust. The discussion highlights the risk tradeoffs between scaling for broader capability and maintaining agility for specialized client needs. 
For MSPs and IT leaders, the takeaway is to focus on risk reduction through operational excellence and trusted client relationships. Embracing automation and AI is not a universal solution; process maturity and readiness in both the provider and customer are preconditions for any meaningful implementation. Acquisitions require careful cultural and operational integration, with an emphasis on continuity and incremental capability, rather than immediate consolidation or scale. The episode frames operational clarity and trust—not rapid expansion or technology adoption—as critical determinants of long-term viability and resilience in managed services.

Business of Tech
Deploying Agentic AI at Scale: Infrastructure, Reliability, and Risk with Ran Aroussi

Business of Tech

Play Episode Listen Later Feb 16, 2026 23:03


Agentic AI is being deployed as production infrastructure in enterprise settings, but prevailing frameworks remain unreliable for mission-critical operations. Dave Sobel and Ran Aroussi from Muxie underscored that while AI agents are functional—especially in non-deterministic contexts like customer support—expectations of deterministic, workflow-based reliability are not met. The move from demonstration agents to production-scale tools brings heightened attention to issues of reliability, observability, and especially the risk of vendor lock-in for Managed Service Providers (MSPs) and their clients.

Operational deployment of AI agents currently gravitates toward roles with minimal operational risk, such as customer-facing chatbots or internal chief-of-staff assistants. Aroussi explained that while such agents can automate initial support tiers and internal daily briefings, their unpredictability and potential for error limit their use in processes demanding strict oversight and accountability. He identified two core use cases—external (customer support) and internal (personalized information management)—explicitly noting that agents are best positioned to augment rather than fully automate complex workflows at this stage.

A critical risk for MSPs lies in attempting to retrofit existing software frameworks to support agents, which introduces integration complexity and increases the likelihood of operational failures. Purpose-built infrastructure for agentic AI offers better alignment between AI capabilities and production requirements, with Aroussi citing drastically reduced hallucination rates and improved oversight when using native tools.

Open source is identified as a foundational element for AI development, but it incurs its own risks, particularly around third-party code quality and the long-term sustainability of community-driven projects.

The practical implication for MSPs and IT service providers is clear: a cautious, incremental adoption approach focused on low-risk use cases, coupled with rigorous controls on agent permissions and robust audit trails, is essential. Decision-makers should avoid assuming agents operate with the reliability or accountability of traditional software, prioritize operational transparency, and ensure that responsibilities for agent actions are clearly defined and enforced at the implementation level. Vendor lock-in and software provenance remain significant governance concerns as agentic AI moves from experiment to infrastructure.

Business of Tech
AI Spending Impact, Channel Share Decline, and MSP Growth Strategies With Jay McBain

Business of Tech

Play Episode Listen Later Feb 15, 2026 43:55


The central development addressed is the disconnect between rising overall IT spending and the declining channel share for MSPs and IT partners. Dave Sobel, in discussion with an industry analyst, highlights a reduction in indirect channel participation—from over 75% to a projected 66.7% in 2026—primarily due to the concentration of AI infrastructure investment among the largest technology firms. These hyperscalers and their associated CapEx do not translate into traditional channel opportunities, restricting partner involvement to areas outside large-scale AI data center buildouts.

Supporting data point to a technology industry projected to reach $6.07 trillion in customer spend, growing at 10.2%, compared to significantly lower world GDP growth. However, almost none of the rapid AI-related CapEx from companies like Nvidia and Google flows down to channel partners, who instead rely on client-facing managed services, advisory, and security service work. The increasing complexity of customer demand—such as the shift toward managed security (15% growth) and AI services (35.3% compounded growth)—further pushes MSPs to focus on services surrounding the core product, rather than on direct product resale or thin-margin opportunities.

A significant operational shift within the channel also emerges: the distinction between “influence” and “execution” partners. Vendor programs increasingly recognize partner contributions outside of transactional resale, such as co-selling, advisory contributions, and services attached before or after the point of sale. This trend is reinforced as platforms move toward “point systems” and indirect revenue attribution, redefining how MSPs measure channel health and partner value in a more complex, multi-partner environment.

For MSPs, IT providers, and decision-makers, the key operational implications are clear. Traditional growth through seat expansion is less reliable as hiring softens, and managed services must focus on multiplier opportunities—profitable service revenue attached to each dollar of product sold. Capturing value requires adapting to changing program structures, emphasizing trusted advisor roles, and collaborating effectively with adjacent partners. Near-term investment in understanding and building pre-sales AI and security services, and tracking evolving vendor economics, is essential for navigating the new realities of partner participation, risk allocation, and long-term business health.

Business of Tech
Generative AI Drives Tech Spend Shift as Channel Margins Face Pressure

Business of Tech

Play Episode Listen Later Feb 13, 2026 14:40


Global technology spending is projected to reach $5.6 trillion by 2026, with nearly two-thirds of this investment directed toward software and computer equipment, particularly servers, according to Forrester. Generative AI is cited as a primary driver of this increase, shifting the balance of power toward cloud providers such as AWS and Azure. This escalation has implications for operational margins and the position of IT service providers, as businesses increasingly migrate complex workloads to cloud infrastructure ecosystems.

Supporting data shows a disconnect between tech employment trends and hiring activity. In January 2026, technology companies cut approximately 20,155 jobs, mainly in telecommunications, while job postings for tech positions rose by 13% compared to the prior month, based on CompTIA analysis. Dave Sobel interprets this as a shift away from permanent IT headcount to project-based, AI-focused engagements. This development places pressure on service providers, who must adapt to buyers reallocating spend from traditional staffing models to short-term, outcome-oriented contracts.

Adjacent discussion covered two press releases: VirtuaCare launched a support offering for Windows-based MSPs needing Apple expertise, delivering an externally verifiable, Apple-certified service. In contrast, Miso announced a roadmap for an autonomous AI L1 technician but did not substantiate claims with deliverables or customer data. Dave Sobel emphasized the need for MSPs to demand piloting, outcome metrics, and auditable product maturity, warning against reliance on unproven AI solutions and highlighting the risk of outsourcing as only a temporary solution.

The core implication for MSPs and IT providers is a need for tactical negotiation and operational risk management.
Dave Sobel recommends using AI first to reduce internal labor costs before introducing it as a client offering, prioritizing outcome-based pricing and adjusting contracts to retain value from efficiency gains. Providers should avoid becoming displaced labor, rigorously test new technologies before adoption, and remain vigilant regarding vendor claims. The emphasis remains on capturing and defending margins through accountable operations and contract governance rather than chasing speculative innovation.

Three things to know today:
00:00 Tech Spending Hits $5.6T but MSPs Face Margin Squeeze Without AI Pricing Reset
05:31 VirtuaCare Ships Apple Support; Miso Announces Roadmap—One's Testable Today
08:17 MSPs Must Capture AI Efficiency Value or Face Margin Compression

This is the Business of Tech.
Supported by: Small Biz Thoughts Community
Check out Killing IT

Business of Tech
AI Operational Risk, Sovereign Cloud Mandates, and MSP Compliance Liabilities Examined

Business of Tech

Play Episode Listen Later Feb 12, 2026 14:13


Mid-market organizations are transitioning from pilot projects to operationalizing generative AI and agentic workflows, according to a TechEYE article and Techaisle survey cited by Dave Sobel. This shift centers on outcome-driven automation but exposes providers to new liability concerns, mainly due to fragmented, unreliable data and shadow AI usage—employees employing unauthorized tools outside official controls. The primary risk is that MSPs may be blamed for incidents where contract boundaries and technical controls do not cover browser-based generative AI use, making forensic evidence and documented enforcement essential for defending accountability.

Supporting data from Techaisle found that over 5,000 companies are pursuing structured approaches to AI-enabled growth, but face persistent issues in data trust, governance, and user fatigue. Additionally, European investment in sovereign cloud infrastructure is projected to triple between 2025 and 2027, driven by regulatory demands and concerns about U.S. data sovereignty. MSPs managing split architectures—sovereign providers for regulated data and hyperscalers for everything else—encounter API mismatches, operational complexity, and margin pressure. The recommendation is to standardize policy enforcement, identity management, and residency mapping while prioritizing audit-ready reporting and exception handling.

AI-driven cyberattacks have increased, with reports from LevelBlue and Check Point Research highlighting a surge in both attack volume and sophistication. Only 53% of CISOs feel prepared for AI threats, despite 45% expecting to be impacted within a year. Browser-based generative AI use introduces visibility gaps, raising the risk of negligence claims when service providers cannot demonstrate governance or forensic readiness.
Reauthorization of the Cybersecurity Information Sharing Act (CISA 2015) underscores that voluntary data sharing is inadequate, with CIRCIA now requiring mandatory 72-hour incident reporting for critical infrastructure. The key takeaways for MSPs and IT leaders are to proactively define AI coverage and governance in contracts, enforce acceptable use policies, and instrument monitoring to close visibility gaps. Providers who can deliver forensic-grade telemetry, managed compliance programs, and operational readiness for incident reporting will be better positioned to defend against penalties, retain higher-value accounts, and offer meaningful differentiation. These structural challenges—fragmented control planes, increased compliance costs, and permanent risk friction—necessitate a strategic shift toward governance-led service models.

Three things to know today:
00:00 Midmarket Shifts to Agentic AI as Europe Triples Sovereign Cloud Spending by 2027
06:08 Most Security Chiefs Say They're Not Ready for AI-Powered Cyberattacks Coming This Year
09:46 CISA 2015 Reauthorized Through 2026; CIRCIA Mandates Expose Voluntary Sharing Failure

This is the Business of Tech.
Supported by: TimeZest, IT Service Provider University

Business of Tech
AI Raises Workloads and Burnout: HBR Study, Medical Risk, and New Governance for MSPs

Business of Tech

Play Episode Listen Later Feb 11, 2026 13:33


Artificial intelligence (AI) is intensifying workloads rather than alleviating them, leading to increased burnout and declining decision quality, according to findings published in the Harvard Business Review and cited by Dave Sobel. The episode underscores that AI lowers the cost of producing outputs such as drafts and summaries but raises throughput targets and introduces new verification burdens. Economic gains from AI remain concentrated where capital and skilled labor already exist, while negative impacts—like displacement and wage pressure—are felt locally. These dynamics highlight the need for robust governance, particularly for managed service providers (MSPs) who deploy AI solutions.

Supporting studies referenced include the International AI Safety Report, which details heightened uncertainty around AI development and its risks, as well as research from Oxford documenting the unreliability of AI chatbots in real-world medical decision-making. Experts warn that rapid automation without corresponding improvements in control systems creates structural constraints, making traditional software governance frameworks inadequate for unpredictable AI behaviors. Without proactive measures, these gaps risk exacerbating economic inequality and liability in regulated environments.

Additional developments include OpenAI's release of upgraded agent features—such as GPT-5.2, improved context retention, managed shell containers, and a new skills standard—presented as operational enhancements but raising concerns about black-box context handling, auditability, and dependency risk. T-Mobile's AI-powered live translation service offers greater convenience but eliminates audit trails, shifting compliance risk to customers and prohibiting independent verification.
Cork Cyber's launch of an internal cyber risk score introduces further complexity, as the scoring methodology is embedded within a financial product structure and lacks transparent validation.

For MSPs and IT service leaders, the key takeaway is to treat new AI features and risk metrics as tools with significant tradeoffs. AI deployments should focus on governance layers that include workload caps, quality gates, and measurable outcomes rather than simply accelerating productivity. New features should be used for low-stakes workflows and carefully avoided in high-risk or regulated contexts unless auditable controls and deterministic checkpoints are established. Vendor-managed risk scores and warranties require independent validation before being positioned as client-facing truth standards.

Four things to know today:
00:00 Harvard, Oxford Studies Find AI Raises Workload, Delivers Inadequate Medical Advice
05:01 OpenAI Updates Deep Research and Adds New Agent Runtime Capabilities
07:33 T-Mobile Tests Real-Time Call Translation Built Into Its Network
09:17 Cork Cyber Rolls Out New Risk Score for Managed Service Providers

This is the Business of Tech.
Supported by: ScalePad, Small Biz Thoughts Community

Business of Tech
OpenAI Introduces ChatGPT Ads and Enterprise Agent Platform; Anthropic Releases Opus 4.6

Business of Tech

Play Episode Listen Later Feb 10, 2026 14:52


OpenAI's twin initiatives to monetize ChatGPT's free tier through ads and launch the Frontier enterprise agent platform represent a shift in the AI provider's business model, with substantial implications for compliance and operational governance. Free and low-cost ChatGPT users will now see sponsored links unless they opt to reduce daily usage; only customers paying $20 or more per month retain an ad-free experience. OpenAI is concurrently marketing Frontier to enterprise clients such as HP, Intuit, and Uber, offering AI agent orchestration and deploying a team of consultants to support custom AI applications. The company projects enterprise revenue will constitute 50% of its income by year-end, up from 40% the prior month.

Operating in both the consumer funnel and the enterprise layer, OpenAI combines top-of-funnel data monetization with vertical integration of services. The ad-supported free tier raises compliance concerns, as user interactions become subject to additional data collection and monetization. For organizations, this means enforcement decisions around whether and how employees may use free AI tools in regulated or sensitive environments. The more consequential development, however, is the introduction of enterprise agent orchestration through Frontier, where questions persist regarding liability, governance, production stability, and how organizations are protected from errors committed by autonomous agents.

Related market movements include Anthropic's release of Claude Opus 4.6—which enables multi-agent collaboration with context windows up to 1 million tokens—and Microsoft's planned shift for Windows to a signed-by-default trust model. Anthropic's enhancements to agent functionality remain constrained by key gaps, such as conflict arbitration mechanisms, rollback procedures, and documented cost models, and the expanded context remains limited to beta testers.
Microsoft's strategy to enforce signed apps by default mirrors iOS's approach to application trust, but its operational viability depends on how override mechanisms are managed by both users and IT administrators. Additional developments in backup, asset management, and AI governance (as seen with NinjaOne, JumpCloud, and Zoom) reflect a general trend toward increased integration and platform consolidation, though with ongoing gaps in security and compliance as AI adoption accelerates.

The practical takeaway for MSPs and IT service leaders is the need to re-evaluate policies around free AI tool usage, invest in governance and auditability for enterprise AI, and prepare operational systems for stricter software trust and exception management requirements. Structural changes in software security and AI orchestration are transferring costs and risks from incident response to ongoing policy enforcement and exception handling. Those offering AI services should prioritize model-agnostic governance and avoid reliance on a single vendor's automation layer, as vertical integration by platform providers is reducing the defensibility of narrow service offerings.

Four things to know today:
00:00 OpenAI Adds Ads to Free ChatGPT; Launches Frontier Platform for Enterprise Agents
04:07 Anthropic Ships Opus 4.6 Agent Teams; Model Found 500 Zero-Days in Testing
06:43 Microsoft Announces Signed-App-Only Mode for Windows 11; Phased Rollout Planned
10:19 NinjaOne Adds Asset Management; Zoom Launches AI Workspace Tool; JumpCloud Opens VC Arm

This is the Business of Tech.
Supported by: CometBackup, IT Service Provider University

Business of Tech
IT Spending Rises but Channel Share Falls; AI Arms Race and Shrinking Jobs Impact MSPs

Business of Tech

Play Episode Listen Later Feb 9, 2026 12:56


IT spending continues to expand, with North America projected to lead a 12.6% increase to $2.6 trillion, primarily due to hyperscaler investments in AI infrastructure. However, the proportion of technology spending funneled through channel partners is declining, now at 61% compared to over 70% four years ago, according to a survey by Omdia. This shift signals that while the market is growing, traditional margin and resale opportunities for MSPs are narrowing as vendors route a larger share of revenue direct while still relying on partners for implementation, support, and customer operations.

Data from Salesforce underscores a near-universal trend toward partner involvement in sales, with 94% of surveyed global salespeople leveraging partners to close deals and 90% using tools to manage relationships. Despite this, Dave Sobel clarifies the distinction between involvement and compensation, highlighting that partner influence on deals does not guarantee economic participation at previous levels. These dynamics reinforce that MSPs must adapt to a reality where their role in the value chain is being separated into influence and execution, with the middle tier facing increasing pressure.

Additional analysis draws attention to labor market changes and technology commoditization. U.S. job openings have fallen to their lowest point in over five years, undermining MSP growth strategies dependent on seat expansion. Simultaneously, the AI market is fragmenting at the application layer—with Google's Gemini app, Grok, and OpenAI's ChatGPT shifting market shares rapidly—while hyperscalers like Alphabet (Google) commit unprecedented capital expenditures, fueling an infrastructure arms race even as front-end AI tools become more interchangeable.

The practical implication for MSPs and IT service providers is increased pressure to re-evaluate business models, operationalize AI offerings, and focus on defensible, productized services.
Reliance on a single vendor or seat-based growth forecasts presents heightened risk. Successful adaptation will require a shift toward managed services around AI operations, governance, and productivity—emphasizing accountability, optionality, and measurable ROI—rather than assuming historic revenue models will persist.

Three things to know today:
00:00 Partners Essential to Sales but Losing Economic Share, Survey Shows
05:44 US Job Market Shows Low Hiring, Low Firing Despite Falling Openings
08:00 Alphabet Plans $180B AI Capex as Gemini Hits 750M Users

This is the Business of Tech.
Supported by: Small Biz Thoughts Community

Business of Tech
Why AI Pilots Stall: Data, Complexity, and the Build vs. Buy Debate With Ashwin Mehta

Business of Tech

Play Episode Listen Later Feb 8, 2026 24:06


AI pilot programs are consistently failing to deliver measurable business value, with a primary cause identified as a lack of clearly defined problem statements guiding these initiatives. Ashwin Mehta, an AI strategist with experience leading enterprise transformations, emphasized that many organizations initiate AI pilots without specific objectives, resulting in projects that struggle to demonstrate impact or justify further investment. This lack of focus often leads to stalled initiatives, rather than progress into scalable production environments.

The discussion outlined how mid-market and small businesses typically implement AI by acquiring SaaS tools with embedded AI features, rather than building bespoke solutions. Ashwin Mehta observed that while “build versus buy” considerations have shifted as orchestration and database platforms become more accessible, custom development still brings additional risk, skill requirements, and long-term maintenance burden. Even as technical barriers decrease, organizations are cautioned to weigh lifecycle costs and operational support needs before pursuing custom builds.

Data management was highlighted as a recurrent challenge, both from an organizational readiness perspective and regarding regulatory risk. Ashwin Mehta underscored the importance of establishing a single source of truth for business-critical data and classifying information by its regulatory sensitivity. Without such data discipline, adoption of AI tools—especially in regulated sectors—becomes a source of uncertainty, with organizations defaulting to restrictive or prohibitive AI policies due to inadequate risk visibility.

For MSPs and technology leaders, the operational implications are clear: pilots without rigorous scoping and problem definition are unlikely to progress, and sustainable AI adoption requires purposeful data governance and clear frameworks for project prioritization.
With the complexity of AI implementations extending beyond technical issues to include cost volatility, compliance, change management, and skills gaps, providers must approach each initiative with a structured, risk-aware mindset and ensure ongoing oversight as both technology and regulatory landscapes evolve.

Sponsored by: ScalePad

Business of Tech
OpenAI Equity Move in MSPs, AI Adoption Challenges, and Tier 1 Job Impact—Interview with Seth Robinson

Business of Tech

Play Episode Listen Later Feb 7, 2026 34:41


OpenAI's direct investment and technical involvement with Thrive Holdings, specifically through its partnership with SHIELD Technology Partners, presents a new precedent for AI's integration into the managed service provider (MSP) space. Unlike prior private equity roll-ups or traditional organic growth, this move involves embedding OpenAI's models and engineers directly within SHIELD's platform, an entity that has rapidly acquired and integrated nine MSPs and executed two $100 million funding rounds. The arrangement is characterized by efforts to optimize MSP operations through proprietary AI automation, raising immediate questions around operational dependency and the shifting locus of software control.

According to Seth Robinson, this approach signals OpenAI's attempt to navigate both consumer and enterprise technology markets—a dynamic seen previously in mobility—and reflects the broader tension between individual AI use cases and deeply integrated stack solutions. The initiative may accelerate operational scale, but it also introduces new operational risks by centralizing key components of service delivery and support within a single AI-driven platform, potentially affecting vendor lock-in, data governance, and continuity of MSP business models.

Parallel developments highlight new vendor integration strategies among MSP-focused software providers. One example is Lexfold's AI documentation system, which, rather than integrating directly with core PSA and RMM tools, utilizes intermediary platforms such as ScalePad and Liongard for data access.
Seth Robinson emphasizes that these alternative integration points may alter an MSP's center of operational gravity and complexity management, underscoring the need to assess not just functional outcomes but also system dependencies and brittleness introduced by new integration paths.

For MSPs and IT leaders, these trends underscore the necessity of rigorous due diligence in vendor relationships, clarity on operational dependencies, and attention to the long-term implications of AI-enabled automation. Management—not elimination—of complexity remains central, with the risk of oversimplification leading to commoditization and loss of differentiation. Moreover, advances in AI should prompt greater scrutiny about talent pipelines, upskilling strategies, and the potential risks of eroding early-career roles, which may impact long-term service quality and resilience. Careful evaluation of integration points, data integrity, and operational control is recommended to mitigate the practical and organizational risks emerging from these developments.

Business of Tech
AI Fails to Deliver ROI for CEOs While Bot Traffic Surges and CISA Targets End-of-Life Devices

Business of Tech

Play Episode Listen Later Feb 6, 2026 14:37


A PwC survey of over 4,400 CEOs across 105 countries found that 56% report artificial intelligence has not delivered meaningful revenue growth or cost savings in the past year. Only one in eight organizations saw both benefits. The core issue, as highlighted by Dave Sobel, lies in poor integration—largely due to data quality challenges and legacy systems—leaving many businesses stuck in what PwC terms “experimentation purgatory.” Despite significant investment, AI infrastructure is often failing to produce measurable returns.

This lack of operational discipline is mirrored by the rising incidence of AI bots, which now account for 1 out of every 50 website visits, a sixfold increase from earlier reports. AI is successfully extracting value from enterprise infrastructure through sophisticated scraping, as companies pay for tools that return little and simultaneously fund infrastructure serving AI bots. The operational cost and exposure from bot traffic and ineffective AI tool adoption highlight the disconnect between hype and practical benefit.

Adjacent stories expand on the governance gap and evolving expectations around risk. The U.S. and China declined to sign a non-binding declaration on military AI, underlining global regulatory fragmentation. In contrast, the Cybersecurity and Infrastructure Security Agency (CISA) issued a binding directive for federal civilian agencies to remove unsupported devices within a year, signaling substantial operational risk from end-of-life technology. These regulatory movements are expected to drive similar risk accountability into the private sector, primarily through insurance requirements.

For MSPs and IT service providers, the takeaway is not to chase AI-powered offerings but to prioritize readiness, control, and cost accountability. Vendor partner programs (Cisco and 1Password) reward lifecycle management and customer retention, not AI sales.
The practical competitive advantage is operational honesty—delivering realistic assessments, proactive client interactions, and transparent guidance. Automation should fund genuine client relationship activities, not replace them. The focus should remain on safeguarding operational integrity, controlling technology risk, and building customer success capability.

Four things to know today:
00:00 PwC Survey Finds Most Business Leaders Still Waiting for AI Payoff
05:00 Federal Agencies Ordered to Eliminate End-of-Life Devices Over Cyber Threats
08:06 Cisco and 1Password Launch Partner Programs Focused on Customer Success
10:52 Harvard Business Review Says Human Touch Remains Critical Advantage Over AI

This is the Business of Tech.
Supported by: Small Biz Thoughts Community

Business of Tech
OpenAI Enters Ads and Consulting; AI Deployment Shifts Liability and Costs for MSPs

Business of Tech

Play Episode Listen Later Feb 5, 2026 14:13


The primary development centers on the shift toward smaller, task-specific AI models within enterprises and how this shift is primarily about transferring liability from AI vendors to operators. Dave Sobel notes that while narrower AI models are being marketed as safer and easier to govern, the reality is that they shift the burden of control, oversight, and risk directly onto the organizations deploying them. Hidden costs—particularly those related to data infrastructure, compliance, and ongoing governance—are substantial, often eclipsing the initial AI investment.

Supporting data includes findings from a Salesforce survey indicating that CIOs allocate a median of 20% of their budgets to data and infrastructure management versus 5% to AI itself. Dave Sobel stresses that the real cost of an AI project can be significantly higher than client expectations, pointing out a 4:1 spending ratio between supporting infrastructure and the AI technology. This underscores the risk for MSPs who may fail to price in the operational and governance requirements appropriately, exposing themselves to financial and compliance liabilities.

Adjacent stories address OpenAI's strategic expansion into advertising and direct consulting, marking a move from pure technology platform to direct competitor for services revenue. OpenAI is creating an Ads Integrity Team to manage advertiser verification and reduce scam risk but acknowledges the challenges of maintaining effective controls at scale. In parallel, OpenAI is embedding engineers within client operations—mirroring other internal AI initiatives such as those at Shield and Integris—and reinforcing a market divide. MSPs who build such capabilities internally capture margin, while others face lasting margin compression as purchasers of external solutions.

The implications for MSPs and IT leaders are direct. Success depends less on which AI model is selected and more on the provider's ability to establish rigorous governance, liability management, and ongoing operational control. The market is bifurcating: service providers who can build in-house AI platforms or attract strategic investment will retain efficiency as margin, while those relegated to purchasing third-party tools risk further erosion of profitability and competitive position. The decision to build or buy is becoming a business model risk, not just a procurement choice, and the opportunity to address it is narrowing.

Three things to know today:
00:00 Firms Shift to Task-Specific AI Models Amid Governance, Liability Concerns
04:35 OpenAI Launches Ads Integrity Team, Hires Hundreds as Services Push Begins
08:34 MSP Market Splits as Integris, Shield Build Internal AI, Others Buy Tools

This is the Business of Tech.

Supported by: IT Service Provider University

Business of Tech
CISA Ransomware Intelligence Lag, Azure TLS Cutoff, and Risks from AI Skills Marketplaces

Business of Tech

Play Episode Listen Later Feb 4, 2026 14:52


The episode focuses on current security risks and limitations in industry intelligence, highlighting that CISA's Known Exploited Vulnerabilities (KEV) catalog often lags by years in tagging vulnerabilities exploited by ransomware. One cited vulnerability sat in the catalog for 1,353 days before being flagged as ransomware-exploited, illustrating a significant delay in actionable intelligence. This gap raises concerns for MSPs whose patching priorities rely on outdated catalogs, potentially leading to a misalignment between compliance activities and actual threat vectors.

Supporting this, Dave Sobel underscores how evolving threat models frequently bypass traditional vulnerability management. The recent compromise of OpenClaw's skills marketplace, with a 12% malicious rate in submitted skills and basic post-facto reporting mechanisms, demonstrates that credential theft and malicious automation now present risks outside standard patch management. The core operational challenge for MSPs is not just software vulnerability but the governance of AI-enabled tools and uncontrolled marketplaces that can expose clients to breaches.

Further contextualizing risk and automation, vendor launches include Lexful's AI-native documentation for MSPs and Cavelo Flash's agentless assessment tool. These offerings promise streamlined documentation and rapid risk assessment, but Dave Sobel notes their reliance on beta features, integration dependencies, and non-definitive compliance positions. Additionally, DocuSign's release of AI-generated contract summaries raises questions about liability, as inaccurate summaries can mislead signers, and responsibility defaults to the end user rather than the vendor.

The primary implication for MSPs and technology leaders is the need to inventory all AI-powered tools with access to client environments, actively govern marketplace adoption, and critically evaluate automation claims. Compliance-focused patching is no longer sufficient; operational oversight must prioritize credential management and identity governance over checklist-based approaches. Caution is advised before rapid migration to beta solutions or locking into long-term contracts, as both reduce flexibility and increase exposure to emerging, non-traditional attack surfaces.

Three things to know today
00:00 CISA's Ransomware Tags Arrive Years Late While AI Tools Steal Credentials Now
05:53 IT Glue Founder Launches AI Documentation Platform Lexful for MSPs at Right of Boom
09:52 Cavelo and DocuSign Launch AI Tools That Automate Assessments and Contract Reviews

This is the Business of Tech.

Supported by: Small Biz Thoughts Community

Business of Tech
AI Adoption Outpaces Trust, Microsoft Sets NTLM Deadline, Right to Repair Expands

Business of Tech

Play Episode Listen Later Feb 3, 2026 14:39


The episode centers on the expanding adoption of artificial intelligence (AI) tools among workers alongside a notable decline in confidence. According to a Manpower Group study cited by Dave Sobel, AI confidence among workers decreased by 18% even as usage increased by 13% over the past year. This divergence highlights a governance and operational gap for MSPs, as enterprise clients confront both the potential and the risks of AI-enabled solutions, facing unresolved issues of output reliability, oversight, and liability when missteps occur.

Supporting this trend, findings from the Stanford University Institute for Human-Centered Artificial Intelligence indicate that nearly 30% of AI chatbot users encountered harmful suggestions. While these statistics lack detailed breakdowns – such as which platforms or definitions of “harmful” – they shape widespread client perceptions and intensify scrutiny of AI guidance provided by IT service providers. Meanwhile, enterprise vendors like Zendesk report improved satisfaction rates from automated resolutions but emphasize the costly need to overhaul workflows and data management to effectively harness AI benefits.

Additional focus is given to Microsoft's scheduled deprecation of the NTLM authentication protocol, replaced by newer mechanisms that are not yet fully deployed or reliable. Dave Sobel notes that legacy systems depending on NTLM present tangible operational and legal risks for MSPs, as clients may face authentication failures or re-enable insecure protocols unless thoroughly audited. Elsewhere, the "right to repair" movement is gaining ground as the Environmental Protection Agency affirms farmers' rights to repair their own equipment, with broader implications for IT hardware access and vendor-dependent service models.

The confluence of these developments underscores the importance for MSPs and IT leaders to shift focus from product access and resale toward risk governance, lifecycle planning, and documenting client decisions—especially in AI, authentication methodologies, and hardware maintenance. Mitigating liability, clarifying accountability with clients, and tracking evolving vendor and regulatory actions are essential to maintain relevance and safeguard operations as service and product access models change.

Four things to know today
00:00 Workers Use More AI But Trust It Less, Creating New Service Risks
03:44 Microsoft Plans NTLM Phase-Out Despite Unfinished Kerberos Replacement Technology
06:32 Google, Adobe Launch AI Subscriptions While OpenAI Retires GPT-4o Next Month
10:52 EPA Ruling Lets Farmers Repair Equipment, Pressures Tech Right-to-Repair Laws

This is the Business of Tech.

Supported by:

Business of Tech
Small Business Optimism, Trillion-Dollar IT Services Projections, and Unmanaged AI Agent Risks

Business of Tech

Play Episode Listen Later Feb 2, 2026 16:04


The episode centers on the structural shift in managed services driven by the adoption of autonomous AI agents and the resulting accountability challenges for IT service providers. According to Dave Sobel, 22% of employees in Token Security's surveyed organizations are independently running AI agents such as OpenClaw with terminal and browser command capabilities, without formal IT oversight. This widespread shadow automation creates significant operational and security exposure, indicating unsanctioned user demand for advanced automation that IT has not provided. The core risk is not simply unauthorized technology use, but ineffective governance and lack of visibility into automation processes that can impact both client safety and provider liability.

Context provided throughout the episode points to a disconnect between optimistic business sentiment and actionable IT spending. While the NFIB index reflects rising small business optimism and increased capital access, most technology-related investments appear to have already been made in prior periods. Only 19% of small businesses plan further equipment investments, suggesting limited near-term demand. Meanwhile, SBA workforce reductions signal longer loan processing times, affecting clients who depend on SBA-backed funding for technology projects—a concrete operational delay for MSPs whose services are linked to client capital expenditure timelines.

Additional discussion focuses on evolving industry economics, notably a projected increase in the North American IT services market to $1.09 trillion by 2033, as reported by Research and Markets. However, Dave Sobel emphasizes that the majority of this growth is captured by hyperscalers and large integrators, not regional MSPs. Cooling wage inflation, detailed by Service Leadership, may present temporary margin opportunities but also introduces risk if MSPs respond with indiscriminate hiring rather than automation or upskilling strategies. The Shield Technology Partners investment, involving OpenAI's embedded research in IT operations, signals rapid automation of rules-based workflows and reiterates the urgency of addressing task displacement and margin compression.

For MSPs and IT service leaders, the practical takeaway is clear: unmanaged, employee-driven AI automation presents both risk exposure and a mapping of unmet service demand. Blocking shadow agents is a reactive measure—long-term resilience depends on developing agent governance frameworks, including permissioning, audit, and incident response protocols. With shrinking margins and increasing automation, providers must reevaluate operational models, prioritize revenue-per-employee, and focus on delivering accountable, sanctioned automation services rather than competing on basic labor cost or commodity support.

Four things to know today
00:00 NFIB Index Hits 99.5 as 64% Face Inflation and SBA Cuts Half Its Workforce
04:44 IT Services Market Growth to $1.09T Coincides With Declining Wage Inflation
08:01 Shield Secures Second $100M From OpenAI-Backed Thrive Holdings for AI Operations Platform
11:21 Token Security Reports 22% Shadow IT Adoption of OpenClaw

This is the Business of Tech.

Supported by: MSP Radio - Internal Ad

Business of Tech
Mike Riggs Joins Empath: Moving from Founder-Led Vision to Formal Product Governance

Business of Tech

Play Episode Listen Later Feb 2, 2026 12:19


The appointment of Mike Riggs as Chief Product Officer at Empath signifies the company's transition from founder-led intuition to formalized product governance. According to Wes Spencer, Empath reached over 500 MSP customers and now requires more disciplined processes as it moves from early-stage, high-velocity development to operational maturity. Mike Riggs described his role as systematizing elements that were previously managed informally—covering areas from design to engineering—and explicitly stated the intent to strengthen operational accountability for both the platform and its customers.

This structural change follows recognition by the founders that their limited technical background required complementary leadership to scale effectively. Advisors highlighted that, while growth and partner engagement met expectations, scaling Empath's platform now demands greater rigor and repeatable operational practices. Empath's platform has evolved from being a convenience service to an operational dependency, with MSPs using it for training, team accountability, and embedded workflows. Mike Riggs emphasized the importance of refining user experience, onboarding processes, and support mechanisms as MSP reliance grows.

A central theme discussed is the shift in Empath's product category—from a basic learning management tool toward a broader learning, development, and accountability platform for MSPs. Features such as notification systems and visibility into required actions move the platform beyond content delivery into proactive management of personnel performance and compliance. This evolution brings Empath closer to intersecting with HR, policy, and managerial oversight, compelling the company to balance user engagement features with the need for reliable, auditable, and controlled change management.

For MSPs and IT service providers, Empath's shift has operational implications and risk factors. Increasing dependency on a single platform heightens the significance of product stability, disciplined rollout of new features, and clarity of governance. As platforms like Empath become more embedded in day-to-day operations, service providers must reassess processes for vendor risk management, accountability, and internal policy alignment. The move described is not an indicator of problems but of maturation—a transition that typically introduces both new safeguards and greater operational complexity.

Business of Tech
Navigating AI Adoption and Governance for Small Businesses: Interview with David Espindola

Business of Tech

Play Episode Listen Later Feb 1, 2026 20:50


The episode centers on practical approaches for Managed Service Providers (MSPs) and IT leaders assessing artificial intelligence (AI) adoption, with David Espindola detailing the crucial distinction between “maker,” “shaper,” and “taker” strategies. David Espindola emphasizes that organizations must intentionally decide their role in AI development and use—whether building proprietary systems, shaping solutions atop existing models, or simply consuming pre-built capabilities. This decision, he notes, is foundational for aligning risk tolerance, investment, and technical capacity with business goals, especially given the rapid pace and inherent uncertainty in AI's evolution.

Supporting this framework, David Espindola references insights from a Small Business Administration project, which found that most small businesses are struggling to define applicable use cases for AI and tend toward risk-avoidant stances despite external pressures to adopt the technology. He stresses that AI implementation should not be a solution in search of a problem; rather, an organization's readiness, risk, investment capability, and specific industry context must determine its approach. Key recommendations include conducting readiness assessments, appointing internal AI champions, and starting with small, low-risk pilot projects to build internal understanding and governance processes before scaling.

The discussion broadens to ethical and governance considerations, with both David Espindola and the host cautioning that responsible AI adoption is a business necessity rather than a compliance checkbox. They advocate for formal employee training, the establishment of clear usage policies, and strict controls over tool access to mitigate risks such as data leakage, hallucinated outputs, and misaligned communications. The emphasis is on building practical safeguards rather than pursuing AI for its own sake, reflecting a pragmatic, risk-managed approach tailored to each organization's context.

For MSPs and IT service providers, the practical takeaways are clear: pursuing AI adoption requires a methodical, risk-aware strategy focused on business relevance, operational governance, and targeted experimentation. The harms of rushed deployments, poor change management, or lack of internal education are underscored, with the implication that long-term value and reduced exposure are found in deliberate, well-governed adoption efforts. Readiness assessments, pilot programs, and robust policy frameworks emerge as the primary enablers of sustainable outcomes in this rapidly evolving landscape.

Business of Tech
MSP Rollups, AI Investment, and Industry Consolidation Trends With Rich Freeman and Jessica Davis

Business of Tech

Play Episode Listen Later Jan 31, 2026 48:45


The current wave of managed service provider (MSP) consolidation and rollups is being distinguished by the integration of advanced artificial intelligence (AI) expertise, particularly among entities such as SHIELD and Titan. As discussed by Rich Freeman and Jessica Davis, these newer rollups are acquiring not just MSPs but also Silicon Valley AI talent and developing proprietary AI-driven services, a marked shift from earlier private equity-backed consolidators. Rich Freeman highlighted SHIELD's recent leadership hires from Palantir and direct collaboration agreements with OpenAI, signaling an intent to embed AI at the operational core rather than simply as a tool for optimization.

The structure of and access to data are central to these developments. As Rich Freeman elaborated, large rollups possess a scale-driven “AI flywheel” advantage: broader customer bases provide larger datasets, which in turn drive better AI performance, operational efficiency, and profitability. This concentration creates risks for smaller MSPs that lack equivalent data pools and resources for internal AI development. Jessica Davis noted that while tool vendors and platform companies such as ConnectWise and Kaseya are enhancing AI within their offerings, their efforts are not yet matching the focused investments of the largest rollups, and are simultaneously being pressured to accelerate innovation.

Commercial and operational pressures are increasing throughout the MSP ecosystem. Jessica Davis cited indications of slowing managed services revenue growth projections (potentially below 10%), alongside potential cost-cutting or workforce reductions within large rollups as private equity owners seek AI-driven returns. Divergent rollup models are also emerging—with distinctions between platform centralization (e.g., retiring acquired brands) and decentralized, founder-friendly approaches (e.g., preserving local brands and founder involvement). Decisions around acquisition, platform engagement, and specialization are increasingly nuanced as founders and owners evaluate their options under new market dynamics.

For MSPs and IT service leaders, these trends necessitate a measured response. The competitive risk posed by the AI-fueled scale of consolidated rollups underscores the importance of specialization, operational focus, and alignment with platform partners committed to democratizing AI resources. Community collaboration, best-practice sharing, and strategic use of vendor tools are positioned as potential mitigants to the structural disadvantages faced by smaller organizations. Governance, due diligence, and clear assessment of vendor or acquirer incentives should be prioritized, especially as service models and influencer dynamics continue to fragment. Remaining adaptable, resource-aware, and critically informed about the changing power landscape will be vital for sustainable operations.

Business of Tech
Moltbot's Security Flaws, Apple's Supply Challenges, and Windows 11 Trust Issues Analyzed

Business of Tech

Play Episode Listen Later Jan 30, 2026 11:34


The emergence of Moltbot, an open source AI agent designed to operate across various messaging platforms and automate tasks through local device execution, is creating new risk vectors for MSPs and IT providers. Functioning with admin-level access and connecting to services like OpenAI and Google, Moltbot's deployment has raised direct concerns around authority delegation without sufficient governance. Security researchers identified hundreds of exposed Moltbot instances, often due to misconfiguration, increasing the possibility of breaches and unauthorized data access. The episode underscores that these agents, treated as productivity tools, actually represent operational infrastructure capable of independent action, with potential impacts on client trust and regulatory liability.

Expert sources cited in the discussion, including Cisco and Hudson Rock, have labeled Moltbot a security risk due to its storage of sensitive information in plain text and broad access permissions. The narrative warns that vendors and providers may underestimate the risks by normalizing deployment before establishing proper controls. Once these agents are embedded into workflows, reversing their use becomes difficult due to client reliance on perceived efficiency. The lack of mature governance frameworks, as shown by studies from Drexel University, means that many organizations lack even basic oversight of these autonomous agents.

Adjacent industry developments highlight additional layers of operational complexity. Apple posted a 16% revenue increase, led by iPhone demand, and acquired Q AI to deepen its ambient automation capabilities, while shifting defaults that providers cannot easily influence or control. Simultaneously, the Linux community's succession planning and Microsoft's ongoing struggles with Windows 11 reliability further demonstrate systemic issues around authority, trust, and transparency in technology ecosystems.

The episode's analysis signals clear expectations for MSPs and technology leaders: explicit approval protocols for AI agents are necessary, akin to traditional admin controls. Providers must proactively define governance boundaries, anticipate non-billable labor resulting from automation failures, and assess vendor behavior in terms of roadmap rigidity and escalation pathways. Teaching clients about authority in automated environments, not just managing installations, will reduce exposure and clarify accountability as agentic technologies become standard.

Three things to know today
00:00 Moltbot's Rise Highlights How AI Agents Are Becoming High-Risk Operators Without Governance
03:49 Record iPhone Sales and a $2 Billion AI Acquisition Signal Apple's Long-Term Control Strategy
06:04 Leadership Succession, Software Trust, and AI Agents Reveal a Shared Governance Problem

This is the Business of Tech.

Supported by: ScalePad

Business of Tech
France Moves to Digital Sovereignty, South Korea's AI Law Challenges, and Microsoft Earnings Signal AI Dependence

Business of Tech

Play Episode Listen Later Jan 29, 2026 16:02


France's decision to discontinue American collaboration platforms such as Zoom and Microsoft Teams for government use—replacing them with the domestically developed Vizio platform—signals a shift toward digital sovereignty and data control within regulated jurisdictions. This move, formalized as part of France's Suite Numerique and to be implemented by 2027, highlights the increasing fragmentation of technology policy where national governments assert authority over platform selection and sensitive data handling. The development underscores operational risk for MSPs and IT service providers as assumptions of technology homogeneity across regions become unreliable.

Supporting these shifts, South Korea enacted the world's first comprehensive AI legislation, requiring mandatory labeling of AI-generated content and risk assessments for high-impact systems, such as those in hiring and healthcare. According to the transcript, 98% of AI startups in South Korea report they are not prepared for compliance. Both developments reveal a pattern: early regulatory efforts tend to produce vague requirements, unclear enforcement, and real operational complexity. Providers operating in multiple jurisdictions must now anticipate compliance fragmentation and increased overhead as regulatory regimes diverge.

Additional analysis focused on the continued evolution of the managed services stack, particularly through the lens of AI and workflow automation. Companies like Thrive are investing in enterprise platforms that embed AI-driven reasoning within workflow tools, shifting coordination away from traditional PSA ticketing systems. Meanwhile, integrations such as Quark Cyber with ScalePad's Lifecycle Manager X, and new partnerships between ServiceNow, TeamViewer, Anthropic, and OpenAI, illustrate a market splitting between providers focused on standardization and those managing more complex, enterprise-like environments. Microsoft's financial results further highlighted this trend, with record capital expenditure on AI infrastructure and increased reliance on proprietary chips to reduce dependency on external vendors like Nvidia and OpenAI.

For MSPs, these developments raise practical governance and accountability questions. Shifts in regulatory authority and technology platforms create increased risk exposure for providers that do not proactively manage cross-jurisdictional compliance and secure defaults. Vendors are tightening control over platforms as AI becomes central to product architecture, often prioritizing internal risk management over shared upside with partners. Providers that fail to enforce robust data governance, understand cost drift, or plan for architectural lock-in are positioned less as strategic advisors and more as absorbers of client and vendor risk.

Four things to know today
00:00 France's Platform Ban and South Korea's AI Law Show Regulation Catching Up to Technology
04:23 AI Is Reshaping the MSP Tool Stack as Thrive, ServiceNow, and ScalePad Take Different Paths
07:37 Microsoft's SMTP AUTH Delay and CISA's AI Slip Show the Risk of Optional Security Controls
AND
10:26 Earnings Show Microsoft Turning AI From Feature to Infrastructure as Partner Risk Grows

Sponsored by: TimeZest

Business of Tech
Channel Spending Tops $4 Trillion as MSPs Face Integration and AI Accountability Risks

Business of Tech

Play Episode Listen Later Jan 28, 2026 13:35


Global channel sales in IT are projected to exceed $4 trillion this year, with two-thirds of total spending driven by partner-led deals, according to Omdia research. However, managed service providers (MSPs) continue to encounter significant integration failures following mergers and acquisitions, leading to operational inefficiencies and diminished client trust. The Business of Tech analysis highlights that stacking acquisitions without comprehensive integration amplifies risks, particularly affecting margins, service consistency, and accountability.

Supporting survey data from POPX indicates that 60% of UK MSPs report platform and data integration as critical hurdles post-acquisition, while 44% identify poor morale and lack of team alignment as sources of inefficiency. Notably, 38% experienced client disruption during transitional periods, signaling that rapid growth without sufficient operational coherence creates drag rather than leverage. These issues are compounded by rising technology budgets—nearly 75% of organizations expect increased IT spending—and intensifying reliance on AI and cloud services in MSP environments.

Additional stories addressed include the widespread adoption of unsanctioned "Shadow AI" tools in healthcare settings, with over 40% of workers aware of unapproved usage, and the increasing tendency for AI platforms to reference general sources like YouTube over traditional medical authorities. The episode further examines new AI-driven arbitration tools, platform consolidations within managed security, and the centralization of authority across purchasing and service delivery ecosystems. Vendor integrations, such as Synchro's marketplace partnership with Ironscales and LevelBlue's acquisition of AlertLogic's unit, illustrate a shift away from component choices towards streamlined, but potentially opaque, accountability structures.

For MSPs and IT service leaders, the central takeaway is not the urgency to adopt new tools, but the necessity to clarify ownership, governance, and liability as technology platforms accelerate efficiency and centralize control. Failure to address integration fundamentals, define formal oversight for AI-driven decisions, and maintain transparency amid automation will expose service providers to unpriced risks and erode client trust. Sustained growth is contingent upon operational discipline, not just expanding portfolios.

Four things to know today
00:00 Channel Growth Accelerates While MSP Integration Failures Threaten Margins and Trust
03:58 New Research Shows Agentic AI Adoption Outpacing Governance and Workforce Readiness
07:25 AI Interfaces, Security Consolidation, and MSP Marketplaces Point to a Shift in Where Authority Lives
10:27 AAA's AI Arbitrator Shows How Automation Changes Who Owns Decisions, Not Just How Fast They're Made

This is the Business of Tech.

Supported by:

Business of Tech
AI Adoption Stalls Among Workers While Leadership Advances and Organizational Risk Grows

Business of Tech

Play Episode Listen Later Jan 27, 2026 13:13


AI adoption within organizations is increasingly polarized, with Gallup data cited showing that while 77% of technology professionals use AI at work, overall workplace adoption rose only marginally from 45% to 46% in late 2025. This stagnation is attributed not to employee reluctance, but to aggressive uptake by leadership without corresponding redesign of roles and workflows at lower organizational levels. In the UK, research presented notes an 8% net job loss tied to AI alongside an 11.5% productivity increase, with younger workers expressing heightened concern over future employment security.

Supporting analysis emphasizes that AI utilized only in decision-making circles can compress organizations, trading resilience for short-term efficiency. Dave Sobel cautions that celebrating productivity gains without acknowledging operational fragility introduces organizational brittleness, as headcount reductions outpace tangible capability improvements across all layers. The discussion underscores the risk in pitching AI as a leadership tool without regard for its broader impact.

Additional topics include the risks of encryption practices—specifically Microsoft's BitLocker—and the limits of user control over recovery keys when stored in the cloud. Dave Sobel highlights governance failures when MSPs assume encryption equates to privacy without explicit decisions regarding key custody and authority, noting that silent trade-offs can expose organizations to privacy vulnerabilities. Furthermore, coverage of CISA's absence from the RSA conference outlines how diminished federal engagement increases liability and ambiguity for MSPs tasked with interpreting security policy. New video authentication features from Ring are examined as evidence of a broader shift where provenance and chain of custody outweigh convenience, directly affecting the evidentiary value of managed data.

The overarching implication for MSPs and IT providers is clear: risk, authority, and liability are being systematically reallocated within the supply chain and between vendors, government, and service providers. Operational preparedness now depends on explicit documentation, governance choices, and advance recognition of liability transfer. Failing to adapt—by leaving deployment decisions, key management, and evidentiary workflows unexamined—may result in organizational fragility, legal exposure, and loss of client trust.

Four things to know today
00:00 Stalled AI Adoption and UK Job Losses Show Productivity Gains Are Not Broadly Shared
04:06 BitLocker Encryption Allows Microsoft Access to Recovery Keys Stored in the Cloud
06:21 CISA Breaks From Past Practice, Declines RSA Conference Appearance
08:36 Ring Uses Cryptographic Seals to Verify Video Authenticity as Evidence Trust Becomes a Governance Issue

This is the Business of Tech.

Supported by: https://scalepad.com/dave/

Business of Tech
Global Managed Services Slowdown and Distributor Growth Highlight Shifting IT Service Models

Business of Tech

Play Episode Listen Later Jan 26, 2026 11:46


Global managed services contracts are experiencing reduced momentum as buyers display notable hesitation to commit to long-term agreements during a period defined by organizational pivots toward artificial intelligence. The Information Services Group reported only a 1.2% quarter-over-quarter increase in large managed services contracts in late 2025, totaling $10.9 billion, with full-year growth barely above 1%. While U.S. activity partially offsets contractions in EMEA and APAC, the prevailing environment is one of caution, shaped less by CIOs and more by business and finance leaders redirecting budgets to support internal AI initiatives and flexible operating arrangements.

The growth in technology distributor activity in North America highlights increased market fragmentation rather than expanded service levels. Omdia Tech Services data indicates distributor billings grew almost 15% in 2024, reaching $16.6 billion, with over 72% of transactions concentrated among six distributors. Most billings originated with technology advisors, while value-added resellers and MSPs contributed smaller shares. This shift points to a market emphasizing flexible sourcing, with more intermediaries and shorter deals, but raises questions about MSP control, as authority and accountability can become diluted.

Intel's latest financial disclosures reveal persistent supply and execution challenges in delivering AI infrastructure solutions. Despite exceeding earnings expectations, weak revenue forecasts and an admission of supply constraints resulted in a 13% decrease in the company's stock. The vendor attributed its underperformance to capacity shortages and forecasting issues, underscoring the risks MSPs now face in hardware planning for AI deployments.
Additionally, the commoditization of key offerings such as Microsoft 365 backup and the automation of technology review processes further compress execution margins, reducing traditional revenue sources for service providers.

For MSPs and IT leaders, these developments reinforce the need to reassess risk allocation, authority, and pricing models in client engagements. With execution becoming both cheaper and less differentiated, value must shift toward governance, outcome accountability, and explicit decision ownership. Delays or misjudgments related to hardware supply and service fulfillment present direct threats to project continuity and client satisfaction, emphasizing the importance of operational flexibility, active vendor management, and strategic repositioning of service offerings.

Three things to know today
00:00 As Managed Services Stall Globally, Distributor-Led IT Buying Gains Momentum
04:58 Intel Beats on Earnings but Misses on Confidence as AI Demand Outpaces Capacity
07:27 As Backup and Reviews Are Automated, MSP Differentiation Shifts from Execution to Decision Ownership

This is the Business of Tech.

Supported by: https://scalepad.com/dave/

Business of Tech
AI for MSPs: Workflow Automation & Security

Business of Tech

Play Episode Listen Later Jan 25, 2026 21:22


This Business of Tech episode delves into the critical alignment of technology with how people work, emphasizing the strategic advantage for businesses, particularly those leveraging Apple ecosystems and remote teams. Rob Calvert, President of Second Son Consulting, highlights common misconceptions in IT, where decisions are often made in a vacuum without considering company culture or workflows. This disconnect leads to daily friction and hinders growth. Calvert shares an example of implementing zero-touch MDM, where the technological aspect is straightforward, but the real challenge lies in adapting workflows and company culture to accommodate remote hiring and device deployment timelines, ultimately enabling faster growth with less operational friction.

The discussion underscores the importance of integrating IT decisions with broader business objectives. Calvert explains that for small to mid-sized businesses, understanding and defining existing workflows is a crucial first step. His firm's process involves auditing technology platforms, establishing role-based standards for new hires, and documenting procedures for onboarding and offboarding. This systematic approach, exemplified by streamlining onboarding from hours to minutes, ensures that technology serves as an asset rather than an obstacle, optimizing efficiency and security.

Further insights are provided on security and compliance within Apple-centric environments, contrasting them with Microsoft-centric approaches. Key differences include procurement styles, the utilization of Apple Business Manager, and the implementation of non-removable MDM for enhanced security and control.
The episode also touches on the growing impact of AI, with a focus on enabling local, on-device AI to address privacy concerns and accelerate business processes like proposal writing and research, while emphasizing the need for leadership to guide AI adoption and manage associated security implications.

For MSPs and IT service leaders, the episode offers actionable strategies for improving client IT infrastructure. It stresses the value of aligning technology with specific business workflows and company culture to reduce friction and boost productivity. The discussion on Apple-centric IT and AI adoption provides practical guidance on managing devices, implementing robust security measures, and leveraging new technologies responsibly. The emphasis on creating standardized, documented processes for onboarding and offboarding, while remaining flexible to client needs and potential risks, is a key takeaway for enhancing service delivery and client satisfaction.

Business of Tech
MSP Mergers and Acquisitions: Private Equity, AI's Role, and Owner Decisions With Abraham Garver

Business of Tech

Play Episode Listen Later Jan 24, 2026 39:44


The episode centers on structural changes in the Managed Service Provider (MSP) mergers and acquisitions (M&A) landscape, with a focus on the increased influence of private equity (PE), platform strategies, and disciplined deal execution. Dave Sobel and Abraham Garver highlight that the primary driver for buyers has shifted from merely acquiring revenue to seeking operating models that support scale, standardization, and automation. The size of institutional funds directly shapes acquisition targets: funds with $500 million or more increasingly pursue MSPs with minimum EBITDA thresholds, commonly $3–5 million, while larger funds can only transact at the $10–15 million EBITDA level or above. This signals a market separation in which smaller MSPs face heightened risk of being excluded from future platform opportunities.

Supporting these structural shifts, Abraham Garver explains that buyers' value assessments increasingly prioritize new customer acquisition over one-off gains from cross-sales such as cybersecurity add-ons. Organic growth, shown through the consistent addition of new client logos, outweighs temporary revenue boosts in determining valuation. The episode also outlines that AI investment and automation stories are not materially lifting valuations for smaller MSPs unless directly reflected in improved financials. Larger providers may have the resources to invest meaningfully in AI, but for the majority, especially those below $10 million in revenue, outsourcing or leveraging third-party solutions is more practical than bespoke, high-cost internal development.

A further operational risk discussed is the prevalence of "retrading," in which buyers renegotiate valuations after the Letter of Intent (LOI) based on due diligence findings. Abraham Garver reveals that 60% of transactions see price reductions after the LOI, often for factors such as recent customer losses or missed forecasts, diverging from initial headline multiples.
This reality highlights the importance of diligent contract negotiation, clear documentation, and the value of experienced advisors in navigating buyer tactics. Rob Calvert contributes additional insight on workflow and technology alignment, emphasizing the role of standardized onboarding and offboarding processes in reducing both operational friction and security gaps.

For MSPs and IT service providers, the discussion clarifies several critical implications. First, with platform buyers seeking scale, only MSPs meeting explicit EBITDA and growth metrics will attract competitive offers; others should realistically assess the cost and likelihood of reinvention versus sale. Second, buyers' focus on execution and organic growth, not headline multiples or claims of technological advancement, makes robust financial performance and client acquisition strategies essential to preserving value. Third, the commonality of post-LOI repricing underlines the need for rigorous pre-sale diligence, explicit contractual terms, and experienced representation to preserve deal value and protect against downside risk. Lastly, operational standardization, especially in device and data management, remains central to both platform attractiveness and risk mitigation.

Business of Tech
Why AI ROI Is Elusive: Model Drift, Personal Data Use, and Workflow Liabilities

Business of Tech

Play Episode Listen Later Jan 23, 2026 15:57


Anthropic's disclosure of model drift within its Claude AI system highlights growing risks surrounding governance and ongoing alignment of artificial intelligence. The company has revised its guidelines using a "Constitutional AI" approach, aiming to instill reason-based behavior and ethical boundaries, and has openly acknowledged that an AI's internal controls may shift unpredictably over time, a concern when models are deeply embedded in business workflows. This admission places attention on governance and accountability rather than just model safety, making clear that the AI a company tests may become materially different after extended deployment, especially as personalization increases.

Supporting these concerns, Anthropic's research demonstrated that large language models, including those from Google and Meta, can experience personality drift, with unintended shifts in behavior due to instability of internal control mechanisms. Google's updated AI offerings, tying personal data from Gmail and Photos to generative model responses, intensify challenges around data governance and organizational control. As vendors expand AI personalization and memory features, oversight gaps can emerge, raising questions about who retains authority over information, inference, and decision-making within automated systems.

Adjacent findings indicate that the anticipated productivity gains from AI have yet to reach most enterprises. According to surveys cited by Dave Sobel, over half of CEOs report failing to realize ROI from AI investments, while frontline employees describe AI integrations as sources of friction and additional workload rather than relief.
In the MSP sector, widespread adoption of "agentic" AI and digital labor is delivering financial upside for some providers, but it is also shifting operational liabilities, especially as contracts and security architectures lag behind new workflow realities.

The core takeaway for MSPs and IT service providers is the necessity of reexamining control, authority, and contractual obligations in AI-enabled environments. Delegating tasks to automated agents increases exposure to unpriced and unmitigated risks if governance, liability, and monitoring mechanisms do not adapt accordingly. Effective harm reduction in this landscape requires treating workflows, not just models, as security perimeters, clarifying accountability for AI-driven actions, and ensuring that contractual and operational frameworks reflect these new sources of risk.

00:00 AI Governance Moves Center Stage as Models Drift and Personalization Deepen
05:08 AI Boosts Executive Productivity While Frontline ROI and Employee Experience Lag
07:51 AI Exposes the Real Divide: Governance Failures vs. Effective Oversight in Government Systems
10:39 MSPs Chase AI-Driven Margins, but Workflow Security and Liability Define the Real Risk

This is the Business of Tech.

Business of Tech
Authority Challenges for MSPs: Deepfake Risks, AI Security Shifts, and Vendor Accountability

Business of Tech

Play Episode Listen Later Jan 22, 2026 17:31


Escalating distrust in identity systems and misuse of AI are forcing a shift in security accountability for small and midsize businesses. Recent analysis highlights that the prevalence of deepfake-driven business email compromise and non-human digital identities is eroding confidence in traditional protective solutions. According to TechAisle and supporting reports referenced by Dave Sobel, the ratio of non-human to human identities in organizations is now 144:1, further complicating authority and responsibility for managed service providers (MSPs). As trust in exclusive third-party control disintegrates, co-managed security models are becoming standard, repositioning decision-making and liability.

The rise of AI-generated data, described as "AI slop," has prompted increased adoption of zero trust models, with 84% of CIOs reportedly increasing funding for generative AI initiatives. However, as rogue AI agents are recognized as a significant insider threat, current security services are often ill-equipped to manage these new vulnerabilities. Regulatory bodies, including CISA, have issued guidance noting that the integration of AI into critical infrastructure introduces greater risk of outages and security breaches, particularly when governance remains ambiguous. High-profile vulnerabilities in open-source AI platforms used within cloud environments further highlight the persistence of operational risks.

Adjacent technology updates include new releases from vendors such as 1Password, WatchGuard, JumpCloud, and ControlUp. These offerings focus on enhancing phishing prevention, expanding managed detection and response, and automating endpoint management for MSPs. However, Dave Sobel emphasizes that these tools introduce additional layers of automation and integration without adequately clarifying who ultimately holds authority and accountability when failures or breaches occur.
There is a consistent warning that stacking solutions or outsourcing core functions without redefining operational control creates gaps between action and oversight.

For MSPs and IT leaders, the key takeaway is that security risk is no longer defined by missing technology but by unclear governance, undefined authority, and misaligned incentives. Without explicit contractual and operational delineation of responsibility when deploying AI and automation, service providers are increasingly exposed to liability by default. The advice is to move beyond tool-centric strategies and focus on process clarity: define who authorizes, audits, and terminates non-human identities; establish which parties approve automation actions; and ensure clients understand shared responsibilities to mitigate silent risk accumulation.

Four things to know today
00:00 TechAisle Warns SMB Security Will Shift in 2026 as Identity Attacks and AI Agents Redefine Risk
05:44 AI Moves Deeper Into Critical Infrastructure as Open-Source and Human Weaknesses Expand the Attack Surface
09:35 MSP Security Platforms Automate Phishing Prevention and MDR, Outpacing Governance and Control Models
12:12 AI-Powered MSP Tools Promise Control and Efficiency, But Shift Responsibility by Default

This is the Business of Tech.

Supported by: https://scalepad.com/dave/