The episode's central development is the ongoing dispute between the U.S. Department of Defense and Anthropic regarding Pentagon demands for unrestricted access to Claude, Anthropic's AI model. According to Dave Sobel, the Pentagon has threatened to sever ties or invoke the Defense Production Act if the company does not comply, seeking capabilities that Anthropic argues may be illegal—specifically mass surveillance without warrants and autonomous weapons systems without human control. This move exposes Managed Service Providers (MSPs) serving defense contractors to unpredictable legal, operational, and compliance risks embedded in their AI workflows. The analysis highlights that a commercial AI provider's acceptable use policy now intersects directly with national security policy, and even partial vendor compliance can trigger regulatory or legal instability for dependent organizations. For MSPs, this means that building service offerings on AI infrastructures without clear fallback strategies or documented policy change clauses can lead to unmanageable risk and liability in the event of provider or legal regime shifts. Dave Sobel stresses that failing to address policy volatility as part of a managed service amounts to underwriting geopolitical risk without compensation. Other notable developments include the passage of the Small Business Artificial Intelligence Advancement Act, federal cybersecurity resource contraction as CISA operates with 38% staffing after layoffs, and heightened uncertainty around cloud infrastructure due to Microsoft's Azure Local “air-gapped” offering not wholly mitigating U.S. CLOUD Act exposure. 
Vendor news covered new AI-powered compliance features from Compliance Scorecard (version 10) and Beachhead Solutions (ComplianceEZ 2.0), Apple's accelerated retirement of Rosetta 2 translation technology, a Microsoft 365 Copilot DLP change, and continued fallout from VMware's acquisition by Broadcom, which has led to ongoing cost and trust challenges for cloud and infrastructure partners. The episode's clear implications for MSPs and IT providers are operational. Service catalogs and statements of work should actively address AI provider liability, dependency exit planning, and degraded federal cybersecurity support. Without scheduled and documented compatibility and risk reviews, MSPs absorb hidden exposure into their margins. Vendor stability can no longer be assumed, and proactive policy, renewal intelligence, and transparent advisory sessions are now required to avoid unplanned liability, budget crises, and damaged client trust.

Four things to know today
00:00 Pentagon Threatens Anthropic Over Claude Access, Demands Autonomous Weapons Use
04:31 CISA Cuts, Azure Sovereignty Push Signal End of Federal MSP Safety Net
06:56 AI Compliance Tools Flood Market as MSPs Face Validation Gap
09:54 86% of Firms Cutting VMware Ties as Broadcom Renewal Costs Loom

This is the Business of Tech.

Supported by:
Small Biz Thoughts Community
Recent analysis from Goldman Sachs indicates that $700 billion in AI investment during 2025 resulted in no measurable U.S. GDP growth, with most AI equipment imports negating domestic benefits and 80% of surveyed firms reporting no productivity or employment improvements. This pattern suggests that AI-related spending has primarily shifted margins from enterprise IT budgets to a small number of infrastructure vendors rather than delivering distributed value. Internal concerns are rising, with 90% of IT leaders questioning AI's return on investment, and 80% citing fragmented data as a primary challenge to measuring outcomes. Further context reveals that agentic AI initiatives face operational headwinds: Gartner expects 40% of such projects to be cancelled by 2027, and S&P Global found nearly half are abandoned before production, most often due to inadequate planning and data foundations. Margin erosion is widespread, attributed to AI implementation costs, and attempts to scale AI agents into production remain limited by inference costs and insufficient infrastructure. Despite increased adoption efforts, sustainable value delivery from AI platforms remains elusive for most organizations. Enterprise AI access is becoming increasingly concentrated. OpenAI's partnership with consulting firms such as BCG, McKinsey, Accenture, and Capgemini consolidates control of the enterprise distribution layer, narrowing competitive opportunities for smaller providers. Meanwhile, Amazon's 13-hour AWS outage, linked to the misconfiguration of an internal AI tool, underscores the liability ambiguity in agentic systems—where vendors may attribute autonomous actions to user error, complicating risk assignment. Additional updates from vendors such as Anthropic, Cloudflare, and New Relic address incremental technical capabilities, with a distinct focus on cost, operational governance, and policy enforcement. 
The prevailing themes for MSPs and IT leaders are increased scrutiny of AI value, heightened exposure to cost and accountability risk, and the emergence of managed service opportunities around data governance, cost instrumentation, and liability management. With enterprise market channels consolidating and risk shifting toward service providers, integrating robust contractual definitions for autonomy, incident attribution, and financial boundaries is essential to limit harm and clarify responsibility before incidents occur.

Four things to know today
00:00 Goldman: $700B AI Spend Delivered Near-Zero U.S. GDP Growth in 2025
03:49 OpenAI Enlists BCG, McKinsey, Accenture to Distribute Enterprise AI Agents
06:44 Report: Amazon's Own Engineers Prefer Claude Over Its Mandated Internal Tools
08:56 AI Inference Costs Are Falling — But Governance Gaps Are Growing

This is the Business of Tech.

Supported by:
CometBackup
Small Biz Thoughts Community
Cybercrime's escalation has reached a projected $12.2 trillion annual impact by 2031, with a notable surge in remote monitoring and management (RMM) tool abuse—up 277% year-over-year, according to Huntress and supporting vendor reports. Attackers utilize legitimate IT tools to facilitate stealthier ransomware and phishing campaigns, amplifying structural vulnerabilities within MSP technology stacks. Key metrics from Acronis, WatchGuard, and Vectra AI indicate a shift to smaller, more evasive malware campaigns, longer times to ransomware deployment (averaging 20 hours), and widespread unaddressed security alerts, raising questions about the adequacy of current defenses and incident response practices. Vendor-supplied threat intelligence further shows that MSPs' reliance on signature-based platforms and insufficient visibility leaves them exposed to evolving attack techniques. Data reviewed suggests phishing footholds can quickly compromise cross-client environments, and legal ramifications heavily fall on the service provider when RMM or monitoring tools act as entry points. Notably, only about 58-60% of organizations report full visibility across their systems, with a majority of alerts remaining unaddressed, underscoring gaps in operational maturity and preparedness. Adjacent coverage highlighted Microsoft Copilot's repeated security control failures within regulated environments, specifically its inability to enforce sensitivity labels and boundaries across emails—most recently affecting the UK's National Health Service. The lack of vendor-announced architectural changes calls into question the viability of deploying AI tools in compliance-driven contexts. Separately, political and public backlash against surveillance technologies (such as Flock cameras) demonstrates that unchecked data collection is no longer a manageable passive risk, as data becomes increasingly actionable and retains liability beyond technical considerations. 
The practical takeaway for MSPs and IT leaders is the need to prioritize audit, documentation, and enforcement of controls within their technology stacks, especially where vendor tools or AI-driven automation intersect with compliance and client trust. Preserving operational optionality and scrutinizing vendor terms—particularly data sharing and architectural enforcement—are essential to reduce exposure. Waiting for vendor patches, disregarding documented control failures, or underestimating public scrutiny elevates liability across legal, reputational, and client relationship domains.

Four things to know today
00:00 Vendor Threat Reports Converge on One Risk MSPs Can't Outsource: The RMM as Breach Vector
05:11 Copilot Failed Compliance Controls Twice in Eight Months — A Patch Won't Fix That
07:03 Flock Backlash Exposes the Liability Hidden in Every Vendor Data-Sharing Contract
09:42 GTDC Summit: Distributors Pitch AI On-Ramp as Hyperscalers Compress Their Margin

Sponsored by:
Recent data highlights a growing disconnect between technology spending and measurable business outcomes, with small business optimism softening and widespread skepticism about the benefits of artificial intelligence. The transcript cites an 80% rate of firms seeing no noticeable AI-driven productivity improvements, while trust in technology companies, particularly AI vendors, has declined globally according to the Edelman report. For MSPs, this presents a risk of credibility gaps, especially for those selling AI solutions without corresponding outcome data, as client trust and spending habits grow more discerning in the face of unfulfilled promises. Further context is provided by economic indicators showing a resilient U.S. economy, yet persistent challenges for small businesses. The NFIB Small Business Optimism Index has dropped slightly to 99.3, with insurance costs and labor quality as major pain points; only 16% of business owners expect higher sales. At the same time, IT professionals face salary compression—median IT salaries fell from $145,000 in 2023 to $115,000 in 2024—despite a severe shortage of skilled cloud, AI, and infrastructure talent, as less than 10% of hiring managers are confident in filling in-demand roles. Additional market pressures include rising technology budgets—three-quarters of CFOs anticipate larger tech allocations—but headcount increases are slowing and tech spending faces a widening affordability gap due to sector-specific inflation outpacing budget growth. Vendor-specific developments, such as Western Digital exhausting hard drive capacity for 2026 and N-able reporting 12.8% revenue growth alongside ongoing losses and a 65% stock decline since 2021, illustrate structural risks. Vendor rationalization and strategic uncertainty are likely outcomes for MSPs relying heavily on underperforming partners.
Key takeaways for service providers and IT leaders include the need for caution in messaging and solution positioning: outcome data and defensible value propositions are essential when advocating AI or cloud services. Salary data should be weighed against demand-side evidence to avoid retention failures. Finally, dependency on vendors with deteriorating financial outlooks heightens operational risk; providers should proactively assess alternatives and align with financially sustainable partners to reduce exposure during vendor consolidation cycles or market restructures.

Four things to know today
00:00 AI Productivity Gap Widens as Trust Drops — MSPs Selling Outcomes They Can't Measure Face CFO Audits
04:51 IT Median Salary Dropped 20% in 2024, But Only 7% of Hiring Managers Can Fill AI and Cloud Roles
07:26 IT Inflation Hits 6.9% as CFOs Concentrate Spend; Western Digital Fully Booked Through 2026
10:28 N-Able Beats Revenue, Misses Earnings as 2026 Growth Guidance Drops to 8–9%

Sponsored by:
CometBackup
Small Biz Thoughts Community
The discussion centers on the implementation challenges and partner enablement strategies for artificial intelligence (AI) within the technology channel. According to TD Synnex's AI Accelerator program, only a small portion of AI projects achieve active deployment and measurable ROI, with widespread difficulties cited in scaling complex AI use cases. Jessica Yeck, SVP of Vendor Solutions at TD Synnex, highlights that progress is contingent upon engaging partners at their current state of AI readiness and aligning support resources accordingly. The evidence reflects a move away from one-size-fits-all approaches toward tailored frameworks that focus on tangible business outcomes and repeatable processes. TD Synnex's revised strategy prioritizes meeting partners “where they are,” using assessment frameworks that differentiate between partners with defined AI strategies and those seeking foundational guidance. Jessica Yeck references leveraging the broader technology ecosystem—including vendors, ISVs, and hyperscalers—to deliver solutions with multi-party input. This approach enables partners to identify actionable opportunities and develop pipelines, but demands cross-functional collaboration and technical-specialist engagement, particularly as customization—rather than rigid standardization—is required for effective deployment. The episode also addresses the evolving role of technology distribution in supporting partners beyond logistics. There is explicit recognition of the importance of financial mechanisms, marketplace access, and consultative guidance for services. Jessica Yeck underscores the interconnectedness of relationship-building, competency focus, and ecosystem utilization, noting that partners do not need exhaustive in-house technical skills if they can identify and collaborate with relevant specialists. This points to a strategic shift in what services and value partners can realistically deliver. 
For MSPs and IT service providers, the key implications involve re-evaluating approaches to AI enablement and partner relations. Instead of prioritizing technical uniformity or attempting to master every subsystem, providers should invest in relationship management and focused competency development while leveraging broader ecosystem resources. Adoption risk is reduced when partners clearly understand their customers' primary objectives and are prepared to orchestrate service delivery with targeted technical and financial support from their distribution networks. The episode reiterates that risk and accountability in AI projects hinge on practical readiness, process discipline, and honest assessment of operational capabilities, rather than technology enthusiasm or over-reliance on standardized templates.
Welcome to a feed drop of the SMB Community Podcast, the longest-running MSP-focused podcast in the industry. Hosts James Kernan and Amy Babinchak dive deep into AI go-to-market strategies for 2026, inspired by insights from Amy Babinchak's recent AI class for MSPs.

They open with the latest news on Microsoft Copilot and Anthropic's integration, highlighting new privacy and security features for Office apps. Then, they explore how MSPs can not only adopt AI internally but also create new, innovative service offerings for their clients—like custom AI grant-writing agents for nonprofits, real-world business demonstrations, and the integration of AI readiness assessments.

Pricing strategies, project sales versus monthly recurring revenue, and the importance of meaningful quarterly business reviews also come under the spotlight. Throughout the conversation, Amy Babinchak and James Kernan share practical examples, discuss industry challenges, and encourage listeners to rethink and monetize their approach to AI as we move toward 2026.

Tune in for fresh ideas, actionable strategies, and a glimpse into the real-world experiences of MSPs shaping the future with AI, and find it on your favorite podcast player. Links at https://smbcommunitypodcast.com
Corsica Technologies' reported 105% year-over-year growth in managed services bookings stands out as the primary development, indicating heightened demand for flexible service models among businesses with existing IT functions. According to Brian Harmison, CEO of Corsica, this growth is attributed to the company's focus on operational integration, automation, and data-centric managed services that supplement, rather than replace, in-house IT capabilities. The significance for MSPs is not the expansion itself, but the operational choices that enable sustained trust and differentiated engagement in a competitive landscape. Supporting details clarify Corsica's operational strategy: instead of automating or deploying AI indiscriminately, Harmison emphasizes that automation and AI are only effective atop an already “operationally excellent” MSP framework. Practical deployments cited include user onboarding/offboarding workflows, which demand both internal process clarity and integration with client HR systems. The company positions data integration and workflow consulting as integral to MSP-client relationships, not as add-on projects. Corsica's contracts reportedly reduce friction and avoid asset-tracking or incremental billing, seeking to foster longer-term trust over short-term revenue optimization. The episode also addresses the implications of Corsica's acquisition of Accountability IT. Harmison cites alignment in operating models and targeted capabilities—especially in Microsoft security and AI expertise—as central to the integration's value, rather than generic synergies. He notes that continuity of client relationships and careful preservation of existing service structures were prioritized in the first 90 days, even at the expense of speed, to mitigate operational risk and maintain client trust. The discussion highlights the risk tradeoffs between scaling for broader capability and maintaining agility for specialized client needs. 
For MSPs and IT leaders, the takeaway is to focus on risk reduction through operational excellence and trusted client relationships. Embracing automation and AI is not a universal solution; process maturity and readiness in both the provider and customer are preconditions for any meaningful implementation. Acquisitions require careful cultural and operational integration, with an emphasis on continuity and incremental capability, rather than immediate consolidation or scale. The episode frames operational clarity and trust—not rapid expansion or technology adoption—as critical determinants of long-term viability and resilience in managed services.
Agentic AI is being deployed as production infrastructure in enterprise settings, but prevailing frameworks remain unreliable for mission-critical operations. Dave Sobel and Ron Aroussi from Muxie underscored that while AI agents are functional—especially in non-deterministic contexts like customer support—expectations of deterministic, workflow-based reliability are not met. The move from demonstration agents to production-scale tools brings heightened attention to issues of reliability, observability, and especially risk of vendor lock-in for Managed Service Providers (MSPs) and their clients.

Operational deployment of AI agents currently gravitates toward roles with minimal operational risk, such as customer-facing chatbots or internal chief-of-staff assistants. Aroussi explained that while such agents can automate initial support tiers and internal daily briefings, their unpredictability and potential for error limit their use in processes demanding strict oversight and accountability. He identified two core use cases—external (customer support) and internal (personalized information management)—explicitly noting that agents are best positioned to augment rather than fully automate complex workflows at this stage.

A critical risk for MSPs lies in attempting to retrofit existing software frameworks to support agents, which introduces integration complexity and increases the likelihood of operational failures. Purpose-built infrastructure for agentic AI offers better alignment between AI capabilities and production requirements, with Aroussi citing drastically reduced hallucination rates and improved oversight when using native tools.
Open source is identified as a foundational element for AI development, but it incurs its own risks, particularly around third-party code quality and the long-term sustainability of community-driven projects.

The practical implication for MSPs and IT service providers is clear: a cautious, incremental adoption approach focused on low-risk use cases, coupled with rigorous controls on agent permissions and robust audit trails, is essential. Decision-makers should avoid assuming agents operate with the reliability or accountability of traditional software, prioritize operational transparency, and ensure that responsibilities for agent actions are clearly defined and enforced at the implementation level. Vendor lock-in and software provenance remain significant governance concerns as agentic AI moves from experiment to infrastructure.
The central development addressed is the disconnect between rising overall IT spending and the declining channel share for MSPs and IT partners. Dave Sobel, in discussion with an industry analyst, highlights a reduction in indirect channel participation—from over 75% to a projected 66.7% in 2026—primarily due to the concentration of AI infrastructure investment among the largest technology firms. These hyperscalers and their associated CapEx do not translate into traditional channel opportunities, restricting partner involvement to areas outside large-scale AI data center buildouts.

Supporting data points to a technology industry projected to reach $6.07 trillion in customer spend, growing at 10.2%, compared to significantly lower world GDP growth. However, almost none of the rapid AI-related CapEx from companies like Nvidia and Google flows down to channel partners, who instead rely on client-facing managed services, advisory, and security service work. The increasing complexity of customer demand—such as the shift toward managed security (15% growth) and AI services (35.3% compounded growth)—further pushes MSPs to focus on services surrounding the core product, rather than on direct product resale or thin-margin opportunities.

A significant operational shift within the channel also emerges: the distinction between “influence” and “execution” partners. Vendor programs increasingly recognize partner contributions outside of transactional resale, such as co-selling, advisory contributions, and services attached before or after the point of sale. This trend is reinforced as platforms move toward “point systems” and indirect revenue attribution, redefining how MSPs measure channel health and partner value in a more complex, multi-partner environment.

For MSPs, IT providers, and decision-makers, the key operational implications are clear.
Traditional growth through seat expansion is less reliable as hiring softens, and managed services must focus on multiplier opportunities—profitable service revenue attached to each dollar of product sold. Capturing value requires adapting to changing program structures, emphasizing trusted advisor roles, and collaborating effectively with adjacent partners. Near-term investment in understanding and building pre-sales AI and security services, and tracking evolving vendor economics, is essential for navigating the new realities of partner participation, risk allocation, and long-term business health.
Global technology spending is projected to reach $5.6 trillion by 2026, with nearly two-thirds of this investment directed toward software and computer equipment, particularly servers, according to Forrester. Generative AI is cited as a primary driver of this increase, shifting the balance of power toward cloud providers such as AWS and Azure. This escalation has implications for operational margins and the position of IT service providers, as businesses increasingly migrate complex workloads to cloud infrastructure ecosystems.

Supporting data shows a disconnect between tech employment trends and hiring activity. In January 2026, technology companies cut approximately 20,155 jobs, mainly in telecommunications, while job postings for tech positions rose by 13% compared to the prior month, based on CompTIA analysis. Dave Sobel interprets this as a shift away from permanent IT headcount to project-based, AI-focused engagements. This development places pressure on service providers, who must adapt to buyers reallocating spend from traditional staffing models to short-term, outcome-oriented contracts.

Adjacent discussion covered two press releases: VirtuaCare launched a support offering for Windows-based MSPs needing Apple expertise, delivering an externally verifiable, Apple-certified service. In contrast, Miso announced a roadmap for an autonomous AI L1 technician but did not substantiate claims with deliverables or customer data. Dave Sobel emphasized the need for MSPs to demand piloting, outcome metrics, and auditable product maturity, warning against reliance on unproven AI solutions and highlighting the risk of outsourcing as only a temporary solution.

The core implication for MSPs and IT providers is a need for tactical negotiation and operational risk management.
Dave Sobel recommends using AI first to reduce internal labor costs before introducing it as a client offering, prioritizing outcome-based pricing and adjusting contracts to retain value from efficiency gains. Providers should avoid becoming displaced labor, rigorously test new technologies before adoption, and remain vigilant regarding vendor claims. The emphasis remains on capturing and defending margins through accountable operations and contract governance rather than chasing speculative innovation.

Three things to know today
00:00 Tech Spending Hits $5.6T but MSPs Face Margin Squeeze Without AI Pricing Reset
05:31 VirtuaCare Ships Apple Support; Miso Announces Roadmap—One's Testable Today
08:17 MSPs Must Capture AI Efficiency Value or Face Margin Compression

This is the Business of Tech.

Supported by:
Small Biz Thoughts Community

Check out Killing IT
Mid-market organizations are transitioning from pilot projects to operationalizing generative AI and agentic workflows, according to a TechEYE article and Tech Isle survey cited by Dave Sobel. This shift centers on outcome-driven automation but exposes providers to new liability concerns, mainly due to fragmented, unreliable data and shadow AI usage—employees employing unauthorized tools outside official controls. The primary risk is that MSPs may be blamed for incidents where contract boundaries and technical controls do not cover browser-based generative AI use, making forensic evidence and documented enforcement essential for defending accountability. Supporting data from Tech Isle found that over 5,000 companies are pursuing structured approaches to AI-enabled growth, but face persistent issues in data trust, governance, and user fatigue. Additionally, European investment in sovereign cloud infrastructure is projected to triple between 2025 and 2027, driven by regulatory demands and concerns about U.S. data sovereignty. MSPs managing split architectures—sovereign providers for regulated data and hyperscalers for everything else—encounter API mismatches, operational complexity, and margin pressure. The recommendation is to standardize policy enforcement, identity management, and residency mapping while prioritizing audit-ready reporting and exception handling. AI-driven cyberattacks have increased, with reports from Level Blue and Check Point Research highlighting a surge in both attack volume and sophistication. Only 53% of CISOs feel prepared for AI threats, despite 45% expecting to be impacted within a year. Browser-based generative AI use introduces visibility gaps, raising the risk of negligence claims when service providers cannot demonstrate governance or forensic readiness. 
Reauthorization of the Cybersecurity Information Sharing Act (CISA 2015) underscores that voluntary data sharing is inadequate, with CIRCIA now requiring mandatory 72-hour incident reporting for critical infrastructure. The key takeaways for MSPs and IT leaders are to proactively define AI coverage and governance in contracts, enforce acceptable use policies, and instrument monitoring to close visibility gaps. Providers who can deliver forensic-grade telemetry, managed compliance programs, and operational readiness for incident reporting will be better positioned to defend against penalties, retain higher-value accounts, and offer meaningful differentiation. These structural challenges—fragmented control planes, increased compliance costs, and permanent risk friction—necessitate a strategic shift toward governance-led service models.

Three things to know today
00:00 Midmarket Shifts to Agentic AI as Europe Triples Sovereign Cloud Spending by 2027
06:08 Most Security Chiefs Say They're Not Ready for AI-Powered Cyberattacks Coming This Year
09:46 CISA 2015 Reauthorized Through 2026; CIRCIA Mandates Expose Voluntary Sharing Failure

This is the Business of Tech.

Supported by:
TimeZest
IT Service Provider University
Artificial intelligence (AI) is intensifying workloads rather than alleviating them, leading to increased burnout and declining decision quality, according to findings published in the Harvard Business Review and cited by Dave Sobel. The episode underscores that AI lowers the cost of producing outputs such as drafts and summaries but raises throughput targets and introduces new verification burdens. Economic gains from AI remain concentrated where capital and skilled labor already exist, while negative impacts—like displacement and wage pressure—are felt locally. These dynamics highlight the need for robust governance, particularly for managed service providers (MSPs) who deploy AI solutions.

Supporting studies referenced include the International AI Safety Report, which details heightened uncertainty around AI development and its risks, as well as research from Oxford documenting the unreliability of AI chatbots in real-world medical decision-making. Experts warn that rapid automation without corresponding improvements in control systems creates structural constraints, making traditional software governance frameworks inadequate for unpredictable AI behaviors. Without proactive measures, these gaps risk exacerbating economic inequality and liability in regulated environments.

Additional developments include OpenAI's release of upgraded agent features—such as GPT-5.2, improved context retention, managed shell containers, and a new skills standard—presented as operational enhancements but raising concerns about black-box context handling, auditability, and dependency risk. T-Mobile's AI-powered live translation service offers greater convenience but eliminates audit trails, shifting compliance risk to customers and prohibiting independent verification.
Cork Cyber's launch of an internal cyber risk score introduces further complexity, as the scoring methodology is embedded within a financial product structure and lacks transparent validation.

For MSPs and IT service leaders, the key takeaway is to treat new AI features and risk metrics as tools with significant tradeoffs. AI deployments should focus on governance layers that include workload caps, quality gates, and measurable outcomes rather than simply accelerating productivity. New features should be used for low-stakes workflows and carefully avoided in high-risk or regulated contexts unless auditable controls and deterministic checkpoints are established. Vendor-managed risk scores and warranties require independent validation before being positioned as client-facing truth standards.

Four things to know today
00:00 Harvard, Oxford Studies Find AI Raises Workload, Delivers Inadequate Medical Advice
05:01 OpenAI Updates Deep Research and Adds New Agent Runtime Capabilities
07:33 T-Mobile Tests Real-Time Call Translation Built Into Its Network
09:17 Cork Cyber Rolls Out New Risk Score for Managed Service Providers

This is the Business of Tech.

Supported by:
ScalePad
Small Biz Thoughts Community
OpenAI's twin initiatives to monetize ChatGPT's free tier through ads and launch the Frontier enterprise agent platform represent a shift in the AI provider's business model, with substantial implications for compliance and operational governance. Free and low-cost ChatGPT users will now see sponsored links unless they opt to reduce daily usage; only customers paying $20 or more per month retain an ad-free experience. OpenAI is concurrently marketing Frontier to enterprise clients such as HP, Intuit, and Uber, offering AI agent orchestration and deploying a team of consultants to support custom AI applications. The company projects enterprise revenue will constitute 50% of its income by year-end, up from 40% the prior month.

Operating in both the consumer funnel and the enterprise layer, OpenAI combines top-of-funnel data monetization with vertical integration of services. The ad-supported free tier raises compliance concerns, as user interactions become subject to additional data collection and monetization. For organizations, this means enforcement decisions around whether and how employees may use free AI tools in regulated or sensitive environments. The more consequential development, however, is the introduction of enterprise agent orchestration through Frontier, where questions persist regarding liability, governance, production stability, and how organizations are protected from errors committed by autonomous agents.

Related market movements include Anthropic's release of Claude Opus 4.6—which enables multi-agent collaboration with context windows up to 1 million tokens—and Microsoft's planned shift for Windows to a signed-by-default trust model. Anthropic's enhancements to agent functionality remain constrained by key gaps, such as conflict arbitration mechanisms, rollback procedures, and documented cost models, and the expanded context remains limited to beta testers.
Microsoft's strategy to enforce signed apps by default mirrors iOS's approach to application trust, but its operational viability depends on how override mechanisms are managed by both users and IT administrators. Additional developments in backup, asset management, and AI governance (as seen with NinjaOne, JumpCloud, and Zoom) reflect a general trend towards increased integration and platform consolidation, though with ongoing gaps in security and compliance as AI adoption accelerates.

The practical takeaway for MSPs and IT service leaders is the need to re-evaluate policies around free AI tool usage, invest in governance and auditability for enterprise AI, and prepare operational systems for stricter software trust and exception management requirements. Structural changes in software security and AI orchestration are transferring costs and risks from incident response to ongoing policy enforcement and exception handling. Those offering AI services should prioritize model-agnostic governance and avoid reliance on a single vendor's automation layer, as vertical integration by platform providers is reducing the defensibility of narrow service offerings.

Four things to know today:
00:00 OpenAI Adds Ads to Free ChatGPT; Launches Frontier Platform for Enterprise Agents
04:07 Anthropic Ships Opus 4.6 Agent Teams; Model Found 500 Zero-Days in Testing
06:43 Microsoft Announces Signed-App-Only Mode for Windows 11; Phased Rollout Planned
10:19 NinjaOne Adds Asset Management; Zoom Launches AI Workspace Tool; JumpCloud Opens VC Arm
This is the Business of Tech. Supported by: CometBackup IT Service Provider University
IT spending continues to expand, with North America projected to lead a 12.6% increase to $2.6 trillion, primarily due to hyperscaler investments in AI infrastructure. However, the proportion of technology spending funneled through channel partners is declining, now at 61% compared to over 70% four years ago, according to a survey by Omnia. This shift signals that while the market is growing, traditional margin and resale opportunities for MSPs are narrowing as vendors shift a larger share of revenue to direct sales while still relying on partners for implementation, support, and customer operations.

Data from Salesforce underscores a near-universal trend toward partner involvement in sales, with 94% of surveyed global salespeople leveraging partners to close deals and 90% using tools to manage relationships. Despite this, Dave Sobel clarifies the distinction between involvement and compensation, highlighting that partner influence on deals does not guarantee economic participation at previous levels. These dynamics reinforce that MSPs must adapt to a reality where their role in the value chain is being separated into influence and execution, with the middle tier facing increasing pressure.

Additional analysis draws attention to labor market changes and technology commoditization. U.S. job openings have fallen to their lowest point in over five years, undermining MSP growth strategies dependent on seat expansion. Simultaneously, the AI market is fragmenting at the application layer—with Google's Gemini app, Grok, and OpenAI's ChatGPT shifting market shares rapidly—while hyperscalers like Alphabet (Google) commit unprecedented capital expenditures, fueling an infrastructure arms race even as front-end AI tools become more interchangeable.

The practical implication for MSPs and IT service providers is increased pressure to re-evaluate business models, operationalize AI offerings, and focus on defensible, productized services.
Reliance on a single vendor or seat-based growth forecasts presents heightened risk. Successful adaptation will require a shift toward managed services around AI operations, governance, and productivity—emphasizing accountability, optionality, and measurable ROI—rather than assuming historic revenue models will persist.

Three things to know today:
00:00 Partners Essential to Sales but Losing Economic Share, Survey Shows
05:44 US Job Market Shows Low Hiring, Low Firing Despite Falling Openings
08:00 Alphabet Plans $180B AI Capex as Gemini Hits 750M Users
This is the Business of Tech. Supported by: Small Biz Thoughts Community
AI pilot programs are consistently failing to deliver measurable business value, with a primary cause identified as a lack of clearly defined problem statements guiding these initiatives. Ashwin Mehta, an AI strategist with experience leading enterprise transformations, emphasized that many organizations initiate AI pilots without specific objectives, resulting in projects that struggle to demonstrate impact or justify further investment. This lack of focus often leads to stalled initiatives, rather than progress into scalable production environments.

The discussion outlined how mid-market and small businesses typically implement AI by acquiring SaaS tools with embedded AI features, rather than building bespoke solutions. Ashwin Mehta observed that while “build versus buy” considerations have shifted as orchestration and database platforms become more accessible, custom development still brings additional risk, skill requirements, and long-term maintenance burden. Even as technical barriers decrease, organizations are cautioned to weigh lifecycle costs and operational support needs before pursuing custom builds.

Data management was highlighted as a recurrent challenge, both from an organizational readiness perspective and regarding regulatory risk. Ashwin Mehta underscored the importance of establishing a single source of truth for business-critical data and classifying information by its regulatory sensitivity. Without such data discipline, adoption of AI tools—especially in regulated sectors—becomes a source of uncertainty, with organizations defaulting to restrictive or prohibitive AI policies due to inadequate risk visibility.

For MSPs and technology leaders, the operational implications are clear: pilots without rigorous scoping and problem definition are unlikely to progress, and sustainable AI adoption requires purposeful data governance and clear frameworks for project prioritization.
With the complexity of AI implementations extending beyond technical issues to include cost volatility, compliance, change management, and skills gaps, providers must approach each initiative with a structured, risk-aware mindset and ensure ongoing oversight as both technology and regulatory landscapes evolve.

Sponsored by: ScalePad
OpenAI's direct investment and technical involvement with Thrive Holdings, specifically through its partnership with SHIELD Technology Partners, presents a new precedent for AI's integration into the managed service provider (MSP) space. Unlike prior private equity roll-ups or traditional organic growth, this move involves embedding OpenAI's models and engineers directly within SHIELD's platform, an entity that has rapidly acquired and integrated nine MSPs and executed two $100 million funding rounds. The arrangement is characterized by efforts to optimize MSP operations through proprietary AI automation, raising immediate questions around operational dependency and the shifting locus of software control.

According to Seth Robinson, this approach signals OpenAI's attempt to navigate both consumer and enterprise technology markets—a dynamic seen previously in mobility—and reflects the broader tension between individual AI use cases and deeply integrated stack solutions. The initiative may accelerate operational scale, but it also introduces new operational risks by centralizing key components of service delivery and support within a single AI-driven platform, potentially affecting vendor lock-in, data governance, and continuity of MSP business models.

Parallel developments highlight new vendor integration strategies among MSP-focused software providers. One example is Lexful's AI documentation system, which, rather than integrating directly with core PSA and RMM tools, utilizes intermediary platforms such as ScalePad and Liongard for data access.
Seth Robinson emphasizes that these alternative integration points may alter an MSP's center of operational gravity and complexity management, underscoring the need to assess not just functional outcomes but also system dependencies and brittleness introduced by new integration paths.

For MSPs and IT leaders, these trends underscore the necessity of rigorous due diligence in vendor relationships, clarity on operational dependencies, and attention to the long-term implications of AI-enabled automation. Management—not elimination—of complexity remains central, with the risk of oversimplification leading to commoditization and loss of differentiation. Moreover, advances in AI should prompt greater scrutiny about talent pipelines, upskilling strategies, and the potential risks of eroding early-career roles, which may impact long-term service quality and resilience. Careful evaluation of integration points, data integrity, and operational control is recommended to mitigate the practical and organizational risks emerging from these developments.
A PwC survey of over 4,400 CEOs across 105 countries found that 56% report artificial intelligence has not delivered meaningful revenue growth or cost savings in the past year. Only one in eight organizations saw both benefits. The core issue, as highlighted by Dave Sobel, lies in poor integration—largely due to data quality challenges and legacy systems—leaving many businesses stuck in what PwC terms “experimentation purgatory.” Despite significant investment, AI infrastructure is often failing to produce measurable returns.

This lack of operational discipline is mirrored by the rising incidence of AI bots, which now account for 1 out of every 50 website visits, a sixfold increase from earlier reports. AI companies are successfully extracting value from enterprise infrastructure through sophisticated scraping: businesses pay for AI tools that return little while simultaneously funding the infrastructure that serves AI bots. The operational cost and exposure from bot traffic and ineffective AI tool adoption highlight the disconnect between hype and practical benefit.

Adjacent stories expand on the governance gap and evolving expectations around risk. The U.S. and China declined to sign a non-binding declaration on military AI, underlining global regulatory fragmentation. In contrast, the Cybersecurity and Infrastructure Security Agency (CISA) issued a binding directive for federal civilian agencies to remove unsupported devices within a year, signaling substantial operational risk from end-of-life technology. These regulatory movements are expected to drive similar risk accountability into the private sector, primarily through insurance requirements.

For MSPs and IT service providers, the takeaway is not to chase AI-powered offerings but to prioritize readiness, control, and cost accountability. Vendor partner programs (Cisco and 1Password) reward lifecycle management and customer retention, not AI sales.
The practical competitive advantage is operational honesty—delivering realistic assessments, proactive client interactions, and transparent guidance. Automation should fund genuine client relationship activities, not replace them. The focus should remain on safeguarding operational integrity, controlling technology risk, and building customer success capability.

Four things to know today:
00:00 PwC Survey Finds Most Business Leaders Still Waiting for AI Payoff
05:00 Federal Agencies Ordered to Eliminate End-of-Life Devices Over Cyber Threats
08:06 Cisco and 1Password Launch Partner Programs Focused on Customer Success
10:52 Harvard Business Review Says Human Touch Remains Critical Advantage Over AI
This is the Business of Tech. Supported by: Small Biz Thoughts Community
The primary development centers on the shift toward smaller, task-specific AI models within enterprises and how this shift is primarily about transferring liability from AI vendors to operators. Dave Sobel notes that while narrower AI models are being marketed as safer and easier to govern, the reality is that they shift the burden of control, oversight, and risk directly onto the organizations deploying them. Hidden costs—particularly those related to data infrastructure, compliance, and ongoing governance—are substantial, often eclipsing the initial AI investment.

Supporting data includes findings from a Salesforce survey indicating that CIOs allocate a median of 20% of their budgets to data and infrastructure management versus 5% to AI itself. Dave Sobel stresses that the real cost of an AI project can be significantly higher than client expectations, pointing out a 4:1 spending ratio between supporting infrastructure and the AI technology. This underscores the risk for MSPs who may fail to price in the operational and governance requirements appropriately, exposing themselves to financial and compliance liabilities.

Adjacent stories address OpenAI's strategic expansion into advertising and direct consulting, marking a move from pure technology platform to direct competitor for services revenue. OpenAI is creating an Ads Integrity Team to manage advertiser verification and reduce scam risk but acknowledges the challenges of maintaining effective controls at scale. In parallel, OpenAI is embedding engineers within client operations—mirroring other internal AI initiatives such as those at Shield and Integris—and reinforcing a market divide. MSPs who build such capabilities internally capture margin, while others face lasting margin compression as purchasers of external solutions.

The implications for MSPs and IT leaders are direct.
Success depends less on which AI model is selected and more on the provider's ability to establish rigorous governance, liability management, and ongoing operational control. The market is bifurcating: service providers who can build in-house AI platforms or attract strategic investment will retain efficiency as margin, while those relegated to purchasing third-party tools risk further erosion of profitability and competitive position. The decision to build or buy is becoming a business model risk, not just a procurement choice, and the opportunity to address it is narrowing.

Three things to know today:
00:00 Firms Shift to Task-Specific AI Models Amid Governance, Liability Concerns
04:35 OpenAI Launches Ads Integrity Team, Hires Hundreds as Services Push Begins
08:34 MSP Market Splits as Integris, Shield Build Internal AI, Others Buy Tools
This is the Business of Tech. Supported by: IT Service Provider University
The episode focuses on current security risks and limitations in industry intelligence, highlighting that CISA's Known Exploited Vulnerabilities (KEV) catalog often lags by years in tagging vulnerabilities exploited by ransomware. One cited vulnerability sat in the catalog for 1,353 days before being flagged as ransomware-exploited, illustrating a significant delay in actionable intelligence. This gap raises concerns for MSPs whose patching priorities rely on outdated catalogs, potentially leading to a misalignment between compliance activities and actual threat vectors.

Supporting this, Dave Sobel underscores how evolving threat models frequently bypass traditional vulnerability management. The recent compromise of OpenClaw's skills marketplace, with a 12% malicious rate in submitted skills and basic post-facto reporting mechanisms, demonstrates that credential theft and malicious automation now present risks outside standard patch management. The core operational challenge for MSPs is not just software vulnerability but the governance of AI-enabled tools and uncontrolled marketplaces that can expose clients to breaches.

Further contextualizing risk and automation, vendor launches include Lexful's AI-native documentation for MSPs and Cavelo Flash's agentless assessment tool. These offerings promise streamlined documentation and rapid risk assessment, but Dave Sobel notes their reliance on beta features, integration dependencies, and non-definitive compliance positions. Additionally, DocuSign's release of AI-generated contract summaries raises questions about liability, as inaccurate summaries can mislead signers, and responsibility defaults to the end user rather than the vendor.

The primary implication for MSPs and technology leaders is the need to inventory all AI-powered tools with access to client environments, actively govern marketplace adoption, and critically evaluate automation claims.
Compliance-focused patching is no longer sufficient; operational oversight must prioritize credential management and identity governance over checklist-based approaches. Caution is advised before rapid migration to beta solutions or locking into long-term contracts, as both reduce flexibility and increase exposure to emerging, non-traditional attack surfaces.

Three things to know today
00:00 CISA's Ransomware Tags Arrive Years Late While AI Tools Steal Credentials Now
05:53 IT Glue Founder Launches AI Documentation Platform Lexful for MSPs at Right of Boom
09:52 Cavelo and DocuSign Launch AI Tools That Automate Assessments and Contract Reviews
This is the Business of Tech. Supported by: Small Biz Thoughts Community
The episode centers on the expanding adoption of artificial intelligence (AI) tools among workers alongside a notable decline in confidence. According to a Manpower Group study cited by Dave Sobel, AI confidence among workers decreased by 18% even as usage increased by 13% over the past year. This divergence highlights a governance and operational gap for MSPs, as enterprise clients confront both the potential and the risks of AI-enabled solutions, facing unresolved issues of output reliability, oversight, and liability when missteps occur.

Supporting this trend, findings from the Stanford University Institute for Human-Centered Artificial Intelligence indicate that nearly 30% of AI chatbot users encountered harmful suggestions. While these statistics lack detailed breakdowns – such as which platforms or definitions of “harmful” – they shape widespread client perceptions and intensify scrutiny of AI guidance provided by IT service providers. Meanwhile, enterprise vendors like Zendesk report improved satisfaction rates from automated resolutions but emphasize the costly need to overhaul workflows and data management to effectively harness AI benefits.

Additional focus is given to Microsoft's scheduled deprecation of the NTLM authentication protocol, replaced by newer mechanisms that are not yet fully deployed or reliable. Dave Sobel notes that legacy systems depending on NTLM present tangible operational and legal risks for MSPs, as clients may face authentication failures or re-enable insecure protocols unless thoroughly audited.
Elsewhere, the "right to repair" movement is gaining ground as the Environmental Protection Agency affirms farmers' rights to repair their own equipment, with broader implications for IT hardware access and vendor-dependent service models.

The confluence of these developments underscores the importance for MSPs and IT leaders to shift focus from product access and resale toward risk governance, lifecycle planning, and documenting client decisions—especially in AI, authentication methodologies, and hardware maintenance. Mitigating liability, clarifying accountability with clients, and tracking evolving vendor and regulatory actions are essential to maintain relevance and safeguard operations as service and product access models change.

Four things to know today
00:00 Workers Use More AI But Trust It Less, Creating New Service Risks
03:44 Microsoft Plans NTLM Phase-Out Despite Unfinished Kerberos Replacement Technology
06:32 Google, Adobe Launch AI Subscriptions While OpenAI Retires GPT-4o Next Month
10:52 EPA Ruling Lets Farmers Repair Equipment, Pressures Tech Right-to-Repair Laws
This is the Business of Tech. Supported by:
The episode centers on the structural shift in managed services driven by the adoption of autonomous AI agents and the resulting accountability challenges for IT service providers. According to Dave Sobel, 22% of employees in Token Security's surveyed organizations are independently running AI agents such as OpenClaw with terminal and browser command capabilities, without formal IT oversight. This widespread shadow automation creates significant operational and security exposure, indicating unsanctioned user demand for advanced automation that IT has not provided. The core risk is not simply unauthorized technology use, but ineffective governance and lack of visibility into automation processes that can impact both client safety and provider liability.

Context provided throughout the episode points to a disconnect between optimistic business sentiment and actionable IT spending. While the NFIB index reflects rising small business optimism and increased capital access, most technology-related investments appear to have already been made in prior periods. Only 19% of small businesses plan further equipment investments, suggesting limited near-term demand. Meanwhile, SBA workforce reductions signal longer loan processing times, affecting clients who depend on SBA-backed funding for technology projects—a concrete operational delay for MSPs whose services are linked to client capital expenditure timelines.

Additional discussion focuses on evolving industry economics, notably a projected increase in the North American IT services market to $1.09 trillion by 2033, as reported by Research and Markets. However, Dave Sobel emphasizes that the majority of this growth is captured by hyperscalers and large integrators, not regional MSPs. Cooling wage inflation, detailed by Service Leadership, may present temporary margin opportunities but also introduces risk if MSPs respond with indiscriminate hiring rather than automation or upskilling strategies.
The Shield Technology Partners investment, involving OpenAI's embedded research in IT operations, signals rapid automation of rules-based workflows and reiterates the urgency of addressing task displacement and margin compression.

For MSPs and IT service leaders, the practical takeaway is clear: unmanaged, employee-driven AI automation presents both risk exposure and a map of unmet service demand. Blocking shadow agents is a reactive measure—long-term resilience depends on developing agent governance frameworks, including permissioning, audit, and incident response protocols. With shrinking margins and increasing automation, providers must reevaluate operational models, prioritize revenue-per-employee, and focus on delivering accountable, sanctioned automation services rather than competing on basic labor cost or commodity support.

Four things to know today
00:00 NFIB Index Hits 99.5 as 64% Face Inflation and SBA Cuts Half Its Workforce
04:44 IT Services Market Growth to $1.09T Coincides With Declining Wage Inflation
08:01 Shield Secures Second $100M From OpenAI-Backed Thrive Holdings for AI Operations Platform
11:21 Token Security Reports 22% Shadow IT Adoption of OpenClaw
This is the Business of Tech. Supported by: MSP Radio
The appointment of Mike Riggs as Chief Product Officer at Empath signifies the company's transition from founder-led intuition to formalized product governance. According to Wes Spencer, Empath reached over 500 MSP customers and now requires more disciplined processes as it moves from early-stage, high-velocity development to operational maturity. Mike Riggs described his role as systematizing elements that were previously managed informally—covering areas from design to engineering—and explicitly stated the intent to strengthen operational accountability for both the platform and its customers.

This structural change follows recognition by the founders that their limited technical background required complementary leadership to scale effectively. Advisors highlighted that, while growth and partner engagement met expectations, scaling Empath's platform now demands greater rigor and repeatable operational practices. Empath's platform has evolved from being a convenience service to an operational dependency, with MSPs using it for training, team accountability, and embedded workflows. Mike Riggs emphasized the importance of refining user experience, onboarding processes, and support mechanisms as MSP reliance grows.

A central theme discussed is the shift in Empath's product category—from a basic learning management tool toward a broader learning, development, and accountability platform for MSPs. Features such as notification systems and visibility into required actions move the platform beyond content delivery into proactive management of personnel performance and compliance. This evolution brings Empath closer to intersecting with HR, policy, and managerial oversight, compelling the company to balance user engagement features with the need for reliable, auditable, and controlled change management.

For MSPs and IT service providers, Empath's shift has operational implications and risk factors.
Increasing dependency on a single platform heightens the significance of product stability, disciplined rollout of new features, and clarity of governance. As platforms like Empath become more embedded in day-to-day operations, service providers must reassess processes for vendor risk management, accountability, and internal policy alignment. The move described is not an indicator of problems but of maturation—a transition that typically introduces both new safeguards and greater operational complexity.
The episode centers on practical approaches for Managed Service Providers (MSPs) and IT leaders assessing artificial intelligence (AI) adoption, with David Espindola detailing the crucial distinction between “maker,” “shaper,” and “taker” strategies. David Espindola emphasizes that organizations must intentionally decide their role in AI development and use—whether building proprietary systems, shaping solutions atop existing models, or simply consuming pre-built capabilities. This decision, he notes, is foundational for aligning risk tolerance, investment, and technical capacity with business goals, especially given the rapid pace and inherent uncertainty in AI's evolution.

Supporting this framework, David Espindola references insights from a Small Business Administration project, which found that most small businesses are struggling to define applicable use cases for AI and tend toward risk-avoidant stances despite external pressures to adopt the technology. He stresses that AI implementation should not be a solution in search of a problem; rather, an organization's readiness, risk, investment capability, and specific industry context must determine its approach. Key recommendations include conducting readiness assessments, appointing internal AI champions, and starting with small, low-risk pilot projects to build internal understanding and governance processes before scaling.

The discussion broadens to ethical and governance considerations, with both David Espindola and the host cautioning that responsible AI adoption is a business necessity rather than a compliance checkbox. They advocate for formal employee training, the establishment of clear usage policies, and strict controls over tool access to mitigate risks such as data leakage, hallucinated outputs, and misaligned communications.
The emphasis is on building practical safeguards rather than pursuing AI for its own sake, reflecting a pragmatic, risk-managed approach tailored to each organization's context.

For MSPs and IT service providers, the practical takeaways are clear: pursuing AI adoption requires a methodical, risk-aware strategy focused on business relevance, operational governance, and targeted experimentation. The harms of rushed deployments, poor change management, or lack of internal education are underscored, with the implication that long-term value and reduced exposure are found in deliberate, well-governed adoption efforts. Readiness assessments, pilot programs, and robust policy frameworks emerge as the primary enablers of sustainable outcomes in this rapidly evolving landscape.
The current wave of managed service provider (MSP) consolidation and rollups is being distinguished by the integration of advanced artificial intelligence (AI) expertise, particularly among entities such as SHIELD and Titan. As discussed by Rich Freeman and Jessica Davis, these newer rollups are acquiring not just MSPs but also Silicon Valley AI talent and developing proprietary AI-driven services, a marked shift from earlier private equity-backed consolidators. Rich Freeman highlighted SHIELD's recent leadership hires from Palantir and direct collaboration agreements with OpenAI, signaling an intent to embed AI at the operational core rather than simply as a tool for optimization.

Data access and scale are central to these developments. As Rich Freeman elaborated, large rollups possess a scale-driven “AI flywheel” advantage: broader customer bases provide larger datasets, which in turn drive better AI performance, operational efficiency, and profitability. This concentration creates risks for smaller MSPs that lack equivalent data pools and resources for internal AI development. Jessica Davis noted that while tool vendors and platform companies such as ConnectWise and Kaseya are enhancing AI within their offerings, their efforts are not yet matching the focused investments of the largest rollups, and are simultaneously being pressured to accelerate innovation.

Commercial and operational pressures are increasing throughout the MSP ecosystem. Jessica Davis cited indications of slowing managed services revenue growth projections (potentially below 10%), alongside potential cost-cutting or workforce reductions within large rollups as private equity owners seek AI-driven returns. Divergent rollup models are also emerging—with distinctions between platform centralization (e.g., retiring acquired brands) and decentralized, founder-friendly approaches (e.g., preserving local brands and founder involvement).
Decisions around acquisition, platform engagement, and specialization are increasingly nuanced as founders and owners evaluate their options under new market dynamics.

For MSPs and IT service leaders, these trends necessitate a measured response. The competitive risk posed by the AI-fueled scale of consolidated rollups underscores the importance of specialization, operational focus, and alignment with platform partners committed to democratizing AI resources. Community collaboration, best-practice sharing, and strategic use of vendor tools are positioned as potential mitigants to the structural disadvantages faced by smaller organizations. Governance, due diligence, and clear assessment of vendor or acquirer incentives should be prioritized, especially as service models and influencer dynamics continue to fragment. Remaining adaptable, resource-aware, and critically informed about the changing power landscape will be vital for sustainable operations.
The emergence of Moltbot, an open source AI agent designed to operate across various messaging platforms and automate tasks through local device execution, is creating new risk vectors for MSPs and IT providers. Functioning with admin-level access and connecting to services like OpenAI and Google, Moltbot's deployment has raised direct concerns around authority delegation without sufficient governance. Security researchers identified hundreds of exposed Moltbot instances, often due to misconfiguration, increasing the possibility of breaches and unauthorized data access. The episode underscores that these agents, treated as productivity tools, actually represent operational infrastructure capable of independent action, with potential impacts on client trust and regulatory liability.

Expert sources cited in the discussion, including Cisco and Hudson Rock, have labeled Moltbot a security risk due to its storage of sensitive information in plain text and broad access permissions. The narrative warns that vendors and providers may underestimate the risks by normalizing deployment before establishing proper controls. Once these agents are embedded into workflows, reversing their use becomes difficult due to client reliance on perceived efficiency. The lack of mature governance frameworks, as shown by studies from Drexel University, means that many organizations lack even basic oversight of these autonomous agents.

Adjacent industry developments highlight additional layers of operational complexity. Apple posted a 16% revenue increase, led by iPhone demand, and acquired Q AI to deepen its ambient automation capabilities, while shifting defaults that providers cannot easily influence or control.
Simultaneously, the Linux community's succession planning and Microsoft's ongoing struggles with Windows 11 reliability further demonstrate systemic issues around authority, trust, and transparency in technology ecosystems.

The episode's analysis signals clear expectations for MSPs and technology leaders: explicit approval protocols for AI agents are necessary, akin to traditional admin controls. Providers must proactively define governance boundaries, anticipate non-billable labor resulting from automation failures, and assess vendor behavior in terms of roadmap rigidity and escalation pathways. Teaching clients about authority in automated environments, not just managing installations, will reduce exposure and clarify accountability as agentic technologies become standard.

Three things to know today 00:00 Moltbot's Rise Highlights How AI Agents Are Becoming High-Risk Operators Without Governance 03:49 Record iPhone Sales and a $2 Billion AI Acquisition Signal Apple's Long-Term Control Strategy 06:04 Leadership Succession, Software Trust, and AI Agents Reveal a Shared Governance Problem This is the Business of Tech. Supported by: ScalePad
France's decision to discontinue American collaboration platforms such as Zoom and Microsoft Teams for government use—replacing them with the domestically developed Vizio platform—signals a shift toward digital sovereignty and data control within regulated jurisdictions. This move, formalized as part of France's Suite Numerique and to be implemented by 2027, highlights the increasing fragmentation of technology policy where national governments assert authority over platform selection and sensitive data handling. The development underscores operational risk for MSPs and IT service providers as assumptions of technology homogeneity across regions become unreliable.

Supporting these shifts, South Korea enacted the world's first comprehensive AI legislation, requiring mandatory labeling of AI-generated content and risk assessments for high-impact systems, such as those in hiring and healthcare. According to the transcript, 98% of AI startups in South Korea report they are not prepared for compliance. Both developments reveal a pattern: early regulatory efforts tend to produce vague requirements, unclear enforcement, and real operational complexity. Providers operating in multiple jurisdictions must now anticipate compliance fragmentation and increased overhead as regulatory regimes diverge.

Additional analysis focused on the continued evolution of the managed services stack, particularly through the lens of AI and workflow automation. Companies like Thrive are investing in enterprise platforms that embed AI-driven reasoning within workflow tools, shifting coordination away from traditional PSA ticketing systems. Meanwhile, integrations such as Quark Cyber with ScalePad's Lifecycle Manager X, and new partnerships between ServiceNow, TeamViewer, Anthropic, and OpenAI, illustrate a market splitting between providers focused on standardization and those managing more complex, enterprise-like environments.
Microsoft's financial results further highlighted this trend, with record capital expenditure on AI infrastructure and increased reliance on proprietary chips to reduce dependency on external vendors like Nvidia and OpenAI.

For MSPs, these developments raise practical governance and accountability questions. Shifts in regulatory authority and technology platforms create increased risk exposure for providers that do not proactively manage cross-jurisdictional compliance and secure defaults. Vendors are tightening control over platforms as AI becomes central to product architecture, often prioritizing internal risk management over shared upside with partners. Providers that fail to enforce robust data governance, understand cost drift, or plan for architectural lock-in are positioned less as strategic advisors and more as absorbers of client and vendor risk.

Four things to know today 00:00 France's Platform Ban and South Korea's AI Law Show Regulation Catching Up to Technology 04:23 AI Is Reshaping the MSP Tool Stack as Thrive, ServiceNow, and ScalePad Take Different Paths 07:37 Microsoft's SMTP AUTH Delay and CISA's AI Slip Show the Risk of Optional Security Controls AND 10:26 Earnings Show Microsoft Turning AI From Feature to Infrastructure as Partner Risk Grows Sponsored by: TimeZest
Global channel sales in IT are projected to exceed $4 trillion this year, with two-thirds of total spending driven by partner-led deals, according to Omdia research. However, managed service providers (MSPs) continue to encounter significant integration failures following mergers and acquisitions, leading to operational inefficiencies and diminished client trust. The Business of Tech analysis highlights that stacking acquisitions without comprehensive integration amplifies risks, particularly affecting margins, service consistency, and accountability.

Supporting survey data from POPX indicates that 60% of UK MSPs report platform and data integration as critical hurdles post-acquisition, while 44% identify poor morale and lack of team alignment as sources of inefficiency. Notably, 38% experienced client disruption during transitional periods, signaling that rapid growth without sufficient operational coherence creates drag rather than leverage. These issues are compounded by rising technology budgets—nearly 75% of organizations expect increased IT spending—and intensifying reliance on AI and cloud services in MSP environments.

Additional stories addressed include the widespread adoption of unsanctioned "Shadow AI" tools in healthcare settings, with over 40% of workers aware of unapproved usage, and the increasing tendency for AI platforms to reference general sources like YouTube over traditional medical authorities. The episode further examines new AI-driven arbitration tools, platform consolidations within managed security, and the centralization of authority across purchasing and service delivery ecosystems.
Vendor integrations, such as Synchro's marketplace partnership with Ironscales and LevelBlue's acquisition of AlertLogic's unit, illustrate a shift away from component choices towards streamlined, but potentially opaque, accountability structures.

For MSPs and IT service leaders, the central takeaway is not the urgency to adopt new tools, but the necessity to clarify ownership, governance, and liability as technology platforms accelerate efficiency and centralize control. Failure to address integration fundamentals, define formal oversight for AI-driven decisions, and maintain transparency amid automation will expose service providers to unpriced risks and erode client trust. Sustained growth is contingent upon operational discipline, not just expanding portfolios.

Four things to know today 00:00 Channel Growth Accelerates While MSP Integration Failures Threaten Margins and Trust 03:58 New Research Shows Agentic AI Adoption Outpacing Governance and Workforce Readiness 07:25 AI Interfaces, Security Consolidation, and MSP Marketplaces Point to a Shift in Where Authority Lives 10:27 AAA's AI Arbitrator Shows How Automation Changes Who Owns Decisions, Not Just How Fast They're Made This is the Business of Tech. Supported by:
AI adoption within organizations is increasingly polarized, with Gallup data cited showing that while 77% of technology professionals use AI at work, overall workplace adoption rose only marginally from 45% to 46% in late 2025. This stagnation is attributed not to employee reluctance, but to aggressive uptake by leadership without corresponding redesign of roles and workflows at lower organizational levels. In the UK, research presented notes an 8% net job loss tied to AI alongside an 11.5% productivity increase, with younger workers expressing heightened concern over future employment security.

Supporting analysis emphasizes that AI utilized only in decision-making circles can compress organizations, trading resilience for short-term efficiency. Dave Sobel cautions that celebrating productivity gains without acknowledging operational fragility introduces organizational brittleness, as headcount reductions outpace tangible capability improvements across all layers. The discussion underscores the risk in pitching AI as a leadership tool without regard for its broader impact.

Additional topics include the risks of encryption practices—specifically Microsoft's BitLocker—and the limits of user control over recovery keys when stored in the cloud. Dave Sobel highlights governance failures when MSPs assume encryption equates to privacy without explicit decisions regarding key custody and authority, noting that silent trade-offs can expose organizations to privacy vulnerabilities. Furthermore, coverage of CISA's absence from the RSA conference outlines how diminished federal engagement increases liability and ambiguity for MSPs tasked with interpreting security policy.
New video authentication features from Ring are examined as evidence of a broader shift where provenance and chain of custody outweigh convenience, directly affecting the evidentiary value of managed data.

The overarching implication for MSPs and IT providers is clear: risk, authority, and liability are being systematically reallocated within the supply chain and between vendors, government, and service providers. Operational preparedness now depends on explicit documentation, governance choices, and advance recognition of liability transfer. Failing to adapt—by leaving deployment decisions, key management, and evidentiary workflows unexamined—may result in organizational fragility, legal exposure, and loss of client trust.

Four things to know today 00:00 Stalled AI Adoption and UK Job Losses Show Productivity Gains Are Not Broadly Shared 04:06 BitLocker Encryption Allows Microsoft Access to Recovery Keys Stored in the Cloud 06:21 CISA Breaks From Past Practice, Declines RSA Conference Appearance 08:36 Ring Uses Cryptographic Seals to Verify Video Authenticity as Evidence Trust Becomes a Governance Issue This is the Business of Tech. Supported by: https://scalepad.com/dave/
Global managed services contracts are experiencing reduced momentum as buyers display notable hesitation to commit to long-term agreements during a period defined by organizational pivots toward artificial intelligence. The Information Services Group reported only a 1.2% quarter-over-quarter increase in large managed services contracts in late 2025, totaling $10.9 billion, with full-year growth barely above 1%. While U.S. activity partially offsets contractions in EMEA and APAC, the prevailing environment is one of caution, shaped less by CIOs and more by business and finance leaders redirecting budgets to support internal AI initiatives and flexible operating arrangements.

The growth in technology distributor activity in North America highlights increased market fragmentation rather than expanded service levels. Omdia Tech Services data indicates distributor billings grew almost 15% in 2024, reaching $16.6 billion, with over 72% of transactions concentrated among six distributors. Most billings originated with technology advisors, and both value-added resellers and MSPs contributed smaller shares. This shift points to a market emphasizing flexible sourcing—with more intermediaries and shorter deals—but raises questions about MSP control, as authority and accountability can become diluted.

Intel's latest financial disclosures reveal persistent supply and execution challenges in delivering AI infrastructure solutions. Despite exceeding earnings expectations, weak revenue forecasts and admission of supply constraints resulted in a 13% decrease in company stock. The vendor attributed its underperformance to capacity shortages and forecasting issues, underscoring the risks MSPs now face in hardware planning for AI deployments.
Additionally, the commoditization of key offerings such as Microsoft 365 backup and the automation of technology review processes further compress execution margins, reducing traditional revenue sources for service providers.

For MSPs and IT leaders, these developments reinforce the need to reassess risk allocation, authority, and pricing models in client engagements. With execution becoming both cheaper and less differentiated, value must shift toward governance, outcome accountability, and explicit decision ownership. Delays or misjudgments related to hardware supply and service fulfillment present direct threats to project continuity and client satisfaction, emphasizing the importance of operational flexibility, active vendor management, and strategic repositioning of service offerings.

Three things to know today 00:00 As Managed Services Stall Globally, Distributor-Led IT Buying Gains Momentum 04:58 Intel Beats on Earnings but Misses on Confidence as AI Demand Outpaces Capacity 07:27 As Backup and Reviews Are Automated, MSP Differentiation Shifts from Execution to Decision Ownership This is the Business of Tech. Supported by: https://scalepad.com/dave/
This Business of Tech episode delves into the critical alignment of technology with how people work, emphasizing the strategic advantage for businesses, particularly those leveraging Apple ecosystems and remote teams. Rob Calvert, President of Second Son Consulting, highlights common misconceptions in IT, where decisions are often made in a vacuum without considering company culture or workflows. This disconnect leads to daily friction and hinders growth. Calvert shares an example of implementing zero-touch MDM, where the technological aspect is straightforward, but the real challenge lies in adapting workflows and company culture to accommodate remote hiring and device deployment timelines, ultimately enabling faster growth with less operational friction.

The discussion underscores the importance of integrating IT decisions with broader business objectives. Calvert explains that for small to mid-sized businesses, understanding and defining existing workflows is a crucial first step. His firm's process involves auditing technology platforms, establishing role-based standards for new hires, and documenting procedures for onboarding and offboarding. This systematic approach, exemplified by streamlining onboarding from hours to minutes, ensures that technology serves as an asset rather than an obstacle, optimizing efficiency and security.

Further insights are provided on security and compliance within Apple-centric environments, contrasting them with Microsoft-centric approaches. Key differences include procurement styles, the utilization of Apple Business Manager, and the implementation of non-removable MDM for enhanced security and control.
The episode also touches on the growing impact of AI, with a focus on enabling local, on-device AI to address privacy concerns and accelerate business processes like proposal writing and research, while emphasizing the need for leadership to guide AI adoption and manage associated security implications.

For MSPs and IT service leaders, the episode offers actionable strategies for improving client IT infrastructure. It stresses the value of aligning technology with specific business workflows and company culture to reduce friction and boost productivity. The discussion on Apple-centric IT and AI adoption provides practical guidance on managing devices, implementing robust security measures, and leveraging new technologies responsibly. The emphasis on creating standardized, documented processes for onboarding and offboarding, while remaining flexible to client needs and potential risks, is a key takeaway for enhancing service delivery and client satisfaction.
The episode centers on structural changes in the Managed Service Provider (MSP) mergers and acquisitions (M&A) landscape, with a focus on the increased influence of private equity (PE), platform strategies, and disciplined deal execution. Dave Sobel and Abraham Garver highlight that the primary driver for buyers has shifted from merely acquiring revenue to seeking operating models that support scale, standardization, and automation. The size of institutional funds directly shapes acquisition targets: funds with $500 million or more increasingly pursue MSPs with minimum EBITDA thresholds, commonly $3–5 million, with larger funds only able to transact at the $10–15 million EBITDA level or above. This signals a market separation, where smaller MSPs face heightened risk of being excluded from future platform opportunities.

Supporting these structural shifts, Abraham Garver explains that the buyers' value assessment increasingly prioritizes new customer acquisition over one-off gains from cross-sales like cybersecurity add-ons. Organic growth, shown through the consistent addition of new client logos, outweighs temporary revenue boosts in determining valuation. The episode also outlines that AI investment and automation stories are not materially lifting valuations for smaller MSPs, unless directly reflected in improved financials. Larger providers may have the resources to invest meaningfully in AI, but for the majority—especially those below $10 million in revenue—outsourcing or leveraging third-party solutions is more practical than bespoke, high-cost internal development.

A further operational risk discussed is the prevalence of "retrading"—buyers renegotiating valuations post–Letter of Intent (LOI) based on due diligence findings. Abraham Garver reveals that 60% of transactions see price reductions after the LOI, often for factors such as recent customer losses or missed forecasts, diverging from initial headline multiples.
This reality highlights the importance of diligent contract negotiation, clear documentation, and the value of experienced advisors to navigate buyer tactics. Rob Calvert contributes additional insight on workflow and technology alignment, emphasizing the role of standardized onboarding and offboarding processes in reducing both operational friction and security gaps.

For MSPs and IT service providers, the discussion clarifies several critical implications. First, with platform buyers seeking scale, only MSPs meeting explicit EBITDA and growth metrics will attract competitive offers; others should realistically assess the cost and likelihood of reinvention versus sale. Second, buyers' focus on execution and organic growth, not headline multiples or claims of technological advancement, makes robust financial performance and client acquisition strategies essential to preserving value. Third, the commonality of post-LOI repricing underlines the need for rigorous pre-sale diligence, explicit contractual terms, and experienced representation to preserve deal value and protect against downside risk. Lastly, operational standardization—especially in device and data management—remains central to both platform attractiveness and risk mitigation.
Anthropic's disclosure of model drift within its Claude AI system highlights growing risks surrounding governance and ongoing alignment of artificial intelligence. The company has revised its guidelines using a “Constitutional AI” approach, aiming to instill reason-based behavior and ethical boundaries, and has openly acknowledged that an AI's internal controls may shift unpredictably over time—a concern when models are deeply embedded in business workflows. This admission places attention on governance and accountability rather than just model safety, making clear that the AI a company tests may become materially different after extended deployment, especially as personalization increases.

Supporting these concerns, Anthropic's research demonstrated that large language models—including those from Google and Meta—can experience personality drift, with unintended shifts in behavior due to instability of internal control mechanisms. Google's updated AI offerings, tying personal data from Gmail and Photos to generative model responses, intensify challenges around data governance and organizational control. As vendors expand AI personalization and memory features, oversight gaps can emerge, raising questions about who retains authority over information, inference, and decision-making within automated systems.

Adjacent findings indicate that the anticipated productivity gains from AI have yet to reach most enterprises. According to surveys cited by Dave Sobel, over half of CEOs report failing to realize ROI from AI investments, while frontline employees describe AI integrations as sources of friction and additional workload rather than relief.
In the MSP sector, widespread adoption of “agentic” AI and digital labor is delivering financial upside for some providers, but it is also shifting operational liabilities—especially as contracts and security architectures lag behind new workflow realities.

The core takeaway for MSPs and IT service providers is the necessity of reexamining control, authority, and contractual obligations in AI-enabled environments. Delegating tasks to automated agents increases exposure to unpriced and unmitigated risks if governance, liability, and monitoring mechanisms do not adapt accordingly. Effective harm reduction in this landscape requires treating workflows—not just models—as security perimeters, clarifying accountability for AI-driven actions, and ensuring that contractual and operational frameworks reflect these new sources of risk.

00:00 AI Governance Moves Center Stage as Models Drift and Personalization Deepens 05:08 AI Boosts Executive Productivity While Frontline ROI and Employee Experience Lag 07:51 AI Exposes the Real Divide: Governance Failures vs. Effective Oversight in Government Systems 10:39 MSPs Chase AI-Driven Margins, but Workflow Security and Liability Define the Real Risk This is the Business of Tech.
Escalating distrust in identity systems and misuse of AI are forcing a shift in security accountability for small and midsize businesses. Recent analysis highlights that the prevalence of deepfake-driven business email compromise and non-human digital identities is eroding confidence in traditional protective solutions. According to TechAisle and supporting reports referenced by Dave Sobel, the ratio of non-human to human identities in organizations is now 144:1, further complicating authority and responsibility for managed service providers (MSPs). As trust in exclusive third-party control disintegrates, co-managed security models are becoming standard, repositioning decision-making and liability.

The rise of AI-generated data—described as “AI slop”—has prompted increased adoption of zero trust models, with 84% of CIOs reportedly increasing funding for generative AI initiatives. However, as rogue AI agents are recognized as a significant insider threat, current security services are often ill-equipped to manage these new vulnerabilities. Regulatory bodies, including CISA, have issued guidance noting that the integration of AI into critical infrastructure introduces greater risk of outages and security breaches, particularly when governance remains ambiguous. High-profile vulnerabilities in open-source AI platforms used within cloud environments further highlight the persistence of operational risks.

Adjacent technology updates include new releases from vendors such as 1Password, WatchGuard, JumpCloud, and ControlUp. These offerings focus on enhancing phishing prevention, expanding managed detection and response, and automating endpoint management for MSPs. However, Dave Sobel emphasizes that these tools introduce additional layers of automation and integration without adequately clarifying who ultimately holds authority and accountability when failures or breaches occur.
There is a consistent warning that stacking solutions or outsourcing core functions without redefining operational control creates gaps between action and oversight.

For MSPs and IT leaders, the key takeaway is that security risk is no longer defined by missing technology but by unclear governance, undefined authority, and misaligned incentives. Without explicit contractual and operational delineation of responsibility when deploying AI and automation, service providers are increasingly exposed to liability by default. The advice is to move beyond tool-centric strategies and focus on process clarity: define who authorizes, audits, and terminates non-human identities; establish which parties approve automation actions; and ensure clients understand shared responsibilities to mitigate silent risk accumulation.

Four things to know today 00:00 TechAisle Warns SMB Security Will Shift in 2026 as Identity Attacks and AI Agents Redefine Risk 05:44 AI Moves Deeper Into Critical Infrastructure as Open-Source and Human Weaknesses Expand the Attack Surface 09:35 MSP Security Platforms Automate Phishing Prevention and MDR—Outpacing Governance and Control Models 12:12 AI-Powered MSP Tools Promise Control and Efficiency, But Shift Responsibility by Default This is the Business of Tech. Supported by: https://scalepad.com/dave/
PC spending has seen a significant rebound, with Gartner reporting a 9.3% rise in worldwide PC shipments in late 2025, primarily driven by corporate IT upgrades to meet Windows 11 requirements. This recovery, which saw 10.1% growth in Q4 2025 according to Omdia data, highlights a shift from consumer-led demand to necessity-driven upgrades. Despite supply chain challenges in memory and storage, leading to cost increases, 57% of B2B partners anticipate growth in their PC business, underscoring a sustained demand for hardware management and support among MSPs.

Concurrently, worldwide spending on artificial intelligence is projected to reach approximately $2.5 trillion by 2026, a 44% increase from the previous year, according to Gartner. This surge is fueled by substantial investments in AI infrastructure, which is expected to account for $1.37 trillion of the total spending. John David Lovelock of Gartner emphasizes that AI adoption success hinges not only on financial investment but also on organizational maturity and self-awareness, suggesting that the value derived from this investment is not yet as certain as the spending itself. For MSPs, this indicates a growing need to navigate the complexities of AI infrastructure deployment and demonstrate tangible value to clients.

In the realm of managed services, recent strategic moves by several companies signal an evolving MSP landscape. Corsica Technologies announced 105% year-over-year growth in managed services bookings for 2025 and expanded its portfolio through acquisition, aiming for consolidation and integrated offerings. Net at Work nearly doubled its managed services division size by acquiring a regional competitor, prioritizing scale. Rhubarb IT, spun out from Mac Center, is focusing on a niche Apple-focused IT managed services model, aiming for differentiation.
These expansions highlight varying strategies—consolidation, scale, and specialization—that MSPs must consider when evaluating market opportunities and competitive positioning.

The implications for MSPs are multi-faceted. The PC market's recovery emphasizes the continued importance of hardware lifecycle management and support services. The explosive growth in AI spending necessitates careful evaluation of infrastructure versus value, with potential risks for organizations rushing capacity purchases without clear demand justification. Furthermore, the diverse expansion strategies among MSPs underscore the need for clear operational, contractual, and financial planning to manage integration, delivery consistency, and customer expectations. The appointment of Rob Rae as a strategic advisor to Guardz highlights the critical need for transparency in vendor relationships, particularly concerning incentives, as undisclosed financial arrangements can introduce bias and risk for MSPs who rely on objective evaluation of technologies and partners.

Four things to know today 00:00 PC Spending Reflects Operational Necessity While AI Spending Bets on Unproven Demand 03:57 OpenAI Promises to Offset Energy and Water Impact as AI Infrastructure Outpaces Regulation 05:45 MSP Growth Paths Diverge as Corsica, Net at Work, and Rhubarb IT Make Different Strategic Bets 09:09 Guardz's Rob Rae Advisory Appointment Raises Transparency and Governance Questions for MSPs This is the Business of Tech. Supported by: https://cometbackup.com/?utm_source=mspradio&utm_medium=podcast&utm_campaign=sponsorship
OpenAI is introducing advertisements into ChatGPT for free and ChatGPT Go users, aiming to fund artificial general intelligence development. These ads will be clearly labeled and separate from AI responses, with OpenAI stating user conversations will remain private and data will not be sold. Ads will be excluded from sensitive topics. Users can avoid ads only by upgrading to higher-priced paid subscriptions; the new $8/month ChatGPT Go tier offers increased limits and access to the latest model but still features advertisements. This move signifies a monetization strategy, with OpenAI reporting significant revenue growth for ChatGPT.

The broader impact of AI on jobs is also discussed, with data suggesting job losses attributed to AI may be overstated. While some roles are affected, particularly entry-level positions, the overall employment impact appears limited. Reports indicate that AI is often used as a justification for layoffs driven by economic factors or overhiring, rather than being the sole cause. The analysis highlights that AI's productivity gains are currently modest, requiring substantial increases to drive large-scale job replacement. However, the International Monetary Fund estimates nearly 40% of global jobs are at risk due to AI, with a growing demand for new skills that offer a wage premium.

Automation and platform integrations are accelerating, with ConnectWise acquiring ZofIQ to automate service desk operations within its PSA workflow. D&H is expanding its logistics capabilities by acquiring fulfillment.com, enhancing its supply chain services. Microsoft MVPs are collaborating to showcase free Intune management tools to help organizations manage their Intune environments more effectively. These developments indicate a trend towards deeper platform integration and automation within IT service delivery and logistics.

For MSPs and IT service providers, these developments highlight several critical considerations.
The introduction of ads in AI tools like ChatGPT raises questions about trust and governance, particularly when these tools are integrated into client-facing workflows. The slowdown in hiring, especially for junior roles, underscores the need for strategic talent development to avoid future capacity gaps. Furthermore, the increasing automation within platforms and services, while offering efficiency, necessitates careful management of counterparty risk, clear contractual definitions of authority, and redefined pricing models to account for shifting liability and decision-making. Vendors retreating from emerging technologies like Meta's VR business also underscore the importance of diligent vendor selection and managing the credibility cost associated with adopted technologies.

Four things to know today

00:00 Jobless Claims Fall as Small Businesses Pull Back on Hiring, Especially Entry-Level Roles
05:45 OpenAI Adds Ads to ChatGPT as It Scales Revenue, Expands Go Tier, and Deepens Enterprise and SMB Adoption
09:17 Automation Moves From Tools to Authority as ConnectWise, D&H, and Intune Ecosystems Shift Control—and Risk
13:05 Meta's Retreat from Business VR Leaves MSPs Managing Cleanup, Data Deletion, and Client Expectations

This is the Business of Tech. Supported by: https://scalepad.com/dave/
The discussion highlights the limitations of traditional cybersecurity training methods, emphasizing that a psychology-informed approach with positive reinforcement is crucial for developing genuine cyber literacy within organizations. Traditional "gotcha" tactics, such as fake phishing tests, are shown to be ineffective and can even lead to increased clicks, according to research from the University of Zurich and Black Hat. This approach risks creating a false sense of security without genuinely improving user behavior.

Craig Taylor, CEO of Cyberhoot, advocates for a positive reinforcement model rooted in operant conditioning principles, where rewarded behaviors are repeated and internalized. This strategy is implemented through gamified modules, such as interactive "Hootfish" exercises that guide users in identifying threats with in-moment assistance. Progress is tracked via avatars that mature with learning, and an anonymous company leaderboard encourages engagement, particularly motivating management to complete assignments. These elements aim to foster intrinsic motivation for security best practices rather than relying on external pressure or punishment.

The conversation also delves into the challenges of measuring security progress, noting that traditional phishing tests often fail to capture a complete picture of an organization's security posture, particularly with C-suite employees who may not engage with such tests. The episode touches upon the complexities of evolving cybersecurity threats, including AI-powered personalized attacks, and the inherent difficulties in relying solely on human training against sophisticated adversaries.
Furthermore, the discussion addresses the lack of accountability for cybersecurity vendors with faulty software, contrasting it with product liability in other industries, and the debate around the absence of consistent federal regulations for AI and data privacy in the US compared to Europe's GDPR.

For MSPs and IT service leaders, this episode underscores the need to adopt more effective, psychology-driven security awareness programs that focus on positive reinforcement and intrinsic motivation. It highlights the limitations of purely technical or punitive cybersecurity measures and emphasizes the importance of a comprehensive strategy that combines user education with robust technical defenses. The discussion also serves as a reminder for MSPs to critically evaluate vendor security practices and to advocate for stronger accountability and clearer regulatory frameworks to protect client data and services.
The discussion centers on the rapidly evolving landscape of AI adoption within businesses, highlighting a significant gap between individual user experimentation and formal organizational strategies. While employees across various departments are actively using AI tools for productivity gains, many executive leaders remain hesitant to formally adopt or fund these technologies due to concerns about governance and risk. This creates a "skunk works" environment where individual adoption outpaces official oversight, leading to a potential shadow IT problem for organizations. For MSPs, this presents an opportunity to offer AI governance and risk assessment services, helping clients navigate the complexities of safe and effective AI integration.

The episode underscores the compressed adoption cycle of AI compared to previous technology waves like cloud and cybersecurity. While those technologies took years to move from early adopters to mainstream paid engagements, AI has accelerated this process into months. However, this rapid adoption has also exposed new pitfalls, particularly concerning the unknown risks of AI, such as intellectual property disclosure or compliance breaches. The conversation emphasizes that the focus for businesses is shifting from the capabilities of AI to the potential consequences of its misuse, making risk assessment a critical concern for IT service providers looking to guide their clients.

A significant portion of the discussion addresses the growing concerns around delegated responsibility and the potential for autonomous systems to make decisions without human oversight. As AI-driven automation expands, the liability for errors or unintended consequences will increasingly fall on IT service providers who implement these systems, rather than solely on the software vendors.
This is expected to lead to a "contract lag" as existing agreements fail to account for new AI-related liabilities, prompting a need for updated contractual language and risk allocation frameworks. MSPs will need to proactively address these emerging liabilities to build trust and secure new business in the AI-driven IT services market.

The episode concludes by examining the dichotomy between highly technical users who approach AI with incremental caution within existing compliance frameworks, and business users who are adopting AI more rapidly, sometimes without fully considering regulatory or security implications. This necessitates tailored governance strategies for different user groups. For MSPs, the immediate opportunity lies in offering risk assessment and advisory services to help organizations understand their AI exposure. Furthermore, the industry faces a crucial challenge in evolving from rigid standardization to adaptable service models that can accommodate the unique risks and opportunities presented by AI, ultimately determining relevance and avoiding obsolescence in the rapidly changing tech landscape.
Apple has introduced Creator Studio, a subscription-based suite that embeds AI-assisted features directly into familiar productivity and creative tools while maintaining strict control over interfaces and user experience. Alongside this launch, Apple confirmed a multiyear partnership with Google to use Gemini and Google Cloud as foundational AI infrastructure, reportedly involving annual payments of around $1 billion. The approach reinforces Apple's strategy of treating AI models as interchangeable components while retaining authority at the application layer, shifting responsibility for governance and oversight away from the platform and toward downstream users and advisors.

Google, meanwhile, expanded Gemini through a new Personal Intelligence feature that can reason across Gmail, Photos, Search, and YouTube data for consumer accounts. Available initially to paid subscribers and requiring explicit consent, the capability highlights Google's advantage in contextual data rather than model novelty. By keeping the feature out of Workspace for now, Google appears to be setting user expectations in consumer environments before enterprise deployment, a move that may influence how business users evaluate AI-enabled decision support in the future.

Pax8 disclosed a data leak affecting approximately 1,800 MSP partners after an internal spreadsheet was mistakenly shared with a limited number of recipients. While no personally identifiable information was exposed, the data included licensing and commercial details that could be used for competitive intelligence or targeted attacks.
The incident coincides with Pax8's rapid international expansion, new regional offices, and growing reliance by MSPs on its marketplace for procurement and security tooling, including the recent addition of Cork Cyber's risk intelligence platform.

Taken together with renewed attention on AI governance, the Secure by Design initiative, and guidance on when to apply GenAI versus traditional code, the episode underscores a widening gap between automation and authority. Surveys show a majority of IT leaders now prioritize AI governance, reflecting concern over accountability, data flows, and failure handling. For MSPs and IT service providers, these developments reinforce the need to clearly define who has the power to approve, pause, or override AI-driven systems and platform dependencies, as clients increasingly expect service providers to explain and manage outcomes they may not fully control.

Four things to know today

Apple's Creator Studio and Google Partnership Show a Strategy Built on Control, Not AI Ownership
As Gemini Reasons Across Gmail, Search, and YouTube, Google Redefines AI Advantage Around Context
Pax8 Data Leak, Rapid Expansion, and Marketplace Growth Expose Risk Shift to MSPs
AI Governance, Secure by Design, and GenAI Adoption Reveal a Growing Authority Gap for MSPs

This is the Business of Tech. Supported by: https://scalepad.com/dave/
This episode examines why growing concern over AI-driven skills obsolescence is less about workforce displacement and more about authority, accountability, and liability for MSPs. As AI systems increasingly triage tickets, remediate issues, and shape outcomes, MSPs are absorbing responsibility for decisions made by tools they did not design and cannot fully audit. The mismatch between AI-driven operations and pre-AI contracts, SLAs, and pricing models creates a widening risk gap that directly threatens margins and client trust.

The show then turns to AI infrastructure, focusing on Microsoft's response to rising power and water costs tied to data center expansion. While public commitments emphasize cost control and community investment, the underlying reality for IT service providers is continued volatility. AI workloads remain energy-intensive and politically sensitive, and those costs are likely to be passed downstream. MSPs that price AI-dependent services on today's assumptions risk margin erosion when infrastructure costs shift faster than contracts can be updated.

Next, the episode explores how workplace AI tools from Anthropic and Slack are moving beyond assistance into shaping finished work. By summarizing conversations, organizing files, and producing artifacts that become the default record, these tools quietly define "what happened." For MSPs, this pulls them deeper into advisory territory, as AI-generated outputs influence decisions, accountability, and client understanding—often without clear acknowledgment of what context or nuance was lost.

Finally, the episode connects a wave of AI-driven acquisitions to a single strategic thread: vendors racing to own not just insight, but action. As platforms consolidate signals across usage, identity, cost, and observability, the pause between insight and execution disappears. For MSPs, the risk is not being replaced outright, but being sidelined as platforms decide faster than humans can intervene.
The path forward is not resisting consolidation, but asserting value where judgment, context, and governance still matter.

Four things to know today

00:00 Report Warns 40 Percent of IT Skills May Become Obsolete as AI Reshapes Work
04:42 Microsoft's AI Data Center Commitments Highlight the Growing Cost and Governance Risks of AI Infrastructure
07:16 Anthropic and Slack Expand AI From Assistance to Shaping Finished Work
11:00 AI-Driven Acquisitions Show Vendors Consolidating Signals to Move Faster From Insight to Action

Supported by: https://cometbackup.com/
Rising workplace use of artificial intelligence is outpacing organizational governance, according to data from Microsoft and Gallup. Microsoft reports global AI adoption reached 16.3% in 2025, while Gallup finds nearly half of U.S. workers use AI tools at work at least annually. Despite that usage, only a minority of employees report clear employer guidance on AI ownership and purpose, creating accountability gaps that frequently surface during incidents or audits.

Additional data underscores uneven adoption and oversight. Microsoft's AI Economy Institute notes adoption rates in the Global North are nearly double those in the Global South, correlating with earlier infrastructure and policy investment. Within organizations, most AI usage remains occasional rather than daily and is concentrated in knowledge roles, suggesting informal, user-driven deployment rather than standardized programs—conditions that complicate governance for MSP-supported environments.

Microsoft's product moves further elevate the governance issue. The company is testing policies allowing IT administrators to uninstall Copilot on managed devices while simultaneously enforcing Windows and Office end-of-life timelines through 2026 and embedding purchasing directly into Copilot workflows. These changes expand administrative control but also place AI more firmly inside operational and economic decision paths that MSPs help manage.

Platform announcements from Acronis, Hexnode, and Google extend automation from assistance to execution, while public comments from Nvidia CEO Jensen Huang and Linux creator Linus Torvalds highlight differing views on AI speed versus discipline. For MSPs and IT service providers, the practical takeaway centers on accountability: as AI systems take actions rather than make suggestions, governance, policy definition, and oversight become explicit services rather than implied responsibilities.
Four things to know today

00:00 AI Use Expands at Work, but Employees Say Transparency and Ownership Are Missing
04:37 Microsoft Lets IT Uninstall Copilot as Windows and Office End-of-Life Deadlines Near
07:38 Acronis Launches Archival Storage as Hexnode and Google Advance Platform-Centric Automation
11:07 Jensen Huang Warns Against AI Regulation as Linus Torvalds Limits AI's Role in Critical Code

This is the Business of Tech. Supported by: https://scalepad.com/dave/
Slowing U.S. job growth alongside rising labor productivity highlights how organizations are replacing hiring with automation and AI-driven systems. Government labor data shows job growth in 2025 fell to roughly 584,000 positions, while productivity rose nearly five percent in the third quarter, allowing output to increase without additional staff. According to CompTIA, demand for AI-related skills rose more than 100 percent year over year, even as overall tech employment declined. For MSPs, this signals a shift where customers rely less on internal teams and more on external providers to absorb operational responsibility when automated systems fail.

Survey data from TechAisle indicates that small and midmarket businesses are redirecting technology spending away from basic digitization toward autonomous, outcome-driven systems. The research, based on responses from 5,500 firms, shows profitable growth and cost control as top priorities for 2026, with increased adoption of generative AI, agentic automation, and managed security services. At the same time, rising RAM and storage prices—driven by AI data center demand, according to TrendForce—are delaying PC refresh cycles and pushing workloads into cloud environments, changing where performance, security, and cost risks surface.

Vendor signals remain mixed. Kaseya reported layoffs affecting five percent of its workforce, following earlier reductions, while TD Synnex and Samsung reported strong revenue growth tied to AI infrastructure, memory, and server demand. Distributors cite continued hardware refresh activity, yet repeated workforce cuts at vendors suggest internal cost corrections rather than demand collapse. For MSPs, this combination increases environmental complexity, with longer device lifecycles, higher component costs, and more heterogeneous platforms to support.

Operational AI announcements further extend decision-making authority into automated systems.
New healthcare, printing, and service desk tools embed AI into intake, routing, authorization, and workflow execution, often acting before human review. For MSPs and IT service providers, the central issue is not efficiency gains but accountability: when AI-driven processes misroute work, generate compliance errors, or escalate incidents incorrectly, responsibility frequently defaults to the operator. The episode underscores the need for clearer governance, pricing, and contractual boundaries as AI assumes functional authority inside managed environments.

Four things to know today

00:00 Slowing Job Growth, Rising Productivity, and AI Adoption Shift Operational Responsibility to Providers
05:26 TechAisle Data Shows SMB Focus Moving From Digitization to Autonomous, Outcome-Driven Systems
09:19 Kaseya Cuts Staff as Distributors and Chipmakers Report Strong AI-Driven Demand
14:27 Operational AI Advances as Vendors Embed Automation Into Intake, Routing, and Authorization

This is the Business of Tech. Supported by: https://cometbackup.com/?utm_source=mspradio&utm_medium=podcast&utm_campaign=sponsorship
Procurement is experiencing a significant shift as organizations increasingly adopt automation and artificial intelligence (AI) to enhance sourcing, purchasing, and spend management processes. Kevin Frechette, CEO and co-founder of Fairmarkit, discusses how their platform enables enterprises to transition from manual procurement methods to AI-driven autonomous sourcing. This shift allows procurement teams to handle up to ten times more sourcing events per buyer and potentially save $40,000 weekly per full-time employee (FTE) by streamlining operations and reducing the time spent on repetitive tasks.

Frechette highlights a case study involving a Boston-based customer that reduced the time required to clarify requests and set up sourcing events from 40 minutes to just 2 minutes through Fairmarkit's automation capabilities. The platform employs AI to facilitate various stages of the sourcing process, including demand capture, supplier identification, and event awarding. However, Frechette emphasizes the importance of maintaining human oversight in certain areas, particularly where nuanced decision-making is required, ensuring that AI complements rather than completely replaces human judgment.

The conversation also touches on the evolving role of procurement professionals in the face of automation. While some experienced workers may resist change, Frechette notes that younger professionals are often more adaptable and eager to embrace new technologies. This generational shift could lead to a more innovative approach to procurement, as new entrants to the field leverage AI tools to enhance their decision-making capabilities and drive efficiency.

For Managed Service Providers (MSPs) and IT service leaders, the implications of these developments are clear. Embracing AI and automation in procurement can lead to significant operational efficiencies and cost savings.
However, it is crucial for organizations to establish a framework for human involvement in decision-making processes, ensuring that the benefits of AI are maximized while maintaining accountability and oversight. As the landscape of procurement continues to evolve, staying informed and adaptable will be essential for MSPs and IT providers looking to remain competitive.
The recent acquisition of Small Biz Thoughts and IT Service Provider University by MSP Radio marks a significant shift in the landscape of resources available to Managed Service Providers (MSPs). This acquisition aims to ensure the continued stewardship of valuable intellectual property, including books and community resources, while allowing founder Karl Palachuk to refocus on his original goals of writing, speaking, and traveling. The deal emphasizes the importance of maintaining community engagement and enhancing the value of existing assets for the benefit of MSPs.

Karl Palachuk discussed the filters he applied when selecting a buyer, prioritizing compatibility and the potential for growth within the community. He expressed a desire for the new ownership to actively utilize the acquired assets to foster a thriving environment rather than allowing them to stagnate. The conversation highlighted the importance of community in the tech industry, where collaboration and shared knowledge have historically driven success.

In addition to the acquisition, the episode touched on the evolving role of AI in the MSP sector. Palachuk noted that while AI is set to enhance productivity, it will also necessitate a shift in the skills required for technicians and service providers. The discussion underscored the need for MSPs to adapt to these changes, as the industry faces a wave of mergers and acquisitions that could reshape service delivery models.

For MSPs and IT service leaders, the implications of these developments are clear. The acquisition represents an opportunity to access a wealth of resources and knowledge while navigating the challenges posed by AI and market consolidation. Engaging with the Small Biz Thoughts community can provide valuable insights and support as MSPs work to enhance their service offerings and adapt to the changing landscape of technology and client needs.
Intel has launched its Core Ultra Series 3 central processing units, utilizing its new 18A process technology, which aims to enhance performance and efficiency across various applications, including gaming and professional workloads. This development is part of Intel's strategy to regain competitiveness in the CPU market, which has faced increasing pressure from rivals. The new processors promise improved performance per watt compared to previous generations, with further specifications expected soon. This advancement in chip technology is significant for Managed Service Providers (MSPs) as it enables the feasibility of edge AI applications, which require careful consideration of workload clarity and governance.

Lenovo introduced Cura, an AI assistant designed to operate seamlessly across its computers and Motorola smartphones, emphasizing on-device processing and user privacy. This system-level AI aims to adapt to user habits over time, assisting with tasks such as email drafting and meeting summarization. However, the episode highlights a concerning trend where many users do not fully utilize existing tools, as evidenced by Microsoft's Copilot user statistics. The discussion underscores the importance of governance in AI deployment, as successful enterprise AI implementations, like those from Siemens, demonstrate that explicit authority and responsibility are crucial for effective outcomes.

The episode also addresses the ongoing hype surrounding robotics and automation, noting that while advancements are being made, the reality remains that specialized robots are more practical than general-purpose ones. Companies are focusing on single-purpose robots, which contrasts with the expectation of multifunctional robots.
The discussion emphasizes that automation in IT should follow a similar path, advocating for narrow automations with explicit authority to avoid misunderstandings and failures that could lead to accountability issues for MSPs.

For MSPs and IT service leaders, the key takeaway is the necessity of redefining governance and responsibility in the face of advancing automation and AI technologies. As systems of action become more prevalent, the shift from traditional dashboards to autonomous decision-making systems requires MSPs to update their contracts and governance models accordingly. The opportunity lies not in simply adopting new technologies but in understanding where automation should be limited and ensuring that accountability is clearly defined to mitigate risks associated with automated systems.

Three things to know today

00:00 Intel, Lenovo, and Siemens Signal AI Acceleration, Not Automatic Value, for IT Services
06:02 CES 2026 Reveals Why Specialized Robotics and Disciplined Automation Deliver ROI Faster Than General AI
09:34 Agentic AI, Action-First Platforms, and the End of Forgiving IT Systems Put New Accountability on MSPs

This is the Business of Tech. Supported by:
Google has introduced an AI-powered inbox view for Gmail, designed to enhance user experience by transforming the traditional email interface into a personalized platform that includes to-do items and topic summaries. This feature, currently available to trusted testers in the U.S. for consumer Gmail accounts, aims to help users manage their emails more effectively. However, concerns have been raised about the potential for users to feel overwhelmed by excessive to-do suggestions, and Google has stated that users can opt out of these AI features. The company also reassured users that their Gmail content is not utilized for training AI models.

OpenAI has launched ChatGPT Health, a tool that allows users to ask health-related questions in a secure environment while connecting their medical records and wellness apps. Although the tool is not intended for diagnosis or treatment, it raises significant concerns regarding safety and privacy, particularly in sensitive areas like mental health. OpenAI has collaborated with over 260 physicians to refine the model, but the lack of full compliance with the Health Insurance Portability and Accountability Act (HIPAA) has led to calls for caution, especially for users with health anxiety. The implications of these tools extend beyond user convenience, as they redefine the nature of work and authority in digital environments.

The episode also discusses the growing backlash against AI infrastructure, particularly in local communities where data centers are being proposed. Reports indicate that opposition is rising across the political spectrum, with residents voicing concerns about the environmental and economic impacts of these developments. Additionally, polling data shows that 80% of American adults believe the government should regulate AI, reflecting a significant political opportunity for the Democratic Party.
This sentiment is prompting leaders to adopt more vocal stances against AI, as many voters feel threatened by its rapid advancement.

For Managed Service Providers (MSPs) and IT service leaders, these developments underscore the importance of understanding the evolving regulatory landscape surrounding AI technologies. As states implement new laws addressing AI safety and consumer rights, MSPs must navigate the complexities of compliance and governance. The episode highlights the necessity for providers to establish clear boundaries regarding AI's influence in client environments, ensuring accountability and minimizing liability risks. As AI continues to integrate into various aspects of technology, the need for informed decision-making and proactive engagement with regulatory changes becomes increasingly critical.

Four things to know today

00:00 Google and OpenAI Recast AI as an Authority Layer Over Email and Health Data
05:03 From Data Centers to Regulation, AI Expansion Encounters Political and Community Limits
08:18 AI, Privacy, and Liability Converge as States Fill the Regulatory Vacuum Left by Washington
14:07 Dell Says Consumers Aren't Buying PCs for AI Features, Despite NPU Push

This is the Business of Tech. Supported by:
Microsoft has announced significant changes to its Teams platform, set to take effect on January 12, 2026. The platform will automatically enhance messaging security by blocking risky files and scanning shared links for potential phishing threats. This proactive measure aims to protect organizations, particularly smaller ones without dedicated security teams, from increasingly sophisticated cyber threats. IT administrators will have the opportunity to review and adjust these settings prior to implementation, ensuring a smoother transition to the new security measures.

In addition to the Teams update, Microsoft has decided to retract its previously announced limit on bulk email recipients for Exchange Online, following customer feedback indicating that such restrictions would create operational challenges. The company will maintain existing limits while seeking less disruptive solutions to enhance email security. Furthermore, Microsoft has acquired the startup Osmos to bolster its Microsoft Fabric platform with autonomous data engineering capabilities, aiming to automate data preparation and reduce the manual workload for IT teams.

The episode also highlights the rapid growth of NinjaOne, which reported a 70% year-over-year increase in annual recurring revenue, surpassing $500 million. This growth positions NinjaOne as a strong competitor in the remote monitoring and management market, particularly as managed service providers (MSPs) seek to consolidate tools to improve operational efficiency. The discussion emphasizes the importance of accountability and risk management as MSPs navigate the complexities of tool consolidation and automation.

For MSPs and IT service leaders, these developments underscore the need for clear communication and governance in the face of increasing automation and vendor-driven changes.
As Microsoft centralizes control over security and data management, MSPs must adapt by managing client expectations and pricing for the support burden that comes with these automated solutions. The evolving landscape necessitates a proactive approach to risk management, ensuring that MSPs are prepared to address client concerns and operational challenges effectively.

Four things to know today

00:00 CompTIA Signals Confidence for 2026 as NinjaOne's Growth Highlights MSP Push to Simplify Operations
05:12 Microsoft Tightens Defaults and Expands Automation Across Teams, Exchange, and Fabric
09:22 Dell Reverses Course on Laptop Branding, Reintroducing XPS to Reduce Confusion and Reset Its AI PC Strategy
11:53 Artificial Analysis Overhauls AI Benchmarks to Focus on Real-World Work, Lowering Scores and Raising Enterprise Expectations

This is the Business of Tech. Supported by: https://timezest.com/mspradio/ and https://scalepad.com/dave/
Artificial intelligence adoption is accelerating without formal ownership as employees, customers, and patients integrate AI tools into daily decisions. Surveys from Gallup show 45% of U.S. employees use AI at work at least occasionally, while research cited by OpenAI indicates roughly 60% of American adults recently used AI for health-related questions. Zoho and Arion Research report that 41% of organizations have strengthened privacy measures after adopting AI, reflecting growing concern about data exposure and accountability. For MSPs, the shift places liability closer to the systems being used rather than the vendors supplying them.

Trust in digital media is also eroding as AI-generated content becomes harder to distinguish from authentic material. Instagram CEO Adam Mosseri states that assuming photos or videos reflect real events is no longer reliable and suggests verification at the point of capture rather than labeling generated content. This approach reframes trust as a technical system rather than a social assumption. For IT providers, the issue extends beyond social platforms to security footage, compliance evidence, training data, and any asset where authenticity must be demonstrated.

At the same time, automation and AI training are converging on the same constraint: expert judgment. HireArt's 2025 AI Trainer Compensation Report shows subject-matter experts earning $60 to more than $180 per hour, compared with under $20 for generalist data labelers, reflecting the cost of errors in regulated or technical fields. Kaseya's 2025 EMEA MSP Benchmark Report finds that while nearly 75% of MSPs expect revenue growth, 45% face staffing and skills shortages, increasing reliance on automation built on accurate data and curated exceptions.

Major vendors are embedding judgment directly into platforms. ServiceNow's planned $7.75 billion acquisition of Armis expands asset classification and risk scoring within workflows.
Freshworks' acquisition of FireHydrant integrates AI-driven incident management into ITSM. Google Cloud's revamped Partner Network shifts incentives toward outcome-based tiers beginning in 2026. For MSPs and IT service leaders, these moves concentrate responsibility around interpretation, governance, and accountability, even as tools increasingly define risk and success. Four things to know today 00:00 Surveys Show AI Adoption Is Happening Without Ownership as Employees, Customers, and Patients Lead Usage 04:50 Instagram's CEO Says Trust Is No Longer Assumed as AI Forces Proof-of-Reality Models 07:22 AI and MSP Automation Are Converging on the Same Bottleneck: Expert Judgment 09:52 Vendors Shift From Tools to Judgment as ServiceNow, Freshworks, and Google Cloud Embed Risk, Incidents, and Outcomes This is the Business of Tech. Supported by: https://scalepad.com/dave/
The U.S. economy demonstrated robust growth in the third quarter of 2025, with a gross domestic product (GDP) increase of 4.3%, according to the Commerce Department. This growth occurred despite consumer concerns and uncertainties related to tariffs, with military spending and corporate profits contributing significantly. However, the technology sector experienced substantial layoffs, with 1.1 million jobs cut in 2025, of which only 55,000 were attributed to artificial intelligence (AI). The majority of job losses stemmed from corporate restructuring and economic conditions rather than direct displacement by AI, leading to hiring freezes, particularly for entry-level positions. Small and medium-sized businesses (SMBs) are currently facing challenges in attracting talent, with over 70% reporting difficulties in finding qualified candidates due to competition from larger firms. The National Federation of Independent Business noted that nearly half of all small businesses are struggling to fill open positions, which is stalling growth and reducing productivity. Despite a slight increase in small business optimism, driven by expectations of higher sales, many owners cite labor quality as their top concern. Additionally, 64% of SMBs are experiencing supply chain disruptions, complicating their operations. The episode also discusses the ongoing chip and memory shortages, which are expected to persist into 2027, leading to rising prices for consumer electronics. Major memory manufacturers are prioritizing supply for AI companies, impacting pricing across various sectors. 
Furthermore, the shift towards outcome-based pricing models in software is highlighted, where companies may pay based on actual results delivered, potentially complicating the relationship between service providers and clients if expectations are not clearly defined. For Managed Service Providers (MSPs) and IT service leaders, these developments underscore the importance of clarity and realistic expectations in service delivery. As operational fragility becomes more pronounced amid rising costs and labor shortages, MSPs must reframe their roles from implementers to risk managers. This shift is crucial to avoid margin erosion and contract disputes, ensuring that they are not unduly burdened by decisions made outside their control. The evolving economic landscape necessitates a proactive approach to pricing and service design, particularly as automation and AI continue to reshape the industry. Four things to know today 00:00 Strong GDP Growth, Persistent Layoffs, and Weak AI Returns Expose Hidden Risk for SMB Operations 07:04 AI Is Driving Hardware Shortages, Cloud Growth, and Outcome-Based Pricing—Raising Cost Risk for MSPs 11:10 MSP Expense Volatility, AI-Driven Service Shifts, and Labor Shortages Are Colliding on Pricing Strategy 15:04 MSP Radio Expands Beyond News With Acquisition of Two MSP Education Brands This is the Business of Tech. Supported by: https://cometbackup.com/?utm_source=mspradio&utm_medium=podcast&utm_campaign=sponsorship
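One way to make the outcome-based pricing concern described above concrete is a clause that pairs a fixed retainer with a capped per-outcome charge, so both parties know the worst-case invoice before work begins. The following is a minimal illustrative sketch; the function name, rates, and thresholds are assumptions for demonstration, not terms discussed in the episode:

```python
# Illustrative sketch of a capped outcome-based pricing clause.
# All figures are hypothetical, not drawn from the episode.

def outcome_invoice(base_fee: float, outcomes_delivered: int,
                    rate_per_outcome: float, cap: float) -> float:
    """Monthly invoice = fixed retainer + per-outcome charge, with the
    variable portion capped so the client's exposure is bounded."""
    variable = min(outcomes_delivered * rate_per_outcome, cap)
    return base_fee + variable

# 14 outcomes at $150 would be $2,100, but the $1,500 cap applies:
invoice = outcome_invoice(base_fee=2000.0, outcomes_delivered=14,
                          rate_per_outcome=150.0, cap=1500.0)
# invoice == 3500.0
```

Defining the cap and the per-outcome rate in the statement of work is one way to set the clear expectations the episode argues are needed to avoid disputes.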
Managed Service Providers (MSPs) are encouraged to shift their focus from traditional infrastructure management to becoming Managed Intelligence Providers (MIPs), emphasizing the integration of artificial intelligence (AI) into their service offerings. Chance Weaver, VP of AI Adoption at Pax8, highlights the necessity for MSPs to engage in deeper conversations with clients about their business processes rather than merely discussing technology tools. This approach aims to identify specific business challenges that can be addressed through tailored technological solutions, including AI, automation, and business intelligence. Weaver notes that while many MSPs have historically excelled in maintaining infrastructure, they often lack a comprehensive understanding of their clients' workflows and business needs. The transition to MIPs involves not only understanding business processes but also ensuring data readiness, which is critical for effective AI implementation. Instead of undertaking extensive data cleanup projects upfront, MSPs should focus on the data relevant to specific business processes, thereby demonstrating immediate ROI and building trust with clients. The episode also discusses the importance of outcome-driven services and the potential for MSPs to monetize AI solutions effectively. Weaver shares insights from his interviews with over 650 partners in the Pax8 ecosystem, revealing that only a small percentage are currently generating revenue from AI-related services. Successful partners are leveraging their existing relationships and expertise to create value for clients by aligning pricing models with measurable outcomes, thus facilitating a smoother transition to AI adoption. For MSPs and IT service leaders, the key takeaway is the urgency to start conversations about AI with clients, even if they are not yet fully equipped to implement these solutions. 
By positioning themselves as knowledgeable partners in the AI transformation journey, MSPs can capitalize on emerging opportunities and enhance their service offerings. The discussion emphasizes that while some providers may choose to adopt a fast-follower strategy, those who proactively engage with clients about AI will likely gain a competitive advantage in the evolving market landscape.
The conversation centers on the evolving role of automation in Managed Service Providers (MSPs), particularly the implementation of human-in-the-loop AI systems. Mathieu Tougas, CEO of Mizo Technologies, emphasizes that while automation can significantly enhance efficiency—reporting a 26% increase in technician capacity and a 30% reduction in escalations—maintaining human oversight is crucial for ensuring service quality. This approach allows technicians to delegate low-value tasks to AI agents while focusing on higher-value customer interactions, thereby preserving the essential human element in service delivery. Tougas outlines the methodology behind these efficiency gains, which involves analyzing ticket resolution times before and after deploying Mizo's solutions. The data indicates that technicians can handle more tickets in less time, which can lead to reduced staffing needs without compromising service quality. He also addresses common misconceptions among MSPs regarding AI, particularly the fear of losing control over service quality when delegating tasks to automated systems. Instead, he argues that AI should be viewed as a tool to enhance technician capabilities rather than replace them. The discussion also touches on the importance of integrating AI tools into existing workflows without causing disruption. Mizo Technologies employs a two-week fine-tuning period to adapt its systems to the specific processes of each MSP, ensuring a smoother transition and minimizing the need for extensive retraining. This gradual approach allows organizations to build trust in the AI systems while optimizing their service desk operations. For MSPs and IT service leaders, the key takeaway is the necessity of balancing automation with human oversight to maintain service quality. 
As the industry moves towards greater automation, understanding the right contexts for AI deployment and ensuring robust processes are in place will be essential for maximizing efficiency and customer satisfaction. Embracing AI thoughtfully can lead to significant operational improvements while still addressing the critical need for human interaction in service delivery.
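The before/after methodology Tougas describes, comparing average ticket resolution times pre- and post-deployment, can be sketched as a simple capacity calculation. The function and the sample figures below are illustrative assumptions for demonstration, not Mizo Technologies' actual methodology or data:

```python
# Illustrative sketch of a before/after technician capacity comparison.
# Resolution-time figures are hypothetical, not from the episode.

def capacity_gain(avg_minutes_before: float, avg_minutes_after: float) -> float:
    """Percent increase in tickets a technician can close per hour,
    inferred from average resolution time before and after automation."""
    tickets_before = 60.0 / avg_minutes_before
    tickets_after = 60.0 / avg_minutes_after
    return (tickets_after - tickets_before) / tickets_before * 100.0

# Example: if average resolution time fell from 30 to ~23.8 minutes,
# that corresponds to roughly a 26% capacity increase.
gain = capacity_gain(30.0, 23.8)
```

A comparison like this only holds if ticket mix and complexity stay roughly constant across the two periods, which is why a baseline measurement window before deployment matters.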