Podcasts about POCs

  • 506 PODCASTS
  • 847 EPISODES
  • 56m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • Dec 8, 2025 LATEST

POPULARITY (chart: 2017–2024)


Best podcasts about POCs

Latest podcast episodes about POCs

The Superposition Guy's Podcast
Joe Ghalbouni, President of Ghalbouni Consulting

Dec 8, 2025 · 28:32


Dr. Joe Ghalbouni, a quantum communication PhD who moved from academia into Point72's innovation team and now runs Ghalbouni Consulting, is interviewed by Yuval Boger. They discuss how he helped a major hedge fund move from quantum curiosity to concrete education, use case discovery, and POCs, and why he believes the real bottleneck today is not hardware but algorithms and sector-aware problem mapping. The conversation explores where quantum is most promising in financial services, from optimization to quantum machine learning, and how quantum-inspired methods on classical hardware are already delivering value. They also cover PQC and QKD roadmaps, what it really takes to move a quantum solution into production, and why Joe is surprisingly optimistic about seeing useful quantum advantage in specific use cases within the next few years.

Hipsters Ponto Tech
Technology needs to DELIVER VALUE to the business: from dev at 13 to CTO | Anaterra – Dasa – Hipsters.Talks #15

Dec 4, 2025 · 40:00


“Technology for technology's sake has to die. A technology department that doesn't think about delivering value to the business or to the end user tends to die. You shouldn't spend money just for the sake of spending money.” In the fifteenth episode of Hipsters.Talks, Paulo Silveira, CVO of Grupo Alun, talks with Anaterra Oliveira, CTO of Dasa, about open innovation, partnerships with startups, and why user experience matters more than sophisticated technology. A conversation about the day-to-day of someone who leads technology at one of the largest healthcare companies in Brazil. Get ready for an episode full of knowledge and inspiration!

The Ravit Show
From Kafka to Flink: What Aiven and Ververica Can Do Together

Nov 26, 2025 · 8:11


Real time is getting simpler. At Flink Forward, I sat down with Josep Prat, Director at Aiven. We discussed the new partnership between Aiven and Ververica (the original creators of Apache Flink®) and what it unlocks for data teams.

What we covered:
• Why this partnership makes sense now and the outcomes it targets
• Fastest ROI use cases for joint customers
• How Aiven and Ververica split support, SLAs, and upgrades
• The first deployment patterns teams should try: POCs, phased rollouts, or full cutovers
• Support for AI projects that need fresh data with low latency
• What is coming next on the shared roadmap over the next two quarters

If you care about streaming in production and a cleaner path to value, this one is worth a watch. Full interview now live! #data #ai #streaming #Flink #Aiven #Ververica #realtimestreaming #theravitshow

Matrix Moments by Matrix Partners India
220: India's Secret Advantage in AI: The FDE Revolution

Nov 19, 2025 · 44:30


70% of enterprise AI projects never reach production. The solution: Forward Deployed Engineers (FDEs). In this episode, Vikram Vaidyanathan and Rocketlane CEO Srikrishnan Ganesan unpack the rise of the FDE model, from Palantir's origins to how AI companies use it today to bridge the gap between prototypes and production. They discuss why traditional SaaS orgs break in AI, the governance needed to scale FDE teams, and why India is emerging as the global engine room for AI deployment. A crisp breakdown of the role shaping the future of enterprise AI.

Chapters:
00:01:29 - Introduction to the Z47 podcast
00:04:16 - The 70% problem: Why enterprise AI fails to scale
00:05:25 - The origin story: Inside Palantir, where it all began
00:12:43 - Evolving from deployment to GTM engine
00:14:23 - The Vision Selling era: from POCs to production ROI
00:17:09 - What does a great FDE motion look like?
00:18:59 - Building with FDE DNA: How Rocketlane practices what it preaches
00:24:06 - Product, success, or stand-alone: Where should FDEs sit?
00:26:22 - Scaling the FDE model: from speed to structured governance
00:33:38 - Pairing on-site FDEs with India's 24×7 talent engine
00:34:21 - AI adoption as India's next big export
00:37:49 - FDEs: The human bridge between AI promise and delivery

Entre Chaves
#247 BaaS: o que é, vantagens, aplicações e resultados

Nov 18, 2025 · 37:36


How many projects have been postponed because infrastructure consumed more time than expected? In this episode, Christian Alexsander, Android developer at dti digital, explains how BaaS (Backend as a Service) can accelerate application development, letting the team focus on what really matters: the product's business logic. Discover how this technology helps developers deliver solutions faster, reducing technical overhead and boosting results. Hit play and listen now!

Topics covered: what BaaS is and how it differs; accelerating development of MVPs and POCs; main features offered; developers' technical responsibilities; customization limits and use cases; security in BaaS environments; cost-benefit analysis at different scales; integration with serverless technologies; keeping fundamental technical knowledge alive; architectural evolution in scalable projects; recommended practices for development teams; cost monitoring and optimization.

Important links: Open positions | Newsletter. Questions? Send them to us on LinkedIn. Contact: entrechaves@dtidigital.com.br. Entre Chaves is an initiative of dti digital, a WPP company.

The Fraud Boxer Podcast
Stop Drowning in Dashboards: How to Deliver Actionable Insights with Deuna

Nov 17, 2025 · 46:11


"92% of all POCs today in the US are failing because they're trying to use bad data and LLMs that are standardized or generalized."   At Money 2020 I sat down with Deuna (www.deuna.com) co-founder Roberto Kafati (REKS) and their US head of GTM Chase Foster to explore the critical importance of leveraging high-quality, actionable data and intelligent systems to drive business value, especially in complex enterprise environments. The core challenge today is that while most companies possess vast amounts of data, a staggering 92% of AI pilot projects fail because they rely on data that isn't "AI-ready" that is lacking the necessary context, cleanliness, and standardization to be effectively used by large language models (LLMs). The key is transforming raw data, such as the 638 direct and indirect data points per payment transaction, into a strategically usable asset that goes beyond cost-cutting to unlock significant revenue growth across the organization.   The company's platform, Athia, is designed to solve this by acting as an agentic intelligence platform that utilizes merchant-specific data from massive commerce operations (like major airlines, movie chains, and retailers) to provide proactive, highly focused insights. Instead of forcing teams to manually analyze hundreds of performance dashboards, Athia surfaces the most critical information, alerting teams to revenue leakages and recommending direct, real-time actions, such as optimizing payment routing or detecting opportunities in developing economies. This approach allows businesses to embrace the future of "agentic commerce" by maintaining control over the customer experience and ensuring data-driven decision-making is implemented automatically and continuously across all critical functions, fostering a new era of cross-departmental collaboration between areas like payments and marketing.

Category Visionaries
How Wultra built category leadership as the only post-quantum provider for banking digital identity | Peter Dvorak

Nov 17, 2025 · 18:13


Wultra provides post-quantum authentication for banks, fintechs, and governments—protecting digital identities from emerging quantum computing threats. In this episode, Peter Dvorak shares how he broke into the notoriously closed banking ecosystem by leveraging his early experience in mobile banking development. From navigating multi-stakeholder enterprise sales to positioning quantum-safe cryptography when the threat timeline remains uncertain (consensus: 2035, but it could accelerate), Peter reveals the specific strategies required to sell mission-critical security infrastructure to regulated financial institutions.

Topics Discussed:
• How post-quantum cryptography runs on classical computers while protecting against quantum threats
• Why European banking regulation drives global authentication standards
• The multi-stakeholder sales process: quantum threat teams, CISOs, CTOs, and digital product owners
• Conference strategy and analyst relationships (Gartner, KuppingerCole) for category positioning
• Banking budget cycles and why June/July approaches fail
• Breaking the "who else is using this?" barrier with banking-specific proof points
• Positioning as the only post-quantum cryptography provider for digital identity in banking

GTM Lessons for B2B Founders:
• Layer future-proofing onto immediate ROI: Post-quantum cryptography doesn't require quantum computers to function—it runs on classical infrastructure while providing superior security. Peter sells banks on moving from SMS OTP to mobile app authentication (a tangible, immediate benefit) while positioning quantum resistance as migration insurance: "You won't have to rip-and-replace in three years." For emerging tech, anchor value in today's operational wins, not future scenarios.
• Give struggling departments concrete wins: Large banks have quantum threat teams tasked with replacing every piece of software by 2030-2035. Peter gives them measurable progress: "We move you from 5% to 10% completion on authentication and digital identity." These teams need defensible projects to justify their existence. Identify which internal groups are fighting for relevance and deliver projects they can report upward.
• Banking references are binary gatekeepers: Every bank asks "who else is using this?" Non-banking customers (telcos, gaming, lottery) don't count—banking regulation and systems are fundamentally different. The first banking customer is the hardest barrier. Once cleared, subsequent conversations become tractable. Budget aggressively to land that first bank, even at unfavorable terms.
• Respect the annual budget cycle: Banks allocate resources 12 months ahead. Approaching in Q2/Q3 means budgets are locked—even free POCs fail because internal resources are committed. Peter's pipeline strategy: build relationships and maintain visibility throughout the year, then activate when budget windows open. Don't confuse market education with active pipeline.
• Map and sequence multi-stakeholder buys: Authentication purchases require alignment across quantum threat teams (if they exist), cybersecurity/compliance, CTO/CIO (infrastructure acceptance), and digital product owners (UX concerns affecting their KPIs). Start at director level—board executives are too removed from technical details. Research each bank's org structure before engaging, then tailor sequencing.
• EU regulatory leadership creates expansion vectors: European regulations like PSD2 and strong authentication requirements get replicated in Southeast Asia, MENA, and other regions. Peter benefits from solving EU compliance first, then riding regulatory diffusion. The US remains fragmented, with smaller regional banks still using username/password. Founders should analyze which geographies lead regulatory adoption in their category.
• Maintain composure through 18+ month cycles: Peter's regret: losing his temper during negotiations cost him time. Banking doesn't buy impulsively—sales require patience through lengthy security reviews, compliance checks, and committee approvals. Incremental progress and rational positioning matter more than aggressive closing. Emotional control is operational discipline.

// Sponsors:
Front Lines — We help B2B tech companies launch, manage, and grow podcasts that drive demand, awareness, and thought leadership. www.FrontLines.io
The Global Talent Co. — We help tech startups find, vet, hire, pay, and retain amazing marketing talent that costs 50-70% less than the US & Europe. www.GlobalTalent.co

// Don't Miss: New Podcast Series — How I Hire: Senior GTM leaders share the tactical hiring frameworks they use to build winning revenue teams. Hosted by Andy Mowat, who scaled 4 unicorns from $10M to $100M+ ARR and launched Whispered to help executives find their next role. Subscribe here: https://open.spotify.com/show/53yCHlPfLSMFimtv0riPyM

The Tech Trek
How Data and Engineering Make the Impossible Real

Nov 11, 2025 · 27:15


Svetlana Zavelskaya, Head of Software Engineering for Data Platform and Infrastructure at Quanata, joins the show to unpack what it really takes to make the "impossible" possible in tech. From re-architecting a startup codebase to scaling innovation inside an insurance giant, she shares how her team turns complex R&D challenges into production-ready systems. This conversation dives deep into engineering discipline, AI tool adoption, and why the next wave of insurance innovation is powered by data and software.

Key Takeaways:
• Real innovation often means balancing speed with long-term architecture decisions
• AI coding tools are valuable for exploration but need governance and clear security guardrails
• POCs fail when expectations aren't aligned, not because the tech doesn't work
• Insurance tech is evolving fast through telematics and context-based data models
• Well-structured, well-documented code is still the foundation for scalable innovation

Timestamped Highlights:
00:33 How telematics is changing the economics of insurance and rewarding better drivers
03:59 Cars as software platforms and what that means for data privacy and innovation
06:02 The growing pains of re-architecting an organically built startup codebase
08:38 Evaluating new AI tools and maintaining data security across teams
11:08 Why most AI POCs never make it to production
16:29 How Quanata's R&D work feeds into State Farm's larger technology initiatives
20:40 Safe-driving challenges, behavioral change, and saving lives with data

A Thought That Stuck: "If we can prevent just 1 percent of drivers in the world from using their phone behind the wheel, imagine how many lives we can save."

Pro Tips:
• Before starting a POC, define whether it's an experiment or a potential product foundation
• Let engineers explore new tools, but build frameworks to govern how data and results are handled

Call to Action: If you enjoy exploring how data, AI, and engineering innovation come together to solve real-world problems, follow The Tech Trek on Apple Podcasts or Spotify and share this episode with a colleague who builds at the edge of what's possible.

Fülke: a HVG Online közéleti podcastja
WASTED TIME is half the workday | Mérlegen with company builder Krisztián Egerszegi

Nov 11, 2025 · 66:07


Artificial intelligence is fundamentally changing how companies are run, something Krisztián Egerszegi, board member of MiniCRM, has experienced first-hand. In Mérlegen, HVG's business podcast, he talked about one of the biggest Hungarian corporate acquisitions of the year and about what everyday AI use looks like. Let's turn promising financial plans into reality together! The show is sponsored by Raiffeisen Bank.

Digital Pacemaker
#80 AI & Experiments: Is the proof-of-concept dead? with Ahi Gvirtsman (Spyre Group)

Nov 10, 2025 · 45:28


In this episode, Markus and Uli talk with Ahi Gvirtsman, Co-Founder and Chief Knowledge Officer at Spyre Group, about why the traditional Proof of Concept no longer works in today's innovation landscape. They explore how companies can move from slow, over-engineered PoCs to fast, low-cost experiments that create real learning and impact. Ahi explains why AI only delivers value when it truly changes how you work, and why a great demo means nothing unless it drives real decisions, transformation, and measurable business outcomes.

Ahi's new book: https://www.amazon.de/Spark-practical-handbook-managers-innovation-ebook/dp/B0FVW61NFP/

Connect on LinkedIn:
- Ahi Gvirtsman – linkedin.com/in/ahigvirtsman/
- Ulrich Irnich – linkedin.com/in/ulrich-irnich
- Markus Kuckertz – linkedin.com/in/markuskuckertz

All podcast episodes: digitalpacemaker.de

We would love to hear your feedback. Let's talk!

Predictable Revenue Podcast
411: The Challenge of Authentic Selling with Kunick Kapadia

Nov 6, 2025 · 26:17


In this episode of the Predictable Revenue Podcast, Collin Stewart interviews Kunick Kapadia, co-founder of Anova, as they discuss the journey of building a data analytics platform. They explore the importance of product-market fit, learning from past mistakes, customer acquisition strategies, pricing strategies, and overcoming imposter syndrome. The conversation highlights the importance of honest feedback, the challenges of scaling a startup, and the significance of standing out in a crowded market. Highlights include: Validating Ideas: The Importance of Customer Feedback (03:04), Navigating Customer Development and POCs (09:54), Overcoming Imposter Syndrome in Entrepreneurship (11:23), Pricing Strategies: Finding the Right Value (14:41), Finding a Unique Go-to-Market Strategy (19:55), and more... Stay updated with our podcast and the latest insights on Outbound Sales and Go-to-Market Strategies!

Cables2Clouds
Monthly News Update: DNS Did That Thing Again...

Nov 5, 2025 · 32:20 · Transcription Available


Start with a simple truth: when the platform breaks, your clever architecture won't save you. We dig into the AWS US-East-1 outage where DynamoDB's role in DNS planning for load balancers collided with a race condition, leaving empty records and stalled EC2 instances. Forget the finger-wagging about "well-architected" apps—this was a platform failure with limited customer escape routes. We weigh multi-region and multi-cloud trade-offs with a sober look at cost, complexity, and operational burden.

Security took center stage with two high-risk stories you need to act on. First, a critical WSUS flaw enabling remote unauthenticated code execution against the very servers meant to protect fleets. If WSUS is still live, patch immediately or take it offline until you can. Then, the F5 source code theft: not a cloning threat, but a blueprint for discovering subtle bugs and crafting precise exploits. Attribution points toward Chinese state-sponsored actors, which means targeted, quiet use rather than noisy mass exploitation. The risk isn't gone when headlines fade; it's just harder to see.

We connect this to rising exploitation of vSock across hypervisors like VMware ESXi. With public PoCs and active abuse, vSock opens covert channels between host and guest, making segmentation and management plane isolation non-negotiable. Patch aggressively, gate access through jump hosts, enforce MFA, and consider disabling vSock where viable on QEMU stacks. These are concrete steps that cut real risk.

Then we turn to the elephant in the data center: AI ROI. Vendors keep shipping agentic assistants and copilots, but few can show durable returns outside a subsidized token economy. We share a pragmatic lens for measuring value—cycle time, MTTR, defect rates—while acknowledging the dot-com-style arc ahead: hype, correction, then durable wins that prioritize efficiency. As AI demand drives massive new builds, the physical footprint of the cloud is showing up in local power grids and skylines. Infrastructure choices now carry community and energy implications leaders can't ignore.

Subscribe, share with a colleague who owns platform reliability or security, and leave a review with your biggest takeaway or question—what will you patch, segment, or measure first?

Purchase Chris and Tim's book on AWS Cloud Networking: https://www.amazon.com/Certified-Advanced-Networking-Certification-certification/dp/1835080839/
Check out the Monthly Cloud Networking News: https://docs.google.com/document/d/1fkBWCGwXDUX9OfZ9_MvSVup8tJJzJeqrauaE6VPT2b0/
Visit our website and subscribe: https://www.cables2clouds.com/
Follow us on BlueSky: https://bsky.app/profile/cables2clouds.com
Follow us on YouTube: https://www.youtube.com/@cables2clouds/
Follow us on TikTok: https://www.tiktok.com/@cables2clouds
Merch Store: https://store.cables2clouds.com/
Join the Discord study group: https://artofneteng.com/iaatj
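For context on why vSock makes such an effective covert channel: a guest can reach a host-side listener without ever touching a conventional network interface. A minimal sketch in Python (assuming a Linux guest with the vsock driver loaded; the port number and payload are hypothetical):

```python
import socket

# vsock addresses are (CID, port) pairs rather than (IP, port).
# CID 2 is the well-known address of the hypervisor host, so a guest
# needs no network configuration at all to reach a host-side listener.
HOST_CID = socket.VMADDR_CID_HOST  # == 2
PORT = 5000                        # hypothetical port for a host-side service

with socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) as s:
    s.connect((HOST_CID, PORT))
    s.sendall(b"hello from the guest")  # never appears on eth0
    print(s.recv(64))
```

Because this traffic never crosses a regular network interface, segmentation rules and firewalls never see it, which is why the episode's advice is to disable vSock where it isn't needed.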

The Product Market Fit Show
He built a new database in his bedroom—now he powers Cursor, Notion and Anthropic. | Simon Eskildsen, Founder of turbopuffer

Oct 30, 2025 · 53:27 · Transcription Available


Simon spent 10 years at Shopify scaling databases to millions of requests per second. Then he discovered vector databases were so expensive that companies couldn't launch AI features. So he solved it. When Cursor emailed about their crushing costs, Simon flew to San Francisco unannounced. They migrated their entire workload within a week, cutting their bill by 95%. Then came Notion. Justin pulled 24-hour coding marathons during their POC, fixing 300 milliseconds of latency in three hours. They signed on July 25th—the same day Simon's daughter was born. Now TurboPuffer powers Cursor, Notion, and Linear while staying profitable with just 17 people. Simon shares why he turned down easy Series A money and his framework of exactly 6 legitimate reasons to ever raise capital.

Why You Should Listen:
• The power of making something 10-100x cheaper
• Why you need to be willing to fly to early customers (how that landed Cursor)
• The 6 reasons to raise money (and why you often shouldn't)
• How working 24-hour sprints during POCs converted enterprise customers
• Why staying profitable with 17 people beats raising $30M you don't need

Keywords: startup podcast, startup podcast for founders, TurboPuffer, Simon Eskildsen, vector database, Cursor, Notion, bootstrapping, database startup, AI infrastructure

00:00:00 Intro
00:07:52 Finding the problem
00:12:25 Building alone
00:22:27 Going viral on X
00:26:18 Closing Cursor
00:40:17 Closing Notion
00:45:26 Why he didn't raise $30M when everyone expected him to

Send me a message to let me know what you think!

AWS for Software Companies Podcast
Ep157: Beyond the Hype: Real-World AI Agent Deployments at Automation Anywhere, DataVisor, and Sumo Logic

Oct 13, 2025 · 32:27


ISV leaders from Automation Anywhere, DataVisor, and Sumo Logic share battle-tested strategies for deploying AI agents at scale, including pricing models, proof of concepts, and ROI.

Topics include:
• Panel brings together ISV leaders from automation, fraud detection, and security operations.
• Companies rethinking entire business processes rather than automating incremental portions with agents.
• Start with immutable data before tackling real-time changing data in production.
• Intent for change must come from board, CEO, and customers simultaneously.
• Challenge: proving agent value beyond CSAT when internal teams block deployment.
• Sumo Logic measures Mean Time to Resolution, aiming to cut hours to zero.
• DataVisor cuts fraud alert resolution from one hour down to twenty minutes.
• Customers demand reliability as workflows shift from deterministic to probabilistic agent decisions.
• Automation Anywhere spent three years making every platform component fully agent-ready.
• Focus on business outcomes, not chasing every new model release each week.
• Human oversight still critical—agents are task-oriented and prone to hallucinations and drift.
• Humans validate agent findings, then let agents scale actions across hundreds of instances.
• Pricing experiments range from platform-plus-consumption to outcome-based to decision-event models.
• Token pricing doesn't work due to varied data modalities and complexity.
• Next two quarters: more POCs moving to production with productive agents deployed.
• Future prediction: enterprise apps becoming systems of knowledge powered by MCP protocol.

Participants:
• Jay Bala - Senior Vice President of Product, Automation Anywhere
• Kedar Toraskar – VP Product Partnerships, DataVisor
• Bill Peterson - Senior Director, Product Marketing, Sumo Logic
• Jillian D'Arcy - ISV Senior Leader, Amazon Web Services

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon.com/isv/

The Leadership in Insurance Podcast (The LIIP)
"From Startup Founder to Insurance Innovation Leader: Hard Truths About InsurTech from IMA's Garrett Droege"

Oct 13, 2025 · 32:42


In our latest Leadership in Insurance Podcast episode, I sat down with Garrett Droege (SVP, Director of Innovation & Digital Risk Practice Leader at IMA Financial) for a fascinating discussion on InsurTech, with some thought-provoking insights that challenge conventional thinking about insurance technology. With 20 years exclusively on the brokerage side, Garrett brings a unique perspective as a former startup founder and self-taught software developer. As both Innovation Lead and Digital Risk Practice Leader at IMA, his role sounds incredibly broad, but as Garrett says, both sides serve each other—you need wide-ranging touchpoints across tech ecosystems to stay ahead in both innovation and risk.

In this episode, we cover:
• Build vs. Buy Decision Framework: Garrett's approach is clear: build customer-facing proprietary solutions that differentiate your business and serve your customers, but make sure they work with existing technology.
• The POC Framework That Actually Works: Forget 12-month POCs that drag on and lose momentum. Garrett advocates for highly targeted, 45-60 day maximum POCs with clear KPIs and the right team selection upfront. His advice to founders? "You think you want a 12-month contract. You don't. Let's prove your platform works fast and furiously, or let's wait until you're ready."
• The Bold Take: Garrett's view on how the industry has gone about InsurTech all wrong and allowed it to become a series of Band-Aids for the real problem: antiquated core systems from the 1980s and 90s that were built before APIs even existed. The result? Frankenstein workflows requiring 7-15 platforms to complete a single task, with 80% of users still working around the technology the same way they did 20 years ago.
• The AI Wake-Up Call: Despite AI being "transformational unlike anything we've ever seen" (and Garrett argues it's under-hyped), its promise is severely limited without access to core data systems. As Garrett put it, "You could build a fully agentic AI brokerage much easier than you could reverse engineer and retrofit an existing brokerage."
• The Investment Landscape: With 80% of recent Y Combinator and BrokerTech Ventures companies being AI-focused InsurTech solutions, the momentum is undeniable. The dot-com parallels are real—there will be winners and losers, and consolidation is coming.
• What Technology Can't Replace: Despite all the transformation, some challenges remain timeless: renewal management, client communication, trust-building. As Garrett notes, these require human expertise that AI augments rather than replaces.

This conversation is essential listening for anyone in insurance, InsurTech, or risk management. The future of insurance isn't just about innovation—it's about getting the foundation right first. Hosted on Acast. See acast.com/privacy for more information.

The Cloudcast
The 5-10-85 Reality of Enterprise AI

Oct 12, 2025 · 27:49


Three years since the launch of ChatGPT, what does the landscape of Enterprise AI look like today? What's working, what's struggling, and what's still unknown?

SHOW: 966
SHOW TRANSCRIPT: The Cloudcast #966 Transcript
SHOW VIDEO: https://youtube.com/@TheCloudcastNET
CLOUD NEWS OF THE WEEK: http://bit.ly/cloudcast-cnotw
CHECK OUT OUR NEW PODCAST: "CLOUDCAST BASICS"

SHOW SPONSORS:
[TestKube] TestKube is a Kubernetes-native testing platform, orchestrating all your test tools, environments, and pipelines into scalable workflows empowering Continuous Testing. Check it out at TestKube.io/cloudcast
[Interconnected] Interconnected is a new series from Equinix diving into the infrastructure that keeps our digital world running. With expert guests and real-world insights, we explore the systems driving AI, automation, quantum, and more. Just search "Interconnected by Equinix".

SHOW NOTES:
HOW ARE ENTERPRISES USING AI IN LATE 2025?
• 5% have a clear vision of how to apply Predictive and Generative AI to a set of use-cases that drive differentiation, productivity improvements, and cost reductions. They are keeping the details close to the vest.
• 10% have allocated about 3-5% of their IT budgets to AI, typically from a C-level mandate, and have given it to Microsoft or Google. They have checked "the business is AI-enabled" and signaled to the market that they have fully embraced AI. The market is rewarding these companies at higher multiples.
• 85% aren't sure what use-cases to focus on, have unrealistic expectations during POCs, and are focused on the "no" areas instead of their own learning curves.
• Enterprises don't have great visibility into AI costs, and limited baselines of what AI should cost - pay for outcomes, pay for seats, pay for tokens, or pay for GPUs?
• Enterprises don't have easy access to GPUs outside of SaaS services - makes it challenging for Private or Sovereign AI demand to be met
• Right now, there is no simple way for Enterprises to build AI Agents
• Right now, there is no simple way for Enterprises to share AI experience / learning curve - AI is a very individualized experience

FEEDBACK?
Email: show at the cloudcast dot net
Twitter/X: @cloudcastpod
BlueSky: @cloudcastpod.bsky.social
Instagram: @cloudcastpod
TikTok: @cloudcastpod
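To see why baselining is so hard, here is a back-of-envelope sketch comparing the pricing models above on one hypothetical workload (every constant is an illustrative assumption, not a vendor quote or a figure from the episode):

```python
# Hypothetical monthly workload; every constant here is an assumption.
monthly_queries = 50_000
tokens_per_query = 2_000                 # prompt + completion, assumed
usd_per_1k_tokens = 0.01                 # assumed blended token price
seats, usd_per_seat = 200, 30            # assumed copilot-style seat licensing
gpu_hours, usd_per_gpu_hour = 300, 2.50  # assumed self-hosted GPU rental

token_cost = monthly_queries * tokens_per_query / 1_000 * usd_per_1k_tokens
seat_cost = seats * usd_per_seat
gpu_cost = gpu_hours * usd_per_gpu_hour

for model, cost in (("tokens", token_cost), ("seats", seat_cost), ("GPUs", gpu_cost)):
    print(f"pay-for-{model:<6} ${cost:>8,.2f}/month")
```

Even with made-up numbers, the same capability prices out at $1,000, $6,000, or $750 a month depending on the model, which is exactly the cost-visibility gap the episode describes.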

The Automation Podcast
Software Toolbox: OPC Server, Router, DataHub and more (P248)

Oct 8, 2025 · 57:48 · Transcription Available


Shawn Tierney meets up with Connor Mason of Software Toolbox to learn about their company and products, as well as see a demo of their products in action in this episode of The Automation Podcast. For any links related to this episode, check out the “Show Notes” located below the video. Watch The Automation Podcast from The Automation Blog: Listen to The Automation Podcast from The Automation Blog: The Automation Podcast, Episode 248 Show Notes: Special thanks to Software Toolbox for sponsoring this episode so we could release it “ad free!” To learn about Software Toolbox please check out the links below: TOP Server Cogent DataHub Industries Case studies Technical blogs Read the transcript on The Automation Blog: (automatically generated) Shawn Tierney (Host): Welcome back to the automation podcast. My name is Shawn Tierney with Insights In Automation, and I wanna thank you for tuning back in this week. Now this week on the show, I meet up with Connor Mason from Software Toolbox, who gives us an overview of their product suite, and then he gives us a demo at the end. And even if you’re listening, I think you’re gonna find the demo interesting because Connor does a great job of talking through what he’s doing on the screen. With that said, let’s go ahead and jump into this week’s episode with Connor Mason from Software Toolbox. I wanna welcome Connor from Software Toolbox to the show. Connor, it’s really exciting to have you. It’s just a lot of fun talking to your team as we prepared for this, and, I’m really looking forward to this because I just know in your company over the years, you guys have so many great solutions that I really just wanna thank you for coming on the show. And before you jump into talking about products and technologies Yeah. Could you first tell us just a little bit about yourself?
Some people have called us data plumbers in the past. It’s all these different connections where you have something, maybe legacy or something new, you need to get into another system. Well, how do you connect all those different points to it? And, you know, throughout all these projects we worked on, there’s always something unique in those different projects. And we try to work in between those unique areas and in between all these different integrations and be something that people can come to as an expert, have those high level discussions, find something that works for them at a cost effective solution. So outside of just, you know, products that we offer, we also have a lot of just knowledge in the industry, and we wanna share that. You’ll kinda see along here, there are some product names as well that you might recognize. Our top server and OmniServer, we’ll be talking about LOPA as well. It’s been around in the industry for, you know, decades at this point. And also our symbol factory might be something you you may have heard in other products, that they actually utilize themselves for HMI and and SCADA graphics. That is that is our product. So you may have interacted it with us without even knowing it, and I hope we get to kind of talk more about things that we do. So before we jump into all the fun technical things as well, I kind of want to talk about just the overall software toolbox experience as we call it. We’re we’re more than just someone that wants to sell you a product. We we really do work with, the idea of solutions. How do we provide you value and solve the problems that you are facing as the person that’s actually working out there on the field, on those operation lines, and making things as well. And that’s really our big priority is providing a high level of knowledge, variety of the things we can work with, and then also the support. It’s very dear to me coming through the the support team is still working, you know, day to day throughout that software toolbox, and it’s something that has been ingrained into our heritage. Next year will be thirty years of software toolbox in 2026. So we’re established in 1996. Through those thirty years, we have committed to supporting the people that we work with. And I I I can just tell you that that entire motto lives throughout everyone that’s here. So from that, over 97% of the customers that we interact with through support say they had an awesome or great experience. Having someone that you can call that understands the products you’re working with, understands the environment you’re working in, understands the priority of certain things. If you ever have a plant shut down, we know how stressful that is. Those are things that we work through and help people throughout. So this really is the core pillars of Software Toolbox and who we are, beyond just the products, and and I really think this is something unique that we have continued to grow and stand upon for those thirty years. So jumping right into some of the industry challenges we’ve been seeing over the past few years. This is also a fun one for me, talking about data analytics and tying these things together. In my prior life and education, I worked with just tons of data, and I never fully knew where it might have come from, why it was such a mess, who structured it that way, but it’s my job to get some insights out of that. And knowing what the data actually was and why it matters is a big part of actually getting value. 
So if you have dirty data, if you have data that’s just clustered, it’s in silos, it’s very often you’re not gonna get much value out of it. This was a study that we found in 2024, from Gartner Research. And it said that, based on the question that businesses were asked, were there any top strategic priorities for your data analytics functions in 2024? And almost 50%, it’s right at 49, said that they wanted to improve data quality, and that was a strategic priority. This is about half the industry is just talking about data quality, and it’s exactly because of those reasons I said in my prior life gave me a headache, to look at all these different things that I don’t even know where they became from or or why they were so different. And the person that made that may have been gone may not have the contacts, and making that from the person that implemented things to the people that are making decisions, is a very big task sometimes. So if we can create a better pipeline of data quality at the beginning, makes those people’s lives a lot easier up front and allows them to get value out of that data a lot quicker. And that’s what businesses need. Shawn Tierney (Host): You know, I wanna just data quality. Right? Mhmm. I think a lot of us, when we think of that, we think of, you know, error error detection. We think of lost connections. We think of, you know, just garbage data coming through. But I I think from an analytical side, there’s a different view on that, you know, in line with what you were just saying. So how do you when you’re talking to somebody about data quality, how do you get them to shift gears and focus in on what you’re talking about and not like a quality connection to the device itself? Connor Mason (Guest): Absolutely. Yeah. We I kinda live in both those worlds now. You know, I I get to see that that connection state. And when you’re operating in real time, that quality is also very important to you. Mhmm. And I kind of use that at the same realm. Think of that when you’re thinking in real time, if you know what’s going on in the operation and where things are running, that’s important to you. That’s the quality that you’re looking for. You have to think beyond just real time. We’re talking about historical data. We’re talking about data that’s been stored for months and years. Think about the quality of that data once it’s made up to that level. Are they gonna understand what was happening around those periods? Are they gonna understand what those tags even are? Are they gonna understand what those conventions that you’ve implemented, to give them insights into this operation. Is that a clear picture? So, yeah, you’re absolutely right. There are two levels to this, and and that is a big part of it. The the real time data and historical, and we’re gonna get some of that into into our demo as well. It it’s a it’s a big area for the business, and the people working in the operations. Shawn Tierney (Host): Yeah. I think quality too. Think, you know, you may have data. It’s good data. It was collected correctly. You had a good connection to the device. You got it. You got it as often as you want. But that data could really be useless. It could tell you nothing.
And, you know, I’ve known a lot of people who filled up their databases, their historians, with they just they just logged everything. And it’s like a lot of that data was what I would call low quality because it’s low information value. Right? Absolutely. I’m sure you run into that too. Connor Mason (Guest): Yeah. We we run into a lot of people that, you know, I’ve got x amount of data points in my historian and, you know, then we start digging into, well, I wanna do something with it or wanna migrate. Okay. Like, well, what do you wanna achieve at the end of this? Right? And and asking those questions, you know, it’s great that you have all these things historized. Are you using it? Do you have the right things historized? Are they even set up to be, you know, worked upon once they are historized by someone outside of this this landscape? And I think OT plays such a big role in this, and that’s why we start to see the convergence of the IT and OT teams just because that communication needs to occur sooner. So we’re not just passing along, you know, low quality data, bad quality data as well. And we’ll get into some of that later on. So to jump into some of our products and solutions, I kinda wanna give this overview of the automation pyramid. This is where we work from things like the field device communications. And you you have certain sensors, meters, actuators along the actual lines, wherever you’re working. We work across all the industries, so this can vary between those. Through there, you work up kind of your control area. A lot of control engineers are working. This is where I think a lot of the audience is very familiar with PLCs. Your your typical name, Siemens, Rockwell, your Schneiders that are creating, these hardware products. They’re interacting with things on the operation level, and they’re generating data. That that was kind of our bread and butter for a very long time and still is that communication level of getting data from there, but now getting it up the stack further into the pyramid of your supervisory, MES connections, and it’ll also now open to these ERP. We have a lot of large corporations that have data across variety of different solutions and also want to integrate directly down into their operation levels. There’s a lot of value to doing that, but there’s also a lot of watch outs, and a lot of security concerns. So that’ll be a topic that we’ll be getting into. We also all know that the cloud is here. It’s been here, and it’s it’s gonna continue to push its way into, these cloud providers into OT as well. There there’s a lot of benefit to it, but there there’s also some watch outs as this kind of realm, changes in the landscape that we’ve been used to. So there’s a lot of times that we wanna get data out there. There’s value into AI agents. It’s a hot it’s a hot commodity right now. Analytics as well. How do we get those things directly from shop floor, up into the cloud directly, and how do we do that securely? It’s things that we’ve been working on. We’ve had successful projects, continues to be an interest area and I don’t see it slowing down at all. Now, when we kind of begin this level at the bottom of connectivity, people mostly know us for our top server. This is our platform for industrial device connectivity. It’s a thing that’s talking to all those different PLCs in your plant, whether that’s brownfield or greenfield. We pretty much know that there’s never gonna be a plant that’s a single PLC manufacturer, that exists in one plant. 
There’s always gonna be something that’s slightly different. Definitely from Brownfield, things different engineers made different choices, things have been eminent, and you gotta keep running them. TopServer provides this single platform to connect to a long laundry list of different PLCs. And if this sounds very familiar to Kepserver, well, you’re not wrong. Kepserver is the same exact technology that TopServer is. What’s the difference then is probably the biggest question we usually get. The difference technology wise is nothing. The difference in the back end is that actually it’s all the same product, same product releases, same price, but we have been the biggest single source of Kepserver or TopServer implementation into the market, for almost two plus decades at this point. So the single biggest purchase that we own this own labeled version of Kepserver to provide to our customers. They interact with our support team, our solutions teams as well, and we sell it along the stack of other things because it it fits so well. And we’ve been doing this since the early two thousands when, Kepware was a a much smaller company than it is now, and we’ve had a really great relationship with them. So if you’ve enjoyed the technology of of Kepserver, maybe there’s some users out there. If you ever heard of TopServer and that has been unclear, I hope this clear clarifies it. But it it is a great technology stack that that we build upon and we’ll get into some of that in our demo. Now the other question is, what if you don’t have a standard communication protocol, like a modbus, like an Allen Bradley PLC as well? We see this a lot with, you know, testing areas, pharmaceuticals, maybe also in packaging, barcode scanners, weigh scales, printers online as well. They they may have some form of basic communications that talks over just TCP or or serial. And how do you get that information that’s really valuable still, but it’s not going through a PLC. It’s not going into your typical HMI and SCADA. It might be very manual process for a lot of these test systems as well, how they’re collecting and analyzing the data. Well, you may have heard of our OmniServer as well. It’s been around, like I said, for a couple decades and just a proven solution that without coding, you can go in and build a custom protocol that expects a format from that device, translates it, puts it into standard tags, and now that those tags can be accessible through the open standards of OPC, or AVEVA SuiteLink as well. And that really provides a nice combination of your standard communications and also these more custom communications may have been done through scripting in the past. Well, you know, put this onto, an actual server that can communicate through those protocols natively, and just get that data into those SCADA systems, HMIs, where you need it. Shawn Tierney (Host): You know, I used that. Many years ago, I had an integrator who came to me. He’s like, Shawn, I wanna this is back in the RSView days. He’s like, Shawn, I I got, like, 20 Eurotherm devices on a four eighty five, and they speak ASCII, and I gotta I gotta get into RSView32. And, you know, OmniServer, I love that you could you could basically developing and we did Omega and some other devices too. You’re developing your own protocol, but it’s beautiful. And and the fact that when you’re testing it, it color codes everything. So you know, hey. That part worked. The header worked. The data worked.
Oh, the trailing didn’t work, or the terminator didn’t work, or the data’s not in the right format. Or I just it was a joy to work with back then, and I can imagine it’s only gotten better since. Connor Mason (Guest): Yeah. I think it’s like a little engineer playground where you get in there. It started really decoding and seeing how these devices communicate. And then once you’ve got it running, it it’s one of those things that it it just performs and, has saved many people from developing custom code, having to manage that custom code and integrations, you know, for for many years. So it it’s one of those things that’s kinda tried, tested, and, it it’s kind of a staple still our our base level communications. Alright. So moving along kind of our automation pyramid as well. Another part of our large offering is the Cogent data hub. Some people may have heard from this as well. It’s been around for a good while. It’s been part of our portfolio for for a while as well. This starts building upon where we had the communication now up to those higher echelons of the pyramid. This is gonna bring in a lot of different connectivities. You if you’re not if you’re listening, it it’s kind of this hub and spoke type of concept for real time data. We also have historical implementations. You can connect through a variety of different things. OPC, both the profiles for alarms and events, and even OPC UA’s alarms and conditions, which is still getting adoption across the, across the industry, but it is growing. As part of the OPC UA standard, we have integrations to MQTT. It can be its own MQTT broker, and it can also be an MQTT client. That has grown a lot. It’s one of those things that lives be besides OPC UA, not exactly a replacement. If you ever have any questions about that, it’s definitely a topic I love to talk about. There’s space for for this to combine the benefits of both of these, and it’s so versatile and flexible for these different type of implementations. On top of that, it it’s it’s a really strong tool for conversion and aggregation. You kind of add this, like, its name says, it’s a it’s a data hub. You send all the different information to this. It stores it into, a hierarchy with a variety of different modeling that you can do within it. That’s gonna store these values across a standard data format. Once I had data into this, any of those different connections, I can then send data back out. So if I have anything that I know is coming in through a certain plug in like OPC, bring that in, send it out to on these other ones, OPC DA over to MQTT. It could even do DDE if I’m still using that, which I probably wouldn’t suggest. But overall, there’s a lot of good benefits from having something that can also be a standardization, between all your different connections. I have a lot of different things, maybe variety of OPC servers, legacy or newer. Bring that into a data hub, and then all your other connections, your historians, your MES, your SCADAs, it can connect to that single point. So it’s all getting the same data model and values from a single source rather than going out and making many to many connections. A a large thing that it was originally, used for was getting around DCOM. That word is, you know, it might send some shivers down people’s spines still, to this day, but it’s it’s not a fun thing to deal with DCOM and also with the security hardening. It’s just not something that you really want to do.
I’m sure there’s a lot of security professionals would advise against ever doing it. This tunneling will allow you to have a data hub that locally talks to any of the DA server client, communicate between two data hubs over a tunnel that pushes the data just over TCP, takes away all the COM wrappers, and now you just have values that get streamed in between. Now you don’t have to configure any DCOM at all, and it’s all local. So a lot of people, when transitioning, between products where maybe the server only supports OPC DA, and then the client is now supporting OPC UA. They can’t change it yet. This has allowed them to implement a solution quickly and cost and at a cost effective price, without ripping everything out. Shawn Tierney (Host): You know, I wanna ask you too. I can see because this thing is it’s a data hub. So if you’re watching and you’re if you’re listening and not watching, you you’re not gonna see, you know, server, client, UA, DA, broker, server, client. You know, just all these different things up here on the site. Do you what how does somebody find out if it does what they need? I mean, do you guys have a line they can call to say, I wanna do this to this. Is that something Data Hub can do, or is there a demo? What would you recommend to somebody? Connor Mason (Guest): Absolutely. Reach out to us. We we have a a lot of content online, and it’s not behind any paywall or sign in links even. You you can always go to our website. It’s just softwaretoolbox.com. Mhmm. And that’s gonna get you to our product pages. You can download any product directly from there. They have demo timers. So typically with, with Cogent DataHub, after an hour, it will stop. You can just rerun it. And then call our team. Yeah. We have a solutions team that can work with you on, hey. What do I need as well? Then our support team, if you run into any issues, can help you troubleshoot that as well. So, I’ll have some contact information at the end, that’ll get some people to, you know, where they need to go. But you’re absolutely right, Shawn. Because this is so versatile, everyone’s use case of it is usually something a little bit different. And the best people to come talk to that is us because we’ve we’ve seen all those differences. So Shawn Tierney (Host): I think a lot of people run into the fact, like, they have a problem. Maybe it’s the one you said where they have the OPC UA and it needs to connect to an OPC DA client. And, you know, and a lot of times, they’re they’re a little gun-shy to buy a license because they wanna make sure it’s gonna do exactly what they need first. And I think that’s where having your people can, you know, answer their questions saying, yes. We can do that or, no. We can’t do that. Or, you know, a a demo that they could download and run for an hour at a time to actually do a proof of concept for the boss who’s gonna sign off on purchasing this. And then the other thing is too, a lot of products like this have options. And you wanna make sure you’re buying the ticking the right boxes when you buy your license because you don’t wanna buy something you’re not gonna use. You wanna buy the exact pieces you need. So I highly recommend I mean, this product just does like, I have, in my mind, like, five things I wanna ask right now, but not gonna. But, yeah, def definitely, when it when it comes to a product like this, great to touch base with these folks. They’re super friendly and helpful, and, they’ll they’ll put you in the right direction. Connor Mason (Guest): Yeah.
I I can tell you that’s working someone to support. Selling someone a solution that doesn’t work is not something I’ve been doing. Bad day. Right. Exactly. Yeah. And we work very closely, between anyone that’s looking at products. You know, me being as technical product managers, well, I I’m engaged in those conversations. And Mhmm. Yeah. If you need a demo license, reach out to us to extend that. We wanna make sure that you are buying something that provides you value. Now kind of moving on into a similar realm. This is one of our still somewhat newer offerings, I say, but we’ve been around five five plus years, and it’s really grown. And I kinda said here, it’s called OPC router, and and it’s not it’s not a networking tool. A lot of people may may kinda get that. It’s more of a, kind of a term about, again, all these different type of connections. How do you route them to different ways? It it kind of it it separates itself from the Cogent data hub, and and acting at this base level of being like a visual workflow that you can assign various tasks to. So if I have certain events that occur, I may wanna do some processing on that before I just send data along, where the data hub is really working in between converting, streaming data, real time connections. This gives you a a kind of a playground to work around of if I have certain tasks that are occurring, maybe through a database that I wanna trigger off of a certain value, based on my SCADA system, well, you can build that in in these different workflows to execute exactly what you need. Very, very flexible. Again, it has all these different type of connections. The very unique ones that have also grown into kind of that OT IT convergence, is it can be a REST API server and client as well. So I can be sending out requests to, RESTful servers where we’re seeing that hosted in a lot of new applications. I wanna get data out of them. Or once I have consumed a variety of data, I can become the REST server in OPC router and offer that to other applications to request data from itself. So, again, it can kind of be that centralized area of information. The other thing as we talked about in the automation pyramid is it has connections directly into SAP and ERP systems. So if you have work orders, if you have materials, that you wanna continue to track and maybe trigger things based off information from your your operation floors via PLCs tracking, how they’re using things along the line, and that needs to match up with what the SAP system has for, the amount of materials you have. This can be that bridge. It’s really is built off the mindset of the OT world as well. So we kinda say this helps empower the OT level because we’re now giving them the tools to that they understand what what’s occurring in their operations. And what could you do by having a tool like this to allow you to kind of create automated workflows based off certain values and certain events and automate some of these things that you may be doing manually or doing very convoluted through a variety of solutions. So this is one of those prod, products as well that’s very advanced in the things that supports. Linux and Docker containers is, is definitely could be a hot topic, rightfully so. And this can run on a on a Docker container deployed as well. So we we’ve seen that with the IT folks that really enjoy being able to control and to higher deployment, allows you to update easily, allows you to control and spin up new containers as well.
Connor Mason (Guest): This is also one of those products that's very advanced in what it supports. Linux and Docker containers are definitely a hot topic, rightfully so, and OPC Router can be deployed in a Docker container as well. We've seen that with the IT folks who really enjoy being able to control their deployments: it allows you to update easily, and to control and spin up new containers. That gives you a lot of flexibility to deploy and manage these systems.

Shawn Tierney (Host): You know, I may wanna have you back on to talk about this. There's an old product called Rascal that I used to use. It was a transaction manager, and based on data changing, or on a timer as a trigger, it could take data either from the PLC to the database or from the database to the PLC, and it would work with stored procedures. This seems like it hits all those points, and it sounds like it's a visual workflow builder, like you said, right there on the slide.

Connor Mason (Guest): Yep.

Shawn Tierney (Host): So you really piqued my interest with this one, and it may be something we wanna come back to and revisit in the future, because I know that older product was very useful and really solved a lot of applications back in the day.

Connor Mason (Guest): Yeah, absolutely. And this just takes that on and builds even more. If anyone was listening at the beginning of this year, too: there was a conference called Prove It that was very big in the industry. We were there, and we presented a solution on stage. I highly recommend searching for that; it's on our web pages and on their YouTube channel, and it's called Prove It. OPC Router was a big part of the back end. As a quick overview, we were able to use Google AI Vision to take camera data and detect whether someone was wearing a hard hat. All the logic behind getting that information to Google AI Vision was done through REST with OPC Router; we then parsed the response back through that connection and provided it to the PLCs. So we went all the way from a camera, up to Google AI Vision through OPC Router, and back down to a PLC controlling a light stack, all on hotel Wi-Fi. It's a very, very fun presentation, and I think our team did a really great job.

So, a pretty new offering I wanna highlight is our DataCaster. This is an actual piece of hardware. Software Toolbox does have some hardware as well; it's part of the nature of this environment and how we mesh in between things. The idea is that there are a lot of different use cases for HMI and SCADA. They've grown so much from what they used to be, they're a core part of the automation stack, and a lot of times they're doing many things beyond that as well. What we found is that in different areas of operations, you may not need all that control, and you may not even have the space to put up a whole workstation. The DataCaster simply plugs into any network and into an HDMI-compatible display, and gives you a very easy-to-configure way to put a few key metrics onto a screen. You can connect directly to PLCs like Allen-Bradley, to SQL databases, and to REST APIs to gather data from these different sources and build an easy-to-view KPI dashboard. So if you're on an operations line and you wanna look at your current run rate, or maybe there are certain things in the PLC tags, flow and pressure for example, that are very important for the operators to see; they may not even have the capacity to be interacting with anything.
They just need visualizations of what's going on. This product can be installed in industrial areas with any type of display you can easily access, and give them something they can easily look at. It's configured all through a web browser to display what you want, and you can apply different colors based on value levels as well. It seems so simple, but those might be the things that provide value on the actual operations floor. For anyone watching, this is a quick view of a very simple screen showing all the different data sources: talking directly to a ControlLogix PLC, talking to SQL databases, Micro800s, and a REST client. And what's coming very soon, definitely by the end of this year, is OPC UA support, so any OPC UA server out there that already has your PLC data, this can connect to and get values from as well.

Shawn Tierney (Host): Can you make it, and here I go, can you make it change pages every few seconds?

Connor Mason (Guest): Right now it's a single page, but like I said, this is a very new product, so we're taking any feedback. If that type of slideshow cycle would be valuable to anyone out there, let us know. We're always interested in hearing from the people actually working at these operations sites about what's valuable to them.

Shawn Tierney (Host): A lot of kiosks you see when you're traveling will show, say, line one for five seconds, then line two for five seconds, and so on. I only mention it because I can see it being a question I'd get from somebody asking me about this.

Connor Mason (Guest): Oh, great question. Appreciate it. Alright, so now we've set aside time for a little hands-on demo. For anyone that's just listening, I'm gonna talk through this at a high level. The idea is that we have a few different PLCs: a very common Allen-Bradley, and a Siemens S7-1500 that's in our office, pretty close to me, on the other side of the wall actually. We're gonna start by connecting those to our TOP Server, like we talked about; this is our industrial communication server that offers OPC DA, OPC UA, and SuiteLink connectivity. Then we're gonna bring this into our Cogent DataHub, which, as we discussed, gets those values up to the higher levels. We'll also be tunneling the data; we talked about being able to share data between the DataHubs themselves, and I'll explain why we're doing that here and the value it adds. Then we're gonna showcase adding MQTT on at this level: taking data from just these two PLCs sitting on a rack, I can automatically make all that information available through an MQTT broker, so any MQTT client out there that wants to subscribe to that data now has it accessible. And I've created all of this through a really simple workflow. We also have some databases connected: InfluxDB, which we install with Cogent DataHub, along with a free visualization tool that helps you see what's going on in your processes.
I wanna showcase a little bit of that as well. Alright, jumping into our demo, where we first start off is our TOP Server. Like I mentioned before, if anyone has worked with KEPServerEX in the past, this is gonna look very similar, because it is: it's the same underlying technology. The first thing I wanted to establish in our demo was the connection to our PLCs. I have a few here, but we're only gonna use the Allen-Bradley and the Siemens in the time we have. How this builds out as a platform is that you create channels, and the device connections beneath them. A channel is your physical connection: either a TCP/IP connection or maybe a serial connection, and we have support for all of them. It really is a long list; anyone watching can see all the different drivers we offer. By bringing this into a single platform, you can base all your connectivity here, and all the different connections up the stack, your SCADA, your historians, even MES, can go to a single source, which makes management and troubleshooting a bit easier. One of the first things I did here, and I have this built out already, but I'll walk through what you would typically do, is the Allen-Bradley ControlLogix Ethernet driver. I have some IPs in here I won't show, but regardless, we have our driver here, and then we have a set of tags. These are all the global tags in the programming of the PLC. The way I got these mapped automatically is that our driver can create tags automatically: you send a command to the device asking for its entire tag database, and it comes back, provides all of that, and the tags get mapped out and created for you. This saves a lot of time compared to an engineer going in and addressing all the individual items by hand. Once it's defined in the PLC project, you can bring it all in automatically. I'll show how easy that makes connecting to something like the Cogent DataHub. In a very similar fashion, I have a connection over here to the Siemens PLC, and you can see beneath it all these different tag structures, created the exact same way. As long as the PLC supports it, you can do automatic tag generation, bring in all the structure you've already built in your PLC programming, and make it available on this OPC server. So that's really the basis: first we establish communications to these PLCs and get that tag data, and then, what do we wanna do with it? In this demo, the next thing I wanted to bring up was the Cogent DataHub. Here I see a very similar kind of layout, with a different set of plugins on the left side. For anyone listening, the Cogent DataHub, again, is our aggregation and conversion tool, covering all these different protocols like OPC UA, OPC DA, and OPC A&E for alarms and events. We also support OPC Alarms and Conditions, which is the newer profile for alarms in OPC UA. And we have a variety of different ways to get data into and out of the DataHub.
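For a sense of what a UA client sees against a server full of auto-generated tags, here is a small sketch using the open-source python-opcua package. The endpoint URL and node ID are assumptions for illustration, not TOP Server's actual defaults.

    from opcua import Client  # pip install opcua

    # Hypothetical sketch: connect to an OPC UA server, browse the top of
    # the address space, and read one value, the programmatic equivalent
    # of pointing a UA client at auto-generated tags.
    client = Client("opc.tcp://localhost:49380")
    client.connect()
    try:
        for child in client.get_objects_node().get_children():
            print(child, child.get_browse_name())   # discover what's exposed
        node = client.get_node("ns=2;s=Channel1.Device1.Pressure")
        print("Pressure:", node.get_value())
    finally:
        client.disconnect()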
Connor Mason (Guest): We can also do bridging. This concept is about how you share data between different points. Let's say I had a connection to one OPC server, communicating with a certain PLC, with certain registers I was getting data from. Now I also wanna connect to a different OPC server that has an entirely different brand of PLC behind it, and maybe I wanna share data between them directly. With this software, I can just bridge those points. Once they're in the DataHub, I can do whatever I want with them: I can let them write between those PLCs and share data that way, without doing any hardwiring directly between them, and they're able to communicate with each other. Through the standards of OPC and these various communication levels, I can integrate them together.

Shawn Tierney (Host): You know, you bring up a good point. When you do something like that, is there any heartbeat? Under the general section, or under one of these topics, are there tags from the DataHub itself that can be sent to the destination, like a heartbeat, to verify the transactions?

Connor Mason (Guest): Yeah, absolutely. There's a pretty strong scripting engine in here, and I have done that in the past: you can make internal tags, which could be a timer or a counter, and create your own tags that you share the same way, through a bridge connection, to a PLC. So yeah, there are definitely people with those use cases, where they wanna track something on the software side and get it out to the hardware PLCs.

Shawn Tierney (Host): I mean, when you send data out of the PLC, the PLC doesn't care who takes it. But when you're getting data into the PLC, you wanna make sure it's updating and fresh. So you throw a counter in there with the scripting, and as long as you see it incrementing, you know you've got good data coming in. That's a good feature.

Connor Mason (Guest): Absolutely.
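The heartbeat idea is easy to sketch generically. Cogent DataHub would do this with its built-in scripting engine; the version below instead uses python-opcua to keep incrementing a counter tag so the receiving PLC can alarm if the value ever stops changing. The endpoint and node ID are hypothetical.

    import time
    from opcua import Client  # pip install opcua

    # Hypothetical heartbeat: increment a counter tag once a second so the
    # PLC can verify the data link is alive and the values are fresh.
    client = Client("opc.tcp://localhost:49380")
    client.connect()
    try:
        heartbeat = client.get_node("ns=2;s=Channel1.Device1.Heartbeat")
        count = 0
        while True:
            count = (count + 1) % 32768       # roll over to fit a 16-bit register
            heartbeat.set_value(count)        # PLC logic alarms if this stops changing
            time.sleep(1.0)
    finally:
        client.disconnect()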
Connor Mason (Guest): You know, another big one is redundancy. Beyond just OPC, we can add redundancy to basically anything that has two instances running, across any of these different connections. What's unique is how it works on the buckets of data that you create. For example, if I have two different OPC servers and I put them into two areas, let's say OPC server one and OPC server two, I can now create an OPC redundancy data bucket. Any client that connects externally and wants that data talks to the bucket, and the bucket automatically switches between sources as things go down and come back up. The client never knows it happened unless you want it to; there are internal tags to show the current source and so on, but the idea is to keep the transition hidden, so that regardless of what's going on in operations, my external applications just read from a single source without knowing there are two things behind it actually providing the data. That's very important for historian connections, where you want a full, complete picture of the data coming in. If you make a redundant connection to two different servers and then let the historian talk to a single point, it doesn't have to handle the switching back and forth; it just sees the data flow seamlessly from whichever server is up at the time. Beyond that, there are quite a few other things in here, and I don't think we have time to cover all of them. For our demo, what I wanna focus on first is our OPC UA connection. This lets us act as an OPC UA client to get data from any server out there, like our TOP Server, and we can also act as an OPC UA server ourselves. So if data is coming in from multiple connections to different servers, or from other things that aren't OPC at all, I can automatically provide all of it in my own namespace and let things connect to me as well. That's part of the aggregation feature and the topic I was mentioning before. So with that, I have a connection here pulling data from my TOP Server, with a few different tags selected from my Allen-Bradley and my Siemens PLCs. The next part of this, as I was mentioning, is the tunneling. Like I said, this is very popular for getting around DCOM issues, but there are plenty of reasons you might still use it beyond the headache that DCOM was. It runs on a TCP stream that takes each data point as a value, a quality, and a timestamp, and mirrors those into another DataHub instance. Previously, to get things across the network from my OT side, I would have to open a port into my network for any OPC UA clients across the network to access. Now I can reverse the direction and tunnel data out of my network without opening any ports. This is really big for security. For anyone out there who is a security professional, or an engineer who works with IT and security a lot: you don't want an open port, especially into your operations and OT side. This lets you change the direction of flow and push data out, into another area like a DMZ computer, or up to a business-level computer. The other thing I have configured in this demo builds on the benefit of having that tunneled, streaming data: I can also store the data locally in an InfluxDB. The purpose is that I can historize it, and if the tunnel connection ever goes down, backfill any information that was lost while it was out. In real-time data scenarios like OPC UA, unless you have historical access, you would lose a lot of data whenever that connection went down. With this, I can use the InfluxDB back end to buffer values; when my connection comes back up, I pass them along the stream again. And if I have anything historically connected, another InfluxDB, maybe a PI historian, or any historian offering out there that allows that connection, I can provide all the records that were missed and backfill them into those systems.
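The store-and-forward pattern itself fits in a few lines. Here is a toy Python version, with a stubbed-out send() standing in for the tunnel or historian link; the real implementation described above buffers through InfluxDB on disk rather than in memory.

    import collections

    # Toy store-and-forward: buffer records while the upstream link is
    # down, then flush them in order once it comes back.
    buffer = collections.deque()

    def send(record) -> bool:
        """Try to push one record upstream; return False if the link is down."""
        return True  # stub: a network call would go here

    def publish(record):
        buffer.append(record)           # always buffer first
        while buffer:
            if not send(buffer[0]):     # link down: keep everything queued
                break
            buffer.popleft()            # sent OK: drop it from the buffer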
Connor Mason (Guest): So I've switched over to a second machine, and it's gonna look very similar here as well. This also has an instance of the Cogent DataHub running. For anyone not watching: what we have on this side is the portion of the tunneler that sits here listening for data coming in. On my first machine, I connected my PLCs and gathered that information into the Cogent DataHub, and now I'm pushing it across the network to this separate machine that's sitting here listening. So the first thing I can do is make sure I have all my data here. I have these different points from my Allen-Bradley PLC: a few simulation demo points like temperature, pressure, and tank level, plus a few statuses, all updating directly through that stream as the PLC updates them. I also have my Siemens controller, with some current values and a few counter tags. All of this, again, is being streamed directly through the tunnel. I'm not connecting to an OPC server at all on this side, and I can show you that here: there are no connections configured. I'm not talking to the PLCs directly on this machine either, but we're able to pass all the information through without opening any ports on my OT demo machine, so to speak. So what's the benefit of that? Again, security, and also the store-and-forward mechanism. On the other side, I was logging directly to an InfluxDB. That could be my buffer, and I configured it so that if any values were lost, they'd be stored and then forwarded across the network. Now on this side, if I pull up Chronograf, a free visualization tool that installs with the DataHub as well, I can see some very nice visual diagrams of what's going on with this data. I have a pressure value that's just a simulator in the Allen-Bradley PLC, ramping up and coming back down; it's not actually connected to anything reading a real pressure, but you can see it over time, and I can page through different time ranges. I might go back a little far here, but I have a lot of data stored. For a while during my test, I turned this off and made it fail, but then I came back in and it was able to recreate all the data and backfill it. So through these views, I can see that as data disconnects and comes back, I still have a clean, cyclical view of the data, because it was able to recover and store-and-forward from that source. Like I said, Shawn, data quality is a big thing in this industry, both for people on the operations side and for people making decisions in the business layer. Being able to have a full picture, without gaps, is definitely something you should prioritize when you can.

Shawn Tierney (Host): Now, what we're seeing here is you're using InfluxDB on this destination PC, the IT-side PC, and Chronograf, that utility that gets installed with it; it's free. But you don't actually have to use that. You could have sent this into an OSIsoft PI or somebody else's historian, right? Can you name some of the historians you work with? I know OSIsoft PI.

Connor Mason (Guest): Yeah, absolutely. There are quite a few. As far as what we support natively in the DataHub: Amazon Kinesis, the cloud-hosted service, which we can stream to from here in the same way; AVEVA Historian; AVEVA Insight; and Apache Kafka. Kafka is a newer one that used to be a very IT-oriented solution and is now getting into OT.
Connor Mason (Guest): Kafka has a similar structure, where things are stored in different topics that we can stream to. On top of that, there are just regular old ODBC connections, which open up a lot of different options, or even the old classic OPC HDA. So if you have any historian that can act as an OPC HDA connection, we can stream to it through there as well.

Shawn Tierney (Host): Excellent. That's a great list.

Connor Mason (Guest): The other thing I wanna show while we still have some time is the MQTT component. This is really growing, and it's gonna continue to be part of the industrial automation technology stack and its conversations moving forward: streaming data from devices and edge devices up into different layers, into OT, maybe out to IT and our business levels, and definitely into the cloud, where we're seeing a lot of growth. Like I mentioned, the big benefit with the DataHub is that I have all these different connections and can consume all this data. Well, I can also act as an MQTT broker. What a broker typically does in MQTT is route and share data. It's the central point that things come to, either to say, "Hey, I'm giving you some new values, share them with someone else," or, "Hey, I need these values, can you give them to me?" That fits super well with what this product is at its core. So all I have to do here is enable it. As an example, I have MQTT Explorer open; if anyone has worked with MQTT, you're probably familiar with it. There's nothing else I configured beyond enabling the broker, and you can see within this structure I have all the same data that was already in my DataHub, the same things I was collecting from my PLCs and TOP Server. They're now exposed as MQTT topics, in JSON format with the value and timestamp, and you can even see a little trend here matching what we saw in Influx. This lets all those different cloud connectors that wanna speak this language do it seamlessly.

Shawn Tierney (Host): So you didn't have to set up the PLCs a second time to do this?

Connor Mason (Guest): Nope. Not at all.

Shawn Tierney (Host): You just enabled this, and now the data's going this way as well.

Connor Mason (Guest): Exactly. That's a really strong point of the Cogent DataHub: once you have everything in its structure and model, you just enable whichever of these connections you want to use. You can get really creative with them. Like we talked about with the bridging aspect, you can get into different systems, even writing down to the PLCs. You can make custom notifications and email alerts based on any of these values. You could even take something like this MQTT connection, tunnel it across to another DataHub, and maybe then convert it to OPC DA, and now you've made a new connection over to something very legacy as well.

Shawn Tierney (Host): Yeah, I mean, the options here are pretty amazing, all the different things that can be done.

Connor Mason (Guest): Absolutely.
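For anyone who wants to try consuming that broker data themselves, here is a minimal subscriber sketch using the paho-mqtt package. The broker address and the catch-all topic filter are assumptions for illustration; point it at wherever the broker is actually listening.

    import json
    import paho.mqtt.client as mqtt  # pip install paho-mqtt (1.x style shown;
                                     # 2.x also needs a CallbackAPIVersion argument)

    # Print every JSON payload the broker publishes, e.g.
    # {"value": 42.7, "timestamp": ...}
    def on_message(client, userdata, msg):
        point = json.loads(msg.payload)
        print(msg.topic, point)

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("localhost", 1883)
    client.subscribe("#")            # every topic the broker exposes
    client.loop_forever()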
Connor Mason (Guest): Well, I wanna jump back into some of our presentation here while we've still got the time. Now that we're done with our demo: there are so many different ways you can use these tools. This slide is just a really simple view of something that used to be very simple, connecting OPC servers to a variety of different clients, expanded with the store-and-forward, the local InfluxDB usage, and getting out to things like MQTT as well. But there's a lot more you can do with these solutions. So, like Shawn said, reach out to us; we're happy to engage and see what we can help you with. I have a few other things before we wrap up. Overall, we've worked across nearly every industry, we have installations across the globe on every continent, and like I said, we'll have been around for pushing thirty years next year. We've seen a lot of different things, and we really wanna talk to anyone out there who is struggling with connectivity or has ongoing projects. If you work in these industries, or if there's nothing marked here but you have something going on that you need help with, we're very happy to sit down and let you know if there's something we can do.

Shawn Tierney (Host): Yeah, for those who are listening: we see most of the big energy and consumer-product companies on that slide, so I'm not gonna read them off, but there are a lot of car manufacturers too. These are the household name brands that everybody knows and loves.

Connor Mason (Guest): So, to wrap some things up: we've talked about the different ways we've helped solve things in the past, but I wanna highlight some of the unique ones that we've also done case studies and success stories on. This first one I actually got to work on within the last few years. A plastic-packaging manufacturer was looking to track uptime and downtime across multiple lines, and they had a new cloud solution they were already evaluating and were really excited to put into play; they saw a lot of upside to getting things connected to it. What they had was a lot of different PLCs, different brands, in different areas of operation, that they needed to connect. So first they brought everything into our TOP Server, similar to what we showed in the demo: get all the data into a centralized platform and make it accessible. Then, once they had all that information in one place, they used the Cogent DataHub to aggregate it and transform it to be sent to the cloud through MQTT. Very similar to the demo here, this is a real use case of that: getting information from PLCs, structuring it the way the cloud system needed it for MQTT, and streamlining that data connection to the point where it now just runs in operation. They constantly have updates on where their lines stand, tracking downtime and uptime, and they can do predictive analytics in that cloud solution based on their history. This let them build from what they already had. They went from a lot of manual tracking to an entirely automated system, with management able to see real views of what's going on at the operations level. Another one I wanna talk about is a success story we did with Ace Automation, who worked with a pharmaceutical company.
Ace Automation is an SI. They were brought in doing a lot of work with some old DDE connections and custom Excel macros, and were having a hard time maintaining legacy systems that were a pain to deal with. They were working with older LGH history files from some old InTouch HMIs, and what they needed was something that wasn't based on Excel and custom macros. One product we haven't talked about yet, but also carry, is our LGH File Inspector. It can take these files and put them out into a standardized format like CSV, and it also automates a lot of the questions around them: When should these files be queried? Should they be queried for different lengths? Should they be output to different areas? Can I set this up as a scheduled task so it runs automatically, rather than someone sitting down and doing it manually in Excel? They were able to recover over fifty hours of engineering time with the solution, between no longer taking late-night calls to troubleshoot an Excel macro that stopped working and no longer crashing machines that were kept on legacy systems just to support the old DDE servers, saving them two hundred plus hours of productivity. Another example: we were able to work with a renewable-energy customer doing a lot of innovative things across North America. They had a very ambitious plan to double their footprint in the next two years, and with that, they had to look back at their assets, see where they currently stood, and set new standards to support growing into what they wanted to be. They had a lot of different data sources, all siloed at specific sites; nothing was connected to a common, corporate-level point for historization, or for control and security. So again, they used our TOP Server to put a standard connectivity platform in place and brought in the DataHub as an aggregation tool. Each site had a TOP Server individually collecting data from its devices and sending it into a single DataHub, so the corporate level had a view of all the information from the different plants in one application. That then let them connect their historian applications to that DataHub and build a complete view, and visualizations, of their entire operations. What this allowed them to do was grow without replacing everything. And that's a big thing we strive for: not ripping out all your existing technologies, which isn't something you can do overnight, but providing value and gaining efficiency with what's in place, layering newer technologies on top without disrupting the actual operation. So this was really, really successful. And at the end here, I just wanna provide some other contacts and ways people can learn more. We have a blog that goes out every week on Thursdays, with a lot of good technical content, recaps of the awesome things we get to do here, and the success stories as well. You can always find that at blog.softwaretoolbox.com, and again, our main website is just softwaretoolbox.com. You can get product information and downloads, and reach out to anyone on our team. Let's discuss whatever issues you have going on, or any new projects; we'll be happy to listen.
Shawn Tierney (Host): Well, Connor, I wanna thank you very much for coming on the show and bringing us up to speed, not only on Software Toolbox, but also on TOP Server, and for doing that demo with TOP Server and DataHub. Really appreciate that. And like you just said, if anybody has any projects you think these solutions might solve, please give these folks a call. And if you've already done something with them, leave a comment, no matter where you're watching or listening to this, and let us know what you did. What did you use? Like me: I used OmniServer all those many years ago, and of course TOP Server as an OPC server, and Symbol Factory, which I use all the time. If you're using these products, let us know in the comments; it's always great to hear from people out there. With thousands of you listening every week, I'd love to hear whether you're using these products. Or if you have questions, put them in the comments and I'll funnel them over to Connor. So with that, Connor, did you have anything else you wanted to cover before we close out today's show?

Connor Mason (Guest): I think that was it, Shawn. Thanks again for having us on. It was really fun.

Shawn Tierney (Host): I hope you enjoyed that episode, and I wanna thank Connor for taking time out of his busy schedule to come on the show and bring us up to speed on Software Toolbox and their suite of products. I really appreciated the demo at the end, too; if you're watching, you actually got a look at their products and how they work. And I appreciate him taking all of my questions. I also appreciate that Software Toolbox sponsored this episode, meaning we were able to release it to you without any ads. If you're doing any business with Software Toolbox, please thank them for sponsoring this episode. And with that, I just wanna wish you all good health and happiness. Until next time, my friends, peace.

Until next time, Peace ✌️ If you enjoyed this content, please give it a Like, and consider sharing a link to it, as that is the best way for us to grow our audience, which in turn allows us to produce more content.

Category Visionaries
How StrongestLayer achieved 85% meeting-to-POC and 100% POC-to-win rates using transparent one-week pilots | Alan LeFort

Category Visionaries

Play Episode Listen Later Oct 1, 2025 26:38


StrongestLayer is building AI-native email security architecture designed for threats that defeat pattern-matching systems. The company pivoted from security awareness training after early customers discovered its phishing detection plugin caught advanced threats that legacy gateway solutions missed. In a recent episode of Category Visionaries, we sat down with Alan LeFort, CEO of StrongestLayer, to discuss why architectural generation matters more than vendor reputation in email security, and how they're using transparent proof-of-concept methodology to displace 20-year incumbents.

Topics Discussed:
Why AI-generated attacks with n=1 datasets break signature-based detection architectures
The convergence of legitimate marketing automation and phishing techniques (lookalike domains, intent signals, AI-personalized messaging)
How 2% of attack types represent 90% of breach value, forecast to reach 17% of volume by 2027
Transparent POC strategy achieving 85% meeting-to-POC and 100% qualified-POC-to-technical-win conversion
Stage-based ICP selection: targeting 1,000-10,000 seats for sub-6-month sales cycles with enterprise compliance requirements
Harvard Kennedy School research: AI enables 88% employee profiling from public data, 95% cost reduction for targeted campaigns, and 60% click rates versus 12% baseline

GTM Lessons For B2B Founders:
Deploy transparent POCs as category displacement weapons: When attacking entrenched incumbents, StrongestLayer runs one-week POCs behind existing email security gateways with zero commercial pressure—just visibility into what's being missed. At a sub-1,000-seat company running behind a top-three market leader, they surfaced 80 advanced threats in one week. This approach converts 85% of first meetings to POC and 100% of qualified POCs to technical wins. The insight: In technical categories where buyers are sophisticated, removing evaluation friction and letting comparative performance speak eliminates trust barriers faster than enterprise reference selling.
Stage-match your ICP to burn rate tolerance, not TAM: Alan deliberately excludes Fortune 500 despite universal email security need: "When their procurement team is bigger than your whole company, not a good scene." Instead, they target 1,000-10,000 seats—enterprises with SOC2/compliance obligations but without Fortune 500 security budgets or staffing. These accounts close in under 6 months. The framework: Define ICP by sales cycle length your runway can sustain, then expand segments as capital position improves. Your ICP should evolve with company stage, not remain static based on ideal long-term positioning.
Trade IP opacity for velocity when architectural advantage compounds: Unlike security vendors protecting methodology behind NDAs, StrongestLayer publishes full product demos on YouTube and shares detection logic openly. Alan's thesis: "I'm going all in on velocity. I'm going to transparently share, get it in front of as many customers as we can." This works because their advantage is continuous AI model improvement velocity, not a static algorithm competitors could copy. If your moat is execution speed and iteration cycles rather than a single proprietary technique, transparency accelerates trust-building and shortens enterprise consideration periods.
Quantify the shift from volume metrics to value-at-risk metrics: Rather than competing on total threat detection volume, StrongestLayer focuses on the 2% of attack types (BEC, advanced spear phishing) that represent 90% of breach value—and are growing to 17% of attack volume by 2027. They weaponize third-party research (Harvard Kennedy School) showing AI reduces targeted attack costs by 95% while increasing success rates from 12% to 60%. The pattern: Find authoritative external validation that the threat landscape is fundamentally shifting, making incumbent solutions architecturally insufficient regardless of brand strength.
Bifurcate messaging by operational reality, not just title: Alan messages CISOs around risk buying-down and ROI, positioning email security as a solved problem that's becoming unsolved. For security operations teams, the pitch centers on eliminating 70% false-positive user submissions that waste skilled analyst time. Both personas use the same tools, but CISOs face board-level breach risk while SOC teams face daily toil from alert fatigue. The takeaway: Map distinct daily operational pains for each buying committee member rather than broadcasting unified value propositions that dilute relevance.

//

Sponsors:
Front Lines — We help B2B tech companies launch, manage, and grow podcasts that drive demand, awareness, and thought leadership. www.FrontLines.io
The Global Talent Co. — We help tech startups find, vet, hire, pay, and retain amazing marketing talent that costs 50-70% less than the US & Europe. www.GlobalTalent.co

// Don't Miss: New Podcast Series — How I Hire
Senior GTM leaders share the tactical hiring frameworks they use to build winning revenue teams. Hosted by Andy Mowat, who scaled 4 unicorns from $10M to $100M+ ARR and launched Whispered to help executives find their next role. Subscribe here: https://open.spotify.com/show/53yCHlPfLSMFimtv0riPyM

IIoT Use Case Podcast | Industrie
#187 | Getting IoT security planning right: SIM ID, private APN & traffic analysis | A1 Digital

IIoT Use Case Podcast | Industrie

Play Episode Listen Later Oct 1, 2025 26:06


www.iotusecase.com | #APN #TrafficAnalyse #SIM
In this episode, host Ing. Madeleine Mickeleit speaks with Peter Gaspar, Vertical Market Solutions and Head of Solution Architecture at A1 Digital. The topic: IoT security from the first prototype through to scale, with a focus on SIM-based identity, private APNs, network-side anomaly detection, and practical examples such as smart metering.
Episode 187 at a glance (and a click):
(08:32) Challenges, potential, and status quo: what the use case looks like in practice
(15:27) Solutions, offerings, and services: a look at the technologies in use
(23:42) Transferability, scaling, and next steps: how you can apply this use case
Podcast summary
How do companies secure connected devices economically and at scale? This episode is aimed at IT and OT leaders who are putting large numbers of devices into the field and moving PoCs into production. The central question: how do you balance risk, cost, and device constraints such as battery operation and limited compute? Peter shows how existing network functions can be put to targeted use: SIM-based authentication provides a unique device identity; private APNs, IP filters, static IP addresses, MPLS, or IPsec protect communication all the way into your own network; and network-side anomaly detection flags irregularities and prevents misuse of removed SIM cards.
In critical applications such as smart metering, this is complemented by certificate management, end-to-end encryption, secured firmware updates, and tamper detection on the device.
Looking ahead: 5G opens up options such as network slicing for separate security domains and the SIM as a secure element, and the design also prepares you for NIS2 requirements. The result: a security concept that fits, spares devices, protects budgets, and stays scalable.
Note: Want to meet A1 Digital live?
Smart Country Convention Berlin 2025 (Sep 30 to Oct 2), it-sa Expo&Congress (Oct 7 to 9), RecyclingAKTIV & TiefbauLIVE 2025 (Oct 9 to 11).
Free tickets for all three events are available via the A1 Digital landing pages.
----
Relevant episode links:
Madeleine (https://www.linkedin.com/in/madeleine-mickeleit/)
Peter (https://www.linkedin.com/in/petergaspar/)
A1 Digital partner profile (https://iotusecase.com/de/unternehmen/a1-digital/)
Solution example: Hawle hydrants (https://www.a1.digital/de/case-studies/hawle-hydranten-iot/)
A1 Digital (https://www.a1.digital/de/case-studies/)
Follow IoT Use Case on LinkedIn now
Get the IoT Use Case update once a month

HME News in 10
Doug Francis on the POC ‘tipping point’

HME News in 10

Play Episode Listen Later Sep 24, 2025 14:27


Welcome to a special episode of HME News in 10, sponsored and developed by Rhythm Healthcare.
In this episode of HME News in 10, Doug Francis, CEO of Rhythm Healthcare, talks about the HME “trailblazers” who are embracing portable oxygen concentrators.
“This is a provider who has invested, not only in the efficiencies that can be derived from POCs, but also turning that into clinical messaging to tell referral sources, ‘We're a technology-forward company,’” he said. “When they take the time to educate on the benefits to the patient, and when they tell that story, they are seeing a 30%-plus increase in respiratory referrals.”
For Francis, the industry is at a tipping point where both the clinical community and the consumer are going to start demanding POCs – a great time to make the switch.
Hosts: Liz Beaulieu, Theresa Flaherty
Guest: Doug Francis

Earley AI Podcast
Earley AI Podcast - Episode 74: AI in B2B Commerce: From Data Challenges to Differentiation

Earley AI Podcast

Play Episode Listen Later Sep 22, 2025 42:47


This episode of the Earley AI Podcast features Rudy Abitbol, a recognized expert in B2B commerce and AI. With years of hands-on experience helping global enterprises with digital transformation—especially around product information management, large-scale AI adoption, and making cutting-edge technology truly practical—Rudy brings a pragmatic view to how AI is revolutionizing B2B e-commerce. He's passionate about making AI accessible and effective for tackling real-world business challenges.

Key Takeaways:
AI-Driven Transformation: AI is dramatically streamlining the content generation and management process in B2B commerce, especially when dealing with vast, complex product catalogs.
Platform Flexibility Matters: Tools like Shopify are lowering barriers for B2B companies by offering user-friendly interfaces and robust ecosystems that allow for rapid configurations—often with no coding required.
Vibe Coding & Rapid Prototyping: "Vibe coding" empowers product owners to quickly move from conversations and requirements to working wireframes and functional specs, tightening feedback loops and boosting innovation.
Balancing Efficiency & Differentiation: While AI tools help standardize processes, true competitive advantage comes from infusing proprietary business knowledge into digital solutions.
Data Readiness & Quality: AI can enrich incomplete or messy data, but organizations must still focus on good data structure and master data management to fully capitalize on AI and automation.
AI Adoption Hurdles: The most common blockers are cultural—resistance to change and undertraining. Hands-on learning, POCs, and fostering a culture of curiosity are essential.
Empowering Teams: True AI readiness isn't just about tools but about giving teams the time, training, and encouragement to experiment—and even risk failure—as they integrate AI into daily workflows.
Avoiding Over-Reliance on AI: While AI enhances productivity, it can't replace human judgment. Insightful, context-driven use of AI yields the biggest returns.
Building Core Competency: Don't outsource the heart of your digital transformation; instead, build internal expertise and maturity to stay ahead in a rapidly evolving landscape.

Insightful Quote from Rudy Abitbol:
"All the insight within the phrasing, within the way that it's done...still needs to come from you. You still need to have someone that is a product owner with a great vision, because that's the sole person that is able to infuse [the business]."

Tune in to learn how AI is fundamentally reshaping B2B commerce and how leaders can stay ahead of the digital curve.

Links:
LinkedIn: https://www.linkedin.com/in/rudyabitbol/
Website: https://www.trustinsights.ai

Thanks to our sponsors: VKTR, Earley Information Science, AI Powered Enterprise Book

DeliCatessen
Christian Lee Hutson, marvels in miniature

DeliCatessen

Play Episode Listen Later Sep 17, 2025 60:04


Few artists have the ability to concentrate so much beauty in a single song.

Grow Your B2B SaaS
S7E6 - How is AI Transforming Go To Market for B2B SaaS with Maja Voje

Grow Your B2B SaaS

Play Episode Listen Later Sep 9, 2025 44:28


How is AI Transforming Go To Market for B2B SaaS? Inbound go-to-market for SaaS is undergoing a major transformation. What once relied on blog posts, lead magnets, and cold outreach is now powered by artificial intelligence. AI is no longer just a content assistant. It now fuels end-to-end workflows, drives strategy, qualifies leads, and personalizes outreach at scale. SaaS teams are deploying AI agents to track LinkedIn signals, automate follow-ups, and even manage outbound efforts. This evolution is unlocking new levels of speed and scale, but it also brings real risks if automation isn't carefully managed. In this episode of the Grow Your B2B SaaS Podcast, Maja Voje breaks down how AI is reshaping inbound GTM. She shares what's working today, where teams should stay hands-on, and how to build AI-assisted systems without losing the human connection that still drives trust in B2B. If you're building or scaling a SaaS product, this is your playbook for doing it smarter with AI.

Key Timecodes:
(0:00) - Boosting AI Content Performance & Automating Founder Workflows
(0:53) - What Is AI's Role in SaaS GTM? [With Guest Maja Voje]
(1:48) - Is Everything Dead? Why AI Agents Are the Future of SaaS Workflows
(2:55) - Multi-Agentic Workflows Explained: Tools, Agents & Human Oversight
(4:28) - Why You Must Earn the Right to Automate with AI
(5:27) - SaaS Automation Gone Wrong: Avoiding Enterprise Pitfalls
(6:15) - AI Agents: Build or Buy? Key Considerations for GTM Leaders
(6:38) - Mapping GTM Workflows: LinkedIn, DMs, Offers & Content Ops
(8:00) - Real-Life AI Marketing Automations You Can Use Today
(9:43) - How Many AI Agents Do You Really Need for LinkedIn & Lead Gen?
(11:08) - Iterating AI Models Post-Training: Prompts, Builders & Feedback Loops
(12:55) - AI Costs, Compliance & Rollouts: From POC to Scalable Deployment
(15:07) - Data Security in AI: The Case for 'Least Privilege' Access
(16:04) - Rule of Thumb: Don't Share Data You Wouldn't Give a Friend
(16:13) - Sponsor Spotlight: SaaStock Dublin—Investor Matchmaking + Discounts
(17:22) - Inbound Marketing with AI: LinkedIn Trends & Time-Wasters to Avoid
(18:54) - External vs Internal Knowledge Bases: Training AI Without Garbage Input
(20:31) - Why AI Design Often Fails: Creatives, Claude vs ChatGPT & Brand Gaps
(21:53) - LinkedIn AI Strategy: Commenting, Publishing & Legal Risks in the EU
(23:30) - AI-Powered Outbound Marketing: ICP Scoring, Lead Research & Social Selling
(25:52) - Training Your Team on AI: Avoiding Content Quality Pitfalls
(27:26) - Human-in-the-Loop Design: What to Automate vs Delegate
(28:43) - The AI-First Founder Mindset: Culture, Talent & Psychological Safety
(31:20) - AI Implementation Choices: From Prototypes to Governance Guardrails
(33:29) - PR & Leadership: Why 'We Replaced 7 People with AI' Is a Bad Look
(34:10) - 2-Year AI Roadmap: Think Strategically, Reflect Often, Stay Safe
(36:20) - Going from 0 to 10K MRR: Learn to Sell, Test Pricing, and Stay Focused
(38:53) - Bootstrapping with AI: Don't Waste Model Credits, Focus on ROI
(39:32) - Scaling to $10M ARR with AI: Ecosystem Marketing & Creator-Led Trust
(40:47) - Recap: AI Workflows, POCs, LinkedIn Automation & Strategic Thinking
(42:35) - Connect with Guest Maja Voje on LinkedIn
(42:58) - Subscribe to the GTM Strategies Newsletter on Substack
(43:28) - Final CTA: Review the Show, Sponsor, Ask Questions, and Connect

AI DAILY: Breaking News in AI
CAN AI BE ROMANTIC?

AI DAILY: Breaking News in AI

Play Episode Listen Later Sep 8, 2025 3:58


Plus: Should We Let AI Speak For The Dead?
Like this? Get AIDAILY, delivered to your inbox 3x a week. Subscribe to our newsletter at https://aidailyus.substack.com

Dating Apps Out, AI Boyfriends In
Writer Patricia Marx test-drives digital romance with Replika, Character.AI, and JanitorAI—chatbots that flirt, console, and glitch their way through “relationships.” Her experience shows the weird, funny, and sometimes sad truth of swapping real-world intimacy for algorithmic affection.

I Hate My AI Friend (It's Creepy)
A wearable pendant called “Friend” (think AirTag crossed with snarky AI) listens to your life and offers real-time commentary. Built using Gemini 2.5, it often feels abrasive and socially awkward—so much so that WIRED writers ditched it fast due to the cringe vibes and privacy creep.

When AI Speaks for the Dead, Should We Listen?
An AI-generated video resurrected a murder victim—voiced and scripted by his sister—to speak at sentencing. Patricia Williams warns this chilling blend of technology and emotion risks blurring truth with scripted performances, unsettling legal testimony, memory, and the sacred silence of loss.

AI “Slop” Is Clogging Your Brain
Brace yourself—your feed is drowning in “AI slop”: low-effort, repetitive content flooding TikTok, Facebook, and Insta for clicks and cash. It's addictive, shallow, and mind-numbing—turning us into scroll zombies while cheap bots cash in online.

AI Has No Clue What It's Doing—And It's Threatening Us
A Charles Darwin University study slams unregulated AI for eroding human dignity—pointing to privacy violations, bias, and lost autonomy hidden behind “black box” models. Dr. Maria Randazzo warns that without human-centered, global governance, AI risks reducing us to mere data points.

AI Boom Leaves Consultants in the Dust
Even though consultants poured billions into AI hype, firms like Deloitte, PwC, McKinsey, and Bain are struggling to deliver real results. With client teams now being just as skilled—or better—many PoCs never scale, and businesses are increasingly going in-house or using freelancers.

Candidates Don't Trust AI Recruiters
A March 2025 Gartner survey found only 26% of job candidates trust AI to evaluate them fairly—even as over half suspect AI is screening their applications. Fears cluster around bias, lack of transparency, and being treated as data points, while concerns over “ghost jobs” fuel skepticism about the legitimacy of postings.

Brain-Inspired Chips Are the Real AI Disruptors
Traditional CPUs/GPUs powering AI juggle power, heat, and lag—not ideal for real-time stuff like robots or self-driving cars. Neuromorphic chips mimic the brain's event-driven style, crunching data locally with tiny energy use, no cloud needed. CIO says they might eclipse quantum for making edge AI smarter and greener.

Cloud Masters
GenAI at production scale: Why GenAI POCs fail and how AWS thinks about production readiness

Cloud Masters

Play Episode Listen Later Sep 5, 2025 50:44


We're joined by two GenAI experts from AWS and DoiT to understand why GenAI POCs fail at production scale, how to evaluate LLMs, and how to approach GenAI production readiness. The discussion covers four GenAI workload migration patterns, AWS Bedrock's systematic migration framework, enterprise compliance challenges including HIPAA and GDPR requirements, and practical approaches to evaluating model performance across speed, cost, and quality.

The Product Market Fit Show
He Bootstrapped to $55M ARR—without ever having to hit 100% YoY growth. | Stéphan Donzé, Founder of AODocs

The Product Market Fit Show

Play Episode Listen Later Aug 28, 2025 43:23 Transcription Available


Stéphan bootstrapped AODocs to $55M in revenue and 250 employees without taking a dime of VC money—while competing directly with venture-backed competitors. Starting as a services company in 2012, he spotted the cloud migration wave early and built document management for enterprises moving to Google Workspace. In this episode, Stéphan breaks down why doubling every two years beats hypergrowth, how to win enterprise deals with zero funding, and why touching business-critical documents means year-long sales cycles but 10-year retention. This is the anti-Silicon Valley playbook that actually works.

Why You Should Listen:
Why the founder must personally close every single deal in 0 to 1
How doubling every 2 years (not every year) creates a more stable business
The brutal reality of enterprise POCs: doing it for free before getting paid
Why you can't have both fast customer acquisition and high retention
How being French/European became an advantage against US competitors

Keywords: AODocs, bootstrapping, Stéphan Donzé, enterprise sales, document management, SaaS, Google Workspace, cloud migration, product market fit, B2B

00:00:00 Intro
00:01:12 Bootstrapping vs VC backed
00:03:44 From services to SaaS
00:19:08 Landing the first customer
00:20:47 Why they turned down VC money
00:25:32 The 997 grind—four days on-site with customers every week
00:35:21 Why you can't have fast sales and high retention
00:40:33 Product-market fit

Send me a message to let me know what you think!

Generation AI
95% AI pilots fail?, Process is the real AI killer, Buying wins 2x over building, Enter the forward deployed engineer

Generation AI

Play Episode Listen Later Aug 26, 2025 48:13


In this episode of Generation AI, hosts Ardis Kadiu and JC Bonilla examine the widely misinterpreted MIT report claiming "95% of GenAI pilots fail," exploring why this headline misses the real story. While individual employees are finding significant value with AI tools (90% use personal AI regularly), organizations struggle to capture this value at the enterprise level—not because the technology doesn't work, but due to change management, leadership alignment, and implementation challenges. Through Element451's own QBR automation struggles, the hosts illustrate how the gap between impressive demos and measurable business impact stems from organizational readiness, not technological limitations. They discuss why vendor solutions succeed at twice the rate of internal builds (67% vs 33%), introduce Forward Deployed Engineers as the bridge between technology and business context, and explain why back office automation delivers higher ROI than marketing despite budget allocation. This conversation provides practical guidance for higher education leaders on moving from shadow AI productivity gains to true enterprise transformation, emphasizing that the challenge isn't whether AI works—it's how organizations need to evolve to capture its value.

AI Deployment Reality Check: The 95% Failure Rate (00:01:35)
MIT report reveals 95% of GenAI pilots fail to deliver P&L impact
Only 5% achieve rapid revenue growth
Discussion of how this mirrors Element451's internal experiences
The difference between pilots, POCs, and actual products

The Shadow AI Phenomenon (00:02:43)
90% of employees using personal AI tools vs enterprise subscriptions
Bottom-up adoption through consumer tools like ChatGPT
Why organizations can't measure or control individual productivity gains
The challenge of enterprise AI adoption vs consumer AI

Building vs Buying: The Success Rate Gap (00:03:34)
Internal build success rate: 33%
Vendor purchase success rate: 67%
Why vertical solutions outperform generic tools
The importance of domain expertise in AI deployment

Element's QBR Case Study: When AI Projects Struggle (00:14:18)
Quarterly Business Review automation challenges
The gap between data analytics and expert interpretation
Why AI needs embedded best practices and rubrics
The difference between finding patterns and implementing expertise

Marketing vs Back Office: Where Real ROI Lives (00:24:10)
Over 50% of AI budgets go to sales and marketing
Why back office automation delivers higher returns
The binary nature of workflow automation success
Examples: fraud detection, application review, transcript analysis

The Forward Deployed Engineer Model (00:35:56)
Origin from Palantir's government contracts
How OpenAI uses FDEs for enterprise clients
The hybrid role: technical expertise + business understanding
Why traditional consultants can't fill this gap

The Unicorn Problem: Finding AI Operations Specialists (00:41:24)
Scarcity of people who understand both workflows and AI technology
Why agencies need to evolve their business models
The opportunity for innovative consultancies
Element's challenge in scaling deployment expertise

Key Recommendations for Institution Leaders (00:45:18)
Move from bottom-up to top-down AI strategy
Properly resource AI initiatives (not just IT side projects)
Buy rather than build for 2x success rate
Look for vertical solutions with deep domain knowledge
Include internal champions in deployment projects

Final Thoughts: Moving Beyond Productivity to Transformation (00:49:31)
The shift from individual productivity to enterprise ROI
Why POC success doesn't equal business impact
The importance of AI workflow coverage
Accepting that most organizations aren't behind—everyone is struggling

- - - -

Connect With Our Co-Hosts:
Ardis Kadiu
https://www.linkedin.com/in/ardis/
https://twitter.com/ardis
Dr. JC Bonilla
https://www.linkedin.com/in/jcbonilla/
https://twitter.com/jbonillx

About The Enrollify Podcast Network:
Generation AI is a part of the Enrollify Podcast Network. If you like this podcast, chances are you'll like other Enrollify shows too! Enrollify is made possible by Element451 — The AI Workforce Platform for Higher Ed. Learn more at element451.com.

The AI with Maribel Lopez (AI with ML)
Verint Executive Reveals: The 3 Best Starting Points for Enterprise Agentic AI Adoption

The AI with Maribel Lopez (AI with ML)

Play Episode Listen Later Aug 20, 2025 33:11


Episode Overview
In this episode, Maribel Lopez sits down with David Singer, Global Vice President of Go-to-Market Strategy at Verint, to explore the rapid evolution from generative AI to agentic AI and how organizations can successfully implement AI solutions that deliver real business outcomes.

Key Topics Discussed

The Evolution from Generative to Agentic AI
- Generative AI: Excellent at answering questions and synthesizing information from knowledge sources
- Agentic AI: Takes the next step by actually executing actions autonomously, not just providing recommendations
- The critical difference: autonomous decision-making versus rules-based automation

Building Trust in Autonomous AI Systems
- Start with human-in-the-loop monitoring for training and validation
- Gradually reduce oversight from constant monitoring to spot checks
- Apply quality monitoring practices to AI agents similar to human agents
- Consider AI agents as "silicon-based employees" requiring training, access controls, and performance management

Successful AI Implementation Strategies
- Start with Clear Outcomes: Define specific business goals before selecting technology
- Focus on solutions that deliver outcomes, not just impressive technology
- Begin with well-understood processes that can be enhanced rather than completely reimagined

Three Proven Starting Points:
1. Call Wrap-up Automation: AI-powered summarization reduces agent workload
2. IVR Modernization: Convert top call flows to agentic conversational AI
3. Quality Management: Scale from monitoring 1-3% of calls to near 100% coverage

Vendor Selection Criteria
- Proven outcomes at scale: Look for vendors with demonstrated success stories and customer references
- Technology adaptability: Choose providers who can evolve with the rapidly changing AI landscape
- Production readiness: "POCs are easy, production is hard" - prioritize vendors with production deployment experience

Change Management for AI Adoption
- Deploy solutions that genuinely help employees first
- Build internal champions through positive early experiences
- Scale gradually to maintain trust and adoption

Key Insights
- Employee Experience Drives Customer Experience: AI solutions that improve employee satisfaction often lead to better customer outcomes
- Observability is Critical: Comprehensive monitoring and quality management become essential as AI systems gain autonomy
- Outcomes Over Technology: Success comes from focusing on business results rather than being enamored with the latest AI capabilities

About the Guest
David Singer is the Global Vice President of Go-to-Market Strategy at Verint, where he focuses on delivering AI-powered outcomes for customer experience automation. Verint has been incorporating AI into their platform for over a decade, evolving from call recording and workforce management to comprehensive CX automation solutions.

You can follow David here: https://www.linkedin.com/in/dwsinger/
You can follow Maribel here:

Closing Thoughts
Singer emphasizes two crucial points for organizations embarking on AI initiatives:
1. Avoid spending significant resources on new technology only to use it exactly as you did before
2. Always start with outcomes first - let business goals drive vendor selection, implementation strategy, and change management approaches
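Singer's point about tapering oversight from constant monitoring to spot checks is easy to make concrete. Here is a minimal sketch of such a review policy; the thresholds, rates, and function names are invented for illustration and are not Verint's implementation:

```python
import random

def review_rate(validated_accuracy: float) -> float:
    """Fraction of an AI agent's outputs routed to a human reviewer.

    Illustrative policy: full human-in-the-loop review until the agent
    proves itself, then progressively lighter spot checks.
    """
    if validated_accuracy < 0.90:
        return 1.00   # constant monitoring: every interaction reviewed
    if validated_accuracy < 0.97:
        return 0.25   # heavy spot checks
    return 0.03       # light sampling, akin to classic 1-3% QM coverage

def needs_human_review(validated_accuracy: float) -> bool:
    """Randomly sample interactions at the current review rate."""
    return random.random() < review_rate(validated_accuracy)

print(review_rate(0.95))  # 0.25: an agent validated at 95% gets spot checks
```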

The Automation Podcast
PROFINET and System Redundancy (P244)

The Automation Podcast

Play Episode Listen Later Aug 13, 2025 45:13 Transcription Available


Shawn Tierney meets up with Tom Weingartner of PI (PROFIBUS & PROFINET International) to learn about PROFINET and System Redundancy in this episode of The Automation Podcast. For any links related to this episode, check out the “Show Notes” located below the video. Watch The Automation Podcast from The Automation Blog: Listen to The Automation Podcast from The Automation Blog: The Automation Podcast, Episode 244 Show Notes: Special thanks to Tom Weingartner for coming on the show, and to Siemens for sponsoring this episode so we could release it ad free on all platforms! To learn more about PROFINET, see the links below: PROFINET One-Day Training Slide Deck PROFINET One-Day Training Class Dates IO-Link Workshop Dates PROFINET University Certified Network Engineer Course Read the transcript on The Automation Blog: (automatically generated) Shawn Tierney (Host): Welcome back to the automation podcast. My name is Shawn Tierney from Insights In Automation, and I wanna thank you for tuning back in this week. Now on this show, I actually had the opportunity to sit down with Tom Weingartner from PI to learn all about PROFINET. I actually reached out to him because I had some product vendors who wanted me to cover the S2 features in their products, and I thought it’d be better to first actually sit down and get a refresh on what S2 is. It’s been five years since we’ve had a PROFINET expert on, so I figured now would be a good time before we start getting into how those features are used in different products. So with that said, I also wanna mention that Siemens has sponsored the episode, so it will be completely ad free. I love it when vendors sponsor the shows. Not only do we get to break even on the show itself, we also get to release it ad free and make the video free as well. So thank you, Siemens. If you see anybody from Siemens, thank them for sponsoring the Automation Podcast. As a matter of fact, thank any vendor who’s ever sponsored any of our shows. We really appreciate them. One final PSA that I wanna throw out there, and I talked about this yesterday on my show, Automation Tech Talk: as we’ve seen with the Ethernet PLCs we’re talking about, a lot of micro PLCs that were $250 ten years ago are now $400. Right? That’s a lot of inflation, right, for various reasons. Right? And so one of the things I did this summer is I took a look at my P and L, my profit and loss statements, and I just can’t hold my prices where they are and be profitable. Right? So if I’m not breaking even, the company goes out of business, and we’ll have no more episodes of the show. So how does this affect you? If you are a student over at the automation school, you have until mid September to do any upgrades or purchase any courses at the 2020 prices. Alright? So I don’t wanna raise the prices. I’ve tried as long as I can, but at some point, you have to give in to what the prices are that your vendors are charging you, and you have to raise the prices. So, all my courses are buy once, own them forever, so this does not affect anybody who’s enrolled in a course. Actually, all of you folks enrolled in my PLC courses, you’re seeing updates every week now. And those who get the ultimate bundles, you’re seeing new lessons added to the new courses because you get that preorder access plus some additional stuff.
So in any case, again, I wanna reiterate, if you’re a vendor who has an old balance or if you are a student who wants to buy a new course, please make your plans in the next couple of weeks, because in mid September I do have to raise the prices. So I just wanna throw that PSA out there. I know a lot of people don’t get to the end of the show. That’s why I wanted to do it at the beginning. So with that said, let’s jump right into this week’s podcast and learn all about PROFINET. I wanna welcome to the show Tom from PROFIBUS & PROFINET North America. Tom, I really wanna just thank you for coming on the show. I reached out to ask you to come on to talk to us about this topic. But before we jump in, could you first tell the audience a little bit about yourself? Tom Weingartner (PI): Yeah. Sure. Absolutely, Shawn. I’m gonna jump to the next slide then and let everyone know. As Shawn said, my name is Tom, Tom Weingartner, and I am the technical marketing director at PI North America. I have a fairly broad set of experiences ranging from ASIC hardware and software design, and then I’ve moved into things like avionic systems design. But it seemed like no matter what I was working on, it always centered around communication and control. That’s actually how I got into industrial Ethernet, and I branched out from protocols like MIL-STD-1553 and ARINC 429 to other serial-based protocols like PROFIBUS and Modbus. And, of course, that naturally led to PROFINET and the other Ethernet-based protocols. I also spent quite a few years developing time-sensitive networking solutions. But now I focus specifically on PROFINET and its related technologies. And so with that, I will jump into the presentation here. And now that you know a little bit about me, let me tell you a little bit about our organization. We are PROFIBUS and PROFINET International, or PI for short. We are the global organization that created PROFIBUS and PROFINET, and we continue to maintain and promote these open communication standards. The organization started back in 1989 with PROFIBUS, followed by PROFINET in the early two thousands. Next came IO-Link, a communication technology for the last meter, and that was followed by omlox, a communication technology for wireless location tracking. And now, most recently, MTP, or module type package. And this is a communication technology for easier, more flexible integration of process automation equipment. Now we have grown worldwide to 24 regional PI associations, 57 competence centers, eight test labs, and 31 training centers. It’s important to remember that we are a global organization because if you’re a global manufacturer, chances are there’s PROFINET support in the country in which you’re located, and you can get that support in the country’s native language. In the lower right part of the slide here, we are showing our technologies under the PI umbrella. And I really wanted to point out that all the technologies within the PI umbrella are supported by a set of working groups. These working groups are made up of participants from member companies, and they are the ones that actually create and update the various standards and specifications. Also, any of these working groups are open to any member company. So, PI North America is one of the 24 regional PI associations, and we were founded in 1994.
We are a nonprofit, member supported organization where we think globally and act locally. So here in North America, we are supported by our local competence centers, training centers, and test labs. Competence centers provide technical support for things like protocol, interoperability, and installation type questions. Training centers provide educational services for things like training courses and hands on lab work. And test labs are, well, just that. They are labs that provide testing services and device certification. So any member company can be any combination of these three. You can see here, if you’re looking at the slide, that the PROFI Interface Center is all three, while JCOM Automation is both a competence center and a training center. And here in North America, we are pleased to have HMS as a training center and Phoenix Contact also as a competence center. Now one thing I would like to point out to everyone, that you should be aware of, is that every PROFINET device must be certified. So if you make a PROFINET device, you need to go to a test lab to get it certified. And here in North America, you certify devices at the PROFI Interface Center. So I think it’s important to begin our discussion today by talking about the impact digital transformation has had on factory networks. There has been an explosion of devices in manufacturing facilities, and it’s not uncommon for car manufacturers to have over 50,000 Ethernet nodes in just one of their factories. Large production cells can have over a thousand Ethernet nodes in them. But the point is that all of these nodes increase the amount of traffic automation devices must handle. It’s not unrealistic for a device to have to deal with over 2,000 messages while it’s operating, while it’s trying to do its job. And emerging technologies like automated guided vehicles add a level of dynamics to the network architecture because they’re constantly entering and leaving various production cells located in different areas of the factory. And, of course, as these factories become more and more flexible, networks must support adding and removing devices while the factory is operating. And so in response to this digital transformation, we have gone from rigid hierarchical systems using field buses to industrial Ethernet based networks where any device can be connected to any other device. This means devices at the field level can be connected to devices at the process control level, the production level, even the operations level and above. But this doesn’t mean that the requirements for determinism, redundancy, safety, and security are any less on a converged network. It means you need to have a network technology that supports these requirements, and this is where PROFINET comes in. So to understand PROFINET, I think it’s instructive here to start with the OSI model, since the OSI model defines networking. And, of course, PROFINET is a networking technology. The OSI model is divided into seven layers, as I’m sure we are all familiar with by now, starting with the physical layer. And this is where we get access to the wire, turning electrical signals into bits. Layer two is the data link layer, and this is where we turn bits into bytes that make up an Ethernet frame. Layer three is the network layer, and this is where we turn Ethernet frames into IP packets.
So I like to think about Ethernet frames being switched around a local area network, and IP packets being routed around a wide area network like the Internet. And so the next layer up is the transport layer, and this is where we turn IP packets into TCP or UDP datagrams. These datagrams are used based on the type of connection needed to route IP packets. TCP datagrams are connection based, and UDP datagrams are connectionless. But, really, regardless of the type of connection, we typically go straight up to layer seven, the application layer. And this is where PROFINET lives, along with all the other Ethernet based protocols you may be familiar with, like HTTP, FTP, SNMP, and so on. So then what exactly is PROFINET, and what challenges is it trying to overcome? The most obvious challenge is environmental. We need to operate in a wide range of harsh environments, and, obviously, we need to be deterministic, meaning we need to guarantee data delivery. But we have to do this in the presence of IT traffic or non real time applications like web servers. We also can’t operate in a vacuum. We need to operate in a local area network and support getting data to wide area networks and up into the cloud. And so to overcome these challenges, PROFINET uses communication channels for speed and determinism. It uses standard unmodified Ethernet, so multiple protocols can coexist on the same wire. We didn’t have this with field buses. Right? It was one protocol, one wire. But most importantly, PROFINET is an OT protocol running at the application layer so that it can maintain real time data exchange, provide alarms and diagnostics to keep automation equipment running, and support topologies for reliable communication. So we can think of PROFINET as separating traffic into a real time channel and a non real time channel. Messages with a particular EtherType, which is actually 0x8892, though the number itself doesn’t matter, go into the real time channel; that’s where all PROFINET messages with that EtherType go. And any other EtherType goes into the non real time channel. So we use the non real time channel for acyclic data exchange, and we use the real time channel for cyclic data exchange. Cyclic data exchange with synchronization, we classify as time critical. And without synchronization, it is classified as real time. But, really, the point here is that this is how we can use the same standard unmodified Ethernet for PROFINET as we can for any other IT protocol. All messages living together, coexisting on the same wire. So we take this a step further here and look at the real time channel and the non real time channel, and these are combined together into a concept that we call an application relation. So think of an application relation as a network connection for doing both acyclic and cyclic data exchange, and we do this between controllers and devices. This network connection consists of three different types of information to be exchanged, and we call these types of information communication relations. So on the lower left part of the slide, you can see here that we have something called a record data communication relation, and it’s essentially the non real time channel for acyclic data exchange to pass information like configuration, security, and diagnostics.
The IO data communication relation is part of the real time channel for doing this cyclic data exchange that we need to do to periodically update controller and device IO data. And finally, we have the alarm communication relation. So this is also part of the real time channel, because it’s used for alerting the controller to device faults as soon as they occur or when they get resolved. Now on the right part of the slide, we can see some use cases for application relations, and these use cases are either a single application relation for controller to device communication, and we have an optional application relation here for doing dynamic reconfiguration. We also use an application relation for something we call shared device, and, of course, why we are here today talking about application relations is actually because of system redundancy. And so we’ll get into these use cases in more detail here in a moment. But first, I wanted to point out that when we talk about messages being non real time, real time, or time critical, what we’re really doing is specifying a level of network performance. Non real time performance has cycle times above one hundred milliseconds, but we also use this term to indicate that a message may have no cycle time at all. In other words, acyclic data exchange. Real time performance has cycle times in the one to ten millisecond range, but really that range can extend up to one hundred milliseconds. Time critical performance has cycle times less than a millisecond, and it’s not uncommon to have cycle times around two hundred and fifty microseconds or less. Most applications are either real time or non real time, while high performance applications are considered time critical. These applications use time synchronization to guarantee data arrives exactly when needed, but we also must ensure that the network is open to any Ethernet traffic. So in order to achieve time critical performance, and we do this for the most demanding applications like high speed motion control, what we did is we added four features to basic PROFINET, and we call this PROFINET Isochronous Real Time, or PROFINET IRT. These added features are synchronization, node arrival time, scheduling, and time critical domains. Now IRT has been around since 2004, but in the future, PROFINET will move to a new set of IEEE Ethernet standards called time sensitive networking, or TSN. PROFINET over TSN will actually have the same functionality and performance as PROFINET IRT, but will be able to scale to faster and faster networks as bandwidth increases. So this chart shows the differences between PROFINET RT, IRT, and TSN. And the main difference is, obviously, synchronization, and these other features that guarantee data arrives exactly when needed. Notice under the PROFINET IRT column here that the bandwidth for PROFINET IRT is 100 megabits per second, while the bandwidth for PROFINET RT and TSN is scalable. Also, for those device manufacturers out there looking to add PROFINET IRT to their products, there are lots of ASICs and other solutions available in the market with IRT capability. Alright. So let’s take a minute here to summarize all of this. We have a single infrastructure for doing real time data exchange along with non real time information exchange. PROFINET uses the same infrastructure as any Ethernet network.
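For readers who want to see that coexistence mechanism concretely: PROFINET's real time channel is selected purely by the EtherType field in the Ethernet header (0x8892), while every other EtherType flows through the non real time channel alongside ordinary IT traffic. Here is a minimal illustrative sketch of that classification; it assumes untagged Ethernet II frames (a VLAN tag would shift the EtherType offset) and is not production parsing code:

```python
PROFINET_RT_ETHERTYPE = 0x8892  # EtherType carried by PROFINET RT frames

def classify_frame(frame: bytes) -> str:
    """Classify an untagged Ethernet II frame by its EtherType field.

    Bytes 0-5: destination MAC, 6-11: source MAC, 12-13: EtherType.
    """
    if len(frame) < 14:
        raise ValueError("runt frame: shorter than an Ethernet header")
    ethertype = int.from_bytes(frame[12:14], "big")
    if ethertype == PROFINET_RT_ETHERTYPE:
        return "real-time channel (PROFINET cyclic IO and alarms)"
    return f"non-real-time channel (EtherType 0x{ethertype:04x})"

# A made-up frame: zeroed MACs, PROFINET EtherType, dummy payload.
frame = bytes(12) + (0x8892).to_bytes(2, "big") + b"cyclic IO data"
print(classify_frame(frame))  # -> real-time channel ...
```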
Machines that speak PROFINET do so using network connections called application relations, and these messages coexist with all other messages so information can pass from devices to machines, to factories, to the cloud, and back. And so if you take away nothing else from this podcast today, it is the word coexistence. PROFINET coexists with all other protocols on the wire. So let’s start talking a little bit here about the main topic, system redundancy, and why we got into talking about PROFINET at all. Right? I mean, why do we need system redundancy and things like application relations and dynamic reconfiguration? Well, it’s because one of the things we’re pretty proud of with PROFINET is not only the depth of its capabilities, but also the breadth of its capabilities. And with the lines blurring between what’s factory automation, what’s process automation, and what’s motion control, we are seeing all three types of automation appearing in a single installation. So we wanna make sure PROFINET meets requirements across the entire range of industrial automation. So let’s start out here by looking at the differences between process automation versus factory automation, and then we’ll get into the details. First off, process signals typically change slower, on the order of hundreds of milliseconds versus tens of milliseconds in factory automation. And process signals often need to travel longer distances and potentially into hazardous or explosive areas. Now with process plants operating twenty four seven, three sixty five, systems must provide high availability and support changes while the plant is in production. This is where system redundancy and dynamic reconfiguration come in. We’ll discuss these again here in just a minute. I just wanted to finish off this slide by saying that an e-stop is usually not possible because while you can turn off the automation, that’s not necessarily gonna stop the chemical reaction or whatever from proceeding. Sensors and actuators in process automation are also more complex. Typically, we call them field instruments. And process plants have many, many, many more IO, tens of thousands of IO, usually controlled by a DCS. And so when we talk about system redundancy, I actually like to call it scalable system redundancy because it isn’t just one thing. This is where we add components to the network for increasing the level of system availability. So there are four possibilities: S1, S2, and R1, R2. The letter indicates if there are single or redundant network access points, and the number indicates how many application relations are supported by each network access point. So think of the network access point as a physical interface to the network. And from our earlier discussion, think of an application relation as a network connection between a controller and a device. So S1 has single network access points. Right? So each device has a single network access point with one application relation connected to one controller. S2 is where we also have single network access points, but with two application relations now connected to different controllers. R1 is where we have redundant network access points, but each one of these redundant network access points only has one application relation, and those are connected to different controllers.
And finally, we could kinda go over the top here with R2, and here’s where we have redundant network access points with two application relations connected to different controllers. Shawn Tierney (Host): You know, I wanna just stop here and talk about S2. And for the people who are listening, which I know is about a quarter of you guys out there, think of S2 as: you have a primary controller and a secondary controller. If you’re seeing the screen, you can see I’m reading the slide. But you have your two primary and secondary controllers. Right? So you have one of each, and the primary controller has application relation one, and the secondary has application relation number two. And each device that’s connected on the Ethernet has both the one and two. So maybe you have a rack of IO out there. It needs to talk to both the primary controller and the secondary controller. And so to me, that is kinda like your classic redundant PLC system where you have two PLCs and you have a bunch of IO, and each piece of IO has to talk to both the primary and the secondary. So if the primary goes down, the secondary can take over. And so I think that’s why there’s so much interest in S2, because that kinda is that classic example. Now, Tom, let me turn it back to you. Would you say I’m right on that? Or Tom Weingartner (PI): Spot on. I mean, I think it’s great, really kinda emphasizing the point that there’s that one physical connection on the network access point, but now we have two connections in that physical access point there. Right? So you can then have one of those connections go to the primary controller and the other one to the secondary controller. And in case one of those controllers fails, the device still can get the information it needs. So, yep, that’s how we do that. And just a little bit finer point on R1: if you think about it, it’s S2, but now all we’ve done is we’ve split the physical interface. So one of the physical interfaces has one of the connections, and the other physical interface has the other connection. So you really kinda have the same level of redundant functionality here, backup functionality with the secondary controller, but here you’re using multiple physical interfaces. Shawn Tierney (Host): Now let me ask you about that. So as I look at R1, right, it seems like they connect port, I’ll just call it port one, on each device to switch number one, which in this case would be the green switch, and port number two of each device to switch number two, which is the blue switch. Would that be typical, to have a different switch for each port? Tom Weingartner (PI): It doesn’t have to be. Right? I think we chose to show it like this for simplicity kinda to Shawn Tierney (Host): Oh, I don’t care. Tom Weingartner (PI): Emphasize the point that, okay, here’s the second port going to the secondary controller, here’s the first port going to the primary controller. And we just wanted to emphasize that point. Because sometimes these diagrams can be a bit confusing. And you Shawn Tierney (Host): may have an application that doesn’t require redundant switches depending on maybe the MTBF of the switch itself or your failure mode on your IO. Okay. I’m with you. Go ahead. Tom Weingartner (PI): Yep. Yep. Good. Good. Good. Alright. So, I think that’s some excellent detail on that.
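To pin down the naming scheme Shawn and Tom just walked through: the letter says whether a device has one or two network access points, and the number says how many application relations each access point carries. A tiny illustrative sketch (the function and its names are ours, not from the PROFINET specification):

```python
def redundancy_class(naps: int, ars_per_nap: int) -> str:
    """Derive the PROFINET system redundancy label.

    naps:        network access points per device (1 = single, 2 = redundant)
    ars_per_nap: application relations per access point (1 or 2)
    """
    if naps not in (1, 2) or ars_per_nap not in (1, 2):
        raise ValueError("PROFINET defines only S1, S2, R1, and R2")
    letter = "S" if naps == 1 else "R"
    return f"{letter}{ars_per_nap}"

assert redundancy_class(1, 1) == "S1"  # one port, one controller
assert redundancy_class(1, 2) == "S2"  # one port, ARs to primary and backup
assert redundancy_class(2, 1) == "R1"  # two ports, one AR each
assert redundancy_class(2, 2) == "R2"  # two ports, two ARs each
```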
And so, if you wouldn’t mind or don’t have any other questions, let’s move on to the next slide. So you can see in that previous slide how system redundancy supports high availability by increasing system availability using these network access points and application relations. But we can also support high availability by using network redundancy. And the way PROFINET supports network redundancy is through the use of ring topologies, and we call this media redundancy. The reason we use rings is because if a cable breaks, or the physical connection somehow breaks, or even a device fails, the network can revert back to a line topology, keeping the system operational. However, supporting network redundancy with rings means we can’t use protocols typically used in IT networks like STP and RSTP. And this is because STP and RSTP actually prevent network redundancy by blocking redundant paths in order to keep frames from circulating forever in the network. And so in order for PROFINET to support rings, we need a way to prevent frames from circulating forever in the network. And to do this, we use a protocol called the Media Redundancy Protocol, or MRP. MRP uses one media redundancy manager for each ring, and the rest of the devices are called media redundancy clients. Managers are typically controllers or PROFINET switches, and clients are typically the devices in the network. So the way it works is this. A manager periodically sends test frames around the network here to check the integrity of the ring. If the manager doesn’t get the test frame back, there’s a failure somewhere in the ring. And so the manager then notifies the clients about this failure, and then the manager sets the network to operate as a line topology until the failure is repaired. Right? And so that’s how we can get network redundancy with our Media Redundancy Protocol. Alright. So now you can see how system redundancy and media redundancy both support high availability. System redundancy does this by increasing system availability, while media redundancy does this by increasing network availability. Obviously, you can use one without the other, but by combining system redundancy and media redundancy, we can increase the overall system reliability. For example, here we are showing different topologies for S1 and S2, and these are similar to the topologies that were on the previous slide. So, if you notice here, for S1 we can only have media redundancy, because there isn’t a secondary controller to provide system redundancy. S2 is where we combine system redundancy and media redundancy by adding an MRP ring. But I wanted to point out here that even though we’re showing this MRP ring as a possible topology, there really are other topologies possible. It really depends on the level of system reliability you’re trying to achieve. And so, likewise, on this next slide here, we are showing two topologies for adding media redundancy to R1 and R2. And so for R1, we’ve chosen, again, probably for simplicity’s sake, to add an MRP ring for each redundant network access point. For R2, we do the same thing here. We also have an MRP ring for each redundant network access point, but we also add a third MRP ring for the controllers.
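The MRP mechanism Tom describes can be caricatured in a few lines: the manager keeps one of its two ring ports blocked while its test frames make it all the way around, and unblocks that port so the ring degrades to a working line when they stop coming back. The sketch below is purely illustrative; real MRP also involves watchdog timers and topology change frames that are omitted here:

```python
class MediaRedundancyManager:
    """Toy model of an MRP manager supervising a ring of clients."""

    def __init__(self, clients: list[dict]):
        self.clients = clients
        self.backup_port_blocked = True  # normal state: ring logically 'cut'

    def ring_intact(self) -> bool:
        # Stand-in for sending a test frame and seeing it return.
        return all(client["link_ok"] for client in self.clients)

    def poll(self) -> None:
        if self.ring_intact():
            self.backup_port_blocked = True    # keep the loop-free ring
        elif self.backup_port_blocked:
            self.backup_port_blocked = False   # unblock: ring becomes a line
            print("MRP: ring open, operating as line topology until repaired")

mgr = MediaRedundancyManager([{"link_ok": True}, {"link_ok": False}])
mgr.poll()  # reports the fallback to a line topology
```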
Now this is really just to try to emphasize the point that you can really come up with just about any topology possible, but it really depends on the number of ports on each device and the number of switches in the network and, again, your overall system reliability requirements. So in order to keep process plants operating twenty four seven, three sixty five, dynamic reconfiguration is another use case for application relations. And so this is where we can add or remove devices on the fly while the plant is in production. Because if you think about it, typically, when there is a new configuration for the PLC, the PLC first has to go into stop mode. It needs to then receive the configuration, and then it can go back into run mode. Well, this doesn’t work in process automation because we’re trying to operate twenty four seven, three sixty five. So with dynamic reconfiguration, the controller continues operating with its current application relation while it sets up a new application relation. Right? I mean, again, it’s really trying to get this new network connection established. So the controller then switches over to the new application relation after the new configuration is validated. Once we have this validation and the configuration’s good, the controller removes the old application relation and continues operating, all while staying in run mode. Pretty handy stuff here for supporting high availability. Now one last topic regarding system redundancy and dynamic reconfiguration, because these two PROFINET capabilities are compatible with a new technology called single pair Ethernet, and this provides power and data over just two wires. This version of Ethernet is now part of the IEEE 802.3 standard, referred to as 10BASE-T1L. So 10BASE-T1L is the non-intrinsically-safe version of two wire Ethernet. To support intrinsic safety, 10BASE-T1L was enhanced by an additional standard called Ethernet-APL, or Advanced Physical Layer. So when we combine PROFINET with this Ethernet-APL version of 10BASE-T1L, we simply call it PROFINET over APL. It not only provides power and data over the same two wires, but also supports long cable runs up to a kilometer, 10 megabit per second communication speeds, and can be used in all hazardous areas. So intrinsic safety is achieved by ensuring both the Ethernet signals and power on the wire are within explosion safe levels. And even with all this, system redundancy and dynamic reconfiguration work seamlessly with this new technology we call PROFINET over APL. Now one thing I’d like to close with here is a final thought regarding a new technology I think everyone should become aware of. I mean, it’s emerging in the market. It’s quite new, and it’s a technology called MTP, or module type package. And so this is a technology being applied first here to use cases considered to be a hybrid of both process automation and factory automation. So what MTP does is it applies OPC UA information models to create standardized, non-proprietary, application-level descriptions for automation equipment. And so what these descriptions do is they simplify the communication between equipment and the control system, and it does this by modularizing the process into more manageable pieces. So really, the point is to construct a factory with modular equipment to simplify integration and allow for better flexibility should changes be required.
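The dynamic reconfiguration sequence Tom described a moment ago is essentially a make-before-break swap: build the new application relation alongside the old one, validate it, switch over, then retire the old one, never leaving run mode. The stub below is schematic only; every class and method name is hypothetical rather than taken from the PROFINET specification:

```python
class Controller:
    """Minimal stand-in for a PROFINET controller (names invented)."""

    def __init__(self):
        self.active_ar = "AR-old"

    def establish_ar(self, config: str) -> str:
        return f"AR-new({config})"   # old AR keeps exchanging IO data

    def validate(self, ar: str) -> bool:
        return True                  # e.g. the new AR came up clean

    def activate(self, ar: str) -> None:
        self.active_ar = ar

    def teardown(self, ar: str) -> None:
        pass

def dynamic_reconfigure(ctrl: Controller, new_config: str) -> str:
    """Make-before-break AR swap; the controller stays in run mode."""
    old_ar = ctrl.active_ar
    new_ar = ctrl.establish_ar(new_config)  # set up alongside the old AR
    if not ctrl.validate(new_ar):
        ctrl.teardown(new_ar)                # abort; old AR still active
        return old_ar
    ctrl.activate(new_ar)                    # switch over
    ctrl.teardown(old_ar)                    # retire the old AR
    return new_ar

print(dynamic_reconfigure(Controller(), "device added on the fly"))
```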
Now with the help of the process orchestration layer and this OPC UA connectivity, MTP enabled equipment can plug and operate, reducing the time to commission a process or make changes to that process. This is pretty cutting edge stuff. I think you’re gonna find and hear a lot more about MTP in the near future. Alright. So it’s time to wrap things up with a summary of all the resources you can use to learn even more about PROFINET. One of the things you can do here is you can get access to the PROFINET one day training class slide deck by going to profinet2025.com, entering your email, and downloading the slides in PDF format. And what’s really handy is that all of the links in the PDF are live, so information is just a click away. We also have our website, us.profinet.com. It has white papers, application stories, webinars, and documentation, including access to all of the standards and specifications. This is truly your one stop shop for locating everything about PROFINET. Now we do our PROFINET one day training classes and IO-Link workshops all over the US and parts of Canada. So if you are interested in attending one of these, you can always find the next city we are going to by clicking on the training links at the bottom of the slide. Shawn Tierney (Host): Hey, guys. Shawn here. I just wanted to jump in for a minute for the audio audience to give you that website. It’s us.profinet.com/odtc, or oscar delta tango charlie. So that’s the website. And I also went and pulled up the website, which if you’re watching, you can see here. But for those listening, these one day PROFINET courses are coming to Phoenix, Arizona, August 26, Minneapolis, Minnesota, September 10, Newark and New York City, September 25, Greenville, South Carolina, October 7, Detroit, Michigan, October 23, Portland, Oregon, November 4, and Houston, Texas, November 18. So with that said, let’s jump back into the show. Tom Weingartner (PI): Alright, one of our most popular resources is PROFINET University. This website structures information into little courses, and you can proceed through them at your own pace. You can go lesson by lesson, or you can jump around. You can even decide which course to take based on a difficulty tag. Definitely make sure to check out this resource. We do have lots of great webinars, and they’re archived on the website. Now some of these webinars rehash what we covered today, but in other cases, they expand on what we covered today. But in either case, make sure you share these webinars with your colleagues, especially if they’re interested in any one of the topics that we have listed on the slide. And finally, the certified network engineer course is the next logical step if you would like to dive deeper into the technical details of PROFINET. It is a week long, in Johnson City, Tennessee, and it features hands on lab work. And if you would like us to provide training to eight or more students, we can even come to your site. If you would like more details about any of this, please head to the website to learn more. And with that, Shawn, I think that is my last slide, and we’ve covered the topics that I think we wanted to cover today. Shawn Tierney (Host): Yeah. And I just wanna point out to you guys, this training goes out all around the US. I definitely recommend getting up there. If you’re using PROFINET and you wanna get some training, they usually fill the room, like, you know, 50 to a 100 people. And, you know, they do this every year.
So check those dates out. If you need to get some hands on with PROFINET, I would definitely check those out. And, of course, we’ll have all the links in the description. I also wanna thank Tom for that slide really defining S1 versus S2 versus R1 and R2. You know, a lot of people say we have S2 compatibility. As a matter of fact, we’re gonna be looking at some products that have S2 compatibility here in the future. And, you know, just trying to understand what that means. Right? You know, when somebody just says S2, it’s like, what does that mean? So for you guys listening, I thought that slide really kinda lays it out, kinda gives you, like, alright, this is what it means. And so from my perspective, that’s like you’re supporting redundant controllers. Right? And so if you have an S2 setup of redundant Siemens controllers or CPUs, then that product will support that. And that’s important. Right? Because if you had a product that didn’t support it, it’s not gonna work with your application. So I thought that, and the Ethernet APL, is such a big deal in process because, you know, the distance, right, and the fact that it’s intrinsically safe and supports all those zones and areas and whatnot. And everybody, all the instrumentation people, are all over it. Right? The Rosemounts, the Fishers, the Endress+Hausers, everybody is on that working group. We’ve covered that on the news show many times, and it’s just very interesting to see where that goes, but I think it’s gonna take over that part of the industry. So, but, Tom, was there anything else you want to cover in today’s show? Tom Weingartner (PI): No. I think that really puts a fine finale on this here. I did want to maybe emphasize that point about network redundancy being compatible with system redundancy. So, you know, you can really hone in on what your system reliability requirements are. And also, with this PROFINET over APL piece of it, it’s completely compatible with PROFINET in and of itself. And, also, you don’t have to worry about it not supporting system redundancy or anything of the like, whether, you know, you wanted to get even redundant devices out there. So that’s, I think that’s about it. Shawn Tierney (Host): Alright. Well, again, thank you so much for coming on. We look forward to trying out some of these S2 PROFINET devices in the near future. But with that, I really wanted to have you on first to kinda lay the groundwork for us, and I really appreciate it. Tom Weingartner (PI): No problem. Thank you for having me. Shawn Tierney (Host): Well, I hope you guys enjoyed that episode. I did. I enjoyed sitting down with Tom, getting up to date on all those different products, and it’s great to know they have all these free hands on training days coming across the United States. And, you know, what a great refresher from the original 2020 presentation that we had somebody from Siemens do. So I really appreciate Tom coming on. And speaking of Siemens, I’m so thankful they sponsored this episode so we could release it ad free and make the video free to everybody. Please, if you see Siemens or any of the vendors who sponsor our episodes, please tell them thank you from us. It really helps us keep the show going.
Speaking of keeping the show going, just a reminder, if you’re a student or a vendor, price increases will hit mid September. So if you’re a student and you wanna buy another course, now is the time to do it. If you’re a vendor and you have an existing balance, you will want to schedule those podcasts before mid September, or else you’ll be subject to the price increase. So with that said, I also wanna remind you I have a new podcast, Automation Tech Talk. I’m reusing the old Automation News Headlines podcast feed, so if you already subscribed to that, you’re just gonna get the new show for free. It’s also on the automation blog, on YouTube, on LinkedIn. So I’m doing it as a live stream every lunchtime, just talking about what I learned in that last week, you know, little tidbits here and there. And I wanna hear from you guys too. As a matter of fact, I already had Giovanni come on and do an interview with me. So at one point, I’ll schedule that as a lunchtime podcast for Automation Tech Talk. Again, it still shows up as Automation News Headlines, I think. So at some point, I’ll have to find time to edit that to change the name. But in any case, with that, I think I’ve covered everything. I wanna thank you guys for tuning in. Really appreciate you. You’re the best audience in the podcast world or the video world, you know, whatever you wanna look at it as, but I really appreciate you all. Please feel free to send me emails, write to me, leave comments. I love to hear from you guys, and I just wanna wish you all good health and happiness. And until next time, my friends, peace. Until next time, Peace ✌️  If you enjoyed this content, please give it a Like, and consider Sharing a link to it as that is the best way for us to grow our audience, which in turn allows us to produce more content

Riding Unicorns
Scaling from startup to £70M+ funding in 16 months with Paul Anthony, Co-founder of Primer & Colossal

Riding Unicorns

Play Episode Listen Later Aug 13, 2025 55:58


Paul Anthony is the co-founder of Primer, a rapidly growing payment orchestration platform with over 200 employees and more than £70 million in funding from top-tier investors including Seedcamp, Balderton, Accel, and ICONIQ. His journey began as the first employee at a FinTech startup (now called Depay), followed by a stint at PayPal's Braintree division, before founding Primer to solve the complex payment infrastructure challenges he witnessed amongst enterprise merchants. Paul is now also a co-founder of Colossal, an innovative new venture that leverages AI and LLMs to help digital goods creators build customised commerce experiences through a prompt-based interface.

Key Topics Discussed:
- The Payment Orchestration Insight - How Paul's experience at Braintree meeting enterprise merchants face-to-face revealed the need for a unified payment infrastructure layer that didn't exist in the market
- Hypergrowth Challenges - Scaling Primer from a three-person team to Series B funding (nearly half-billion valuation) within just 16 months, whilst building a robust enterprise product during COVID
- Hiring Philosophy and Culture - Paul's approach of interviewing 20-30 people for every hire, treating "autonomy as a requirement, not a benefit," and maintaining a "we're not a real business yet" mentality to drive innovation
- Product Development Approach - The importance of building POCs and technical spikes to understand how products "feel" rather than just look good on paper, especially when serving enterprise customers
- The Colossal Vision - Paul's new venture described as "Lovable for commerce," using AI to help creators build sophisticated customer journeys without technical knowledge, targeting the £400 billion digital goods market

Tune in to hear Paul's fascinating insights on building payment infrastructure for enterprise clients, navigating hypergrowth whilst maintaining product focus, and his bold new vision for democratising commerce through AI. This episode offers invaluable lessons for founders tackling complex technical problems and scaling rapidly in competitive markets.

HFS PODCASTS
Unfiltered Stories | From POCs to Real Results: AI in Finance with IBM Consulting

HFS PODCASTS

Play Episode Listen Later Aug 8, 2025 17:37


Why is enterprise AI still stuck in pilot mode? And what does it take to actually scale? In this episode of HFS Unfiltered Stories, Saurabh Gupta, President, Research and Advisory Services at HFS Research, is joined by Khalid Siddiqui, Global Finance Operations Offering Leader at IBM Consulting, for a no-fluff conversation about how to move from AI aspiration to enterprise-wide activation. Khalid unpacks real-world examples of where AI—and more importantly, agentic AI—has been successfully deployed to improve cash flow, reduce processing time, and create orchestration across systems.

They go beyond the hype to explore:
- Why AI is "dying a death by a thousand POCs"—and how to avoid it
- A building materials client's journey from 30-minute query resolution to 2-minute outcomes using AI agents
- The critical role of data readiness and system simplification in AI scalability
- How agentic AI and AI fabric are orchestrating finance operations end-to-end
- The hidden cost of AI adoption: data debt, process debt, cultural resistance
- Tangible business results: improving DSO, boosting collections, and reducing controller workload across 50+ markets

Tired of AI pilots that go nowhere? Watch this candid conversation to learn what it takes to make AI work at scale—and how you can get there faster. Tune in now to start turning your AI ambitions into enterprise-wide impact.

Lenny's Podcast: Product | Growth | Career
Pricing your AI product: Lessons from 400+ companies and 50 unicorns | Madhavan Ramanujam

Lenny's Podcast: Product | Growth | Career

Play Episode Listen Later Jul 27, 2025 71:43


Madhavan Ramanujam is the world's foremost expert on pricing and monetization strategy. As managing partner at Simon-Kucher, he helped over 250 companies, including 30 unicorns, architect their pricing strategies. He's the author of the definitive book on pricing, Monetizing Innovation. Now he's back with a sequel, Scaling Innovation, which reveals how to build enduring businesses by dominating both market share and wallet share. He recently left Simon-Kucher to launch his own fund, 49 Palms, focused on helping early-stage AI companies.

In this conversation, we discuss:
1. The 2x2 framework that identifies your optimal pricing model
2. Why AI companies can capture 25% to 50% of value created, vs. 10% to 20% for traditional SaaS products
3. Why popular AI coding tools may have already doomed themselves with underpricing
4. The "give-and-get" framework top negotiators use to extract maximum value from every deal
5. The negotiation strategy that helped one founder 4x their deal size overnight
6. How to frame POCs as "business case creation" instead of technical demos (and why this changes everything)
7. Why AI companies must get monetization right from day one—not "figure it out later"
8. How companies like Intercom's Fin and Sierra pioneered outcome-based pricing (charging $0.99 per AI resolution)
9. The single question that reveals if your pricing is too complex

Brought to you by:
Enterpret—Transform customer feedback into product growth: https://enterpret.com/lenny
DX—A platform for measuring and improving developer productivity: https://getdx.com/lenny
Persona—A global leader in digital identity verification: https://withpersona.com/lenny

Transcript: https://www.lennysnewsletter.com/p/pricing-and-scaling-your-ai-product-madhavan-ramanujam

My biggest takeaways (for paid newsletter subscribers): https://www.lennysnewsletter.com/i/168109183/my-biggest-takeaways-from-this-conversation

Where to find Madhavan Ramanujam:
• X: https://x.com/madhavansf
• LinkedIn: https://www.linkedin.com/in/madhavansf/
• Promo email for Scaling Innovation: promo@49palmsvc.com — If you're purchasing more than five copies, send a screenshot of your receipt to enter Madhavan's exclusive bundle raffle.

Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

In this episode, we cover:
(00:00) Introduction to Madhavan and his work
(04:30) The core thesis of Scaling Innovation
(09:20) Common traps founders fall into
(12:06) Beautifully simple pricing
(15:00) Mastering negotiations
(26:51) Other strategies for effective pricing and monetization
(27:35) How AI pricing is different
(31:33) Handling POCs
(36:25) The importance of mastering monetization
(38:58) Choosing the right AI pricing model
(43:13) Current trends in AI pricing
(44:48) Strategizing for outcome-based models
(50:23) Packaging strategies for scaling
(51:37) Adapting pricing strategies over time
(53:40) Key axioms for pricing success
(58:00) Takeaways for founders
(01:01:33) Lightning round and final thoughts

Referenced:
• The art and science of pricing | Madhavan Ramanujam (Monetizing Innovation, Simon-Kucher): https://www.lennysnewsletter.com/p/the-art-and-science-of-pricing-madhavan
• Cursor: https://www.cursor.com/
• The rise of Cursor: The $300M ARR AI tool that engineers can't stop using | Michael Truell (co-founder and CEO): https://www.lennysnewsletter.com/p/the-rise-of-cursor-michael-truell
• Sierra Finn: http://www.sierrafinn.com/
• Chargeflow: https://www.chargeflow.io/
• GitHub: https://github.com/
• Intercom: https://www.intercom.com/
• Warren Buffett's quote: https://www.goodreads.com/quotes/11478913-if-you-ve-got-the-power-to-raise-prices-without-losing
• Sierra: https://sierra.ai/
• Clay Bavor on LinkedIn: https://www.linkedin.com/in/claybavor/
• Mission: Impossible—The Final Reckoning: https://www.imdb.com/title/tt9603208/
• Delphi: https://www.delphi.ai/
• Dara Ladjevardian on LinkedIn: https://www.linkedin.com/in/dara-ladjevardian/
• Sam Spelsberg on LinkedIn: https://www.linkedin.com/in/samuel-spelsberg/
• Lennybot: https://www.lennybot.com/
• Granola: https://www.granola.ai/
• Simon-Kucher: https://www.simon-kucher.com/
• Josh Bloom on LinkedIn: https://www.linkedin.com/in/joshuabloompricingconsulting/

Recommended books:
• Monetizing Innovation: How Smart Companies Design the Product Around the Price: https://www.amazon.com/Monetizing-Innovation-Companies-Design-Product/dp/1119240867
• Scaling Innovation: How Smart Companies Architect Profitable Growth: https://www.amazon.com/dp/1119633060
• Business Model Generation: A Handbook for Visionaries, Game Changers, and Challengers: https://www.amazon.com/Business-Model-Generation-Visionaries-Challengers/dp/0470876417
• Thinking Fast and Slow: https://www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/0374533555/
• Contagious: Why Things Catch On: https://www.amazon.com/Contagious-Things-Catch-Jonah-Berger/dp/1451686587/

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.

Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com
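The outcome-based model in point 8 above rewards a quick back-of-the-envelope check. In the sketch below, only the $0.99-per-resolution price comes from the episode summary; the ticket volume, resolution rate, and per-seat comparison are invented to show the mechanics:

```python
PRICE_PER_RESOLUTION = 0.99  # the per-AI-resolution price cited above

def outcome_based_bill(tickets: int, ai_resolution_rate: float) -> float:
    """Customer pays only for tickets the AI actually resolves."""
    return tickets * ai_resolution_rate * PRICE_PER_RESOLUTION

# Hypothetical support org: 50,000 tickets/month, AI resolves 60% of them.
monthly = outcome_based_bill(50_000, 0.60)
print(f"${monthly:,.2f} per month")  # $29,700.00 per month

# Contrast: 40 seats at a made-up $150/seat is a flat $6,000/month,
# decoupled from the outcomes (resolutions) actually delivered.
```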

The Tech Trek
AI Isn't the Threat. It's the Upgrade

The Tech Trek

Play Episode Listen Later Jul 16, 2025 25:06


What does the “long tail” of AI really look like in a highly regulated industry? In this episode, Dave Wollenberg, VP of Enterprise Data & Analytics at Scan, breaks it down. From cautious experimentation to enabling non-technical users to build AI-driven POCs, Dave shares a grounded, practical perspective on AI adoption inside a Medicare Advantage organization.

You'll hear why the real transformation isn't just technical—it's cultural. We talk about how to shift employee mindsets, educate business teams, and unlock self-service analytics while staying compliant. If you're a tech or data leader trying to separate hype from real value, this one's for you.

Key Takeaways:
- The long tail of AI means rethinking roles—not just automating tasks
- Real AI enablement starts with data quality, governance, and semantic clarity
- Non-technical employees can (and should) build AI proof-of-concepts
- Change management will make or break your AI strategy
- In regulated industries, open source and secure deployment models matter

Timestamped Highlights:
00:55 – What Scan Health Plan does and why AI matters in healthcare
03:10 – From machine learning to generative AI: how use cases have evolved
08:15 – Three types of business users and how to upskill them for AI
12:40 – Shifting expectations: stakeholders want AI-powered insights, now
15:20 – Why self-service BI still falls short without a solid data foundation
18:35 – AI adoption isn't just IT's job—business users need to lead too
22:15 – Navigating AI in regulated industries: risks, rules, and realities

Quote of the Episode:
"It's not as if there's a certain amount of work in the world, and if AI takes some, there's nothing left to do. When you make people more powerful, they add more value—and you want more of them, not fewer."

Pro Tips:
- Host internal hackathons to build excitement and break down resistance
- Use sandbox environments to safely encourage experimentation
- Don't wait for technical users—give your business teams the tools to try

Call to Action:
Like what you heard? Share this episode with someone exploring AI adoption in their org. Subscribe to The Tech Trek for more candid conversations with tech leaders on building, scaling, and leading through change.

The Automation Podcast
Emerson Dust Collector Monitoring & Control Solution (P241)

The Automation Podcast

Play Episode Listen Later Jul 16, 2025 65:41 Transcription Available


Shawn Tierney meets up with Eugenio Silva of Emerson to learn all about Dust Collection Systems, and Emerson’s Monitoring and Control Solution, in this episode of The Automation Podcast. For any links related to this episode, check out the “Show Notes” located below the video. Watch The Automation Podcast from The Automation Blog: Note: This episode was not sponsored so the video edition is a “member only” perk. The below audio edition (also available on major podcasting platforms) is available to the public and supported by ads. To learn more about our membership/supporter options and benefits, click here. Listen to The Automation Podcast from The Automation Blog: Read the transcript on The Automation Blog: (automatically generated) Shawn Tierney (host): Welcome back to the automation podcast. My name is Shawn from Insights In Automation, and I wanna thank you for tuning back in. Now in this episode, I had the pleasure of meeting up with Eugenio Silva from Emerson to learn all about the industrial control and monitoring system that comes with their industrial dust collectors. Now I thought it was very interesting. I hope you do as well. But before we jump into this episode, I do wanna thank our members who made the video edition possible. So when a vendor doesn’t sponsor the episode, the video becomes a member only perk, and that is just $5 a month to get started. So thank you members for making the video edition possible. With that, I also wanna thank our sponsor for this week’s show, the automationschool.com and the automationblog.com. I have an update later in the show on what’s going on with both sites, and I hope you’ll stick around and listen to that towards the end of the show. But with that said, let’s go ahead and jump into this week’s episode of the automation podcast. It is my pleasure to welcome Emerson back on the show, and Eugenio on the show, to talk about dust collector monitoring. You guys can see the slide if you’re watching: dust collector monitoring and control solutions. I’m excited about this because this is a solution versus, like, a discrete product. So with that said, Eugenio, would you please introduce yourself to our audience? Eugenio Silva (Emerson): Yes. Shawn, thank you very much for this opportunity. Hello, everyone. Here’s Eugenio Silva. I’m a product manager, intelligent automation, within Emerson, the discrete automation part of Emerson. I’m glad today to share some of our understanding and learnings with the dust collector monitoring and control solution. And, when I talk about that, Emerson is also involved in other types of solutions; our purpose is to drive innovation that makes the world healthier, safer, smarter, and more sustainable. And I’m also responsible for continuous emission monitoring, dust collectors being one, and utility, energy, and compressed air management solutions. So for today, I prepared something where we go a little bit into why this type of dust collector solution is important, from our customers’ and the industry’s point of view. We’re going to look into the fundamentals of dust collection, from the particle sensors to the dust collector systems, and then dive into the dust collector solution, where I’m going to show you some features, explain why they are there, and how these kinds of capabilities deliver value to our end users and customers. And, hopefully, we’ll have time as well for a short recorded demo that brings the full scope of how the operators look into that solution when they use it.
Shawn Tierney (host): But before we jump in, I wanna thank theautomationschool.com for sponsoring this episode of the show. That’s where you’ll find all of my online courses on Allen-Bradley and Siemens PLCs and HMIs. So if you know anybody who needs to get up to speed on those products, please mention theautomationschool.com to them. And now let’s jump back into the show. Eugenio Silva (Emerson): In terms of key applications, industries, and use cases: dust collectors are essential for many industries that produce dust, any kind of powder, any kind of fume. Typically air pollution control, powder processing and handling, industrial dust, and fume ventilation are covered one way or another by dust collectors. And the industries that I put in bold, these are the dirty ones, in the sense that they produce a lot of particles, either as gases or dust. Therefore, the regulations in these industries are quite strong: cement, metals, chemicals, plus carbon black and toner, lithium battery assembly and disassembly, metal foundries. And what is interesting is that you either produce a waste that you have to manage properly, or it can be recycled; for example, in industries like plastics, food, or wood, all the collected dust can be reused and sometimes recycled. But why? Why is it important to extract dust in these industries? Let’s start on the right side, because this is what the customer is looking for. The cost of air pollution, the hazards, and the safety accidents that can be caused by these kinds of harmful airborne particles and fumes are so substantial that, of course, it’s heavily regulated in all these industries. If you calculate the costs to public health, and sometimes big accidents in plants, even big fires, or hazards to the people operating the plant, we’re talking about billions per year. And one of the consequences of having such issues comes when the dust extraction system is not working properly or you really have a downtime. I’m going to explain that this really depends on components that are used so often that they wear down, like filters and pulse valves. And each time we have a downtime, it’s not the cost of the dust collector downtime that’s important; it’s the overall downtime cost imposed on the operation of the plant, because in order to stay compliant, they have to stop operating until they fix the issue. These downtimes arise in many ways, depending on how complex the dust collector is. But I can give you some insight: if a dust collector system doesn’t have any solution for real-time monitoring or control of its efficiency, the personnel are basically managing these assets without any visibility, and everything can go wrong. That’s why the TCO and the maintenance aspects are quite important. If you’re not aware of where the problem is when you have to plan, this becomes firefighting, a reactive mode, and your costs are going to be quite high. And when we talk about the TCO, it’s about the cost of the equipment (the acquisition), the cost of operation, meaning not only the personnel but, in this case, a lot of compressed air (I’m going to explain why), the maintenance costs, as we explained, and the disposal costs.
Disposal means the filter bags that must be replaced and changed, but also the dust, the fumes: all the elements that must be properly managed and sometimes recycled. So those are the aspects of why it’s important. Now let’s turn to the benefits and savings. If you use a dust collector solution of any kind that can monitor in real time all aspects of the operation of a dust collector system, and that also helps turn maintenance from reactive to preventive and maybe predictive, then the best thing you can do is avoid huge penalties. As you can see on this graph, the fines are getting steeper every decade, and the reason is that the damage resulting from a big dust-related issue at a plant is quite heavy. So we’re talking about $100k or more in some industries, like primary metals and chemicals, where one single incident averages about $100k or more. And to avoid that and be completely compliant, you have to operate these systems, in many cases, 24/7. Therefore, any possible way to reduce downtime, and, as a plus, reduce energy costs (because compressed air takes electricity), pays off, because you’re going to be compliant full time. The other thing: if you properly monitor and control your dust collector system, you also increase the filtration efficiency. That means you stay far from the high levels beyond which you would be penalized. You can operate in compliance, but you can also extend the equipment life. For example, the filter bags and the pulse valves don’t have to be replaced as often, which is the case if you don’t do any real-time monitoring and diagnostics. On the left side, the way we talk about improving maintenance is the total cost. When we talk about filter life, one filter unit is about $18k US, and you see that the tip of the iceberg is just the purchase price. The dust collector system, of course, has an acquisition cost. But below that, as total cost of ownership, you have the energy you expend utilizing the systems, you have the filter bags, you have to keep parts in your inventory, you have disposal, and, of course, you have the downtime costs and also the labor costs. Now I’m going to give you a chance to say: okay, tell me how a dust collector system works. Shawn Tierney (host): Before we get to that, we gotta pay the bills. So I wanna tell you about our sponsor, theautomationschool.com. It’s actually the next room over. We have a huge training room with some of the most unique products you’ll be able to work on. You know, I know everybody has a bunch of CompactLogix or S7-1200s or S7-1500s and, you know, VFDs and HMIs. But some of the products we have here you’re not gonna find in anybody else’s training room, not even the factory’s training room, because we cover all different products. Right? So if you’re coming over to do training with us, you can actually learn Siemens and Allen-Bradley at the same time. You can learn how to get Siemens and Allen-Bradley to talk together. You guys know I’ve covered that on the show, but you could do it hands-on. And some of the other things, like working with third-party products. Right? If you go to a vendor’s course, they’re not gonna have third-party products.
But we have, as you remember from the wall in my studio, all kinds of third-party products. And I’m gonna be taking some more pictures of all the different labs we have and the equipment we use with these third-party products. So if you know anybody looking for training: we can do custom things too. If you wanna start training at noon or 1:00 because you’re gonna drive in from three or four hours away (I was recently at a large vendor’s customer doing some training on their behalf, and, yeah, that was a long drive), or if you want your students to show up in person at twelve or one and then train, and then on the last day leave around twelve or one, we can do that as well. We could actually run into the night if you wanted to do evenings. Again, some people don’t learn very well in the evenings, but in any case, because I own the company, we can do whatever you want. As long as we have the equipment and the time to put it together, we’ll do it for you. So I just wanted to make you aware of that. If you just wanna come yourself, go to theautomationschool.com forward slash live and you will see a place where you can preregister for an upcoming class. When I get enough people signed up, I’ll reach out to you and tell you what date it’s gonna be held, and by preregistering like that, you will save $50 off the $500 price. And if you’re already a student, you will save the price of your online course off of the in-person course. So maybe you bought my $200 Siemens or CompactLogix/ControlLogix course: you’re gonna get that off of that $500. Right? And if you don’t own the online course, don’t worry about it; if you come here for in-person training, at the end of your training we’re gonna enroll you in one of those online courses completely free of charge so you can continue your learning. And you don’t have to worry about trying to blitz all the content while you’re here, because whether you’re here for a day or five, it doesn’t matter: whatever you have left to learn, you’ll be able to do it after hours at home, and there’s no additional charge for that. So with that said, let’s get back into this week’s episode of The Automation Podcast. Eugenio Silva (Emerson): And these are going to be general principles and basics. In general, a dust collector system looks like this. It’s a unit where the air is pulled in at the bottom of the compartment (this could be forced or not), then the air gets out at the top, the outlet, and the dust is collected on the outside of the bag. If you see this picture, we have one full bag in a kind of light brown color with a specific fabric: it could be a porous fabric, a PVC, or even paper in some cases. The cleaner air exits at the top. What happens is that the dust cake builds up on the bags, on the outside of the bag. And if you see the number one on top, at that particular entry point we have two pulse valves with compressed air that shake these filter bags a little, which knocks the dust off the bags, and then it’s collected by a hopper at the bottom. Okay? So that’s basically, in general, the working principle. It’s a bit more complicated. This is just to show that in order to automate a dust collector system, including the filter bags, we use a combination of electrical and pneumatic components.
These range from the pulse valves (the ones that blow air into these pipes) to the compressed air tanks that hold the right pressure and the right compressed air capacity to keep the filtration efficiency very high. Then you have the filter regulators: you have to bring the pressure of this line high enough to be efficient, but not so high that you spend too much compressed air. Then you can use controllers, black boxes that do time-based sequencing, but these are often not so efficient, because they don’t take into consideration all the diagnostics you could get out of the system. And then a very important element is the particle sensor on the clean air outlet, because that is going to be your canary in the mine. Right? It’s the one that indicates whether the filter system is efficient and whether the job is done right. And then the other things. But let’s go back to a very interesting view. You remember this picture, where you’re looking at a cross section of the dust collector. Now imagine how it looks from the top. From the top, it looks like this: there is a compressed air tank that covers a certain portion of the filter units. It’s very common that a complete filter unit has different compartments, and in each of these compartments you have a series of filter bags. Then imagine that short but very powerful pulses of compressed air are periodically injected on top of these columns. Below each one there’s a filter bag, so they expand a little, and the dust cake on the outside of their surface falls. By inertial forces, of course, this dust accumulates at the bottom, where it’s extracted into a hopper. Now, depending on the number of filters per line, per row, these pulse valves need to pulse a little faster or not, and the interval, if you just follow a time-based approach, could be three to six minutes. If you look at average filter units, you may have 12 of these filter bags, and about seven to 10 pulse valves per unit. It’s very common that one large installation has about 500 pulse valves and four to six times more filters installed. And imagine each of them pulsing every three minutes, 24/7, seven days a week. Can you imagine the amount of compressed air that can be spent? That’s why these pulses must be very short and powerful, around a hundred milliseconds, to avoid big waste. The picture on the left side is just to say that there are a lot of interesting things involved in dust removal, but basically it’s a jet of compressed air on top that shakes the filter, and then by gravity the dust cake is removed. Shawn Tierney (host): It’s not just a filter. You know, I think most people may just think, well, a dust collector is just this bag that catches all the dust. You do have the bags, but you’re using compressed air to sequentially (depending on how many you have) shake those bags, in a sense, by blowing air into them, to shake off the dust so it falls into the hopper. And you can definitely see, like you were mentioning, if you have lots of these cylinders or these bags, then the sequencing has to be pretty precise and pretty repeatable to make sure you’re cleaning all of the bags off.
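To put rough numbers on that time-based pulsing scheme (roughly 500 pulse valves, one ~100 ms pulse every three minutes, running 24/7, as described above), here is a minimal back-of-the-envelope sketch. The air volume per pulse is an invented placeholder for illustration, not an Emerson specification.

```python
# Back-of-the-envelope arithmetic for the time-based pulsing scheme described
# in the episode. The per-pulse air volume is HYPOTHETICAL, not a vendor spec.

PULSE_VALVES = 500           # large installation, per the episode
PULSE_INTERVAL_S = 3 * 60    # one pulse every 3 minutes (fixed timer)
AIR_PER_PULSE_NL = 50.0      # normal litres per ~100 ms pulse -- ILLUSTRATIVE

SECONDS_PER_YEAR = 365 * 24 * 3600

pulses_per_valve_per_year = SECONDS_PER_YEAR / PULSE_INTERVAL_S
total_pulses = pulses_per_valve_per_year * PULSE_VALVES
air_per_year_nm3 = total_pulses * AIR_PER_PULSE_NL / 1000.0

print(f"Pulses per valve per year: {pulses_per_valve_per_year:,.0f}")
print(f"Total pulses per year:     {total_pulses:,.0f}")
print(f"Compressed air per year:   {air_per_year_nm3:,.0f} Nm^3 (illustrative)")

# At ~175,000 cycles per valve per year on a fixed 3-minute timer, a valve
# rated for a couple of million cycles has headroom, but faster pulsing or
# harsher duty shortens that considerably, which is why the cycle counting
# and demand-based cleaning discussed later in the episode matter.
```

Running it shows why, at plant scale, even shaving a fraction of the pulses adds up quickly.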
Shawn Tierney (host): And I’m assuming, too, you need to know when the hopper is full, because everything stops working if the hopper gets overfull. So very interesting. I think your diagrams do a great job of explaining it as well. Eugenio Silva (Emerson): Yeah. To play with that a little: it’s a bit like the reverse of a vacuum cleaner. Right? Because... Shawn Tierney (host): Yeah. Eugenio Silva (Emerson): ...we suck the dust inside of the bags. Mhmm. And when the bags are completely clogged, the suction power is far reduced. Right? So then you have to empty your, let’s say, filter bags. Here, all the dust accumulates on the outside, the outer surface of the fabric, but the effect is the same: if there’s too much dust on the surface, the intake air shown here can’t get through, and the filter simply stops. That’s why it completely affects the efficiency of the unit. And pulse jet cleaning is a way to unclog or clean the filters in order to bring them back to more efficient operation. Shawn Tierney (host): Yeah. Especially if you have lots of dust, you need an automatic way to continue to clean it and get it off of the filter and into the bin. So yeah, that makes a lot of sense. Eugenio Silva (Emerson): Yeah. In other cases, although we talk about dust, of course, it could be any kind of powder. For example, in the food and beverage industry, let’s say dry milk production, you don’t want that dust to be floating around, because it can bring contamination. But believe it or not, it can sometimes ignite fires. So that’s why it’s important to get it completely eliminated. So this is the part where, on the outlet, where the air should be cleaner, as you can see on the right side, this particle sensor is located at the outlet, on the clean air side. The way it works is quite interesting. We have a sensor in our portfolio called the P152 that takes advantage of the triboelectric effect. Basically, this sensor is coated with a PTFE, or Teflon, layer, so it’s completely electrically isolated from the media. When the dust starts touching that probe, a DC charge is transferred. And because the sensor probe is completely isolated, the electric charge we can resolve is on the order of a picoamp (10 to the minus 12 amps), and the resolution is about 0.5 picoamp. So the particles touching it, depending on their size, are going to generate more or less electricity that gets transferred. And the ones that just pass around it without touching (imagine this exhaust duct is quite big, about half a meter, maximum one meter, around that sensor) also generate an induced AC charge. By measuring that, we get an idea of how clean the air getting out is. But it’s a bit more tricky than you can imagine, because it looks like this. Shawn Tierney (host): Hey, everyone. I hope you enjoy this week’s show. I know I really enjoyed it. And, of course, I wanna thank our members for making the video edition possible. This vendor did not sponsor this episode, so the video edition is available for members, and there are some great graphics in their presentation you guys may wanna check out. Now with that said, we do have some really exciting podcast episodes coming up.
I’m sitting down with Inductive. I’m sitting down with Software Toolbox. I’m sitting down with Siemens and a bunch of other vendors. So we have plenty of new podcasts coming up in the coming weeks this summer. And I also wanted to give you an update on what’s going on over at The Automation Blog. We’ve had some new articles come out. Brandon Cooper, one of our freelancers, wrote a great article about emulating Allen-Bradley E3s. We had a vendor actually submit an article, and sponsor the site to submit an article, about what makes a good automated palletizer. We also had an update about the automation museum. That’s a fundraiser we’re running: we’re trying to open an automation museum. I’ve got a lot of legacy stuff I’d like to donate to it, and I’d love to have it so you can come in and actually walk through, not just see the stuff, but actually learn on it. Right? So maybe you have some old stuff in your plant; you come out to the automation museum and you can learn how to use it. With that said, we’re also looking at possibly doing a podcast for the automation museum to drive awareness of legacy automation. So any of you out there interested in that, contact me directly. You can do so over at theautomationblog.com; just click on the contact button. And we also have two articles from Brandon Cooper about things he learned as he transitioned from working in a plant to traveling around and visiting other plants to help them with their processes and automation. So check those articles out over at The Automation Blog. And finally, over at the automation school, we have the new Factory I/O courses. I just added a new lesson to the Logix version of that course: somebody wanted to try to use bit shifts instead of counters, so I added a lesson on that. Plus, I’m now starting to update all of the courses, including the brand new ones I’m working on. So you’re gonna see a brand new start-here lesson later in the week, and I’m working on some cool emulation ladder logic for my PLC courses: if you don’t have any push buttons or limit switches, you can actually use this code I’m gonna give you for free to simulate the widget machine that I use as kind of the basis for my teaching. So in any case, check that out if you’re in one of my PLC courses over at theautomationschool.com. And with that said, you know, I’m very thankful for all the vendors who come on, especially those who sponsor the episodes so I don’t have to do these commercials. I’m not a big commercial guy, but I do wanna thank you for hanging in there and listening through this update. And now we’ll get right back into this episode of The Automation Podcast. Eugenio Silva (Emerson): Every time you use the pulse jet with the pulse valves on top of the filter bags, it creates a peak. The cleaning cycles happen over a duration of just a hundred milliseconds (that’s why the peaks are very, very thin), and they happen every two or three minutes per row. They naturally have a little bit of noise, because every time you clean, more dust gets back inside the filter bag. It’s like when you clean your vacuum cleaner: immediately when you turn it on, some of that dust gets inside right away, and that’s the peak. But now imagine you have a rupture in the filter, or a big hole, because unfortunately these things wear out. Then these peaks start getting higher and higher.
So what we do when we put the solution in place for a little time, let’s say a couple of days, is set up these thresholds. We need to figure out the level of noise there could be, because it depends very much on the capacity and the type of dust. But once you do that, in our solution we set the thresholds: an alarm and a warning. The warning means that past that point the maintenance crew starts looking (it could be an early indication that a filter bag is not okay), up to the maximum point that avoids any non-compliance issue, which is already a rupture, where you’re really past the time when the filter must be replaced. Shawn Tierney (host): So we’re looking at this chart, for those who are listening. The particle sensor is measuring the particles as air flows normally. But during the pulse, right, we’re forcing a lot of air back in, back down. So we’re gonna see a lot more particles per, let’s say, hundred-millisecond pulse than the average air would have. So we do expect a peak when we pulse, because we’re just forcing a lot of air back in the reverse direction so we can shake the bag loose. But what you’re saying here on this chart I find so interesting. You can quantify the expected increase in dust that you’re gonna sense with the sensor when you pulse, when you blow the air downwards to shake the bag free. And you’re saying if that extra amount of detected dust is either too high above normal or too low below normal, that tells you that you could either have a clogged bag or a burst bag. Am I understanding that correctly? Eugenio Silva (Emerson): Yes, that is correct. And the interesting thing is that as you get closer to needing to replace a filter bag, this baseline starts rising a bit; how can I say, there is a drift. Why? Exactly what you said: a filter is completely clogged. There’s no rupture yet, but the efficiency of the cleaning is not okay, so these slight changes need to be analyzed. Why am I showing row one to row 10? Exactly as in the picture, if you remember: a compartment filter with several filter bags, and they sit under a row. So under row one you may have 10 filter bags, then row two, row three, and so on. That means you’re able to indicate which row has the problem, but you may still need to check further which of the filters in that particular row has the problem. The higher and quicker this peak comes, the more filter bags may have a problem. Shawn Tierney (host): Mhmm. Eugenio Silva (Emerson): Okay? Shawn Tierney (host): So you have one sensor on the exhaust, and you’re sequencing through, blowing out or shaking out, you know, pulsing each of the rows. So that’s why we see one reading across the horizontal, and we see row one, row two, row three, row four, each of them with discrete values or pulses. And like you just said, if you have multiple issues on a row, then you’re going to see a higher or lower peak depending on what the issue is. I’m with you. Eugenio Silva (Emerson): Yes. That’s why I’m going to show the other diagnostic capabilities that we needed to associate with this particle sensor.
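For readers who want to see the shape of that logic, here is a minimal sketch of peak classification against warning and alarm thresholds, plus a baseline-drift check for clogging. Every number in it is invented for illustration; real thresholds are established per site during the commissioning period Eugenio describes.

```python
# Minimal sketch of the peak-classification logic described in the episode:
# one triboelectric sensor on the clean-air outlet, pulses fired row by row,
# and each row's peak compared against site-specific thresholds set during
# commissioning. All numbers here are ILLUSTRATIVE, not Emerson's values.

from statistics import mean

WARNING_PEAK = 8.0    # pA -- early indication a bag in this row is wearing
ALARM_PEAK = 15.0     # pA -- likely rupture; replace before non-compliance
BASELINE_DRIFT = 2.0  # pA above commissioned baseline -> clogging suspected

COMMISSIONED_BASELINE = 1.5  # pA, learned over the first days of operation

def classify_row(row: int, peak_pa: float) -> str:
    """Classify one row's pulse peak from the outlet particle sensor."""
    if peak_pa >= ALARM_PEAK:
        return f"row {row}: ALARM - probable bag rupture, inspect now"
    if peak_pa >= WARNING_PEAK:
        return f"row {row}: WARNING - early bag wear, plan replacement"
    return f"row {row}: OK"

def check_baseline(samples_between_pulses: list[float]) -> str:
    """A slowly rising baseline (no rupture yet) points to clogged bags."""
    baseline = mean(samples_between_pulses)
    if baseline - COMMISSIONED_BASELINE > BASELINE_DRIFT:
        return "baseline drift: cleaning efficiency degrading (clogging)"
    return "baseline normal"

# Example: peaks recorded as the controller pulsed rows 1..4 in sequence.
for row, peak in enumerate([3.2, 9.1, 16.4, 2.8], start=1):
    print(classify_row(row, peak))
print(check_baseline([1.4, 1.6, 1.7, 1.9]))
```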
Eugenio Silva (Emerson): And just to remember: with this particle sensor, we simply use one unit on the outlet. That’s why I needed to sequence and serialize the pulses; I need to synchronize with the pulse jets of every row. Shawn Tierney (host): Mhmm. Eugenio Silva (Emerson): No? Row by row. Shawn Tierney (host): And I think, too, if you tried to do them all at once, you would need a lot higher pressure. So it kinda makes sense to do it row by row, because it reduces your maximum pressure required. Eugenio Silva (Emerson): Yeah. And in a practical sense, we would not be able to... Shawn Tierney (host): Differentiate. Eugenio Silva (Emerson): ...identify which of the rows would be the problem. That’s why we still have to do it that way. But now let’s go into a solution overview, and I think some of the key capabilities and features will highlight even more the diagnostic capabilities we’re able to provide in order to identify such issues correctly and as early as possible. So this is a typical dust collector system. If this dust collector system is just automated with pneumatic and electric components and doesn’t have real-time monitoring, you don’t really know the emission level. Without real-time monitoring with some diagnostics, you’re also not able to identify when the particle sensor, for example, is completely covered by dust, because humidity got into the pipe, or because the dust is so dirty it has become ingrained on the probe. Mhmm. That’s how the reliability or the sensitivity of it can be degraded. And if you’re not monitoring the signals I showed, those peaks synchronized with the pulse valve jets (Mhmm), you don’t have any early warning. Okay? The pulse valves are basically coils. They are solenoid coils... Shawn Tierney (host): Mhmm. Eugenio Silva (Emerson): ...with diaphragms that open and close at the speed of a hundred milliseconds. The point is that their lifetime is about a couple of million cycles, but in some cases one or two years is already enough to reach end of life. So a pulse valve has to be connected to a control system, because you need to know if there’s a short circuit or if the diaphragm is completely open. And you can only do that if, every time you cycle the valve, you also check it. For example, the power with which you drive the coil gives you a feeling for whether that coil is already gone. Okay? Now let’s talk about the compressed air. Right? If you have a filter that is open (there’s a rupture), or a diaphragm that’s completely stuck open, you start consuming more and more compressed air. The point is that this increases continuously, and you might just assume it’s normal. But if you average it and look at it historically, you’re going to see that the trend is caused by broken pulse valves, for example. That’s why an important aspect of the automation solution is to minimize the usage of compressed air: to clearly operate under a baseline that is normal. The filter bags, whatever the materials (life sciences, food, chemical, or metals use different materials), have different wear and lifetime spans. The point is, the filter itself might not be so expensive.
But going up there, stopping for the exchange, moving things around, getting the dust out before you change it, putting on all the personal protection equipment, may take hours. So that is the real cost. And if you’re not able to prevent it, or at least get an early warning of when it’s going to occur, it’s gonna be a reactive maintenance issue. Right? So that’s why it’s worth looking into the different aspects. And that’s why, on the left side, when we talk about solutions, we talk about the connectivity part: we have to work with devices that are HART or 4-20 mA; some devices are Modbus TCP; newer actuators and pulse valves could be MQTT or even OPC UA. That’s the PLC part that we have. And we can work with pneumatic systems, for example, that talk EtherNet/IP, PROFINET, or other standards. Then, of course, we have the I/O that we use to control the pulse jet systems, but also to monitor the differential pressures and, in some cases, measure the compressed air, up to the top, where we put an HMI/SCADA software platform that we pre-engineered in order to make development of the solution simpler for our OEMs or, in many cases, directly for our end users. And everything on the right are the elements we offer in our portfolio. In some cases, OEMs of dust collector systems just take them from us, and they might have their own solution as well. Shawn Tierney (host): So just for the audio audience: I know we’ve covered these products a lot, especially on the news show, but I just wanna go through a couple of these things. You’ve got the ASCO product line, right, so remotely piloted valves and all of that category, the pulse valves. We’ve also got AVENTICS, which we’ve talked about, with filter regulators and different cylinders. TopWorx, which I think we’re all familiar with: proximity sensors and whatnot. And some of the other products: Rosemount differential pressure transmitters. We also see the PACSystems; in this case, you could have edge analytics, so you may have one of the PACSystems edge IPCs. And we even see, down in the corner there, the Emerson PLC and I/O, which I think we’re all familiar with as well. So that kinda shows you how this solution takes all these different products they have in their catalog and puts them together in one solution, which, you know, you kinda need all this stuff, basically understanding how it works; we just went through it. And so it’s interesting: I don’t think I’ve seen a slide yet from Emerson where they include in one application many, if not all, of their different product lines. And then the SCADA on top: it looks like just some beautiful screens and charts and dials showing the current status. I didn’t mean to interrupt you, Eugenio, but I wanted to say that since the people listening will be familiar with all those trade names, because we’ve covered them in the past. But in any case, let me turn it back to you.
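To give a concrete flavor of the connectivity layer Eugenio lists (HART, 4-20 mA, Modbus TCP, MQTT, OPC UA), here is a minimal sketch of polling one process value over Modbus TCP with the open-source pymodbus library. The IP address, register number, and scaling are hypothetical stand-ins; they are not taken from any Emerson device’s register map.

```python
# Minimal polling sketch for a Modbus TCP device, using the open-source
# pymodbus library (pip install pymodbus). The IP address, register address,
# and scaling below are HYPOTHETICAL examples -- consult the actual device's
# register map before doing anything like this in a real plant.

from pymodbus.client import ModbusTcpClient

SENSOR_IP = "192.168.0.50"    # hypothetical particle-sensor gateway address
PARTICLE_REGISTER = 100       # hypothetical holding register
SCALE = 0.1                   # hypothetical: raw counts -> picoamps

def read_particle_level() -> float | None:
    """Read one holding register and apply a linear scale; None on failure."""
    client = ModbusTcpClient(SENSOR_IP)
    if not client.connect():
        return None
    try:
        rr = client.read_holding_registers(address=PARTICLE_REGISTER, count=1)
        if rr.isError():
            return None
        return rr.registers[0] * SCALE
    finally:
        client.close()

if __name__ == "__main__":
    level = read_particle_level()
    print(f"particle level: {level} pA" if level is not None else "read failed")
```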
Eugenio Silva (Emerson): No, no, thanks for highlighting that. And I did say, when I introduced myself, that I’m from the discrete automation part of Emerson. Mhmm. Because most people would know Emerson from Rosemount (for example, pressure transmitters), Fisher valves, and then, you know, the DeltaV DCS. Right? This is the discrete automation part, and that’s why it’s probably something new for everybody here. Thank you very much. So, looking at it in a nutshell: we of course have to put in the sensing devices, the PLC on top, and the HMI/SCADA. Basically, what we provide is real-time monitoring of the particulate emissions. We detect, but also locate, where the leak is, by compartment and row. You can see in the picture that at the top of this HMI screen we have a filter unit with three compartments (compartment one, two, three), and each compartment has these rows on top: the number of rows, and then the number of filter bags within each compartment. So just locating which compartment and which row has a problem: I can tell you, it saves the maintenance people half a day. We also optimize the pulse jet cleaning. It’s a patented algorithm that is completely adaptive, and it works not just with the pulse valves; we also put in header pressure sensors. The fluctuation, and the differential pressure we measure between the outlet and inlet, allow us to increase or decrease the frequency of these pulse jets, which not only makes it more efficient but also minimizes compressed air. And finally, when you talk about solenoids and valve diaphragms, we can indicate one by one where they have problems. So if you look down at the other HMI screen, there are two rows on top, one for the solenoid and one for the diaphragm, and the vertical bars are the filter bag health. If they’re getting closer to red, at the high levels, it means their lifespan is already gone. And if you have light indicators on the solenoid or the diaphragm, depending on the color it might be a short-circuit failure or an open diaphragm, so you also have to replace it. And basically, when we install the solution, sometimes our customers ask us to also integrate it with their control systems, so the compressed air generation, the fan, the hoppers, and the plant safety alarms are sometimes fully integrated as well. Now let’s talk about a few features, because these are the ones you probably haven’t seen yet. Our HMI control system is based on the Movicon.NExT platform, and basically it provides everything you know from a SCADA/HMI. That’s why it’s used in general for applications like OEE and energy management, and for other infrastructure monitoring: smart cities, wastewater facilities, solar mega plants, etcetera. Of course it provides data visualization, but I’d like to highlight that we provide connectivity to all major PLCs you can imagine, with communication drivers, plus the open standards like OPC UA and Modbus. And on the lower part, the gray part here is what we used for this solution. Sometimes we use geo maps to indicate where the filters are, some geo-references, let’s say geofences as well, where people have to be wearing personal protection equipment to be there. So there is real-time data that we are collecting for the particle emissions and other elements like differential pressure and header pressure. And then you have the headlines: you can see some screens that are completely dedicated to alarms and alerts. And the diagnostics that you see are related to the solenoid, to the filter bag, and to the diaphragm.
A lot of them get diagnosed in different ways. For example, for the solenoids, we look at the power output of our I/O cards to see if the solenoid is open or completely short-circuited. The filter bag I already explained: we detect it with some logic using the particle sensors. And the diaphragm diagnostics are based on the header pressure, because if the diaphragm is completely stuck open, the pressure within the chamber starts fluctuating, and then you know something is wrong there. All of this increases the filtration efficiency, changes maintenance from reactive to predictive, keeps the site compliant, minimizes dust emissions, and for sure increases equipment lifetime (like the filter units) and reduces compressed air usage. If you sum all of that up, the return on investment can be quite fast; for big, large installations it might be within two years, but that’s still a very fast return on investment for this particular solution. This is what it looks like, zoomed in a little. You see the screens are not just nice looking; they indicate graphically where the issues are, and the number of issues, on this screen about threshold alerts. The second one, on the right side, is the number of cycles. Every pulse valve has a lifetime of about a couple of million cycles, so here you can at least predict when, and how many, spare parts you need to have in the next quarter. As for the yellow and red signals: red means it’s gone, you have a faulty valve, and the yellow ones are the ones you need to watch, because they’re getting closer to end of lifetime. The other aspect, like I said: when you acquire a dust collector system without the solution, it comes with a sequencer box, which basically does time-based pulsing. It keeps pulsing every three to six minutes (a hundred milliseconds per pulse, like I said), and it’s fixed. That leads to excessive use of the pulse valves, so they’re going to wear out sooner than they should, but it also reduces the bag life, because by stretching the bag filters you wear them out too, and you waste much more compressed air than you probably should. That’s why we implemented these other two types of pulse jet cleaning methodologies. One is on demand. That depends on the differential pressure across the chamber: you can set in the solution how these multiple filter lines are going to operate normally, and a differential pressure threshold; for example, when the efficiency is getting bad, the differential pressure rises, and if it’s within a certain band, you can estimate that there is accumulation of the dust cake. The other one is more intelligent. It’s a function block in our PLC that makes a dynamic change: you put in a single set point, and the adaptive algorithm, based on the differential pressure, starts controlling the intervals between the pulses. The idea is to optimize by eliminating unnecessary pulses in the cycle of these valves and also minimizing the compressed air. Of course, when you install the solution and put in the set point for the first time, the system needs a little time to learn (it’s a learning algorithm that starts adapting), and very soon it’s performing optimally. Okay?
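Emerson’s adaptive function block is patented and proprietary; purely to illustrate the general idea (pulse less often while differential pressure sits comfortably below the set point, and more often as the dust cake builds), here is a generic sketch with invented tuning constants.

```python
# Generic sketch of demand-driven pulse-jet cleaning: pulse less often while
# the differential pressure (dP) across the filter is well below the set
# point, and more often as the dust cake builds. This is NOT Emerson's
# patented adaptive algorithm -- just the basic idea, with invented numbers.

MIN_INTERVAL_S = 60      # never pulse more often than once a minute
MAX_INTERVAL_S = 360     # never wait longer than 6 minutes (episode's range)
DP_SETPOINT_PA = 1200.0  # target differential pressure -- illustrative

def next_pulse_interval(dp_pa: float) -> float:
    """Map measured dP to a pulse interval: low dP (clean bags) gives a long
    interval; dP at or above the set point (loaded bags) gives a short one."""
    loading = max(0.0, min(1.0, dp_pa / DP_SETPOINT_PA))
    return MAX_INTERVAL_S - loading * (MAX_INTERVAL_S - MIN_INTERVAL_S)

# Example: as the dust cake builds and dP rises, the controller tightens the
# cleaning interval instead of pulsing on a fixed timer.
for dp in (300.0, 600.0, 900.0, 1200.0, 1500.0):
    print(f"dP={dp:6.0f} Pa -> pulse every {next_pulse_interval(dp):5.0f} s")
```

The payoff is exactly what the episode describes: unnecessary pulses (and their compressed air) are skipped when the bags are clean, and cleaning intensifies only when the loading actually calls for it.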
Shawn Tierney (host): Hey, everybody. I just wanna jump in here one more time to thank our members, both on YouTube and at theautomationblog.com. I’ve got some really exciting stuff coming up for you guys in the fall (I have this huge plan I’m working on), and so I really just thank you guys for being members. Don’t forget, you get access to Discord. Don’t forget, there’s a whole library of older episodes you get to watch; that’s just part of what I’m doing this month for members: you get a whole library of stuff. We did so much member-only content over the last couple of years that you have literally hundreds of hours of content that you, and only you, get access to as a member, whether you’re on YouTube or at theautomationblog.com. And, of course, if you have any questions about your membership, reach out to me directly, please. And with that, let’s go ahead and jump back into this week’s show. Eugenio Silva (Emerson): And it looks like this. This is just another way to see it. On the left side, you see the particular rows, and each of these rows has its filter bags. Each filter bag has a vertical bar that indicates its health; the solenoid and diaphragm indicators are on top. You can navigate from one compartment to another. Then you have other additional elements like the header pressure, differential pressure, and particle density, and you have a trend diagram from which you’re able to generate reports, but also to monitor, in order to tune the parameters a little and be more efficient. And then, on the far right side, if you have more than one dust collector, you can create different screens if you want. The idea here is that C1, C2 mean compartment one, two, three. Again: diagnostics that lead to preventive and predictive maintenance and completely avoid reactive maintenance. It’s interesting, if you don’t know: in order to replace a single filter, to check if a solenoid valve is completely short-circuited, or to see if a diaphragm valve is open, you need to get there in personal protection equipment, using a mask and gloves. You need to go up. You need to get to know where these things are. Imagine if you could avoid that and just look at the screen and say: hey, I know this is compartment one of filter A, and I know where I need to look. And by the way, I have the spare part, because I had an early indication to fix it. Then we’re not just talking about reducing time, but reducing costs and avoiding putting people into such an environment every time. Okay? I’m not going through the right part, because you can imagine that this is a description of how things are usually done, and if you turn this around into proactive, predictive maintenance, you have fewer and maybe faster steps. You can prevent, and you can plan in advance, when you want to go to these units wearing this protective equipment. So, very quickly, on the value proposition. Of course, like any solution, customers are interested to know if it pays back very quickly: the return on investment. That’s why we check the size, the number of units, what minimum size the customer could start with (because it’s a pre-engineered solution), and how fast we could implement it across the whole site.
We can also, of course, calculate their current expenditure in terms of maintenance (reactive maintenance), the cost of utilities like compressed air, and how often they have downtime issues. And from that, we can show very quickly, very simply, that it’s worth investing in automation. A 20 to 30% reduction is a lot if you consider that they use a huge amount of compressed air, and compressors use electricity. So if you’re able to reduce compressed air, you also increase your operational efficiency, because the cost of utilities is one of the points. Downtime is everything. Maintenance is about avoiding these manual inspections: just going there, checking, coming back, and seeing, okay, we could have waited another week, but because I’m here, I’m going to change the filter anyhow. With that, of course, you’re not maximizing the lifetime of your equipment. And interestingly, some downstream equipment, like the blowers and the vacuum pumps, also gets damaged if it takes in a lot of excessive dust. So optimizing maintenance, optimizing every step, pays off in that sense. And finally, of course, customers do this because they want full compliance. Every possible issue can be tracked and reported. The efficiency of the systems can be shown in audit-ready reports that really prove you are reducing particle emissions. You provide a lot of visibility into what’s going on, so the technical teams have very high confidence operating the system; without it, they’re operating blindly, and that’s why they often feel concerned that bad things might just happen. In a nutshell: we’re talking about savings from extending the filter life, savings from reducing compressed air, and avoiding downtime (each downtime is one event that costs not only on the maintenance side shown here, but also the whole production cost, which isn’t calculated here), plus the penalties: if you have a single issue, it’s gonna be a big one. So it’s a good way to give customers an idea of why they should invest the CapEx and how we can help with the OPEX to save their budgets in the sense of operating dust collector systems.
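As a worked example of that payback argument, here is a minimal sketch using the rough figures quoted in the episode (about $100k average per compliance incident, about $18k per filter unit, a 20-30% compressed air reduction, payback within roughly two years for large sites). The site-specific inputs (air spend, CapEx, incident and replacement rates) are hypothetical placeholders.

```python
# Simple payback sketch using the kinds of figures quoted in the episode.
# Site-specific inputs below are HYPOTHETICAL; Emerson works through real
# numbers with each customer via their sizing questionnaire.

penalty_per_incident = 100_000     # USD, episode's average figure
avoided_incidents_per_year = 1     # hypothetical
compressed_air_spend = 200_000     # USD/year -- hypothetical site figure
air_reduction = 0.25               # episode quotes a 20-30% range
filter_unit_cost = 18_000          # USD per filter unit, per the episode
filters_replaced_per_year = 12     # hypothetical replacement rate
life_extension = 0.20              # 20% longer bag life -- hypothetical

annual_savings = (
    avoided_incidents_per_year * penalty_per_incident
    + compressed_air_spend * air_reduction
    + filters_replaced_per_year * filter_unit_cost * life_extension
)
solution_capex = 150_000           # USD -- hypothetical project cost

print(f"Annual savings: ${annual_savings:,.0f}")
print(f"Simple payback: {solution_capex / annual_savings:.1f} years")
```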
Eugenio Silva (Emerson): So, Shawn, if I have three minutes, I’m going to run this HMI demo, because then you can see on the screen how the different screens are operated; but it’s up to you whether I should do that. Shawn Tierney (host): Yeah. Go ahead. Eugenio Silva (Emerson): Okay. So this is an HMI demo, simulated here, of course, because it’s not possible to connect live or to have all of this equipment. I’m going to click here. Basically, you see how an operator would navigate and the type of information that’s provided. I made this click-through very quick so we don’t lose too much time here. But you see that you’re able to trend the particle density and the air consumption. You can set the alarms. You can see which pulse valve is not okay and what the health level of the filter bags is. And now the settings: for the cleaning, these are the parameters you can adjust. Like I said, we have an adaptive learning algorithm, but in many cases you need at least to set up the sensors as well: their sensitivity. There are many different thresholds. And then the diagnostics part, for the diaphragm and for rupture detection. And once this is done, you can see that you have quite interesting information. For example, if you change a valve, you reset the counter. These are the alarms you can acknowledge, etcetera. Okay? And that’s it. That was the demo. Shawn Tierney (host): Yeah. That gives you a good idea of what you’re getting as far as the HMI is concerned, and it’s good to see it full screen. It looks like a very well-designed HMI. From my perspective, it’s really focusing in on any errors: you have standard, very good-looking graphics, and then if there’s an error, you see it in red or yellow; it really calls the eye to it. But, Eugenio, I see there’s a QR code on the screen right now. Can you tell people where that goes? Eugenio Silva (Emerson): Yes. It goes to the product page on our Emerson.com site. From there, you can request a demo, request a proposal, or request more information. So this is the entry point to get to know how we provide the solution and what the basic elements are. And there we also have the related product pages if you want to get to know more. Shawn Tierney (host): And I think the important part here is, a lot of times when you have a dust collector system that constantly needs care, right, to keep you in compliance, to make sure your products are being made correctly, and to keep people safe and all of that: these systems are gonna be expensive, and larger systems, of course, are gonna be more expensive. And so that cost savings, it’s like the energy savings we get with VFDs on pumps and fans, right, or the energy savings when we redo lighting: the folks over at Emerson are gonna wanna help you quantify that. Because for you to be able to justify it, it’s not just, hey, this has been giving us a lot of problems and we know it’s costing us money; you also wanna know your ROI. Right? And so they’re gonna work with you on that, because on these big projects, those are some of the things we have to look at to be able to budget correctly. Anybody who has ever been in the budgeting part of a company knows you just don’t spend money because it’s fun; you have to have a reason behind everything. So I would guess I’m right on that, Eugenio. Eugenio Silva (Emerson): Yes. And, Shawn, although I just covered the technical part, we can of course talk to customers and consult them, without any commitment... Shawn Tierney (host): Yeah. Eugenio Silva (Emerson): ...to look around and see, in terms of maturity, how they operate their dust collector systems. We can check the installed base; we have a questionnaire they can fill in; we can understand the size. We can, for example, talk about the energy consumption and the number of hours they spend on reactive maintenance. And based on that, we give them the opportunity to analyze whether they want to invest in the solution, which is a CapEx investment, and also how much reduction they could get on the OPEX side. Shawn Tierney (host): Yeah, which is how they’re gonna justify it. Well, Eugenio, I wanna thank you for going through that. I really enjoyed your presentation. I learned a lot more about this product line, and actually this product category, than I knew coming in, and I think you did a great job of walking us through it all.
So thank you very much for coming on the show. Eugenio Silva (Emerson): Shawn, on behalf of Emerson, we appreciate this opportunity. It’s my first one here, so I also enjoyed it, and this was great: a great conversation, great questions. Thank you. Shawn Tierney (host): Well, I hope you enjoyed that episode. I wanna thank Eugenio for coming on the show and bringing us up to speed on dust collector systems. I really didn’t know all of those technical details, and I really appreciate him going through that. And it’s cool to see how they integrated so many different Emerson products into that solution; it’s not just a PLC and some I/O, there are the sensors and everything else. Sorry, I’m not gonna go through it all again. But in any case, I really appreciate that. And I also appreciate our members, who made the video edition possible. Thank you, members. Your $5 a month not only unlocks this video, but so many other videos that we’ve done: hundreds of videos I’ve done over the last twelve years. So thank you for being a member and supporting my work. I also wanna thank theautomationschool.com and theautomationblog.com. I hope you guys listened to that update that I included in the show; so many good things are happening at both places, and I hope you’ll take a moment to check out both websites. And with that, I just wanna wish you all good health and happiness. And until next time, my friends, peace. The Automation Podcast, Episode 241 Show Notes: To learn about becoming a member and unlocking hundreds of our “members only” videos, click here. Until next time, Peace ✌️ If you enjoyed this content, please give it a Like, and consider sharing a link to it, as that is the best way for us to grow our audience, which in turn allows us to produce more content.

Autonomous IT
Patch [FIX] Tuesday – July 2025: [BitLocker Attack, Secure Boot Expiry, Linux chroot+sudo privesc, and Malicious .Zips], E21

Autonomous IT

Play Episode Listen Later Jul 8, 2025 21:29


In this July 2025 Patch [FIX] Tuesday episode, Automox security experts Tom, Seth, and Cody unpack four high-impact threats — from Microsoft updates, to Linux vulns, and .zip exploit PoCs. Topics include a physical attack method bypassing BitLocker encryption (CVE-2025-48001), the looming expiration of Secure Boot certificates, a Linux privilege escalation flaw in chroot and sudo (CVE-2025-32463), and a proof-of-concept .zip exploit that hides malicious content during preview but runs it on unzip. Expect sharp technical insights, practical mitigation tips, and as always, a few laughs.

MLOps.community
Bridging the Gap Between AI and Business Data // Deepti Srivastava // #325

MLOps.community

Play Episode Listen Later Jun 20, 2025 57:13


Bridging the Gap Between AI and Business Data // MLOps Podcast #325 with Deepti Srivastava, Founder and CEO at Snow Leopard.
Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter
// Abstract
I'm sure the MLOps community is probably aware – it's tough to make AI work in enterprises for many reasons, from data silos, data privacy and security concerns, to going from POCs to production applications. But one of the biggest challenges facing businesses today, that I particularly care about, is how to unlock the true potential of AI by leveraging a company's operational business data. At Snow Leopard, we aim to bridge the gap between AI systems and critical business data that is locked away in databases, data warehouses, and other API-based systems, so enterprises can use live business data from any data source – whether it's a database, warehouse, or APIs – in real time and on demand, natively. In this interview, I'd like to cover Snow Leopard's intelligent data retrieval approach that can leverage business data directly and on demand to make AI work.
// Bio
Deepti is the founder and CEO of Snow Leopard AI, a platform that helps teams build AI apps using their live business data, on demand. She has nearly two decades of experience in data platforms and infrastructure. As Head of Product at Observable, Deepti led the 0→1 product and GTM strategy in the crowded data analytics market. Before that, Deepti was the founding PM for Google Spanner, growing it to thousands of internal customers (Ads, Play Store, Gmail, etc.) before launching it externally as a seminal cloud database service. Deepti started her career as a distributed systems engineer in the RAC database kernel at Oracle.
// Related Links
Website: https://www.snowleopard.ai/
AI SQL Data Analyst // Donné Stevenson - https://youtu.be/hwgoNmyCGhQ
~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter @mlopscommunity (https://x.com/mlopscommunity) or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Deepti on LinkedIn: /thedeepti/
Timestamps:
[00:00] Deepti's preferred coffee
[00:49] MLflow vs Kubeflow Debate
[04:58] GenAI Data Integration Challenges
[09:02] GenAI Sidecar Spicy Takes
[14:07] Troubleshooting LLM Hallucinations
[19:03] AI Overengineering and Hype
[25:06] Self-Serve Analytics Governance
[33:29] Dashboards vs Data Quality
[37:06] Agent Database Context Control
[43:00] LLM as Orchestrator
[47:34] Tool Call Ownership Clarification
[51:45] MCP Server Challenges
[56:52] Wrap up

Telecom Reseller
Cisco's AI Channel Playbook: Cassie Roach on Partner Enablement and Infrastructure Innovation, Podcast

Telecom Reseller

Play Episode Listen Later Jun 16, 2025


With major announcements around AI infrastructure, including AI Pods, Nexus HyperFabric, and GPU-intensive servers, Cisco is positioning itself not just as a networking leader — but as the channel's go-to platform for AI-ready data centers.
"AI is a once-in-a-generation opportunity — and Cisco is making it real for partners." — Cassie Roach, Global VP, Cloud and AI Infrastructure Partner Sales, Cisco
At Cisco Live 2025 in San Diego, Technology Reseller News publisher Doug Green spoke with Cassie Roach, Cisco's Global Vice President of Cloud and AI Infrastructure Partner Sales, about the company's bold steps to transform AI hype into tangible partner opportunity. With major announcements around AI infrastructure, including AI Pods, Nexus HyperFabric, and GPU-intensive servers, Cisco is positioning itself not just as a networking leader — but as the channel's go-to platform for AI-ready data centers.
Key Cisco AI Updates for Partners:
AI-Ready Infrastructure Specialization: A new certification that helps partners align with customer POCs, scale faster, and prove ROI.
Black Belt Training & Partner Tools: Designed to educate, equip, and incentivize partner sellers with co-selling platforms, growth planning, and layered rewards.
Marketing Velocity Central: Cisco-branded campaign kits and industry-specific go-to-market resources for partners.
AI Pods: Modular infrastructure for training, fine-tuning, and inferencing workloads — with "small, medium, and large" sizing for pilot-to-production journeys.
"We're creating an easy button for partners — even in a complex AI environment," Roach explained. Cisco's approach focuses on frictionless engagement — empowering partners with everything from vertical use-case blueprints to hands-on support for opportunity identification through PXP Growth Finder.
Roach emphasized that success depends on enabling partners at every level — not just executives or system integrators — but also frontline sellers, who now have access to tools that simplify the AI value proposition and drive sales. She also highlighted how AI is being securely embedded across Cisco's portfolio — from infrastructure to Webex Collaboration and end-to-end security, allowing customers to move from pilots to production with confidence.
"This isn't just about AI," Roach said. "It's about unlocking the entire Cisco portfolio — in a way that creates real stickiness, real customer outcomes, and real partner growth."
To explore Cisco's partner programs and AI infrastructure resources, visit cisco.com, or log into the partner portal via Sales Connect.

Fund/Build/Scale
Leveraging Founder-Market Fit to Win Over Risk-Averse Buyers

Fund/Build/Scale

Play Episode Listen Later Jun 13, 2025 39:46


What happens when a team of legal veterans decides to rebuild the dispute resolution process from the ground up? To find out, I interviewed Rich Lee, co-founder and CEO of New Era ADR, a platform designed to resolve legal disputes faster, at lower cost, and with less friction for both companies and individuals. We talked about building a team that had enough credibility to sell into one of the most risk-averse industries, how they approached trust-building with both customers and investors, and how they're scaling a capital-efficient business in a category that's been largely unchanged for decades. Thanks for listening! – Walter.

RUNTIME 39:46

EPISODE BREAKDOWN
(2:59) "I'm an early adopter of, you know, anything."
(10:03) "The core problem: why does it cost so damn much to resolve a legal dispute in this country?"
(13:05) How Rich and his co-founders divided roles and responsibilities
(17:15) Hurdle #1: "Challenging the underlying assumption that litigation and a legal dispute doesn't have to be two-plus years."
(22:32) In the early days, New Era ADR developed multiple personas to overcome customer objections
(25:50) "Fortunately, we didn't have to do a lot of POCs."
(29:40) "Our market's comically big."
(30:03) Finding your SAM and SOM when the TAM is $350 billion
(32:22) Which came first: the pitch deck, or the revenue model?
(35:50) One question Rich would have to ask the CEO if he were interviewing for a role with an early-stage startup.

LINKS
Rich Lee
New Era ADR
The Future of ADR? New Era Bags $6.3m While Still at Seed Stage, Artificial Lawyer, 3/16/2022

The Tech Trek
How Core Values Drive Real AI Impact

The Tech Trek

Play Episode Listen Later Jun 5, 2025 24:00


In this episode of The Tech Trek, Brian Clifford, Chief Data Officer at Amica Insurance, shares how his team translates core company values—like exceptional customer service—into actionable AI and data strategies. We explore how Amica approaches pilots, vendor selection, internal adoption, and governance to scale AI effectively and responsibly.

Making Risk Flow | The Future of Insurance
Practitioner's Playbook: The Blueprint for Risk Digitization POCs | Zaheer Hooda and Richard Lewis, Cytora

Making Risk Flow | The Future of Insurance

Play Episode Listen Later May 27, 2025 28:24


Fan Mail: Got a challenge digitizing your intake? Share it with us, and we'll unpack solutions from our experience at Cytora.

Welcome to Cytora's Practitioner's Guide, a new series from Making Risk Flow. In each episode, we sit down with experts from Cytora's global team to explore practical strategies, real-world applications, and emerging insights from the front lines of risk digitization and underwriting transformation.

In this episode, Juan de Castro is joined by Rich Lewis, Cytora's Sales Director, and Zaheer Hooda, Head of North America, for a deep dive into what makes proof-of-concept (POC) initiatives in risk digitization succeed—or fail. Drawing on firsthand experience from working with leading carriers, they break down five essential capabilities insurers need to get right when implementing digitization initiatives—from extraction accuracy and full-spectrum intake handling, to scalable deployment and human-in-the-loop exception management (see the short sketch after these show notes). They also provide a practical, inside look at how insurers structure effective proof-of-concept processes, including live workshops, data preparation, success metrics, and how to align POC design with measurable business outcomes. Whether you're a carrier planning a digitization journey or a leader seeking to optimize underwriting workflows, this episode offers tactical guidance to ensure your technology investments deliver meaningful impact.

To receive a custom demo from Cytora, click here and use the code 'Making Risk Flow'.

Our previous guests include: Bronek Masojada of PPL, Craig Knightly of Inigo, Andrew Horton of QBE Insurance, Simon McGinn of Allianz, Stephane Flaquet of Hiscox, Matthew Grant of InsTech, Paul Brand of Convex, Paolo Cuomo of Gallagher Re, and Thierry Daucourt of AXA.

Check out the three most downloaded episodes:
- The Five Pillars of Data Analytics Strategy in Insurance | Craig Knightly, Inigo
- 20 Years as CEO of Hiscox: Personal Reflections and the Evolution of PPL | Bronek Masojada
- Implementing ESG in the Insurance and Underwriting Space | Simon Tighe, Chaucer, and Paul McCarney, Moody's
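As referenced above, human-in-the-loop exception management reduces to a simple routing rule: extracted fields that clear a confidence threshold flow straight through, and everything else lands in a review queue. The sketch below is a generic illustration of that pattern, not Cytora's implementation; the threshold, field names, and confidence scores are all invented.

```python
# Generic human-in-the-loop routing sketch: low-confidence extractions go to
# a human review queue instead of flowing straight through. Illustrative only.
REVIEW_THRESHOLD = 0.85  # assumed cutoff; real systems tune this per field

def route_submission(extracted: dict[str, tuple[str, float]]) -> tuple[dict, dict]:
    """Split extracted fields into straight-through vs. human-review buckets."""
    auto, review = {}, {}
    for field, (value, confidence) in extracted.items():
        (auto if confidence >= REVIEW_THRESHOLD else review)[field] = value
    return auto, review

auto, review = route_submission({
    "insured_name": ("Acme Corp", 0.98),
    "annual_revenue": ("$12M", 0.61),  # low confidence -> human review
})
print("straight-through:", auto)
print("needs review:", review)
```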

Revenue Boost: A Marketing Podcast
AI + EQ + GTM: The New Growth Equation for B2B Leaders

Revenue Boost: A Marketing Podcast

Play Episode Listen Later May 7, 2025 35:38


"If done right, AI will actually make us more human. It handles the busy work and surfaces real-time insights—so GTM teams can focus on what really drives revenue: building relationships, solving real problems, and creating long-term customer value." That's a quote from Roderick Jefferson and a sneak peek at today's episode.Hi there, I'm Kerry Curran—Revenue Growth Consultant, Industry Analyst, and host of Revenue Boost, A Marketing Podcast. In every episode, I sit down with top experts to bring you actionable strategies that deliver real results. So if you're serious about business growth, find us in your favorite podcast directory, hit subscribe, and start outpacing your competition today.In this episode, titled AI + EQ + GTM: The New Growth Equation for B2B Leaders, I sit down with keynote speaker, author, and enablement powerhouse Roderick Jefferson to unpack the modern formula for revenue growth: AI + EQ + GTM.We explore why traditional sales enablement isn't enough in today's landscape—and how real go-to-market success requires alignment across marketing, sales, and customer success, powered by emotional intelligence and smart technology integration.Whether you're a CRO, CMO, or GTM leader looking to scale smarter, this episode is packed with real-world insights and actionable strategies to align your teams and drive sustainable growth.Stick around until the end, where Roderick shares expert tips for building your own AI-powered revenue engine.If you're serious about long-term growth, it's time to get serious about AI, EQ, and GTM. Let's go.Kerry Curran, RBMA (00:01)Welcome, Roderick. Please introduce yourself and share your background and expertise.Roderick Jefferson (00:06)Hey, Kerry. First of all, thanks so much for having me on. I'm really excited—I've been looking forward to this one all day. So thanks again. I'm Roderick Jefferson, CEO of Roderick Jefferson & Associates. We're a fractional enablement company, and we focus on helping small to mid-sized businesses—typically in the $10M to $100M range—that need help with onboarding, ongoing education, and coaching.I'm also a keynote speaker and an author. I actually started my career in sales at AT&T years ago. I was a BDR, did well, got promoted to AE, made President's Club a couple of times. Then I was offered a sales leadership role—and I turned it down. I know they thought I was crazy, but there were two reasons: first, I realized I loved the process of selling more than just closing big deals. And second, oddly enough, I wasn't coin-operated. I did it because I loved it—it gave me a chance to interact with people and have conversations like this one.Kerry Curran, RBMA (01:16)I love that—and I love your background. As Roderick mentioned, he does a lot of keynote speaking, and that's actually where I met him. He was a keynote speaker at B2BMX West in Scottsdale last month. I also have one of your books here that I've been diving into. I can't believe how fast this year is flying—it's already the first day of spring!Roderick Jefferson (01:33)Thank you so much. Wow, that was just last month? It feels like last week. Where is the time going?Kerry Curran, RBMA (01:45)I appreciate your experience for so many reasons. One is that—like we talked about before the show—my dad was in sales at AT&T for over 20 years. It paid for my entire education. 
So we were comparing notes on that era of innovation and what we learned back then.Roderick Jefferson (02:02)Thank you, AT&T!Kerry Curran, RBMA (02:13)So much of what you talked about on stage and wrote about in your book is near and dear to my heart. My background is in building integrated marketing-to-sales infrastructure and strengthening it to drive revenue growth. I'm excited to hear more about what you're seeing and hearing. You talk to so many brands and marketers—what's hot right now? What's the buzz? What do we need to know?Roderick Jefferson (02:44)A couple of things. The obvious one is AI—but I'll add something: it's not just AI, it's AI plus EQ plus IQ. Without that combination, you won't be successful.The other big theme is the same old problem we've always had: Why is there such a disconnect between sales and marketing? As an enablement guy, it pains me. I spent 30 years in corporate trying to figure that out. I think we're getting closer to alignment—thank you, AI, for finally stepping in and being smarter than all of us! But we've still got a long way to go.Part of the issue is we're still making decisions in silos. That's why I've become a champion of moving away from just "sales enablement."Yes, I know I wrote the book on sales enablement—but I don't think that's the focus anymore. In hindsight, “sales enablement” is too myopic. It's really about go-to-market. How do we bring HR, marketing, product marketing, engineering, sales, and enablement all to the same table to talk about the entire buyer's journey?Instead of focusing on our internal sales process and trying to shoehorn prospects into it, we should be asking: How do they buy? Who buys? Are there buying committees? How many people are involved? And yes, ICP matters—but that's just the tip of the iceberg. It goes much deeper.Kerry Curran, RBMA (04:44)Yes, absolutely. And going back to why you loved your early sales roles—it was about helping people. That's how I've always approached marketing too: what are their business challenges, and what can I offer to solve them? In your keynote, you said, “I want sales to stop selling and start helping.” But that's not possible without partnering with marketing to learn and message around the outcomes we drive and the pain points we solve.Roderick Jefferson (05:22)Exactly. Let's unpack that. First, about helping vs. selling—that's why we have spam filters now. Nobody wants to be sold to. That's also why people avoid car lots—because you know what's coming: they'll talk at you, try to upsell you, and push you into something you don't need or want. Then you have buyer's remorse.Now apply that to corporate and entrepreneurship. If you're doing all the talking in sales, something's wrong. Too many people ask questions just to move the deal forward instead of being genuinely inquisitive.Let's take it further. If marketing is working in a silo—building messaging and positioning—and they don't bring in sales, then guess what? Sales won't use it. Newsflash, right? And second, it's only going to reflect marketing's perspective. But if you bring both teams together and say, “Hey, what are the top three to five things you're hearing from prospects over and over?”—then you can work collaboratively and cohesively to solve those.The third piece is: let's stop trying to manufacture pain. Not every prospect is in pain. Sometimes the goal is to increase efficiency or productivity. If there is pain, you get to play doctor for a moment. 
And by that, I mean: do they need an Advil, a Vicodin, a Percocet, or an extraction? Do you need to stop the bleeding right now? You only figure that out by getting sales, marketing, product, and even HR at the same table.Kerry Curran, RBMA (07:34)Yes, absolutely. I love the analogy of different levels of pain solutions because you're right—sometimes it's not pain, it's about helping the customer be more efficient, reduce costs, or drive revenue. I've used the doctor analogy before too: you assess the situation and then customize the solution based on where it “hurts” the most. One of the ongoing challenges, though, is that sales and marketing still aren't fully aligned. Why do you think that's been such a persistent issue, and where do you see it heading?Roderick Jefferson (08:14)Because sales speaks French and marketing speaks German. They're close enough that they can kind of understand each other—like ordering a beer or finding a bathroom—but not enough for a meaningful conversation.The core issue is that they're not talking—they're presenting to each other. They're pitching ideas instead of having a dialogue. Marketing says, “Here's what the pitch should look like,” and sales replies, “When's the last time you actually talked to a customer?”They also get stuck in “I think” and “I feel,” and I always tell both groups—those are the two things you cannot say in a joint meeting. No one cares what you think or feel. Instead, say: “Here's what I've seen work,” or “Here's what I've heard from prospects and customers.” That way, the conversation is rooted in data and real-world insight, not opinion or emotion.You might say, “Hey, when we get to slide six in the deck, things get fuzzy and deals stall.” That's something marketing can fix. Or you go to product and say, “I've talked to 10 prospects, and eight of them asked for this feature. Can we move it up in the roadmap?”Or go back to sales and say, “Only 28% of the team is hitting quota because they're struggling with discovery and objection handling.” So enablement and marketing can partner to create role plays, messaging guides, or accreditations. It sounds utopian, but I've actually done this six times over 30 years—it is possible.It's not because I'm the smartest guy in the room—it's because when sales and marketing align around shared definitions and shared goals, real change happens. Go back to MQLs and SQLs. One team says, “We gave you all these leads,” and the other says, “Yeah, but they all sucked.” Then you realize: you haven't even agreed on what a lead is.As a fractional enablement leader, that's the first question I ask: “Can you both define what an MQL and SQL mean to you?” Nine times out of ten, they realize they aren't aligned at all. That's where real progress starts.Once you fix communication, the next phase is collaboration. And what comes out of collaboration is the big one: accountability. That's the word nobody likes—but it's what gets results. You're holding each other to timelines, deliverables, and follow-through.The final phase is orchestration. That's what enablement really does—we connect communication, collaboration, and accountability across the entire go-to-market team so everyone has a voice and a vote.Kerry Curran, RBMA (13:16)You're so smart, and you bring up so many great points—especially around MQLs, SQLs, and the lack of collaboration. There's no unified North Star. Marketing may be focused on MQLs, but those criteria don't always match what moves an MQL to an SQL.There's also no feedback loop. 
I've seen teams where sales and marketing didn't even talk to each other—but they still complained about each other! I was brought in to help, and I said, “You're adults. It's time to talk to one another.” And you'd think that would be obvious.What I love is that we're starting to see the outdated framework of MQLs as a KPI begin to fade. As you said, it's about identifying a shared goal that everyone can be accountable to. We need to all be paddling in the same direction.Roderick Jefferson (14:16)Exactly. I wouldn't say we're all rowing yet, but we've definitely got our hands in the water, and we're starting to go in the same direction. You can see that North Star flickering out there.And I give big kudos to AI for helping with that. In some ways, it reminds me of social media. Would you agree that social media initially made us less social?Kerry Curran, RBMA (14:27)Yes, totally agree. We can see the North Star.Roderick Jefferson (14:57)Now I'm going to flip that idea on its head: if done right, I believe AI will actually make us more human—and drive more meaningful conversations. I know that sounds crazy, but I have six ways AI can help us do that.First, let's go back to streamlining lead scoring. If we use AI to prioritize leads based on their likelihood to convert, sales can focus efforts on the most promising opportunities. Once we align on those criteria, volume and quality both improve. With confidence comes competence—and vice versa.Second is automating task management. Whether it's data entry, appointment scheduling, or follow-up emails, those repetitive tasks eat up sales time. Less than 30% of a rep's time is spent actually selling. If we offload that admin work, reps can focus on high-value activities—like building relationships, doing discovery, and closing deals.Kerry Curran, RBMA (15:59)Yes! And pre-call planning. Having the time to prepare properly makes a huge difference.Roderick Jefferson (16:19)Exactly. Third is real-time analytics. If marketing and ops can provide sales reps with real-time insights—like funnel data, deal velocity, or content performance—we can start making decisions based on data, not assumptions or feelings.The fourth area is personalized sales coaching. I talk to a lot of leaders, and I'll make a bold statement: most sales leaders don't know how to coach. They either use outdated methods or try to “peanut butter” their advice across the team.But what if we could use AI to analyze calls, emails, and meetings—then provide coaching based on each rep's strengths and weaknesses? Sales leaders could shift from managing to leading.Kerry Curran, RBMA (17:55)Yes, I love that. It would completely elevate team performance.Roderick Jefferson (18:11)Exactly. Fifth is increasing efficiency in the sales process. AI can create proposals, contracts, and other documents, which frees up time for reps to focus on helping—not chasing paperwork. And by streamlining the process, we can qualify faster and avoid wasting time on poor-fit deals.Kerry Curran, RBMA (18:58)Right, and they can focus on the deals that are actually likely to move forward.Roderick Jefferson (19:09)Exactly. And sixth—and most overlooked—is customer success. That's often left out of GTM conversations, but it's critical. We can use AI-powered chatbots and virtual assistants to handle basic inquiries. That frees up CSMs to focus on more strategic tasks like renewals, cross-sell, and upsell.Let's be honest—most CSMs were trained for renewals, not selling. 
But cross-sell and upsell aren't really selling—they're reselling to warm, happy customers. The better trained and equipped CSMs are, the better your customer retention and growth.Because let's face it—we've all seen it: 90 days before renewal, suddenly a CSM becomes your best friend. Where were they for the last two years? If we get ahead of that and connect all the dots—sales, marketing, CS, and product—guess who wins?The prospect.The customer.The company—because revenue goes up.The employee—because bonuses happen, spiffs get paid, and KPIs are hit.But most importantly, we build customers for life. And that has to start from the very beginning, not just when the CSM steps in at the end.Kerry Curran, RBMA (20:47)Yes, this is so smart. I love that you brought customer success into the conversation. One of the things I love about go-to-market strategy is that it includes lifetime value—upsell and renewal are a critical part of the revenue journey.In my past roles, I've seen teams say, “Well, that's just client services—they don't know how to sell.” But to your point, if we coach them, equip them, and make them comfortable, it can go a long way.Roderick Jefferson (21:34)Absolutely. They become the lifeblood of your business. Yes, you need net-new revenue, but if sales builds this big, beautiful house on the front end and then customers just walk out the back door—what's the point?And I won't even get into the stats—you know them—about how much more expensive it is to acquire a new customer versus retaining one. The key is being human and actually helping.Kerry Curran, RBMA (21:46)Exactly. I love that. It leads perfectly into my next question—because one of the core components of your strategy and presentation was the importance of EQ, or emotional intelligence. Can you talk about why that's so critical?Roderick Jefferson (22:19)Yeah. It really comes down to this: AI can provide content—tons of it, endlessly. It can give you all the data and information in the world. But it still requires a human to provide context. For now, at least. I'm not saying it'll be that way forever, but for now, context is everything.I love analogies, so I'll give you one: it's like making gumbo. You sprinkle in some seasoning here, some spice there. In this case, AI provides the content. Then the human provides the interpretation—context. That's understanding how to use that generated content to reach the right person or company, at the right time, with the right message, in the right tone.What you get is a balanced, powerful approach: IQ + EQ + AI. That's what leads to truly optimal outcomes—if you do it right.Kerry Curran, RBMA (23:19)Yes! I love that. And I love every stage of your process, Roderick—it's so valuable. I know your clients are lucky to work with you.For people listening and thinking, “Yes, I need this,” how do they get started? What's the baseline readiness? How do they begin integrating sales and marketing more effectively—and leveraging AI?Roderick Jefferson (23:34)Thank you so much for that. It really starts with a conversation. Reach out—LinkedIn, social media, my website. And from there, we talk. We get to the core questions: Where are you today? Where have you been? Where are you trying to go? And most importantly: What does success look like?And not just, “What does success look like?” but, “Who is success for?”Then we move into an assessment. I want to talk to every part of the go-to-market team. 
Because not only do we have French and German—we've also got Dutch, Spanish, and every other language. My job is to become the translator—not just of language, but of dialects and context.“This is what they said, but here's what they meant. And this is what they meant, but here's what they actually need.”Then we dig into what's really going on. Most clients have a sense of what's “broken.” I'm not just looking for the broken parts—I'm looking at what you've already tried. What worked? What didn't? Why or why not?I basically become a persistent four-year-old asking, “Why? But why? But why?” And yes, it gets frustrating—but it's the only way to build a unified GTM team with a shared North Star.Kerry Curran, RBMA (25:32)Yes, I love that. And just to add—sometimes something didn't work not because it was a bad strategy, but because it was evaluated with the wrong KPI or misunderstood entirely.Like a top-of-funnel strategy did work—but the team expected it to generate leads that same month. It takes time. So much of this comes down to digging into the root of the issue, and I love your approach.Roderick Jefferson (26:10)Exactly. And it's also about understanding that every GTM function has different KPIs.If I'm talking to sales, I'm asking about average deal size, quota attainment, deal velocity, win rate, pipeline generation. If I'm talking to sales engineering, they care about number of demos per deal, wins and losses, and number of POCs. Customer success? They care about adoption, churn, CSAT, NPS, lifetime value.My job is to set the North Star and speak in their language—not in “enablement-ese.” Sometimes that means speaking in sales terms, sometimes marketing terms. And I always say, “Assume I know nothing about your job. Spell out your acronyms. Define your terms.”Because over 30 years, I've learned: the same acronym can mean 12 different things at 12 different companies.The goal is to get away from confusion and start finding commonality. When you break down the silos and the masks, you realize we're all working toward the same thing: new, long-term, happy customers for life.Kerry Curran, RBMA (27:55)Yes—thank you, Roderick. I love this. So, how can people find you?Roderick Jefferson (28:00)Funny—I always say if you can't find me on social media, you're not trying to find me.You can reach me at roderickjefferson.com, and you can find my book, Sales Enablement 3.0: The Blueprint to Sales Enablement Excellence and the upcoming Sales 3.0 companion workbook there as well.I'm on LinkedIn as Roderick Jefferson, Instagram and Threads at @roderick_j_associates, YouTube at Roderick Jefferson, and on BlueSky as @voiceofrod.Kerry Curran, RBMA (28:33)Excellent. I'll make sure to include all of that in the show notes—I'm sure this episode will have your phone ringing!Thank you so much, Roderick. I really appreciate you taking the time to join us. This was valuable for me, and I'm sure for the audience as well.Roderick Jefferson (28:40)Ring-a-ling—bring it on! Let's dance. Thank you again. This was an absolute honor, and I'm glad we got the chance to reconnect, Kerry.Kerry Curran, RBMA (28:59)For sure. Thank you—you too.Roderick Jefferson (29:01)Take care, all.Thanks for tuning in. If you're struggling with flat or slowing revenue growth, you're not alone. That's why Revenue Boost: A Marketing Podcast brings you expert insights, actionable strategies, and real-world success stories to help you scale faster.If you're serious about growth, search for us in your favorite podcast directory. 
Hit follow or subscribe, and leave a five-star rating—it helps us keep the game-changing content coming.New episodes drop regularly. Don't let your revenue growth strategy fall behind. We'll see you soon!
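Roderick's first use case in the interview above, AI lead scoring, is easy to make concrete. The toy sketch below trains a classifier to rank leads by predicted conversion probability; the features, data, and model choice are invented for illustration and are not from the episode or any particular vendor's product.

```python
# Illustrative lead-scoring sketch: rank leads by predicted conversion
# probability so sellers work the most promising ones first. Synthetic data;
# a real model would be trained on a CRM export.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Feature columns: pages_viewed, demo_requested (0/1), log(company_size)
X_train = np.array([[3, 0, 1.5], [12, 1, 3.2], [1, 0, 0.9],
                    [8, 1, 2.7], [2, 0, 2.0], [15, 1, 3.8]])
y_train = np.array([0, 1, 0, 1, 0, 1])  # 1 = converted to opportunity

model = LogisticRegression().fit(X_train, y_train)

new_leads = {"lead_a": [10, 1, 3.0], "lead_b": [2, 0, 1.2]}
scores = {name: model.predict_proba([feats])[0, 1]
          for name, feats in new_leads.items()}

# Hand sales the queue sorted by conversion likelihood, highest first.
for name, p in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {p:.0%} likely to convert")
```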

Business Excelleration Podcast
Unlocking AI: Importance of Proof of Concept

Business Excelleration Podcast

Play Episode Listen Later May 6, 2025 18:48


On this episode of the “Gen AI Breakthrough” podcast, Kyle McNabb hosts a discussion on the importance of proof of concept (POC) work in testing AI solutions, highlighting its role in feasibility assessment and resource efficiency. Guests Kyle Robichaud and Jay Ruffin emphasize the factors that influence the success of POCs, such as data quality, risk mitigation, and alignment with organizational goals. The episode also addresses the need for continuous education and effective communication to bridge the gap between executives and implementation teams.

The MAD Podcast with Matt Turck
Inside the Mind of Snowflake's CEO: Bold Bets in the AI Arms Race

The MAD Podcast with Matt Turck

Play Episode Listen Later Apr 10, 2025 83:41


In this episode, we sit down with Sridhar Ramaswamy, CEO of Snowflake, for an in-depth conversation about the company's transformation from a cloud analytics platform into a comprehensive AI data cloud. Sridhar shares insights on Snowflake's shift toward open formats like Apache Iceberg and why monetizing storage was, in his view, a strategic misstep.

We also dive into Snowflake's growing AI capabilities, including tools like Cortex Analyst and Cortex Search, and discuss how the company scaled AI deployments at an impressive pace. Sridhar reflects on lessons from his previous startup, Neeva, and offers candid thoughts on the search landscape, the future of BI tools, real-time analytics, and why partnering with OpenAI and Anthropic made more sense than building Snowflake's own foundation models.

Snowflake
Website - https://www.snowflake.com
X/Twitter - https://x.com/snowflakedb

Sridhar Ramaswamy
LinkedIn - https://www.linkedin.com/in/sridhar-ramaswamy
X/Twitter - https://x.com/RamaswmySridhar

FIRSTMARK
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap

Matt Turck (Managing Director)
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck

(00:00) Intro and current market tumult
(02:48) The evolution of Snowflake from IPO to Today
(07:22) Why Snowflake's earliest adopters came from financial services
(15:33) Resistance to change and the philosophical gap between structured data and AI
(17:12) What is the AI Data Cloud?
(23:15) Snowflake's AI agents: Cortex Search and Cortex Analyst
(25:03) How did Sridhar's experience at Google and Neeva shape his product vision?
(29:43) Was Neeva simply ahead of its time?
(38:37) The Epiphany mafia
(40:08) The current state of search and Google's conundrum
(46:45) "There's no AI strategy without a data strategy"
(56:49) Embracing Open Data Formats with Iceberg
(01:01:45) The Modern Data Stack and the future of BI
(01:08:22) The role of real-time data
(01:11:44) Current state of enterprise AI: from PoCs to production
(01:17:54) Building your own models vs. using foundation models
(01:19:47) Deepseek and open source AI
(01:21:17) Snowflake's 1M Minds program
(01:21:51) Snowflake AI Hub

The Tech Trek
Driving B2B AI Innovation

The Tech Trek

Play Episode Listen Later Mar 25, 2025 27:40


In this episode, Zachary Hanif, VP of AI, ML, and Data at Twilio, joins Amir to talk about the engine behind B2B AI innovation. From selecting the right tools to navigating the shift from POCs to production, Zachary offers an insider's look at how enterprises can thoughtfully and effectively integrate AI.

We unpack:
- The danger of "boiling the ocean" with AI
- Why chatbots aren't always the right starting point
- What makes an AI POC actually valuable
- And why UX in the age of AI needs systems thinking

PreSales Podcast by PreSales Collective
Winning Complex Deals: How AI Empowers Sales Teams with Manisha Raisinghani

PreSales Podcast by PreSales Collective

Play Episode Listen Later Mar 24, 2025 37:11


In this episode, Jack Cochran and Matthew James are joined by Manisha Raisinghani, Founder and CEO of SiftHub, to discuss how AI is transforming the Presales landscape. They explore how SiftHub's AI sales engineer helps solutions teams consolidate tribal knowledge, automate repetitive tasks, and increase productivity. Manisha shares insights on leveraging AI for RFPs, POCs, and competitive intelligence, while emphasizing that AI serves as a sidekick to enhance SEs' strategic value rather than replace them.

To join the show live, follow the Presales Collective's LinkedIn page or join the PSC Slack community for updates. The show is bi-weekly on Tuesdays, 8AM PT/11AM ET/4PM GMT.

Follow the Hosts
Connect with Jack Cochran: https://www.linkedin.com/in/jackcochran/
Connect with Matthew James: https://www.linkedin.com/in/matthewyoungjames/
Connect with Manisha Raisinghani: https://www.linkedin.com/in/manisharaisinghani/

Links and Resources Mentioned
Join Presales Collective Slack: https://www.presalescollective.com/slack
SiftHub: https://www.sifthub.io/

Timestamps
00:00 Welcome
03:34 Manisha's journey to founding SiftHub
08:12 SE to AE ratios in different organizations
13:10 The changing role of SEs in relationship building
15:06 Three main buckets of SE work and how AI can help
16:30 The evolution of tribal knowledge
20:40 SaaS proliferation and knowledge fragmentation
26:51 How SEs can leverage AI effectively
31:02 Using AI to analyze POCs and RFPs

Key Topics Covered

The Evolution of Tribal Knowledge
- From undocumented information to knowledge scattered across platforms
- How AI can consolidate knowledge from Slack, Salesforce, call recordings, and more
- Leveraging recorded conversations to preserve context and insights

SE Challenges and AI Solutions
- Managing repetitive questions from sales teams and product managers
- Handling documentation tasks and RFP responses
- Creating custom solutions across different industries and regions

The Changing SE Role
- Shift from technical support to relationship building
- Evolving buyer journeys requiring deeper technical engagement
- Balancing solutioning, question-answering, and demo responsibilities

Measuring AI Impact
- Time savings on RFP responses and repetitive questions
- Using freed-up time for strategic activities and better customer engagement
- Supporting more deals simultaneously with AI assistance

AI as a Teammate
- Using AI to enhance rather than replace SE capabilities
- How AI can work 24/7 across global teams
- The future of visual and diagram-based AI assistance

The Tech Trek
Why AI Adoption Fails & How to Fix It

The Tech Trek

Play Episode Listen Later Mar 17, 2025 23:56


In this episode of The Tech Trek, Amir Bormand sits down with Nirmal Ranganathan, CTO, Global Public Cloud at Rackspace, to dissect one of the hottest and most crucial topics in today's tech landscape—trust in AI applications. They explore how enterprises can drive adoption of AI solutions, what key factors are needed to foster trust, and why guardrails, security, and change management play a pivotal role. Whether you're a developer, tech leader, or AI enthusiast, this episode dives deep into the challenges and opportunities shaping the future of AI adoption.

Key Takeaways
- Trust is the Cornerstone: For AI adoption to succeed, users must trust the output. Trust hinges on data quality, security, responsible use, and model transparency.
- Change Management Matters: Adoption in enterprises isn't about trends—it's about clear processes, education, and user enablement.
- Guardrails Are Non-Negotiable: Especially when AI is exposed to external users, organizations need strong safety checks—think toxicity filters, bias mitigation, and strict data governance.
- Scaling AI = Scaling Costs: Unlike typical systems, scaling AI comes with heavy computational costs. Patterns like caching and model optimization are essential for sustainability.
- Prompt Engineering & Peer Learning: The secret to effective enterprise AI adoption is empowering users to master prompt engineering and fostering peer collaboration.
- Future of Adoption: 2025 might not yet be the year of mass AI production rollout, but the curve is gradually climbing—especially with evolving architectures and better model accuracy.

Timestamped Highlights
[00:00:00] Introduction to Nirmal Ranganathan & the importance of trust in AI
[00:01:34] Why adoption is key—and why most tech projects fail due to lack of it
[00:02:50] Three pillars of successful AI adoption: Trust, Change Management, Functionality
[00:05:02] The trust barrier: Hallucinations, relevance, and grounding AI responses in enterprise knowledge
[00:10:01] Why most AI projects are stuck in POCs—and what's preventing full-scale deployment
[00:11:43] Technical guardrails: Security, scalability challenges, and compliance considerations
[00:14:56] Cost & infrastructure challenges when scaling AI solutions to millions of users
[00:17:52] How tech companies differ from enterprises in deploying AI—data privacy, safety checks, user unpredictability
[00:20:00] The role of prompt engineering, peer learning, and experiential training in ensuring AI adoption success
[00:22:16] What the future holds for AI adoption—and why the heavy lifting might get easier

Featured Quote
"AI adoption compounds all of our existing challenges—and then multiplies them by five or ten times." — Nirmal Ranganathan

Connect with Nirmal
LinkedIn: https://www.linkedin.com/in/rnirmal/

If you enjoyed this episode, please like, share, and subscribe! Don't forget to follow the podcast to stay updated on future episodes.
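One takeaway above, that scaling AI scales costs, pairs naturally with the caching pattern Nirmal mentions. The sketch below caches completions keyed on a normalized prompt so repeat questions skip the model call entirely; call_model is a placeholder rather than a real provider SDK, and the normalization rule is an assumption made for illustration.

```python
# Minimal response-caching sketch for taming inference costs: identical (or
# normalized-identical) prompts are answered from a local cache instead of
# re-invoking the model. Illustrative only; not any vendor's API.
import hashlib

_cache: dict[str, str] = {}

def call_model(prompt: str) -> str:
    # Stand-in for an expensive LLM API call.
    return f"answer({prompt})"

def cached_completion(prompt: str) -> str:
    # Normalize so trivial whitespace/case differences still hit the cache.
    key = hashlib.sha256(" ".join(prompt.lower().split()).encode()).hexdigest()
    if key not in _cache:
        _cache[key] = call_model(prompt)  # pay for the model call only once
    return _cache[key]

print(cached_completion("What is our refund policy?"))
print(cached_completion("what is our  refund policy?"))  # cache hit, no model call
```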

PreSales Podcast by PreSales Collective
From Novice to Expert: Presales Career Paths with Kalyan Ramkumar

PreSales Podcast by PreSales Collective

Play Episode Listen Later Mar 10, 2025 33:05


In this episode, Jack Cochran and Matthew James talk with Kalyan Ramkumar about his journey from novice to expert in the presales field. Kalyan shares his experience starting as an SDR at RSA Security with no technical background and how he worked his way up to become a skilled solutions engineer. He discusses the importance of domain knowledge in security, his framework for effective demos, and strategies for managing POCs and building your internal brand.

To join the show live, follow the Presales Collective's LinkedIn page or join the PSC Slack community for updates. The show is bi-weekly on Tuesdays, 8AM PT/11AM ET/4PM GMT.

Follow the Hosts
Connect with Jack Cochran: https://www.linkedin.com/in/jackcochran/
Connect with Matthew James: https://www.linkedin.com/in/matthewyoungjames/
Connect with Kalyan Ramkumar: https://www.linkedin.com/in/kalyan-ramkumar-679927151/

Links and Resources Mentioned
Join Presales Collective Slack: https://www.presalescollective.com/slack
Presales Collective LinkedIn: https://www.linkedin.com/company/presalescollective
Presales Collective newsletter: https://www.presalescollective.com/newsletter
CompTIA Security+ Certification: https://www.comptia.org/certifications/security
CompTIA Network+ Certification: https://www.comptia.org/certifications/network

Timestamps
00:00 Introduction
03:15 Kal's Background
10:12 First Ever Demo
14:10 When did you know you're an SC
19:40 The Four Do's of Demos
23:35 Challenging the challenger
27:02 Building your personal brand
29:35 POC strategies

Key Topics Covered

Breaking into Presales
- Starting as an SDR and transitioning to SE
- Getting hired without technical background
- Importance of work ethic and eagerness to learn

Building Domain Expertise
- Value of security certifications (Security+ and Network+)
- Moving from scripted to fluid demos
- Building trust with technical customers

Demo Framework: The Four Do's
- Conducting your own discovery
- Framing every click and feature
- Evaluating specific customer KPIs
- Challenging difficult stakeholders

POC and Success Strategies
- Customizing POC length to customer needs
- Collaborating with account executives
- Building internal relationships
- Creating workshop sequences for implementation

30 Minutes to President's Club | No-Nonsense Sales
Land & Expand Deals | Eleanor Dorfman | 30MPC Hall of Fame

30 Minutes to President's Club | No-Nonsense Sales

Play Episode Listen Later Feb 24, 2025 39:00


ACTIONABLE TAKEAWAYS:
- Segmented Team Structure: Down-market teams focus on landing new logos, passing them to expand teams, while up-market AEs handle both acquisition and expansion with retention-based comp.
- Enterprise Sales Strategies: Use top-down (sell wall-to-wall) or land-and-expand approaches, with the latter yielding higher LTV by scaling through business units first.
- Deal Inspection Triggers: Monitor $50K deals at stage 3 for POCs and access to power, and stage 5 for mutual action plans and the paper process.
- Consistent Review Rhythm: Reps update pipelines Monday, managers review Tuesday, deal reviews happen Wednesday, and Eleanor finalizes calls Thursday.

ELEANOR'S PATH TO PRESIDENT'S CLUB:
- Head of Sales @ Retool
- Global Head of Commercial Retention & Regional Director of Commercial Sales @ Segment
- Global Head of Commercial Renewals and Retention @ Segment
- Head of Customer Success and Solutions Engineering @ Clever Inc

RESOURCES DISCUSSED:
- Join our weekly newsletter
- Things you can steal

The Cloudcast
How will AI impact your IT budget?

The Cloudcast

Play Episode Listen Later Feb 16, 2025 21:20


As AI makes its way into the enterprise, how will the experimentation, adoption, and operation of AI affect IT budgets in 2025 and beyond?

SHOW: 898
SHOW TRANSCRIPT: The Cloudcast #898 Transcript
SHOW VIDEO: https://youtube.com/@TheCloudcastNET
CLOUD NEWS OF THE WEEK: http://bit.ly/cloudcast-cnotw
CHECK OUT OUR NEW PODCAST: "CLOUDCAST BASICS"

SHOW NOTES:

WHERE WILL IT BUDGETS MOST LIKELY BE IMPACTED BY AI IN 2025-2026?
- Is your focus on human-augmented capabilities or human-replacement capabilities?
- If you're buying AI as part of SaaS services, will the pricing be seat-based (human-augmented), usage-based (human-replacement), or some combination of both?
- Are you directly paying for AI capabilities, or is AI positioned as "free" while the other services become more expensive?
- Are you expanding your predictive AI projects (and data science teams), or primarily focused on GenAI-related projects?
- Are you buying AI resources from the cloud providers, from AI-specific hosting providers, or buying GPUs for private-cloud usage?
- Do you plan to use standard AI tools, or to train/align your own models with your data?
- Do you have a cost-control capability for your AI projects yet?
- What are your success criteria for POCs, experiments, and trials?
- Are you upskilling existing staff, hiring AI-specific staff, or using external consulting?

FEEDBACK?
Email: show at the cloudcast dot net
Twitter/X: @cloudcastpod
BlueSky: @cloudcastpod.bsky.social
Instagram: @cloudcastpod
TikTok: @cloudcastpod
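As a back-of-the-envelope illustration of the seat-based vs. usage-based pricing question above: the crossover point depends almost entirely on request volume, which is why the human-augmented vs. human-replacement distinction shows up directly in the budget. Every number below is invented for the example, not a real vendor price.

```python
# Toy cost model: at what monthly usage does seat-based AI pricing beat
# usage-based? Prices are made up; plug in your own quotes.
SEAT_PRICE = 30.0    # $/user/month for an AI add-on bundled per seat (assumed)
USAGE_PRICE = 0.002  # $/request on a metered plan (assumed)

def monthly_cost_seat(users: int) -> float:
    return users * SEAT_PRICE

def monthly_cost_usage(requests: int) -> float:
    return requests * USAGE_PRICE

users = 200
for requests in (500_000, 3_000_000, 10_000_000):
    seat, usage = monthly_cost_seat(users), monthly_cost_usage(requests)
    cheaper = "seat" if seat < usage else "usage"
    print(f"{requests:>10,} req/mo: seat ${seat:,.0f} vs usage ${usage:,.0f} -> {cheaper} wins")
```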