www.iotusecase.com | #OEE #EnergyManagement #Shopfloor

In episode 194 of the IoT Use Case Podcast, host Ing. Madeleine Mickeleit speaks with Stefan Köhler, Manager Business Development at PVA, and Thorsten Hardt, Team Lead Technical Service. They are joined by Martin Falsner of Kontron AIS as the IoT implementation partner. The focus is on how industrial equipment manufacturers develop data-driven services, connect machines securely, and make machine data available in a structured way. The three give insights into internal digitalization processes, external customer requirements, edge hardware, OPC UA, and a central customer portal as the digital service foundation.

Episode 194 at a glance (and a click):
[17:45] Challenges, potential, and status quo: what the use case looks like in practice
[26:23] Solutions, offerings, and services: a look at the technologies in use
[33:44] Transferability, scaling, and next steps: how you can apply this use case

Podcast summary
The episode shows how the PVA Group makes its machines from the high-vacuum and heat-treatment field data-capable and develops digital services on top of that. The starting point was the internal challenge of making knowledge centrally available, creating transparency, and standardizing service processes. In parallel, there was the external customer requirement to integrate machine data securely into existing IT and MES systems without exposing sensitive process data. Together with Kontron AIS, an edge device with an OPC UA for Machinery model and a digital customer portal were implemented. The hardware enables secure, standards-compliant provision of machine data. The portal serves as a central platform for master data, documentation, maintenance plans, tickets, and customer communication.
Over time, this is meant to grow into a digital twin including lifecycle data. The modular solution addresses both internal efficiency and scalable customer services, and enables future use cases such as preventive maintenance, automated spare-part diagnostics, or AI-supported recommendations. The episode is particularly relevant for machine builders, industrial operators, service managers, and companies that want to develop their own IoT product strategy. It shows why data access, standardization, platform logic, trust, and iterative implementation are decisive.
-----
Relevant links from this episode:
Madeleine (https://www.linkedin.com/in/madeleine-mickeleit/)
Martin (https://www.linkedin.com/in/martin-falsner-equipmentcloud/)
Stefan (https://www.linkedin.com/in/stefan-k%C3%B6hler-6b1ab6261/)
Thorsten (https://www.linkedin.com/in/thorsten-hardt-7637671b7/)
Equipment Cloud (https://equipmentcloud.de/)
ManagedEdge IoT Bundle (https://www.susietec.com/sites/default/files/downloads/20250506_Kontron_OnePager_ManagedEdge-IoT-Bundle_EN-FIN_0.pdf)
EU Machinery Regulation (https://eur-lex.europa.eu/eli/reg/2023/1230/oj)
Follow IoT Use Case on LinkedIn | Receive the IoT Use Case update once a month
Shawn Tierney meets up with Henrik Pedersen and Jacob Abel to learn about OTee Virtual PLCs in this episode of The Automation Podcast. For any links related to this episode, check out the “Show Notes” located below the video. Watch The Automation Podcast from The Automation Blog: Listen to The Automation Podcast from The Automation Blog: The Automation Podcast, Episode 252 Show Notes: Special thanks to Henrik Pedersen and Jacob Abel for coming on the show, and to OTee for sponsoring this episode so we could release it “ad free!” To learn about the topics discussed in this episode, check out the links below: OTee Virtual PLCs website Schedule an OTee demo Connect with Henrik Pedersen Connect with Jacob Abel Read the transcript on The Automation Blog: (automatically generated)

Shawn Tierney (Host): Thank you for tuning back into The Automation Podcast. Shawn Tierney here from Insights. This week on the show, I meet up with Henrik Pedersen and Jacob Abel to learn all about virtual PLCs from OTee. That's o-t-e-e. I just thought it was very interesting, so if you have ever thought about running virtual PLCs to test some processes out, I think you'll really enjoy this. With that said, I want to welcome Henrik and Jacob to the show for the very first time. Guys, before we jump into your presentation and learn more about what you do, could you first introduce yourselves to our audience?

Henrik Pedersen (OTee): Yeah, sure. My name is Henrik. I am the co-founder and COO of OTee, a new industrial automation company that we're really glad to present here today. I have a background from ABB; I worked eleven years at ABB. In terms of education, I have an engineering degree and a master's degree in industrial economics. I'm excited to be here. Thanks, Shawn. And I'll pass it over to Jake.

Jacob Abel (Edgenaut): I'm Jacob Abel. I'm the principal automation engineer at Edgenaut.
Edgenaut is a systems integrator focusing on edge computing and virtual PLCs. My background is in mechanical engineering, I'm a professional control systems engineer, and I have thirteen years of experience on the machine-building side of industrial automation, specifically in oil and gas, making flow separators. And I'll hand it back to Henrik here.

Henrik Pedersen (OTee): OK, great. So OTee, we are a new industrial automation company, the new kid on the block, if you will. We're a startup; we only started about three years ago. We focus solely on virtual PLCs and the data architectures that allow you to integrate virtual PLCs into operations. Some of the listeners will be very familiar with this first thing I'm going to say, but I think it's valuable to take a little step back and remember what happened in history when it comes to IT and OT, and what really happened with that split. It was probably around the 1990s that the domain of computer science was really split into these two domains, IT and OT. And it was natural that that happened, because on the IT side we got the Internet, we got open protocols, we had personal computers, and innovation could truly flourish. Whereas on the OT side, we were kind of stuck in the proprietary hardware-software lock-in situation. And that has really not been solved; that is still the situation today. This is obviously what got me personally super motivated to solve this problem and really dive deep into it. I experienced it firsthand in my role at ABB: how extremely locked in we are when creating new solutions and new innovation on the OT side.
So we're basically a company that wants to truly open up innovation in this space and make it possible to adopt anything new: new solutions that sit above the PLC and integrate effectively with the controller. I have a slide that illustrates this point with some historical events, or at least some big shifts that have happened. As I mentioned, there was a shift in the 1990s, and it wasn't actually until 2006 that Gartner coined the term OT to explain the difference, what had really happened. As we know, IT has just boomed with innovation since the nineties, and OT is slowly, incrementally getting better, but the innovation pace is really not fast. This is also illustrated, of course, by all the new developments in GenAI, agentic AI, MCP, and things like that booming on the IT side. But we do believe there is actually something happening right now, and we have data to show for it. The large incumbents are now working on this as well: virtual PLCs, software-defined automation, and all kinds of exciting things going on on the OT side. So we do believe we will see a true, big shift on the OT side in terms of innovation, really in the speed at which we can improve and adopt new solutions. This is exemplified by asking: what is the endgame here? You could say the endgame is that IT and OT once again become the same high-paced innovation domain. But then we need to solve those underlying infrastructural problems that are still so persistent on the OT side. The fine point of this slide is just to illustrate what's happening right now.
Cloud solutions for control are actually happening. Virtual PLCs, software-based automation, AI: it's all happening at once, and we see it with the big suppliers and also the exciting startups coming into this space. So I think there's a lot of excitement we can expect from the OT side in the next few years.

Shawn Tierney (Host): Yeah. You know, just for those listening, I want to add a little context here. If we look at 1980, why was that so important? Why is it on the chart? If you think about it, we got networks like Modbus and Data Highway in 1978, '79, '80. We also got Ethernet at that time. So on the plant floor we had fieldbuses for our controls, but in the offices, people were going to Ethernet. And then when we started seeing the birth of the public Internet, in the nineties, people working on the plant floor were like, no, don't let the whole world access my plant floor network. And I think that's where we saw the initial divide: 1980 was a physical divide, just physically different topologies, different needs. Then as the Internet came out in the early nineties, it became, hey, we need to keep ourselves safe; we know there's something called hackers on the Internet. And I think that's why, as you're saying, in 2006, when Gartner coined OT, we were seeing this hesitance to bring the two together, because of the different viewpoints and the different needs of both systems. So I think it's very interesting. I know you listeners can't see this, but I kind of wanted to go back through that and give some context to those early years.
And, like Henrik says, now that we're past all that, now that we're using Ethernet on the plant floor everywhere (almost everywhere, and definitely on all new systems), that becomes the "right now," the "today," on the chart. And I'll turn it back to you, Henrik.

Henrik Pedersen (OTee): Yeah, I'll second that. I just want to echo that there are really good reasons why this happened. You could argue that innovation could flourish on the IT side because there were less critical systems; you can fail fast and test things out at a different level. So there are lots of good reasons why this happened. We do believe that right now there is real excitement around innovation on the OT side, and this pent-up, I wouldn't call it frustration, this pent-up potential, I think that is the right word, can be unleashed in our industry over the next decade. This is really one of the key motivators for me personally. I truly believe there's something truly big going on right now, and I encourage everyone listening: get in on this. This is happening. Be an entrepreneur as well; build your company and create something new and exciting in this space. There hasn't been a better time to create a new technology company or a new service company in this space, so this is something that motivates me personally a lot. Let me move over to what we do. I mentioned that we focus solely on the virtual PLC. For those that are listening, this is now presented in the slide as a box inside open hardware.
We can deploy a virtual PLC on any ARM 32- or 64-bit processor, and on x86 64-bit, with a Linux kernel, so there are lots of great options to choose from on the hardware side. And when you have a virtual PLC, you can think anew in terms of your system architecture. You could, for instance, deploy multiple virtual PLCs on the same hardware, and you can also use a virtual PLC in combination with your existing PLCs, where it could work as a master PLC or some kind of optimization or deterministic controller. It really opens up that architectural aspect: you can rethink your system architecture, and you have a wide range of hardware to choose from. Flexibility is really the key here, flexibility in how you architect your system. The CPU you deploy on will obviously need to be connected to the field somehow, and that's classical remote IO connections. We currently support Modbus TCP and EtherNet/IP, which are deployed to our production environment, as it's called. Moving on to the next slide, this is the summary of our solution. We have built a cloud-native IDE, meaning anyone can basically go to our website, log in to the solution, and give it a spin; we'll show you that afterwards with Jake. The system interacts through a pub/sub data framework. We use a specific technology called NATS for the pub/sub communication bus, and you can add MQTT or OPC UA to the pub/sub framework according to your needs. From there, you can integrate with whatever other software you might have in your system. So we have these value points that we always like to bring up. This obviously breaks a kind of vendor lock-in in terms of the hardware and the software.
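For listeners curious what a "classical remote IO connection" actually exchanges on the wire, here is a minimal sketch of how a Modbus TCP "Read Holding Registers" request frame is assembled. This follows the public Modbus specification and is not OTee code; it just illustrates the kind of traffic a driver handles for you.

```python
import struct

def modbus_read_holding_request(tid: int, unit: int, start_addr: int, count: int) -> bytes:
    """Build a Modbus TCP 'Read Holding Registers' (function 0x03) request.

    The frame is an MBAP header (transaction id, protocol id 0, remaining
    length, unit id) followed by the PDU (function code, starting address,
    register count), all big-endian per the Modbus spec.
    """
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    mbap = struct.pack(">HHHB", tid, 0x0000, len(pdu) + 1, unit)
    return mbap + pdu

frame = modbus_read_holding_request(tid=1, unit=1, start_addr=0, count=10)
print(frame.hex())  # 00010000000601030000000a
```

A real driver, like the one demoed later in this episode, builds frames like this and parses the responses back into tag values so you never have to.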
But also, our virtual PLC is based on IEC 61131, so it's not locked in to any kind of proprietary programming language. There's obviously the cost element: you can potentially save a lot. We have verified with some of our customers that they estimate saving up to 60% in total cost of ownership; one part of that is capex and the other part is opex. And this data framework is in itself future-proof to some extent: you can integrate whatever comes along in a year, or a few years down the line. There's an environmental footprint argument as well, since you can save a lot on the infrastructure side; we have one specific customer that estimates large savings there, and that particular point is really important for them. The final two points are that we have built a zero-trust security principle into this solution, so we have role-based access control, everything is encrypted end to end, automatic certification, and things like that. And the last point is that this is the infrastructure that allows you to bring in AI and the classical DevOps workflow we're very used to on the IT side: you commit, merge, and release, instead of the traditional way of working with your automation systems. I know this is pretty much the boring sales-pitch slide, but I just wanted to throw it out there, because there are some intrinsic values underneath. You will see how the system works very soon in the demo, but basically you just go to a website, you log in, you create a project. In there, you create your PLC program; you code, you test, you simulate. Then you onboard a device.
That is, you onboard the Linux device you want to deploy on. It can be as simple as a Raspberry Pi, or something much more industrial-grade; it depends on the use case. Then you deploy services, as I mentioned, like MQTT and OPC UA, and you manage your system from the interface. I have this nice quote we got to use from one of our customers, a global automotive manufacturer, who highlighted the speed at which you can set this up as one of the biggest values for them, saving them a lot of hours in setting up the system. I also wanted to show you a real deployment. It went live about a year ago at a water and wastewater operator with around 200 pump stations. They had a mix of Rockwell and Schneider PLCs, very high upkeep, and they were losing a lot of data from these stations because they were connected over 4G: when the connection was poor, they lost data in their SCADA systems, so they had data gaps and things like that. A pretty standard legacy setup, to be honest, with quite outdated PLCs as well. What they did for the first pump station was remove the PLC, put in a Raspberry Pi for around €60 or $70, connect it to a remote IO EtherNet/IP module they had in storage, and deploy this data framework, as I'm showing on the screen now. That was the first station they put online. They chose a Raspberry Pi because they thought, OK, this is interesting, but will it work? And they chose a pump station that was really in poor shape to begin with, so they had very little to lose by deploying on this station.
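The "data gaps over 4G" problem Henrik describes is commonly solved with local store-and-forward buffering, which, as he notes shortly after, the data framework provides. As a rough, hypothetical sketch of the idea (not OTee's actual implementation), samples are queued on the device and drained only while the uplink is up, so a flaky link produces delayed data instead of gaps:

```python
from collections import deque

class StoreAndForwardBuffer:
    """Toy store-and-forward buffer: a bad uplink delays data, not loses it."""

    def __init__(self, capacity: int = 10_000):
        # Bounded queue: if the link stays down long enough, the oldest
        # samples are dropped first rather than crashing the device.
        self.pending = deque(maxlen=capacity)
        self.delivered = []  # stands in for the SCADA/historian side

    def record(self, timestamp, tag, value):
        self.pending.append((timestamp, tag, value))

    def flush(self, link_up: bool) -> int:
        """Drain the queue if the link is up; return how many samples went out."""
        if not link_up:
            return 0  # keep buffering; nothing is lost
        sent = 0
        while self.pending:
            self.delivered.append(self.pending.popleft())
            sent += 1
        return sent

buf = StoreAndForwardBuffer()
buf.record(0, "pump1.flow", 12.5)
buf.record(1, "pump1.flow", 12.7)
buf.flush(link_up=False)        # link down: samples stay queued
buf.record(2, "pump1.flow", 12.9)
print(buf.flush(link_up=True))  # link restored: all three arrive, in order (3)
```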
So this has been running for a year now without any problems on a Raspberry Pi. We have obviously advised against using a Raspberry Pi in a critical environment, but they insisted that's what they wanted to do for this first case.

Shawn Tierney (Host): And I'll back that up too. Your generic off-the-shelf Raspberry Pi is just like a generic off-the-shelf computer: it's not rated for these types of environments. Not that all pump houses are really bad, but they're not air-conditioned, and I think we've all had that situation when it's 120 or 130 degrees out where off-the-shelf computer components can act wonky, as well as when they get below freezing. So I just wanted to chime in and agree with you on that. For testing, it's great. But if you were in my town and said you were going to leave that in there permanently, I would ask to have you assigned somewhere else in the town.

Henrik Pedersen (OTee): Yeah, exactly. And that point is also illustrated by the second station they brought online. There they chose a much more industrial-grade CPU that cost a bit more but is better suited for the environment; I can disclose it was a Beijer Electronics CPU. And they reported some good metrics in terms of results: around 50% savings on the hardware, 75% on the management of the PLC system. The latter mostly relates to how much driving out to these stations they were doing to make changes and updates to their systems. They no longer have any data loss; there's a local buffer in the data framework. They've increased tag capacity 15x, resulting in 455x better data resolution and a faster scan frequency. And this is actually on the Raspberry Pi.
So just think of it: even the lowest-quality off-the-shelf IT computers are able to execute really fast, or fast enough, for these cases. Shawn, that was actually what I wanted to say. Also, yes, we are a startup, but we do have users now in 57 different countries across the world, and it's really cool to see our technology being deployed around the world. I'm really excited to get more users in and hear what they think of the solution. With that, Shawn, I don't know if you want to shoot any questions or if we should hand it over to Jake for a demo.

Shawn Tierney (Host): Yeah, just before we go to Jake: if somebody listening is interested, this might be a good time. You already talked about being cloud-based. It's o-t-e-e, so Oscar, Tango, Echo, Echo for the name of the company. Where would they go if they like what Jake's going to show us next? Where would they go to find out more?

Henrik Pedersen (OTee): I would honestly propose that they just reach out to me or Jake via one of the QR codes we have on the presentation. But they can also go to our website, otee.io, and either just log in and test the product, or reach out to us through the contact form.

Shawn Tierney (Host): Perfect. Alright, Jake, I'll turn it over to you.

Jacob Abel (Edgenaut): Thanks, Shawn. Fantastic stuff, Henrik. I want to take a second to emphasize some of the technical points you presented. First, the fact that you have built-in zero-trust cybersecurity is so huge. OT cybersecurity is blowing up right now: so many certifications, lots of consulting and buzz on LinkedIn. I mean, it's a very real concern.
And for good reason. But with this zero trust built into the system, you can completely close up the firewall except for one outgoing port, you have all the virtual PLCs connected together, and it's done. There are no incoming ports to open up on the firewall, no security concern to worry about there. It's basically like you've already set up a VPN server, if you will; it's not the same, but it's similar, and that connection is already taken care of. So there's immense value in that, I think.

Shawn Tierney (Host): And I wanted to add on zero trust. We've covered it on the show, but for people who maybe missed it: with zero trust, you're not trusting anyone; you authorize connections. By default, nobody's laptop or cell phone or tablet can talk to anything. You authorize: hey, I want this SCADA system to talk to this PLC. I want this PLC to talk to this IO. I want this historian to talk to this PLC. Every connection has to be implicitly, I'm sorry, explicitly, enabled and trusted. So by default, an integrator who comes into the plant can't do anything, because in a zero-trust system somebody has to give him and his laptop access, and access to specific things. Maybe he only gets access to the PLC, and that makes sense. Think about it: who knows where his laptop has been? We've heard about people plugging into USB ports at the airport and getting viruses. So it's important that a person's device, or a SCADA system or a historian, only has access to exactly what it needs access to. Just like you don't let the secretary walk onto the plant floor and start running the machine, right? So it's an important concept; we've covered it a lot. And Jake, I really appreciate you bringing that up, because zero trust is so huge, and I think it's huge for OT to have it built into the system.

Henrik Pedersen (OTee): Yeah. Absolutely.
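The deny-by-default model Shawn describes can be sketched in a few lines. The identities below (`scada`, `plc1`, and so on) are made up for illustration, and a real system would key on authenticated certificates or accounts, not bare strings; this shows only the idea, not OTee's actual policy engine:

```python
# Hypothetical allow-list: every permitted connection is an explicit pair.
ALLOWED = {
    ("scada", "plc1"),        # SCADA may poll this PLC
    ("historian", "plc1"),    # the historian may log from it
    ("plc1", "remote_io_3"),  # the PLC may reach its own IO
}

def may_connect(src: str, dst: str) -> bool:
    """Zero trust: deny by default, allow only explicitly enabled pairs."""
    return (src, dst) in ALLOWED

print(may_connect("scada", "plc1"))              # True: explicitly enabled
print(may_connect("integrator_laptop", "plc1"))  # False: never authorized
```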
Jacob Abel (Edgenaut): I also wanted to highlight that Henrik mentioned the backbone of the system runs on a technology called NATS; that's spelled N-A-T-S. Why that's important is that it's a lightweight messaging service designed to send millions of messages per second. Compare that with probably the best Modbus TCP device you can find, where you might get a couple hundred messages through per second. Millions of messages per second. Especially now that we're dealing with AI, machine learning, and training models, we're data hungry, right? So this gives you the backbone: it can push an immense amount of tag data with ease. I think that's another really important point. With that, though, I'll get on to the demo.

Henrik Pedersen (OTee): That's great. We do see, Jake, that most of our customers report 400 to 700x better data resolution, so it's a step change for data resolution there.

Jacob Abel (Edgenaut): Excellent. So one of the things I personally love about OTee is how quickly you can get into the PLC once everything's set up. This is OTee's website. Once you're here, you just go to Log In, and that brings up the login screen. I'm using my Google account for single sign-on, so I can just click Continue with Google, and that brings me into the main interface. Another thing I love is that it's very simple and straightforward, and simple is not a bad thing; simple is a good thing. The way things should be is that it's easy, and the finer details are taken care of for you. Right here, we have our main project list. I just have this one benchmarking program that I've imported.
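For readers unfamiliar with publish/subscribe, here is a toy, in-process version of the pattern. Real NATS is a standalone server with subject-based addressing, queue groups, and persistence options; this sketch only shows the decoupling idea that makes such a bus a good data backbone:

```python
from collections import defaultdict

class MiniBus:
    """Toy pub/sub bus: publishers and subscribers only share a subject name."""

    def __init__(self):
        self.subscribers = defaultdict(list)  # subject -> list of callbacks

    def subscribe(self, subject, callback):
        self.subscribers[subject].append(callback)

    def publish(self, subject, payload):
        # Fan out to every subscriber of this subject; unknown subjects
        # simply have no listeners and the message is dropped.
        for cb in self.subscribers[subject]:
            cb(payload)

bus = MiniBus()
seen = []
bus.subscribe("plant.pump1.flow", seen.append)
bus.publish("plant.pump1.flow", 12.5)  # delivered to our subscriber
bus.publish("plant.pump2.flow", 9.9)   # no subscriber; dropped
print(seen)  # [12.5]
```

The payoff is that a SCADA system, a historian, and an ML pipeline can all subscribe to the same tag subjects without the PLC runtime knowing any of them exist.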
And you also have the device list, with just a test device that I've installed the runtime on. Real quick: you have Martha, the AI assistant, in the corner here, and the documentation and guides are up here, so you can get help or look into reference material very easily; it's all right there for you. So I'm going to open up this program. Just a quick tour: right up here in the top left is basically where most everything's done. If you click on this little down arrow, you can choose which virtual PLC runtime to attach to; I've already attached it to the device I installed the runtime on. You can quickly add a new program, driver, function blocks, or custom data types, compile your program, download it to the device, and check the release history, which is really, really great: you can go into the release history and revert to a prior version very easily. We've got built-in version control, which is another great feature.

Henrik Pedersen (OTee): I can also comment on that, Jake: we have Git integration on the quite short-term roadmap, which a lot of our customers are asking for.

Jacob Abel (Edgenaut): Awesome. Yeah, that's another very hot topic right now, getting revision control systems in as part of, at least, the textual programming languages. So, we have a few housekeeping things here: you can delete the program or export it. A good point here is that OTee complies with the PLCopen XML specification, so you can import or export programs in this XML format, and it should work with a solid majority of other automation software out there. If you want to transition over to OTee, you can export from your other software and import it rather easily.
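To make the PLCopen XML point concrete, here is a hedged sketch of pulling program and variable names out of an XML project file. The fragment below is simplified and hypothetical; the real PLCopen TC6 schema is much richer and lives under an XML namespace (http://www.plcopen.org/xml/tc6_0200), so actual element names and paths will differ:

```python
import xml.etree.ElementTree as ET

# Illustrative fragment only; real PLCopen exports are far more detailed.
doc = """<project>
  <pou name="Benchmark" pouType="program">
    <variable name="counter" type="INT"/>
    <variable name="jitter_us" type="REAL"/>
  </pou>
</project>"""

root = ET.fromstring(doc)
for pou in root.iter("pou"):
    # Collect the declared variables for each program organization unit (POU).
    names = [v.get("name") for v in pou.iter("variable")]
    print(pou.get("name"), names)
```

The value of a vendor-neutral interchange format like this is exactly what Jacob describes: the same project can be exported from one tool and imported into another.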
You've got your program list here and the basic configuration: you can add global variables that you want to share between the different programs and POUs, change the cycle rate of the periodic tasks, or add more tasks. Let's jump into this program. The system uses IEC 61131-3 standard structured text. Here's just a quick little benchmark program I've been using to do some performance testing. You have the code right here, obviously, and on our right, the variable list; it's very easy to add a new variable and pick the type. You can set a default value and add some notes to it. Super easy. So let's go online. If you hit these little glasses up here in the top right, it displays live tag values: it's grabbing from the running runtime and plopping the values right into the editor, and I love the way it's displayed. One of the question marks, if you're doing structured text instead of ladder logic, is how it's going to show up and how readable it's going to be. I think the text, the color contrast here, helps a lot; it's very readable and intuitive. We also have the tag browser on the right-hand side; everything is organized into different groups, with the resources and instances you've set up in the configuration tab, so by default the tags are all listed under there. And here, too, you can set tag values. I'm doing some performance testing, as I said, so this is recording some jitter and task-time metrics. And that's really it. That's the cloud IDE in a nutshell: super easy, very intuitive. There's basically zero learning curve here.

Shawn Tierney (Host): For the audio audience, just a little comment here. First of all, structured text to me seems to be the most compatible between all PLCs.
Everybody does ladder a little bit differently; everybody does function blocks a little bit differently. But structured text, and again, I could be wrong, and if you're listening out there and think I'm wrong about that, let me know: when I've compared structured text between multiple different vendors, it always seems to be the closest from vendor to vendor. So I can see this makes a great place to start for OTee, to have a virtual PLC that supports it, because you're going to be able to import from or export to, maybe, your physical PLCs. The other thing I wanted to comment on is what we're seeing here. Many of you who are familiar with structured text know you may have an IF-THEN-ELSE, or an IF-THEN, and you may have something like tag X equals some kind of calculation, maybe Z times Y, or just a constant. What we're seeing here is that as it's running, they've inserted, in a different color, the actual value of, say, tag X. Right next to tag X, we see the actual value changing and updating a few times a second. That makes it very easy to monitor the program while it's running and see how everything's working, and I know that's huge. A lot of vendors do this as well, but I love the integration here, how easy it is to see the current values for each of these variables. And I'll turn it over to you, Henrik; I think I interrupted you. Go ahead.

Henrik Pedersen (OTee): Yeah, no, I was just going to comment on that.
As Jake said, this is the PLC editor, and the next big feature we're releasing very soon is the service manager, the feature that will let our users deploy any kind of service very efficiently: another runtime, an OPC UA server, or whatever other software components you want to deploy. We're really excited about that, because it will allow a step change in how you orchestrate and manage your system: you get a very good overview of what's going on with the versions of the different software components running in your infrastructure and on your devices. So we're really excited that it's coming out. It might be that when this episode airs, who knows, it's done or not, but we're very close to releasing the first version.

Shawn Tierney (Host): Now I have a question for you guys, and maybe this is a little off topic. Let's say I'm up here in the cloud working on a program, and I have some IO on my desk I want to connect it to. Is that something I can do? Is there a connector I can download and install on my PC to allow the cloud to talk to my IO? Or is that something where I have to get a local box, like those industrial Linux boxes we talked about, and test it with that?

Henrik Pedersen (OTee): Yeah. So I think what you're after is the IO configuration: deploying a driver, like a Modbus driver, and how you configure the system, right?

Shawn Tierney (Host): Yeah. Because this is in the cloud; it's not on my desk. The IO is on my desk. So how would I connect the two of them? Is that something that can be done?

Henrik Pedersen (OTee): Yep.
Exactly. Jake, you might just wanna show how you deploy a driver. Right? Jacob Abel (Edgenaut): Sure. And I just wanna take a second to clarify something that comes up often and doesn't get cleared up enough: we have this cloud IDE here, so you can open it from anywhere in the world, but the virtual PLC runtimes get installed on computers locally, on the machine, on the factory floor, something like that. I've got an edge computer right here as an example; this is something you would just pop into the control panel, and you can install OTee on it. So to answer your question better, Shawn: to get to the remote I/O you need (or, in the case of this box, it has onboard I/O), you're looking at connecting with Modbus TCP or EtherNet/IP, and I know a lot more protocols are coming, like PROFINET. The way you'd do that is with the plus sign up here: add a driver config. We'll just do Modbus real quick. Henrik Pedersen (OTee): Mhmm. Jacob Abel (Edgenaut): And we wanna add a TCP client. You can name the client, tell it how fast to poll, set any delays, put in the IP address, set the port number if you need to, and then add your requests. You have support for all the main Modbus function codes right here: read holding, read input, write multiple coils, all that good stuff. You tell it the address, how many registers you want, timeouts, slave ID. Once you've done that, let's say I'm gonna read holding registers here, the table on the right auto-updates. You can set aliases for each one of these; you can just call it register one.
As an example. Shawn Tierney (Host): Just for the audio audience: it's showing the absolute address for all these Modbus variables, with a default symbol name of symbol-dash-something, and he's putting in his own symbol name, like register one, which makes it easier. Jacob Abel (Edgenaut): Good point. Thanks, Shawn. So once you've put in your request and thrown in some aliases for the different registers, you can go back to your program. Here's the sample variable I added earlier. The registers are 16 bits, so I'm gonna select an int. And what you can do now is select those Modbus requests you just set up, and it automatically maps them to those variables for you. That way you don't have to do anything manual, like a separate program that says this tag equals register 40001. It's already mapped for you. So that's essentially how you connect to remote I/O: add a client in the driver configs, fill in your info, and you're off and running. Shawn Tierney (Host): That's excellent. I really liked how easily you mapped the register you're reading or writing to your variable so you can use it in your program. That was very easy to do. Jacob Abel (Edgenaut): Oh, yeah. Like I said, that's one of the things I love about this interface: everything is very straightforward. It's super easy to stumble upon whatever it is you need and figure it out. Henrik Pedersen (OTee): And just to add to the process: once you've created that connection between the I/O and the program, you basically just compile it and download it to the runtime again, and it executes locally.
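For listeners curious what a driver config like that is doing under the hood, a Modbus TCP "read holding registers" request is just a small binary frame: an MBAP header followed by a protocol data unit. The sketch below is a generic, minimal Python illustration of that framing, not OTee's implementation; the transaction IDs, unit IDs, and addresses are made up for the example.

```python
import struct

def build_read_holding(transaction_id: int, unit_id: int,
                       start_addr: int, count: int) -> bytes:
    """Build a Modbus TCP 'Read Holding Registers' (function code 3) request."""
    # PDU: function code (1 byte), starting address (2), register count (2)
    pdu = struct.pack(">BHH", 3, start_addr, count)
    # MBAP header: transaction ID, protocol ID (0), length (PDU + unit ID), unit ID
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

def parse_read_holding_response(frame: bytes) -> list[int]:
    """Extract the 16-bit register values from a response frame."""
    byte_count = frame[8]               # number of data bytes that follow
    data = frame[9:9 + byte_count]
    return [v for (v,) in struct.iter_unpack(">H", data)]
```

A tool like the one in the demo builds and parses frames like these for every request in the table, which is exactly the tedium the alias-and-auto-map workflow hides from you.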
Jacob Abel (Edgenaut): Oh, right. Good point. Of course, after we add something, we do have to redownload. Shawn Tierney (Host): Very interesting. Well, that answers my question. Jacob Abel (Edgenaut): I think that's about it for the demo, unless, Shawn, you have any more questions about the interface. Shawn Tierney (Host): No, it looked pretty straightforward to me. Henrik, did you have anything else you wanted to discuss while we have the demo up? Henrik Pedersen (OTee): Nothing related to this, except that this is probably something quite new in the OT space: this is a software service, meaning there's continuous development going on, with releases and improvements to the software all the time. Literally every week we deploy new improvements. What I typically say is that if you sign up with OTee, what you'll experience is that the software keeps getting better over time rather than becoming outdated. And that's part of what I really love about the innovation happening around IT: that has become the de facto standard for how you develop great software, and in the OT space we need to adopt that same methodology of developing software that continuously becomes better over time. Shawn Tierney (Host): Yeah. And I would just say, if you're on the OT side of things, you wanna be in the IEC 61131-3 languages, because these are things your staff, your electricians and technicians and even engineers, should know and should be getting up to speed on. At The Automation School we're teaching structured text, and it's easier.
I look at this, and it's a lot easier than trying to learn C++ or JavaScript. In any case, if it's OT-side real I/O control, a real control system, or mission-critical data collection, I'd rather have this than somebody who doesn't understand the OT side writing custom code for me in some general-purpose language. So I can definitely see the advantage of your system, Henrik. Henrik Pedersen (OTee): Yep. I also wanted to say that I do not believe the IEC standards in general will disappear. They exist for a very good reason: to standardize, and to ensure safety and determinism. So I don't think they will disappear. There are obviously advances now with AI that can help us create these things much faster and more efficiently, but the IEC standards, I think, will be there for a very long time. Obviously the IEC 61499 standard is really exciting, and we believe it can clearly have a place, but it's still a new IEC standard. Shawn Tierney (Host): I think what we're gonna see is a lot more libraries fleshed out and a lot less writing from scratch. On the History of Automation podcast we've interviewed some big integrators, and they're at a point now, twenty or thirty years on, where they have libraries for everything. I think that's where we'll see things go, much like the direction the DCS vendors went years ago. But I still think there's a reason for these languages: a reason to be able to edit things while they run, and a reason to have different languages for different applications and for the different people maintaining them. So I agree with you on that.
I don't think we're gonna see the end of these standard languages that have served us very well since the 1970s. Jacob Abel (Edgenaut): I just wanna add a bit there, Shawn. You mentioned doing less code. I did show earlier, in the bottom right-hand corner here, our little AI assistant, Martha. I don't believe the feature has been released yet (Henrik, correct me if I'm wrong), but I know one of the things coming is AI code generation, similar to Claude or ChatGPT. Right now I think it's just for help topics, but you'll be able to talk to Martha and she'll generate code for you right in your program, all built in. Henrik Pedersen (OTee): Yeah, that's coming really fast now. It hasn't been implemented yet, but it's right around the corner. Shawn Tierney (Host): Yeah. You're not gonna be able to hook a camera up to it, take pictures of your machine, and say, okay, write the control code for this. But if you had a process with 12 steps in it, the AI could definitely help you generate that code. We'll have to have Henrik and Jake back on to talk about that when it comes out, but it's gonna reduce the tedious part of the coding. If you need an array of so many tags and so many dimensions, the typing-intensive stuff, it's gonna be able to help you with that, and then you can put the context in yourself, just like you can pull up a template in Word for a letter and fill in the blanks. And of course AI is helping make that easier too. In any case, Henrik, maybe you can come back on when that feature launches.
Henrik Pedersen (OTee): Yeah, absolutely. And I'm also excited about a simple use case like translation: taking your existing code, say proprietary code, getting it standardized and translated to the IEC 61131 standard, for instance. AI is obviously perfect for this space, there's no doubt, and that's also why I'm so excited about what's going on at the moment. There's so much innovation potential on the OT side now with all these new technologies. Shawn Tierney (Host): Yeah, absolutely. Well, gentlemen, was there anything else you wanted to cover? Henrik Pedersen (OTee): Just one final thing from me. We thought a lot about this before the episode, and we wanted to offer the listeners something of true value. So after this episode launches, we want to offer anyone out there listening a free, completely hands-on trial of our technology, in their environment or on their Raspberry Pi or whatever. Just reach out to us if you wanna do that, and we'll get you set up for testing it; it's not gonna cost you anything. Shawn Tierney (Host): Well, that's great. And guys, if you do take advantage of that free trial, please let me know what you thought about it. Henrik, thank you so much for that offer to our listening audience. Guys, don't be bashful: reach out to him, reach out to Jake. Jake, thank you for doing the demo as well. Really appreciate it. My pleasure. Any final words, Henrik, before we close out? Henrik Pedersen (OTee): No, it's been great being here, Shawn, and thanks for helping us. Shawn Tierney (Host): Well, I hope you enjoyed that episode.
I wanna thank Henrik and Jacob for coming on the show, telling us all about OTee's virtual PLCs, and giving us a demo. I thought it was really cool. Now, if any of you take them up on their free trial, please let me know what you think; I'd love to hear from you. With that, I do wanna thank OTee for sponsoring this episode so we could release it completely ad free. And I also wanna thank you for tuning back in this week. We have another podcast coming out next week. It'll be early, because I'll be traveling and doing an event with a vendor, so expect it to come out on Monday instead of Wednesday if all goes as planned. Then we'll be skipping Thanksgiving week, we'll be back in December, and we already have shows lined up for the new year as well. So thank you for being a listener and a viewer, and wherever you're consuming the show, whether it's on YouTube, on The Automation Blog, or on iTunes, Spotify, Google Podcasts, or anywhere else, please give us a thumbs up, a like, or a five-star review, because that really helps us expand our audience and find new vendors to come on the show. And with that, I'm gonna end by wishing you good health and happiness. Until next time, my friends, peace. ✌️
Peter talks to Lars Nagel, CEO of the International Data Spaces Association, about OPC UA and IDSA.
Shawn Tierney meets up with Connor Mason of Software Toolbox to learn about their company and products, and to see a demo of those products in action, in this episode of The Automation Podcast. For any links related to this episode, check out the "Show Notes" located below the video. Watch The Automation Podcast from The Automation Blog: Listen to The Automation Podcast from The Automation Blog: The Automation Podcast, Episode 248 Show Notes: Special thanks to Software Toolbox for sponsoring this episode so we could release it "ad free!" To learn about Software Toolbox, please check out the links below: TOP Server, Cogent DataHub, Industries, Case Studies, Technical Blogs. Read the transcript on The Automation Blog: (automatically generated) Shawn Tierney (Host): Welcome back to The Automation Podcast. My name is Shawn Tierney with Insights In Automation, and I wanna thank you for tuning back in this week. This week on the show, I meet up with Connor Mason from Software Toolbox, who gives us an overview of their product suite and then a demo at the end. Even if you're only listening, I think you're gonna find the demo interesting, because Connor does a great job of talking through what he's doing on the screen. With that said, let's jump into this week's episode with Connor Mason from Software Toolbox. I wanna welcome Connor from Software Toolbox to the show. Connor, it's really exciting to have you. It was a lot of fun talking to your team as we prepared for this, and I'm really looking forward to it, because I know your company has built so many great solutions over the years. I really wanna thank you for coming on the show. Before you jump into talking about products and technologies, could you first tell us a little bit about yourself? Connor Mason (Guest): Absolutely. Thanks, Shawn, for having us on. Definitely a pleasure. So my name is Connor Mason.
Again, I'm with Software Toolbox. We've been around for quite a while, so we'll get into some of that history before we get into all the fun technical things. I've worked on a wide variety of OT and IT projects at this point. I came up through our support side, which is definitely where we grow a lot of our technical skills; it's a big portion of our company, and we'll get into that a little more. I'm currently a technical application consultant lead, so I help run our support team and help with large solutions-based projects and consultations to find what's best for you guys out there. There's a lot in our industry that's new, exciting, and fast-paced, and it definitely keeps me busy. My background was actually in data analytics; I did not come up through engineering or automation training at all. So this was a whole new world for me about five years ago, and I've learned a lot and really enjoyed it. I really appreciate you having us on here, Shawn. Shawn Tierney (Host): Well, I appreciate you coming on, and I'm looking forward to what you're gonna show us today. The audience should know I had a little preview of what they're gonna show, so I'm looking forward to it. Connor Mason (Guest): Awesome. Well, let's jump right into it. So here at Software Toolbox we have this ongoing logo and word map of "connect everything," and that's really where we live. Some people have called us data plumbers in the past: you have something, maybe legacy or something new, and you need to get it into another system. How do you connect all those different points? Throughout all the projects we've worked on, there's always something unique.
We try to work in between those unique areas, in between all these different integrations, and be someone people can come to as an expert, have those high-level discussions with, and find something that works for them as a cost-effective solution. So beyond the products we offer, we also have a lot of knowledge of the industry, and we wanna share that. You'll see some product names along here you might recognize. Our TOP Server and OmniServer, which we'll be talking about, have been around in the industry for decades at this point. And our Symbol Factory may be something you've heard of in other products, which actually use it themselves for HMI and SCADA graphics; that is our product. So you may have interacted with us without even knowing it, and I hope we get to talk more about the things we do. Before we jump into the fun technical things, I also want to talk about the overall Software Toolbox experience, as we call it. We're more than just someone that wants to sell you a product; we really work with the idea of solutions. How do we provide you value and solve the problems you're facing as the person actually working out in the field, on those operation lines, making things? Our big priority is providing a high level of knowledge, a variety of things we can work with, and support. That's very dear to me, having come up through the support team and still working in it day to day, and it's something that's been ingrained in our heritage. Next year, 2026, will be thirty years of Software Toolbox; we were established in 1996. Through those thirty years we have committed to supporting the people we work with, and I can tell you that that motto lives in everyone here.
From that, over 97% of the customers we interact with through support say they had an awesome or great experience. Having someone you can call who understands the products you're working with, the environment you're working in, and the priority of certain things matters; if you ever have a plant shut down, we know how stressful that is. Those are the things we work through and help people with. So those really are the core pillars of Software Toolbox and who we are, beyond the products, and I think it's something unique that we've continued to grow and stand upon for those thirty years. Jumping into some of the industry challenges we've been seeing over the past few years: this is a fun one for me, tying it back to data analytics. In my prior life and education I worked with tons of data, and I never fully knew where it came from, why it was such a mess, or who structured it that way, but it was my job to get insights out of it. Knowing what the data actually is and why it matters is a big part of getting value. If you have dirty data, data that's just clustered in silos, very often you're not gonna get much value out of it. This was a study we found from Gartner Research in 2024. Businesses were asked what the top strategic priorities for their data analytics functions were in 2024, and almost 50%, right at 49%, said improving data quality was a strategic priority. About half the industry is talking about data quality, and it's exactly because of the reasons that gave me a headache in my prior life: looking at all these different things and not knowing where they came from or why they were so different.
And the person who made it may be gone, and you may not have the context. Getting from the people who implemented things to the people making decisions is sometimes a very big task. If we can create a better pipeline of data quality at the beginning, it makes those people's lives a lot easier up front and lets them get value out of that data a lot quicker, and that's what businesses need. Shawn Tierney (Host): You know, I wanna stay on data quality. Right? I think when a lot of us hear that, we think of error detection, lost connections, just garbage data coming through. But from an analytical side there's a different view on it, in line with what you were just saying. So when you're talking to somebody about data quality, how do you get them to shift gears and focus on what you're talking about, and not just on a quality connection to the device itself? Connor Mason (Guest): Absolutely. I kinda live in both those worlds now. I get to see that connection state, and when you're operating in real time, that quality is very important to you. If you know what's going on in the operation and where things are running, that's the quality you're looking for. But you have to think beyond real time. We're talking about historical data, data that's been stored for months and years. Think about the quality of that data once it's made it up to that level. Are they gonna understand what was happening around those periods? Are they gonna understand what those tags even are? Are they gonna understand the conventions you've implemented to give them insight into this operation? Is that a clear picture? So, yeah, you're absolutely right.
There are two levels to this: real-time data and historical. We'll get into some of that in our demo as well. It's a big area for the business and for the people working in operations. Shawn Tierney (Host): Yeah, and think about quality too. You may have data that's good data: it was collected correctly, you had a good connection to the device, you got it as often as you wanted. But that data could still be useless; it could tell you nothing. Connor Mason (Guest): Right. Exactly. Shawn Tierney (Host): Right? It could be a flow rate on a part of the process that's irrelevant to monitoring the actual production of the product, or whatever you're making. I've known a lot of people who filled up their databases and historians by just logging everything, and a lot of that data was what I'd call low quality because it has low information value. Right? I'm sure you run into that too. Connor Mason (Guest): Yeah, we run into a lot of people who say, I've got x amount of data points in my historian and I wanna do something with it, or I wanna migrate. And then we start digging in: well, what do you wanna achieve at the end of this? It's great that you have all these things historized, but are you using them? Do you have the right things historized? Are they even set up to be worked with, once they're historized, by someone outside this landscape? I think OT plays such a big role in this, and that's why we're starting to see the convergence of the IT and OT teams: that communication needs to occur sooner, so we're not just passing along low-quality or bad-quality data. We'll get into some of that later on.
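One simple way to picture the two levels of quality discussed here is to carry an explicit quality flag alongside every sample and filter on it before any analysis or historization. The sketch below is purely illustrative Python, not a Software Toolbox or OPC API; the tag names are invented, and the quality values are a simplification of the Good/Uncertain/Bad masks used in OPC-style quality words.

```python
from dataclasses import dataclass

# Simplified OPC-style quality levels (in real OPC DA these are the top
# bits of a 16-bit quality word: Good=0xC0, Uncertain=0x40, Bad=0x00).
GOOD, UNCERTAIN, BAD = 0xC0, 0x40, 0x00

@dataclass
class Sample:
    tag: str        # illustrative tag name
    value: float
    quality: int    # simplified quality flag

def usable(samples, min_quality=GOOD):
    """Keep only samples whose quality meets the threshold before analysis."""
    return [s for s in samples if s.quality >= min_quality]
```

The point of the sketch is the second level of quality Connor describes: even if every sample here passes the connection-level filter, the analyst downstream still needs the tag conventions and the context around each period to get any value out of it.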
To jump into some of our products and solutions, I wanna give an overview of the automation pyramid. We work up from field-device communications: sensors, meters, and actuators along the actual lines, wherever you're working. We work across all the industries, so this can vary. From there you work up to the control area, where a lot of control engineers work. I think a lot of the audience is very familiar with PLCs here: your typical names, Siemens, Rockwell, Schneider, creating hardware products that interact with things at the operations level and generate data. That communication level, getting data from there, was our bread and butter for a very long time and still is, but now we're also getting it further up the stack into the supervisory and MES layers, and it's opening up to ERP as well. We have a lot of large corporations with data across a variety of solutions that also want to integrate directly down into their operations levels. There's a lot of value in doing that, but there are also a lot of watch-outs and security concerns, so that'll be a topic we get into. We also all know the cloud is here; it's been here, and these cloud providers are gonna continue to push their way into OT as well. There's a lot of benefit to it, but also some watch-outs as the landscape we've been used to changes. A lot of the time we wanna get data out there: there's value in AI agents, a hot commodity right now, and in analytics as well. How do we get those things directly from the shop floor up into the cloud, and how do we do it securely? These are things we've been working on; we've had successful projects, it continues to be an area of interest, and I don't see it slowing down at all.
Now, when we begin at the bottom level of connectivity, people mostly know us for our TOP Server. This is our platform for industrial device connectivity; it's the thing talking to all the different PLCs in your plant, whether that's brownfield or greenfield. We pretty much know there's never gonna be a plant with a single PLC manufacturer. There's always gonna be something slightly different, especially in brownfield: different engineers made different choices, things have been inherited, and you've gotta keep running them. TOP Server provides a single platform to connect to a long laundry list of different PLCs. And if this sounds very familiar to KEPServerEX, you're not wrong: TOP Server is the exact same technology. What's the difference, then, is probably the biggest question we get. Technology-wise, nothing: it's the same product, same product releases, same price. The difference is in the back end: we have been the biggest single source of KEPServerEX, as TOP Server, in the market for over two decades at this point. We offer this own-labeled version to our customers, who interact with our support and solutions teams, and we sell it alongside the rest of our stack because it fits so well. We've been doing this since the early two-thousands, when Kepware was a much smaller company than it is now, and we've had a really great relationship with them. So if you've enjoyed the technology of KEPServerEX, and you've ever heard of TOP Server and been unclear on the relationship, I hope this clarifies it. It's a great technology stack that we build upon, and we'll get into some of that in our demo.
Now, the other question is: what if you don't have a standard communication protocol, like Modbus or an Allen-Bradley PLC? We see this a lot in testing areas, pharmaceuticals, and packaging: barcode scanners, weigh scales, inline printers. They may have some form of basic communications over just TCP or serial. How do you get that information, which is still really valuable, when it's not going through a PLC and not going into your typical HMI or SCADA? For a lot of these test systems, collecting and analyzing the data can be a very manual process. Well, you may have heard of our OmniServer as well. It's been around, like I said, for a couple of decades, a proven solution: without coding, you can build a custom protocol that expects a format from that device, translates it, and puts it into standard tags, and those tags become accessible through the open standards of OPC, or through AVEVA (Wonderware) SuiteLink. That provides a nice combination of your standard communications and these more custom communications that may have been done through scripting in the past. Put them onto an actual server that communicates through those protocols natively, and get that data into the SCADA systems and HMIs where you need it. Shawn Tierney (Host): You know, I used that many years ago. An integrator came to me, back in the RSView32 days, and said, Shawn, I've got about 20 Eurotherm devices on an RS-485 network, they speak ASCII, and I've gotta get them into RSView32. With OmniServer, I loved that you could basically develop your own protocol (and we did Omega and some other devices too). It's beautiful. And when you're testing it, it color-codes everything, so you know: hey, that part worked, the header worked.
The data worked; or the trailer didn't work, or the terminator didn't work, or the data's not in the right format. It was a joy to work with back then, and I can imagine it's only gotten better since. Connor Mason (Guest): Yeah, I think it's like a little engineer's playground where you get in there and start decoding and seeing how these devices communicate. And once you've got it running, it just performs; it's saved many people from developing custom code and then having to manage that custom code and its integrations for years. It's one of those tried-and-tested things, still a staple of our base-level communications. Alright, so moving along our automation pyramid: another big part of our offering is the Cogent DataHub. Some people may have heard of this as well; it's been around for a good while and been part of our portfolio for a while too. This builds on from the communications level up to the higher echelons of the pyramid. It brings in a lot of different connectivity. If you're listening, picture a hub-and-spoke concept for real-time data; we also have historical implementations. You can connect through a variety of things: OPC, including the profiles for alarms and events, and even OPC UA Alarms and Conditions, which is still gaining adoption across the industry but is growing as part of the OPC UA standard. We have integrations to MQTT: it can be its own MQTT broker, and it can also be an MQTT client. That has grown a lot. It's one of those things that lives beside OPC UA, not exactly a replacement; if you ever have questions about that, it's definitely a topic I love to talk about.
There’s space for this to combine the benefits of both, and it’s so versatile and flexible for these different types of implementations. On top of that, it’s a really strong tool for conversion and aggregation. Like its name says, it’s a data hub. You send all the different information to it, and it stores it into a hierarchy with a variety of different modeling that you can do within it. That stores the values in a standard data format. Once I have data in it, through any of those different connections, I can then send data back out. So if I have anything coming in through a certain plug-in like OPC, I can bring that in and send it out on the other ones, say OPC DA over to MQTT. It could even do DDE if I’m still using that, which I probably wouldn’t suggest. But overall, there’s a lot of benefit to having something that can also be a standardization point between all your different connections. If I have a lot of different things, maybe a variety of OPC servers, legacy or newer, I can bring those into a data hub, and then all my other connections, my historians, my MES, my SCADAs, can connect to that single point. So they’re all getting the same data model and values from a single source rather than going out and making many-to-many connections. A large thing it was originally used for was getting around DCOM. That word might send some shivers down people’s spines still, to this day. DCOM is not a fun thing to deal with, and with security hardening, it’s just not something you really want to do. I’m sure a lot of security professionals would advise against ever doing it. The tunneling allows you to have a data hub that locally talks to any DA server or client, then communicate between two data hubs over a tunnel that pushes the data just over TCP. It takes away all the COM wrappers, and now you just have values that get streamed in between.
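The hub-and-spoke aggregation just described, every source writing into one standard tag model, every consumer fanning out from that single model, can be sketched as a toy observer pattern. The class and tag names are illustrative, not the Cogent DataHub’s actual API:

```python
# Toy sketch of the hub-and-spoke idea: sources write into one standard
# tag model; consumers (an OPC UA server, MQTT broker, historian, ...)
# subscribe and are fanned out to from that single source of truth.

from collections import defaultdict

class DataHubSketch:
    def __init__(self):
        self.tags = {}                       # the single standard data model
        self.subscribers = defaultdict(list)

    def subscribe(self, tag, callback):
        """Register a consumer for updates on one tag."""
        self.subscribers[tag].append(callback)

    def write(self, tag, value):
        """A source writes once; every consumer is notified."""
        self.tags[tag] = value
        for cb in self.subscribers[tag]:
            cb(tag, value)

hub = DataHubSketch()
received = []
hub.subscribe("Line1.Pressure", lambda t, v: received.append((t, v)))  # "MQTT side"
hub.write("Line1.Pressure", 42.0)            # "OPC side" writes one value
```

The point of the design is the many-to-one-to-many shape: consumers never talk to the sources directly, so swapping a legacy OPC DA source for a UA one doesn’t touch the downstream connections.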
Now you don’t have to configure any DCOM at all; it’s all local. So for a lot of people transitioning between products, where maybe the server only supports OPC DA and the client now supports OPC UA and they can’t change it yet, this has allowed them to implement a solution quickly and at a cost-effective price, without ripping everything out. Shawn Tierney (Host): You know, I wanna ask you too. This thing is a data hub, so if you’re listening and not watching, you’re not gonna see, you know, server, client, UA, DA, broker, all these different things up here on the slide. How does somebody find out if it does what they need? I mean, do you guys have a line they can call to say, I wanna go from this to this, is that something DataHub can do? Or is there a demo? What would you recommend to somebody? Connor Mason (Guest): Absolutely, reach out to us. We have a lot of content online, and it’s not behind any paywall or sign-in links even. You can always go to our website, it’s just softwaretoolbox.com, and that’s gonna get you to our product pages. You can download any product directly from there. They have demo timers, so typically with the Cogent DataHub, after an hour it will stop, and you can just rerun it. And then call our team. We have a solutions team that can work with you on, hey, what do I need? And our support team, if you run into any issues, can help you troubleshoot as well. I’ll have some contact information at the end that’ll get people to where they need to go. But you’re absolutely right, Shawn. Because this is so versatile, everyone’s use case is usually something a little bit different, and the best people to come talk to about that is us, because we’ve seen all those differences.
So. Shawn Tierney (Host): I think a lot of people run into the fact that they have a problem, maybe it’s the one you said, where they have an OPC UA server and it needs to connect to an OPC DA client. And a lot of times they’re a little gun-shy to buy a license, because they wanna make sure it’s gonna do exactly what they need first. And I think that’s where it helps having your people who can answer their questions, saying yes, we can do that, or no, we can’t, or a demo they can download and run for an hour at a time to actually do a proof of concept for the boss who’s gonna sign off on purchasing this. And then the other thing is, a lot of products like this have options, and you wanna make sure you’re ticking the right boxes when you buy your license, because you don’t wanna buy something you’re not gonna use. You wanna buy the exact pieces you need. So I highly recommend, I mean, this product just does so much, I have, in my mind, like five things I wanna ask right now, but I’m not gonna. But definitely, when it comes to a product like this, it’s great to touch base with these folks. They’re super friendly and helpful, and they’ll point you in the right direction. Connor Mason (Guest): Yeah, and I can tell you, having worked in support, selling someone a solution that doesn’t work makes for a bad day. Right. Exactly. And we work very closely with anyone that’s looking at products. You know, me being the technical product manager, I’m engaged in those conversations. And yeah, if you need a demo license extended, reach out to us. We wanna make sure that you’re buying something that provides you value. Now, moving on into a similar realm, this is one of our somewhat newer offerings, I’d say, but it’s been around five-plus years, and it’s really grown.
And as I said here, it’s called OPC Router, and it’s not a networking tool; a lot of people may get that impression. It’s more a term about, again, all these different types of connections: how do you route them different ways? It separates itself from the Cogent DataHub by acting, at a base level, as a visual workflow that you can assign various tasks to. So if certain events occur, I may wanna do some processing before I just send data along, whereas the DataHub is really working in between, converting and streaming data over real-time connections. This gives you a kind of playground: if I have certain tasks that occur, maybe through a database, that I wanna trigger off a certain value from my SCADA system, well, you can build that into these workflows to execute exactly what you need. Very, very flexible. Again, it has all these different types of connections. A very unique one, which has also grown with that OT/IT convergence, is that it can be a REST API server and client as well. So I can send out requests to RESTful servers, which we’re seeing hosted in a lot of new applications, and get data out of them. Or once I have consumed a variety of data, OPC Router can become the REST server itself and offer that data for other applications to request. So again, it can be that centralized area of information. The other thing, as we talked about with the automation pyramid, is that it has connections directly into SAP and ERP systems. So if you have work orders or materials that you wanna continue to track, and maybe trigger things based off information from your operation floor, via PLCs tracking how materials are used along the line, and that needs to match up with what the SAP system has for the amount of materials you have, this can be that bridge.
It really is built off the mindset of the OT world as well. So we kinda say this helps empower the OT level, because we’re now giving them tools for what they understand is occurring in their operations. And what could you do with a tool like this that lets you create automated workflows based off certain values and certain events, and automate some of the things you may be doing manually, or doing in a very convoluted way through a variety of solutions? This is also one of those products that’s very advanced in what it supports. Linux and Docker containers are definitely a hot topic, rightfully so, and this can be deployed in a Docker container as well. We’ve seen that with IT folks who really enjoy being able to control the deployment: it allows you to update easily, and to control and spin up new containers as well. This gives you a lot of flexibility to deploy and manage these systems. Shawn Tierney (Host): You know, I may wanna have you back on to talk about this. There’s an old product called RSSql that I used to use. It was a transaction manager, and based on data changing, or on a time as a trigger, it could take data either from the PLC to the database or from the database to the PLC, and it would work with stored procedures. And this seems like it hits all those points. And it sounds like it’s a visual, like you said, right there on the slide, a visual workflow builder. Connor Mason (Guest): Yep. Shawn Tierney (Host): So you really piqued my interest with this one, and it may be something we wanna come back to and revisit in the future, because I know that older product was very useful, and it really solved a lot of applications back in the day. Connor Mason (Guest): Yeah, absolutely. And this just takes that on and builds even more.
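The trigger-style workflow the two of them describe, watch a value, and on change run a task such as writing a database record, can be sketched like this. The tag name and table schema are invented for illustration; in OPC Router this would be configured visually, not coded:

```python
# Sketch of a transaction-manager-style workflow: when a watched PLC
# value changes, fire a task (here, log the new value to a database).

import sqlite3

conn = sqlite3.connect(":memory:")                  # stand-in for a real DB
conn.execute("CREATE TABLE events (tag TEXT, value REAL)")

last_seen = {}

def on_update(tag, value):
    """Fire the workflow only when the value actually changes,
    not on every scan of the PLC."""
    if last_seen.get(tag) != value:
        last_seen[tag] = value
        conn.execute("INSERT INTO events VALUES (?, ?)", (tag, value))

# Simulate repeated PLC reads: only two distinct values arrive.
for v in [10.0, 10.0, 12.5]:
    on_update("Mixer1.BatchCount", v)

rows = conn.execute("SELECT tag, value FROM events ORDER BY rowid").fetchall()
```

The change-detection guard is the key design point: the trigger is the event, and the database write is the routed task.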
If anyone was listening at the beginning of this year, a conference called Prove It was very big in the industry. We were there too, and we presented a solution on stage. I highly recommend going and searching for that; it’s on our web pages, and it’s also on their YouTube links, called Prove It. And OPC Router was a big part of the back end of that. I would love to dive in and show you the really unique things. As a quick overview, we were able to use Google AI Vision to take camera data and detect if someone was wearing a hard hat. All the logic behind getting that information to Google AI Vision was through REST with OPC Router. Then we were parsing that information back through that connection and providing it back to the PLCs. So we go all the way from a camera, to a PLC controlling a light stack, up to Google AI Vision through OPC Router, all on hotel Wi-Fi. It’s a very, very fun presentation, and I think our team did a really great job. So, a pretty new offering I wanna highlight is our DataCaster. This is an actual piece of hardware. You know, Software Toolbox does have some hardware as well; it’s just part of the nature of this environment and how we mesh in between things. The idea is that there are a lot of different use cases for HMI and SCADA. They have grown so much from what they used to be, and they’re a very core part of the automation stack. Now, a lot of times these are doing so many things beyond that as well. What we found is that in different areas of operations, you may not need all that control. You may not even have the space to put up a whole workstation. What the DataCaster does is simply plug into any network and into an HDMI-compatible display, and it gives you a very easy-to-configure way to put a few key metrics onto a screen.
So you can connect directly to PLCs like Allen Bradley, you can connect to SQL databases, and you can also connect to REST APIs to gather the data from these different sources and build an easy-to-view KPI dashboard, in a way. So if you’re on an operation line and you wanna look at your current run rate, maybe you have certain things in the PLC tags, you know, flow and pressure, that are very important for those operators to see. They may not even have the capacity to be interacting with anything; they just need visualizations of what’s going on. This product can just be installed in industrial areas with any type of display you can easily access, and give them something they can easily look at. It’s all configured through a web browser to display what you want. You can put on different colors based on levels of values as well. And it’s just, I feel, a very simple thing; sometimes it seems so simple, but those might be the things that provide value on the actual operation floor. This is, for anyone that’s watching, a quick view of a very simple screen. What we’re showing here is what it would look like from all the different data sources: talking directly to a ControlLogix PLC, talking to SQL databases, Micro800s, a REST client. And what’s coming very soon, definitely by the end of this year, is OPC UA support. So any OPC UA server out there that already has your PLC data, etcetera, this could also connect to it and get values from there. Shawn Tierney (Host): Can I, here I go, can you make it so it, like, changes pages every few seconds? Connor Mason (Guest): Right now it is a single page, but like I said, this is a very new product, so we’re taking any feedback. If that type of slideshow cycle would be valuable to anyone out there, let us know.
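The value-based coloring Connor mentions, a KPI tile changing color as a reading crosses configured levels, reduces to a small mapping like this. The tag names and thresholds are made up for illustration:

```python
# Sketch of a KPI-dashboard tile: map a live reading to a display color
# based on configured warning/alarm levels (thresholds here are invented).

def tile(name, value, unit, warn=75.0, alarm=90.0):
    """Return what a display tile would show for one metric."""
    if value >= alarm:
        color = "red"
    elif value >= warn:
        color = "yellow"
    else:
        color = "green"
    return {"text": f"{name}: {value} {unit}", "color": color}

ok_tile = tile("Line1 Pressure", 52.0, "psi")
alarm_tile = tile("Line1 Pressure", 95.0, "psi")
```

Simple as it is, this is exactly the kind of at-a-glance logic a read-only operator display needs, with no control capability attached.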
We’re definitely always interested to hear from the people actually working out at these operation sites about what’s valuable to them. Yeah. Shawn Tierney (Host): A lot of kiosks you see when you’re traveling, it’ll say, like, line one, I’ll just throw that out there, line one, and that’ll be on there for five seconds, and then it’ll go to line two, and that’ll be on there for five seconds, and so on. I just mentioned that because I can see it being a question I would get from somebody asking me about it. Connor Mason (Guest): Oh, great question, appreciate it. Alright, so now we’ve set aside time for a little hands-on demo. For anyone that’s just listening, I’m gonna talk about this at a high level and walk through everything. The idea is that we have a few different PLCs, a very common Allen Bradley and a Siemens S7-1500, in our office, pretty close to me, on the other side of the wall, actually. We’re gonna first start by connecting those to our TOP Server, like we talked about. This is our industrial communication server that offers OPC DA, OPC UA, and SuiteLink connectivity as well. Then we’re gonna bring this into our Cogent DataHub. This, we talked about, is getting those values up to the higher levels. What we’ll also be doing is tunneling the data; we talked about being able to share data through the data hubs themselves. I’ll explain why we’re doing that here and the value it can add. And then we’re also gonna showcase adding MQTT at this level: taking data from just these two PLCs sitting on a rack, I can automatically make all that information available in an MQTT broker. So any MQTT client out there that wants to subscribe to that data now has it accessible, and I’ve created this all through a really simple workflow. We also have some databases connected.
InfluxDB, which we install with Cogent DataHub, has a free visualization tool that helps you see what’s going on in your processes. I wanna showcase a little bit of that as well. Alright, so jumping into our demo, where we first start off here is our TOP Server. Like I mentioned before, if anyone has worked with KEPServerEX in the past, this is gonna look very similar, because it is the same technology underneath. The first thing I wanted to establish in our demo was our connection to our PLCs. I have a few here; we’re only gonna use the Allen Bradley and the Siemens for the time we have in our demo. How this builds out as a platform is that you create these different channels and the device connections beneath them. This is your physical connection to them, either a TCP/IP connection or maybe a serial connection as well. We have support for all of them. It really is a long list; anyone watching can see all the different drivers we offer. So bringing this into a single platform, you can have all your connectivity based here. All the connections up the stack, your SCADA, your historians, even MES, can all go to a single source. That makes management and troubleshooting a bit easier as well. So one of the first things I did here, I have this built out, but I’ll walk through what you would typically do. You have your Allen Bradley ControlLogix Ethernet driver here first. I have some IPs in here I won’t show, but regardless, we have our driver here, and then we have a set of tags. These are all the global tags in the programming of the PLC. How I got these to map automatically is that in our driver, we’re able to create tags automatically. You’re able to send a command to that device and ask for its entire tag database.
It comes back, provides all that, and maps it out for you, creating those tags as well. This saves a lot of time versus an engineer having to go in and address all the individual items themselves. So once it’s defined in the PLC project, you’re able to bring it all in automatically. I’ll show now how easy that makes connecting to something like the Cogent DataHub. In a very similar fashion, we have a connection over here to the Siemens PLC that I also have. You can see beneath it all these different tag structures, and this was created the exact same way. Where the PLCs support it, you can do automatic tag generation, bring in all the structure you’ve already built out in your PLC programming, and make it available on this OPC server as well. So that’s really the basis: we first need to establish communications to these PLCs, get that tag data, and now, what do we wanna do with it? In this demo, what I wanted to bring up next was the Cogent DataHub. Here I see a very similar kind of layout; we have a different set of plugins on the left side. For anyone listening, the Cogent DataHub again is our aggregation and conversion tool, with all these different types of protocols like OPC UA, OPC DA, and OPC A&E for alarms and events. We also support OPC Alarms and Conditions, which is the newer profile for alarms in OPC UA. We have a variety of different ways you can get data out of things and data into the DataHub. We can also do bridging. This concept is how you share data between different points. So let’s say I had a connection to one OPC server, and it was communicating with a certain PLC, and there were certain registers I was getting data from. Well, now I also wanna connect to a different OPC server that has an entirely different brand of PLC behind it, and maybe I wanna share data between them directly. With this software, I can just bridge those points between them.
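The bridging concept just described, mirroring a point from one source into a point on another so that two different PLC brands share data without any hardwiring, can be sketched like this. The two "servers" and the tag names are invented for illustration:

```python
# Toy sketch of tag bridging: on each update, copy the value of a source
# point into a destination point on a different connection.

server_a = {"AB.Line1.Setpoint": 55.0}   # e.g. an Allen Bradley source
server_b = {"S7.Line1.Setpoint": 0.0}    # e.g. a Siemens destination

# Each bridge is a (source tag, source, destination tag, destination) link.
bridges = [("AB.Line1.Setpoint", server_a, "S7.Line1.Setpoint", server_b)]

def run_bridges():
    """Write-through every configured bridge once."""
    for src_tag, src, dst_tag, dst in bridges:
        dst[dst_tag] = src[src_tag]

run_bridges()
```

In the real product this fires on change notifications rather than being called manually, but the one-directional copy between named points is the whole idea.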
Once they’re in the DataHub, I can do kind of whatever I want with them. I can then allow them to write between those PLCs and share data that way, and you’re now not having to do any hardwiring directly between them to make them able to communicate with each other. Through the standards of OPC and these various communication levels, I can integrate them together. Shawn Tierney (Host): You know, you bring up a good point. When you do something like that, is there any heartbeat? Like, under the general settings, or under one of these topics, are there tags we can use that come from DataHub itself and can be sent to the destination, like a heartbeat with the bridge transactions? Connor Mason (Guest): Yeah, absolutely. There’s a pretty strong scripting engine as well, and I have done that in the past, where you can make internal tags. That could be a timer, it could be a counter. It allows you to create your own tags that you could do the same thing with: share them through a bridge connection to a PLC. So yeah, there are definitely some people with those use cases, where they wanna get something to just track on the software side and get it out to those hardware PLCs. Absolutely. Shawn Tierney (Host): I mean, when you send data out of the PLC, the PLC doesn’t care where it goes. But when you’re getting data into the PLC, you wanna make sure it’s updating and it’s fresh. So, you know, you throw a counter in there with the scripting, and as long as you see that incrementing, you know you’ve got good data coming in. That’s a good feature. Connor Mason (Guest): Absolutely. You know, another big one is the redundancy. What this does, beyond just OPC, is we can add redundancy to basically anything that has two instances of it running, so any of these different connections.
How it’s unique is that it just looks at the buckets of data that you create. For example, if I do have two different OPC servers and I put them into two areas, let’s say OPC server one and OPC server two, I can now create an OPC redundancy data bucket. Any client that connects externally and wants that data goes and talks to that bucket, and that bucket automatically changes between sources as things go down and come back up, and the client would never know that happened, unless you wanted it to. There are internal tags to show what the current source is and so on, but the idea is to keep this transition kind of hidden: regardless of what’s going on in the operations, if I have this set up, my external applications just read from a single source without knowing there are two things behind it actually controlling that. That’s very important for, you know, historian connections, where you wanna have a full, complete picture of the data coming in. If you’re able to make a redundant connection to two different servers and then let that historian talk to a single point, it doesn’t have to control that switching back and forth; it just sees the data flow seamlessly from whichever one is up at the time. Beyond that, there are quite a few other things in here; I don’t think we have time to cover all of them. For our demo, what I wanna focus on first is our OPC UA connection. This allows us both to act as an OPC UA client to get data from any servers out there, like our TOP Server, and also to act as an OPC UA server itself. So if anything’s coming in, maybe you have multiple connections to different servers, or multiple connections to other things that aren’t OPC, I can now provide all this data automatically in my own namespace, to allow things to connect to me as well.
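The redundancy-bucket behavior described above, a client reading one logical point while the bucket silently serves whichever underlying source is currently healthy, can be sketched as a simple failover selection. The source names and health flags are illustrative:

```python
# Sketch of a redundancy data bucket: clients read one logical point;
# the bucket returns the value from the first healthy source, switching
# automatically as sources fail and recover.

def read_redundant(sources):
    """sources: ordered mapping of name -> (value, is_healthy).
    Returns (value, active_source_name) from the first healthy source."""
    for name, (value, healthy) in sources.items():
        if healthy:
            return value, name
    raise RuntimeError("no healthy source available")

sources = {
    "OPCServer1": (101.3, False),   # primary is down
    "OPCServer2": (101.4, True),    # secondary transparently takes over
}
value, active = read_redundant(sources)
```

The client-facing point stays the same name regardless of which server is active, which is exactly what lets a historian keep a gap-free record without managing the switchover itself.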
And that’s part of the aggregation feature and topic I was mentioning before. So with that, I have a connection here pulling data from my TOP Server. I have a few different tags from my Allen Bradley and my Siemens PLC selected. The next part of this, as I was mentioning, is the tunneling. Like I said, this is very popular for getting around DCOM issues, but there are a lot of reasons why you may still use it beyond just the headache of DCOM. What this runs on is a TCP stream that takes every data point as a value, a quality, and a timestamp, and mirrors those between DataHub instances. So if I wanna get things across a network, like out of my OT side, where previously I would have had to allow an open port on my network for any OPC UA clients across the network to access, I can now actually change the direction of this and tunnel data out of my network without opening up any inbound ports. This is really big for security. If anyone out there is a security professional, or an engineer who has to work with IT and security a lot, you don’t wanna have an open port, especially into your operations and OT side. So this allows you to change that direction of flow and push data out into another area, like a DMZ computer, or up to a business-level computer as well. The other thing I have configured in this demo: the benefit of having that tunnel streaming data across this connection is that I can also store the data locally in an InfluxDB database. The purpose of that is that I can actually historize it, so that if this connection ever goes down, I can backfill any information that was lost while the tunnel was down. Without that added layer, in real-time data scenarios like OPC UA, unless you have historical access, you would lose a lot of data whenever that connection went down.
But with this, I can actually use the back end of this InfluxDB to buffer any values. When my connection comes back up, I pass them along that stream again. And if I have anything historically connected, like another InfluxDB, maybe a PI historian, an AVEVA historian, any historian offering out there that allows that connection, I can then provide all the records that were originally missed and backfill them into those systems. So, I’ve switched over to a second machine. It’s gonna look very similar here as well. This also has an instance of the Cogent DataHub running. For anyone not watching, what we actually have on this side is the portion of the tunneler that sits here and listens for any data coming in. So on my first machine, I was able to connect my PLCs and gather that information into Cogent DataHub, and now I’m pushing that information across the network into a separate machine that’s sitting here and listening to gather it. What I can quickly do is just make sure I have all my data here. So I have these different points from my Allen Bradley PLC: a few simulation demo points like temperature, pressure, tank level, a few statuses, and all of this is updating directly through that stream as the PLC updates it. I also have my Siemens controller, with some current values and a few different counter tags as well. All of this, again, is being streamed directly through that tunnel. I’m not connecting to an OPC server at all on this side; I can show you that here, there are no connections configured. I’m not talking to the PLCs directly on this machine either, but I’m able to pass all the information through without opening up any inbound ports on my OT demo machine, per se. So what’s the benefit of that? Well, again, security, and also the ability to do the store-and-forward mechanism. On the other side, I was logging directly to an InfluxDB.
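The store-and-forward pattern being set up here, buffer records locally while the tunnel is down, then replay them in order on reconnect so the historian sees no gap, can be sketched like this. The in-memory deque stands in for the local InfluxDB buffer, and the record shape (tag, value, quality, timestamp) mirrors what the tunnel streams:

```python
# Sketch of store-and-forward across a tunnel: while the link is down,
# (tag, value, quality, timestamp) records buffer locally; on reconnect
# they are replayed in order before live data resumes.

from collections import deque

buffer = deque()        # local buffer (an InfluxDB in the real setup)
historian = []          # what the remote side of the tunnel has received

link_up = False

def send(record):
    if link_up:
        while buffer:                 # backfill everything missed, in order
            historian.append(buffer.popleft())
        historian.append(record)      # then deliver the live record
    else:
        buffer.append(record)         # link down: store for later

send(("pressure", 101.3, "good", 1))  # link down -> buffered
send(("pressure", 101.5, "good", 2))  # link down -> buffered
link_up = True
send(("pressure", 101.7, "good", 3))  # link up -> backfill, then live
```

Replaying with original timestamps is what lets the downstream historian reconstruct the gap instead of recording a flat line or a hole.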
That InfluxDB could be my buffer, and I was able to configure it so that if any values were lost, it would store and forward them across the network. So now on this side, if I pull up Chronograf, which is a free visualization tool that installs with the DataHub as well, I can see some very nice visual diagrams of what is going on with this data. So I have a pressure that is just a simulator in this Allen Bradley PLC; it ramps up and comes back down. It’s not actually connected to anything reading a real pressure, but you can see it over time, and I can change through these different windows of time. I might go back a little far, but I have a lot of data stored in here. For a while during my test, I turned this off and made it fail, but then I came back and it was able to recreate all the data and backfill it as well. So through these views, I can see that as data disconnects and comes back on, I still have a very cyclical view of the data, because it was able to recover and store and forward from that source. Like I said, Shawn, data quality is a big thing in this industry. It’s a big thing for people both on the operations side and for people making decisions in the business layer. So being able to have a full picture, without gaps, is definitely something you should be prioritizing when you can. Shawn Tierney (Host): Now, what we’re seeing here is you’re using InfluxDB on this destination PC, or IT-side PC, and Chronograf, which was that utility that gets installed, it’s free. But you don’t actually have to use that. You could have sent this into an OSIsoft PI or, Exactly, somebody else’s historian, right? Can you name some of the historians you work with? I know OSIsoft PI. Connor Mason (Guest): Yeah, absolutely. So there are quite a few different ones.
As far as what we support in the DataHub natively: Amazon Kinesis, the cloud-hosted service, which we can do the same things with from here as well; AVEVA Historian; AVEVA Insight; Apache Kafka, a newer one that used to be a very IT-oriented solution and is now getting into OT. It’s a similar structure, where things are stored in different topics that we can stream to. On top of that, just regular old ODBC connections, which opens up a lot of different routes, or even the old classic OPC HDA. So if you have any historians that can act as an OPC HDA connection, we can also stream through there. Shawn Tierney (Host): Excellent. That’s a great list. Connor Mason (Guest): The other thing I wanna show, while we still have some time here, is that MQTT component. This is really growing, and it’s gonna continue to be a part of the industrial automation technology stack and conversations moving forward: streaming data from devices and edge devices up into different layers, into the OT, then maybe out to IT and our business levels, and definitely into the cloud, where we’re seeing a lot of growth. Like I mentioned with the DataHub, the big benefit is that I have all these different connections and can consume all this data. Well, I can also act as an MQTT broker. What a broker typically does in MQTT is just route and share data. It’s that central point where things come to it to either say, hey, I’m giving you some new values, share them with someone else, or, hey, I need these values, can you give me them? It really fits in super well with what this product is at its core. So all I have to do here is just enable it. What that now allows: I have an example open in MQTT Explorer. If anyone has worked with MQTT, you’re probably familiar with it. There’s nothing else I configured beyond just enabling the broker.
And you can see within this structure, I have all the same data that was already in my DataHub, the same things I was collecting from my PLCs and TOP Server. Now I’ve exposed these as MQTT topics, and I have them in JSON format with the value and the timestamp. You can even see a little trend here, kind of matching what we saw in Influx. And now this enables all those different cloud connectors that wanna speak this language to do it seamlessly. Shawn Tierney (Host): So you didn’t have to set up the PLCs a second time to do this? Connor Mason (Guest): Nope, not at all. Shawn Tierney (Host): You just enabled this, and now the data’s going this way as well. Connor Mason (Guest): Exactly. That’s a really strong point of the Cogent DataHub: once you have everything in its structure and model, you just enable any of these different connections. You can get really, really creative with these things, like we talked about with the bridging aspect, getting into different systems, even writing down to the PLCs. You can make custom notifications and email alerts based on any of these values. You could even take something like this MQTT connection, tunnel it across to another DataHub, and maybe then convert it to OPC DA. And now you’ve made a new connection over to something that’s very legacy as well. Shawn Tierney (Host): Yeah, I mean, the options here are just pretty amazing, all the different things that can be done. Connor Mason (Guest): Absolutely. Well, you know, I wanna jump back into some of our presentation here while we’ve still got the time. Now that we’re kinda done with our demo: there are so many different ways that you can use these different tools.
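The per-topic JSON payloads shown in MQTT Explorer, each tag becoming a topic carrying a value plus its timestamp, can be sketched like this. The topic layout and payload field names are illustrative assumptions, not the DataHub’s exact schema:

```python
# Sketch of publishing tag data as MQTT topics: a hierarchical tag name
# maps to a topic path, and the payload is JSON with value + timestamp.

import json

def to_mqtt_message(tag, value, timestamp):
    """Return (topic, payload) as an MQTT broker might present one tag."""
    topic = tag.replace(".", "/")         # e.g. Demo.Tank.Level -> Demo/Tank/Level
    payload = json.dumps({"value": value, "timestamp": timestamp})
    return topic, payload

topic, payload = to_mqtt_message("Demo.Tank.Level", 7.2, "2024-01-01T00:00:00Z")
```

An actual client would hand `topic` and `payload` to a library such as paho-mqtt’s `publish()`; the sketch only shows the tag-to-topic and value-to-JSON mapping that makes the data consumable by cloud connectors.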
This is just a really simple view of something that used to be very simple: connecting OPC servers to a variety of different connections, then expanding on that with store-and-forward, the local InfluxDB usage, and getting out to things like MQTT as well. But there's a lot more you can do with these solutions. So like Shawn said, reach out to us. We're happy to engage and see what we can help you with. I have a few other things before we wrap up. Overall, we've worked across nearly every industry. We have installations across the globe on all continents. And like I said, we've been around for pushing thirty years as of next year. So we've seen a lot of different things, and we really want to talk to anyone out there who has some struggles going on with connectivity or has any ongoing projects. If you work in these different industries, or if there's nothing marked here and you have anything going on that you need help with, we're very happy to sit down and let you know if there's something we can do there. Shawn Tierney (Host): Yeah. For those who are listening, we see most of the big energy and consumer product companies on that slide. So I'm not going to read them off, but it's a lot of car manufacturers, the household name brands that everybody knows and loves. Connor Mason (Guest): So, to wrap some things up here: we talked about all the different ways that we've helped solve things in the past, but I want to highlight some of the unique ones that we've also gone and done case studies and success stories on. This one I actually got to work on within the last few years: a plastic packaging manufacturer was looking to track uptime and downtime across multiple different lines, and they had a new cloud solution that they were already evaluating. They were really excited to get it into play.
They had a lot of upside to getting things connected to it and starting to use it. What they had was a lot of different PLCs, a lot of different brands, in different areas of operation that they needed to connect to. So the first step was to get all of that into our TOP Server, similar to what we showed in our demo. We just needed to get all the data into a centralized platform first and make that data accessible. Then, once they had all that information in a centralized area, they used the Cogent Data Hub to help aggregate that information and transform it to be sent to the cloud through MQTT. So, very similar to the demo here, this is a real use case of that: getting information from PLCs, structuring it the way that cloud system needed it for MQTT, and streamlining that data connection to where it's now just running in operation. They constantly have updates about where their lines are in operation, tracking their downtime and their uptime, and they can do some predictive analytics in that cloud solution based on their history. This really enabled them to build from what they had existing, which was a lot of manual tracking, into an entirely automated system, with management able to see real views of what's going on at the operations level. Another one I want to talk about is a success story we were able to do with Ace Automation. Ace Automation is a systems integrator, and they were brought in by a pharmaceutical company that was doing a lot of work with some old DDE connections and custom Excel macros and was just having a hard time maintaining legacy systems that were a pain to deal with. They were working with older log files from some old InTouch HMIs, and what they needed was something that was not just based on Excel and custom macros.
So, one product we didn't get to talk about yet, but we also carry, is our LGH File Inspector. It's able to take these files and put them out in a standardized format like CSV, and also automate a lot of the decisions around when these files should be queried: Should they be queried for different lengths? Should they be output to different areas? Can I set these up in a scheduled task so it can be done automatically rather than someone having to sit down and do it manually in Excel? So they were able to recover over fifty hours of engineering time with the solution, from no longer having to do late-night calls to troubleshoot an Excel macro that stopped working and no longer crashing machines by running legacy systems just to support some of the DDE servers, saving them almost two hundred plus hours of productivity. Another example: we were able to work with a renewable energy customer that's doing a lot of innovative things across North America. They had a very ambitious plan to double their footprint in the next two years. And with that, they had to really look back at their assets, see where they currently stood, and figure out how to make new standards to support growing into what they wanted to be. They had a lot of different data sources, all siloed at specific sites. Nothing was connected in common to a corporate-level area of historization, or control and security. So again, they were able to use our TOP Server to put out a standard connectivity platform and bring in the Data Hub as an aggregation tool. Each of these sites would have a TOP Server individually collecting data from different devices, and then that was able to send it into a single Data Hub. So now their corporate level had an entire view of all the information from these different plants in one single application.
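The scheduled-export workflow described here can be sketched generically. This is not the LGH File Inspector itself (the LGH format is proprietary and not parsed here); it only illustrates the pattern of a query-and-export job that a scheduler such as cron or Windows Task Scheduler runs unattended, with `query_history()` as a hypothetical stand-in for querying the log files.

```python
import csv
from datetime import date

def query_history():
    # Placeholder for "query the history files for a given length of time".
    return [
        {"tag": "Line1.Speed", "timestamp": "2024-05-01T12:00:00Z", "value": 118.2},
        {"tag": "Line1.Speed", "timestamp": "2024-05-01T12:01:00Z", "value": 119.0},
    ]

def export_csv(path):
    # Write the queried records to a standardized CSV and report the row count.
    rows = query_history()
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["tag", "timestamp", "value"])
        writer.writeheader()
        writer.writerows(rows)
    return len(rows)

if __name__ == "__main__":
    # A dated filename lets each scheduled run produce a fresh export.
    print(export_csv(f"history_{date.today()}.csv"))
```

The point of the pattern is that once the export is a single command, scheduling it replaces the late-night manual Excel sessions described above.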
That then enabled them to connect their historian applications to that Data Hub and have a complete view, and make visualizations of their entire operations. What this allowed them to do was grow without replacing everything. And that's a big thing we strive for: not ripping out and replacing all your existing technologies. That's not something you can do overnight. Instead, how do we provide value and gain efficiency with what's in place, providing newer technologies on top of that without disrupting the actual operation? So this was really, really successful. And at the end, I just want to provide some other contacts and information so people can learn more. We have a blog that goes out every week on Thursdays, with a lot of good technical content, a lot of recaps of the awesome things we get to do here, and the success stories as well. You can always find that at blog.softwaretoolbox.com. And again, our main website is softwaretoolbox.com. You can get product information and downloads, and reach out to anyone on our team. Let's discuss what issues you have going on, or any new projects; we'll be happy to listen. Shawn Tierney (Host): Well, Connor, I want to thank you very much for coming on the show and bringing us up to speed not only on Software Toolbox, but also on TOP Server, and for doing that demo with TOP Server and Data Hub. Really appreciate that. And, like you just said, if anybody has any projects that you think these solutions may be able to solve, please give them a call. And if you've already done something with them, leave a comment, no matter where you're watching or listening to this, and let us know what you did. What did you use? Like me: I used OmniServer all those many years ago and, of course, TOP Server as an OPC server.
But if you guys have already used Software Toolbox, and of course Symbol Factory, which I use all the time, let us know in the comments. It's always great to hear from people out there. With thousands of you listening every week, I'd love to hear: are you using these products? Or if you have questions, put them in the comments and I'll funnel them over to Connor. So with that, Connor, did you have anything else you wanted to cover before we close out today's show? Connor Mason (Guest): I think that was it, Shawn. Thanks again for having us on. It was really fun. Shawn Tierney (Host): I hope you enjoyed that episode, and I want to thank Connor for taking time out of his busy schedule to come on the show and bring us up to speed on Software Toolbox and their suite of products. I really appreciated that demo at the end too; if you were watching, we actually got a look at their products and how they work. And I really appreciate them taking all of my questions. I also appreciate the fact that Software Toolbox sponsored this episode, meaning we were able to release it to you without any ads. So if you're doing any business with Software Toolbox, please thank them for sponsoring this episode. And with that, I just want to wish you all good health and happiness. And until next time, my friends, peace. ✌️ If you enjoyed this content, please give it a Like, and consider sharing a link to it, as that is the best way for us to grow our audience, which in turn allows us to produce more content.
In this episode, I sit down with Armin Hadzalic to tackle the burning question: will the Model Context Protocol (MCP) truly transform industrial AI? We dive deep into how MCP connects large language models and agents to real-world data, what makes it different from protocols like OPC UA and MQTT, and why the biggest players in tech are rallying behind it. Armin shares practical examples, addresses skeptics, and explains what MCP means for engineers, technicians, and anyone working in industrial automation. If you're curious about the rise of agents, the future of dashboards, and what it takes to stay ahead in the evolving world of industrial AI, you won't want to miss this conversation. Join me as we explore where MCP is headed and what it could mean for your job and your factory.
Shawn Tierney meets up with Michael Bowne of PI to learn what IO-Link is, how it works, and when to use it in this episode of The Automation Podcast. For any links related to this episode, check out the "Show Notes" located below the video. Watch The Automation Podcast from The Automation Blog: Listen to The Automation Podcast from The Automation Blog: The Automation Podcast, Episode 246 Show Notes: To learn about our online and in-person training courses, please visit TheAutomationSchool.com. Read the transcript on The Automation Blog: (automatically generated) Shawn Tierney (Host): Thank you for tuning back in to The Automation Podcast. My name is Shawn Tierney from Insights In Automation. And this week on the show, we have a special guest, somebody who hasn't been on in four years: Michael Bowne from PI. They're the folks who manage technologies like PROFINET and IO-Link. And Michael's come on this week to talk specifically about IO-Link. We're going to talk about what it is, when you should use it, and the technical details of IO-Link, all the things engineering minds like to know about. So I think you guys are going to enjoy this. It took about two to three hours to edit this one, and I really enjoyed going back through it. We recorded it, I think, four weeks ago, so I hadn't seen it in four weeks, but I really did enjoy it, and I really think you guys will enjoy it too. And that brings up another point. Organizations like PI and ISA and other organizations, they're not vendors. They don't sell stuff, right? And so this episode is not sponsored by a vendor. And as I was going through it yesterday, I was like, you know, there are a lot of great slides in here; I want to share them with the public.
So I've decided to sponsor this episode myself, and I'll use this as an opportunity to tell you a little bit about my company and The Automation Blog, The Automation School, and the content I have planned to release this fall, including content on these products right here, all focused on IO-Link. I actually just did a live stream with these products in front of me; I'll be doing more tomorrow, and I'll be adding lessons on these products to my courses as well. In any case, before we get to that, let's go ahead and jump right into the show, hear from Michael, and learn all about IO-Link. I want to welcome Michael back to the show. It has been four years; he was last on in episode 76, back in September 2021. Michael, thank you for coming back on the show. A lot of people may not remember four years ago, so before we jump into your presentation, which I am so excited about because we're talking about IO-Link again, could you please tell me a little bit about yourself and a little bit about PI? Michael Bowne (PI): Yeah, sure. First of all, my pleasure to be back on the podcast. It was a lot of fun, I remember that, back in 2021, and I'm glad to be back doing it again. I started with PI North America in 2011 as the technical marketing director. Since 2016, I've been the executive director running the show, and chairman of the board since last year. I have had the, let's say, pleasure to serve as the deputy chairman of PI on a global scale since 2015. Prior to working for PI, I worked for a sensor manufacturer whose products had some interfaces that gave me an introduction to PROFIBUS and PROFINET. And before that, I studied physics and math at Penn State University. Really quick, for those of you who aren't: I'm sure many of you are familiar with PI, but it was started in the late eighties.
Half a dozen companies and universities got together and wrote the PROFIBUS spec, and that evolved into PROFIBUS DP and PROFIBUS PA for process automation. In the early two thousands, PROFINET came under the umbrella. And the reason I bring all this up is because there are some newer technologies under our umbrella that I think the audience might want to know about. Of course, IO-Link is the one we'll talk about today, and that came in 2009. But there are some others, like omlox, which is a location-tracking standard. There's one called MTP, Module Type Package, and NOA, NAMUR Open Architecture, also under our umbrella. Basically, what we do is promote, maintain, and write the specs and turn them into standards, and the work on those specs is done in working groups, which are staffed by volunteers, engineers from member companies. They donate their time to develop the specs for these technologies we have under our umbrella. And we're a little bit unique in that we're decentralized. We have competence centers, test labs, and training centers located throughout the world. It's not all in one headquarters kind of place, and they're all independent, but they have a contract, a quality-of-services agreement, with PI that says: if you have a question about the technologies, go to a competence center; if you want further training, go to a training center; if you want to test a device, go to a test lab. And they all work with the regional PI associations, of which we, PI North America, are one. We were founded in 1994 by a guy by the name of Mike Bryant. At that time, we were called the PROFIBUS Trade Organization. And we are also, and I didn't come up with this name, the North American regional IO-Link interest group.
That's an IO-Link designation, a regional IO-Link interest group, which means that we have a separate contract and quality-of-services agreement with the IO-Link community to promote and work with members specifically for IO-Link here in North America. And we're nonprofit and member supported. You were talking about products and stuff at the beginning; I've got nothing to sell today. We're working solely on technology. Shawn Tierney (Host): You know, I do want to throw out there, though, that you have a great update every month about all the new products that fall into the buckets of IO-Link, PROFINET, and PROFIBUS, and a lot of those new products cover IO-Link. So while they may not have products of their own, they do keep the industry up to date on who's jumping on board and releasing new products that meet these specifications. And you know what? Maybe you're not using PROFINET because you're using brand X or Y; you're still probably using IO-Link. So, very interesting updates that you publish every month as a blog. I know when I was doing the news for a couple of years, I would always go to your site to look for new updates. Michael Bowne (PI): Cool. Yeah. I've got a slide on that at the end, but you're referring to PROFINews. Shawn Tierney (Host): Yes, PROFINews. Michael Bowne (PI): Yeah, that's been a baby and a labor of love for a while now. And, oh man, it's incredible, because every month, when we track this kind of stuff, the most popular article is obviously the new products. Well, because that's what people want: the stuff they can buy, the stuff they can use.
And we've got another one coming out next week. Every month we push that out, and it's always half a dozen or a dozen new products, half of which are IO-Link. It's just growing like crazy. Shawn Tierney (Host): Yeah. And you guys have had some good articles. Now I'm stretching the old memory here, but I thought you had a great series on MTP, which I really enjoyed. Did I remember that correctly? Michael Bowne (PI): Yeah, we try to get some editorial content in there. It falls into three main buckets: what the new products are, what new trainings and events are coming up, and then some editorial content. I think what we're driving at is that we maybe need to do an MTP podcast here at some point down the road. Shawn Tierney (Host): Probably, yeah. Down the road, definitely. I still have a very casual understanding of it. But let me throw it back to you, because I kind of jumped in and interrupted your update. Michael Bowne (PI): No, it's good. It saves us at the end; when we get to that slide, we can just jump over it. Now we've got it covered, and it's an important one. But you kind of gave me a nice lead-in to the next one, which talks about the IO-Link community. And I'll start from the bottom and work my way up, beginning with being fieldbus independent. Shawn Tierney (Host): I just want to break in here for a moment and thank those of you in the audience who've signed up for my membership program. Really, really appreciate you all. Eighteen months ago, after reviewing ten-plus years of being on YouTube, it was pretty obvious that there's no real revenue on YouTube. I mean, it comes in at maybe 1% of my monthly expenses.
And so that ad revenue is just not something to rely on going forward, because it's not something that's been reliable in the past. So I set up the membership program both on YouTube and at theautomationblog.com, and I want to thank all of you who signed up. We have a $5 tier, which I know most people sign up at, and then we have a couple of other higher tiers. So I just wanted to thank you all for doing that. The membership program is actually probably 3% of my monthly revenue, which is one to two times more than what the YouTube revenue was. So thank you all for that. And I hope that some of you who are not part of the membership program will consider becoming a member and supporting my work so I can do videos that are not always sponsored videos. Now, I love sponsored videos. I love it when a vendor sends me a piece of hardware and then sits down with me and teaches me how to use it so I can create an ad-free video to share with you on how to use that product, or when they come on the podcast and sponsor it to make it ad free so we can tell the story about their product or service. And I will continue to do that going forward, but I would really also like to do more audience-generated content: content where you generate the idea and say, Shawn, why don't you try this, or, Shawn, why don't you do this? And a lot of the topics that the audience wants to see are not necessarily topics that a vendor wants to promote with advertising dollars. Okay? And so that's the whole purpose of the membership program. Like I said, right now around 3% of my monthly income comes from it, and I'm talking about the business income, not my personal income: 3% of what the business needs to move forward and pay its bills every month. But still, so many of you have decided to jump in and support me.
I just wanted to stop and say thank you very much from the bottom of my heart. And if you're not part of the membership program and you're doing financially well, please consider it if you enjoy the show. This is episode 246 of The Automation Podcast, and every episode has been free; the audio has been free for all 246 of them. And most of those episodes I funded myself, well, you can understand how you fund something when you don't have the income coming in. In any case, if you enjoy it, please consider becoming a member, and we can branch out and do other things together. And with that, let's jump back into this week's episode and learn more about IO-Link. Michael Bowne (PI): So, like you said, the IO-Link community came to PI in 2009, organizationally under PI, because we have the infrastructure for working groups and IP policies and contracts and things like that. But the IO-Link community has its own steering committee, and from the outset, every IO-Link event and everything that we do is independent of any PROFIBUS or PROFINET stuff. We try really, really hard to maintain that independence, no matter what vendor you're using. At this point, we've got 500 companies in the IO-Link community, and it's really just growing by leaps and bounds. We track this stuff by nodes: all the IO-Link companies send their node counts to an independent auditor, who collects the counts and gives us back an anonymized total. So we don't know where or who is selling them, but we get the total. And you can just see this hockey-stick exponential growth. Particularly in 2023, there was some supply-chain over-purchasing that went on; we're looking at a growth rate of 89% there, which is obviously unsustainable. But still, last year 9.7 million nodes were added.
Again, because it's fieldbus independent, it really has no competitor. And that's what's kind of cool about IO-Link. You don't need to choose a fieldbus and therefore get IO-Link; you can use any fieldbus or industrial Ethernet protocol, and IO-Link works with it. Shawn Tierney (Host): You know, I just want to mention for the audio listeners: if we go back to 2012, it looks like we're probably at the one million mark or below it. And as you go to 2022, it looks like you're at 35.7 million. Is that 2022 or 2023? Michael Bowne (PI): Yeah, that's 2022. Exactly, 35.7 million. Shawn Tierney (Host): And then at the end of 2023, we're at 51.6 million. So you talked about that overbuying. And then at the end of 2024, we're at 61.3 million. So you can see that from 2022 to 2024, you went from 35 to 61 million. So the adoption, like you said, it's a hockey stick; the adoption has really picked up. And I think you hit the nail on the head, because it is fieldbus independent. It's a way to just get more information out of our devices, like sensors and photo eyes. I mean, these chipsets that come in these devices now are just amazing. Michael Bowne (PI): And that's the whole point of this. You're not going to put a $5 Ethernet chip, a $5 Ethernet interface, on a $15 proximity sensor. But computing and memory have gotten really, really small and really, really cheap, to the point that they're on just about everything. And so this proximity sensor, let's say it's on a conveyor belt, can not only tell you if the box is there or not, but it can tell you how many blue boxes went by, or how many red boxes went by, or if the box that's going by is off kilter or misaligned or something like that. But how do you get that data out inexpensively? Here we are: IO-Link is the way to do it.
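To make the "extra data from a cheap sensor" idea concrete, here is an illustrative decode of a 16-bit process-data word. The bit layout used here (a 14-bit signed measurement plus two switching flags) is an assumption for the example only; a real device's process-data layout is defined by its IODD.

```python
def decode_process_data(word: int) -> dict:
    """Decode a hypothetical 16-bit process-data word: upper 14 bits carry a
    signed measurement, the low two bits carry switching flags. Illustrative
    layout only; consult the device's IODD for the real one."""
    raw = word >> 2                      # upper 14 bits: measurement
    if raw & 0x2000:                     # sign-extend 14-bit two's complement
        raw -= 0x4000
    return {
        "value": raw,
        "switch_1": bool(word & 0x1),    # e.g. "object present"
        "switch_2": bool(word & 0x2),    # e.g. "object misaligned"
    }

print(decode_process_data(0b0000000110010101))
```

The point is that a single small cyclic telegram can carry both the classic switching signal and the richer measurement the chipset already has.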
Shawn Tierney (Host): I'm glad to see a lot of these sensors come with humidity, temperature, and all these other things too. You're like, really? I can get that out of my photo eye? Michael Bowne (PI): Yeah, multivariable. Exactly. Traditionally, with an analog interface, how did you get that? You couldn't. But now, with a digital interface, which is what we're talking about, digitalization in the last meter, you can get that data, that information, and do some pretty cool stuff with it. Shawn Tierney (Host): Yes, you can. Michael Bowne (PI): I'll talk a little bit about the architecture here to get into the technical side of how IO-Link works. There are two main kinds of devices: the IO-Link masters and the IO-Link devices. These IO-Link masters are available for, we have here, 16 different industrial Ethernet or fieldbus systems, and 21 manufacturers offer a central PLC option, like an IO-Link master built into the backplane of the PLC if you so desire. And the number of devices, well, that hockey stick we showed before is just exploding. We've got 60-something million sold, and we have tens of thousands of unique IO-Link devices from hundreds of different device manufacturers that have implemented this interface. And for anybody on the podcast who wants to do this and add this to their sensors, there are a number of different companies that help with product design, either with the chips, the transceivers, or the software stacks, and a number of companies that provide technical support in order to do that. So an IO-Link system is made up of four parts. Like I said, you have the IO-Link master. That's the gateway between the IO-Link devices, the IO-Link interface, and the higher-level communication system, such as the fieldbus, the industrial Ethernet protocol, or the backplane.
You have the devices; this is the exciting part: your sensors, your switchgear, your valves, your signal lamps, maybe some simple actuators, whatever the case may be. You've got an IO-Link cable, just a three-wire, unshielded, super simple connection between the master and the devices. And then every device has an IODD, or IO-Link Device Description file, and I'll explain how that gets used to engineer and parameterize the IO-Link system and the devices. Traditionally, communication only reached the I/O level. You had a connection between the PLCs and the I/O, and then it kind of stopped there, because all those sensors and actuators were not accessible. They were analog; you got your one process signal, and that's where it ended. But with IO-Link, we enable that communication bidirectionally, cyclic and acyclic, and that's the cool part, all the way from the higher-level systems, not only to the PLC but from the PLC down to the simple sensors and actuators, which are now accessible. And you kind of touched on this before: these chipsets have gotten really, really smart and really, really powerful. It's not that any of the use cases being solved with IO-Link are new; what's new is the ease with which they can be solved. Because you can get all this extra data out, things like OEE, downtime tracking, track and trace, predictive maintenance, remote monitoring, recipe management, SPC, all of these become easier. It's not that these use cases are only now being solved; we've been doing this for a long, long time. It's just that, because it's a standard, and because how the data gets from the device to the master and up to the controller is standardized, it's easier.
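IODDs are XML files, so reading identity and parameter information out of one can be sketched with a standard XML parser. The tiny document and the element names below are simplified stand-ins for the real IODD schema, not its actual structure.

```python
import xml.etree.ElementTree as ET

# Simplified, hypothetical stand-in for an IODD; real files follow the
# IODD schema published by the IO-Link community.
iodd = """<IODevice>
  <DeviceIdentity vendorId="1234" deviceId="5678" vendorName="ExampleSensors"/>
  <DeviceFunction>
    <VariableCollection>
      <Variable id="V_Temperature" accessRights="ro"/>
      <Variable id="V_SwitchPoint" accessRights="rw"/>
    </VariableCollection>
  </DeviceFunction>
</IODevice>"""

root = ET.fromstring(iodd)
ident = root.find("DeviceIdentity")
print(ident.get("vendorName"), ident.get("vendorId"), ident.get("deviceId"))
for var in root.iter("Variable"):
    # Each variable entry tells an engineering tool what can be read/written.
    print(var.get("id"), var.get("accessRights"))
```

This is the mechanism that lets an engineering tool parameterize any vendor's device the same way: the tool reads the device's description file rather than being hard-coded per device.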
If you spend all your effort trying to gather, collect, and sanitize the data because every device is different, that's just a mess, and the ROI disappears really fast on any project like that. But if we have a standard for how to do it, then it becomes very easy, everything can come in quite nicely, and it just works a whole lot better. You start getting access to that data, and so what we're starting to see is connections being made. You talk about the flattening of the traditional automation hierarchy, where now not only is that I/O block or that sensor connected to a PLC, but it's got some extra data. Like you said, this little photo eye might have a temperature or moisture sensor also in there, just because it's part of the chipset. But the PLC doesn't care about that; it just wants the information from the photo eye. So what do you do with all this extra, beautiful information that isn't necessarily process data? Well, maybe the MES wants to know about it. So how do you get it there? In a running factory, in a brownfield environment, rule number one is: don't touch the running PLC. Shawn Tierney (Host): Yeah. Michael Bowne (PI): And rule number two is: see rule number one. That thing is running, and any minute of downtime costs more than anything else on the factory floor. Shawn Tierney (Host): Before we go on, I did want to break in here and tell you a little bit about my website, theautomationschool.com, where I do my online training. I also do in-person training. And you probably don't know that it all started back in 2014 with a Kickstarter I ran for my first PLC basics course. At the time, it was called Micro Programmable Controller Basics, and I ended up changing it to just PLC Basics.
But in any case, since then, I've added a dozen courses on a variety of topics, and you'll find them all at theautomationschool.com. But what I really wanted to talk to you about is why. Why did I do that? Well, I had spent twenty-five years as a certified, authorized Rockwell Automation distributor specialist covering PLCs, HMIs, SCADA, MES, and other stuff too. And I knew from visiting customers in their plants almost every workday that there was a real need for affordable training. The first thing is, large companies have large expenses, large paychecks, and lots of overhead, so they've got to charge a lot. And that was a problem, because a lot of the people I was working with, the controls engineers, automation engineers, high-end electricians and technicians, had to fund their training themselves. Their company was sort of like, no, we trained a guy back in the nineties and then he left for a better job, so we're not spending money on training. And so all these people were having to train themselves, and it was unaffordable to buy the vendors' courses. And even if the company did have training dollars, it was unaffordable to send someone away for a week to a $3,000 course somewhere halfway across the country, with probably $3,000 worth of travel and hotels too. And then they'd go a week without one of their smartest guys, one of their best people, because that's usually who you're going to train and uplift through the organization: when people are doing well at a lower level, you want to bring them up and train them on automation. And so that's why I started theautomationschool.com: to try to provide affordable training. I knew the courses would never be Hollywood quality. I mean, this isn't Hollywood quality, right? But I knew they could be helpful and affordable by just filming them in my garage.
And, you know, picking up some used equipment and putting together the episodes. And the site has grown so much. We have thousands of students from over 150 countries, and hundreds of vendors we work with. But the other thing I did is made my courses buy once, own forever. Right? More like an ebook or an audiobook or an MP3 album. And the reason I did that, and I understand why the vendors don't do it, because they're like, well, they'll sign up one guy in the I&E shop and he'll share his password with everybody. You know, that could happen. People could rob a bank too. But most people, when they buy a course, and I saw this when I was on an independent platform for a while that showed you the progress of every student, most people buy the course well before they're ready to take it. And I'm not going to charge people a monthly fee, or only give them access for a short window, when they have good intentions now but it takes them a while to actually free up their schedule to get into the course and take it. So that's why my courses are buy once, own forever. And as they grow, the price goes up because I'm adding more and more content, and I do split them out and make cheaper versions over time. But those people who buy in early get the benefit. Like my S7 course: I think it originally came out at $40 or $50, and now it's $200 because I've added so much to it over the years. Same with ControlLogix and CompactLogix. And then the other thing is, I want people to be able to take a course more than once. Right? Let's say you take a ControlLogix course and you don't use it for a couple years; you're probably going to have to take it again. And I don't want you to feel like you have to pay a monthly fee to do that. It's like an ebook or an MP3 album. You bought it.
You bought access to it, I guess I should say, and now it's yours. Right? And the other thing is, I support my students personally. I check the website every day for questions, every workday I should say; I do take Sundays off. So if it's a workday and I'm working, not on vacation or traveling for business, I'm up there answering questions. And actually, even when I'm traveling on business I'm on there answering questions, although if I don't have any hardware with me, there are some questions I can't answer. But in any case, I just wanted to share that with you: theautomationschool.com, high-quality online courses, five-star rated, buy once, own forever. And guess what? I'm updating all the PLC courses, and if you already own or buy one of the existing PLC courses, you not only get the updated lessons that get added to that course, you get the new course completely free. I'm not going to charge you for just an updated version of a class on the same product. That would be kind of silly in my opinion. So I hope you guys appreciate that. Again, if you have any questions, go over to theautomationschool.com; at the very top of the site you'll see links to contact me: set up a meeting, leave me a voicemail, fill out a form. I have many ways you can get in touch with me. And if you have multiple people you want to sign up, I do have multiple-seat discounts starting at three seats. I actually work with a number of Fortune 500 companies who enroll maybe 10 people at a time to get that discount. And you know what? Unlike the big vendors, if you sign somebody up and they don't take the courses, I'll let you replace that person free of charge. You don't have to pay anything extra.
If you sign up Joe and he decides to quit or leave or not to learn, you can put Bob in his place. That's not a problem. Now, I have seen some situations where the same seat kept getting replaced over and over; at some point I do charge a maintenance fee to switch the names out. And hey, look, if Joe leaves and he took, say, two out of three courses, I'll prorate refilling that seat with the new person. Whatever percentage of the lessons he took versus the total number of lessons, I'll prorate it. We've had a number of cases where somebody goes through half of the content and then leaves, so we can reset that seat for half price. And that's something you won't find any of the major vendors doing. So if you have any questions about that, reach out to me over at theautomationschool.com. And with that said, let's jump right back into this week's episode of The Automation Podcast.
Michael Bowne (PI): In a brownfield installation, what we're seeing is these cool little edge gateways. What they'll do is grab the bus, collect some data, and pump it out the other side via an IT protocol that the IT guys want to know about, like MQTT or OPC UA. Of course, in a greenfield, in a new installation where you've got a brand new PLC, yeah, get the data there. That guy has all the brains, all the information in one place, so get it from the PLC. But in brownfield, the edge gateways, and even some IO Link masters being put on the market, have not only an industrial Ethernet interface for control on one port, but on that same port the interface will also speak a higher-level IT protocol like MQTT or OPC UA, so the data is accessible even from the IO Link master.
So there are different ways to get it, and that's kind of the whole point: getting that data from the sensors to the master and then further upwards.
Shawn Tierney (Host): We actually covered a product on the show that had two ports. It had one for your fieldbus
Michael Bowne (PI): Yeah.
Shawn Tierney (Host): and a separate one for your IT, or your IoT, or your MQTT, which I thought was so inventive, because now the control system gets its data and it's under control, but reporting-wise, that's kind of the best of both worlds. You don't have to have two sensors; you can send the data both ways. It's amazing what you can do with these things. And a lot of the sensors you probably have out there, I've noticed that for some vendors, every sensor they sell is IO Link. So you may already have it installed and not know it, because of the small price difference to add it to some products. Once you get up to the fanciest sensors, of course, not the simplest sensors, there's a lot of horsepower in that chipset, so they can add IO Link for pennies on the dollar. Very interesting stuff, though.
Michael Bowne (PI): Yeah, that's a good point. And of course, we could spend all day talking about IT, OT, the segmentation of networks, and who owns the IP addresses. That's a whole separate topic. But in cases like that, yeah, it's cool. You've got a separate port. IT can do what they want on their port, but hey, don't touch me in the control realm, because
Shawn Tierney (Host): Mhmm.
Michael Bowne (PI): this is my realm.
And you bring up another good point. I don't want to say there's a thick black line between, okay, this sensor is simple, therefore it should have IO Link, and this sensor is complex, therefore it should have its own industrial Ethernet interface. There's a bit of a gray area, but you're right, we kind of leave it up to the vendors to decide. Hey, my thing needs the horsepower and it's so complex that I need something like an industrial Ethernet protocol. But, you know what, this other sensor line is tailored for low cost, so I'm going to put IO Link on it. That's up to them to decide. So when we talk about IO Link in terms of benefits, we like to make the analogy with USB, because everybody knows USB. You've got your USB cable. You plug it into your computer on one end; on the other end you plug in your mouse, or your keyboard, or your printer. It automatically works, and it always uses the same cable. Everybody's using that interface, and we see the same thing with IO Link: it's just a unified, unshielded, three-wire sensor cable, and it can be used with all IO Link devices. Memory and computing power have gotten smaller and cheaper, so you have smart devices, but up until now, to get that extra information out, you would need multiple cables. The wiring is time consuming, it's expensive, the cables are large and costly to install and maintain. But with IO Link, you just plug it in. It's a simple M12 plug, and you don't have all these spare parts of different cable types. It's just one cable: easy to maintain, thin, flexible. I've got an example here I'd like to highlight, and I'll try to talk through it for those that are listening instead of viewing.
This is an example of 256 IOs via 16 fieldbus modules, so fieldbus remote IOs or whatever the case may be. We're connecting them to a PLC out in the field, and to do that we would need 16 fieldbus modules. These are, let's call them, simple DI digital input proximity sensors.
Shawn Tierney (Host): Mhmm.
Michael Bowne (PI): With IO Link, we can do that via just one fieldbus module. That's just one IP address, one IO Link master. So already you're cutting out 15 of those more expensive devices. And then we use so-called IO Link hubs, which take those DI signals, put them all on one IO Link connection into the IO Link master, and send them out the other side. With that, we can connect these 272 IOs as shown here via just one fieldbus module. So it's showing huge savings on cost alone, due to the wiring. And that one cable fits all sensor types: simple sensors like a proximity sensor, all the way up to complex devices like pressure, temperature, signal lamps, and even simple actuators, all using the same IO Link cable.
Shawn Tierney (Host): So where an IO Link device would be giving you not just on or off but a lot of other information, including some of that analog information, if all you had was a dumb device, well, now I can put 16 of them or so together, bring them into a hub, and since each device only has an on or off, where a regular IO Link device would have lots of other information, you can just join them all together and say, okay, here we go, here's inputs one through x.
Michael Bowne (PI): It's almost like multiplexing: put it all together on one and pump it out the other side. Yeah.
Shawn Tierney (Host): Perfect.
Michael Bowne (PI): The other way we relate IO Link to USB is in the identification and parameterization.
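The hub idea just described, taking sixteen on/off signals and carrying them over a single IO Link connection, is essentially bit-packing. Here is a minimal sketch of the concept in Python; the real byte layout of a given hub is defined by its IODD, so the layout below (channel 0 in the least significant bit, two bytes, little-endian) is only illustrative:

```python
def pack_digital_inputs(states):
    """Pack up to 16 on/off channel states into a 2-byte word,
    channel 0 in the least significant bit (illustrative layout)."""
    word = 0
    for i, on in enumerate(states):
        if on:
            word |= 1 << i
    return word.to_bytes(2, "little")

def unpack_digital_inputs(data, channels=16):
    """Recover the individual channel states on the controller side."""
    word = int.from_bytes(data, "little")
    return [bool((word >> i) & 1) for i in range(channels)]
```

Sixteen simple proximity sensors thus ride on one port and one IP address instead of sixteen fieldbus modules, which is where the wiring savings in the example come from.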
So look at how you plug your printer into your computer. You plug it in, and automatically your computer says, oh, okay, I know that's an HP something-something DeskJet printer, and okay, do you want to do color or black and white? Do you want to do duplex, printing back and front? And the same is true for IO Link. You plug that IO Link sensor into your IO Link master. It reads it, and the device says, hey, this is who I am, this is my type, this is my serial number. Every device has a vendor ID and a device ID. Then the IO Link master goes up and gets the IODD file, and I'll show that in a little bit, and then you can start the parameterization. It's just like USB: no special knowledge is required. You can make changes very easily, even on the fly, for example with an HMI on the machine. And the identification methods make sure you don't plug a wrong device into an IO Link port, which could stop the machine. It'll recognize that and prevent incorrect connections. It also lets you exchange devices of the same type very easily: same manufacturer, same device. So just like USB, it works in that way. And then the other way it's like USB is in the diagnostics, and this is a really powerful part of IO Link. When your printer says, I'm out of paper, or I'm out of toner, or there's a paper jam, it sends a standardized signal to your computer, and you know exactly what to do and why your printer isn't working. The same is true for IO Link. We've standardized these diagnostics. So this is a photo eye saying, hey, undervoltage, or overtemperature, or the window on the photo eye has gotten dirty, so signal quality is deteriorating.
So we standardized all this, so that these diagnostics all come in the same way and you can fix any problem as fast as possible to minimize downtime. And in the case of things like signal quality, hey, the window's getting dirty, this enables things like preventative maintenance. Oh, I know I'm going into a planned shutdown next week; now's the time to go out and clean those sensors, because I know the signal is deteriorating. Some cool things like that, which wouldn't be possible with a traditional analog signal, which we're showing here. And the traditional way really makes no sense. In this example, what we're showing is a generic pressure sensor. It does its measurement, then some amplification, and then to stabilize the signal it does an A-to-D and puts it into a micro, which does some temperature compensation and linearization. But traditionally, prior to IO Link, what you would then do is another D-to-A to send it out via zero to 10 volts or four to 20 milliamps, whatever, into an A-to-D card on the backplane of the PLC. I mean, this is just crazy. It's time consuming, the signal is still susceptible to interference, the analog input cards on the PLC are expensive, and there's manual calibration of the signal. But with IO Link, it just makes sense. You take that signal right from the micro and pump it out digitally via an inexpensive IO Link interface, and we use that unshielded, three-wire, inexpensive cable,
Shawn Tierney (Host): and
Michael Bowne (PI): then you get all those parameters and diagnostics. And really, that's the point of using IO Link: all that extra data, all that extra information that comes along with the process data.
Shawn Tierney (Host): Yeah.
And for those of you who are listening, what we saw there was that to shoot out a four to 20 milliamp or zero to 10 volt signal, the device had to convert the digital value inside it to analog and pump it out. And we always have to worry about noise and shielding and all that, depending on the length of the run. Then in the PLC analog card, it's converting from analog back to digital, so you have that zero to 32,000 value, or zero to 64,000, whatever your PLC does. IO Link eliminates that. It eliminates the noise of your traditional analog. And I know I've met so many customers who say, we have no noise issues on our analog, and that's great, but not everybody's in that same boat. So you're eliminating that D-to-A and then A-to-D, and you're keeping everything digital. You're not only getting a cleaner, more accurate value from your device, you're also getting all those additional pieces of information and the ability to maybe configure it per product. Some of these devices need to be changed based on the type of product they're sensing: the type of fluid going through, the recipe being run, the lighting, the colors. With a typical analog signal, you're not going to be able to send back a configuration to it. So, go ahead. Back to you, Michael.
Michael Bowne (PI): No, you're right. Exactly. I took this slide out of the deck for brevity, but we show examples, particularly in food and bev, where you have batches: I'm running a different batch, I'm running a different product, I need a different label on the bottle or whatever I'm running through the machine. You reconfigure that via the HMI, it sends all that down to the sensors, and okay, now I know I should be sensing this instead of this.
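The D-to-A and A-to-D round trip Shawn describes can be made concrete with a small sketch. This is a simplified model (real installations add noise and calibration error on top; the 32,000-count span is just the example figure from the conversation) showing the quantization the legacy chain imposes, which disappears when the value stays digital end to end over IO Link:

```python
def analog_round_trip(value, lo, hi, adc_span=32000):
    """Emulate the legacy chain: device's internal digital value ->
    4-20 mA output -> PLC analog input card -> integer count -> rescale."""
    ma = 4.0 + (value - lo) / (hi - lo) * 16.0    # D-to-A in the sensor
    count = round((ma - 4.0) / 16.0 * adc_span)   # A-to-D on the PLC card
    return lo + count / adc_span * (hi - lo)      # rescale in the PLC

# With IO Link the value never leaves the digital domain, so this whole
# round trip, and its quantization step of (hi - lo) / adc_span, goes away.
```

On a 0 to 10 unit range with a 32,000-count card, the best case is a resolution of about 0.0003 units, before any analog noise is considered.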
Shawn Tierney (Host): Yeah. It could be a clear-bottle detector where the bottles change colors, so it needs a different setting. Or it could be background suppression, where depending on the color of the product you need a different setting. Or a color sensor: maybe you're making different products in different colors. All of this is now configurable through your PLC, through your control system, through your HMI, which I just think is so cool.
Michael Bowne (PI): Yeah, it's super cool. All right, let's get a little bit technical here; I think for some of the engineers that might be nice. The IO Link signal and 24 volt power supply, like we talked about before, come over an M12 connector, so you've got five pins. Pin one is your high, pin three is your low, and pin four is your C/Q line. That's where the IO Link digital signal lives. It's serial, it's bidirectional, it's point to point. And on that same pin four, if you so desired, you could parameterize your device via IO Link, set it all up, and then put it in what's known as SIO mode, or simple IO mode, and I'll show that on the next slide, for when you've just got a digital IO and you want a fast-switching interface. So pins one and three are power, and pins two and five are freely assignable. For example, if you wanted to use pin four for your IO Link signal and then separately have your own DI or DQ line, you could do that using a three-wire, four-wire, or five-wire cable. And what's also cool in IO Link, and we're starting to see this more and more, is what we call port class B: same M12 connector, same five pins, but pins two and five provide a separate power supply for additional power. Because, and this is cool, we're starting to see more and more simple IO Link actuators
Shawn Tierney (Host): Mhmm.
Michael Bowne (PI): on the market. And that's really neat.
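The pin assignments Michael walks through can be summarized in one lookup table. The wording of the descriptions below is mine, but the assignments follow what's described in the episode, with port class B reusing pins two and five as a separate actuator supply:

```python
# M12 5-pin assignments for an IO Link port, per the episode's description.
PORT_CLASS_A = {
    1: "L+ (24 V supply)",
    2: "freely assignable (e.g. separate DI/DQ)",
    3: "L- (0 V)",
    4: "C/Q (IO Link signal, or SIO digital IO)",
    5: "freely assignable",
}

# Port class B: same connector, but pins 2 and 5 carry extra power
# for actuators and other higher-draw devices.
PORT_CLASS_B = {
    **PORT_CLASS_A,
    2: "2L+ (separate actuator supply)",
    5: "2L- (separate supply return)",
}
```

The key point is that the IO Link signal itself only ever needs pins one, three, and four, which is why a plain unshielded three-wire sensor cable suffices.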
So let’s say you’ve got some simple linear actuator, not not a complex, you know, driver, you know, or motor or something like that, but a a simple linear actuator. You can drive that via IO Link if you just gotta move something really, you know, maybe maybe even within connected to the same ports, on the master as some other sensors, and so you can do that logic in the master itself, you know, simple simple stuff like that. But that’s also possible with IO Link where you can drive it, not just sense it, but also actuate it with with IO Link. So that’s that’s some cool stuff that’s coming down the line. Shawn Tierney (Host): You know, and I found that all the IO Link devices I had here, they came with the SIO mode already set up. So I was able to use the photo eyes and the proxies and all the other devices just as simple IO devices and without even touching the IO Link side of it, which I think is cool because, you know, in in many cases, you just need a photo eye to get up and running. Right? Michael Bowne (PI): Yeah. And that’s and that’s how they come out of the box. So out of the box, it’s in that CO mode. And I think you you kinda touched on this before. Maybe many customers have IO Link devices Yeah. On their machine. They don’t even know it Mhmm. Shawn Tierney (Host): Because they Michael Bowne (PI): took it out of the box. They needed that photo. They plugged it in and away they went. But there’s also that all all that extra stuff. If they wanted to, they could get down into the IO Link part of it. Mhmm. Maybe to reparameterize it, or what if you got to change, you still wanna use the CO mode. You just want that digital input. What if you wanna change the switching distance, for example, something like that? I don’t want it to switch at one meter. I want it to switch at two meters or whatever. So all that all that can be configured via IO Link. 
If we talk about the IO Link communication itself, there are three transmission speeds: COM1, COM2, and COM3. COM1 is 4.8 kilobits per second, COM2 is 38.4 kilobits per second, and COM3 is 230.4 kilobits per second. IO Link masters support all three COM modes, but devices are free to choose based on what they're sending. If it's temperature, maybe you don't need COM3, because that's changing more slowly than, say, a proximity sensor, which may want to send a little more quickly and uses COM3. Many, many devices use COM3, because 230 kilobits per second is not going to kill you. And then a typical cycle time, because this is the question we get all the time, what kind of cycle time can be achieved, is about a millisecond at COM3. So if you're trying to go sub-millisecond, maybe IO Link is not the solution at that point. But for many, many applications, that one-millisecond cycle time can accomplish whatever they need. And what's cool is that from the IO Link master's perspective, it'll have eight or 16 sensors connected to it, and each device can be set independently. On this port, this device, I'm talking at this COM rate and this cycle time; on port number two, I'm speaking at a different transmission speed and a different cycle time, and so on, so that you're not sending data unnecessarily, data that's simply being sent for the purpose of being sent. And that's pretty cool.
Shawn Tierney (Host): And a lot of times you don't need the speed, because you're not reading a digital on/off, you're actually getting a value, and a lot of times your PLC is not going to be running faster than a millisecond scan time.
So if you’re getting your value updated, you know, faster than the PLC, then that’s a then then that’s really what you need. Do you know how fast is your PLC running? How fast can your program controller use that value? And, you know, I’d be hard pressed to see a lot of applications where they’re breaking that one millisecond update rate. The other thing too is just because we’re talking at the speed doesn’t mean the actual calculation is even possible in a millisecond. So, you know, temperature changes, things that that sensors there’s limit limitations to the physical world. You know? And, you know, I I don’t know if anybody’s ever said this to you before, Michael, but when I first saw the whole comm thing, I thought that was confusing because having grown up with PCs, I always thought of comp one, comp two, comp one group. Right? And these are really just bought what I would call from the old days, sewer rates. Right? Michael Bowne (PI): Yeah. Exactly. Shawn Tierney (Host): Exactly. Insight why why they is it just maybe because it was the standard started overseas or any idea why they went with CALM? Michael Bowne (PI): I’m not gonna lie to you. That’s the first time I’ve gotten that question. Shawn Tierney (Host): Really? Okay. Michael Bowne (PI): Why they’re called that yeah. Let’s just let’s just rewrite this. They call it BOD one, BOD two, BOD three. Shawn Tierney (Host): I know. It’s just so weird. But, anyways, sorry sorry, audience. I just have Michael Bowne (PI): That’s a good one. That’s a good one. Nope. I’ll take that one back. Alright. So IO Link data comes in a couple different flavors. You have your process data. That’s your bread and butter, what you’re using to run the run the factory. Transmitted cyclically in a Telegram, the the data size is defined by the device, and it can be up to 32 bytes for each device, both input and output. 
Along with that comes a valid bit indicating whether the process data is valid or invalid, which is transmitted cyclically with the process data. And then you have things that happen acyclically. These would be device data like parameters, identification data, and diagnostic information, and these happen on request of the IO Link master. Obviously a lot of that happens during startup, but it can also happen during runtime, as shown here on the slide. In the last case, events can be error messages: the device will set a flag, hey, there's a short circuit or so, and then the master can poll that device for more diagnostic information based on that event flag set by the device. And so the question we always get at this point is, how do I make this all work? How do I integrate this stuff into my plant?
Shawn Tierney (Host): Before we go any further, I did want to jump back and tell you about a service I'm doing that I don't think I've talked about very much, and it comes in two different flavors. First of all, I've actually had some vendors and companies reach out to me and say, Shawn, I know you don't want to travel all around the country with all your equipment, that's not what you do, but we want you to come out and teach us something. Would you come out and do a lecture? We'll set up our own equipment. Can you come out, run us through some of the products, and teach us some of your knowledge, without having to worry about bringing all the equipment with you? That's something I really don't talk about much, but I do want to tell you: if you're looking for training and you need it on-site, you do of course have to pay for my travel time, but I can come out for a day, or two days, or a week and do training on any of the products I currently train on online.
Now, if you want me to come out and do training on a product I don't already have a curriculum on, I can't do it. Building the curriculum is where all of my cost is on the training. Well, I shouldn't say all of it; the web service on the back end does cost something every month as well. But that's really what being self-employed is: it's time, and most of the time goes into building the curriculum. So if you have a need like, Shawn, we can't do a webinar, we can't do a Teams meeting, we can't do online training, we want you to come out, and again, I just got a call on this yesterday, yes, I can do that, as long as the curriculum I'm going to teach is something I already have. I'm not going to hand out lab books, but we can buy you lab books if you want; people sell great lab books for $80 or $90 a pop, and I'd be more than happy to include that in the quote. But in any case, that's one thing I do. The other thing I've been doing with vendors is they've hired me to come out and interview them at their trade shows. Usually what happens is somebody will sponsor a podcast for $599. They'll come on, we'll do the interview, I'll edit it all up, I'll put their links in, we'll talk about the thumbnail, and then we'll release it ad-free. That roughly covers my cost of producing the episode. We just raised it from $499 to $599 because we were actually upside down on most of the shows, so we needed to raise it a little to make sure we're covering our costs. But sometimes vendors have their own trade show, and they may have all of their product specialists there, and they're like, hey Shawn, we would like to do six or seven interviews at the trade show. Would you come out and record them there? We'll pay your flight, your hotel, and your expenses to get there and back.
And so that’s another thing I haven’t talked about much that I’m doing. I’m working with some, you know, top five vendors to do that, and I’ve done it in the past. And so I did wanna explain it to you if you’re a vendor listening or if you are, talking to your vendor, like, you should have Shawn come out and interview all your people. You have them all in one place. Let them know that they can contact me about doing that. Again, you can contact me at theautomationblog.com, LinkedIn, YouTube, theautomationschool.com, pretty much any way you want. You can write me snail mail if you want. But in any case, I do wanna share that, and we also have in person training. I think I’ve talked to you guys about this quite a bit. We do custom in person training for as little as two people, $900 a day up to four people. And so if you wanna get some people in here, we can actually do Allen Bradley and Siemens in two days back to back. One day Allen Bradley, one day Siemens. So if you wanna learn two PLCs in two days back to back now I do have somebody ask me, hey, Shawn. Where’s your schedule of upcoming courses? And back in my previous life of twenty five years, we were always trying to sign people up and then canceling, you know, events and classes because, we wouldn’t get enough people to meet the vendors minimum. So I don’t wanna do that. So I don’t have actually any dates now. I have been talking with doing a intensive POC boot camp, but, you know, I just got so much things going on in my life right now that I don’t think I could pull that together this fall. But in any case, if you need some training, you wanna send your people here, we can even start at, like, noontime and then end the final day at noontime so you can get your flights and travel and all that. We’re one hour away from Albany, New York, and that’s a great little airport to fly in and out of. Actually, I’m flying out of it in November. They’ll go to a trade show, to interview vendors, vendors, product people. 
But in any case, I just wanted to break in. There were some things about my company I don't think I've ever talked to you guys much about, and since I'm sponsoring this episode and eating the cost to produce it, I wanted to share them with you. Now, I won't be back until the end of the show, so please enjoy the rest of this episode, send any feedback you have to me, and we'll talk at the end of the show.
Michael Bowne (PI): And it kind of works like this. You have your IO Link device, which has an IODD file, which we mentioned earlier, that gets ingested by a parameterization tool. The parameterization tool comes with the IO Link master. It could be a separate piece of software; in some cases it could be a web page built into the IO Link master itself. It depends on the vendor. But what happens after that, how the data goes from the IO Link master to the controller, the PLC, is fieldbus specific. So you have your fieldbus file, a GSD or EDS or ESI, whatever the case may be, which is ingested by the engineering tool of the PLC, and that's way outside the scope of IO Link. So via the EDS or GSD file, that data then gets sent over the fieldbus, and it's the sum of all the IO Link device data from all the ports on the IO Link master, while the IO Link communication, as defined by the IODD file, configures the port for the master and for the devices. The IODD file is provided by the device: every device manufacturer must provide an IODD for their device. It can be downloaded from the IODD finder, which is a website, and it describes what the entire device does: the process data length, the process data structure, the names of the parameters, what range to expect, the data types, and the addresses of the parameters in the indexes and subindexes.
It can talk about GUI information, pages on which a parameter shall be displayed, names of parameter pages; all this kind of stuff is in an IODD file. It’s a zip file containing the IODD as an XML file; that’s how the file is formatted. So, and this is the key part, it’s both machine readable and human readable. It’s got a little picture of the device, a picture of the manufacturer logo. And with your permission, maybe I can show the IODD finder. It’s ioddfinder.io-link.com. Mhmm. Looks simple enough. Let’s say we wanna look at, I’m gonna type in something here, "Max ref." Let’s pick this. So this is just a reference design, not an actual product that an end user would employ in their factory, but a reference design of something that maybe a device manufacturer would use. And it shows the manufacturer name, the article number, the product name, the device ID. All that stuff is ingested by the parameterization tool, which then uses that information to go up to the IODD finder and grab the IODD file shown here, which can be downloaded if you wanted to look at it yourself. But in the past few years we implemented what’s called an IODD viewer, which is pretty cool: it takes that XML file and parses it into human-readable form. So if you’re an end user and you wanna quickly compare the IODD file from device vendor A to device vendor B to see what kind of features they have, you can do that all very easily, and that’s shown here in the IODD viewer. What’s really neat about this IODD finder is that it gets accessed two ways. There’s the website I just showed, which humans access, but it’s also accessible via API. And we track the traffic to the IODD finder, and the vast, vast majority of the traffic comes via API. So these are IO-Link masters that just had a device connected to them. Or, I’m sorry,
parameterization tools, you know, connected to the IO-Link master that just had a device connected. They go up to the IODD finder and pull down that IODD file for the device that was just connected, so that it can now be configured. And that’s really, really cool stuff. So all these IODD files are in one spot, in one database up there, for viewing via the IODD viewer or for access from any number of IO-Link tools out there. Shawn Tierney (Host): So when we’re talking about API access, we’re talking about the tool we’re using to configure the master. It could be a web page built into the master, or it could be a separate software program. Do I have that correct? Michael Bowne (PI): Yeah, right. The parameterization tool is usually a software package that’ll run on your computer, connecting to your IO-Link master and parameterizing it. Shawn Tierney (Host): Excellent. Michael Bowne (PI): Or through the network somehow, maybe through the network. It goes out and grabs that IODD file from the IODD finder to, you know, parameterize that port and that device. Shawn Tierney (Host): Which is excellent, because in previous iterations of smart networks and smart devices, you always had to go searching a vendor’s website, and people would get the wrong file, and I’d be in the field saying, "This is never gonna work, because you’ve got the wrong device file." If they can’t give you the right device file, you’ll never get it to work, you know? So this is much better: the organization requires everybody who has IO-Link to put their IODD files in the one place, so everybody can always find them, and the software tools can find them automatically for you, which is just a huge change versus what we went through in the nineties. Michael Bowne (PI): Exactly. They came on a CD or something, or, I mean, God only knows.
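Since the IODD inside that zip is plain XML, pulling out the parameter list is only a few lines of scripting. The sketch below parses a minimal IODD-like fragment; the element names and the namespace URI follow my understanding of the IODD 1.1 schema and should be treated as assumptions, not as the normative schema.

```python
import xml.etree.ElementTree as ET

# Minimal IODD-like fragment for illustration. Element and attribute names
# are assumptions based on the IODD 1.1 schema, not a real device file.
IODD_XML = """<?xml version="1.0"?>
<IODevice xmlns="http://www.io-link.com/IODD/2010/10">
  <ProfileBody>
    <DeviceFunction>
      <VariableCollection>
        <Variable id="V_Temperature" index="64" accessRights="ro"/>
        <Variable id="V_SetPoint" index="65" accessRights="rw"/>
      </VariableCollection>
    </DeviceFunction>
  </ProfileBody>
</IODevice>"""

NS = {"iodd": "http://www.io-link.com/IODD/2010/10"}

def list_parameters(xml_text: str):
    """Return (id, index, access) for every Variable element in the IODD."""
    root = ET.fromstring(xml_text)
    return [
        (v.get("id"), int(v.get("index")), v.get("accessRights"))
        for v in root.iterfind(".//iodd:Variable", NS)
    ]

for pid, index, access in list_parameters(IODD_XML):
    print(f"index {index}: {pid} ({access})")
```

This is the "machine readable" half Michael mentions; the IODD viewer essentially does the same parse and renders the result for humans.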
I’m gonna switch gears a little bit here and talk about two subtopics within the IO-Link domain, and one of them is IO-Link Wireless. This is what we call bridging the gap. It’s an IEC standard, IEC 61139, as of November 2023, and it’s enabling connections that simply weren’t possible before for IO-Link. In the example here, we’re showing a smart machine tool where the IO-Link sensor is integrated into the chuck of the lathe. Now, that guy is spinning at 6,000 RPM. That connection simply couldn’t be done any other way than with IO-Link Wireless. Or, let’s say, independent movers. You’ve seen these moving systems where you’ve got movers either floating or on a rail, the track systems, exactly. If you integrate the smarts of IO-Link onto the movers themselves instead of using SCARA or delta robots, you’re saving huge amounts of cost that way, if those guys can move on their own, and they use IO-Link Wireless to do that. Slip rings, where sending power is well known, but sometimes communication can be tricky via slip ring. Mhmm. Yeah. End-of-arm tooling, like robot end of arms where you’re gonna change the tool at the end of the arm: it’s more lightweight, saving on robot cost that way; fewer, lighter robots can be used. But it’s cool. The architecture looks pretty much the same, where you have your field level, your IO, and instead of wired connections, it’s simply a wireless connection. And what’s different about IO-Link Wireless is that it was built for industry. I think in the past, people have been burned by wireless technologies that made some promises that maybe, you know, they couldn’t meet the rigorous environment and requirements of industry, but this is different.
It was built for industry from the start. It uses the 2.4 GHz license-free ISM band, and what we do is frequency hopping, using the same IO-Link data structure, on a cycle of five milliseconds. So you’re not going to get the one-millisecond cycle time that you get with wired IO-Link; we do a five-millisecond cycle time using this frequency-hopping method. It’s basically a cable-grade connection, with a 10^-9 error probability. You can have hundreds of wireless devices in a machine, and it’s deterministic. It’s designed from the outset for control, of course, but also for monitoring, say in a brownfield: where you can’t get wired IO-Link to a sensor, you can maybe use IO-Link Wireless to get access to some hard-to-reach sensor. Shawn Tierney (Host): Well, you know, I thought that I think this is so
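As a toy illustration of the frequency-hopping idea Michael describes, here is a sketch that assigns a pseudo-random but reproducible 2.4 GHz ISM channel to each 5 ms cycle. The channel grid, hop selection, and seed handling are invented for illustration only; the real hop tables and coexistence rules come from the IO-Link Wireless specification.

```python
import random

# Toy model of frequency hopping in the 2.4 GHz license-free ISM band.
# Channel spacing and count here are illustrative assumptions.
ISM_CHANNELS_MHZ = list(range(2402, 2480, 2))  # 2402..2478 MHz in 2 MHz steps
CYCLE_MS = 5  # the 5 ms wireless cycle time, vs. 1 ms for wired IO-Link

def hop_sequence(seed: int, cycles: int):
    """Deterministic per-cycle channel choice: same seed, same sequence."""
    rng = random.Random(seed)
    return [rng.choice(ISM_CHANNELS_MHZ) for _ in range(cycles)]

for i, freq in enumerate(hop_sequence(seed=42, cycles=4)):
    print(f"cycle {i} (t={i * CYCLE_MS} ms): transmit on {freq} MHz")
```

The point of the sketch is only the mechanism: both ends share the seed, so they agree on the channel for every cycle without any coordination traffic, and a jammed channel only costs one cycle before the link hops away from it.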
Peter Seeberg talks to Thomas Obermeyer, Lead Dataspace Architect at Catena-X about OPC UA and Catena-X.
Shawn Tierney meets up with Tom Weingartner of PI (PROFIBUS PROFINET International) to learn about PROFINET and System Redundancy in this episode of The Automation Podcast. For any links related to this episode, check out the “Show Notes” located below the video. Watch The Automation Podcast from The Automation Blog: Listen to The Automation Podcast from The Automation Blog: The Automation Podcast, Episode 244 Show Notes: Special thanks to Tom Weingartner for coming on the show, and to Siemens for sponsoring this episode so we could release it ad free on all platforms! To learn more about PROFINET, see the links below: PROFINET One-Day Training Slide Deck PROFINET One-Day Training Class Dates IO-Link Workshop Dates PROFINET University Certified Network Engineer Course Read the transcript on The Automation Blog: (automatically generated) Shawn Tierney (Host): Welcome back to the Automation Podcast. My name is Shawn Tierney from Insights In Automation, and I wanna thank you for tuning back in this week. Now, on this show I actually had the opportunity to sit down with Tom Weingartner from PI to learn all about PROFINET. I actually reached out to him because I had some product vendors who wanted me to cover the S2 features in their products, and I thought it would be better to first sit down and get a refresher on what S2 is. It’s been five years since we’ve had a PROFINET expert on, so I figured now would be a good time before we start getting into how those features are used in different products. With that said, I also wanna mention that Siemens has sponsored this episode, so it will be completely ad free. I love it when vendors sponsor the shows: not only do we get to break even on the show itself, we also get to release it ad free and make the video free as well. So thank you, Siemens. If you see anybody from Siemens, thank them for sponsoring the Automation Podcast. As a matter of fact, thank any vendor who’s ever sponsored any of our shows.
We really appreciate them. One final PSA I wanna throw out there. I talked about this yesterday on my show, Automation Tech Talk: as we’ve seen with the Ethernet PLCs we’re talking about, a lot of micro PLCs that were $250 ten years ago are now $400. That’s a lot of inflation, for various reasons. And so one of the things I did this summer is I took a look at my P&L, my profit and loss statements, and I just can’t hold my prices where they are and be profitable. If I’m not breaking even, the company goes out of business, and we’ll have no more episodes of the show. So how does this affect you? If you are a student over at The Automation School, you have until mid-September to do any upgrades or purchase any courses at the 2020 prices. Alright? I don’t wanna raise the prices, and I’ve held out as long as I can, but at some point you have to give in to the prices your vendors are charging you, and you have to raise your own. All my courses are buy once, own forever, so this does not affect anybody who’s already enrolled in a course. Actually, all of you folks enrolled in my PLC courses are seeing updates every week now, and those who got the ultimate bundles are seeing new lessons added to the new courses, because you get that preorder access plus some additional stuff. In any case, again, I wanna reiterate: if you’re a vendor who has an old balance, or if you’re a student who wants to buy a new course, please make your plans in the next couple of weeks, because in mid-September I do have to raise the prices. I just wanted to throw that PSA out there, because I know a lot of people don’t get to the end of the show, so I wanted to do it at the beginning. With that said, let’s jump right into this week’s podcast and learn all about PROFINET. I wanna welcome to the show Tom from PROFIBUS/PROFINET North America. Tom, I really wanna thank you for coming on the show.
I reached out to ask you to come on and talk to us about this topic. But before we jump in, could you first tell the audience a little bit about yourself? Tom Weingartner (PI): Yeah, sure. Absolutely, Shawn. I’m gonna jump to the next slide and let everyone know. As Shawn said, my name is Tom, Tom Weingartner, and I am the technical marketing director at PI North America. I have a fairly broad set of experiences, ranging from ASIC hardware and software design, and then I moved into things like avionics systems design. But it seemed like no matter what I was working on, it always centered around communication and control. That’s actually how I got into industrial Ethernet, and I branched out from protocols like MIL-STD-1553 and ARINC 429 to other serial-based protocols like PROFIBUS and Modbus. And, of course, that naturally led to PROFINET and the other Ethernet-based protocols. I also spent quite a few years developing time-sensitive networking solutions, but now I focus specifically on PROFINET and its related technologies. And with that, I will jump into the presentation here. Now that you know a little bit about me, let me tell you a little bit about our organization. We are PROFIBUS and PROFINET International, or PI for short. We are the global organization that created PROFIBUS and PROFINET, and we continue to maintain and promote these open communication standards. The organization started back in 1989 with PROFIBUS, followed by PROFINET in the early two thousands. Next came IO-Link, a communication technology for the last meter, and that was followed by omlox, a communication technology for wireless location tracking. And now, most recently, MTP, or Module Type Package, a communication technology for easier, more flexible integration of process automation equipment.
Now we have grown worldwide to 24 regional PI associations, 57 competence centers, eight test labs, and 31 training centers. It’s important to remember that we are a global organization, because if you’re a global manufacturer, chances are there’s PROFINET support in the country in which you’re located, and you can get that support in the country’s native language. In the lower right part of the slide here, we are showing the technologies under the PI umbrella. And I really wanted to point out that all the technologies within the PI umbrella are supported by a set of working groups. These working groups are made up of participants from member companies, and they are the ones that actually create and update the various standards and specifications. Also, all of these working groups are open to any member company. So, PI North America is one of the 24 regional PI associations, and we were founded in 1994. We are a nonprofit, member-supported organization where we think globally and act locally. Here in North America, we are supported by our local competence centers, training centers, and test labs. Competence centers provide technical support for things like protocol, interoperability, and installation-type questions. Training centers provide educational services for things like training courses and hands-on lab work. And test labs are, well, just that: labs that provide testing services and device certification. Any member company can be any combination of these three. You can see here, if you’re looking at the slide, that the PROFI Interface Center is all three, while JCOM Automation is both a competence center and a training center. And here in North America, we are pleased to have HMS as a training center and Phoenix Contact also as a competence center. Now, one thing I would like to point out to everyone, something you should be aware of, is that every PROFINET device must be certified.
So if you make a PROFINET device, you need to go to a test lab to get it certified, and here in North America you certify devices at the PROFI Interface Center. So I think it’s important to begin our discussion today by talking about the impact digital transformation has had on factory networks. There has been an explosion of devices in manufacturing facilities, and it’s not uncommon for car manufacturers to have over 50,000 Ethernet nodes in just one of their factories. Large production cells can have over a thousand Ethernet nodes in them. But the point is that all of these nodes increase the amount of traffic automation devices must handle. It’s not unrealistic for a device to have to deal with over 2,000 messages while it’s operating, while it’s trying to do its job. And emerging technologies like automated guided vehicles add a level of dynamics to the network architecture, because they’re constantly entering and leaving various production cells located in different areas of the factory. And, of course, as these factories become more and more flexible, networks must support adding and removing devices while the factory is operating. So, in response to this digital transformation, we have gone from rigid, hierarchical systems using fieldbuses to industrial Ethernet-based networks where any device can be connected to any other device. This means devices at the field level can be connected to devices at the process control level, the production level, even the operations level and above. But this doesn’t mean the requirements for determinism, redundancy, safety, and security are any less on a converged network. It means you need a network technology that supports these requirements, and this is where PROFINET comes in. So to understand PROFINET, I think it’s instructive to start with the OSI model, since the OSI model defines networking, and, of course, PROFINET is a networking technology.
The OSI model is divided into seven layers, as I’m sure we are all familiar with by now, starting with the physical layer. This is where we get access to the wire, turning electrical signals into bits. Layer two is the data link layer, and this is where we turn bits into the bytes that make up an Ethernet frame. Layer three is the network layer, and this is where we turn Ethernet frames into IP packets. So I like to think of Ethernet frames being switched around a local area network, and IP packets being routed around a wide area network like the Internet. The next layer up is the transport layer, and this is where we turn IP packets into TCP or UDP datagrams. These datagrams are used based on the type of connection needed to route IP packets: TCP datagrams are connection based, and UDP datagrams are connectionless. But regardless of the type of connection, we typically go straight up to layer seven, the application layer. And this is where PROFINET lives, along with all the other Ethernet-based protocols you may be familiar with, like HTTP, FTP, SNMP, and so on. So then, what exactly is PROFINET, and what challenges is it trying to overcome? The most obvious challenge is environmental: we need to operate in a wide range of harsh environments. And, obviously, we need to be deterministic, meaning we need to guarantee data delivery. But we have to do this in the presence of IT traffic, or non-real-time applications like web servers. We also can’t operate in a vacuum: we need to operate in a local area network and support getting data to wide area networks and up into the cloud. So to overcome these challenges, PROFINET uses communication channels for speed and determinism, and it uses standard, unmodified Ethernet, so multiple protocols can coexist on the same wire. We didn’t have this with fieldbuses, right? It was one protocol, one wire.
But most importantly, PROFINET is an OT protocol running at the application layer, so that it can maintain real-time data exchange, provide alarms and diagnostics to keep automation equipment running, and support topologies for reliable communication. So we can think of PROFINET as separating traffic into a real-time channel and a non-real-time channel. Messages with a particular EtherType, it’s actually 0x8892, though the number doesn’t matter, go into the real-time channel, and any other EtherType goes into the non-real-time channel. We use the non-real-time channel for acyclic data exchange, and the real-time channel for cyclic data exchange. Cyclic data exchange with synchronization we classify as time critical; without synchronization, it is classified as real time. But really, the point here is that this is how we can use the same standard, unmodified Ethernet for PROFINET as for any other IT protocol: all messages living together, coexisting on the same wire. So we take this a step further here, and we look at the real-time channel and the non-real-time channel, and these are combined into a concept that we call an application relation. Think of an application relation as a network connection for doing both acyclic and cyclic data exchange, and we do this between controllers and devices. This network connection consists of three different types of information to be exchanged, and we call these types of information communication relations. On the lower left part of the slide, you can see that we have something called a record data communication relation, and it’s essentially the non-real-time channel for acyclic data exchange, passing information like configuration, security, and diagnostics.
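The EtherType split described here is easy to demonstrate with raw bytes. This sketch builds two Ethernet II frames and classifies them exactly the way the text describes, by EtherType alone; 0x8892 is the PROFINET real-time EtherType mentioned above, while the MAC addresses and payloads are made up for illustration.

```python
import struct

PROFINET_RT_ETHERTYPE = 0x8892  # goes into the real-time channel
IPV4_ETHERTYPE = 0x0800         # IP traffic lands in the non-real-time channel

def build_frame(dst: bytes, src: bytes, ethertype: int, payload: bytes) -> bytes:
    """Assemble a raw Ethernet II frame (no FCS): dst MAC, src MAC, EtherType, payload."""
    return dst + src + struct.pack("!H", ethertype) + payload

def channel_for(frame: bytes) -> str:
    """Classify a frame by EtherType only, as the real-time split is described."""
    (ethertype,) = struct.unpack("!H", frame[12:14])
    return "real-time" if ethertype == PROFINET_RT_ETHERTYPE else "non-real-time"

dst = bytes.fromhex("010ecf000000")  # illustrative addresses only
src = bytes.fromhex("020000000001")
rt_frame = build_frame(dst, src, PROFINET_RT_ETHERTYPE, b"\x00" * 40)
ip_frame = build_frame(dst, src, IPV4_ETHERTYPE, b"\x00" * 40)

print(channel_for(rt_frame))  # real-time
print(channel_for(ip_frame))  # non-real-time
```

This is the "coexistence" point in miniature: both frames are standard, unmodified Ethernet on the same wire, and only the two-byte EtherType field decides which channel handles them.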
The IO data communication relation is part of the real-time channel, for doing the cyclic data exchange we need to periodically update controller and device IO data. And finally, we have the alarm communication relation. This is also part of the real-time channel, and it’s used for alerting the controller to device faults as soon as they occur, or when they get resolved. Now, on the right part of the slide we can see some use cases for application relations. These use cases are: a single application relation for controller-to-device communication, with an optional application relation for doing dynamic reconfiguration; an application relation for something we call shared device; and, of course, the reason we are here today talking about application relations at all, system redundancy. We’ll get into these use cases in more detail here in a moment. But first, I wanted to point out that when we talk about messages being non-real-time, real-time, or time-critical, what we’re really doing is specifying a level of network performance. Non-real-time performance has cycle times above 100 milliseconds, but we also use this term to indicate that a message may have no cycle time at all, in other words, acyclic data exchange. Real-time performance has cycle times in the one-to-ten-millisecond range, though that range can extend up to 100 milliseconds. Time-critical performance has cycle times of less than a millisecond, and it’s not uncommon to have cycle times around 250 microseconds or less. Most applications are either real-time or non-real-time, while high-performance applications are considered time-critical. These applications use time synchronization to guarantee data arrives exactly when needed, but we also must ensure that the network is open to any Ethernet traffic.
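The three performance levels can be captured as a tiny classifier over cycle times, using the boundary values quoted in this passage (the boundaries are treated as hard cutoffs here for simplicity, even though the text notes the real-time range is elastic):

```python
def performance_class(cycle_time_ms):
    """Map a cycle time in milliseconds to the classes described in the talk.

    None models acyclic data exchange (no cycle time at all), which the
    text groups with non-real-time.
    """
    if cycle_time_ms is None or cycle_time_ms > 100:
        return "non-real-time"
    if cycle_time_ms < 1:
        return "time-critical"
    return "real-time"

print(performance_class(None))  # acyclic record data -> non-real-time
print(performance_class(4))     # typical cyclic IO data -> real-time
print(performance_class(0.25))  # 250 microseconds -> time-critical
```

Seen this way, the record data communication relation lives in the first bucket and the IO data and alarm communication relations in the second or third, depending on whether synchronization is used.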
So, to achieve time-critical performance, and we do this for the most demanding applications like high-speed motion control, we added four features to basic PROFINET, and we call this PROFINET Isochronous Real Time, or PROFINET IRT. These added features are synchronization, node arrival time, scheduling, and time-critical domains. Now, IRT has been around since 2004, but in the future, PROFINET will move to a new set of IEEE Ethernet standards called Time-Sensitive Networking, or TSN. PROFINET over TSN will actually have the same functionality and performance as PROFINET IRT, but will be able to scale to faster and faster networks as bandwidth increases. This chart shows the differences between PROFINET RT, IRT, and TSN, and the main difference is, obviously, synchronization and the other features that guarantee data arrives exactly when needed. Notice, under the PROFINET IRT column here, that the bandwidth for PROFINET IRT is 100 megabits per second, while the bandwidth for PROFINET RT and TSN is scalable. Also, for those device manufacturers out there looking to add PROFINET IRT to their products, there are lots of ASICs and other solutions available in the market with IRT capability. Alright, so let’s take a minute here to summarize all of this. We have a single infrastructure for doing real-time data exchange along with non-real-time information exchange. PROFINET uses the same infrastructure as any Ethernet network. Machines that speak PROFINET do so using network connections called application relations, and these messages coexist with all other messages, so information can pass from devices to machines, to factories, to the cloud, and back. And if you take away nothing else from this podcast today, it is the word coexistence: PROFINET coexists with all other protocols on the wire.
So let’s start talking a little bit here about the main topic, system redundancy, and why we got into talking about PROFINET at all, right? I mean, why do we need system redundancy and things like application relations and dynamic reconfiguration? Well, it’s because one of the things we’re pretty proud of with PROFINET is not only the depth of its capabilities, but also the breadth of its capabilities. And with the lines blurring between what’s factory automation, what’s process automation, and what’s motion control, we are seeing all three types of automation appearing in a single installation. So we wanna make sure PROFINET meets requirements across the entire range of industrial automation. Let’s start out here by looking at the differences between process automation and factory automation, and then we’ll get into the details. First off, process signals typically change more slowly, on the order of hundreds of milliseconds versus tens of milliseconds in factory automation. And process signals often need to travel longer distances, and potentially into hazardous or explosive areas. Now, with process plants operating 24/7, 365, systems must provide high availability and support changes while the plant is in production. This is where system redundancy and dynamic reconfiguration come in; we’ll discuss these again here in just a minute. I just wanted to finish off this slide by saying that an e-stop is usually not possible, because while you can turn off the automation, that’s not necessarily gonna stop the chemical reaction, or whatever, from proceeding. Sensors and actuators in process automation are also more complex; typically, we call them field instruments. And process plants have many, many more IO, tens of thousands of IO, usually controlled by a DCS. And so when we talk about system redundancy, I actually like to call it scalable system redundancy, because it isn’t just one thing.
This is where we add components to the network to increase the level of system availability. There are four possibilities: S1, S2, R1, and R2. The letter indicates whether there are single or redundant network access points, and the number indicates how many application relations are supported by each network access point. So think of the network access point as a physical interface to the network, and, from our earlier discussion, think of an application relation as a network connection between a controller and a device. So S1 has single network access points: each device has a single network access point with one application relation connected to one controller. S2 is where we also have single network access points, but now with two application relations connected to different controllers. R1 is where we have redundant network access points, but each of these redundant network access points has only one application relation, and those are connected to different controllers. And finally, we can kinda go over the top here with R2, where we have redundant network access points, each with two application relations connected to different controllers. Shawn Tierney (Host): You know, I wanna just stop here and talk about S2, for the people who are listening, which I know is about a quarter of you guys out there. Think of S2 as having a primary controller and a secondary controller. If you’re seeing the screen, you can see I’m reading the slide. You have your primary and secondary controllers, right? One of each: the primary controller has application relation one, and the secondary has application relation two. And each device connected on the Ethernet has both one and two. So maybe you have a rack of IO out there; it needs to talk to both the primary controller and the secondary controller.
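The letter/number scheme for S1, S2, R1, and R2 can be written down as a small data model, which makes the combinatorics obvious. This is just a restatement of the slide as Tom explains it, not anything drawn from the PROFINET specification itself:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RedundancyClass:
    name: str
    network_access_points: int  # physical interfaces per device (letter: S=1, R=2)
    ars_per_nap: int            # application relations per interface (the number)

    @property
    def total_ars(self) -> int:
        """Total network connections the device maintains."""
        return self.network_access_points * self.ars_per_nap

CLASSES = [
    RedundancyClass("S1", network_access_points=1, ars_per_nap=1),
    RedundancyClass("S2", network_access_points=1, ars_per_nap=2),
    RedundancyClass("R1", network_access_points=2, ars_per_nap=1),
    RedundancyClass("R2", network_access_points=2, ars_per_nap=2),
]

for c in CLASSES:
    print(f"{c.name}: {c.network_access_points} NAP(s) x "
          f"{c.ars_per_nap} AR(s) = {c.total_ars} connection(s)")
```

Reading the table out loud recovers Shawn's S2 summary: one physical port, two application relations, one to the primary controller and one to the secondary.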
And so to me, that is kinda like your classic redundant PLC system, where you have two PLCs and a bunch of IO, and each piece of IO has to talk to both the primary and the secondary, so if the primary goes down, the secondary can take over. And I think that’s why there’s so much interest in S2: it’s that classic example. Now, Tom, let me turn it back to you. Would you say I’m right on that? Tom Weingartner (PI): Spot on. I mean, I think it’s great, and it really emphasizes the point that there’s that one physical connection on the network access point, but now we have two connections in that physical access point, right? So you can have one of those connections go to the primary controller and the other to the secondary controller, and in case one of those controllers fails, the device can still get the information it needs. So, yep, that’s how we do that. And just to put a finer point on R1: if you think about it, it’s S2, but now all we’ve done is split the physical interface. One of the physical interfaces has one of the connections, and the other physical interface has the other connection. So you really have the same level of redundant functionality, backup functionality with the secondary controller, but here you’re using multiple physical interfaces. Shawn Tierney (Host): Now let me ask you about that. As I look at R1, it seems like they connect port, I’ll just call it port one, on each device to switch number one, which in this case would be the green switch, and port number two of each device to switch number two, which is the blue switch. Would it be typical to have separate switches, a different switch for each port? Tom Weingartner (PI): It doesn’t have to be. I think we chose to show it like this for simplicity, kinda to Shawn Tierney (Host): Oh, I don’t care.
Tom Weingartner (PI): emphasize the point that, okay, here’s the second port going to the secondary controller, and here’s the first port going to the primary controller. We just wanted to emphasize that point, because sometimes these diagrams can be a bit confusing. Shawn Tierney (Host): And you may have an application that doesn’t require redundant switches, depending on maybe the MTBF of the switch itself or your failure mode on your IO. Okay, I’m with you. Go ahead. Tom Weingartner (PI): Yep. Good. Alright. So, that’s some excellent detail on that. And, if you don’t have any other questions, let’s move on to the next slide. So you can see in that previous slide how system redundancy supports high availability by increasing system availability, using these network access points and application relations. But we can also support high availability by using network redundancy. The way PROFINET supports network redundancy is through the use of ring topologies, and we call this media redundancy. The reason we use rings is that if a cable breaks, or the physical connection somehow breaks, or even a device fails, the network can revert to a line topology, keeping the system operational. However, supporting network redundancy with rings means we can’t use protocols typically used in IT networks, like STP and RSTP. That’s because STP and RSTP actually prevent network redundancy by blocking redundant paths in order to keep frames from circulating forever in the network. So in order for PROFINET to support rings, we need our own way to prevent frames from circulating forever in the network, and to do this we use a protocol called the Media Redundancy Protocol, or MRP. MRP uses one media redundancy manager for each ring, and the rest of the devices are called media redundancy clients.
Managers are typically controllers or PROFINET switches, and clients are typically the devices in the network. So the way it works is this: a manager periodically sends test frames around the network to check the integrity of the ring. If the manager doesn't get the test frame back, there's a failure somewhere in the ring. So the manager then notifies the clients about this failure, and then the manager sets the network to operate as a line topology until the failure is repaired. Right? And that's how we get network redundancy with our Media Redundancy Protocol. Alright. So now you can see how system redundancy and media redundancy both support high availability. System redundancy does this by increasing system availability. Media redundancy does this by increasing network availability. Obviously, you can use one without the other, but by combining system redundancy and media redundancy, we can increase the overall system reliability. For example, here we are showing different topologies for S1 and S2, and these are similar to the topologies that were on the previous slide. If you notice here, for S1 we can only have media redundancy, because there isn't a secondary controller to provide system redundancy. S2 is where we combine system redundancy and media redundancy by adding an MRP ring. But I wanted to point out that even though we're showing this MRP ring as a possible topology, there really are other topologies possible. It really depends on the level of system reliability you're trying to achieve. And so, likewise, on this next slide, we are showing two topologies for adding media redundancy to R1 and R2. For R1 we've chosen, again probably for simplicity's sake, to add an MRP ring for each redundant network access point. For R2, we do the same thing here.
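Tom's description of the MRP manager and its clients can be boiled down to a small sketch. The class and method names here are illustrative, not from the PROFINET spec: the manager's test frame only makes it all the way around the ring if every link is intact, and a missing test frame triggers the fallback to a line topology.

```python
# Illustrative sketch of the MRP idea (names are ours, not the spec's):
# the ring manager circulates a test frame; if it doesn't come back,
# a link has failed and the manager opens the ring into a line topology.

class MrpManager:
    def __init__(self, ring_links):
        self.ring_links = ring_links      # one bool per link, True = link OK
        self.topology = "ring"

    def send_test_frame(self):
        # The test frame only returns if every link in the ring is intact.
        return all(self.ring_links)

    def check_ring(self):
        if not self.send_test_frame():
            # Failure detected: notify the clients and fall back to a line.
            self.topology = "line"
        return self.topology

mgr = MrpManager([True, True, True, True])
assert mgr.check_ring() == "ring"
mgr.ring_links[2] = False                 # a cable breaks somewhere in the ring
assert mgr.check_ring() == "line"         # the network keeps operating as a line
```

The point of the sketch is the detection mechanism: the manager never needs to know *which* link failed, only that the loop is broken, which is why a single manager per ring is enough.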
We also have an MRP ring for each redundant network access point, but we also add a third MRP ring for the controllers. Now, this is really just to emphasize the point that you can come up with just about any topology possible, because it really depends on the number of ports on each device, the number of switches in the network, and, again, your overall system reliability requirements. So, in order to keep process plants operating twenty-four seven, three sixty-five, dynamic reconfiguration is another use case for application relations. This is where we can add or remove devices on the fly while the plant is in production. Because if you think about it, typically, when there is a new configuration for the PLC, the PLC first has to go into stop mode. It then needs to receive the configuration, and then it can go back into run mode. Well, this doesn't work in process automation, because we're trying to operate twenty-four seven, three sixty-five. So with dynamic reconfiguration, the controller continues operating with its current application relation while it sets up a new application relation. Right? Again, it's really trying to get a new network connection established. The controller then switches over to the new application relation after the new configuration is validated. Once we have this validation and the configuration is good, the controller removes the old application relation and continues operating, all while staying in run mode. Pretty handy stuff here for supporting high availability. Now, one last topic regarding system redundancy and dynamic reconfiguration, because these two PROFINET capabilities are compatible with a new technology called single-pair Ethernet, which provides power and data over just two wires. This version of Ethernet is now part of the IEEE 802.3 standard, referred to as 10BASE-T1L.
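The dynamic reconfiguration sequence described above, keeping the old application relation running while a new one is established and validated, then swapping without ever leaving run mode, can be sketched as a tiny state machine. All the names here are illustrative; the actual AR handling lives inside the PROFINET stack.

```python
# Illustrative sketch (not the PROFINET spec) of dynamic reconfiguration:
# the controller keeps exchanging I/O over its current application relation
# (AR) while it builds and validates a new one, then switches over,
# staying in run mode the whole time.

class Controller:
    def __init__(self):
        self.mode = "run"
        self.active_ar = "AR-1 (old configuration)"

    def reconfigure(self, new_config_valid):
        pending_ar = "AR-2 (new configuration)"
        # The old AR keeps running while the new AR is set up and checked.
        if new_config_valid:
            self.active_ar = pending_ar   # switch over to the validated AR;
            # the old AR is then removed, and we never left run mode.
        return self.mode, self.active_ar

plc = Controller()
mode, ar = plc.reconfigure(new_config_valid=True)
assert mode == "run" and ar.startswith("AR-2")
```

If validation fails, the controller simply keeps its current AR, which is the safety property that makes this usable in a plant that cannot stop.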
So 10BASE-T1L is the non-intrinsically-safe version of two-wire Ethernet. To support intrinsic safety, 10BASE-T1L was enhanced by an additional standard called Ethernet-APL, or Advanced Physical Layer. So when we combine PROFINET with this Ethernet-APL version of 10BASE-T1L, we simply call it PROFINET over APL. It not only provides power and data over the same two wires, but also supports long cable runs up to a kilometer, 10 megabit per second communication speeds, and can be used in all hazardous areas. So intrinsic safety is achieved by ensuring both the Ethernet signals and the power on the wire are within explosion-safe levels. And even with all this, system redundancy and dynamic reconfiguration work seamlessly with this new technology we call PROFINET over APL. Now, one thing I'd like to close with here is a final thought regarding a new technology I think everyone should become aware of. It's emerging in the market, it's quite new, and it's a technology called MTP, or Module Type Package. This is a technology being applied first to use cases considered to be a hybrid of both process automation and factory automation. So what MTP does is apply OPC UA information models to create standardized, non-proprietary, application-level descriptions for automation equipment. What these descriptions do is simplify the communication between equipment and the control system, and they do this by modularizing the process into more manageable pieces. So really, the point is to construct a factory with modular equipment to simplify integration and allow for better flexibility should changes be required. Now, with the help of the process orchestration layer and this OPC UA connectivity, MTP-enabled equipment can plug and operate, reducing the time to commission a process or make changes to that process. This is pretty cutting-edge stuff.
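The MTP idea Tom describes, a standardized, non-proprietary description of a module's services that an orchestration layer can read, can be loosely illustrated. This is not the actual MTP/AutomationML schema; the module name, endpoint, and service names below are all hypothetical, just showing the shape of "equipment ships with a machine-readable description of what it can do."

```python
# Loose illustration (NOT the real MTP schema) of the concept: a module
# carries a standardized description of its services and states, which a
# process orchestration layer can read to "plug and operate" the equipment
# over OPC UA. Every name here is a made-up example.

module_type_package = {
    "module": "DosingUnit-01",                  # hypothetical module
    "opcua_endpoint": "opc.tcp://module:4840",  # where the orchestrator connects
    "services": [
        {"name": "Dose", "parameters": ["volume_ml", "rate_ml_s"]},
        {"name": "Clean", "parameters": ["duration_s"]},
    ],
    "states": ["Idle", "Starting", "Execute", "Completing", "Stopped"],
}

def find_service(mtp, name):
    # The orchestration layer looks up a service by its standardized name
    # instead of needing vendor-specific integration code.
    return next(s for s in mtp["services"] if s["name"] == name)

assert find_service(module_type_package, "Dose")["parameters"][0] == "volume_ml"
```

The win is that the control system integrates against the description, not against each vendor's proprietary interface, which is what shortens commissioning.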
I think you're going to find and hear a lot more about MTP in the near future. Alright. So it's time to wrap things up with a summary of all the resources you can use to learn even more about PROFINET. One of the things you can do is get access to the PROFINET one-day training class slide deck by going to profinet2025.com, entering your email, and downloading the slides in PDF format. And what's really handy is that all of the links in the PDF are live, so information is just a click away. We also have our website, us.profinet.com. It has white papers, application stories, webinars, and documentation, including access to all of the standards and specifications. This is truly your one-stop shop for locating everything about PROFINET. Now, we do our PROFINET one-day training classes and IO-Link workshops all over the US and parts of Canada. So if you are interested in attending one of these, you can always find the next city we are going to by clicking on the training links at the bottom of the slide. Shawn Tierney (Host): Hey, guys. Shawn here. I just wanted to jump in for a minute for the audio audience to give you that website. It's us.profinet.com/odtc, that's oscar delta tango charlie. And I also went and pulled up the website, which, if you're watching, you can see here. But for those listening, these one-day PROFINET courses are coming to Phoenix, Arizona, August 26; Minneapolis, Minnesota, September 10; Newark and New York City, September 25; Greenville, South Carolina, October 7; Detroit, Michigan, October 23; Portland, Oregon, November 4; and Houston, Texas, November 18. So with that said, let's jump back into the show. Tom Weingartner (PI): Alright, one of our most popular resources is PROFINET University. This website structures information into little courses, and you can proceed through them at your own pace. You can go lesson by lesson, or you can jump around. You can even decide which course to take based on a difficulty tag.
Definitely make sure to check out this resource. We do have lots of great webinars, and they're archived on the website. Now, some of these webinars rehash what we covered today, but in other cases, they expand on what we covered today. Either way, make sure you share these webinars with your colleagues, especially if they're interested in any one of the topics that we have listed on the slide. And finally, the Certified Network Engineer course is the next logical step if you would like to dive deeper into the technical details of PROFINET. It is a week long, held in Johnson City, Tennessee, and it features hands-on lab work. And if you would like us to provide training to eight or more students, we can even come to your site. If you would like more details about any of this, please head to the website to learn more. And with that, Shawn, I think that is my last slide, and we've covered the topics that I think we wanted to cover today. Shawn Tierney (Host): Yeah. And I just want to point out to you guys that this training goes all around the US. I definitely recommend getting out there. If you're using PROFINET and you want to get some training, they usually fill the room, like 50 to 100 people. And they do this every year. So check those dates out. If you need to get some hands-on with PROFINET, I would definitely check those out. And, of course, we'll have all the links in the description. I also want to thank Tom for that slide really defining S1 versus S2 versus R1 and R2. You know, a lot of people say they have S2 compatibility. As a matter of fact, we're going to be looking at some products that have S2 compatibility here in the future. And we're just trying to understand what that means. Right? When somebody just says S2, it's like, what does that mean?
So for you guys listening, I thought that slide really lays it out, kind of gives you, alright, this is what it means. From my perspective, it means you're supporting redundant controllers. Right? So if you have an S2 setup of redundant, seamless controllers or CPUs, then that product will support it. And that's important, because if you had a product that didn't support it, it's not going to work with your application. And Ethernet-APL is such a big deal in process, because of the distance, right, and the fact that it's intrinsically safe and supports all those zones and areas. All the instrumentation people are all over it. The Rosemounts, the Fishers, the Endress+Hausers, everybody is on that working group. We've covered that on the news show many times, and it's just very interesting to see where that goes, but I think it's going to take over that part of the industry. So, Tom, was there anything else you wanted to cover in today's show? Tom Weingartner (PI): No, I think that really puts a fine finale on this. I did want to maybe emphasize that point about network redundancy being compatible with system redundancy, so you can really hone in on what your system reliability requirements are. And also, this PROFINET over APL piece of it is completely compatible with PROFINET in and of itself. And you don't have to worry about it not supporting system redundancy or anything of the like, even if you wanted to get redundant devices out there. So I think that's about it. Shawn Tierney (Host): Alright. Well, again, thank you so much for coming on.
We look forward to trying out some of these S2 PROFINET devices in the near future. But with that, I really wanted to have you on first to kind of lay the groundwork for us, and I really appreciate it. Tom Weingartner (PI): No problem. Thank you for having me. Shawn Tierney (Host): Well, I hope you guys enjoyed that episode. I did. I enjoyed sitting down with Tom, getting up to date on all those different products, and it's great to know they have all these free hands-on training days coming across the United States. And what a great refresher from the original 2020 presentation that we had somebody from Siemens do. So I really appreciate Tom coming on. And speaking of Siemens, I'm so thankful they sponsored this episode so we could release it ad-free and make the video free to everybody. Please, if you see Siemens or any of the vendors who sponsor our episodes, please tell them thank you from us. It really helps us keep the show going. Speaking of keeping the show going, just a reminder: if you're a student or a vendor, price increases will hit mid-September. So if you're a student and you want to buy another course, now is the time to do it. If you're a vendor and you have an existing balance, you will want to schedule those podcasts before mid-September, or else you'll be subject to the price increase. With that said, I also want to remind you I have a new podcast, Automation Tech Talk. I'm reusing the old Automation News Headlines podcast feed, so if you already subscribed to that, you're just going to get the new show for free. It's also on The Automation Blog, on YouTube, and on LinkedIn. I'm doing it as a live stream every lunchtime, just talking about what I learned in the last week, little tidbits here and there. And I want to hear from you guys too. As a matter of fact, I already had Giovanni come on and do an interview with me, so at some point I'll schedule that as a lunchtime podcast for Automation Tech Talk.
Again, it still shows up as Automation News Headlines, I think, so at some point I'll have to find time to edit that and change the name. But in any case, with that, I think I've covered everything. I want to thank you guys for tuning in. I really appreciate you. You're the best audience in the podcast world, or the video world, whatever you want to look at it as, but I really appreciate you all. Please feel free to send me emails, write to me, leave comments. I love to hear from you guys, and I just want to wish you all good health and happiness. And until next time, my friends, peace. ✌️ If you enjoyed this content, please give it a Like, and consider sharing a link to it, as that is the best way for us to grow our audience, which in turn allows us to produce more content.
In Episode 220 of Manufacturing Hub, we welcome back Caleb Flanigan to explore one of the most critical yet least understood topics in the evolution of smart manufacturing: MTP (Module Type Package), MCP (Model Context Protocol), and how they are becoming essential enablers of safe and scalable AI adoption on the factory floor.

Throughout this deep-dive episode, we uncover how these emerging standards form the backbone of adaptive plants: facilities capable of safely orchestrating decisions between humans, machines, and AI models. From OPC UA and AutomationML to edge computing and LLM-driven control systems, Caleb explains the architecture, mindset shifts, and implementation considerations that make this vision a reality.

Key topics covered include:
- Why traditional SCADA and MES architectures are not AI-ready
- The real-world value of MTP in legacy brownfield plants
- How Siemens' Machine Proxy App and OPC UA servers act as translators between AI models and legacy PLCs
- Differences between machine states, control interfaces, and orchestrated services in modular manufacturing
- Why CLI skills and edge computing are foundational for the modern control engineer
- How to pitch digital transformation and AI investments to hesitant executives

We also touch on organizational psychology, how internal champions get ignored without executive alignment, and the grim future for manufacturers still betting on ice cube relays.

Whether you're a plant engineer, systems integrator, or digital transformation leader, this conversation offers a bold but practical look at how to safely integrate AI into manufacturing control environments: starting with protocols and principles, not just hype.
We're throwing a party in Vegas! Someone called it SCWPodCon last year, and the name stuck. It's sponsored by Teleport, the infrastructure identity company. Get SSO for SSH! If Thomas was here, I'm sure he'd tell you that Fly.io uses Teleport internally. Oh, also, there's some thing called Black..pill? Black Pool? Something like that happening in Vegas, with crypto talks, so we chatted about them a bit, plus some other stuff.

SCWPodCon 2025: https://securitycryptographywhatever.com/events/blackhat
Transcript: https://securitycryptographywhatever.com/2025/07/29/vegas-baby/

Links:
- Fault injection attacks on PQC signatures: https://www.blackhat.com/us-25/briefings/schedule/index.html#bypassing-pqc-signature-verification-with-fault-injection-dilithium-xmss-sphincs-46362
- Another attack on TETRA: https://www.blackhat.com/us-25/briefings/schedule/index.html#2-cops-2-broadcasting-tetra-end-to-end-under-scrutiny-46143
- Attacks on SCADA / ICS protocols (OPC UA): https://www.blackhat.com/us-25/briefings/schedule/index.html#no-vpn-needed-cryptographic-attacks-against-the-opc-ua-protocol-44760
- Attacks on Nostr: https://www.blackhat.com/us-25/briefings/schedule/index.html#not-sealed-practical-attacks-on-nostr-a-decentralized-censorship-resistant-protocol-45726
- https://signal.org/blog/the-ecosystem-is-moving/
- https://en.wikipedia.org/wiki/Nostr
- https://eurosp2025.ieee-security.org/program.html
- https://cispa.de/en/research/publications/84648-attacking-and-fixing-the-android-protected-confirmation-protocol
- https://hal.science/hal-05038009v2/file/main.pdf
- 8-bit, abacus, and a dog: https://eprint.iacr.org/2025/1237.pdf
- https://www.youtube.com/watch?v=Dlsa9EBKDGI
- https://www.quantamagazine.org/computer-scientists-figure-out-how-to-prove-lies-20250709/
- https://eprint.iacr.org/2025/118

"Security Cryptography Whatever" is hosted by Deirdre Connolly (@durumcrustulum), Thomas Ptacek (@tqbf), and David Adrian (@davidcadrian)
Shawn Tierney meets up with Eugenio Silva of Emerson to learn all about dust collection systems, and Emerson's monitoring and control solution, in this episode of The Automation Podcast. For any links related to this episode, check out the "Show Notes" located below the video. Watch The Automation Podcast from The Automation Blog: Note: This episode was not sponsored, so the video edition is a "member only" perk. The below audio edition (also available on major podcasting platforms) is available to the public and supported by ads. To learn more about our membership/supporter options and benefits, click here. Listen to The Automation Podcast from The Automation Blog: Read the transcript on The Automation Blog: (automatically generated) Shawn Tierney (host): Welcome back to The Automation Podcast. My name is Shawn from Insights, and I want to thank you for tuning back in. Now, in this episode, I had the pleasure of meeting up with Eugenio Silva from Emerson to learn all about the industrial control and monitoring system that comes with their industrial dust collectors. I thought it was very interesting, and I hope you do as well. But before we jump into this episode, I do want to thank our members who made the video edition possible. When a vendor doesn't sponsor an episode, the video becomes a member-only perk, and that is just $5 a month to get started. So thank you, members, for making the video edition possible. With that, I also want to thank the sponsors of this week's show, TheAutomationSchool.com and TheAutomationBlog.com. I have an update later in the show on what's going on at both sites, and I hope you'll stick around and listen to that towards the end of the show. But with that said, let's go ahead and jump into this week's episode of The Automation Podcast. It is my pleasure to welcome Emerson back on the show, and Eugenio on the show, to talk about dust collector monitoring.
You guys can see the slide if you're watching: dust collector monitoring and control solutions. I'm excited about this because this is a solution versus, like, a discrete product. So with that said, Eugenio, would you please introduce yourself to our audience? Eugenio Silva (Emerson): Yes. Shawn, thank you very much for this opportunity. Hello, everyone. This is Eugenio Silva. I'm a product manager for intelligent automation within Emerson, the discrete automation part of Emerson. I'm glad to share today some of our understanding and learnings with the dust collector monitoring and control solution. And when I talk about that, Emerson is also involved in other types of solutions; our purpose is to drive innovation that makes the world healthier, safer, smarter, and more sustainable. I'm also responsible for continuous emission monitoring, of which dust collectors are one part, as well as utility, energy, and compressed air management solutions. So for today, I prepared something where we go a little bit into why this type of dust collector solution is important, from the point of view of our customers and the industry. We're going to look into the fundamentals of dust collection, from the particle sensors to the dust collector systems, and then dive into the dust collector solution, where I'm going to show you some features, explain why they are there, and how these kinds of capabilities deliver value to our end users and customers. Hopefully we'll also have time for a short recorded demo that shows, in full scope, how operators look at the solution when they use it. Shawn Tierney (host): But before we jump in, I want to thank TheAutomationSchool.com for sponsoring this episode of the show. That's where you'll find all of my online courses on Allen-Bradley and Siemens PLCs and HMIs. So if you know anybody who needs to get up to speed on those products, please mention TheAutomationSchool.com to them.
And now let's jump back into the show. Eugenio Silva (Emerson): In terms of key applications, industries, and use cases, dust collectors are essential for many industries that produce dust, any kind of powder, or any kind of fume. Typically, air pollution control, powder processing and handling, and industrial dust and fume ventilation are covered in one way or another by dust collectors. The industries that I put in bold are the dirty ones, in the sense that they produce a lot of particles, either as gases or as dust; therefore, the regulations in these industries are quite strong: cement, metals, chemicals, carbon black and toner, lithium battery assembly and disassembly, metal foundry. And what is interesting is that either you produce a waste that you have to manage properly, or it can be recycled; for example, in industries like plastics, food, or wood, all the collected dust can be reused and sometimes recycled. But why? Why is it important to extract dust in these industries? Let's start on the right side, because this is what the customer is looking for. The costs of air pollution, the hazards, the safety accidents that can be caused by these kinds of harmful airborne particles and fumes are so substantial that, of course, it's very much regulated in all these industries. And if you calculate the costs to public health, and sometimes big accidents in plants, even big fires or hazards to the people operating the plant, we're talking about billions per year. And one of the consequences of having such issues shows up when the dust extraction system is not working properly or you really have a downtime. For example, I'm going to explain that this really depends on components that are used so often that they wear down, like filters and pulse valves.
And each time we have a downtime, it's not the cost of the dust collector downtime that's important; it's the overall downtime cost imposed on the operation of the plant, because in order to be compliant, they have to stop operating until they fix the issue. And these downtime costs, of course, arise in many ways and in different aspects, depending on how complex the dust collector is. But I'm going to give you some insight: if a dust collector system does not have any solution for real-time monitoring or control of its efficiency, the personnel are basically managing these assets without any sight, and everything can go wrong. That's why the TCO and the maintenance aspects are quite important. Because if you're not aware of where the problem is and when you have to plan, and this becomes firefighting or a reactive mode, then your costs are going to be quite high. And when we talk about the TCO, it's about the cost of the equipment, the acquisition; the cost of operation, meaning not only the personnel but, in this case, a lot of compressed air (I'm going to explain why); the maintenance costs, as we explained; and the disposal costs. Disposal means the filter bags that must be replaced and changed, but also the dust, the fume, all the elements that must be properly managed and sometimes recycled. So these are the aspects of why it's important. Now let's talk about the benefits and savings. If you use a dust collector solution of any kind that can monitor in real time all aspects of the operation of a dust collector system, and that also contributes to turning maintenance from reactive to preventive and maybe predictive, then the best thing you can do is avoid huge penalties. As you can see on this graph, every decade, let's say, the fines are getting steeper. And the reason for that is that the damage resulting from a big issue at the plant regarding this dust is quite heavy.
So, therefore, we're talking about $100k or even more in some industries, like primary metals and chemicals, where one single incident is about $100k on average, or more. And then, of course, to avoid that and to be completely compliant, you have to operate these systems, in many cases, 24/7. Therefore, any possible way to reduce downtime, and, as a plus, reduce the energy costs (because compressed air takes electricity), pays off, because you're going to be compliant full time. And the other thing is, if you properly monitor and control your dust collector system, you also increase the filtration efficiency. That means you stay far below the high levels at which you would be penalized. You can operate under compliance, but you can also extend the equipment life. For example, the filter bags and the pulse valves: you don't have to replace them as often, which is the case if you don't do any real-time monitoring and diagnostics. On the left side, the way we talk about improving maintenance is the total cost. When we talk about the filter life, one filter set for a unit is about $18k US. And you see that the tip of the iceberg is just the purchase price. The dust collector system, of course, has an acquisition cost. But below that, as total cost of ownership, you have the energy that you expend utilizing the system, you have the filter bags, you have to keep parts in your inventory, you have disposal, and, of course, you have the downtime costs and also the labor costs. Now I'm going to give you a chance to say, okay, tell me how a dust collector system works. Shawn Tierney (host): Before we get to that, we've got to pay the bills. So I want to tell you about our sponsor, TheAutomationSchool.com. It's actually the next room over. We have a huge training room, with some of the most unique products you'll be able to work on.
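The iceberg picture Eugenio describes can be turned into simple arithmetic. The $18k filter set is the only figure he quotes; every other number below is a placeholder assumption, just to show how quickly the recurring costs dwarf the purchase price.

```python
# Rough back-of-the-envelope TCO sketch. Only the ~$18k filter-set cost comes
# from the episode; all other figures are illustrative assumptions.

def dust_collector_tco(purchase, annual_energy, filter_set_cost,
                       filter_sets_per_year, annual_disposal,
                       annual_downtime, annual_labor, years):
    """Purchase price plus the recurring 'below the waterline' costs."""
    recurring = (annual_energy
                 + filter_set_cost * filter_sets_per_year
                 + annual_disposal + annual_downtime + annual_labor)
    return purchase + recurring * years

tco = dust_collector_tco(purchase=150_000, annual_energy=40_000,
                         filter_set_cost=18_000, filter_sets_per_year=1,
                         annual_disposal=10_000, annual_downtime=30_000,
                         annual_labor=25_000, years=5)
# With these assumed numbers, the purchase price is well under a quarter
# of the five-year total cost of ownership.
assert 150_000 / tco < 0.25
```

The exact split will vary by plant; the point of the sketch is that energy, filters, disposal, downtime, and labor accumulate every year while the purchase price is paid once.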
You know, I know everybody has a bunch of CompactLogix or S7-1200s or 1500s, and, you know, VFDs and HMIs. But some of the products we have here you're not going to find in anybody else's training room, not even the factory's training room, because we cover all different products. Right? So if you're coming over to do training with us, you can actually learn Siemens and Allen-Bradley at the same time. You can learn how to get Siemens and Allen-Bradley to talk to each other. You guys know I've covered that on the show, but here you can do it hands-on. And some of the other things are like working with third-party products. Right? If you go to a vendor's course, they're not going to have third-party products. But, as you remember from the wall in my studio, we have all kinds of third-party products. And I'm going to be taking some more pictures of all the different labs we have and the equipment we use with these third-party products. So let me know if you know anybody looking for training, and we can do custom things too. If you want to start training at noon or 1:00 because you're going to drive in from three or four hours away, we can do that; I was recently at a large vendor's customer doing some training on their behalf, and, yeah, that was a long drive. So if you want your students to show up in person at twelve or one, then train, and on the last day leave around twelve or one, we can do that as well. I don't care. We could actually run into the night if you wanted to do evenings. Again, some people don't learn very well in the evenings, but in any case, because I own the company, we can do whatever you want. As long as we have the equipment and the time to put it together, we'll do it for you. So I just wanted to make you aware of that. Also, if you just want to come yourself, go to theautomationschool.com/live and you will see a place where you can preregister for an upcoming class.
And when I get enough people signed up, I'll reach out to you and tell you what date it's going to be held. By preregistering like that, you will save $50 off the $500 price. And if you're already a student, you will save the price of your online course off the in-person course. So maybe you bought my $200 Siemens or CompactLogix/ControlLogix course; you're going to get that off of that $500. Right? And if you don't own the online course, don't worry about it. If you come here for in-person training, at the end of your training we're going to enroll you in one of those online courses completely free of charge, so you can continue your learning. And you don't have to worry about trying to blitz all the content while you're here, because whether you're here for a day or five, it doesn't matter. Whatever you have left to learn, you'll be able to do it after hours at home, and there's no additional charge for that. So with that said, let's get back into this week's episode of The Automation Podcast. Eugenio Silva (Emerson): These are going to be general principles and basics. In general, a dust collector system looks like this. It's a unit where the air is pulled in at the bottom of the compartment, and this could be forced or not. Then the air gets out at the top, the outlet, and the dust is collected on the outside of the bag. In this picture, we have one full bag in a kind of light brown color, with a specific fabric: it could be a porous fabric, a PVC, or even paper in some cases. And the clean air exits at the top. What happens is that the dust cake builds up on the bags, on the outside of each bag. And if you see the number one on top, at that particular entry point we have two pulse valves with compressed air, in order to shake these filter bags a little bit; that knocks the dust down off the bags, and it is then collected by a hopper at the bottom. Okay?
So that's basically how the principle works in general. It's a bit more complicated than that. Here I just want to show that in order to automate a dust collector system, including the filter bags, we use a combination of electrical and pneumatic components. These range from the pulse valves, the ones that blow air into these pipes, to the compressed air tanks that hold the right pressure and the right compressed air capacity in order to keep the filtration efficiency very high. Then you have the filter regulators: you have to bring the pressure of this line high enough to be efficient, but not so high that you spend too much compressed air. Then you can use controllers, black boxes that are able to do time-based sequencing, but these are sometimes not so efficient, because they don't take into consideration all the diagnostics that you can get out of the system. And then, basically, the very important element is this particle sensor on the clean air outlet, because that is going to be your canary in the mine. Right? It's going to be the one that indicates if the filter system is efficient and if the job is done right. And then the other things. But let's go back to a very interesting view. You remember this picture here, where you're looking at a cross section of the dust collector. Now imagine how it looks from the top. From the top, it looks like this. There is a compressed air tank that covers a certain portion of the filter units. For example, it's very common that a complete filter unit might have different compartments, and in each of these compartments you have a series of filter bags. Then imagine that short but very powerful pulses of compressed air are periodically injected on top of these columns, and below each is a filter bag. Therefore, they are going to expand a little bit, and the dust cake on their outside surface falls off.
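The time-based sequencing that Eugenio says the "black box" controllers perform can be sketched in a few lines. The function name and numbers are illustrative: fire each valve in turn, spread evenly over the cleaning interval, with no feedback from diagnostics, which is exactly the limitation he points out.

```python
# Illustrative sketch of simple time-based pulse sequencing (no diagnostics,
# just a fixed schedule): each valve in a compartment gets one short pulse,
# spread evenly across the cleaning interval.

def time_based_sequence(num_valves, interval_s, pulse_ms):
    """Yield (time_s, valve_index, pulse_ms) firing events for one full cycle."""
    spacing = interval_s / num_valves     # spread pulses evenly over the interval
    for i in range(num_valves):
        yield (round(i * spacing, 3), i, pulse_ms)

events = list(time_based_sequence(num_valves=8, interval_s=180, pulse_ms=100))
assert len(events) == 8
assert events[1][0] - events[0][0] == 22.5   # one valve fires every 22.5 s
```

A monitored solution would instead trigger pulses on measured differential pressure or particle readings, which is the step up from this fixed schedule.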
And by inertial forces, of course, this dust accumulates at the bottom and is extracted into a hopper. Now, depending on the number of filters per line, per row, these pulse valves need to pulse a little faster or slower. And if you just follow a time-based approach, the interval could be three to six minutes. If you look at an average filter unit, you may have 12 filter bags per row and about seven to 10 pulse valves per unit. It’s very common that one large installation has about 500 pulse valves, and four to six times more filters installed. And imagine each of them pulsing every three minutes, 24 hours a day, seven days a week. Can you imagine the amount of compressed air that can be spent? That’s why these pulses must be very short and powerful, on the order of a hundred milliseconds, to avoid a big waste. The picture on the left side simply shows that there are a lot of interesting things involved in getting the dust removed, but basically it is a jet of compressed air on top that shakes the filter, and then by gravity the dust cake is removed. Shawn Tierney (host): It’s not just a filter. I think people may think a dust collector is just this bag that catches all the dust. You do have the bags, but you’re also using compressed air to sequentially, depending on how many you have, shake those bags in a sense by blowing air into them, to shake off the dust so it falls into the hopper. And you can definitely see, like you were mentioning, if you have lots of these cylinders or bags, the sequencing has to be pretty precise and repeatable to make sure you’re cleaning all of the bags off. And I’m assuming too, you need to know when the hopper is full, because everything stops working if the hopper gets overfull. So very interesting.
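The figures Eugenio mentions for a large installation (500 pulse valves, one pulse every three minutes, 100 ms per pulse) can be put into rough numbers. The air volume per pulse below is a made-up illustrative assumption, not a figure from the episode:

```python
# Back-of-envelope estimate of pulse-valve activity for a large installation,
# using the figures mentioned in the episode. AIR_PER_PULSE_L is an assumed
# placeholder value for illustration only.
VALVES = 500
INTERVAL_S = 3 * 60          # time-based cleaning interval per valve (3 min)
AIR_PER_PULSE_L = 10.0       # assumed liters of compressed air per 100 ms pulse

pulses_per_valve_per_day = 24 * 3600 / INTERVAL_S        # 480 pulses/valve/day
total_pulses_per_day = pulses_per_valve_per_day * VALVES
air_per_day_m3 = total_pulses_per_day * AIR_PER_PULSE_L / 1000.0

print(f"{total_pulses_per_day:,.0f} pulses/day")          # 240,000 pulses/day
print(f"{air_per_day_m3:,.0f} m3 compressed air/day")     # 2,400 m3/day
```

Even with a modest per-pulse volume, the fleet-level consumption is substantial, which is why eliminating unnecessary pulses pays off.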
I think your diagrams do a great job of explaining it as well. Eugenio Silva (Emerson): Yeah. As I mentioned, it’s a little bit the reverse of our vacuum cleaner. Right? Because Yeah. We suck the dust inside of the bags. Mhmm. And when the bags are completely clogged, the suction power is far reduced. Right? So then you have to empty our, let’s say, filter bags. Here, all the dust is accumulated on the outside, the outer surface of the fabric, but the effect is the same: if there’s too much dust on the surface, the air intake is blocked and the filter simply stops. That’s why it completely affects the efficiency of that unit. And pulse jet cleaning is a way to unclog or clean the filters in order to bring them back to more efficient operation. Shawn Tierney (host): Yeah. Especially if you have lots of dust, you need an automatic way to continuously clean it and get it off of the filter and into the bin. So yeah, that makes a lot of sense. Eugenio Silva (Emerson): Yeah. In other cases, although we talk about dust, of course, it could be any kind of powder. For example, in the food and beverage industry — let’s say dry milk production — you don’t want that dust to be floating around because it can bring contamination. But believe it or not, it can even ignite fires sometimes. That’s why it’s important to get it completely eliminated. So this is the part few people would notice: on the outlet, where the air should be cleaner, as you can see on the right side, this particle sensor is located on the clean air side. The way it works is quite interesting. We have a sensor in our portfolio called P152 that takes advantage of the triboelectric effect.
Basically, this sensor is coated with a PTFE, or Teflon, layer, so it’s completely electrically isolated from the media. When the dust starts touching that probe, a DC charge is transferred. And because the sensor probe is completely isolated by the Teflon layer, the electric charge it resolves is in the order of a picoamp — 10 to the minus 12 — and the resolution is about 0.5 picoamps. So the particles touching the probe, depending on their size, are going to generate more or less electricity that gets transferred. And the ones that are just around, not touching — for example, imagine that this exhaust air duct is quite big, about half a meter, maximum one meter, around that sensor — those particles also generate an induced AC charge. And by measuring that, we get an idea of how clean the air getting out is. But it’s a bit trickier than you can imagine, because it looks like this. Shawn Tierney (host): Hey, everyone. I hope you enjoy this week’s show. I know I really enjoyed it. And, of course, I want to thank our members for making the video edition possible. This vendor did not sponsor this episode, so the video edition is available for members, and there are some great graphics in their presentation you may want to check out. Now with that said, we do have some really exciting podcast episodes coming up. I’m sitting down with Inductive. I’m sitting down with Software Toolbox. I’m sitting down with Siemens and a bunch of other vendors. So we have plenty of new podcasts coming up in the coming weeks this summer. And I also wanted to give you an update of what’s going on over at The Automation Blog. We’ve had some new articles come out. Brandon Cooper, one of our freelancers, wrote a great article about emulating Allen-Bradley E3s.
We also had a vendor submit an article and sponsor the site — an article about what makes a good automated palletizer. We also had an update about the automation museum. That’s a fundraiser we’re running: we’re trying to open an automation museum. I’ve got a lot of legacy stuff I’d like to donate to it, and I’d love to have it so you can come in and not just see the stuff, but actually learn on it. Right? So maybe you have some old stuff in your plant: you come out to the automation museum, and you can learn how to use it. With that said, we’re also looking at possibly doing a podcast for the automation museum to drive awareness of legacy automation. So any of you out there interested in that, contact me directly — you can do so over at theautomationblog.com; just click on the contact button. And we also have two articles from Brandon Cooper about things he learned as he transitioned from working in a plant to traveling around and visiting other plants to help them with their processes and automation. So check those articles out over at The Automation Blog. And finally, over at the automation school, we have the new Factory I/O courses. I just added a new lesson to the Logix version of that course — somebody wanted to try to use bit shifts instead of counters, so I added a lesson on that. Plus, I’m now starting to update all of the courses, including the brand new ones I’m working on. So you’re going to see a brand new start-here lesson later in the week, and I’m working on some cool emulation ladder logic for my PLC courses: if you don’t have any push buttons or limit switches, you can use this code I’m going to give you for free to simulate the widget machine that I use as kind of the basis for my teaching. So in any case, check that out if you’re in one of my PLC courses over at theautomationschool.com.
And with that said, I’m very thankful for all the vendors who come on, especially those who sponsor the episodes so I don’t have to do these commercials. I’m not a big commercial guy, but I do want to thank you for hanging in there and listening through this update. And now we’ll get right back into this episode of The Automation Podcast. Eugenio Silva (Emerson): Every time you fire the jet pulse with the pulse valves on top of the filter bags, it creates a peak. The cleaning cycles happen in a duration of just 100 milliseconds — that’s why the peaks are very thin — and they happen every two to three minutes per row. They have, by nature, a little bit of noise, because every time you clean, more dust gets inside the filter bag. It’s like your vacuum cleaner: immediately when you turn it on, some of this dust gets inside, and that’s the peak. But now imagine that you have a rupture in the filter, or a big hole, because unfortunately these things wear out. Then these peaks start getting higher and higher. So what we do when we put the solution in place: for a little time, let’s say a couple of days, we need to set up these thresholds. We need to figure out the level of noise, because it depends very much on the capacity and the type of dust. But once you do that, in our solution we set the thresholds for alarming: a warning alarm, which means that after that point the maintenance crew starts looking — that could be an early indication that a filter bag is not okay — up to a maximum point that avoids any nonconformance issue, which is already a rupture, where you have really passed the time when this filter must be replaced. Shawn Tierney (host): So we’re looking at this chart, for those who are listening.
And the particle sensor is measuring the particles as air flows normally. But during the pulse, we’re forcing a lot of air back down, so we’re going to see a lot more particles per, let’s say, hundred-millisecond pulse than the average air would have. Right? So we do expect a peak when we pulse, because we’re forcing a lot of air in the reverse direction to shake the bag loose. But what you’re showing here on this chart I find so interesting: you can quantify the expected increase in dust that you’re going to sense when you pulse, blowing the air downwards to shake the bag free. And you’re saying that if that extra amount of detected dust is either too high above normal, or too low below normal, that tells you that you could either have a clogged bag or a burst bag. Am I understanding that correctly? Eugenio Silva (Emerson): Yes, that is correct. And the interesting thing is that as you get closer to needing to replace a filter bag, this baseline starts rising a bit — there is, how can I say, a kind of drift. Why? Exactly what you said: a filter is completely clogged. There isn’t any rupture yet, but the efficiency of the cleaning is not okay. So these slight changes need to be analyzed. Why am I showing row one to row 10? Exactly as in the picture, if you remember: in a compartment filter with several filter bags, they are arranged in rows. So in row one you may have 10 filter bags, then row two, row three, and so on. That means you are able to indicate which row has the problem, but you may still need to check further which of the filters in that particular row have the problem.
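The commissioning step Eugenio describes — recording a few days of normal pulse peaks, then setting warning and alarm thresholds above the learned baseline — can be sketched as follows. This is a hypothetical illustration, not Emerson's actual algorithm; the sigma multipliers and readings are assumptions:

```python
# Hypothetical sketch of the threshold logic described: learn a per-row
# baseline of particle-sensor pulse peaks during commissioning, then flag
# peaks that rise above a warning or alarm level.
from statistics import mean, stdev

def learn_baseline(peaks):
    """peaks: particle-sensor peak readings for one row, recorded while
    the filters are known healthy (the commissioning period)."""
    mu, sigma = mean(peaks), stdev(peaks)
    return {"warning": mu + 3 * sigma,   # maintenance crew starts watching
            "alarm":   mu + 6 * sigma}   # likely rupture: replace the bag

def classify(peak, thresholds):
    if peak >= thresholds["alarm"]:
        return "ALARM"
    if peak >= thresholds["warning"]:
        return "WARNING"
    return "OK"

row1_commissioning = [10.2, 9.8, 10.5, 10.1, 9.9, 10.3]   # assumed readings
th = learn_baseline(row1_commissioning)
print(classify(10.4, th))   # OK
print(classify(14.0, th))   # ALARM
```

A slow upward drift of the baseline itself, rather than a single spike, would correspond to the clogging case he mentions, and could be caught by re-evaluating the mean over a rolling window.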
The bigger this peak, the more filter bags may have a problem. Shawn Tierney (host): Mhmm. Eugenio Silva (Emerson): Okay? Shawn Tierney (host): So you have one sensor on the exhaust, and you’re sequencing through, blowing out or shaking out — pulsing — each of the rows. So that’s why we see one reading across the horizontal, and we see row one, row two, row three, row four, each of them with discrete values or pulses. And like you just said, if you have multiple issues on a row, then you’re going to see a higher or lower peak depending on what the issue is. I’m with you. Eugenio Silva (Emerson): Yes. That’s why I’m going to show the other diagnostic capabilities that we needed to associate with this particle sensor. And just to remember: for this particle sensor, we simply use one unit on the outlet side. That’s why I needed the sequencing, the serialization of the pulses — because then I need to synchronize with the pulse jets of every row. Shawn Tierney (host): Mhmm. Eugenio Silva (Emerson): No? Row by row. Shawn Tierney (host): And I think too, if you tried to do them all at once, you would need much higher pressure. So it kind of makes sense to do it row by row, because it reduces your maximum pressure required. Eugenio Silva (Emerson): Yeah. And in a practical sense, we would not be able to Shawn Tierney (host): Differentiate. Eugenio Silva (Emerson): Identify which of the rows would be the problem. That’s why we still have to do that. But now let’s go into a solution overview, and I think some of the key capabilities and features are going to highlight even more the other diagnostic capabilities that we are able to provide, in order to identify such issues correctly and as early as possible. So this is a typical dust collector system.
And if you look around: if this dust collector system is just automated with pneumatic and electric components and there is no real-time monitoring, you don’t really know the emission level. And without real-time monitoring with some diagnostics, you are not able to identify when this particle sensor, for example, is completely covered by dust — because humidity entered that pipe, or the dust is already ingrained so much on the probe. Mhmm. That’s why its reliability, or its sensitivity, could be affected. And if you were not monitoring these signals that I showed, these peaks synchronized with the pulse valve jets Mhmm. you don’t have any early warning. Okay? The pulse valves are basically coils — solenoid coils Shawn Tierney (host): Mhmm. Eugenio Silva (Emerson): with diaphragms that open and close at the speed of a hundred milliseconds. The point is that their lifetime is about a couple of million cycles. Mhmm. But imagine: in some cases, one or two years is already enough to reach end of life. So a pulse valve has to be connected to a control system, because you need to know if there is a short circuit or if the diaphragm is stuck completely open. And you can only do that if, every time you cycle the valve, you also check it. For example, the power with which you drive the coil gives you a feeling of whether that coil is already gone. Okay? Now let’s talk about the compressed air. Right? If you have a filter that is open — there’s a rupture — or a diaphragm that’s stuck completely open, you start consuming more and more compressed air. The point is, this increases continuously, and you might just imagine this is normal. But if you take the average and look at it historically, you’re going to see that this trend is caused by broken pulse valves, for example.
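The per-cycle coil check Eugenio sketches — judging the coil's health from the power drawn each time the valve fires — could look something like the following. This is an illustrative assumption of how such a check might be structured, not Emerson's actual logic; the nominal current and tolerance are made-up values:

```python
# Illustrative per-cycle solenoid coil check: every time a pulse valve
# fires, compare the measured coil current against an expected window.
# NOMINAL_COIL_A and the tolerance are assumed placeholder values.
NOMINAL_COIL_A = 0.50   # assumed nominal coil current in amps

def coil_status(measured_a, nominal_a=NOMINAL_COIL_A, tol=0.25):
    if measured_a > nominal_a * (1 + tol):
        return "SHORT_CIRCUIT"   # drawing too much current: winding short
    if measured_a < nominal_a * (1 - tol):
        return "OPEN_COIL"       # drawing too little: broken winding
    return "OK"

print(coil_status(0.51))   # OK
print(coil_status(0.95))   # SHORT_CIRCUIT
print(coil_status(0.05))   # OPEN_COIL
```

Because the check runs on every cycle, a failing coil is caught at the pulse where it fails rather than days later when emissions rise.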
So another important aspect of the automation solution, to minimize the usage of compressed air, is to operate clearly under a baseline that is normal. The filter bags, independent of the materials — because in life sciences, food, chemical, or metal they are different materials — have different wear and lifetime spans. The point is the cost: the filter itself might not be so expensive, but going up there, exchanging it, stopping, moving things around, getting the dust out before you change it, and putting on all the personal protective equipment may take hours. That is the real cost. And if you’re not able to prevent it, or even have an early warning of when it is going to occur, it’s going to be a reactive maintenance issue. Right? So it’s worth looking into the different aspects. And that’s why, on the left side, when we talk about solutions, we talk about the connectivity part: we have to work with devices that are HART or 4-20 mA. Some devices are Modbus TCP. Newer actuators and pulse valves could be MQTT or even OPC UA. That’s the PLC part that we have. And we can work with pneumatic systems on EtherNet/IP, PROFINET, or other standards. Then, of course, we have the I/Os that we use to control the pulse jet systems, but also to monitor the differential pressures and, in some cases, to measure the compressed air — up to the top, where we put an HMI/SCADA software platform that we pre-engineered in order to make the development of the solution simpler for our OEMs, or in many cases directly for our end users. And all the elements on the right are what we offer in our portfolio. In some cases, OEMs of dust collector systems just take them from us, and they might have their own solution as well.
Shawn Tierney (host): So just for the audio audience — I know we’ve covered these products a lot, especially on the news show — I just want to go through a couple of these things. You’ve got the ASCO product line, right: remote piloted valves and that whole category, the pulse valves. But we’ve also got Aventics, which we’ve talked about — filter regulators and different cylinders. TopWorx, which I think we’re all familiar with: proximity sensors and whatnot. And some of the other products you guys have, like Rosemount differential pressure transmitters. We also see the PACSystems; in this case you could have edge analytics, so you may have one of the PACSystems edge IPCs. And we even see, down in the corner there, the Emerson PLCs and I/Os, which I think we’re all familiar with as well. So that kind of shows you how this solution takes all these different products they have in their catalog and puts them together into one solution — and you kind of need all this stuff, basically understanding how it works; we just went through it. And it’s interesting: I don’t think I’ve seen a slide yet from Emerson where they include in one application many, if not all, of their different product lines. And then the SCADA on the top — it looks like some beautiful screens and charts and dials showing the current status. I didn’t mean to interrupt you, Gino, but I wanted to say that, especially since the people listening will be familiar with all those trade names, because we’ve covered them in the past. But in any case, let me turn it back to you. Eugenio Silva (Emerson): No, no — thanks for highlighting that. As I said when I introduced myself, I’m from the discrete automation part of Emerson. Mhmm.
Because most people would know Emerson by Rosemount, for example — pressure — Fisher valves, and then the DeltaV DCS. Right? This is the discrete automation part, and that’s why it’s probably something new for everybody here. Thank you very much. So, in a nutshell, we of course have to put in the sensing devices, the PLC on top, the HMI/SCADA. And basically, what we provide is real-time monitoring of the particulate emissions. We detect, but also locate, where the leak is, by compartment and row. You can see in the picture that on the top of this HMI screen we have a filter unit with three compartments — compartment one, two, three. Each compartment has these rows on top; the number of rows gives the number of filter bags within each compartment. So just locating in which compartment and which row you have a problem — I can tell you, it saves the maintenance people half a day. We also optimize the pulse jet cleaning. It’s a patent-based algorithm that is completely adaptive, and it works not just with the pulse valves: we also put in header pressure sensors, and the fluctuation and the differential pressure that we measure between outlet and inlet allow us to increase or decrease the frequency of these pulse jets, which allows us not only to be more efficient but also to minimize compressed air. And then finally, when we talk about solenoids and valve diaphragms, we can indicate one by one where they have problems. So if you look down at the other HMI screen, there are two rows on top — one for the solenoid, one for the diaphragm — and these vertical bars are the filter bag health. If they are getting closer to red, at high levels, it means their life span is already gone. And if you have light indicators on the solenoid or the diaphragm, depending on the color it might be that you have a short circuit failure or an open diaphragm.
Therefore, you also have to replace it. And when we install the solution, sometimes our customers ask us to also integrate with their control systems — so the compressed air generation, the fan, the hoppers, and the safety alarms of the plant are sometimes fully integrated as well. Now let’s talk about a few features, because these are the ones you probably haven’t seen yet. Our HMI control system is based on Movicon, the Movicon.NExT platform. Basically, it provides everything that you know from a SCADA HMI. We use it in general for applications like OEE, energy management, and some infrastructure monitoring — smart cities, wastewater facilities, solar mega plants, et cetera. Of course it provides data visualization, but I’d like to highlight that we provide connectivity to all major PLCs you can imagine, with communication drivers, and of course the open standards like OPC UA and Modbus. On the lower part, the gray part here is what we used for this solution. Sometimes we use geo maps to indicate where the filters are — some geo references, let’s say geo fences as well, where people have to be there with personal protective equipment. And there is real-time data that we are collecting for the particle emissions and other elements like differential pressure and header pressure. Then you have the headlines: you can see some screens that are completely dedicated to alarms and alerts. The diagnostics that you see are related to the solenoid, to the filter bag, and to the diaphragm, and each of them gets diagnosed in a different way. For example, for the solenoids, we look into the power output of our I/O cards to see if the solenoid is open or a complete short circuit. The filter bag, I already explained it.
We detect it with some logic using the particle sensors. And the diaphragm diagnostics is based on the header pressure, because if the diaphragm is stuck completely open, the differential pressure within the chamber starts fluctuating, and then you know that something is wrong there. But all of them together increase the filtration efficiency, change maintenance from reactive to predictive, keep the site compliant, minimize dust emissions, and for sure increase equipment lifetime — like the filter units — and reduce the compressed air usage. If you sum up all of that, the return on investment might be quite fast; for big, large installations it might be within two years, but that is still a very fast return on investment for this particular solution. That’s what it looks like. Zooming in a little bit: you see that the screens are not only nice looking, they also graphically indicate where the issues are and the number of issues — this screen is about threshold alerts. The second one, on the right side, shows the number of cycles. Imagine that every pulse valve has a lifetime of about a couple of million cycles. Here you can at least predict when, and how many, spare parts you will need in the next quarter. And then the yellow or red signals: red means gone — you have a faulty valve — and the yellow ones are the ones you need to watch, because they’re getting close to their end of lifetime. The other aspect is, like I said, when you acquire a dust collector system without the solution, it comes with a sequencer box, which basically does time-based pulsing. So it keeps pulsing every three to six minutes, like I said — 100 milliseconds each, though that can change — but it’s fixed. And that leads to excessive use of the pulse valves.
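The cycle-counter view he describes — predicting next quarter's spare parts from each valve's accumulated cycles against its roughly two-million-cycle lifetime — can be sketched like this. All identifiers and rates here are hypothetical illustrations:

```python
# Hypothetical spare-parts forecast from pulse-valve cycle counters.
# LIFETIME_CYCLES reflects the "couple of million cycles" mentioned;
# valve IDs and counts are made-up illustrative data.
LIFETIME_CYCLES = 2_000_000

def spares_needed(valve_cycles, cycles_per_day, horizon_days=90):
    """valve_cycles: {valve_id: cycles accumulated so far}.
    Returns valves expected to exceed their lifetime within the horizon."""
    return [vid for vid, c in valve_cycles.items()
            if c + cycles_per_day * horizon_days >= LIFETIME_CYCLES]

fleet = {"V01": 1_990_000, "V02": 1_200_000, "V03": 1_960_000}
# one pulse every 3 minutes -> 480 cycles/day
print(spares_needed(fleet, cycles_per_day=480))   # ['V01', 'V03']
```

With this, the stockroom can order exactly the valves that will wear out in the coming quarter instead of reacting to failures.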
So you’re going to wear them out quite a bit sooner than you should, but you also reduce the filter bag life — stretching the bag filters, of course, also wears them out — and you waste much more compressed air than you probably should. That’s why we implemented two other types of pulse jet cleaning methodologies. One is on-demand. That depends on the differential pressure across the chamber, and you can set in the solution how these multiple filter lines are going to operate normally. This differential pressure threshold can be used because, for example, when the efficiency is getting bad, the differential pressure changes, and if it is within a certain band you can estimate that there is accumulation of the dust cake. The other one is very intelligent. It’s a function block in our PLC that does a dynamic change: you put in a single set point, and the adaptive algorithm, based on the differential pressure, starts controlling the intervals between the pulses. The idea is to optimize by eliminating unnecessary pulses in the cycle of these valves and also minimizing the compressed air. Of course, when you install the solution and set the set point for the first time, the system needs a little bit of time to learn — it’s a learning algorithm that starts adapting, and very soon it starts performing optimally. Okay? Shawn Tierney (host): Hey, everybody. I just want to jump in here one more time to thank our members, both on YouTube and at theautomationblog.com. I’ve got some really exciting stuff coming up for you guys in the fall — I have this huge plan that I’m working on — so I really thank you guys for being members. Don’t forget, you get access to Discord. Don’t forget, there’s a whole library of older episodes you get to watch, just like what I’m doing this month for members. You get a whole library of stuff.
We did so much member-only content over the last couple of years that you have literally hundreds of hours of content that you and only you get access to as a member, whether you’re on YouTube or at theautomationblog.com. And, of course, if you have any questions about your membership, please reach out to me directly. And with that, let’s go ahead and jump back into this week’s show. Eugenio Silva (Emerson): And it looks like this. This is just another possible view. On the left side, you see the particular rows, and each of these rows has its filter bags. Each filter bag has a vertical bar that indicates its health; the solenoid and diaphragm are on the top. And you can navigate from one compartment to another. Then you have other additional elements like the header pressure, differential pressure, and particle density, and you have a trend diagram from which you are able to generate reports, but also to monitor, in order to tune the parameters a little bit to be more efficient. And on the far right side, if you have more than one dust collector, you can create different screens if you want. The idea here is that C1, C2, C3 mean compartment one, two, three. Again, diagnostics that lead to preventive, predictive maintenance and completely avoid reactive maintenance. Interestingly, if you don’t know: in order to replace a single filter, to check if a solenoid valve is completely short-circuited, or to see if a diaphragm valve is stuck open, you need to get up there in personal protective equipment, using a mask and gloves. You need to go up; you need to figure out where these things are. And imagine if you could avoid that and just look at the screen and say, hey, I know that this is compartment one of filter A, and I know where I need to look. And by the way, I have the spare part, because I had early indications, so I can fix it.
So then we are not just talking about reducing time, but also reducing costs and avoiding putting people into such a hazardous environment every time. Okay? I’m not going through the right part, because you can imagine that it is a description of how things are usually done. And if you turn that around into proactive, predictive maintenance, then you have fewer, and maybe faster, steps, and you can plan in advance when you want to go up to these units wearing the protective equipment. Now, very quickly, on the value proposition: of course, like with any solution, customers are interested to know if it can pay back very quickly — the return on investment. That’s why we check the size, the number of units, the minimum size the customer could start with (because it’s a pre-engineered solution), and how fast we could implement it across the whole site. We can also, of course, calculate their current expenditure in terms of reactive maintenance, the cost of utilities like compressed air, and how often they have downtime issues. And from that, we can prove very quickly, very simply, that it’s worth investing in automation. A 20 to 30% reduction is a lot if you consider that they use a huge amount of compressed air — and compressors use electricity. So if you’re able to reduce compressed air, you also increase your operational efficiency, because the cost of utilities is one of the points. Downtime is everything. Maintenance is about preventing the need for these manual inspections: just go there, check, come back, and you see that, okay, we could wait another week — but because I’m here, I’m going to change the filter anyhow. And with that, of course, you’re not increasing the lifetime of your equipment.
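The payback reasoning in this passage — a 20 to 30% compressed-air reduction plus avoided reactive maintenance and downtime, set against the CapEx of the solution — amounts to a simple payback calculation. The sketch below uses made-up placeholder numbers; only the 20-30% air-saving range comes from the episode:

```python
# Rough, hypothetical simple-payback sketch based on the savings levers
# mentioned. All monetary inputs are made-up placeholders, not Emerson
# figures; only the air-saving percentage range is from the discussion.
def simple_payback(solution_cost, annual_air_cost, air_saving_pct,
                   annual_maint_saving, annual_downtime_saving):
    annual_saving = (annual_air_cost * air_saving_pct
                     + annual_maint_saving + annual_downtime_saving)
    return solution_cost / annual_saving   # years to pay back

years = simple_payback(solution_cost=120_000,
                       annual_air_cost=200_000, air_saving_pct=0.25,
                       annual_maint_saving=30_000,
                       annual_downtime_saving=20_000)
print(f"payback ~ {years:.1f} years")   # ~ 1.2 years
```

With plausible inputs like these, the result lands inside the "within two years" window mentioned for large installations.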
And interestingly, some downstream equipment, like the blowers and the vacuum pumps, also gets damaged if it takes in a lot of dust, or excessive dust. So maximizing maintenance and optimizing every step pays off in that sense. And finally, of course, customers do this because they want full compliance. Every possible issue can be tracked and reported. The efficiency of the system can be shown with audit-ready reports. You can really prove that you are reducing particle emissions. You provide a lot of visibility into what’s going on, so the technical teams have very high confidence operating the system — because without it, they are operating blindly, and that’s why they often feel a bit concerned that bad things are just going to happen. In a nutshell, we talk about savings by extending the filter life; we talk about savings by reducing the compressed air; we can avoid downtime — each downtime event costs not only in the maintenance part shown here, but also the whole production cost, which is not calculated here — and the penalties: if you have a single issue, it’s going to be a big one. So it’s a good way to give customers an idea of why they should invest on the CapEx side and how we can help on the OPEX side to save their budgets in the sense of operating dust collector systems. So, Shawn, if I have three minutes, I’m going to run this HMI demo, because then you can see on the screen how the different screens operate — but it’s up to you whether I should do that. Shawn Tierney (host): Yeah. Go ahead. Eugenio Silva (Emerson): Okay. So this is an HMI demo — simulated here, of course, because it’s not possible to connect live or to have all this equipment. So I’m going to click here. Basically, you see how an operator would navigate and the type of information that is provided. I made this click-through very quick so we don’t lose too much time here.
You can see that you are able to trend the particle density and the air consumption, set the alarms, see which pulse valve is not okay, and check the condition of the filter bags. Then there are the settings for the cleaning: these are the parameters you can adjust. Like I said, we have an adaptive learning algorithm, but in many cases you still need to set up the sensors, including their sensitivity; there are many different thresholds. And then there is the diagnostic part, for the diaphragm and for rupture detection. Once that is done, you have quite interesting information: for example, when you change a valve, you reset its counter, and these are the alarms you can acknowledge, and so on. Okay? And that's it. That was the case.
Shawn Tierney (host): Yeah. That gives you a good idea of what you're getting as far as the HMI is concerned, and it's good to see it full screen. It looks like a very well-designed HMI. From my perspective, it's really focusing in on any errors: you have standard, very good-looking graphics, and if there's an error, you see it in red or yellow, which really calls the eye to it. But, Eugenio, I see there's a QR code on the screen right now. Can you tell people where that goes?
Eugenio Silva (Emerson): Yes. It goes to the product page on our Emerson.com site. From there, you can request a demo, request a proposal, or request more information. It's the entry point to learn how we provide that solution and which basic elements it contains, and there are also related product pages if you want to learn more.
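The kind of threshold logic behind pulse cleaning can be sketched generically. This is an illustration of a common differential-pressure trigger with hysteresis, not the Emerson adaptive algorithm; all parameter names and limits are invented.

```python
# Illustrative differential-pressure trigger for pulse-jet filter cleaning.
# Real controllers (including adaptive ones) are considerably more involved.

def should_pulse_clean(dp_mbar: float, high_limit: float = 15.0,
                       low_limit: float = 8.0, cleaning: bool = False) -> bool:
    """Start cleaning above high_limit; once started, continue until dp
    falls back below low_limit (hysteresis avoids rapid on/off cycling)."""
    if cleaning:
        return dp_mbar > low_limit   # mid-cycle: keep pulsing until clean
    return dp_mbar > high_limit      # idle: only start when filters are loaded

assert should_pulse_clean(16.0) is True                  # loaded: start cleaning
assert should_pulse_clean(10.0) is False                 # in band: save air
assert should_pulse_clean(10.0, cleaning=True) is True   # mid-cycle: continue
```

Cleaning only on demand, rather than on a fixed timer, is what saves compressed air and extends filter life.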
Shawn Tierney (host): And I think the important part here is that when you have a dust collector system that constantly needs care to keep you in compliance, to make sure your products are being made correctly, and to keep people safe, these systems are going to be expensive, and larger systems even more so. So that cost savings is like the energy savings we get with VFDs on pumps and fans, or with lighting upgrades: the folks over at Emerson are going to want to help you quantify it. They know that to justify the project, it's not enough to say "this has given us a lot of problems, and we know it's costing us money"; you also want to know your ROI. Right? And they're going to work with you on that, because on these big projects, those are some of the things we have to look at to budget correctly. Anybody who has ever been in the budgeting part of a company knows you don't just spend money because it's fun; you have to have a reason behind everything. I would guess I'm right on that, Eugenio.
Eugenio Silva (Emerson): Yes. And, Shawn, although I just covered the technical part, we can of course talk to customers without any commitment and consult with them to look at how maturely they operate their dust collector systems. We can check the installed base; we have a questionnaire they can fill in. We can understand the size, and we can talk about the energy consumption and the number of hours they spend on reactive maintenance. Based on that, we give them the opportunity to analyze whether they want to make the CapEx investment in the solution, and how much reduction they could achieve on the OpEx side.
Shawn Tierney (host): Yeah, which is how they're going to justify it. Well, Eugenio, I want to thank you for going through that. I really enjoyed your presentation. I learned a lot more about this product line, and actually this product category, than I knew coming in, and I think you did a great job of walking us through it all. So thank you very much for coming on the show.
Eugenio Silva (Emerson): Shawn, on behalf of Emerson, we appreciate this opportunity. It's my first time here, and I also enjoyed it. A great conversation, great questions. Thank you.
Shawn Tierney (host): Well, I hope you enjoyed that episode. I want to thank Eugenio for coming on the show and bringing us up to speed on dust collector systems. I really didn't know all of those technical details, and I appreciate him going through them. And it's cool to see how many different Emerson products they integrated into that solution; it's not just a PLC and I/O, there are all the sensors and more, but I'm not going to go through it all again. In any case, I really appreciate that. I also want to thank our members, who made the video edition possible. Thank you, members. Your $5 a month unlocks not only this video but so many other videos we've done, hundreds of videos over the last twelve years. So thank you for being a member and supporting my work. I also want to thank automationschool.com and automationblog.com; I hope you listened to the update I included in the show, as so many good things are happening at both places. Please take a moment to check out both websites. And with that, I just want to wish you all good health and happiness. Until next time, my friends, peace.
The Automation Podcast, Episode 241 Show Notes: To learn about becoming a member and unlocking hundreds of our "member's only" videos, click here.
Until next time, Peace ✌️ If you enjoyed this content, please give it a Like, and consider Sharing a link to it as that is the best way for us to grow our audience, which in turn allows us to produce more content
www.iotusecase.com #SmartBuilding #WorkplaceExperience #FacilityManagement
In episode 176 of the IoT Use Case Podcast, host Ing. Madeleine Mickeleit talks with André Lange and Sebastian Creischer from ICONICS about smart workplace solutions in modern as well as existing buildings. The focus: how heterogeneous infrastructures can be networked efficiently with IoT – modular, wireless, and scalable. Practical, first-hand insights.
Episode 176 at a glance (and click):
(14:30) Challenges, potential, and status quo – what the use case looks like in practice
(22:42) Solutions, offerings, and services – a look at the technologies in use
(29:40) Transferability, scaling, and next steps – how you can use this use case
Podcast summary
Smart buildings are not just for new construction. In this episode, André Lange and Sebastian Creischer of ICONICS show how even older office and industrial buildings can be intelligently networked with IoT solutions – without elaborate renovations. Two approaches take center stage: building-centric applications for energy, climate, and ventilation, and people-centric solutions for workplace booking, navigation, and room utilization. Both can be integrated modularly with ICONICS software, for example via Genesis64 and the Intelligent Building Software Stack (IBSS). The guests explain how a wide variety of systems and sensors – whether BACnet, Modbus, OPC UA, or MQTT – can be securely connected through an integration platform. Even challenges such as heritage-protected existing buildings can be mastered smartly. Notable use cases include digital room booking, ad-hoc navigation via app, and presence tracking for space management – all integrated wirelessly, scalably, and without disrupting ongoing operations. The outlook is forward-looking: AI-supported anomaly detection, a colleague finder via Bluetooth, and smart parcel services in the office show where things are heading.
www.iotusecase.com #Kubernetes #ManagedKubernetes #CloudSovereignty
In episode 171 of the IoT Use Case Podcast, host Ing. Madeleine Mickeleit talks with Fabian Peter, CEO of ayedo, about the industrial use of Kubernetes – beyond IT silos and DevOps clichés. The episode shows how companies can efficiently scale use cases such as predictive maintenance, roll out updates automatically, and meet compliance requirements. Fabian shares hands-on insights from projects in mechanical engineering and energy supply – and explains how a European tech stack makes Kubernetes possible on-premise as well.
Episode 171 at a glance (and click):
(13:52) Challenges, potential, and status quo – what the use case looks like in practice
(20:15) Solutions, offerings, and services – a look at the technologies in use
(27:23) Transferability, scaling, and next steps – how you can use this use case
Podcast summary
Kubernetes has long been more than a topic for IT teams – it is becoming a key technology for taking industrial use cases from prototype to rollout. In this episode, Fabian Peter, CEO of ayedo, discusses concrete challenges in manufacturing: distributed machines, laborious updates, missing standardization, and growing compliance requirements. He shows how Kubernetes can serve as an operating platform for containerized applications – for example for predictive maintenance, data connectivity via OPC UA, or API-based vendor integration. The advantage: updates run automatically, changes can go live within minutes, and applications stay resilient even on complex infrastructure. Many companies, Fabian notes, underestimate the entry barriers – mid-sized companies in particular often lack the know-how. ayedo therefore offers managed services that let companies run their software on Kubernetes, whether third-party applications or in-house developments.
Especially important here is the European stack – privacy-compliant, flexible, and backed by personal 24/7 support. This episode is aimed at digitalization leads in industry, mechanical engineering, and energy supply who want to standardize their IT/OT systems and operate them in a future-proof way – without getting lost in hyperscaler complexity.
-----
Relevant links from this episode:
Madeleine (https://www.linkedin.com/in/madeleine-mickeleit/)
Fabian (https://www.linkedin.com/in/derfabianpeter/)
ayedo (https://ayedo.de/)
ayedo Discord community (https://discord.com/invite/uymn6HdCNB)
Sovereign cloud (https://ayedo.de/posts/souverane-cloud-anspruch-realitat-und-technische-auswege/)
Follow IoT Use Case on LinkedIn now
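The "updates go live in minutes, without downtime" behavior discussed above comes from Kubernetes' rolling-update mechanism. A minimal sketch of the relevant Deployment fields, built as a plain Python dict so they stand out (all names and the image URL are placeholders, not from any ayedo project):

```python
# Minimal Kubernetes Deployment spec showing the rolling-update settings
# that let a containerized app update without downtime. Illustrative only.

def deployment(name: str, image: str, replicas: int = 3) -> dict:
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "strategy": {
                "type": "RollingUpdate",
                # at most one pod down and one extra pod up at any moment,
                # so the service keeps answering during the rollout
                "rollingUpdate": {"maxUnavailable": 1, "maxSurge": 1},
            },
            "template": {"spec": {"containers": [{"name": name, "image": image}]}},
        },
    }

spec = deployment("predictive-maintenance", "registry.example.com/pdm:2.1")
assert spec["spec"]["strategy"]["type"] == "RollingUpdate"
```

Pushing a new image tag into such a spec replaces pods incrementally; a failed rollout can be rolled back the same way.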
Many of you still remember the fieldbus wars; then peace came, and we solved industrial communication with TSN and many other technologies. But now the agents are coming. What are they – "just" a new layer?
www.iotusecase.com #MASCHINENBAU #SECURITY #PENETRATIONTEST #IOT-PLATTFORM
In episode 165 of the IoT Use Case Podcast, host Ing. Madeleine Mickeleit talks with Michael Buchenberg, Head of IT Security at XITASO, about securing connected products in industrial environments. Using a project with DMG MORI and the CELOS X platform as an example, the episode shows how penetration tests work in practice, which attack vectors matter in an IoT context, and how concepts such as DevSecOps and the Cyber Resilience Act shape the development of secure solutions.
Episode 165 at a glance (and click):
(10:55) Challenges, potential, and status quo – what the use case looks like in practice
(16:08) Solutions, offerings, and services – a look at the technologies in use
(22:02) Transferability, scaling, and next steps – how you can use this use case
Podcast summary
How secure are my digital products in the field, really? Many manufacturers ask themselves this question – at the latest when connected machines, IoT platforms, or customer portals are involved. That is exactly what this podcast episode with Michael Buchenberg, Head of IT Security at XITASO, is about. Using a project with DMG MORI and the CELOS X platform, it shows in practical terms how penetration tests help identify real vulnerabilities early – for example in machines, cloud connections, or standard interfaces such as OPC UA and MQTT. Testing takes place under realistic conditions: directly at the machine on the shop floor.
Key challenges:
- Historically grown code (e.g. old PLC programs) that was never designed for connectivity
- Lack of transparency about risks across the overall system, from the machine to the cloud
- Missing vulnerability management in product development
- End-customer concerns about the handling of sensitive production data
Solution approach: Alongside classic penetration testing, Michael discusses DevSecOps – building security into software and product development from the start. The key point: whoever spots potential vulnerabilities already at the architecture stage saves effort and cost in later phases.
Regulatory relevance: With the Cyber Resilience Act and the NIS 2 Directive, security becomes mandatory. Manufacturers will have to actively search for vulnerabilities, provide updates, and ensure security across the entire product lifecycle.
The episode delivers clear best practices and a reality check for anyone developing or operating IoT solutions – especially in mechanical and plant engineering, but also beyond.
-----
Relevant links from this episode:
Madeleine (https://www.linkedin.com/in/madeleine-mickeleit/)
Michael (https://www.linkedin.com/in/michael-buchenberg/)
CELOS X platform (https://de.dmgmori.com/produkte/digitalisierung/celos-x)
Post-quantum cryptography (https://xitaso.com/projekte/amiquasy-migration-zu-post-quanten-kryptographie/)
Penetration tests of milling machines (https://xitaso.com/projekte/dmg-mori-penetration-test/?utm_source=iot.website&utm_medium=podcast&utm_campaign=iot.use.case)
Follow IoT Use Case on LinkedIn now
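A very first step of the kind of shop-floor penetration test described above is simple reconnaissance: checking which well-known OT service ports a machine exposes. A generic sketch (not XITASO's methodology; port list and behavior are illustrative, and such scans should only ever target systems you are authorized to test):

```python
# Harmless TCP connect check for common OT service ports. Only run against
# systems you are explicitly authorized to test.
import socket

OT_PORTS = {4840: "OPC UA", 1883: "MQTT", 502: "Modbus/TCP"}

def open_ot_ports(host: str, timeout: float = 0.5) -> dict:
    """Return {service name: reachable} for the common OT ports above."""
    result = {}
    for port, service in OT_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising
            result[service] = s.connect_ex((host, port)) == 0
    return result

print(open_ot_ports("127.0.0.1"))
```

An exposed, unauthenticated OPC UA or MQTT endpoint found this way would then be probed further for weak security policies, which is where the real testing begins.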
Welcome back to a special Thursday edition of Manufacturing Hub!In this episode, we dive deep into one of the standout presentations from the Prove It conference — featuring Travis Cox from Inductive Automation and Arlen Nipper from Cirrus Link Solutions.
Dive into cutting-edge digital technology with industry experts Luciano Botto and Libanio Souza. In this compelling podcast episode, recorded live at the ARC 29th Annual Conference, we explore the transformative power of new digital solutions in process control. Discover how these innovations are enhancing maintainability, interoperability, and quality improvement in the industry. Learn about the groundbreaking OPC UA and OPA standards, and how they are paving the way for smarter, more efficient operations. Join us as we discuss the challenges and opportunities of adopting these technologies. Don't miss this insightful conversation that promises to reshape the future of process control!
www.iotusecase.com #IT-OT-Integration #IIoT-Konzept #SERVICE-Plattform-Architektur #MASCHINENBAU
In episode 164 of the IoT Use Case Podcast, host Ing. Madeleine Mickeleit talks with André Hoettgen, Group Leader Enterprise at Paul Horn GmbH, and Sarah Blomeier, IT sales manager at integration specialist soffico, about scalable digitalization in manufacturing. Honored with the VDMA Award, Paul Horn relies on a forward-looking IoT and service concept. At its center is soffico's Orchestra middleware, which intelligently connects IT and OT systems. The episode offers insights into the technical handling of historically grown system landscapes, the construction of standardized architectures, and use cases such as digitalized tool reconditioning. It also covers make-or-buy decisions and the use of AI for smart data mappings.
Podcast summary
This episode is all about integrating IT and OT data in the manufacturing industry – using the example of Paul Horn GmbH, which received the VDMA Award for its innovative IoT and service concept. It shows how to connect historically grown system landscapes efficiently, break up silos, and enable data-based decisions – without replacing the entire machine park. A key success factor is soffico's Orchestra middleware, which acts as a data hub: it connects IT systems such as SAP or CAD with OT components via OPC UA, forming the backbone of a modern, service-oriented IT architecture.
The episode delivers compelling insights:
- Why connectivity is not a one-off solution but a strategic asset
- How Paul Horn sets standards to ensure scalability
- How concrete use cases (e.g. digitalized tool returns in service) improve efficiency
- Why a make-or-buy decision in favor of a strong partner is often more sustainable
- And what role AI-supported data mappings will play in the future
An episode for everyone who thinks about digitalization scalably and strategically – with best practices straight from manufacturing.
-----
Relevant links from this episode:
Madeleine (https://www.linkedin.com/in/madeleine-mickeleit/)
Sarah (https://www.linkedin.com/in/sarahblomeier/)
André (https://www.linkedin.com/in/ahoettgen/)
soffico website (https://soffico.de/)
HORN Technologietage 2025 (https://www.horn-technologietage.de/)
Community registration (https://iotusecase.com/de/community/)
Follow IoT Use Case on LinkedIn now
www.iotusecase.com#GenAI #SmartManufacturing #PredictiveMaintenance Special episode recorded live at Hannover Messe: Together with Scott Kemp from SoftServe, we take a look at real industrial projects – including use cases from SCHUNK, Continental, and NVIDIA. The focus: smart data infrastructures, predictive maintenance, and practical AI applications on the shop floor.Podcast episode summaryHow can smart maintenance, AI, and global IoT infrastructures be put into practice – despite labor shortages and complex machinery? Scott Kemp, Head of Manufacturing Services, EMEA, at SoftServe discusses these challenges with Ing. Madeleine Mickeleit, sharing insights from projects with SCHUNK, Continental, and NVIDIA.SoftServe demonstrates how scalable IoT backbones and AI applications deliver real value – for example, an AI assistant at Continental that reduces MTTR and boosts OEE by 10%. With SCHUNK, SoftServe co-developed an IoT backbone spanning the entire machine portfolio, enabling end customers to perform maintenance with the help of assistive functions.This comes to life in the practical example from OptoTech: Product Owner Vineeth Vellappatt offers a look into an AI-supported grinding process on the SM80 machine – including error detection, parameter analysis, and concrete recommendations for action.Technologically, SoftServe combines structured sensor data with unstructured knowledge (e.g. SOPs), embedded into a RAG model for fast information delivery – implemented across Microsoft Azure, NVIDIA Omniverse, AWS, and more. Standards like OPC UA and Unified Namespace lay the foundation for scalability.At the core: compensating for knowledge loss, empowering new workers, monetizing services – and turning AI from a buzzword into productive reality. SoftServe follows practical frameworks like “Double Diamond Thinking” and Proofs of Technology instead of just POCs. 
The episode kicks off with a short impulse from Onuora Ogbukagu (Deutsche Messe AG).-----Relevant links from this episode:Madeleine (https://www.linkedin.com/in/madeleine-mickeleit/)Scott (https://www.linkedin.com/in/scottkempmba/)SoftServe (https://www.softserveinc.com/en-us)SoftServe Assessment (https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/softserveinc1605804530752.digital_shopfloor_rapid_2_days_assessment-preview?tab=Overview&flightCodes=6871e3e649584bd2862e2c7a0f379bd5)OptoTech solution (https://www.optotech.net/en/product/detail/sm-80-cnc-tc~op25967)Follow IoT Use Case on LinkedIn now
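The RAG pattern described in this episode (structured sensor data combined with unstructured knowledge such as SOPs) can be sketched without any ML libraries. Here, plain keyword overlap stands in for the embedding-based retrieval a real RAG system would use, and all SOP text and sensor values are invented for illustration:

```python
# Toy retrieval-augmented prompt assembly: pick the SOP snippet best matching
# the fault description, then combine it with live sensor context.

SOPS = {
    "grinding wheel wear": "Check spindle load trend; replace wheel if load > 110%.",
    "coolant flow low": "Inspect coolant filter and pump; verify flow sensor.",
}

def retrieve(query: str) -> str:
    """Return the SOP whose title shares the most words with the query
    (a stand-in for vector similarity search)."""
    words = set(query.lower().split())
    best = max(SOPS, key=lambda title: len(words & set(title.split())))
    return SOPS[best]

def build_prompt(query: str, sensors: dict) -> str:
    """Assemble the context an LLM would receive: fault, sensors, SOP."""
    context = ", ".join(f"{k}={v}" for k, v in sensors.items())
    return f"Fault: {query}\nSensors: {context}\nRelevant SOP: {retrieve(query)}"

print(build_prompt("coolant flow low on SM80",
                   {"flow_lpm": 2.1, "spindle_load": 78}))
```

The point of the pattern is that the model answers from retrieved, plant-specific knowledge rather than from its general training data, which is what makes recommendations actionable on the shop floor.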
In this episode recorded live at the ProveIt Conference, we sit down with Mark and Harry from Tatsoft, creators of the industrial IIoT platform Frameworks. We dive deep into how Tatsoft is redefining what a true industrial platform should be — built from the ground up for the factory floor, yet scalable across the enterprise.Mark and Harry walk us through:Their platform's positioning as a SCADA, HMI, MES, and IIoT toolbox — all in oneHow Frameworks handles real-time data, from connectivity (MQTT, OPC UA, SQL) to transformation and dynamic visualizationWhy the “extra I in IIoT” matters when building for industrial environmentsThe challenges of IT/OT integration, people gaps, and legacy systems — and how Tatsoft tackles them head-onA demo of their ProveIt solution, showing off auto-recognition of new assets, dynamic UI, and high-performance visualization across devicesWhether you're an end user, system integrator, or OEM, this episode will help you understand how Tatsoft's Frameworks V10 is enabling fast, scalable, and future-proof industrial applications — without compromise.
This week's guest is Erik Udstuen (https://www.linkedin.com/in/erik-udstuen-00000), Co-founder and CEO of TwinThread. Erik shares insights from his 30+ years in industrial software, discussing how AI and digital twins are transforming manufacturing by standardizing data, optimizing operations, and driving operational excellence. He also dives into the challenges of industrial data standards, the importance of empowering engineers with no-code/low-code tools, and why AI must go beyond insights to deliver real-time, actionable recommendations on the shop floor. Augmented Ops is a podcast for industrial leaders, citizen developers, shop floor operators, and anyone else that cares about what the future of frontline operations will look like across industries. This show is presented by Tulip (https://tulip.co/), the Frontline Operations Platform. You can find more from us at Tulip.co/podcast (https://tulip.co/podcast) or by following the show on LinkedIn (https://www.linkedin.com/company/augmentedpod/). Special Guest: Erik Udstuen.
Peter Seeberg talks to Xueli An, Research Manager and Industry Development Specialist at Huawei Technologies about integrating OPC UA and 5G.
Today, machines and plants have to "talk to each other" as part of an optimal manufacturing strategy, and corresponding communication models have been developed for this purpose. But which communication languages already exist? What does the "OPC UA" standard mean? Why and how did the VDMA define it? These and other questions are answered in podcast #15, which covers both past and future scenarios.
This week's guest is John Harrington (https://www.linkedin.com/in/john-harrington-142906a/), co-founder and Chief Product Officer of HighByte. John shares how his experiences working on Kepware at PTC led him to co-found HighByte, why Industry 4.0 requires a fundamentally different approach to interoperability, and the importance of contextualizing data in manufacturing. He also breaks down the real value that a Unified Namespace (UNS) approach can bring, whether frameworks like ISA-95 are still relevant, and the age-old OPC vs MQTT debate. Augmented Ops is a podcast for industrial leaders, citizen developers, shop floor operators, and anyone else that cares about what the future of frontline operations will look like across industries. This show is presented by Tulip (https://tulip.co/), the Frontline Operations Platform. You can find more from us at Tulip.co/podcast (https://tulip.co/podcast) or by following the show on LinkedIn (https://www.linkedin.com/company/augmentedpod/). Special Guest: John Harrington.
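At its simplest, the Unified Namespace John discusses is a consistent topic hierarchy that every publisher and consumer agrees on, often loosely following ISA-95 levels. A minimal sketch (the enterprise and site names are invented examples):

```python
# Build ISA-95-style Unified Namespace topic paths. Because every system
# publishes to the same enterprise/site/area/line/tag hierarchy, any consumer
# can locate data without point-to-point integrations.

def uns_topic(enterprise: str, site: str, area: str, line: str, tag: str) -> str:
    parts = [enterprise, site, area, line, tag]
    # each level must be non-empty and must not contain the separator
    assert all(p and "/" not in p for p in parts), "invalid namespace level"
    return "/".join(parts)

topic = uns_topic("acme", "plant-berlin", "assembly", "line-2", "oee")
print(topic)  # acme/plant-berlin/assembly/line-2/oee
```

In practice such topics are typically carried over an MQTT broker, which is where the OPC vs MQTT debate mentioned above comes in: OPC UA excels at device-level modeling, while MQTT's publish/subscribe fits the broker-centric UNS architecture.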
Peter Seeberg talks to Stefan Hoppe, President + Executive Director of the OPC Foundation about what happened in 2024 and what is planned for 2025
Peter Seeberg talks to Vatsal Shah, Founder & CEO Litmus about Unlocking & Activating Industrial Data
www.iotusecase.com #DEVICEMANAGEMENT #SECURITY #SERVICES
Episode 149 of the IoT Use Case Podcast, recorded live at the SPS trade fair 2024 in Nuremberg, is about how machine and plant builders can advance their digitalization. Vanessa Kluge, Product Manager IoT Solutions at Kontron AIS, and Holger Wußmann, Managing Director at Kontron Electronics, explain why it is crucial for machine builders today not only to connect their machines but also to meet the highest security standards while doing so. The focus is on the NIS 2 Directive, which imposes stricter security requirements across the entire supply chain.
Podcast summary
A practical example is the digitalization solution for Vollmer, a specialist in grinding machines. To make its machines fit for the future, Vollmer uses Kontron's IoT starter package, which enables data processing and analysis via OPC UA and MQTT. The solution helps companies finally evaluate machine data in a meaningful way and generate tangible value – a must for any company betting on smart machines. The two Kontron products KontronOS and KontronGrid play a central role: they provide the necessary infrastructure for condition monitoring, fleet management, and update administration, and they also secure the machines against unwanted access. This is crucial in light of the EU-wide NIS 2 Directive, which places new security requirements on IoT supply chains. For machine builders this means: with Kontron's solutions, maintenance and updates can be managed remotely, cost-efficiently, and securely.
Instead of regular on-site maintenance visits and untapped data potential, companies get an all-round package that enables continuous uptime and long-term cost savings. The episode also offers interesting insights into how Kontron solves complex challenges for the machine-building industry with a flexible, scalable setup, ensuring secure data processing and connectivity.
-----
Relevant links from this episode:
Madeleine (https://www.linkedin.com/in/madeleine-mickeleit/)
Vanessa (https://www.linkedin.com/in/vanessakluge/)
Holger (https://www.linkedin.com/in/holger-wu%C3%9Fmann/)
Follow IoT Use Case on LinkedIn now
Shawn Tierney meets up with Greg Campion of Paessler to learn about their PRTG OPC UA Server in this episode of The Automation Podcast. To learn more about PRTG, check out the “Show Notes” located below the video. Watch The Automation Podcast from The Automation Blog: Listen to The Automation Podcast from The Automation Blog: The Automation Podcast, Episode 226 Show Notes: Special thanks to Paessler for sponsoring this episode and making it Ad Free! To learn more about PRTG, see the below links: Trial of the OPC UA server How to monitor your OT Networks Blog detailing OPC UA Sensors Guide for OPC UA Server and its benefits Monitoring HMIs Original PRTG Podcast (199)
www.iotusecase.com #CONDITION-MONITORING #5G #WASSERMANAGEMENT
In episode 148 of the IoT Use Case Podcast, Jürgen Grauer, Sales Director EMEA at Red Lion Controls, shares insights into the modernization of 70 rainwater pumping stations operated by the Würzburg drainage utility (Entwässerungsbetriebe Würzburg). He explains how Red Lion safeguards the future of this critical infrastructure through modern communication standards and 5G compatibility.
Episode 148 at a glance (and click):
[08:43] Challenges, potential, and status quo – what the use case looks like in practice
[13:00] Solutions, offerings, and services – a look at the technologies in use
Podcast summary
This episode is about the modernization of 70 rainwater pumping stations of the Würzburg drainage utility in cooperation with Red Lion Controls. The pumping stations, originally operated over 3G, were equipped with modern communication standards and 5G compatibility to ensure future-proof, fault-free monitoring and control of the infrastructure.
Main topics and challenges:
- Technology upgrade for critical infrastructure: Red Lion supports the transition from outdated controllers and 3G modems to a modern system integrating 4G/5G and OPC UA.
- OPC UA and DNP3 integration: these protocols enable seamless communication between OT (operational technology) and IT (information technology), a key to data acquisition and real-time monitoring.
- Use of Crimson®: Red Lion's low-code software Crimson® offers a simple graphical interface for configuring and converting protocols. The software is a free download and supports the OPC UA server and client without additional license fees.
- Data security and real-time acquisition: data buffering on the FlexEdge® platform ensures that no data is lost if the connection drops. OpenVPN and further security functions protect the data.
- Flexible cloud connectivity: the solution integrates easily with leading cloud platforms such as AWS, Azure, and Aveva via MQTT and REST API, simplifying data analysis and optimization.
The modernization of the pumping stations shows how targeted upgrades can avoid the high cost of complete system replacements. With Red Lion joining the HMS group, even more comprehensive security and networking solutions are expected, which are especially relevant for critical infrastructure.
-----
Relevant links from this episode:
Madeleine: (https://www.linkedin.com/in/madeleine-mickeleit/)
Jürgen: (linkedin.com/in/jürgen-grauer-4b81b91a8)
IoT at the Bad Pyrmont wastewater treatment plant: (https://iotusecase.com/de/podcast/energiekosten-ausfaelle-reduzieren/)
Follow IoT Use Case on LinkedIn now
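The "no data lost on connection drops" behavior described for the pumping stations follows a common store-and-forward pattern. A generic sketch (not Red Lion's FlexEdge® implementation; the in-memory buffer stands in for persistent edge storage):

```python
# Store-and-forward sketch: readings queue locally while the uplink is down
# and are flushed in order once the connection returns.
from collections import deque

class StoreAndForward:
    def __init__(self):
        self.buffer = deque()   # local queue used while offline
        self.sent = []          # stands in for the cloud/SCADA endpoint
        self.online = True

    def publish(self, reading: dict) -> None:
        if self.online:
            self.sent.append(reading)
        else:
            self.buffer.append(reading)    # uplink down: keep it locally

    def reconnect(self) -> None:
        self.online = True
        while self.buffer:                 # drain oldest-first, order preserved
            self.sent.append(self.buffer.popleft())

sf = StoreAndForward()
sf.publish({"level_cm": 41})
sf.online = False
sf.publish({"level_cm": 44})               # buffered, not lost
sf.reconnect()
assert [r["level_cm"] for r in sf.sent] == [41, 44]
```

A production edge device would persist the buffer to flash and bound its size, but the ordering guarantee is the essential property for time-series monitoring.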
Peter Seeberg talks to Enrico Noack and Andreas Schuette from Airbus Defense and Space about the use of OPC UA in rockets, specifically, in the TEXUS research rocket.
#INDUSTRIESTANDARDS #FACTORYX #DATENÖKOSYSTEM www.iotusecase.com
Episode 137 of the IoT Use Case Podcast is about the Factory-X project, a lighthouse project within the Manufacturing-X initiative of the German Federal Ministry for Economic Affairs and Climate Action. The initiative aims to create a digital ecosystem for the entire manufacturing industry.
Podcast summary
This episode focuses on implementing use cases by integrating existing standards, and on the importance of strong partnerships. Factory-X, a consortium project with 47 partners launched in February 2024, aims to design data spaces and ecosystems for factory equipment suppliers, machine builders, and their supply chains. The guests are Bastian Brinkmann, Head of Corporate Future Lab and Sustainability Management at the Uhlmann Group, and Dr. Sebastian Heger of soffico. They discuss the technical evolution of Factory-X, how companies such as packaging machine builder Uhlmann Group are adapting their business models to the digital age, and how standards and cooperation can strengthen mid-sized companies. They explain how the Uhlmann Group and soffico work together to improve connectivity in production environments and develop new digital business models. A central topic of the podcast is the challenge of ensuring scalable, interoperable connectivity, especially in the pharmaceutical sector, where strict traceability and documentation requirements apply.
Factory-X builds on technologies such as the Asset Administration Shell, OPC UA, and ECLASS to create an open, interoperable infrastructure that enables flexible and sustainable production.
-----
Relevant links from this episode:
Madeleine (https://www.linkedin.com/in/madeleine-mickeleit/)
Sebastian (https://www.linkedin.com/in/dr-sebastian-heger/)
Bastian (https://www.linkedin.com/in/bastianbrinkmann/)
Manufacturing-X (https://www.plattform-i40.de/IP/Navigation/DE/Manufacturing-X/Initiative/initiative-manufacturing-x.html)
Factory-X: (https://www.isst.fraunhofer.de/de/abteilungen/industrial-manufacturing/projekte/factory-x.html)
(https://www.siemens.com/de/de/produkte/automatisierung/themenfelder/factory-x.html)
(https://www.bmwk.de/Redaktion/DE/Dossier/Manufacturing-x/Module/projekt-factory-x.html)
Follow IoT Use Case on LinkedIn now
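The Asset Administration Shell mentioned above is, in essence, a standardized machine-readable envelope of submodels describing an asset. A minimal structural sketch (the asset ID and semantic ID are invented placeholders, not real ECLASS identifiers):

```python
# Minimal Asset Administration Shell-style structure: an asset with typed
# submodels whose properties carry semantic IDs so partners in a data space
# can interpret them uniformly. All identifiers here are placeholders.

def aas_shell(asset_id: str) -> dict:
    return {"assetId": asset_id, "submodels": []}

def add_submodel(shell: dict, id_short: str, properties: dict) -> dict:
    shell["submodels"].append({"idShort": id_short, "properties": properties})
    return shell

shell = aas_shell("urn:example:packaging-line-01")
add_submodel(shell, "TechnicalData", {
    "maxSpeed": {"value": 400, "unit": "units/min",
                 "semanticId": "0173-1#02-EXAMPLE"},   # placeholder, not a real IRDI
})
print(shell["submodels"][0]["idShort"])  # TechnicalData
```

Because every supplier exposes the same submodel shapes, a data space like Factory-X can exchange, for example, technical data or carbon footprints across company boundaries without bespoke mappings.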
Join Dave and Vlad with Aron Semle of HighByte this Wednesday at 4pm EST!

In Episode 169 of Manufacturing Hub, hosts Dave and Vlad engage in a discussion with Aron Semle, the Chief Technology Officer of HighByte. The episode delves into the evolving field of Industrial DataOps, a concept that Aron passionately advocates for, and explores his extensive career in the industry.

Aron Semle's journey began with his foundational work on OPC UA and OPC drivers at Kepware, where he was deeply involved in developing communication solutions for industrial automation. This early experience laid the groundwork for his subsequent roles, including significant positions at PTC and a healthcare startup. His career trajectory reflects a broad understanding of how data integration and management have transformed across various sectors.

Aron defines Industrial DataOps as the adaptation of data operations principles specifically for the industrial and manufacturing sectors. Unlike traditional IT operations, which often transfer data without comprehensive context, Industrial DataOps focuses on managing data in a way that maintains and enhances its relevance within industrial environments. This approach emphasizes context and the specific needs of manufacturing processes, aiming to bridge the gap between IT and OT (Operational Technology).

A key point Aron makes is that Industrial DataOps is not just about technology but also about fostering collaboration between IT and OT teams. The success of data initiatives hinges on aligning these teams to address practical challenges effectively. Aron stresses that while data contextualization is crucial, it should be driven by specific use cases rather than being applied uniformly across all scenarios.
This targeted approach ensures that the data solutions implemented are practical and aligned with the operational realities of the manufacturing environment.

The conversation highlights how Industrial DataOps can deliver significant benefits for manufacturers, including reduced costs, improved scalability, and enhanced operational efficiency. By leveraging data in a contextualized and strategic manner, manufacturers can make more informed decisions, streamline their operations, and achieve better overall performance.

In conclusion, Aron Semle's insights underscore the transformative potential of Industrial DataOps in modern manufacturing. The episode offers a valuable perspective on how adapting data operations to the unique needs of the industrial sector can drive innovation and improve outcomes across the industry.

About Manufacturing Hub
Manufacturing Hub Network is an educational show hosted by two longtime industrial practitioners, Dave Griffith and Vladimir Romanov. Together they try to answer big questions in the industry while having fun conversations with other interesting people. Come join us weekly!

**********
Connect with Us
Aron Semle
Vlad Romanov
Dave Griffith
Manufacturing Hub
SolisPLC

#automation #manufacturing #robotics #industry40 #iioT
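The contextualization idea discussed in this episode can be made concrete with a small sketch: raw OT tags arrive as flat name/value pairs, and a model attaches the context (asset, units, source tag) that downstream IT systems need. This is a generic illustration of the DataOps pattern, not HighByte's actual API; the tag names and model layout are invented:

```python
# Raw values as they might arrive from a PLC poll: flat, context-free pairs.
raw_tags = {
    "PLC1.Line3.TempF": 172.4,
    "PLC1.Line3.RPM": 1180,
}

# A reusable model that maps meaningful attribute names onto raw tags
# and records the context (asset identity, engineering units).
model = {
    "asset": "Filler-Line-3",
    "attributes": {
        "temperature": {"tag": "PLC1.Line3.TempF", "unit": "degF"},
        "speed": {"tag": "PLC1.Line3.RPM", "unit": "rpm"},
    },
}

def contextualize(raw, model):
    """Build a self-describing payload instead of shipping bare values."""
    payload = {"asset": model["asset"], "values": {}}
    for name, meta in model["attributes"].items():
        payload["values"][name] = {
            "value": raw[meta["tag"]],
            "unit": meta["unit"],
            "source": meta["tag"],
        }
    return payload

print(contextualize(raw_tags, model))
```

Because the model, not the consumer, owns the mapping, the same payload shape can be reused across lines and sites, which is where the scalability benefit described above comes from.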
Peter Seeberg talks to Holger Kenn, Chairperson of the Board of the OPC Foundation, about the OPC UA for AI Working Group.
#MEDTECH #PRODUKTION #IIOT
www.iotusecase.com

In this podcast episode, Madeleine Mickeleit, managing director and host of IoT Use Case, speaks with Roman Kuster, Head of Engineering Support at Weidmann Medical Technology AG, and Marco Müller, CTO of Innomat-Automation AG. The topic is their successful collaboration on implementing IoT to optimize production processes in medical technology, specifically Innomat's Kalisto IoT solution, which digitalizes and automates the production processes at Weidmann.

Episode 132 at a glance (and a click):
[12:23] Challenges, potential, and status quo – what the use case looks like in practice
[17:38] Solutions, offerings, and services – a look at the technologies in use

Podcast episode summary
Weidmann Medical Technology AG is a leading Swiss injection-molding systems supplier for the medical technology and pharmaceutical industries, while Innomat-Automation AG specializes in developing IoT software solutions.

The project comprises the introduction of a digital routing card and an automated guided vehicle system. These technologies improve the efficiency and transparency of the production processes by managing transport orders automatically and ensuring that all production data is documented without gaps.

Innomat has developed several interfaces to enable communication between the machines, the ERP system, and the transport system, including OPC UA, a REST API, and iDoc for SAP. These solutions ensure high data integrity and minimize manual steps. Through the automation and digitalization of their production processes, Weidmann and Innomat achieve significant time and cost savings. The increased transparency and quality assurance strengthen their competitiveness and meet the high requirements of the medical technology industry.

Marco Müller and Roman Kuster share their experiences and best practices from the project. They emphasize the importance of clear goals, flexibility, and an incremental approach when implementing IoT projects. Both companies see digital transformation as an opportunity for future innovation and sustainable development.

-----
Relevant episode links:
Madeleine (https://www.linkedin.com/in/madeleine-mickeleit/)
Roman (https://www.linkedin.com/in/roman-kuster-4046861a1/)
Marco (https://www.linkedin.com/in/marco-m%C3%BCller-4569658/)
Project with Kalisto: https://iotusecase.com/de/use-cases/m2mkommunikation-mit-reibungslosem-datenaustausch/
Follow IoT Use Case on LinkedIn now
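The digital routing card described in this episode boils down to recording every production step with who, where, and when, so the batch history is gap-free. A minimal sketch of that record-keeping, with invented station names, operator IDs, and fields (not Kalisto's actual data model):

```python
from datetime import datetime, timezone

# Sketch of a "digital routing card": each production step is logged with
# station, operator, result, and a timestamp, giving a gap-free history.

class RoutingCard:
    def __init__(self, batch_id):
        self.batch_id = batch_id
        self.steps = []

    def record_step(self, station, operator, result):
        # Append-only log; nothing is overwritten, so the trail stays complete.
        self.steps.append({
            "station": station,
            "operator": operator,
            "result": result,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })

    def is_complete(self, required_stations):
        # The batch may only move on once every required station passed.
        done = {s["station"] for s in self.steps if s["result"] == "pass"}
        return set(required_stations) <= done

card = RoutingCard("BATCH-0042")
card.record_step("molding", "op-17", "pass")
card.record_step("assembly", "op-23", "pass")
print(card.is_complete(["molding", "assembly"]))
```

In a real deployment the same record would be fed by the OPC UA, REST, and iDoc interfaces mentioned above rather than by manual calls, but the gap-free, append-only structure is the core of the traceability requirement.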
In this episode of The Manufacturing IT Podcast, I speak with Johannes Liegert from Paessler on all things connectivity, IIoT, and network monitoring for Industry 4.0 and smart manufacturing.

Johannes is Product Manager for IoT & Industry at Paessler. He is an experienced Product Owner and Product Manager with a long history in the computer software industry, skilled in healthcare protocols (the IHE process, HL7 standards, DICOM) and industrial communication, especially OPC UA. He holds a Master of Science in Medical Informatics from Ostbayerische Technische Hochschule Regensburg.

Paessler believes monitoring plays a vital part in reducing humankind's consumption of resources. Monitoring data helps its customers save resources, from optimizing their IT, OT, and IoT infrastructures to reducing energy consumption or emissions – for our future and our environment. More than 500,000 users in over 170 countries rely on PRTG and other Paessler solutions to monitor their complex IT, OT, and IoT infrastructures.

Hope you like the episode, and I look forward to hearing your feedback!
Our guest this week is Microsoft's Erich Barnstedt (https://www.linkedin.com/in/erich-barnstedt-9a84685), Chief Architect Standards, Consortia & Industrial IoT, Azure Edge + Platform. Erich brings his perspective as we try to get to the bottom of why–despite overtures from some of the biggest vendors in the space–we still have not achieved true data interoperability in the manufacturing industry. We explore what really goes on behind the curtain at standards committees, and why it is so important for vendors to embrace an open technology ecosystem that puts interoperability at the forefront. Augmented Ops is a podcast for industrial leaders, shop floor operators, citizen developers, and anyone else that cares about what the future of frontline operations will look like across industries. This show is presented by Tulip (https://tulip.co/), the Frontline Operations Platform. You can find more from us at Tulip.co/podcast (https://tulip.co/podcast) or by following the show on LinkedIn (https://www.linkedin.com/company/75424477). Special Guest: Erich Barnstedt.
Podcast: Nexus: A Claroty Podcast
Episode: Team82 on NAS Research, OPC UA Exploit Framework
Pub date: 2023-08-20

Team82's extensive research into network-attached storage devices and the ubiquitous OPC UA industrial protocol came to a head recently in Las Vegas with a pair of presentations at Black Hat USA and DEF CON, disclosing vulnerabilities in Synology and Western Digital NAS cloud connections and unveiling a unique OPC UA exploit framework. In this episode of the Nexus podcast, researcher Noam Moshe explains how both research initiatives came to be, the implications of each for users, and how the respective ecosystems have been made safer.

Read our Synology research
Read our Western Digital research
Read about our OPC UA exploit framework
Download the framework

The podcast and artwork embedded on this page are from Claroty, which is the property of its owner and not affiliated with or endorsed by Listen Notes, Inc.
Podcast: Error Code
Episode: EP 21: Exploiting OPC-UA in OT Environments
Pub date: 2023-08-16

In a talk at Black Hat USA 2023, Sharon Brizinov and Noam Moshe from Claroty Team82 disclosed a significant vulnerability in the Open Platform Communications Universal Architecture (OPC-UA), a universal protocol used to synchronize different OT devices. In this episode they also discuss a new open-source OPC exploit framework designed to help OT vendors check their devices in development. Transcript.

The podcast and artwork embedded on this page are from Robert Vamosi, which is the property of its owner and not affiliated with or endorsed by Listen Notes, Inc.
Podcast: Aperture: A Claroty Podcast
Episode: Sharon Brizinov on Hacking and Securing PLCs
Pub date: 2022-04-20

In this episode of the Aperture podcast, Claroty Team82 vulnerability research lead Sharon Brizinov covers a presentation he's giving at the S4x22 conference in Miami, explaining a unique attack against Siemens SIMATIC 1200 and 1500 PLCs that enabled native code execution on the device. Also, Brizinov explains his participation in the Pwn2Own contest. S4 hosts the only ICS-focused version of Pwn2Own, and this year there are four categories of targets in scope: control servers, OPC UA servers, data gateways, and HMIs.

"The goal in most cases is to achieve remote code execution, not only to find a vulnerability but achieve exploitation," Brizinov said. "Usually we are able to find at least one vulnerability, but the real challenge is to exploit those vulnerabilities. Usually the difficulty around this is to bypass the different security mitigations that the software, hardware, or operating system present."

The podcast and artwork embedded on this page are from Claroty, which is the property of its owner and not affiliated with or endorsed by Listen Notes, Inc.
Gary recaps the recent ODVA annual general meeting. Keynoter Paul Maurath of P&G corporate engineering discussed laboratory testing of EtherNet/IP with APL. He asked for more configuration and device description help. ODVA tech group engineers have been very busy with OPC UA mapping, TSN, and CIP for discrete devices. It was great to meet again.
Podcast: Unsolicited Response Podcast
Episode: OPC UA In Pwn2Own
Pub date: 2022-02-01

Dale's weekly article covers the importance of some serious security testing of four popular OPC UA stacks that will take place at Pwn2Own Miami at S4x22. The last Pwn2Own Miami awarded $280K for 0days in ICS targets.

The podcast and artwork embedded on this page are from Dale Peterson: ICS Security Catalyst and S4 Conference Chair, which is the property of its owner and not affiliated with or endorsed by Listen Notes, Inc.
Podcast: Digital Transformation Viewpoints
Episode: Larry O'Brien Talks OPC Cybersecurity with Randy Armstrong of the OPC Foundation
Pub date: 2022-01-18

Welcome back to another cybersecurity-focused episode of the ARC Digital Transformation Podcast. In this episode, we speak to Randy Armstrong, director of IT operations at the OPC Foundation. OPC has a very solid cybersecurity foundation, and Randy has been at the center of this for some time, giving us an excellent summary of the many different layers of cybersecurity within OPC UA.

Security isn't something that was bolted on to OPC. From the beginning, security has been a primary concern. According to Randy, "We wanted to have a standard that incorporated security as a first-class concept. From the beginning, every aspect of the specification is analyzed in terms of its impact on security and has to follow the conventions and the requirements that we've laid out for the overall framework. So by doing this, we've developed a standard that has a very cohesive, holistic view of security that shows up at different levels in the implementations."

Many of the concepts behind what's known today as "zero trust" already exist in OPC, such as advanced authentication schemes, including the use of PKI, application authentication, and more. According to Randy, "What we built into the OPC UA infrastructure is this concept of application authentication. So every application that's installed on a particular node has a unique identifier and has a certificate assigned to it. And it will be configured to only allow communication with a finite number of other applications. And it's up to the administrators to decide who's allowed to trust who, and you have very fine-grained control. So you can have a cell on a factory floor with 10 machines, and those 10 machines would all be configured to talk to each other but nobody else. And this is going to be independent of the user credentials, which may determine what access somebody has when they're accessing the machine. So it's really two layers of authentication."

The podcast and artwork embedded on this page are from ARC Advisory Group, which is the property of its owner and not affiliated with or endorsed by Listen Notes, Inc.
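The application-authentication model Randy Armstrong describes can be sketched in a few lines: each application instance has its own identity (in real OPC UA, backed by an application instance certificate), and an administrator-managed trust list decides which peers it may talk to, independently of any user credentials. The names and URIs below are invented for illustration; this models the trust-list concept only, not an actual OPC UA stack:

```python
# Sketch of OPC UA style application authentication: mutual trust lists
# gate which applications may communicate, before any user login happens.

class Application:
    def __init__(self, app_uri):
        self.app_uri = app_uri      # unique application instance identifier
        self.trusted = set()        # peer URIs the administrator allows

    def trust(self, other):
        self.trusted.add(other.app_uri)

    def can_talk_to(self, other):
        # Both sides must trust each other (mutual authentication), so a
        # one-sided misconfiguration still fails closed.
        return other.app_uri in self.trusted and self.app_uri in other.trusted

# A factory-floor cell of three machines, trusted only among themselves
cell = [Application(f"urn:factory:cell1:machine{i}") for i in range(3)]
for a in cell:
    for b in cell:
        if a is not b:
            a.trust(b)

outsider = Application("urn:factory:office:laptop")
print(cell[0].can_talk_to(cell[1]))   # inside the cell: allowed
print(cell[0].can_talk_to(outsider))  # not on the trust list: blocked
```

User-level authorization would then sit on top of this as the second layer Randy mentions, deciding what an authenticated user may do once the two applications are allowed to connect at all.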
Thank you EZ VPN for sponsoring this video content. Check out EZ VPN's newest technology IO Hub (That we really love) at https://www.ezvpn.online Thanks for watching! Subscribe!
I sit down with Michael Bowne of PI North America to learn about what's new and happening with Profibus & Profinet International (PI), including updates on PI's work with OPC UA, IO-Link, and omlox, in episode 76 of The Automation Podcast. For more information, check out the "Show Notes" located below the video.

Watch the Podcast: Listen via Apple, Google, Pandora, Spotify, iHeartRadio, TuneIn, YouTube, RSS, or below: https://theautomationblog.com/wp-content/uploads/2021/09/TheAutomationPodcast-0076.mp3

The Automation Podcast, Episode 76 Show Notes: Special thanks to Michael Bowne for taking the time to come on the show and give us an update on PI North America. You can now support our work and join our community at Automation.Locals.com! Thanks in advance for your support!

Vendors: Would you like your product featured on the Podcast, Show or Blog? If you would, please contact me at: https://theautomationblog.com/contact

Sincerely, Shawn Tierney
Automation Instructor and Blogger

Have a question? Join my community of automation professionals and take part in the discussion! You'll also find my PLC, HMI, and SCADA courses at TheAutomationSchool.com.

Sponsor and Advertise: Get your product or service in front of our 75K followers while also supporting independent automation journalism by sponsoring or advertising with us! Learn more in our Media Guide here, or contact us using this form.
Join us every week where we answer your questions on Industry 4.0, IIoT, and successfully achieving digital transformation. Thanks for watching! Subscribe!