Podcasts about OSI

  • 608 podcasts
  • 1,523 episodes
  • 47m average episode duration
  • 5 new episodes weekly
  • Latest episode: Jan 15, 2026

POPULARITY (chart of episode activity, 2019-2026)


Latest podcast episodes about OSI

Ones Ready
***Sneak Peek*** MBRS 76: Cancel Culture for Operators: Tim Kennedy, Shrek, and the Stolen Valor Circus

Jan 15, 2026 · 19:28


Peaches dives headfirst into the swamp of veteran drama: Tim Kennedy, Shrek McPhee, stolen valor call-outs, and the internet's obsession with dragging up skeletons. He calls BS on the witch hunts, breaks down how accusations wreck careers long before proof, and exposes the military justice system's shady double standards. From OSI horror stories to generals cashing in on their rank, nothing's off-limits. If you think this episode is about playing nice, you're already lost.

⏱️ Timestamps:
0:00 – Peaches sets the stage: busy week, no fluff
1:10 – Nashville and Vegas OTS updates
2:30 – Tim Kennedy, Shrek, and stolen valor heat
5:00 – Why dragging old dirt ruins everyone
7:00 – OSI investigations and dirty tactics
10:00 – Sexual assault accusations gone sideways
13:00 – Wrong name, wrong career destroyed
15:00 – Drawing the line: stolen valor vs personal lives
16:00 – Goggins and the deadbeat dad smear
17:00 – Corrupt generals cashing in post-retirement
18:30 – Peaches signs off (for now)

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Artificial Analysis: Independent LLM Evals as a Service — with George Cameron and Micah Hill-Smith

Jan 8, 2026 · 78:24


Happy New Year! You may have noticed that in 2025 we moved toward YouTube as our primary podcasting platform. As we'll explain in the next State of Latent Space post, we'll be doubling down on Substack again and improving the experience for the over 100,000 of you who look out for our emails and website updates!

We first mentioned Artificial Analysis in 2024, when it was still a side project in a Sydney basement. They went on to be one of the few AI Grant companies to raise a full seed round from Nat Friedman and Daniel Gross, and have now become the independent gold standard for AI benchmarking, trusted by developers, enterprises, and every major lab to navigate the exploding landscape of models, providers, and capabilities.

We have chatted with both Clementine Fourrier of HuggingFace's OpenLLM Leaderboard and (the freshly valued at $1.7B) Anastasios Angelopoulos of LMArena about their approaches to LLM evals and trendspotting, but Artificial Analysis has staked out an enduring and important place in the toolkit of the modern AI Engineer by doing the best job of independently running the most comprehensive set of evals across the widest range of open and closed models, and charting their progress for broad industry analyst use.

George Cameron and Micah Hill-Smith have spent two years building Artificial Analysis into the platform that answers the questions no one else will: Which model is actually best for your use case? What are the real speed-cost trade-offs? And how open is "open" really?

We discuss:

* The origin story: built as a side project in 2023 while Micah was building a legal AI assistant, launched publicly in January 2024, and went viral after Swyx's retweet
* Why they run evals themselves: labs prompt models differently, cherry-pick chain-of-thought examples (Google's Gemini 1.0 Ultra used 32-shot prompts to beat GPT-4 on MMLU), and self-report inflated numbers
* The mystery shopper policy: they register accounts not on their own domain and run intelligence and performance benchmarks incognito, to prevent labs from serving different models on private endpoints
* How they make money: an enterprise benchmarking-insights subscription (standardized reports on model deployment: serverless vs. managed vs. leasing chips) and private custom benchmarking for AI companies (no one pays to be on the public leaderboard)
* The Intelligence Index (V3): synthesizes 10 eval datasets (MMLU, GPQA, agentic benchmarks, long-context reasoning) into a single score, with 95% confidence intervals via repeated runs
* The Omissions Index (hallucination rate): scores models from -100 to +100 (penalizing incorrect answers, rewarding "I don't know"); Claude models lead with the lowest hallucination rates despite not always being the smartest
* GDP Val AA: their version of OpenAI's GDPval (44 white-collar tasks with spreadsheets, PDFs, PowerPoints), run through their Stirrup agent harness (up to 100 turns, code execution, web search, file system), graded by Gemini 3 Pro as an LLM judge (tested extensively, no self-preference bias)
* The Openness Index: scores models 0-18 on transparency of pre-training data, post-training data, methodology, training code, and licensing (AI2's OLMo 2 leads, followed by Nous Hermes and NVIDIA Nemotron)
* The smiling curve of AI costs: GPT-4-level intelligence is 100-1000x cheaper than at launch (thanks to smaller models like Amazon Nova), but frontier reasoning models in agentic workflows cost more than ever (sparsity, long context, multi-turn agents)
* Why sparsity might go way lower than 5%: GPT-4.5 is ~5% active, Gemini models might be ~3%, and Omissions Index accuracy correlates with total parameters (not active), suggesting massive sparse models are the future
* Token efficiency vs. turn efficiency: GPT-5 costs more per token but solves Tau-bench in fewer turns (cheaper overall), and models are getting better at using more tokens only when needed (5.1 Codex has tighter token distributions)
* V4 of the Intelligence Index coming soon: adding GDP Val AA, Critical Point, and hallucination rate, and dropping some saturated benchmarks (HumanEval-style coding is now trivial for small models)

Links to Artificial Analysis

* Website: https://artificialanalysis.ai
* George Cameron on X: https://x.com/georgecameron
* Micah Hill-Smith on X: https://x.com/micahhsmith

Full Episode on YouTube

Timestamps

* 00:00 Introduction: Full Circle Moment and Artificial Analysis Origins
* 01:19 Business Model: Independence and Revenue Streams
* 04:33 Origin Story: From Legal AI to Benchmarking Need
* 11:47 Benchmarking Challenges: Variance, Contamination, and Methodology
* 13:52 Mystery Shopper Policy and Maintaining Independence
* 16:22 AI Grant and Moving to San Francisco
* 19:21 Intelligence Index Evolution: From V1 to V3
* 23:01 GDP Val AA: Agentic Benchmark for Real Work Tasks
* 28:01 New Benchmarks: Omissions Index for Hallucination Detection
* 33:36 Critical Point: Hard Physics Problems and Research-Level Reasoning
* 50:19 Stirrup Agent Harness: Open Source Agentic Framework
* 52:43 Openness Index: Measuring Model Transparency Beyond Licenses
* 58:25 The Smiling Curve: Cost Falling While Spend Rising
* 1:02:32 Hardware Efficiency: Blackwell Gains and Sparsity Limits
* 1:06:23 Reasoning Models and Token Efficiency: The Spectrum Emerges
* 1:11:00 Multimodal Benchmarking: Image, Video, and Speech Arenas
* 1:15:05 Looking Ahead: Intelligence Index V4 and Future Directions
* 1:16:50 Closing: The Insatiable Demand for Intelligence

Transcript

Micah [00:00:06]: This is kind of a full circle moment for us in a way, because the first time Artificial Analysis got mentioned on a podcast was you and Alessio on Latent Space.

swyx [00:00:17]: Amazing. Which was January 2024. I don't even remember doing that, but yeah, it was very influential to me. Yeah, I'm looking at AI News for Jan 17, or Jan 16, 2024. I said, this gem of a models and host comparison site was just launched. And then I put in a few screenshots, and I said, it's an independent third party. It clearly outlines the quality versus throughput trade-off, and it breaks out by model and hosting provider. I did give you s**t for missing Fireworks: how do you have a model benchmarking thing without Fireworks? But you had Together, you had Perplexity, and I think we just started chatting there. Welcome, George and Micah, to Latent Space. I've been following your progress. Congrats on... It's been an amazing year. You guys have really come together to be the presumptive new Gartner of AI, right? Which is something that...

George [00:01:09]: Yeah, but you can't pay us for better results.

swyx [00:01:12]: Yes, exactly.

George [00:01:13]: Very important.

Micah [00:01:14]: Start off with a spicy take.

swyx [00:01:18]: Okay, how do I pay you?

Micah [00:01:20]: Let's get right into that.

swyx [00:01:21]: How do you make money?

Micah [00:01:24]: Well, very happy to talk about that. So it's been a big journey the last couple of years. Artificial Analysis is going to be two years old in January 2026, which is pretty soon now. We run the website for free, obviously, and give away a ton of data to help developers and companies navigate AI and make decisions about models, providers, and technologies across the AI stack for building stuff. We're very committed to doing that and intend to keep doing that. We have, along the way, built a business that is working out pretty sustainably. We've got just over 20 people now and two main customer groups.
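The Omissions Index scoring rule described in the show notes above (penalize wrong answers, reward "I don't know", scale to -100..+100) can be sketched as a simple scorer. The equal weighting of correct and incorrect answers here is an illustrative assumption, not Artificial Analysis's published formula:

```python
def omissions_score(results):
    """Score a model from -100 to +100 in the spirit of the Omissions
    Index: correct answers earn credit, hallucinated (incorrect) answers
    are penalized, and abstaining ("I don't know") is neutral.
    The exact weights here are a guess for illustration only."""
    n = len(results)
    if n == 0:
        raise ValueError("no results to score")
    score = 0
    for r in results:  # each r is "correct", "incorrect", or "abstain"
        if r == "correct":
            score += 1
        elif r == "incorrect":
            score -= 1  # a confident wrong answer costs as much as a right one earns
        # "abstain" contributes 0: saying "I don't know" beats guessing wrong
    return 100 * score / n

# A model that answers 6/10 correctly, abstains on 3, and hallucinates 1:
print(omissions_score(["correct"] * 6 + ["abstain"] * 3 + ["incorrect"]))  # 50.0
```

Under a rule like this, a model that always guesses can score below zero, which matches the episode's point that Claude-style calibrated refusal is rewarded even when raw accuracy is not the highest.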
So we want to be who enterprises look to for data and insights on AI, helping them with their decisions about models and technologies for building stuff. And then on the other side, we do private benchmarking for companies throughout the AI stack who build AI stuff. So no one pays to be on the website. We've been very clear about that from the very start, because there's no use doing what we do unless it's independent AI benchmarking. Yeah. But it turns out a bunch of our stuff can be pretty useful to companies building AI stuff.

swyx [00:02:38]: And is it like, I am a Fortune 500, I need advisors on objective analysis, and I call you guys and you pull up a custom report for me, you come into my office and give me a workshop? What kind of engagement is that?

George [00:02:53]: So we have a benchmarking and insights subscription, which looks like standardized reports that cover key topics or key challenges enterprises face when looking to understand AI and choose between all the technologies. For instance, one of the reports is a model deployment report: how to think about choosing between serverless inference, managed deployment solutions, or leasing chips and running inference yourself. That's an example of the kind of decision big enterprises face, and it's hard to reason through; this AI stuff is really new to everybody. And so we try to help companies navigate that with our reports and insights subscription. We also do custom private benchmarking. That's very different from the public benchmarking that we publicize, where there's no commercial model around it. For private benchmarking, we'll at times create benchmarks and run benchmarks to specs that enterprises want. And we'll also do that sometimes for AI companies who have built things, and we help them understand what they've built with private benchmarking. Yeah.
So that's a piece mainly that we've developed through trying to support everybody publicly with our public benchmarks. Yeah.

swyx [00:04:09]: Let's talk about the tech stack behind that. But okay, I'm going to rewind all the way to when you guys started this project. You were all the way in Sydney?

Micah [00:04:19]: Yeah. Well, Sydney, Australia for me. George was in SF; he's Australian, but he had already moved here. Yeah.

swyx [00:04:22]: And I remember I had the Zoom call with you. What was the impetus for starting Artificial Analysis in the first place? You know, you started with public benchmarks. And so let's start there. We'll get to the private benchmarks. Yeah.

George [00:04:33]: Why don't we even go back a little bit to why we thought that it was needed? Yeah.

Micah [00:04:40]: The story kind of begins in 2022, 2023. Both George and I had been into AI stuff for quite a while. In 2023 specifically, I was trying to build a legal AI research assistant. It actually worked pretty well for its era, I would say. Yeah. So I was finding that the more you go into building something using LLMs, the more each bit of what you're doing ends up being a benchmarking problem. I had this multistage algorithm thing, trying to figure out what the minimum viable model for each bit was, trying to optimize every bit of it as you build that out, right? You're trying to think about accuracy, a bunch of other metrics, and performance and cost. And mostly, no one was doing anything to independently evaluate all the models, and certainly not to look at the trade-offs for speed and cost. So we basically set out just to build a thing that developers could look at to see the trade-offs between all of those things, measured independently across all the models and providers.
Honestly, it was probably meant to be a side project when we first started doing it.

swyx [00:05:49]: Like you didn't get together and say, hey, we're going to stop working on all this other stuff, this is going to be our main thing. When I first called you, I think you hadn't decided on starting a company yet.

Micah [00:05:58]: That's actually true. I don't even think we'd paused anything. George still had his job, and I didn't quit working on my legal AI thing. It was genuinely a side project.

George [00:06:05]: We built it because we needed it as people building in the space, and thought, oh, other people might find it useful too. So we bought a domain, linked it to the Vercel deployment that we had, and tweeted about it. But very quickly it started getting attention. Thank you, Swyx, for doing an initial retweet and spotlighting this project that we released. Very quickly, though, it became more useful as the number of models released accelerated. We had Mixtral 8x7B, and that was a key one. A fun one. An open-source model that really changed the landscape and opened people's eyes to other serverless inference providers, to thinking about speed, thinking about cost. And so that was key, and it became more useful quite quickly. Yeah.

swyx [00:07:02]: What I love about talking to people like you who sit across the ecosystem is, well, I have theories about what people want, but you have data, and that's obviously more relevant. But I want to stay on the origin story a little bit more. When you started out, the status quo at the time was that every paper would come out and report their numbers versus competitor numbers. And that's basically it. And I remember I did the legwork. I think everyone has some knowledge.
I think there's some version of an Excel sheet or a Google Sheet where you just copy and paste the numbers from every paper and post it up there. And then sometimes they don't line up, because when they're independently run, your reproductions of other people's numbers are going to look worse because you don't prompt their models correctly, or whatever the excuse is. I think Stanford HELM, Percy Liang's project, would also have some of these numbers. And I don't know if there's any other source that you can cite. If I were to start Artificial Analysis at the same time you guys started, I would have used EleutherAI's eval harness.

Micah [00:08:06]: Yup. That was some cool stuff. At the end of the day, running these evals, if it's a simple Q&A eval, all you're doing is asking a list of questions and checking if the answers are right, which shouldn't be that crazy. But it turns out there are an enormous number of things that you've got to control for. I mean, back when we started the website, one of the reasons we realized that we had to run the evals ourselves and couldn't just take reported numbers from the labs was that they would all prompt the models differently. And when you're competing over a few points, then you can pretty easily get...

swyx: You can put the answer into the model.

Micah: Yeah. That, in the extreme. And you get crazy cases, like back when Google launched Gemini 1.0 Ultra and needed a number that would say it was better than GPT-4, and constructed, I think never published, chain-of-thought examples, 32 of them, for every topic in MMLU, to run it and get the score. There are so many things that you...

swyx: They never shipped Ultra, right? That's the one that never made it out.

Micah: Not widely. Yeah. I mean, I'm sure it existed, but yeah.
So we were pretty sure that we needed to run them ourselves, and run them the same way across all the models. Yeah. And we were also certain from the start that you couldn't look at those in isolation. You needed to look at them alongside the cost and performance stuff. Yeah.

swyx [00:09:24]: Okay. A couple of technical questions. I mean, obviously I also thought about this, and I didn't do it because of cost. Did you not worry about costs? Were you funded already? Clearly not, but you know.

Micah [00:09:36]: No. Well, we definitely weren't at the start. So, I mean, we were paying for it personally at the start.

swyx: That's a lot of money.

Micah: Well, the numbers weren't nearly as bad a couple of years ago. So we certainly incurred some costs, but we were probably in the order of hundreds of dollars of spend across all the benchmarking that we were doing. So, nothing. It was kind of fine. These days that's gone up an enormous amount, for a bunch of reasons that we can talk about. But it wasn't that bad back then, because you have to remember that the number of models we were dealing with was hardly any, and the complexity of the stuff we wanted to do to evaluate them was a lot less. We were just asking some Q&A-type questions. And one specific thing: for a lot of evals initially, we were just sampling an answer, you know, like, what's the answer for this? We went for the answer directly without letting the models think; we weren't even doing chain-of-thought stuff initially. And that was the most useful way to get some results at the time. Yeah.

swyx [00:10:33]: And so for people who haven't done this work, literally parsing the responses is a whole thing, right?
Because sometimes the models can answer any way they see fit, and sometimes they actually do have the right answer, but they just returned it in the wrong format, and they will get a zero for that unless you work it into your parser. And that involves more work. And so there's an open question whether you should give it points for not following your instructions on the format.

Micah [00:11:00]: It depends what you're looking at, right? If you're trying to see whether or not it can solve a particular type of reasoning problem, and you don't want to test its ability to do answer formatting at the same time, then you might want to use an LLM-as-answer-extractor approach to make sure that you get the answer out no matter how it's formatted. But these days, it's mostly less of a problem. If you instruct a model and give it examples of what the answers should look like, it can get the answers in your format, and then you can do a simple regex.

swyx [00:11:28]: Yeah, yeah. And then there are other questions around, I guess, multiple-choice questions: sometimes there's a bias towards the first answer, so you have to randomize the order of the choices. All these nuances... once you dig into benchmarks, you're like, I don't know how anyone believes the numbers on all these things. It's such dark magic.

Micah [00:11:47]: You've also got the different degrees of variance in different benchmarks, right? Yeah. So, if you run a four-option multiple-choice eval on a modern reasoning model at the temperatures suggested by the labs for their own models, the variance that you can see is pretty enormous if you only do a single run of it, especially if it has a small number of questions. So one of the things that we do is run an enormous number of repeats of all of our evals when we're developing new ones and doing upgrades to our intelligence index to bring in new things.
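The answer-extraction approach Micah describes (instruct a format up front, extract with a regex, and fall back to something more lenient, or an LLM extractor) can be sketched as follows. The exact regex and the "Answer: X" convention are illustrative assumptions, not Artificial Analysis's actual parser:

```python
import re

# Assumes the prompt instructed the model to end with "Answer: X".
ANSWER_RE = re.compile(r"Answer:\s*\(?([A-D])\)?", re.IGNORECASE)

def extract_choice(response: str):
    """Pull a multiple-choice letter out of a free-form model response."""
    m = ANSWER_RE.search(response)
    if m:
        return m.group(1).upper()
    # Lenient fallback: last standalone A-D letter in the text. A production
    # harness might instead hand the response to an LLM "answer extractor".
    letters = re.findall(r"\b([A-D])\b", response)
    return letters[-1].upper() if letters else None

print(extract_choice("Let's think step by step... Answer: (c)"))  # C
print(extract_choice("The correct option is B"))                  # B
```

Without a fallback like this, a model that reasons to the right answer but formats it unexpectedly gets a zero, which is exactly the scoring question swyx raises above.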
So that we can dial in the right number of repeats, so that we can get to the 95% confidence intervals that we're comfortable with, so that when we pull it all together, we can be confident the Intelligence Index is at least as tight as plus or minus one at 95% confidence. Yeah.

swyx [00:12:32]: And, again, that just adds a straight multiple to the cost.

George [00:12:37]: Oh, yeah. That's one of many reasons that cost has gone up a lot more than linearly over the last couple of years. We report a cost to run the Artificial Analysis Intelligence Index on our website, and currently that assumes one repeat, because we want it to reflect the weighting of the index. But our cost is actually a lot higher than what we report there, because of the repeats.

swyx [00:13:03]: Yeah, yeah, yeah. And probably this is true, but just checking: you don't have any special deals with the labs? They don't discount it? You just pay out of pocket, or out of your sort of customer funds?

Micah: Oh, there is a mix.

swyx: So, the issue is that sometimes they may give you a special endpoint, which is...

Micah [00:13:21]: Ah, 100%. Yeah, exactly. So we laser-focus, in everything we do, on having the best independent metrics and making sure that no one can manipulate them in any way. There are quite a lot of processes we've developed over the last couple of years to make that true, like the one you bring up right here: if we're working with a lab and they're giving us a private endpoint to evaluate a model, it is totally possible that what's sitting behind that black box is not the same as what they serve on a public endpoint. We're very aware of that. We have what we call a mystery shopper policy.
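The repeat-and-aggregate approach described above (run an eval many times, then report a 95% confidence interval on the mean) can be sketched with a normal approximation. This is an illustration of the general idea, not Artificial Analysis's exact methodology:

```python
import statistics

def accuracy_ci(run_scores, z=1.96):
    """Given accuracy scores from several independent runs of the same
    eval, return (mean, half_width) of a normal-approximation 95% CI
    on the mean. Repeating runs until half_width is small enough is
    one way to get an index tight to about plus or minus one."""
    n = len(run_scores)
    mean = statistics.fmean(run_scores)
    if n < 2:
        return mean, float("inf")  # one run tells you nothing about variance
    sem = statistics.stdev(run_scores) / n ** 0.5  # standard error of the mean
    return mean, z * sem

# High sampling temperature makes single runs noisy; repeats tighten the CI:
mean, half = accuracy_ci([71.2, 69.8, 70.5, 72.1, 70.9])
print(f"{mean:.2f} +/- {half:.2f}")  # 70.90 +/- 0.75
```

Note that every extra repeat multiplies eval cost, which is the "straight multiple to the cost" swyx points out.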
And we're totally transparent with all the labs we work with about this: we will register accounts not on our own domain and run both intelligence evals and performance benchmarks without them being able to identify it. And no one's ever had a problem with that. Because a thing that turns out to actually be quite a good thing for the industry is that they all want to believe that none of their competitors could manipulate what we're doing either.

swyx [00:14:23]: That's true. I never thought about that. I was in the database industry prior, and there's a lot of shenanigans around benchmarking, right? So I'm just kind of going through the mental laundry list. Did I miss anything else in this category of potential shenanigans?

Micah [00:14:36]: I mean, okay, the biggest one that I'll bring up is more of a conceptual one, actually, than direct shenanigans. It's that the things that get measured become the things that get targeted by the labs, right?

swyx: Exactly.

Micah: So that doesn't mean anything that we should really call shenanigans. I'm not talking about training on the test set. But if you know that you're going to be graded on a particular thing, if you're a researcher, there are a whole bunch of things you can do to try to get better at that thing, which preferably are going to be helpful for the wide range of ways actual users want to use the thing you're building, but will not necessarily be. So, for instance, the models are exceptional now at answering competition math problems. There is some relevance of that type of reasoning, that type of work, to how we might use modern coding agents and such. But it's clearly not one for one.
So the thing that we have to be aware of is that once an eval becomes the thing everyone's looking at, scores can get better on it without that reflecting the overall generalized intelligence of these models getting better. That has been true for the last couple of years. It'll be true for the next couple of years. There's no silver bullet to defeat that, other than building new stuff to stay relevant and measure the capabilities that matter most to real users. Yeah.

swyx [00:15:58]: And we'll cover some of the new stuff that you guys are building as well, which is cool. You used to just run other people's evals, but now you're coming up with your own. And I think, obviously, that is a necessary path once you're at the frontier: you've exhausted all the existing evals. I think the next point in history that I have for you is AI Grant, which you guys decided to join, and you moved here. What was it like? I think you were in, like, batch two?

Micah: Batch four.

swyx: Batch four. Okay.

Micah [00:16:26]: I mean, it was great. Nat and Daniel are obviously great, and it's a really cool group of companies that we were in AI Grant alongside. It was really great to get Nat and Daniel on board. Obviously, they've done a whole lot of great work in the space with a lot of leading companies, and they were extremely aligned with the mission of what we were trying to do. We're not quite typical of a lot of the other AI startups that they've invested in, and they were very much here for the mission of what we want to do.

swyx [00:16:53]: Did they give any advice that really affected you in some way, or were any of the events very impactful?

Micah [00:17:03]: That's an interesting question. I mean, I remember fondly a bunch of the speakers who came and did fireside chats at AI Grant.

swyx [00:17:09]: Which is also, like, a crazy list. Yeah.

George [00:17:11]: Oh, totally. Yeah, yeah, yeah.
There was something about speaking to Nat and Daniel about the challenges of working through a startup: working through the questions that don't have clear answers, how to work through those methodically, and just working through the hard decisions. They've been great mentors to us as we've built Artificial Analysis. Another benefit for us was that other companies in the batch, and other companies in AI Grant generally, are pushing the capabilities of what AI can do at this time. And so being in contact with them, making sure that Artificial Analysis is useful to them, has been fantastic for working out how we should build out Artificial Analysis to continue being useful to those building on AI.

swyx [00:17:59]: I'm of mixed opinion on that one, because to some extent, your target audience is not people in AI Grant, who are obviously at the frontier. Do you disagree?

Micah [00:18:09]: To some extent. But a lot of what the AI Grant companies are doing is taking capabilities coming out of the labs and trying to push the limits of what they can do, across the entire stack, for building great applications. That actually makes some of them pretty archetypal power users of Artificial Analysis: some of the people with the strongest opinions about what we're doing well, what we're not doing well, and what they want to see next from us. Because when you're building any kind of AI application now, chances are you're using a whole bunch of different models. You're maybe switching reasonably frequently between models for different parts of your application, to optimize what you're able to do with them at an accuracy level and to get better speed and cost characteristics. So for many of them, no, they're not commercial customers of ours; we don't charge for all the data on the website.
They are absolutely some of our power users, though.

swyx [00:19:07]: So let's talk about the evals as well. You started out with the general MMLU and GPQA stuff. What's next? How do you build up to the overall index? What was in V1, and how did you evolve it?

Micah [00:19:22]: Okay. So first, just background: we're talking about the Artificial Analysis Intelligence Index, which is our synthesis metric, currently pulled together from 10 different eval datasets, to give what we're pretty confident is the best single number to look at for how smart the models are. Obviously, it doesn't tell the whole story; that's why we publish the whole website of charts, to dive into every part of it and look at the trade-offs. But it's the best single number. Right now, it's got a bunch of Q&A-type datasets that have been very important to the industry, like the couple you just mentioned. It's also got a couple of agentic datasets, our own long-context reasoning dataset, and some other use-case-focused stuff. As time goes on, the things we're most interested in, the capabilities that are becoming more important for AI and that developers care about, are going to be first around agentic capabilities. Surprise, surprise: we're all loving our coding agents, and how the models perform there, and on similar things for different types of work, is really important to us. Linking to use cases, to economically valuable use cases, is extremely important to us. And then we've got some of these things that the models still struggle with, like working really well over long contexts, that are not going to go away as specific capabilities and use cases we need to keep evaluating.

swyx [00:20:46]: But I guess one thing I was driving at was the V1 versus the V2, and how that changed over time.

Micah [00:20:53]: How we've changed the index to get where we are.

swyx [00:20:55]: And I think that reflects the change in the industry, right? So that's a nice way to tell that story.

Micah [00:21:00]: Well, V1 would be completely saturated right now by almost every model coming out, because doing things like writing the Python functions in HumanEval is now pretty trivial. It's easy to forget, actually, how much progress has been made in the last two years. We obviously play the game constantly of today's version versus last week's version and the week before, and all of the small changes in the horse race between the current frontier, and who has the best smaller-than-10B model right now, this week. And that's very important to a lot of developers and people, especially in this particular city of San Francisco. But when you zoom out a couple of years, literally most of what we were doing to evaluate the models then would be 100% solved by even pretty small models today. And that's been one of the key things, by the way, that's driven down the cost of intelligence at every tier of intelligence; we can talk more about that in a bit. So across V1, V2, and V3, we made things harder, we covered a wider range of use cases, and we tried to get closer to things developers care about, as opposed to just the Q&A-type stuff that MMLU and GPQA represented. Yeah.

swyx [00:22:12]: I don't know if you have anything to add there. Or we could just go right into showing people the benchmark, looking around and asking questions about it. Yeah.

Micah [00:22:21]: Let's do it. Okay.
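The synthesis Micah describes, pulling several eval datasets together into one Intelligence Index number, can be sketched as a weighted average. The dataset names, scores, and weights below are illustrative assumptions; Artificial Analysis's actual recipe and weighting are not public in this transcript:

```python
# Toy synthesis of per-eval scores into a single index, in the spirit of
# the Intelligence Index. All names, scores, and weights are made up.
def intelligence_index(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-eval scores (each already on a 0-100 scale)."""
    total_w = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total_w

evals = {"MMLU-Pro": 84.0, "GPQA Diamond": 71.0, "agentic": 62.0, "long-context": 58.0}
# Hypothetical weighting that leans toward agentic capability:
weights = {"MMLU-Pro": 1.0, "GPQA Diamond": 1.0, "agentic": 2.0, "long-context": 1.0}
print(round(intelligence_index(evals, weights), 1))  # 67.4
```

Reweighting (or swapping datasets in and out, as in the V1 to V3 evolution described above) changes the single number without any model changing, which is why the underlying per-eval charts still matter.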
This would be a pretty good way to chat about a few of the new things we've launched recently. Yeah.

George [00:22:26]: And a little bit about the direction we want to take it, and where we want to push benchmarks. Currently, the Intelligence Index and evals focus a lot on raw intelligence, but we want to diversify how we think about intelligence. New evals that we've built and partnered on focus on topics like hallucination, and there are a lot of topics that I think are not covered by the current eval set that should be. And so we want to bring that forward. But before we get into that...

swyx [00:23:01]: And so for listeners, just as a timestamp: right now, number one is Gemini 3 Pro High, followed by Claude Opus at 70, GPT-5.1 High (you don't have 5.2 yet), and Kimi K2 Thinking. Wow, still hanging in there. So those are the top four. That will date this podcast quickly. Yeah. I mean, I love it. No, no, 100%. Look back this time next year and go, how cute. Yep.

George [00:23:25]: Totally. A quick view of that is... okay, there's a lot. I love this chart. Yeah.

Micah [00:23:30]: This is such a favorite, right? In almost every talk George or I give at conferences, we put this one up first, to situate where we are in this moment in history. This, I think, is the visual version of what I was saying before about zooming out and remembering how much progress there's been. If we go back to just over a year ago, before o1, before Claude Sonnet 3.5, we didn't have reasoning models or coding agents as a thing, and the game was very, very different. If we go back even a little bit before then, we're in the era where, when you look at this chart, OpenAI was untouchable for well over a year.
And, I mean, you would remember that time period well, with there being very open questions about whether or not AI was going to be competitive, full stop; whether or not OpenAI would just run away with it; whether we would have a few frontier labs and no one else would really be able to do anything other than consume their APIs. I am quite happy overall that the world that we have ended up in is one where... Multi-model. Absolutely. And strictly more competitive every quarter over the last few years. Yeah. This year has been insane. Yeah.

George [00:24:42]: You can see it. This chart with everything added is hard to read currently; there are so many dots on it. But I think it reflects a little bit of what we felt, like how crazy it's been.

swyx [00:24:54]: Why 14 as the default? Is that a manual choice? Because you've got ServiceNow in there, which is a less traditional name. Yeah.

George [00:25:01]: It's models that we're highlighting by default in our charts, in our intelligence index. Okay.

swyx [00:25:07]: You just have a manually curated list of stuff.

George [00:25:10]: Yeah, that's right. But something that I actually don't think every Artificial Analysis user knows is that you can customize our charts and choose which models are highlighted. Yeah. And so if we take off a few names, it gets a little easier to read.

swyx [00:25:25]: Yeah, yeah. A little easier to read. Totally. Yeah. But I love that you can see the o1 jump. Look at that. September 2024. And the DeepSeek jump. Yeah.

George [00:25:34]: Which got close to OpenAI's leadership. They were so close. I think, yeah, we remember that moment. Around this time last year, actually.

Micah [00:25:44]: Yeah, yeah, yeah. I agree. Well, give it a couple of weeks. It was Boxing Day in New Zealand when DeepSeek V3 came out. And we'd been tracking DeepSeek and a bunch of the other global players that were less known over the second half of 2024, and had run evals on the earlier ones and stuff.
I very distinctly remember Boxing Day in New Zealand, because I was with family for Christmas and stuff, running the evals and getting back result by result on DeepSeek V3. So this was the first of their V3 architecture, the 671B MoE.

Micah [00:26:19]: And we were very, very impressed. That was the moment where we were sure that DeepSeek was no longer just one of many players, but had jumped up to be a thing. The world really noticed when they followed that up with the RL working on top of V3 and R1 succeeding a few weeks later. But the groundwork for that absolutely was laid with just an extremely strong base model, completely open weights, which we had as the best open weights model. So, yeah, that's the thing that really changed the game for us on Boxing Day last year.

George [00:26:48]: Boxing Day is the day after Christmas, for those not familiar.

swyx [00:26:54]: I'm from Singapore. A lot of us remember Boxing Day for a different reason, for the tsunami that happened. Oh, of course. Yeah, but that was a long time ago. So yeah. So this is the rough pitch of AAII. Is it A-A-Q-I or A-A-I-I? I-I. Okay. Good memory, though.

Micah [00:27:11]: I don't know. I'm not used to it. Once upon a time, we did call it Quality Index, and we would talk about quality, performance, and price, but we changed it to intelligence.

George [00:27:20]: There have been a few naming changes. We added hardware benchmarking to the site, and so benchmarks at a system level. And so then we changed our throughput metric; we now call it output speed, since throughput makes sense at a system level, so we took that name.

swyx [00:27:32]: Take me through more charts. What should people know? Obviously, the way you look at the site is probably different from how a beginner might look at it.

Micah [00:27:42]: Yeah, that's fair. There's a lot of fun stuff to dive into.
Maybe so we can get past all the... like, we have lots and lots of evals and stuff. The interesting ones to talk about today are a few of our recent things that probably not many people will be familiar with yet. So the first one of those is our Omniscience Index. This one is a little bit different from most of the intelligence evals that we've run. We built it specifically to look at the embedded knowledge in the models, and to test hallucination by looking at, when the model doesn't know the answer, so it's not able to get it correct, what's its probability of saying "I don't know" versus giving an incorrect answer. So the metric that we use for Omniscience goes from negative 100 to positive 100, because we're simply taking off a point if you give an incorrect answer to a question. We're pretty convinced that this is an example of where it makes most sense to do that, because it's strictly more helpful to say "I don't know" instead of giving a wrong answer to a factual knowledge question. And one of our goals is to shift the incentive that evals create for models and the labs creating them to get higher scores. Almost every eval across all of AI up until this point has been graded by simple percentage correct as the main metric, the main thing that gets hyped, and so you should take a shot at everything; there's no incentive to say "I don't know". So we did that for this one here.

swyx [00:29:22]: I think there's a general field of calibration as well, like the confidence in your answer versus the rightness of the answer.

George [00:29:31]: Yeah, we completely agree on that. And one reason that we didn't put that into this index is that we think the way to do that is not to ask the models how confident they are.

swyx [00:29:43]: I don't know, maybe it might be, though. You put in, like, a JSON field that says confidence, and maybe it spits out something. Yeah.
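The Omniscience scoring just described (+1 for a correct answer, 0 for "I don't know", -1 for an incorrect answer, scaled to a -100 to +100 index, with hallucination rate as the share of non-correct answers that are wrong) can be sketched roughly as follows. This is an illustrative reconstruction, not Artificial Analysis's actual implementation.

```python
# Illustrative reconstruction of the Omniscience Index scoring described
# above; not Artificial Analysis's actual implementation.

def omniscience_index(results):
    """results: list of 'correct' | 'idk' | 'incorrect' outcomes."""
    points = {"correct": 1, "idk": 0, "incorrect": -1}
    return 100 * sum(points[r] for r in results) / len(results)

def hallucination_rate(results):
    """Of the questions not answered correctly, the share answered wrongly."""
    wrong = results.count("incorrect")
    not_correct = wrong + results.count("idk")
    return wrong / not_correct if not_correct else 0.0

# A model that answers 60% correctly, declines 30%, and is wrong 10% scores
# 50; one with the same knowledge that always guesses (60/0/40) scores 20.
cautious = ["correct"] * 6 + ["idk"] * 3 + ["incorrect"] * 1
guesser = ["correct"] * 6 + ["incorrect"] * 4
print(omniscience_index(cautious), omniscience_index(guesser))  # 50.0 20.0
print(hallucination_rate(cautious))  # 0.25
```

Under this scheme a wrong answer costs strictly more than declining, which is exactly the incentive shift described above.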
You know, we have done a few evals podcasts over the years, and when we did one with Clémentine of Hugging Face, who maintains the Open LLM Leaderboard, this was one of her top requests: some kind of hallucination slash lack-of-confidence calibration thing. And so, hey, this is one of them.

Micah [00:30:05]: And, like anything that we do, it's not a perfect metric or the whole story of everything that you think about as hallucination. But yeah, it's pretty useful and has some interesting results. One of the things that we saw in the hallucination rate is that Anthropic's Claude models are at the very left-hand side here, with the lowest hallucination rates out of the models that we've evaluated Omniscience on. That is an interesting fact. I think it probably correlates with a lot of the previously not-really-measured vibes stuff that people like about some of the Claude models.

swyx: Is the dataset public, or is there a held-out set?

Micah: There's a held-out set for this one. So we have published a public test set, but we've only published 10% of it. The reason is that for this one specifically, it would be very, very easy to have data contamination, because it is just factual knowledge questions. We'll update it over time to also prevent that, but yeah, we've kept most of it held out so that we can keep it reliable for a long time. It leads us to a bunch of really cool things, including breaking down quite granularly by topic. And so we've got some of that disclosed on the website publicly right now, and there's lots more coming in terms of our ability to break out very specific topics. Yeah.

swyx [00:31:23]: I would be interested. Let's dwell a little bit on this hallucination one. I noticed that Haiku hallucinates less than Sonnet, which hallucinates less than Opus. Would that be the other way around in a normal capability environment? I don't know.
What do you make of that?

George [00:31:37]: One interesting aspect is that we've found there's not really a strong correlation between intelligence and hallucination. That's to say that how smart the models are in a general sense isn't correlated with their ability, when they don't know something, to say that they don't know. It's interesting that Gemini 3 Pro Preview was a big leap here over Gemini 2.5 Flash and 2.5 Pro. And if I add Pro quickly here...

swyx [00:32:07]: I bet Pro's really good. Uh, actually no, I meant the GPT Pros.

George [00:32:12]: Oh yeah.

swyx [00:32:13]: Because the GPT Pros are rumored, we don't know it for a fact, to be like eight runs with an LLM judge on top. Yeah.

George [00:32:20]: So we saw a big jump in... this is accuracy, so this is just the percent that they get correct, and Gemini 3 Pro knew a lot more than the other models. So there was a big jump in accuracy, but relatively no change between the Google Gemini models between releases.

swyx: And in the hallucination rate.

George: Exactly. And so it's likely just a different post-training recipe, compared to the Claude models, that's driven this. Yeah.

Micah [00:32:45]: You can partially blame us, and how we define intelligence, having until now not counted hallucination as a negative in the way that we think about intelligence. And so that's what we're changing.

swyx [00:32:56]: I know many smart people who are confidently incorrect.

George [00:33:02]: Look at that, that is very human. Very true. And there's a time and a place for that. I think our view is that hallucination rate makes sense in this context, where it's around knowledge, but in many cases people want the models to hallucinate, to have a go. Often that's the case in coding, or when you're trying to generate newer ideas.
One eval that we added to Artificial Analysis is Critical Point, and it's really hard physics problems. Okay.

swyx [00:33:32]: And is it sort of like a HumanEval type, or something different, or like a FrontierMath type?

George [00:33:37]: It's not dissimilar to FrontierMath. So these are research questions that academics in the physics world would be able to answer, but models really struggle to answer. The top score here is only around 9%.

swyx [00:33:51]: And the people that created this, like Minyang, and Ofir, who was kind of behind SWE-bench... what organization is this? Oh, is this Princeton?

George [00:34:01]: A range of academics from different academic institutions, really smart people. They talked about how they turn the models up in terms of temperature, as high a temperature as they can, when they're trying to explore new ideas in physics with the models as a thought partner, just because they want the models to hallucinate. Sometimes it's something new. Yeah, exactly.

swyx [00:34:21]: So it's not right in every situation, but I think it makes sense to test hallucination in scenarios where it makes sense. Also, the obvious question: this is one of many. Every lab has a system card that shows some kind of hallucination number, and you've chosen not to endorse those and made your own. And I think that's a choice. Totally, in some sense the rest of Artificial Analysis is public benchmarks that other people can independently rerun, and you provide this as a service here. You have to fight the "well, who are we to do this?"
And your answer is that we have a lot of customers, and, you know... but, like, I guess, how do you convince the individual?

Micah [00:35:08]: I think for hallucinations specifically, there are a bunch of different things that you might reasonably care about, and that you'd measure quite differently. We've called this the Omniscience hallucination rate; we're not trying to declare it, like, Humanity's Last Hallucination. You could have some interesting naming conventions and all this stuff. The bigger-picture answer to that is something I actually wanted to mention just as George was explaining Critical Point: going forward, we are building evals internally, and we're partnering with academia and with AI companies to build great evals. We have pretty strong views, in various ways for different parts of the AI stack, on where there are things that are not being measured well, or things that developers care about that should be measured more and better, and we intend to be doing that. We're not necessarily obsessed with the idea that everything we do has to be done entirely within our own team. Critical Point is a cool example of where we were a launch partner, working with academia, and we've got some partnerships coming up with a couple of leading companies. Those ones, obviously, we have to be careful with on some of the independence stuff, but with the right disclosure we're completely comfortable with that. A lot of the labs have released great data sets in the past that we've used to great success independently. And so between all of those approaches, we're going to be releasing more stuff in the future. Cool.

swyx [00:36:26]: Let's cover the last couple, and then I want to talk about your trends analysis stuff, you know? Totally.

Micah [00:36:31]: So actually, I have one little factoid on Omniscience.
If you go back up to accuracy on Omniscience, an interesting thing about this accuracy metric is that it tracks, more closely than anything else that we measure, the total parameter count of models. That makes a lot of sense intuitively, right? Because this is a knowledge eval. This is the pure knowledge metric; we're not looking at the index and the hallucination rate stuff, which we think is much more about how the models are trained. This is just: what facts did they recall? And yeah, it tracks parameter count extremely closely. Okay.

swyx [00:37:05]: What's the rumored size of Gemini 3 Pro? And to be clear, not confirmed by any official source, just rumors. But rumors do fly around. Rumors. I hear all sorts of numbers. I don't know what to trust.

Micah [00:37:17]: So if you draw the line on Omniscience accuracy versus total parameters, we've got all the open weights models, and you can squint and see that the leading frontier models are likely quite a lot bigger than the roughly one trillion parameters that the open weights models we're looking at here cap out at. There's an interesting extra data point that Elon Musk revealed recently about xAI: three trillion parameters for Grok 3 and 4, six trillion for Grok 5, but that's not out yet. Take those together, have a look, and you might reasonably form the view that there's a pretty good chance Gemini 3 Pro is bigger than that, that it could be in the 5 to 10 trillion parameter range. To be clear, I have absolutely no idea, but just based on this chart, that's where you would land if you have a look at it. Yeah.

swyx [00:38:07]: And to some extent, I actually kind of discourage people from guessing too much, because what does it really matter? As long as they can serve it at a sustainable cost, that's about it.
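The back-of-envelope reasoning Micah describes (fit knowledge accuracy against log parameter count for open-weights models, then see what size a frontier score would imply) can be sketched like this. Every model size and accuracy below is an invented placeholder, not Artificial Analysis data.

```python
# Hypothetical sketch of reading implied model size off an accuracy-vs-
# parameters trend: fit accuracy against log10(total params) for open-weights
# models, then invert the fit. All numbers are invented placeholders.
import math

open_models = [        # (total parameters, knowledge accuracy %) - invented
    (32e9, 12.0),
    (120e9, 18.0),
    (671e9, 27.0),
    (1000e9, 29.5),
]

# Least-squares fit: accuracy ~ slope * log10(params) + intercept
xs = [math.log10(p) for p, _ in open_models]
ys = [acc for _, acc in open_models]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

def implied_params(accuracy):
    """Invert the linear fit: what total size would this accuracy imply?"""
    return 10 ** ((accuracy - intercept) / slope)

# On this invented trend, a frontier model scoring ~40% would imply a
# multi-trillion-parameter model.
print(f"{implied_params(40.0):.1e}")
```

The point of the exercise is directional, not precise: the inferred size is extremely sensitive to the fitted slope, which is one reason to treat any such estimate as a rumor-grade guess.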
Like, yeah, totally.

George [00:38:17]: They've also got different incentives in play compared to open weights models, which are aiming to support others in self-deployment. For the labs doing inference at scale, when thinking about inference costs, I think it's in many cases less about total parameters and more about the number of active parameters. And so there's a bit of an incentive towards larger, sparser models. Agreed.

Micah [00:38:38]: Understood. Yeah. Great. I mean, obviously, if you're a developer or company using these things, exactly as you say, it doesn't matter. You should be looking at all the different ways that we measure intelligence, at the cost to run the index, and at the different ways of thinking about token efficiency and cost efficiency based on the list prices, because that's what matters.

swyx [00:38:56]: It's not as good for the content creator rumor mill, where I can say, oh, GPT-4 is this small circle, look, GPT-5 is this big circle. That used to be a thing for a while. Yeah.

Micah [00:39:07]: But that on its own is actually a very interesting one, right? Chances are the last couple of years haven't seen a dramatic scaling up in the total size of these models. And so there's a lot of room to go up properly in total model size, especially with the upcoming hardware generations. Yes.

swyx [00:39:29]: So, you know, taking off my shitposting hat for a minute. At the same time, I do feel, especially coming back from Europe, that people think Ilya is probably right that the paradigm doesn't have many more orders of magnitude to scale, and therefore we need to start exploring at least a different path. GDPVal, I think, is only about a month or so old. I was also very positive on it when it first came out. I actually talked to Tejal, who was the lead researcher on that. Oh, cool.
And you have your own version.

George [00:39:59]: It's a fantastic data set. Yeah.

swyx [00:40:01]: And maybe we'll recap it for people who are new to it. It's like 44 tasks, based on some kind of GDP cutoff, that are meant to represent broad white-collar work that is not just coding. Yeah.

Micah [00:40:12]: Each of the tasks has a whole bunch of detailed instructions, and input files for a lot of them. Within the 44, it's divided into around 220 subtasks, which are the level that we run through the agentic harness. And yeah, they're really interesting. I will say that it doesn't necessarily capture all the stuff that people do at work; no eval is perfect, and there are always going to be more things to look at, largely because in order to make the tasks well enough defined that you can run them, they need to have only a handful of input files and very specific instructions. And so I think the easiest way to think about them is as quite hard take-home exam tasks that you might do in an interview process.

swyx [00:40:56]: Yeah, for listeners, it is no longer just a long prompt. It's like, well, here's a zip file with a spreadsheet or a PowerPoint deck or a PDF; go nuts and answer this question.

George [00:41:06]: OpenAI released a great data set, and they released a good paper which looks at performance across the different web chatbots on the data set. It's a great paper; I encourage people to read it. What we've done is take that data set and turn it into an eval that can be run on any model. So we created a reference agentic harness that can run the models on the data set, and then we developed an evaluator approach to compare outputs. It's AI-enabled, so it uses Gemini 3 Pro Preview to compare results, which we tested pretty comprehensively to ensure that it's aligned with human preferences.
One data point there is that even with Gemini 3 Pro as the evaluator, Gemini 3 Pro itself, interestingly, doesn't actually do that well. So that's a good example of what we've done in GDPVal AA.

swyx [00:42:01]: Yeah, the thing you have to watch out for with LLM-as-judge is self-preference, that models usually prefer their own output, and in this case that was not there. Totally.

Micah [00:42:08]: I think the places where it makes sense to use an LLM-as-judge approach now are quite different from some of the early LLM-as-judge stuff a couple of years ago, because some of that, and MT-Bench was a great example of this a while ago, was about judging conversations and a lot of style-type stuff. Here, the task the grading model is doing is quite different from the task of taking the test. When you're taking the test, you've got all of the agentic tools you're working with, the code interpreter and web search, the file system, going through many, many turns to try to create the documents. Then on the other side, when we're grading, we run the outputs through a pipeline to extract visual and text versions of the files so we can provide that to Gemini, and we provide the criteria for the task and get it to pick which of two potential outputs more effectively meets the criteria. It turns out that it's just very, very good at getting that right, and it matched human preference a lot of the time. I think that's because it's got the raw intelligence, combined with the correct representation of the outputs, the fact that the outputs were created with an agentic task quite different from the way the grading model works, and that we're comparing against criteria, not just zero-shot asking the model to pick which one is better.

swyx [00:43:26]: Got it. Why is this an Elo?
And not a percentage, like GDPVal?

George [00:43:31]: So the outputs look like documents, and there are video outputs or audio outputs from some of the tasks.

swyx: It has to make a video?

George: Yeah, for some of the tasks.

swyx [00:43:43]: What task is that?

George [00:43:45]: I mean, it's in the data set.

swyx: Like, be a YouTuber?

George: It's a marketing video.

swyx: Oh, wow. What?

Micah [00:43:49]: The model has to go find clips on the internet and try to put it together. The models are not that good at doing that one, for now, to be clear. It's pretty hard to do with a code editor; the computer use stuff doesn't work quite well enough, and so on. But yeah.

George [00:44:02]: And so there's no ground truth, necessarily, to compare against to work out percentage correct. It's hard to come up with correct or incorrect there. So it's on a relative basis, and we use an Elo approach to compare outputs from each of the models on the tasks.

swyx [00:44:23]: You know what you should do? You should pay a contractor, a human, to do the same task, and then give it an Elo, so you have a human baseline in there. I think what's helpful about GDPVal, the OpenAI one, is that 50% is meant to be a normal human, and maybe a domain expert is higher than that, but 50% was the bar for, well, if you've crossed 50, you are superhuman. Yeah.

Micah [00:44:47]: So we haven't grounded this score in that exactly. I agree that it can be helpful, but we wanted to generalize this to a very large number of models. It's one of the reasons that presenting it as an Elo is quite helpful; it allows us to add models, and it'll stay relevant for quite a long time. I also think it can be tricky comparing these exact tasks against human performance, because the way that you would go about them as a human is quite different from how the models would go about them. Yeah.

swyx [00:45:15]: I also liked that you included Llama 4 Maverick in there.
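The Elo-style relative grading George describes can be sketched as below, assuming pairwise win/loss judgments from the grading model. This is illustrative only; in the real pipeline each win/loss would come from the LLM grader comparing two outputs against the task criteria.

```python
# Minimal sketch of Elo-style ratings from pairwise preference judgments,
# in the spirit of GDPVal AA's relative grading. Illustrative only.

def update_elo(ratings, winner, loser, k=32):
    """Standard Elo update after one pairwise comparison."""
    ra, rb = ratings[winner], ratings[loser]
    expected = 1 / (1 + 10 ** ((rb - ra) / 400))  # P(winner beats loser)
    ratings[winner] = ra + k * (1 - expected)
    ratings[loser] = rb - k * (1 - expected)

ratings = {"model_a": 1000.0, "model_b": 1000.0}
# Hypothetical judgments: model_a's outputs preferred in 3 of 4 comparisons.
for winner, loser in [("model_a", "model_b")] * 3 + [("model_b", "model_a")]:
    update_elo(ratings, winner, loser)
print(ratings)  # model_a ends above model_b
```

Because ratings are relative, new models can be slotted in later without invalidating earlier scores, which is the property Micah points to when explaining the choice over a fixed percentage.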
Is that like just one last, like...

Micah [00:45:20]: Well, no, it is the best model released by Meta. And so it makes it into the homepage default set, still, for now.

George [00:45:31]: The other inclusion that's quite interesting is that we also ran it across the latest versions of the web chatbots. And so we have...

swyx [00:45:39]: Oh, that's right.

George [00:45:40]: Oh, sorry.

swyx [00:45:41]: I, yeah, I completely missed that. Okay.

George [00:45:43]: No, not at all. So that's the one with a checkered pattern.

swyx: So that is their harness, not yours, is what you're saying.

George: Exactly. And what's really interesting is that if you compare, for instance, Claude Opus 4.5 using the Claude web chatbot, it performs worse than the model in our agentic harness. In every case, the model performs better in our agentic harness than its web chatbot counterpart, the harness that they created.

swyx [00:46:13]: My backwards explanation for that would be that, well, it's meant for consumer use cases, and here you're pushing it for something else.

Micah [00:46:19]: The constraints are different, and the amount of freedom that you can give the model is different. They also have a cost goal; we let the models work as long as they want, basically.

swyx: Do you copy-paste manually into the chatbot?

Micah: Yeah. That was how we got the chatbot reference. We're not going to be keeping those updated at quite the same scale as hundreds of models.

swyx [00:46:38]: Well, talk to Browserbase; they'll automate it for you. You know, I have thought about how we should turn these chatbot versions into an API, because they are legitimately different agents in themselves. Yes. Right. Yeah.

Micah [00:46:53]: And that's grown a huge amount over the last year, right? Like, the tools.
The tools that are available have actually diverged a fair bit, in my opinion, across the major chatbot apps, and the number of data sources that you can connect them to has gone up a lot, meaning that your experience and the way you're using the model is more different than ever.

swyx [00:47:10]: What tools and what data connections come to mind? What's interesting, what's notable work that people have done?

Micah [00:47:15]: Oh, okay. So my favorite example of this is that until very recently, I would argue it was basically impossible to get an LLM to draft an email for me in any useful way, because most times that you're sending an email, you're not just writing something for the sake of writing it. Chances are the context required is a whole bunch of historical emails; maybe it's notes that you've made, maybe it's meeting notes, maybe it's pulling something from wherever you store stuff at work. For me that's Google Drive, OneDrive, our Supabase databases if we need to do some analysis on some data. Preferably the model can be plugged into all of those things and can go do some useful work based on them. The things I find most impressive currently, that I am somewhat surprised work really well in late 2025, are that I can have models use the Supabase MCP to query, read-only of course, and run a whole bunch of SQL queries to do pretty significant data analysis, and make charts and stuff, and they can read my Gmail and my Notion.

swyx: Okay. You actually use that. That's good. Is that a Claude thing?

Micah: To various degrees, both ChatGPT and Claude right now. I would say that this stuff, in fairness, barely works right now.

George [00:48:33]: Because people are actually going to try this after they hear it.
If you get an email from Micah, odds are it wasn't written by a chatbot.

Micah [00:48:38]: So, yeah, I think it is true that I have never actually sent anyone an email drafted by a chatbot. Yet.

swyx [00:48:46]: And so you can feel it coming, right? This time next year, we'll come back and see where it's gone. Totally. Supabase: shout out to another famous Kiwi. I don't know if you've had any conversations with him about anything in particular on AI building and AI infra.

George [00:49:03]: We have had Twitter DMs with him, because we're quite big Supabase users and power users, and we probably do some things more manually than we should in Supabase. He's been super friendly in the support line. One extra point regarding GDPVal AA: on the basis of the overperformance of the models compared to the chatbots, we realized that our reference harness that we built actually works quite well on generalist agentic tasks; this proves it, in a sense. And the agent harness is very minimalist. I think it follows some of the ideas that are in Claude Code, and all that we give it is context management capabilities, a web search and web browsing tool, and a code execution environment. Anything else?

Micah [00:50:02]: I mean, we can equip it with more tools, but by default, yeah, that's it. For GDPVal we give it a tool to view an image specifically, because the models can just use a terminal to pull stuff in text form into context, but to pull visual stuff into context we had to give them a custom tool. But yeah, exactly. You can explain it better.

George [00:50:21]: So it turned out that we had created a good generalist agentic harness, and so we released it on GitHub yesterday. It's called Stirrup.
So if people want to check it out, it's a great base for building a generalist agent for more specific tasks.

Micah [00:50:39]: I'd say the best way to use it is git clone, and then have your favorite coding agent make changes to it to do whatever you want, because it's not that many lines of code and the coding agents can work with it super well.

swyx [00:50:51]: Well, that's nice for the community to explore and share and hack on. In maybe other similar environments, the Terminal-Bench guys have done sort of the Harbor thing, and so it's a bundle of: well, we need our minimal harness, which for them is Terminus, and we also need the RL environments, or Docker deployment thing, to run independently. I don't know if you've looked at Harbor at all. Is that a standard that people want to adopt?

George [00:51:19]: Yeah, we've looked at it from an evals perspective, and we love Terminal-Bench and host Terminal-Bench benchmarks on Artificial Analysis. We've looked at it from a coding agent perspective, but could see it being a great basis for any kind of agent. I think where we're getting to is that these models have gotten smart enough, and they've gotten better at tools, such that they can perform better when just given a minimalist set of tools and left to run: let the model control the agentic workflow, rather than using another framework that's a bit more built out and tries to dictate the flow. Awesome.

swyx [00:51:56]: Let's cover the Openness Index, and then let's go into the report stuff. So that's the last of the proprietary numbers, I guess. I don't know how you classify all these. Yeah.

Micah [00:52:07]: Or call it the last of the three new things that we're talking about from the last few weeks.
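In the spirit of the minimalist, model-driven harness George describes (context management, web tools, code execution, with the model controlling the flow), here is a toy agent loop. It is NOT Stirrup's actual code: `call_llm` is a stand-in for any chat-completions API, and the `TOOL`/`FINAL` text protocol is invented for illustration.

```python
# Toy sketch of a minimalist, model-driven agent loop. Not Stirrup's code;
# `call_llm` and the TOOL/FINAL protocol are invented stand-ins.
import subprocess

def run_shell(cmd: str) -> str:
    """Code-execution tool: run a shell command, return combined output."""
    out = subprocess.run(cmd, shell=True, capture_output=True, text=True, timeout=60)
    return (out.stdout + out.stderr)[-4000:]  # crude context management

TOOLS = {"shell": run_shell}

def agent_loop(task: str, call_llm, max_turns: int = 20):
    """Let the model drive: each turn it either calls a tool or finishes."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_turns):
        reply = call_llm(messages)
        messages.append({"role": "assistant", "content": reply})
        if reply.startswith("FINAL:"):       # model declares completion
            return reply[len("FINAL:"):].strip()
        if reply.startswith("TOOL shell:"):  # model requests a tool call
            result = TOOLS["shell"](reply[len("TOOL shell:"):].strip())
            messages.append({"role": "user", "content": "TOOL RESULT:\n" + result})
    return None
```

The design choice matches what George describes: the loop imposes no workflow of its own, so all planning lives in the model, and extending it mostly means registering more entries in `TOOLS`.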
Because, I mean, we do a mix of stuff: places where we're using open source, places where we open-source what we do, and proprietary stuff that we don't always open-source. The long context reasoning data set last year, we did open-source. And of all the work on performance benchmarks across the site, some of it we're looking to open-source, but some of it we're constantly iterating on. So there's a huge mix of stuff that is open source and not, across the site. So that's LCR, for people. Yeah, yeah.

swyx [00:52:41]: But let's talk about openness.

Micah [00:52:42]: Let's talk about the Openness Index. This here is, call it, a new way to think about how open models are. For a long time, we have tracked whether models are open weights and what the licenses on them are. That's pretty useful; it tells you what you're allowed to do with the weights of a model. But there is this whole other dimension to how open models are that is pretty important, which we haven't tracked until now, and that's how much is disclosed about how the model was made. So: transparency about data, pre-training data and post-training data, and whether you're allowed to use that data, and transparency about methodology and training code. Basically, those are the components. We bring them together to score an Openness Index for models, so that you can get in one place the full picture of how open models are.

swyx [00:53:32]: I feel like I've seen a couple of other people try to do this, but they're not maintained. I do think this matters. I don't know what the numbers mean, though; is there a max number? Is this out of 20?

George [00:53:44]: It's out of 18 currently. We've got an Openness Index page, but essentially these are points: you get points for being more open across these different categories, and the maximum you can achieve is 18.
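The points-based scoring George describes (a maximum of 18 across openness categories) can be sketched like this. The category names and per-category point caps below are assumptions for illustration; the conversation doesn't enumerate the exact rubric.

```python
# Illustrative sketch of a points-based openness score with an 18-point
# maximum. Category names and caps are hypothetical, not the actual rubric.

CATEGORIES = {               # category -> max points (hypothetical split)
    "weights_license": 4,
    "pretraining_data": 4,
    "posttraining_data": 4,
    "training_code": 3,
    "methodology": 3,
}                            # caps sum to 18

def openness_index(scores):
    """scores: category -> points earned, capped at each category's max."""
    return sum(min(scores.get(cat, 0), cap) for cat, cap in CATEGORIES.items())

fully_open = {cat: cap for cat, cap in CATEGORIES.items()}
print(openness_index(fully_open))          # 18
print(openness_index({"weights_license": 4}))  # 4: weights alone score low
```

The structure reflects the idea in the conversation: an open-weights release with no data or methodology disclosure earns only a fraction of the maximum.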
So AI2, with their extremely open OLMo 3 32B Think model, is the leader in a sense.

swyx [00:54:04]: What about Hugging Face?

George [00:54:05]: Oh, with their smaller model? It's coming soon. I think we need to get the intelligence benchmarks right to get it on the site.

swyx [00:54:12]: You can't have an openness index and not include Hugging Face. We love Hugging Face. We'll have that up very soon. I mean, RefinedWeb and all that stuff is amazing. Or is it called FineWeb? FineWeb.

Micah [00:54:23]: Yeah, totally. One of the reasons this is cool is that if you're trying to understand the holistic picture of the models, and what you can do with all the stuff the company is contributing, this gives you that picture. And so we are going to keep it up to date alongside all the models that we do the intelligence index on, on the site. It's just an extra view to understand.

swyx [00:54:43]: Can you scroll down to the trade-offs chart? Yeah, that one. This really matters, right? Obviously, because you can b
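The openness index George describes — component points for weights, licensing, data transparency, and methodology that sum to a maximum of 18 — can be sketched as a small scoring function. The category names and per-category point caps below are illustrative assumptions for this sketch, not Artificial Analysis's published rubric.

```python
# Illustrative sketch of an "openness index" style score.
# Category names and point caps are assumptions for demonstration;
# the real Artificial Analysis rubric may differ. Caps sum to 18,
# matching the maximum mentioned in the episode.
MAX_POINTS = {
    "weights_released": 4,
    "license_permissiveness": 4,
    "pretraining_data_transparency": 4,
    "posttraining_data_transparency": 3,
    "methodology_and_code": 3,
}

def openness_index(scores: dict) -> int:
    """Sum component points, clamping each category to its maximum."""
    total = 0
    for category, cap in MAX_POINTS.items():
        total += min(scores.get(category, 0), cap)
    return total
```

A fully disclosed model (maximum in every category) scores 18; a weights-only release with a restrictive license would land much lower.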

Pharmacy Podcast Network
Cleansing and Nourishing Rituals with Ayurvedic Oils with Osi Mizrahi | The Holistic Pharmacy Podcast

Pharmacy Podcast Network

Play Episode Listen Later Jan 6, 2026 56:14


I was truly inspired by today's guest and her courage to follow her inner guidance, which served her not only on her personal journey but also led her to found a beautiful company with a sacred mission. Osi Mizrahi is the founder and visionary behind OSI Oils—a wellness brand rooted in slow beauty, Ayurvedic ritual, and the art of coming home to oneself. Born on a farm in Israel, Osi developed her connection to the natural world early, and it shaped her journey as a medical intuitive, Ayurvedic artisan, and guide in the healing arts. With over 30 years of experience in yoga, Ayurveda, Reiki, Kundalini, and Theta Healing, Osi blends ancient wisdom with feminine embodiment. Her work is inspired by Kabbalah, breathwork, and intuitive practices that support deep transformation. Osi believes in slow beauty and slow aging—practices that honor the body's rhythms and nourish vitality over time. Oils, for her, are more than skincare—they are vessels of self-love, sensuality, and spiritual healing. Each OSI Oils formula is handcrafted in small batches using traditional Ayurvedic methods, turning everyday routines into sacred rituals. Whether soothing the nervous system with her belly oil or igniting life-force energy with Radiance Hair Oil, her products are designed to restore from the inside out. Osi's upcoming book shares her own journey through heartbreak, healing, and the power of ancient rituals. Through her storytelling, mentorship, and botanical creations, she helps women around the world reconnect to their spark—and live with more pleasure, purpose, and presence. Connect with Osi via: Email: osi@osioils.com Website: Osi Oils IG: @osioils YT: @OsiMizrahi LinkedIn: Osi Mizrahi Use My Special Promo Code RAWFORK20 for 20% off orders on osioils.com Visit https://marinabuksov.com for more holistic content. Music from https://www.purple-planet.com. Disclaimer: Statements herein have not been evaluated by the Food and Drug Administration.
Products listed are not intended to diagnose, treat, cure, or prevent any diseases.

InfosecTrain
WAF: The Layer 7 Shield Your Web Apps Need in 2026

InfosecTrain

Play Episode Listen Later Dec 28, 2025 3:17


In the high-speed world of web traffic, traditional firewalls are often blind to the most dangerous threats. While a standard firewall guards the "gates" of your network, a Web Application Firewall (WAF) is the specialized bodyguard for your applications, operating at Layer 7 of the OSI model. As we move into 2026, WAFs have evolved from simple rule-based filters into AI-driven defense systems capable of stopping sophisticated injection attacks, malicious bots, and zero-day exploits in real-time. In this episode, we deconstruct the "anatomy of an inspection." We'll follow an HTTP request from the moment it hits the internet to the millisecond it's analyzed, challenged, or blocked. Whether you're defending against the OWASP Top 10 or managing a global cloud-native architecture, this is your guide to understanding the intelligent gatekeeper of the modern web.
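As a rough illustration of the Layer 7 inspection described above, here is a minimal, signature-based request check in Python. The regex rules are deliberately naive stand-ins; production WAFs rely on large, maintained rule sets (such as the OWASP ModSecurity Core Rule Set) plus anomaly scoring, bot management, and ML-driven detection, so treat this as a sketch of the inspection flow, not a usable defense.

```python
import re

# Toy signatures for a few OWASP Top 10 attack patterns.
# Real WAFs use large, maintained rule sets, not three regexes.
RULES = {
    "sqli": re.compile(r"(?i)(\bunion\b.+\bselect\b|'\s*or\s+1\s*=\s*1)"),
    "xss": re.compile(r"(?i)(<script\b|javascript:)"),
    "path_traversal": re.compile(r"\.\./"),
}

def inspect_request(method, path, query, body=""):
    """Inspect one HTTP request at Layer 7; return ('block', rule) or ('allow', None)."""
    payload = " ".join([method, path, query, body])
    for name, pattern in RULES.items():
        if pattern.search(payload):
            return ("block", name)
    return ("allow", None)

# A classic injection attempt is blocked; normal traffic passes.
print(inspect_request("GET", "/search", "q=' OR 1=1 --"))  # -> ('block', 'sqli')
print(inspect_request("GET", "/index.html", "q=hello"))    # -> ('allow', None)
```

The key point the episode makes is exactly what this sketch lacks: modern WAFs don't just pattern-match, they score anomalies across the whole request and adapt to zero-day payloads.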

Ones Ready
***Sneak Peek***MBRS 73: The Air Force Will Absolutely Throw You Under the Bus

Ones Ready

Play Episode Listen Later Dec 23, 2025 43:06


Send us a textPeaches flies solo and unfiltered, taking you on a no-holds-barred ride through shady OSI tactics, the SIG M18 controversy, and why the Air Force might just toss a junior enlisted under the bus to protect billion-dollar contracts. He drags lazy PT culture through the mud, skewers the “extra 800 meters will kill us all” crowd, and asks the real question—are new policies actually helping prevent suicides, or is it just more PowerPoint theater? From dark humor to brutal honesty, this is Peaches in full “crusty retired PJ” mode—raw, opinionated, and asking you for answers.

RawFork Podcast
S08E22 - Cleansing and Nourishing Rituals with Ayurvedic Oils with Osi Mizrahi

RawFork Podcast

Play Episode Listen Later Dec 19, 2025 55:52


I was truly inspired by today's guest and her courage to follow her inner guidance, which served her not only on her personal journey but also led her to found a beautiful company with a sacred mission. Osi Mizrahi is the founder and visionary behind OSI Oils—a wellness brand rooted in slow beauty, Ayurvedic ritual, and the art of coming home to oneself. Born on a farm in Israel, Osi developed her connection to the natural world early, and it shaped her journey as a medical intuitive, Ayurvedic artisan, and guide in the healing arts. With over 30 years of experience in yoga, Ayurveda, Reiki, Kundalini, and Theta Healing, Osi blends ancient wisdom with feminine embodiment. Her work is inspired by Kabbalah, breathwork, and intuitive practices that support deep transformation. Osi believes in slow beauty and slow aging—practices that honor the body's rhythms and nourish vitality over time. Oils, for her, are more than skincare—they are vessels of self-love, sensuality, and spiritual healing. Each OSI Oils formula is handcrafted in small batches using traditional Ayurvedic methods, turning everyday routines into sacred rituals. Whether soothing the nervous system with her belly oil or igniting life-force energy with Radiance Hair Oil, her products are designed to restore from the inside out. Osi's upcoming book shares her own journey through heartbreak, healing, and the power of ancient rituals. Through her storytelling, mentorship, and botanical creations, she helps women around the world reconnect to their spark—and live with more pleasure, purpose, and presence. Connect with Osi via: Email: osi@osioils.com Website: Osi Oils IG: @osioils YT: @OsiMizrahi LinkedIn: Osi Mizrahi Use My Special Promo Code RAWFORK20 for 20% off orders on osioils.com Visit https://marinabuksov.com for more holistic content. Music from https://www.purple-planet.com.

Sustain
Episode 277: Rynn Mancuso, Maryblessing Okolie & Mo McElaney on Ethicalsource.dev

Sustain

Play Episode Listen Later Dec 19, 2025 38:57


Guests Rynn Mancuso | Maryblessing Okolie | Mo McElaney Panelist Richard Littauer | Eriol Fox Show Notes In this episode of Sustain, Richard and Eriol talk with members of the Organization for Ethical Source (OES), Rynn Mancuso, Maryblessing Okolie, and Mo McElaney, about how ethics, licensing, and codes of conduct intersect in open source. They unpack the origins and challenges of the Hippocratic License, the community-driven overhaul of Contributor Covenant 3.0, what it really takes to collaborate across borders and cultures, and how OES is now turning its attention to ethical AI, translations, and practical resources that help communities become safer and more inclusive. They also suggest ways for listeners to get involved in these important initiatives. Hit download now! [00:02:17] Rynn gives the elevator pitch on what the Organization for Ethical Source (OES) is. [00:04:57] Mo explains that the Hippocratic License is modeled on "do no harm" and is an open source license. [00:06:06] Richard wonders whether the Hippocratic License is open source, since it doesn't follow OSI's definition. Mo explains that OES still uses "open source" in a broader, "big tent" sense focused on work done in the open, and Rynn adds why definitions need to evolve. [00:09:27] Rynn shares the story of rewriting the Contributor Covenant 3.0: their background, its originally limited scope, and feedback from translators that the language was too American/Western and 3.0 needed a broader cultural fit. [00:15:12] Maryblessing was brought in to lead v3.0 from an African, non-US perspective and to make the process community-driven. She tells us what's new in the Contributor Covenant 3.0. [00:19:43] The discussion covers how they all worked together: a highly collaborative, consensus-driven process where anyone could propose edits. They talk about how long it took, why the work wasn't entirely on GitHub, and why not everything was public.
[00:24:59] We hear about adoption challenges for codes of conduct at small projects and enterprises. [00:28:53] Rynn, Mo, and Maryblessing touch on how they are approaching ethical AI work, share options to support OES and ways to get involved, and discuss translation needs. Quotes [00:12:32] "It was a very limited scope, and we always designed it to work on the internet and be for open source projects." [00:13:23] "I would get these problems that really had to do with caste, but nobody would say anything about caste." [00:16:37] "This new version also emphasizes restorative justice, and we're keen on using inclusive languages." [00:17:06] "We're making progress on bringing in African translation." [00:17:38] "One of the things we did with the new website was to include the CC3 builder, which was going to help make it easy for people to adapt the code of conduct." [00:21:37] "Every bit of feedback we got, we took it seriously, we talked about it." [00:22:13] "It took us a year and six months to do the entire thing, to make sure people were available. It took that long because we wanted to make sure we were incorporating every feedback." [00:23:14] "We do not do everything in the open on GitHub. One reason is structural. GitHub is not great at document management. Another reason we do that is we've received a lot of harassment from groups on the internet that were frankly invested in being able to cause trouble for a lot of people." [00:29:14] "We're in the early stages of considering how we could approach ethical AI." Spotlight [00:33:12] Mo's spotlight is for more folks to get involved with this project and other projects through the OES. [00:33:34] Rynn's spotlight is a shoutout to the folks at IBM and Red Hat and Dev/Mission and JVS, where they volunteer.
[00:35:25] Maryblessing's spotlight is all the amazing people that helped put together the Contributor Covenant v3.0: Greg Cassel, Coraline Ada Ehmke, Gerardo Lisboa, Rynn Mancuso, Mo McElaney, Maryblessing Okolie, Ben Sternthal, and Casey Watts. [00:36:11] Eriol's spotlight is the OpenSSF Working Group on Securing Software Repositories. [00:36:44] Richard's spotlight is a fun paper called Paradoxes of Openness: Trans Experiences in Open Source Software by Hana Frluckaj, Nikki Stevens, James Howison, and Laura Dabbish. Links SustainOSS (https://sustainoss.org/) podcast@sustainoss.org (mailto:podcast@sustainoss.org) richard@sustainoss.org (mailto:richard@sustainoss.org) SustainOSS Discourse (https://discourse.sustainoss.org/) SustainOSS Mastodon (https://mastodon.social/tags/sustainoss) SustainOSS Bluesky (https://bsky.app/profile/sustainoss.bsky.social) SustainOSS LinkedIn (https://www.linkedin.com/company/sustainoss/) Open Collective-SustainOSS (Contribute) (https://opencollective.com/sustainoss) Richard Littauer Socials (https://www.burntfen.com/2023-05-30/socials) Eriol Fox X (https://x.com/EriolDoesDesign) Rynn Mancuso LinkedIn (https://www.linkedin.com/in/rynnmancuso/) Maryblessing Okolie LinkedIn (https://www.linkedin.com/in/maryblessingokolie/?originalSubdomain=ng) Mo McElaney LinkedIn (https://www.linkedin.com/in/maureenmcelaney/) Organization For Ethical Source (OES) (https://ethicalsource.dev/) OES- What We Do (https://ethicalsource.dev/what-we-do/) OES-What We Believe (https://ethicalsource.dev/what-we-believe/) Donate-The Organization for Ethical Source (Open Collective) (https://opencollective.com/ethical-source) Contributor Covenant (https://www.contributor-covenant.org/) Contributor Covenant 3.0 Code of Conduct (https://www.contributor-covenant.org/version/3/0/code_of_conduct/) Code of conduct enforcement guidelines (MDN Web Docs) (https://developer.mozilla.org/en-US/docs/MDN/Community/Community_Participation_Guidelines) Coraline Ada Ehmke
(https://en.wikipedia.org/wiki/Coraline_Ada_Ehmke) Ethical Source- Beacon (https://github.com/EthicalSource/beacon) Adopt Contributor Covenant (https://www.contributor-covenant.org/adopt/) Resources for Community Moderators (https://www.contributor-covenant.org/resources/) Dev/Mission (https://devmission.org/) JVS (Jewish Vocational Services) (https://jvs.org/) Techtonica (https://techtonica.org/) OpenSSF Working Group on Securing Software Repositories (https://github.com/ossf/wg-securing-software-repos) Paradoxes of Openness: Trans Experiences in Open Source Software (ACM Digital Library) (https://dl.acm.org/doi/abs/10.1145/3687047) Credits Produced by Richard Littauer (https://www.burntfen.com/) Edited by Paul M. Bahr at Peachtree Sound (https://www.peachtreesound.com/) Show notes by DeAnn Bahr Peachtree Sound (https://www.peachtreesound.com/) Special Guests: Maryblessing Okolie, Maureen Mcelaney, and Rynn Mancuso.

Radio Sevilla
Antonio García Moreno 'Osi': "It's not the story of Osi, it's the story of what happened in Spain"

Radio Sevilla

Play Episode Listen Later Dec 15, 2025 0:31


Antonio García Moreno 'Osi': "It's not the story of Osi, it's the story of what happened in Spain"

The Analytics Engineering Podcast
Inside Snowflake's AI roadmap (w/ Chris Child)

The Analytics Engineering Podcast

Play Episode Listen Later Dec 14, 2025 57:27


Snowflake VP of Product Management Chris Child joins Tristan Handy to unpack Snowflake's AI roadmap and what it means for data teams. They discuss the evolution from Snowpark to Cortex and Snowflake Intelligence, how to govern agents with row- and column-level controls, and why Snowflake is investing in Apache Iceberg and the Open Semantic Interchange initiative (dbt Labs recently open sourced MetricsFlow, the technology that powers the dbt Semantic Layer, to align with the goals of OSI). Chris also shares a vision for the next five years of data engineering: fewer bespoke pipelines, more standardization and semantics, and a bigger focus on business context and data products. For full show notes and to read 6+ years of back issues of the podcast's companion newsletter, head to https://roundup.getdbt.com. The Analytics Engineering Podcast is sponsored by dbt Labs.

Sustain
Episode 276: Dawn Wages and Loren Crary on funding the PSF

Sustain

Play Episode Listen Later Dec 12, 2025 44:16


Guests Dawn Wages | Loren Crary Panelist Richard Littauer Show Notes In this episode of Sustain, Richard Littauer talks with Dawn Wages, former Chair of the Python Software Foundation board, and Loren Crary, Deputy Executive Director of the PSF, about how the PSF sustains Python and its community, governance, fundraising, and events like PyCon US, and why they ultimately turned down a $1.5M NSF grant rather than accept new anti-DEI conditions. They walk through what the grant was for, how the decision unfolded, the financial and ethical risks involved, and the overwhelming community response in donations and support, ending with a call to participate in the PSF fundraiser and submit talks to PyCon US 2026. Press download now to hear more! [00:02:41] Dawn explains she just finished her term as Chair of the PSF Board, previously served as Treasurer, and that board seats are elected volunteer roles with three-year terms. [00:03:40] Loren describes her job as Deputy Executive Director, #2 to ED Deb Nicholson. She leads fundraising and revenue strategy, handles internal operations and strategic planning, and clarifies that the Python Steering Council steers the language itself. She mentions PyCon US will be in Long Beach, CA, in May 2026. [00:05:38] Dawn shares a personal story of how PSF funding and a local Python user group helped her start in Python a decade ago, and encourages listeners to donate and use company matching. [00:06:57] Loren speaks about sponsors and individual donors and plugs the fundraiser and the "cute snake thermometer" on the donate page. [00:08:00] Richard, as a board member of Python New Zealand, underscores PSF's support for Python user groups and conferences. He then pivots to ask about strategy, where Loren describes how the board leads strategy.
[00:13:34] Dawn reflects on learning to chair the board for the first time, praising staff expertise, and describes the "flywheel" model where staff and board collaborate closely, with staff often joining board meetings to co-develop strategy. [00:15:18] Loren highlights the PSF board and representation. [00:16:59] Richard gives a special shout-out to Phyllis Dobbs as one of the "unsung heroes" of open source, noting her work with OSI and Deb in the past. [00:17:26] The conversation turns to the NSF Safe-OSE program and what happened with the large grant the PSF was awarded and then declined. Loren details everything that happened and gives a shout-out to Seth Larson, with whom she collaborated. [00:29:00] Loren reads the key clause that the PSF would need to affirm; the board ultimately made the call that it was too risky to their mission to accept the terms. [00:31:42] Dawn explains the board's decision to withdraw, and Loren notes that no one on the board or staff ever floated "dropping DEI to take the money." [00:33:55] Dawn points to Python's reputation as a welcoming, diverse community; DEI is portrayed as "lifeblood," not an optional extra. [00:35:03] What happened after they said they weren't taking the money? Dawn and Loren recount an outpouring of support after the public statement, and we find out how much money the fundraiser has raised so far, including an anonymous donation. [00:38:33] Dawn zooms out to decades of conversations about funding open source, arguing that individual donors and major AI companies profiting from Python should be contributing at scale. [00:41:20] Richard reinforces the ongoing donation drive, and Loren plugs the PyCon US Call for Proposals (open through December 19) with new AI and security tracks and invites listeners to submit.
Quotes [00:07:09] "If you want to know what a nonprofit does, look at who their funders are and that's who they're working for." [00:12:07] "The board sets a strategy, but there needs to be a 'flywheel' from the staff to keep things like that going." [00:18:45] "We dipped our toes into grant funding, and we thought that would be a great way to make our work more sustainable." [00:32:40] "The $1.5 million is not worth putting the future health and safety of the language and the organization in jeopardy." [00:32:58] "I am proud that at no point did anyone float: What if we just stopped doing everything DEI and take the money?" [00:38:09] "I like my boss to be the users." [00:38:41] "We've been talking about what it means to fund open source for decades…I think this is an interesting arc that we're experiencing. I'm hoping that the numbers will have two or three commas from individual donations." Spotlight [00:42:15] Richard's spotlight is Phyllis Dobbs. [00:42:26] Dawn's spotlight is PyScript. [00:42:42] Loren's spotlight is The Carpentries.
Links SustainOSS (https://sustainoss.org/) podcast@sustainoss.org (mailto:podcast@sustainoss.org) richard@sustainoss.org (mailto:richard@sustainoss.org) SustainOSS Discourse (https://discourse.sustainoss.org/) SustainOSS Mastodon (https://mastodon.social/tags/sustainoss) SustainOSS Bluesky (https://bsky.app/profile/sustainoss.bsky.social) SustainOSS LinkedIn (https://www.linkedin.com/company/sustainoss/) Open Collective-SustainOSS (Contribute) (https://opencollective.com/sustainoss) Richard Littauer Socials (https://www.burntfen.com/2023-05-30/socials) Dawn Wages Website (https://dawnwages.info/) Loren Crary LinkedIn (https://www.linkedin.com/in/loren-crary/) Python Software Foundation (http://www.python.org/psf/) PSF Donate (https://donate.python.org/) PyCon US 2026, Long Beach, CA (https://us.pycon.org/2026/) The Philadelphia Python Users Group (PhillyPUG) (https://www.meetup.com/phillypug/) Safety, Security, and Privacy of Open Source Ecosystems (Safe-OSE) (https://www.nsf.gov/funding/opportunities/safe-ose-safety-security-privacy-open-source-ecosystems) PSF Welcomes New Security Developer in Residence with Support from Alpha-Omega (https://openssf.org/blog/2023/06/22/psf-welcomes-new-security-developer-in-residence-with-support-from-alpha-omega/) Seth Michael Larson-GitHub (https://github.com/sethmlarson) Seth Larson Blog post: I am the first PSF Security Developer-in-Residence (https://sethmlarson.dev/security-developer-in-residence) Python Software Foundation turns down $1.5 million NSF grant because of the anti-DEI strings attached (The Verge) (https://www.theverge.com/news/808268/python-software-foundation-turns-down-1-5-million-nsf-grant-because-of-the-anti-dei-strings-attached) The PSF has withdrawn a $1.5 million proposal to US government grant program (PSF Blog post) (https://pyfound.blogspot.com/2025/10/NSF-funding-statement.html) PSF Board Meeting Minutes Archive (Python) (https://www.python.org/psf/records/board/minutes/) Phyllis Dobbs 
(https://www.linkedin.com/in/phyllisadobbs/) PyScript (https://pyscript.net/) The Carpentries (https://carpentries.org/) Credits Produced by Richard Littauer (https://www.burntfen.com/) Edited by Paul M. Bahr at Peachtree Sound (https://www.peachtreesound.com/) Show notes by DeAnn Bahr Peachtree Sound (https://www.peachtreesound.com/) Special Guests: Dawn Wages and Loren Crary.

Resposta Pronta
General Strike. "An anarchic movement infiltrated the demonstration"

Resposta Pronta

Play Episode Listen Later Dec 12, 2025 11:22


Hugo Costeira, former president of the OSI, believes the disturbances were provoked by an anarchic, informal movement. He also hopes the justice system will not handle this case in a "complicit" manner. See omnystudio.com/listener for privacy information.

TD Ameritrade Network
ORCL "Story of Two Truths:" Weighing Cloud & Customers to Growing Debt

TD Ameritrade Network

Play Episode Listen Later Dec 9, 2025 7:46


Steven Dickens maintains his long-term bullish view on Oracle (ORCL). While he notes the company's ballooning debt, he sees Oracle's business growth staying intact through core OSI and cloud backlog. Customers from Alphabet (GOOGL), Amazon (AMZN), and OpenAI add to Steven's belief that Oracle has plenty of room to grow. Tom White offers an example options trade for the stock. ======== Schwab Network ========Empowering every investor and trader, every market day.Options involve risks and are not suitable for all investors. Before trading, read the Options Disclosure Document. http://bit.ly/2v9tH6DSubscribe to the Market Minute newsletter - https://schwabnetwork.com/subscribeDownload the iOS app - https://apps.apple.com/us/app/schwab-network/id1460719185Download the Amazon Fire Tv App - https://www.amazon.com/TD-Ameritrade-Network/dp/B07KRD76C7Watch on Sling - https://watch.sling.com/1/asset/191928615bd8d47686f94682aefaa007/watchWatch on Vizio - https://www.vizio.com/en/watchfreeplus-exploreWatch on DistroTV - https://www.distro.tv/live/schwab-network/Follow us on X – https://twitter.com/schwabnetworkFollow us on Facebook – https://www.facebook.com/schwabnetworkFollow us on LinkedIn - https://www.linkedin.com/company/schwab-network/About Schwab Network - https://schwabnetwork.com/about

Podcast literacki Big Book Cafe
In Wonderland. The Algorithm of Eccentricities. Carroll's Biographical Controversies. Prof. Dawid Osiński. Master Lecture #3

Podcast literacki Big Book Cafe

Play Episode Listen Later Dec 9, 2025 82:54


Our guide to the world emerging from "Alice in Wonderland," created by the man hiding behind the pen name Lewis Carroll, is the captivating literary historian Prof. Dawid Maria Osiński. The partner of the lecture series is the publishing house Prószyński i S-ka, which released the Polish translation of Robert Douglas-Fairhurst's book "Lewis Carroll w krainie czarów. Prawdziwa historia Alicji." Big Book Cafe is a patron of this excellent book. Over three lectures, Prof. Osiński takes us into the world of meanings and references hidden within Alice's world, a world that also leaves its mark on our present. We speak in Alice, a language of codes, and our pop culture is full of inspirations drawn from her story. The lectures are a journey through the cultural changes and influence of one of the most important books of all time. Lecture #3: The Algorithm of Eccentricities. Carroll's Biographical Controversies. The event took place on Tuesday, December 2, at 7:00 p.m. at Big Book Cafe MDM, Koszykowa 34/50.MASTER LECTURE by PROF. DAWID MARIA OSIŃSKI: the third lecture in the IN WONDERLAND series, introducing the world of Lewis Carroll and his phenomenal "Alice in Wonderland." It is inspired by the new biography of the author, written by Robert Douglas-Fairhurst and published in Polish under Big Book Cafe's patronage. This time the literary scholar speaks about the ALGORITHM OF ECCENTRICITIES. Who among us has no quirks, no attachment to certain rituals, repeated routines, and particular preferences? Starting from that sympathetic premise, we consider the tastes and choices of one of the most interesting figures of the second half of the nineteenth century: Lewis Carroll. The biography of the author of Alice's Adventures in Wonderland is still little known. When one points to Alice, one of his most important heroines and one of the great literary characters of world literature, one cannot avoid showing what controversies and fascinations surround her author, the Reverend Charles Lutwidge Dodgson: mathematician, photographer, artist, writer, collector. What did Carroll collect? Why was he obsessed with symmetry, mirror images, and precision? What terrified him, and does it in some sense terrify us today? Beneath the cloak of a bore and a provincial fuddy-duddy, was there a great original, a mad creator of language who crossed every norm? Presented together with the publishing house Prószyński i S-ka. The lectures are part of the Permanent Cultural Program and are co-funded by the City of Warsaw. This meeting opens the new MASTER LECTURES series at Big Book Cafe MDM. The previous two lectures by Prof. Osiński can be found in our podcasts on Spotify and on our YouTube channel.

Military Murder
The Guam Commissary Heist // SSgt Stacey Levay // Part 2

Military Murder

Play Episode Listen Later Dec 8, 2025 46:56


In Part Two, the investigation into the Andersen Air Force Base ambush closes in on its prime suspect: SrA Jose Simoy. As Simoy goes on the run, the FBI, OSI, and Security Police search the island of Guam for a killer hiding behind wigs, aliases, and threats. What follows is a dramatic capture, a capital court-martial, and a landmark death sentence (the first on Guam in 44 years). But a death sentence doesn't always mean death… Margot also follows the journeys of the co-conspirators, the emotional and physical aftermath for the two survivors, and the legacy of Staff Sergeant Stacey Levay, a newlywed defender whose life was taken far too soon. This is the conclusion to one of the most brazen crimes ever committed on an Air Force installation.

Telecom Reseller
Ribbon Communications Unveils Acumen for Autonomous Networking, Podcast

Telecom Reseller

Play Episode Listen Later Dec 1, 2025


Ram Ramanathan, Vice President of Product at Ribbon Communications, joined Doug Green, Publisher of Technology Reseller News, to discuss Acumen, Ribbon's new AI-powered platform designed to accelerate autonomous networking for service providers and enterprises. Ramanathan explains that rapid shifts—5G adoption, cloud-native architectures, heightened security demands, and a retiring telecom workforce—have created urgent pressure for automation. “We focus on practical, pragmatic AI that delivers real ROI—not hype,” he noted. Practical Automation Across the Service Lifecycle Acumen provides end-to-end observability and automation using real-time data and ML. It is vendor-agnostic, spans OSI layers 0–7, and includes a low-code/no-code Builder that allows Ribbon to tailor automation workflows and chatbots to each customer's environment. Real Deployments Already Underway Ribbon is working with several tier-one operators, including a major mobile provider moving from 4G to 5G across a multi-vendor network. Acumen is helping automate fault management, speed root-cause analysis, and proactively inform customer-facing teams. “It's not just fixing issues faster—it's keeping everyone, including the customer, informed,” Ramanathan said. Looking Ahead Ramanathan cautions organizations to avoid AI hype by setting realistic expectations and focusing on high-ROI outcomes first. “Break it into stages and show progress along the way,” he advised. Learn more at ribboncommunications.com.

Radio Wnet
Witkoff's leaked conversations. Bobołowicz: This is a massive act of treason

Radio Wnet

Play Episode Listen Later Nov 26, 2025 9:26


Bloomberg has published a transcript of a conversation between Steve Witkoff, the US president's special envoy, and Vladimir Putin's adviser Yuri Ushakov. It shows Witkoff coaching the Russian on how to influence Donald Trump ahead of his meeting with Volodymyr Zelensky. Media outlets also report that Putin's people discussed handing the Americans their own peace plan, which was later to be presented as a US proposal. At the same time, Donald Trump confirmed that he had instructed Witkoff to fly to Moscow for talks with Putin, counting on finalizing the "peace plan." As Paweł Bobołowicz assesses on Radio Wnet's afternoon program, the leaked materials show that the whole process of supposed peace negotiations was a Russian influence operation conducted through the American envoy. The Radio Wnet correspondent stresses that no one is questioning the authenticity of the conversations, and that their content completely changes the picture of the situation. According to Bobołowicz, the supposed "American peace plan" was a Kremlin document from the start. The recordings indicate it was drafted in Moscow, in Russian, and that Witkoff was to put it into circulation as a United States initiative. "This plan, which was created in Moscow, was designed from the beginning so that Witkoff would pass it to the United States and make it function as an American plan," Bobołowicz emphasizes. Even more serious is the fact that Witkoff instructed a Russian official on how to talk to the US president in order to achieve an outcome favorable to Russia. "This is treason, it is simply treason. It is coaching an adversary on how to talk to the president of your own country to get a certain result," the correspondent says. Bobołowicz notes that the plan also interfered in matters concerning Poland, among other things by suggesting the stationing of European fighter jets on Polish territory without consulting Warsaw. 
He considers this acting "over the heads" of allies and an example of a blow to the region's security. According to the correspondent, American services or structures cooperating with Ukrainian intelligence may have been behind the leak. "A critical point had been reached. The only way out was to reveal the conversations and stop a process that served neither Ukraine, nor the United States, nor their partners," he concludes.

Forbes Česko
Forbes BrandVoice #152 - Tradition, Quality, and Community. Two Dairy Farms Taught Their Customers to Come Back

Forbes Česko

Play Episode Listen Later Nov 23, 2025 39:02


Czech farming thrives where tradition, technology, and honesty intersect. In this Forbes BrandVoice podcast, Lucie Martínková of Martínkova farma and Václava Osička of Doubravský dvůr talk about how to build a community of customers and hold your own in a tough economy. Both farms have won the Regionální potravina (Regional Food) award and prove that quality is made right at the farm gate.

The 20/20 Podcast
The 20/20 Podcast UNSCRIPTED: Authentic Optometry Conversations - Dr. Claudine Courey

The 20/20 Podcast

Play Episode Listen Later Nov 19, 2025 32:33


In this episode of The 20/20 Podcast, Dr. Harbir Sian reconnects with returning guest and dry-eye expert Dr. Claudine Courey, recorded live at the OSI Summit at White Oaks Resort in Niagara-on-the-Lake. The conversation is completely unscripted — a candid mix of clinical pearls, entrepreneurial insight, and authentic reflections on optometry, business, and life.

Dr. Courey shares the latest from Eye Drop Shop, including its partnership with OSI and the launch of Rinsada, a new in-office ocular-surface rinse now available in Canada. They discuss how Eye Drop Shop empowers optometrists to retail dry-eye and clean-beauty products online without carrying inventory, creating new revenue streams and patient touch points.

The conversation flows into business mindset, patient education, and the difference between selling and helping. Harbir and Claudine also swap perspectives on personal growth, risk-taking, and what it means to build an authentic optometry brand. The episode closes on themes of humility, gratitude, and balance — with Harbir reflecting on the podcast's 200-episode journey and Claudine reminding us that everything — good or bad — is temporary.

Key Topics
Partnership between Eye Drop Shop and OSI Group
Launch of Rinsada, a new in-office saline flush treatment for allergy and debris removal
Empowering ODs through e-commerce and passive-income tools (like Otto)
The importance of patient touch points and staying top-of-mind online
Shifting from “selling” to presenting solutions
Harbir's behind-the-scenes story of how The 20/20 Podcast began
Handling tough industry conversations and asking hard questions
Mindset: accountability, resilience, and self-leadership in optometry
Work-life balance, gratitude, and the role of support systems

Featured Guests
Dr. Claudine Courey, Optometrist & Founder of Eye Drop Shop (Montreal, QC)
Dr. Harbir Sian, Optometrist, Speaker, Host of The 20/20 Podcast

Resources Mentioned
Eye Drop Shop — dry-eye & clean-beauty products for clinics and patients
Rinsada — new in-office ocular-surface rinse treatment
OSI Group — Optometric Services Inc. network
Otto Optics — integrated e-commerce solution for ODs

Quotable Moments
“We're not selling — we're giving patients solutions to their problems.” — Dr. Claudine Courey
“If it's all my fault, it's also all up to me to fix it.” — Dr. Harbir Sian
“Everything is temporary — whether it's good or bad.” — Dr. Claudine Courey

Love the show? Subscribe, rate, review & share! http://www.aboutmyeyes.com/podcast/

The Cybersecurity Readiness Podcast Series
Episode 95 -- Defending Digital Trust – Battling the Deepfake Surge with AI-Powered Detection

The Cybersecurity Readiness Podcast Series

Play Episode Listen Later Nov 19, 2025 43:56


In this episode, Dave Chatterjee, Ph.D. sits down with Sandy Kronenberg, Founder and CEO of Netarx, an AI-driven platform designed to detect and prevent synthetic impersonation across video, voice, and email. With deepfake fraud incidents skyrocketing by 3,000 percent and costing organizations an average of $500,000 per attack, Kronenberg and Chatterjee unpack how AI can now help defeat AI—turning defense innovation into a frontline imperative.

Together, they explore the evolution of deepfake technology, the psychology of digital deception, and how organizations can safeguard their people and data from real-time manipulation. Through the Commitment–Preparedness–Discipline (CPD) framework, Dr. Chatterjee emphasizes the importance of leadership discipline, continuous monitoring, and technology integration in establishing a high-performance cybersecurity culture in the era of generative AI threats.

Time Stamps
• 00:49 — Dave introduces the topic and deepfake threat surge.
• 02:37 — Sandy shares his professional journey and early exposure to cyber fraud.
• 07:28 — Discussion on the human layer and OSI model limitations.
• 09:55 — Integrating deepfake detection within enterprise security architecture.
• 13:01 — How AI models ingest 50+ signals for real-time identity validation.
• 17:48 — Zoom and video call trust issues in remote business settings.
• 19:40 — Why siloed tools fail—importance of cross-channel correlation.
• 23:30 — Continuous learning loops: retraining AI models against new deepfake generators.
• 26:59 — The rise of Trust Officers and Trust Operations in corporate governance.
• 32:15 — HR, finance, and brand use cases for disinformation security.
• 35:18 — Balancing training and AI automation.
• 37:16 — Expanding defense to email and multimodal verification.
• 41:18 — Closing takeaways on readiness and adoption strategy.

To access and download the entire podcast summary with discussion highlights: https://www.dchatte.com/episode-95-defending-digital-trust-battling-the-deepfake-surge-with-ai-powered-detection/

Connect with Host Dr. Dave Chatterjee
LinkedIn: https://www.linkedin.com/in/dchatte/
Website: https://dchatte.com/

Books Published
The DeepFake Conspiracy
Cybersecurity Readiness: A Holistic and High-Performance Approach

Articles Published
Ramasastry, C. and Chatterjee, D. (2025). Trusona: Recruiting For The Hacker Mindset, Ivey Publishing, Oct 3, 2025.
Chatterjee, D. and Leslie, A. (2024). “Ignorance is not bliss: A human-centered whole-of-enterprise approach to cybersecurity preparedness,” Business Horizons, Accepted on Oct 29, 2024.

Oracle University Podcast
Networking & Security Essentials

Oracle University Podcast

Play Episode Listen Later Nov 11, 2025 17:25


How do all your devices connect and stay safe in the cloud? In this episode, Lois Houston and Nikita Abraham talk with OCI instructors Sergio Castro and Orlando Gentil about the basics of how networks work and the simple steps that help protect them.   You'll learn how information gets from one place to another, why tools like switches, routers, and firewalls are important, and what goes into keeping access secure.   The discussion also covers how organizations decide who can enter their systems and how they keep track of activity.   Cloud Tech Jumpstart: https://mylearn.oracle.com/ou/course/cloud-tech-jumpstart/152992 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! In the last episode, we spoke about local area networks and domain name systems. Today, we'll continue our conversation on the fundamentals of networking, covering a variety of important topics.  00:50 Lois: That's right, Niki. And before we close, we'll also touch on the basics of security. Joining us today are two OCI instructors from Oracle University: Sergio Castro and Orlando Gentil. So glad to have you both with us guys. Sergio, with so many users and devices connecting to the internet, how do we make sure everyone can get online? 
Can you break down what Network Address Translation, or NAT, does to help with this? Sergio: The world population is bigger than 4.3 billion people. That means that if we were to interconnect every single human into the internet, we will not have enough addresses. And not all of us are connected to the internet, but those of us who are, you know that we have more than one device at our disposal. We might have a computer, a laptop, mobile phones, you name it. And all of them need IP addresses. So that's why Network Address Translation exists because it translates your communication from a private IP to a public IP address. That's the main purpose: translate. 02:05 Nikita: Okay, so with NAT handling the IP translation, how do we ensure that the right data reaches the right device within a network? Or to put it differently, what directs external traffic to specific devices inside a network? Sergio: Port forwarding works in a reverse way to Network Address Translation. So, let's assume that this PC here, you want to turn it into a web server. So, people from the outside, customers from the outside of your local area network, will access your PC web server. Let's say that it's an online store. Now all of these devices are using the same public IP address. So how would the traffic be routed specifically to this PC and not to the camera or to the laptop, which is not a web server, or to your IP TV? So, this is where port forwarding comes into play. Basically, whenever it detects a request coming to port, it will route it and forward that request to your PC. It will allow anybody, any external device that wants to access this particular one, this particular web server, for the session to be established. So, it's a permission that you're allowing to this PC and only to this PC. The other devices will still be isolated from that list. That's what port forwarding is. 03:36 Lois: Sergio, let's talk about networking devices. 
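The NAT and port-forwarding behavior Sergio describes can be sketched in a few lines of Python. This is an illustrative model only; the class, addresses, and port numbers are invented for the sketch, not a real router's API.

```python
# Toy NAT device: dynamic translation for outbound sessions, plus static
# port-forwarding rules for inbound requests (e.g. the PC turned web server).

PUBLIC_IP = "203.0.113.7"  # the one public address shared by the whole LAN


class Nat:
    def __init__(self, forwards=None):
        self.forwards = forwards or {}  # public port -> (private ip, private port)
        self.table = {}                 # (private ip, private port) -> public port
        self.next_port = 40000          # next free public-side port

    def translate_outbound(self, priv_ip, priv_port):
        """Private source -> shared public address (the 'translate' step)."""
        key = (priv_ip, priv_port)
        if key not in self.table:
            self.table[key] = self.next_port
            self.next_port += 1
        return PUBLIC_IP, self.table[key]

    def route_inbound(self, public_port):
        """Decide which internal device gets traffic arriving from outside."""
        # Port forwarding: a static rule wins, so outsiders can reach the web server.
        if public_port in self.forwards:
            return self.forwards[public_port]
        # Otherwise reverse-translate a known outbound session.
        for (ip, port), pub in self.table.items():
            if pub == public_port:
                return ip, port
        return None  # no rule and no session: drop the packet
```

Static forward rules answer unsolicited inbound requests (the web-server case in the episode), while the dynamic table only reverse-translates replies to sessions a private host opened itself.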
What are some of the key ones, and what role do they play in connecting everything together? Sergio: There's plenty of devices for interconnectivity. These are devices that are different from the actual compute instances, virtual machines, cameras, and IPTV. These are for interconnecting networks. And they have several functionalities. 03:59 Nikita: Yeah, I often hear about a default gateway. Could you explain what that is and why it's essential for a network to function smoothly? Sergio: A gateway is basically where a web browser goes and asks a service from a web server. We have a gateway in the middle that will take us to that web server. So that's basically is the router. A gateway doesn't necessarily have to be a router. It depends on what device you're addressing at a particular configuration. So, a gateway is a connectivity device that connects two different networks. That's basically the functionality.  04:34 Lois: Ok. And when does one use a default gateway? Sergio: When you do not have a specific route that is targeting a specific router. You might have more than one router in your network, connecting to different other local area networks. You might have a route that will take you to local area network B. And then you might have another router that is connecting you to the internet. So, if you don't have a specific route that will take you to local area network B, then it's going to be utilizing the default gateway. It directs data packets to other networks when no specific route is known. In general terms, the default gateway, again, it doesn't have to be a router. It can be any devices. 05:22 Nikita: Could you give us a real-world example, maybe comparing a few of these devices in action, so we can see how they work together in a typical network? Sergio: For example, we have the hub. And the hub operates at the physical layer or layer 1. And then we have the switch. And the switch operates at layer 2. And we also have the router. 
And the router operates at layer 3. So, what's the big difference between these devices and the layers that they operate in? So, hubs work in the physical layer of the OSI model. And basically, it is for connecting multiple devices and making them act as a single network segment. Now, the switch operates at the data link layer and is basically a repeater, and is used for filtering content by reading the addresses of the source and destination. And these are the MAC addresses that I'm talking about. So, it reads where the packet is coming from and where is it going to at the local area network level. It connects multiple network segments. And each port is connected to a different segment. And the router is used for routing outside of your local area network, performs traffic directing functions on the internet. A data packet is typically forwarded from one router to another through different networks until it reaches its destination node. The switch connects multiple network segments. And each port of the switch is connected to a different segment. And the router performs traffic directing functions on the internet. It takes data from one router to another, and it works at the TCP/IP network layer or internet layer. 07:22 Lois: Sergio, what kind of devices help secure a network from external threats? Sergio: The network firewall is used as a security device that acts as a barrier between a trusted internal network and an untrusted external network, such as the internet. The network firewall is the first line of defense for traffic that passes in and out of your network. The firewall examines traffic to ensure that it meets the security requirements set by your organization, or allowing, or blocking traffic based on set criteria. And the main benefit is that it improves security for access management and network visibility. 08:10 Are you keen to stay ahead in today's fast-paced world? We've got your back! 
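The hub-versus-switch distinction Sergio draws (a hub repeats every frame everywhere, while a switch learns MAC addresses and forwards selectively) can be illustrated with a toy layer-2 learning switch. Port numbers and MAC addresses below are made up for the sketch.

```python
# Toy learning switch: learns which port each source MAC lives on,
# floods frames for unknown destinations (hub-like behavior), and
# forwards out a single port once the destination has been learned.

class LearningSwitch:
    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}  # MAC address -> port it was last seen on

    def handle_frame(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port          # learn the sender's port
        if dst_mac in self.mac_table:
            return {self.mac_table[dst_mac]}       # directed forwarding
        return self.ports - {in_port}              # unknown: flood the rest
```

After one exchange in each direction the switch stops flooding entirely, which is exactly the segmentation benefit over a hub.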
Each quarter, Oracle rolls out game-changing updates to its Fusion Cloud Applications. And to make sure you're always in the know, we offer New Features courses that give you an insider's look at all of the latest advancements. Don't miss out! Head over to mylearn.oracle.com to get started.  08:36 Nikita: Welcome back! Sergio, how do networks manage who can and can't enter based on certain permissions and criteria? Sergio: The access control list is like the gatekeeper into your local area network. Think about the access control list as the visa on your passport, assuming that the country is your local area network. Now, when you have a passport, you might get a visa that allows you to go into a certain country. So the access control list is a list of rules that defines which users, groups, or systems have permissions to access specific resources on your networks.  It is a gatekeeper, that is going to specify who's allowed and who's denied. If you don't have a visa to go into a specific country, then you are denied. Similar here, if you are not part of the rule, if the service that you're trying to access is not part of the rules, then you cannot get in. 09:37 Lois: That's a great analogy, Sergio. Now, let's turn our attention to one of the core elements of network security: authentication and authorization. Orlando, can you explain why authentication and authorization are such crucial aspects of a secure cloud network? Orlando: Security is one of the most critical pillars in modern IT systems. Whether you are running a small web app or managing global infrastructure, every secure system starts by answering two key questions. Who are you, and what are you allowed to do? This is the essence of authentication and authorization. Authentication is the first step in access control. It's how a system verifies that you are who you claim to be. Think of it like showing your driver's license at a security checkpoint. 
The guard checks your photo and personal details to confirm your identity. In IT systems, the same process happens using one or more of these factors. It will ask you for something you know, like a password. It will ask you for something that you have, like a security token, or it will ask you for something that you are, like a fingerprint. An identity does not refer to just a person. It's any actor, human or not, that interacts with your systems. Users are straightforward: think employees logging into a dashboard. But services and machines are equally important. A backend API may need to read data from a database, or a virtual machine may need to download updates. Treating these non-human identities with the same rigor as human ones helps prevent unauthorized access and improves visibility and security. After confirming your identity, the system can move on to deciding what you're allowed to access. That's where authorization comes in. Once authentication confirms who you are, authorization determines what you are allowed to do. Sticking with the driver's license analogy, you've shown your license and proven your identity, but that doesn't mean that you can drive anything anywhere. Your license class might let you drive a car, not a motorcycle or a truck. It might be valid in your country, but not in others. Similarly, in IT systems, authorization defines what actions you can take and on which resources. This is usually controlled by policies and roles assigned to your identity. It ensures that users or services only get access to the things they are explicitly allowed to interact with. 12:34 Nikita: How can organizations ensure secure access across their systems, especially when managing multiple users and resources? Orlando: Identity and Access Management governs who can do what in our systems. Individually, authentication verifies identity and authorization grants access.
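The roles-and-policies idea Orlando just mentioned can be sketched as a minimal role-based access check. Role names, users, and permission strings below are invented for illustration; they are not from any Oracle product.

```python
# Minimal RBAC sketch: permissions attach to roles, users get roles,
# and a check grants only what the roles explicitly allow (least privilege).

ROLE_PERMISSIONS = {
    "viewer":  {"report:read"},
    "analyst": {"report:read", "report:write"},
    "admin":   {"report:read", "report:write", "user:manage"},
}

USER_ROLES = {
    "sergio":  {"viewer"},
    "orlando": {"analyst"},
}


def is_authorized(user, permission):
    """Union the permissions of all the user's roles, then check membership."""
    granted = set()
    for role in USER_ROLES.get(user, set()):
        granted |= ROLE_PERMISSIONS.get(role, set())
    return permission in granted
```

Anything not explicitly granted through a role is denied, including requests from unknown identities; that default-deny stance is the least-privilege principle from the episode.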
However, managing these processes at scale across countless users and resources becomes a complex challenge. That's where Identity and Access Management, or IAM, comes in. IAM is an overarching framework that centralizes and orchestrates both authentication and authorization, along with other critical functions, to ensure secure and efficient access to resources.  13:23 Lois: And what are the key components and methods that make up a robust IAM system? Orlando: User management, a core component of IAM, provides a centralized Identity Management system for all user accounts and their attributes, ensuring consistency across applications. Key functions include user provisioning and deprovisioning, automating account creation for new users, and timely removal upon departure or role changes. It also covers the full user account lifecycle management, including password policies and account recovery. Lastly, user management often involves directory services integration to unify user information. Access management is about defining access permissions, specifically what actions users can perform and which resources they can access. A common approach is role-based access control, or RBAC, where permissions are assigned to roles and users inherit those permissions by being assigned to roles. For more granular control, policy-based access control allows for rules based on specific attributes. Crucially, access management enforces the principle of least privilege, granting only the minimum necessary access, and supports segregation of duties to prevent conflicts of interest. For authentication, IAM systems support various methods. Single-factor authentication, relying on just one piece of evidence like a password, offers basic security. However, multi-factor authentication significantly boosts security by requiring two or more distinct verification types, such as a password, plus a one-time code. 
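The "password plus a one-time code" factor just described is commonly implemented as TOTP (RFC 6238). Here is a minimal sketch using only Python's standard library, exercised with the RFC's shared test secret; a production system would add clock-drift tolerance and replay protection.

```python
import base64
import hashlib
import hmac
import struct


def totp(secret_b32: str, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238, HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = unix_time // step                       # 30-second time window
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# RFC 6238's shared test secret "12345678901234567890", base32-encoded
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
```

Both sides derive the same short-lived code from the shared secret and the current time, which is why the code proves "something you have" without ever sending the secret itself.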
We also have biometric authentication, using unique physical traits and token-based authentication, common for API and web services. 15:33 Lois: Orlando, when it comes to security, it's not just about who can access what, but also about keeping track of it all. How does auditing and reporting maintain compliance? Orlando: Auditing and reporting are essential for security and compliance. This involves tracking user activities, logging all access attempts and permission changes. It's vital for meeting compliance and regulatory requirements, allowing you to generate reports for audits. Auditing also aids in security incident detection by identifying unusual activities and providing data for forensic analysis after an incident. Lastly, it offers performance and usage analytics to help optimize your IAM system.  16:22 Nikita: That was an incredibly informative conversation. Thank you, Sergio and Orlando, for sharing your expertise with us. If you'd like to dive deeper into these concepts, head over to mylearn.oracle.com and search for the Cloud Tech Jumpstart course. Lois: I agree! This was such a great conversation! Don't miss next week's episode, where we'll continue exploring key security concepts to help organizations operate in a scalable, secure, and auditable way. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 16:56 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.  
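Earlier in the episode Sergio compares the access control list to a visa: rules are checked in order and anything unmatched is denied. A minimal first-match evaluator makes that behavior concrete; the rule syntax and addresses are invented for illustration and simplify real CIDR matching to string prefixes.

```python
# First-match ACL evaluation with an implicit default deny.
# Rule format: (action, ip prefix or "any", port number or "any").

def evaluate_acl(rules, src_ip, port):
    for action, ip_match, port_match in rules:
        ip_ok = ip_match == "any" or src_ip.startswith(ip_match)
        port_ok = port_match == "any" or port == port_match
        if ip_ok and port_ok:
            return action        # first matching rule wins
    return "deny"                # no visa, no entry

rules = [
    ("deny",  "10.0.0.99", "any"),  # block one misbehaving host entirely
    ("allow", "10.0.0.",   443),    # internal subnet may use HTTPS
    ("allow", "any",       80),     # anyone may reach the web port
]
```

Because evaluation stops at the first match, rule order matters: the specific host deny must precede the broader subnet allow, and everything that falls through the list is dropped.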

Podcast literacki Big Book Cafe
In Wonderland. Killing Time and the Cat's Smile, or Alice in Polish. Prof. Dawid Osiński. Master Lecture #2

Podcast literacki Big Book Cafe

Play Episode Listen Later Nov 11, 2025 96:00


A new series of MASTER LECTURES at Big Book Cafe MDM. Our guide through the world that emerges from "Alice in Wonderland", created by the man hiding behind the pen name Lewis Carroll, is the captivating literary historian Prof. Dawid Maria Osiński.

The lecture series is partnered by the publisher Prószyński i S-ka, which released the Polish translation of Robert Douglas-Fairhurst's book on the true story of Alice, "Lewis Carroll w krainie czarów. Prawdziwa historia Alicji". Big Book Cafe is a patron of this excellent book.

Over three lectures, Prof. Osiński takes us into the world of meanings and references hidden in Alice's world, a world that also leaves its mark on our present. We speak "in Alice", a language of codes, and our pop culture is full of inspirations drawn from it. The lectures are a journey through the cultural changes and influence of one of the most important books of all time.

Lecture #2: Killing time and the cat's smile, or Alice in Polish
The event took place on Tuesday, November 4, at 7:00 p.m. at Big Book Cafe MDM, Koszykowa 34/50.

It was the magic of Alice's language that revolutionized children's literature and the possibilities open to literature in the second half of the nineteenth century and in the twentieth. How far do we realize that we "speak Alice" when we look at the other side of the looking-glass, fall down the rabbit hole, go to tea with the Mad Hatter, cry a puddle or a sea of tears, or wonder what the famous phrase "to kill time" really means? Alice's Adventures in Wonderland is one of the three best-known texts in the world, so every translator has faced the extraordinary challenge of constructing Alice's world in their own language, with its puns, wordplay, and portmanteau words. The Polish translations are proof of real artistry, as shown by the expressions that have taken root in our language.

ABOUT THE LECTURER
Dawid Maria Osiński (1981), dr hab., professor at the University of Warsaw, literary historian, works in the Department of Literature and Culture of the Second Half of the Nineteenth Century at the Faculty of Polish Studies, University of Warsaw. Chair of the Warsaw District Committee of the Polish Literature and Language Olympiad (since 2016), Deputy Director of the Institute of Polish Literature at the University of Warsaw (since 2020), editor-in-chief of "Przegląd Humanistyczny" (since 2025). Author of the monographs "Aleksander Świętochowski w poszukiwaniu formy. Biografia myśli" (2011) and "Pozytywistów dziedzictwo Oświecenia. Kierunki i formy recepcji" (2018), and co-author of "Miejsca trudne – transdyscyplinarny model badań. O przestrzeni placu Piłsudskiego i placu Defilad" (2019). His research interests include the language theory of the positivists and modernists, Siberian themes in nineteenth-century literature, and nineteenth- and twentieth-century translation studies.

NEXT LECTURE IN THE MASTER SERIES
Lecture #3: In the algorithm of eccentricities. Carroll's biographical controversies
December 2, 7:00 p.m.

Who among us has no quirks, no attachment to rituals, repeated routines, or peculiar likings? Starting from that sympathetic assumption, we will reflect on the tastes and choices of one of the most interesting figures of the second half of the nineteenth century, Lewis Carroll. The biography of the author of Alice's Adventures in Wonderland is still little known. When we point to Alice, one of his most important heroines and one of the great characters of world literature, we cannot avoid showing the controversies and fascinations bound up with her author, the Reverend Charles Lutwidge Dodgson: mathematician, photographer, artist, writer, collector. What did Carroll collect, why was he obsessed with symmetry, mirror images, and precision, what terrified him, and does it in some sense terrify us today? Under the cloak of a bore and a provincial fuddy-duddy, was there a great original, a mad creator of language who crossed its norms?

The event was co-financed under the Standing Cultural Programme of the Capital City of Warsaw. Thank you!

Ones Ready
Ep 525: The Zulu Course Is a Dumpster Fire (Or Maybe Not?)

Ones Ready

Play Episode Listen Later Nov 7, 2025 67:16


Send us a text

Everyone online says the new Special Warfare “Zulu Course” is trash—so Peaches and Trent decided to light it up. This isn't a soft take or sanitized military PR moment. It's two retired operators roasting the chaos, the memes, and the ridiculous leadership gag orders that make no sense. Peaches calls out the “change fatigue” across the DOD, breaks down why the Zulu rollout will be rough, and drops truth bombs about command cluelessness, budget black holes, and the myth of the “company man.” If you can't handle sarcasm and honesty about how training actually works, go listen to something else.

⏱️ Timestamps:
00:00 – Peaches calls out “Company Man” energy
05:30 – The Zulu Course meltdown begins
08:40 – Change fatigue & leadership chaos
13:00 – Meme wars and gag orders gone stupid
19:00 – Legal orders, gag orders, and OSI overreach
25:00 – Why the first 3 Zulu classes will be total chaos
33:00 – Training breakdown: what “advanced” really means (hint: nothing)
41:00 – Subsurface swimming & pre-dive prep
52:00 – “They're still cones” – Peaches vs. the pipeline
55:00 – Peaches' spicy take on AFSOC “air commandos”
1:02:00 – If the Wing's paying, Peaches is for sale

S.O.S. (Stories of Service) - Ordinary people who do extraordinary work
Air Force OSI Agent Now Serving 30 Years | The Robert Condon Story - S.O.S. #234

S.O.S. (Stories of Service) - Ordinary people who do extraordinary work

Play Episode Listen Later Oct 31, 2025 92:34 Transcription Available


Send us a text

A decorated OSI agent who helped capture Taliban fighters and aided disaster survivors should be building a life in his forties. Instead, Robert Condon has spent 12 years behind bars, sentenced to 30, while his mother—retired Toledo police officer Holly Yeager—keeps fighting a case she believes was built on pressure, politics, and broken process. We open the file and follow the twists: a drug ring investigation that put Robert at odds with command priorities, a single accuser whose SANE exam reportedly found no injuries consistent with her extreme account, and two more “victims” cultivated through interviews that steered words toward charges and dangled immunity for unrelated misconduct.

Holly walks us through the evidence gaps that still haunt the record: a second phone noted but never collected, weeks of exculpatory messages lost when Robert's device was destroyed after chain-of-custody issues, and discovery that surfaced a concealed felony history too late to test at trial. We talk Article 32 anomalies, special victims counsel influence, and a panel of superiors deciding guilt under the shadow of congressional pressure. Non‑unanimous verdicts, repeated speedy‑trial slippage, and unsworn statements shaped a path to a 30‑year sentence far above average. On appeal, mismatched and sealed record-of-trial pages made it harder for judges to validate citations or see context, dimming the chance for dissent and relief.

Beyond the legal maze lies a family's cost: a son who lost his thirties, a 92‑year‑old grandfather running out of road trips, and a parole process that hinges on treatment requiring admissions he won't make. Holly's message is blunt and humane: protect real survivors and protect due process. Stop manufacturing narratives to save weak cases. Build independent evidence integrity, require unanimous verdicts, insulate panels from command, and hold investigators to the same standards we demand in civilian courts.

Listen, share, and weigh in with your perspective on military justice reform. If this story moved you, subscribe, leave a review, and send the episode to someone who cares about truth over optics.

Support the show

Visit my website: https://thehello.llc/THERESACARPENTER
Read my writings on my blog: https://www.theresatapestries.com/
Listen to other episodes on my podcast: https://storiesofservice.buzzsprout.com
Watch episodes of my podcast: https://www.youtube.com/c/TheresaCarpenter76

Hacker Valley Studio
Learning How to Learn: Mastering the Cyber Fundamentals with Rich Greene

Hacker Valley Studio

Play Episode Listen Later Oct 16, 2025 25:38


The real edge in cybersecurity isn't found in new tools, it's built through timeless fundamentals and a mindset that never stops learning. In this episode, Ron sits down with Rich Greene, Senior Solutions Engineer and Instructor at SANS Institute, to uncover how true cyber value starts with skills, curiosity, and mindset. Rich shares his remarkable story of surviving a battlefield injury, retraining his brain, and how that journey shaped his approach to mastering cybersecurity. Together, they connect real-world lessons like the recent Discord breach to the core truth that even advanced systems depend on people who master the basics. Impactful Moments 00:00 - Introduction 02:00 - Discord breach and third-party risk 05:00 - Meet Rich Greene from SANS 06:00 - The power of mastering fundamentals 07:00 - Learning how to learn 08:30 - Rich's story of rebuilding his memory 11:00 - Forcing the brain to grow stronger 12:00 - Top skills that get you paid 14:00 - Skills that lead to fulfillment 16:00 - Fundamentals that fuel long-term success 17:00 - The OSI model decoded 20:00 - Why operating systems matter 21:00 - Security operations fundamentals 23:00 - Why cloud is the #1 must-learn skill 25:00 - Final advice: sharpen your fundamentals   Links Connect with our Rich on LinkedIn: https://www.linkedin.com/in/secgreene/ Check out our upcoming events: https://www.hackervalley.com/livestreams Join our creative mastermind and stand out as a cybersecurity professional: https://www.patreon.com/hackervalleystudio Love Hacker Valley Studio? Pick up some swag: https://store.hackervalley.com Continue the conversation by joining our Discord: https://hackervalley.com/discord Become a sponsor of the show to amplify your brand: https://hackervalley.com/work-with-us/  

The Nat Coombs Show
Edge Rush - Week 7 Picks! Plus NFC playoff teams, Gen X vs Gen Z & more!

The Nat Coombs Show

Play Episode Listen Later Oct 15, 2025 67:48


Nat, Ben & Prop-O are coming off the back of a mixed week of Drew Locks, so feel the need for backup...enter TalkSport's Will Varney to add some professionalism to proceedings! Nat establishes Will's Gen Z credentials, before following up with a Partridge-esque tale involving him and some risque karaoke lyrics, after which they finally get down to some football chat. Were the Titans right to fire Brian Callahan so soon into the season? Who are the front runners for a head coaching gig next season? The fellas also complement last week's AFC Playoff picks with the NFC selections this week - unsurprisingly, they're not in total agreement. They turn their attention to Week 7 and make their picks including the NFL London game - live on FIVE from 14-00 with Nat, Osi and the crew - plus a whole host of selections from the slate. Prop-O drops his props, the team look for back to back acca wins, and Dutts drops by with his fantasy picks for the FanTeam DFS comps! Speaking of which.... To back any of the action in the show, sign up for our brand new partners FanTeam, hit the link: https://af.fanteam.com/click?o=1&a=99082&c=1 - use the code RUSH to unlock special offers for followers of The NC Show inc £30 of free bets with any £10 bet. 18+, please play responsibly, BeGambleAware.org Learn more about your ad choices. Visit podcastchoices.com/adchoices

The Automation Podcast
Software Toolbox: OPC Server, Router, DataHub and more (P248)

The Automation Podcast

Play Episode Listen Later Oct 8, 2025 57:48 Transcription Available


Shawn Tierney meets up with Connor Mason of Software Toolbox to learn about their company and products, as well as see a demo of those products in action, in this episode of The Automation Podcast. For any links related to this episode, check out the "Show Notes" located below the video. Watch The Automation Podcast from The Automation Blog: Listen to The Automation Podcast from The Automation Blog: The Automation Podcast, Episode 248 Show Notes: Special thanks to Software Toolbox for sponsoring this episode so we could release it "ad free!" To learn about Software Toolbox please check out the below links: TOP Server Cogent DataHub Industries Case studies Technical blogs Read the transcript on The Automation Blog: (automatically generated) Shawn Tierney (Host): Welcome back to The Automation Podcast. My name is Shawn Tierney with Insights In Automation, and I wanna thank you for tuning back in this week. Now this week on the show, I meet up with Connor Mason from Software Toolbox, who gives us an overview of their product suite, and then he gives us a demo at the end. And even if you're listening, I think you're gonna find the demo interesting because Connor does a great job of talking through what he's doing on the screen. With that said, let's go ahead and jump into this week's episode with Connor Mason from Software Toolbox. I wanna welcome Connor from Software Toolbox to the show. Connor, it's really exciting to have you. It's just a lot of fun talking to your team as we prepared for this, and I'm really looking forward to it because I just know in your company over the years, you guys have so many great solutions. I really just wanna thank you for coming on the show. And before you jump into talking about products and technologies... Yeah. Could you first tell us just a little bit about yourself? Connor Mason (Guest): Absolutely. Thanks, Shawn, for having us on. Definitely a pleasure to be a part of this environment. So my name is Connor Mason.
Again, I’m with Software Toolbox. We’ve been around for quite a while. So we’ll get into some of that history as well before we get into all the the fun technical things. But, you know, I’ve worked a lot with the variety of OT and IT projects that are ongoing at this point. I’ve come up through our support side. It’s definitely where we grow a lot of our technical skills. It’s a big portion of our company. We’ll get that into that a little more. Currently a technical application consultant lead. So like I said, I I help run our support team, help with these large solutions based projects and consultations, to find what’s what’s best for you guys out there. There’s a lot of different things that in our in our industry is new, exciting. It’s fast paced. Definitely keeps me busy. My background was actually in data analytics. I did not come through engineering, did not come through the automation, trainings at all. So this is a whole new world for me about five years ago, and I’ve learned a lot, and I really enjoyed it. So, I really appreciate your time having us on here, Shawn Tierney (Host): Shawn. Well, I appreciate you coming on. I’m looking forward to what you’re gonna show us today. I had a the audience should know I had a little preview of what they were gonna show, so I’m looking forward to it. Connor Mason (Guest): Awesome. Well, let’s jump right into it then. So like I said, we’re here at Software Toolbox, kinda have this ongoing logo and and just word map of connect everything, and that’s really where we lie. Some people have called us data plumbers in the past. It’s all these different connections where you have something, maybe legacy or something new, you need to get into another system. Well, how do you connect all those different points to it? And, you know, throughout all these projects we worked on, there’s always something unique in those different projects. 
And we try to work in between those unique areas and in between all these different integrations and be something that people can come to as an expert, have those high level discussions, find something that works for them as a cost effective solution. So outside of just, you know, products that we offer, we also have a lot of just knowledge in the industry, and we wanna share that. You'll kinda see along here, there are some product names as well that you might recognize, our TopServer and OmniServer; we'll be talking about LOPA as well. It's been around in the industry for, you know, decades at this point. And also our Symbol Factory might be something you may have heard of in other products, that they actually utilize themselves for HMI and SCADA graphics. That is our product. So you may have interacted with us without even knowing it, and I hope we get to kind of talk more about things that we do. So before we jump into all the fun technical things as well, I kind of want to talk about just the overall Software Toolbox experience, as we call it. We're more than just someone that wants to sell you a product. We really do work with the idea of solutions. How do we provide you value and solve the problems that you are facing as the person that's actually working out there on the field, on those operation lines, and making things as well? That's really our big priority: providing a high level of knowledge, variety in the things we can work with, and then also the support. It's very dear to me, coming through the support team and still working, you know, day to day throughout Software Toolbox, and it's something that has been ingrained into our heritage. Next year will be thirty years of Software Toolbox in 2026. So we were established in 1996. Through those thirty years, we have committed to supporting the people that we work with. And I can just tell you that that entire motto lives throughout everyone that's here.
So from that, over 97% of the customers that we interact with through support say they had an awesome or great experience. Having someone that you can call that understands the products you're working with, understands the environment you're working in, understands the priority of certain things. If you ever have a plant shut down, we know how stressful that is. Those are things that we work through and help people throughout. So these really are the core pillars of Software Toolbox and who we are, beyond just the products, and I really think this is something unique that we have continued to grow and stand upon for those thirty years. So jumping right into some of the industry challenges we've been seeing over the past few years. This is also a fun one for me, talking about data analytics and tying these things together. In my prior life and education, I worked with just tons of data, and I never fully knew where it might have come from, why it was such a mess, who structured it that way, but it was my job to get some insights out of that. And knowing what the data actually was and why it matters is a big part of actually getting value. So if you have dirty data, if you have data that's just clustered, it's in silos, it's very often you're not gonna get much value out of it. This was a study that we found in 2024 from Gartner Research. And it said that, based on the question that businesses were asked, what were the top strategic priorities for your data analytics functions in 2024, almost 50%, it's right at 49%, said that they wanted to improve data quality, and that was a strategic priority. That's about half the industry just talking about data quality, and it's exactly because of those reasons I said in my prior life gave me a headache, to look at all these different things that I don't even know where they came from or why they were so different.
And the person that made that may have been gone, may not have the context, and making that jump from the person that implemented things to the people that are making decisions is a very big task sometimes. So if we can create a better pipeline of data quality at the beginning, it makes those people's lives a lot easier up front and allows them to get value out of that data a lot quicker. And that's what businesses need. Shawn Tierney (Host): You know, I wanna stay on data quality. Right? Mhmm. I think a lot of us, when we think of that, we think of, you know, error detection. We think of lost connections. We think of, you know, just garbage data coming through. But I think from an analytical side, there's a different view on that, you know, in line with what you were just saying. So when you're talking to somebody about data quality, how do you get them to shift gears and focus in on what you're talking about and not, like, a quality connection to the device itself? Connor Mason (Guest): Absolutely. Yeah. I kinda live in both those worlds now. You know, I get to see that connection state. And when you're operating in real time, that quality is also very important to you. Mhmm. And I kind of use that in the same realm. Think of that when you're thinking in real time: if you know what's going on in the operation and where things are running, that's important to you. That's the quality that you're looking for. You have to think beyond just real time. We're talking about historical data. We're talking about data that's been stored for months and years. Think about the quality of that data once it's made up to that level. Are they gonna understand what was happening around those periods? Are they gonna understand what those tags even are? Are they gonna understand those conventions that you've implemented to give them insights into this operation? Is that a clear picture? So, yeah, you're absolutely right.
There are two levels to this, and that is a big part of it, the real time data and historical, and we're gonna get some of that into our demo as well. It's a big area for the business and the people working in the operations. Shawn Tierney (Host): Yeah. I think quality too. You know, you may have data. It's good data. It was collected correctly. You had a good connection to the device. You got it as often as you want. But that data could really be useless. It could tell you nothing. Connor Mason (Guest): Right. Exactly. Shawn Tierney (Host): Right? It could be a flow rate on part of the process that's irrelevant to monitoring the actual production of the product or whatever you're making. And, you know, I've known a lot of people who filled up their databases, their historians, with everything; they just logged everything. And it's like a lot of that data was what I would call low quality because it's low information value. Right? Absolutely. I'm sure you run into that too. Connor Mason (Guest): Yeah. We run into a lot of people that, you know, say I've got x amount of data points in my historian, and, you know, then we start digging into, well, I wanna do something with it or wanna migrate. Okay. Like, well, what do you wanna achieve at the end of this? Right? And asking those questions, you know, it's great that you have all these things historized. Are you using it? Do you have the right things historized? Are they even set up to be, you know, worked upon once they are historized by someone outside of this landscape? And I think OT plays such a big role in this, and that's why we start to see the convergence of the IT and OT teams, just because that communication needs to occur sooner. So we're not just passing along, you know, low quality data, bad quality data as well. And we'll get into some of that later on.
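As a rough sketch of the quality screening being discussed, here is what filtering bad samples and flagging logging gaps before data reaches a historian might look like. This is not from the episode; the sample format and the "Good"/"Bad" labels are assumptions that loosely follow OPC quality conventions.

```python
from datetime import datetime, timedelta

# Hypothetical sample format: (timestamp, value, quality), where quality
# loosely follows the OPC "Good" / "Uncertain" / "Bad" convention.
def screen_samples(samples, max_gap=timedelta(minutes=5)):
    """Drop non-Good samples and report logging gaps, so obviously
    low-quality data is caught before it lands in a historian."""
    clean, issues = [], []
    prev_ts = None
    for ts, value, quality in samples:
        if quality != "Good":
            issues.append(f"{ts.isoformat()} dropped (quality={quality})")
            continue
        if prev_ts is not None and ts - prev_ts > max_gap:
            issues.append(f"{ts.isoformat()} gap of {ts - prev_ts}")
        clean.append((ts, value))
        prev_ts = ts
    return clean, issues

t0 = datetime(2024, 1, 1, 8, 0)
clean, issues = screen_samples([
    (t0, 10.0, "Good"),
    (t0 + timedelta(minutes=1), 0.0, "Bad"),     # sensor fault
    (t0 + timedelta(minutes=10), 11.0, "Good"),  # arrives after a gap
])
```

Real historians and the quality codes real OPC servers report are much richer than this, but the idea of filtering on quality and flagging gaps at the edge, before anyone tries to analyze the data, is the same.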
So to jump into some of our products and solutions, I kinda wanna give this overview of the automation pyramid. This is where we work from things like the field device communications. And you you have certain sensors, meters, actuators along the actual lines, wherever you’re working. We work across all the industries, so this can vary between those. Through there, you work up kind of your control area. A lot of control engineers are working. This is where I think a lot of the audience is very familiar with PLCs. Your your typical name, Siemens, Rockwell, your Schneiders that are creating, these hardware products. They’re interacting with things on the operation level, and they’re generating data. That that was kind of our bread and butter for a very long time and still is that communication level of getting data from there, but now getting it up the stack further into the pyramid of your supervisory, MES connections, and it’ll also now open to these ERP. We have a lot of large corporations that have data across variety of different solutions and also want to integrate directly down into their operation levels. There’s a lot of value to doing that, but there’s also a lot of watch outs, and a lot of security concerns. So that’ll be a topic that we’ll be getting into. We also all know that the cloud is here. It’s been here, and it’s it’s gonna continue to push its way into, these cloud providers into OT as well. There there’s a lot of benefit to it, but there there’s also some watch outs as this kind of realm, changes in the landscape that we’ve been used to. So there’s a lot of times that we wanna get data out there. There’s value into AI agents. It’s a hot it’s a hot commodity right now. Analytics as well. How do we get those things directly from shop floor, up into the cloud directly, and how do we do that securely? It’s things that we’ve been working on. We’ve had successful projects, continues to be an interest area and I don’t see it slowing down at all. 
Now, when we kind of begin this level at the bottom of connectivity, people mostly know us for our TopServer. This is our platform for industrial device connectivity. It's the thing that's talking to all those different PLCs in your plant, whether that's brownfield or greenfield. We pretty much know that there's never gonna be just a single PLC manufacturer that exists in one plant. There's always gonna be something that's slightly different. Definitely for brownfield: different engineers made different choices, things have been inherited, and you gotta keep running them. TopServer provides this single platform to connect to a long laundry list of different PLCs. And if this sounds very familiar to Kepserver, well, you're not wrong. Kepserver is the same exact technology that TopServer is. What's the difference, then, is probably the biggest question we usually get. The difference technology wise is nothing. The difference in the back end is that actually it's all the same product, same product releases, same price, but we have been the biggest single source of Kepserver or TopServer implementations into the market for almost two plus decades at this point. So as the single biggest purchaser, we have this own-labeled version of Kepserver to provide to our customers. They interact with our support team, our solutions teams as well, and we sell it along the stack of other things because it fits so well. And we've been doing this since the early two thousands, when Kepware was a much smaller company than it is now, and we've had a really great relationship with them. So if you've enjoyed the technology of Kepserver, maybe there's some users out there, and if you ever heard of TopServer and that has been unclear, I hope this clarifies it. But it is a great technology stack that we build upon, and we'll get into some of that in our demo.
Now the other question is, what if you don't have a standard communication protocol, like a Modbus, like an Allen Bradley PLC as well? We see this a lot with, you know, testing areas, pharmaceuticals, maybe also in packaging, barcode scanners, weigh scales, printers online as well. They may have some form of basic communications that talks over just TCP or serial. And how do you get that information that's really valuable still, but it's not going through a PLC? It's not going into your typical HMI and SCADA. It might be a very manual process for a lot of these test systems as well, how they're collecting and analyzing the data. Well, you may have heard of our OmniServer as well. It's been around, like I said, for a couple decades, and it's just a proven solution where, without coding, you can go in and build a custom protocol that expects a format from that device, translates it, puts it into standard tags, and now those tags can be accessible through the open standards of OPC, or, for AVEVA users, SuiteLink as well. And that really provides a nice combination of your standard communications and also these more custom communications that may have been done through scripting in the past. Well, you know, put this onto an actual server that can communicate through those protocols natively, and just get that data into those SCADA systems, HMIs, where you need it. Shawn Tierney (Host): You know, I used that. Many years ago, I had an integrator who came to me. This is back in the RSView days. He's like, Shawn, I got, like, 20 Eurotherm devices on a 485, and they speak ASCII, and I gotta get into RSView32. And, you know, OmniServer, I love that you could basically develop your own protocol, and we did Omega and some other devices too. You're developing your own protocol, but it's beautiful. And the fact that when you're testing it, it color codes everything. So you know, hey, that part worked. The header worked.
The data worked. Oh, the trailing didn't work, or the terminator didn't work, or the data's not in the right format. It was just a joy to work with back then, and I can imagine it's only gotten better since. Connor Mason (Guest): Yeah. I think it's like a little engineer playground where you get in there and start really decoding and seeing how these devices communicate. And then once you've got it running, it's one of those things that just performs, and it's saved many people from developing custom code, having to manage that custom code and integrations, you know, for many years. So it's one of those things that's kinda tried and tested, and it's still kind of a staple of our base level communications. Alright. So moving along kind of our automation pyramid as well. Another part of our large offering is the Cogent DataHub. Some people may have heard of this as well. It's been around for a good while, and it's been part of our portfolio for a while as well. This starts building upon where we had the communication, now up to those higher echelons of the pyramid. This is gonna bring in a lot of different connectivities. If you're listening, it's kind of a hub and spoke type of concept for real time data. We also have historical implementations. You can connect through a variety of different things: OPC, both the profiles for alarms and events, and even OPC UA's Alarms and Conditions, which is still getting adoption across the industry, but it is growing as part of the OPC UA standard. We have integrations to MQTT. It can be its own MQTT broker, and it can also be an MQTT client. That has grown a lot. It's one of those things that lives beside OPC UA, not exactly a replacement. If you ever have any questions about that, it's definitely a topic I love to talk about.
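Stepping back to the OmniServer discussion for a moment: the header/data/terminator checking Shawn describes can be sketched as a tiny frame parser. The STX/ETX framing and the two numeric fields here are made up for illustration, not any real device's protocol (OmniServer itself builds this graphically, without code).

```python
# Illustrative ASCII device frame: STX header, comma-separated fields,
# ETX terminator.
STX, ETX = "\x02", "\x03"

def parse_frame(raw: str) -> dict:
    # Check each part of the frame separately, the way OmniServer flags
    # header, data, and terminator matches individually during testing.
    if not raw.startswith(STX):
        raise ValueError("header did not match")
    if not raw.endswith(ETX):
        raise ValueError("terminator did not match")
    fields = raw[len(STX):-len(ETX)].split(",")
    try:
        return {"temperature": float(fields[0]), "setpoint": float(fields[1])}
    except (IndexError, ValueError):
        raise ValueError("data did not match the expected format")
```

The payoff of doing this in a configurable server instead of hand-rolled code like the above is exactly what the conversation describes: no custom scripts to maintain, and the parsed fields show up as ordinary tags over OPC.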
There's space for this to combine the benefits of both of these, and it's so versatile and flexible for these different types of implementations. On top of that, it's a really strong tool for conversion and aggregation. Like its name says, it's a data hub. You send all the different information to this. It stores it into a hierarchy with a variety of different modeling that you can do within it. That's gonna store these values across a standard data format. Once I have data into this, any of those different connections, I can then send data back out. So if I have anything that I know is coming in through a certain plug-in like OPC, bring that in, send it out on these other ones, OPC DA over to MQTT. It could even do DDE if I'm still using that, which I probably wouldn't suggest. But overall, there's a lot of good benefits from having something that can also be a standardization between all your different connections. I have a lot of different things, maybe a variety of OPC servers, legacy or newer. Bring that into a data hub, and then all your other connections, your historians, your MES, your SCADAs, can connect to that single point. So it's all getting the same data model and values from a single source rather than going out and making many-to-many connections. A large thing that it was originally used for was getting around DCOM. That word, you know, might send some shivers down people's spines still, to this day, but it's not a fun thing to deal with DCOM, and also with the security hardening, it's just not something that you really want to do. I'm sure there's a lot of security professionals that would advise against ever doing it. This tunneling will allow you to have a data hub that locally talks to any OPC DA server or client, and communicate between two data hubs over a tunnel that pushes the data just over TCP, takes away all the COM wrappers, and now you just have values that get streamed in between.
Now you don't have to configure any DCOM at all, and it's all local. So a lot of people, when transitioning between products where maybe the server only supports OPC DA, and the client is now supporting OPC UA, and they can't change it yet, this has allowed them to implement a solution quickly and at a cost effective price, without ripping everything out. Shawn Tierney (Host): You know, I wanna ask you too. I can see, because this thing is a data hub. So if you're listening and not watching, you're not gonna see, you know, server, client, UA, DA, broker, server, client, you know, just all these different things up here on the slide. How does somebody find out if it does what they need? I mean, do you guys have a line they can call to say, I wanna do this to this. Is that something DataHub can do, or is there a demo? What would you recommend to somebody? Connor Mason (Guest): Absolutely. Reach out to us. We have a lot of content online, and it's not behind any paywall or sign-in links even. You can always go to our website. It's just softwaretoolbox.com. Mhmm. And that's gonna get you to our product pages. You can download any product directly from there. They have demo timers. So typically with Cogent DataHub, after an hour, it will stop. You can just rerun it. And then call our team. Yeah. We have a solutions team that can work with you on, hey, what do I need, as well. Then our support team, if you run into any issues, can help you troubleshoot that as well. So I'll have some contact information at the end that'll get people to, you know, where they need to go. But you're absolutely right, Shawn. Because this is so versatile, everyone's use case of it is usually something a little bit different. And the best people to come talk to about that is us, because we've seen all those differences.
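The tunneling idea above, local OPC on each side and plain TCP in the middle, can be illustrated with newline-delimited JSON standing in for the wire format. The real DataHub tunnel protocol is its own (this sketch only shows why no DCOM is involved), and the tag names are invented.

```python
import json
import socket

def send_update(sock, tag, value, quality="Good"):
    # One tag update per line; no COM/DCOM involved, just bytes over TCP.
    sock.sendall((json.dumps({"tag": tag, "value": value,
                              "quality": quality}) + "\n").encode("utf-8"))

# A local socket pair stands in for the two tunnel endpoints that would
# normally sit on separate machines.
west, east = socket.socketpair()
send_update(west, "Plant1.Line3.FlowRate", 42.7)
send_update(west, "Plant1.Line3.Running", True)
west.close()  # close the sending side so the reader sees end-of-stream

with east.makefile("r", encoding="utf-8") as reader:
    updates = [json.loads(line) for line in reader]
```

Each side keeps talking plain OPC DA or UA locally; only serialized values like these cross the network, which is why no DCOM configuration or security-hardening workaround is needed.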
So Shawn Tierney (Host): I think a lot of people run into the fact, like, they have a problem. Maybe it's the one you said where they have the OPC UA and it needs to connect to an OPC DA client. And, you know, a lot of times they're a little gun-shy to buy a license because they wanna make sure it's gonna do exactly what they need first. And I think that's where having your people who can, you know, answer their questions saying, yes, we can do that, or, no, we can't do that, or, you know, a demo that they could download and run for an hour at a time to actually do a proof of concept for the boss who's gonna sign off on purchasing this. And then the other thing is too, a lot of products like this have options. And you wanna make sure you're ticking the right boxes when you buy your license, because you don't wanna buy something you're not gonna use. You wanna buy the exact pieces you need. So I highly recommend, I mean, this product just does, like, I have, in my mind, like, five things I wanna ask right now, but not gonna. But, yeah, definitely, when it comes to a product like this, great to touch base with these folks. They're super friendly and helpful, and they'll put you in the right direction. Connor Mason (Guest): Yeah. I can tell you that, working in support, selling someone a solution that doesn't work is not something I wanna be doing. Bad day. Right. Exactly. Yeah. And we work very closely with anyone that's looking at products. You know, me being a technical product manager, well, I'm engaged in those conversations. And Mhmm. Yeah. If you need a demo license, reach out to us to extend that. We wanna make sure that you are buying something that provides you value. Now kind of moving on into a similar realm. This is one of our still somewhat newer offerings, I'd say, but it's been around five plus years, and it's really grown.
And I kinda said here, it’s called OPC router, and and it’s not it’s not a networking tool. A lot of people may may kinda get that. It’s more of a, kind of a term about, again, all these different type of connections. How do you route them to different ways? It it kind of it it separates itself from the Cogent data hub, and and acting at this base level of being like a visual workflow that you can assign various tasks to. So if I have certain events that occur, I may wanna do some processing on that before I just send data along, where the data hub is really working in between converting, streaming data, real time connections. This gives you a a kind of a playground to work around of if I have certain tasks that are occurring, maybe through a database that I wanna trigger off of a certain value, based on my SCADA system, well, you can build that in in these different workflows to execute exactly what you need. Very, very flexible. Again, it has all these different type of connections. The very unique ones that have also grown into kind of that OT IT convergence, is it can be a REST API server and client as well. So I can be sending out requests to, RESTful servers where we’re seeing that hosted in a lot of new applications. I wanna get data out of them. Or once I have consumed a variety of data, I can become the REST server in OPC router and offer that to other applications to request data from itself. So, again, it can kind of be that centralized area of information. The other thing as we talked about in the automation pyramid is it has connections directly into SAP and ERP systems. So if you have work orders, if you have materials, that you wanna continue to track and maybe trigger things based off information from your your operation floors via PLCs tracking, how they’re using things along the line, and that needs to match up with what the SAP system has for, the amount of materials you have. This can be that bridge. 
It really is built off the mindset of the OT world as well. So we kinda say this helps empower the OT level, because we're now giving them the tools so that they understand what's occurring in their operations. And what could you do by having a tool like this to allow you to kind of create automated workflows based off certain values and certain events, and automate some of these things that you may be doing manually or doing in a very convoluted way through a variety of solutions? So this is one of those products as well that's very advanced in the things it supports. Linux and Docker containers is definitely a hot topic, rightfully so. And this can run deployed in a Docker container as well. So we've seen that with the IT folks that really enjoy being able to control the entire deployment; it allows you to update easily, allows you to control and spin up new containers as well. This gives you a lot of flexibility to deploy and manage these systems. Shawn Tierney (Host): You know, I may wanna have you back on to talk about this. There's an old product called Rascal that I used to use. It was a transaction manager, and, based on data changing or on a timer as a trigger, it could take data either from the PLC to the database or from the database to the PLC, and it would work with stored procedures. And this seems like it hits all those points. And it sounds like it's, like you said right there on the slide, a visual workflow builder. Connor Mason (Guest): Yep. Shawn Tierney (Host): So you really piqued my interest with this one, and it may be something we wanna come back to and revisit in the future, because I know that older product was very useful and, you know, it really solved a lot of old applications back in the day. Connor Mason (Guest): Yeah. Absolutely. And this just takes that on and builds even more.
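The trigger-style transaction pattern being described, where a data change fires a database write, can be sketched in a few lines. Here sqlite3 stands in for whatever plant database an OPC Router workflow would actually target, and the tag name is hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE batch_log (tag TEXT, value REAL)")

last_seen = {}

def on_tag_update(tag, value):
    # Data-change trigger: only run the transaction when the value changes.
    if last_seen.get(tag) == value:
        return False
    last_seen[tag] = value
    with conn:  # commit (or roll back) the insert as one transaction
        conn.execute("INSERT INTO batch_log VALUES (?, ?)", (tag, value))
    return True

on_tag_update("Line1.BatchCount", 100)
on_tag_update("Line1.BatchCount", 100)  # unchanged, so no row is written
on_tag_update("Line1.BatchCount", 101)
rows = conn.execute("SELECT value FROM batch_log ORDER BY value").fetchall()
```

A real deployment would fire off OPC subscriptions rather than direct function calls, and the same pattern runs the other direction too, reading a database row or stored-procedure result back into PLC tags.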
If anyone was kind of listening at the beginning of this year, there was a conference called Prove It that was very big in the industry. We were there too, and we presented on stage a solution that we had. Highly recommend going and searching for that. It's on our web pages. It's also on their YouTube links, and it's called Prove It. And OPC router was a big part of that in the back end. I would love to dive in and show you the really unique things. Kind of as a quick overview, we were able to use Google AI vision to take camera data and detect if someone was wearing a hard hat. All the logic behind getting that information to Google AI vision was through REST with OPC router. Then we were parsing that information back through that connection and then providing it back to the PLCs. So we go all the way from a camera, to a PLC controlling a light stack, up to Google AI vision through OPC router, all on hotel Wi-Fi. It's a very, very fun presentation, and I think our team did a really great job. So a pretty new offering I wanna highlight is our DataCaster. This is an actual piece of hardware. You know, at Software Toolbox we do have some hardware as well. It's just part of the nature of this environment, of how we mesh in between things. But the idea is that there's a lot of different use cases for HMI and SCADA. They have grown so much from what they used to be, and they're a very core part of the automation stack. Now a lot of times, these are doing so many things beyond that as well. What we found is that in different areas of operations, you may not need all that different control. You may not even have the space to make up a whole workstation for that as well. What the DataCaster does is it just simply plugs into any network and into an HDMI compatible display, and it gives you a very easy-to-configure workspace to put a few key metrics onto a screen.
So for these different things: you can connect directly to PLCs like Allen Bradley. You can connect to SQL databases. You can also connect to REST APIs to gather the data from these different sources and build a kind of easy-to-view KPI dashboard, in a way. So if you're on an operation line and you wanna look at your current run rate, maybe you have certain things in the PLC tags, you know, flow and pressure, that's very important for those operators to see. They may not even have the capacity to be interacting with anything. They just need visualizations of what's going on. This product can just be installed in, you know, industrial areas with any type of display that you can easily access, and give them something that they can easily look at. It's configured all through a web browser to display what you want. You can put on different colors based on levels of values as well. And I feel like it's a very simple thing, it sometimes seems so simple, but those might be the things that provide value on the actual operation floor. This is, for anyone that's watching, kind of a quick view of a very simple screen. What we're showing here is what it would look like from all the different data sources. So talking directly to a ControlLogix PLC, talking to SQL databases, Micro 800s, a REST client, and, what's coming very soon, definitely by the end of this year, is OPC UA support. So any OPC UA server that's out there that already has your PLC data, etcetera, this could also connect to that and get values from there. Shawn Tierney (Host): Here I go. Can you make it so it, like, changes, like, pages every few seconds? Connor Mason (Guest): Right now, it is a single page, but this is, like I said, a very new product, so we're taking any feedback. If, yeah, if there's this type of slideshow cycle that would be, you know, valuable to anyone out there, let us know.
We're definitely always interested to see, from the people that are actually working out at these operation sites, what's valuable to them. Yeah. Shawn Tierney (Host): A lot of kiosks you see when you're traveling, it'll say, like, line one, I'll just throw that out there, line one, and that'll be on there for five seconds, and then it'll go line two. That'll be on there for five seconds, and then line... you know. And that's why I just mentioned that, because I can see that being a question that I would get from somebody asking me about it. Connor Mason (Guest): Oh, great question. Appreciate it. Alright. So now we're gonna set time for a little hands-on demo. For anyone that's just listening, I'm gonna talk about this at a high level and walk through everything. But the idea is that we have a few different PLCs, a very common Allen-Bradley and just a Siemens S7-1500 that's in our office, pretty close to me, on the other side of the wall, actually. We're gonna first start by connecting that to our TOP Server, like we talked about. This is our industrial communication server that offers OPC DA, OPC UA, and SuiteLink connectivity as well. And then we're gonna bring this into our Cogent DataHub. This, as we talked about, is getting those values up to these higher levels. What we'll be doing is also tunneling the data. We talked about being able to share data through the DataHubs themselves; I'll kinda explain why we're doing that here and the value you can add. And then we're also gonna showcase adding on MQTT to this level. Taking data now just from these two PLCs that are sitting on a rack, I can automatically make all that information available in the MQTT broker. So any MQTT client that's out there that wants to subscribe to that data now has that accessible. And I've created this all through a really simple workflow. We also have some databases connected.
InfluxDB, which we install with Cogent DataHub, has a free visualization tool that kinda just helps you see what's going on in your processes. I wanna showcase a little bit of that as well. Alright. So now jumping into our demo, where we first start off here is our TOP Server. Like I mentioned before, if anyone has worked with KEPServerEX in the past, this is gonna look very similar. Like it, because it is: the same technology and all the things here. The first thing that I wanted to establish in our demo was our connection to our PLCs. I have a few here. We're only gonna use the Allen-Bradley and the Siemens for the time that we have on our demo here. But how this builds out as a platform is you create these different channels and the device connections between them. This is gonna be your physical connections to them. It's either a TCP/IP connection or maybe your serial connection as well. We have support for all of them. It really is a long list. Anyone watching out there, you can kind of see all the different drivers that we offer. So bringing this into a single platform, you can have all your connectivity based here. All those different connections that you now have up the stack, your SCADA, your historians, MES even as well, they can all go to a single source. Makes that management, troubleshooting, all of those a bit easier as well. So one of the first things I did here, I have this built out, but I'll kinda walk through what you would typically do. You have your Allen-Bradley ControlLogix Ethernet driver here first. You know, I have some IPs in here I won't show, but, regardless, we have our drivers here, and then we have a set of tags. These are all the global tags in the programming of the PLC. How I got these to kind of map automatically is, in our driver, we're able to create tags automatically. So you're able to send a command to that device and ask for its entire tag database.
It can come back, provide all that, map it out for you, and create those tags as well. This saves a lot of time versus, you know, an engineer having to go in and address all the individual items themselves. So once it's defined in the programming project, you're able to bring this all in automatically. I'll show now how easy that makes it connecting to something like the Cogent DataHub. In a very similar fashion, we have a connection over here to the Siemens PLC that I also have. You can see beneath it all these different tag structures, and this was created the exact same way. Where those PLCs support it, you can do automatic tag generation, bring in all the structure that you've already built out in your PLC programming, and make this available on this OPC server now as well. So that's really the basis. We first need to establish communications to these PLCs, get that tag data, and now, what do we wanna do with it? So in this demo, what I wanted to bring up next was the Cogent DataHub. Here, I see a very similar kind of layout. We have a different set of plugins on the left side. So for anyone listening, the Cogent DataHub again is kind of our aggregation and conversion tool. All these different types of protocols, like OPC UA, OPC DA, and OPC A&E for alarms and events. We also support OPC Alarms and Conditions, which is the newer profile for alarms in OPC UA. We have a variety of different ways that you can get data out of things and data into the DataHub. We can also do bridging. This concept is how you share data in between different points. So let's say I had a connection to one OPC server, and it was communicating to a certain PLC, and there were certain registers I was getting data from. Well, now I also wanna connect to a different OPC server that has an entirely different brand of PLCs. And then maybe I wanna share data in between them directly. Well, with this software, I can just bridge those points between them.
Once they're in the DataHub, I can do kind of whatever I want with them. I can then allow them to write between those PLCs and share data that way, and you're not having to do any type of hardwiring directly in between them, and they're now able to communicate with each other. Through the standards of OPC and these varieties of different communication levels, I can integrate them together. Shawn Tierney (Host): You know, you bring up a good point. When you do something like that, is there any heartbeat? Like, under one of these topics, are there tags we can use that are from DataHub itself that can be sent to the destination, like a heartbeat, or, you know, to verify the transactions? Connor Mason (Guest): Yeah. Absolutely. So with this as well, there's a pretty strong scripting engine, and I have done that in the past, where you can make internal tags. And that could be a timer; it could be a counter. It just kind of allows you to create your own tags as well, and you could do the same thing, share that through a bridge connection to a PLC. So, yeah, there are definitely some people that have those use cases where they wanna get something to just track on this software side and get it out to those hardware PLCs. Absolutely. Shawn Tierney (Host): I mean, when you send data out of the PLC, the PLC doesn't care who takes my data. But when you're getting data into the PLC, you wanna make sure it's updating and it's fresh. And so, you know, you throw a counter in there with the scripting and are able to have that. As long as you see that incrementing, you know you've got good data coming in. That's a good feature. Connor Mason (Guest): Absolutely. You know, another big one is the redundancy. So what this does, beyond just the OPC, is we can make redundancy for basically anything that has two instances running of it. So any of these different connections.
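Before moving on, the heartbeat pattern from that exchange is simple enough to sketch in plain Python. This is a generic illustration of the idea (an incrementing internal tag on the producer side, plus a staleness check on the consumer side), not DataHub's actual scripting language:

```python
class HeartbeatTag:
    """Producer side: an internal counter tag, bridged out to the PLC."""
    def __init__(self):
        self.value = 0

    def tick(self):
        # Wrap so the value fits a 16-bit PLC register.
        self.value = (self.value + 1) % 32768
        return self.value

class FreshnessMonitor:
    """Consumer side: watch the counter and flag a stalled link."""
    def __init__(self, stale_after_s: float = 5.0):
        self.last_value = None
        self.last_change = None
        self.stale_after_s = stale_after_s

    def update(self, value, now: float):
        if value != self.last_value:
            self.last_value = value
            self.last_change = now

    def is_fresh(self, now: float) -> bool:
        return self.last_change is not None and (now - self.last_change) <= self.stale_after_s

hb = HeartbeatTag()
mon = FreshnessMonitor(stale_after_s=5.0)
mon.update(hb.tick(), now=0.0)   # counter moved: data is fresh
mon.update(hb.value, now=10.0)   # counter stopped: the link has gone stale
```

In the real setup, the counter would be a scripted internal tag bridged to the PLC, and the "monitor" would be rungs of PLC logic watching it increment.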
How it's unique is that it just looks at the buckets of data that you create. So for an example, if I do have two different OPC servers and I put them into two areas of, let's say, OPC server one and OPC server two, I can now create an OPC redundancy data bucket. And now any client that connects externally and wants that data is gonna go talk to that bucket of data. And that bucket of data is going to automatically change between sources as things go down and things come back up, and the client would never know that happened unless you wanted it to. There are internal tags to show what the current source is and such, but the idea is to make this transition kind of hidden, so that regardless of what's going on in the operations, if I have this set up, I can have my external applications just reading from a single source without knowing that there are two things behind it that are actually controlling that. Very important for, you know, historian connections where you wanna have a full, complete picture of that data that's coming in, if you're able to make a redundant connection to two different servers and then allow that historian to talk to a single point where it doesn't have to control that switching back and forth. It will just see that data flow seamlessly, as either one is up at that time. Beyond that as well, there are quite a few other different things in here; I don't think we have time to cover all of them. But for our demo, what I wanna focus on first is our OPC UA connection. This allows us both to act as an OPC UA client to get data from any servers out there, like our TOP Server, and also to act as an OPC UA server itself. So if anything's coming in, from maybe multiple connections to different servers, or multiple connections to other things that aren't OPC as well, I can now provide all this data automatically in my own namespace to allow things to connect to me as well.
And that's part of that aggregation feature and the topic I was mentioning before. So with that, I have a connection here. It's pulling data all from my TOP Server. I have a few different tags from my Allen-Bradley and my Siemens PLC selected. The next part of this that I was meshing in was the tunneling. Like I said, this is very popular to get around DCOM issues, but there are a lot of reasons why you may still use this beyond just the headache of DCOM and what it was. What this runs on is a TCP stream that takes all the data points as a value, a quality, and a timestamp, and it can mirror those in another DataHub instance. So if I wanna get things across a network, like my OT side, where previously I would have to come in and allow an open port onto my network for any OPC UA clients across the network to access it, I can now actually change the direction of this and tunnel data out of my network without opening up any ports. This is really big for security. If anyone out there is a security professional, or working as an engineer where you have to work with your IT and security a lot, you don't wanna have an open port, especially into your operations and OT side. So this allows you to change that direction of flow and push data out into another area, like a DMZ computer, or up to a business-level computer as well. The other thing I have configured in this demo: the benefit of having that tunneling streaming data across this connection is that I can also store this data locally in an InfluxDB database. The purpose of that is that I can actually historize it, and then, if this connection ever goes down, backfill any information that was lost while the tunnel connection was down. Without this added layer, in real-time data scenarios like OPC UA, unless you have historical access, you would lose a lot of data if that connection ever went down.
But with this, I can actually use the back end of this InfluxDB to buffer any values, and when my connection comes back up, pass them along that stream again. And if I have anything that's historically connected, like another InfluxDB, maybe a PI historian, an AVEVA historian, any historian offering out there that can allow that connection, I can then provide all those records that were originally missed and backfill them into those systems. So I've switched over to a second machine. It's gonna look very similar here as well. This also has an instance of the Cogent DataHub running. For anyone not watching, what we actually have on this side is the portion of the tunneler that's sitting here and listening for any data requests coming in. So on my first machine, I was able to connect my PLCs and gather that information into Cogent DataHub, and now I'm pushing that information across the network into a separate machine that's sitting here and listening to gather information. So what I can quickly do is just make sure I have all my data here. I have these different points from my Allen-Bradley PLCs. I have a few different simulation demo points, like temperature, pressure, tank level, a few statuses, and all of this is updating directly through that stream as the PLC updates it as well. I also have my Siemens controller; I have some current values and a few different counter tags as well. All of this again is being directly streamed through that tunnel. I'm not connecting to an OPC server at all on this side. I can show you that here: there are no connections configured. I'm not talking to the PLCs directly on this machine either, but I'm able to pass all the information through without opening up any ports on my OT demo machine, per se. So what's the benefit of that? Well, again, security. Also, the ability to do the store-and-forward mechanisms. On the other side, I was logging directly to an InfluxDB.
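The store-and-forward mechanism just described boils down to a buffer-and-flush pattern. Here is a minimal Python sketch of the idea, with in-memory collections standing in for the local InfluxDB buffer and the remote historian; it is an illustration of the concept, not DataHub's implementation:

```python
from collections import deque

class StoreAndForward:
    """Buffer samples locally while the tunnel is down; flush on reconnect."""
    def __init__(self):
        self.buffer = deque()   # stands in for the local InfluxDB buffer
        self.sent = []          # stands in for the remote historian
        self.connected = True

    def log(self, point):
        if self.connected:
            self.sent.append(point)
        else:
            self.buffer.append(point)  # store locally during the outage

    def reconnect(self):
        self.connected = True
        while self.buffer:             # backfill in original order
            self.sent.append(self.buffer.popleft())

saf = StoreAndForward()
saf.log(("pressure", 1, 10.2))
saf.connected = False                  # tunnel drops
saf.log(("pressure", 2, 10.4))
saf.log(("pressure", 3, 10.6))
saf.reconnect()                        # backfill the gap on reconnect
```

After the reconnect, the "historian" side holds all three samples in order, which is exactly the gap-free trend the demo shows in the visualization tool.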
This could be my buffer, and then I was able to configure it so that if any values were lost, it stores them and forwards them across the network. So now on this side, if I pull up Chronograf, which is a free visualization tool that installs with the DataHub as well, I can see some very nice visual workflows and visual diagrams of what is going on with this data. So I have a pressure that is just a simulator in this Allen-Bradley PLC that ramps up and comes back down. It's not actually connected to anything reading a real pressure, but you can see over time, and I can kind of change through these different layers of time. I might go back a little far, but I have a lot of data that's been stored in here. For a while during my test, I turned this off and made it fail, but then I came back in and it was able to recreate all the data and backfill it as well. So through these views, I can see that as data disconnects and comes back on, I have a very cyclical view of the data, because it was able to recover and store and forward from that source. Like I said, Shawn, data quality is a big thing in this industry. It's a big thing both for people at the operations side and for people making decisions in the business layer. So being able to have a full picture, without gaps, is definitely something that you should be prioritizing when you can. Shawn Tierney (Host): Now what we're seeing here is you're using InfluxDB on this destination PC, or IT-side PC, and Chronograf, which was that utility or that package that gets installed. It's free. But you don't actually have to use that. You could have sent this into an OSIsoft PI or, Exactly. somebody else's historian. Right? Can you name some of the historians you work with? I know OSIsoft PI. Connor Mason (Guest): Yeah. Yeah. Absolutely. So there are quite a few different ones.
As far as what we support in the DataHub natively: Amazon Kinesis, a cloud-hosted one that we can also do the same things with from here. AVEVA Historian, AVEVA Insight, Apache Kafka. That's kind of a newer one as well that used to be a very IT-oriented solution, now getting into OT. It's kind of a similar database structure where things are stored in different topics that we can stream to. On top of that, just regular old ODBC connections; that opens up a lot of different ways you can do it. Or even the old classic OPC HDA. So if you have any historians that can act as an OPC HDA connection, we can also stream it through there. Shawn Tierney (Host): Excellent. That's a great list. Connor Mason (Guest): The other thing I wanna show while we still have some time here is that MQTT component. This is really growing, and it's gonna continue to be a part of the industrial automation technology stack and conversations moving forward, for streaming data, you know, from devices and edge devices up into different layers, both into the OT, and then maybe out to IT in our business levels as well, and definitely into the cloud, as we're seeing a lot of growth there. Like I mentioned with DataHub, the big benefit is I have all these different connections; I can consume all this data. Well, I can also act as an MQTT broker. And what a broker typically does in MQTT is just route data and share data. It's kind of that central point where things come to it to either say, hey, I'm giving you some new values, share them with someone else, or, hey, I need these values, can you give me those? It really fits in super well with what this product is at its core. So all I have to do here is just enable it. To show what that now allows, I have an example in MQTT Explorer. If anyone has worked with MQTT, you're probably familiar with this. There's nothing else I configured beyond just enabling the broker.
And you can see within this structure, I have all the same data that was in my DataHub already, the same things I was collecting from my PLCs and TOP Server. Now I've exposed these as MQTT points, and I have them in JSON format with their value and their timestamp. You can even see, like, a little trend here kind of matching what we saw in Influx. And now this enables all those different cloud connectors that wanna speak this language to do it seamlessly. Shawn Tierney (Host): So you didn't have to set up the PLCs a second time to do this? Nope. Connor Mason (Guest): Not at all. Shawn Tierney (Host): You just enabled this, and now the data's going this way as well. Exactly. Connor Mason (Guest): Yeah. That's a really strong point of the Cogent DataHub: once you have everything in its structure and model, you just enable it to use any of these different connections. You can get really, really creative with these different things, like we talked about with the bridging aspect and getting into different systems, even writing down to the PLCs. You can make custom notifications and email alerts based on any of these values. You could even take something like this MQTT connection, tunnel it across to another DataHub as well, and maybe then convert it to OPC DA. And now you've made a new connection over to something that's very legacy as well.
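Any MQTT subscriber could consume those JSON points. Here is a minimal Python sketch of the parsing side; the exact payload keys ("value", "timestamp") are an assumption based on the demo's description of the format, and a real client would receive these in a paho-mqtt-style message callback:

```python
import json

def parse_datahub_point(topic: str, payload: bytes):
    """Parse a DataHub-style MQTT message into (tag, value, timestamp).

    The payload keys here are assumed from the demo's description of a
    JSON object carrying each point's value and timestamp."""
    doc = json.loads(payload)
    return topic, doc.get("value"), doc.get("timestamp")

# What a subscriber callback might receive for the simulated pressure tag:
topic = "DataHub/AllenBradley/Pressure"
payload = json.dumps({"value": 42.7, "timestamp": "2025-01-15T12:00:00Z"}).encode()
tag, value, ts = parse_datahub_point(topic, payload)
```

The point of the demo stands out here: the broker side required no per-PLC configuration, so any client speaking MQTT and JSON can pick the data up as-is.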
This is just a really simple kind of view of something that used to be very simple, just connecting OPC servers to a variety of different connections, and expanding on that with the store and forward, the local Influx usage, and getting out to things like MQTT as well. But there's a lot more you can do with these solutions. So like Shawn said, reach out to us. We're happy to engage and see what we can help you with. I have a few other things before we wrap up. Just overall, we've worked across nearly every industry. We have installations across the globe on all continents. And like I said, we'll have been around for pushing thirty years next year. So we've seen a lot of different things, and we really wanna talk to anyone out there that maybe has some struggles going on with just connectivity, or has any ongoing projects. If you work in these different industries, or if there's nothing marked here and you have anything going on that you need help with, we're very happy to sit down and let you know if there's something we can do there. Shawn Tierney (Host): Yeah. For those who are listening, I mean, we see most of the big energy and consumer product companies on that slide. So I'm not gonna read them off, but it's just a lot of car manufacturers, you know, these household name brands that everybody knows and loves. Connor Mason (Guest): So, to kind of wrap some things up here: we talked about all the different ways that we've helped solve things in the past, but I wanna highlight some of the unique ones that we've also gone and done some case studies and success stories on. So this one I actually got to work on within the last few years: a plastic packaging manufacturer was looking to track uptime and downtime across multiple different lines, and they had a new cloud solution that they were already evaluating. They were really excited to get it into play.
They had a lot of upside to getting things connected to this and starting to use it. Well, what they had was a lot of different PLCs, a lot of different brands, in different, you know, areas of operation that they needed to connect to. So what they did first was get that into our TOP Server, kind of similar to what we showed in our demo. We just need to get all the data into a centralized platform first, get that data accessible. Then from there, once they had all that information in a centralized area, they used the Cogent DataHub as well to help aggregate that information and transform it to be sent to the cloud through MQTT. So, very similar to the demo here, this is actually a real use case of that: getting information from PLCs, structuring it the way that cloud system needed it for MQTT, and streamlining that data connection to where it's now just running in operation. They constantly have updates about where their lines are in operation, tracking their downtime and their uptime as well, and then they're able to do some predictive analytics in that cloud solution based on their history. So this really enabled them to build from what they had existing, going from a lot of manual tracking to an entirely automated system, with management able to see real views of what's going on at the operation level. Another one I wanna talk about: we were able to do this success story with Ace Automation. They worked with a pharmaceutical company. Ace Automation is an SI, and they were brought in doing a lot of work with some old DDE connections and some custom Excel macros, and were just having a hard time maintaining some legacy systems that were a pain to deal with. They were working with these older history files from some old InTouch HMIs, and what they needed was something that was not just based on Excel and custom macros.
So one product we didn't get to talk about yet, but we also carry, is our LGH File Inspector. It's able to take these files, put them out into a standardized format like CSV, and also do a lot of that automation: when should these files be queried? Should they be queried for different lengths? Should they be output to different areas? Can I set these up in a scheduled task so it can be done automatically rather than someone having to sit down and do it manually in Excel? So they were able to recover over fifty hours of engineering time with the solution, from no longer having to do late-night calls to troubleshoot an Excel macro that stopped working, or deal with crashing machines because they were running legacy systems just to still support some of the DDE servers, saving them, you know, almost two hundred plus hours of productivity. Another example: we were able to work with a renewable energy customer that's doing a lot of innovative things across North America. They had a very ambitious plan to double their footprint in the next two years. And with that, they had to really look back at their assets and see where they currently stood: how do we make new standards to support us growing into what we want to be? So with this, they had a lot of different data sources, all kind of siloed at their specific areas. Nothing was really connected commonly to a corporate-level area of historization, or control and security. So again, they were able to use our TOP Server to put out a standard connectivity platform and bring in the DataHub as an aggregation tool. So each of these sites would have a TOP Server individually collecting data from different devices, and then that was able to send it into a single DataHub. So now their corporate level had an entire view of all the information from these different plants in one single application.
That then enabled them to connect their historian applications to that DataHub and have a perfect view and make visualizations of their entire operations. What this allowed them to do was grow without replacing everything. And that's a big thing that we strive for: not ripping out and replacing all your existing technologies. That's not something you can do overnight. But how do we provide value and gain efficiency with what's in place, providing newer technologies on top of that without disrupting the actual operation as well? So this was really, really successful. And at the end, I just wanna provide some other contacts and information so people can learn more. We have a blog that goes out every week on Thursdays. A lot of good technical content out there, a lot of recaps of the awesome things we get to do here, the success stories as well, and you can always find that at blog.softwaretoolbox.com. And again, our main website is just softwaretoolbox.com. You can get product information, downloads, and reach out to anyone on our team. Let's discuss what issues you have going on, any new projects; we'll be happy to listen. Shawn Tierney (Host): Well, Connor, I wanna thank you very much for coming on the show and bringing us up to speed on not only Software Toolbox, but also on TOP Server, and doing that demo with TOP Server and DataHub. Really appreciate that. And, I think, you know, like you just said, if anybody has any projects that you think these solutions may be able to solve, please give them a call. And if you've already done something with them, leave a comment, no matter where you're watching or listening to this; let us know what you did. What did you use? Like me, I used OmniServer all those many years ago, and, of course, TOP Server as an OPC server.
But if you guys have already used Software Toolbox, and, of course, Symbol Factory, I use that all the time. If you guys are using it, let us know in the comments. It's always great to hear from people out there. I know, you know, there are thousands of you guys listening every week, and I'd love to hear, are you using these products? Or if you have questions, I'll funnel them over to Connor if you put them in the comments. So with that, Connor, did you have anything else you wanted to cover before we close out today's show? Connor Mason (Guest): I think that was it, Shawn. Thanks again for having us on. It was really fun. Shawn Tierney (Host): I hope you enjoyed that episode, and I wanna thank Connor for taking time out of his busy schedule to come on the show and bring us up to speed on Software Toolbox and their suite of products. Really appreciated that demo at the end too, so if you're watching, we actually got a look at their products and how they work. And I just really appreciate them taking all of my questions. I also appreciate the fact that Software Toolbox sponsored this episode, meaning we were able to release it to you without any ads. So I really appreciate them. If you're doing any business with Software Toolbox, please thank them for sponsoring this episode. And with that, I just wanna wish you all good health and happiness. And until next time, my friends, peace. Until next time, Peace ✌️ If you enjoyed this content, please give it a Like, and consider sharing a link to it, as that is the best way for us to grow our audience, which in turn allows us to produce more content.

Fascination Street
Phil Rossi Returns! - O.G. Podcaster (Crescent / Harvey / Don't Turn Around)

Fascination Street

Play Episode Listen Later Oct 6, 2025 59:24 Transcription Available


Phil Rossi Returns! Take a walk with me down Fascination Street as I get to know even more about Phil Rossi, as he and I catch up since his last appearance 7 years ago. Phil is one of the ORIGINAL podcasters. He first started podcasting his unpublished speculative fiction novels twenty years ago! In this episode, I fanboy out a little bit, as Phil is on the Mount Rushmore of podcasting in my opinion. Then we move on to discuss his first ever novel 'Crescent' and why he is re-releasing it with a few changes. Next he spills the beans that he is also working on a sequel to Crescent! A few years ago, Phil started a podcast called Don't Turn Around, which focuses on the scary and the paranormal in our everyday lives. So we talk a bit about that and why he started it. More recently, Phil started another podcast called It Came From The Web, where he introduces and comments on paranormal videos and sightings that he finds on the internet. Then, he teamed up with fellow old school podcasters Tee Morris and Philippa Ballantine to form Old Spirits Investigations, a YouTube show where these three investigators explore purportedly haunted areas. So, we discuss all three of these shows, why they got started, and what the listener or viewer can expect from them. Phil even shares some personal stories of spooky and unexplainable events that happened in his own life. Make sure you check out the 2 audio podcasts, as well as the video podcast on YouTube. Phil, Tee, and Philippa are the bee's knees!!!

Unnamed Reverse Engineering Podcast
076 - Living In A Vast World of Craziness

Unnamed Reverse Engineering Podcast

Play Episode Listen Later Sep 29, 2025 60:42


An Interview With Dan Walters. Dan Walters/Bytetinker joins us after a long email chain to find a time. You'll understand why it was so hard in the interview. Jen met Dan at cyphercon.com, where he runs an ISP hacking village/ward. We covered a lot of stuff at various levels of the OSI stack. Here's just a short list of terms that came up: DOCSIS, Quadrature Amplitude Modulation (QAM). Some of the exploits we covered were part of Cable Haunt. While Dan for good reason did not provide explicit sites to get started with, you may have luck with archive.org. Want to try this for yourself but still waiting for your federal funding to come in? cyphercon.com 2026 is happening, and you can do this much more cheaply. It's happening April 1 & 2. Have comments or suggestions for us? Find us on twitter @unnamed_show, or email us at show@unnamedre.com. Music by TeknoAxe (http://www.youtube.com/user/teknoaxe)

The Nat Coombs Show
NFL Wild Week 3 Chaos w/ Marek Larwood!

The Nat Coombs Show

Play Episode Listen Later Sep 22, 2025 62:23


Nat's back from seeing Osi & Sam in the Five studio - next stop Dublin! - and is straight into unpicking a wild Week 3 in the NFL, and who better to get into the craziness than comedian and All-Pro member of the NC Show crew Marek Larwood! The fellas break down some of the 3–0 contenders and the 0–3 fall guys, and piece together the wild special teams chaos and defensive x-factors that generated some extraordinary results in a bonkers slate! Are the Bears back, or are the Cowboys that bad? Are the Colts and Chargers for real, real? Will the Rams be fine despite their meltdown? Are the Chiefs gonna make the playoffs? And will the Giants deal Russ? Plus LT bossing it, Marek reveals which exclusive WhatsApp group he's a member of, as well as another classic email from the Commish! ___ To sign up with FanTeam, our brand new partners, hit the link: https://af.fanteam.com/click?o=1&a=99082&c=1 - use the code RUSH to unlock special offers for followers of The NC Show! Get involved in the Edge Rush Boosted Acca, the TNF Freeroll contest - free to enter - and more! FanTeam is the ultimate home for NFL fans in the UK, with season-long, weekly, and daily fantasy contests featuring regular five-figure guaranteed prize pools. Users have to be 18+, please play responsibly, BeGambleAware.org ___ Smokin' BBQ, ice-cold beers, and all the NFL action you can handle throughout the season. What's not to love, people? Check out Hickory's Smokehouse here: https://hickorys.co.uk ___ Check out the official Nat Coombs Show music playlist: http://open.spotify.com/playlist/0i1nSLaUJWxZMGCe8eJLQY ___ BONUS CONTENT!
Subscribe to our YouTube Channel: ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠https://www.youtube.com/@TheNCShow⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ ___ Follow Nat on X or Instagram: X (Twitter): ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠https://twitter.com/natcoombs⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ Instagram: ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠https://www.instagram.com/natcoombs⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ ___ NC Show socials: X (Twitter): ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠https://twitter.com/thencshow⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ Facebook: ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠https://www.facebook.com/thencshow⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ Instagram: ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠https://www.instagram.com/thencshow/⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ Tik Tok: ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠https://www.tiktok.com/@thencshow?lang=en⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ Threads: ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠https://www.threads.net/@thencshow Learn more about your ad choices. Visit podcastchoices.com/adchoices

Packet Pushers - Full Podcast Feed
TCG058: Creating the Internet Layer That Should Have Been With Avery Pennarun

Packet Pushers - Full Podcast Feed

Play Episode Listen Later Sep 17, 2025 51:49


In this deep dive episode, we explore the evolution of networking with Avery Pennarun, Co-Founder and CEO of Tailscale. Avery shares his extensive journey through VPN technologies, from writing his first mesh VPN protocol in 1997 called “Tunnel Vision” to building Tailscale, a zero-trust networking solution. We discuss how Tailscale reimagines the OSI stack by...

S.O.S. (Stories of Service) - Ordinary people who do extraordinary work
Unjustly accused | Faces 240k in debt!!! - S.O.S. #223

S.O.S. (Stories of Service) - Ordinary people who do extraordinary work

Play Episode Listen Later Sep 8, 2025 55:32 Transcription Available


A life's trajectory derailed by a single night, a textbook case of injustice unfolding at one of America's most prestigious military academies. This urgent special episode of Stories of Service brings to light the troubling case of Joseph Fernau, a wrestler and former Air Force Academy cadet fighting to save his military career and avoid crushing debt after being falsely accused of sexual assault. When a devastating ankle injury sidelined Fernau from his beloved wrestling team, he made a mistake while heavily medicated - fraternizing with a freshman cadet. What followed defies belief: months later, after he began dating someone new, came an accusation of sexual assault that threatened everything he'd worked for. Despite text messages clearly showing consent before and satisfaction after their encounter, and despite OSI investigators finding the assault allegation unfounded, Fernau now faces disenrollment and $240,000 in debt while his accuser transferred to Stanford without consequences. The episode reveals disturbing inconsistencies in how military discipline is administered, with numerous examples of cadets committing similar or worse violations receiving far lighter punishments. Captain Adam DeRito, himself a veteran of a 15-year battle with the Academy over his own case, provides crucial context about potential bias and command influence affecting the proceedings. The conversation raises profound questions about who receives second chances in our military, and whether factors like identity politics might be corrupting the process of justice. Whether you're connected to military service or simply care about fairness in our institutions, this case demands attention. As Fernau's appeal reaches the Secretary of the Air Force, the fundamental question remains: Should one mistake, immediately self-reported and followed by exemplary conduct, end a promising military career?
Listen now and decide for yourself.
Support the show
Visit my website: https://thehello.llc/THERESACARPENTER
Read my writings on my blog: https://www.theresatapestries.com/
Listen to other episodes on my podcast: https://storiesofservice.buzzsprout.com
Watch episodes of my podcast: https://www.youtube.com/c/TheresaCarpenter76

CISSP Cyber Training Podcast - CISSP Training Program
CCT 274: CISSP Rapid Review (Domain 4) - Part 1

CISSP Cyber Training Podcast - CISSP Training Program

Play Episode Listen Later Aug 25, 2025 28:27 Transcription Available


Send us a text
Check us out at: https://www.cisspcybertraining.com/
Get access to 360 FREE CISSP Questions: https://www.cisspcybertraining.com/offers/dzHKVcDB/checkout
Get access to my FREE CISSP Self-Study Essentials Videos: https://www.cisspcybertraining.com/offers/KzBKKouv
Network security is the cornerstone of modern cybersecurity, and understanding its intricacies is essential for anyone preparing for the CISSP exam. In this comprehensive episode, Sean Gerber delivers a rapid review of Domain 4: Communications and Network Security, which constitutes 13% of the CISSP exam questions. The episode opens with a cautionary tale about a disgruntled Chinese developer who received a four-year prison sentence for deploying a logic bomb that devastated his former employer's network. This real-world example underscores the critical importance of proper employee termination procedures and privilege management, especially for technical staff with elevated access. As Sean emphasizes, "The eyes of Sauron" should be on any high-privilege employee showing signs of discontent. Diving into Domain 4, Sean expertly navigates through foundational concepts like the OSI and TCP/IP models, explaining how they standardize network communications and why security professionals must understand them to implement effective defense strategies. The discussion progresses through IP networking (both IPv4 and IPv6), secure protocols, multi-layer protections, and deep packet inspection, all crucial components of a robust security architecture. Particularly valuable is Sean's breakdown of modern network technologies like micro-segmentation, which divides networks into highly granular security zones. While acknowledging its power to limit lateral movement during breaches, he cautions that implementation requires sophisticated knowledge of software-defined networking (SDN) and careful planning: "It's better to start small than to go out and think of and get too big when you're dealing with deploying these SDN type of capabilities." Wireless security, content delivery networks, and endpoint protection receive thorough examination, with Sean emphasizing that endpoints are "your first line of detection" and advocating for comprehensive endpoint detection and response (EDR) solutions that go beyond traditional antivirus. The episode concludes with insights on voice communication security, contrasting traditional telephone networks with modern VoIP systems and their unique vulnerabilities. Whether you're preparing for the CISSP exam or looking to strengthen your organization's network security posture, this episode provides actionable insights backed by real-world experience. Ready to deepen your understanding of cybersecurity fundamentals? Subscribe to the CISSP Cyber Training Podcast and check out the free resources available at cisspcybertraining.com to accelerate your certification journey.
Support the show
Gain exclusive access to 360 FREE CISSP Practice Questions delivered directly to your inbox! Sign up at FreeCISSPQuestions.com and receive 30 expertly crafted practice questions every 15 days for the next 6 months - completely free! Don't miss this valuable opportunity to strengthen your CISSP exam preparation and boost your chances of certification success. Join now and start your journey toward CISSP mastery today!
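The OSI and TCP/IP model review above lends itself to a quick self-test. Below is a minimal, illustrative Python study aid; the layer placements follow common CISSP teaching conventions, not any official exam material, and the helper function is invented for this sketch.

```python
# Illustrative study aid only: the seven OSI layers with example protocols
# (placements follow common teaching conventions) and a tiny lookup helper.
OSI_LAYERS = {
    7: ("Application", ["HTTP", "FTP", "SNMP"]),
    6: ("Presentation", ["JPEG", "MPEG"]),
    5: ("Session", ["NetBIOS", "RPC"]),
    4: ("Transport", ["TCP", "UDP"]),
    3: ("Network", ["IPv4", "IPv6", "ICMP"]),
    2: ("Data Link", ["Ethernet", "ARP"]),
    1: ("Physical", ["twisted pair", "fiber"]),
}

def layer_of(protocol: str) -> int:
    """Return the OSI layer number a protocol is usually placed at, or 0 if unknown."""
    for number, (_name, protocols) in OSI_LAYERS.items():
        if protocol in protocols:
            return number
    return 0

print(layer_of("TCP"))   # 4
print(layer_of("ARP"))   # 2
```

Quizzing yourself from layer 7 down to layer 1 (or bottom-up) is a common way to drill this material before the exam.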

Acta Non Verba
Dr. Megan McElheran: On applying Stoicism to combat PTSD, The Power of Post Traumatic Growth and the Importance of Preparation Before Operational Stress

Acta Non Verba

Play Episode Listen Later Aug 20, 2025 45:58


In this episode, Dr. Megan McElheran, a clinical psychologist and CEO of Before Operational Stress, Inc. discusses stoicism's practical applications and the misinterpretations associated with it. Dr. McElheran shares her extensive work with trauma-exposed professionals, including military personnel and first responders, and highlights the importance of managing stress and trauma. Marcus and Dr. McElheran delve into the concept of post-traumatic growth, the necessity of facing adversities, and maintaining mental health resilience. The conversation also touches on Dr. McElheran's Bataan Death March experience, underscoring the significant lessons in resilience and determination. Episode Highlights: 02:29 The Misconceptions of Stoicism 08:04 The Impact of Trauma on First Responders 29:32 Stoic Wisdom for Overcoming Hardship 31:10 The Hero's Journey and Personal Growth 32:22 Embracing Pain and Suffering 37:55 Curating Thoughts and Building Confidence 40:20 The Bataan Death March: A Lesson in Endurance Dr. Megan McElheran, CEO of Wayfound Mental Health Group in Calgary, AB, is a Clinical Psychologist with 16 years of expertise in Operational Stress Injuries (OSI). Specializing in active-duty military, Veterans, and public safety personnel, she focuses on assessment, diagnosis, and treatment. Driven by a passion for OSI prevention and resilience enhancement, she developed the BOS program. Exploring innovative approaches, she's delving into psychedelic medicine for psychological injuries. A sought-after speaker and educator, Dr. McElheran shares her insights nationally. Her recent publication in the European Journal of Psychotraumatology, "Functional Disconnection and Reconnection," sheds light on novel strategies for public safety personnel's well-being. You can find out more here: https://www.beforeoperationalstress.com/ Learn more about the gift of Adversity and my mission to help my fellow humans create a better world by heading to www.marcusaureliusanderson.com. 
There you can take action by joining my ANV inner circle to get exclusive content and information. See omnystudio.com/listener for privacy information.

The Automation Podcast
PROFINET and System Redundancy (P244)

The Automation Podcast

Play Episode Listen Later Aug 13, 2025 45:13 Transcription Available


Shawn Tierney meets up with Tom Weingartner of PI (Profibus Profinet International) to learn about PROFINET and System Redundancy in this episode of The Automation Podcast. For any links related to this episode, check out the “Show Notes” located below the video. Watch The Automation Podcast from The Automation Blog: Listen to The Automation Podcast from The Automation Blog: The Automation Podcast, Episode 244 Show Notes: Special thanks to Tom Weingartner for coming on the show, and to Siemens for sponsoring this episode so we could release it ad free on all platforms! To learn more about PROFINET, see the below links: PROFINET One-Day Training Slide Deck PROFINET One-Day Training Class Dates IO-Link Workshop Dates PROFINET University Certified Network Engineer Course Read the transcript on The Automation Blog: (automatically generated) Shawn Tierney (Host): Welcome back to the automation podcast. My name is Shawn Tierney from Insights In Automation, and I wanna thank you for tuning back in this week. Now on this show, I actually had the opportunity to sit down with Tom Weingartner from PI to learn all about PROFINET. I actually reached out to him because I had some product vendors who wanted me to cover their S2 features in their products, and I thought it'd be better to first sit down and get a refresh on what S2 is. It's been five years since we've had a PROFINET expert on, so I figured now would be a good time before we start getting into how those features are used in different products. So with that said, I also wanna mention that Siemens has sponsored the episode, so it will be completely ad free. I love it when vendors sponsor the shows. Not only do we get to break even on the show itself, we also get to release it ad free and make the video free as well. So thank you, Siemens. If you see anybody from Siemens, thank them for sponsoring the Automation Podcast. As a matter of fact, thank any vendor who's ever sponsored any of our shows.
We really appreciate them. One final PSA that I wanna throw out there, like I talked about yesterday on my show, Automation Tech Talk: as we've seen with the Ethernet PLCs we're talking about, a lot of micro PLCs that were $250 ten years ago are now $400. Right? That's a lot of inflation, right, for various reasons. And so one of the things I did this summer is I took a look at my P and L, my profit and loss statements, and I just can't hold my prices where they are and be profitable. Right? So if I'm not breaking even, the company goes out of business, and we'll have no more episodes of the show. So how does this affect you? If you are a student over at the automation school, you have until mid September to do any upgrades or purchase any courses at the 2020 prices. Alright? So I don't wanna raise the prices. I've tried as long as I can, but at some point, you have to give in to what the prices are that your vendors are charging you, and you have to raise the prices. All my courses are buy once, own them forever, so this does not affect anybody who's enrolled in a course. Actually, all of you folks enrolled in my PLC courses are seeing updates every week now. And those who get the ultimate bundles are seeing new lessons added to the new courses, because you get that preorder access plus some additional stuff. In any case, again, I wanna reiterate: if you're a vendor who has an old balance, or if you are a student who wants to buy a new course, please make your plans in the next couple of weeks, because in mid September I do have to raise the prices. So I just wanna throw that PSA out there. I know a lot of people don't get to the end of the show; that's why I wanted to do it at the beginning. So with that said, let's jump right into this week's podcast and learn all about PROFINET. I wanna welcome to the show Tom from PROFIBUS and PROFINET North America. Tom, I really wanna just thank you for coming on the show.
I reached out to ask you to come on to talk to us about this topic. But before we jump in, could you first tell the audience a little bit about yourself? Tom Weingartner (PI): Yeah. Sure. Absolutely, Shawn. I'm gonna jump to the next slide then and let everyone know. As Shawn said, my name is Tom, Tom Weingartner, and I am the technical marketing director at PI North America. I have a fairly broad set of experiences ranging from ASIC hardware and software design, and then I've moved into things like avionic systems design. But it seemed like no matter what I was working on, it always centered around communication and control. That's actually how I got into industrial Ethernet, and I branched out from protocols like MIL-STD-1553 and ARINC 429 to other serial based protocols like PROFIBUS and MODBUS. And, of course, that naturally led to PROFINET and the other Ethernet based protocols. I also spent quite a few years developing time sensitive networking solutions. But now I focus specifically on PROFINET and its related technologies. And so with that, I will jump into the presentation here. Now that you know a little bit about me, let me tell you a little bit about our organization. We are PROFIBUS and PROFINET International, or PI for short. We are the global organization that created PROFIBUS and PROFINET, and we continue to maintain and promote these open communication standards. The organization started back in 1989 with PROFIBUS, followed by PROFINET in the early two thousands. Next came IO-Link, a communication technology for the last meter, and that was followed by omlox, a communication technology for wireless location tracking. And now, most recently, MTP, or Module Type Package. This is a communication technology for easier, more flexible integration of process automation equipment.
Now we have grown worldwide to 24 regional PI associations, 57 competence centers, eight test labs, and 31 training centers. It's important to remember that we are a global organization because if you're a global manufacturer, chances are there's PROFINET support in the country in which you're located, and you can get that support in the country's native language. In the lower right part of the slide here, we are showing our technologies under the PI umbrella. And I really wanted to point out that all the technologies within the PI umbrella are supported by a set of working groups. These working groups are made up of participants from member companies, and they are the ones that actually create and update the various standards and specifications. Also, any of these working groups are open to any member company. PI North America is one of the 24 regional PI associations, and we were founded in 1994. We are a nonprofit member supported organization where we think globally and act locally. So here in North America, we are supported by our local competence centers, training centers, and test labs. Competence centers provide technical support for things like protocol, interoperability, and installation type questions. Training centers provide educational services for things like training courses and hands on lab work. And test labs are, well, just that: labs that provide testing services and device certification. Any member company can be any combination of these three. You can see here, if you're looking at the slide, that the PROFI Interface Center is all three, while JCOM Automation is both a competence center and a training center. And here in North America, we are pleased to have HMS as a training center and Phoenix Contact also as a competence center. Now one thing you should be aware of is that every PROFINET device must be certified.
So if you make a PROFINET device, you need to go to a test lab to get it certified. And here in North America, you certify devices at the PROFI Interface Center. So I think it's important to begin our discussion today by talking about the impact digital transformation has had on factory networks. There has been an explosion of devices in manufacturing facilities, and it's not uncommon for car manufacturers to have over 50,000 Ethernet nodes in just one of their factories. Large production cells can have over a thousand Ethernet nodes in them. But the point is that all of these nodes increase the amount of traffic automation devices must handle. It's not unrealistic for a device to have to deal with over 2,000 messages while it's operating and trying to do its job. And emerging technologies like automated guided vehicles add a level of dynamics to the network architecture because they're constantly entering and leaving various production cells located in different areas of the factory. And, of course, as these factories become more and more flexible, networks must support adding and removing devices while the factory is operating. And so in response to this digital transformation, we have gone from rigid hierarchical systems using field buses to industrial Ethernet based networks where any device can be connected to any other device. This means devices at the field level can be connected to devices at the process control level, the production level, even the operations level and above. But this doesn't mean that the requirements for determinism, redundancy, safety, and security are any less on a converged network. It means you need to have a network technology that supports these requirements, and this is where PROFINET comes in. So to understand PROFINET, I think it's instructive here to start with the OSI model since the OSI model defines networking. And, of course, PROFINET is a networking technology.
The OSI model is divided into seven layers as I'm sure we are all familiar with by now, starting with the physical layer. This is where we get access to the wire, turning electrical signals into bits. Layer two is the data link layer, and this is where we turn bits into bytes that make up an Ethernet frame. Layer three is the network layer, and this is where we turn Ethernet frames into IP packets. So I like to think about Ethernet frames being switched around a local area network, and IP packets being routed around a wide area network like the Internet. The next layer up is the transport layer, and this is where we turn IP packets into TCP or UDP datagrams. These datagrams are used based on the type of connection needed to route IP packets. TCP datagrams are connection based, and UDP datagrams are connectionless. But, really, regardless of the type of connection, we typically go straight up to layer seven, the application layer. And this is where PROFINET lives, along with all the other Ethernet based protocols you may be familiar with, like HTTP, FTP, SNMP, and so on. So then what exactly is PROFINET, and what challenges is it trying to overcome? The most obvious challenge is environmental. We need to operate in a wide range of harsh environments, and, obviously, we need to be deterministic, meaning we need to guarantee data delivery. But we have to do this in the presence of IT traffic or non real time applications like web servers. We also can't operate in a vacuum. We need to operate in a local area network and support getting data to wide area networks and up into the cloud. And so to overcome these challenges, PROFINET uses communication channels for speed and determinism. It uses standard unmodified Ethernet, so multiple protocols can coexist on the same wire. We didn't have this with field buses. Right? It was one protocol, one wire.
But most importantly, PROFINET is an OT protocol running at the application layer so that it can maintain real time data exchange, provide alarms and diagnostics to keep automation equipment running, and support topologies for reliable communication. So we can think of PROFINET as separating traffic into a real time channel and a non real time channel. Messages with a particular EtherType (it's actually 0x8892, though the number doesn't matter here) go into the real time channel; that's where all PROFINET messages with that EtherType go. Any other EtherType goes into the non real time channel. So we use the non real time channel for acyclic data exchange, and we use the real time channel for cyclic data exchange. Cyclic data exchange with synchronization we classify as time critical; without synchronization, it is classified as real time. But, really, the point here is that this is how we can use the same standard unmodified Ethernet for PROFINET as we can for any other IT protocol. All messages living together, coexisting on the same wire. So we take this a step further here and look at the real time channel and the non real time channel, and these are combined together into a concept that we call an application relation. Think of an application relation as a network connection for doing both acyclic and cyclic data exchange, and we do this between controllers and devices. This network connection consists of three different types of information to be exchanged, and we call these types of information communication relations. On the lower left part of the slide, you can see that we have something called a record data communication relation, and it's essentially the non real time channel for acyclic data exchange to pass information like configuration, security, and diagnostics.
The IO data communication relation is part of the real time channel for doing the cyclic data exchange we need to periodically update controller and device IO data. And finally, we have the alarm communication relation. This is also part of the real time channel; it's used for alerting the controller to device faults as soon as they occur or when they get resolved. Now on the right part of the slide we can see some use cases for application relations: a single application relation for controller to device communication, with an optional application relation for doing dynamic reconfiguration. We also use an application relation for something we call shared device, and, of course, the reason we are here today talking about application relations is actually system redundancy. We'll get into these use cases in more detail in a moment. But first, I wanted to point out that when we talk about messages being non real time, real time, or time critical, what we're really doing is specifying a level of network performance. Non real time performance has cycle times above one hundred milliseconds, but we also use this term to indicate that a message may have no cycle time at all; in other words, acyclic data exchange. Real time performance has cycle times in the one to ten millisecond range, but really that range can extend up to one hundred milliseconds. Time critical performance has cycle times less than a millisecond, and it's not uncommon to have cycle times around two hundred and fifty microseconds or less. Most applications are either real time or non real time, while high performance applications are considered time critical. These applications use time synchronization to guarantee data arrives exactly when needed, but we also must ensure that the network is open to any Ethernet traffic.
In order to achieve time critical performance for the most demanding applications, like high speed motion control, we added four features to basic PROFINET, and we call this PROFINET Isochronous Real Time, or PROFINET IRT. These added features are synchronization, node arrival time, scheduling, and time critical domains. Now IRT has been around since 2004, but in the future, PROFINET will move to a new set of IEEE Ethernet standards called Time-Sensitive Networking, or TSN. PROFINET over TSN will actually have the same functionality and performance as PROFINET IRT, but it will be able to scale to faster and faster networks as bandwidth increases. So this chart shows the differences between PROFINET RT, IRT, and TSN. The main difference is, obviously, synchronization and these other features that guarantee data arrives exactly when needed. Notice under the PROFINET IRT column that the bandwidth for PROFINET IRT is 100 megabits per second, while the bandwidth for PROFINET RT and TSN is scalable. Also, for those device manufacturers out there looking to add PROFINET IRT to their products, there are lots of ASICs and other solutions available in the market with IRT capability. Alright. So let's take a minute here to summarize all of this. We have a single infrastructure for doing real time data exchange along with non real time information exchange. PROFINET uses the same infrastructure as any Ethernet network. Machines that speak PROFINET do so using network connections called application relations, and these messages coexist with all other messages so information can pass from devices to machines, to factories, to the cloud, and back. And so if you take away nothing else from this podcast today, it is the word coexistence. PROFINET coexists with all other protocols on the wire.
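Tom's description of the real time versus non real time channels boils down to a simple dispatch on the Ethernet frame's EtherType. The sketch below is a toy illustration of that idea in Python, not real PROFINET stack code; the 0x8892 constant is the PROFINET EtherType he mentions, and the function and frame layout are invented for the example.

```python
# Toy sketch (not a PROFINET stack): route frames into the real-time or
# non-real-time channel by EtherType, as described in the transcript.
# 0x8892 is the PROFINET real-time EtherType; anything else (e.g. 0x0800
# for IPv4) falls into the non-real-time channel.

PROFINET_RT_ETHERTYPE = 0x8892

def classify_frame(frame: bytes) -> str:
    """Return 'real-time' or 'non-real-time' for an Ethernet II frame."""
    if len(frame) < 14:
        raise ValueError("frame shorter than an Ethernet header")
    # Bytes 12-13 of an Ethernet II header hold the EtherType, big-endian.
    ethertype = int.from_bytes(frame[12:14], "big")
    return "real-time" if ethertype == PROFINET_RT_ETHERTYPE else "non-real-time"

# Minimal frames: 6-byte dst MAC + 6-byte src MAC + 2-byte EtherType.
dst, src = bytes(6), bytes(6)
rt_frame = dst + src + (0x8892).to_bytes(2, "big")
ip_frame = dst + src + (0x0800).to_bytes(2, "big")
print(classify_frame(rt_frame))   # real-time
print(classify_frame(ip_frame))   # non-real-time
```

This is the "coexistence" point in miniature: both frames travel the same wire, and only the EtherType decides which channel handles them.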
So let’s start talking a little bit here about the main topic, system redundancy and and and why we got into talking about PROFINET at all. Right? I mean, what why do we need system redundancy and things like like, application relations and dynamic reconfiguration? Well, it’s because one of the things we’re pretty proud of with PROFINET is not only the depth of its capabilities, but also the breadth of its capabilities. And with the lines blurring between what’s factory automation, what’s process automation, and what’s motion control, we are seeing all three types of automation appearing in a single installation. So we wanna make sure PROFINET meets requirements across the entire range of industrial automation. So let’s start out here by looking at the differences between process automation versus factory automation, and then we’ll get into the details. First off, process signals typically change slower on the order of hundreds of milliseconds versus tens of milliseconds in factory automation. And process signals often need to travel longer distances and potentially into hazardous or explosive areas. Now with process plants operating twenty four seven, three sixty five, system must systems must provide high availability and support changes while the plant is in production. This is where system redundancy and dynamic reconfiguration come in. We’ll discuss these again here in in just a minute. I just wanted to finish off this slide with saying that an estop is usually not possible because while you can turn off the automation, that’s not necessarily gonna stop the chemical reaction or whatever from proceeding. Sensors and actuators and process automation are also more complex. Typically, we call them field instruments. And process plants have many, many, many more IO, tens of thousands of IO, usually controlled by a DCS. And so when we talk about system redundancy, I actually like to call it scalable system redundancy because it isn’t just one thing. 
This is where we add components to the network for increasing the level of system availability. So there are four possibilities, s one, s two, and r one, r two. The letter indicates if there are single or redundant network access points, and the number indicates how many application relations are supported by each network access point. So think of the network access point as a physical interface to the network. And from our earlier discussion, think of an application relation as a network connection between a controller and a device. So you have s one has, single network access points. Right? So each device has single network access points with one application relation connected to one controller. S two is where we also have single network access points, but with two application relations now connected to different controllers. R one is where we have redundant network access points, but each one of these redundant network access points only has one application relation, but those are connected to different controllers. And finally, we could kinda go over the top here with r two, and and here’s where we have redundant network access points with two application relations connected to different controllers. Shawn Tierney (Host): You know, I wanna just stop here and talk about s two. And for the people who are listening, which I know is about a quarter of you guys out there, think of s two is you have a primary controller and a secondary controller. If you’re seeing the screen, you can see I’m reading the the slide. But you have your two primary and secondary controllers. Right? So you have one of each, and, primary controller has the, application one, and secondary has application resource number two. And each device that’s connected on the Ethernet has both the one and two. So you went maybe you have a rack of IO out there. It needs to talk to both the primary controller and the secondary controller. 
And so to me, that is kinda like your classic redundant PLC system where you have two PLCs and you have a bunch of IO, and each piece of IO has to talk to both the primary and the secondary. So if the primary goes down, the secondary can take over. And so I think that’s why there’s so much interest in s two because that kinda is that that that classic example. Now, Tom, let me turn it back to you. Would you say I’m right on that? Or Tom Weingartner (PI): Spot on. I mean, I think it’s great, and and and really kinda emphasizing the point that there’s that one physical connection on the network access point, but now we have two connections in that physical, access point there. Right? So so you can then have one of those connections go to the primary controller and the other one to the secondary controller. And in case one of those controllers fails, the device still can get the information it needs. So, yep, that that’s how we do that. And and, just a little bit finer point on r one, if you think about it, it’s s two, but now all we’ve done is we’ve split the physical interface. So one of the physical interfaces has has, one of the connections, and the other physical interface has a has the other connection. So you really kinda have, the same level of redundant functionality here, backup functionality with the secondary controller, but here you’re using, multiple physical interfaces. Shawn Tierney (Host): Now let me ask you about that. So as I look at our one, right, it seems like they connect to port let’s I’ll just call it port one on each device to switch number one, which in this case would be the green switch, and port number two of each device to the switch number two, which is the blue switch. Would that be typical to have separate switches, one a different switch for each port? Tom Weingartner (PI): It it it doesn’t have to. Right? I I I think we chose to show it like this for simplicity kinda to Shawn Tierney (Host): Oh, I don’t care. 
Tom Weingartner (PI): Emphasize the point that, okay, here's the second port going to the secondary controller, and here's the first port going to the primary controller. And we just wanted to emphasize that point, because sometimes these diagrams can be a bit confusing. And you Shawn Tierney (Host): may have an application that doesn't require redundant switches, depending on maybe the MTBF of the switch itself or your failure mode on your IO. Okay. I'm with you. Go ahead. Tom Weingartner (PI): Yep. Yep. Good. Alright. So, I think that's some excellent detail on that. And so, if you wouldn't mind, or if you don't have any other questions, let's move on to the next slide. So you can see in that previous slide how system redundancy supports high availability by increasing system availability using these network access points and application relations. But we can also support high availability by using network redundancy. And the way PROFINET supports network redundancy is through the use of ring topologies, and we call this media redundancy. The reason we use rings is because if a cable breaks, or the physical connection somehow breaks, or even a device fails, the network can revert back to a line topology, keeping the system operational. However, supporting network redundancy with rings means we can't use protocols typically used in IT networks like STP and RSTP. And this is because STP and RSTP actually prevent network redundancy by blocking redundant paths in order to keep frames from circulating forever in the network. And so in order for PROFINET to support rings, we need a way to prevent frames from circulating forever in the network. And to do this, we use a protocol called the media redundancy protocol, or MRP. MRP uses one media redundancy manager for each ring, and the rest of the devices are called media redundancy clients.
Managers are typically controllers or PROFINET switches, and clients are typically the devices in the network. So the way it works is this. A manager periodically sends test frames around the network here to check the integrity of the ring. If the manager doesn't get the test frame back, there's a failure somewhere in the ring. And so the manager then notifies the clients about this failure, and then the manager sets the network to operate as a line topology until the failure is repaired. Right? And so that's how we can get network redundancy with our media redundancy protocol. Alright. So now you can see how system redundancy and media redundancy both support high availability. System redundancy does this by increasing system availability, while media redundancy does this by increasing network availability. Obviously, you can use one without the other, but by combining system redundancy and media redundancy, we can increase the overall system reliability. For example, here we are showing different topologies for S1 and S2, and these are similar to the topologies that were on the previous slide. So, if you notice here, for S1, we can only have media redundancy, because there isn't a secondary controller to provide system redundancy. S2 is where we combine system redundancy and media redundancy by adding an MRP ring. But I wanted to point out here that even though we're showing this MRP ring as a possible topology, there really are other topologies possible. It really depends on the level of system reliability you're trying to achieve. And so, likewise, on this next slide here, we are showing two topologies for adding media redundancy to R1 and R2. And so for R1, we've chosen, again, probably for simplicity's sake, to add an MRP ring for each redundant network access point. For R2, we do the same thing here.
We also have an MRP ring for each redundant network access point, but we also add a third MRP ring for the controllers. Now this is really just to emphasize the point that you can come up with just about any topology possible, because it really depends on the number of ports on each device, the number of switches in the network, and, again, your overall system reliability requirements. So in order to keep process plants operating 24/7/365, dynamic reconfiguration is another use case for application relations. And so this is where we can add or remove devices on the fly while the plant is in production. Because if you think about it, typically, when there is a new configuration for the PLC, the PLC first has to go into stop mode. It needs to then receive the configuration, and then it can go back into run mode. Well, this doesn't work in process automation because we're trying to operate 24/7/365. So with dynamic reconfiguration, the controller continues operating with its current application relation while it sets up a new application relation. Right? I mean, again, it's really trying to get a new network connection established. So then the controller switches over to the new application relation after the new configuration is validated. Once we have this validation and the configuration's good, the controller removes the old application relations and continues operating, all while staying in run mode. Pretty handy stuff here for supporting high availability. Now one last topic regarding system redundancy and dynamic reconfiguration, because these two PROFINET capabilities are compatible with a new technology called single pair Ethernet, and this provides power and data over just two wires. This version of Ethernet is now part of the IEEE 802.3 standard, referred to as 10BASE-T1L.
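The add-and-switch sequence Tom described a moment ago amounts to make-before-break on application relations: establish the new one, validate, swap, then remove the old one, never leaving run mode. Here is a minimal, purely illustrative sketch of that sequence; the class and method names are invented for this example and are not a PROFINET API.

```python
# Illustrative make-before-break sketch of dynamic reconfiguration.
# "AR" stands for application relation; names here are hypothetical.

class Controller:
    def __init__(self, ar):
        self.active_ar = ar        # application relation currently in use
        self.mode = "RUN"          # never leaves RUN during reconfiguration

    def reconfigure(self, new_ar, validate):
        # 1. Keep operating on the old AR while the new one is established.
        pending = new_ar
        # 2. Switch over only after the new configuration validates.
        if validate(pending):
            old = self.active_ar
            self.active_ar = pending           # 3. swap to the new AR
            return f"removed {old}, now using {pending}; mode={self.mode}"
        # Validation failed: the old AR stays in service untouched.
        return f"kept {self.active_ar}; mode={self.mode}"

c = Controller("AR-v1")
print(c.reconfigure("AR-v2", validate=lambda ar: True))
# removed AR-v1, now using AR-v2; mode=RUN
```

The key design point mirrored here is ordering: the old connection is only torn down after the replacement is proven good, which is what lets the plant keep running through the change.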
So 10BASE-T1L is the non-intrinsically-safe version of two-wire Ethernet. To support intrinsic safety, 10BASE-T1L was enhanced by an additional standard called Ethernet APL, or advanced physical layer. So when we combine PROFINET with this Ethernet APL version of 10BASE-T1L, we simply call it PROFINET over APL. It not only provides power and data over the same two wires, but also supports long cable runs up to a kilometer, 10 megabit per second communication speeds, and can be used in all hazardous areas. So intrinsic safety is achieved by ensuring both the Ethernet signals and power on the wire are within explosion-safe levels. And even with all this, system redundancy and dynamic reconfiguration work seamlessly with this new technology we call PROFINET over APL. Now one thing I'd like to close with here is a final thought regarding a new technology I think everyone should become aware of. I mean, it's emerging in the market, it's quite new, and it's a technology called MTP, or module type package. And so this is a technology being applied first to use cases considered to be a hybrid of both process automation and factory automation. So what MTP does is it applies OPC UA information models to create standardized, non-proprietary, application-level descriptions for automation equipment. And what these descriptions do is simplify the communication between equipment and the control system, and it does this by modularizing the process into more manageable pieces. So really, the point is to construct a factory with modular equipment to simplify integration and allow for better flexibility should changes be required. Now with the help of the process orchestration layer and this OPC UA connectivity, MTP-enabled equipment can plug and operate, reducing the time to commission a process or make changes to that process. This is pretty cutting edge stuff.
I think you’re gonna find and hear a lot more about NTP in the near future. Alright. So it’s time to wrap things up with a summary of all the resources you can use to learn even more about PROFINET. One of the things you can do here is you can get access to the PROFINET one day training class slide deck by going to profinet2025.com, entering your email, and downloading the slides in PDF format. And what’s really handy is that all of the links in the PDF are live, so information is just a click away. We also have our website, us.profinet.com. It has white papers, application stories, webinars, and documentation, including access to all of the standards and specifications. This is truly your one stop shop for locating everything about PROFINET. Now we do our PROFINET one day training classes and IO link workshops all over The US and parts of Canada. So if you are interested in attending one of these, you can always find the next city we are going to by clicking on the training links at the bottom of the slide. Shawn Tierney (Host): Hey, guys. Shawn here. I just wanted to jump in for a minute for the audio audience to give you that website. It’s us.profinet.com/0dtc or oscardeltatangocharlie. So that’s the website. And I also went and pulled up the website, which if you’re watching, you can see here. But for those listening, these one day PROFINET courses are coming to Phoenix, Arizona, August 26, Minneapolis, Minnesota, September 10, Newark and New York City, September 25, Greenville, South Carolina, October 7, Detroit, Michigan, October 23, Portland, Oregon, November 4, and Houston, Texas, November 18. So with that said, let’s jump back into the show. Tom Weingartner (PI): Alan, one of our most popular resources is Profinet University. This website structures information into little courses, and you can proceed through them at your own pace. You can go lesson by lesson, or you can jump around. You can even decide which course to take based on a difficulty tag. 
Definitely make sure to check out this resource. We do have lots of great webinars on the website, and they're archived there. Now some of these webinars rehash what we covered today, but in other cases, they expand on what we covered today. But in either case, make sure you share these webinars with your colleagues, especially if they're interested in any one of the topics that we have listed on the slide. And finally, the certified network engineer course is the next logical step if you would like to dive deeper into the technical details of PROFINET. It is a week long, in Johnson City, Tennessee, and it features hands-on lab work. And if you would like us to provide training to eight or more students, we can even come to your site. If you would like more details about any of this, please head to the website to learn more. And with that, Shawn, I think that is my last slide, and we've covered the topics that I think we wanted to cover today. Shawn Tierney (Host): Yeah. And I just wanna point out to you guys that this training goes all around The US. I definitely recommend getting out there. If you're using PROFINET and you wanna get some training, they usually fill the room, like, you know, 50 to a 100 people. And they do this every year. So check those dates out. If you need to get some hands-on with PROFINET, I would definitely check those out. And, of course, we'll have all the links in the description. I also wanna thank Tom for that slide really defining S1 versus S2 versus R1 and R2. You know, a lot of people say we have S2 compatibility. As a matter of fact, we're gonna be looking at some products that have S2 compatibility here in the future. And, you know, just trying to understand what that means. Right? You know, when somebody just says S2, it's like, what does that mean?
For you guys listening, I thought that slide really lays it out, kinda gives you, like, alright, this is what it means. And so, from my perspective, that's you're supporting redundant controllers. Right? And so if you have an S2 setup of redundant controllers, or CPUs, then that product will support that. And that's important. Right? Because if you had a product that didn't support it, it's not gonna work with your application. And the Ethernet APL is such a big deal in process because, you know, the distance, right, and the fact that it's intrinsically safe and supports all those zones and areas and whatnot. And everybody, all the instrumentation people, are all over it. Right? The Rosemounts, the Fishers, the Endress+Hausers, everybody is on that working group. We've covered that on the news show many times, and it's just very interesting to see where that goes, but I think it's gonna take over that part of the industry. So, Tom, was there anything else you wanted to cover in today's show? Tom Weingartner (PI): No. I think that really puts a fine finale on this here. I did want to maybe emphasize that point about network redundancy being compatible with system redundancy. So, you know, you can really hone in on what your system reliability requirements are. And also with this PROFINET over APL piece of it, it's completely compatible with PROFINET in and of itself. And, also, you don't have to worry about it not supporting system redundancy or anything of the like, whether, you know, you wanted to get even redundant devices out there. So, I think that's about it. Shawn Tierney (Host): Alright. Well, again, thank you so much for coming on.
We look forward to trying out some of these S2 PROFINET devices in the near future. But with that, I really wanted to have you on first to kinda lay the groundwork for us, and I really appreciate it. Tom Weingartner (PI): No problem. Thank you for having me. Shawn Tierney (Host): Well, I hope you guys enjoyed that episode. I did. I enjoyed sitting down with Tom, getting up to date on all those different products, and it's great to know they have all these free hands-on training days coming across the United States. And, you know, what a great refresher from the original 2020 presentation that we had somebody from Siemens do. So I really appreciate Tom coming on. And speaking of Siemens, I'm so thankful they sponsored this episode so we could release it ad free and make the video free to everybody. Please, if you see Siemens or any of the vendors who sponsor our episodes, please tell them thank you from us. It really helps us keep the show going. Speaking of keeping the show going, just a reminder: if you're a student or a vendor, price increases will hit mid September. So if you're a student and you wanna buy another course, now is the time to do it. If you're a vendor and you have an existing balance, you will want to schedule those podcasts before mid September, or else you'll be subject to the price increase. With that said, I also wanna remind you I have a new podcast, Automation Tech Talk. I'm reusing the old Automation News Headlines podcast feed. So if you already subscribed to that, you're just gonna get the new show for free. It's also on The Automation Blog, on YouTube, on LinkedIn. So I'm doing it as a live stream every lunchtime, just talking about what I learned in that last week, you know, little tidbits here and there. And I wanna hear from you guys too. As a matter of fact, I already had Giovanni come on and do an interview with me. So at one point, I'll schedule that as a lunchtime podcast for Automation Tech Talk.
Again, it still shows up as Automation News Headlines, I think. So at some point, I'll have to find time to edit that to change the name. But in any case, with that, I think I've covered everything. I wanna thank you guys for tuning in. Really appreciate you. You're the best audience in the podcast world, or the video world, however you wanna look at it, but I really appreciate you all. Please feel free to send me emails, write to me, leave comments. I love to hear from you guys, and I just wanna wish you all good health and happiness. And until next time, my friends, peace. ✌️
If you enjoyed this content, please give it a Like, and consider sharing a link to it, as that is the best way for us to grow our audience, which in turn allows us to produce more content.

Kaatscast
OSI's Blue Hill Deal: 3,100 Acres of Forest and Stream Protected

Kaatscast

Play Episode Listen Later Aug 12, 2025 25:41


Adjacent to the Willowemoc Wild Forest, in the Sullivan Catskills, a 3,100-acre parcel once eyed for development is now safeguarded for future generations. In this episode, we chronicle the Open Space Institute's landmark deal, potentially the largest acquisition for the Catskills in nearly 25 years. From the quiet negotiations with landowners to the sweeping implications for climate resilience and watershed health, this episode dives deep into what makes Blue Hill so important to the region. Key highlights include:
• Behind-the-scenes details on how OSI's team identified and secured the property
• The role of Blue Hill in protecting coldwater streams critical to downstream communities
• A look back at Blue Hill's brushes with development, including a ski resort derailed by liquor restrictions
• What the public can expect in terms of access, trails, and community engagement
• Reflections on regional conservation wins and what they signal for the future
Hear from Tom Gravel, OSI's Northeast Project Manager, and Charlie Burgess, OSI's Northern NY Stewardship Manager, about OSI's strategic land acquisitions, and how they are advancing New York's commitment to conserve 30% of its lands and waters by 2030 under the state's 30x30 initiative.

Ones Ready
Ep 497: AFSW Pipeline Cheat Codes

Ones Ready

Play Episode Listen Later Aug 11, 2025 63:30


Send us a text
Forget the Instagram highlight reels—Peaches and Aaron are dropping the unfiltered, step-by-step hacks to survive the Air Force Special Warfare pipeline without becoming another quit statistic. This isn't "drink more water" fluff; it's the down-and-dirty, why-your-shins-are-dying, how-to-not-faceplant-on-the-IFT kind of episode. From running until you hate life, to fueling like a machine, to training until your friends think you've joined a cult, they break down exactly how to build durability, crush run times, and show up already dangerous. Plus, a SIG P320 scandal, why OTS Nashville is about to sell out, and how to tell if you're lying to yourself about being "ready."

Z pasją o mocnych stronach
#269 Słownik talentów – Strateg (Strategic) – drugi sezon

Z pasją o mocnych stronach

Play Episode Listen Later Aug 7, 2025 80:02


People described as Strategic create alternative paths of action. Faced with any situation, they can quickly recognize the relevant patterns and problems. I admire people who can easily spot the available options and just as easily commit to one of them, which ties into the ability to let go, into intuition, and into being comfortable with change. How this works in practice, what makes this talent mature, and when its dark sides show up is what I discuss with Bartek Kolasa, Izabela Marciniak, and Sonia Zieleniewska. Come get to know the Strategic talent!
Intentional newsletter: every week I send out a letter inviting you to a conversation and to asking yourself important questions.
Guests: Bartosz Kolasa works in engineering and software development. He implements artificial intelligence for use in recruitment and builds the Wrocław AI Team community. Top 5: Connectedness, Activator, Developer, Individualization, Strategic. Sonia Zieleniewska works in product marketing in the IT industry. She is passionate about AI, process design, optimization, and marketing storytelling. She is also a makeup artist and is interested in Human Design. Top 5: Activator, Ideation, Responsibility, Strategic, Woo. Instagram. Iza Marciniak works in supply chain management. Top 5: Strategic, Individualization, Achiever, Focus, Analytical.
Links
Summary: How do you see the Strategic talent in everyday work? Bartosz: One of the reports said that Strategic is a talent for optimization. I feel fulfilled when I choose […] The post #269 Słownik talentów – Strateg (Strategic) – drugi sezon appeared first on Near-Perfect Performance.

S.O.S. (Stories of Service) - Ordinary people who do extraordinary work
The DeRito Act and the Fight for Military Justice | Adam DeRito- S.O.S #211

S.O.S. (Stories of Service) - Ordinary people who do extraordinary work

Play Episode Listen Later Jul 28, 2025 106:59 Transcription Available


In this powerful and eye-opening conversation, decorated veteran and military justice reform advocate Adam DeRito takes us through his remarkable journey from Air Force Academy cadet to the frontlines of a battle few civilians understand: the fight against military retaliation.Adam's story begins with his post-9/11 commitment to service, arriving at the Air Force Academy with real-world experience as a firefighter and EMT. After becoming an OSI confidential informant reporting cadet misconduct, his life took a devastating turn when he experienced sexual assault off-campus—and faced dismissal rather than support from his command. What followed was a systematic campaign of retaliation culminating in falsified medical records dated after he'd already left the Academy, an illegal tactic designed to permanently block his military career.Despite these obstacles, Adam persevered through multiple administrative appeals, federal court battles, and political advocacy while continuing to serve in the National Guard and Army Reserves. His experiences led him to draft the Military Mental Health Protection and Justice Act (known informally as the "DeRito Act"), which would prevent commanders from weaponizing command-directed evaluations against service members who report misconduct.The conversation exposes critical gaps in military accountability where commanders operate with minimal oversight, creating a chilling effect that damages readiness and unit cohesion. Adam's documentation of his case—including medical records falsified by someone without proper licensing—reveals how military mental health evaluations can be weaponized to silence whistleblowers and assault survivors.For anyone concerned about veterans' rights, military readiness, or constitutional protections, this episode provides rare insight into how our military justice system actually operates and why reforms like the DeRito Act are desperately needed. 
Visit adamdorito.com to review the evidence and join the fight for accountability that affects thousands of service members.

Kwadrans na angielski
KNA: Lekcja 366 (7. urodziny podcastu)

Kwadrans na angielski

Play Episode Listen Later Jun 19, 2025 19:13


In lesson 366 we celebrate the podcast's 7th birthday. We share some statistics, discuss our favorite episodes, go over the results of the listener survey, talk about our successes and one flop, and share plans for the future.
-----Chapters--------
(0:09) - Intro
(0:42) - Statistics after 7 years
(5:08) - Favorite episodes
(7:20) - Listener survey
(10:27) - Achievements and a flop
(14:21) - Plans for the future
(18:24) - Outro
----------------------
If you appreciate my work on the podcast, become a Patron of KNA at https://patronite.pl/kwadrans. Not sure what Patronite is? Listen to the special episode: https://kwadransnaangielski.pl/wsparcie
Join our community at https://KwadransNaAngielski.pl
You can listen to the lessons on Spotify or watch them on YouTube.
All the new expressions from this lesson are available in written form at https://kwadransnaangielski.pl/366
#polskipodcast #kwadransnaangielski #angielski
----------------------
Mecenas-tier Patrons:
Joanna Kwiatkowska
Joanna
Jakub Wiśniewski - https://bezpiecznyvpn.pl
---------
Track: Happy Birthday to You Classical by SergeQuadrado -- https://freesound.org/s/541177/ -- License: Attribution NonCommercial 3.0

Inside The Line: The Catskills
Episode 174 - Tom from OSI, Blackhead Rescue, Black Bear Lodge plans and more

Inside The Line: The Catskills

Play Episode Listen Later Jun 6, 2025 142:53


Welcome to Episode 174 of Inside The Line: The Catskill Mountains Podcast! This week, we're joined by Tom Gravel, Senior Project Land Manager at the Open Space Institute, for an in-depth conversation about OSI's recent massive 3,100-acre acquisition on Blue Hill in the southern Catskills. We dig into what this means for conservation, recreation, and the future of public access in the region. We also take a look at the development plans for the former Black Bear Lodge, a bizarre case of hikers getting lost while tripping on mushrooms, and a recent rescue on Blackhead Mountain. Whether you're here for land conservation talk, trail safety, or the weird stories the mountains always seem to offer, this episode has something for you. Make sure to subscribe on your favorite platform, share the show, donate if you feel like it… or just keep tuning in. I'm just grateful you're here. And as always... VOLUNTEER!!!!
Links for the Podcast: https://linktr.ee/ISLCatskillsPodcast
Donate a coffee to support the show! https://www.buymeacoffee.com/ITLCatskills
Like to be a sponsor or monthly supporter of the show? Go here! - https://www.buymeacoffee.com/ITLCatskills/membership
Thanks to the sponsors of the show: Outdoor Chronicles Photography - https://www.outdoorchroniclesphotography.com/, Trailbound Project - https://www.trailboundproject.com/, Camp Catskill - https://campcatskill.co/, Another Summit - https://www.guardianrevival.org/programs/another-summit
Links: Open Space Institute, Trust for Public Land, DEC seeks public comments for forest growing
Volunteer Opportunities: Trailhead stewards for 3500 Club - https://www.catskill3500club.org/trailhead-stewardship, Catskills Trail Crew - https://www.nynjtc.org/trailcrew/catskills-trail-crew, NYNJTC Volunteering - https://www.nynjtc.org/catskills, Catskill Center - https://catskillcenter.org/, Catskill Mountain Club - https://catskillmountainclub.org/about-us/, Catskill Mountainkeeper - https://www.catskillmountainkeeper.org/, Bramley Mountain Fire Tower - https://bramleymountainfiretower.org/
Post Hike Brews and Bites - Sunshine Colony
#OSI #bluehill #catskillhistory #hikehudson #hikethehudson #hudsonvalleyhiking #NYC #history #husdonvalley #hikingNY #kaaterskill #bluehole #catskillhiking #visitcatskills #catskillstrails #catskillmountains #catskillspodcast #catskills #catskillpark #catskillshiker #catskillmountainsnewyork #hiking #catskill3500club #catskill3500 #hikethecatskills

Living The Next Chapter: Authors Share Their Journey
E544 - Derrick Jackson - Shadow One - Air Force Office of Special Investigations, world of criminal investigations and counterintelligence

Living The Next Chapter: Authors Share Their Journey

Play Episode Listen Later May 28, 2025 50:52


Episode 544 - Derrick Jackson - Shadow One - Air Force Office of Special Investigations, world of criminal investigations and counterintelligence
About the author: Derrick Jackson joined the U.S. Air Force and served as a jet engine specialist on the F-15 Eagle, C-5 Galaxy, C-141 Starlifter and C-17 Globemaster. After 10 years of service, he was recruited to become a Special Agent with the Air Force Office of Special Investigations. His first assignments were as a criminal investigator at Tyndall AFB and Osan Air Base, Republic of Korea. He then volunteered to join OSI's Special Missions Branch at Hurlburt Field, FL to provide counterintelligence services for Air Force Special Operations Command missions worldwide. After a brief stint at Bolling AFB, DC with the Protective Service Detachment, providing security for foreign dignitaries, Agent Jackson became the Chief of the Economic Crimes Branch at Joint Base Andrews. In 2014, Special Agent Jackson retired from the Air Force after 21 years of service.
Book: Shadow One - Torn between the love of his life and his career, Air Force Staff Sergeant Devin Jackson is recruited to become a Special Agent with the Office of Special Investigations. When the Agents uncover an international human trafficking and drug smuggling ring, the crime syndicate decides to strike back; and soon the hunters become the prey. Once the pressure mounts, the team begins to crack and questions if one of their own has betrayed them.
As Devin struggles to find balance between the disturbing reality of trafficking and his personal life, disaster strikes, and he fails to protect the person closest to him. Depression, self-doubt, and grief overcome him until an old friend arrives back on the scene and provides the healing he needs to seek revenge and bring the criminals to justice.
https://a.co/d/hhTERZ2
Support the show
___
https://livingthenextchapter.com/podcast
produced by: https://truemediasolutions.ca/
Coffee Refills are always appreciated, refill Dave's cup here, and thanks!
https://buymeacoffee.com/truemediaca

The Cryptonaut Podcast
#390: Jailbreak Area51! Part 2: Enter The Quataloid

The Cryptonaut Podcast

Play Episode Listen Later May 19, 2025 70:26


While on a camping trip in an abandoned mining town just outside the Nellis Air Force Range, a husband and father of two was killed by a bizarre, bug-like being who had escaped from the S-2 Annex of the notorious Area-51. Following the escape, a pair of OSI officers were tasked with finding out how the mantis-like monstrosity had managed to get out and, more pressing still, what had become of it. The Cryptonaut Podcast Patreon:https://www.patreon.com/cryptonautpodcast  The Cryptonaut Podcast Merch Stores:Hellorspace.com - Cryptonautmerch.com  Stay Connected with the Cryptonaut Podcast: Website - Instagram - TikTok - YouTube- Twitter - Facebook 

The Birth Trauma Mama Podcast
Ep. 166: 4th Degree Tear, Rectovaginal Fistula, & Ongoing Recovery feat. Scarlett

The Birth Trauma Mama Podcast

Play Episode Listen Later May 16, 2025 53:31


In this week's Listener Series episode of The Birth Trauma Mama Podcast, Scarlett bravely shares her story: a layered, still-unfolding journey through birth trauma, postpartum hemorrhage, and complex pelvic floor injuries that continue to impact her life more than five years later. She speaks candidly about the realities of:

Ones Ready
Ep 470: From AF Security Forces to FBI - Endex Archery's Jay Joins Us!

Ones Ready

Play Episode Listen Later May 12, 2025 56:49


Send us a text
In this epic Ones Ready episode, we sit down with Jay from Endex Archery, whose resume reads like a military fever dream: SERE drop, Security Forces, Combat Arms, OSI Agent, tier-one special mission unit… and then casually slides into the FBI. This man literally said "nah" to survival school and then sprinted straight into a career that landed him fighting terror, investigating spies, and now helping vets heal through archery. Jay opens up about losing six teammates to a VBIED in Afghanistan, how that tragedy turned into his life's mission, and why a bow and arrow saved his soul. We talk survivor's guilt, FBI hostage rescue, sneaky pull-up bars, and why he's built an organization to help other veterans shoot their way back into mental clarity. Also: hilarious stories about SERE instructors, Air Force recruiter lies, and the absolute dumpster fire that is military admin. If you've ever thought you couldn't pivot, couldn't overcome, or couldn't create something powerful out of pain—this episode is your proof otherwise.

Ones Ready
AFSPECWAR Q&A Live Stream - Late Night Love with Aaron

Ones Ready

Play Episode Listen Later May 11, 2025 63:27


Send us a text
Just a quick hitter answering all your AFSPECWAR questions. In this conversation, Aaron discusses various topics related to the Air Force, including the current state of the Air Force amidst budget cuts, the enduring demand for special operations forces, and the importance of military brotherhood. He reflects on a recent retirement event, shares thoughts on U.S. foreign policy obligations, and expresses gratitude for the community's support. The conversation also touches on future training initiatives, the special warfare pipeline, and interactions with the OSI, emphasizing the importance of collaboration and community engagement. Aaron also discusses various aspects of military life, including the importance of engaging with local communities during deployments, the differences between enlisted personnel and officers, and the evolving nature of drone warfare. He shares insights on physical preparation, experiences with unidentified aerial phenomena, and the advancements in military technology. Additionally, he addresses changes in training pipelines and the considerations for age when enlisting in the military.
Takeaways:
• The Air Force is always evolving and adapting to changes.
• Special operations jobs will always be in demand due to their critical roles.
• Military brotherhood is a cherished aspect of service that many miss after retirement.
• The U.S. is not obligated to intervene in every foreign conflict.
• Community engagement and gratitude are vital for morale and support.
• Future training initiatives are being planned to better prepare candidates.
• Understanding the special warfare pipeline is crucial for aspiring candidates.
• Collaboration with OSI enhances mission effectiveness and safety.
• Everyone has a role to play in the military, regardless of their specific job.
• Open communication with the community is essential for growth and improvement.
Engaging with local communities is crucial during deployments.There are significant differences between enlisted personnel and officers.The Air Force leads in drone warfare tactics and strategies.Physical preparation is essential for military readiness.Unidentified aerial phenomena can be perplexing and concerniSupport the showJoin this channel to get access to perks: HEREBuzzsprout Subscription page: HERECollabs:Ones Ready - OnesReady.com 18A Fitness - Promo Code: 1Ready ATACLete - Follow the URL (no promo code): ATACLeteCardoMax - Promo Code: ONESREADYDanger Close Apparel - Promo Code: ONESREADYDFND Apparel - Promo Code: ONESREADYHoist - Promo Code: ONESREADYKill Cliff - Pro...

The Flip
Why the NFL is Bringing American Football to Africa

The Flip

Play Episode Listen Later Apr 23, 2025 6:37


There are 1,696 active players in the NFL. Just 138 are African. But if it were up to Osi Umenyiora, 11-year veteran and 2-time Super Bowl Champion, there would be many more. Osi is the Founder of The Uprise, the NFL's lead in Africa, and he's pioneering American football on the African continent. At the NFL's camp in Lagos, Nigeria, young athletes are vying for a shot to join the NFL Academy in London or to go straight to the League through the International Player Pathway Program. But many of them have never played American football before. So why is the NFL hosting camps in Africa? Is there really any shot of these players making it to the NFL?

00:00 - The NFL is in Lagos, Nigeria
00:30 - Osi Umenyiora is bringing football to Africa
02:05 - The NFL wants the best talent in the world
03:55 - Creating opportunities for African talent

Our Links -

The Protectors
534 | Derrick Jackson | BOOK DISCUSSION: "Shadow One"

The Protectors

Play Episode Listen Later Apr 4, 2025 21:17 Transcription Available


Send us a text

Derek Jackson shares his journey from Air Force jet engine mechanic to OSI special agent, revealing the intense personal and professional challenges of federal law enforcement, and how these experiences inspired his crime thriller novel "Shadow One."

• Starting as an enlisted jet engine mechanic before getting recruited to OSI
• Discussing the reality of FLETC (Federal Law Enforcement Training Center) and the demanding training process
• Transitioning from mechanic to criminal investigator and the mental shift required
• Balancing investigative work with personal life and relationships
• Using music to inspire creative writing and developing characters
• Exploring how traumatic cases affect agents psychologically
• Turning real-life experiences into a crime thriller novel
• Finding the courage to follow your own path despite others' doubts

Find Derek Jackson's book "Shadow One" on Amazon, Barnes & Noble, Books A Million, and Walmart online.

Support the show
Make sure to check out Jason on IG @drjasonpiccolo

WP Tavern
#161 – Robert Jacobi on WordPress, Security, and the OSI Model

WP Tavern

Play Episode Listen Later Mar 18, 2025 43:44


On the podcast today we have Robert Jacobi and he's here to talk about his tech journey, and his role at Black Wall, formerly BotGuard. We talk about the OSI model, explaining how computer networks communicate through seven layers, from application to physical. Robert shares insights into Black Wall's focus on preventing bot attacks at a layer far from the website, mitigating risks before they hit the hosting company. There's also a brief discussion of WordPress plugins and the complexity of online security, with a nod to the hope of increasing listeners' understanding of these intricate processes. If you've ever wondered about the unseen layers of internet security and infrastructure, or the strategic moves involved in rebranding a tech company, this episode is for you.
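For listeners who want a quick reference for the seven layers the episode walks through, here is a minimal illustrative sketch (not from the episode; the example protocols are common textbook placements, and TLS in particular is often argued about):

```python
# The seven OSI layers, top (application) to bottom (physical),
# with a few example protocols at each layer for orientation.
OSI_LAYERS = [
    (7, "Application",  ["HTTP", "DNS", "SMTP"]),
    (6, "Presentation", ["TLS", "JPEG"]),
    (5, "Session",      ["RPC", "NetBIOS"]),
    (4, "Transport",    ["TCP", "UDP"]),
    (3, "Network",      ["IP", "ICMP"]),
    (2, "Data Link",    ["Ethernet", "ARP"]),
    (1, "Physical",     ["10BASE-T", "DSL"]),
]

def layer_of(protocol: str) -> int:
    """Return the OSI layer number for a known example protocol."""
    for number, _name, protocols in OSI_LAYERS:
        if protocol in protocols:
            return number
    raise KeyError(protocol)

print(layer_of("TCP"))  # 4
```

Black Wall's approach, as described above, works well below layer 7: stopping a bot before its traffic ever reaches the application layer of the hosting company's stack.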

Packet Pushers - Full Podcast Feed
N4N017: Routing Fundamentals

Packet Pushers - Full Podcast Feed

Play Episode Listen Later Mar 13, 2025 49:48


On today's N Is For Networking, we explore the fundamentals of routing, focusing on layer 3 of the OSI model. We explain the concepts of routers, routing tables, and routing protocols, and discuss why it’s important to have a firm grasp of these concepts before you tackle advanced topics such as VXLAN and EVPN.
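The core of the routing-table lookup discussed in the episode is longest-prefix matching; a minimal sketch (illustrative only, using made-up addresses from documentation ranges) with Python's standard `ipaddress` module:

```python
import ipaddress

# A toy routing table: prefix -> next hop. Real routers also track
# metrics, administrative distance, and outgoing interfaces.
ROUTES = {
    ipaddress.ip_network("10.0.0.0/8"): "10.255.0.1",
    ipaddress.ip_network("10.1.0.0/16"): "10.1.255.1",
    ipaddress.ip_network("0.0.0.0/0"): "192.0.2.1",  # default route
}

def next_hop(destination: str) -> str:
    """Pick the matching route with the longest (most specific) prefix."""
    addr = ipaddress.ip_address(destination)
    matches = [net for net in ROUTES if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTES[best]

print(next_hop("10.1.2.3"))  # 10.1.255.1 (the /16 beats the /8)
print(next_hop("8.8.8.8"))   # 192.0.2.1 (falls through to the default route)
```

A destination that matches several prefixes is forwarded via the most specific one, which is why the /16 wins over the /8 here; this is the same rule that makes a default route (0.0.0.0/0) the match of last resort.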

Packet Pushers - Fat Pipe
N4N017: Routing Fundamentals

Packet Pushers - Fat Pipe

Play Episode Listen Later Mar 13, 2025 49:48


On today's N Is For Networking, we explore the fundamentals of routing, focusing on layer 3 of the OSI model. We explain the concepts of routers, routing tables, and routing protocols, and discuss why it’s important to have a firm grasp of these concepts before you tackle advanced topics such as VXLAN and EVPN.