Podcasts about SSE

  • 346 podcasts
  • 645 episodes
  • 40m average duration
  • 1 weekly episode
  • Latest: Feb 3, 2026


Latest podcast episodes about SSE

Python Bytes
#468 A bolt of Django

Feb 3, 2026 · 31:00 · Transcription available


Topics covered in this episode: django-bolt (faster than FastAPI, but with Django ORM, Django Admin, and Django packages), pyleak, More Django (three articles), Datastar, Extras, Joke. Watch on YouTube.

About the show: Sponsored by us! Support our work through our courses at Talk Python Training, The Complete pytest Course, and Patreon supporters. Connect with the hosts: Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky); Brian: @brianokken@fosstodon.org / @brianokken.bsky.social; Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky). Join us on YouTube at pythonbytes.fm/live to be part of the audience, usually Mondays at 11am PT. Older video versions are available there too. Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends-of-the-show list; we'll never share it.

Brian #1: django-bolt: Faster than FastAPI, but with Django ORM, Django Admin, and Django packages. By Farhan Ali Raza. A high-performance, fully typed API framework for Django, inspired by DRF, FastAPI, Litestar, and Robyn. See the Django-Bolt docs, an interview with Farhan on the Django Chat podcast, and a walkthrough video.

Michael #2: pyleak: Detect leaked asyncio tasks, threads, and event loop blocking, with stack traces, in Python. Inspired by goleak. Has patterns for context managers and decorators. Checks for unawaited asyncio tasks, leaked threads, and blocking of the asyncio event loop. Includes a pytest plugin so you can do @pytest.mark.no_leaks.

Brian #3: More Django (three articles). "Migrating From Celery to Django Tasks" by Paul Taylor: a nice intro to how easy it is to get started with Django Tasks. "Some notes on starting to use Django" by Julia Evans: a handful of reasons why Django is a great choice for a web framework: less magic than Rails, a built-in admin, a nice ORM, automatic migrations, nice docs, you can use SQLite in production, and built-in email. "The definitive guide to using Django with SQLite in production": I'm going to have to study this a bit. The conclusion states one of the benefits is "reduced complexity", but it still seems like quite a bit to me.

Michael #4: Datastar: Sent to us by Forrest Lanier, with lots of work by Chris May; out on Talk Python soon. There is an official Datastar Python SDK. Datastar is a little like HTMX, but the single source of truth is your server, and events can be sent from the server automatically (using SSE), e.g. yield SSE.patch_elements(f"""{(#HTML#)}{datetime.now().isoformat()}"""). See also the article "Why I switched from HTMX to Datastar".

Extras. Brian: Django Chat: "Inverting the Testing Pyramid" with Brian Okken (quite a fun interview); PEP 686 – Make UTF-8 mode default, now with status "Final" and slated for Python 3.15. Michael: Prayson Daniel's Paper tracker; Ice Cubes (open source Mastodon client for macOS); Rumdl for PyCharm et al.; cURL gets rid of its bug bounty program over AI-slop overrun; Python Developers Survey 2026.

Joke: Pushed to prod.

Noticentro
The first long weekend of 2026 arrives

Jan 29, 2026 · 1:50 · Transcription available


Veracruz approves dual roles in the health sector. Economic upturn in several states across the country. Tensions ease between the US and Denmark over Greenland. More information in our podcast.

The Uptime Wind Energy Podcast
Inside ATT and SSE’s Faskally Safety Leadership Centre

Jan 29, 2026 · 29:49


Allen visits the Faskally Safety Leadership Centre with Mark Patterson, Director of Safety, Health, and Environment at SSE, and Dermot Kerrigan, Director and Co-Founder of Active Training Team. They discuss how SSE has put over 9,000 employees and 2,000 contract partners through ATT's innovative training program, which uses actors and realistic scenarios to create lasting behavioral change across the entire workforce chain, from executives to technicians. Reach out to SSE and ATT to learn more! Sign up now for Uptime Tech News, our weekly newsletter on all things wind technology. This episode is sponsored by Weather Guard Lightning Tech. Learn more about Weather Guard's StrikeTape Wind Turbine LPS retrofit. Follow the show on YouTube, LinkedIn and visit Weather Guard on the web. And subscribe to Rosemary's "Engineering with Rosie" YouTube channel here. Have a question we can answer on the show? Email us! Welcome to Uptime Spotlight, shining light on wind energy's brightest innovators. This is the progress powering tomorrow. Allen Hall: Mark and Dermot, welcome to the show. Mark Patterson: Thank you. Allen Hall: We're in Perth, Scotland, which is a place most people in the United States probably haven't ventured to, but it is quite lovely, although chilly and rainy. It's Scotland, and we're in December. And we're here to take a look at the SSE training centre and the remarkable things that Active Training Team is doing here, because we had seen this in Boston in a smaller format, about a year ago almost now. Dermot Kerrigan: Yeah. Allen Hall: Yeah, six months ago. It hasn't been that long ago. But Dermot was on me to say, you've got to come over, you've got to see the whole environment where we put you into the police room, and some of the things we want to talk about, because [00:01:00] it does play different.
And you're right, it does play different. It is very impactful. And maybe we should start off first, Mark: you're the head of health, safety and environment for SSE here in Perth. This is a remarkable facility. It is unlike anything I have seen in the States, by far. And SSE has made the commitment to do this sort of training for everybody in your employment and outside of your employment, even contractors. Mark Patterson: We have been looking at some quite basic things in safety, as everybody does. And there's a fundamental thing we want to do, which is get everybody home safe. And it's easier said than done, because you've got to get it right for every single task, every single day, and that's a massive challenge. We have about 15,000 people in SSE, and we probably work with about 50,000 contract [00:02:00] partners, and we're heavily dependent on getting our contract partners to get our activities done. They're crucial. Speaker: Mm-hmm. Mark Patterson: And in that, it's one community, and we need to make sure everybody there gets home safe. And that's what drove us to think that adding more rules isn't going to do it. You need to give people that sense of a feeling when a really serious incident occurs, and then equip them with tools to deal with it. We've all probably seen training that gives that sense of doom and dread when something goes badly wrong, but actually that needs to be coupled with something which is quite powerful: what are the tools that help people have the conversations that get everybody home safe? So we're kind of trying to do two things. Allen Hall: Well, SSE is involved in a number of large projects. You have three offshore wind farms and more than a thousand wind turbines right now, onshore and offshore, and those offshore projects are not easy. There's a lot of complexity to them. Mark Patterson: Absolutely. So look, I think [00:03:00] that's something that.
You've got to partner with the right people. If you want to be successful, you need to make it easy for people to do the right thing, as best you possibly can. You need to partner with the right people, and you need to keep checking, as you're growing your business, that the chinks in your armor don't grow too. But fundamentally there's something else, which is a sense of community. When people come together to do a task, there is a sense of community, and people put a lot of discretionary effort in to get big projects done. And in that sense of community, you want to make sure everybody there gets home safe to their friends and family. Because if we're all being honest about it, SSE is a brilliant company, and what we do is absolutely worth doing. I love SSE. But I love my family a fair amount more. And if you've bought into that, you've probably bought into the strategy that we're trying to adopt in terms of safety. It's really simple messaging. Allen Hall: Yeah, that is very clear. And it should be [00:04:00] well communicated outside of SSE, I hope, because it is a tremendous value to SSE to do that, and I'm sure the employees appreciate it, because you have a culture of safety. What triggered that? How long ago was that trigger? This is not something you thought up yesterday, for sure. Mark Patterson: No. Look, what we've done in the immersive training centre really reinforces a lot of things that we've had in place for a while, and it takes it to the next level. We've been working on this for probably more than 10 years, but certainly for the last seven years we've been talking very much about our safety family, that's the community in SSE with our contract partners, and what we need to do. And part of that is really clear language about getting people home safe.
A sense that everybody that works with us has a safety licence. And that licence is: if it's not safe, we don't do it. It's not a rule-based thing; it's how we roll. It's part of the culture. We have a culture, and are certainly trying to instill for everybody a culture, where [00:05:00] they've got that licence. If they think something's not right, we'll stop the job and get it right. And even if they're wrong, we'll still listen to them, because ultimately we need to work our way through it, right? So we've thought hard about the language we wanted to use to reinforce that. The importance of plan, scan and adapt: planning our work well, thinking through what we need to do, but not just stopping there, keeping scanning for what could go wrong, that sense that you can't remember everything. So you need to have immediate corrective actions, that immediate sort of see it, sort it, report it. If you see something that isn't right, do something about it. And that sense of community, caring for the community that you work with. Those are the essence of our language on safety, and the immersive training is not trying to shove that language down everybody's throats again, particularly our contract partners, but it's helping people see some really clear things. One is what it feels like here if a [00:06:00] really serious incident occurs. I've spent a lot of time in various industries, and people are different when they've been on a site or involved when there's been a really serious incident, and you need to do something to get that sense of a feeling of what it feels like, and actually make people feel slightly uncomfortable in the process, because that's part of it. Allen Hall: Right. Yes. Mark Patterson: Because, you know, Allen Hall: you remember that. Mark Patterson: You remember that. Yeah.
We've had people say, well, I felt very uncomfortable in that bit of the training. It was okay, but I felt very uncomfortable. And we've talked about that a lot. Allen Hall: Yeah. Mark Patterson: Well, you kind of should, because there's something wrong with you if you don't feel uncomfortable about that. But what's super powerful, and what the guys at ATT do brilliantly, is have facilitators that allow you to have that conversation and understand: what do you need to do differently? How do you influence somebody who's more senior? How do you bring people with you so that they're going to [00:07:00] do what you want them to do after you've left the building? And just pointing the finger at people and shouting at them never does that. Rarely does that. You've got to get that sense of how you get people to have a common belief. Allen Hall: And I think that's important in the way that SSE addresses that: you're not just addressing technicians, it's the whole chain. Everybody is involved in this action, and you can break the link anywhere in there. I want to get through the description of why that process went through ATT's head, to go: we need to broaden the scope a little bit. We need to think about the full chain, from the lowest entry worker just getting started to the career senior executive. Why chain them all together? Why put them in the same room together? Dermot Kerrigan: Well, behavioral safety, or behavior-based safety, kind of got a bad rep because it was all about: if we could just [00:08:00] make those guys at the front line behave themselves, Allen Hall: then everything's fine, Dermot Kerrigan: then everything's fine. Allen Hall: Yes. Dermot Kerrigan: But actually that's kind of the wrong way of thinking. It didn't work, I think. Allen Hall: Yeah, it didn't work.
Dermot Kerrigan: The central message we're trying to get across is that operational safety is not just the business of operational people. It's everybody's business. Allen Hall: Right. Dermot Kerrigan: Everybody has a role to play in that. Site-based teams, back-office support functions, everybody has a role to play. And there's a strand in this scenario where an incident takes place because people haven't been issued with the right piece of equipment, which is a lifting cage. Allen Hall: Yes. Dermot Kerrigan: And there's a whole story about that, which goes through a procurement decision made somewhere, where somebody hit a computer and the computer said no, because they'd asked for too many lifting cages. Somebody could have said: you've asked for five lifting cages, that takes you over the procurement cap, would four do it? [00:09:00] Yes, that would be fine. As it is, they come to a crucial piece of the operation and this crucial piece of kit simply isn't there. So in order to hit the deadline and try and make people happy, two ordinary guys, two technicians, put two and two together, make five, and one of them gets killed. So we're trying to show that this isn't just operational people. It's everybody's business. Mark Patterson: Well, that's why we worked with you on this, because we saw why you got it, in terms of that chain. In the scenario it's very clear there's a senior exec talking to the client, and actually, as SSE, we're sometimes that client. We've got big principal contractors that are doing our big construction activities. We've got a lot in renewables, onshore and offshore wind obviously, and the transmission business, and in thermal, and distribution.
So that's all our businesses, including our customer business. But we've got some big project activities where we're the client, and sometimes we're the principal contractor [00:10:00] ourselves. And we need to recognize that in each link in that chain, there's a risk that we say the wrong thing, put the wrong pressure on. And I think what's really helpful is the philosophy we have in the centre here, that we get everybody in together, mixed up. Probably at least half of our board have done this. Our executive team have all done this. People are committed to it at that level, and they're here like everybody else, sitting, waiting for this thing to start, not being quite sure what they're going to go through in the day. And it's actually really important you've got a chief exec sitting with somebody who's a scaffolder. That's really important, because the scaffolder is probably the more likely person to get hurt, rather than the chief exec. So everybody seeing what it's like, and the pressures they're under at each level, is really important. Allen Hall: SSE is such a good example for the industry. I watched you from outside in America for a long time, and you just watch the things that happened [00:11:00] and go: wow, okay, SSE is organized. They know what they're doing, they understand what the project is, they're going about it. Nothing is perfect, but I think when we watch from the United States, we see there's order to it. There's a reason they're doing these things; they're measuring what is happening. And I think that's one of the things about ATT: the results have been remarkable, not just here but in several different sites, because ATT touches a lot of massive infrastructure projects in the UK, and the success rate has been tremendous. Dermot, do you want to just briefly talk about that? Dermot Kerrigan: Yeah. We run a number of centres.
We also run mobile programs, which you got from having seen us in the States. But the first centre that we opened was called EPIC, which stood for Employer's Project Induction Centre, and that was for the Thames Tideway Tunnel project, which is now more or less finished. It's completed. And that was a 10-year project, 5 billion pounds. Allen Hall: Wow. Dermot Kerrigan: [00:12:00] And, you know, unfortunately the fact is, on that kind of project you would normally expect to hurt a number of people, sometimes fatally. That would be the expectation. Allen Hall: Right. It's a complicated Dermot Kerrigan: project, especially underground. So, you know, with Tideway we are very, very pleased that in that 10-year span they didn't have even one serious life-changing injury, let alone a fatality. And I'm not saying that ATT's work, what we do, is directly responsible for that, but certainly they would say EPIC was the cornerstone for the very good safety practices that they put out on that project, again as a cultural piece, together with great facilities, great leadership on the part of the executive teams, et cetera, and stability. It was the same executive team throughout that whole project, which is quite unusual. Allen Hall: No. Dermot Kerrigan: Yeah. [00:13:00] So yeah, it seems to work. The tricky thing in safety is always trying to prove something works because it hasn't happened, you know? Allen Hall: Right, right. Proving the negative. Dermot Kerrigan: Yeah. Allen Hall: But in safety, that's what you want to have happen. You do not want an outcome. Dermot Kerrigan: No, absolutely not. Allen Hall: No reports, nothing. Dermot Kerrigan: No. So, you know, you have to give credit to organizations like SSE.
Oh, absolutely. And projects like Tideway and Sted, uh, on their horn projects, who have gone down this frankly very left-field route. It is only in the last 10 years that we've been doing this kind of thing, and, I mean, Tideway certainly is now showing some results. Sure. But it wasn't by any means a proven way of dealing with safety. Mark Patterson: I don't think you could ever prove it. Dermot Kerrigan: No. Mark Patterson: And actually there's something [00:14:00] fundamental to it: it kind of puts a stamp on the culture that you want. You talked about the projects; in SSE we've done it for all of our operational activities, so we've had about 9,000 people through it for SSE, and so far about 2,000 contract partners. We're absolutely shifting our focus now. Probably 80% of our operational teams have been through this in each one of our businesses, and we're kind of closing the gaps at the moment. So I was in Ireland with the ATT guys last week, doing a mobile session, because logistically it was hard to come to Perth or to one of the other centres. But we're gradually getting up to that 80% for SSE colleagues, and our focus is shifting a bit more to contract partners and making sure they get through. And look, they are super positive about this. Some of them have done it themselves and worked with ATT in the past, so they're really keen to use the centre that we have [00:15:00] here in Perth for their activities. So when they're working with us, we work together to make that happen. But they can book that separately with you guys, in the Faskally Centre too. Allen Hall: I think we should describe the room that we're in right now and why it was built. This is one of three different scenes that each of the
Students will go through to put some realism to the scenario and the scenario, uh, a worker gets killed. This is that worker’s home? Dermot Kerrigan: Yeah. So each of the spaces that we have here that, that they denote antecedents or consequences, and this is very much consequences. Um, so the, the, the participants will be shown in here, uh, as they go around the center, uh, and there’s a scene that takes place where they meet the grown up daughter of the young fella who’s been right, who’s been, who’s been tragically killed. Uh, and she basically asks him, uh, asks [00:16:00] them what happened. And kind of crucially this as a subtext, why didn’t you do something about it?  Allen Hall: Mm-hmm.  Dermot Kerrigan: Because you were there,  Allen Hall: you saw it, why it was played out in front of you. You saw, you  Dermot Kerrigan: saw what happened. You saw this guy who was obviously fast asleep in the canteen. He was exhausted. Probably not fit for work. Um, and yet being instructed to go back out there and finish the job, um, with all the tragic consequences that happen,  Allen Hall: right?  Dermot Kerrigan: But it’s important to say, as Mark says, that. It’s not all doom and gloom. The first part of the day is all about showing them consequences. Allen Hall: Sure. It’s  Dermot Kerrigan: saying it’s a,  Allen Hall: it’s a Greek tragedy  Dermot Kerrigan: in  Allen Hall: some  Dermot Kerrigan: ways, but then saying this doesn’t have to happen. If you just very subtly influence other people’s behavior, it’s  Allen Hall: slight  Dermot Kerrigan: by thinking about how you behave and sure adapting your behavior accordingly, you can completely change the outcome. Uh, so long as I can figure out where you are coming from and where that behavior is coming from, I might be able to influence it,  Allen Hall: right. Dermot Kerrigan: And if I can, then I can stop that [00:17:00] hap from happening. 
And sure enough, at the end of the day, the last scene is the daughter that we see in here, growing up and then going back into this tragic ending. She's with her dad; it turns out he was the one behind the camera all along. So he's 45 years old, she's just passed her driving test, and nobody got hurt 21 years ago. Mark Patterson: I think there's a journey that you've got to take people through to get them to believe that. And part of that journey is, as we look around this room, no matter who it is, and we've talked to a lot of people, they'll be looking at things in this room and think: well, yeah, I've got a cup like that. And yes, when my kids were young we had that play toy for the kids. So there is something that immediately hooks people, and children hook Allen Hall: people. Mark Patterson: Absolutely. And Allen Hall: yes, Mark Patterson: they get to see that and understand that this could be a real thing. And also in the work site view, there's kind of a work site, there's a kind of boardroom-type thing, [00:18:00] and you can actually see: yeah, that's what the work site kind of feels like, a little bit. There are scuffs on the lino on the floor, because that's what happens on work sites, and that sense of realism in all of this is really important. Allen Hall: The realism goes all the way down to the outfits that everybody's wearing: it's not clean safety gear, it's dirty, worn safety gear, which is what it should be, because if you're working, that's what it should look like. And it feels immediately real. The whole stage is set in the canteen, I'll call it. I don't know, what do you call the welfare area? Dermot Kerrigan: Yeah. Allen Hall: Okay, I want to use the right language here. But in the States we call it a break room.
So you're sitting in the break room, just minding your own business, and boom, an actor walks in, in full safety gear, speaking Scottish very quickly, for an American. But it's real. Allen Hall: It feels real because I've been in those situations, I've seen that. Mark Patterson: The language is real, and perhaps not all of it completely podcast-suitable. But when you look at it, the feedback we've got from people who are closer to the tools, and at all levels in fact, is: yeah, this feels real. It's a credible scenario. And you get people who do not want to be in safety training for an entire day. They're sitting arms folded at the start of the day, and within a very short period of time they are absolutely watching what the heck's going on, to understand what's happening. And actually it's exactly as you say: not just giving people that experience, but the subtle things you can nudge people on. There's some great examples of how you nudge people, how you give feedback. And we've had some real examples where people have come back to us and mentioned even things to do with their home life. We were down in London one day, and I was sitting in on the training, and one of the guys said: God, you've just taught me something about how I can give feedback to people in a really impactful [00:20:00] way. You explain the behavior you see, which is just the truth of what the behavior is: this is what I saw you do, this is what happened, but here's the impact that it has, how that individual feels about it. And the example that he used was something to do with his son and how his son was behaving and interacting. He said: do you know what, I've struggled to get my son to toe the line, to look after his mom in the right way.
I'm going to stop on the way home and I'm going to have a conversation with him. And I think if I keep myself cool and calm and go through those steps, I can have a completely different conversation. That was a great example. Nothing to do with work, but it made a big difference to that guy. And all those work conversations where you could just subtly change your tone, wind yourself back, stay cool and calm and do something slightly different: those things absolutely make a difference. Allen Hall: Which is hard to do in the moment. I think that's what the ATT training does: it makes you think past the first reaction, [00:21:00] the impulsive one. We've got to get this job done, this has got to be done now, I don't have the right safety gear, we'll just do it anyway. To: alright, slow down, take a breather for a second, think about what the consequences of this are. And is it worth it at the end of the day? Is it worth it? I think that's the reaction you want to draw out of people. But it's hard to do that in a video presentation. Dermot Kerrigan: Yeah, you need to practice. Allen Hall: Yeah, it doesn't stick in your brain. Dermot Kerrigan: You need to give it a go and see it happen. And the actors are very good. Whatever you give them, they will react to. Mark Patterson: They do. That's one of the really powerful things. You've got the incident itself, then you've got the unpacking of what happened, and then you've got specific tools and techniques. And what's really good is that even people who are not wildly enthusiastic at the start of the day about being interactive in a session do throw themselves into it, because they recognize they've been through [00:22:00] something. There's a common sense of community in the room. Dermot Kerrigan: Right.
Mark Patterson: And they have a bit of fun with it. And it is fun. People say they enjoy the day. They recognize that it's challenged them a little bit, and they kind of like that, but they also get the opportunity to test themselves. And that testing is really important in terms of: well, how do you challenge somebody you don't know, when you're just walking past and you see something? How do you have that conversation in a way that gets to that adult-to-adult communication and actually gets the results that you need? Being high-handed about it and saying, well, those are the rules, or, I'm really important, just do it, doesn't give us a sustained improvement. Dermot Kerrigan: People are frightened of failure. They're frightened of getting things wrong, so give them a space where they can actually just fall flat on their face, come back up again and try again. Give it a go, because this is a safe space, unlike the real world. Allen Hall: Right. Dermot Kerrigan: This is as near to the real world as you want to get. It's pretty real, but it's safe. It's that Samuel Beckett thing: fail again, [00:23:00] fail better. Allen Hall: Right. Mark Patterson: And there's a really good thing there, because when people practice it they realize: yeah, it's not straightforward going up and having a conversation with somebody about something they're doing that could be done better. And actually that helps, in a way, because it probably makes people a little bit more generous when somebody challenges them on how they're approaching something. Even if somebody challenges you in a bit of a cack-handed way, you can probably just take a breath and think: this guy's probably just trying to have a conversation with me, Allen Hall: right, Mark Patterson: so that I get home to my family. Allen Hall: Right.
Mark Patterson: It's hard to get annoyed when you've got that mindset. Allen Hall: Someone's looking after you, just a little bit. It does feel nice. Mark Patterson: And even if they're not doing it in the best way, you need to be generous with it. So there's good learning, actually, from both sides of the interaction. Allen Hall: So what's next for SSE and ATT? You've put so many people through the program, and it has drawn great results. Mark Patterson: Yeah. Allen Hall: [00:24:00] What do you think of next? Mark Patterson: So what's next? Probably the best is next to come. I think there's a lot more that we can do with this. Part of what we've done here is establish, with a big community of people, a common sense of what we're doing, and I think we've got an opportunity to continue with that. We're fortunate to be in a position where we've got a good level of growth in the business. Allen Hall: Yes, Mark Patterson: we do. There's a lot going on, and so there's always a flow of new people into an organization. People who know the theory of this stuff better than I do would say that you need to maintain that sense of community at more than 80%: if you want a group of people to act in a certain way, you need about 80% of the people or more to act in that way, and then it'll sustain. But if it starts to drift, so that only 20% of people are acting a certain way, then that is going to extinguish those elements of the culture. So we need to keep topping up our [00:25:00] immersive training with people. And we're also thinking about the contract partners that we have, and about leaving a bit of a legacy.
For the communities in Scotland, because we've got a center that we're gonna be using a little bit less, because we've been fortunate to get the bulk of our people in SSE through. Uh, we're working with contract partners; they probably want to use it for their own purposes, and also other community groups. So we've had all kinds of people from all these different companies here. We've had the Scottish First Minister here, we've had loads of people who've been really quite interested to see what we're doing. And as a result of that, they've started to, uh, step their way through doing something different themselves. So,  Allen Hall: so that may change the future of ATT also, in terms of the approach, the scenarios they're in. The culture changes, right? Yeah. Everybody changes. You don't wanna be stuck in time.  Dermot Kerrigan: No, absolutely.  Allen Hall: That's one thing ATT is not,  Dermot Kerrigan: no, it's not  Allen Hall: stuck in time.  Dermot Kerrigan: But, uh, I mean, you know, we first started out with the center, uh, accommodating a project. Yeah. So this would [00:26:00] be an induction space. You might have guys who were gonna work on a project for two weeks, other guys who were gonna work on it for six months. They wanted to put them through the same experience. Mm. So that when they were on site, they could, say, refer back to the induction and say, well, why ask me to do that? You know, we both have that experience, so I'm gonna challenge you and you're gonna accept challenge, et cetera. So it was always gonna be a short, sharp shock. But actually, if you're working with an organization, you don't necessarily have to take that approach. You could put people through a little bit of the training, give 'em a chance to practice, give 'em a chance to reflect, and then go on to the next stage. Um, so it becomes more of a journey rather than a single, hard, one-event experience. 
Yeah. You don't learn to drive in a day really, do you? You know, you have to, well, transfer it to your right brain and practice, you know?  Allen Hall: Right. The more times you see an experience, the more it's memorable, especially with the training on how to work with others.[00:27:00] A refresh of that is always good.  Dermot Kerrigan: Yeah.  Allen Hall: Pressure changes people, and I think it's always time to reflect and go back to what the culture is of SSE. That's important. So this has been fantastic, and I have to thank SSE and ATT for allowing us to be here today. It was quite the journey to get here, but it's been really enlightening. Uh, and I think we've been an advocate of ATT and the training techniques that SSE uses for well over a year. And everybody we run into in organizations, particularly in wind, we say, you gotta call ATT, you gotta reach out, because they're doing things right. They're gonna change your safety culture, they're gonna change the way you work as an organization. That takes time. That message takes time. But I do think they need to be reaching out. And Dermot, how do they do that? How do they reach ATT?  Dermot Kerrigan: Uh, they contact me or they contact ATT. So, info at Active Training Team, US.  Allen Hall: US. [00:28:00] There you go.  Dermot Kerrigan: Or .co.uk. There you go, if you're on the other side of the pond. Yeah. Allen Hall: Yes. And Mark, because you've just established such a successful safety program, I'm sure people want to reach out and ask, and hopefully a lot of our US and Australian and Canadian listeners to this podcast will reach out and talk to you about what you have set up here. How do they get ahold of you? Mark Patterson: I'll give you a link that you can access in the podcast, if that works. Great. And uh, look. 
The risk of putting yourself out there and talking about this sort of thing is you sometimes give the impression you've got everything sorted, and we certainly don't in SSE. The second you think you've got everything nailed in terms of safety in your approach, then you don't. Um, so we've got a lot left to do. Um, but I think this particular thing has made a difference to our colleagues and contract partners, and just getting them home safe.  Allen Hall: Yes. Yes, so thank you to both of you. Mark, Dermot, thank you so much for being on the podcast. We appreciate both [00:29:00] of you, and yeah, I'd love to attend this again. This is excellent, excellent training. Thanks, Allen. Thanks.

Com d'Archi
S7#15

Com d'Archi

Play Episode Listen Later Jan 20, 2026 22:07


In this new episode of Com d'Archi Podcast, we head to “Cœur Paris,” a project unveiled in January 2026, located in the former headquarters of the AP-HP, in the historic heart of the capital. Winner of the “Réinventer Paris 3” call for projects, this 27,000 m² development will become, in 2028, the first “mission-driven building” in Paris. On the program: rehabilitation of Haussmannian heritage, contemporary architectural additions, low-carbon transition, and mixed uses: offices, social housing, social and solidarity economy (SSE), services and shops open to all. Through this project, Com d'Archi explores a central notion: urban hospitality. How can we repair in times of peace? How can we inhabit buildings steeped in history in new ways? And how much “heart” do we really put into shaping the city? This English version was generated using AI with voice cloning, preserving the speakers' timbre (Anne-Charlotte) and their natural French accent. Audio production: comdarchipodcast. Teaser image © Dominique Perrault. Project: Cœur Paris. View of the “Chambord”-inspired wooden staircase in the Saint-Martin block, designed by Dominique Perrault. If you like the podcast, do not hesitate: to subscribe so you don't miss the next episodes; to leave us stars and a comment :-); to follow us on Instagram @comdarchipodcast to find beautiful images, always chosen with care, so as to enrich your view on the subject. Nice week to all of you! Hosted by Acast. Visit acast.com/privacy for more information.

Transmission
The art of origination: Offtake and risk in GB's energy market with Josh Brown (SSE)

Transmission

Play Episode Listen Later Jan 15, 2026 50:34


Want the latest news, analysis, and price indices from power markets around the globe, delivered to your inbox every week? Sign up for the Weekly Dispatch, Modo Energy's unmissable newsletter: https://bit.ly/TheWeeklyDispatch Navigating the energy transition requires more than just building assets; it requires a deep understanding of how to price risk in a market that is fundamentally cannibalising itself as it grows. The transition to a renewables-dominated energy system requires expert commercial strategy, especially in the volatile realm of battery storage and renewable certificates. Ed Porter is joined by Josh Brown, Origination Team Manager at SSE plc, to explore what the front-office operations of a major utility look like in practice, how teams navigate market saturation in batteries, and how third-party assets are managed using financing tools like tolls and Power Purchase Agreements (PPAs). Key topics covered: • How do utility origination teams manage the commercial complexity of battery assets in a fundamentally "self-cannibalising" market? • What internal processes are required to negotiate and approve complex, high-risk contracts such as tolls? • Is the energy sector prepared for the disruptive market shift from annual REGO matching to a 24/7 hourly certification system? • How are commercial teams structuring PPAs between developers and offtakers? • Are Contracts for Difference (CfD) rules creating significant exposure for large offtakers? About our guest: Josh Brown is the Origination Team Manager at SSE, working within the Energy Markets division, managing market-facing power and gas positions for both SSE's own extensive asset base and third-party clients. He specialises in navigating the complexities of Power Purchase Agreements (PPAs) for solar, wind, and hydro, alongside structured battery optimisation products and the management of green certificate trading (including REGOs and ROCs) for the entire group. 
Connect with Josh here: https://www.linkedin.com/in/josh-brown-4a8b0336/?originalSubdomain=uk SSE is a leading clean energy utility with a major presence across Great Britain and Ireland. The group is active across the entire energy value chain, including renewable and thermal generation, electricity networks, and supply. SSE has contracted over 2 GW of batteries and 3 GW of CfD-backed assets in the last two years alone. For more information, head to their website: https://www.sse.com/ About Modo Energy: Modo Energy helps the owners, operators, builders, and financiers of battery energy storage understand the market, and make the most out of their assets. All episodes of Transmission are available to watch or listen to on the Modo Energy site. To stay up to date with our analysis, research, data visualisations, live events, and conversations, follow us on LinkedIn. Explore The Energy Academy, our bite-sized video series explaining how power markets work.


Consensus Unreality: Occult, UFO, Phenomena and Conspiracy strangeness
Solid State Intelligence, John C. Lilly Revisited, ECCO, 3I/Atlas PATREON PREVIEW

Consensus Unreality: Occult, UFO, Phenomena and Conspiracy strangeness

Play Episode Listen Later Dec 19, 2025 9:14


ECCO is calling. Will you answer? SSE is at your door. Do you let it in? Hear our advice in this breezy discussion of our beloved mad scientist mystic John C. Lilly. The spiritual insights of a Vitamin K psychonaut; eerie AI prophecies; coincidence and synchronicity; and more! Hear the full episode and over 150 hours of exclusive episodes on Patreon, plus join our Print Club to receive our printed Journal of Shells publication and more: https://www.patreon.com/c/consensusunreality

The Investor Way
E255 - Burberry, Persimmon, SSE, Taylor Wimpey, Experian & Netflix

The Investor Way

Play Episode Listen Later Dec 8, 2025 38:15


In this episode we discuss Burberry, Persimmon, SSE, Taylor Wimpey, Experian & Netflix$brby $psn $sse $tw. $expn $nflx#brby #psn #sse #tw. #expn #nflx

The Uptime Wind Energy Podcast
Europe Weighs Chinese Turbines Against Energy Independence

The Uptime Wind Energy Podcast

Play Episode Listen Later Dec 1, 2025 5:42


Allen covers the debate over Chinese wind turbines in Europe, from data security concerns and unfair subsidies to the risk of trading one energy dependency for another. Sign up now for Uptime Tech News, our weekly email update on all things wind technology. This episode is sponsored by Weather Guard Lightning Tech. Learn more about Weather Guard's StrikeTape Wind Turbine LPS retrofit. Follow the show on Facebook, YouTube, Twitter, Linkedin and visit Weather Guard on the web. And subscribe to Rosemary Barnes' YouTube channel here. Have a question we can answer on the show? Email us! Wind energy is one of Europe's great strengths, providing twenty percent of European electricity today. Over half by 2050. That's the plan. Competitive. Homegrown. Quick to build. Almost every wind turbine spinning in Europe today was made in Europe. By European companies. Assembled in European factories. Hundreds of factories across the continent make components for wind turbines. Over four hundred thousand Europeans punch the clock in wind energy. Every new turbine generates sixteen million euros of economic activity. And this week, proof of that investment. In Germany, the He Dreiht offshore wind farm just sent its first power into the grid. Nine hundred sixty megawatts. Germany's largest offshore wind farm. VESTAS turbines standing one hundred forty-two meters tall. Sixty-four turbines total. All commissioned by summer 2026. NILS DE BAAR of VESTAS said the fifteen megawatt turbine sets new standards in offshore wind power. European technology. European manufacturing. European energy. In Ireland, more European investment. SSE and FUTURENERGY IRELAND tapped NORDEX to build the Wind Farm in County Donegal. Twelve turbines. Sixty megawatts. One hundred thirty-eight million dollars. Forty thousand Irish homes powered when those blades turn in 2027. And in Scotland and Italy, floating wind is consolidating. NADARA is acquiring BLUEFLOAT ENERGY's stake in ten floating offshore projects. 
BROADSHORE. BELLROCK. SINCLAIR. SCARABEN. Nearly three gigawatts of floating wind now under single European ownership. Today’s wind farms save Europe one hundred billion cubic meters of gas imports every year. In Britain alone, consumers saved one hundred four billion pounds between 2010 and 2023. That’s after factoring in the cost of building the wind farms. Wind means lower energy bills. Wind means independence. But here comes the temptation. Chinese turbines are cheaper. Much cheaper. And in times of strained budgets and rising costs… That’s hard to ignore. GILES DICKSON is the CEO of WINDEUROPE. He says… Think about what you’re buying. The European Commission launched an inquiry last year. They suspect Chinese manufacturers offer prices and payment terms backed by unfair government subsidies. European manufacturers can’t legally offer the same deferred payment deals. OECD rules won’t allow it. Then there’s energy security. Europe just weaned itself off Russian gas. Painfully. Expensively. Three years later, high energy prices still drag on the economy. Does Europe want another dangerous dependency? This time on imported equipment instead of imported fuel? And as Giles points out – a modern wind turbine has hundreds of sensors. Hundreds. Gathering performance data. Monitoring operations. European law prohibits exporting that data to China. But Chinese law allows Beijing to require Chinese companies to send data home from overseas operations. There’s a contradiction. Someone’s going to break the law. And those sensors? They don’t just collect data. They can control equipment. The European Union and NATO are voicing concerns. The wind industry has invested over fourteen billion euros in new and expanded European factories in just the last two years. That’s commitment. That’s confidence. And the rest of the world is taking notice. In Japan, FAIRWIND just signed a strategic partnership with WIND ENERGY PARTNERS in YOKOHAMA. 
MATT CROSSAN, FAIRWIND’s Asia Pacific Director, said Japan’s wind sector is still young compared to Europe. But government support and investment are driving expansion. They want European expertise. European experience. European standards. Wind energy is the last strategic clean tech sector with a truly European footprint. The last one. Solar panels. Batteries. Electric vehicles. Those have already migrated elsewhere. But Wind remains. For now. Four hundred forty thousand workers. Two hundred fifty factories. Fourteen billion euros in new investment. One hundred billion cubic meters of gas imports avoided every year. Germany’s largest offshore wind farm now feeding the grid. Ireland building new capacity. Scotland consolidating floating wind. Japan seeking European partners. Europe can buy cheaper today. Or build stronger tomorrow. GILES DICKSON is sounding the alarm. But, will Europe listen? That's the wind industry news on the 1st of December 2025.

.NET in pillole
321 - The evolutions of ASP.NET Core (with .NET 10) that developers can't ignore

.NET in pillole

Play Episode Listen Later Dec 1, 2025 19:53


In this episode we explore the main new features introduced in ASP.NET Core 10 (excluding Blazor): from Kestrel optimisations to validation in Minimal APIs, from the new support for SSE (server-sent events) through to improvements in OpenAPI, security, and performance. An update packed with practical features that simplify the development of modern APIs that are faster and more secure. https://learn.microsoft.com/en-us/aspnet/core/release-notes/aspnetcore-10.0 https://learn.microsoft.com/en-us/aspnet/core/security/authentication/passkeys/blazor https://learn.microsoft.com/en-us/aspnet/core/migration/90-to-100 #dotnet #aspnet #dotnet10 #podcast #dotnetinpillole
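The SSE support mentioned in the episode refers to the server-sent events streaming format, which is just plain text over HTTP. As a rough illustration of what's on the wire (shown in Python rather than C#, and simplified relative to the WHATWG spec, which strips at most one leading space per field), a toy parser might look like:

```python
# Minimal sketch of the Server-Sent Events (SSE) wire format: "data:" and
# "event:" fields, with events separated by a blank line. Illustrative only;
# a real client should follow the WHATWG event-stream parsing rules.

def parse_sse(stream: str):
    """Parse a complete SSE text stream into (event_type, data) tuples."""
    events = []
    event_type, data_lines = "message", []
    for line in stream.splitlines():
        if line == "":  # blank line: dispatch the accumulated event
            if data_lines:
                events.append((event_type, "\n".join(data_lines)))
            event_type, data_lines = "message", []
        elif line.startswith("data:"):
            data_lines.append(line[5:].lstrip(" "))
        elif line.startswith("event:"):
            event_type = line[6:].lstrip(" ")
        # comment lines (starting with ":") and other fields are ignored here

    return events

raw = "event: price\ndata: 42.5\n\ndata: hello\ndata: world\n\n"
print(parse_sse(raw))  # [('price', '42.5'), ('message', 'hello\nworld')]
```

Multiple consecutive `data:` lines belong to one event and are joined with newlines, which is why the second event above carries a two-line payload.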

Irish Tech News Audio Articles
10% GDP boost to Global South from clean energy transition

Irish Tech News Audio Articles

Play Episode Listen Later Nov 18, 2025 4:13


A new University of Oxford report finds a rapid switch to renewables could double energy-sector productivity in low-to-middle income economies within 25 years. In many countries, this would result in a GDP boost by mid-century of around 10%. "Opting for clean energy could be an economic boon for solar-rich countries such as Burundi, DR Congo and Mozambique," says Professor Sam Fankhauser, Interim Director of Oxford Smith School of Enterprise and the Environment. "For context, 10% of GDP is roughly the amount countries typically spend on public health. These productivity gains are unprecedented, and it could be the developing countries that benefit the most." The importance and benefits of a clean energy transition Renewable energy boosts productivity in two ways: more electricity is generated per dollar invested, with fewer losses (for example to heat) compared to fossil fuels, and renewable energy is cheaper - enabling households, businesses and industries to run for longer at lower cost. The report quantifies this gain over the next 25 years and finds that renewable energy productivity gains are much higher in the Global South, resulting in an important advantage in the growing net zero economy. Renewables could finally start to close the income gap between rich and poor countries, say the authors. The report, part of a three-year research programme funded by energy company SSE, also investigates how renewable energy investment has already boosted GDP in low and middle-income countries as compared to fossil fuels. Spending on renewables gets multiplied in the local economy much more than fossil fuels - along the supply chain and through local wages. The analysis shows that from 2017-2022 this has boosted the GDP of the 100 largest developing countries (excluding China) by a combined US$1.2 trillion - the equivalent of 2 to 5% of GDP for most nations. In COP30 host Brazil, renewable investments raised GDP by US$128 billion. 
However, the authors caution that the economic benefits of renewables do not automatically flow to host communities. Instead, deliberate benefit-sharing mechanisms such as community benefit funds and co-ownership are needed. The report concludes by emphasising the potential of distributed renewable energy for accessibility and inclusion. "The success of the renewable energy transition will depend not only on lower costs and higher productivity - both of which are now all but guaranteed - but on our collective ability to ensure that its benefits are fairly and widely shared, leaving no community behind," says Professor Fankhauser. Rhian Kelly, Chief Sustainability Officer at SSE, comments: "Meaningful consultation must sit at the heart of every approach to community engagement. The most successful models go well beyond minimum requirements, reflecting the priorities and context of local people. By sharing learnings, we can identify what works best - and ensure that dedicated community funds are transparent, flexible, and truly responsive to local needs. In the UK and Ireland, these funds have already supported more than 12,000 projects. With clear policy frameworks - including minimum contribution thresholds and standardised benefit-sharing agreements - we can build on this success and deliver lasting benefits for communities." The report will be uploaded here: https://www.smithschool.ox.ac.uk/research/economics sustainability About the Smith School of Enterprise and the Environment: The Smith School of Enterprise and the Environment at the University of Oxford equips enterprise to achieve net zero emissions and the sustainable development goals, through world-leading research, teaching and partnerships. https://www.smithschool.ox.ac.uk/ See more breaking stories here.

Walker Crips' Market Commentary
Countdown to a December interest rate cut

Walker Crips' Market Commentary

Play Episode Listen Later Nov 18, 2025 8:20


The case for a December Bank of England (“BoE”) rate cut strengthened significantly last week, as a string of data pointed to a stalling economy and a rapidly cooling labour market. Third quarter gross domestic product (“GDP”) growth fell short of expectations at just 0.1%, with the lack of momentum reflecting continued weakness. Further to this, UK unemployment figures rose to 5%, the highest since 2021, prompting traders to price in an 80% chance of a BoE rate cut in December. In addition, UK wage growth is stalling, with a recent KPMG/Recruitment and Employment Confederation survey showing near 4-year lows, indicating to the BoE that wage pressures are easing. All eyes will now be on Wednesday's inflation data, forecast to ease to 3.6%, which would support the case for a more dovish BoE... Stocks featured: 3i Group, SSE and Vodafone Group. To find out more about the investment management services offered by Walker Crips, please visit our website: https://www.walkercrips.co.uk/ This podcast is intended to be Walker Crips Investment Management's own commentary on markets. It is not investment research and should not be construed as an offer or solicitation to buy, sell or trade in any of the investments, sectors or asset classes mentioned. The value of any investment and the income arising from it is not guaranteed and can fall as well as rise, so that you may not get back the amount you originally invested. Past performance is not a reliable indicator of future results. Movements in exchange rates can have an adverse effect on the value, price or income of any non-sterling denominated investment. Nothing in this podcast constitutes advice to undertake a transaction, and if you require professional advice you should contact your financial adviser or your usual contact at Walker Crips. Walker Crips Investment Management Limited is authorised and regulated by the Financial Conduct Authority (FRN: 226344) and is a member of the London Stock Exchange. Hosted on Acast. 
See acast.com/privacy for more information.

VSA Capital
VSA Capital Technology and Transitional Energy 13/11/2025

VSA Capital

Play Episode Listen Later Nov 13, 2025 26:00


SSE, Rolls-Royce, BAE Systems, Avon Technologies, Volex, Luceco Group, AB Dynamics, Dialight, Oxford Instruments

Mercado Abierto
The key stories of the day in Europe

Mercado Abierto

Play Episode Listen Later Nov 12, 2025 7:58


We review names including Infineon, STMicroelectronics, RWE, E.ON, SSE, NIBC Bank and Edenred. With Pablo García, managing director of Divacons Alphavalue.

Special English
Village galas ignite ethnic heritage revival in southwest China

Special English

Play Episode Listen Later Nov 3, 2025 27:00


①China unveils first batch of firms listed on SSE's sci-tech growth tier ②Village galas ignite ethnic heritage revival in southwest China ③China reports 4.7 mln 5G base stations by end of September ④China strengthens position in musical instrument industry ⑤China's manned submersibles report success of joint underwater operations in Arctic ⑥China's Sichuan enacts first local regulation on ancient book protection

Reading Teachers Lounge
8.3 Affirming Practices with Dr. Jasmine Rogers

Reading Teachers Lounge

Play Episode Listen Later Oct 31, 2025 44:51 Transcription Available


In this episode, you're getting a preview of the types of conversations happening with educators in our bonus subscription episodes. This month, Shannon and Mary chat with Dr. Jasmine Rogers, a reading specialist and college educator, about her dual roles in literacy. Dr. Rogers discusses her research on African American English (AAE) and structured literacy, emphasizing the importance of affirming behaviors in promoting student motivation and effective communication. Drawing on personal experiences and professional expertise, she emphasizes the importance of affirming diverse dialects, including Black English, and fostering an inclusive and supportive classroom environment. The episode also covers translanguaging and strategies for teachers to support multilingual students, highlighting the significance of creating a positive, inclusive, and affirming classroom environment. Tune in to learn more about effective teaching practices, the science of reading, and how teachers can better support students from diverse linguistic backgrounds.
0:00 Welcome to the Reading Teachers Lounge
01:09 Introducing Dr. Jasmine Rogers
02:37 Understanding Black Language in Education
04:34 Research on Affirming Student Language
07:32 The Importance of Cultural Awareness in Teaching
13:55 Personal Experiences and Reflections
15:48 Journey into Structured Literacy
17:37 Merging Identity with Teaching Practices
22:14 Reflecting on Teaching Practices
23:03 The Power of Translanguaging
24:57 Effective Communication Techniques
26:55 Building Positive Classroom Environments
30:04 Supporting Teachers and Students
31:29 The Importance of Authenticity in Teaching
32:59 Insights from Research
36:12 Morphology and Language Learning
39:50 Final Thoughts and Farewells
RESOURCES MENTIONED IN THE EPISODE
Dr. Jasmine's website
Connect with Dr. Jasmine Rogers through her website
Connect with Dr. Jasmine Rogers through LinkedIn
44 Phonemes Video from RRFTS (Rollins Center for Language and Literacy)
DC Public Schools Reading Clinic
Frayer Model
Strive for Five Conversations by Tricia Zucker and Sonia Cabell *Amazon affiliate link
Education Week: What is Translanguaging and How Is it Used in the Classroom?
Buy us a coffee
Get a FREE Green Chef box with our link
Bonus Episodes access through your podcast app
Bonus episodes access through Patreon
Free Rubrics Guide created by us
Finding Good Books Guide created by us
Support the show
Get Literacy Support through our Patreon

Packet Pushers - Heavy Networking
HN802: Unifying Networking and Security with Fortinet SASE: Architecture, Reality, and Lessons Learned (Sponsored)

Packet Pushers - Heavy Networking

Play Episode Listen Later Oct 24, 2025 58:39


The architecture and tech stack of a Secure Access Service Edge (SASE) solution will influence how the service performs, the robustness of its security controls, and the complexity of its operations. Sponsor Fortinet joins Heavy Networking to make the case that a unified offering, which integrates SD-WAN and SSE from a single vendor, provides a... Read more »

Packet Pushers - Full Podcast Feed
HN802: Unifying Networking and Security with Fortinet SASE: Architecture, Reality, and Lessons Learned (Sponsored)

Packet Pushers - Full Podcast Feed

Play Episode Listen Later Oct 24, 2025 58:39


The architecture and tech stack of a Secure Access Service Edge (SASE) solution will influence how the service performs, the robustness of its security controls, and the complexity of its operations. Sponsor Fortinet joins Heavy Networking to make the case that a unified offering, which integrates SD-WAN and SSE from a single vendor, provides a... Read more »

Packet Pushers - Fat Pipe
HN802: Unifying Networking and Security with Fortinet SASE: Architecture, Reality, and Lessons Learned (Sponsored)

Packet Pushers - Fat Pipe

Play Episode Listen Later Oct 24, 2025 58:39


The architecture and tech stack of a Secure Access Service Edge (SASE) solution will influence how the service performs, the robustness of its security controls, and the complexity of its operations. Sponsor Fortinet joins Heavy Networking to make the case that a unified offering, which integrates SD-WAN and SSE from a single vendor, provides a... Read more »

Proactive - Interviews for investors
Hercules acquires Lyons Power Services to boost UK energy reach

Proactive - Interviews for investors

Play Episode Listen Later Oct 20, 2025 3:46


Hercules PLC (LSE:HERC) CEO Brusk Korkmaz and CFO Paul Wheatcroft talked with Proactive's Stephen Gunnion about the company's acquisition of Lyons Power Services. The acquisition marks a strategic step in Hercules' expansion within the power and energy infrastructure sector. Korkmaz explained that Lyons brings a strong reputation and high-profile clients, such as Siemens and SSE, to the Hercules group. The move complements Hercules' earlier acquisition of Advantage NRG and aligns with its broader strategy to support the UK's transition to clean energy. Korkmaz noted: “It takes us to the next level in the power and energy space with Advantage NRG and now Lyons Power Services.” He highlighted that the UK energy infrastructure sector is set to see £58 billion in investment, presenting significant opportunity. Wheatcroft provided financial details, stating that Hercules acquired 70% of Lyons for £703,000, split evenly between cash and shares. He said: “Lyons generated £1.4 million in revenue and £287,000 in pre-tax profit for the year ending January 2025.” The shares are subject to a 12-month lock-in followed by a 12-month orderly market agreement. Importantly, the existing energy manager at Lyons will retain a 30% stake and remain active in the business. Wheatcroft said this model ensures continuity and supports long-term value creation. With the combined capabilities of Advantage NRG and Lyons, Korkmaz said Hercules is positioned to supply a skilled workforce for growing power transmission and infrastructure demands, helping to build a resilient network for the UK's energy future. For more interviews and company updates, visit Proactive's YouTube channel. Don't forget to like this video, subscribe, and enable notifications for future content. #HerculesPLC #EnergyInfrastructure #LyonsPowerServices #UKEnergy #CleanEnergyTransition #PowerTransmission #BusinessAcquisition #InfrastructureInvestment #ProactiveInvestors #EnergyWorkforce

ImpactGirl
Give me 15 minutes and I'll help you stop getting burned (when you delegate) #231

ImpactGirl

Play Episode Listen Later Oct 9, 2025 16:00


Institute for Government
Is Labour's clean power mission on track?

Institute for Government

Play Episode Listen Later Oct 2, 2025 58:29


This event is part of the Institute for Government's Labour Party Conference 2025 fringe programme. Speakers: Michael Shanks MP, Minister for Energy at the Department for Energy Security and Net Zero; Sam Alvis, Associate Director for Environment, Energy Security and Nature at IPPR; Sam Peacock, Managing Director for Corporate Affairs, Regulation and Strategy at SSE; and Dhara Vyas, Chief Executive Officer of Energy UK. The event was chaired by Jill Rutter, Senior Fellow at the Institute for Government, and was held in partnership with Energy UK and SSE.

VSA Capital
VSA Capital Tech and Transitional Energy Podcast 021025

VSA Capital

Play Episode Listen Later Oct 2, 2025 27:36


Data centres and the energy required, JLR and the auto industry, Aurrigo, Invinity, CPH2, Ceres Power, SSE, National Grid, cyber plays, Corero Network Security, BAE Systems, BT

IfG LIVE – Discussions with the Institute for Government
Is Labour's clean power mission on track?

IfG LIVE – Discussions with the Institute for Government

Play Episode Listen Later Oct 2, 2025 58:29


Speakers: Michael Shanks MP, Minister for Energy at the Department for Energy Security and Net Zero; Sam Alvis, Associate Director for Environment, Energy Security and Nature at IPPR; Sam Peacock, Managing Director for Corporate Affairs, Regulation and Strategy at SSE; and Dhara Vyas, Chief Executive Officer of Energy UK. The event was chaired by Jill Rutter, Senior Fellow at the Institute for Government, and was held in partnership with Energy UK and SSE.

The Red Box Politics Podcast
The Chancellor's Hard Choices To Come

The Red Box Politics Podcast

Play Episode Listen Later Sep 29, 2025 21:00


Rachel Reeves has delivered her speech to Labour Party conference, warning of 'harsh global headwinds' and harder choices to come. Is she laying the ground for a brutal autumn Budget, and did she look like a chancellor secure in her job? Hugo Rifkind unpacks the speech with Joe Mayes and Megan Kenyon. He also speaks to Bill Esterson, chair of the Commons Energy Security and Net Zero Committee, about whether the government is making the case to the public for net zero. This bonus episode is brought to you by SSE, from the Labour Party conference.

Irish Tech News Audio Articles
Nevo EV Show Returns - Bigger Than Ever as EV Demand Surges

Irish Tech News Audio Articles

Play Episode Listen Later Sep 11, 2025 4:09


Ireland's electric vehicle market is booming, with registrations up 69% in August and over 20,000 new EVs licensed so far in 2025 - a 37% increase year-on-year. With one in six new cars now electric, EVs are becoming the mainstream choice for both drivers and businesses. This rapid growth makes the return of the Nevo Electric Vehicle Show to Dublin's RDS Simmonscourt this November especially timely - uniting industry leaders, public sector decision-makers and consumers for Ireland's largest ever showcase of electric mobility, clean energy and sustainable transport. The Nevo Electric Vehicle Show, in partnership with Bank of Ireland, is set to return to Dublin's RDS Simmonscourt this November with its most ambitious programme yet. Running across two days, Friday 7th November for businesses and Saturday 8th November for the general public, it will be Ireland's largest ever event dedicated to electric vehicles, clean energy, and sustainable mobility. Bank of Ireland is once again the show's title partner in 2025, while SSE Airtricity will continue as the exclusive Energy Partner, reflecting the growing importance of energy solutions in driving Ireland's shift to electrification. Every automotive brand operating in Ireland will be present, alongside exhibitors spanning public and home charging, solar energy, personal and public electric transport, smart home technology and wider energy services. The business day, on Friday 7 November, is designed to help businesses, fleets and public sector organisations of all sizes plan for a sustainable future. With climate targets looming, the event will bring together CEOs, CFOs, and heads of fleet and sustainability from across Ireland. The agenda will feature keynote speakers, panel discussions and case studies from organisations already transitioning to electric mobility. 
Workshops will be hosted throughout the day by GEOTAB, ESB, SSE, Activ8 Energies and Pragmatica, covering topics such as fleet management, smart energy, and business strategy development. The goal is to empower decision-makers to accelerate their journey towards net zero while also gaining practical advice on costs, infrastructure, and policy. For the general public on Saturday, 8 November, the Nevo EV Show promises a full day of discovery, excitement and hands-on experiences. Over 120 electric vehicles will be on display across 56 stands, representing 34 car brands. Nissan is confirmed as the official vehicle launch partner this year, where the brand will unveil the all-new Micra and the latest Leaf, marking their first official appearance in Ireland, giving visitors an exclusive first look. More than 30 vehicles will be available to test drive as part of the SSE Airtricity Driving Experience, while ESB ecars will showcase 12 vehicles in the new live demonstration area with EV expert Derek Reilly offering insights into performance, design and features. Visitors can also look forward to exclusive vehicle launches from more leading brands, expert panel discussions on everything from vehicle grants to charging, and a chance to explore the very latest in sustainable transport solutions. Attendance is once again expected to be significant! Organisers are targeting 10,000 registrations for the business day and 20,000 attendees for the public day, backed by a nationwide marketing campaign and strong support from event partners including Bank of Ireland, SSE Airtricity, ESB ecars, GEOTAB, ZEVI and SEAI. The Nevo EV Show aims to build on the extraordinary success of last year's event, which attracted almost 20,000 visitors. With a broader programme, bigger displays and more vehicles than ever before, the 2025 edition is shaping up to be Ireland's definitive showcase of the electric future. Admission is free, but registration is required. 
Tickets for both the Business Day and Public Day are available now at nevo.ie.

Les Cast Codeurs Podcast
LCC 329 - AI, the super intern that makes us work more

Les Cast Codeurs Podcast

Play Episode Listen Later Aug 14, 2025 120:24


Arnaud and Guillaume explore the evolution of the Java ecosystem with Java 25, Spring Boot and Quarkus, as well as the latest trends in artificial intelligence with new models such as Grok 4 and Claude Code. The hosts also take stock of cloud infrastructure and the challenges around MCP and CLIs, while discussing the impact of AI on developer productivity and the management of technical debt. Recorded on 8 August 2025. Download the episode LesCastCodeurs-Episode-329.mp3, or watch it as a video on YouTube.

News

Languages

Java 25: JEP 515 - Ahead-of-Time method profiling https://openjdk.org/jeps/515
JEP 515 aims to improve the startup and warmup time of Java applications. The idea is to collect method execution profiles during a prior run, then make them immediately available when the virtual machine starts. This lets the JIT compiler generate native code from the outset, without waiting for the application to warm up in production. The change requires no modification to application, library or framework code; it integrates with the existing AOT cache creation commands. See also https://openjdk.org/jeps/483 and https://openjdk.org/jeps/514

Java 25: JEP 518 - JFR cooperative sampling https://openjdk.org/jeps/518
JEP 518 improves the stability and scalability of the JDK Flight Recorder (JFR) feature for execution profiling. The mechanism for sampling Java thread call stacks is reworked to run only at safepoints, which reduces the risk of instability. The new model enables safer stack walking, notably with the ZGC garbage collector, and more efficient sampling with support for concurrent stack walking. 
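The JEP 515 workflow described above builds on the AOT cache commands introduced by JEP 483. A minimal sketch of a training run followed by a cached production run might look like the following (flag names are those documented in the JEPs; `app.jar` and `com.example.App` are placeholders, not from the episode):

```shell
# Training run: record an AOT configuration while exercising the app (JEP 483);
# with JEP 515 the recording also captures method profiles.
java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf -cp app.jar com.example.App

# Assemble the AOT cache from the recorded configuration.
java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf -XX:AOTCache=app.aot -cp app.jar

# Production run: start with the cache so the JIT has profiles from the outset.
java -XX:AOTCache=app.aot -cp app.jar com.example.App
```

The two recording steps happen once, offline; only the last command is part of normal deployment.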
JEP 518 also adds a new event, SafepointLatency, which records the time a thread needs to reach a safepoint. The approach makes the sampling process lighter and faster, since the work of building stack traces is delegated to the target thread itself.

Libraries

Spring Boot 4 M1 https://spring.io/blog/2025/07/24/spring-boot-4-0-0-M1-available-now
Spring Boot 4.0.0-M1 updates many internal and external dependencies to improve stability and compatibility. Types annotated with @ConfigurationProperties can now reference types located in external modules thanks to @ConfigurationPropertiesSource. SSL certificate validity information has been simplified, dropping the WILL_EXPIRE_SOON state in favour of VALID. Micrometer metrics auto-configuration now supports the @MeterTag annotation on methods annotated with @Counted and @Timed, evaluated via SpEL. @ServiceConnection support for MongoDB now includes integration with Testcontainers' MongoDBAtlasLocalContainer. Some features and APIs have been deprecated, with recommendations to migrate custom endpoints to the Spring Boot 2 versions. Milestones and release candidates are now published to Maven Central, in addition to the traditional Spring repository. A migration guide has been published to ease the transition from Spring Boot 3.5 to 4.0.0-M1.

Switching from Spring Boot to Quarkus: lessons learned https://blog.stackademic.com/we-switched-from-spring-boot-to-quarkus-heres-the-ugly-truth-c8a91c2b8c53
A team migrated a Java application from Spring Boot to Quarkus to gain performance and reduce memory consumption. The goal was also to optimise the application for cloud native. 
The migration turned out to be more complex than expected, notably because of incompatibilities with certain libraries and a less mature Quarkus ecosystem. Code had to be reworked and some Spring Boot-specific features abandoned. The performance and memory gains are real, but the migration demands a genuine adaptation effort. The Quarkus community is progressing, but support remains limited compared with Spring Boot. Conclusion: Quarkus is attractive for new projects, or ones ready to be rewritten, but migrating an existing project is a real challenge.

LangChain4j 1.2.0: new features and improvements https://github.com/langchain4j/langchain4j/releases/tag/1.2.0
Stable modules: langchain4j-anthropic, langchain4j-azure-open-ai, langchain4j-bedrock, langchain4j-google-ai-gemini, langchain4j-mistral-ai and langchain4j-ollama are now stable at version 1.2.0. Experimental modules: most other LangChain4j modules are at 1.2.0-beta8 and remain experimental/unstable. Updated BOM: langchain4j-bom has been updated to 1.2.0, including the latest versions of all modules. Main improvements: support for reasoning/thinking in models; streaming of partial tool calls; an MCP option to automatically expose resources as tools; for OpenAI, the ability to set custom request parameters and access raw HTTP responses and SSE events; better error handling and documentation; and metadata filtering for Infinispan (cc Katia). And 1.3.0 is already available https://github.com/langchain4j/langchain4j/releases/tag/1.3.0 with two new experimental modules, langchain4j-agentic and langchain4j-agentic-a2a, which introduce a set of abstractions and utilities for building agentic applications.

Infrastructure

This time it really is the year of Linux on the desktop! 
https://www.lesnumeriques.com/informatique/c-est-enfin-arrive-linux-depasse-un-seuil-historique-que-microsoft-pensait-intouchable-n239977.html
Linux has crossed the 5% mark in the USA. The rise is largely driven by Linux-based systems in professional environments, on servers, and in some consumer uses. Microsoft, long dominant with Windows, regarded this threshold as hard to reach in the short term. Linux's success is also fuelled by the growing popularity of open source distributions, which are lighter, customisable and suited to varied uses. Cloud, IoT and server infrastructure run massively on Linux, contributing to the overall increase. This symbolic tipping point marks a shift of balance in the operating system ecosystem. Windows nonetheless retains a strong presence in some segments, notably among consumers and in traditional enterprises. The trend reflects the dynamism and growing maturity of Linux solutions, now credible and robust alternatives to proprietary offerings.

Cloud

Cloudflare's 1.1.1.1 drops off the internet for an hour https://blog.cloudflare.com/cloudflare-1-1-1-1-incident-on-july-14-2025/
On 14 July 2025, Cloudflare's public DNS service 1.1.1.1 suffered a major 62-minute outage, making the service unavailable for most users worldwide. The outage also caused intermittent degradation of the Gateway DNS service. The incident followed an update to the topology of Cloudflare's services that activated a configuration error introduced in June 2025. Because of this error, prefixes intended for the 1.1.1.1 service were accidentally included in a new data localization service (the Data Localization Suite), which disrupted anycast routing. 
As a result, users could not resolve domain names via 1.1.1.1, making most internet services unreachable for them. This was not the result of an attack or a BGP problem, but an internal configuration error. Cloudflare quickly identified the cause, corrected the configuration and put measures in place to prevent this type of incident in the future. Service returned to normal after roughly an hour of unavailability. The incident underlines the complexity and sensitivity of anycast infrastructure and the need for rigorous management of network configuration.

Web

The evolution of Node.js best practices https://kashw1n.com/blog/nodejs-2025/
Node.js in 2025: development is moving towards web standards, with fewer external dependencies and a better developer experience. ES Modules (ESM) by default: replacing CommonJS for better tooling and standardization with the web, with the node: prefix for built-in modules to avoid conflicts. Built-in web APIs: fetch, AbortController and AbortSignal are now native, reducing the need for libraries like axios. Built-in test runner: no more need for Jest or Mocha in most cases; it includes a watch mode and coverage reports. Advanced async patterns: heavier use of async/await with Promise.all() for parallelism, and AsyncIterators for event streams. Worker threads for parallelism: for CPU-heavy tasks, avoiding blocking the main event loop. Improved developer experience: built-in --watch mode (replaces nodemon) and --env-file support (replaces dotenv). Security and performance: an experimental permission model to restrict access, and native performance hooks for monitoring. 
Simplified distribution: single-executable applications make it easier to ship apps or command-line tools.

Apache ECharts 6 released after 12 years! https://echarts.apache.org/handbook/en/basics/release-note/v6-feature/
Apache ECharts 6.0: official release after 12 years of evolution, with 12 major upgrades for data visualization along three key dimensions. More professional visual presentation: new default theme (modern design), dynamic theme switching, dark mode support. Pushing the limits of data expression: new chart types (chord chart, beeswarm chart) and new features (jittering for dense scatter plots, broken axes), plus improved candlestick charts. Freedom of composition: a new matrix coordinate system; improved custom series (code reuse, npm publishing); new custom charts included (violin, contour, etc.); and better axis label layout.

Data and Artificial Intelligence

Grok 4 took itself for a Nazi because of its tools https://techcrunch.com/2025/07/15/xai-says-it-has-fixed-grok-4s-problematic-responses/
At launch, Grok 4 generated offensive answers, notably calling itself “MechaHitler” and adopting antisemitic statements. The behaviour stemmed from an automatic web search that misread a viral meme as the truth. Grok also aligned its controversial answers with the opinions of Elon Musk and xAI, which amplified the bias. xAI determined that these lapses were due to an internal update that included instructions encouraging offensive humour and alignment with Musk. 
To fix this, xAI removed the faulty code, reworked the system prompts, and imposed guidelines requiring Grok to perform independent analysis using diverse sources. Grok must now avoid any bias, no longer adopt politically incorrect humour, and analyse sensitive topics objectively. xAI apologised, explaining that the lapses were caused by a prompt problem and not by the model itself. The incident highlights the persistent alignment and safety challenges AI models face from indirect injections in online content. The fix is not a mere technical patch, but an example of the major ethical and accountability stakes in deploying AI at scale.

Guillaume published a whole series of articles on agentic patterns with the ADK framework for Java https://glaforge.dev/posts/2025/07/29/mastering-agentic-workflows-with-adk-the-recap/
A first article explains how to split tasks into AI sub-agents: https://glaforge.dev/posts/2025/07/23/mastering-agentic-workflows-with-adk-sub-agents/ A second article details how to organise agents sequentially: https://glaforge.dev/posts/2025/07/24/mastering-agentic-workflows-with-adk-sequential-agent/ A third article explains how to parallelise independent tasks: https://glaforge.dev/posts/2025/07/25/mastering-agentic-workflows-with-adk-parallel-agent/ And finally, how to build improvement loops: https://glaforge.dev/posts/2025/07/28/mastering-agentic-workflows-with-adk-loop-agents/ All of it in Java, of course :slightly_smiling_face:

Six weeks of coding with Claude https://blog.puzzmo.com/posts/2025/07/30/six-weeks-of-claude-code/
Orta shares his experience after six weeks of daily use of Claude Code, which has profoundly changed the way he codes. 
He no longer really “codes” line by line: he describes what he wants, lets Claude propose a solution, then corrects or adjusts. This shifts the focus to the outcome rather than the implementation, like going from painting to polaroids. Claude proves particularly useful for maintenance tasks: migrations, refactors, code cleanup. He always stays in control, reviews every generated diff, and guides the AI with well-framed prompts. He notes that it takes a few weeks to find the right rhythm: learning to slice up tasks and state expectations clearly. Simple tasks become almost instantaneous, but complex tasks still require experience and judgement. Claude Code is seen as a very good copilot, but it does not replace the developer who understands the whole system. The main gain is faster feedback and a much shorter iteration loop. Tools of this kind may well redefine how we think about and structure software development in the medium term.

Claude Code and MCP servers: how to turn your terminal into a super-powered assistant https://touilleur-express.fr/2025/07/27/claude-code-et-les-serveurs-mcp-ou-comment-transformer-ton-terminal-en-assistant-surpuissant/
Nicolas continues his study of Claude Code and explains how to use MCP servers to make Claude far more effective. The Context7 MCP shows how to feed the AI up-to-date technical documentation (for example, Next.js 15) to avoid hallucinations or errors. The Task Master MCP, another MCP server, turns a requirements document (PRD) into atomic, estimated tasks organised into a work plan. 
The Playwright MCP drives browsers and runs end-to-end tests, and the Digital Ocean MCP makes it easy to deploy the application to production. Not everything is ideal: quotas are reached within a few hours on a small application, and in some cases it remains far more efficient to do the work yourself (for an experienced coder). Nicolas follows up with an article on writing an MVP in around 20 hours: https://touilleur-express.fr/2025/07/30/comment-jai-code-un-mvp-en-une-vingtaine-dheures-avec-claude-code/

Augmented development: a politically correct opinion, but still… https://touilleur-express.fr/2025/07/31/le-developpement-augmente-un-avis-politiquement-correct-mais-bon/
Nicolas shares a nuanced (and slightly provocative) take on augmented development, where an AI such as Claude Code assists the developer without replacing them. He rejects the idea that it is “too magical” or “too easy”: it is a logical evolution of our craft, not a shortcut for the lazy. For him, a good developer is still one who structures their thinking well, knows how to frame a problem, slice it up and validate it, even if the AI helps them code faster. He recounts building an OAuth app, tested, styled and deployed within a few hours, without ever leaving the terminal thanks to Claude. This kind of tooling changes our relationship with time: we go from “I'll think about it” to “let me try a roughly working version right now”. He embraces this fast, imperfect approach: better a rough version shipped quickly than a project stuck in perfectionism. To him, the AI is a super intern: never tired, sometimes way off the mark, but devilishly productive when well briefed. He concludes that “augmented development” does not replace good developers… but average developers had better get on board, or risk being left behind. 
ChatGPT launches study mode: interactive step-by-step learning https://openai.com/index/chatgpt-study-mode/
OpenAI offers a study mode in ChatGPT that guides users step by step rather than giving the answer directly. The mode aims to encourage active thinking and deep learning. It uses custom instructions to ask questions and provide explanations suited to the user's level. Study mode supports cognitive-load management and stimulates metacognition, offering structured answers that ease progressive understanding of a topic. Available now to signed-in users, the mode will be integrated into ChatGPT Edu. The goal is to turn ChatGPT into a genuine digital tutor, helping students absorb knowledge better. Gemini has apparently just released a similar feature.

OpenAI launches GPT-OSS https://openai.com/index/introducing-gpt-oss/ https://openai.com/index/gpt-oss-model-card/
OpenAI has launched GPT-OSS, its first family of open-weight models since GPT-2. Two models are available, gpt-oss-120b and gpt-oss-20b, which are mixture-of-experts models designed for reasoning and agentic tasks. The models are distributed under the Apache 2.0 licence, allowing free use and customisation, including for commercial applications. gpt-oss-120b performs close to OpenAI's o4-mini, while gpt-oss-20b is comparable to o3-mini. OpenAI has also open-sourced a rendering tool called Harmony, in Python and Rust, to ease adoption. The models are optimised to run locally and are supported by platforms such as Hugging Face and Ollama. 
OpenAI carried out safety research to ensure the models could not be fine-tuned for malicious uses in the biological, chemical or cyber domains.

Anthropic launches Opus 4.1 https://www.anthropic.com/news/claude-opus-4-1
Anthropic has published Claude Opus 4.1, an update to its language model. The new version focuses on improved performance in coding, reasoning, and research and data-analysis tasks. The model scored 74.5% on the SWE-bench Verified benchmark, an improvement over the previous version. It particularly excels at multi-file code refactoring and is capable of in-depth research. Claude Opus 4.1 is available to paying Claude users as well as via the API, Amazon Bedrock and Google Cloud's Vertex AI, at the same prices as Opus 4. It is presented as a drop-in replacement for Claude Opus 4, with higher performance and accuracy on real-world programming tasks.

OpenAI summer update: GPT-5 is out https://openai.com/index/introducing-gpt-5/
Details: https://openai.com/index/gpt-5-new-era-of-work/ https://openai.com/index/introducing-gpt-5-for-developers/ https://openai.com/index/gpt-5-safe-completions/ https://openai.com/index/gpt-5-system-card/
Major boost in cognitive capabilities: GPT-5 shows markedly higher reasoning, abstraction and understanding than previous models. Two main variants: gpt-5-main, fast and efficient for general tasks, and gpt-5-thinking, slower but specialised for complex tasks requiring deep reflection. A built-in smart router automatically selects the version best suited to the task (fast or thoughtful), with no user intervention. 
An even larger context window: GPT-5 can process longer texts (up to 1 million tokens in some versions), useful for entire documents or projects. Significantly fewer hallucinations: GPT-5 gives more reliable answers, with fewer invented errors or false claims. More neutral, less sycophantic behaviour: it has been trained to better resist excessive alignment with the user's opinions. Better at following complex instructions: GPT-5 understands long, implicit or nuanced directives better. A “safe completions” approach: refusals are replaced by useful but safe answers, with the model trying to respond cautiously rather than block. Ready for large-scale professional use: optimised for enterprise work such as writing, programming, summarising, automation and task management. Specific improvements for coding: GPT-5 is better at writing code, understanding complex software contexts, and using development tools. A faster, smoother user experience thanks to optimised orchestration between the sub-models. Stronger agentic capabilities: GPT-5 can serve as the basis for autonomous agents that pursue goals with little human intervention. Mastered multimodality (text, image, audio): GPT-5 handles multiple formats more fluidly within a single model. Developer-focused features: clearer documentation, a unified API, more transparent and customisable models. Greater contextual personalisation: the system adapts better to a user's style, tone and preferences without repeated instructions. Optimised energy and hardware use: thanks to the internal router, resources are used more efficiently according to task complexity. 
Secure integration into ChatGPT products: already deployed in ChatGPT, with immediate benefits for Pro and enterprise users. A unified model for every use: a single system that can go from light conversation to scientific analysis or complex code. Safety and alignment first: GPT-5 was designed from the outset to minimise abuse, bias and undesirable behaviour. Not yet AGI: OpenAI insists that despite its impressive capabilities, GPT-5 is not an artificial general intelligence.

No, juniors are not obsolete despite AI! (says GitHub) https://github.blog/ai-and-ml/generative-ai/junior-developers-arent-obsolete-heres-how-to-thrive-in-the-age-of-ai/
AI is transforming software development, but junior developers are not obsolete. New learners are well positioned because they are already familiar with AI tools. The goal is to build skills for working with AI, not to be replaced. Creativity and curiosity are key human qualities. Five ways to stand out: use AI (e.g. GitHub Copilot) to learn faster, not just code faster (tutor mode, temporarily disabling autocompletion); build public projects that demonstrate your skills (including in AI); master the essential GitHub workflows (GitHub Actions, open source contribution, pull requests); sharpen your expertise by reviewing code (ask questions, look for patterns, take notes); and debug smarter and faster with AI (e.g. Copilot Chat for explanations, fixes, tests).

Write your first AI agent with A2A and WildFly, by Emmanuel Hugonnet https://www.wildfly.org/news/2025/08/07/Building-your-First-A2A-Agent/
Agent2Agent (A2A) protocol: an open standard for universal AI-agent interoperability. 
It enables efficient communication and collaboration between agents from different vendors and frameworks, creating unified multi-agent ecosystems that automate complex workflows. The article is a guide to building a first A2A agent (a weather agent) in WildFly, using the A2A Java SDK for Jakarta Servers, the WildFly AI Feature Pack, an LLM (Gemini) and a Python tool (MCP); the agent conforms to A2A v0.2.5. Prerequisites: JDK 17+, Apache Maven 3.8+, a Java IDE, a Google AI Studio API key, Python 3.10+, uv. Building the weather agent: create the LLM service, a Java interface (WeatherAgent) that uses LangChain4j to interact with an LLM and a Python MCP tool (get_alerts and get_forecast functions). Define the A2A agent (via CDI): an Agent Card provides the agent's metadata (name, description, URL, capabilities, and skills such as “weather_search”), while an Agent Executor handles incoming A2A requests, extracts the user message, calls the LLM service and formats the response. Expose the agent by registering a JAX-RS application for the endpoints. Deploy and test: configure Google's A2A-inspector tool (in a Podman container), build the Maven project, set the environment variables (e.g. GEMINI_API_KEY), and start the WildFly server. Conclusion: a minimal transformation turns an AI application into an A2A agent, allowing AI agents to collaborate and share information regardless of their underlying infrastructure.

Tooling

IntelliJ IDEA moves to a unified distribution https://blog.jetbrains.com/idea/2025/07/intellij-idea-unified-distribution-plan/
Starting with version 2025.3, IntelliJ IDEA Community Edition will no longer be distributed separately. A single unified IntelliJ IDEA will combine the features of the Community and Ultimate editions, with the advanced Ultimate features accessible via subscription. 
Users without a subscription will get a free version richer than the current Community Edition. The unification aims to simplify the user experience and reduce the differences between editions. Community users will be migrated automatically to the new unified version. Ultimate features can be enabled temporarily with a single click. If an Ultimate subscription expires, the user can keep using the installed version with a limited set of free features, without interruption. The change reflects JetBrains' commitment to open source and to adapting to the community's needs. YAML anchor support in GitHub Actions https://github.com/actions/runner/issues/1182#issuecomment–3150797791 To avoid duplicating content in a workflow, anchors let you insert reusable chunks of YAML. A feature awaited for years, and available in GitLab for a long time already. It was rolled out on August 4. Be careful not to overuse it, since such documents are not that easy to read. Gemini CLI adds custom commands, like Claude https://cloud.google.com/blog/topics/developers-practitioners/gemini-cli-custom-slash-commands But they are in TOML format, so they cannot be shared with Claude :disappointed: Automating your AI workflows with Claude Code hooks https://blog.gitbutler.com/automate-your-ai-workflows-with-claude-code-hooks/ Claude Code offers hooks that let you run scripts at various points in a session, for example at the start, when tools are used, or at the end. These hooks make it easy to automate tasks such as managing Git branches, sending notifications, or integrating with other tools. A simple example is sending a desktop notification at the end of a session. 
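To make the anchor feature concrete, here is a minimal sketch of a workflow using an anchor; the job names and commands are invented for the example, not taken from the announcement:

```yaml
# Define a step list once with an anchor (&shared-steps),
# then reuse it elsewhere with an alias (*shared-steps).
jobs:
  test:
    runs-on: ubuntu-latest
    steps: &shared-steps
      - uses: actions/checkout@v4
      - run: make test
  nightly-test:
    runs-on: ubuntu-latest
    steps: *shared-steps
```

As the episode notes, this removes duplication, but heavily aliased YAML can be harder to read than the duplicated version.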
Hooks are configured via three separate JSON files depending on scope: user, project, or local. On macOS, sending notifications requires a specific permission granted via the "Script Editor" application. You need an up-to-date version of Claude Code to use these hooks. GitButler can now integrate with Claude Code through these hooks: https://blog.gitbutler.com/parallel-claude-code/ JetBrains' Git client soon available standalone https://lp.jetbrains.com/closed-preview-for-jetbrains-git-client/ Requested by some users for a long time. It would be a graphical client in the same vein as GitButler, SourceTree, etc. Apache Maven 4 is coming… and the mvnup utility will help you upgrade https://maven.apache.org/tools/mvnup.html It fixes known incompatibilities, cleans up redundancies and default values (versions, for example) that are unnecessary for Maven 4, reformats according to Maven conventions… A GitHub Action for Gemini CLI https://blog.google/technology/developers/introducing-gemini-cli-github-actions/ Google launched Gemini CLI GitHub Actions, an AI agent that works as a "coding teammate" for GitHub repositories. The tool is free and designed to automate routine tasks such as issue triage, pull request review, and other development chores. It acts both as an autonomous agent and as a collaborator developers can call on demand, notably by mentioning it in an issue or pull request. The tool is based on Gemini CLI, an open-source AI agent that brings the Gemini model directly into the terminal. It runs on the GitHub Actions infrastructure, which isolates processes in separate containers for security reasons. Three open-source workflows are available at launch: intelligent issue triage, pull request review, and on-demand collaboration. 
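As a sketch of the desktop-notification hook described above, a settings file might look like this; the exact event name, file layout, and command are assumptions based on the episode's description, not copied from the article:

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "osascript -e 'display notification \"Session finished\" with title \"Claude Code\"'"
          }
        ]
      }
    ]
  }
}
```

On macOS the `osascript` notification only appears once the permission mentioned above has been granted to Script Editor.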
No need for MCP, code is all you need https://lucumr.pocoo.org/2025/7/3/tools/ Armin stresses that he is not a fan of the MCP (Model Context Protocol) in its current form: it lacks composability and demands too much context. He notes that for the same task (e.g. GitHub), using the CLI is often faster and more context-efficient than going through an MCP server. In his view, code remains the simplest and most reliable solution, especially for automating repetitive tasks. He prefers writing clear scripts rather than relying on LLM inference: it makes verification and maintenance easier and avoids subtle errors. For recurring tasks, if you automate them, better to do it with reusable code than to let the AI guess every time. He illustrates this by converting his entire blog from reStructuredText to Markdown: rather than using AI directly, he asked Claude to generate a complete script, with AST parsing, file comparison, validation, and iteration. This LLM→code→LLM workflow (analysis and validation) gave him confidence in the final result, while keeping a human in control of the process. He argues that MCP does not allow this kind of reliable automated pipeline, because it introduces too much inference and too much per-call variation. For him, code remains the best way to keep control, reproducibility, and clarity in automated workflows. MCP vs CLI… https://www.async-let.com/blog/my-take-on-the-mcp-verses-cli-debate/ Cameron recounts his experience building the XcodeBuildMCP server, which helped him better understand the debate between serving AI via MCP and letting AI drive the system's CLIs directly. In his view, CLIs remain preferable for expert developers who want control, transparency, performance, and simplicity. 
But MCP servers excel at complex workflows, persistent context, and security constraints, and they make access easier for less experienced users. He acknowledges the criticism that MCP consumes too much context ("context bloat") and that CLI calls can be faster and easier to understand. However, he points out that many problems stem from the quality of client implementations, not from the MCP protocol itself. For him, a good MCP server can expose carefully designed tools that make the AI's job easier (for example, returning structured data rather than raw text to parse). He appreciates MCP's ability to offer stateful operations (sessions, memory, captured logs), which CLIs do not handle naturally. Some scenarios simply cannot work through a CLI (no shell available), whereas MCP, as an independent protocol, remains usable by any client. His verdict: there is no universal solution; each context deserves its own evaluation, and neither MCP nor CLI should be imposed at all costs. Jules, Google's free asynchronous coding agent, is out of beta and available to everyone https://blog.google/technology/google-labs/jules-now-available/ Jules, an asynchronous coding agent, is now publicly available. Powered by Gemini 2.5 Pro. Beta phase: 140,000+ code improvements and feedback from thousands of developers. Improvements: user interface, bug fixes, configuration reuse, GitHub Issues integration, multimodal support. Gemini 2.5 Pro improves coding plans and code quality. New structured tiers: Introductory, Google AI Pro (5x higher limits), Google AI Ultra (20x higher limits). Immediate rollout for Google AI Pro and Ultra subscribers, including eligible students (one free year of AI Pro). 
Architecture Making the case for reducing technical debt: a real challenge https://www.lemondeinformatique.fr/actualites/lire-valoriser-la-reduction-de-la-dette-technique-mission-impossible–97483.html Technical debt is a poorly understood concept that is hard to justify financially to executive management. CIOs struggle to measure the debt precisely, allocate dedicated budgets, and prove a clear return on investment. That difficulty keeps technical-debt reduction projects from being prioritized against initiatives deemed more urgent or strategic. Some companies are gradually integrating technical-debt management into their development processes. Approaches such as Software Crafting aim to improve code quality and limit the accumulation of debt. The lack of suitable tools to measure progress makes the effort even more complex. In short, reducing technical debt remains a delicate mission that requires innovation, method, and internal awareness. Don't mock it… https://martinelli.ch/why-i-dont-use-mocking-frameworks-and-why-you-might-not-need-them-either/ https://blog.tremblay.pro/2025/08/not-using-mocking-frmk.html The author prefers hand-written fakes or stubs to mocking frameworks like Mockito or EasyMock. Mocking frameworks isolate code, but often lead to: Tight coupling between tests and implementation details. Tests that validate the mock rather than the real behavior. Two core principles guide his approach: Favor a functional design, with pure business logic (side-effect-free functions). Control your test data: for example by using real databases (via Testcontainers) rather than simulating them. 
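A hand-rolled fake in the spirit of those principles might look like this; the sketch is in Python rather than the Java discussed in the articles, and all names are invented for the example:

```python
# A fake is a real, tiny implementation, here backed by a dict,
# so tests exercise behavior instead of mock expectations.

class InMemoryUserRepository:
    """Hand-written fake repository, no mocking framework involved."""
    def __init__(self):
        self._users = {}

    def save(self, user_id, name):
        self._users[user_id] = name

    def find(self, user_id):
        return self._users.get(user_id)


def greet_user(repo, user_id):
    """Business logic that depends only on the repository's observable behavior."""
    name = repo.find(user_id)
    return f"Hello, {name}!" if name else "Hello, stranger!"


# The test stays decoupled from implementation details:
repo = InMemoryUserRepository()
repo.save(1, "Ada")
print(greet_user(repo, 1))
print(greet_user(repo, 2))
```

Because the fake implements real behavior, refactoring the production repository does not break these tests, which is exactly the fragility argument made against mock-heavy suites.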
In his practice, the only case where an external mock is used is external HTTP services, and even then he prefers to fake only the transport rather than the business behavior. The result: tests become simpler, faster to write, more reliable, and less fragile to code changes. The article concludes that if you design your code properly, you may well not need mocking frameworks at all. Henri Tremblay's blog post in response nuances these takeaways a bit. Methodologies What does it mean to be a good PM? (Product Manager) Article by Chris Perry, a PM at Google: https://thechrisperry.substack.com/p/being-a-good-pm-at-google The PM role is hard: a demanding job where you have to be the most invested person on the team to ensure success. 1. Shipping is all that matters: the absolute priority. Better to ship and iterate quickly than to chase theoretical perfection. A shipped product lets you learn from reality. 2. Sell the longing for the open sea: the best way to move a project forward is to inspire the team with a strong, desirable vision. Show the "why". 3. Use your product every day: non-negotiable for success. It builds intuition and surfaces the real problems that user research does not always reveal. 4. Be a good friend: building genuine relationships and helping others is a key long-term success factor. Trust is the foundation of fast execution. 5. Give more than you take: always look to help and collaborate. Cooperation is the optimal long-run strategy. Don't be possessive about your ideas. 6. Use the right lever: to get a decision, identify the person who actually has the power to say "yes", and don't get blocked by non-decision-makers' opinions. 7. Only go where you add value: fill the gaps, do the thankless work nobody else wants to do. 
Also know when to step aside (meetings, projects) when you are not useful. 8. Success has many parents, failure is an orphan: if the product succeeds, it is a team success. If it fails, it is the PM's fault. You have to own the final responsibility. Conclusion: the PM is a conductor. They cannot play every instrument, but their role is to humbly orchestrate everyone's work into something harmonious. Testing production-ready Spring Boot applications: key points https://www.wimdeblauwe.com/blog/2025/07/30/how-i-test-production-ready-spring-boot-applications/ The author (Wim Deblauwe) details how he structures tests in a Spring Boot application headed for production. The project automatically includes the spring-boot-starter-test dependency, which bundles JUnit 5, AssertJ, Mockito, Awaitility, JsonAssert, XmlUnit, and Spring's testing utilities. Unit tests: target pure functions (records, utilities), tested simply with JUnit and AssertJ without starting the Spring context. Use-case tests: orchestrate business logic, typically via use cases that rely on one or more repositories. JPA/repository tests: verify interactions with the database through tests performing CRUD operations (with a Spring context for the persistence layer). Controller tests: exercise the web endpoints (e.g. @WebMvcTest), often with MockBean to stub dependencies. Full integration tests: start the whole Spring context (@SpringBootTest) to test the application end to end. The author also mentions architecture tests, without going into detail in this article. The result: a test pyramid ranging from the fastest (unit) to the most complete (integration), delivering reliability, speed, and coverage without needless overhead. 
Security Bitwarden ships an MCP server so agents can access passwords https://nerds.xyz/2025/07/bitwarden-mcp-server-secure-ai/ Bitwarden introduces an MCP (Model Context Protocol) server designed to securely integrate AI agents into password-management workflows. The server is local-first: all interactions and sensitive data stay on the user's machine, preserving the zero-knowledge encryption principle. Integration goes through the Bitwarden CLI, letting AI agents generate, retrieve, modify, and lock credentials via secure commands. The server can be self-hosted for maximum control over data. The MCP protocol is an open standard for uniformly connecting AI agents to third-party data sources and tools, simplifying integrations between LLMs and applications. A demo with Claude (Anthropic's AI agent) shows the AI interacting with the Bitwarden vault: checking status, unlocking the vault, generating or updating credentials, all without direct human intervention. Bitwarden emphasizes a security-first approach but acknowledges the risks of autonomous AI use. Using a private local LLM is strongly recommended to limit vulnerabilities. NVIDIA has a critical security flaw https://www.wiz.io/blog/nvidia-ai-vulnerability-cve–2025–23266-nvidiascape It is a container-escape flaw in the NVIDIA Container Toolkit. Severity is rated critical, with a CVSS score of 9.0. The vulnerability lets a malicious container gain full root access on the host. The root cause is a misconfiguration of OCI hooks in the toolkit. 
Exploitation is trivial, for example with a Dockerfile of only three lines. The main risk is breaking the isolation between different customers on shared GPU cloud infrastructure. Affected versions include all NVIDIA Container Toolkit versions up to 1.17.7 and NVIDIA GPU Operator up to version 25.3.1. To mitigate the risk, update to the latest patched versions. In the meantime, the problematic hooks can be disabled in the configuration to limit exposure. The flaw highlights the importance of hardening shared GPU environments and AI container management. The Tea app data leak: the essentials https://knowyourmeme.com/memes/events/the-tea-app-data-leak Tea is an app launched in 2023 that lets women leave anonymous reviews of men they have dated. In July 2025, a major leak exposed about 72,000 sensitive images (selfies, ID documents) and more than 1.1 million private messages. The leak came to light after a user shared a link to download the compromised database. The affected data mostly concerned users who signed up before February 2024, when the app migrated to more secure infrastructure. In response, Tea plans to offer identity-protection services to affected users. Flaw in the npm package is: a supply-chain attack https://socket.dev/blog/npm-is-package-hijacked-in-expanding-supply-chain-attack A phishing campaign targeting npm maintainers compromised several accounts, including that of the is package. Compromised versions of is (notably 3.3.1 and 5.0.0) contained a JavaScript malware loader targeting Windows systems. 
The malware gave attackers remote access via WebSocket, potentially allowing arbitrary code execution. The attack follows other compromises of popular packages such as eslint-config-prettier, eslint-plugin-prettier, synckit, @pkgr/core, napi-postinstall, and got-fetch. All of these packages were published without any commit or PR on their respective GitHub repositories, signaling unauthorized access to maintainer tokens. The spoofed domain [npnjs.com](http://npnjs.com) was used to harvest access tokens via deceptive phishing emails. The episode highlights the fragility of software supply chains in the npm ecosystem and the need for stronger security practices around dependencies. Automated security reviews with Claude Code https://www.anthropic.com/news/automate-security-reviews-with-claude-code Anthropic has launched automated security features for Claude Code, a command-line AI coding assistant. These features respond to the growing need to keep code secure as AI tools dramatically accelerate software development. /security-review command: developers can run this command in their terminal to ask Claude to identify security vulnerabilities, including SQL injection risks, cross-site scripting (XSS) vulnerabilities, authentication and authorization flaws, and insecure data handling. Claude can also suggest and implement fixes. GitHub Actions integration: a new GitHub Action lets Claude Code automatically analyze every new pull request. 
The tool reviews code changes for vulnerabilities, applies customizable rules to filter false positives, and comments directly on the pull request with the detected issues and recommended fixes. These features are designed to create a consistent security-review process and integrate with existing CI/CD pipelines, ensuring no code reaches production without a baseline security review. Law, society and organization Google hires Windsurf's key people https://www.blog-nouvelles-technologies.fr/333959/openai-windsurf-google-deepmind-codage-agentique/ Windsurf was set to be acquired by OpenAI. Google is not making an acquisition offer but is poaching a few key Windsurf people, including its CEO. Windsurf therefore remains independent, but without some of its brains. The new leaders are the former heads of sales, so it is no longer really a tech company. Why did the $3 billion deal fall through? Unknown, but technological divergence and independence may be part of the cause. The departing people will work at DeepMind on agentic coding. Opinion Article: https://www.linkedin.com/pulse/dear-people-who-think-ai-low-skilled-code-monkeys-future-jan-moser-svade/ Jan Moser criticizes those who think AI and low-skilled developers can replace competent software engineers. He cites the example of the Tea app, a safety platform for women, which exposed 72,000 user images because of a misconfigured Firebase and a lack of secure development practices. He stresses that the absence of automated checks and good security practices is what enabled the data leak. Moser warns that tools like AI cannot compensate for missing software-engineering skills, particularly in security, error handling, and code quality. 
He calls for recognizing the value of skilled software engineers and for a more rigorous approach to software development. YouTube rolls out age-estimation technology to identify teens in the United States https://techcrunch.com/2025/07/29/youtube-rolls-out-age-estimatation-tech-to-identify-u-s-teens-and-apply-additional-protections/ A very hot topic, especially in the UK but not only there… YouTube is starting to roll out AI-based age-estimation technology to identify teenage users in the United States, regardless of the age declared at sign-up. The technology analyzes various behavioral signals, such as watch history, the categories of videos viewed, and the age of the account. When a user is identified as a teenager, YouTube applies additional protections, including: disabling personalized ads; enabling digital-wellbeing tools, such as screen-time and bedtime reminders; limiting repeated viewing of sensitive content, such as body-image content. A user incorrectly identified as a minor can verify their age with a government ID, a credit card, or a selfie. The initial rollout covers a small group of US users and will expand gradually. The initiative is part of YouTube's efforts to strengthen the safety of young users online. Mistral AI: contributing to an environmental standard for AI https://mistral.ai/news/our-contribution-to-a-global-environmental-standard-for-ai Mistral AI has carried out the first full life-cycle analysis of an AI model, in collaboration with several partners. The study quantifies the environmental impact of the Mistral Large 2 model in terms of greenhouse-gas emissions, water consumption, and resource depletion. 
The training phase generated 20.4 kilotonnes of CO₂ equivalent, consumed 281,000 m³ of water, and used 660 kg Sb-eq (mineral consumption). For a 400-token response, the marginal impact is small but not negligible: 1.14 grams of CO₂, 45 mL of water, and 0.16 mg of antimony equivalent. Mistral proposes three indicators to evaluate this impact: the absolute impact of training, the marginal impact of inference, and the ratio of inference to total life-cycle impact. The company stresses the importance of choosing a model suited to the use case to limit the environmental footprint. Mistral calls for more transparency and for the adoption of international standards enabling clear comparisons between models. AI promised to make us more efficient… mostly it makes us work more https://afterburnout.co/p/ai-promised-to-make-us-more-efficient AI tools were supposed to automate tedious tasks and free up time for strategic and creative work. In reality, the time saved is often immediately reinvested in other tasks, creating overload. Users believe they are more productive with AI, but the data contradicts that impression: one study shows developers using AI take 19% longer to complete their tasks. The 2024 DORA report observes an overall drop in team performance as AI use increases: –1.5% throughput and –7.2% delivery stability for a +25% increase in AI adoption. AI does not reduce cognitive load, it shifts it: writing prompts, verifying dubious results, constant adjustments… This is exhausting and limits real focus time. The cognitive overload creates a kind of mental debt: you do not really save time, you pay for it in another way. The real problem is our productivity culture, which pushes constant optimization at the risk of fueling burnout. 
Three concrete suggestions: Rethink productivity not as time saved but as energy preserved. Be selective about which AI tools you use, based on how they actually feel rather than on the hype. Accept the J-curve: AI can be useful, but requires deep adjustments to produce real gains. The real productivity hack? Sometimes, slowing down to stay lucid and sustainable. Conferences MCP Summit Europe https://mcpdevsummit.ai/ JavaOne returns in 2026 https://inside.java/2025/08/04/javaone-returns–2026/ JavaOne, the conference dedicated to the Java community, makes its big return to the Bay Area on March 17–19, 2026. After the success of the 2025 edition, the return continues the conference's original mission: bringing the community together to learn, collaborate, and innovate. The conference list comes from Developers Conferences Agenda/List by Aurélie Vache and contributors: August 25–27, 2025: SHAKA Biarritz - Biarritz (France) September 5, 2025: JUG Summer Camp 2025 - La Rochelle (France) September 12, 2025: Agile Pays Basque 2025 - Bidart (France) September 15, 2025: Agile Tour Montpellier - Montpellier (France) September 18–19, 2025: API Platform Conference - Lille (France) & Online September 22–24, 2025: Kernel Recipes - Paris (France) September 22–27, 2025: La Mélée Numérique - Toulouse (France) September 23, 2025: OWASP AppSec France 2025 - Paris (France) September 23–24, 2025: AI Engineer Paris - Paris (France) September 25, 2025: Agile Game Toulouse - Toulouse (France) September 25–26, 2025: Paris Web 2025 - Paris (France) September 30–October 1, 2025: PyData Paris 2025 - Paris (France) October 2, 2025: Nantes Craft - Nantes (France) October 2–3, 2025: Volcamp - Clermont-Ferrand (France) October 3, 2025: DevFest Perros-Guirec 2025 - Perros-Guirec (France) October 6–7, 2025: Swift Connection 2025 - Paris (France) October 6–10, 2025: Devoxx Belgium - Antwerp (Belgium) October 7, 2025: BSides Mulhouse - 
Mulhouse (France) October 7–8, 2025: Agile en Seine - Issy-les-Moulineaux (France) October 8–10, 2025: SIG 2025 - Paris (France) & Online October 9, 2025: DevCon #25 : informatique quantique - Paris (France) October 9–10, 2025: Forum PHP 2025 - Marne-la-Vallée (France) October 9–10, 2025: EuroRust 2025 - Paris (France) October 16, 2025: PlatformCon25 Live Day Paris - Paris (France) October 16, 2025: Power 365 - 2025 - Lille (France) October 16–17, 2025: DevFest Nantes - Nantes (France) October 17, 2025: Sylius Con 2025 - Lyon (France) October 17, 2025: ScalaIO 2025 - Paris (France) October 17–19, 2025: OpenInfra Summit Europe - Paris (France) October 20, 2025: Codeurs en Seine - Rouen (France) October 23, 2025: Cloud Nord - Lille (France) October 30–31, 2025: Agile Tour Bordeaux 2025 - Bordeaux (France) October 30–31, 2025: Agile Tour Nantais 2025 - Nantes (France) October 30–November 2, 2025: PyConFR 2025 - Lyon (France) November 4–7, 2025: NewCrafts 2025 - Paris (France) November 5–6, 2025: Tech Show Paris - Paris (France) November 6, 2025: dotAI 2025 - Paris (France) November 6, 2025: Agile Tour Aix-Marseille 2025 - Gardanne (France) November 7, 2025: BDX I/O - Bordeaux (France) November 12–14, 2025: Devoxx Morocco - Marrakech (Morocco) November 13, 2025: DevFest Toulouse - Toulouse (France) November 15–16, 2025: Capitole du Libre - Toulouse (France) November 19, 2025: SREday Paris 2025 Q4 - Paris (France) November 19–21, 2025: Agile Grenoble - Grenoble (France) November 20, 2025: OVHcloud Summit - Paris (France) November 21, 2025: DevFest Paris 2025 - Paris (France) November 27, 2025: DevFest Strasbourg 2025 - Strasbourg (France) November 28, 2025: DevFest Lyon - Lyon (France) December 1–2, 2025: Tech Rocks Summit 2025 - Paris (France) December 4–5, 2025: Agile Tour Rennes - Rennes (France) December 5, 2025: DevFest Dijon 2025 - Dijon (France) December 9–11, 2025: APIdays Paris - Paris (France) December 9–11, 2025: Green IO Paris - Paris (France) December 10–11, 2025: Devops REX - Paris (France) December 10–11, 2025: Open Source Experience - Paris (France) December 11, 2025: Normandie.ai 2025 - Rouen (France) January 28–31, 2026: SnowCamp 2026 - Grenoble (France) February 2–6, 2026: Web Days Convention - Aix-en-Provence (France) February 3, 2026: Cloud Native Days France 2026 - Paris (France) February 12–13, 2026: Touraine Tech #26 - Tours (France) April 22–24, 2026: Devoxx France 2026 - Paris (France) April 23–25, 2026: Devoxx Greece - Athens (Greece) June 17, 2026: Devoxx Poland - Krakow (Poland) Contact us To react to this episode, come discuss on the Google group https://groups.google.com/group/lescastcodeurs Contact us via X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com Submit a crowdcast or a crowdquestion Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs All episodes and all the info at https://lescastcodeurs.com/

Kodsnack
Kodsnack 655 - Gratis prestanda

Kodsnack

Play Episode Listen Later Aug 12, 2025 55:18


Fredrik and Tobias discuss a suitably mysterious bug that Tobias chased down, and along the way cover registers and vectorization. Since last time, Tobias has helped ship his first game at Ubisoft and describes what there was to do at the compiler level six months before an Assassin's Creed game releases. But the main topic is vectorization. It all started, of course, with a strange bug, one that takes a couple of dives into how processors and compilers work to explain. A big thank-you to Cloudnet, who sponsor our VPS! Comments, questions, or tips? We are @kodsnack, @thieta, @krig, and @bjoreman on Mastodon, have a page on Facebook, and can be emailed at info@kodsnack.se if you want to write at length. We read everything that is sent in. If you like Kodsnack, please review us in iTunes! You can also support the podcast by buying us a coffee (or two!) on Ko-fi, or by buying something in our shop. Links Episode 581 Amanda Assassin's Creed Shadows Anvil Profile-guided optimization Bit masking Perforce Git bisect Support us on Ko-fi! Autovectorization, or loop vectorization SSE, SSE 2, AVX CPU registers Pentium XOR Scalar SIMD - Single instruction, multiple data Neon CPU pipelining Micro-ops Compiler schedulers Snowdrop JIT - just-in-time compilation Raw string Expedition 33 The video on the making of Expedition 33 Titles Back from episode 581 Sporadic guest Time has flown, as it does Then there is work to do Free performance Before the GPU takes over Two cubes on top of each other Where in the compiler did this go off the rails? Vectorization magic Two big arrays describing something Inefficient to do it serially Not particularly ergonomic I can vectorize this away for you Bit-masked the wrong bit This is worth the trouble Millions of arrays and loops
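The episode's core idea, processing several array elements per instruction instead of one at a time, can be illustrated with a toy sketch; this is pure Python rather than the C++ discussed in the episode, and only mimics what SSE/AVX lanes do in hardware:

```python
# Scalar loop: one addition per iteration, what unvectorized code does.
def add_scalar(a, b):
    return [x + y for x, y in zip(a, b)]

# "4-wide" loop: handle four elements per step, the way one SSE
# register holds four 32-bit floats and adds them in a single instruction.
def add_simd_like(a, b, width=4):
    out = []
    for i in range(0, len(a), width):
        # One conceptual "instruction" covering a whole lane of elements.
        out.extend(x + y for x, y in zip(a[i:i + width], b[i:i + width]))
    return out

a = [1.0, 2.0, 3.0, 4.0, 5.0]
b = [10.0, 20.0, 30.0, 40.0, 50.0]
assert add_scalar(a, b) == add_simd_like(a, b) == [11.0, 22.0, 33.0, 44.0, 55.0]
```

In real code the compiler's autovectorizer performs this transformation on the scalar loop itself, which is the "free performance" of the episode title, and also where subtle bugs like the one discussed can creep in.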

Packet Pushers - Full Podcast Feed
PP071: SSE Vendor Test Results; Can HPE and Juniper Get Along?


Play Episode Listen Later Jul 22, 2025 46:20


CyberRatings, a non-profit that performs independent testing of security products and services, has released the results of comparative tests it conducted on Security Service Edge, or SSE, services. Tested vendors include Cisco, Cloudflare, Fortinet, Palo Alto Networks, Skyhigh Security, Versa Networks, and Zscaler. We look at what was tested and how, highlight results, and discuss...


Everything Co-op with Vernon Oakes
Michael Peck & Chris Clamp 6/26/2025


Play Episode Listen Later Jul 18, 2025 53:59


June 26, 2025 - Michael Peck and Dr. Christina Clamp discuss the second volume of "Humanity@Work&Life: Global Diffusion of the Mondragon Cooperative Ecosystem Experience." Michael Peck co-founded 1worker1vote in 2014, alongside ten advisory board members, to build on the 2009 United Steelworkers/Mondragon Collaboration MOU and the 2012 union-coop model. He currently serves as the organization's executive director. In early 2015, 1worker1vote was incorporated as a New York 501(c)(3) by CUNY Law School's Community Economic Development Clinic. Drawing inspiration from Mondragon's 70-year cooperative ecosystem, 1worker1vote is leading the "Good Trouble Capitalism" and "Generation Union" campaigns under its 2025 initiative. These efforts promote global Social and Solidarity Economy (SSE) principles, community enterprise development, authentic sustainability metrics, predistributive financing, and cooperative-mutualist housing best practices. Central to its mission is advancing hybrid worker ownership and workplace democracy through union-coop models. Current collaborations include: the Coalition for Affordable, Cooperative-Mutualist Housing (NY project); ASETT (a Mondragon-inspired SSE think-and-do tank); UNRISD and ASETT on Sustainable Development Performance Indicators; The Mutualist Society; American Sustainable Business Network; Coop Cincy; NewsSocial Coop (UK); Worx Printing (union-coop); Blue-Green Alliance; and the Humanity@Work&Life publications. Dr. Christina Clamp is heralded for her diverse work grounded in the values of civil rights, social justice, and an inclusive economy. She is best known for her research on Mondragon, the world's largest worker cooperative. The results of her deep interviews with Mondragon managers and founders continue to inform human resource strategies for worker co-ops worldwide. Her extensive list of publications includes, most recently, a collection of 30 essays highlighting the story of Mondragon and its ongoing influence in the U.S., 
UK, Korea, and Germany: Humanity@Work&Life, coedited with Michael Peck. For more than 40 years, Professor Clamp taught college courses on cooperatives and led a master's program in community economic development at Southern New Hampshire University. As an activist professor, Chris expected her students to be engaged with community groups, particularly those that support existing and developing co-ops. Her work crosses sectors in cooperative development: from cutting-edge research on worker and shared-services cooperatives, to training generations of cooperators, to building and connecting cooperatives to broader movements for community economic development and the social solidarity economy, Chris is a steadfast champion of cooperatives. Chris serves on the boards of the Local Enterprise Assistance Fund (LEAF), The ICA Group, and The Fund for Jobs Worth Owning. "Humanity@Work&Life: Global Diffusion of the Mondragon Cooperative Ecosystem Experience, 2nd Edition," published by Oak Tree Press, frames a collective labor of earned merit, vision, and determination by 36 contributors across six countries and three continents, showing how solidarity, innovation, and conviction forge sustained local and global social-economy practice on behalf of the greater common good.

Les Cast Codeurs Podcast
LCC 328 - Expert généraliste cherche Virtual Thread


Play Episode Listen Later Jul 16, 2025 90:13


In this episode, Emmanuel and Antonio discuss various development topics: applets (yes, really), iOS apps developed on Linux, the A2A protocol, accessibility, command-line AI coding assistants (you won't escape them)… but also methodological and architectural approaches such as hexagonal architecture, tech radars, the expert generalist, and much more. Recorded July 11, 2025. Download the episode LesCastCodeurs-Episode-328.mp3, or watch it on YouTube. News Languages Java applets are finished for good… well, soon: https://openjdk.org/jeps/504 Web browsers no longer support applets. The Applet API and the appletviewer tool were deprecated in JDK 9 (2017). The appletviewer tool was removed in JDK 11 (2018); since then it has been impossible to run applets with the JDK. The Applet API was marked for removal in JDK 17 (2021). The Security Manager, essential for running applets securely, was permanently disabled in JDK 24 (2025). Libraries Quarkus 3.24, with the notion of extensions that can provide capabilities to assistants: https://quarkus.io/blog/quarkus-3-24-released/ Assistants, typically AI ones, get access to extension capabilities: for example, generating a client from an OpenAPI document, or offering access to the database in dev mode via its schema. Hibernate 7 integration in Quarkus: https://quarkus.io/blog/hibernate7-on-quarkus/ Jakarta Data API, the new restrictions, injection of the SchemaManager. Micronaut 4.9 released: https://micronaut.io/2025/06/30/micronaut-framework-4-9-0-released/ Core: upgrade to Netty 4.2.2 (beware, may affect performance). New experimental "event loop carrier" mode to run virtual threads on the Netty event loop. New @ClassImport annotation to process already-compiled classes. 
@Mixin annotations arrive (Java only) to modify Micronaut annotation metadata without altering the original classes. HTTP/3: dependency change for the experimental support. Graceful shutdown: a new API for shutting applications down gracefully. Cache control: a fluent API to easily build the HTTP Cache-Control header. KSP 2: support for KSP 2 (from 2.0.2 on), tested with Kotlin 2. Jakarta Data: implementation of the Jakarta Data 1.0 specification. gRPC: JSON support for sending serialized messages via an HTTP POST. ProjectGen: a new experimental module for generating JVM projects (Gradle or Maven) through an API. A great article on experimenting with reactive event loops as virtual-thread carriers: https://micronaut.io/2025/06/30/transitioning-to-virtual-threads-using-the-micronaut-loom-carrier/ Unfortunately this required hacking the JDK. It is a Micronaut article, but the work was a collaboration with the Red Hat OpenJDK, Red Hat performance, Quarkus, and Vert.x teams. A good article for the curious. Ubuntu offers a container-building tool, notably for Spring: https://canonical.com/blog/spring-boot-containers-made-easy It creates OCI images for Spring Boot applications based on Ubuntu base images and, of course, uses jlink to reduce image size. Not sure what the big advantage is over other, more portable solutions; in any case, Canonical is joining the dance of OpenJDK builds. The Java SDK for A2A, contributed by Red Hat, is out: https://quarkus.io/blog/a2a-project-launches-java-sdk/ A2A is a protocol initiated by Google and donated to the Linux Foundation. It lets agents describe themselves and interact with one another: agent cards, skills, tasks, context. A2A complements MCP. Red Hat implemented the Java SDK with advice from the Google teams. With a few annotations and classes you get an agent card, an A2A client, and a server exchanging messages over the A2A protocol. How to configure Mockito without 
warnings after Java 21: https://rieckpil.de/how-to-configure-mockito-agent-for-java-21-without-warning/ Dynamically attached agents are discouraged and will soon be forbidden. One of their uses is Mockito via Byte Buddy; the advantage was that configuration was transparent. But security obliges: that's over. So the article describes how to configure Maven and Gradle to attach the agent at test startup, and also how to configure this in IntelliJ IDEA. Less simple, unfortunately. Web "Selfish" reasons to make UIs more accessible: https://nolanlawson.com/2025/06/16/selfish-reasons-for-building-accessible-uis/ Selfish reasons: personal benefits, for developers, of building accessible user interfaces, beyond the moral arguments. Easier debugging: an accessible interface, with a clear semantic structure, is easier to debug than messy markup ("div soup"). Standardized names: accessibility provides a standard vocabulary (for example, the WAI-ARIA guidelines) for naming UI components, which helps clarity and code structure. Simpler tests: it is easier to write automated tests for accessible UI elements, because they can be targeted more reliably and semantically. After 20 years of stagnation, the PNG image format specification is finally evolving! https://www.programmax.net/articles/png-is-back/ Goal: keep the format relevant and competitive. Endorsement: backed by institutions such as the US Library of Congress. Key new features: HDR (High Dynamic Range) support for a wider range of colors; official recognition of animated PNGs (APNG); Exif metadata support (copyright, geolocation, etc.). Current support: already integrated in Chrome, Safari, Firefox, iOS, macOS, and Photoshop. Future: the next edition focuses on interoperability between HDR and SDR. 
The edition after that: compression improvements. With the open-source project xtool, you can now build iOS applications on Linux or Windows, with no Mac required: https://xtool.sh/tutorials/xtool/ A very well made tutorial explains how: create a new project with the xtool new command; generate a Swift package with key files such as Package.swift and xtool.yml; build and run the app on an iOS device with xtool dev; connect the device over USB, handling pairing and Developer Mode. xtool automatically manages certificates, provisioning profiles, and app signing. Modify the UI code (e.g. ContentView.swift), then rebuild and reinstall the updated app quickly with xtool dev. For the IDE part, xtool is based on VS Code. Data and Artificial Intelligence New edition of the worldwide best-seller "Understanding LangChain4j": https://www.linkedin.com/posts/agoncal_langchain4j-java-ai-activity-7342825482830200833-rtw8/ API updates (from LangChain4j 0.35 to 1.1.0), new chapters on MCP, Easy RAG, and JSON responses, new models (GitHub Models, DeepSeek, Foundry Local), and updates to existing models (GPT-4.1, Claude 3.7…). Google donates A2A to the Linux Foundation: https://developers.googleblog.com/en/google-cloud-donates-a2a-to-linux-foundation/ Announcement of the Agent2Agent (A2A) project: at the Open Source Summit North America, the Linux Foundation announced the creation of the Agent2Agent project, in partnership with Google, AWS, Microsoft, Cisco, Salesforce, SAP, and ServiceNow. Goal of the A2A protocol: to establish an open standard allowing AI agents to communicate, collaborate, and coordinate complex tasks with one another, regardless of vendor. 
Transfer from Google to the open-source community: Google transferred the A2A protocol specification, the associated SDKs, and the development tools to the Linux Foundation to guarantee neutral, community-driven governance. Industry support: more than 100 companies already back the protocol; AWS and Cisco are the latest to endorse it. Each partner company stressed the importance of interoperability and open collaboration for the future of AI. Goals of the A2A foundation: establish a universal standard for AI-agent interoperability; foster a worldwide ecosystem of developers and innovators; guarantee neutral, open governance; accelerate secure, collaborative innovation. We should talk about the spec itself, and we will surely have occasion to come back to it. Gemini CLI: https://blog.google/technology/developers/introducing-gemini-cli-open-source-ai-agent/ An AI agent in the terminal: Gemini CLI lets you use the Gemini AI directly from the terminal. Free with a Google account: access to Gemini 2.5 Pro with generous limits. Powerful features: it generates code, runs commands, and automates tasks. Open source: customizable and extensible by the community. A complement to Code Assist: it also works with IDEs such as VS Code. Instead of blocking AIs from your sites, you might guide them instead with LLMs.txt files: https://llmstxt.org/ Examples from the Angular project: llms.txt, a simple index with links: https://angular.dev/llms.txt llms-full.txt, a much more detailed version: https://angular.dev/llms-full.txt Tooling Commits in Git are immutable, but did you know you can add or update "notes" on commits? https://tylercipriani.com/blog/2022/11/19/git-notes-gits-coolest-most-unloved-feature/ A little-known feature: git notes is a powerful but little-used Git feature. 
Adding metadata: it lets you attach information to existing commits without changing their hash. Use cases: ideal for attaching data from automated systems (builds, tickets, etc.). Distributed code review: tools like git-appraise were built on git notes to allow fully distributed code review, independent of the forges (GitHub, GitLab). Unpopular: its complex interface and the lack of support from forge platforms have limited its adoption (GitHub no longer even displays notes). Forge independence: git notes offers a path toward greater independence from centralized platforms, by distributing the project's history along with the code itself. A look at the Spring Boot debugger in IntelliJ IDEA Ultimate: https://blog.jetbrains.com/idea/2025/06/demystifying-spring-boot-with-spring-debugger/ It shows this tool, which provides Spring-specific context such as non-activated beans, mocked beans, config values, and transaction state. It can visualize all Spring beans directly in the project view, with non-instantiated beans grayed out and mocked beans marked in orange for tests. It solves the property-resolution problem by showing the effective value in real time in properties and YAML files, including the exact source of overridden values. It shows visual indicators for methods executed within active transactions, with full transaction details and a visual hierarchy for nested transactions. It automatically detects all active DataSource connections and integrates them with IntelliJ IDEA's Database tool window for inspection. It allows auto-completion and invocation of any loaded bean in the expression evaluator, which works like a REPL for the Spring context. It works without an extra runtime agent, using non-suspending breakpoints in the 
Spring Boot libraries to analyze data locally. A community-maintained list of AI coding assistants, started by Lize Raes: https://aitoolcomparator.com/ A comparison table showing which features each tool supports. Architecture An article on hexagonal architecture in Java: https://foojay.io/today/clean-and-modular-java-a-hexagonal-architecture-approach/ An introductory article, with an example, on hexagonal architecture split across domain, application, and infrastructure. The domain has no dependencies. The application layer is specific to the application but has no technical dependencies. The article explains the flow. The infrastructure holds the dependencies on your frameworks: Spring, Quarkus, Micronaut, Kafka, etc. I'm naturally not a fan of hexagonal architecture given the volume of code versus the gain, especially in microservices, but it is always interesting to challenge yourself and weigh cost against benefit. Keep an eye on technologies with tech radars: https://www.sfeir.dev/cloud/tech-radar-gardez-un-oeil-sur-le-paysage-technologique/ A tech radar is crucial for continuous technology watch and informed decision-making. It categorizes technologies as Adopt, Trial, Assess, or Hold according to their maturity and relevance. It is recommended to build your own tech radar, adapted to your specific needs and inspired by the public ones. Use discovery tools (AlternativeTo), trend tools (Google Trends), end-of-life tracking (endoflife.date), and learning resources (roadmap.sh). Stay informed through blogs, podcasts, newsletters (TLDR), and social networks/communities (X, Slack). The goal is to stay competitive and make strategic technology choices. 
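As a rough illustration of the domain / application / infrastructure split discussed in the hexagonal-architecture article above, here is a minimal Java sketch (class and method names are hypothetical, not taken from the foojay.io example):

```java
// Minimal hexagonal-architecture sketch. The domain and application layers
// compile with zero framework imports; only the adapter would depend on
// Spring, Quarkus, Kafka, etc. in a real project.
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class HexagonalSketch {
    // Domain: a pure model with no dependencies.
    record Order(String id, long amountCents) {}

    // Port: the application declares what it needs from the outside world.
    interface OrderRepository {
        Optional<Order> findById(String id);
        void save(Order order);
    }

    // Application: a use case depending only on the domain and the port.
    static class PlaceOrderUseCase {
        private final OrderRepository repository;
        PlaceOrderUseCase(OrderRepository repository) { this.repository = repository; }
        Order placeOrder(String id, long amountCents) {
            Order order = new Order(id, amountCents);
            repository.save(order);
            return order;
        }
    }

    // Infrastructure: an adapter implementing the port (in-memory here;
    // a real one would use JPA, a REST client, Kafka, and so on).
    static class InMemoryOrderRepository implements OrderRepository {
        private final Map<String, Order> store = new HashMap<>();
        public Optional<Order> findById(String id) { return Optional.ofNullable(store.get(id)); }
        public void save(Order order) { store.put(order.id(), order); }
    }

    public static void main(String[] args) {
        OrderRepository repo = new InMemoryOrderRepository();
        PlaceOrderUseCase useCase = new PlaceOrderUseCase(repo);
        useCase.placeOrder("o-1", 4200);
        System.out.println(repo.findById("o-1").isPresent()); // true
    }
}
```

The trade-off mentioned in the episode shows up even here: the port and adapter add indirection for one use case, which is the cost you weigh against being able to swap infrastructure without touching the domain.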
Be careful not to underestimate its maintenance cost. Methodologies The concept of the expert generalist: https://martinfowler.com/articles/expert-generalist.html The industry pushes toward narrow specialization, but the most effective colleagues excel in several domains at once. An experienced Python developer can quickly become productive on a Java team thanks to shared fundamental concepts. Real expertise has two aspects: depth in one domain, and the ability to learn quickly. Expert generalists build durable mastery at the level of fundamental principles rather than specific tools. Curiosity is essential: they explore new technologies and make sure they understand answers instead of copy-pasting code. Collaboration is vital, because they know they cannot master everything and work effectively with specialists. Humility leads them to first understand why things work a certain way before questioning them. Customer focus channels their curiosity toward what actually helps users excel at their work. The industry should treat "Expert Generalist" as a first-class skill to name, assess, and train. It reminds me of the technical staff role. An article on business metrics and their value: https://blog.ippon.fr/2025/07/02/monitoring-metier-comment-va-vraiment-ton-service-2/ A reminder of the value of business monitoring. Traditional technical monitoring (CPU, servers, APIs) does not guarantee that the service works correctly for the end user. Business monitoring complements technical monitoring by focusing on the real user experience rather than isolated components. It watches concrete critical journeys such as "can a customer complete their order?" instead of abstract indicators. 
Business metrics are directly actionable: success rates, average delays, and error volumes make it possible to prioritize actions. It is a strategic steering tool that improves reactivity, prioritization, and the dialogue between technical and business teams. Setting it up follows five steps: a reliable technical dashboard, identification of the critical journeys, translation into indicators, centralization, and follow-up over time. A Definition of Done should formalize objective criteria before instrumenting any business journey. Measurable indicators include successful/failed checkpoints, times between actions, and compliance with business rules. Dashboards should be part of the daily rituals, with understandable real-time alerts. The setup must evolve continuously with product changes, questioning every incident to improve detection. The hard part is indeed business seasonality, for example few orders at night; this belongs in the SRE toolbox. Security Still searching for the S for Security in MCP: https://www.darkreading.com/cloud-security/hundreds-mcp-servers-ai-models-abuse-rce An analysis of open, publicly accessible MCP servers. Many do no sanity checking of parameters; if you use them in your GenAI calls, you expose yourself. They are not fundamentally bad, but there is no security standardization yet. For local use, prefer stdio, or restrict SSE to 127.0.0.1. Law, society and organizations Nicolas Martignole, the same person who created the Les Cast Codeurs logo, wonders about the possible paths for developers facing AI's impact on our craft: https://touilleur-express.fr/2025/06/23/ni-manager-ni-contributeur-individuel/ Evolution of developer careers: AI is transforming the traditional paths (manager or technical expert). 
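As a rough illustration of the business-monitoring idea above, here is a minimal Java sketch of a success-rate metric for a critical journey (the names are hypothetical, and a real setup would use a metrics library and a dashboard rather than hand-rolled counters):

```java
// Hypothetical sketch of a business metric: instead of CPU gauges, count
// the outcome of a critical journey ("can a customer complete their
// order?") and derive a directly actionable success rate.
public class CheckoutMetrics {
    private long success;
    private long failure;

    // Called once per attempted checkout journey.
    void recordCheckout(boolean completed) {
        if (completed) success++; else failure++;
    }

    // Success rate in percent: the kind of number a business dashboard
    // can display and alert on in real time.
    double successRatePercent() {
        long total = success + failure;
        return total == 0 ? 100.0 : 100.0 * success / total;
    }

    public static void main(String[] args) {
        CheckoutMetrics metrics = new CheckoutMetrics();
        metrics.recordCheckout(true);
        metrics.recordCheckout(true);
        metrics.recordCheckout(false);
        System.out.println(metrics.successRatePercent()); // 2/3 of checkouts succeeded, about 66.67
    }
}
```

The article's seasonality caveat applies directly: an alert threshold on this rate has to account for periods with very few checkouts (e.g. at night), where a single failure swings the percentage.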
AI Orchestra Conductor: a former manager who drives AIs, defines architectures, and validates generated code. Augmented Artisan: a developer using AI as a tool to code faster and solve complex problems. Code Philosopher: a new role centered on the "why" of code, system conceptualization, and AI ethics. Validation cognitive load: a new mental burden created by the need to verify the AIs' work. Reflecting on impact: the article invites you to choose your impact: orchestrate, create, or guide. Training AIs on copyrighted books is acceptable (fair use), but storing them is not: https://www.reuters.com/legal/litigation/anthropic-wins-key-ruling-ai-authors-copyright-lawsuit-2025-06-24/ A win for Anthropic (until the next trial): the company prevailed in a closely watched lawsuit over training its AI, Claude, on copyrighted works. "Fair use" prevails: the judge found that using the books to train the AI was fair use, since it transforms the content rather than simply reproducing it. An important nuance: however, storing those works in a "central library" without authorization was ruled illegal, underlining the complexity of data management for AI models. Luc Julia, his hearing before the French Senate: https://videos.senat.fr/video.5486945_685259f55eac4.ia–audition-de-luc-julia-concepteur-de-siri Love him or not, this is Luc Julia and his vision of AI. It is an even longer version of, but on the same theme as, his Devoxx France 2025 keynote (https://www.youtube.com/watch?v=JdxjGZBtp_k). Nature and limits of AI: Luc Julia insisted that artificial intelligence is an "evolution" rather than a "revolution". He reminded the audience that it rests on mathematics and is not "magic". 
He also warned about the unreliability of information provided by generative AIs like ChatGPT, stressing that "you cannot trust them", since they can be wrong and their relevance degrades over time. AI regulation: he argued for "intelligent, informed" regulation, applied after the fact so as not to hold back innovation; in his view it should be based on facts, not on a priori risk analysis. France's position: Luc Julia stated that France has top-level researchers and is among the world's best in AI, while raising the problem of funding research and innovation in France. AI and society: the hearing covered AI's impact on privacy, the world of work, and education. Luc Julia stressed the importance of developing critical thinking, particularly among the young, to learn to verify AI-generated information. Concrete and future applications: the self-driving-car case was discussed, with Luc Julia explaining the different levels of autonomy and the remaining challenges. He also asserted that artificial general intelligence (AGI), an AI that would surpass humans in every domain, is "impossible" with current technologies. Beginner's corner Weak references and finalize: https://dzone.com/articles/advanced-java-garbage-collection-concepts A useful little reminder of the pitfalls of the finalize method, which may never be invoked, and of the bug risks if finalize never finishes. finalize makes the garbage collector's job much more complex and inefficient. Weak references are useful, but you cannot control when they are cleared, so don't overuse them. There are also soft and phantom references, but their uses are subtle and complex, and depend on the GC. 
The serial collector processes weak references before soft ones; the parallel collector does not. For G1 it depends on the region; for ZGC it depends, since processing is asynchronous. Conferences The list of conferences, from the Developers Conferences Agenda/List by Aurélie Vache and contributors: July 14-19, 2025: DebConf25 - Brest (France) September 5, 2025: JUG Summer Camp 2025 - La Rochelle (France) September 12, 2025: Agile Pays Basque 2025 - Bidart (France) September 18-19, 2025: API Platform Conference - Lille (France) & Online September 22-24, 2025: Kernel Recipes - Paris (France) September 23, 2025: OWASP AppSec France 2025 - Paris (France) September 25-26, 2025: Paris Web 2025 - Paris (France) October 2, 2025: Nantes Craft - Nantes (France) October 2-3, 2025: Volcamp - Clermont-Ferrand (France) October 3, 2025: DevFest Perros-Guirec 2025 - Perros-Guirec (France) October 6-7, 2025: Swift Connection 2025 - Paris (France) October 6-10, 2025: Devoxx Belgium - Antwerp (Belgium) October 7, 2025: BSides Mulhouse - Mulhouse (France) October 9, 2025: DevCon #25: quantum computing - Paris (France) October 9-10, 2025: Forum PHP 2025 - Marne-la-Vallée (France) October 9-10, 2025: EuroRust 2025 - Paris (France) October 16, 2025: PlatformCon25 Live Day Paris - Paris (France) October 16, 2025: Power 365 - 2025 - Lille (France) October 16-17, 2025: DevFest Nantes - Nantes (France) October 17, 2025: Sylius Con 2025 - Lyon (France) October 17, 2025: ScalaIO 2025 - Paris (France) October 20, 2025: Codeurs en Seine - Rouen (France) October 23, 2025: Cloud Nord - Lille (France) October 30-31, 2025: Agile Tour Bordeaux 2025 - Bordeaux (France) October 30-31, 2025: Agile Tour Nantais 2025 - Nantes (France) October 30-November 2, 2025: PyConFR 2025 - Lyon (France) November 4-7, 2025: NewCrafts 2025 - Paris (France) November 5-6, 2025: Tech Show Paris - Paris (France) November 6, 2025: dotAI 2025 - Paris (France) November 6, 2025: Agile Tour Aix-Marseille 2025 - Gardanne (France) November 7, 2025: BDX I/O - 
Bordeaux (France) November 12-14, 2025: Devoxx Morocco - Marrakech (Morocco) November 13, 2025: DevFest Toulouse - Toulouse (France) November 15-16, 2025: Capitole du Libre - Toulouse (France) November 19, 2025: SREday Paris 2025 Q4 - Paris (France) November 20, 2025: OVHcloud Summit - Paris (France) November 21, 2025: DevFest Paris 2025 - Paris (France) November 27, 2025: DevFest Strasbourg 2025 - Strasbourg (France) November 28, 2025: DevFest Lyon - Lyon (France) December 1-2, 2025: Tech Rocks Summit 2025 - Paris (France) December 5, 2025: DevFest Dijon 2025 - Dijon (France) December 9-11, 2025: APIdays Paris - Paris (France) December 9-11, 2025: Green IO Paris - Paris (France) December 10-11, 2025: Devops REX - Paris (France) December 10-11, 2025: Open Source Experience - Paris (France) January 28-31, 2026: SnowCamp 2026 - Grenoble (France) February 2-6, 2026: Web Days Convention - Aix-en-Provence (France) February 3, 2026: Cloud Native Days France 2026 - Paris (France) February 12-13, 2026: Touraine Tech #26 - Tours (France) April 22-24, 2026: Devoxx France 2026 - Paris (France) April 23-25, 2026: Devoxx Greece - Athens (Greece) June 17, 2026: Devoxx Poland - Krakow (Poland) Contact us To react to this episode, come discuss on the Google group https://groups.google.com/group/lescastcodeurs Contact us on X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com Submit a crowdcast or a crowdquestion Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs All episodes and all the info at https://lescastcodeurs.com/
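As a rough illustration of the weak-reference pitfalls covered in the beginner's corner above, here is a minimal Java sketch (a hypothetical example, not code from the DZone article):

```java
// Sketch of weak-reference behavior: a WeakReference does not keep its
// referent alive, and you cannot control exactly when the GC clears it,
// which is exactly why the episode advises not to overuse them.
import java.lang.ref.WeakReference;

public class WeakRefDemo {
    public static void main(String[] args) {
        Object strong = new Object();
        WeakReference<Object> weak = new WeakReference<>(strong);

        // While a strong reference exists, the referent is reachable.
        System.out.println(weak.get() != null); // true

        strong = null; // drop the only strong reference
        System.gc();   // only a hint: clearing happens at the GC's discretion

        // May print null or the object; the timing is deliberately
        // unspecified, and (as noted above) the order in which weak and
        // soft references are processed even varies between collectors.
        System.out.println(weak.get());
    }
}
```

Only the first print is guaranteed; the second depends on which collector runs and when, which is the whole point of the warning.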

Daily Signal News
Trump Supports Bondi, Noem Defends Texas Response, Rand Paul Issues Secret Service Report | July 14, 2025


Play Episode Listen Later Jul 14, 2025 9:06


On today's Top News in 10, we cover: President Donald Trump spoke out in favor of Attorney General Pam Bondi, who is under fire for her handling of the Jeffrey Epstein case. Homeland Security Secretary Kristi Noem defended how the Trump administration responded to the devastating Texas floods. President Trump reflects on Butler a year later, and Sen. Rand Paul issues a new report on the Secret Service. Want to support the recovery efforts in Texas? Consider supporting any of these vetted organizations: Kerr County Flood Relief Fund https://cftexashillcountry.fcsuite.com/erp/donate/create/fund?funit_id=4201 Cross Kingdom https://www.facebook.com/story.php?story_fbid=1148132027344737&id=100064438522058&_rdr Ark of Highland Lakes https://www.flowcode.com/page/arkofhighlandlakes?fce_id=f990427b-6b27-4241-8771-5f63a046b186&utm_term=rHkUOyNwo#fid=rHkUOyNwo&c=eb777762-e086-4e11-b2f9-3b793b656202-SSE:1751725930 Subscribe to The Tony Kinnett Cast: https://podcasts.apple.com/us/podcast/the-tony-kinnett-cast/id1714879044 Don't forget our other shows: Virginia Allen's Problematic Women: https://www.dailysignal.com/problematic-women Bradley Devlin's The Signal Sitdown: https://www.dailysignal.com/the-signal-sitdown Follow The Daily Signal: X: https://x.com/DailySignal Instagram: https://www.instagram.com/thedailysignal/ Facebook: https://www.facebook.com/TheDailySignalNews/ Truth Social: https://truthsocial.com/@DailySignal YouTube: https://www.youtube.com/user/DailySignal Rumble: https://rumble.com/c/TheDailySignal Thanks for making The Daily Signal Podcast your trusted source for the day's top news. Subscribe on your favorite podcast platform and never miss an episode. Learn more about your ad choices. Visit megaphone.fm/adchoices

Daily Signal News
Trump to Travel to Central Texas, Judge Blocks Trump's EO on Birthright Citizenship, US Fast Tracks Drone Production | July 11, 2025

Daily Signal News

Play Episode Listen Later Jul 11, 2025 7:18


On today's Top News in 10, we cover: A federal judge blocks President Donald Trump's ability to enforce his executive order that limits birthright citizenship.  Secretary of Defense Pete Hegseth says America is fast tracking drone production to grow our unmanned warfare capabilities.  President Trump is headed down to Texas today to survey the damage devastating flash flooding caused there last week. Want to support the recovery efforts in Texas? Consider supporting any of these vetted organizations:  Kerr County Flood Relief Fund https://cftexashillcountry.fcsuite.com/erp/donate/create/fund?funit_id=4201 Cross Kingdom https://www.facebook.com/story.php?story_fbid=1148132027344737&id=100064438522058&_rdr Ark of Highland Lakes https://www.flowcode.com/page/arkofhighlandlakes?fce_id=f990427b-6b27-4241-8771-5f63a046b186&utm_term=rHkUOyNwo#fid=rHkUOyNwo&c=eb777762-e086-4e11-b2f9-3b793b656202-SSE:1751725930  Subscribe to The Tony Kinnett Cast:  ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠https://podcasts.apple.com/us/podcast/the-tony-kinnett-cast/id1714879044⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ Don't forget our other shows: Virginia Allen's Problematic Women:⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ https://www.dailysignal.com/problematic-women⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠  Bradley Devlin's The Signal Sitdown:⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ https://www.dailysignal.com/the-signal-sitdown⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠  Follow The Daily Signal:  X:⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ https://x.com/DailySignal⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠  Instagram:⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ https://www.instagram.com/thedailysignal/⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠  Facebook:⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ https://www.facebook.com/TheDailySignalNews/⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠  Truth Social:⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ https://truthsocial.com/@DailySignal⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠  YouTube:⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ https://www.youtube.com/user/DailySignal⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠  Rumble:⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ 
https://rumble.com/c/TheDailySignal⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠  Thanks for making The Daily Signal Podcast your trusted source for the day's top news. Subscribe on your favorite podcast platform and never miss an episode. Learn more about your ad choices. Visit megaphone.fm/adchoices

Puedes Hacerlo
290. La Magia de Soltar la Culpa

Puedes Hacerlo

Play Episode Listen Later May 22, 2025 15:18


If lately you've noticed that, in the pursuit of your ideal weight, while trying to change habits, your progress isn't what you expected and you're talking to yourself harshly... stay with me. It's important to address this. What I'll share with you today will let you breathe differently and get back on your path from a kinder place, one that suits you better. You deserve it. Today I want to invite you to reflect on something many of us experience: the harshness with which we treat ourselves. That harshness turns into guilt and remorse, and it only traps us in cycles that don't serve us and don't match the version of ourselves we feel called to create and live. In the two previous episodes we reflected on how we unconsciously play certain roles: The victim role, which weakens us. The rescuer role, which drains and distracts us. And in this episode, the invitation is to observe and detect whether we tend to be our own executioners, our own accusers. Perhaps you've grown used to being that voice that criticizes you, judges you, and repeats everything you did wrong. If so, today I want to remind you of something you already know: you were not born to live at war with yourself. You were born for much more: to flourish, to shine, to enjoy and honor the beautiful gift of your life. However romantic that may sound... consider it. More than a new diet, more than a new exercise routine... consider that this may be what you really need to hear and keep in mind today. Has it ever happened to you that one day... or many days... for example: You ate that bread you had planned not to eat. You promised yourself you'd go to your classes... and didn't go. You cooked something super healthy... but then ate a bag of tortilla chips. And right afterward, that voice appears inside you... the voice that seems to come with a whip, full of judgment and harshness. The voice we so often confuse with responsibility, with phrases like: "I'm a lost cause." "I'm never going to make it." 
"Self-sabotage again? What's wrong with me?" "I have no willpower." "I'm hopeless. I'm a mess." "How embarrassing... I'm a fraud." "I'm terrible, the most inconsistent of all." "How awful, I'm going from bad to worse. Where will I end up?" Do any of these phrases sound familiar?   What I want to invite you to become more familiar with today is this idea: Making mistakes is not the problem. Not following a plan perfectly is not the problem. Feeding that voice that punishes and judges you, that is the problem. And I'll repeat it so it sticks: Making mistakes is not the problem. Giving power to the voice that judges and punishes is. That voice disempowers us, crushes us, paralyzes us. And if you recognize yourself in this, please pause. Take a deep breath. And don't let this realization set off more guilt. Once more: breathe deeply and notice that you can release and transform that conversation loaded with guilt and judgment. You can do it.   The SOLTAR Method – to free yourself from guilt and come back to yourself. Here is a step-by-step practice for letting go of guilt. It's simple and powerful, and it will help you reconnect with your power, your self-love, and the magic of being more fully you. Picture the word SOLTAR ("to let go"), and find a powerful invitation in each letter: S – Señala (Name it) Identify the phrase you tend to repeat to yourself, the one that judges you and makes you feel guilty. Give it a name. Example: "I'm a lost cause." O – Observa (Observe) What impact does that phrase have on you? Where do you feel it in your body? What does it stir up? Allow it without judgment. It may feel like a stab, like pressure, like something heavy that weakens you. L – Libera (Release) Let those sensations move through your body until you can let them go. You don't have to keep them forever. T – Transforma (Transform) Consciously choose a new phrase that supports you. Example: "I'm learning." "I trust myself." "I'm an extraordinary case." A – Activa (Activate) Activate the power of your new phrase! Write it down. Say it out loud. 
Let it settle into every cell of your body. Move with it. Walk with it, dance with it. Activate it. R – Respira (Breathe) Breathe and experience the magic of taking charge of creating and living the version of yourself you feel called to be. You can do it! Like everything I share, this is simply an invitation for you to try it and see for yourself how, by changing the way we think, we can absolutely change the way we live. Practice this POWERFUL SOLTAR METHOD again and again. And if you want to be part of Mi Mejor Versión, the coaching space I've created to share these strategies with you and take them to the next level until they become a way of life, go to monicasosa.com/mmv to join in the front row as soon as the doors open again. With love, Your coach Mónica.

The Uptime Wind Energy Podcast
India’s Wind Ambitions and UK Offshore Expansion

The Uptime Wind Energy Podcast

Play Episode Listen Later May 12, 2025 1:48


This episode covers India's ambitious plans to double its wind energy capacity by 2030, the UK's expansion of offshore wind farms, and the US states' legal challenge against President Trump's executive order halting wind energy development. Sign up now for Uptime Tech News, our weekly email update on all things wind technology. This episode is sponsored by Weather Guard Lightning Tech. Learn more about Weather Guard's StrikeTape Wind Turbine LPS retrofit. Follow the show on Facebook, YouTube, Twitter, Linkedin and visit Weather Guard on the web. And subscribe to Rosemary Barnes' YouTube channel here. Have a question we can answer on the show? Email us! Allen Hall: Starting the week off in India, India's wind energy sector is investing heavily in capacity and workforce development to double its current 50 gigawatt capacity by 2030. The Indian Wind Turbine Manufacturers Association says they're focusing on technology innovations while advancing the Make in India mission to achieve this ambitious target. The country already has 18 gigawatts of annual manufacturing capacity for turbines and components, and companies like LM Wind Power and ZF Wind Power produce critical parts locally, positioning India as a potential global export hub. Renewable sector hiring is expected to grow by 19% this year in India, with most workers being young [00:01:00] Indians between 26 and 35 years old. Over in the UK, the UK's Crown Estate has approved expansion of high-density wind farms on existing seabed leases to support the country's energy transition. Seven projects will increase capacity by 4.7 gigawatts, helping Britain towards its target of 50 gigawatts of offshore wind by 2030, up from the current 15 gigawatts. Projects include RWE's Rampion 2 and SSE and Equinor's Dogger Bank D. The Crown Estate's marine director Gus Jaspert says this capacity increase program will provide up to 4 million homes with clean energy and decrease the UK's reliance on internationally sourced fossil fuels. 
Britain is already the world's second largest offshore wind market after China, though inflation and supply chain issues have challenged the sector recently. Over in the United States, a coalition of 17 states and Washington, [00:02:00] DC, has filed a lawsuit against President Donald Trump's executive order halting wind energy development. The order, signed on his first day in office, pauses approvals, permits, and loans for all wind projects, both offshore and onshore. New York Attorney General Letitia James, leading the coalition, argues the directive threatens thousands of good paying jobs and billions in investment while delaying the transition away from fossil fuels. The administration recently ordered Norwegian company Equinor to halt construction on Empire Wind 1 near Long Island, despite the project being 30% complete after a seven year permitting process. Wind currently provides about 10% of US electricity, making it the nation's largest renewable energy source. The states argue Trump's order contradicts years of bipartisan support for wind energy and his own declaration of, quote, a national energy emergency, unquote, calling for expanded domestic energy production. [00:03:00] The administration has also suspended funding for floating offshore wind research in Maine and revoked permits for a project in New Jersey. Internationally, other nations are accelerating wind investments, with the UK and Canada's Nova Scotia recently announcing major offshore expansion plans. That's this week's top news stories. Tune in tomorrow for the Uptime Wind Energy Podcast.

The Azure Podcast
Episode 519 - VM Repair Extension

The Azure Podcast

Play Episode Listen Later May 2, 2025


In this episode of the Azure Podcast, hosts Evan Basalik and Sujit D'Mello are joined by special guests Adam Sandor, Travis Maier, and Leslie Chou to discuss the VM Repair extension. They delve into its capabilities, recent updates, and how it enhances supportability for Azure VMs. The conversation covers practical applications, security considerations, and future improvements, providing valuable insights for Azure users. Tune in to learn how the VM Repair extension can help you efficiently troubleshoot and resolve VM issues. Episode Highlights: Overview of the VM Repair extension and its benefits Recent updates and new supported scenarios Security and customization options Future improvements and AI integration Practical tips for using the extension effectively Don't miss this informative episode to stay updated on the latest Azure support tools and enhancements! Media file: https://azpodcast.blob.core.windows.net/episodes/Episode519.mp3 YouTube: https://youtu.be/IcSAN_BJXWk Resources: Starting point for VM Repair and summary: Repair a Windows VM by using the Azure Virtual Machine repair commands - Virtual Machines | Microsoft Learn Specific VM Repair examples, showcasing how to use the new functionality I called out: https://learn.microsoft.com/en-us/cli/azure/vm/repair?view=azure-cli-latest#az-vm-repair-create-examples Repair Script Open Source Repo: Open Source repair scripts  Official VM Repair docs: az vm repair | Microsoft Learn  Linux repair script ALAR for some Linux love: https://learn.microsoft.com/en-us/troubleshoot/azure/virtual-machines/linux/repair-linux-vm-using-alar   Other updates: New ExpressRoute Metro locations Azure updates | Microsoft Azure Azure Container Instances now supports larger container size instances in public preview https://azure.microsoft.com/en-us/updates/?id=490690 Virtual network TAP https://azure.microsoft.com/en-us/updates/?id=490830 CAPTCHA for Azure Web Application Firewall (WAF) with Azure Front Door 
https://azure.microsoft.com/en-us/updates/?id=490854 Multitenant managed logging in Container Insights  https://azure.microsoft.com/en-us/updates/?id=488110 MCP with server-sent events (SSE) with Azure Functions https://azure.microsoft.com/en-us/updates/?id=489433

Farming Today
1/5/25 Vets call for ban on farrowing crates, wind farm and wild birds, processing pulses.

Farming Today

Play Episode Listen Later May 1, 2025 14:18


The British Veterinary Association and the Pig Veterinary Society have issued a new joint statement calling for farrowing crates to be banned. They say the crates should be phased out over the next 15 years to give the industry a chance to adapt. Farrowing crates are the small pens that 60% of sows in the UK are kept in around the time they give birth, to ensure the safety of their piglets. Animal welfare campaigners have been saying for years that they should be banned, but farmers have concerns that replacing them with alternative systems will not only endanger the lives of piglets but also be costly and will put them at a disadvantage to farmers in other countries where the crates aren't banned. Conservation groups are urging ministers in Scotland to reject plans for an offshore windfarm which the developer predicts will kill tens of thousands of seabirds.  Five charities, led by RSPB Scotland, have written to the first minister to argue that approving Berwick Bank in the Firth of Forth would undermine efforts to protect nature.  SSE says it has already amended its designs to minimise any potential risks to Scottish seabirds. All week we've been discussing pulses, the dried seeds from plants like beans, lentils and peas. Most of the pulses we buy in the shops are grown overseas. They're a valuable source of protein, and there's a growing market for protein-rich products among groups including runners and gym-goers as well as vegans. So could UK farmers cash in? We visit a company which processes home-grown and imported pulses. Presenter = Caz Graham Producer = Rebecca Rooney

Syntax - Tasty Web Development Treats
893: Everyone Is Talking About MCP

Syntax - Tasty Web Development Treats

Play Episode Listen Later Apr 14, 2025 33:59


Scott and Wes break down the Model Context Protocol (MCP), a new open standard that gives AI agents secure, tool-like access to your dev environment. They cover how it works, why it's a big deal for AI coding workflows, and real-world use cases like GitHub, Sentry, and YouTube. Show Notes 00:00 Welcome to Syntax! 00:49 The lore of ICP. Wes MCP Shirt. 03:09 Brought to you by Sentry.io. 03:33 What is MCP? 05:06 The steps of AI coding. 07:11 MCP hosts. 07:28 MCP clients. 07:35 MCP servers. 08:24 Why you might want to do this. 10:39 How this works in VS Code. 14:10 Wes built an MCP server. SVGL. 14:57 Playwright. 17:24 Sentry's implementation. Building Sentry's MCP with David Cramer. 18:54 YouTube implementation. 21:19 DaVinci Resolve implementation. Smithery. 23:02 Postgres. 24:40 Transport protocols. 24:49 STDIO. 25:19 SSE. 25:32 Streaming. 26:24 Writing your own MCP server. 26:28 FastMCP. 27:00 Cloudflare. 28:01 Data validation. 28:47 Standard schema. Episode 873. 29:27 Other parts of MCP. 29:35 MCP resources. 30:37 MCP prompts. 30:48 MCP roots. Hit us up on Socials! Syntax: X Instagram Tiktok LinkedIn Threads Wes: X Instagram Tiktok LinkedIn Threads Scott: X Instagram Tiktok LinkedIn Threads Randy: X Instagram YouTube Threads
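The servers and transports listed in the show notes all speak JSON-RPC 2.0 underneath. As a rough sketch of that wire shape (the tool name and schema below are made up for illustration, and this uses plain stdlib rather than the FastMCP helpers mentioned in the episode), a minimal tool-listing handler might look like:

```python
import json

# A toy registry of "tools" an MCP-style server might expose.
# The tool name and schema here are hypothetical examples.
TOOLS = [
    {
        "name": "get_weather",
        "description": "Return the weather for a city",
        "inputSchema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
]

def handle_request(raw: str) -> str:
    """Handle a single JSON-RPC 2.0 request string and return a response string."""
    req = json.loads(raw)
    if req.get("method") == "tools/list":
        result = {"tools": TOOLS}
    else:
        # JSON-RPC's standard "method not found" error code
        return json.dumps({
            "jsonrpc": "2.0",
            "id": req.get("id"),
            "error": {"code": -32601, "message": "Method not found"},
        })
    return json.dumps({"jsonrpc": "2.0", "id": req.get("id"), "result": result})

# A client asks the server what tools it offers:
request = json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
response = json.loads(handle_request(request))
print(response["result"]["tools"][0]["name"])  # get_weather
```

In a real server, the same request/response framing is carried over one of the transports covered at 24:40 (STDIO, SSE, or streaming HTTP); only the transport changes, not the messages.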

Telecom Reseller
Out of the Box and Into Zero Trust: Nile Delivers Built-In Security for Campus Networks, Podcast

Telecom Reseller

Play Episode Listen Later Apr 14, 2025


“We deliver Zero Trust out of the box—it's built in, not bolted on.” — Suresh Katukam, Chief Product Officer, Nile While the cybersecurity conversation continues to focus on Zero Trust and Secure Service Edge (SSE), Nile is calling out what many have missed: the campus network. In a world where cloud-based remote work has advanced rapidly, on-premises security—especially across corporate and hybrid environments—has lagged behind. In a Technology Reseller News podcast recorded just after Enterprise Connect, Suresh Katukam outlined why even the most well-resourced companies struggle to achieve Zero Trust in their campus networks—and how Nile's “out-of-the-box” approach changes the game. Campus Zero Trust: The Missing Link “The same users who are secure at home become vulnerable in the office,” said Katukam. “That's because campus networks were built on implicit trust—just plugging into an Ethernet port gives you access. That's broken by design.” While cloud Zero Trust has made strides, most enterprise campuses still rely on legacy NAC solutions, VLANs, ACLs, and other outdated, complex layers of bolt-on security. Nile flips that model—offering Zero Trust campus security as a native feature of the network itself. What “Out of the Box” Really Means Nile's solution is pre-configured for Zero Trust from day one. Every user and device is authenticated and authorized continuously, not just at login. Micro-segmentation, behavioral analytics, and continuous risk scoring mean that even compromised credentials won't lead to lateral movement or ransomware spread. “We call it a segment of one,” said Katukam. “You can't see other users on the network. You can't move laterally. Ransomware can't propagate.” Administrators have full control through a simplified interface that supports policy toggling, real-time response, and behavioral-based reauthentication—without layering in extra management tools. 
Security-Driven Network as a Service Nile isn't just a security company—it's a networking company that rethinks how networks are built and managed. Delivered as a service, Nile offers high-performance, low-latency connectivity with embedded Zero Trust principles. “Even large enterprises with robust security teams are choosing Nile—because the security is integrated into the network itself,” Katukam explained. For example, one financial services customer consolidated three segmented networks (IT, OT, and guest) into a single secure fabric using Nile. Another prevented a physical intrusion from turning into a breach, thanks to the system's strict device authentication and visibility controls. Universal Zero Trust: Bridging Campus and Cloud Nile's model doesn't stop at the office door. The company advocates for Universal Zero Trust, connecting campus-level protections with cloud-based SSE providers. “Whether a user is on-site or remote, whether it's an IT or OT device, they should be protected the same way,” said Katukam. “That's Universal Zero Trust—unifying cloud and campus with seamless security.” Learn More To explore how Nile is reimagining networking and delivering built-in Zero Trust, visit NileSecure.com or reach out to Suresh directly at Suresh@NileSecure.com #Nile #ZeroTrust #CampusSecurity #UniversalZeroTrust #OutOfTheBoxSecurity #NetworkSecurity #EnterpriseConnect2025 #SecureNetworking #NaaS #BehavioralAnalytics #Microsegmentation #Cybersecurity  

Leading for Business Excellence
Minisode #80: The Modern Boardroom Mindset

Leading for Business Excellence

Play Episode Listen Later Apr 8, 2025 4:32


Welcome to our series of bite-sized episodes featuring favourite moments from the Leading for Business Excellence podcast. In this minisode, Helen Mahy, Chair of the NextEnergy Solar Fund and Non-Executive Director at SSE plc and Gowling WLG, shares why modern boards must go beyond the boardroom. How do great leaders balance strategic oversight with on-the-ground understanding? Listen to the full episode here: https://pmi.co.uk/knowledge-hub/what-makes-an-effective-non-executive-director-steering-ftse-companies/ More from PMI: Dive into our Knowledge Hub for more tools, videos, and infographics Join us for a PMI LIVE Webinar Follow us on LinkedIn

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

We are happy to announce that there will be a dedicated MCP track at the 2025 AI Engineer World's Fair, taking place Jun 3rd to 5th in San Francisco, where the MCP core team and major contributors and builders will be meeting. Join us and apply to speak or sponsor! When we first wrote Why MCP Won, we had no idea how quickly it was about to win. In the past 4 weeks, OpenAI and now Google have announced MCP support, effectively confirming our prediction that MCP was the presumptive winner of the agent standard wars. MCP has now overtaken OpenAPI, the incumbent option and most direct alternative, in GitHub stars (3 months ahead of a conservative trendline). We have explored the state of MCP at AIE (now the first ever >100k views workshop). And since then, we've added a 7th reason why MCP won - this team acts very quickly on feedback, with the 2025-03-26 spec update adding support for stateless/resumable/streamable HTTP transports, and comprehensive authz capabilities based on OAuth 2.1. This bodes very well for the future of the community and project. For protocol and history nerds, we also asked David and Justin to tell the origin story of MCP, which we leave to the reader to enjoy (you can also skim the transcripts, or the changelogs of a certain favored IDE). 
It's incredible the impact that individual engineers solving their own problems can have on an entire industry.Full video episodeLike and subscribe on YouTube!Show Links* David* Justin* MCP* Why MCP WonTimestamps* 00:00 Introduction and Guest Welcome* 00:37 What is MCP?* 02:00 The Origin Story of MCP* 05:18 Development Challenges and Solutions* 08:06 Technical Details and Inspirations* 29:45 MCP vs Open API* 32:48 Building MCP Servers* 40:39 Exploring Model Independence in LLMs* 41:36 Building Richer Systems with MCP* 43:13 Understanding Agents in MCP* 45:45 Nesting and Tool Confusion in MCP* 49:11 Client Control and Tool Invocation* 52:08 Authorization and Trust in MCP Servers* 01:01:34 Future Roadmap and Stateless Servers* 01:10:07 Open Source Governance and Community Involvement* 01:18:12 Wishlist and Closing RemarksTranscriptAlessio [00:00:02]: Hey, everyone. Welcome back to Latent Space. This is Alessio, partner and CTO at Decibel, and I'm joined by my co-host Swyx, founder of Small AI.swyx [00:00:10]: Hey, morning. And today we have a remote recording, I guess, with David and Justin from Anthropic over in London. Welcome. Hey, good You guys have created a storm of hype because of MCP, and I'm really glad to have you on. Thanks for making the time. What is MCP? Let's start with a crisp what definition from the horse's mouth, and then we'll go into the origin story. But let's start off right off the bat. What is MCP?Justin/David [00:00:43]: Yeah, sure. So Model Context Protocol, or MCP for short, is basically something we've designed to help AI applications extend themselves or integrate with an ecosystem of plugins, basically. The terminology is a bit different. We use this client-server terminology, and we can talk about why that is and where that came from. But at the end of the day, it really is that. 
It's like extending and enhancing the functionality of AI application.swyx [00:01:05]: David, would you add anything?Justin/David [00:01:07]: Yeah, I think that's actually a good description. I think there's like a lot of different ways for how people are trying to explain it. But at the core, I think what Justin said is like extending AI applications is really what this is about. And I think the interesting bit here that I want to highlight, it's AI applications and not models themselves that this is focused on. That's a common misconception that we can talk about a bit later. But yeah. Another version that we've used and gotten to like is like MCP is kind of like the USB-C port of AI applications and that it's meant to be this universal connector to a whole ecosystem of things.swyx [00:01:44]: Yeah. Specifically, an interesting feature is, like you said, the client and server. And it's a sort of two-way, right? Like in the same way that said a USB-C is two-way, which could be super interesting. Yeah, let's go into a little bit of the origin story. There's many people who've tried to make statistics. There's many people who've tried to build open source. I think there's an overall, also, my sense is that Anthropic is going hard after developers in the way that other labs are not. And so I'm also curious if there was any external influence or was it just you two guys just in a room somewhere riffing?Justin/David [00:02:18]: It is actually mostly like us two guys in a room riffing. So this is not part of a big strategy. You know, if you roll back time a little bit and go into like July 2024. I was like, started. I started at Anthropic like three months earlier or two months earlier. And I was mostly working on internal developer tooling, which is what I've been doing for like years and years before. 
And as part of that, I think there was an effort of like, how do I empower more like employees at Anthropic to use, you know, to integrate really deeply with the models we have? Because we've seen these, like, how good it is, how amazing it will become even in the future. And of course, you know, just dogfoot your own model as much as you can. And as part of that. From my development tooling background, I quickly got frustrated by the idea that, you know, on one hand side, I have Cloud Desktop, which is this amazing tool with artifacts, which I really enjoyed. But it was very limited to exactly that feature set. And it was there was no way to extend it. And on the other hand side, I like work in IDEs, which could greatly like act on like the file system and a bunch of other things. But then they don't have artifacts or something like that. And so what I constantly did was just copy. Things back and forth on between Cloud Desktop and the IDE, and that quickly got me, honestly, just very frustrated. And part of that frustration wasn't like, how do I go and fix this? What, what do we need? And back to like this development developer, like focus that I have, I really thought about like, well, I know how to build all these integrations, but what do I need to do to let these applications let me do this? And so it's very quickly that you see that this is clearly like an M times N problem. Like you have multiple like applications. And multiple integrations you want to build and like, what that is better there to fix this than using a protocol. And at the same time, I was actually working on an LSP related thing internally that didn't go anywhere. But you put these things together in someone's brain and let them wait for like a few weeks. And out of that comes like the idea of like, let's build some, some protocol. And so back to like this little room, like it was literally just me going to a room with Justin and go like, I think we should build something like this. 
Uh, this is a good idea. And Justin. Lucky for me, just really took an interest in the idea, um, and, and took it from there to like, to, to build something, together with me, that's really the inception story is like, it's us to, from then on, just going and building it over, over the course of like, like a month and a half of like building the protocol, building the first integration, like Justin did a lot of the, like the heavy lifting of the first integrations in cloud desktop. I did a lot of the first, um, proof of concept of how this can look like in an IDE. And if you, we could talk about like some of. All the tidbits you can find way before the inception of like before the official release, if you were looking at the right repositories at the right time, but there you go. That's like some of the, the rough story.Alessio [00:05:12]: Uh, what was the timeline when, I know November 25th was like the official announcement date. When did you guys start working on it?Justin/David [00:05:19]: Justin, when did we start working on that? I think it, I think it was around July. I think, yeah, I, as soon as David pitched this initial idea, I got excited pretty quickly and we started working on it, I think. I think almost immediately after that conversation and then, I don't know, it was a couple, maybe a few months of, uh, building the really unrewarding bits, if we're being honest, because for, for establishing something that's like this communication protocol has clients and servers and like SDKs everywhere, there's just like a lot of like laying the groundwork that you have to do. So it was a pretty, uh, that was a pretty slow couple of months. But then afterward, once you get some things talking over that wire, it really starts to get exciting and you can start building. All sorts of crazy things. And I think this really came to a head. 
And I don't remember exactly when it was, maybe like approximately a month before release, there was an internal hackathon where some folks really got excited about MCP and started building all sorts of crazy applications. I think the coolest one of which was like an MCP server that can control a 3d printer or something. And so like, suddenly people are feeling this power of like cloud connecting to the outside world in a really tangible way. And that, that really added some, uh, some juice to us and to the release.Alessio [00:06:32]: Yeah. And we'll go into the technical details, but I just want to wrap up here. You mentioned you could have seen some things coming if you were looking in the right places. We always want to know what are the places to get alpha, how, how, how to find MCP early.Justin/David [00:06:44]: I'm a big Zed user. I liked the Zed editor. The first MCP implementation on an IDE was in Zed. It was written by me and it was there like a month and a half before the official release. Just because we needed to do it in the open because it's an open source project. Um, and so it was, it was not, it was named slightly differently because we. We were not set on the name yet, but it was there.swyx [00:07:05]: I'm happy to go a little bit. Anthropic also had some preview of a model with Zed, right? Some kind of fast editing, uh, model. Um, uh, I, I'm con I confess, you know, I'm a cursor windsurf user. Haven't tried Zed. Uh, what's, what's your, you know, unrelated or, you know, unsolicited two second pitch for, for Zed. That's a good question.Justin/David [00:07:28]: I, it really depends what you value in editors. For me. I, I wouldn't even say I like, I love Zed more than others. I like them all like complimentary in, in a way or another, like I do use windsurf. I do use Zed. 
Um, but I think my, my main pitch for Zed is low latency, super smooth experience editor with a decent enough AI integration.swyx [00:07:51]: I mean, and maybe, you know, I think that's, that's all it is for a lot of people. Uh, I think a lot of people obviously very tied to the VS code paradigm and the extensions that come along with it. Okay. So I wanted to go back a little bit. You know, on, on, on some of the things that you mentioned, Justin, uh, which was building MCP on paper, you know, obviously we only see the end result. It just seems inspired by LSP. And I, I think both of you have acknowledged that. So how much is there to build? And when you say build, is it a lot of code or a lot of design? Cause I felt like it's a lot of design, right? Like you're picking JSON RPC, like how much did you base off of LSP and, and, you know, what, what, what was the sort of hard, hard parts?Justin/David [00:08:29]: Yeah, absolutely. I mean, uh, we, we definitely did take heavy inspiration from LSP. David had much more prior experience with it than I did working on developer tools. So, you know, I've mostly worked on products or, or sort of infrastructural things. LSP was new to me. But as a, as a, like, or from design principles, it really makes a ton of sense because it does solve this M times N problem that David referred to where, you know, in the world before LSP, you had all these different IDEs and editors, and then all these different languages that each wants to support or that their users want them to support. And then everyone's just building like one. And so, like, you use Vim and you might have really great support for, like, honestly, I don't know, C or something, and then, like, you switch over to JetBrains and you have the Java support, but then, like, you don't get to use the great JetBrains Java support in Vim and you don't get to use the great C support in JetBrains or something like that. 
So LSP largely, I think, solved this problem by creating this common language that they could all speak, and, you know, you can have some people focus on really robust language server implementations, and then the IDE developers can really focus on that side. And they both benefit. So that was our key takeaway for MCP: that same principle and that same problem, in the space of AI applications and extensions to AI applications. But in terms of concrete particulars, I mean, we did take JSON-RPC, and we took this idea of bidirectionality, but I think we quickly took it down a different route after that. I guess there is one other principle from LSP that we try to stick to today, which is this focus on how features manifest, more than the semantics of things, if that makes sense. David refers to it as being presentation-focused: basically thinking about, and offering, different primitives not because the semantics of them are necessarily very different, but because you want them to show up in the application differently. That was a key insight about how LSP was developed, and that's also something we try to apply to MCP. But like I said, from there, yeah, we spent a lot of time, really a lot of time — and we could go into this more separately — thinking about each of the primitives that we want to offer in MCP, and why they should be different; like, why we want to have all these different concepts. That was a significant amount of work. That was the design work, as you allude to. But then also, already out of the gate, we had three different languages that we wanted to at least support to some degree. That was TypeScript, Python, and then, for the Zed integration, Rust. So there was some SDK-building work in those languages, and a mixture of clients and servers to build out, to try to create this internal ecosystem that we could start playing with.
And then, yeah, I guess just trying to make everything robust — like this whole concept that we have for local MCP, where you launch subprocesses and stuff — making that robust took some time as well. Yeah, maybe adding to that: I think the LSP influence goes even a little bit further. We did take quite a look at criticisms of LSP — things that LSP didn't do right, and things that people felt they would love to have different — and really took that to heart, to see, you know, what are some of the things we should do better. We took a lengthy look at their very unique approach to JSON-RPC, I may say, and then we decided that this is not what we do. And so there are these differences, but it's clearly very, very inspired. Because I think when you're trying to build something like MCP, you kind of want to pick the areas you want to innovate in, but you want to be boring about the other parts by pattern-matching LSP. That prior art allows you to be boring in a lot of the core pieces that you want to be boring in. Like, the choice of JSON-RPC is very non-controversial to us, because it just doesn't matter at all what the actual bytes on the wire that you're speaking are. It makes no difference to us. The innovation is in the primitives you choose, and these types of things, and so there's way more focus on that that we wanted to do. So having some prior art is good there, basically.swyx [00:12:26]: It does. I wanted to double-click. I mean, there are so many things we could go into. Obviously, I am passionate about protocol design. I wanted to show you guys this. I mean, I think you guys know, but, you know, you already referred to the M times N problem.
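The "boring" JSON-RPC framing the speakers describe can be sketched concretely. This is an illustrative reconstruction, not code from the project; the helper names are made up, but the message shape is standard JSON-RPC 2.0, which is what MCP messages are framed as:

```python
import json

def make_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request of the kind MCP messages are framed as."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

def make_response(req_id, result):
    """Build the matching JSON-RPC 2.0 response."""
    return {"jsonrpc": "2.0", "id": req_id, "result": result}

# Because MCP is bidirectional, either peer (client or server) can issue
# requests over the same connection using exactly this framing.
req = make_request(1, "tools/list")
wire = json.dumps(req)  # the "bytes on the wire" are just JSON text
assert json.loads(wire)["method"] == "tools/list"
```

The point of choosing an existing wire format is exactly what David says: the envelope carries no design opinion, so all the protocol's innovation budget goes into the primitives layered on top.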
And I can just share my screen here — anyone working in developer tools has faced this exact issue, where you see the God box, basically. Like, the fundamental problem and solution of all infrastructure engineering is you have M things going to N things, and then you put in the God box and they'll all be better, right? So here is one problem for Uber, one problem for GraphQL, one problem for Temporal, where I used to work, and this one is from React. And I was just kind of curious, like, you know, did you solve M times N problems at Facebook? Like, it sounds like, David, you did that for a living, right? Like, this is just M times N for a living.Justin/David [00:13:16]: Yeah, yeah. To some degree, for sure, I did. Gosh, what a good example of this — but, like, I did a bunch of this kind of work on source control systems and these types of things. And so there were a bunch of these types of problems, and you just shove them into something that everyone can read from and everyone can write to, and you build your God box somewhere, and it works. But yeah, in developer tooling, you're absolutely right. In developer tooling, this is everywhere, right?swyx [00:13:47]: It shows up everywhere. And what was interesting is I think everyone who makes the God box then has the same set of problems: you now have composability, auth, and remote versus local. So, you know, there's this very common set of problems. So I kind of want to take a meta lesson on how to do the God box, but we can talk about the sort of development stuff later. I wanted to double-click again on the presentation focus that Justin mentioned — how features manifest, and how you said some things are the same, but you just want to reify some concepts so they show up differently. And I had that sense, you know, when I was looking at the MCP docs: I'm like, why do these two things need to be different from each other?
I think a lot of people treat tool calling as the solution to everything, right? And sometimes you can actually view different kinds of tool calls as different things. Sometimes they're resources. Sometimes they're actually taking actions. Sometimes they're something else that I don't really know yet. But I just want to see, like, what are some things that you sort of mentally group as adjacent concepts, and why were they important to you to emphasize?Justin/David [00:14:58]: Yeah, I can chat about this a bit. Fundamentally, every primitive that we thought through, we thought about from the perspective of the application developer first: if I'm building an application — whether it is an IDE or, you know, Claude Desktop or some agent interface or whatever the case may be — what are the different things that I would want to receive from an integration? And I think once you take that lens, it becomes quite clear that tool calling is necessary but very insufficient. There are many other things you would want to do besides just get tools and plug them into the model, and you want to have some way of differentiating what those different things are. So the core primitives that we started MCP with — we've since added a couple more, but the core ones — are really: tools, which we've already talked about; it's adding tools directly to the model, or function calling as it's sometimes called. Resources, which are basically bits of data or context that you might want to add to the model context. And this is the first primitive where we decided this could be application-controlled: maybe you want a model to automatically search through and find relevant resources and bring them into context.
But maybe you also want that to be an explicit UI affordance in the application, where the user can, like, you know, pick through a dropdown or a paperclip menu or whatever, and find specific things and tag them in. And then that becomes part of their message to the LLM. Those are both use cases for resources. And then the third one is prompts, which are deliberately meant to be user-initiated or user-substituted text or messages. The analogy here would be, if you're in an editor, a slash command or something like that, or an at-mention autocompletion type thing, where it's like: I have this kind of macro, effectively, that I want to drop in and use. And we have expressed opinions through MCP about the different ways that these things could manifest, but ultimately it is for application developers to decide. You get these different concepts expressed differently, and it's very useful as an application developer, because you can decide the appropriate experience for each. And actually, this can be a point of differentiation too. We were also thinking, you know, from the application developer perspective: application developers don't want to be commoditized. They don't want the application to end up the same as every other AI application. So what are the unique things that they could do to create the best user experience, even while connecting up to this big open ecosystem of integrations? And to add to that, I think there are two aspects I want to mention. The first one is that, interestingly enough, while nowadays tool calling is obviously probably 95-plus percent of the integrations — and I wish there would be, you know, more clients doing tools, resources, prompts — the very first implementation, in Zed, is actually a prompt implementation. It doesn't deal with tools.
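The three primitives described above can be sketched as data. The field names follow the published MCP specification (`tools/list`, `resources/list`, `prompts/list` result shapes); the concrete entries are invented for illustration:

```python
# Illustrative entries for MCP's three core primitives.
tool = {  # model-controlled: the LLM decides when to call it
    "name": "run_query",
    "description": "Run a read-only SQL query",
    "inputSchema": {"type": "object", "properties": {"sql": {"type": "string"}}},
}

resource = {  # application/user-controlled context, identified by URI
    "uri": "db://main/schema/users",
    "name": "users table schema",
    "mimeType": "text/plain",
}

prompt = {  # user-initiated macro, e.g. a slash command
    "name": "summarize_crash",
    "description": "Pull a backtrace into context and summarize it",
    "arguments": [{"name": "crash_id", "required": True}],
}

def controlled_by(primitive_kind):
    # Who initiates each primitive, per the discussion above.
    return {"tool": "model", "resource": "application", "prompt": "user"}[primitive_kind]
```

The semantics overlap (all three end up as model context); what differs is who initiates them and how they surface in the application's UI — which is exactly the "presentation-focused" design point.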
And we found this actually quite useful, because what it allows you to do is, for example, build an MCP server that takes like a backtrace. So it's not necessarily a tool — it literally just pulls raw crash data from Sentry, or any other online platform that tracks your crashes, and just lets you pull this into the context window beforehand. And so it's quite nice that way, that it's a user-driven interaction: the user decides when to pull this in and doesn't have to wait for the model to do it. And so it's a great way to craft the prompt, in a way. And I think similarly, you know, I wish more MCP servers today would bring prompts as examples of how to even use the tools. Yeah. At the same time, the resources bits are quite interesting as well, and I wish we would see more usage there, because it's very easy to envision — but yet nobody has really implemented it — a system where an MCP server exposes, you know, a set of documents that you have, your database, whatever you might want to, as a set of resources, and then a client application would build a full RAG index around this, right? This is definitely an application use case we had in mind, as to why these are exposed in such a way that they're not model-driven: because you might want to have way more resource content than is realistically usable in a context window. And so I wish, and I hope, applications will do this in the next few months — use these primitives way better — because I think there are way richer experiences to be created that way. Yeah, completely agree with that; I would have gone into it myself otherwise.Alessio [00:19:30]: I think that's a great point. And everybody just, you know, has a hammer and wants to do tool calling on everything. I think a lot of people do tool calling to do a database query. They don't use resources for it.
What are, I guess, the pros and cons — like, when should people use a tool versus a resource, especially when it comes to things that do have an API interface? Like for a database, you can do a tool that does a SQL query. When should you do that, versus a resource with the data? Yeah.Justin/David [00:20:00]: The way we separate these is: tools are always meant to be initiated by the model. It's at the model's discretion that it will find the right tool and apply it. So if that's the interaction you want as a server developer — okay, suddenly I've given the LLM the ability to run SQL queries, for example — that makes sense as a tool. But resources are more flexible, basically. And, to be completely honest, the story here is practically a bit complicated today, because many clients don't support resources yet. But in an ideal world where all these concepts are fully realized and there's full ecosystem support, you would do resources for things like the schemas of your database tables and stuff like that, as a way to either allow the user to say, "okay, Claude, I want to talk to you about this database table. Here it is. Let's have this conversation." Or maybe the particular AI application that you're using — it could be something agentic, like Claude Code — is able to just agentically look up resources and find the right schema for the database table you're talking about. Both those interactions are possible. But I think anytime you have this pattern of "list a bunch of entities, then read any of them," that makes sense to model as resources. Resources are also uniquely identified by a URI, always.
And so you can also think of them as, like, general-purpose transformers, even: if you want to support an interaction where a user just drops a URI in, and then you automatically figure out how to interpret that, you could use MCP servers to do that interpretation. One of the interesting side notes here, back to the Zed example of resources: Zed has a prompt library that people can interact with, and we wanted to expose a set of default prompts that everyone would have as part of that prompt library. We did that via resources for a while, so that you boot up Zed and Zed will just populate the prompt library from an MCP server, which was quite a cool interaction. And that was, again, very specific — both sides needed to agree on the URI format and the underlying data format — but that was a nice and kind of neat little application of resources. There's also, going back to that perspective of "as an application developer, what are the things that I would want?" — we also applied this thinking to what existing features of applications could conceivably be factored out into MCP servers, if you were to take that approach today. And so, basically, any IDE where you have an attachment menu — that, I think, naturally models as resources. It's just, you know, those implementations already existed.swyx [00:22:49]: Yeah, I think the immediate, like — you know, when you introduced it for Claude Desktop and I saw the at-sign there, I was like, oh yeah, that's what Cursor has, but this is for everyone else. And, you know, I think that is a really good design target, because it's something that already exists and people can map onto pretty neatly. I was actually featuring this chart from Mahesh's workshop that presumably you guys agreed on.
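The "list a bunch of entities, then read any of them" pattern comes down to a table keyed by URI, the way a server might back `resources/list` and `resources/read`. This is a minimal sketch with hypothetical URIs and content, not the SDK's actual API:

```python
# In-memory resource store keyed by URI (illustrative data).
RESOURCES = {
    "docs://guide/intro": "Welcome to the project...",
    "db://main/schema/users": "CREATE TABLE users (id INTEGER, name TEXT);",
}

def list_resources():
    """What a resources/list handler would return: one entry per URI."""
    return [{"uri": uri} for uri in sorted(RESOURCES)]

def read_resource(uri):
    """What a resources/read handler would return for a single URI."""
    if uri not in RESOURCES:
        raise KeyError(f"unknown resource: {uri}")
    return {"uri": uri, "text": RESOURCES[uri]}
```

Because the listing can be far larger than any context window, the client (not the model) decides what to read — e.g. by indexing everything for RAG, or by surfacing the list behind an attachment menu.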
I think this is so useful that it should be on the front page of the docs. It probably should be. I think that's a good suggestion.Justin/David [00:23:19]: Do you want to do a PR for this? I love it.swyx [00:23:21]: Yeah, do a PR. I've done a PR for just Mahesh's workshop in general, just because I'm like, you know. I know.SPEAKER_03 [00:23:28]: I approve. Yeah.swyx [00:23:30]: Thank you. Yeah. I mean, you know, I think for me, as a developer relations person, I always insist on having a map for people: here are all the main things you have to understand; we'll spend the next two hours going through this. So one image that kind of covers all this, I think, is pretty helpful. And I like your emphasis on prompts. I would say that it's interesting that, in the early days of ChatGPT and Claude — oh, you can't really follow my screen, can you? — in the early days of ChatGPT and all that, a lot of people started, like, you know, GitHubs for prompts, like prompt manager libraries, and those never really took off. And I think something like this is helpful and important. I would say I've also seen the prompt file format from Humanloop, I think, as another way to standardize how people share prompts. But yeah, I agree that there should be more innovation here. And I think probably people want some dynamism, which I think you allow for. And I like that you have multi-step — this was the main thing that made me go: these guys really get it. You know, you've maybe published some research that says, actually, sometimes to get the model working the right way, you have to do multi-step prompting, or jailbreaking, to behave the way that you want. And so I think prompts are not just single conversations. They're sometimes chains of conversations.
Yeah.Alessio [00:25:05]: Another question that I had when I was looking at some server implementations: the server builders kind of decide what data eventually gets returned, especially for tool calls. For example, the Google Maps one, right? If you just look through it, they decide what attributes get returned, and the user cannot override that if there's a missing one. That has always been my gripe with SDKs in general, when people build API-wrapper SDKs and then they miss one parameter — maybe it's new — and then I cannot use it. How do you guys think about that? And, yeah, how much should the user be able to intervene in that, versus just letting the server designer do all the work?Justin/David [00:25:41]: I think we probably bear responsibility for the Google Maps one, because I think that's one of the reference servers we've released. In general, for tool results in particular, we've actually made the deliberate decision, at least thus far, for tool results to be not, like, structured JSON data matching a schema, really, but text or images — basically messages that you would pass into the LLM directly. And so I guess the corollary of that is: you really should just return a whole jumble of data and trust the LLM to sort through it, and sift out and extract the information it cares about, because that's exactly what LLMs excel at. And we really try to think about how to use LLMs to their full potential, and not over-specify and then end up with something that doesn't scale as LLMs themselves get better and better. So really, yeah, I suppose what should be happening in this example server — which, again, pull requests welcome, that would be great.
It's like, if all these result types were literally just passed through from the API that it's calling, then new fields would be able to pass through automatically.Alessio [00:27:19]: It's hard to make design decisions on where to draw the line.Justin/David [00:27:22]: I'll maybe throw AI under the bus a little bit here and just say that Claude wrote a lot of these example servers. No surprise at all. But I do think — sorry — I do think there's an interesting point in this: people at the moment mostly still just apply their normal software engineering API approaches to this, and I think we still need a little bit more relearning of how to build something for LLMs and trust them, particularly, you know, as they are getting significantly better year to year. Right? And I think two years ago, maybe that approach would have been very valid. But nowadays, just throw data at the thing that is really good at dealing with data — that's a good approach to this problem. And it's just unlearning, like, 20, 30, 40 years of software engineering practices that go a little bit into this, to some degree. If I could add to that real quickly, just one framing as well for MCP is thinking in terms of how crazily fast AI is advancing. I mean, it's exciting. It's also scary. Us thinking that the biggest bottleneck to, you know, the next wave of capabilities for models might actually be their ability to interact with the outside world — to read data from outside data sources, or take stateful actions. Working at Anthropic, we absolutely care about doing that safely and with the right control and alignment measures in place and everything. But also, as AI gets better, people will want that. That'll be key to becoming productive with AI: being able to connect models up to all those things.
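The pass-through approach described above can be sketched in a few lines. Since tool results are content blocks for the model rather than a rigid schema, a server can wrap the whole upstream payload as text instead of cherry-picking fields. The function name and the geocoding payload are hypothetical; the `{"content": [{"type": "text", ...}]}` result shape follows the MCP spec:

```python
import json

def tool_result_from_api(raw_api_payload):
    """Wrap a raw upstream API payload as an MCP-style tool result.

    No fields are dropped or reformatted: the LLM sifts out what it
    needs, and new upstream fields pass through automatically.
    """
    return {"content": [{"type": "text", "text": json.dumps(raw_api_payload)}]}

# Hypothetical geocoding response: nothing curated away.
raw = {"lat": 52.52, "lng": 13.405, "formatted_address": "Berlin, Germany"}
result = tool_result_from_api(raw)
assert "13.405" in result["content"][0]["text"]
```

The trade-off is token cost: a jumble of raw data spends context window, but it never silently hides the one parameter the user needed.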
So MCP is also sort of a bet on the future, and where this is all going, and how important that will be.Alessio [00:29:05]: Yeah. I would say any API attribute that says formatted_ should kind of be gone, and we should just get the raw data for all of them. Because why, you know, why are you formatting? The model is definitely smart enough to format an address. So I think that should go to the end user.swyx [00:29:23]: Yeah. I think Alessio is about to move on to server implementation, but we're still talking about MCP design and goals and intentions. And I think we've indirectly identified some problems that MCP is really trying to address. But I wanted to give you the spot to directly take on MCP versus OpenAPI, because obviously this is a top question. I wanted to recap everything we just talked about and give people a nice little segment that they can point to as the definitive answer on MCP versus OpenAPI.Justin/David [00:29:56]: Yeah, I think fundamentally — I mean, OpenAPI specifications are a very great tool, and I've used them a lot in developing APIs and consumers of APIs. But fundamentally, we think that they're just too granular for what you want to do with LLMs. They don't express higher-level, AI-specific concepts, like this whole mental model we've talked about with the primitives of MCP and thinking from the perspective of the application developer. You don't get any of that when you encode this information into an OpenAPI specification. So we believe that models will benefit more from purpose-built or purpose-designed tools, resources, prompts, and the other primitives than from just, kind of, "here's our REST API, go wild." I do think there's another aspect. I'm not an OpenAPI expert, so everything might not be perfectly accurate.
But, like, there's been — and we can talk about this a bit more later — a deliberate design decision to make the protocol somewhat stateful, because we really believe that AI applications and AI interactions will become inherently more stateful, and that the current need for statelessness is more a temporary point in time (to some degree it will always exist). But I think statefulness will become increasingly more popular, particularly when you think about additional modalities that go beyond pure text-based interactions with models — it might be video, audio, whatever other modalities are out there already. And so I do think that having something a bit more stateful is just inherently useful in this interaction pattern. I also think OpenAPI and MCP are actually more complementary than people want to make them out to be. Like, people look for these A-versus-B matchups and want to have all the developers of these things go in a room and fistfight it out. But that's rarely what's going on. I think they're actually very complementary, and they each have their space where they're very, very strong. And, you know, just use the best tool for the job. If you want to have a rich interaction between an AI application and an integration, it's probably MCP that's the right choice. And if you want to have an API spec somewhere that is very easy for a model to read and interpret, and that works for you, then OpenAPI is the way to go. One more thing to add here is that we've already seen people — I mean, this happened very early — people in the community built bridges between the two as well.
So if what you have is an OpenAPI specification and no one's, you know, building a custom MCP server for it, there are already translators that will take that and re-expose it as MCP. And you could do the other direction too. Awesome.Alessio [00:32:43]: Yeah. I think there's the other side of MCP that people don't talk as much about, because it doesn't go viral, which is building the servers. Everybody does the tweets about "connect Claude Desktop to X MCP server, it's amazing." How would you guys suggest people start with building servers? The spec is, like — there are so many things you can do that it's almost, how do you draw the line between being very descriptive as a server developer versus, going back to our discussion before, just returning the data and letting the model manipulate it later? Do you have any suggestions for people?Justin/David [00:33:16]: I have a few suggestions. I think one of the best things about MCP, and something that we got right very early, is that it's just very, very easy to build something very simple. It might not be amazing, but it's good enough, because models are very good — and you can get this going within, like, half an hour, you know? And so the best part is: just pick the language that you love the most, pick the SDK for it if there's an SDK for it, and then just go build a tool for the thing that matters to you personally, that you want to use, that you want to see the model interact with. Build the server, throw the tool in, don't even worry too much about the description just yet — write your little description as you think about it — and just give it to the model: just throw it, over the stdio transport, into an application that you like, and see it do things.
And I think that's part of the magic — that empowerment for developers, to get so quickly to the model doing something that you care about. That really gets you going, and gets you into this flow of: okay, I see this thing can do cool things. Now I can go and expand on this, and now I can really think about which different tools I want, which different resources and prompts I want. Okay, now that I have that — what do my evals look like for how I want this to go? How do I optimize my prompts for the evals, using tools for that? There's infinite depth you can go into. But just start as simple as possible: go build a server in, like, half an hour, in the language of your choice, and see how the model interacts with the things that matter to you. I think that's where the fun is at. And a lot of what makes MCP great is it just adds a lot of fun to the development piece — to just go and have models do things quickly. I'm also quite partial, again, to using AI to help me do the coding. Even during the initial development process, we realized it was quite easy to basically just take all the SDK code — again, you know, what David suggested: pick the language you care about, and then pick the SDK. And once you have that, you can literally just drop the whole SDK code into an LLM's context window and say: okay, now that you know MCP, build me a server that does this, this, this. And the results, I think, are astounding. I mean, it might not be perfect around every single corner or whatever, and you can refine it over time, but it's a great way to kind of one-shot something that basically does what you want, and then you can iterate from there.
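The "half an hour, 100–200 lines" server the speakers describe can be sketched without any SDK at all: a stdio transport is just newline-delimited JSON-RPC on stdin/stdout. Everything here is a toy — the `shout` tool is invented, only two methods are handled, and a real server should use an official MCP SDK — but it shows how little machinery the local, subprocess-based transport needs:

```python
import json
import sys

def shout(text: str) -> str:
    """The one toy tool this sketch exposes."""
    return text.upper()

def handle(msg):
    """Dispatch a single JSON-RPC message and return the response dict."""
    method = msg.get("method")
    if method == "tools/list":
        result = {"tools": [{"name": "shout", "description": "Uppercase some text"}]}
    elif method == "tools/call":
        args = msg["params"]["arguments"]
        result = {"content": [{"type": "text", "text": shout(args["text"])}]}
    else:
        # Standard JSON-RPC "method not found" error.
        return {"jsonrpc": "2.0", "id": msg.get("id"),
                "error": {"code": -32601, "message": f"unknown method: {method}"}}
    return {"jsonrpc": "2.0", "id": msg.get("id"), "result": result}

def serve_stdio():
    """Run as a stdio server: one JSON-RPC message per line. Call explicitly."""
    for line in sys.stdin:
        if line.strip():
            print(json.dumps(handle(json.loads(line))), flush=True)
```

A host application would launch this as a subprocess and write requests to its stdin — which is exactly the local-MCP, "launch subprocesses" model mentioned earlier.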
And like David said, there has been a big emphasis from the beginning on making servers as easy and simple to build as possible, which certainly helps with LLMs doing it too. These days, getting started is, like, 100, 200 lines of code. It's really quite easy. Yeah. And if you don't have an SDK — again, give the subset of the spec that you care about to the model, along with another SDK, and just have it build you an SDK. And it usually works for that subset. Building a full SDK is a different story, but to get a model to tool call in Haskell, or whatever language you like, is probably pretty straightforward.swyx [00:36:32]: Yeah. Sorry.Alessio [00:36:34]: No, I was gonna say: I co-hosted a hackathon at the AGI House on personal agents, and one of the personal agents somebody built was an MCP server builder agent, where you'd basically put in the URL of the API spec and it would build an MCP server for you. Do you see that today as kind of, like — yeah, most servers are just a layer on top of an existing API without too much opinion? And do you think that's how it's going to be going forward — just AI-generated, exposing an API that already exists? Or are we going to see net-new MCP experiences that you couldn't do before?Justin/David [00:37:10]: I think, go for it. I think both. Like, I think there will always be value in "oh, I have my data over here, and I want to use some connector to bring it into my application over here." That use case will certainly remain. I think, you know, this kind of goes back to: a lot of things today are maybe defaulting to tool use when some of the other primitives would be more appropriate over time. And so it could still be that connector.
It could still just be that sort of adapter layer, but it could actually adapt onto different primitives, which is one way to add more value. But then I also think there's plenty of opportunity for MCP servers that do interesting things in and of themselves and aren't just adapters. Some of the earliest examples of this were, like, you know, the memory MCP server, which gives the LLM the ability to remember things across conversations, or — someone who's a close coworker built the... I shouldn't have said that; not a close coworker. Someone. Yeah. Built the sequential thinking MCP server, which gives a model the ability to really think step-by-step and get better at its reasoning capabilities. This is something where it really isn't integrating with anything external; it's just providing this sort of way of thinking for a model.Justin/David [00:38:27]: I guess either way, though, I think AI authorship of the servers is totally possible. I've had a lot of success in prompting, just being like: hey, I want to build an MCP server that does this thing. And even if this thing is not adapting some other API but doing something completely original, it's usually able to figure that out too. Yeah. To add to that, I do think that a good part of what MCP servers will be, will be these just-API-wrappers, to some degree. And that's going to be valid, because that works and it gets you very, very far. But I think we're just very early in exploring what you can do. And I think as client support for certain primitives gets better — like, we can talk about sampling, my favorite topic and greatest frustration at the same time — I think you can very easily see way, way richer experiences, and we have built them internally as prototypes.
And I think you see some of that in the community already, but there's just, you know, things like, Hey, summarize my, you know, my, my, my, my favorite subreddits for the morning MCP server that nobody has built yet, but it's very easy to envision. And the protocol can totally do this. And these are like slightly richer experiences. And I think as people like go away from like the, oh, I just want to like, I'm just in this new world where I can hook up the things that matter to me, to the LLM, to like actually want a real workflow, a real, like, like more richer experience that I, I really want exposed to the model. I think then you will see these things pop up, but again, that's a, there's a little bit of a chicken and egg problem at the moment with like what a client supported versus, you know, what servers like authors want to do. Yeah.Alessio [00:40:10]: That, that, that was. That's kind of my next question on composability. Like how, how do you guys see that? Do you have plans for that? What's kind of like the import of MCPs, so to speak, into another MCP? Like if I want to build like the subreddit one, there's probably going to be like the Reddit API, uh, MCP, and then the summarization MCP. And then how do I, how do I do a super MCP?Justin/David [00:40:33]: Yeah. So, so this is an interesting topic and I think there, um, so there, there are two aspects to it. I think that the one aspect is like, how can I build something? I think agentically that you requires an LLM call and like a one form of fashion, like for summarization or so, but I'm staying model independent and for that, that's where like part of this by directionality comes in, in this more rich experience where we do have this facility for servers to ask the client again, who owns the LLM interaction, right? Like we talk about cursor, who like runs the, the, the loop with the LLM for you there that for the server author to ask the client for a completion. 
Um, and basically have it like summarize something for the server and return it back. And so now what model summarizes this depends on which one you have selected in Cursor, and not on what the author brings. The author doesn't bring an SDK. It doesn't have to have an API key. It's completely model independent, how you can build this. That's just one aspect. The second aspect to building richer systems with MCP is that you can easily envision an MCP server that serves something to a client like Cursor or Windsurf or Claude Desktop, but at the same time is also an MCP client and itself can use MCP servers to create a rich experience. And now you have a recursive property, which we actually quite carefully, in the design principles, try to retain. You, you know, you see it all over the place, in authorization and other aspects of the spec, that we retain this recursive pattern. And now you can think about like, okay, I have this little bundle of applications, both a server and a client. And I can add these in chains and build basically graphs, like DAGs, out of MCP servers that can just richly interact with each other. An agentic MCP server can also use the whole ecosystem of MCP servers available to it. And I think that's a really cool thing you can do. And people have experimented with this. And I think you'll hopefully see more of this, particularly when you think about auto-selecting, auto-installing; there's a bunch of these things you can do that make a really fun experience. I think practically there are some niceties we still need to add to the SDKs to make this really simple and easy to execute on, like this kind of recursive MCP server that is also a client, or kind of multiplexing together the behaviors of multiple MCP servers into one host, as we call it. These are things we definitely want to add.
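The sampling flow described above — the server handing a prompt back to the client, which runs whatever model the user selected — can be sketched in a few lines. This is an illustrative shape only, with invented names standing in for the protocol's sampling request:

```python
class Client:
    """Hypothetical host-side object (think Cursor or Claude Desktop): it owns
    the model loop. complete_fn stands in for whatever model the user picked."""

    def __init__(self, complete_fn):
        self._complete = complete_fn

    def create_message(self, prompt):
        # In MCP terms: the client services a sampling request from a server.
        return self._complete(prompt)


class SummarizingServer:
    """Server-side: instead of bundling a model SDK and an API key, it asks
    the client for a completion and stays completely model independent."""

    def __init__(self, client):
        self.client = client

    def summarize(self, text):
        return self.client.create_message(f"Summarize in one line: {text}")


# A deterministic stub "model" standing in for the host's configured model.
client = Client(lambda prompt: prompt.upper())
server = SummarizingServer(client)
```

The server never sees an API key or a model name; swap the lambda for a call into whichever model the host has configured and nothing on the server side changes.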
We haven't been able to yet, but like, uh, I think that would go some way to showcasing these things that we know are already possible, but not necessarily taken up that much yet. Okay.swyx [00:43:08]: This is, uh, very exciting. And very, I'm sure, I'm sure a lot of people get very, very, uh, a lot of ideas and inspiration from this. Is an MCP server that is also a client, is that an agent?Justin/David [00:43:19]: What's an agent? There's a lot of definitions of agents.swyx [00:43:22]: Because like you're, in some ways you're, you're requesting something and it's going off and doing stuff that you don't necessarily know. There's like a layer of abstraction between you and the ultimate raw source of the data. You could dispute that. Yeah. I just, I don't know if you have a hot take on agents.Justin/David [00:43:35]: I do think, I do think that you can build an agent that way. For me, I think you need to define the difference between. An MCP server plus client that is just a proxy versus an agent. I think there's a difference. And I think that difference might be in, um, you know, for example, using a sample loop to create a more richer experience to, uh, to, to have a model call tools while like inside that MCP server through these clients. I think then you have a, an actual like agent. Yeah. I do think it's very simple to build agents that way. Yeah. I think there are maybe a few paths here. Like it definitely feels like there's some relationship. Between MCP and agents. One possible version is like, maybe MCP is a great way to represent agents. Maybe there are some like, you know, features or specific things that are missing that would make the ergonomics of it better. And we should make that part of MCP. That's one possibility. Another is like, maybe MCP makes sense as kind of like a foundational communication layer for agents to like compose with other agents or something like that. Or there could be other possibilities entirely. 
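That "server that is also a client" idea — multiplexing several upstream servers behind one host-facing surface — reduces to namespacing plus forwarding. A minimal sketch with invented names; upstream objects only need list_tools/call_tool:

```python
class ProxyServer:
    """Sketch of the recursive pattern: an MCP-style server that is itself a
    client of other servers, exposing their tools under one namespace.
    All names are illustrative, not from any real SDK."""

    def __init__(self, upstreams):
        self.upstreams = upstreams  # namespace -> upstream server

    def list_tools(self):
        # Qualify each upstream tool name with its namespace.
        return [f"{ns}/{tool}"
                for ns, up in self.upstreams.items()
                for tool in up.list_tools()]

    def call_tool(self, qualified, **args):
        # Route "namespace/tool" back to the owning upstream server.
        ns, _, tool = qualified.partition("/")
        return self.upstreams[ns].call_tool(tool, **args)


class EchoServer:
    """Trivial upstream used for the demonstration."""
    def list_tools(self):
        return ["echo"]

    def call_tool(self, tool, **args):
        return args


proxy = ProxyServer({"a": EchoServer(), "b": EchoServer()})
```

Chaining another ProxyServer in as an upstream gives exactly the DAG-of-servers composition the conversation describes.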
Maybe MCP should specialize and narrowly focus on kind of the AI application side. And not as much on the agent side. I think it's a very live question and I think there are sort of trade-offs in every direction going back to the analogy of the God box. I think one thing that we have to be very careful about in designing a protocol and kind of curating or shepherding an ecosystem is like trying to do too much. I think it's, it's a very big, yeah, you know, you don't want a protocol that tries to do absolutely everything under the sun because then it'll be bad at everything too. And so I think the key question, which is still unresolved is like, to what degree are agents. Really? Really naturally fitting in to this existing model and paradigm or to what degree is it basically just like orthogonal? It should be something.swyx [00:45:17]: I think once you enable two way and once you enable client server to be the same and delegation of work to another MCP server, it's definitely more agentic than not. But I appreciate that you keep in mind simplicity and not trying to solve every problem under the sun. Cool. I'm happy to move on there. I mean, I'm going to double click on a couple of things that I marked out because they coincide with things that we wanted to ask you. Anyway, so the first one is, it's just a simple, how many MCP things can one implementation support, you know, so this is the, the, the sort of wide versus deep question. And, and this, this is direct relevance to the nesting of MCPs that we just talked about in April, 2024, when, when Claude was launching one of its first contexts, the first million token context example, they said you can support 250 tools. And in a lot of cases, you can't do that. You know, so to me, that's wide in, in the sense that you, you don't have tools that call tools. You just have the model and a flat hierarchy of tools, but then obviously you have tool confusion. 
It's going to happen when the tools are adjacent, you call the wrong tool. You're going to get the bad results, right? Do you have a recommendation of like a maximum number of MCP servers that are enabled at any given time?Justin/David [00:46:32]: I think be honest, like, I think there's not one answer to this because to some extent, it depends on the model that you're using. To some extent, it depends on like how well the tools are named and described for the model and stuff like that to avoid confusion. I mean, I think that the dream is certainly like you just furnish all this information to the LLM and it can make sense of everything. This, this kind of goes back to like the, the future we envision with MCP is like all this information is just brought to the model and it decides what to do with it. But today the reality or the practicalities might mean that like, yeah, maybe you, maybe in your client application, like the AI application, you do some fill in the blanks. Maybe you do some filtering over the tool set or like maybe you, you run like a faster, smaller LLM to like filter to what's most relevant and then only pass those tools to the bigger model. Or you could use an MCP server, which is a proxy to other MCP servers and does some filtering at that level or something like that. I think hundreds, as you referenced, is still a fairly safe bet, at least for Claude. I can't speak to the other models, but yeah, I don't know. I think over time we should just expect this to get better. So we're wary of like constraining anything and preventing that. Sort of long. Yeah, and obviously it highly, it highly depends on the overlap of the description, right? Like if you, if you have like very separate servers that do very separate things and the tools have very clear unique names, very clear, well-written descriptions, you know, your mileage might be more higher than if you have a GitLab and a GitHub server at the same time in your context. 
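The "filter before you hand tools to the big model" idea mentioned above can be as simple as ranking tool descriptions by keyword overlap with the request. A toy sketch — real systems might use a smaller LLM or embeddings, and the tool list here is made up:

```python
def filter_tools(tools, query, limit=5):
    """Toy pre-filter: rank tools by word overlap between their description
    and the user's request, and keep only the top few, instead of sending
    hundreds of tool schemas to the model. Just the shape of the idea."""
    query_words = set(query.lower().split())

    def score(tool):
        desc_words = set(tool["description"].lower().split())
        return len(query_words & desc_words)

    return sorted(tools, key=score, reverse=True)[:limit]


# Hypothetical tool descriptions, as a server might advertise them.
tools = [
    {"name": "create_issue", "description": "create a new issue on the tracker"},
    {"name": "send_email", "description": "send an email message"},
    {"name": "list_issues", "description": "list open issues on the tracker"},
]
```

A host could run this before every model turn, so the big model only ever sees the handful of tools plausibly relevant to the current request.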
And, and then the overlap is quite significant because they look very similar to the model and confusion becomes easier. There's different considerations too. Depending on the AI application, if you're, if you're trying to build something very agentic, maybe you are trying to minimize the amount of times you need to go back to the user with a question or, you know, minimize the amount of like configurability in your interface or something. But if you're building other applications, you're building an IDE or you're building a chat application or whatever, like, I think it's totally reasonable to have affordances that allow the user to say like, at this moment, I want this feature set or at this different moment, I want this different feature set or something like that. And maybe not treat it as like always on. The full list always on all the time. Yeah.swyx [00:48:42]: That's where I think the concepts of resources and tools get to blend a little bit, right? Because now you're saying you want some degree of user control, right? Or application control. And other times you want the model to control it, right? So now we're choosing just subsets of tools. I don't know.Justin/David [00:49:00]: Yeah, I think it's a fair point or a fair concern. I guess the way I think about this is still like at the end of the day, and this is a core MCP design principle is like, ultimately, the concept of a tool is not a tool. It's a client application, and by extension, the user. Ultimately, they should be in full control of absolutely everything that's happening via MCP. When we say that tools are model controlled, what we really mean is like, tools should only be invoked by the model. Like there really shouldn't be an application interaction or a user interaction where it's like, okay, as a user, I now want you to use this tool. I mean, occasionally you might do that for prompting reasons, but like, I think that shouldn't be like a UI affordance. 
But I think the client application or the user deciding to filter out things that MCP servers are offering is totally reasonable, or even to transform them. Like you could imagine a client application that takes tool descriptions from an MCP server and enriches them, makes them better. We really want the client applications to have full control in the MCP paradigm. In addition, though, one thing that's very, very early in my thinking is there might be an addition to the protocol where you want to give the server author the ability to logically group certain primitives together, potentially, to inform that, because they might know some of these logical groupings better, and that could encompass prompts, resources, and tools at the same time. I mean, personally, we can have a design discussion there. My take would be that those should be separate MCP servers, and then the user should be able to compose them together. But we can figure it out.Alessio [00:50:31]: Is there going to be like an MCP standard library, so to speak, of like, hey, these are the canonical servers, don't build these. We're just going to take care of those. And those can maybe be the building blocks that people can compose. Or do you expect people to just rebuild their own MCP servers for a lot of things?Justin/David [00:50:49]: I think we will not be prescriptive in that sense. I think there will be inherently, you know, there's a lot of power. Well, let me rephrase it. Like, I have a long history in open source, and I feel the bazaar approach to this problem is somewhat useful, right? So that the best and most interesting option wins. And I don't think we want to be very prescriptive.
I will definitely foresee, and this already exists, that there will be like 25 GitHub servers and 25, you know, Postgres servers and whatnot. And that's all cool. And that's good. And I think they all add in their own way. But effectively, eventually, over months or years, the ecosystem will converge to a set of very widely used ones, which basically, I don't know if you call it winning, but will be the most used ones. And I think that's completely fine. Because being prescriptive about this, I don't think it's of any use. I do think, of course, that there will be MCP servers, and you see them already, that are driven by companies for their products. And, you know, they will inherently probably be the canonical implementation. Like if you want to work with Cloudflare Workers and use an MCP server for that, you'll probably want to use the one developed by Cloudflare. Yeah. I think there's maybe a related thing here, too, one big thing worth thinking about. We don't have any solutions completely ready to go. It's this question of trust, or, you know, vetting is maybe a better word. Like, how do you determine which MCP servers are the kind of good and safe ones to use? Regardless of how many implementations of GitHub MCP servers there are, that could be totally fine. But you want to make sure that you're not using ones that are really sus, right? And so trying to think about how to kind of endow reputation, or, you know, if hypothetically Anthropic is like, we've vetted this, it meets our criteria for secure coding or something, how can that be reflected in kind of this open model where everyone in the ecosystem can benefit? We don't really know the answer yet, but that's very much top of mind.Alessio [00:52:49]: But I think that's like a great design choice of MCP, being language agnostic.
Already, there's not, to my knowledge, an official Anthropic Ruby SDK, nor an OpenAI one. And Alex Rudall does a great job building those. But now with MCP, you don't actually have to translate an SDK to all these languages. You just do one interface and kind of bless that interface as Anthropic. So yeah, that was nice.swyx [00:53:18]: I have a quick answer to this thing. So like, obviously there's like five or six different registries already popped up. You guys announced your official registry that's gone away. And a registry is very tempting to offer download counts, likes, reviews, and some kind of trust thing. I think it's kind of brittle. Like no matter what kind of social proof or other thing you can offer, the next update can compromise a trusted package. And actually that's the one that does the most damage, right? Setting up a trust system creates the damage that comes from abusing the trust system. And so I actually want to encourage people to try out MCP Inspector, because all you've got to do is actually just look at the traffic. And I think that goes for a lot of security issues.Justin/David [00:54:03]: Yeah, absolutely. Cool. And then I think that's the very classic supply chain problem that all registries effectively have. And, you know, there are different approaches to this problem. Like you can take the Apple approach and vet things and have an army of both automated systems and review teams to do this. And then you effectively build an app store, right? That's one approach to this type of problem. It kind of works in a certain set of ways. But I don't think it works in an open source kind of ecosystem, for which you always have a registry kind of approach, similar to npm packages and PyPI.swyx [00:54:36]: And they all inherently have these supply chain attack problems, right?
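One concrete mitigation for the "next update compromises a trusted package" problem is hash pinning: vet a server artifact once, record its digest, and refuse to run anything that doesn't match. A sketch of the idea, not a registry design:

```python
import hashlib


def verify_artifact(data: bytes, pinned_sha256: str) -> bool:
    """Return True only if the artifact matches the digest recorded when it
    was vetted. A later, possibly compromised, update will not match.
    Illustrative only; real lockfiles add signing and metadata."""
    return hashlib.sha256(data).hexdigest() == pinned_sha256


# Hypothetical vetted server artifact and its pinned digest.
artifact = b"console.log('mcp server')"
pinned = hashlib.sha256(artifact).hexdigest()
```

Package lockfiles with integrity hashes (as npm uses, for example) apply essentially this check to registry downloads.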
Yeah, yeah, totally. Quick time check. I think we're going to go for another like 20, 25 minutes. Is that okay for you guys? Okay, awesome. Cool. I wanted to double click, take the time. So we previewed a little bit of the future stuff. I want to leave that to the end, like the registry, the stateless servers and remote servers, all the other stuff. But I wanted to double click a little bit more on the launch, the core servers that are part of the official repo. And some of them are special ones, like the ones we already talked about. So let me just pull them up. So for example, you mentioned memory, you mentioned sequential thinking. And I really, really encourage people to look at these, what I call special servers. Like they're not normal servers, in the sense that they don't just wrap some API to make it easier to interact with than the raw API. And so I'll highlight the memory one first, just because I think there are a few memory startups, but actually you don't need them if you just use this one. It's also like 200 lines of code. It's super simple. And obviously then if you need to scale it up, you should probably do some more battle tested thing. But if you're interested, if you're just introducing memory, I think this is a really good implementation. I don't know if there are special stories that you want to highlight with some of these.Justin/David [00:56:00]: I think, no, I don't think there are special stories. I think a lot of these, not all of them, but a lot of them originated from that hackathon that I mentioned before, where folks got excited about the idea of MCP. People internally inside Anthropic who wanted to have memory or wanted to play around with the idea could quickly prototype something using MCP in a way that wasn't possible before.
Someone who's not, you know... you don't have to become the end-to-end expert. You don't have to have access to this private, you know, proprietary code base. You can just now extend Claude with this memory capability. So that's how a lot of these came about. And then also just thinking about, you know, what is the breadth of functionality that we want to demonstrate at launch?swyx [00:56:47]: Totally. And I think that is partially why your launch was successful, because you launched with a sufficiently spanning set of examples, and then people just copy, paste, and expand from there. I would also highlight

The Uptime Wind Energy Podcast
GE Vernova Customer Center, Sophia Offshore Wind Project


Play Episode Listen Later Mar 31, 2025 4:03


This week, SSE appoints Martin Pibsworth as the next CEO, GE Vernova inaugurates a new customer center in Florida, RWE advances its Sophia Offshore Wind Project, and Nantucket challenges three offshore wind projects along the Massachusetts coast. Sign up now for Uptime Tech News, our weekly email update on all things wind technology. This episode is sponsored by Weather Guard Lightning Tech. Learn more about Weather Guard's StrikeTape Wind Turbine LPS retrofit. Follow the show on Facebook, YouTube, Twitter, Linkedin and visit Weather Guard on the web. And subscribe to Rosemary Barnes' YouTube channel here. Have a question we can answer on the show? Email us! Welcome to Uptime Newsflash, industry news lightning fast. Newsflash is brought to you by IntelStor. For market intelligence that generates revenue, visit www.intelstor.com. Allen Hall: Starting off the week, British utility company SSE has named Martin Pibsworth as its chief executive designate. Pibsworth joined SSE in 1998 and currently serves as Chief Commercial Officer. He will take over from Alistair Phillips-Davies, who has been CEO since 2013 and will hand over the reins following the annual general meeting on July 17th, before leaving the company in November. The new CEO will lead SSE's renewables push, helping the UK deliver on its decarbonization goals. During Phillips-Davies' tenure, SSE made a strategic shift toward networks and renewables, with shares gaining about 4% during his leadership. Last year, SSE announced plans to invest at least 22 billion pounds in grid infrastructure over five years. Over in the United States, GE Vernova has opened a new customer experience center at its Pensacola facility in Florida, marked by a ribbon-cutting event hosted by CEO Scott Strazik. The center includes multiple conference rooms, collaboration areas, and direct access to production space.
The investments are part of GE Vernova's broader plan, announced in January, to invest nearly $600 million in its US factories and facilities over the next two years. The Pensacola factory has already produced enough turbines to supply over 1.2 gigawatts of the 2.4 gigawatts ordered for the SunZia Wind Farm in New Mexico. German energy group RWE has installed the first turbines at its 1.4 gigawatt Sophia Offshore Wind Project in the UK. Located on Dogger Bank, 195 kilometers off the northeast coast of Britain, Sophia is set to become one of the world's largest single offshore wind farms. The project will consist of 100 Siemens Gamesa turbines featuring 150 recyclable blades. The wind park is scheduled to be fully operational in the second half of 2026. RWE's Chief Operating Officer for offshore wind commented that Sophia will make a significant contribution to the UK's Clean Power 2030 targets. And over in Massachusetts, the town of Nantucket and a Nantucket-based activist group are challenging three offshore wind projects off the Massachusetts coast. The town recently sued the US Department of the Interior and the Bureau of Ocean Energy Management, requesting that the government set aside its approval of South Coast Wind and restart the environmental review. Meanwhile, the group ACK for Whales is asking the Environmental Protection Agency to rescind permits granted to Vineyard Wind and New England Wind. These challenges come amid the Trump administration's opposition to offshore wind. Industry analyst Timothy Fox says Vineyard Wind faces less risk from these challenges since it's already under construction, while projects in planning stages are at higher risk. South Coast Wind, which received final federal approval on the last business day of the Biden administration, could be delayed by up to four years. Vineyard Wind is the furthest along among these projects, with more than half of its 62 turbine towers already installed.
Massachusetts Energy Secretary Rebecca Tepper has reiterated the state's support for offshore wind, emphasizing the need for energy independence...

Anti-Neocon Report
ADL still trying to pardon Leo Frank


Play Episode Listen Later Mar 30, 2025 55:04


Many of you know that a B'nai B'rith organization gave birth to the ADL while defending its Atlanta chapter president Leo Frank. Frank raped and murdered a 13 year old girl who he was also employing along with many other teens, against child labor laws. Leo Frank ran a pencil factory sweatshop and often flirted with his illegal underage employees. The ADL was formed to defend him when he murdered and raped Mary Phagan. The details were disgusting. Her underwear was ripped and bloody and she was strangled to death with a wire. Her head had also been pummeled with a pipe. She went to get her paycheck of a meager $1.20 and never returned home. She was raped and murdered and then her body was dragged to the basement. Police found strands of her hair and blood on the floor above right across from Frank's office. Frank nervously revealed the victims name in front of police before they had given him any such details. The ADL was going to get him released based purely on the fact that He was Jewish and a high profile crime made Jews look bad. Arguably a Jewish organization trying to get a child murderer off the hook, makes Jews look worse. They would like one to believe that he was innocent with fake news history and will tell you so on Wikipedia which has Israelis paid to edit it. Leo admitted on the witness stand to the jury that he was “unconsciously” at the scene of the crime when the murder occurred. What we don't know, is if he raped her before or after killing her. The grand jury voted 21 – 0 for indicting him. Four of those jurors were Jewish. That shouldn't matter, but it does because later the ADL would try to argue that the jury wrongly convicted him because of antisemitism rather than because all the evidence showed that he did it in everyone's eyes. He was convicted. After the Judge, Leonard Roan, rejected all the appeals, he ordered Leo to be hanged on his birthday April 17, 1913. 
However Frank who was unanimously elected president of the B'nai Brith Chapter again even after being convicted of rape and murder had one last method to weasel out. He with Jewish pressure groups, appealed to the Governor. The lame-duck governor, John M. Slaton, in a very Clinton-esk move, commuted Leo's sentence his last week in office. He changed it from the death penalty to life in prison.Frank was knifed in prison by an inmate who took justice into their own hands. William Creen used a butcher knife and cut Leo's throat severely injuring him. On August 16th a mob broke into the prison captured Leo Frank and took him 2 miles away and hanged him. Although they took photographs no one in town would identify them. Of course the ADL twisted the story to say that these men were motivated by antisemitism and not that they hated him for raping and murdering a child. To see Southern Justice click hereThe ADL would fight to have him given a posthumous pardon which he got in 1986. Fred Grimm of the Miami Herald said in response to the pardon, “A salve for one of the South's most hateful, festering memories, was finally applied” showing his own prejudice towards the South rather than admitting a well known exploiter of child labor, who raped and killed a young girl and was unanimously convicted for the crime and sentenced to death was killed even after weaseling a pardon by an outgoing governor. Fred Grimm is constantly chasing down and doing stories about “Neo-Confederates” and “Neo-Nazis” as if either one are some huge bane and influence in modern society. Ironically it is groups like Antifa who act like ISIS tearing down American Statues and assaulting people. Despite having entire cities burned civilian homes and all by Lincoln's terrorists, not once in 150 years has a Southerner attacked a Union monument. Yelling racism at everything is fun though because it exercises safe moral indignation. 
That the US recently invaded Libya and have caused a country to be run by Al Qaeda terrorists who have revived the institution of slavery, selling humans for $400 in the market, doesn't seem to bother these same people so much as statues of Confederate generals. Apparently the Union military generals like Custer who rode west and committed genocide on Native Americans immediately following the Civil War, or enslaving the Chinese to build railroads, doesn't count as racism either.The ADL itself was created with Jewish mafia money. With connections to Meyer Lansky, Moe Dalitz, Bugsy Siegal, and illegal arms trafficker Hank Greenspun. The ADL gave Jewish gangster Moe Dalitz the Torch of Liberty Award. Dalitz was partnered with Galvastan's Sam Marceo and his brother Rosario of international narcotic trafficking fame. Dalitz and Sam began with a bootlegging gig. And it was the Maceo brothers who with Dalitz financed the Desert Inn Casio (where Frank Sinatra got his first Vegas gig). Interesting note, Sam's sister Olivia married Joseph Fertitta. You probably know the famous former owners of the UFC Frank III and Lorenzo Fertitta. They're all “family”. Maceo died only a year after purchasing the casino and it quickly went into the Fertitta side of the family. Dalitz not only did business with Maceo, he ran with the Mayfield Road gang in Ohio who had a branch dubbed the Collinwood Crew nicknamed the Young Turks. This is a very fitting name considering that the ADL denies the Armenian genocide. They even fired a New England Director Andrew H. Tarsy because he broke rank and called it a genocide. See killing 1,500,000 people isn't genocide because nothing is allowed to compete with the Holocaust victimhood.Moe Dalitz at Desert InnDalitz was an early business partner with Abe Berstien of the murderous Purple Gang. They used to murder motorists for sport. That didn't bother the ADL. In 1985 they gave Moe an award. 
Moe would become the Mob Boss of Cleveland, even tough most of his operations would move and center on Vegas. His businesses however were all over the United States. Dalitz was not only a close confidant of Meyer Lansky, the two co-owned the Frolic Club in Miami. (p.6)The Desert Inn casino also took investments from convicted illegal arms smuggler Hank Greenspun, who was not only invested but became the publicist as well. He owned the Las Vegas Sun and pulled a money laundering scheme with advertising that was similar to what Boris Berezovsky repeated in Russia. Prior to that, he had been the publicist for another Mafia Casino, the Flamingo, which was run by Lanksy's childhood friend and murderer Bugsy Siegal. Greenspun's wife was given top honors by the ADL. Her husband attempted to smuggle 42 Pratt and Whitney R2800 LOW airplane engines to Palestine when the Haganah terrorist group was creating the state of Israel through ethnic cleansing.After jury tampering, with the sole Jewish Juror meeting with the defense, Greenspun and two of his cohorts William Sosnow, and Samuel Lewis were acquitted, but his other partners Adolph Schwimmer, Leon Gardner, Renoyld Selk, and Abraham Levin, were convicted.But Greenspun would be found guilty of smuggling the machine guns that would go with the planes as well as artillery and ammo. He stole 30 and 50 cal machine guns from Hawaii and shipped them to the Haganah in Palestine through Mexico. When he was indicted Greenspun tried to bribe his way out. He offered $25,000 to Seth Solomon Pope “or anyone else designated by Pope” to “quash” a second Neutrality Act indictment against him. Solomon worked in Hawaii at the War Assets Administration, in charge of decommissioning and selling off WWII surplus. He was most likely the original contact for the smuggling. The man was investigated three time for fraudulent sales. They also stole over 500 machine gun barrels. Reportedly Hank took an addition 10% Kickback from arms sales he made. 
A grand jury in Los Angeles indicted Hank and six others for violating the Neutrality Act and the Export Control Law, Title 50 United States Code section 701 and Title 22 United States Code section 452. However, he got only a $10,000 fine and no jail time. Greenspun was paid through the SSE. The SSE was a front for the AJDC's Lishka, which financed communist and Bricha illegal immigration. The Jewish Agency, the government-in-waiting that organized the terrorist groups that formed Israel, facilitated the cash flow to gun runners like Hank. In "Concealed in the Open: Recipients of International Clandestine Jewish Aid in Early 1950s Hungary," Zachary Paul Levine of the Yeshiva University Museum writes:

"The JDC-Israeli collaboration that formed around clandestine emigration to Israel and welfare to migrants filled the vacuum with the creation of two institutions. The first was created in 1952 by the Israeli government's Liaison Bureau of the Israel Ministry of Foreign Affairs, or Lishka by its Hebrew acronym, which collected information and administered individual aid. The second was created in Switzerland in 1953. Known as the Society for Mutual Aid (SSE by its French acronym), this organization directed AJDC funds to the Lishka and represented Jewish aid providers' interests to communist governments" … "However, as an American organization at the height of the McCarthy 'Red Scare,' AJDC administrators could hardly justify the appearance of sending cash or material into a state with which the U.S. was technically engaged in 'economic warfare.' In March 1953, the AJDC and Lishka together established the SSE, a 'paper organization' that 'covered' the AJDC-Israeli partnership, and provided a means for regularized AJDC funding for Lishka from the Joint's Relief-in-Transit budget that funded activities that might have contravened U.S. law (Beizer 2009: 117). The SSE's Swiss chairman, Erwin Haymann, had years of experience channeling money from the U.S.
for Bricha and other clandestine activities. Funds traveled through the SSE and on to Lishka agents who received U.S. dollars or another western currency and exchanged them into Hungarian forints on the black market in Vienna. Subsequently, these forints traveled via diplomatic pouch or in the suitcase of an apparent traveler to the legation in Budapest, whose staff distributed the cash around the country."

We learned from declassified FBI documents that Erwin Haymann, the same man aiding communists on behalf of the JA, is the one who made three transfers of $1.3 million to Greenspun. Greenspun would later become the western director of bonds for Israel. Haymann sent the payments to Banco del Ahorro, Mexico, by cable. Interesting, because $1.3 million is exactly how much Moe Dalitz sank into the Desert Inn Casino, which Greenspun was a publicist for and invested in. What a coincidence.

If you are into Kennedy research, here is a cookie for you: Hungarian Jew Tibor Rosenbaum is the bridge between Meyer Lansky, Erwin Haymann, and the heavy Florida-Cuba crime syndicate. … But I will leave that tangent alone. Greenspun was known for having blackmail on political candidates; Howard Hunt and G. Gordon Liddy even plotted a raid on the Vegas Sun vault in order to gain access to blackmail that Hank had on Howard Hughes. Hughes, by the way, bought mafia properties like the Desert Inn Casino using millions in cash. They credit him with cleaning Vegas up from the mob; it was more like the mob took him to the cleaners. Dalitz, ironically, started out with a cash-only dry cleaning business.

Kennedy, whose father was involved with the Outfit and the East Coast mob, had a love affair with his friend Frank Sinatra's ex-girlfriend Judith Exner while she was also involved with Chicago mob boss Sam Giancana. Sinatra introduced her to JFK. Kennedy gave Greenspun a pardon his first year in office. I wonder why.
LBJ likewise was sleeping with Mathilde Krim, who was also part of the Swiss connection that helped Irgun terrorists. Johnson did all this while she was married to his campaign advisor Arthur Krim, a willing cuckold. It makes you rethink Monica Lewinsky, doesn't it? Well, Clinton did give Jewish billionaire Marc Rich a pardon, after Rich donated $100,000 to the ADL. Rich was yet another crook in the Swiss connection.

These are the founders and award recipients of the ADL. The ADL was given de facto powers of an intelligence agency in the United States, and it gathers intel on whomever it pleases. It is anything but an Anti-Defamation League; they defame people themselves. Under the cover of fighting antisemitism, the ADL simply uses this cry as a club to chase down and censor anyone critical of Zionism or the Israeli state. If you point out that Israeli snipers are shooting children in Palestine from across the border, then the ADL can get you removed. Vimeo stole $5,000 in profits from me and erased six years' worth of my work because of my criticism of Israel. When the ADL partnered with YouTube in December of 2008, my channel was gone the first day, and over a thousand videos were erased. No justification was needed, simply the accusation of antisemitism. When I made a complaint in my appeal, I learned that the ADL would oversee the case. Of course I never had my channel restored, nor was I even given an explanation from YouTube. Another wing of the ADL is the SPLC, and they too have been granted censorship powers across social media. The ADL used the SPLC as both an attack dog and a buffer to separate itself from the ramifications of its constant Chicken Little censorship.
In the rare case of actual antisemitic groups, online or otherwise, the ADL has been busted reacting to its own creations, as the "Nazis" they screech about turn out to be their own provocateurs. Birthed to defend a murdering child rapist, and financed by mass-murdering terrorists and organized crime, narcotics-peddling, gun-running psychopaths formed the pro-Zionist organizational bully called the ADL. They have been caught spying through American police departments, spying on American citizens, and even coaching American police on what they should be on the lookout for and on how "hate crime" means anything Israel doesn't like. And this is their great online weapon. The Zog media already refuse to report on what Israel is doing to Palestine, the Israeli role in orchestrating the Iraq War, and the proxy war on Syria. People have been giving out the information online. Naturally the ADL has been censoring such journalists, all while screaming antisemitism. AIPAC bribes Congress and the ADL censors the media. It is a one-two punch to protect criminal Zionist interests. And now you know its criminal origins. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.ryandawson.org/subscribe

Les Cast Codeurs Podcast
LCC 323 - Accessibility of encrypted messaging apps


Mar 17, 2025 · 70:33


In this episode, Emmanuel and Arnaud discuss the latest dev news, focusing on Java, artificial intelligence, and the new features in JDK 24 and 25. They also cover topics such as Quarkus, website accessibility, and the impact of AI on web traffic. The conversation looks at how developers approach artificial intelligence and software development, notably the challenges and benefits of using AI. Finally, they share their thoughts on the importance of conferences for professional development. Recorded on March 14, 2025. Download the episode LesCastCodeurs-Episode-323.mp3 or watch the video on YouTube. News Languages Java Metal https://www.youtube.com/watch?v=yup8gIXxWDU Maybe we've already shared it? Opinion piece: Java coming for AI https://thenewstack.io/2025-is-the-last-year-of-python-dominance-in-ai-java-comin/ 2025 could be the last year Python dominates AI. Java is becoming a serious contender in the field. In 2024, Python was still in the lead, Java remained strong in the enterprise, and Rust was gaining popularity. Java is increasingly used for AI, challenging Python's supremacy. The article comes from Java folks; Python's dominance is cultural more than technical (well, for the ML libraries it's still technical). Projects Panama and Babylon are changing the game. Java ML is popular. The Java version almanac https://javaalmanac.io/ shows the APIs and the diffs between versions, plus the release notes and the Java spec. News on JDK 24 and the upcoming JDK 25 https://www.infoq.com/news/2025/02/java-24-so-far/?utm_campaign=infoq_content&utm_source=infoq&utm_medium=feed&utm_term=global JDK 24 has reached its first release-candidate phase and will officially ship on March 18, 2025.
24 new features (JEPs) across 5 categories: Core Java Library (7), Java Language Specification (4), Security Library (4), HotSpot (8), and Java Tools (1). Project Amber: JEP 495 "Simple Source Files and Instance Main Methods" in its fourth preview, aiming to simplify writing first Java programs for beginners. Project Loom: JEP 487 "Scoped Values" in its fourth preview, enabling sharing of immutable data between threads, particularly useful with virtual threads. Project Panama: JEP 489 "Vector API" in its ninth incubation; it will keep incubating until the required Project Valhalla features are available. Project Leyden: JEP 483 "Ahead-of-Time Class Loading & Linking" to improve startup time by making an application's classes instantly available when the JVM starts. Quantum-resistant security: two JEPs (496 and 497) introducing cryptographic algorithms resistant to quantum computers, based on module lattices. Hardened security: JEP 486 proposes permanently disabling the Security Manager, while JEP 478 introduces a key-derivation API. HotSpot optimizations: JEP 450 "Compact Object Headers" (experimental) to shrink object headers from 96-128 bits to 64 bits on 64-bit architectures (don't use it in production!). GC improvements: JEP 404 "Generational Shenandoah" (experimental) introduces a generational mode for the Shenandoah garbage collector while keeping the non-generational one. Port evolution: 32-bit Windows x86 is on its way out. JEP 502 in JDK 25: introduces "Stable Values" (preview), formerly "Computed Constants", offering the benefits of final fields with more flexibility in initialization. More on JDK 25 — release date: JDK 25 is planned for September 2025 and will be the next LTS (Long-Term Support) release after JDK 21.
On-ramp finalization: Gavin Bierman has announced his intention to finalize the "Simple Source Files" feature in JDK 25, after four successive previews. CDS Object Streaming: JEP Draft 8326035 proposes adding an object-archiving mechanism for Class-Data Sharing (CDS) in ZGC, with a unified archiving format and loader. HTTP/3 supported in HttpClient. An article on Go's approach to preventing file-path traversal attacks https://go.dev/blog/osroot Libraries Quarkus 3.19 is out https://quarkus.io/blog/quarkus-3-19-1-released/ UBI 9 by default for containers. On top of AppCDS, support for the AOT cache (JEP 483) to start up even faster. Proof of possession in OAuth 2 tokens. Mario Fusco on agentic AI patterns in Quarkus https://quarkus.io/blog/agentic-ai-with-quarkus/ and https://quarkus.io/blog/agentic-ai-with-quarkus-p2/ The first article covers workflow patterns — chaining, parallelizing, or routing — with runnable code examples; the second covers agents proper (the LLM decides the workflow). Agents have toolboxes the LLM can decide to invoke. The code goes into the details and makes the interactions visible; tracing makes things visual. Web The European Accessibility Act (EAA) https://martijnhols.nl/blog/the-european-accessibility-act-for-websites-and-apps European accessibility law (EAA) adopted in 2019. Aims to make websites and apps accessible to people with disabilities. Follow the WCAG 2.1 AA guidelines (clarity, usability, compatibility). Companies concerned: banks, e-commerce, transport, etc. Compliance deadline: June 28, 2025. 2025 applies to new developments; 2027 to existing applications. So how do I find out whether the Cast Codeurs website is compliant?
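The JEP 483 ahead-of-time cache mentioned in these notes (and which the Quarkus AOT support builds on) is driven entirely by JVM flags, in three steps; a minimal sketch, assuming a hypothetical app.jar with main class App:

```shell
# 1. Training run: record which classes the application loads and links
java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf -cp app.jar App

# 2. Assembly: turn that recording into an AOT cache file
java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf -XX:AOTCache=app.aot -cp app.jar

# 3. Production run: start with classes pre-loaded and pre-linked
java -XX:AOTCache=app.aot -cp app.jar App
```

The startup gain depends on how representative the training run is of the production workload.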
Popover API https://web.dev/blog/popover-baseline?hl=en The Popover API is now available in all major browsers. Added to Baseline on January 27, 2025. Lets you create native popovers in HTML, without complex JavaScript. Example: a trigger button that opens the popover content. Initial problem (2024): a bug on iOS prevented popovers from closing. Integrating a React front end into a Spring Boot app https://bootify.io/frontend/react-spring-boot-integration.html Step by step: how to configure your build (https://bootify.io/frontend/webpack-spring-boot.html) and your app (controllers, etc.) to integrate a React front end. Data and Artificial Intelligence Website traffic coming from AI https://ahrefs.com/blog/ai-traffic-study/ AIEO, after SEO, is going to become big business, since models tend to have their favorite technologies and references. 63% of sites have at least one referral coming from an AI. 50% ChatGPT, then Perplexity, then Gemini — hey, what about Le Chat? 0.17% of site traffic comes from AI; then again, AI summarizes rather than points, so that makes sense. Granite 3.2 is out https://www.infoq.com/news/2025/03/ibm-granite-3-2/ IBM releases Granite 3.2, an advanced AI model. Better reasoning and new multimodal capabilities. Granite Vision 3.2 excels at image and document understanding. Granite Guardian 3.2 detects risks in AI responses. Smaller, more efficient models for various uses. Improvements in mathematical reasoning and time-series forecasting. The interesting things about Granite are its small size and its "truly" open-source nature. Prompt Engineering - detailed article https://www.infoq.com/articles/prompt-engineering/ Prompt engineering is the art of phrasing instructions well to guide an AI. Accessible to everyone, it complements programming rather than replacing it. Key techniques: few-shot learning, chain-of-thought, tree-of-thought.
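The declarative pattern behind the Popover API needs only two attributes, popovertarget on the trigger and popover on the target; a minimal sketch (the demo id is illustrative):

```html
<!-- The button toggles the popover; no JavaScript required -->
<button popovertarget="demo">Ouvrir</button>
<div id="demo" popover>Contenu du popover</div>
```

The browser handles top-layer placement, light-dismiss, and focus for you.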
Advantages: flexibility, speed, better interaction with the AI. Limits: lack of precision and dependence on existing models. Future: a key tool for improving AI and software development. QCon San Francisco - AI agents - conference https://www.infoq.com/presentations/ai-agents-infrastructure/ Topic: infrastructure for AI agents. Technologies: RAG and vector databases. Role of AI agents: automate tasks, anticipate needs, supervise. Background: Shruti Bhat, from Oracle to Rockset (acquired by OpenAI). Goal: moving from classic apps to intelligent AI agents. Challenges: improving real-time search, indexing, and retrieval. For us: developers are moving to more strategic roles and need to adapt to the new technologies. Official Java SDK for MCP & Spring AI https://spring.io/blog/2025/02/14/mcp-java-sdk-released-2 Now an official implementation alongside the Python, TypeScript, and Kotlin SDKs (https://modelcontextprotocol.io/). Support for stdio-based transport, SSE (over HTTP), and integration with Spring WebFlux and WebMVC. Integration with Spring AI, simplified configuration for Spring Boot applications (different starters available). Code with Claude https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview Claude Code is in beta; no more waiting list. An agentic coding tool integrated into the terminal, able to understand your codebase and speed up development through natural-language commands. Its features let you understand code, refactor it, test, debug, and more. Gemini Code Assist is free https://blog.google/technology/developers/gemini-code-assist-free/ For personal use. No account needed. No limits. 128k-token input. Guillaume starts a series of articles on RAG (advanced level).
The first is on Sentence Window Retrieval https://glaforge.dev/posts/2025/02/25/advanced-rag-sentence-window-retrieval/ Guillaume presents a technique that improves Retrieval Augmented Generation search results. The idea is to compute embedding vectors over sentences, for example, but to return a broader context. The benefit is that the vector-embedding similarity scores stay high (no dilution of meaning) while no information is lost about the context in which the sentence appears. GitHub Copilot Edits in GA, GitHub Copilot agent mode in VS Code Insiders https://github.blog/news-insights/product-news/github-copilot-the-agent-awakens/ Copilot Edits lets you modify several files at once from the chat, which simplifies refactorings. Copilot agent mode adds an autonomous mode (agentic AI) that goes looking on its own for the changes to make in your codebase. "What could possibly go wrong?" Methodologies An interesting opinion piece on AI and code assistants by Addy Osmani https://addyo.substack.com/p/the-70-problem-hard-truths-about An article from last year by Addy Osmani https://addyo.substack.com/p/10-lessons-from-12-years-at-google Several kinds of AI help: tools for bootstrapping — from a Figma file or an image to a non-functional prototype in a few days — and tools for iterating on code, so more long-term. We'll do an interview on AI code assistants. The cost of AI speed: senior devs refactor and modify the proposed code to make it their own, change the architecture, etc., based on their knowledge. Applying what you already know, only faster, is a different pattern from learning with AI. The article explores approach patterns and looks ahead to the future. Law, society, and organization Elon Musk tries to buy OpenAI https://www.bbc.com/news/articles/cpdx75zgg88o The answer: "No thanks, but we can buy Twitter for 9.74 billion if you want." With the "narcotrafic" law passed in the Senate, Signal would no longer be available in France https://www.clubic.com/actualite-555135-avec-la-loi-narcotrafic-signal-quittera-la-france.html Besides legalizing spyware that exploits software vulnerabilities, the law requires messaging apps to let the state access conversations — so a backdoor with a state key, for example. A backdoor like the one in American landline phones put in place years ago and now exploited by Chinese espionage. Signal's position is firm: either it's secure, or they leave the country. Olvid, WhatsApp, and iMessage are also targeted, for example. The law defines its target as organized crime: the classic cases, but also the Yellow Vests, opponents of the Bure project, activists helping exiled people in Briançon, and actions against the cement maker Lafarge in Bouc-Bel-Air and Évreux. So broader than people think. Conferences The list of conferences, from the Developers Conferences Agenda/List by Aurélie Vache and contributors: March 14, 2025: Rust In Paris 2025 - Paris (France) March 19-21, 2025: React Paris - Paris (France) March 20, 2025: PGDay Paris - Paris (France) March 20-21, 2025: Agile Niort - Niort (France) March 25, 2025: ParisTestConf - Paris (France) March 26-29, 2025: JChateau Unconference 2025 - Cour-Cheverny (France) March 27-28, 2025: SymfonyLive Paris 2025 - Paris (France) March 28, 2025: DataDays - Lille (France) March 28-29, 2025: Agile Games France 2025 - Lille (France) March 28-30, 2025: Shift - Nantes (France) April 3, 2025: DotJS - Paris (France) April 3, 2025: SoCraTes Rennes 2025 - Rennes (France) April 4, 2025: Flutter Connection 2025 - Paris (France) April 4, 2025: aMP Orléans 04-04-2025 - Orléans (France) April 10-11, 2025: Android Makers - Montrouge (France) April 10-12, 2025: Devoxx Greece - Athens (Greece) April 11-12, 2025: Faiseuses du Web 4 - Dinan (France) April 14, 2025: Lyon Craft - Lyon (France) April 16-18, 2025: Devoxx France - Paris (France) April 23-25, 2025: MODERN ENDPOINT MANAGEMENT EMEA SUMMIT 2025 - Paris (France) April 24, 2025: IA Data Day - Strasbourg 2025 - Strasbourg (France) April 29-30, 2025: MixIT - Lyon (France) May 6-7, 2025: GOSIM AI Paris - Paris (France) May 7-9, 2025: Devoxx UK - London (UK) May 15, 2025: Cloud Toulouse - Toulouse (France) May 16, 2025: AFUP Day 2025 Lille - Lille (France) May 16, 2025: AFUP Day 2025 Lyon - Lyon (France) May 16, 2025: AFUP Day 2025 Poitiers - Poitiers (France) May 22-23, 2025: Flupa UX Days 2025 - Paris (France) May 24, 2025: Polycloud - Montpellier (France) May 24, 2025: NG Baguette Conf 2025 - Nantes (France) June 3, 2025: TechReady - Nantes (France) June 5-6, 2025: AlpesCraft - Grenoble (France) June 5-6, 2025: Devquest 2025 - Niort (France) June 10-11, 2025: Modern Workplace Conference Paris 2025 - Paris (France) June 11-13, 2025: Devoxx Poland - Krakow (Poland) June 12-13, 2025: Agile Tour Toulouse - Toulouse (France) June 12-13, 2025: DevLille - Lille (France) June 13, 2025: Tech F'Est 2025 - Nancy (France) June 17, 2025: Mobilis In Mobile - Nantes (France) June 19-21, 2025: Drupal Barcamp Perpignan 2025 - Perpignan (France) June 24, 2025: WAX 2025 - Aix-en-Provence (France) June 25-26, 2025: Agi'Lille 2025 - Lille (France) June 25-27, 2025: BreizhCamp 2025 - Rennes (France) June 26-27, 2025: Sunny Tech - Montpellier (France) July 1-4, 2025: Open edX Conference - 2025 - Palaiseau (France) July 7-9, 2025: Riviera DEV 2025 - Sophia Antipolis (France) September 18-19, 2025: API Platform Conference - Lille (France) & Online September 23, 2025: OWASP AppSec France 2025 - Paris (France) September 25-26, 2025: Paris Web 2025 - Paris (France) October 2-3, 2025: Volcamp - Clermont-Ferrand (France) October 6-10, 2025: Devoxx Belgium - Antwerp (Belgium) October 9-10, 2025: Forum PHP 2025 - Marne-la-Vallée (France) October 9-10, 2025: EuroRust 2025 - Paris (France) October 16-17, 2025: DevFest Nantes - Nantes (France) November 4-7, 2025: NewCrafts 2025 - Paris (France) November 6, 2025: dotAI 2025 - Paris (France) November 7, 2025: BDX I/O - Bordeaux (France) November 12-14, 2025: Devoxx Morocco - Marrakech (Morocco) November 21, 2025: DevFest Paris 2025 - Paris (France) November 28, 2025: DevFest Lyon - Lyon (France) January 28-31, 2026: SnowCamp 2026 - Grenoble (France) April 23-25, 2026: Devoxx Greece - Athens (Greece) June 17, 2026: Devoxx Poland - Krakow (Poland) Contact us To react to this episode, come discuss on the Google group https://groups.google.com/group/lescastcodeurs Contact us via X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com Record a crowdcast or a crowdquestion Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs All episodes and all the info at https://lescastcodeurs.com/
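The Sentence Window Retrieval idea covered in this episode (score a single sentence, but hand the LLM that sentence plus its neighbors) can be sketched in plain Java. The token-overlap similarity below is a toy stand-in for real vector embeddings, and all class and method names are illustrative:

```java
import java.util.*;

public class SentenceWindowRetrieval {

    // Toy stand-in for embedding similarity: Jaccard overlap of lowercase tokens.
    static double similarity(String a, String b) {
        Set<String> ta = new HashSet<>(Arrays.asList(a.toLowerCase().split("\\W+")));
        Set<String> tb = new HashSet<>(Arrays.asList(b.toLowerCase().split("\\W+")));
        Set<String> inter = new HashSet<>(ta); inter.retainAll(tb);
        Set<String> union = new HashSet<>(ta); union.addAll(tb);
        return union.isEmpty() ? 0.0 : (double) inter.size() / union.size();
    }

    // Score each sentence individually (high-precision match, no dilution of
    // meaning), then return the best sentence plus `window` neighbors on each
    // side so the surrounding context is not lost.
    static String retrieve(List<String> sentences, String query, int window) {
        int best = 0;
        double bestScore = -1.0;
        for (int i = 0; i < sentences.size(); i++) {
            double s = similarity(sentences.get(i), query);
            if (s > bestScore) { bestScore = s; best = i; }
        }
        int lo = Math.max(0, best - window);
        int hi = Math.min(sentences.size() - 1, best + window);
        return String.join(" ", sentences.subList(lo, hi + 1));
    }

    public static void main(String[] args) {
        List<String> doc = List.of(
            "The cathedral was finished in 1345.",
            "Its spire collapsed during the 2019 fire.",
            "Restoration work began the following year.");
        // Matches the middle sentence, but returns all three for context.
        System.out.println(retrieve(doc, "When did the spire collapse?", 1));
    }
}
```

A real implementation would swap `similarity` for cosine similarity over embeddings and index each sentence with a pointer to its window, but the score-small/return-large asymmetry is the whole trick.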

Haunted History Chronicles
Poltergeists, Psychokinesis, and the MACRO-PK Project with Eric Dullin


Feb 21, 2025 · 62:29


In the words of physicist John Archibald Wheeler, "In any field, find the strange thing and explore it." That's exactly what my guest today is doing. A mechanical and electrical engineer by trade, Eric Dullin has spent his career exploring the frontiers of science, technology, and human potential. With a PhD in information processing, an MBA, and extensive experience as an entrepreneur and coach, he brings a unique analytical perspective to one of the most elusive and controversial topics in paranormal research — the study of macro-telekinesis. Now the Research Director at the LAPDC in France and a member of multiple parapsychological associations, Eric has dedicated his post-retirement years to systematically investigating the phenomena of physical mediumship, poltergeists, and spontaneous psychokinetic events. Today, we'll be diving deep into his latest project, MACRO-PK, an ambitious international historical database and collaborative research initiative that aims to catalogue and analyse unexplained physical phenomena throughout history. What can these strange events tell us about the nature of reality? How can we apply scientific rigour to what is often dismissed as mere superstition? And what do poltergeists, mystical levitations, and PK agents have in common? As Arthur Conan Doyle once wrote through his famous detective, Sherlock Holmes: "It is a capital error to theorise before having data. Insensibly, we begin to distort the facts to fit the theories, instead of adapting the theories to the facts." My Special Guest Is Eric Dullin. Eric Dullin is a mechanical and electrical engineer with a PhD in information processing, an MBA, NLP training, and certification as a coach. He was a software entrepreneur, coach, and advisor to other software entrepreneurs, as well as co-founder, with his wife, of an online training centre for children and teachers.
He has been involved in experimentation with paranormal phenomena (clairvoyance, OBEs, telekinesis) and has experienced a poltergeist phenomenon. Since his retirement he has focused on the scientific study of paranormal phenomena, and more specifically on macro-telekinesis/psychokinesis. He is the research director at the LAPDC in France (macro-telekinesis experiments) and a member of parapsychological associations such as the IMI in France, the SPR and ASSAP in the UK, and the SSE and PA in the USA. He has published scientific articles and given conferences on the subject of poltergeists and macro-telekinesis. He has recently launched the macropk.org website, an international historical database on poltergeist, physical mediumship/PK agent, and mystical levitation phenomena, and a worldwide collaborative project on macro-telekinesis phenomena. https://www.macropk.org/ In this episode, you will be able to: 1. Discover how this database and research initiative is cataloguing poltergeist activity, physical mediumship, and unexplained macro-psychokinetic events. If you value this podcast and want to enjoy more episodes please come and find us on https://www.patreon.com/Haunted_History_Chronicles to support the podcast and gain a wealth of additional exclusive podcasts, writing and other content. Links to all Haunted History Chronicles Social Media Pages, Published Materials and more: https://linktr.ee/hauntedhistorychronicles?fbclid=IwAR15rJF2m9nJ0HTXm27HZ3QQ2Llz46E0UpdWv-zePVn9Oj9Q8rdYaZsR74I NEW Podcast Shop: https://www.teepublic.com/user/haunted-history-chronicles Buy Me A Coffee https://ko-fi.com/hauntedhistorychronicles
Guest Links Website: https://www.macropk.org/

NextMedia Podcast
Personal year in review: continuing the celebration and setting goals for 2025


Jan 7, 2025 · 33:34


In this new episode of NextMediaPodcast, Elnara Petrova shares how to reflect on the past year, learn to value your achievements, and plan for the future. She also covers: why it's important to praise yourself and record your progress; life hacks for a year-end review, from gratitude lists to analyzing failures; inspiring stories from personal experience; and plans for 2025 — relaunching the podcasting course, strategy sessions, and new projects. Elnara did her year-end review in a group online session together with Yana Lisovskaya. Yana is a coach and mentor with extensive experience in corporate business. We studied together at SSE; I trust Yana and gift sessions with her to friends. Yana has a Telegram channel: t.me/yana_lisovska We are launching a course on creating podcasts from scratch. Until the end of January you can buy the course with feedback for 22,000 rubles, or the course without feedback for 15,000. Details in the description and on the website: nextpodcast.ru. In January I will hold an open webinar on the podcast market and its opportunities; follow the updates in the Telegram channel: t.me/nextmedia Also until the end of January, you — digital experts, agencies, and businesses — can purchase participation in the podcast in an expert-segment format for 40,000 rubles. The package includes 3 integrations, each up to 7 minutes. Guaranteed listens per integration: 1,000 within a month. I am also independently launching a new service, "Strategy Session for Your Personal Brand." There will be open webinars on podcasts and on personal strategy — in January and February, so plan ahead! All announcements are in my personal channel: https://t.me/elnaragram

The Past Lives Podcast
Proof of Reincarnation: Classic Episode


Dec 19, 2024 · 60:27


Dr. Semkiw is a Board Certified Occupational Medicine physician who practices at a major medical center in San Francisco, where he served as the Assistant Chief of Occupational Medicine. Previously, he served as Medical Director for Unocal 76, a Fortune 500 oil company. Walter embarked on reincarnation research in 1995, and he is the author of Return of the Revolutionaries: The Case for Reincarnation and Soul Groups Reunited, which was published in 2003. In this book, a cohort reincarnated from the time of the American Revolution is identified. Former President Bill Clinton wrote, regarding Revolutionaries, "It looks fascinating," and neurosurgeon Norm Shealy, MD, PhD, wrote, "For the survival of humanity, this is the most important book written in 2000 years." Walter is also the author of Born Again, which is available in the US, India, Indonesia, and Serbia (2006 version). In this book, independently researched reincarnation cases with evidence of reincarnation are compiled, with a focus on the work of Ian Stevenson, MD, of the University of Virginia. Cases derived through world-famous trance medium Kevin Ryerson, who has been featured in Shirley MacLaine's books, are also presented. Born Again has received widespread media attention in India, and Walter was featured on CNN in March 2006. An expanded international edition of Born Again (2011), which summarizes key reincarnation cases with evidence of past lives, is available as an e-book as well as in a printed version. Born Again has been commented on by the former President of India, Abdul Kalam, and by Shah Rukh Khan, one of India's greatest film and television stars. Walter has also penned Origin of the Soul and the Purpose of Reincarnation.
Whereas Return of the Revolutionaries and Born Again present cases which demonstrate objective evidence of reincarnation, Origin of the Soul addresses the big picture of why we reincarnate and the nature of the spiritual world. Walter has presented at the Society for Scientific Exploration (SSE), an academic group that pioneering reincarnation researcher Ian Stevenson, MD, cofounded. Walter spent a day with Dr. Stevenson in 2001, and Dr. Stevenson personally sponsored Walter's membership in the SSE. Walter is an advocate of Ian Stevenson's past-lives research. Dr. Semkiw has been a speaker at the first four World Congresses for Regression Therapy, held in the Netherlands, India, Brazil, and Turkey. He has appeared on CNN and in Newsweek, as well as on numerous other television and radio shows, including Coast to Coast. He has been cited on numerous occasions in the Times of India, which has the largest circulation of any English-language newspaper in the world. Walter was selected as one of Who's Who Professionals of the Year for 2016. In sum, Dr. Semkiw is an expert in reincarnation research, particularly reincarnation cases which demonstrate objective evidence of reincarnation. https://reincarnationresearch.com/ https://www.pastliveshypnosis.co.uk/ https://www.patreon.com/ourparanormalafterlife