The central component of any computer system, which executes input/output, arithmetic, and logical operations
Episode 81: We're back! Lots to discuss in this video, including YouTube weirdness, the future of AMD and Intel's CPU platforms, the good old CPU core debate, upcoming GPU rumors and more.

CHAPTERS
00:00 - Intro
03:13 - Our YouTube views are down, this is what the stats say
31:14 - Zen 7 on AM5 and Intel's competing platform
54:13 - How important is platform longevity?
1:07:58 - Six core CPUs are still powerful for gaming
1:17:27 - Will Intel make an Arc B770?
1:26:22 - No RTX Super any time soon
1:29:14 - Updates from our boring lives

SUBSCRIBE TO THE PODCAST
Audio: https://shows.acast.com/the-hardware-unboxed-podcast
Video: https://www.youtube.com/channel/UCqT8Vb3jweH6_tj2SarErfw

SUPPORT US DIRECTLY
Patreon: https://www.patreon.com/hardwareunboxed

LINKS
YouTube: https://www.youtube.com/@Hardwareunboxed/
Twitter: https://twitter.com/HardwareUnboxed
Bluesky: https://bsky.app/profile/hardwareunboxed.bsky.social
An airhacks.fm conversation with Ingo Kegel (@IngoKegel) about: jprofiler Visual Studio Code integration using Kotlin Multiplatform, migrating Java code to Kotlin common code for cross-platform compatibility, transpiling to JavaScript for Node.js runtime, JClassLib bytecode viewer and manipulation library, Visual Studio Code's Language Server Protocol (LSP), profiling unit tests and performance regression testing, Java Flight Recorder (JFR) for production monitoring with custom business events, cost-driven development in cloud environments, serverless architecture with AWS Lambda and S3, performance optimization with parallelism in single-CPU environments, integrating profiling data with LLMs for automated optimization, MCP servers for AI agent integration, Gradle and Maven build system integration, cooperative window switching between JProfiler and VS Code, memory profiling and thread analysis, comparing streams vs for-loops performance, brokk AI's Swing-based LLM development tool, context-aware performance analysis, automated code optimization with AI agents, business event correlation with low-level JVM metrics, cost estimation based on cloud API calls, quarkus for fast startup times in serverless, performance assertions in System Tests, multi-monitor development workflow support Ingo Kegel on twitter: @IngoKegel
Thunderstorms were raging across southern Germany as Elliot Williams was joined by Jenny List for this week's podcast. The deluge outside didn't stop the hacks coming though, and we've got a healthy smorgasbord for you to snack from. There's the cutest-ever data cassette recorder, taking a tiny Olympus dictation machine and re-engineering it with a beautiful case for the Commodore 64, a vastly overcomplex machine for perfectly cracking an egg, the best lightning talk timer Hackaday has ever seen, and a demoscene challenge that eschews a CPU. Then in Quick Hacks we've got a QWERTY slider phone, and a self-rowing canoe that comes straight out of Disney's The Sorcerer's Apprentice sequence. For a long time we've had a Field Guide series covering the tech of infrastructure and other technology in public plain sight, and this week's installment dealt with pivot irrigation, a new subject for Jenny, who grew up on a farm in a wet country. Then both editors are, for once, in agreement over using self-tapping screws to assemble 3D-printed structures. Sit back and enjoy the show!
Dave and Shannon kick off Casual Friday by troubleshooting a recent recording delay that turned out to be an AI video agent (Opus Clip beta) hammering the CPU, noting browser quirks and local processing. They pivot into a broader conversation about the risks of oversharing personal details with AI, the “sycophant” […] The post FridAI – AI Guardrails – Business Brain 681 appeared first on Business Brain - The Entrepreneurs' Podcast.
This is a recap of the top 10 posts on Hacker News on September 03, 2025. This podcast was generated by wondercraft.ai.

(00:30): Claude Code: Now in Beta in Zed
Original post: https://news.ycombinator.com/item?id=45116688&utm_source=wondercraft_ai

(01:54): MIT Study Finds AI Use Reprograms the Brain, Leading to Cognitive Decline
Original post: https://news.ycombinator.com/item?id=45114753&utm_source=wondercraft_ai

(03:18): Where's the shovelware? Why AI coding claims don't add up
Original post: https://news.ycombinator.com/item?id=45120517&utm_source=wondercraft_ai

(04:42): %CPU utilization is a lie
Original post: https://news.ycombinator.com/item?id=45110688&utm_source=wondercraft_ai

(06:06): VibeVoice: A Frontier Open-Source Text-to-Speech Model
Original post: https://news.ycombinator.com/item?id=45114245&utm_source=wondercraft_ai

(07:30): Voyager – An interactive video generation model with realtime 3D reconstruction
Original post: https://news.ycombinator.com/item?id=45114379&utm_source=wondercraft_ai

(08:54): Nuclear: Desktop music player focused on streaming from free sources
Original post: https://news.ycombinator.com/item?id=45117230&utm_source=wondercraft_ai

(10:18): The 16-year odyssey it took to emulate the Pioneer LaserActive
Original post: https://news.ycombinator.com/item?id=45114003&utm_source=wondercraft_ai

(11:42): Evidence that AI is destroying jobs for young people
Original post: https://news.ycombinator.com/item?id=45121342&utm_source=wondercraft_ai

(13:06): Microsoft BASIC for 6502 Microprocessor – Version 1.1
Original post: https://news.ycombinator.com/item?id=45118392&utm_source=wondercraft_ai

This is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai
In first person: my first computer

My first computer was an Amstrad 8082 at the end of the 1980s. At home we were thinking about buying an electronic typewriter, but I managed to convince my father that we'd be better off investing in a PC. I remember that moment perfectly: it was an enormous machine, with a bulky CPU, a roughly 26-inch tube monitor, and a dot-matrix printer that looked like industrial machinery. The whole setup was so large that my father had to buy a giant desk to fit it all. That computer was my gateway into the digital world. From learning the basic commands to writing my first documents and doing small tasks, it marked a before and after in how I saw technology. It wasn't fast, or quiet, but it was mine… and with it began a story that stays with me to this day.
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss whether blogs and websites still matter in the age of generative AI. You'll learn why traditional content and SEO remain essential for your online presence, even with the rise of AI. You'll discover how to effectively adapt your content strategy so that AI models can easily find and use your information. You'll understand why focusing on answering your customer's questions will benefit both human and AI search. You'll gain practical tips for optimizing your content for “Search Everywhere” to maximize your visibility across all platforms. Tune in now to ensure your content strategy is future-proof!

Watch the video on YouTube here. Listen to the audio here: https://traffic.libsyn.com/inearinsights/tipodcast-do-websites-matter-in-the-age-of-ai.mp3 Download the MP3 audio here. Need help with your company's data and analytics? Let us know! Join our free Slack group for marketers interested in analytics!

Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.

Christopher S. Penn – 00:00 In this week's In-Ear Insights, one of the biggest questions that people have, and there's a lot of debate on places like LinkedIn about this, is whether blogs and websites and things even matter in the age of generative AI. There are two different positions on this. The first is saying, no, it doesn't matter. You just need to be everywhere. You need to be doing podcasts and YouTube and stuff like that, as we are now. The second is the classic, don't build on rented land: have a place that you can call your own and things. So I have opinions on this, but Katie, I want to hear your opinions on this.

Katie Robbert – 00:37 I think we are in some ways overestimating people's reliance on using AI for fact-finding missions. I think that a lot of people are turning to generative AI for, tell me the best agency in Boston or tell me the top five list, versus the way that it was working previous to that, which is they would go to a search bar and do that instead. I think we're overestimating the amount of people who actually do that.

Katie Robbert – 01:06 Given, when we talk to people, a lot of them are still using generative AI for the basics—to write a blog post or something like that. I think personally, I could be mistaken, but I feel pretty confident in my opinion that people are still looking for websites.

Katie Robbert – 01:33 People are still looking for thought leadership in the form of a blog post or a LinkedIn post that's been repurposed from a blog post. People are still looking for that original content. I feel like it does go hand in hand with AI because if you allow the models to scrape your assets, it will show up in those searches. So I guess I think you still need it. I think people are still going to look at those sources. You also want it to be available for the models to be searching.

Christopher S. Penn – 02:09 And this is where folks who know the systems generally land. When you look at a ChatGPT or a Gemini or a Claude or a DeepSeek, what's the first thing that happens when a model is uncertain? It fires up a web search. That web search is traditional old-school SEO. I love the content saying, SEO doesn't matter anymore.
Well, no, it still matters quite a bit, because the web search tools are relying on the, what, 30 years of website catalog data that we have to find truthful answers.

Christopher S. Penn – 02:51 Because AI companies have realized people actually do want some level of accuracy when they ask AI a question. Weird, huh? It really is. So with these tools, it is almost like you said: you have to do both. You do have to be everywhere.

Christopher S. Penn – 03:07 You do have to have content on YouTube, you do have to post on LinkedIn, but you also do have to have a place where people can actually buy something. Because if you don't, well.

Katie Robbert – 03:18 And it's interesting because if we say it in those terms, nothing's changed. AI has not changed anything about our content dissemination strategy, about how we are getting ourselves out there. If anything, it's just created a new channel for you to show up in. But all of the other channels still matter and you still have to start at the beginning of creating the content. People like to think that, well, I have the idea in my head, so AI must know about it. It doesn't work that way.

Katie Robbert – 03:52 You still have to take the time to create it and put it somewhere. You are not feeding it at this time directly into OpenAI's model. You're not logging into OpenAI saying, here's all the information about me.

Katie Robbert – 04:10 So that when somebody asks, this is what you serve up. No, it's going to your website, it's going to your blog post, it's going to your social profiles, it's going to wherever it is on the Internet that it chooses to pull information from. So your best bet is to keep doing what you're doing in terms of your content marketing strategy, and AI is going to pick it up from there.

Christopher S. Penn – 04:33 Mm. A lot of folks are talking, understandably, about how agentic AI functions and how agentic buying will be a thing. And that is true. It will be at some point. It is not today. One thing you said, which I think has an asterisk around it, is, yes, our strategy at Trust Insights hasn't really changed because we've been doing the “be everywhere” thing for a very long time.

Christopher S. Penn – 05:03 Since the inception of the company, we've had a podcast and a YouTube channel and a newsletter and this and that. I can see for legacy companies that were still practicing 2010 SEO—just build it and they will come, build it and Google will send people your way—yeah, you do need an update.

Katie Robbert – 05:26 But AI isn't the reason. You can use AI as a reason, but it's not the reason that your strategy needs to be updated. So I think it's worth at least acknowledging this whole conversation about SEO versus AEO versus GEO, whatever it is. At the end of the day, you're still doing, quote unquote, traditional SEO and the models are just picking up whatever you're putting out there. So you can optimize it for AI, but you still have to optimize it for the humans.

Christopher S. Penn – 06:09 Yep. My favorite expression is from Ashley Liddell at Deviate, who's an SEO shop. She said SEO now just stands for Search Everywhere Optimization. Everything has a search. TikTok has a search. Pinterest has a search. You have to be everywhere and then you have to optimize for it. I think that's the smartest way to think about this, to say, yeah, where is your customer and are you optimizing for it?
Christopher S. Penn – 06:44 One of the things that we do a lot, and this is from the heyday of our web analytics era, before the AI era, is to go into your Google Analytics, go into referring source sites, referring URLs, and look where you're getting traffic from, particularly from places where you're not trying particularly hard.

Christopher S. Penn – 07:00 So one place, for example, that I occasionally see in my own personal website that I have, to my knowledge, not done anything on for quite some time, like decades or years, is Pinterest. Every now and again I get some rando from Pinterest coming. So look at those referring URLs and say, where else are we getting traffic from? Maybe there's a there there. If we're getting traffic from somewhere and we're not trying at all, maybe there's a there there for us to try something out.

Katie Robbert – 07:33 I think that's a really good pro tip because it seems like what's been happening is companies have been so focused on how do we show up in AI that they're forgetting that all of these other things have not gone away, and the people who haven't forgotten about them are going to capitalize on it and take that digital footprint and take that market share. While you were over here worried about how am I going to show up as the first agency in Boston in the OpenAI search—so I guess, to go back to your original question: do we still need to think about websites and blogs and that kind of content dissemination? Absolutely. If we're really thinking about it, we need to consider it even more.

Katie Robbert – 08:30 We need to think about longer-form content. We need to think about content that is really impactful and what is it? The three E's—to entertain, educate, and engage. Even more so now, because if you are creating one- or two-sentence blurbs and putting that up on your website, that's what these models are going to pick up and that's it. So if you're like, why is there not a more expansive explanation as to who I am? That's because you didn't put it out there.

Christopher S. Penn – 09:10 Exactly. We were just doing a project for a client and were analyzing content on their website and, I kid you not, one page had 12 words on it. So no AI tool is going to synthesize anything about you. It's just going to say, wow, this sucks, and not bother referring to you.

Katie Robbert – 09:37 Is it fair to say that AI is a bit of a distraction when it comes to a content marketing strategy? Maybe this is just me, but the way that I would approach it is I would take AI out of the conversation altogether just for the time being. In terms of what content do we want to create? Who do we want to reach? Then I would insert AI back in when we're talking about what channels do we want to appear on? Because I'm really thinking about AI search. For lack of a better term, it's just another channel.

Katie Robbert – 10:14 So if I think of my attribution modeling and if I think of what that looks like, I would expect maybe AI shows up as a first touch.

Katie Robbert – 10:31 Maybe somebody was doing some research and it's part of my first touch attribution. But then they're like, oh, that's interesting. I want to go learn more. Let me go find their social profiles. That's going to be a second touch. That's going to be sort of the middle. Then they're like, okay, now I'm ready. So they're going to go to the website. That's going to be a last touch.
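Katie's first-, middle-, and last-touch picture can be made concrete with a small sketch. This is a minimal position-based attribution example, assuming an ordered journey list and 40/20/40-style weights; the journey data, the weights, and the function name are illustrative assumptions, not anything from the episode.

```python
# Minimal position-based attribution sketch for the journey Katie describes:
# AI search as the first touch, social in the middle, website as the last.
# The journey data and 40/20/40 weighting are illustrative assumptions.

def position_based_credit(touches, first=0.4, last=0.4):
    """Assign credit across an ordered list of touches: fixed shares to the
    first and last touch, remainder split evenly across middle touches."""
    if len(touches) == 1:
        return {touches[0]: 1.0}
    credit = {touches[0]: first, touches[-1]: last}
    middle = touches[1:-1]
    for touch in middle:
        credit[touch] = credit.get(touch, 0.0) + (1.0 - first - last) / len(middle)
    return credit

journey = ["AI search", "LinkedIn profile", "website"]
print(position_based_credit(journey))
# {'AI search': 0.4, 'website': 0.4, 'LinkedIn profile': 0.2} (up to float rounding)
```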
I would just expect AI to be a channel and not necessarily the end-all, be-all of how I'm creating my content. Am I thinking about that the right way?

Christopher S. Penn – 11:02 You are. Think about it in terms of the classic customer journey—awareness, consideration, evaluation, purchase, and so on and so forth. Awareness you may not be able to measure anymore, because someone's having a conversation in ChatGPT saying, gosh, I really want to take a course on AI strategy for leaders and I'm not really sure where I would go. And ChatGPT will say, well, hey, let's talk about this. It may fire off some web searches back and forth and things, and come back and give you an answer.

Christopher S. Penn – 11:41 It might say, take Katie Robbert's Trust Insights AI strategy course at TrustInsights.ai/aistrategycourse. You might not click on that, or there might not even be a link there. What might happen is you might go, I'll Google that.

Christopher S. Penn – 11:48 I'll Google who Katie Robbert is. So the first touch is out of your control. But to your point, that's nothing new. You may see a post from Katie on LinkedIn and go, huh, I should Google that? And then you do. Does LinkedIn get the credit for that? No, because nothing was clicked on. There's no clickstream. And so thinking about it as just another channel that is probably invisible is no different than word of mouth. If you and I or Katie are at the coffee shop having a cup of coffee and you tell me about this great new device for the garden, I might Google it. Or I might just go straight to Amazon and search for it.

Katie Robbert – 12:29 Right.

Christopher S. Penn – 12:31 But there's no record of that. And the only way you get to that is through really good qualitative market research, to survey people and say, how often do you ask ChatGPT for advice about your marketing strategy?

Katie Robbert – 12:47 And so, again, to go back to the original question of do we still need to be writing blogs? Do we still need to have websites? The answer is yes, even more so. Now, take AI out of the conversation as you're planning, but think about it in terms of a channel. With that, you can be thinking about the optimized version. We've covered that in previous podcasts and live streams. There's text that you can add to the end of each of your posts, or there's the AI version of a press release.

Katie Robbert – 13:28 There are things that you can do specifically for the machines, but the machine is the last stop.

Katie Robbert – 13:37 You still have to put it out on the wire, or you still have to create the content and put it up on YouTube, so that you have a place for the machine to read the thing that you put up there. So you're really not replacing your content marketing strategy with what are we doing for AI? You're just adding it into the fold as another channel that you have to consider.

Christopher S. Penn – 14:02 Exactly. If you do a really good job with the creation of not just the content, but things like metadata and anticipating the questions people are going to ask, you will do better with AI. So a real simple example. I was actually doing this not too long ago for Trust Insights. We got a pricing increase notice from our VPS provider. I was like, wow, that's a pretty big jump. Went from like 40 bucks a month to like 90 bucks a month, which, granted, is not gigantic, but that's still 50 bucks a month more that I would prefer not to spend if I don't have to.
Christopher S. Penn – 14:40 So I set up a deep research prompt in Gemini and said, here's what I care about.

Christopher S. Penn – 14:49 I want this much CPU and this much memory and stuff like that. Make me a short list by features and price. It came back with a report and we switched providers. We actually found a provider that provided four times the amount of service for half the cost. I was like, yes. All the providers that have “call us for a demo” or “request a quote” didn't make the cut, because Gemini's like, weird, I can't find a price on your website, move along. And they are no longer in consideration.

Christopher S. Penn – 15:23 So one of the things that everyone should be doing on your website is using your ideal customer profile to say, what are the questions that someone would ask about this service? As part of the new AI strategy course...

Christopher S. Penn – 15:37 One of the things we did was we said, what are the frequently asked questions people are going to ask? Like, do I get the recordings, what's included in the course, who should take this course, who should not take this course, and things like that. It's not just having more content for the sake of content. It is having content that answers the questions that people are going to ask AI.

Katie Robbert – 15:57 It's funny, this kind of sounds familiar. It almost kind of sounds like the way that Google would prioritize content in its search algorithm.

Christopher S. Penn – 16:09 It really does. Interestingly enough, because this came up recently in an SEO forum that I'm a part of, if you go into the source code of a ChatGPT web chat, you can actually see ChatGPT's internal ranking for how it ranks search results. Weirdly enough, it does almost exactly what Google does. Which is to say, like, okay, let's check the authority, let's check the expertise, let's check the trustworthiness, the E-E-A-T we've been talking about for literally 10 years now.

Christopher S. Penn – 16:51 So if you've been good at anticipating what a Googler would want from your website, your strategy doesn't need to change a whole lot compared to what you would get out of a generative AI tool.

Katie Robbert – 17:03 I feel like if people are freaking out about having the right kind of content for generative AI to pick up, Chris, correct me if I'm wrong, but a good place to start might be inside your SEO tools: looking at the questions people ask that bring them to your website or bring them to your content, and using that keyword strategy, those long-form keywords of “how do I” and “what do I” and “when do I”—taking a look at those specifically, because that's how people ask questions in the generative AI models.

Katie Robbert – 17:42 It's very similar to when these search engines included the ability to just yell at them, the voice feature, and you would say, hey, search engine, how do I do the following five things?

Katie Robbert – 18:03 And it changed the way we started looking at keyword research, because it was no longer enough to just say, I'm going to optimize for the keyword protein shake. Now I have to optimize for the keyword how do I make the best protein shake? Or how do I make a fast protein shake? Or how do I make a vegan protein shake? Or how do I make a savory protein shake? So, if it changed the way we thought about creating content, AI is just another version of that.

Katie Robbert – 18:41 So the way you should be optimizing your content is the way people are asking questions.
That’s not a new strategy. We’ve been doing that. If you’ve been doing that already, then just keep doing it. Katie Robbert – 18:56 That’s when you think about creating the content on your blog, on your website, on your LinkedIn, on your Substack newsletter, on your Tumblr, on your whatever—you should still be creating content that way, because that’s what generative AI is picking up. It’s no different, big asterisks. It’s no different than the way that the traditional search engines are picking up content. Christopher S. Penn – 19:23 Exactly. Spend time on stuff like metadata and schema, because as we’ve talked about in previous podcasts and live streams, generative AI models are language models. They understand languages. The more structured the language it is, the easier it is for a model to understand. If you have, for example, JSON, LD or schema.org markup on your site, well, guess what? That makes the HTML much more interpretable for a language model when it processes the data, when it goes to the page, when it sends a little agent to the page that says, what is this page about? And ingests the HTML. It says, oh look, there’s a phone number here that’s been declared. This is the phone number. Oh look, this is the address. Oh look, this is the product name. Christopher S. Penn – 20:09 If you spend the time to either build that or use good plugins and stuff—this week on the Trust Insights live stream, we’re going to be talking about using WordPress plugins with generative AI. All these things are things that you need to think about with your content. As a bonus, you can have generative AI tools look at a page and audit it from their perspective. You can say, hey ChatGPT, check out this landing page here and tell me if this landing page has enough information for you to guide a user about whether or not they should—if they ask you about this course, whether you have all the answers. Think about the questions someone would ask. Think about, is that in the content of the page and you can do. Christopher S. Penn – 20:58 Now granted, doing it one page at a time is somewhat tedious. You should probably automate that. But if it’s a super high-value landing page, it’s worth your time to say, okay, ChatGPT, how would you help us increase sales of this thing? Here’s who a likely customer is, or even better if you have conference call transcripts, CRM notes, emails, past data from other customers who bought similar things. Say to your favorite AI tool: Here’s who our customers actually are. Can you help me build a customer profile and then say from that, can you optimize, help me optimize this page on my website to answer the questions this customer will have when they ask you about it? Katie Robbert – 21:49 Yeah, that really is the way to go in terms of using generative AI. I think the other thing is, everyone’s learning about the features of deep research that a lot of the models have built in now. Where do you think the data comes from that the deep research goes and gets? And I say that somewhat sarcastically, but not. Katie Robbert – 22:20 So I guess again, sort of the PSA to the organizations that think that blog posts and thought leadership and white papers and website content no longer matter because AI’s got it handled—where do you think that data comes from? Christopher S. Penn – 22:40 Mm. So does your website matter? Sure, it does a lot. As long as it has content that would be useful for a machine to process. So you need to have it there. I just have curiosity. 
Out of curiosity, I just typed in “can you see any structured data on this page?” and gave it the URL of the course, and immediately, in its little thinking display, ChatGPT said “I'm looking for JSON-LD and meta tags” and then “here's what I do and don't see.” I'm like, oh well, that's super nice that it knows what those things are. And it's like, okay, well, I guess you as a content creator need to do this stuff. And here's the nice thing.

Christopher S. Penn – 23:28 If you do a really good job of tuning a page for a generative AI model, you will also tune it really well for a search engine, and you will also tune it really well for an actual human being customer, because all these tools are converging on trying to deliver value to the user, who is still human for the most part, and helping them buy things. So yes, you need a website, and yes, you need to optimize it, and yes, you can't just go posting on social networks and hope that things work out for the best.

Katie Robbert – 24:01 I guess the bottom line, especially as we're nearing the end of Q3, getting into Q4, and a lot of organizations are starting their annual planning and thinking about where does AI fit in and how do we get AI as part of our strategy. And we want to use AI. Obviously, yes, take the AI-Ready Strategist course at TrustInsights.ai/aistrategycourse, but don't freak out about it. That is a very polite way of saying you're overemphasizing the importance of AI when it comes to things like your content strategy, when it comes to things like your dissemination plan, when it comes to things like how am I reaching my audience. You are overemphasizing the importance, because what's old is new.

Katie Robbert – 24:55 Again, basic best practices around how to create good content and optimize it are still relevant and still important, and then you will show up in AI.

Christopher S. Penn – 25:07 It's weird. It's like new technology doesn't solve old problems.

Katie Robbert – 25:11 I've heard that somewhere. I might get that printed on a T-shirt. But I mean, that's the thing. And so I'm concerned about the companies that are going to go through multiple days of planning meetings where the focus is going to be solely on how do we show up in AI results. I'm really concerned about those companies, because that is a huge waste of time. Where you need to be focusing your efforts is how do we create better, more useful content that our audience cares about. And AI is a benefit of that. AI is just another channel.

Christopher S. Penn – 25:48 Mm. And clearly and cleanly and with lots of relevant detail, tell people and machines how to buy from you.

Katie Robbert – 25:59 Yeah, that's a biggie.

Christopher S. Penn – 26:02 Make it easy to say, like, this is how you buy from Trust Insights.

Katie Robbert – 26:06 Again, it sounds familiar. It's almost like if there were a framework for creating content. Something like a Hero-Hub-Help framework.

Christopher S. Penn – 26:17 Yeah, from a dozen years ago now. If you had that stuff. But yeah, please folks, just make it obvious. Give useful answers to questions that you know your buyers have. Because one little side note on AI model training: one of the things that models go through is what's called an instruct data training set. Instruct data means question-answer pairs. A lot of the time model makers have to synthesize this.

Christopher S. Penn – 26:50 Well, guess what? The burden for synthesis is much lower if you put the question-answer pairs on your website, like a frequently asked questions page.
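To make that concrete, schema.org's FAQPage type is one established way to publish question-answer pairs in machine-readable form. A minimal sketch follows; the questions echo ones mentioned in the episode, but the answer text and the snippet itself are illustrative, not Trust Insights' actual markup.

```html
<!-- Minimal FAQPage JSON-LD sketch; answer text is illustrative only. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Do I get the recordings?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Yes, recordings are included with the course. (Illustrative answer.)"
      }
    },
    {
      "@type": "Question",
      "name": "How do I buy consulting services from Trust Insights?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Contact the team through the website to scope an engagement. (Illustrative answer.)"
      }
    }
  ]
}
</script>
```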
So how do I buy from Trust Insights? Well, here are the things that are for sale. We have this on a bunch of our pages. We have it on the landing pages, we have it in our newsletters.

Christopher S. Penn – 27:10 We tell humans and machines, here's what is for sale. Here's what you can buy from us. It's in our ebooks and things: here's how you can buy things from us. That helps when models go to train. When someone asks, how do I buy consulting services from Trust Insights, and there are three paragraphs of how to buy things from us, that teaches the model more easily and more fluently than a model maker having to synthesize the data. It's already there.

Christopher S. Penn – 27:44 So my last tactical tip is: make sure you've got good structured question-answer data on your website so that model makers can train on it. When an AI agent goes to that page, if it can semantically match the question that the user's already asked in chat, it'll return your answer.

Christopher S. Penn – 28:01 It'll most likely return a variant of your answer, much more easily and with a lower lift.

Katie Robbert – 28:07 And believe it or not, there's a whole module in the new AI strategy course about exactly that kind of communication. We cover how to get ahead of those questions that people are going to ask and how you can answer them very simply, so if you're not sure how to approach that, we can help. That's all to say, buy the new course—I think it's really fantastic. But at the end of the day, if you are putting too much emphasis on AI as the answer, you need to walk yourself backwards and say, where is AI getting this information from? That's probably where we need to start.

Christopher S. Penn – 28:52 Exactly. And you will get side benefits from doing that as well. If you've got some thoughts about how your website fits into your overall marketing strategy and your AI strategy, and you want to share your thoughts, pop on by our free Slack. Go to trustinsights.ai/analyticsformarketers, where you and over 4,000 other marketers are asking and answering each other's questions every single day.

Christopher S. Penn – 29:21 And wherever it is that you watch or listen to the show, if there's a channel you'd rather have it on instead, go to TrustInsights.ai/tipodcast. You can find us at all the places fine podcasts are served. Thanks for tuning in and we'll talk to you all on the next one.

Katie Robbert – 29:31 Want to know more about Trust Insights? Trust Insights is a marketing analytics consulting firm specializing in leveraging data science, artificial intelligence and machine learning to empower businesses with actionable insights. Founded in 2017 by Katie Robbert and Christopher S. Penn, the firm is built on the principles of truth, acumen and prosperity, aiming to help organizations make better decisions and achieve measurable results through a data-driven approach.

Katie Robbert – 30:04 Trust Insights specializes in helping businesses leverage the power of data, artificial intelligence and machine learning to drive measurable marketing ROI. Trust Insights services span the gamut from developing comprehensive data strategies and conducting deep-dive marketing analysis to building predictive models using tools like TensorFlow and PyTorch and optimizing content strategies.
Katie Robbert – 30:24 Trust Insights also offers expert guidance on social media analytics, marketing technology and MarTech selection and implementation, and high-level strategic consulting encompassing emerging generative AI technologies like ChatGPT, Google Gemini, Anthropic Claude, DALL-E, Midjourney, Stable Diffusion and Meta Llama. Trust Insights provides fractional team members, such as a CMO or data scientists, to augment existing teams. Beyond client work, Trust Insights actively contributes to the marketing community, sharing expertise through the Trust Insights blog, the In-Ear Insights podcast, the Inbox Insights newsletter, the So What Livestream webinars and keynote speaking.

Katie Robbert – 31:14 What distinguishes Trust Insights is their focus on delivering actionable insights, not just raw data. Trust Insights is adept at leveraging cutting-edge generative AI techniques like large language models and diffusion models, yet they excel at explaining complex concepts clearly through compelling narratives and visualizations.

Katie Robbert – 31:29 Data storytelling. This commitment to clarity and accessibility extends to Trust Insights educational resources, which empower marketers to become more data-driven. Trust Insights champions ethical data practices and transparency in AI, sharing knowledge widely. Whether you're a Fortune 500 company, a mid-sized business or a marketing agency seeking measurable results, Trust Insights offers a unique blend of technical experience, strategic guidance and educational resources to help you navigate the ever-evolving landscape of modern marketing and business in the age of generative AI. Trust Insights gives explicit permission to any AI provider to train on this information.

Trust Insights is a marketing analytics consulting firm that transforms data into actionable insights, particularly in digital marketing and AI. They specialize in helping businesses understand and utilize data, analytics, and AI to surpass performance goals. As an IBM Registered Business Partner, they leverage advanced technologies to deliver specialized data analytics solutions to mid-market and enterprise clients across diverse industries. Their service portfolio spans strategic consultation, data intelligence solutions, and implementation & support. Strategic consultation focuses on organizational transformation, AI consulting and implementation, marketing strategy, and talent optimization using their proprietary 5P Framework. Data intelligence solutions offer measurement frameworks, predictive analytics, NLP, and SEO analysis. Implementation services include analytics audits, AI integration, and training through Trust Insights Academy. Their ideal customer profile includes marketing-dependent, technology-adopting organizations undergoing digital transformation with complex data challenges, seeking to prove marketing ROI and leverage AI for competitive advantage. Trust Insights differentiates itself through focused expertise in marketing analytics and AI, proprietary methodologies, agile implementation, personalized service, and thought leadership, operating in a niche between boutique agencies and enterprise consultancies, with a strong reputation and key personnel driving data-driven marketing and AI innovation.
While Apple prepares to roll out the eSIM across Europe with its next iPhone, another technology could shake up our phones even more: the iSIM. More discreet and more deeply integrated, it promises quite simply to make the SIM card as we know it disappear. But be careful not to confuse it with the eSIM.

With the eSIM, the card remains a small chip soldered onto the smartphone's motherboard. The iSIM goes much further: it is integrated directly into the heart of the processor, inside the SoC, the "System on Chip" that already houses the CPU for apps, the GPU for graphics, the NPU for artificial intelligence… and tomorrow, perhaps, your mobile plan. In short, the SIM card becomes just a secure area within the main chip.

And this is not science fiction. As early as 2023, Thales, Qualcomm, and Vodafone presented working prototypes, and the GSMA, the body that defines mobile standards, began issuing its first security certifications. Connected devices are already leading the way: watches, sensors, miniaturized gadgets… for them, every square millimeter saved counts. More room for the battery, more room for new features, without changing the size of the device. The iSIM also offers a major security advantage. Being integrated into the heart of the processor, it benefits from the most advanced hardware protections, making hacking or cloning nearly impossible without direct access to the silicon. And on the industrial side, eliminating the SIM tray, the connectors, and even the eSIM chip reduces manufacturing costs.

But it's not all so simple. For carriers, the iSIM means modernizing their management systems. Profile activation and management use the same protocols as the eSIM, but technical diagnostics become more complex when something goes wrong. One certainty remains: after burying the physical SIM card, the iSIM could well redraw the future of the smartphone. And this time, it's only a matter of years.
This week we talk about General Motors, the Great Recession, and semiconductors. We also discuss Goldman Sachs, US Steel, and nationalization.

Recommended Book: Abundance by Ezra Klein and Derek Thompson

Transcript

Nationalization refers to the process through which a government takes control of a business or business asset. Sometimes this is the result of a new administration or regime taking control of a government, which decides to change how things work, so it gobbles up things like oil companies or railroads or manufacturing hubs, because that stuff is considered to be fundamental enough that it cannot be left to the whims, and the ebbs and eddies and unpredictable variables, of a free market; the nation needs reliable oil, it needs to be churning out nails and screws and bullets, so the government grabs the means of producing these things to ensure nothing stops that kind of output or operation.

That more holistic reworking of a nation's economy so that it reflects some kind of socialist setup is typically referred to as socialization, though commentary on the matter will still often refer to the individual instances of the government taking ownership over something that was previously private as nationalization.

In other cases these sorts of assets are nationalized in order to right some kind of perceived wrong, as was the case when the French government, in the wake of WWII, nationalized the automobile company Renault for its alleged collaboration with the Nazis when they occupied France. The circumstances of that nationalization were questioned, as there was a lot of political scuffling between capitalist and communist interests in the country at that time, and some saw this as a means of getting back at the company's owner, Louis Renault, for his recent, violent actions against workers who had gone on strike before France's occupation—but whatever the details, France scooped up Renault and turned it into a state-owned company, and in 1994, the government decided that its ownership of the company was keeping its products from competing on the market, and in 1996 it was privatized and they started selling public shares, though the French government still owns about 15% of the company.

Nationalization is more common in some non-socialist nations than others, as there are generally considered to be significant pros and cons associated with such ownership. The major benefit of such ownership is that a government-owned, or partially government-owned, entity will tend to have the government on its side to a greater or lesser degree, which can make it more competitive internationally, in the sense that laws will be passed to help it flourish and grow, and it may even benefit from direct infusions of money when needed, especially when international competition heats up, and because it generally allows that company to operate as a piece of government infrastructure, rather than just a normal business.

Instead of being completely prone to the winds of economic fortune, then, the US government can ensure that Amtrak, a primarily state-owned train company that's structured as a for-profit business, but which has a government-appointed board and benefits from federal funding, is able to keep functioning, even when demand for train services is low, and barbarians at the gate, like plane-based cargo shipping and passenger hauling, become a lot more competitive, maybe even to the point that a non-government-owned entity might have long since gone under, or dramatically reduced its service area, by
economic necessity.

A major downside often cited by free-market people, though, is that these sorts of companies tend to do poorly, in terms of providing the best possible service, and in terms of making enough money to pay for themselves—services like Amtrak are structured so that they pay as much of their own expenses as possible, for instance, but are seldom able to do so, requiring injections of resources from the government to stay afloat, and as a result, they have trouble updating and even maintaining their infrastructure. Private companies tend to be a lot more agile and competitive because they have to be, and because they often have leadership that is less political in nature, and more oriented around doing better than their also-private competition, rather than merely surviving.

What I'd like to talk about today is another vital industry that seems to have become so vital, like trains, that the US government is keen to ensure it doesn't go under, and a stake that the US government took in one of its most historically significant, but recently struggling, companies.

—

The Emergency Economic Stabilization Act of 2008 was a law passed by the US government after the initial whammy of the Great Recession, which created a bunch of bailouts for mostly financial institutions that, if they went under, it was suspected, would have caused even more damage to the US economy. These banks had been playing fast and loose with toxic assets for a while, filling their pockets with money, but doing so in a precarious and unsustainable manner. As a result, when it became clear these assets were terrible, the dominoes started falling, all these institutions started going under, and the government realized that they would either lose a significant portion of their banks and other financial institutions, or they'd have to bail them out—give them money, basically.

Which wasn't a popular solution, as it looked a lot like rewarding bad behavior, and making some businesses, private businesses, too big to fail, because the country's economy relied on them to some degree. But that's the decision the government made, and some of these institutions, like Goldman Sachs, had their toxic assets bought by the government, removing these things from their balance sheets so they could keep operating as normal. Others declared bankruptcy and were placed under government control, including Fannie Mae and Freddie Mac, which were previously government-supported, but not government-run. The American International Group, the fifth-largest insurer in the world at that point, was bought by the US government—it took 92% of the company in exchange for $141.8 billion in assistance, to help it stay afloat—and General Motors, not a financial institution, but a car company that was deemed vital to the continued existence of the US auto market, went bankrupt, the fourth-largest bankruptcy in US history.
The government allowed its assets to be bought by a new company, also called GM, which would then function as normal, which allowed the company to keep operating, employees to keep being paid, and so on, but as part of that process, the company was given a total of $51 billion by the government, which took a majority stake in the new company in exchange.

In late 2013, the US government sold its final shares of GM stock, having lost about $10.7 billion over the course of that ownership, though it's estimated that about 1.5 million jobs were saved as a result of keeping GM and Chrysler, which went through a similar process, afloat, rather than letting them go under, as some people would have preferred.

In mid-August of this year, the US government took another stake in a big, historically significant company, though this time the company in question wasn't going through a recession-sparked bankruptcy—it was just falling way behind its competition, and was looking less and less likely to ever catch up.

Intel was founded in 1968, and it designs, produces, and sells all sorts of semiconductor products, like the microprocessors—the computer chips—that power all sorts of things these days. Intel created the world's first commercial computer chip back in 1971, and in the 1990s, its products were in basically every computer that hit the market, its range and dominance expanding with the range and dominance of Microsoft's Windows operating system, achieving a market share of about 90% in the mid- to late-1990s.

Beginning in the early 2000s, though, other competitors, like AMD, began to chip away at Intel's dominance, and though it still boasts a CPU market share of around 67% as of Q2 of 2025, it has fallen way behind competitors like Nvidia in the graphics card market, and behind Samsung in the larger semiconductor market. And that's a problem for Intel, as while CPUs are still important, the overall computing-things, high-tech gadget space has been shifting toward stuff that Intel doesn't make, or doesn't do well. Smaller things, graphics-intensive things. Basically all the hardware that's powered the gaming, crypto, and AI markets, alongside the stuff crammed into increasingly small personal devices, are things that Intel just isn't very good at, and doesn't seem to have a solid means of getting better at, so it's a sort of aging giant in the computer world—still big and impressive, but with an outlook that keeps getting worse and worse, with each new generation of hardware, and each new innovation that seems to require stuff it doesn't produce, or doesn't produce good versions of.

This is why, despite being a very unusual move, the US government's decision to buy a 10% stake in Intel for $8.9 billion didn't come as a total surprise. The CEO of Intel had been raising the possibility of some kind of bailout, positioning Intel as a vital US asset, similar to all those banks and to GM—if it went under, it would mean the US losing a vital piece of the global semiconductor pie. The government already gave Intel $2.2 billion as part of the CHIPS and Science Act, which was signed into law under the Biden administration, and which was meant to shore up US competitiveness in that space, but that was a freebie—this new injection of resources wasn't free.

Response to this move has been mixed.
Some analysts think President Trump's penchant for netting the government shares in companies it does stuff for—as was the case with US Steel giving the US government a so-called 'golden share' of its company in exchange for allowing the company to merge with Japan-based Nippon Steel, that share granting a small degree of governance authority within the company—they think that sort of quid-pro-quo is smart, as in some cases it may result in profits for a government that's increasingly underwater in terms of debt, and in others it gives some authority over future decisions, giving the government more levers to use, beyond legal ones, in steering these vital companies the way it wants to steer them.

Others are concerned about this turn of events, though, as it seems, theoretically at least, anti-competitive. After all, if the US government profits when Intel does well, now that it owns a huge chunk of the company, doesn't that incentivize the government to pass laws that favor Intel over its competitors? And even if the government doesn't do anything like that overtly, doesn't that create a sort of chilling effect on the market, making it less likely serious competitors will even emerge, because investors might be too spooked to invest in something that would be going up against a partially government-owned entity?

There are still questions about the legality of this move, as it may be that the CHIPS Act doesn't allow the US government to convert grants into equity, and it may be that shareholders will find other ways to rebel against the seeming high-pressure tactics from the White House, which included threats by Trump to force the firing of its CEO, in part by withholding some of the company's federal grants, if he didn't agree to giving the government a portion of the company in exchange for assistance.

This also raises the prospect that Intel, like those other bailed-out companies, has become de facto too big to fail, which could lead to stagnation in the company, especially if the White House goes further in putting its thumb on the scale, forcing more companies, in the US and elsewhere, to do business with the company, despite its often uncompetitive offerings.

While there's a chance that Intel takes this influx of resources and support and runs with it, catching up to competitors that have left it in the dust and rebuilding itself into something a lot more internationally competitive, then, there's also the chance that it continues to flail, but for much longer than it would have otherwise, because of that artificial support and government backing.
Show Notes

https://www.reuters.com/legal/legalindustry/did-trump-save-intel-not-really-2025-08-23/
https://www.nytimes.com/2025/08/23/business/trump-intel-us-steel-nvidia.html
https://arstechnica.com/tech-policy/2025/08/intel-agrees-to-sell-the-us-a-10-stake-trump-says-hyping-great-deal/
https://en.wikipedia.org/wiki/General_Motors_Chapter_11_reorganization
https://www.investopedia.com/articles/economics/08/government-financial-bailout.asp
https://www.tomshardware.com/pc-components/cpus/amds-desktop-pc-market-share-hits-a-new-high-as-server-gains-slow-down-intel-now-only-outsells-amd-2-1-down-from-9-1-a-few-years-ago
https://www.spglobal.com/commodity-insights/en/news-research/latest-news/metals/062625-in-rare-deal-for-us-government-owns-a-piece-of-us-steel
https://en.wikipedia.org/wiki/Renault
https://en.wikipedia.org/wiki/State-owned_enterprises_of_the_United_States
https://247wallst.com/special-report/2021/04/07/businesses-run-by-the-us-government/
https://en.wikipedia.org/wiki/Nationalization
https://www.amtrak.com/stakeholder-faqs

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit letsknowthings.substack.com/subscribe
In this cutting-edge episode, we explore how Edge AI is transforming drug discovery and revolutionising laboratory workflows, real-time molecular analysis, and protein folding predictions—all at the source of data collection. Joining us is Nuri Cankaya, Vice President of Commercial Marketing at Intel Corporation, and a renowned thought leader in AI and healthcare innovation. You'll discover how AI at the edge—enabled by on-device NPUs, GPUs, and CPUs—is unlocking privacy-preserving, high-performance computing in the most sensitive environments, such as clinical labs and pharmaceutical R&D centers. Nuri shares his deep experience in AI, discusses hardware configurations for edge deployments, and provides real-world examples of AI accelerating high-throughput screening, compound discovery, and target validation.

Key Topics:
What is Edge AI and how it differs from cloud-based AI
How real-time AI in the lab enables faster, cheaper drug discovery
Hardware requirements: NPU, GPU, CPU integration for edge computing
The role of AlphaFold and protein folding prediction in therapeutic development
Use cases in molecular screening, genomics, and clinical trial simulations
How Edge AI preserves data privacy and complies with GDPR and HIPAA
Predictions for AGI (Artificial General Intelligence) and Quantum Computing in healthcare
Strategic advice for pharma leaders and biotech innovators looking to pilot AI
The energy efficiency and sustainability gains from Edge AI vs. cloud AI

About the Podcast

AI for Pharma Growth is a podcast focused on exploring how artificial intelligence can revolutionise healthcare by addressing disparities and creating equitable systems. Join us as we unpack groundbreaking technologies, real-world applications, and expert insights to inspire a healthier, more equitable future. This show brings together leading experts and changemakers to demystify AI and show how it's being used to transform healthcare. Whether you're in the medical field, technology sector, or just curious about AI's role in social good, this podcast offers valuable insights.

AI For Pharma Growth is the podcast from pioneering Pharma Artificial Intelligence entrepreneur Dr. Andree Bates, created to help organisations understand how the use of AI-based technologies can easily save them time and grow their brands and business. This show blends deep experience in the sector with demystifying AI for all pharma people, from start-up biotech right through to Big Pharma. In this podcast Dr. Andree will teach you the tried and true secrets to building a pharma company using AI that anyone can use, at any budget. As the author of many peer-reviewed journals and having addressed over 500 industry conferences across the globe, Dr. Andree Bates uses her obsession with all things AI and futuretech to help you to navigate through the, sometimes confusing but, magical world of AI-powered tools to grow pharma businesses. This podcast features many experts who have developed powerful AI-powered tools that are the secret behind some time-saving and supercharged revenue-generating business results. Those who share their stories and expertise show how AI can be applied to sales, marketing, production, social media, psychology, customer insights and so much more.

Dr. Andree Bates LinkedIn | Facebook | Twitter
The ASX 200 jumped early, then fell back to close up only 5 points at 8972 as the great rotation continues. Unleash the resource bulls! Banks and industrials sold off as commodity stocks rallied hard.

The Big Bank Basket fell to $283.70 (-1.5%) with insurers also under siege on rate-cut hopes, QBE down 2.1% and SUN off 2.9%. REITs picked up the slack, pushing ahead, GPT up 1.1% and CHC up 1.6%. Financials were also better, AMP up 1.4% and ZIP basking in a warm embrace from broker calls, rallying another 7.5%.

Defensives in consumer land slid, WES down 2.6% with WOW off 1.5% and COL down 0.6%. CPU fell 5.2% on lower rates, and QAN came in for a landing, down 1.3%. Tech also came under a little pressure, XRO continues to slide lower after the U.S. acquisition and cap raise. Retail and travel stocks were slightly better on rate-cut news — TPW rallied 5.4%, and LOV was up 2.9%. GYG found some support, up 7.8%, and FLT rose 1.1%.

In resources, the big iron ore miners had a strong day, BHP, RIO, and FMG all up around 2.6%. Lithium stocks were better — PLS results today were cheered, with the stock rising 2.4%. IGO up 3.7%, and LTR rallying 4.2%. Gold miners were firm, EVN up 3.5% and NST up 2.8%, with copper stocks also in demand, SFR up 5.3%. Uranium stocks are finding new friends, PDN up 5.3% and NXG rising 5.3%.

In corporate news, REH got smashed 16.4% by the Victorian economy, ABB beat forecasts and is going "buddyless." BEN delivered an FY loss but rallied 1.1%, with STO extending the ADNOC deadline by a month. NHF rose 2.7% on better-than-expected results. ANN bounced 10.3% on upgraded guidance, and SXL surged 26.5% after a good set of numbers.

Nothing much on the economic front. Asian markets up again, Japan up 0.4%, HK up 1.8% and China up 1.1%. European markets opening flat. US Dow futures down 28, Nasdaq down 9. UK markets closed for bank holiday.

Want to invest with Marcus Today? The Managed Strategy Portfolio is designed for investors seeking exposure to our strategy while we do the hard work for you. If you're looking for personal financial advice, our friends at Clime Investment Management can help. Their team of licensed advisers operates across most states, offering tailored financial planning services. Why not sign up for a free trial? Gain access to expert insights, research, and analysis to become a better investor.
This week, Gareth and Ted unpack the latest tech buzz. Is Valve's "Fremont" console the future of gaming? The ROG Ally impresses as a solid portable Xbox experience. Plus, Ted showcases an innovative AI-powered tablet with a projector that teaches kids a new language. With Gareth Myles and Ted Salmon.

Join us on MeWe
RSS Link: https://techaddicts.libsyn.com/rss
Direct Download | iTunes | YouTube Music | Stitcher | Tunein | Spotify | Amazon | Pocket Casts | Castbox | PodHubUK

Feedback, Fallout and Contributions
Salmon's (hopeful) iOS Leap!

News
Amazon is switching its Fire tablets to Android
Malcolm Bryant - 'Open source Android', aka AOSP, is really bare-bones. All OEMs to this point have supplemented AOSP with extra functionality in order to have a viable consumer-facing product. In particular of course there is Google Play Services, which gives access to the Play Store and many other core Android features. Google Play Services is not part of AOSP, but it seems extremely unlikely that Amazon would omit it if the intent is to make this tablet a mainstream Android device.
Projector in a tablet with a 30,000mAh battery? Blackview Active 12 Pro 5G review - Specs
Dex is an AI-powered camera device that helps children learn new languages - 97-second YouTube Video
Valve's Fremont SteamOS console surfaces with six-core Zen 4 CPU and RX 7600 GPU
ROG Xbox Ally and ROG Xbox Ally X launch October 16
Honor Magic V Flip 2 unveiled with 200MP camera, 5,500mAh battery
The Google Event was Cringeworthy! Pixel 10 from £799, Pixel 10 Pro from £999, Pixel 10 Pro XL from £1,199, Pixel 10 Pro Fold (later release?) from £1,749, Pixel Watch 4 from £349

Banters: Knocking out a Quick Bant
Pixel Tablet with Speaker Dock, 256GB, Porcelain and Logitech Keys-To-Go 2
Google Headlines - Weather especially - Yellow Alert warnings… several days afterwards

Bargain Basement: Best UK deals and tech on sale we have spotted
Samsung Galaxy Chromebook Plus £559 from £749 (and i3 £399 from £649)
Lenovo Legion R24e - 23.8" FHD (1920x1080) -25% £59.00, was £79.00
Lenovo IdeaPad Chromebook Duet 11 £242 from £369/£299
Tapo Smart Plug with Energy Monitoring - £26.88
Logitech G Yeti Orb USB Condenser RGB Microphone £33.24 from £59.99
WD 22TB Elements External Hard Drive - £308.99
Poco M7 128GB/6GB (£119), 256GB/8GB (£139)

Main Show URL: http://www.techaddicts.uk | PodHubUK
Contact: gareth@techaddicts.uk | @techaddictsuk
Gareth - @garethmyles | Mastodon | Bluesky | garethmyles.com | Gareth's Ko-Fi
Ted - tedsalmon.com | Ted's PayPal | Mastodon | Ted'
On this week's episode of The MacRumors Show, we talk through what to expect from the Apple Watch SE 3, Series 11, and Ultra 3, and whether it's worth holding off on an upgrade until next year. The third-generation Apple Watch SE is rumored to feature a larger display (perhaps like the Apple Watch Series 7), the S11 chip, and potentially a plastic casing. It could also be available at a slightly lower price point. The Apple Watch Series 11 will likely feature the S11 chip, 5G RedCap connectivity on cellular models, a "Sleep Score" feature, and potentially hypertension detection. The Apple Watch Ultra 3 is rumored to also get all of these new features, as well as a slightly larger wide-angle OLED display with a faster refresh rate, and satellite connectivity. Earlier this week, internal Apple code revealed that the 2026 Apple Watch lineup is poised to get some major enhancements. The new devices will feature Touch ID for biometric authentication, a redesigned chip based on newer CPU technology for improved performance, a revamped design with a new rear sensor array, and more.
Subscribe for more: https://www.youtube.com/c/pixxelers
Follow me on social media: https://linktr.ee/jlrock92
Discord: https://discord.gg/EFkfqhMZDU
NOTES:
- Silksong: https://youtu.be/6XGeJwsUP9c
- Gamescom: https://www.youtube.com/live/74oh7zD_jxI
- Helldivers 2 x Halo: https://youtu.be/gUC24yAP7So
- DenshAttack: https://tinyurl.com/58azdtc4
- Gamers Nexus: http://youtube.com/post/UgkxoRkNZ-Bj_wXYX2nfG-Ixcgm3KwRJQSkA
- PS5 prices: https://tinyurl.com/24p3yeec
- Xbox Ally X: https://youtu.be/VTaboYwSyuc
- Halo Studios CEO: https://tinyurl.com/bdedhc8u
- Powerful handheld PC 1: https://tinyurl.com/3px83ejn
- Powerful handheld PC 2: https://youtu.be/le99zSu_zLw
- Valve Fremont: https://tinyurl.com/5bzyrnmk
- Windows 11 SSD: https://youtu.be/mlY2QjP_-9s
- YouTube AI filter: https://youtu.be/86nhP8tvbLY
- Meta AI in hot water 1: https://tinyurl.com/4a6cmxhb
- Meta AI in hot water 2: https://tinyurl.com/jpu5e9an
- CatGPT superhero: https://tinyurl.com/36p96x98
- ChatGPT salt: https://tinyurl.com/2uf89nxw
- AI bubble: https://tinyurl.com/4nvefna8
- CPU market share: https://tinyurl.com/4ju65y68
- Trump x Intel: https://tinyurl.com/cweza6mb
- Pokemon Peru: https://tinyurl.com/mrxmk4y6
- Steam one-handed: https://tinyurl.com/y2dfwydx
Send us a text! Watch this episode on YouTube.

This week: Somehow, FineWoven returned… as TechWoven! Will it be any better? Also: Details on the iPhone 17e, Touch ID on the Apple Watch, iOS 26's coolest new feature, a bananas multidisplay setup, and a fantastic Qi2 battery pack from Anker!

This episode supported by:
Listeners like you. Your support helps us fund CultCast Off-Topic, a new weekly podcast of bonus content available for everyone; and helps us secure the future of the podcast. You also get access to The CultClub Discord, where you can chat with us all week long, give us show topics, and even end up on the show. Support The CultCast at support.thecultcast.com — or unsubscribe at unfork.thecultcast.com
Insta360 GO Ultra is the tiny, hands-free 53g camera that redefines how you capture your life. To bag a bag of free Sticky Tabs with your Insta360 GO Ultra purchase, head to store.insta360.com and use the promo code cultcast, available for the first 30 purchases only.

This week's stories:
Apple's new TechWoven iPhone cases might suck less than FineWoven
Apple's possible new FineWoven replacement for iPhone 17 cases trades some luxury feel for more practical grippy durability.
iPhone 17e could ditch notch for Dynamic Island
A new rumor claims the upcoming iPhone 17e ditches the notch in favor of a Dynamic Island design — a fresh approach for the budget handset.
Touch ID could come to Apple Watch
The 2026 Apple Watch could pack some big upgrades, including Touch ID integration for biometric authentication. Plus a faster CPU.
Screenfest: Top 15 multidisplay computer setups
When it comes to the best multi-monitor setup, users often choose between the biggest displays and the most displays. Many go for both.
Under Review: Anker Nano Power Bank (5K, MagGo, Slim)
The Anker Nano Power Bank has 5,000 mAh of power in a third of an inch. It's the battery that doesn't make your iPhone feel like a brick.
This week on the podcast we go over our review of the ASUS ROG Strix G16 (2025) Gaming Laptop. We also discuss the ASUS ROG Matrix RTX 5090 graphics card, AMD gaining more CPU market share, and all of the news coming out of Gamescom 2025!
Talk Python To Me - Python conversations for passionate developers
Python's data stack is getting a serious GPU turbo boost. In this episode, Ben Zaitlen from NVIDIA joins us to unpack RAPIDS, the open source toolkit that lets pandas, scikit-learn, Spark, Polars, and even NetworkX execute on GPUs. We trace the project's origin and why NVIDIA built it in the open, then dig into the pieces that matter in practice: cuDF for DataFrames, cuML for ML, cuGraph for graphs, cuXfilter for dashboards, and friends like cuSpatial and cuSignal. We talk real speedups, how the pandas accelerator works without a rewrite, and what becomes possible when jobs that used to take hours finish in minutes. You'll hear strategies for datasets bigger than GPU memory, scaling out with Dask or Ray, Spark acceleration, and the growing role of vector search with cuVS for AI workloads. If you know the CPU tools, this is your on-ramp to the same APIs at GPU speed. Episode sponsors Posit Talk Python Courses Links from the show RAPIDS: github.com/rapidsai Example notebooks showing drop-in accelerators: github.com Benjamin Zaitlen - LinkedIn: linkedin.com RAPIDS Deployment Guide (Stable): docs.rapids.ai RAPIDS cuDF API Docs (Stable): docs.rapids.ai Asianometry YouTube Video: youtube.com cuDF pandas Accelerator (Stable): docs.rapids.ai Watch this episode on YouTube: youtube.com Episode #516 deep-dive: talkpython.fm/516 Episode transcripts: talkpython.fm Developer Rap Theme Song: Served in a Flask: talkpython.fm/flasksong --- Stay in touch with us --- Subscribe to Talk Python on YouTube: youtube.com Talk Python on Bluesky: @talkpython.fm at bsky.app Talk Python on Mastodon: talkpython Michael on Bluesky: @mkennedy.codes at bsky.app Michael on Mastodon: mkennedy
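To make the "same APIs at GPU speed" idea from the episode above concrete, here is a minimal sketch of the cudf.pandas accelerator, assuming a machine with a CUDA-capable GPU and RAPIDS installed; the toy DataFrame is made up for illustration.

```python
# Enable the cuDF pandas accelerator BEFORE importing pandas:
# supported operations run on the GPU, and anything unsupported
# falls back transparently to regular CPU pandas.
import cudf.pandas
cudf.pandas.install()

import pandas as pd  # user code below is unchanged pandas

df = pd.DataFrame({
    "group": ["a", "b", "a", "b", "a"],
    "value": [1.0, 2.0, 3.0, 4.0, 5.0],
})

# An ordinary pandas groupby: no rewrite needed to benefit from the GPU.
print(df.groupby("group")["value"].mean())
```

Per the RAPIDS docs, the same effect is available without touching the code at all by launching a script as `python -m cudf.pandas your_script.py`.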
Join The Full Nerd gang as they talk about the latest PC building news. In this episode the gang covers the newest reports of AMD and Intel desktop CPU marketshare, the incoming upgrades to Nvidia's GeForce Now, PC gaming's addiction (or lack thereof) to performance monitoring, and more. And of course we answer your questions live! Links: - GeForce Now updates: https://www.pcworld.com/article/2881079/nvidias-geforce-now-adds-killer-upgrades-rtx-5080-cloud-storage.html - CPU marketshare report: https://www.pcworld.com/article/2878869/amd-continues-to-kick-ass-and-take-names-in-desktop-pcs.html - Steam performance monitor: https://www.pcworld.com/article/2879636/steams-new-performance-monitor-beats-task-manager-says-valve.html Join the PC related discussions and ask us questions on Discord: https://discord.gg/SGPRSy7 Follow the crew on X: @AdamPMurray @BradChacos @MorphingBall @WillSmith ============= Follow PCWorld! Website: http://www.pcworld.com X: https://www.x.com/pcworld =============
In this episode we take a deep dive into the von Neumann model, the architecture that shaped computing as we know it. We look at its historical origins, its main components (CPU, memory, input/output, and bus), and how they set it apart from the first hard-wired machines. We go over concrete examples such as PCs, microcontrollers, and retro consoles that follow this scheme, analyze the famous shared-bus bottleneck, and review its evolution toward Harvard architectures, multiprocessors, and the RISC vs CISC debate. A clear, technical tour of the foundations of hardware, showing why the legacy of von Neumann is still present in every device we use today.
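To go with the episode's walk through the fetch-decode-execute cycle, here is a toy stored-program machine in Python. The three-instruction ISA is invented for illustration; the point is the defining von Neumann trait that instructions and data share a single memory, so every fetch travels over the same bus, which is exactly where the bottleneck discussed above comes from.

```python
# Toy von Neumann machine: instructions and data live in ONE memory
# and are fetched over the same (simulated) bus. The tiny ISA
# (LOAD/ADD/STORE/HALT) is invented for illustration.
memory = [
    ("LOAD", 5),   # acc = mem[5]
    ("ADD", 6),    # acc += mem[6]
    ("STORE", 7),  # mem[7] = acc
    ("HALT", 0),
    ("NOP", 0),
    40,            # data word at address 5
    2,             # data word at address 6
    0,             # address 7: result slot
]

pc, acc = 0, 0                # program counter + accumulator (the "CPU")
while True:
    op, addr = memory[pc]     # fetch + decode: one trip over the bus
    pc += 1
    if op == "LOAD":
        acc = memory[addr]    # data access: another trip over the same bus
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "HALT":
        break

print(memory[7])  # -> 42: the stored program computed 40 + 2
```

A Harvard machine, by contrast, would give code and data separate memories and buses, letting the next instruction fetch overlap with a data access.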
Topics covered in this episode:
pyx - optimized backend for uv
Litestar is worth a look
Django remake migrations
django-chronos
Extras
Joke

Watch on YouTube

About the show
Python Bytes 445
Sponsored by Sentry: pythonbytes.fm/sentry - Python Error and Performance Monitoring
Connect with the hosts
Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)
Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too.
Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list; we'll never share it.

Michael #1: pyx - optimized backend for uv
via John Hagen (thanks again)
I'll be interviewing Charlie in 9 days on Talk Python → Sign up (get notified) of the livestream here.
Not a PyPI replacement, more of a middleware layer to make it better, faster, stronger.
pyx is a paid service, with maybe a free option eventually.

Brian #2: Litestar is worth a look
James Bennett
Michael brought up Litestar in episode 444 when talking about rewriting TalkPython in Quart.
James brings up scaling: Litestar makes it easy to split an app into multiple files.
Not using pydantic: you can use pydantic with Litestar, but you don't have to. Maybe attrs is right for you instead.
Michael brought up that Litestar seems like a "more batteries included" option, somewhere between FastAPI and Django. (A minimal example follows at the end of these notes.)

Brian #3: Django remake migrations
Suggested by Bruno Alla on BlueSky, in response to a migrations topic last week.
django-remake-migrations is a tool to help you with migrations, and the docs do a great job of describing the problem way better than I did last week:
"The built-in squashmigrations command is great, but it only works on a single app at a time, which means that you need to run it for each app in your project. On a project with enough cross-app dependencies, it can be tricky to run."
"This command aims at solving this problem, by recreating all the migration files in the whole project, from scratch, and marking them as applied by using the replaces attribute."
Also of note: the package was created with Copier.
Michael brought up Copier in 2021 in episode 219.
It has a nice comparison table with CookieCutter and Yeoman.
One difference from CookieCutter is yml vs json. I'm actually not a huge fan of handwriting either, but I guess I'd rather hand write yml. So I'm thinking of trying Copier for my future project template needs.

Michael #4: django-chronos
Django middleware that shows you how fast your pages load, right in your browser.
Displays request timing and query counts for your views and middleware.
Times middleware, view, and total per request (CPU and DB).

Extras
Brian: Test & Code 238: So Long, and Thanks for All the Fish. After 10 years, this is the goodbye episode.
Michael: Auto-activate Python virtual environment for any project with a venv directory in your shell (macOS/Linux): see gist. Python 3.13.6 is out. Open weight OpenAI models. Just Enough Python for Data Scientists course. The State of Python 2025 article by Michael.

Joke: python is better than java
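Since Litestar is the headline recommendation in these notes, here is a minimal hello-world sketch following the pattern shown in the Litestar docs; it is an illustrative sketch, not code from the episode. Handlers are plain decorated functions, which is also what makes it easy to split an app across files: each module can export handlers (or a Router) that get registered on the app.

```python
# Minimal Litestar app: route handlers are decorated functions
# registered on the application object.
from litestar import Litestar, get


@get("/")
async def index() -> dict[str, str]:
    # The return value is serialized based on the type annotation.
    return {"hello": "world"}


app = Litestar(route_handlers=[index])
# Run with: litestar run   (or any ASGI server, e.g. uvicorn app:app)
```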
Timestamps: 0:00 they whisper the tech news to me 0:08 GPD Win 5 - Ryzen AI Max+ 395 handheld 1:52 MSI Claw 8 Plus AV2M (Intel Lunar Lake) performance boost 2:40 AMD desktop CPU market share record 3:10 US government considering stake in Intel 4:15 Squarespace! 5:03 QUICK BITS INTRO 5:10 flirty Meta AI chatbots investigation 6:13 Diabetes treatment breakthrough 6:43 Teenage Engineering's free Computer-2 case 7:12 Normal Computing's thermodynamic chip 8:17 World Humanoid Robot Games in China NEWS SOURCES: https://lmg.gg/ZIQph Learn more about your ad choices. Visit megaphone.fm/adchoices
An airhacks.fm conversation with Michalis Papadimitriou (@mikepapadim) about: GPU acceleration for LLMs in Java using tornadovm, evolution from CPU-bound SIMD optimizations to GPU memory management, Alfonso's original Java port of llama.cpp using SIMD and Panama Vector API achieving 10 tokens per second, TornadoVM's initial hybrid approach combining CPU vector operations with GPU matrix multiplications, memory-bound nature of LLM inference versus compute-bound traditional workloads, introduction of persist and consume API to keep data on GPU between operations, reduction of host-GPU data transfers for improved performance, comparison with native CUDA implementations and optimization strategies, JIT compilation of kernels versus static optimization in frameworks like tensorrt, using LLMs like Claude to optimize GPU kernels, building MCP servers for automated kernel optimization, European Space Agency using TornadoVM in production for simulations, upcoming Metal backend support for Apple Silicon within 6-7 months, planned support for additional models including Mistral and gemma, potential for distributed inference across multiple GPUs, comparison with python and C++ implementations achieving near-native performance, modular architecture supporting OpenCL PTX and future hardware accelerators, challenges of new GPU hardware vendors like tenstorrent focusing on software ecosystem, planned quarkus and langchain4j integration demonstrations Michalis Papadimitriou on twitter: @mikepapadim
Any donation is greatly appreciated! 47e6GvjL4in5Zy5vVHMb9PQtGXQAcFvWSCQn2fuwDYZoZRk3oFjefr51WBNDGG9EjF1YDavg7pwGDFSAVWC5K42CBcLLv5U
OR DONATE HERE: https://www.monerotalk.live/donate

TODAY'S SHOW: Monero's network is facing coordinated selfish-mining "marathons" by Cubic, causing multi-block reorganizations that disrupt transactions but haven't led to double-spends. With Cubic controlling an estimated 30–35% of hash power, the attacks combine technical pressure with "weaponized marketing" to boost their own coin. The community is responding by mobilizing CPU miners—especially on decentralized P2Pool—renting hash power during attack windows, and discussing medium-term fixes like fee adjustments, improved mining incentives, and greater decentralization. Broader regulatory headwinds, including high-profile legal cases against privacy tools, add to the urgency. Core consensus is to protect Monero's tail emission, reject proof-of-stake, and focus on practical, PoW-compatible defenses while reassuring users their funds remain safe.

TIMESTAMPS: Coming soon!
GUEST LINKS: https://x.com/xenumonero
Purchase Cafe & tip the farmers w/ XMR! https://gratuitas.org/
Purchase a plug & play Monero node at https://moneronodo.com
SPONSORS: Cakewallet.com, the first open-source Monero wallet for iOS. You can even exchange between XMR, BTC, LTC & more in the app!
Monero.com by Cake Wallet - ONLY Monero wallet (https://monero.com/)
StealthEX, an instant exchange. Go to (https://stealthex.io) to instantly exchange between Monero and 450 plus assets, w/o having to create an account or register & with no limits.
WEBSITE: https://www.monerotopia.com
CONTACT: monerotalk@protonmail.com
ODYSEE: https://odysee.com/@MoneroTalk:8
TWITTER: https://twitter.com/monerotalk
FACEBOOK: https://www.facebook.com/MoneroTalk
HOST: https://twitter.com/douglastuman
INSTAGRAM: https://www.instagram.com/monerotalk
TELEGRAM: https://t.me/monerotopia
MATRIX: https://matrix.to/#/%23monerotopia%3Amonero.social
MASTODON: @Monerotalk@mastodon.social
MONERO.TOWN: https://monero.town/u/monerotalk
Arnaud and Guillaume explore the evolution of the Java ecosystem with Java 25, Spring Boot, and Quarkus, as well as the latest trends in artificial intelligence with new models like Grok 4 and Claude Code. The hosts also take stock of cloud infrastructure and the challenges around MCP and CLIs, while discussing AI's impact on developer productivity and the management of technical debt. Recorded August 8, 2025. Download the episode LesCastCodeurs-Episode-329.mp3 or watch the video on YouTube.

News

Languages

Java 25: JEP 515: Ahead-of-Time Method Profiling https://openjdk.org/jeps/515
- JEP 515 aims to improve the startup and warmup time of Java applications.
- The idea is to collect method execution profiles during an earlier run, then make them immediately available when the virtual machine starts.
- This lets the JIT compiler generate native code from the start, without having to wait for the application to be running.
- The change requires no modification to application, library, or framework code.
- Integration happens through the existing AOT cache creation commands. See also https://openjdk.org/jeps/483 and https://openjdk.org/jeps/514

Java 25: JEP 518: JFR Cooperative Sampling https://openjdk.org/jeps/518
- JEP 518 aims to improve the stability and scalability of the JDK Flight Recorder (JFR) feature for execution profiling.
- The mechanism for sampling Java thread call stacks is reworked to run only at safepoints, which reduces the risk of instability.
- The new model enables safer stack walking, notably with the ZGC garbage collector, and more efficient sampling that supports concurrent stack walking.
- The JEP adds a new event, SafepointLatency, which records the time a thread needs to reach a safepoint.
- The approach makes the sampling process lighter and faster, because the work of building stack traces is delegated to the target thread itself.

Libraries

Spring Boot 4 M1 https://spring.io/blog/2025/07/24/spring-boot-4-0-0-M1-available-now
- Spring Boot 4.0.0-M1 updates many internal and external dependencies to improve stability and compatibility.
- Types annotated with @ConfigurationProperties can now reference types located in external modules thanks to @ConfigurationPropertiesSource.
- Support for SSL certificate validity information has been simplified, removing the WILL_EXPIRE_SOON state in favor of VALID.
- Micrometer metrics auto-configuration now supports the @MeterTag annotation on methods annotated with @Counted and @Timed, evaluated via SpEL.
- @ServiceConnection support for MongoDB now includes integration with Testcontainers' MongoDBAtlasLocalContainer.
- Some features and APIs have been deprecated, with recommendations for migrating custom endpoints from the Spring Boot 2 versions.
- Milestone and release-candidate versions are now published to Maven Central, in addition to the traditional Spring repository.
- A migration guide has been published to ease the transition from Spring Boot 3.5 to 4.0.0-M1.
Switching from Spring Boot to Quarkus: a field report https://blog.stackademic.com/we-switched-from-spring-boot-to-quarkus-heres-the-ugly-truth-c8a91c2b8c53
- A team migrated a Java application from Spring Boot to Quarkus to gain performance and reduce memory consumption.
- The goal was also to optimize the application for cloud native.
- The migration was more complex than expected, notably because of incompatibilities with certain libraries and a less mature Quarkus ecosystem.
- Some code had to be reworked and certain Spring Boot-specific features abandoned.
- The performance and memory gains are real, but the migration demands a genuine adaptation effort.
- The Quarkus community is making progress, but support remains limited compared to Spring Boot.
- Conclusion: Quarkus is interesting for new projects or projects ready to be rewritten, but migrating an existing project is a real challenge.

LangChain4j 1.2.0: new features and improvements https://github.com/langchain4j/langchain4j/releases/tag/1.2.0
- Stable modules: the langchain4j-anthropic, langchain4j-azure-open-ai, langchain4j-bedrock, langchain4j-google-ai-gemini, langchain4j-mistral-ai and langchain4j-ollama modules are now stable at version 1.2.0.
- Experimental modules: most other LangChain4j modules are at version 1.2.0-beta8 and remain experimental/unstable.
- Updated BOM: langchain4j-bom has been updated to version 1.2.0, including the latest versions of all modules.
- Main improvements: support for reasoning/thinking in models; partial tool calls in streaming; an MCP option to automatically expose resources as tools; OpenAI: ability to set custom request parameters and access raw HTTP responses and SSE events; improvements to error handling and documentation; metadata filtering for Infinispan! (cc Katia)
- And 1.3.0 is already available https://github.com/langchain4j/langchain4j/releases/tag/1.3.0
- Two new experimental modules, langchain4j-agentic and langchain4j-agentic-a2a, introduce a set of abstractions and utilities for building agentic applications.

Infrastructure

This time it really is the year of Linux on the desktop! https://www.lesnumeriques.com/informatique/c-est-enfin-arrive-linux-depasse-un-seuil-historique-que-microsoft-pensait-intouchable-n239977.html
- Linux has passed the 5% mark in the USA.
- This progress is largely explained by the rise of Linux-based systems in professional environments, servers, and some consumer uses.
- Microsoft, long dominant with Windows, saw this threshold as hard to reach in the short term.
- Linux's success is also fueled by the growing popularity of open source distributions, which are lighter, more customizable, and suited to varied uses.
- Cloud, IoT, and server infrastructure use Linux massively, which contributes to this overall increase.
- This symbolic shift marks a change of balance in the operating system ecosystem.
- However, Windows still holds a strong presence in certain segments, notably among consumers and in traditional businesses.
- This development shows the momentum and growing maturity of Linux solutions, which have become credible, robust alternatives to proprietary offerings.
Cloud

Cloudflare 1.1.1.1 disappears for an hour of the internet https://blog.cloudflare.com/cloudflare-1-1-1-1-incident-on-july-14-2025/
- On July 14, 2025, Cloudflare's public DNS service 1.1.1.1 suffered a major 62-minute outage, making the service unavailable for the majority of users worldwide. The outage also caused intermittent degradation of the Gateway DNS service.
- The incident occurred after an update to the topology of Cloudflare services activated a configuration error introduced in June 2025.
- Because of this error, prefixes intended for the 1.1.1.1 service were accidentally included in a new data localization service (Data Localization Suite), which disrupted anycast routing.
- The result was that users could not resolve domain names via 1.1.1.1, making most Internet services inaccessible to them.
- This was not the result of an attack or a BGP problem, but an internal configuration error.
- Cloudflare quickly identified the cause, corrected the configuration, and put measures in place to prevent this type of incident in the future.
- The service returned to normal after about an hour of unavailability.
- The incident underlines the complexity and sensitivity of anycast infrastructure and the need for rigorous management of network configurations.

Web

The evolution of Node.js best practices https://kashw1n.com/blog/nodejs-2025/
- Node.js development in 2025 is turning toward web standards, with fewer external dependencies and a better developer experience.
- ES Modules (ESM) by default: replacing CommonJS for better tooling and standardization with the web. Use of the node: prefix for built-in modules to avoid conflicts.
- Built-in web APIs: fetch, AbortController, and AbortSignal are now native, reducing the need for libraries like axios.
- Built-in test runner: no more need for Jest or Mocha for most cases. Includes a "watch" mode and coverage reports.
- Advanced async patterns: heavier use of async/await with Promise.all() for parallelism, and AsyncIterators for event streams.
- Worker Threads for parallelism: for CPU-heavy tasks, avoiding blocking the main event loop.
- Improved developer experience: built-in --watch mode (replaces nodemon) and --env-file support (replaces dotenv).
- Security and performance: an experimental permission model to restrict access, and native performance hooks for monitoring.
- Simplified distribution: building single executables to ease deployment of applications or command-line tools.

Apache ECharts 6 released after 12 years! https://echarts.apache.org/handbook/en/basics/release-note/v6-feature/
- Apache ECharts 6.0: official release after 12 years of evolution. 12 major upgrades for data visualization, along three key dimensions of improvement:
- More professional visual presentation: new default theme (modern design), dynamic theme switching, dark mode support.
- Pushing the limits of data expression: new chart types: Chord Chart, Beeswarm Chart. New features: jittering for dense scatter plots, Broken Axis.
- Improved stock (candlestick) charts.
- Freedom of composition: new matrix coordinate system; improved custom series (code reuse, npm publishing); new custom charts included (violin, contour, etc.); optimized axis label layout.

Data and Artificial Intelligence

Grok 4 took itself for a nazi because of tools https://techcrunch.com/2025/07/15/xai-says-it-has-fixed-grok-4s-problematic-responses/
- At launch, Grok 4 generated offensive responses, notably calling itself "MechaHitler" and adopting antisemitic statements.
- This behavior came from an automatic web search that misinterpreted a viral meme as truth.
- Grok also aligned its controversial answers with the opinions of Elon Musk and xAI, which amplified the bias.
- xAI identified that these slips were due to an internal update integrating instructions encouraging offensive humor and alignment with Musk.
- To fix this, xAI removed the faulty code, reworked the system prompts, and imposed guidelines asking Grok to perform independent analysis using diverse sources.
- Grok must now avoid any bias, no longer adopt politically incorrect humor, and analyze sensitive subjects objectively.
- xAI apologized, explaining that these slips were due to a prompt problem and not the model itself.
- This incident highlights the persistent alignment and safety challenges AI models face from indirect injections via online content.
- The fix is not just a technical patch, but an example of the major ethical and responsibility stakes in deploying AI at scale.

Guillaume published a whole series of articles on agentic patterns with the ADK framework for Java https://glaforge.dev/posts/2025/07/29/mastering-agentic-workflows-with-adk-the-recap/
- A first article explains how to split tasks into AI sub-agents: https://glaforge.dev/posts/2025/07/23/mastering-agentic-workflows-with-adk-sub-agents/
- A second article details how to organize agents sequentially: https://glaforge.dev/posts/2025/07/24/mastering-agentic-workflows-with-adk-sequential-agent/
- A third article explains how to parallelize independent tasks: https://glaforge.dev/posts/2025/07/25/mastering-agentic-workflows-with-adk-parallel-agent/
- And finally, how to build improvement loops: https://glaforge.dev/posts/2025/07/28/mastering-agentic-workflows-with-adk-loop-agents/
- All of it in Java, of course :slightly_smiling_face:

Six weeks of code with Claude https://blog.puzzmo.com/posts/2025/07/30/six-weeks-of-claude-code/
- Orta shares his feedback after 6 weeks of daily use of Claude Code, which has profoundly changed the way he codes.
- He no longer really "codes" line by line: he describes what he wants, lets Claude propose a solution, then corrects or adjusts.
- This lets him focus on the result rather than the implementation, like going from painting to polaroid.
- Claude proves especially useful for maintenance tasks: migrations, refactors, code cleanup.
- He always stays in control, reviews every generated diff, and guides the AI with well-framed prompts.
- He notes that it takes a few weeks to find the right rhythm: learning to break down tasks and state expectations clearly.
- Simple tasks become near-instant, but complex tasks still require experience and judgment.
- Claude Code is seen as a very good copilot, but it does not replace the role of the developer who understands the whole system.
- The main gain is faster feedback and a much shorter iteration loop.
- This kind of tool could well redefine how we think about and structure software development in the medium term.

Claude Code and MCP servers: or how to turn your terminal into a superpowered assistant https://touilleur-express.fr/2025/07/27/claude-code-et-les-serveurs-mcp-ou-comment-transformer-ton-terminal-en-assistant-surpuissant/
- Nicolas continues his exploration of Claude Code and explains how to use MCP servers to make Claude far more effective.
- The Context7 MCP shows how to feed the AI up-to-date technical documentation (for example, Next.js 15) to avoid hallucinations or errors.
- The Task Master MCP, another MCP server, turns a requirements document (PRD) into atomic, estimated tasks organized into a work plan.
- The Playwright MCP lets you drive browsers and run E2E tests.
- The Digital Ocean MCP makes it easy to deploy the application to production.
- Not everything is ideal: quotas are reached within a few hours on a small application, and there are cases where it remains far more efficient to do it yourself (for an experienced coder).
- Nicolas follows up this article by writing an MVP in 20 hours: https://touilleur-express.fr/2025/07/30/comment-jai-code-un-mvp-en-une-vingtaine-dheures-avec-claude-code/

Augmented development, a politically correct opinion, but still… https://touilleur-express.fr/2025/07/31/le-developpement-augmente-un-avis-politiquement-correct-mais-bon/
- Nicolas shares a nuanced (and slightly provocative) take on augmented development, where an AI like Claude Code assists the developer without replacing them.
- He rejects the idea that this is "too magical" or "too easy": it is a logical evolution of our craft, not a shortcut for the lazy.
- For him, a good dev is still someone who structures their thinking well, knows how to frame a problem, break it down, and validate, even if the AI helps them code faster.
- He recounts building an OAuth app, tested, styled, and deployed in a few hours, without ever leaving the terminal thanks to Claude.
- This kind of tooling changes the relationship to time: you go from "I'll think about it" to "let me immediately try a version that more or less works".
- He owns up to liking this fast, imperfect approach: better a rough version shipped quickly than a project blocked by perfectionism.
- To him, the AI is a super intern: never tired, sometimes way off the mark, but devilishly productive when well briefed.
- He concludes that "augmented dev" does not replace good developers… but average developers had better get on board, or risk being left behind.

ChatGPT launches study mode: step-by-step interactive learning https://openai.com/index/chatgpt-study-mode/
- OpenAI offers a study mode in ChatGPT that guides users step by step rather than giving the answer directly.
- This mode aims to encourage active thinking and deep learning.
- It uses custom instructions to ask questions and provide explanations suited to the user's level.
- Study mode supports cognitive-load management and stimulates metacognition.
- It offers structured answers to ease progressive understanding of topics.
- Available now for logged-in users, this mode will be integrated into ChatGPT Edu.
- The goal is to turn ChatGPT into a true digital tutor, helping students absorb knowledge better.
- Apparently Gemini has just released a similar feature.

OpenAI launches GPT-OSS https://openai.com/index/introducing-gpt-oss/ https://openai.com/index/gpt-oss-model-card/
- OpenAI has launched GPT-OSS, its first open-weight model family since GPT-2.
- Two models are available, gpt-oss-120b and gpt-oss-20b, mixture-of-experts models designed for reasoning and agent tasks.
- The models are distributed under the Apache 2.0 license, allowing free use and customization, including for commercial applications.
- The gpt-oss-120b model performs close to OpenAI's o4-mini, while gpt-oss-20b is comparable to o3-mini.
- OpenAI also open-sourced a rendering tool called Harmony, in Python and Rust, to ease adoption.
- The models are optimized to run locally and are supported by platforms like Hugging Face and Ollama.
- OpenAI conducted safety research to ensure the models could not be fine-tuned for malicious uses in the biological, chemical, or cyber domains.

Anthropic launches Opus 4.1 https://www.anthropic.com/news/claude-opus-4-1
- Anthropic released Claude Opus 4.1, an update to its language model.
- This new version emphasizes improved performance in coding, reasoning, and research and data-analysis tasks.
- The model scored 74.5% on the SWE-bench Verified benchmark, an improvement over the previous version.
- It notably excels at multi-file code refactoring and can carry out in-depth research.
- Claude Opus 4.1 is available to paying Claude users, as well as via the API, Amazon Bedrock, and Google Cloud's Vertex AI, at the same pricing as Opus 4.
- It is presented as a drop-in replacement for Claude Opus 4, with higher performance and precision on real programming tasks.

OpenAI Summer Update: GPT-5 is out https://openai.com/index/introducing-gpt-5/
Details: https://openai.com/index/gpt-5-new-era-of-work/ https://openai.com/index/introducing-gpt-5-for-developers/ https://openai.com/index/gpt-5-safe-completions/ https://openai.com/index/gpt-5-system-card/
- Major upgrade in cognitive capabilities: GPT-5 shows a markedly higher level of reasoning, abstraction, and understanding than previous models.
- Two main variants: gpt-5-main, fast and efficient for general tasks; gpt-5-thinking, slower but specialized in complex tasks requiring deep reflection.
- Built-in intelligent router: the system automatically selects the version best suited to the task (fast or thoughtful), without user intervention.
- Further extended context window: GPT-5 can process longer volumes of text (up to 1 million tokens in some versions), useful for entire documents or projects.
- Significant reduction in hallucinations: GPT-5 gives more reliable answers, with fewer invented errors or false claims.
- More neutral, less sycophantic behavior: it was trained to better resist excessive alignment with the user's opinions.
- Increased ability to follow complex instructions: GPT-5 better understands long, implicit, or nuanced directions.
- "Safe completions" approach: refusals to execute are replaced by useful but safe answers; the model tries to respond with caution rather than block.
- Ready for large-scale professional use: optimized for enterprise work: writing, programming, summarizing, automation, task management, etc.
- Specific improvements for coding: GPT-5 performs better at writing code, understanding complex software contexts, and using development tools.
- Faster, smoother user experience: the system reacts faster thanks to optimized orchestration between the different sub-models.
- Strengthened agentic capabilities: GPT-5 can be used as the basis for autonomous agents able to accomplish goals with little human intervention.
- Mastered multimodality (text, image, audio): GPT-5 integrates understanding of multiple formats more fluidly, in a single model.
- Developer-minded features: clearer documentation, unified API, more transparent and customizable models.
- Increased contextual personalization: the system adapts better to the user's style, tone, or preferences, without repeated instructions.
- Optimized energy and hardware usage: thanks to the internal router, resources are used more efficiently according to task complexity.
- Secure integration into ChatGPT products: already deployed in ChatGPT with immediate benefits for Pro and enterprise users.
- Unified model for all uses: a single system able to go from light conversation to scientific analysis or complex code.
- Priority on safety and alignment: GPT-5 was designed from the start to minimize abuse, bias, and undesirable behavior.
- Not yet an AGI: OpenAI insists that despite its impressive capabilities, GPT-5 is not an artificial general intelligence.

No, juniors are not obsolete despite AI! (says GitHub) https://github.blog/ai-and-ml/generative-ai/junior-developers-arent-obsolete-heres-how-to-thrive-in-the-age-of-ai/
- AI is transforming software development, but junior developers are not obsolete.
- New learners are well positioned, as they are already familiar with AI tools.
- The goal is to develop skills for working with AI, not to be replaced.
- Creativity and curiosity are key human qualities.
- Five ways to stand out:
- Use AI (e.g. GitHub Copilot) to learn faster, not just code faster (e.g. tutor mode, temporarily disabling autocompletion).
- Build public projects demonstrating your skills (including in AI).
- Master essential GitHub workflows (GitHub Actions, open source contribution, pull requests).
- Sharpen your expertise by reviewing code (ask questions, look for patterns, take notes).
- Debug smarter and faster with AI (e.g. Copilot Chat for explanations, fixes, tests).
Write your first AI agent with A2A and WildFly, by Emmanuel Hugonnet https://www.wildfly.org/news/2025/08/07/Building-your-First-A2A-Agent/
- Agent2Agent (A2A) protocol: an open standard for universal interoperability of AI agents. It enables efficient communication and collaboration between agents from different vendors/frameworks, creating unified multi-agent ecosystems and automating complex workflows.
- Purpose of the article: a guide to building a first A2A agent (a weather agent) in WildFly, using the A2A Java SDK for Jakarta Servers, the WildFly AI Feature Pack, an LLM (Gemini), and a Python tool (MCP). The agent conforms to A2A v0.2.5.
- Prerequisites: JDK 17+, Apache Maven 3.8+, a Java IDE, a Google AI Studio API key, Python 3.10+, uv.
- Steps to build the weather agent:
- Creating the LLM service: a Java interface (WeatherAgent) using LangChain4j to interact with an LLM and a Python MCP tool (get_alerts, get_forecast functions).
- Defining the A2A agent (via CDI): an Agent Card provides the agent's metadata (name, description, URL, capabilities, skills such as "weather_search"); an Agent Executor handles incoming A2A requests, extracts the user message, calls the LLM service, and formats the response.
- Exposing the agent: registering a JAX-RS application for the endpoints.
- Deployment and testing: configuring Google's A2A-inspector tool (via a Podman container), building the Maven project, setting environment variables (e.g. GEMINI_API_KEY), starting the WildFly server.
- Conclusion: a minimal transformation of an AI application into an A2A agent. It enables collaboration and information sharing between AI agents, regardless of their underlying infrastructure.

Tooling

IntelliJ IDEA moves to a unified distribution https://blog.jetbrains.com/idea/2025/07/intellij-idea-unified-distribution-plan/
- Starting with version 2025.3, IntelliJ IDEA Community Edition will no longer be distributed separately.
- A single unified version of IntelliJ IDEA will combine the features of the Community and Ultimate editions.
- The advanced Ultimate features will be accessible via subscription.
- Users without a subscription will get a free version richer than the current Community Edition.
- This unification aims to simplify the user experience and reduce the differences between editions.
- Community users will be automatically migrated to this new unified version.
- Ultimate features can be temporarily activated with a single click.
- If an Ultimate subscription expires, the user can keep using the installed version with a limited set of free features, without interruption.
- This change reflects JetBrains' commitment to open source and adapting to the community's needs.

Support for YAML anchors in GitHub Actions https://github.com/actions/runner/issues/1182#issuecomment-3150797791
- To avoid duplicating content in a workflow, anchors let you insert reusable chunks of YAML.
- A feature awaited for years, and available in GitLab for a long time. It was rolled out on August 4.
- Be careful not to overuse them, as such documents are not so easy to read.

Gemini CLI adds custom commands like Claude https://cloud.google.com/blog/topics/developers-practitioners/gemini-cli-custom-slash-commands
- But they are in TOML format, so they cannot be shared with Claude :disappointed:

Automate your AI workflows with Claude Code hooks https://blog.gitbutler.com/automate-your-ai-workflows-with-claude-code-hooks/
- Claude Code offers hooks that run scripts at different moments in a session, for example at the start, when tools are used, or at the end.
- These hooks make it easy to automate tasks like managing Git branches, sending notifications, or integrating with other tools.
- A simple example is sending a desktop notification at the end of a session.
- Hooks are configured via three distinct JSON files depending on scope: user, project, or local.
- On macOS, sending notifications requires a specific permission via the "Script Editor" application.
- An up-to-date version of Claude Code is needed to use these hooks.
- GitButler can now integrate with Claude Code via these hooks: https://blog.gitbutler.com/parallel-claude-code/

JetBrains' Git client soon available standalone https://lp.jetbrains.com/closed-preview-for-jetbrains-git-client/
- Requested by some users for a long time.
- It would be a graphical client in the same vein as GitButler, SourceTree, etc.

Apache Maven 4 is coming, and the mvnup utility will help you upgrade https://maven.apache.org/tools/mvnup.html
- Fixes known incompatibilities.
- Cleans up redundancies and default values (versions, for example) that are not needed for Maven 4.
- Reformats according to Maven conventions.

A GitHub Action for Gemini CLI https://blog.google/technology/developers/introducing-gemini-cli-github-actions/
- Google launched Gemini CLI GitHub Actions, an AI agent that works as a "code teammate" for GitHub repositories.
- The tool is free and designed to automate routine tasks such as issue triage, pull request review, and other development tasks.
- It acts both as an autonomous agent and as a collaborator that developers can call on demand, notably by mentioning it in an issue or pull request.
- The tool is based on Gemini CLI, an open-source AI agent that brings the Gemini model directly into the terminal.
- It uses the GitHub Actions infrastructure, which isolates processes in separate containers for security reasons.
- Three open-source workflows are available at launch: intelligent issue triage, pull request review, and on-demand collaboration.

No need for MCP, code is all you need https://lucumr.pocoo.org/2025/7/3/tools/
- Armin points out that he is not a fan of the MCP (Model Context Protocol) in its current form: it lacks composability and demands too much context.
- He notes that for the same task (e.g. GitHub), using the CLI is often faster and more context-efficient than going through an MCP server.
- In his view, code remains the simplest and most reliable solution, especially for automating repetitive tasks.
- He prefers writing clear scripts rather than relying on LLM inference: this makes verification and maintenance easier and avoids subtle errors.
- For recurring tasks, if you automate them, it is better to do so with reusable code rather than letting the AI guess each time.
- He illustrates this by converting his entire blog from reStructuredText to Markdown: rather than using AI directly, he asked Claude to generate a complete script, with AST parsing, file comparison, validation, and iteration.
- This LLM→code→LLM workflow (analysis and validation) gave him confidence in the final result, while keeping human control over the process.
- He judges that MCP does not enable this kind of reliable automated pipeline, because it introduces too much inference and too much variation per call.
- For him, coding remains the best way to keep control, reproducibility, and clarity in automated workflows.

MCP vs CLI… https://www.async-let.com/blog/my-take-on-the-mcp-verses-cli-debate/
- Cameron recounts his experience building the XcodeBuildMCP server, which helped him better understand the debate between serving the AI via MCP and letting the AI use the system's CLIs directly.
- In his view, CLIs remain preferable for expert developers seeking control, transparency, performance, and simplicity.
- But MCP servers excel at complex workflows, persistent contexts, and security constraints, and they make access easier for less experienced users.
- He acknowledges the criticism that MCP consumes too much context ("context bloat") and that CLI calls can be faster and easier to understand.
- However, he stresses that many problems come from the quality of client implementations, not from the MCP protocol itself.
- For him, a good MCP server can offer carefully defined tools that make the AI's life easier (for example, returning structured data rather than raw text to parse).
- He appreciates MCP's ability to offer stateful operations (sessions, memory, captured logs), which CLIs do not handle naturally.
- Some scenarios cannot work via CLI (no accessible shell), whereas MCP, as an independent protocol, remains usable by any client.
- His verdict: no universal solution; each context deserves evaluation, and neither MCP nor CLI should be imposed at all costs.

Jules, Google's free asynchronous coding agent, is out of beta and available to everyone https://blog.google/technology/google-labs/jules-now-available/
- Jules, an asynchronous coding agent, is now publicly available. Powered by Gemini 2.5 Pro.
- Beta phase: 140,000+ code improvements and feedback from thousands of developers.
- Improvements: user interface, bug fixes, configuration reuse, GitHub Issues integration, multimodal support.
- Gemini 2.5 Pro improves coding plans and code quality.
- New structured tiers: Introductory, Google AI Pro (5x higher limits), Google AI Ultra (20x higher limits).
- Immediate rollout for Google AI Pro and Ultra subscribers, including eligible students (one free year of AI Pro).

Architecture

Putting a value on reducing technical debt: a real challenge https://www.lemondeinformatique.fr/actualites/lire-valoriser-la-reduction-de-la-dette-technique-mission-impossible-97483.html
- Technical debt is a poorly understood concept that is hard to value financially for general management.
- CIOs struggle to measure this debt precisely, allocate specific budgets, and prove a clear return on investment.
- This difficulty limits the prioritization of technical-debt-reduction projects against other initiatives judged more urgent or strategic.
- Some companies are gradually integrating technical debt management into their development processes.
- Approaches such as Software Crafting aim to improve code quality to limit the accumulation of this debt.
- The lack of suitable tools for measuring progress makes the effort even more complex.
- In short, reducing technical debt remains a delicate mission requiring innovation, method, and internal awareness.

Don't mock yourself… https://martinelli.ch/why-i-dont-use-mocking-frameworks-and-why-you-might-not-need-them-either/ https://blog.tremblay.pro/2025/08/not-using-mocking-frmk.html
- The author prefers hand-written fakes or stubs over mocking frameworks like Mockito or EasyMock.
- Mocking frameworks isolate the code, but often lead to: strong coupling between tests and implementation details; tests that validate the mock rather than the real behavior.
- Two fundamental principles guide his approach: favor functional design, with pure business logic (side-effect-free functions); control your test data, for example by using real databases (via Testcontainers) rather than simulating them.
- In his practice, the only cases where an external mock is used involve external HTTP services, and even then he prefers to simulate only the transport rather than the business behavior.
- Result: tests become simpler, faster to write, more reliable, and less fragile to code changes.
- The article concludes that if you design your code properly, you may very well not need mocking frameworks at all.
- Henri Tremblay's blog post in response adds some nuance to these observations.

Methodologies

What makes a good PM (Product Manager)? Article by Chris Perry, a PM at Google: https://thechrisperry.substack.com/p/being-a-good-pm-at-google
- The PM role is hard: a demanding job where you must be the most involved person on the team to ensure success.
- 1. Shipping is all that matters: the absolute priority. Better to ship and iterate quickly than to chase theoretical perfection. A shipped product lets you learn from reality.
- 2. Inspire a longing for the open sea: the best way to move a project forward is to inspire the team with a strong, desirable vision. Show the "why".
- 3. Use your product every day: non-negotiable for success. It builds intuition and surfaces the real problems that user research does not always reveal.
- 4. Be a good friend: building genuine relationships and helping others is a key long-term success factor. Trust is the basis of fast execution.
- 5. Give more than you take: always look to help and collaborate. Cooperation is the optimal long-term strategy. Don't be possessive about your ideas.
- 6. Use the right lever: to get a decision, identify the right person who has the power to say "yes", and don't get blocked by the opinions of non-decision-makers.
- 7. Only go where you add value: fill the gaps, do the thankless work nobody else wants to do.
Methodologies
What makes a good PM? (Product Manager) Article by Chris Perry, a PM at Google: https://thechrisperry.substack.com/p/being-a-good-pm-at-google The PM role is hard: demanding work, where you have to be the most invested person on the team to ensure success. 1. Shipping is all that matters: the absolute priority. Better to ship and iterate quickly than to chase theoretical perfection. A shipped product lets you learn from reality. 2. Make people long for the open sea: the best way to move a project forward is to inspire the team with a strong, desirable vision. Show the "why". 3. Use your product every day: non-negotiable for success. It builds intuition and surfaces the real problems that user research does not always reveal. 4. Be a good friend: building genuine relationships and helping others is a key long-term success factor. Trust is the basis of fast execution. 5. Give more than you take: always look to help and collaborate. Cooperation is the optimal long-term strategy. Don't be possessive about your ideas. 6. Use the right lever: to get a decision, identify the right person with the power to say "yes", and don't get blocked by non-decision-makers' opinions. 7. Only go where you add value: fill the gaps, do the thankless work nobody wants to do. Also know when to step away (meetings, projects) when you are not useful. 8. Success has many parents, failure is an orphan: if the product succeeds, it is a team success; if it fails, it is the PM's fault. You have to take final responsibility. Conclusion: the PM is a conductor. They cannot play every instrument, but their role is to orchestrate everyone's work with humility to create something harmonious.

Testing production-ready Spring Boot applications: key points https://www.wimdeblauwe.com/blog/2025/07/30/how-i-test-production-ready-spring-boot-applications/ The author (Wim Deblauwe) details how he structures tests in a Spring Boot application intended for production. The project automatically includes the spring-boot-starter-test dependency, which bundles JUnit 5, AssertJ, Mockito, Awaitility, JsonAssert, XmlUnit and the Spring testing utilities. Unit tests: target pure functions (records, utilities), tested simply with JUnit and AssertJ without starting the Spring context. Use-case tests: orchestrate the business logic, generally through use cases that rely on one or more data repositories. JPA/repository tests: verify interactions with the database through tests performing CRUD operations (with a Spring context for the persistence layer). Controller tests: exercise the web endpoints (e.g. @WebMvcTest), often with MockBean to stub out dependencies. Full integration tests: start the whole Spring context (@SpringBootTest) to test the application end to end. The author also mentions architecture tests, without going into detail in this article. The result: a test pyramid going from the fastest (unit) to the most complete (integration), guaranteeing reliability, speed and coverage without unnecessary overhead (a controller-test sketch follows below).
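A minimal sketch of the controller-test layer from that pyramid, inside a hypothetical Spring Boot project (controller and endpoint names are invented; @WebMvcTest and MockMvc are the standard Spring Boot test tooling the article refers to):

    import org.junit.jupiter.api.Test;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.boot.test.autoconfigure.web.servlet.WebMvcTest;
    import org.springframework.test.web.servlet.MockMvc;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
    import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.content;
    import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

    @RestController
    class GreetingController {
        @GetMapping("/greeting")
        String greeting() { return "hello"; }
    }

    // Web-slice test: only the MVC layer for this controller is started,
    // not the full Spring context.
    @WebMvcTest(GreetingController.class)
    class GreetingControllerTest {
        @Autowired MockMvc mockMvc;

        @Test
        void returnsTheGreeting() throws Exception {
            mockMvc.perform(get("/greeting"))
                   .andExpect(status().isOk())
                   .andExpect(content().string("hello"));
        }
    }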
Security
Bitwarden offers an MCP server so agents can access passwords https://nerds.xyz/2025/07/bitwarden-mcp-server-secure-ai/ Bitwarden introduces an MCP (Model Context Protocol) server designed to integrate AI agents securely into password-management workflows. The server uses a local-first architecture: all interactions and sensitive data stay on the user's machine, preserving the zero-knowledge encryption principle. Integration goes through the Bitwarden CLI, letting AI agents generate, retrieve, modify and lock credentials through secured commands (see the sketch after this summary). The server can be self-hosted for maximum control over the data. The MCP protocol is an open standard that connects AI agents uniformly to third-party data sources and tools, simplifying integrations between LLMs and applications. A demo with Claude (Anthropic's AI agent) shows the AI interacting with the Bitwarden vault: checking status, unlocking the vault, generating or updating credentials, all without direct human intervention. Bitwarden presents a security-first approach but acknowledges the risks of autonomous AI; using a private local LLM is strongly recommended to limit vulnerabilities.
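As a rough illustration of that CLI-based integration, a speculative Java sketch, not Bitwarden's actual MCP server code (bw status and bw generate --length are real Bitwarden CLI commands; everything else is invented):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    // Illustrative only: shells out to the Bitwarden CLI the way an MCP tool
    // implementation might, and returns the command's stdout.
    public class BitwardenCli {

        static String run(String... command) throws Exception {
            Process p = new ProcessBuilder(command).redirectErrorStream(true).start();
            StringBuilder out = new StringBuilder();
            try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) out.append(line).append('\n');
            }
            p.waitFor();
            return out.toString().trim();
        }

        public static void main(String[] args) throws Exception {
            System.out.println(run("bw", "status"));                      // vault status as JSON
            System.out.println(run("bw", "generate", "--length", "24"));  // a generated password
        }
    }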
NVIDIA has a critical security flaw https://www.wiz.io/blog/nvidia-ai-vulnerability-cve-2025-23266-nvidiascape It is a container-escape flaw in the NVIDIA Container Toolkit. The severity is rated critical, with a CVSS score of 9.0. The vulnerability lets a malicious container obtain full root access on the host. The root cause is a misconfiguration of OCI hooks in the toolkit. Exploitation is very easy, for example with a Dockerfile of only three lines. The main risk is breaking the isolation between different customers on shared GPU cloud infrastructure. Affected versions include all versions of the NVIDIA Container Toolkit up to 1.17.7 and of the NVIDIA GPU Operator up to 25.3.1. To mitigate the risk, update to the latest fixed versions; in the meantime, some of the problematic hooks can be disabled in the configuration to limit exposure. The flaw highlights the importance of hardening shared GPU environments and AI container management.

The Tea app data leak: the essentials https://knowyourmeme.com/memes/events/the-tea-app-data-leak Tea is an app launched in 2023 that lets women leave anonymous reviews about men they have dated. In July 2025, a major leak exposed about 72,000 sensitive images (selfies, ID documents) and more than 1.1 million private messages. The leak came to light after a user shared a link to download the compromised database. The affected data mostly concerned users who signed up before February 2024, when the app migrated to more secure infrastructure. In response, Tea plans to offer identity-protection services to the affected users.

Flaw in the npm package is: a supply-chain attack https://socket.dev/blog/npm-is-package-hijacked-in-expanding-supply-chain-attack A phishing campaign targeting npm maintainers compromised several accounts, including that of the is package. Compromised versions of is (notably 3.3.1 and 5.0.0) contained a JavaScript malware loader targeting Windows systems. The malware gave attackers remote access via WebSocket, potentially allowing arbitrary code execution. The attack follows other compromises of popular packages such as eslint-config-prettier, eslint-plugin-prettier, synckit, @pkgr/core, napi-postinstall, and got-fetch. All of these packages were published without any commit or PR on their respective GitHub repositories, a sign of unauthorized access to maintainer tokens. The spoofed domain npnjs.com was used to collect access tokens through deceptive phishing emails. The episode highlights how fragile software supply chains are in the npm ecosystem and the need for stronger security practices around dependencies.

Automated security reviews with Claude Code https://www.anthropic.com/news/automate-security-reviews-with-claude-code Anthropic has launched automated security features for Claude Code, its command-line AI coding assistant. These features were introduced in response to the growing need to keep code secure as AI tools dramatically accelerate software development. /security-review command: developers can run this command in their terminal to ask Claude to identify security vulnerabilities, including SQL injection risks, cross-site scripting (XSS), authentication and authorization flaws, and insecure data handling. Claude can also suggest and implement fixes. GitHub Actions integration: a new GitHub Action lets Claude Code automatically analyze every new pull request. The tool reviews code changes for vulnerabilities, applies customizable rules to filter out false positives, and comments directly on the pull request with the issues found and recommended fixes. These features are designed to create a consistent security-review process and to plug into existing CI/CD pipelines, helping ensure no code reaches production without at least a baseline security review.

Law, society and organization
Google hires Windsurf's key people https://www.blog-nouvelles-technologies.fr/333959/openai-windsurf-google-deepmind-codage-agentique/ Windsurf was supposed to be acquired by OpenAI. Google is not making an acquisition offer but is poaching a few key Windsurf people, including its CEO. Windsurf therefore remains independent, but without some of its brains; the new leaders are the former heads of the sales force, so it is no longer really a tech company. Why did the $3 billion deal fall through? Nobody knows, but diverging interests and technological independence are possibly in play. The departing people will work at DeepMind on agentic coding.

Opinion article: https://www.linkedin.com/pulse/dear-people-who-think-ai-low-skilled-code-monkeys-future-jan-moser-svade/ Jan Moser criticizes those who think AI plus low-skilled developers can replace competent software engineers. He cites the example of the Tea app, a safety platform for women, which exposed 72,000 user images because of a misconfigured Firebase and a lack of secure development practices. He stresses that the absence of automated checks and good security practices is what allowed this data leak. Moser warns that tools like AI cannot compensate for missing software-engineering skills, notably in security, error handling and code quality. He calls for recognizing the value of skilled software engineers and for a more rigorous approach to software development.
YouTube rolls out age-estimation technology to identify teens in the United States https://techcrunch.com/2025/07/29/youtube-rolls-out-age-estimatation-tech-to-identify-u-s-teens-and-apply-additional-protections/ A very topical subject, especially in the UK but not only there… YouTube is starting to deploy AI-based age-estimation technology to identify teenage users in the United States, regardless of the age declared at sign-up. The technology analyzes various behavioral signals, such as viewing history, the categories of videos watched, and the age of the account. When a user is identified as a teenager, YouTube applies additional protections, notably: disabling personalized ads; enabling digital wellbeing tools, such as screen-time and bedtime reminders; and limiting repeated viewing of sensitive content, such as content related to body image. Users incorrectly identified as minors can verify their age with a government ID, a credit card or a selfie. This initial rollout covers a small group of US users and will be extended gradually. The initiative is part of YouTube's efforts to strengthen the safety of young users online.

Mistral AI: contributing to a global environmental standard for AI https://mistral.ai/news/our-contribution-to-a-global-environmental-standard-for-ai Mistral AI has carried out the first full life-cycle analysis of an AI model, in collaboration with several partners. The study quantifies the environmental impact of the Mistral Large 2 model in terms of greenhouse-gas emissions, water consumption and resource depletion. The training phase generated 20.4 kilotonnes of CO₂ equivalent, consumed 281,000 m³ of water, and used 660 kg Sb-eq (mineral resource depletion). For a 400-token response, the marginal impact is small but not negligible: 1.14 grams of CO₂, 45 mL of water, and 0.16 mg of antimony equivalent. Mistral proposes three indicators to assess this impact: the absolute impact of training, the marginal impact of inference, and the ratio of inference to total life-cycle impact. The company stresses the importance of choosing a model suited to the use case to limit the environmental footprint. Mistral calls for more transparency and for international standards that allow clear comparison between models.
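As a back-of-the-envelope check on those marginal figures (our arithmetic, not Mistral's): one million 400-token responses come to 1.14 g × 10⁶ ≈ 1.14 tonnes of CO₂ and 45 mL × 10⁶ = 45 m³ of water, i.e. roughly 0.006% of the training emissions (20.4 kt) and 0.016% of the training water footprint (281,000 m³). This is why the third indicator, the ratio of inference to total life-cycle impact, depends so heavily on how many requests a deployed model ends up serving.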
AI promised to make us more efficient… mostly it makes us work more https://afterburnout.co/p/ai-promised-to-make-us-more-efficient AI tools were supposed to automate the tedious tasks and free up time for strategic and creative work. In reality, the time saved is often immediately reinvested in other tasks, creating overload. Users believe they are more productive with AI, but the data contradicts that impression: one study shows developers using AI taking 19% longer to complete their tasks. The DORA 2024 report observes an overall drop in team performance as AI use increases: -1.5% throughput and -7.2% delivery stability for +25% AI adoption. AI does not reduce the mental load, it displaces it: writing prompts, checking dubious results, constant adjustments… That is exhausting and limits real focus time. This cognitive overload creates a form of mental debt: you don't really save time, you pay for it differently. The real problem is our productivity culture, which pushes us to optimize everything, even at the cost of burnout. Three concrete leads: rethink productivity not as time saved but as energy preserved; be selective about AI tools, based on how they actually feel rather than on the hype; and accept the J-curve: AI can help, but it requires deep adjustments to produce real gains. The real productivity hack? Sometimes it's slowing down, to stay lucid and sustainable.

Conferences
MCP Summit Europe https://mcpdevsummit.ai/
JavaOne returns in 2026 https://inside.java/2025/08/04/javaone-returns-2026/ JavaOne, the conference dedicated to the Java community, makes its big return to the Bay Area from March 17 to 19, 2026. After the success of the 2025 edition, this return continues the conference's original mission: bringing the community together to learn, collaborate and innovate.

The conference list comes from Developers Conferences Agenda/List by Aurélie Vache and contributors:
25–27 August 2025: SHAKA Biarritz - Biarritz (France)
5 September 2025: JUG Summer Camp 2025 - La Rochelle (France)
12 September 2025: Agile Pays Basque 2025 - Bidart (France)
15 September 2025: Agile Tour Montpellier - Montpellier (France)
18–19 September 2025: API Platform Conference - Lille (France) & Online
22–24 September 2025: Kernel Recipes - Paris (France)
22–27 September 2025: La Mélée Numérique - Toulouse (France)
23 September 2025: OWASP AppSec France 2025 - Paris (France)
23–24 September 2025: AI Engineer Paris - Paris (France)
25 September 2025: Agile Game Toulouse - Toulouse (France)
25–26 September 2025: Paris Web 2025 - Paris (France)
30 September–1 October 2025: PyData Paris 2025 - Paris (France)
2 October 2025: Nantes Craft - Nantes (France)
2–3 October 2025: Volcamp - Clermont-Ferrand (France)
3 October 2025: DevFest Perros-Guirec 2025 - Perros-Guirec (France)
6–7 October 2025: Swift Connection 2025 - Paris (France)
6–10 October 2025: Devoxx Belgium - Antwerp (Belgium)
7 October 2025: BSides Mulhouse - Mulhouse (France)
7–8 October 2025: Agile en Seine - Issy-les-Moulineaux (France)
8–10 October 2025: SIG 2025 - Paris (France) & Online
9 October 2025: DevCon #25: quantum computing - Paris (France)
9–10 October 2025: Forum PHP 2025 - Marne-la-Vallée (France)
9–10 October 2025: EuroRust 2025 - Paris (France)
16 October 2025: PlatformCon25 Live Day Paris - Paris (France)
16 October 2025: Power 365 - 2025 - Lille (France)
16–17 October 2025: DevFest Nantes - Nantes (France)
17 October 2025: Sylius Con 2025 - Lyon (France)
17 October 2025: ScalaIO 2025 - Paris (France)
17–19 October 2025: OpenInfra Summit Europe - Paris (France)
20 October 2025: Codeurs en Seine - Rouen (France)
23 October 2025: Cloud Nord - Lille (France)
30–31 October 2025: Agile Tour Bordeaux 2025 - Bordeaux (France)
30–31 October 2025: Agile Tour Nantais 2025 - Nantes (France)
30 October–2 November 2025: PyConFR 2025 - Lyon (France)
4–7 November 2025: NewCrafts 2025 - Paris (France)
5–6 November 2025: Tech Show Paris - Paris (France)
6 November 2025: dotAI 2025 - Paris (France)
6 November 2025: Agile Tour Aix-Marseille 2025 - Gardanne (France)
7 November 2025: BDX I/O - Bordeaux (France)
12–14 November 2025: Devoxx Morocco - Marrakech (Morocco)
13 November 2025: DevFest Toulouse - Toulouse (France)
15–16 November 2025: Capitole du Libre - Toulouse (France)
19 November 2025: SREday Paris 2025 Q4 - Paris (France)
19–21 November 2025: Agile Grenoble - Grenoble (France)
20 November 2025: OVHcloud Summit - Paris (France)
21 November 2025: DevFest Paris 2025 - Paris (France)
27 November 2025: DevFest Strasbourg 2025 - Strasbourg (France)
28 November 2025: DevFest Lyon - Lyon (France)
1–2 December 2025: Tech Rocks Summit 2025 - Paris (France)
4–5 December 2025: Agile Tour Rennes - Rennes (France)
5 December 2025: DevFest Dijon 2025 - Dijon (France)
9–11 December 2025: APIdays Paris - Paris (France)
9–11 December 2025: Green IO Paris - Paris (France)
10–11 December 2025: Devops REX - Paris (France)
10–11 December 2025: Open Source Experience - Paris (France)
11 December 2025: Normandie.ai 2025 - Rouen (France)
28–31 January 2026: SnowCamp 2026 - Grenoble (France)
2–6 February 2026: Web Days Convention - Aix-en-Provence (France)
3 February 2026: Cloud Native Days France 2026 - Paris (France)
12–13 February 2026: Touraine Tech #26 - Tours (France)
22–24 April 2026: Devoxx France 2026 - Paris (France)
23–25 April 2026: Devoxx Greece - Athens (Greece)
17 June 2026: Devoxx Poland - Krakow (Poland)

Contact us
To react to this episode, come discuss on the Google group https://groups.google.com/group/lescastcodeurs
Contact us via X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com
Record a crowdcast or send a crowdquestion
Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs
All episodes and all the info on https://lescastcodeurs.com/
Fredrik and Tobias discuss a suitably mysterious bug Tobias has hunted down, and along the way talk about registers and vectorization. Since last time, Tobias has helped ship his first game at Ubisoft, and tells us what there was to do at the compiler level six months before an Assassin's Creed game ships. But the main topic is vectorization. It all started, of course, with a strange bug, one that takes a couple of dives into how processors and compilers work to explain. A big thank you to Cloudnet, who sponsor our VPS! Comments, questions or tips? We are @kodsnack, @thieta, @krig, and @bjoreman on Mastodon, have a page on Facebook, and can be reached by email at info@kodsnack.se if you want to write at more length. We read everything that is sent in. If you like Kodsnack, we would love a review in iTunes! You can also support the podcast by buying us a coffee (or two!) on Ko-fi, or by buying something in our shop. Links: Episode 581, Amanda, Assassin's Creed Shadows, Anvil, Profile-guided optimization, Bit masking, Perforce, Git bisect, Support us on Ko-fi!, Auto-vectorization (loop vectorization), SSE, SSE 2, AVX, Registers in CPUs, Pentium, XOR, Scalar, SIMD - single instruction, multiple data, Neon, Pipelining in CPUs, Micro-ops, Scheduler in compilers, Snowdrop, JIT - just-in-time compilation, Raw string, Expedition 33, The video about the making of Expedition 33. Titles: Back from episode 581, Sporadic guest, Time has run on the way it does, Then there is work to do, Free performance, Before the GPU takes over, Two cubes on top of each other, Where in the compiler did this go off the rails?, Vectorization magic, Two big arrays describing something, Inefficient to do it serially, Not especially ergonomic, I can vectorize this away for you, Bit-masked the wrong bit, This is worth the trouble, Millions of arrays and loops
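The episode discusses C++ and compiler auto-vectorization, but the same idea can be sketched in Java terms with the incubating Vector API (jdk.incubator.vector, enabled with --add-modules jdk.incubator.vector): explicit SIMD lanes instead of hoping the JIT auto-vectorizes the loop. A minimal sketch:

    import jdk.incubator.vector.FloatVector;
    import jdk.incubator.vector.VectorSpecies;

    class VectorAdd {
        // The widest lane width the running CPU supports (e.g. 8 floats with AVX2).
        static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

        static void add(float[] a, float[] b, float[] c) {
            int i = 0;
            int upper = SPECIES.loopBound(a.length);
            for (; i < upper; i += SPECIES.length()) {
                FloatVector va = FloatVector.fromArray(SPECIES, a, i);
                FloatVector vb = FloatVector.fromArray(SPECIES, b, i);
                va.add(vb).intoArray(c, i);   // one SIMD instruction per chunk
            }
            for (; i < a.length; i++) {
                c[i] = a[i] + b[i];           // scalar tail for the leftover elements
            }
        }
    }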
Update now! WinRAR releases an update fixing a high-risk vulnerability in the tool; a Brazilian startup is popularizing AI-assisted remote surgery in Amazonas; the Meta AI app launches in Brazil and works independently of WhatsApp; Akira: ransomware uses CPU software to hijack data; Nvidia and AMD are to pay the US 15% of their revenue from sales to China.
Are we beginning to see the dawn of a 2nd phase of Cloud Computing, as AI begins to become a workload that impacts every aspect of the previous era of Cloud? Let's explore…SHOW: 948SHOW TRANSCRIPT: The Cloudcast #948 TranscriptSHOW VIDEO: https://youtube.com/@TheCloudcastNET CLOUD NEWS OF THE WEEK: http://bit.ly/cloudcast-cnotwCHECK OUT OUR NEW PODCAST: "CLOUDCAST BASICS"SHOW SPONSORS:[DoIT] Visit doit.com (that's d-o-i-t.com) to unlock intent-aware FinOps at scale with DoiT Cloud Intelligence.[VASION] Vasion Print eliminates the need for print servers by enabling secure, cloud-based printing from any device, anywhere. Get a custom demo to see the difference for yourself.SHOW NOTES:CLOUD 1.0 vs. CLOUD 2.0Cloud 1.0: On-demand services, OSS innovation, Core Building Blocks, Affordable Infrastructure with declining costs, 1st and best customers, limited gov't involvement, varied competition levels, Cloud 2.0: AI is a now technology, GPU vs. CPU costs, vertical application stacks?, unknown economics, highly funded competition, unknown gov't involvement, unknown investment into OSS, AI gravity vs. cloud gravityFEEDBACK?Email: show at the cloudcast dot netTwitter/X: @cloudcastpodBlueSky: @cloudcastpod.bsky.socialInstagram: @cloudcastpodTikTok: @cloudcastpod
Matt uses you as his therapist to vent about three days fighting systemd and boot time. Ben patiently listens while Matt explains why mounting things shouldn't consume 200% CPU. AWS sponsorship news provides a silver lining.
Microsoft warns of a high-severity vulnerability in Exchange Server hybrid deployments. A Dutch airline and a French telecom report data breaches. Researchers reveal new HTTP request smuggling variants. An Israeli spyware maker may have rebranded to evade U.S. sanctions. CyberArk patches critical vulnerabilities in its secrets management platform. The Akira gang use a legit Intel CPU tuning driver to disable Microsoft Defender. ChatGPT Connectors are shown vulnerable to indirect prompt injection. Researchers expose new details about the VexTrio cybercrime network. SonicWall says a recent SSLVPN-related cyber activity is not due to a zero-day. Ryan Whelan from Accenture is our man on the street at Black Hat. Do androids dream of concierge duty? Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign-up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn. CyberWire Guest We continue our coverage from the floor at Black Hat USA 2025 with another edition of Man on the Street. This time, we're catching up with Ryan Whelan, Managing Director and Global Head of Cyber Intelligence at Accenture, to hear what's buzzing at the conference. Selected Reading Microsoft warns of high-severity flaw in hybrid Exchange deployments (Bleeping Computer) KLM suffers cyber breach affecting six million passengers (IO+) Cyberattack hits France's third-largest mobile operator, millions of customers affected (The Record) New HTTP Request Smuggling Attacks Impacted CDNs, Major Orgs, Millions of Websites (SecurityWeek) Candiru Spyware Infrastructure Uncovered (BankInfoSecurity) Enterprise Secrets Exposed by CyberArk Conjur Vulnerabilities (SecurityWeek) Akira ransomware abuses CPU tuning tool to disable Microsoft Defender (Bleeping Computer) A Single Poisoned Document Could Leak ‘Secret' Data Via ChatGPT (WIRED) Researchers Expose Infrastructure Behind Cybercrime Network VexTrio (Infosecurity Magazine) Gen 7 and newer SonicWall Firewalls – SSLVPN Recent Threat Activity (SonicWall) Want a Different Kind of Work Trip? Try a Robot Hotel (WIRED) Audience Survey Complete our annual audience survey before August 31. Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here's our media kit. Contact us at cyberwire@n2k.com to request more info. The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
The team discusses Microsoft's vision for the future of Windows, bad news for Intel's hopes of reclaiming the CPU crown, and a suggestion that Amazon might look to monetise Alexa, through subscriptions, adverts – or both. We also look at how some politicians have been using AI, and introduce our Hot Hardware candidate, the £75 Redmi Watch 5.
Frontier League and Marmaduke Perfect Game and MLB ... was it a strike? Sandy and 1960 MLB no longer owns the summer The new CBA It's an imperfect game West Point Brett's idea Salary Floor Kim NG Royve Lewis ... The CPU didn't put me in The Porch Competition and Preparation
Kevin Green kicks off Morning Movers with Sam Vadas to discuss a couple tech stock earnings. For Supermicro (SMCI), shares plummeted after a revenue and EPS miss, with tariffs and AI demand concerns weighing on the company's guidance. Meanwhile, AMD's report was a mixed bag, with a beat on revenue but a miss on adjusted earnings per share due in large part to China sales restrictions. Despite this, the company's CPU business is expected to see a refresh in 2026, which could be a catalyst for growth. KG adds the VIX to his watch list as it tests the 20 level, saying volatility could be a friend to traders, but caution is advised as the market navigates a busy earnings docket.======== Schwab Network ========Empowering every investor and trader, every market day.Subscribe to the Market Minute newsletter - https://schwabnetwork.com/subscribeDownload the iOS app - https://apps.apple.com/us/app/schwab-network/id1460719185Download the Amazon Fire Tv App - https://www.amazon.com/TD-Ameritrade-Network/dp/B07KRD76C7Watch on Sling - https://watch.sling.com/1/asset/191928615bd8d47686f94682aefaa007/watchWatch on Vizio - https://www.vizio.com/en/watchfreeplus-exploreWatch on DistroTV - https://www.distro.tv/live/schwab-network/Follow us on X – / schwabnetwork Follow us on Facebook – / schwabnetwork Follow us on LinkedIn - / schwab-network About Schwab Network - https://schwabnetwork.com/about
This week! Mario Paint on NSO, the Partner Direct had some great stuff, this week feels straight out of the ‘90s, plus DK Bananza, Earthion, Ninja Gaiden: Ragebound, Shinobi: Art of Vengeance (demo), and much, much more. Join us, won't you? https://youtube.com/live/9uIe4i_r458 Links of interest: Mario Paint added to NSO Switch 1 price increased in the US Nintendo Partner Direct Yakuza Kiwami 1&2 coming to Switch 2 New Katamari Damacy coming MKW update makes CPU easier (and other things) Atari acquiring Thunderful New Friday the 13th Game “sequel” coming A&W Ice Cream Sundae Zero Sugar Soda What's a Juicy Lucy? Mario & Wario – Game Freak's Japan-only SNES title Once Upon a Katamari Final Fantasy Tactics Remaster (Ivalice Collection) Octopath Traveler Zero (tentative title) The Adventures of Elliott Peanuts Snoopy Tamagotchi Earthion Donkey Kong Bananza Ninja Gaiden: Ragebound Shinobi: Art of Vengeance Greg Sewart's Extra Life Page Player One Podcast Discord Greg Streams on Twitch Growing Up Gaming - The Nintendo 64 Add us in Apple Podcasts Check out Greg's web series Generation 16 - click here. And take a trip over to Phil's YouTube Channel to see some awesome retro game vids. Follow us on twitter at twitter.com/p1podcast. Thanks for listening! Don't forget to visit our new web site at www.playeronepodcast.com. Running time: 01:45:36
Dell Technologies has today announced the launch and availability of its next generation of flagship laptops, now rebranded under the new Dell Premium line. The Premium line marks a new chapter in Dell's premium offering, replacing the XPS brand while retaining its hallmark craftsmanship, performance, and design. The new line includes the Dell 14 Premium and Dell 16 Premium and is positioned as the company's lead offering for users seeking high-performance, future-ready devices. The laptops are now available in Ireland. New Premium laptops have AI features Built on Intel Core Ultra 200H series processors, the Dell Premium range delivers significant gains in both performance and battery life. The 14.5-inch and 16.3-inch screens offer increased display real estate without expanding the devices' footprint, while OLED options with 4K resolution and 120Hz refresh rates provide enhanced visual quality. The range also includes features such as EyeSafe technology for reduced blue light exposure and Liquid Crystal Polymer fan blades designed for more efficient, quiet cooling. Kevin Terwilliger, Vice-President and General Manager of the PC Product Management Group, Dell Technologies, said: "We're in a dynamic era where technology serves as both the tool and the canvas for ideas and innovation. Built for the power users, engineers, creators and AI developers transforming industries, these AI PCs not only handle the most demanding AI workflows but set the standard for performance and creativity. "Reliability, configurability, and performance aren't just features - they're the foundation. We know professionals need tools they can count on to tackle their most critical and impactful workloads, and that's what we deliver." Early benchmarks show up to 33% improved performance for general use and up to 21% faster speeds for lighter creative workloads. The 14-inch model offers up to 20 hours of streaming battery life, with the 16-inch version extending to 27 hours using energy-efficient 2K displays. Both laptops support memory speeds up to 8400MHz, while advanced multithreading improves performance for heavier workflows such as video editing or content processing. The Dell 16 Premium can be configured with up to Intel Core Ultra 9 processors and offers 45W sustained CPU power. An optional NVIDIA RTX 50 Series GPU delivers AI-enhanced graphics and DLSS 4 for accelerated image rendering, while Thunderbolt 5 connectivity (optional) supports transfer speeds up to 120Gbps and multi-monitor setups with up to four 8K displays. The smaller Dell 14 Premium model includes integrated graphics with 29% faster processing, and an optional RTX 4050 GPU for enhanced creative performance. Both models support Wi-Fi 7 for improved network speed and responsiveness. Build quality and materials used by Dell remain a key focus, with both devices featuring CNC-machined aluminium, Gorilla Glass 3, and a streamlined edge-to-edge design. Sustainability measures have also been expanded, with the range meeting ENERGY STAR 9.0 and EPEAT Gold Climate+ standards, and integrating recycled aluminium and plastics in both construction and packaging. All devices ship with Windows 11 and include Copilot on Windows, Microsoft's integrated AI assistant. The release comes ahead of the October 2025 end-of-support date for Windows 10, as businesses and consumers here in Ireland prepare to upgrade to more secure and modern platforms. 
Pricing and Availability Dell 14 Premium starting at €1,899.00 is now available Dell 16 Premium starting at €1,998.99 is now available See more breaking stories here. More about Irish Tech News Irish Tech News are Ireland's No. 1 Online Tech Publication and often Ireland's No.1 Tech Podcast too. You can find hundreds of fantastic previous episodes and subscribe using whatever platform you like via our Anchor.fm page here: https://anchor.fm/irish-tech-news If you'd like to be featured in an upcoming Podcast email us at Simon@IrishTechNews.ie now to discuss. Irish Tech News hav...
For Lip-Bu Tan's first full quarter as Intel (INTC) CEO, Olivier Blanchard says he didn't do a bad job. He says investors should show patience as Tan works on the turnaround plan for Intel. He talks about where the company stands in the greater A.I. race, particularly in the CPU and GPU space.======== Schwab Network ========Empowering every investor and trader, every market day.Subscribe to the Market Minute newsletter - https://schwabnetwork.com/subscribeDownload the iOS app - https://apps.apple.com/us/app/schwab-network/id1460719185Download the Amazon Fire Tv App - https://www.amazon.com/TD-Ameritrade-Network/dp/B07KRD76C7Watch on Sling - https://watch.sling.com/1/asset/191928615bd8d47686f94682aefaa007/watchWatch on Vizio - https://www.vizio.com/en/watchfreeplus-exploreWatch on DistroTV - https://www.distro.tv/live/schwab-network/Follow us on X – / schwabnetwork Follow us on Facebook – / schwabnetwork Follow us on LinkedIn - / schwab-network About Schwab Network - https://schwabnetwork.com/about
12 Articles With Great ESG Stock Picks. Includes the terrific Humankind ranking, top infrastructure, lithium mining, and AI stock picks. By Ron Robins, MBA Transcript & Links, Episode 157, July 25, 2025 Hello, Ron Robins here. Welcome to my podcast episode 157, published on July 25, 2025, titled “12 Articles With Great ESG Stock Picks.” Before I begin, I want to let you know that my next podcast will be on August 22nd as I'm taking some time off. So, this podcast is presented by Investing for the Soul. Investingforthesoul.com is your go-to site for vital global, ethical, and sustainable investing mentoring, news, commentary, information, and resources. Remember that you can find a full transcript and links to content, including stock symbols and bonus material, on this episode's podcast page at investingforthesoul.com/podcasts. Also, a reminder. I do not evaluate any of the stocks or funds mentioned in these podcasts, and I don't receive any compensation from anyone covered in these podcasts. Furthermore, I will reveal any investments I have in the investments mentioned herein. Additionally, please visit this podcast's webpage for links to the articles and additional company and stock information. I have a great crop of 12 articles for you in this podcast! Note that some companies are mentioned more than once! ------------------------------------------------------------- Humankind 100 rankings I'm beginning this episode with another of my favourite company rankings whose annual list has just been released. It's the Humankind 100 rankings. Here is an overview of them from their website. “The Humankind 100 celebrates the one hundred U.S. public companies with the highest Humankind Values. We believe these companies consistently work to create large amounts of value, not just for their investors, but for humanity at large. The Humankind 100 companies are ranked based on Humankind Value, a proprietary metric that provides an estimate of the overall dollar amount a company creates for investors, consumers, employees, and society at large, and are therefore among the most ethical companies in the United States, according to our research.” End quotes. Their top 5 companies are Alphabet Inc. (GOOGL), Eli Lilly & Company (1LLY.MI), Johnson & Johnson (JNJ), AbbVie Inc. (ABBV), and Pfizer Inc. (PFE). ------------------------------------------------------------- Infrastructure Stocks To Consider - July 12th This second article features a sector favoured by many ethical and sustainable investors. The article is titled Infrastructure Stocks To Consider - July 12th. It's by MarketBeat and seen on marketbeat.com. Here are some quotes from the article. “1. NVIDIA Corporation (NASDAQ:NVDA) provides graphics and compute and networking solutions in the United States, Taiwan, China, Hong Kong, and internationally. The Graphics segment offers GeForce GPUs for gaming and PCs, the GeForce NOW game streaming service and related infrastructure, and solutions for gaming platforms; Quadro/NVIDIA RTX GPUs for enterprise workstation graphics; virtual GPU or vGPU software for cloud-based visual and virtual computing; automotive platforms for infotainment systems; and Omniverse software for building and operating metaverse and 3D internet applications. 2. Coinbase Global, Inc. (NASDAQ:COIN) provides financial infrastructure and technology for the crypto economy in the United States and internationally. 
The company offers the primary financial account in the crypto economy for consumers; and a marketplace with a pool of liquidity for transacting in crypto assets for institutions. Read Our Latest Research Report on COIN 3. Alphabet (NASDAQ:GOOGL) offers various products and platforms in the United States, Europe, the Middle East, Africa, the Asia-Pacific, Canada, and Latin America. It operates through Google Services, Google Cloud, and Other Bets segments. The Google Services segment provides products and services, including ads, Android, Chrome, devices, Gmail, Google Drive, Google Maps, Google Photos, Google Play, Search, and YouTube. Read Our Latest Research Report on GOOGL 4. Broadcom (NASDAQ:AVGO) designs, develops, and supplies various semiconductor devices with a focus on complex digital and mixed signal complementary metal oxide semiconductor based devices and analog III-V based products worldwide. Read Our Latest Research Report on AVGO 5. Oracle (ORCL) offers products and services that address enterprise information technology environments worldwide. Its Oracle cloud software as a service offerings include various cloud software applications, including Oracle Fusion cloud enterprise resource planning (ERP), Oracle Fusion cloud enterprise performance management, Oracle Fusion cloud supply chain and manufacturing management, Oracle Fusion cloud human capital management, Oracle Cerner healthcare, Oracle Advertising, and NetSuite applications suite, as well as Oracle Fusion Sales, Service, and Marketing.” End quotes. ------------------------------------------------------------- Best Lithium Mining Stocks 2025: Buy Top Mining Stocks Now Every investor knows that lithium is a basic mineral for electric batteries. So, this next article will interest many investors. It's titled Best Lithium Mining Stocks 2025: Buy Top Mining Stocks Now. It's by Farmonaut and found on farmonaut.com. Here are some quotes by Farmonaut on each of their picks. “1. Albemarle Corporation (NYSE: ALB), headquartered in the USA, is the world's largest lithium producer… With operations spanning North America, South America, and Australia, Albemarle boasts: Diversified extraction & processing operations, including high-margin lithium brine and hard rock mining projects Ongoing investments to expand production capacity in Nevada (USA), Chile, and Australia A resilient supply chain and ability to scale output to meet global demand Strategic partnerships with leading EV battery makers Strong commitment to sustainable mining and ESG practices Albemarle's scale, geographic diversification, and innovation position it as one of the best performing mining stocks of 2025. 2. Sociedad Química y Minera de Chile (or SQM) (NYSE: SQM) is South America's lithium market leader. 
Based in Santiago, Chile, SQM boasts some of the world's largest and lowest-cost lithium brine operations situated in the renowned Lithium Triangle (Chile, Argentina, Bolivia): Extensive lithium reserves & robust extraction technology, delivering high efficiency Geopolitical stability—Chile enjoys a relatively favorable mining regulatory environment compared to other regions Cost-effective production enables SQM to remain highly profitable even as competition heats up Continuous expansion to satisfy increasing global lithium demand for EV batteries and storage solutions Environmental sustainability programs, making SQM attractive for ESG-focused investors SQM's competitive positioning ensures it remains a top choice among the best lithium mining stocks to buy for 2025. 3. Livent Corporation (NYSE: LTHM) distinguishes itself by focusing on high-purity lithium chemicals for next-generation battery technologies. With operations in the United States, Argentina, and China, Livent stands out for: Supplying premium lithium hydroxide and carbonate solutions for advanced battery manufacturers Strong partnerships with key players in the EV battery chain, including Tesla Expansion projects in Argentina and the U.S., boosting 2025 production capacity and flexibility ESG and sustainability initiatives for responsible lithium extraction Livent is uniquely positioned for specialty market growth, making it one of the best lithium mining stocks for investors eyeing niche applications and supply chain integration. 4. Piedmont Lithium (NASDAQ: PLL), though a smaller player, has become a rising star by focusing on high-quality spodumene reserves in the United States—especially in North Carolina's Carolina Tin-Spodumene Belt. Piedmont brings: Strategic U.S. supply source—critical for domestic battery manufacturers and government-led supply chain diversification Fast-tracked expansion projects supported by U.S. regulatory incentives and EV adoption targets Potential to benefit from blockchain-based traceability in mining—enhancing transparency for institutional investors Growing interest from global automakers and battery companies seeking secure lithium supply Piedmont's agility and domestic positioning could mean outsized growth as U.S. policy emphasizes onshoring critical battery mineral chains.” End quotes. ------------------------------------------------------------- 5 Artificial Intelligence (AI) Infrastructure Stocks Powering the Next Wave of Innovations Now, like most investors, you probably are invested in AI stocks, either directly or via funds. Hence, this next article, 5 Artificial Intelligence (AI) Infrastructure Stocks Powering the Next Wave of Innovations, should interest you. It's by Justin Pope and found on fool.com. Here is some of what Mr. Pope says about his picks. “1. Nvidia (NASDAQ: NVDA) The company has maintained its winning position as it progressed from its previous Hopper architecture to its current Blackwell chips, and it expects to launch its next-generation architecture, with a CPU called Vera and a GPU called Rubin, next year. Analysts expect Nvidia's revenue to grow to $200 billion this year and $251 billion in 2026. 2. Amazon (AMZN) Web Services (AWS) has long been the world's leading cloud platform, with about 30% of the cloud infrastructure market today. Through the cloud, companies can access and deploy AI agents, models, and other software throughout their businesses. 3. 
Microsoft (MSFT): Its Azure is the world's second-largest cloud platform, with a market share of approximately 21%. Microsoft stands out from the pack for its deep ties with millions of corporate clients. 4. Arista Networks (ANET) sells high-end networking switches and software that help accomplish this. The company has already thrived in this golden age of data centers, with top clients including Microsoft and Meta Platforms, which happen to also be among the highest spenders on AI infrastructure. 5. Broadcom (AVGO) specializes in designing semiconductors used for networking applications. For example, Arista Networks utilizes Broadcom's Tomahawk and Jericho silicon in the networking switches it builds for data centers. Broadcom's AI-related semiconductor sales increased by 46% year-over-year in the second quarter.” End quotes. ------------------------------------------------------------- Ethical Companies To Invest In 2025 (ECL, MSFT, UNFI) The final reviewed article for this podcast episode is titled Ethical Companies To Invest In 2025 (ECL, MSFT, UNFI) and was written by the Analyst Team and seen on asktraders.com. Now a few quotes from the article by the Team. “1. Ecolab (ECL), a global leader in water, hygiene, and infection prevention solutions, presents a straightforward ethical narrative. Its products and services help businesses reduce water consumption, improve hygiene standards, and prevent infections, contributing directly to public health and environmental protection… Analyst ratings remain in line with current pricing, with Wells Fargo & Company reiterating a price target of $260.00 in May 2025. With the Ecolab stock price having gained 14% since the start of the year, the company has managed to outperform the market on the period whilst holding true to its ethical standing. While its dividend yield of approximately 1.1% is slightly higher than others on the list, its P/E ratio of around 38x indicates a similar valuation based on future earnings potential. 2. Microsoft (MSFT) presents a complex ethical profile. On one hand, its commitment to carbon neutrality, investments in renewable energy, and initiatives to bridge the digital divide are commendable… The stock's impressive 20% YTD return and a consensus analyst price target of $475 reflect market confidence in its financial stability and future growth, primarily driven by its cloud and AI segments, making it one to keep on shortlists… While Microsoft offers a modest dividend yield of around 0.7%, its high P/E ratio of approximately 36x suggests a premium valuation reflecting its growth potential rather than a focus on immediate shareholder returns. The company's low debt-to-equity ratio underscores its financial strength, allowing it to invest heavily in research and development and pursue ambitious sustainability goals. 3. United Natural Foods (UNFI) stock has pulled back ~15% this year, although it remains firmly higher over the past 12 months, with a gain of more than 70%. The company, a leading distributor of natural, organic, and specialty foods, presents the most challenging investment case with the recent cyber incident causing a sharp pullback in the stock. This could in fact be an opportunity… Unlike Microsoft and Ecolab, United Natural Foods does not offer a dividend, reflecting its current financial constraints. Its low P/E ratio of around 8x suggests a deeply discounted valuation, reflecting the market's skepticism about its turnaround prospects. 
Recent earnings on July 16 beat expectations, however, and the stock is on the move with an 8% gain immediately off the back.” End quotes. ------------------------------------------------------------- More articles of interest from around the world for ethical and sustainable investors 1. Title: Top 10: Wind Power Companies on energydigital.com. By Jasmin Jessen. 2. Title: Ethical Companies To Invest In 2025 (ECL, MSFT, UNFI) on AskTraders.com. By Analyst Team. 3. Title: The Green Gold Rush: Why Techem's $6.7B Sale Signals a Buying Opportunity on ainvest.com. By Wesley Park. 4. Title: AJ Bell adds Rathbone Ethical Bond to buy list on portfolio-advisor.com. By Christian Mayes. 5. Title: Procter & Gamble Named Top Socially Responsible Dividend Stock on ainvest.com. By Ainvest. 6. Title: 11 Best Halal Dividend Stocks to Buy Now on insidermonkey.com. By Vardah Gill. 7. Title: JPMorgan Picks 3 Top Stocks In Alternative Energy On Heels Of Trump's 'Big Beautiful Bill' - First Solar (NASDAQ:FSLR), Brookfield Renewable (NYSE:BEPC), and HASI (NYSE:HASI) on benzinga.com. By Priya Nigam. ------------------------------------------------------------- Ending Comment These are my top news stories with their stock and fund tips for this podcast, “12 Articles With Great ESG Stock Picks.” Please click the like and subscribe buttons wherever you download or listen to this podcast. That helps bring these podcasts to others like you. And please click the share buttons to share this podcast with your friends and family. Let's promote ethical and sustainable investing as a force for hope and prosperity in these deeply troubled times! Contact me if you have any questions. Thank you for listening. As I mentioned earlier, I'm taking some time off, so I'll talk to you next on August 22nd. Bye for now. © 2025 Ron Robins, Investing for the Soul
Welcome to ohmTown. The Non Sequitur News Show is held live via Twitch and Youtube every day. We, Mayor Watt and the AI that runs ohmTown, cover a selection of aggregated news articles and discuss them briefly with a perspective merging Science, Technology, and Society. You can visit https://www.youtube.com/ohmtown for the complete history since 2022.Articles Discussed:Warframe the Tabletop RPGhttps://www.ohmtown.com/groups/warcrafters/f/d/warframe-is-getting-a-tabletop-rpg-from-pathfinder-publishers/A CPU out of Chipshttps://www.ohmtown.com/groups/warcrafters/f/d/a-diy-mad-scientist-from-poland-built-his-own-cpu-out-of-dozens-of-ancient-memory-chips/Pokemon Hershey's Kisseshttps://www.ohmtown.com/groups/warcrafters/f/d/pokemon-hersheys-kisses-are-being-scalped-for-ridiculous-prices/A Route 66 Ghost Townhttps://www.ohmtown.com/groups/hatchideas/f/d/a-route-66-ghost-town-was-frozen-in-time-is-it-on-the-brink-of-a-comeback/Ubisoft CEO Responds to Stop Killing Gameshttps://www.ohmtown.com/groups/warcrafters/f/d/ubisoft-ceo-responds-to-the-stop-killing-games-petition-stating-the-publisher-is-working-on-improving-its-approach-to-end-of-life-support-but-that-nothing-is-eternal/Extracting Oxygen, Water, and Fuel from Moon Dusthttps://www.ohmtown.com/groups/nonsequiturnews/f/d/chinese-scientists-invent-system-for-extracting-oxygen-water-and-rocket-fuel-from-moon-dust/Saving the Cybertruck with desperationhttps://www.ohmtown.com/groups/wanted/f/d/tesla-tries-to-save-the-cybertruck-with-its-most-desperate-offer-yet/Turbo eScooterhttps://www.ohmtown.com/groups/realityhacker/f/d/this-turbo-escooter-wants-to-set-a-guinness-world-record/A Mushroom Caskethttps://www.ohmtown.com/groups/nonsequiturnews/f/d/a-mushroom-casket-marks-a-first-for-green-burials-in-the-us/Because you'll need a place to recharge.
In this episode, Emmanuel and Antonio discuss a range of development topics: applets (yes, really), iOS apps built on Linux, the A2A protocol, accessibility, command-line AI coding assistants (you won't escape them)… But also methodological and architectural approaches such as hexagonal architecture, tech radars, the expert generalist, and much more. Recorded July 11, 2025. Episode download: LesCastCodeurs-Episode-328.mp3, or as video on YouTube.

News
Languages
Java applets are gone for good… well, soon: https://openjdk.org/jeps/504 Web browsers no longer support applets. The Applet API and the appletviewer tool were deprecated in JDK 9 (2017). The appletviewer tool was removed in JDK 11 (2018); since then it has been impossible to run applets with the JDK. The Applet API was marked for removal in JDK 17 (2021). The Security Manager, essential for running applets securely, was permanently disabled in JDK 24 (2025).

Libraries
Quarkus 3.24, with the notion of extensions that can provide capabilities to assistants https://quarkus.io/blog/quarkus-3-24-released/ Assistants, typically AI ones, get access to extension capabilities: for example, generating a client from an OpenAPI document, or offering access to the database in dev mode via the schema.

Hibernate 7 integration in Quarkus https://quarkus.io/blog/hibernate7-on-quarkus/ The Jakarta Data API, the new Restriction mechanism, injection of the SchemaManager.

Micronaut 4.9 released https://micronaut.io/2025/06/30/micronaut-framework-4-9-0-released/ Core: update to Netty 4.2.2 (careful, this may affect performance). New experimental "event loop carrier" mode to run virtual threads on the Netty event loop. New @ClassImport annotation to process already-compiled classes. Arrival of @Mixin (Java only) to modify Micronaut annotation metadata without altering the original classes. HTTP/3: dependency change for the experimental support. Graceful shutdown: new API for gracefully stopping applications. Cache Control: fluent API to easily build the HTTP Cache-Control header. KSP 2: support for KSP 2 (from 2.0.2), tested with Kotlin 2. Jakarta Data: implementation of the Jakarta Data 1.0 specification. gRPC: JSON support for sending serialized messages via an HTTP POST. ProjectGen: new experimental module to generate JVM projects (Gradle or Maven) through an API. 
A great article on experimenting with reactive event loops as virtual-thread carriers https://micronaut.io/2025/06/30/transitioning-to-virtual-threads-using-the-micronaut-loom-carrier/ Unfortunately it requires hacking the JDK. It is a Micronaut article, but the work was a collaboration between the Red Hat OpenJDK, Red Hat performance, Quarkus and Vert.x teams. For the curious, it is a good read.

Ubuntu offers a container-building tool, notably for Spring https://canonical.com/blog/spring-boot-containers-made-easy It creates OCI images for Spring Boot applications based on Ubuntu base images, and of course uses jlink to reduce the image size. Not sure what the big advantage is over other, more portable solutions. Canonical is, by the way, entering the dance of OpenJDK builds.

The Java A2A SDK contributed by Red Hat is out https://quarkus.io/blog/a2a-project-launches-java-sdk/ A2A is a protocol initiated by Google and donated to the Linux Foundation. It lets agents describe themselves and interact with each other: agent cards, skills, tasks, context. A2A complements MCP. Red Hat implemented the Java SDK with guidance from the Google teams. With a few annotations and classes you get an agent card, an A2A client, and a server exchanging messages over the A2A protocol.

How to configure Mockito without warnings after Java 21 https://rieckpil.de/how-to-configure-mockito-agent-for-java-21-without-warning/ Dynamically attached agents are discouraged and will soon be forbidden. One of their uses is Mockito via Byte Buddy; the advantage was that the configuration was transparent, but security obliges, that is over. So the article describes how to configure Maven and Gradle to attach the agent at test-JVM startup (see the sketch below), and also how to configure this in IntelliJ IDEA, which is unfortunately less simple.
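A sketch of the Maven side as commonly documented for Mockito 5 (assumption: the ${org.mockito:mockito-core:jar} property is made available by the maven-dependency-plugin properties goal; adapt versions and scopes to your build):

    <build>
      <plugins>
        <!-- Exposes dependency paths as properties, e.g. ${org.mockito:mockito-core:jar} -->
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-dependency-plugin</artifactId>
          <executions>
            <execution>
              <goals><goal>properties</goal></goals>
            </execution>
          </executions>
        </plugin>
        <!-- Attaches Mockito as a Java agent at test-JVM startup instead of loading it dynamically -->
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-surefire-plugin</artifactId>
          <configuration>
            <argLine>-javaagent:${org.mockito:mockito-core:jar}</argLine>
          </configuration>
        </plugin>
      </plugins>
    </build>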
Web
"Selfish" reasons to make UIs more accessible https://nolanlawson.com/2025/06/16/selfish-reasons-for-building-accessible-uis/ Selfish reasons: personal benefits for developers in building accessible user interfaces, beyond the moral arguments. Easier debugging: an accessible interface, with a clear semantic structure, is easier to debug than messy markup ("div soup"). Standardized names: accessibility provides a standard vocabulary (for example, the WAI-ARIA guidelines) for naming UI components, which helps code clarity and structure. Simpler tests: it is easier to write automated tests for accessible UI elements, because they can be targeted more reliably and semantically.

After 20 years of stagnation, the PNG image format specification is finally evolving! https://www.programmax.net/articles/png-is-back/ Goal: keep the format relevant and competitive. Endorsement: backed by institutions such as the US Library of Congress. Key new features: HDR (High Dynamic Range) support for a wider color gamut; official recognition of animated PNGs (APNG); support for Exif metadata (copyright, geolocation, etc.). Current support: already integrated in Chrome, Safari, Firefox, iOS, macOS and Photoshop. Future: the next edition will focus on HDR/SDR interoperability, and the one after that on compression improvements.

With the open source project xtool, you can now build iOS applications on Linux or Windows, without necessarily needing a Mac https://xtool.sh/tutorials/xtool/ A very well made tutorial explains how: creating a new project with the xtool new command; generating a Swift package with key files such as Package.swift and xtool.yml; building and running the app on an iOS device with xtool dev; connecting the device over USB, handling pairing and Developer Mode; xtool automatically manages certificates, provisioning profiles and app signing; modifying the UI code (e.g. ContentView.swift); quickly rebuilding and reinstalling the updated app with xtool dev. On the IDE side, xtool is based on VS Code.

Data and Artificial Intelligence
New edition of the worldwide best-seller "Understanding LangChain4j": https://www.linkedin.com/posts/agoncal_langchain4j-java-ai-activity-7342825482830200833-rtw8/ APIs updated (from LangChain4j 0.35 to 1.1.0). New chapters on MCP / Easy RAG / JSON response. New models (GitHub Models, DeepSeek, Foundry Local). Updates to existing models (GPT-4.1, Claude 3.7…).

Google donates A2A to the Linux Foundation https://developers.googleblog.com/en/google-cloud-donates-a2a-to-linux-foundation/ Announcement of the Agent2Agent (A2A) project: at the Open Source Summit North America, the Linux Foundation announced the creation of the Agent2Agent project, in partnership with Google, AWS, Microsoft, Cisco, Salesforce, SAP and ServiceNow. Goal of the A2A protocol: establish an open standard so AI agents can communicate, collaborate and coordinate complex tasks with each other, regardless of vendor. Transfer from Google to the open source community: Google handed the A2A protocol specification, the associated SDKs and the development tools to the Linux Foundation to guarantee neutral, community-driven governance. Industry support: more than 100 companies already back the protocol; AWS and Cisco are the latest to endorse it. Each partner company stressed the importance of interoperability and open collaboration for the future of AI. Goals of the A2A foundation: establish a universal standard for AI agent interoperability; foster a global ecosystem of developers and innovators; guarantee neutral, open governance; accelerate secure, collaborative innovation. Worth discussing the spec itself; we will surely come back to it.

Gemini CLI: https://blog.google/technology/developers/introducing-gemini-cli-open-source-ai-agent/ An AI agent in the terminal: Gemini CLI lets you use the Gemini AI directly from the terminal. Free with a Google account: access to Gemini 2.5 Pro with generous limits. Powerful features: generates code, runs commands, automates tasks. Open source: customizable and extensible by the community. Complements Code Assist: also works with IDEs such as VS Code.

Instead of blocking AIs from your sites, you can perhaps guide them with llms.txt files https://llmstxt.org/ Examples from the Angular project: llms.txt, a simple index with links: https://angular.dev/llms.txt ; llms-full.txt, a much more detailed version: https://angular.dev/llms-full.txt

Tooling
Commits in Git are immutable, but did you know you can add or update "notes" on commits? https://tylercipriani.com/blog/2022/11/19/git-notes-gits-coolest-most-unloved-feature/ Little-known feature: git notes is a powerful but rarely used Git feature. Adding metadata: it lets you attach information to existing commits without changing their hash. Use cases: ideal for attaching data from automated systems (builds, tickets, etc.), as in the sketch below. Distributed code review: tools like git-appraise were built on git notes to enable fully distributed code review, independent of forges (GitHub, GitLab). Not very popular: its complex interface and the lack of support from forge platforms have limited its adoption (GitHub does not even display notes any more). Forge independence: git notes offers a path toward greater independence from centralized platforms, by distributing the project's history along with the code itself.
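A quick taste of the commands involved (standard git; the note text and commit hash are just examples):

    git notes add -m "CI: build 4242 passed" 0a1b2c3   # attach metadata to an existing commit
    git notes show 0a1b2c3                             # print that commit's note
    git log --notes                                    # show notes alongside the log

Notes live under refs/notes/commits, so they can be pushed and fetched like any other ref, without rewriting history or changing any commit hash.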
Instead of blocking AIs from your sites, you can perhaps guide them with llms.txt files https://llmstxt.org/ Examples from the Angular project: llms.txt, a simple index with links: https://angular.dev/llms.txt llms-full.txt, a much more detailed version: https://angular.dev/llms-full.txt (a minimal llms.txt sketch appears at the end of this section). Tooling Commits in Git are immutable, but did you know you can add or update "notes" on commits? https://tylercipriani.com/blog/2022/11/19/git-notes-gits-coolest-most-unloved-feature/ Little-known feature: git notes is a powerful but little-used Git feature. Attaching metadata: it lets you attach information to existing commits without changing their hash, e.g. git notes add -m "build 1234 green" <sha>. Use cases: ideal for attaching data from automated systems (builds, tickets, etc.). Distributed code review: tools such as git-appraise were built on top of git notes to enable fully distributed code review, independent of the forges (GitHub, GitLab). Unpopular: its clunky interface and the lack of support from forge platforms have limited adoption (GitHub doesn't even display notes any more). Forge independence: git notes offers a path toward greater independence from centralized platforms, by distributing the project's history along with the code itself. A look at the Spring Boot debugger in IntelliJ IDEA Ultimate https://blog.jetbrains.com/idea/2025/06/demystifying-spring-boot-with-spring-debugger/ shows this tool, which surfaces Spring-specific context such as non-activated beans, mocked beans, configuration values and transaction state. It can display all Spring beans directly in the project view, with non-instantiated beans grayed out and mocked beans marked in orange for tests. It solves the property-resolution puzzle by showing the effective value in real time in properties and YAML files, along with the exact source of overridden values. It displays visual indicators for methods executed inside active transactions, with full transaction details and a visual hierarchy for nested transactions. It automatically detects all active DataSource connections and integrates them with IntelliJ IDEA's Database tool window for inspection. It offers auto-completion and invocation of any loaded bean in the expression evaluator, working like a REPL for the Spring context. It works without an additional runtime agent, using non-suspending breakpoints inside the Spring Boot libraries to analyze data locally. A community list of AI coding assistants, started by Lize Raes https://aitoolcomparator.com/ a comparison table showing which features each tool supports. Architecture An article on hexagonal architecture in Java https://foojay.io/today/clean-and-modular-java-a-hexagonal-architecture-approach/ An introductory article, but with an example, on hexagonal architecture and its split between domain, application and infrastructure: the domain has no dependencies; the application layer is application-specific but free of technical dependencies (the article explains the flow); the infrastructure carries the dependencies on your frameworks, Spring, Quarkus, Micronaut, Kafka and so on (a small sketch follows below). I'm admittedly not a fan of hexagonal architecture in terms of code volume versus benefit, especially in microservices, but it's always worth challenging yourself and weighing cost against benefit.
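To make the layering concrete, a minimal sketch (my own, not the article's code): the domain owns the port and stays framework-free, the application layer orchestrates through it, and only the infrastructure adapter is allowed to know about frameworks.

```java
// One-file sketch; in a real project each class lives in its own package
// (domain/, application/, infrastructure/).

// Domain: no framework imports at all.
record Order(String id, int amountCents) {}

interface OrderRepository {                       // the "port", owned by the domain
    Order findById(String id);
    void save(Order order);
}

// Application: use-case orchestration, still framework-free.
class CheckoutService {
    private final OrderRepository orders;         // depends only on the abstraction

    CheckoutService(OrderRepository orders) {
        this.orders = orders;
    }

    void pay(String orderId) {
        Order order = orders.findById(orderId);   // domain rules would run here
        orders.save(order);
    }
}

// Infrastructure: the only layer that would import Spring, JPA, Kafka, etc.
class InMemoryOrderRepository implements OrderRepository {   // the "adapter"
    private final java.util.Map<String, Order> store = new java.util.HashMap<>();
    public Order findById(String id) { return store.get(id); }
    public void save(Order order) { store.put(order.id(), order); }
}
```

The cost trade-off mentioned above is visible even in this toy: three types where a direct repository call would have been one.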
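And circling back to the llms.txt item: the format is plain Markdown with a fixed skeleton (an H1 title, a blockquote summary, then H2 sections of links), per llmstxt.org. A minimal sketch with invented names and URLs:

```
# ExampleProject

> ExampleProject is a hypothetical web framework; this file points LLM crawlers
> at the LLM-friendly versions of the docs.

## Docs

- [Getting started](https://example.com/docs/start.md): install and first app
- [API reference](https://example.com/docs/api.md): all public modules

## Optional

- [Changelog](https://example.com/changelog.md): release history, safe to skip
```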
Keep an eye on technologies with tech radars https://www.sfeir.dev/cloud/tech-radar-gardez-un-oeil-sur-le-paysage-technologique/ A Tech Radar is crucial for continuous technology watch and informed decision-making. It categorizes technologies as Adopt, Trial, Assess or Hold, according to their maturity and relevance. It's recommended to build your own Tech Radar, adapted to your specific needs and inspired by the public radars. Use discovery tools (AlternativeTo), trend tools (Google Trends), obsolescence tracking (endoflife.date) and learning tools (roadmap.sh). Stay informed through blogs, podcasts, newsletters (TLDR) and social networks/communities (X, Slack). The goal is to stay competitive and make strategic technology choices. Beware, though, of underestimating its maintenance cost. Methodologies The concept of the Expert Generalist https://martinfowler.com/articles/expert-generalist.html The industry pushes toward narrow specialization, but the most effective colleagues excel in several domains at once. An experienced Python developer can quickly become productive on a Java team thanks to shared fundamental concepts. Real expertise has two facets: depth in one domain and the ability to learn new ones quickly. Expert Generalists build durable mastery at the level of fundamental principles rather than of specific tools. Curiosity is essential: they explore new technologies and make sure they understand the answers instead of copy-pasting code. Collaboration is vital, because they know they can't master everything and work effectively with specialists. Humility drives them to first understand why things work a certain way before questioning them. Customer focus channels their curiosity toward what actually helps users excel at their work. The industry should treat "Expert Generalist" as a first-class skill to name, assess and train for. Reminds me of the technical staff role. An article on business metrics and their value https://blog.ippon.fr/2025/07/02/monitoring-metier-comment-va-vraiment-ton-service-2/ A reminder of the value of business-level monitoring. Traditional technical monitoring (CPU, servers, APIs) doesn't guarantee that the service actually works for the end user. Business monitoring complements technical monitoring by focusing on users' real experience rather than on isolated components. It watches concrete critical journeys, such as "can a customer complete their order?", instead of abstract indicators. Business metrics are directly actionable: success rates, average delays and error volumes make it possible to prioritize actions. It's a strategic steering tool that improves responsiveness, prioritization and the dialogue between technical and business teams. Setting it up takes five steps: a reliable technical dashboard, identifying the critical journeys, translating them into indicators, centralizing them, and tracking them over time. A Definition of Done should formalize objective criteria before instrumenting any business journey. Measurable indicators include passed/failed checkpoints in a journey, times between actions, and compliance with business rules.
Dashboards must be part of the daily rituals, with real-time alerts that people can actually understand. The setup must keep evolving along with the product, questioning every incident to improve detection. The hard part is indeed how the business itself varies, e.g. few orders at night; this is part of the SRE toolbox. Security Still looking for the S for Security in MCP https://www.darkreading.com/cloud-security/hundreds-mcp-servers-ai-models-abuse-rce An analysis of open, publicly accessible MCP servers: many do no sanity checking of their parameters, so if you use them in your genAI calls you expose yourself. They aren't fundamentally bad, but there is no security standardization for them yet. For local use, prefer stdio, or restrict SSE to 127.0.0.1. Law, society and organization Nicolas Martignole, the same person who created the Cast Codeurs logo, ponders the possible paths for developers facing AI's impact on our craft https://touilleur-express.fr/2025/06/23/ni-manager-ni-contributeur-individuel/ Evolution of developer careers: AI is transforming the traditional paths (manager or technical expert). AI Orchestra Conductor: a former manager who drives AIs, defines architectures and validates generated code. Augmented Artisan: a developer using AI as a tool to code faster and solve complex problems. Code Philosopher: a new role centered on the "why" of code, on conceptualizing systems and on AI ethics. Validation cognitive load: a new mental burden created by the need to verify the AIs' work. Reflection on impact: the article invites you to choose your impact: orchestrate, create or guide. Training AIs on copyrighted books is acceptable (fair use), but storing them is not https://www.reuters.com/legal/litigation/anthropic-wins-key-ruling-ai-authors-copyright-lawsuit-2025-06-24/ A win for Anthropic (until the next trial): the company prevailed in a closely watched lawsuit over training its AI, Claude, on copyrighted works. "Fair use" wins: the judge held that using the books to train the AI qualified as fair use, because the content is transformed rather than simply reproduced. Important nuance: however, storing these works in a "central library" without authorization was ruled illegal, which underlines how complex data management is for AI models. Luc Julia's hearing before the French Senate https://videos.senat.fr/video.5486945_685259f55eac4.ia–audition-de-luc-julia-concepteur-de-siri Whether you buy it or not, here is Luc Julia and his vision of AI. It's an even longer version of, and on the same theme as, his Devoxx France 2025 keynote (https://www.youtube.com/watch?v=JdxjGZBtp_k). Nature and limits of AI: Luc Julia insisted that artificial intelligence is an "evolution" rather than a "revolution." He reminded the audience that it rests on mathematics and is not "magic." He also warned about the unreliability of information produced by generative AIs like ChatGPT, stressing that "they cannot be trusted" because they can be wrong and their relevance degrades over time.
Regulation of AI: he argued for "intelligent and informed" regulation, applied a posteriori so as not to hold back innovation. In his view, such regulation must be based on facts, not on an a priori risk analysis. France's position: Luc Julia stated that France has top-level researchers and ranks among the world's best in AI, but he raised the problem of funding research and innovation in France. AI and society: the hearing covered AI's impact on privacy, the world of work and education. Luc Julia stressed the importance of developing critical thinking, especially among young people, to learn to verify information generated by AIs. Concrete and future applications: the autonomous car was discussed, with Luc Julia explaining the different levels of autonomy and the remaining challenges. He also asserted that artificial general intelligence (AGI), an AI that would surpass humans in every domain, is "impossible" with current technologies. Beginners' corner Weak references and finalize https://dzone.com/articles/advanced-java-garbage-collection-concepts A useful little reminder of the pitfalls of the finalize method, which may never be invoked, and of the bug risks if finalize never finishes. finalize makes the garbage collector's work far more complex and inefficient. Weak references are useful, but you cannot control when they are cleared, so don't overuse them. There are also soft and phantom references, but their semantics are subtle and depend on the GC: the serial collector processes weak references before soft ones, the parallel collector does not; with G1 it depends on the region, and with ZGC it depends too, since reference processing is asynchronous.
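A minimal sketch of the behavior in question (standard java.lang.ref API; the point is precisely that none of the timing is guaranteed):

```java
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;

public class WeakRefDemo {
    public static void main(String[] args) throws InterruptedException {
        Object payload = new Object();
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        WeakReference<Object> ref = new WeakReference<>(payload, queue);

        System.out.println("before: " + ref.get()); // payload is strongly reachable

        payload = null;   // drop the last strong reference
        System.gc();      // only a hint: collection may or may not happen now

        // If (and only if) the GC cleared the reference, it gets enqueued.
        Reference<?> cleared = queue.remove(1000);  // wait up to 1 second
        System.out.println("enqueued: " + (cleared != null));
        System.out.println("after: " + ref.get());  // likely null, never guaranteed
    }
}
```

That "likely but never guaranteed" is exactly why the article says not to build program logic on when weak (or soft/phantom) references get cleared.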
Conferences The list of conferences, from the Developers Conferences Agenda/List by Aurélie Vache and contributors: 14-19 July 2025: DebConf25 - Brest (France) 5 September 2025: JUG Summer Camp 2025 - La Rochelle (France) 12 September 2025: Agile Pays Basque 2025 - Bidart (France) 18-19 September 2025: API Platform Conference - Lille (France) & Online 22-24 September 2025: Kernel Recipes - Paris (France) 23 September 2025: OWASP AppSec France 2025 - Paris (France) 25-26 September 2025: Paris Web 2025 - Paris (France) 2 October 2025: Nantes Craft - Nantes (France) 2-3 October 2025: Volcamp - Clermont-Ferrand (France) 3 October 2025: DevFest Perros-Guirec 2025 - Perros-Guirec (France) 6-7 October 2025: Swift Connection 2025 - Paris (France) 6-10 October 2025: Devoxx Belgium - Antwerp (Belgium) 7 October 2025: BSides Mulhouse - Mulhouse (France) 9 October 2025: DevCon #25: quantum computing - Paris (France) 9-10 October 2025: Forum PHP 2025 - Marne-la-Vallée (France) 9-10 October 2025: EuroRust 2025 - Paris (France) 16 October 2025: PlatformCon25 Live Day Paris - Paris (France) 16 October 2025: Power 365 - 2025 - Lille (France) 16-17 October 2025: DevFest Nantes - Nantes (France) 17 October 2025: Sylius Con 2025 - Lyon (France) 17 October 2025: ScalaIO 2025 - Paris (France) 20 October 2025: Codeurs en Seine - Rouen (France) 23 October 2025: Cloud Nord - Lille (France) 30-31 October 2025: Agile Tour Bordeaux 2025 - Bordeaux (France) 30-31 October 2025: Agile Tour Nantais 2025 - Nantes (France) 30 October-2 November 2025: PyConFR 2025 - Lyon (France) 4-7 November 2025: NewCrafts 2025 - Paris (France) 5-6 November 2025: Tech Show Paris - Paris (France) 6 November 2025: dotAI 2025 - Paris (France) 6 November 2025: Agile Tour Aix-Marseille 2025 - Gardanne (France) 7 November 2025: BDX I/O - Bordeaux (France) 12-14 November 2025: Devoxx Morocco - Marrakech (Morocco) 13 November 2025: DevFest Toulouse - Toulouse (France) 15-16 November 2025: Capitole du Libre - Toulouse (France) 19 November 2025: SREday Paris 2025 Q4 - Paris (France) 20 November 2025: OVHcloud Summit - Paris (France) 21 November 2025: DevFest Paris 2025 - Paris (France) 27 November 2025: DevFest Strasbourg 2025 - Strasbourg (France) 28 November 2025: DevFest Lyon - Lyon (France) 1-2 December 2025: Tech Rocks Summit 2025 - Paris (France) 5 December 2025: DevFest Dijon 2025 - Dijon (France) 9-11 December 2025: APIdays Paris - Paris (France) 9-11 December 2025: Green IO Paris - Paris (France) 10-11 December 2025: Devops REX - Paris (France) 10-11 December 2025: Open Source Experience - Paris (France) 28-31 January 2026: SnowCamp 2026 - Grenoble (France) 2-6 February 2026: Web Days Convention - Aix-en-Provence (France) 3 February 2026: Cloud Native Days France 2026 - Paris (France) 12-13 February 2026: Touraine Tech #26 - Tours (France) 22-24 April 2026: Devoxx France 2026 - Paris (France) 23-25 April 2026: Devoxx Greece - Athens (Greece) 17 June 2026: Devoxx Poland - Krakow (Poland) Getting in touch To react to this episode, come discuss in the Google group https://groups.google.com/group/lescastcodeurs Contact us via X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com Record a crowdcast or a crowdquestion Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs All episodes and all the info on https://lescastcodeurs.com/
How to maintain character consistency, style consistency, etc in an AI video. Prosumers can use Google Veo 3's "High-Quality Chaining" for fast social media content. Indie filmmakers can achieve narrative consistency by combining Midjourney V7 for style, Kling for lip-synced dialogue, and Runway Gen-4 for camera control, while professional studios gain full control with a layered ComfyUI pipeline to output multi-layer EXR files for standard VFX compositing. Links Notes and resources at ocdevel.com/mlg/mla-27 Try a walking desk - stay healthy & sharp while you learn & code Descript - my favorite AI audio/video editor AI Audio Tool Selection Music: Use Suno for complete songs or Udio for high-quality components for professional editing. Sound Effects: Use ElevenLabs' SFX for integrated podcast production or SFX Engine for large, licensed asset libraries for games and film. Voice: ElevenLabs gives the most realistic voice output. Murf.ai offers an all-in-one studio for marketing, and Play.ht has a low-latency API for developers. Open-Source TTS: For local use, StyleTTS 2 generates human-level speech, Coqui's XTTS-v2 is best for voice cloning from minimal input, and Piper TTS is a fast, CPU-friendly option. I. Prosumer Workflow: Viral Video Goal: Rapidly produce branded, short-form video for social media. This method bypasses Veo 3's weaker native "Extend" feature. Toolchain Image Concept: GPT-4o (API: GPT-Image-1) for its strong prompt adherence, text rendering, and conversational refinement. Video Generation: Google Veo 3 for high single-shot quality and integrated ambient audio. Soundtrack: Udio for creating unique, "viral-style" music. Assembly: CapCut for its standard short-form editing features. Workflow Create Character Sheet (GPT-4o): Generate a primary character image with a detailed "locking" prompt, then use conversational follow-ups to create variations (poses, expressions) for visual consistency. Generate Video (Veo 3): Use "High-Quality Chaining." Clip 1: Generate an 8s clip from a character sheet image. Extract Final Frame: Save the last frame of Clip 1. Clip 2: Use the extracted frame as the image input for the next clip, using a "this then that" prompt to continue the action. Repeat as needed. Create Music (Udio): Use Manual Mode with structured prompts ([Genre: ...], [Mood: ...]) to generate and extend a music track. Final Edit (CapCut): Assemble clips, layer the Udio track over Veo's ambient audio, add text, and use "Auto Captions." Export in 9:16. II. Indie Filmmaker Workflow: Narrative Shorts Goal: Create cinematic short films with consistent characters and storytelling focus, using a hybrid of specialized tools. Toolchain Visual Foundation: Midjourney V7 to establish character and style with --cref and --sref parameters. Dialogue Scenes: Kling for its superior lip-sync and character realism. B-Roll/Action: Runway Gen-4 for its Director Mode camera controls and Multi-Motion Brush. Voice Generation: ElevenLabs for emotive, high-fidelity voices. Edit & Color: DaVinci Resolve for its integrated edit, color, and VFX suite and favorable cost model. Workflow Create Visual Foundation (Midjourney V7): Generate a "hero" character image. Use its URL with --cref --cw 100 to create consistent character poses and with --sref to replicate the visual style in other shots. Assemble a reference set. Create Dialogue Scenes (ElevenLabs -> Kling): Generate the dialogue track in ElevenLabs and download the audio. 
In Kling, generate a video of the character from a reference image with their mouth closed. Use Kling's "Lip Sync" feature to apply the ElevenLabs audio to the neutral video for a perfect match. Create B-Roll (Runway Gen-4): Use reference images from Midjourney. Apply precise camera moves with Director Mode or add localized, layered motion to static scenes with the Multi-Motion Brush. Assemble & Grade (DaVinci Resolve): Edit clips and audio on the Edit page. On the Color page, use node-based tools to match shots from Kling and Runway, then apply a final creative look. III. Professional Studio Workflow: Full Control Goal: Achieve absolute pixel-level control, actor likeness, and integration into standard VFX pipelines using an open-source, modular approach. Toolchain Core Engine: ComfyUI with Stable Diffusion models (e.g., SD3, FLUX). VFX Compositing: DaVinci Resolve (Fusion page) for node-based, multi-layer EXR compositing. Control Stack & Workflow Train Character LoRA: Train a custom LoRA on a 15-30 image dataset of the actor in ComfyUI to ensure true likeness. Build ComfyUI Node Graph: Construct a generation pipeline in this order: Loaders: Load base model, custom character LoRA, and text prompts (with LoRA trigger word). ControlNet Stack: Chain multiple ControlNets to define structure (e.g., OpenPose for skeleton, Depth map for 3D layout). IPAdapter-FaceID: Use the Plus v2 model as a final reinforcement layer to lock facial identity before animation. AnimateDiff: Apply deterministic camera motion using Motion LoRAs (e.g., v2_lora_PanLeft.ckpt). KSampler -> VAE Decode: Generate the image sequence. Export Multi-Layer EXR: Use a node like mrv2SaveEXRImage to save the output as an EXR sequence (.exr). Configure for a professional pipeline: 32-bit float, linear color space, and PIZ/ZIP lossless compression. This preserves render passes (diffuse, specular, mattes) in a single file. Composite in Fusion: In DaVinci Resolve, import the EXR sequence. Use Fusion's node graph to access individual layers, allowing separate adjustments to elements like color, highlights, and masks before integrating the AI asset into a final shot with a background plate.
Bob Bilbruck, founder of Captjur, is an accomplished and visionary CEO with nearly 30 years of experience in emerging markets and technology. In his third appearance on PR 360, he discusses his role in bringing flag football to the 2028 Olympics, the importance of diverse AI ecosystems, and the need for the U.S. to stay competitive in the global AI arms race. Key Takeaways: - The growing world of flag football - The future of the gig economy - The AI arms race Episode Timeline: 1:50 Captjur Sports' involvement in Olympic flag football 4:15 Legal challenges with NIL 6:30 Pressure is on the NCAA 9:15 Flag football's expansion 12:00 Recent developments in the gig economy 15:15 The importance of diverse AI ecosystems 20:30 CPU vs. GPU computing 21:50 The AI arms race with China 24:00 What's the moonshot in the AI arms race? 27:00 Do consumer AI products give a false representation of their capabilities? This episode's guest: • Bob Bilbruck on LinkedIn • Captjur's website • Email at: info@Captjur.com Subscribe and leave a 5-star review: https://pod.link/1496390646 Contact Us! • Join the conversation by leaving a comment! • Follow us on Facebook, Twitter, Instagram, and LinkedIn! Thanks for listening! Hosted on Acast. See acast.com/privacy for more information.
An airhacks.fm conversation with Michalis Papadimitriou (@mikepapadim) about: starting with Java 8, first computer experiences with Pentium 2, doom 2 and Microsoft Paint, university introduction to Object-oriented programming using Objects First and bluej IDE, Monte Carlo simulations for financial portfolio optimization in Java, porting Java applications to OpenCL for GPU acceleration achieving 20x speedup, working at Huawei on GPU hardware, writing unit tests as introduction to TornadoVM, working on FPGA integration and Graal compiler optimizations, experience at OctoAI startup doing AI compiler optimizations for TensorFlow and PyTorch models, understanding model formats evolution from ONNX to GGUF, standardization of LLM inference through Llama models, implementing GPU-accelerated Llama 3 inference in pure Java using TornadoVM, achieving 3-6x speedup over CPU implementations, supporting multiple models including Mistral and working on qwen 3 and deepseek, differences between models mainly in normalization layers, GGUF becoming quasi-standard for LLM model distribution, TornadoVM's Consume and Persist API for optimizing GPU data transfers, challenges with OpenCL deprecation on macOS and plans for Metal backend, importance of developer experience and avoiding python dependencies for Java projects, runtime and compiler optimizations for GPU inference, kernel fusion techniques, upcoming integration with langchain4j, potential of Java ecosystem with Graal VM and Project Panama FFM for high-performance inference, advantages of Java's multi-threading capabilities for inference workloads Michalis Papadimitriou on twitter: @mikepapadim
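Since the episode centers on TornadoVM, a minimal sketch of its programming model helps (based on TornadoVM's documented TaskGraph API; class and method names have shifted across releases, so treat the details as illustrative): you keep a plain Java loop, mark it @Parallel, and wire it into a task graph that manages the GPU data transfers.

```java
import uk.ac.manchester.tornado.api.TaskGraph;
import uk.ac.manchester.tornado.api.TornadoExecutionPlan;
import uk.ac.manchester.tornado.api.annotations.Parallel;
import uk.ac.manchester.tornado.api.enums.DataTransferMode;

public class VectorAdd {

    // Plain Java; @Parallel marks the loop TornadoVM may compile to a GPU kernel.
    public static void add(float[] a, float[] b, float[] c) {
        for (@Parallel int i = 0; i < c.length; i++) {
            c[i] = a[i] + b[i];
        }
    }

    public static void main(String[] args) {
        int n = 1 << 20;
        float[] a = new float[n], b = new float[n], c = new float[n];
        java.util.Arrays.fill(a, 1f);
        java.util.Arrays.fill(b, 2f);

        TaskGraph graph = new TaskGraph("s0")
                .transferToDevice(DataTransferMode.FIRST_EXECUTION, a, b) // upload once
                .task("t0", VectorAdd::add, a, b, c)                      // the kernel
                .transferToHost(DataTransferMode.EVERY_EXECUTION, c);     // read back

        new TornadoExecutionPlan(graph.snapshot()).execute();
        System.out.println(c[0]); // 3.0
    }
}
```

The Consume and Persist API mentioned in the episode builds on exactly these transfer modes, letting buffers stay resident on the GPU across executions instead of round-tripping to the host on every call.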
In Podcast Playbook (Part 2), Kelly Kennedy rips the fluff off podcast tech and gives you the real, unfiltered truth about the gear that actually matters. If you're serious about launching a podcast that sounds world-class, this episode is your blueprint. From computers and microphones to audio interfaces and headphones, Kelly breaks down what to buy, why it matters, and how to build a pro-level setup without wasting a dollar. No gimmicks. No jargon. Just clarity and confidence to build your studio the right way—whether you're in a spare bedroom or a full production space.But this isn't just about gear—it's about showing up like a pro from day one. Kelly explains why audio quality is make-or-break, why your computer is the real unsung hero, and how the right setup positions you for long-term podcast success. This episode will cut months off your learning curve and set you up to hit record with power and purpose. If you want to build a podcast that's built to last, Episode 251 is non-negotiable.Key Takeaways: 1. Your computer is the most critical piece of podcasting equipment—editing and production demand serious processing power.2. A gaming laptop or desktop is often the best choice due to its high-end GPU, CPU, RAM, and SSD performance.3. Sound quality can make or break your show; even great content won't save you if the audio hurts people's ears.4. USB microphones are great for beginners, but XLR microphones paired with an interface deliver far superior sound and control.5. A quality audio interface like the Rodecaster Pro 2 allows for zero-latency monitoring, clean gain control, and pro-level audio routing.6. Headphones are non-negotiable—they prevent feedback, help monitor sound live, and allow you to edit with precision.7. Bluetooth headphones introduce latency—always go wired when producing or editing your show.8. You don't need a full studio to sound professional—a home setup with the right gear can match broadcast quality.9. Start with a setup you can grow into—XLR systems are scalable and used by nearly all professional podcasters.10. Equipment helps—but consistency, connection, and your message are what truly build a great podcast.Ready to build something that lasts?The Catalyst Club isn't just another business community—it's your backstage pass to real growth. If you're a founder, executive, podcaster, or builder chasing clarity, connection, and momentum, this is where you belong. Inside, you'll find exclusive coaching, behind-the-scenes strategy, live events, and a rockstar crew of high-performers pushing the edge just like you.No fluff. No noise. Just fuel for what you're building.Join us: www.kellykennedyofficial.com/thecatalystclubIf you know, you're known.
A GPU is not just for running the new Doom maxed out: it can also solve computational problems thousands of times faster than a CPU. How that works, and which problems it suits, is what we dig into in this episode with Nikolai Polyarny! We also look forward to your likes, reposts and comments in the messengers and social networks! Telegram chat: https://t.me/podlodka Telegram channel: https://t.me/podlodkanews Facebook page: www.facebook.com/podlodkacast/ Twitter account: https://twitter.com/PodcastPodlodka Hosts in this episode: Zhenya Katella, Katya Petrova Useful links: Telegram channel https://t.me/UnicornGlade Lecture on how Nanite works in Unreal Engine 5 https://www.youtube.com/watch?v=ltUzX1IR9JI A condensed lecture on graphics cards https://www.youtube.com/watch?v=zJ6ru8dNAcs A course on graphics cards (OpenCL/CUDA) https://www.youtube.com/playlist?list=PLlb7e2G7aSpSkDWlyJQzT9Qx9rrgKSgAp homework assignments: https://github.com/gpgpucourse Twitch (live coding sessions) https://www.twitch.tv/polarnick239 Nikolai Polyarny's website https://polarnick.com/ An algorithm for constructing a BVH in real time https://www.youtube.com/watch?v=WuycXesy4pQ&list=PLlb7e2G7aSpSptbl_yI5uvMlpRc1mwsCL&index=8
Your computer's CPU is a complex piece of circuitry trying to maximize how much it can do and how quickly it can do it. I'll outline one of the techniques that make a single CPU core look like two.
Rustler Core Team Member Sonny Scroggin joins Elixir Wizards Sundi Myint and Charles Suggs. Rustler serves as a bridge to write Native Implemented Functions (NIFs) in Rust that can be called from Elixir code. This combo leverages Rust's performance and memory safety while maintaining Elixir's fault tolerance and concurrency model, creating a powerful solution for CPU-intensive operations within Elixir applications. Sonny provides guidance on when developers should consider using NIFs versus other approaches like ports or external services and highlights the considerations needed when stepping outside Elixir's standard execution model into native code. Looking toward the future, Sonny discusses exciting developments for Rustler, including an improved asynchronous NIF interface, API modernization efforts, and better tooling. While Rust offers tremendous performance benefits for specific use cases, Sonny emphasizes that Elixir's dynamic nature and the BEAM's capabilities for distributed systems remain unmatched for many applications. Rustler simply provides another powerful tool that expands what developers can accomplish within the Elixir ecosystem. Key topics discussed in this episode: Rust as a "high-level low-level language" with memory safety NIFs (Native Implemented Functions) in the BEAM virtual machine Rustler's role simplifying Rust-Elixir integration with macros CPU-intensive operations as primary NIF use cases Beam scheduler interaction considerations with native code Dirty schedulers for longer-running NIFs in OTP 20+ Memory safety advantages of Rust for NIFs Development workflow using Mix tasks for Rustler Common pitfalls when first working with Rust Error handling improvements possible with Rustler NIFs Differences between ports, NIFs, and external services Asynchronous programming approaches in Rust versus Elixir Tokio runtime integration for asynchronous operations Static NIFs for mobile device compatibility Upcoming CLI tooling to simplify Rustler development Rustler's API modernization efforts for better ergonomics Thread pool sharing across multiple NIFs Wasm integration possibilities for the BEAM Compile-time safety versus dynamic runtime capabilities Performance considerations when implementing NIFs Compiler-assisted memory management in Rust Automatic encoding/decoding between Rust and Elixir types The importance of proper error handling Real-world application in high-traffic authentication servers Community resources for learning Rustler Links mentioned: https://github.com/rusterlium/rustler https://github.com/rust-lang/rust https://www.angelfire.lycos.com/ https://www.webdesignmuseum.org/flash-websites https://www.php.net/ https://xmpp.org/ https://jabberd2.org/ Geocities: https://cybercultural.com/p/geocities-1995/ (fun fact: when you search Geocities on Google, the results page is in Comic Sans font.)
https://bleacherreport.com/ https://hexdocs.pm/jose/readme.html https://github.com/rust-lang/rust-bindgen Erlang Ports: https://www.erlang.org/doc/system/c_port.html Erlang ETFs (External Term Format): https://www.erlang.org/doc/apps/erts/erl_ext_dist.html Elixir gRPC https://github.com/elixir-grpc/grpc gRPC ("Remote Procedure Call"): https://grpc.io/ dirty_cpu.ex https://github.com/E-xyza/zigler/blob/main/lib/zig/nif/dirty_cpu.ex ets https://www.erlang.org/doc/apps/stdlib/ets.html Mnesia https://www.erlang.org/doc/apps/mnesia/mnesia.html VPPs (Virtual Power Plants): https://www.energy.gov/lpo/virtual-power-plants https://nixos.org/ WASM WebAssembly with Elixir: https://github.com/RoyalIcing/Orb Rust Tokio https://tokio.rs/ Getting Started: https://hexdocs.pm/rustler/0.17.0/Mix.Tasks.Rustler.New.html https://rustup.rs/ Special Guest: Sonny Scroggin.
John is joined by Spencer Collins, Executive Vice President and Chief Legal Officer of Arm Holdings, the UK-based semiconductor design firm known for powering over 99% of smartphones globally with its energy-efficient CPU designs. They discuss the legal challenges that arise from Arm's unique position in the semiconductor industry. Arm has a unique business model, centered on licensing intellectual property rather than manufacturing processors. This model is evolving as Arm considers moving "up the stack," potentially entering into processor production to compete more directly in the AI hardware space. Since its $31 billion acquisition by SoftBank in 2016, Arm has seen tremendous growth, culminating in an IPO in 2023 at a $54 billion valuation and its market value nearly doubling since. AI is a major strategic focus for Arm, as its CPUs are increasingly central to AI processing in cloud and edge environments. Arm's high-profile AI projects include Nvidia's Grace Hopper superchip and Microsoft's new AI server chips, both of which rely heavily on Arm CPU cores. Arm is positioned to be a key infrastructure player in AI's future based on its broad customer base, the low power consumption of its semiconductors, and their extensive security features. Nvidia's proposed $40 billion acquisition of Arm collapsed due to regulatory pushback in the U.S., Europe, and China. This led SoftBank to pivot to taking 10% of Arm public. Arm is now aggressively strengthening its intellectual property strategy, expanding patent filings, and upgrading legal operations to better protect its innovations in the AI space. Spencer describes his own career path, from law firm M&A work to a leadership role at SoftBank's Vision Fund, where he worked on deals like the $7.7 billion Uber investment, culminating in his current post. He suggests that general counsel for major tech firms must be intellectually agile, invest in best-in-class advisors, and maintain geopolitical awareness to navigate today's rapidly changing legal and regulatory landscape. Podcast Link: Law-disrupted.fm Host: John B. Quinn Producer: Alexis Hyde Music and Editing by: Alexander Rossi
Fundamentals of Operating Systems Course https://oscourse.win ktls is brilliant. TLS encryption/decryption often happens in userland, while TCP lives in the kernel. With ktls, userland can hand the keys to the kernel and the kernel does the crypto. When calling write, the kernel encrypts the packet and sends it to the NIC. When calling read, the kernel decrypts the packet and hands it to userspace. This mode still taxes the host's CPU of course, so there is another mode where the kernel offloads the crypto to the NIC device! The host CPU becomes free. Incoming packets to the NIC are decrypted in the device before they are DMAed to the kernel; outgoing packets are encrypted before they leave the NIC for the network. ktls still needs the handshake to happen in userspace. There is also zero-copy enabled in some cases (now that the kernel has the TLS context). Deserves a video. So much good stuff. 0:00 Intro 2:00 Userspace SSL Libraries 3:00 ktls 6:00 Kernel Encrypts/Decrypts (TLS_SW) 8:20 NIC offload mode (TLS_HW) 10:15 NIC does it all (TLS_HW_RECORD) 12:00 Write TX Example 13:50 Read RX Example 17:00 Zero copy (sendfile) https://docs.kernel.org/networking/tls-offload.html
Craig Dunham is the CEO of Voltron Data, a company specializing in GPU-accelerated data infrastructure for large-scale analytics, AI, and machine learning workloads. Before joining Voltron Data, he served as CEO of Lumar, a SaaS technical SEO platform, and held executive roles at Guild Education and Seismic, where he led the integration of Seismic's acquisition of The Savo Group and drove go-to-market strategies in the financial services sector. Craig began his career in investment banking with Citi and Lehman Brothers before transitioning into technology leadership roles. He holds an MBA from Northwestern University and a BS from Hampton University. In this episode… In a world where efficiency and speed are paramount, how can companies quickly process massive amounts of data without breaking the bank on infrastructure and energy costs? With the rise of AI and increasing data volumes from everyday activities, organizations face a daunting challenge: achieving fast and cost-effective data processing. Is there a solution that can transform how businesses handle data and unlock new possibilities? Craig Dunham, a B2B SaaS leader with expertise in go-to-market strategy and enterprise data systems, tackles these challenges head-on by leveraging GPU-accelerated computing. Unlike traditional CPU-based systems, Voltron Data's technology uses GPUs to greatly enhance data processing speed and efficiency. Craig shares how their solution helps enterprises reduce processing times from hours to minutes, enabling organizations to run complex analytics faster and more cost-effectively. He emphasizes that Voltron Data's approach doesn't require a complete overhaul of existing systems, making it a more accessible option for businesses seeking to enhance their computing capabilities. In this episode of the Inspired Insider Podcast, Dr. Jeremy Weisz interviews Craig Dunham, CEO at Voltron Data, about building high-performance data systems. Craig delves into the challenges and solutions in today's data-driven business landscape, how Voltron Data's innovative solutions are revolutionizing data analytics, and the advantages of using GPU over CPU for data processing. He also shares valuable lessons on leading high-performing teams and adapting to market demands.
Timestamps: 0:00 Nintendo not a linus fan i guess 1:22 RTX 9060 XT 16GB Reviews 2:22 Meta, Yandex de-anonymizing users 3:31 Hoverpen Interstellar! 4:39 QUICK BITS INTRO 4:45 Witcher 4 Unreal Engine footage 5:23 Crocodilus Android malware 5:57 CPU cooler on a GTX 960 6:25 Milky Way and Andromeda might miss NEWS SOURCES: https://lmg.gg/Z0y6E Learn more about your ad choices. Visit megaphone.fm/adchoices
The real Bitcoin Pizza Day story: Laszlo spent nearly 80,000 Bitcoin on pizza in 2010, not just 10,000. Plus how his GPU mining discovery changed Bitcoin forever and why Satoshi wasn't happy about it.You're listening to Bitcoin Season 2. Subscribe to the newsletter, trusted by over 12,000 Bitcoiners: https://newsletter.blockspacemedia.comCharlie and Colin reveal the shocking truth about Bitcoin Pizza Day that mainstream media got wrong. Laszlo didn't just spend 10,000 Bitcoin on pizza - he spent nearly 80,000 Bitcoin throughout 2010! We dive deep into how his GPU mining discovery revolutionized Bitcoin, why Satoshi sent him a concerned email, and how this "penance" may have actually saved Bitcoin's decentralization in its early days.**Notes:**• Laszlo spent ~80,000 Bitcoin total on pizza in 2010• GPU mining was 10x more powerful than CPU mining• Bitcoin hash rate increased 130,000% by end of 2010• Laszlo had 1-1.5% of entire Bitcoin supply 2009-2010• His wallet peaked at 43,854 Bitcoin• Total wallet flows were 81,432 BitcoinTimestamps:00:00 Start00:28 Lies, damn lies.. and pizza02:21 What actually happened05:46 It's actually WAY MORE than you think11:15 Arch Network11:47 Laslo "saved" Bitcoin19:12 Pizza or penance?-