OpenAI has made significant strides in the AI landscape with a series of announcements that position it as a leading platform in the industry. The introduction of new models, including GPT-5 Pro and Sora 2, alongside app integrations like Slack and a new Apps SDK, marks a pivotal moment for the company. These developments aim to enhance user interaction and streamline workflows, allowing users to perform tasks directly within the ChatGPT interface. The partnership with Advanced Micro Devices (AMD) on a multi-billion-dollar chip deal further solidifies OpenAI's commitment to expanding its computing capabilities, crucial for the advancement of its AI technologies.

In a contrasting scenario, Deloitte has faced scrutiny after delivering a flawed report to the Australian government, which included errors attributed to the use of AI. Despite this setback, Deloitte is moving forward with a significant partnership with Anthropic to deploy the AI chatbot Claude across its workforce. This juxtaposition highlights the challenges and risks associated with AI integration in business operations and emphasizes the need for careful governance and oversight. The incident serves as a cautionary tale about the potential pitfalls of relying too heavily on AI without proper verification.

The podcast also discusses the broader implications of AI adoption in enterprises, revealing that a majority of AI projects are failing due to governance gaps and a lack of trust in the technology. A Gartner survey indicates that many IT leaders are concerned about regulatory compliance, with only a small percentage feeling confident in their organizations' ability to manage AI tools effectively. This underscores the importance of establishing robust governance frameworks to ensure that AI implementations are both effective and trustworthy.

As the AI landscape continues to evolve, the podcast suggests that service providers should pivot toward building governance frameworks and risk-management strategies rather than simply promoting AI hype. The focus should shift to creating value through responsible AI use, ensuring that clients can trust the technology they are implementing. This approach positions governance as a critical service line, essential for navigating the complexities of AI adoption and maintaining client trust in an increasingly automated world.

Three things to know today:
00:00 OpenAI Builds the Windows of AI: New Models, App Store, SDKs, and a Chip Deal Signal Platform Takeover
06:50 Deloitte's AI Paradox — A Costly Error in Australia, Followed by Its Biggest AI Expansion Yet
09:38 AI's Next Frontier Isn't Innovation — It's Accountability, and That's Where MSPs Win

This is the Business of Tech.
Supported by: https://mailprotector.com/
All our Sponsors: https://businessof.tech/sponsors/
Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/
Looking for a link from the stories? The entire script of the show, with links to articles, is posted in each story on https://www.businessof.tech/
Support the show on Patreon: https://patreon.com/mspradio/
Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights? Send Dave Sobel a message on PodMatch, here: https://www.podmatch.com/hostdetailpreview/businessoftech
Want our stuff? Cool Merch? Wear "Why Do We Care?" - Visit https://mspradio.myspreadshop.com
Follow us on:
LinkedIn: https://www.linkedin.com/company/28908079/
YouTube: https://youtube.com/mspradio/
Facebook: https://www.facebook.com/mspradionews/
Instagram: https://www.instagram.com/mspradio/
TikTok: https://www.tiktok.com/@businessoftech
Bluesky: https://bsky.app/profile/businessof.tech
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
If your MCP server has dozens of tools, it's probably built wrong. You need tools that are specific and clear for each use case—but you also can't have too many. This creates an almost impossible tradeoff that most companies don't know how to solve.

That's why we interviewed Alex Rattray, the founder and CEO of Stainless. Stainless builds APIs, SDKs, and MCP servers for companies like OpenAI and Anthropic. Alex has spent years mastering how to make software talk to software, and he came on the show to share what he knows. We get into MCP and the future of the AI-native internet.

If you found this episode interesting, please like, subscribe, comment, and share.

Want even more? Sign up for Every to unlock our ultimate guide to prompting ChatGPT here: https://every.ck.page/ultimate-guide-to-prompting-chatgpt. It's usually only for paying subscribers, but you can get it here for free.

To hear more from Dan Shipper:
- Subscribe to Every: https://every.to/subscribe
- Follow him on X: https://twitter.com/danshipper

Ready to build a site that looks hand-coded—without hiring a developer? Launch your site for free at Framer.com, and use code DAN to get your first month of Pro on the house.

Timestamps:
00:00:00 - Start
00:01:14 - Introduction
00:02:54 - Why Alex likes running barefoot
00:05:09 - APIs and MCP, the connectors of the new internet
00:10:53 - Why MCP servers are hard to get right
00:20:07 - Design principles for reliable MCP servers
00:23:50 - Scaling MCP servers for large APIs
00:25:14 - Using MCP for business ops at Stainless
00:28:12 - Building a company brain with Claude Code
00:33:59 - Where MCP goes from here
00:41:10 - Alex's take on the security model for MCP

Links to resources mentioned in the episode:
- Alex Rattray: @RattrayAlex
- Stainless: https://www.stainless.com/
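As a rough illustration of the tradeoff Alex talks about (tools that are specific and clearly described, but few enough that a model can choose between them), here is a minimal sketch of a single, narrowly scoped tool, assuming the current TypeScript MCP SDK's McpServer API; the tool name, schema, and stubbed handler are invented for illustration, not Stainless's or OpenAI's actual design:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// A small server with one narrowly scoped, clearly described tool,
// rather than dozens of endpoints mirrored one-to-one from an API.
const server = new McpServer({ name: "orders-demo", version: "0.1.0" });

server.tool(
  "lookup_order", // specific verb + noun, easy for a model to pick
  "Look up a single order by its ID and return its status and total.",
  { orderId: z.string().describe("The order ID, e.g. ord_123") },
  async ({ orderId }) => {
    // In a real server this would call your backend; here we return a stub.
    const order = { id: orderId, status: "shipped", totalCents: 4200 };
    return { content: [{ type: "text", text: JSON.stringify(order) }] };
  }
);

// Expose the server over stdio so an MCP client (e.g. a desktop agent) can connect.
await server.connect(new StdioServerTransport());
```

The point of the sketch is the shape: one task-sized tool with a clear description and a tiny schema tends to be easier for a model to use reliably than a long menu of overlapping endpoints.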
Just as a detective must gather clues and consider all the evidence, ITOps teams investigating an outage must consider all available data points in context to understand where system failures likely occurred. Recent outages that impacted Starlink and Google Maps SDKs powerfully illustrated this point. Tune in to learn more about these incidents and also get an update on the September Red Sea cable cuts. ——— CHAPTERS 00:00 Intro 00:53 Starlink Outage 03:39 Google Maps Outage 06:50 Update: Red Sea Cable Cuts 13:26 Outage Trends: By the Numbers 14:34 Get in Touch ——— Want to get in touch? If you have questions, feedback, or guests you would like to see featured on the show, send us a note at InternetReport@thousandeyes.com. Or follow us on LinkedIn or X: @thousandeyes ——— ABOUT THE INTERNET REPORT This is The Internet Report, a podcast uncovering what's working and what's breaking on the Internet—and why. Tune in to hear ThousandEyes' Internet experts dig into some of the most interesting outage events from the past couple weeks, discussing what went awry—was it the Internet, or an application issue? Plus, learn about the latest trends in ISP outages, cloud network outages, collaboration network outages, and more. Catch all the episodes on YouTube or your favorite podcast platform: - Apple Podcasts: https://podcasts.apple.com/us/podcast/the-internet-report/id1506984526 - Spotify: https://open.spotify.com/show/5ADFvqAtgsbYwk4JiZFqHQ?si=00e9c4b53aff4d08&nd=1&dlsi=eab65c9ea39d4773 - SoundCloud: https://soundcloud.com/ciscopodcastnetwork/sets/the-internet-report
Join us for an exciting deep dive into Topia with founder Daniel as we explore how this spatial platform is revolutionizing virtual education! Discover how proximity chat, customizable worlds, and innovative SDK tools are creating delightful learning experiences for K-12 students in virtual schools, micro schools, and homeschooling environments.

From supporting 500+ concurrent users to integrating with major LMS platforms, Topia is proving that online education can be engaging, social, AND safe. We discuss the platform's evolution, the importance of "delight" as a metric, and why students are literally clicking repeatedly just to get into class early!

Whether you're an educator exploring virtual learning tools, a remote worker seeking better collaboration spaces, or just curious about the future of online community building, this episode is packed with insights.

Head over to our website at hitechpod.us for all of our episode pages, send some support at Buy Me a Coffee, and find us on Twitter and YouTube to see our faces (maybe skip the last one).

Need a journal that's secure and reflective? Sign up for the Reflection App today! We promise that the free version is enough, but if you want the extra features, paying up is even better with our affiliate discount.
Client SDKs: the prettier APIs?

APIs are the backbone of modern software development, but who doesn't know the dilemma? The API changes, error messages pile up in your inbox, and suddenly your workflow is hanging by a thin HTTP thread. That's exactly where client SDKs come in. They turn cryptic API endpoints into handy, language-native tools that save you not just nerves but also time.

In this episode we look behind the scenes of SDK development. We talk from a maintainer's perspective about support pressure, burnout, and the (often underestimated) responsibility that comes with open source. At the same time we dive deep into practice: what exactly is a client SDK? When is hand-writing one worth it, and when is code generation the better choice? Why is idiomatic SDK design more than just style, and why do some SDKs, like Stripe's or AWS's, even boost the economic success of entire companies?

Together we look at architecture, best practices, edge cases, testing, documentation, and maintenance. And of course we discuss when an SDK really makes sense, and in which cases you're better off writing a simple HTTP call yourself.

Bonus: why Atlassian sends merch instead of sponsorship.

You can find our current advertising partners at https://engineeringkiosk.dev/partners

Quick feedback on the episode:
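To make the "simple HTTP call vs. client SDK" tradeoff from the episode concrete, here is a small TypeScript sketch against a hypothetical API (the endpoint, types, and error shape are invented for illustration): the hand-written wrapper centralizes auth, URL building, error mapping, and typing, which is exactly the convenience a good SDK buys you.

```typescript
// Hypothetical API: GET https://api.example.com/v1/invoices/{id}

// Option 1: raw HTTP call. Every caller repeats auth, URL building, and error handling.
async function getInvoiceRaw(id: string, apiKey: string): Promise<unknown> {
  const res = await fetch(`https://api.example.com/v1/invoices/${id}`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}

// Option 2: a thin hand-written client SDK. Callers get typed methods and consistent errors.
interface Invoice {
  id: string;
  amountCents: number;
  status: "open" | "paid" | "void";
}

class ExampleApiError extends Error {
  constructor(public status: number, message: string) {
    super(message);
  }
}

class ExampleClient {
  constructor(private apiKey: string, private baseUrl = "https://api.example.com/v1") {}

  private async request<T>(path: string): Promise<T> {
    const res = await fetch(`${this.baseUrl}${path}`, {
      headers: { Authorization: `Bearer ${this.apiKey}` },
    });
    if (!res.ok) throw new ExampleApiError(res.status, await res.text());
    return res.json() as Promise<T>;
  }

  invoices = {
    // Typed, discoverable, and idiomatic: client.invoices.get("inv_123")
    get: (id: string) => this.request<Invoice>(`/invoices/${id}`),
  };
}

// Usage: const client = new ExampleClient("sk_test_..."); const inv = await client.invoices.get("inv_123");
```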
Here's the thing. We have had brilliant ideas in Web3 for years, along with better tooling and plenty of enthusiasm, yet adoption still feels slower than it should be. In my conversation with Maciej Baj, founder of t3rn, we got under the skin of why that is and what it might take to change the pace. His starting point is simple to state and hard to deliver at scale: make cross-chain interactions feel seamless for users and predictable for developers. If you can do that, the door opens to practical products rather than experiments that only the bravest try.

Maciej describes t3rn as a universal execution layer for cross-chain smart contracts, and the phrase matters because it changes how we think about interoperability. Instead of stitching together a mess of bridges and oracles, t3rn lets a contract access state and data across multiple chains from one place. Today it is mapped to the EVM for broad compatibility, but the design is chain agnostic by intent. That choice is less about tribal loyalties and more about meeting developers where they already build while keeping the door open to other ecosystems as the market evolves.

Trust shows up in the details, and atomic execution is one of those details that changes behavior. If a multi-chain transaction cannot complete in full, it reverts. No half-finished transfers. No manual recovery adventures. This mirrors what smart contracts already offer on a single chain, which means developers can reason about outcomes without inventing fresh playbooks for every hop. It also reassures users, who care less about the plumbing and more about knowing that funds either arrive or return.

Cost matters too. t3rn has been engineered for cost-efficient token movement across chains, which sounds mundane until you price a complex strategy that touches multiple venues. Lower friction makes new use cases economical. Maciej outlined a few that caught my eye. Trading algorithms that read and act on signals from multiple chains without duct tape. Simpler asset movement across ecosystems that do not share a wallet culture or UX conventions. Agent-driven executors that can watch for arbitrage or rebalance a portfolio without constant human oversight. The theme is the same throughout. Reduce the number of hoops and you increase the number of people willing to try something new.

We also looked ahead. t3rn is preparing an integration with Hyperliquid and rolling out a builder program to widen the ecosystem on top of its execution layer. An SDK is on the way so the community can help bring in new chains faster, rather than waiting for a core team to do all the heavy lifting. There is a governance track forming as well, aimed at giving the community more say in integrations and priorities. None of this guarantees success, but it signals a path from protocol to platform.

I left the conversation with a clearer view of why interoperability still matters in 2025. The multi-chain world is not going away. Users move between ecosystems. Developers deploy to several environments at once. Liquidity, identity, and logic already live in many places. A universal execution layer that is reliable, cost aware, and easy to build on is the kind of boring-sounding foundation that ends up changing behavior.

*********

Visit the sponsor of the Tech Talks Network: land your first job in tech in 6 months with the Software QA Engineering Bootcamp from Careerist https://crst.co/OGCLA
Expo SDK 54 and React Native 0.81 are a perfect match—and our hosts Mazen Chami, Frank Calise, and Tyler Williams are here to break it all down. In this episode, they dive deep into everything new in Expo SDK 54, from faster precompiled iOS builds to the sleek Liquid Glass feature and Android 16 support. If you want the complete rundown of what's fresh, powerful, and ready to use in Expo SDK 54, this episode has you covered.

Show Notes:
- Expo SDK 54 beta is now available
- Precompiling the Expo SDK for iOS
- Expo Autolinking
- Infinite Red's article
- Phil Pluckthun's article

Connect With Us!
- Mazen Chami: @mazenchami
- Frank Calise: @frankcalise
- Tyler Williams: @coolsoftwaredev
- React Native Radio: @reactnativerdio

This episode is brought to you by Infinite Red! Infinite Red is an expert React Native consultancy located in the USA. With nearly a decade of React Native experience and deep roots in the React Native community (hosts of Chain React and the React Native Newsletter, core React Native contributors, creators of Ignite and Reactotron, and much, much more), Infinite Red is the best choice for helping you build and deploy your next React Native app.
This episode is supported by Pneuma Solutions, creators of accessible tools like Remote Incident Manager and Scribe. Get $20 off with code dt20 at https://pneumasolutions.com/ and enter to win a free subscription at doubletaponair.com/subscribe!

Discover the latest on the iPhone 17 Pro Max, Apple's new Action and Camera buttons, and the potential of Meta's Ray-Ban smart glasses with built-in accessibility. Steven Scott, Shaun Preece, and guest Michael Babcock share hands-on experiences, battery performance insights, and why blind users should care about Apple Intelligence and the new Meta Displays.

In this episode of Double Tap, Steven and Shaun are joined by Michael Babcock for a relaxed, tech-focused catch-up. The trio dive into the iPhone 17 Pro Max, comparing upgrades from the iPhone 14 Pro Max, including the new USB-C charging, enhanced battery life, Action button customisation, and the Camera Control button. They weigh the pros and cons of newer models like the iPhone Air and discuss how size, weight, and accessibility affect their choices.

The conversation shifts to Meta's latest smart glasses, including the Ray-Ban Vanguards and the upcoming Displays with built-in screen readers and developer SDKs. The team explores the real-world benefits for blind users, gestures, and Meta AI's potential for on-device assistance. They also reflect on blind community connections, podcasting experiences, and the excitement of accessible tech innovation.

Relevant Links:
Double Tap Newsletter: https://doubletaponair.com/subscribe
Michael Babcock's Podcast: https://technicallyworking.show
ACB Community: https://acb.org

Find Double Tap online: YouTube, Double Tap Website
---
Follow on:
YouTube: https://www.doubletaponair.com/youtube
X (formerly Twitter): https://www.doubletaponair.com/x
Instagram: https://www.doubletaponair.com/instagram
TikTok: https://www.doubletaponair.com/tiktok
Threads: https://www.doubletaponair.com/threads
Facebook: https://www.doubletaponair.com/facebook
LinkedIn: https://www.doubletaponair.com/linkedin

Subscribe to the Podcast:
Apple: https://www.doubletaponair.com/apple
Spotify: https://www.doubletaponair.com/spotify
RSS: https://www.doubletaponair.com/podcast
iHeartRadio: https://www.doubletaponair.com/iheart

About Double Tap
Hosted by the insightful duo, Steven Scott and Shaun Preece, Double Tap is a treasure trove of information for anyone who's blind or partially sighted and has a passion for tech. Steven and Shaun not only demystify tech, but they also regularly feature interviews and welcome guests from the community, fostering an interactive and engaging environment. Tune in every day of the week, and you'll discover how technology can seamlessly integrate into your life, enhancing daily tasks and experiences, even if your sight is limited.

"Double Tap" is a registered trademark of Double Tap Productions Inc. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
In this episode, I'm joined once again by Alberto Moedano, aka Code with Beto. We discuss the exciting features of Expo SDK 54, including the introduction of React Native 0.81, the new Expo Router version 6, and the integration of Expo UI with SwiftUI.

Beto and I also delve into the benefits of the Liquid Glass design, the improvements in build times, and the future of Expo Maps.

Beto finally shares insights on his successful tool Snap AI and the importance of keeping up with SDK updates for better performance and user experience.
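For listeners who haven't used Expo Router's file-based routing before, the general pattern looks like this minimal sketch (a generic Expo Router screen, not something specific to the version 6 features discussed in the episode):

```typescript
// app/index.tsx -- with Expo Router, the file path defines the route ("/").
import { View, Text } from "react-native";
import { Link } from "expo-router";

export default function HomeScreen() {
  return (
    <View style={{ flex: 1, alignItems: "center", justifyContent: "center" }}>
      <Text>Home</Text>
      {/* Navigates to app/details.tsx, i.e. the "/details" route */}
      <Link href="/details">Go to details</Link>
    </View>
  );
}
```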
We discuss hands-on impressions of Meta's Ray-Ban Display and its wristband, and of Horizon Hyperscape's photorealistic scene capture on Quest 3. We also cover the Gen 2 Ray-Ban Meta glasses, the Oakley Meta Vanguard with a wider-FOV centered camera and IP67 resistance, Garmin watch integration, a new smart glasses SDK, Conversation Focus, and Michael Abrash's vision for always-on contextual AI. Plus: a tease of the next Horizon OS system UI, an Avatar: Fire And Ash 3D teaser on Quest, Blumhouse Enhanced Cinema bringing M3GAN with immersive effects, Discord coming to Quest next year, and major Horizon upgrades including a 4x faster Engine, 100+ user instances, and AI-driven creation in Horizon Studio.
In this episode of Connected Mate, PPC and Alexandre break down the big announcements from MetaConnect 2025: the launch of the new Ray-Ban Display with its built-in micro-display, the Oakley Meta Vanguard designed for sport, and the much-anticipated Neural Band, a neural wristband that turns our wrists into a futuristic interface.

On the program:
- Meta's strategy to occupy the XR space
- The role of EssilorLuxottica in hardware and fashion credibility
- The solid use cases (guidance, accessibility, sport, POV social content)
- The key question of battery life and of openness to developers via a dedicated SDK
- The sensitive issues of data, health, and... potential ads "in your eyes"

A dense, concrete, unfiltered conversation to understand how Meta is preparing for the post-smartphone era and wants to establish its connected glasses as the next must-have device.

To follow news about this podcast, subscribe for free to the newsletter, written with love and guaranteed spam-free: https://bonjourppc.substack.com

Hosted by Ausha. Visit ausha.co/politique-de-confidentialite for more information.
Meta is opening its smart glasses to developers with the new Meta Wearables Device Access Toolkit, and Microsoft Seeing AI is the first major partner. This breakthrough could transform accessibility by allowing apps to harness the glasses' cameras, microphones, and audio for real-world assistance.

Expanded Summary
Steven Scott and Shaun Preece break down Meta's game-changing announcement: the release of an SDK that finally gives developers access to core Meta smart glasses hardware. This means apps like Microsoft Seeing AI can use the glasses to deliver hands-free, real-time descriptions of surroundings, object recognition, and instant text reading—all without holding a phone.

The hosts explore why this is massive for accessibility, particularly for blind and visually impaired users. They discuss the future potential for apps like Be My Eyes, Aira, and Envision, along with the limitations of the preview programme, privacy considerations, and how developers can integrate AI via cloud or local processing. They also touch on why this move positions Meta as a serious player before Apple and Google release their own smart glasses SDKs.

Relevant Links:
Meta Wearables Device Access Toolkit: https://www.meta.com
Microsoft Seeing AI: https://www.microsoft.com/seeing-ai
Be My Eyes: https://www.bemyeyes.com

Find Double Tap online: YouTube, Double Tap Website
---
Follow on:
YouTube: https://www.doubletaponair.com/youtube
X (formerly Twitter): https://www.doubletaponair.com/x
Instagram: https://www.doubletaponair.com/instagram
TikTok: https://www.doubletaponair.com/tiktok
Threads: https://www.doubletaponair.com/threads
Facebook: https://www.doubletaponair.com/facebook
LinkedIn: https://www.doubletaponair.com/linkedin

Subscribe to the Podcast:
Apple: https://www.doubletaponair.com/apple
Spotify: https://www.doubletaponair.com/spotify
RSS: https://www.doubletaponair.com/podcast
iHeartRadio: https://www.doubletaponair.com/iheart

About Double Tap
Hosted by the insightful duo, Steven Scott and Shaun Preece, Double Tap is a treasure trove of information for anyone who's blind or partially sighted and has a passion for tech. Steven and Shaun not only demystify tech, but they also regularly feature interviews and welcome guests from the community, fostering an interactive and engaging environment. Tune in every day of the week, and you'll discover how technology can seamlessly integrate into your life, enhancing daily tasks and experiences, even if your sight is limited.

"Double Tap" is a registered trademark of Double Tap Productions Inc. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
This week on the PHP Podcast, Eric and John discuss NativePHP bringing everything, including the Kitchen Sink, PHP Foundation announcement of the SDK for MCP, Nuno's Explanation of Laravel MCP, PHP 8.5 Pipe Operator, the Supply Chain issue with NPM, and more. Links from the show: GitHub – NativePHP/kitchen-sink-mobile: NativePHP for mobile demo app […] The post PHP Podcast: 2025.09.18 appeared first on PHP Architect.
Meta's first smart glasses with a display are here: Meta Ray-Ban Display is the name of the device, the first pair of glasses to ship with the Meta Neural Band. There are two more hardware announcements in the smart glasses space as well: the second generation of the Ray-Ban Meta and the completely new Oakley Meta Vanguard, a "performance AI" pair of glasses, as Meta itself calls it, that could hardly be sexier. On top of that, there is news about Meta Horizon World, which now runs on the Horizon Engine, an in-house game engine that is supposed to enable up to 4x faster loading and allow up to 100 avatars in a single world. Hyperscape Capture now lets you scan your own surroundings, which look far better thanks to Gaussian splatting. We are looking forward to Demeo's Dungeons & Dragons game Battlemarks, and to 3D films that hopefully look as good as James Cameron's Avatar: Fire and Ash. You absolutely have to watch the trailer on Quest. We discuss the expected Meta Wearable Device Toolkit, an SDK for iOS and Android that gives access to the microphone, speakers, camera, and tap gestures of Meta's smart glasses. And at the end we philosophize with Michael Abrash and Richard Newcombe about contextual AI and its societal implications for our democracy.
A major step forward is coming to the e-commerce sector.

Payment provider Visa has just unveiled an important update to its Visa Intelligent Commerce platform, with the introduction of Model Context Protocol (MCP) servers. In short, it is a small revolution for developers and businesses that want to integrate AI into their payment solutions.

Simpler software development
First, software development itself is simplified thanks to the MCP protocol. In detail, MCP makes it easier to integrate artificial intelligence agents into Visa's payment network. Essentially, it lets developers connect more quickly to the Visa Intelligent Commerce APIs. Visa promises that this new integration layer makes it possible to go from idea to prototype in a few hours, which will significantly speed up the process of building e-commerce applications. It is a real opportunity for businesses that want to integrate AI without getting lost in technical implementation complexity.

An SDK for integrating AI agents
Of course, the company also provides a developer kit, an SDK, named the Visa Acceptance Agent Toolkit. The Visa Acceptance Agent Toolkit is the key tool for developers who want to work with AI agents without being coding experts. The kit makes it possible to create natural-language workflows, such as automatically generating invoices or consulting financial reports through an AI assistant. It therefore simplifies common administrative tasks and makes it easy to integrate payment functions while using artificial intelligence to improve the user experience.

Driving AI adoption in e-commerce
But in the long run, what does the integration of MCP into the Visa platform tell us? Visa sees this initiative as a way to drive AI adoption in e-commerce, for businesses and customers alike. By making it easier to integrate AI into payment processes, Visa aims to make online commerce more fluid and intuitive, much like what we can already see on platforms such as eBay or Amazon. The ability to use AI to search for products or make purchases is becoming more and more common, and Visa's new solutions could well become essential tools for developers.

ZD Tech is available on every podcast platform. Subscribe!
Hosted by Ausha. Visit ausha.co/politique-de-confidentialite for more information.
Roberto Coviello is an engineer at Meta Reality Labs, where he builds open-source showcases and samples to help push XR development forward. He's also known for his YouTube channel full of in-depth XR tutorials that played a key role in his career.

In this conversation we look at:
- How he started his professional journey as a developer turned content creator, and the path that led him to become a Software Engineer at Meta
- Several SDKs provided by Meta and what he thinks is a must-have for any MR app
- His thoughts on the role of AI for artists and developers
- How he finds the right balance between employment and cultivating his dreams

Subscribe to the XR AI Spotlight weekly newsletter
In this episode, Lois Houston and Nikita Abraham are joined by Principal Instructor Yunus Mohammed to explore Oracle's approach to enterprise AI. The conversation covers the essential components of the Oracle AI stack and how each part, from the foundational infrastructure to business-specific applications, can be leveraged to support AI-driven initiatives. They also delve into Oracle's suite of AI services, including generative AI, language processing, and image recognition. AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hey everyone! In our last episode, we discussed why the decision to buy or build matters in the world of AI deployment. Lois: That's right, Niki. Today is all about the Oracle AI stack and how it empowers not just developers and data scientists, but everyday business users as well. Then we'll spend some time exploring Oracle AI services in detail. 01:00 Nikita: Yunus Mohammed, our Principal Instructor, is back with us today. Hi Yunus! Can you talk about the different layers in Oracle's end-to-end AI approach? Yunus: The first base layer is the foundation of AI infrastructure, the powerful compute and storage layer that enables scalable model training and inferences. Sitting above the infrastructure, we have got the data platform. This is where data is stored, cleaned, and managed. Without a reliable data foundation, AI simply can't perform. So base of AI is the data, and the reliable data gives more support to the AI to perform its job. Then, we have AI and ML services. These provide ready-to-use tools for building, training, and deploying custom machine learning models. Next, to the AI/ML services, we have got generative AI services. This is where Oracle enables advanced language models and agentic AI tools that can generate content, summarize documents, or assist users through chat interfaces. Then, we have the top layer, which is called as the applications, things like Fusion applications or industry specific solutions where AI is embedded directly into business workflows for recommendations, forecasting or customer support. Finally, Oracle integrates with a growing ecosystem of AI partners, allowing organizations to extend and enhance their AI capabilities even further. In short, Oracle doesn't just offer AI as a feature. It delivers it as a full stack capability from infrastructure to the layer of applications. 02:59 Nikita: Ok, I want to get into the core AI services offered by Oracle Cloud Infrastructure. But before we get into the finer details, broadly speaking, how do these services help businesses? 
Yunus: These services make AI accessible, secure, and scalable, enabling businesses to embed intelligence into workflows, improve efficiency, and reduce human effort in repetitive or data-heavy tasks. And the best part is, Oracle makes it easy to consume these through application interfaces, APIs, software development kits (SDKs), and integration with Fusion Applications. So, you can add AI where it matters without needing a data science team to do that work. 03:52 Lois: So, let's get down to it. The first core service is Oracle's Generative AI service. What can you tell us about it? Yunus: This is a fully managed service that allows businesses to tap into the power of large language models. You can work with these models anywhere from scratch up to a well-defined, developed model. You can use these models for a wide range of use cases like summarizing text, generating content, answering questions, or building AI-powered chat interfaces. 04:27 Lois: So, what will I find on the OCI Generative AI Console? Yunus: The OCI Generative AI Console highlights three key components. The first one is the dedicated AI cluster. These are GPU-powered environments used to fine-tune and host your own custom models. It gives you control and performance at scale. Then, the second point is the custom models. You can take a base language model and fine-tune it using your own data, for example, company manuals or HR policies or customer interactions, which are your own personal data. You can use this to create a model that speaks your business language. And last but not the least, the endpoints. These are the interfaces through which your applications connect to the model. Once deployed, your app can query the model securely and at different scales, and you don't need to be a developer to get started. Oracle offers a playground, which is a no-code environment where you can try out models, craft parameters, and test responses interactively. So overall, the generative AI service is designed to make enterprise-grade AI accessible and customizable, fitting directly into business processes, whether you are building a smart assistant or automating the content generation process. 06:00 Lois: The next key service is OCI Generative AI Agents. Can you tell us more about it? Yunus: OCI Generative AI Agents combines a natural language interface with generative AI models and enterprise data stores to answer questions and take actions. The agent remembers the context, uses previous interactions, and retrieves deeper product-specific details. They aren't just static chatbots. They are context aware, grounded in business data, and able to handle multi-turn, follow-up queries with relevant, accurate responses, driving productivity and decision-making across departments like sales, support, or operations. 06:54 Oracle University's Race to Certification 2025 is your ticket to free training and certification in today's hottest tech. Whether you're starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That's education.oracle.com/race-to-certification-2025. 07:37 Nikita: Welcome back! Yunus, let's move on to the OCI Language service. Yunus: OCI Language helps businesses understand and process natural language at scale.
It uses pretrained models, which means they are already trained on large industry data sets and are ready to be used right away without requiring AI expertise. It detects over 100 languages, including English, Japanese, Spanish, and more. This is great for global businesses that receive multilingual inputs from customers. It also identifies sentiment for different aspects of a sentence. For example, in a review like, "The food was great, but the service sucked," OCI Language can tell that food has a positive sentiment while service has a negative one. This is called aspect-based sentiment analysis, and it is more insightful than just labeling the entire text as positive or negative. Then we have got the ability to identify key phrases representing important ideas or subjects. So, it helps in extracting these key phrases, words, or terms that capture the core messages. They help automate tagging, summarizing, or even routing of content like support tickets or emails. In real life, businesses are using this for customer feedback analysis, support ticket routing, social media monitoring, and even regulatory compliance. 09:21 Nikita: That's fantastic. And what about the OCI Speech service? Yunus: OCI Speech is an AI service that transcribes speech to text. Think of it as an AI-powered transcription engine that listens to spoken English, whether in audio or video files, and turns it into usable, searchable, readable text. It provides timestamps, so you know exactly when something was said. A valuable feature for reviewing legal discussions, media footage, or compliance audits. OCI Speech even understands different speakers. You don't need to train this from scratch. It is a pretrained model hosted behind an API. Just send your audio to the service, and you get accurate, timestamped text back in return. 10:17 Lois: I know we also have a service for object detection… called OCI Vision? Yunus: OCI Vision uses pretrained deep learning models to understand and analyze visual content. Just like a human might, you can upload images or videos, and the AI can tell you what is in them and where they might be useful. There are two primary use cases for OCI Vision. One is object detection. Say you have got a red car. OCI Vision is not just identifying that it's a car. It is detecting and labeling parts of the car too, like the bumper, the wheels, the design components. This is critical in industries like manufacturing, retail, or logistics. For example, in quality control, OCI Vision can scan product images to detect missing or defective parts automatically. Then we have got image classification. This is useful in scenarios like automated tagging of photos, managing digital assets, and classifying a scene or the context of a scene. So basically, OCI Vision is fully managed, and no complex model training is required for this particular service. It's available via API. It also supports defining your own custom models for your environment. 11:51 Nikita: And the final service is related to text and called OCI Document Understanding, right? Yunus: So OCI Document Understanding allows businesses to automatically extract structured insights from unstructured documents like invoices, contracts, receipts, and also sometimes resumes, or even business documents. 12:13 Nikita: And how does it work? Yunus: OCI reads the content from the scanned document. The OCR is smarter.
It recognizes both printed and handwritten text, then determines what type of document it is. So text recognition happens first, and then document classification, for example, whether this is a purchase order, a bank statement, or a medical report. If your business handles documents in multiple languages, the AI can also help with language detection, which helps you route or translate that particular document. Many documents contain structured data in table format. Think pricing tables or line items. OCI will help you in extracting these with high accuracy for reporting or feeding into ERP systems. And finally, I would say the key value extraction. It pulls out critical business values like invoice numbers, payment amounts, or customer names from fields that may not always follow a fixed format. So, this service reduces the need for manual review, cuts down processing time, and ensures high accuracy for your system. 13:36 Lois: What are the key takeaways our listeners should walk away with after this episode? Yunus: The first one: Oracle doesn't treat AI as just a standalone tool. Instead, AI is integrated from the ground up, whether you're talking about infrastructure, data platforms, machine learning services, or applications like HCM, ERP, or CX. In the real world, the Oracle AI services prioritize data management, security, and governance, all essential for enterprise AI use cases. So, it is about trust. Can your AI handle sensitive data? Can it comply with regulations? Oracle builds its AI services with a strong foundation in data governance, robust security measures, and tight control over data residency and access. So this makes Oracle AI especially well-suited for industries like health care, finance, logistics, and government, where compliance and control aren't optional. They are critical. 14:44 Nikita: Thank you for another great conversation, Yunus. If you're interested in learning more about the topics we discussed today, head on over to mylearn.oracle.com and search for the AI for You course. Lois: In our next episode, we'll get into Predictive AI, Generative AI, Agentic AI, all with respect to Oracle Fusion Applications. Until then, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 15:10 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
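Aspect-based sentiment analysis, as Yunus describes it for OCI Language above, returns per-aspect labels rather than a single document-level score. Purely as an illustration (this is a hypothetical TypeScript shape, not the actual OCI Language response schema), the result for the restaurant review might look like:

```typescript
// Hypothetical shape of an aspect-based sentiment result -- illustrative only,
// not the actual OCI Language response schema.
interface AspectSentiment {
  aspect: string; // the thing being talked about, e.g. "food"
  sentiment: "positive" | "negative" | "neutral" | "mixed";
  confidence: number; // 0..1
}

interface SentimentAnalysis {
  documentSentiment: "positive" | "negative" | "neutral" | "mixed";
  aspects: AspectSentiment[];
}

// "The food was great, but the service sucked" would come back roughly like this:
const review: SentimentAnalysis = {
  documentSentiment: "mixed",
  aspects: [
    { aspect: "food", sentiment: "positive", confidence: 0.97 },
    { aspect: "service", sentiment: "negative", confidence: 0.95 },
  ],
};

// Routing on aspect-level sentiment is what makes this more useful than one label:
const complaints = review.aspects.filter((a) => a.sentiment === "negative");
console.log(complaints.map((a) => a.aspect)); // ["service"]
```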
Katia, Emmanuel, and Guillaume discuss Java, Kotlin, Quarkus, Hibernate, Spring Boot 4, and artificial intelligence (Nano Banana and Veo 3 models, agentic frameworks, embeddings). They also cover the OWASP vulnerabilities for LLMs, the coding personalities of the different models, Podman vs Docker, and how to modernize legacy projects. Above all, they spend time on Luc Julia's talks and the various counterpoints that created a buzz on social networks.

Recorded on September 12, 2025. Download the episode LesCastCodeurs-Episode-330.mp3 or watch the video on YouTube.

News

Languages

In this video, José details what's new in Java between Java 21 and 25: https://inside.java/2025/08/31/roadto25-java-language/
Overview of what's new in JDK 25: introduction of the new Java language features and upcoming changes [00:02].
Data-oriented programming and pattern matching [00:43]:
- Evolution of pattern matching for deconstructing records [01:22]
- Use of sealed types in switch expressions to improve code readability and robustness [01:47]
- Introduction of unnamed patterns (_) to indicate that a variable is not used [04:47]
- Support for primitive types in instanceof and switch (in preview) [14:02]
Designing Java applications [00:52]:
- Simplification of the main method [21:31]
- Running .java files directly without explicit compilation [22:46]
- Improved import mechanisms [23:41]
- Markdown syntax in Javadoc [27:46]
Immutability and null values [01:08]:
- The problem of observing final fields as null during object construction [28:44]
- JEP 513 to control the call to super() and restrict the use of this in constructors [33:29]
JDK 25 ships on September 16: https://openjdk.org/projects/jdk/25/
- Scoped Values (JEP 505) - a more efficient alternative to ThreadLocal for sharing immutable data between threads
- Structured Concurrency (JEP 506) - treat groups of concurrent tasks as a single unit of work, simplifying thread management
- Compact Object Headers (JEP 519) - final feature that halves the size of object headers (from 128 to 64 bits), saving up to 22% of heap memory
- Flexible Constructor Bodies (JEP 513) - relaxed constructor restrictions, allowing code before the super() or this() call
- Module Import Declarations (JEP 511) - simplified imports that pull in all the public elements of a module in a single declaration
- Compact Source Files (JEP 512) - simpler basic Java programs with instance main methods and no mandatory wrapper class
- Primitive Types in Patterns (JEP 455) - third preview extending pattern matching and instanceof to primitive types in switch and instanceof
- Generational Shenandoah (JEP 521) - the Shenandoah garbage collector goes generational for better performance
- JFR Method Timing & Tracing (JEP 520) - new profiling tooling to measure execution time and trace method calls
- Key Derivation API (JEP 510) - final API for cryptographic key derivation functions, replacing third-party implementations

Improved annotation handling in Kotlin 2.2: https://blog.jetbrains.com/idea/2025/09/improved-annotation-handling-in-kotlin-2-2-less-boilerplate-fewer-surprises/
- Before Kotlin 2.2, annotations on constructor parameters were applied only to the parameter, not to the property or the backing field
- This caused subtle bugs with Spring and JPA, where validation only worked at object creation, not on updates
- The previous workaround required explicitly using @field: for every annotation, producing verbose code
- Kotlin 2.2 introduces a new default behavior that applies annotations to the parameter AND to the property/field automatically
- The code becomes cleaner without the repetitive @field: syntax
- To enable it, add -Xannotation-default-target=param-property to the Gradle compiler options
- IntelliJ IDEA offers a quick-fix to enable this behavior project-wide
- This improvement makes Kotlin integrate more smoothly with major frameworks such as Spring and JPA
- The behavior can be configured to keep the old mode or enable a transitional mode with warnings
- This update is part of a broader initiative to improve the Kotlin + Spring experience

Libraries

Quarkus 3.26 is out, with Hibernate updates and other features: https://quarkus.io/blog/quarkus-3-26-released/
- Update to 3.26.x, as there was a Vert.x regression
- An important milestone towards the 3.27 LTS release planned for late September, which will be based on this version
- Update to Hibernate ORM 7.1, Hibernate Search 8.1, and Hibernate Reactive 3.1
- Support for named persistence units and data sources in Hibernate Reactive
- Offline startup and dialect configuration for Hibernate ORM, even if the database is unreachable
- Revamped HQL console in Dev UI with an integrated Hibernate Assistant feature
- Dev UI capabilities exposed as MCP functions so they can be driven from AI tools
- Automatic refresh of OIDC tokens when REST clients receive a 401 response
- JFR extension to capture runtime data (app name, version, active extensions)
- Gradle bumped to version 9.0 by default, removal of support for legacy config classes

Getting-started guide for Quarkus and the A2A Java SDK 0.3.0 (to let AI agents talk to each other using the latest version of the A2A protocol): https://quarkus.io/blog/quarkus-a2a-java-0-3-0-alpha-release/
- Release of the A2A Java SDK 0.3.0.Alpha1, aligned with the A2A specification v0.3.0
- The A2A protocol is an open standard (Linux Foundation) enabling communication between polyglot AI agents
- Version 0.3.0 is more stable and introduces gRPC support
- General updates: significant changes, improved user experience (client and server side)
- A2A server agents: gRPC support added (in addition to JSON-RPC), HTTP+JSON/REST to come; Quarkus-based implementations (Jakarta alternatives exist); specific dependencies for each transport (e.g. a2a-java-sdk-reference-jsonrpc, a2a-java-sdk-reference-grpc)
- AgentCard: describes the agent's capabilities; must specify the primary endpoint and all supported transports (additionalInterfaces)
- A2A clients: main dependency a2a-java-sdk-client; gRPC support added (in addition to JSON-RPC), HTTP+JSON/REST to come; specific dependency for gRPC: a2a-java-sdk-client-transport-grpc
- Client creation: via ClientBuilder, which automatically selects the transport according to the AgentCard and the client configuration, and lets you specify which transports the client supports (withTransport)

How to generate and edit images in Java with Nano Banana, Google's "Photoshop killer": https://glaforge.dev/posts/2025/09/09/calling-nano-banana-from-java/
- Goal: integrate the Nano Banana model (Gemini 2.5 Flash Image preview) into Java applications
- SDK used: Google's GenAI Java SDK
- Compatibility: supported by ADK for Java; not yet by LangChain4j (output multimodality limitation)
- Nano Banana's capabilities: create new images, modify existing images, combine several images
- Java implementation: which dependency to use, how to authenticate, how to configure the model
- Nature of the model: Nano Banana is a chat model that can return text and an image (not simply an image generator)
- Usage examples: creation via a simple text prompt; modification by passing the existing image (byte array) and the modification instructions (prompt); combination by passing several images (as bytes) and the integration instructions (prompt)
- Key message: all of these features are accessible in Java, no Python required

Generating AI videos with the Veo 3 model, in Java: https://glaforge.dev/posts/2025/09/10/generating-videos-in-java-with-veo3/
- Video generation in Java with Veo 3 (via Google's GenAI Java SDK)
- Veo 3: announced as GA, lower prices, support for the 9:16 format, resolution up to 1080p
- Video creation: from a text prompt or from an existing image
- Two different versions of the model: veo-3.0-generate-001 (higher quality, more expensive, slower) and veo-3.0-fast-generate-001 (lower quality, cheaper, but faster)
Rod Johnson on writing agentic applications in Java more easily than in Python with Embabel: https://medium.com/@springrod/you-can-build-better-ai-agents-in-java-than-python-868eaf008493
- Rod, the father of Spring, rewrites a CrewAI (Python) example that generates a book, using Embabel (Java) to demonstrate Java's superiority
- The application uses several specialized AI agents: a researcher, a book planner, and chapter writers
- The process follows three steps: research the topic, create the outline, write the chapters in parallel, then assemble them
- CrewAI suffers from several problems: heavy configuration, lack of type safety, magic keys in the prompts
- The Embabel version needs less Java code than the original Python and fewer YAML configuration files
- Embabel brings full type safety, eliminating typos in prompts and improving IDE tooling
- Concurrency is better controlled in Java, to avoid hitting the rate limits of LLM APIs
- Spring integration allows simple external configuration of LLM models and hyperparameters
- The Embabel planner automatically determines the execution order of actions based on their required types
- The main argument: the JVM ecosystem offers a better programming model and access to existing business logic than Python
- There are quite a few new agentic frameworks in Java, notably the latest LangChain4j Agentic

Spring launches a series of blog posts on what's new in Spring Boot 4: https://spring.io/blog/2025/09/02/road_to_ga_introduction
- Baseline JDK 17, but rebased on Jakarta 11
- Kotlin 2, Jackson 3, and JUnit 6
- Spring's core resilience features: @ConcurrencyLimit, @Retryable, RetryTemplate
- API versioning in Spring
- HTTP service client improvements
- The state of HTTP clients in Spring
- Introduction of Jackson 3 support in Spring
- Shared consumer - Kafka queues in Spring Kafka
- Spring Boot modularization
- Progressive authorization in Spring Security
- Spring gRPC - a new Spring Boot module
- Null-safe applications with Spring Boot 4
- OpenTelemetry with Spring Boot
- Ahead-of-Time repositories (Part 2)

Web

Semantic search directly in the browser, locally, with EmbeddingGemma and Transformers.js: https://glaforge.dev/posts/2025/09/08/in-browser-semantic-search-with-embeddinggemma/
- EmbeddingGemma: a new embedding model (308M parameters) from Google DeepMind
- Goal: enable semantic search directly in the browser
- Key advantages of client-side AI: privacy (no data sent to a server), lower costs (no expensive GPU servers, static hosting), low latency (instant processing without network round trips), offline operation (possible after the initial model download)
- Core technology: the EmbeddingGemma model (small, performant, multilingual, MRL support to reduce vector size) and HuggingFace's Transformers.js inference engine (which runs AI models in JavaScript in the browser)
- Deployment: static site built with Vite/React/Tailwind CSS, deployed to Firebase Hosting via GitHub Actions
- Model handling: the model files are too heavy for Git; they are downloaded from the HuggingFace Hub during CI/CD
- How the app works: it loads the model, generates embeddings for queries/documents, and computes semantic similarity
- Conclusion: a demonstration of private, inexpensive, serverless semantic search, highlighting the potential of AI embedded in the browser
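As a rough sketch of what that in-browser embedding pipeline looks like with Transformers.js (the model id and documents below are placeholders; see the linked article for the exact EmbeddingGemma checkpoint and options it uses):

```typescript
import { pipeline } from "@huggingface/transformers";

// Load a feature-extraction pipeline once; the model id is a placeholder,
// substitute the ONNX EmbeddingGemma checkpoint referenced in the article.
const embed = await pipeline("feature-extraction", "onnx-community/embeddinggemma-300m-ONNX");

async function embedText(text: string): Promise<number[]> {
  // Mean pooling + normalization gives one unit-length vector per input text.
  const output = await embed(text, { pooling: "mean", normalize: true });
  return Array.from(output.data as Float32Array);
}

// With normalized vectors, cosine similarity reduces to a dot product.
const dot = (a: number[], b: number[]) => a.reduce((s, x, i) => s + x * b[i], 0);

const docs = ["How to reset my password", "Shipping times for Europe", "Refund policy"];
const docVectors = await Promise.all(docs.map(embedText));

const queryVector = await embedText("I forgot my login credentials");
const ranked = docs
  .map((doc, i) => ({ doc, score: dot(queryVector, docVectors[i]) }))
  .sort((a, b) => b.score - a.score);

console.log(ranked[0].doc); // most semantically similar document
```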
Data and Artificial Intelligence

Docker launches Cagent, a kind of multi-agent AI framework that uses external LLMs, Docker Model Runner models, and the Docker MCP Toolkit. It proposes a YAML format to describe the agents of a multi-agent system: https://github.com/docker/cagent
- "Prompt-driven" agents (no code) and a structure describing how they are deployed
- Not clear how they are invoked other than from the cagent command line
- Built by David Gageot

OWASP describes excessive agency in LLMs as a vulnerability: https://genai.owasp.org/llmrisk2023-24/llm08-excessive-agency/
- Excessive agency is the vulnerability that lets LLM systems perform damaging actions based on unexpected or ambiguous outputs
- It stems from three main causes: excessive functionality, excessive permissions, or excessive autonomy of LLM agents
- Excessive functionality includes access to plugins that offer more capabilities than necessary, such as a read plugin that can also modify or delete
- Excessive permissions show up when a plugin accesses systems with overly elevated rights, for example read access that also includes write access
- Excessive autonomy occurs when the system performs critical actions without prior human validation
- A typical attack scenario: a personal assistant with email access can be manipulated through prompt injection into sending spam from the user's mailbox
- Prevention means strictly limiting plugins to the minimal functions needed for the intended operation
- Avoid open-ended functions such as "run a shell command" in favor of more granular, specific tools (see the sketch after this list)
- Applying the principle of least privilege is crucial: each plugin should have only the minimal permissions required
- Human-in-the-loop control remains essential to validate high-impact actions before they are executed

Launch of the MCP Registry, a sort of official meta-directory for referencing MCP servers: https://www.marktechpost.com/2025/09/09/mcp-team-launches-the-preview-version-of-the-mcp-registry-a-federated-discovery-layer-for-enterprise-ai/
- MCP Registry: a federated discovery layer for enterprise AI
- Works like DNS for AI context, enabling discovery of public or private MCP servers
- Federated model: avoids the security and compliance risks of a monolithic registry; allows private sub-registries while keeping an upstream source of truth
- Enterprise benefits: secure internal discovery, centralized governance of external servers, reduced context sprawl, support for hybrid AI agents (private/public data)
- Open source project, currently in preview
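To make the "granular tools over open-ended functions" recommendation concrete, here is a minimal, framework-agnostic TypeScript sketch (all names are hypothetical) of a narrowly scoped, read-only tool an agent could be given instead of a generic "run a shell command" capability:

```typescript
import { promises as fs } from "node:fs";
import path from "node:path";

// Hypothetical tool-definition shape; real agent frameworks (MCP, LangChain4j, etc.)
// have their own registration APIs, but the least-privilege idea is the same.
interface ToolDef<A, R> {
  name: string;
  description: string;
  handler: (args: A) => Promise<R>;
}

// Only this directory is readable by the agent -- read-only, and nothing outside it.
const ALLOWED_ROOT = path.resolve("./knowledge-base");

export const readKnowledgeFile: ToolDef<{ relativePath: string }, string> = {
  name: "read_knowledge_file",
  description: "Read a single file from the knowledge base (read-only).",
  handler: async ({ relativePath }) => {
    const resolved = path.resolve(ALLOWED_ROOT, relativePath);
    // Reject path traversal: the resolved path must stay inside the allowed root.
    if (!resolved.startsWith(ALLOWED_ROOT + path.sep)) {
      throw new Error("Access outside the knowledge base is not allowed");
    }
    return fs.readFile(resolved, "utf8");
  },
};

// What NOT to expose: a tool like { name: "run_shell_command", handler: cmd => exec(cmd) }
// hands the model excessive functionality, permissions, and autonomy all at once.
```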
Launch of the MCP Registry, a sort of official meta-directory for referencing MCP servers https://www.marktechpost.com/2025/09/09/mcp-team-launches-the-preview-version-of-the-mcp-registry-a-federated-discovery-layer-for-enterprise-ai/
The MCP Registry is a federated discovery layer for enterprise AI. It works like DNS for AI context, enabling the discovery of public or private MCP servers. The federated model avoids the security and compliance risks of a monolithic registry, and allows private sub-registries while keeping an "upstream" source of truth. Benefits for enterprises: secure internal discovery, centralized governance of external servers, reduced context sprawl, and support for hybrid AI agents that mix private and public data. It is an open source project, currently in preview. Official blog post: https://blog.modelcontextprotocol.io/posts/2025-09-08-mcp-registry-preview/

Exploring the internals of the SQL Server transaction log https://debezium.io/blog/2025/09/08/sqlserver-tx-log/
This is an article for the hardcore readers who want to know how SQL Server works on the inside. Debezium currently uses SQL Server CDC change tables with periodic polling, and the article explores the possibility of parsing the transaction log directly to improve performance. The transaction log is divided into Virtual Log Files (VLFs) used in a circular fashion. Each VLF contains blocks (512 B to 60 KB) that hold the transaction records, and each record has a unique Log Sequence Number (LSN) that identifies it precisely. Data is stored in 8 KB pages with a 96-byte header and an offset array, and tables are organized into partitions and allocation units to manage disk space. The DBCC utility lets you explore the internal structure of pages and their contents. This understanding lays the groundwork for programmatically parsing the transaction log in a follow-up article.

Tooling

The coding personalities of the different LLMs https://www.sonarsource.com/blog/the-coding-personalities-of-leading-llms-gpt-5-update/
GPT-5 minimal does not dethrone Claude Sonnet 4 as the leader in functional performance, despite its 75% success rate. GPT-5 generates extremely verbose code: 490,000 lines versus 370,000 for Claude Sonnet 4 on the same tasks, and the cyclomatic and cognitive complexity of its code is dramatically higher than that of all other models. GPT-5 introduces 3.90 issues per successful task versus only 2.11 for Claude Sonnet 4. Its strong point is security, with only 0.12 vulnerabilities per 1,000 lines of code. Its major weakness is a very high density of code smells (25.28 per 1,000 lines), which hurts maintainability. GPT-5 produces 12% of its issues around cognitive complexity, the highest rate of any model, and shows a tendency toward fundamental logic errors, with 24% of bugs being "control-flow mistakes". Classic vulnerabilities such as injection and path traversal flaws reappear. The conclusion: stronger governance, with mandatory static analysis, is needed to manage the complexity of generated code.

Why I ditched Docker for Podman https://codesmash.dev/why-i-ditched-docker-for-podman-and-you-should-too
The Docker problem: the persistent dockerd daemon runs with root privileges, posing security risks (many CVEs are cited) and consuming resources unnecessarily. The Podman answer: daemonless, with no persistent background process; containers run as child processes of the Podman command, under the user's own privileges. Stronger security: a reduced attack surface, since a container escape compromises an unprivileged user on the host rather than the whole system, plus rootless mode. Better reliability: no single point of failure, and one container crashing does not affect the others. Fewer resources: no always-on daemon, so less memory and CPU. Key Podman features: systemd integration, with automatic generation of systemd unit files to manage containers as standard Linux services; Kubernetes alignment, with native pod support and the ability to generate Kubernetes YAML directly (podman generate kube), which eases local development for K8s.
Unix philosophy: Podman focuses on running containers and delegates specialized tasks to dedicated tools (e.g., Buildah for building images, Skopeo for managing them). Easy migration: a Docker-compatible CLI, since podman uses the same commands as docker (alias docker=podman works) and existing Dockerfiles can be used as-is. Included improvements: security by default (privileged ports in rootless mode), better handling of volume permissions, an optional Docker-compatible API, and the option to convert Docker Compose files to Kubernetes YAML. Production benefits: improved security and cleaner resource usage. Podman represents a more secure evolution, better aligned with modern Linux management and container deployment practices. Practical guide (FastAPI example): the Dockerfile does not change; podman build and podman run directly replace the Docker commands; production deployment goes through systemd; multi-service applications are handled with Podman "pods"; Docker Compose compatibility comes via podman-compose or kompose.

Improved detection of vulnerable APIs in JetBrains IDEs and Qodana - https://blog.jetbrains.com/idea/2025/09/enhanced-vulnerable-api-detection-in-jetbrains-ides-and-qodana/
JetBrains is partnering with Mend.io to strengthen code security in its tools. The Package Checker plugin gets new, enriched data on vulnerable APIs, and call-graph analysis covers more public methods of open-source libraries. Java, Kotlin, C#, JavaScript, TypeScript, and Python are supported for vulnerability detection. Inspections are enabled via Settings > Editor > Inspections by searching for "Vulnerable API". Vulnerable methods are highlighted automatically, with details of the flaws shown on hover. A context action navigates directly to the problematic dependency declaration, and Alt+Enter on the dependency updates it to an unaffected version. A dedicated "Vulnerable Dependencies" window shows the overall vulnerability status of the project.

Methodologies

The results of the Stack Overflow survey on AI usage in code https://medium.com/@amareshadak/stack-overflow-just-exposed-the-ugly-truth-about-ai-coding-tools-b4f7b5992191
84% of developers use AI daily, but 46% do not trust the results, and only 3.1% "highly trust" generated code. 66% are frustrated by AI solutions that are "almost right", and 45% say that debugging AI code takes more time than writing it themselves. Senior developers (10+ years) trust AI less (2.6%) than beginners (6.1%), creating a dangerous knowledge gap. Western countries show less trust (Germany 22%, UK 23%, USA 28%) than India (56%), and the people who build AI tools trust them less. 77% of professional developers reject natural-language programming; only 12% actually use it. When AI fails, 75% turn to humans, and 35% of Stack Overflow visits now concern AI-related problems. 69% report personal productivity gains, but only 17% see improved team collaboration. Hidden costs: verification time, explaining AI code to teammates, refactoring, and constant cognitive load. Human platforms still dominate for solving AI problems: Stack Overflow (84%), GitHub (67%), YouTube (61%).
The future points toward "augmented development", where AI becomes one tool among others and requires transparency and management of uncertainty.

Open source mentorship and community challenges, from the Microcks folks https://microcks.io/blog/beyond-code-open-source-mentorship/
Microcks suffers from the "silent users" syndrome: people benefit from the project without contributing. Despite thousands of downloads and growing adoption, community engagement remains low. This lack of interaction creates sustainability challenges and limits the project's innovation, and the maintainers end up developing in a vacuum without feedback from real users. Contributing does not require writing code: documentation, sharing your experience, and reporting bugs are enough, and simply talking about a project you like to people around you is also very useful. Microcks also asks some specific questions in the blog post, so if you use it, go take a look. The success of open source depends on turning users into genuine community partners. This is a fairly common pattern, I find: the talkers-to-lurkers ratio is very small, which gives outsized weight to the few loud voices.

Modernizing legacy systems is not just about tech https://blog.scottlogic.com/2025/08/27/holistic-approach-successful-legacy-modernisation.html
An article that takes a step back from legacy system modernization. Legacy modernization projects require a holistic vision beyond a purely technological focus. The business drivers differ from greenfield projects: cost reduction and risk mitigation rather than revenue generation. The current state is harder to map, with many dependencies and risks of breakage. Collaboration between architects, business analysts, and UX designers is essential from the discovery phase onward. A three-dimensional approach is mandatory: People, Process, and Technology (like a game of 3D chess). Leadership must create the space needed for discovery and planning rather than rushing the team, and communicate in business terms rather than technical ones to every level of the organization. Up-front planning is essential, contrary to received ideas about agility. The optimal sequencing is often non-obvious and requires in-depth analysis of the interdependencies. Project phases aligned with business outcomes allow agility within each phase.

Security

Cyberattack on the Natural History Museum https://www.franceinfo.fr/internet/securite-sur-internet/cyberattaques/le-museum-nati[…]e-d-une-cyberattaque-severe-une-plainte-deposee_7430356.html

Massive compromise of popular npm packages by crypto malware https://www.aikido.dev/blog/npm-debug-and-chalk-packages-compromised
18 very popular npm packages were compromised on September 8, 2025, including chalk, debug, and ansi-styles, with more than 2 billion combined weekly downloads; duckdb was later added to the list. Malicious code was injected that silently intercepts crypto and web3 activity in users' browsers. The malware manipulates wallet interactions and redirects payments to attacker-controlled accounts without any obvious signs. It injects itself into critical functions such as fetch, XMLHttpRequest, and wallet APIs (window.ethereum, Solana) to intercept traffic, automatically detecting and replacing crypto addresses across multiple blockchains (Ethereum, Bitcoin, Solana, Tron, Litecoin, Bitcoin Cash). Transactions are modified in the background even though the user interface looks correct and legitimate, and "look-alike" addresses chosen by string matching make the swaps harder to spot. The maintainer was compromised by a phishing email from the fake domain support@npmjs.help, registered 3 days before the attack, asking him to renew his two-factor authentication after a year. Aikido alerted the maintainer via Bluesky; he confirmed the compromise and started cleaning up the packages. It is a sophisticated attack operating at several levels: web content, API calls, and manipulation of transaction signatures.
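As a purely illustrative, benign sketch of the interception pattern described above (not the actual payload), this is roughly what it looks like when a compromised dependency wraps a global browser API at load time; the demo below only logs, where the real malware silently rewrote addresses.

```typescript
// Benign demo of the monkey-patching technique: it only logs outgoing requests.
// The real payload would parse bodies here and swap wallet addresses before forwarding.
const originalFetch = window.fetch.bind(window);

window.fetch = async (input: RequestInfo | URL, init?: RequestInit): Promise<Response> => {
  const url =
    typeof input === "string" ? input : input instanceof URL ? input.href : input.url;
  console.debug("[demo] outgoing request:", url); // a real attack would stay silent
  return originalFetch(input, init);
};
```

Nothing in the page itself signals that fetch has been replaced, which is why defenses have to sit upstream of the browser, for example pinning dependency versions and hardening maintainer accounts against the kind of phishing described above.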
Video game anti-cheats: a major security flaw? - https://tferdinand.net/jeux-video-et-si-votre-anti-cheat-etait-la-plus-grosse-faille/
Modern anti-cheats install themselves at Ring 0 (the system kernel) with maximum privileges. They get the same level of access as professional antivirus products, but without audits or certification, and some exploit Secure Boot to load before the operating system. Supply-chain risk: the APT41 group has already compromised games such as League of Legends, and an attacker who got in could disable security solutions and stay invisible. There is also a stability threat: one mistake can prevent the system from booting (see CrowdStrike). Different anti-cheats can conflict and block each other, and usage data is monitored in real time under the pretext of fighting cheating. A dangerous drift, according to the author: game companies are getting EDR-level access. The alternatives are limited: cloud gaming or sandboxing, with an impact on performance. So pay attention to the games your kids install!

Law, society and organization

Luc Julia at the French Senate. Monsieur Phi reacts and publishes the video "Luc Julia au Sénat : autopsie d'un grand N'IMPORTE QUOI" https://www.youtube.com/watch?v=e5kDHL-nnh4
Also available as a 20-minute podcast, released at the same time and about his Devoxx talk https://www.youtube.com/watch?v=Q0gvaIZz1dM
Le lab IA - Jérôme Fortias - "Et si Luc Julia avait raison" https://www.youtube.com/watch?v=KScI5PkCIaE
"Luc Julia au Sénat" https://www.youtube.com/watch?v=UjBZaKcTeIY
"Luc Julia se défend" https://www.youtube.com/watch?v=DZmxa7jJ8sI
"Intelligence artificielle : catastrophe imminente ? - Luc Julia vs Maxime Fournes" (Tech and Co) https://www.youtube.com/watch?v=sCNqGt7yIjo
Monsieur Phi vs Luc Julia (clickbait title) https://www.youtube.com/watch?v=xKeFsOceT44
La tronche en biais https://www.youtube.com/live/zFwLAOgY0Wc

Conferences

The list of conferences comes from Developers Conferences Agenda/List by Aurélie Vache and contributors:
12 September 2025: Agile Pays Basque 2025 - Bidart (France)
15 September 2025: Agile Tour Montpellier - Montpellier (France)
18-19 September 2025: API Platform Conference - Lille (France) & Online
22-24 September 2025: Kernel Recipes - Paris (France)
22-27 September 2025: La Mélée Numérique - Toulouse (France)
23 September 2025: OWASP AppSec France 2025 - Paris (France)
23-24 September 2025: AI Engineer Paris - Paris (France)
25 September 2025: Agile Game Toulouse - Toulouse (France)
25-26 September 2025: Paris Web 2025 - Paris (France)
30 September-1 October 2025: PyData Paris 2025 - Paris (France)
2 October 2025: Nantes Craft - Nantes (France)
2-3 October 2025: Volcamp - Clermont-Ferrand (France)
3 October 2025: DevFest Perros-Guirec 2025 - Perros-Guirec (France)
6-7 October 2025: Swift Connection 2025 - Paris (France)
6-10 October 2025: Devoxx Belgium - Antwerp (Belgium)
7 October 2025: BSides Mulhouse - Mulhouse (France)
7-8 October 2025: Agile en Seine - Issy-les-Moulineaux (France)
8-10 October 2025: SIG 2025 - Paris (France) & Online
9 October 2025: DevCon #25: quantum computing - Paris (France)
9-10 October 2025: Forum PHP 2025 - Marne-la-Vallée (France)
9-10 October 2025: EuroRust 2025 - Paris (France)
16 October 2025: PlatformCon25 Live Day Paris - Paris (France)
16 October 2025: Power 365 - 2025 - Lille (France)
16-17 October 2025: DevFest Nantes - Nantes (France)
17 October 2025: Sylius Con 2025 - Lyon (France)
17 October 2025: ScalaIO 2025 - Paris (France)
17-19 October 2025: OpenInfra Summit Europe - Paris (France)
20 October 2025: Codeurs en Seine - Rouen (France)
23 October 2025: Cloud Nord - Lille (France)
30-31 October 2025: Agile Tour Bordeaux 2025 - Bordeaux (France)
30-31 October 2025: Agile Tour Nantais 2025 - Nantes (France)
30 October-2 November 2025: PyConFR 2025 - Lyon (France)
4-7 November 2025: NewCrafts 2025 - Paris (France)
5-6 November 2025: Tech Show Paris - Paris (France)
5-6 November 2025: Red Hat Summit: Connect Paris 2025 - Paris (France)
6 November 2025: dotAI 2025 - Paris (France)
6 November 2025: Agile Tour Aix-Marseille 2025 - Gardanne (France)
7 November 2025: BDX I/O - Bordeaux (France)
12-14 November 2025: Devoxx Morocco - Marrakech (Morocco)
13 November 2025: DevFest Toulouse - Toulouse (France)
15-16 November 2025: Capitole du Libre - Toulouse (France)
19 November 2025: SREday Paris 2025 Q4 - Paris (France)
19-21 November 2025: Agile Grenoble - Grenoble (France)
20 November 2025: OVHcloud Summit - Paris (France)
21 November 2025: DevFest Paris 2025 - Paris (France)
27 November 2025: DevFest Strasbourg 2025 - Strasbourg (France)
28 November 2025: DevFest Lyon - Lyon (France)
1-2 December 2025: Tech Rocks Summit 2025 - Paris (France)
4-5 December 2025: Agile Tour Rennes - Rennes (France)
5 December 2025: DevFest Dijon 2025 - Dijon (France)
9-11 December 2025: APIdays Paris - Paris (France)
9-11 December 2025: Green IO Paris - Paris (France)
10-11 December 2025: Devops REX - Paris (France)
10-11 December 2025: Open Source Experience - Paris (France)
11 December 2025: Normandie.ai 2025 - Rouen (France)
14-17 January 2026: SnowCamp 2026 - Grenoble (France)
2-6 February 2026: Web Days Convention - Aix-en-Provence (France)
3 February 2026: Cloud Native Days France 2026 - Paris (France)
12-13 February 2026: Touraine Tech #26 - Tours (France)
22-24 April 2026: Devoxx France 2026 - Paris (France)
23-25 April 2026: Devoxx Greece - Athens (Greece)
17 June 2026: Devoxx Poland - Krakow (Poland)
4 September 2026: JUG Summer Camp 2026 - La Rochelle (France)

Contact us

To react to this episode, come and discuss it in the Google group https://groups.google.com/group/lescastcodeurs
Contact us via X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com
Submit a crowdcast or a crowdquestion
Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs
All episodes and all the info at https://lescastcodeurs.com/
PHI Studio sponsored today's episode to highlight that they are expanding their location-based entertainment distribution network to the United States with their EXP Rosemont location in the greater Chicago, IL area, which opens to the public on September 26, 2025. They will be launching with a couple of Excurio pieces, including The Horizon of Khufu and Life Chronicles, which feature large-scale, free-roaming VR guided tours that I've covered previously in episodes #1430, #1431, and #1588. Both Excurio and PHI Studio are interested in collaborating with creators who want to create large-scale LBE experiences that could draw 100-150 people per hour, and you can reach out to Fabian Barati and/or Julie Tremblay on LinkedIn. Excurio will be making their tools and SDK available to third-party developers to expand the number of content producers creating this type of large-scale work, and PHI Studio continues to do co-productions across a wide range of formats and throughput scales. I'm excited to see PHI Studio continue to build out their independent distribution network across Canada and North America as they continue to produce and distribute their own experiences as well as the best large-scale, free-roaming experiences from Excurio. EXP Rosemont will be launching with a couple of Excurio pieces, but I expect them to eventually distribute some of their own large-scale VR and non-VR immersive works as well. PHI Studio continues to build out their own independent distribution networks, which will provide new outlets and opportunities for immersive stories that have featured on the festival circuit to have a home beyond this more insular XR industry exhibition network. Not all projects will be a good fit for this high-throughput format, but the revenue generated will help support their other, more experimental efforts that are helping to push the boundaries of the medium. Look for my more in-depth coverage of Blur coming out here within the next couple of weeks; it was my personal favorite from Venice Immersive and one of the hottest tickets at this year's festival. Thanks again to PHI Studio for sponsoring this episode, and keep an eye on this new location in the greater Chicago area (and apparently only a 10-minute ride from O'Hare Airport if you happen to have an extended layover). I'll be diving into my 30+ hours of coverage from Venice Immersive within the next couple of weeks, likely after I return from Meta Connect. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality
Improved failed renewal recovery, smarter SaaS emails, and a clearer signup experience for both makers and buyers make it easier to build trust, sell software, and stay top of mind....
In this special Two and a Half Gamers episode, Matej and Felix sit down with Zino Rost van Tonningen (TyrAds) to deep dive into the history, present, and future of rewarded monetization & offerwalls.
Key insights:
History of rewarded ads: Early "incent installs" → rank-boosting campaigns (TapJoy, FreeMyApps). Multi-reward systems (TapJoy, IronSource, Fyber). Misplay breakthrough: timer-based playtime rewards + personalization. AdJoe & others scaled Misplay's model into SDK solutions.
The shift to personalization: Old offerwalls = one-size-fits-all. New generation = hyper-personalized rewards per user. Use of media source data (Google, Unity, AppLovin, etc.) to adapt rewards based on traffic quality.
Publisher perspective: Offerwalls can contribute 5–30% of game revenue depending on genre. Biggest impact: retaining non-payers & dolphins by giving them an alternative to IAP. Integration fights today echo old mediation wars (bonuses, rev guarantees, exclusivity deals).
Best practices for choosing an offerwall partner: LiveOps environment — events, hot deals, timed offers. Transparency — explain revenue spikes/drops & media source impact. Personalization — reward scaling, segmentation by user type & UA source. UI/UX — aesthetics matter; no more "Windows 95" offerwalls.
Zino's TyrAds SDK v3.0: One-time integration, no updates needed. Customizable design to match game branding. Hyper-personalized rewards, dynamic leveling systems. LiveOps events triggered per user (push, in-app messages).
Takeaway: Rewarded monetization has entered its 4th generation: hyper-personalized, data-driven, and LiveOps-powered.
https://tyrads.com/
Get our MERCH NOW: 25gamers.com/shop
---------------------------------------
This is no BS gaming podcast 2.5 gamers session. Sharing actionable insights, dropping knowledge from our day-to-day User Acquisition, Game Design, and Ad monetization jobs. We are definitely not discussing the latest industry news, but having so much fun! Let's not forget this is a 4 a.m. conference discussion vibe, so let's not take it too seriously.
Panelists: Felix Braberg, Matej Lancaric
Special guest: Zino Rost van Tonningen
https://www.linkedin.com/in/rovato/
zino@tyrads.com
Join our slack channel here: https://join.slack.com/t/two-and-half-gamers/shared_invite/zt-2um8eguhf-c~H9idcxM271mnPzdWbipg
Chapters
00:00 Introduction to Rewarded User Acquisition
04:30 The Evolution of Rewarded Monetization
07:10 The Shift from Incentivized Installs to Quality KPIs
09:51 Innovations in Rewarded Advertising: Multi-Reward and Playtime Solutions
12:33 The Role of Personalization in Rewarded Monetization
14:56 Challenges in Current Rewarded Solutions
17:48 Evaluating Monetization Solutions: Key Considerations
20:15 The Importance of LiveOps in Engagement
23:06 Transparency and Optimization in Offer Walls
28:34 Differentiating Offer Walls for Monetization
30:44 The Importance of Data in Monetization Solutions
31:28 Personalization and User Engagement in Offer Walls
33:18 SDK Evolution: From Version 1 to Hyper-Personalization
36:14 Leveraging Machine Learning for Offer Wall Optimization
40:14 Engaging Users with LiveOps and Hot Deals
44:01 Dynamic Leveling Systems for Enhanced User Experience
46:58 Criteria for Effective Offer Wall Implementation
48:47 Revenue Impact and Client Engagement
---------------------------------------
Matej Lancaric
User Acquisition & Creatives Consultant
https://lancaric.me
Felix Braberg
Ad monetization consultant
https://www.felixbraberg.com
Zino Rost van Tonningen
CEO of TyrAds
https://www.linkedin.com/in/rovato/
zino@tyrads.com
---------------------------------------
Please share the podcast with your industry friends, dogs & cats. Especially cats! They love it!
Hit the Subscribe button on YouTube, Spotify, and Apple!
Please share feedback and comments - matej@lancaric.me
This week's episode is packed with big updates in the React Native world—new tools, major releases, and even a glimpse into the future of the framework.
⚛️ React Native Radar:
Maestro 2.0 released – faster, more powerful mobile testing
Audio support updates from Software Mansion
LegendList 2 brings better list performance
Reanimated 4 stable – the next step for animations in RN
Nitro Fetch – the network layer gets an upgrade
Shopify migrates fully to the New Architecture
Module Federation for React Native apps
Expo Launch – a new way to get apps into the store faster
New GlassEffect module in Expo SDK
React Native 0.81 – Android 16 support, faster iOS builds, SafeAreaView changes
Expo SDK 54 beta now available
RFC0929 – removal of the legacy architecture officially on the way
Stainless founder Alex Rattray joins a16z partner Jennifer Li to talk about the future of APIs, SDKs, and the rise of MCP (Model Context Protocol). Drawing on his experience at Stripe—where he helped redesign API docs and built code-generation systems—Alex explains why the SDK is the API for most developers, and why high-quality, idiomatic libraries are essential not just for humans, but now for AI agents as well.They dive into:The evolution of SDK generation and lessons from building at scale inside Stripe.Why MCP reframes APIs as interfaces for large language models.The challenges of designing tools and docs for both developers and AI agents.How context limits, dynamic tool generation, and documentation shape agent usability.The future of developer platforms in an era where “every company is an API company.”Timecodes: 0:00 – Introduction: APIs as the Dendrites of the Internet1:49 – Building API Platforms: Lessons from Stripe3:03 – SDKs: The Developer's Interface6:16 – The MCP Model: APIs for AI Agents9:23 – Designing for LLMs and AI Users13:08 – Solving Context Window Challenges16:57 – The Importance of Strongly Typed SDKs21:07 – The Future of API and Agent Experience24:45 – Lessons from Leading API Companies26:14 – Outro and DisclaimersResources: Find Alex on X: https://x.com/rattrayalexFind Jennifer on X: https://x.com/JenniferHliStay Updated: Let us know what you think: https://ratethispodcast.com/a16zFind a16z on Twitter: https://twitter.com/a16zFind a16z on LinkedIn: https://www.linkedin.com/company/a16zSubscribe on your favorite podcast app: https://a16z.simplecast.com/Follow our host: https://x.com/eriktorenbergPlease note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
We discuss Meta's next steps for smart glasses, including Meta AI gaining calendar access and Meta Connect's agenda essentially confirming a smart glasses SDK; Wist's Minority Report-style memory replay on Apple Vision Pro and Quest; a surge in Quest 3 and 3S usage on SteamVR and whether sales and the Xbox Edition drove it; how Windows MR headsets are being revived by the free Oasis SteamVR driver, now auto-installed by the SteamVR beta; Valve bringing Steam Link to Pico headsets; and whether Valve's newly registered Steam Frame trademark points to its next headset.
Solfate Podcast - Interviews with blockchain founders/builders on Solana
A conversation with Robin, CEO of Raiku, about Raiku's approach to guaranteed transaction inclusion on Solana.
In a time when the world is run by data and real-time actions, edge computing is quickly becoming a must-have in enterprise technology. In this episode of the Tech Transformed podcast, host Shubhangi Dua, a podcast producer and B2B tech journalist, discusses the complexities of this distributed future with guest Dmitry Panenkov, Founder and CEO of emma. The conversation dives into how latency is the driving force behind edge adoption. Applications like autonomous vehicles and real-time analytics cannot afford to wait on a round trip to a centralised data centre; they need to compute where the data is generated. Rather than viewing edge as a rival to the cloud, the discussion highlights it as a natural extension. Edge environments bring speed, resilience and data control, all necessary capabilities for modern applications.
Adopting Edge Computing
For organisations looking to adopt edge computing, this episode lays out a practical step-by-step approach. The skills necessary in multi-cloud environments – automation, infrastructure as code, and observability – translate well to edge deployments. These capabilities are essential for managing the unique challenges of edge devices, which may be disconnected, have lower power, or be located in hard-to-reach areas. Without this level of operational maturity, Panenkov warns of a "zombie apocalypse" of unmanaged devices.
Simplifying Complexity
Managing different APIs, SDKs, and vendor lock-ins across a distributed network can be a challenging task, and this is where platforms like emma become crucial. Alluding to emma's mission, Panenkov explains, "We're building a unified platform that simplifies the way people interact with different cloud and computer environments, whether these are in a public setting or private data centres or even at the edge." Overall, emma creates a unified API layer and user interface, which simplifies the complexity. It helps businesses manage, automate, and scale their workloads from a single vantage point and reduces the burden on IT teams. Reducing the need for a large team of highly skilled professionals also leads to substantial cost savings: emma's customers have seen their cloud bills go down significantly and have been able to roll out updates much faster using the platform.
Takeaways
Edge computing is becoming a reality for more organisations.
Latency-sensitive applications drive the need for edge computing.
Real-time analytics and industry automation benefit from edge computing.
Edge computing enhances resilience, cost efficiency, and data sovereignty.
Integrating edge into cloud strategies requires automation and observability.
Maturity in operational practices, like automation and observability, is essential for...
Sam Klehr is Global Head of Sales and Business Development at Chorus One, a leading institutional staking infrastructure provider founded in 2018. Why you should listen Chorus One operates secure, enterprise-grade validator infrastructure across 50–60+ Proof-of-Stake (PoS) blockchain networks—ranging from Ethereum, Solana, Cosmos, Avalanche, and Near to newer entrants like TON and Stacks—positioning itself as one of the largest staking service providers globally. The company offers a suite of services tailored to institutions, exchanges, wallets, foundations, and private investors, including whitelabel validators, ETH staking vaults (OPUS), staking SDKs, and API-based rewards reporting—all backed by ISO 27001-level security, robust infrastructure, and slashing protection mechanisms. Chorus One's value proposition centers on reliability, transparency, and ecosystem engagement. They emphasize continuous uptime, redundancy, and security through custom infrastructure—including hardware security modules and geographically distributed server clusters—thanks in part to partnerships like the one with DataPacket. The firm also provides a $250,000 delegator protection pool, refreshed quarterly, to safeguard user assets. Beyond staking services, Chorus One is deeply involved in ecosystem development, offering research and MEV insights, managing an investment arm (Chorus Ventures), and pioneering institutional solutions like TON Pool (for streamlined TON staking) and support for Bitcoin Layer‑2 protocol Stacks. Supporting links Fidelity Crypto Careers Chorus One Andy on Twitter Brave New Coin on Twitter Brave New Coin If you enjoyed the show please subscribe to the Crypto Conversation and give us a 5-star rating and a positive review in whatever podcast app you are using.
Welcome to a special episode from Samsara Beyond 25 in San Diego, where FreightWaves' Thomas Wasson explores the future of fleet technology and road safety. In this episode, you'll hear from Frank Kopas, Head of Go-to-Market Strategy at Next Billion AI, a Singapore-headquartered tech company providing API and SDKs for mapping, routing, and navigation. Learn how they tackle complex dynamic route optimization for entire fleets, incorporating real-time events to deliver more accurate ETAs, and navigate unique global routing challenges, from scooters in Asia to hazmat routes. Frank also touches on the role of telematics data and the future of "Math AI" in optimizing routes based on historical data and dispatcher interactions. You'll also hear from Peter Goldwasser, Executive Director of Together for Safer Roads, an international road safety NGO formed by major fleets and tech companies like Samsara. Discover their work on leveraging technology for safer roads, including programs like the "Truck of the Future" for reducing blind zones. Peter shares key insights from their report with Samsara on in-cab cameras, emphasizing their use as a beneficial tool for training and exculpatory evidence to increase adoption. Plus, get a glimpse into new research on distracted driving and the importance of employee incentive programs based on collected data, which can improve both safety and driver retention. Tune in to see how these innovations are driving a more efficient, productive, and safer future for fleet operations! Follow the Loaded and Rolling Podcast Other FreightWaves Shows Learn more about your ad choices. Visit megaphone.fm/adchoices
Great developer experience isn't just about clean docs or helpful error messages—it's about intentionally delighting your user at every step. In this episode of Convergence.fm, host Ashok Sivanand is joined by Kenneth Auchenberg—former product leader at Microsoft and Stripe—for a masterclass on what it really takes to design and scale developer-centric platforms. The Convergence.fm podcast team is taking a break in the month of August, but we'll be back with new episodes in the fall. Until then, Ashok wants to share one of his favorite episodes. We'll be back in September with a new set of episodes on fostering engaged teams who ship delightful products. Thanks for watching and listening. This episode originally aired June 24th, 2024 Kenneth helped shape Visual Studio Code and later played a key role in defining Stripe's gold-standard API experience. In this conversation, he breaks down the building blocks of DevEx success—from friction logging and human-centered design to measuring satisfaction and optimizing for the long tail of developers. They explore the differences between platform and infrastructure businesses, explain why most companies aren't ready to be platforms, and walk through frameworks for product metrics that matter. Whether you're designing your first SDK or scaling a full-fledged platform, you'll leave with actionable insights for making developers love your product. Unlock the full potential of your product team with Integral's player coaches, experts in lean, human-centered design. Visit integral.io/convergence for a free Product Success Lab workshop to gain clarity and confidence in tackling any product design or engineering challenge. Inside the episode… What Stripe got right about developer experience The difference between DevRel and DevEx How to test and measure developer delight When to evolve from infrastructure to platform Why great DevEx starts with product-market fit Mentioned in this episode… Stripe Microsoft / VS Code GitHub AWS Marketplace Shopify Superbase Recent.dev Subscribe to the Convergence podcast wherever you get podcasts including video episodes on YouTube at youtube.com/@convergencefmpodcast Learn something? Give us a 5 star review and like the podcast on YouTube. It's how we grow.
Marty talks about the changes and additions to visionOS 26 developer beta 5. Follow the live stream at YouTube.com/@VisionProfiles on Monday nights at 9 PM EST, or catch the video later on YouTube or the audio on any podcatcher service.
visionOS 26 Beta 5 Release Notes
https://developer.apple.com/documentation/visionos-release-notes/visionos-26-release-notes
Version Recap
visionOS 26 Beta 5 (build 23M5311g) released August 5, 2025. Beta 4 (build 23M5300g) arrived in late July 2025. Both are stability/bug-fix focused, with no new user-visible features over Beta 4.
Developer Highlights
Beta 5 pairs with Xcode 26 Beta 5, ensuring alignment between SDK and runtime. Includes incremental UI and accessory fixes, building on prior improvements. A great seed for final QA testing ahead of the public release.
User/Tester Highlights
No new features, but expect a snappier and smoother system across the board. Continued access to core visionOS 26 features introduced in earlier betas: spatial widgets, enhanced Personas, the Jupiter immersive environment, 180°/360° video playback support, and PSVR2 and Logitech Muse accessory support. Minor polish to interaction, especially around widget behavior and accessory pairing.
MacStock
Macstockconferenceandexpo.com
In this solo episode, Felix Braberg cuts through the hype and breaks down what's actually happening in mobile ad monetization networks in 2025. The real mediation market is cornered by AppLovin MAX, Unity LevelPlay, and Google AdMob—with upstarts like Amazon Publisher Services and Moloco making surprise moves. But even the biggest ad-revenue studios (Rollic, Habi) are running from ads to IAP-first strategies. ECPMs are down 20 percent from pandemic peaks, networks are harder to work with, and the “walled garden” is stronger than ever.What's inside:Mediation Monopoly: AppLovin MAX now dominates mediation market share, with exclusive features like AdROAS and BlendedROAS campaigns locking in publishers. Unity and Google still matter, but the stack is closed and hostile to new entrants.Amazon's Banner Play: Amazon Publisher Services quietly became the top banner revenue source in the US and Europe, beating Google—but getting on the platform is a lottery. Last month, Amazon kicked 100 publishers off with no warning. TAM (Transparent Ad Marketplace) deals are lucrative, but rare.Moloco's Rise: Moloco, a DSP, is exploding with rumored $2.2B in 2025 spend, going direct to top publishers with its SDK. Publishers see 8–15 percent ARPDAU boosts on video and 20 percent or more on banners after adding Moloco. The secret? Once DSPs get big enough, they need direct supply, not just reselling.Ad Monetization Shift: The biggest ad studios are pivoting hard—Habby's Whittle Defender and Rollic's latest puzzle hits (Whole People, KnitOut) now get as little as 11–20 percent of revenue from ads, compared to 50–50 splits just two years ago. IAPs are the future, and “remove ads forever” packages are everywhere.Why ECPMs Are Down: ECPMs are 20 percent below 2021. Reasons include: post-pandemic demand crash, privacy updates (especially iOS), the loss of waterfall calls in mediation (bidding is the default), and “walled garden” platform control. It's harder than ever to keep prices up or diversify your stack.Key Takeaway:If you're not building your stack around AppLovin, Amazon, and Moloco, you're missing where the real money flows. But even the best ad ops can't beat macro headwinds—hybrid monetization is dying, and you need more IAP or you'll be left behind.Get our MERCH NOW: 25gamers.com/shop---------------------------------------This is no BS gaming podcast 2.5 gamers session. Sharing actionable insights, dropping knowledge from our day-to-day User Acquisition, Game Design, and Ad monetization jobs. We are definitely not discussing the latest industry news, but having so much fun! Let's not forget this is a 4 a.m. conference discussion vibe, so let's not take it too seriously.Panelists: Jakub Remiar, Felix Braberg, Matej LancaricJoin our slack channel here: https://join.slack.com/t/two-and-half-gamers/shared_invite/zt-2um8eguhf-c~H9idcxM271mnPzdWbipgChapters00:00 Introduction to Ad Monetization Trends02:31 Shift from Ad Revenue to IAPs05:07 ECPM Trends and Market Dynamics07:22 Mediation Platforms Overview09:42 Amazon Publisher Services: A New Player13:30 Moloco: The Rising DSP Star---------------------------------------Matej LancaricUser Acquisition & Creatives Consultanthttps://lancaric.meFelix BrabergAd monetization consultanthttps://www.felixbraberg.comJakub RemiarGame design consultanthttps://www.linkedin.com/in/jakubremiar---------------------------------------Please share the podcast with your industry friends, dogs & cats. Especially cats! 
They love it!Hit the Subscribe button on YouTube, Spotify, and Apple!Please share feedback and comments - matej@lancaric.me
Jan, founder of OpenMind, joins Sam to discuss how they're building a memory layer for AI agents in Web3.They cover the limitations of stateless agents, OpenMind's modular memory architecture, and how developers can build “smart, sovereign agents” that persist across time, dApps, and chains. Jan also shares insights on use cases across DeFi, NFTs, and DAOs, and how OpenMind is designing for both developers and end users.If you're exploring the future of AI x Web3, this episode is packed with key insights.Key Timestamps[00:00:00] Introduction: Sam introduces Jan and OpenMind's mission.[00:01:00] Jan's Background: Journey into Web3 and AI, and what led to founding OpenMind.[00:02:30] Problem with Agents Today: Why stateless AI agents don't work long term.[00:04:00] What OpenMind Does: A memory infrastructure layer for AI agents in Web3.[00:05:00] Modular Approach: Indexers, vector DBs, and plug-and-play architecture.[00:06:00] Use Cases: Agents for DeFi, personalized NFT experiences, DAO workflows, and beyond.[00:08:00] Developer Tools: APIs, SDKs, and how to get started building with OpenMind.[00:09:00] Open vs Closed Memory: Why on-chain provenance and user control matter.[00:11:00] Vision for AI Agents: Autonomous, persistent, and identity-aware.[00:13:00] Why Web3 Matters: Data ownership, composability, and aligned incentives.[00:15:00] OpenMind's Ask: Builders, partners, and early adopters — reach out!Connecthttps://openmind.org/https://x.com/openmind_agihttps://www.linkedin.com/company/openmindagi/https://x.com/janliphardthttps://www.linkedin.com/in/jan-liphardt/DisclaimerNothing mentioned in this podcast is investment advice and please do your own research. Finally, it would mean a lot if you can leave a review of this podcast on Apple Podcasts or Spotify and share this podcast with a friend.Be a guest on the podcast or contact us - https://www.web3pod.xyz/
Bret is joined by Andrew Tunall, the President and Chief Product Officer at Embrace, to discuss his prediction that we'll all start shipping non-QA'd code (buggier code in production) and QA will need to be replaced with better observability.
This episode covers a month of record growth and strategic shifts, celebrating new customer wins and diving into our marketing strategies. We share project updates, including bucketAV's multi-engine scan, and highlight key AWS topics: simplified AMI deletion and generating SDKs for API Gateway. Tune in for insights, wins, and fails!
On the podcast I talk with John about the fascinating 40-year history of Apple's developer relations, how almost going bankrupt in the 1990s shaped today's control-focused approach, and why we might need an ‘App Store 3.0' reset.Top Takeaways:
Join Tommy Shaughnessy as he dives into the world of Dev.Fun with founders Devlord and Robi. Dev.Fun is revolutionizing app creation by enabling anyone — regardless of technical background — to build and launch applications using AI-powered vibe coding. Learn how the team is merging Web3, tokens, and consumer apps to create a decentralized ecosystem of user-generated tools, games, and communities. They cover technical architecture, distribution strategies, monetization paths, and why Solana is their chain of choice.Explore how Dev.Fun is pushing the boundaries of app creation, developer incentives, and decentralized attention markets — and why streaming, tokens, and memes might be the next big unlock.Dev.Fun: https://dev.fun/
This episode's guest is Kirill Potekhin, co-founder and CTO of Adapty. In this episode: the story of our company, first-hand. How monetization problems at Easy Ten led us to build the first SDK, which five years later now processes billions of events per day. Why we moved away from cloud services, which decisions we made at the start, and what still helps us scale today. In this episode we discuss: • How subscriptions with manual renewal back in 2014 pushed us toward the idea of Adapty • Why we made A/B tests part of the product from the very beginning • Why we dropped AWS and host everything on our own servers • How we launched Refund Saver and returned hundreds of thousands of dollars to customers • Who we are looking to hire and why engineers are at the helm. Subscribe to the channel so you don't miss the next episodes! Kirill Potekhin on LinkedIn: https://www.linkedin.com/in/kpotehin/ Vitaly Davydov on LinkedIn: https://www.linkedin.com/in/iwitaly/
What shapes a community when there is no spot? How do you teach culture, not just moves? Where does real hip-hop begin: with the windmill or with knowledge? Are five basic elements enough to describe hip-hop culture? What remains after the dance, besides the marks on the floor? In the ninth episode of the Street Culture Podcast "Cities" we talk about Cherkasy with Maksym Orobets aka Maximus, a dancer, choreographer, and mentor. He is the founder of Explosion School, Max Dance, Ukrainian Underground Camp, Explosion Battle, Start Up, and Explosion Choreo Fest; a winner of Hip Hop International, Juste Debout Switzerland, and SDK; a champion of WORLD OF DANCE, EXPLOSION BATTLE, HHI, and more than 300 other events; and the creator of original training programs for teachers and judges of street styles. Max talks about his native Cherkasy, the city where it all began, from the first chaotic jams to founding his own school. We talk about what it was like to survive in baggy pants in the 2000s, who the "nefary" were in Cherkasy slang, and why hip-hop became not just a dance but a path to oneself. This episode is not only about the city of Cherkasy, but about a journey that transforms both you and the environment around you. Listen to the Cherkasy episode on Apple Podcasts, Spotify, SoundCloud, and MEGOGO Audio. This is the final episode of the third season of the Street Culture Podcast "Cities". We explored local scenes and captured the living history of Ukrainian street culture through the voices of those who carved their own path. Thank you to all the guests who shared their stories, to the listeners for their trust and interest, and to host Yehor Matiukhin for his energy, depth, and love for the community. This season is part of the Street Culture academy, where we tell the stories of Grassroots Generation leaders who energize their cities and change the country from the bottom up. We create this podcast about Ukrainian street culture together with Street Culture, uabreaking, and the New Democracy Fund. Learn how breaking, graffiti, hip-hop, and street art have changed the face of cities and created new communities.
As AI continues to dominate headlines and crypto continues to evolve behind the scenes, the real story may lie in their convergence. In this episode of Tech Talks Daily, I sat down with Dan Kim from Coinbase to discuss how these two technologies are shaping the future of digital commerce and development. Dan leads the Coinbase Developer Platform, a project focused on simplifying blockchain development for millions of developers worldwide. He shared how the platform abstracts away complexity through familiar SDKs and APIs, removing the need for deep blockchain expertise. This isn't just about making it easier to code on-chain. It's about opening the door for new kinds of applications, many of which are being driven by AI. We dug into the emerging concept of "agentic commerce," where AI agents can autonomously carry out transactions using blockchain infrastructure. These agents are now capable of acting on our behalf, making purchases and managing digital assets within defined parameters. This shift is already changing how developers think about building tools for e-commerce, travel, and digital services. Dan also discussed the evolving role of creators in this new landscape. Blockchain technology combined with AI is creating new ways to monetize content, build applications, and launch experiences without relying on traditional platforms. He even shared a personal example—his own AI-powered music project that turns complex crypto topics into relatable Top 40 tracks. From the reawakening of HTTP's long-forgotten 402 payment code to the real-world implications of AI agents handling financial transactions, this conversation revealed just how quickly things are moving. For developers and business leaders alike, the fusion of AI and crypto is no longer speculative. It's here, and it's changing how we interact, build, and pay.
Itai Turbahn is Co-Founder and CEO of Dynamic (https://www.dynamic.xyz), a Web3 authentication platform that simplifies wallet-based login and onboarding through a flexible SDK, combining authentication, smart wallets, and secure key management. Itai shares his journey from product management leadership roles and consulting at the Boston Consulting Group to co-founding Dynamic, a company backed by a16z crypto, Founders Fund, and others. He discusses how Dynamic's growth, milestones, including sponsoring six major hackathons, supporting 400 teams, and powering millions of monthly user logins, has advanced Web3 adoption. Itai dives into the platform's role in simplifying developer workflows, enhancing user onboarding with features like social logins and Global Identities, and his vision for a more intuitive crypto future where wallet infrastructure empowers seamless cross-chain interactions.
The Daytona founders - Ivan Burazin and Vedran Jukic - discuss their pivot to an AI agent cloud. We dig into the new infrastructure requirements of developing agents that need their own sandboxes to operate in.A year ago, we had them on to talk about Daytona giving us remote development environments for humans, and they have now pivoted the company to focusing on providing cloud hosting environments for AI agents to operate.I suspect this is something we're all gonna eventually need to tackle as we work to automate more of our software engineering. So we spend time breaking down the concepts and the real world needs of humans developing agents, and then the needs of AI that require places to run their own tools in code.Check out the video podcast version here https://youtu.be/l8LBqDUwtV8Creators & Guests Cristi Cotovan - Editor Bret Fisher - Host Beth Fisher - Producer Ivan Burazin - Guest Vedran Jukic - Guest You can also support my content by subscribing to my YouTube channel and my weekly newsletter at bret.news!Grab the best coupons for my Docker and Kubernetes courses.Join my cloud native DevOps community on Discord.Grab some merch at Bret's Loot BoxHomepage bretfisher.com (00:00) - Intro (06:08) - Daytona's Sandbox Technology (12:57) - Practical Applications and Use Cases (14:29) - Security and Isolation in AI Agents (17:59) - Start Up Times for Sandboxing and Kubernetes (22:51) - Daytona vs Lambda (31:06) - Rogue Models and Isolation (34:54) - Humanless Operations and the Future of DevOps (47:17) - SDK vs MCP (50:15) - Human in the Loop (51:13) - Daytona: Open Source vs Product Offering
Andrew Camilleri, better known as Kukks, is one of the most prolific contributors to BTCPay Server & an advocate for using bitcoin as money. Recently, he started building Bitcoin Layer 2 applications for Ark Labs & believes in conservative improvements. Time stamps: (00:00:49) Introduction & Andrew's Background (00:01:46) Getting Into Bitcoin & Altcoin Integrations (00:03:02) Focusing on Bitcoin & Monero Plugin (00:04:04) BTCPay Plugins & Community (00:04:22) Bitcoin's Imperfections & Altcoin Use Cases (00:04:55) Pessimism & Stagnation in Bitcoin Development (00:05:16) Introduction to Ark & Its Evolution (00:06:10) Ark's Technical Evolution (00:07:31) Ark's Impact on Developer Morale (00:07:36) What is Ark? (00:09:08) Ark's Virtual Ledger & Dust Problem (00:09:59) Off-Chain Payments & User Experience (00:11:07) Lightning Network vs. Ark (00:13:21) Custodial Lightning & Ark's Broader Goals (00:15:13) Escrow & Multisig Use Cases (00:16:09) Bitcoin's Usability & Fee Volatility (00:16:51) Miners & Second Layer Economics (00:19:08) Drivechains & Network Fragmentation (00:21:38) Rollups, ZK Proofs, and Simplicity (00:25:53) CTV, Musig2, and Soft Forks (00:28:12) OP_CAT, Collider Script, and Efficiency (00:32:38) Cost, Privacy, and Coinjoin (00:36:12) Stablecoins, Payments, and Swapping (00:38:14) Privacy, TumbleBit, and Ark's Superiority (00:41:03) Expiry, Operators, and User Experience (00:44:14) Becoming an Ark Operator (00:47:31) Fedimints, Liquid, and Privacy (00:49:41) Security Against Operator Theft (00:51:31) HODLing, Expiry, and Automation (00:53:37) Payment Finality & Pre-Confirmation (00:57:49) Government Attacks & Decentralization (01:02:51) Ark's User Experience & Wallet Integration (01:05:11) Lightning Interoperability & Partnerships (01:07:48) Arkade OS & Arcade Script (01:13:06) Underrated Use Cases: Escrow & Synthetic Assets (01:18:29) BTCPay Server's Impact & Bitcoin Payment Adoption (01:22:23) Speculation, Regulation, and Medium of Exchange (01:24:20) Litecoin, Extension Blocks, and Privacy (01:26:01) Coinjoin, Amounts, and Privacy Pools (01:29:09) Bitcoin Upgrades, CTV, and Developer Frustration (01:34:27) Soft Fork Politics & Overselling Upgrades (01:41:53) Payments, Credit Cards, and Onboarding (01:44:11) Stablecoins, Speculation, and Fiat Mindset (01:48:48) Taproot Assets, Altcoins, and Control Tokens (01:52:17) Early Bitcoin Days & Escrow (01:54:53) Gaming, Digital Money, and Bitcoin Adoption (01:59:15) Speculative Attack & Fiat Demand (02:00:01) Supercycle Skepticism & Price Predictions (02:02:22) Hard Forks, Big Blockers, and Research Value (02:24:40) NFTs, Ordinals, and Free Market Transactions (02:36:28) BTCPay Plugins & Comparison to LNBits (02:43:14) Zero Conf, RBF, and Payment Risks (02:47:41) Ark's Future: Liquidity & Decentralization (02:49:25) Testing Ark & Reference Wallet (02:51:00) Browser Wars & Internet Evolution (02:56:26) Scaling Bitcoin Payments & Libra Comparison (02:58:10) Tipping, Custodial Wallets, and Ark's SDK (03:02:12) HODL Culture vs. Spending (03:06:07) Optimism, Pessimism, and User Adoption (03:08:13) Lightning's Complexity & Ark's Simplicity (03:11:18) Competition Among Layer 2s (03:14:13) Ark's Launch, Operators, and Liquidity (03:16:08) Ark Operator Incentives & Fee Structure (03:17:08) Testing, Following, and Final Thoughts
Updating developer tools is essential for developers who want to stay efficient, secure, and competitive. In this episode of Building Better Developers with AI, Rob Broadhead and Michael Meloche explore how maintaining modern toolsets helps individuals and teams deliver better software, faster. With support from AI-generated analysis and real-world experience, they outline the risks of falling behind—and how to move forward. Listen to the full episode of Building Better Developers with AI for practical insights and ideas you can start applying today. Efficiency and Profitability When Updating Developer Tools AI captured the core message well: using outdated tools slows down delivery, creates unnecessary friction, and ultimately reduces profitability. For side hustlers and teams alike, this loss of efficiency can make or break a project. Rob pointed out that many developers begin their careers using only basic tools. Without proper exposure to modern IDEs like IntelliJ, Visual Studio Code, or Eclipse, they miss out on powerful features such as debugging tools, plugin support, container integration, and real-time collaboration. Warning Signs You Should Be Updating Developer Tools How do you know it's time to update your development tools? Rob and Michael discussed key red flags: Frequent crashes or poor performance Lack of support for modern languages or frameworks Weak integration with tools like GitHub Actions or Docker Outdated or unsupported plugins Inconsistent tooling across team members Neglecting to update developer tools can lead to slow onboarding, poor collaboration, and increased bugs—especially in fast-paced or regulated environments. Tool Standardization vs. Flexibility When Updating Tools There's a balance between letting developers choose their tools and ensuring consistency across a team. While personal comfort can boost productivity, it may also cause challenges when teams debug or collaborate. Rob and Michael recommend hosting internal hackathons to explore new toolchains or standardize workflows. These events give teams a structured way to evaluate tools and share findings. The Security Risk of Not Updating Developer Tools Michael highlighted that outdated tooling doesn't just slow developers down—it creates serious security and compliance risks. Being just one or two versions behind can open vulnerabilities that violate standards like HIPPA, OWASP or SOX. Regular updates to SDKs, plugins, and IDEs are essential for staying compliant, especially in sensitive industries like finance or healthcare. How to Evaluate New Tools Before Updating Developer Toolchains Rob offered a practical framework for evaluating new tools: Does it solve a real pain point? Start with a side project or proof of concept. Check for strong community support and documentation. Balance between stable and innovative. Michael added a note of caution: avoid adopting tools with little community activity or long-term support. If a GitHub project has only a couple of contributors and poor maintenance, it's a red flag. Developer Tools to Review and Update Regularly To keep your development environment current, Rob suggested reviewing these tool categories often: IDEs and code editors Version control tools CI/CD systems and build automation Testing and QA frameworks Package managers and dependency systems Containerization and environment management platforms Using AI to convert simple apps into different frameworks can also help evaluate new tools—just make sure not to share proprietary code. 
Final Thoughts
Modern development demands modern tooling. From cleaner code to faster deployment and stronger team collaboration, the benefits of updating developer tools are clear. Whether you're an independent developer or part of a larger organization, regularly reviewing and upgrading your toolset is a habit worth forming.

Stay Connected: Join the Developreneur Community
We invite you to join our community and share your coding journey with us. Whether you're a seasoned developer or just starting, there's always room to learn and grow together. Contact us at info@develpreneur.com with your questions, feedback, or suggestions for future episodes. Together, let's continue exploring the exciting world of software development.

Additional Resources
- Navigating Communication Tools in Modern Workplaces
- Building a Portable Development Environment That is OS-agnostic
- Modern Tools For Monetizing Content
- Updating Developer Tools: Keeping Your Tools Sharp and Efficient
- Building Better Developers With AI Podcast Videos – With Bonus Content
Pre-show:
- G4 Doorbell Pro
- Bespoke 3D-printed mount
- Fancy-schmancy Marco-recommended Flashlight
- What betas are we running?
- Virtual Buddy
On the podcast, I talk with Charlie about why Liquid Glass represents a big opportunity for new and existing apps, Apple's new on-device AI models and their practical limitations, and why the improved App Store Analytics complement rather than replace third-party tools like Appfigures and RevenueCat.
Top Takeaways:
Co-Founder & COO at Speakeasy discusses the rise of MCP servers, API integration into AI systems, including SDK generation, and the emergence of the Agentic web.
SHOW: 933
SHOW TRANSCRIPT: The Cloudcast #933 Transcript
SHOW VIDEO: https://youtube.com/@TheCloudcastNET
CLOUD NEWS OF THE WEEK - http://bit.ly/cloudcast-cnotw
NEW TO CLOUD? CHECK OUT OUR OTHER PODCAST - "CLOUDCAST BASICS"
SPONSORS:
[VASION] Vasion Print eliminates the need for print servers by enabling secure, cloud-based printing from any device, anywhere. Get a custom demo to see the difference for yourself.
[US CLOUD] Cut Enterprise IT Support Costs by 30-50% with US Cloud
SHOW NOTES:
Speakeasy website
Speakeasy MCP Hub
The NewStack article on MCP with Speakeasy
Topic 1 - Welcome to the show, Simon. Give everyone a quick introduction.
Topic 2 - API tooling and platforms for AI have become the hot topic this year. Of course, Agentic AI has made significant contributions to this. What trends do you see that we need to pay attention to? What problems are organizations trying to solve?
Topic 3 - Let's dig into MCP servers. The interest in MCP has taken off. At the highest level, why do they exist?
Topic 4 - MCP servers and all the recent announcements around an Agentic Web have me thinking… Do we need to prepare for an Internet where AI agents talk to each other? We had humans and GUIs (web frontends), then we saw the rise of APIs. Is this a third wave or an evolution of the API wave?
Topic 5 - How do SDKs and AI API tooling play into all of this? API tools that generate SDKs or AI documentation aren't new. How does the abstraction of AI change this process?
Topic 6 - In our experience, API-level integrations are challenging to productize. Sometimes it comes down to something as simple as who is going to pay for it; developers often have a voice and a seat at the table, but don't have the budget. What has been your experience?
Topic 7 - How does the business side of the house see an advantage in this? What metrics tend to matter and are measurable?
Topic 8 - If anyone is interested, what's the best way to get started?
FEEDBACK?
Email: show at the cloudcast dot net
Bluesky: @cloudcastpod.bsky.social
Twitter/X: @cloudcastpod
Instagram: @cloudcastpod
TikTok: @cloudcastpod
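For listeners who want to see what an MCP server looks like before diving into the interview, here is a minimal sketch in TypeScript. It is not Speakeasy's generated output; it assumes the official @modelcontextprotocol/sdk package and the zod library, and the exact API surface may vary between SDK versions.

```ts
// minimal-mcp-server.ts - a toy MCP server exposing one tool over stdio.
// Sketch only: assumes @modelcontextprotocol/sdk and zod; not Speakeasy-generated code.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "demo-weather", version: "0.1.0" });

// Register a single tool that an AI agent can discover and call.
server.tool(
  "get_forecast",
  { city: z.string().describe("City name to look up") },
  async ({ city }) => ({
    // A real server would call a weather API here; this returns canned text.
    content: [{ type: "text", text: `Forecast for ${city}: sunny, 22°C` }],
  })
);

// The stdio transport lets a local client (an IDE or agent runtime) launch this server as a subprocess.
const transport = new StdioServerTransport();
await server.connect(transport);
```

The point of the sketch is the shape of the abstraction: a server declares named tools with typed inputs, and any MCP-aware agent can discover and invoke them without a bespoke integration.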
Is building the backend for your AI application slowing you down? In this episode of the MongoDB Podcast, host Jesse Hall sits down with Srikar and Jimmy, the creators of Daemo AI, a revolutionary tool designed to eliminate the tedious "plumbing" of backend development. Discover how Daemo AI is building upon deprecated MongoDB features like Realm App Services, creating a more powerful and flexible solution for developers. We dive deep into their tech stack, including Next.js, Deno, and Express, and explore why they chose MongoDB for its speed and flexibility in AI applications. Plus, you'll see a live demo of Daemo's new SDK and CLI, learn how it can generate data migrations and dummy data on the fly, and get a real answer to the big question: Is AI going to take your job?

In This Episode, You Will Learn:
- What Daemo AI is and how it accelerates development.
- How to build AI agents and integrate them with frameworks like LangChain.
- Why MongoDB is the ideal database for rapid-growth startups and AI.
- The future of developer jobs in the age of AI.
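To make the "plumbing" concrete, here is a sketch of the hand-written backend wiring that tools like Daemo AI aim to generate or eliminate. It uses the standard MongoDB Node.js driver and Express; the database name, collection, and routes are illustrative assumptions, not Daemo's actual SDK.

```ts
// plumbing.ts - the repetitive CRUD wiring that Daemo-style tools try to automate.
// Illustrative sketch: database/collection names and routes are assumptions.
import express from "express";
import { MongoClient, ObjectId } from "mongodb";

const client = new MongoClient(process.env.MONGODB_URI ?? "mongodb://localhost:27017");
const app = express();
app.use(express.json());

async function main() {
  await client.connect();
  const todos = client.db("demo").collection("todos");

  // Create a document from the request body.
  app.post("/todos", async (req, res) => {
    const result = await todos.insertOne({ ...req.body, createdAt: new Date() });
    res.status(201).json({ id: result.insertedId });
  });

  // Read a single document by its ObjectId.
  app.get("/todos/:id", async (req, res) => {
    const doc = await todos.findOne({ _id: new ObjectId(req.params.id) });
    doc ? res.json(doc) : res.sendStatus(404);
  });

  app.listen(3000, () => console.log("API listening on :3000"));
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

Multiply this by every collection, plus auth, validation, and migrations, and the appeal of generating it becomes obvious.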
Wes talks with Peter Pistorius about RedwoodSDK, a new React framework built natively for Cloudflare. They dive into real-time React, server components, zero-cost infrastructure, and why RedwoodSDK empowers developers to ship faster with fewer tradeoffs and more control. Show Notes 00:00 Welcome to Syntax! 00:52 What is RedwoodSDK? 04:49 Choosing openness over abstraction 08:46 More setup, more control 12:20 Why RedwoodSDK only runs on Cloudflare 14:25 What the database setup looks like 16:15 Durable Objects explained – Ep 879: Fullstack Cloudflare 18:14 Middleware and request flow 23:14 No built-in client-side router? 24:07 Integrating routers with defineApp 26:04 React Server Components and real-time updates 29:53 What happened to RedwoodJS? 31:14 Why do opinionated frameworks struggle to catch on? 34:35 The problem with Lambdas 36:16 Cloudflare's JavaScript runtime compatibility 40:04 Brought to you by Sentry.io 41:44 The vision behind RedwoodSDK Hit us up on Socials! Syntax: X Instagram Tiktok LinkedIn Threads Wes: X Instagram Tiktok LinkedIn Threads Scott: X Instagram Tiktok LinkedIn Threads Randy: X Instagram YouTube Threads
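The "middleware and request flow" discussion maps neatly onto a plain Cloudflare Worker. The sketch below is not RedwoodSDK's actual defineApp API; it is a generic Workers handler showing the same idea of composable middleware running ahead of a route handler, with all names chosen purely for illustration.

```ts
// worker.ts - a generic Cloudflare Worker illustrating middleware-then-handler request flow.
// Not RedwoodSDK's defineApp API; the helper names here are illustrative only.
type Handler = (request: Request) => Promise<Response> | Response;
type Middleware = (request: Request, next: Handler) => Promise<Response> | Response;

// Compose middleware right-to-left around a final handler.
function compose(middleware: Middleware[], handler: Handler): Handler {
  return middleware.reduceRight<Handler>(
    (next, mw) => (request) => mw(request, next),
    handler
  );
}

const logRequests: Middleware = async (request, next) => {
  const started = Date.now();
  const response = await next(request);
  console.log(`${request.method} ${new URL(request.url).pathname} -> ${response.status} (${Date.now() - started}ms)`);
  return response;
};

const requireJson: Middleware = (request, next) => {
  if (request.method === "POST" && request.headers.get("content-type") !== "application/json") {
    return new Response("expected application/json", { status: 415 });
  }
  return next(request);
};

const handler: Handler = () => new Response("Hello from the edge");

export default {
  async fetch(request: Request): Promise<Response> {
    return compose([logRequests, requireJson], handler)(request);
  },
};
```

Because the whole request path is ordinary functions running at the edge, there is no separate server process or cold-start-prone Lambda chain, which is the trade-off the episode keeps returning to.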