Scott and Wes break down the biggest web platform features that reached Baseline in 2025, separating the genuinely useful APIs from the niche and forgettable ones. From same-document view transitions and the Popover API to Promise.try, content-visibility, and modern CSS goodies, they share what's actually ready to use today.

Show Notes
00:00 Welcome to Syntax!
01:37 24 new web APIs that reached Baseline in 2025.
01:49 Same-document view transitions for single-page applications.
05:28 abs()
08:22 Brought to you by Sentry.io.
09:20 JSON Module Scripts.
10:10 Popover API.
13:07 Base64 to Uint8Array. Better Binary Batter Mixing
16:11 @starting-style. Scott's A CSS Only Accordion with Scott's Mobile Nav
17:39 allow-discrete
21:31 Promise.try
22:51 content-visibility

Hit us up on Socials!
Syntax: X Instagram Tiktok LinkedIn Threads
Wes: X Instagram Tiktok LinkedIn Threads
Scott: X Instagram Tiktok LinkedIn Threads
Randy: X Instagram YouTube Threads
Serve No Master: Escape the 9-5, Fire Your Boss, Achieve Financial Freedom
Welcome to the Artificial Intelligence Podcast with Jonathan Green! In this episode, we dive into how AI is reshaping the world of website building and small business marketing with our special guest, Pedro Sostre, a seasoned digital marketer and key leader at Builderall. He and Jonathan break down why so many "AI website builders" fail, what business owners actually need, and how to future-proof your skills in an AI-driven market. They explore the growing gap between what AI tools promise and what they actually deliver, especially for small business owners who are busy, overwhelmed, and not interested in becoming "prompt engineers." Pedro explains how Builderall is tackling that challenge with pre-built, strategy-driven funnels and AI-assisted tools that do the heavy lifting in the background—so business owners can focus on running their business, not wiring together tech. You'll hear them discuss why design alone doesn't sell, how bad AI content is clogging the internet, and why the people who win won't be AI itself—but the humans who learn to use it better and faster than everyone else.

Notable Quotes:
"Right now people are expecting all sorts of different things, and the reality is they're generally getting junk, even if it looks good." – [Pedro Sostre]
"Whatever you think you have secure job security in today is probably not gonna exist in two years… You should be doing something different in three years." – [Pedro Sostre]
"People do not care which AI they're using. They don't care if the backend is Grok or DeepSeek or ChatGPT. They only care if it works." – [Jonathan Green]
"We're not gonna be replaced by AI. We're gonna be replaced by people who are better at AI than us." – [Jonathan Green]
"Train it to do what you do now, because your job needs to be different in two years." – [Pedro Sostre]

Pedro highlights how Builderall is evolving from "a big toolbox" into a guided, AI-assisted marketing platform. With their new "builds," a course creator, agency owner, or realtor can log in, choose their business type, and instantly see which tools to use, in what order, and how they all connect—without needing to touch APIs, Zapier, or complex integrations. It's like having a marketing consultant baked into the software, helping you deploy proven funnels instead of guessing your way through 25 different tools.

Connect with Pedro Sostre:
Website: https://www.builderall.com/
LinkedIn: https://www.linkedin.com/in/psostre/

Pedro shares how Builderall is integrating AI behind the scenes to write copy, build pages, and connect tools for small business owners who don't have time (or desire) to learn complex prompting. Their focus is on making AI simple, contextual, and results-driven—so users see more leads and sales, not just prettier websites. If you're a small business owner, agency, or creator wondering how to actually use AI to grow your business—without becoming a full-time tech expert—this episode is a must-listen!

Connect with Jonathan Green
The Bestseller: ChatGPT Profits
Free Gift: The Master Prompt for ChatGPT
Free Book on Amazon: Fire Your Boss
Podcast Website: https://artificialintelligencepod.com/
Subscribe, Rate, and Review: https://artificialintelligencepod.com/itunes
Video Episodes: https://www.youtube.com/@ArtificialIntelligencePodcast
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
This week on the AI Unraveled Weekly Rundown, the numbers are staggering. We break down SoftBank's race to deploy $22.5 billion into OpenAI before the year ends, and the global record of $61 billion invested in data centers—a boom that is now causing land wars with farmers in Maryland. We also cover the 2026 roadmap, including Meta's leaked "Mango" and "Avocado" models, Google's delay in upgrading Assistant to Gemini, and the US government's probe into Nvidia H200 sales. Plus, ChatGPT hits $3 billion in mobile revenue, proving the consumer model works, even as developers struggle with "buggy" app stores.

Key Topics:
After an 18-year rise through corporate HR—from recruiter to group president across Canada and the U.S.—Dom walked away from a "safe" executive career to build something on his own terms. In this conversation, we unpack why large organizations quietly trade momentum for bureaucracy, how technology and automation empower lean founders, and why "stability" often comes at the cost of creativity, speed, and meaning. We explore intrapreneurship vs. entrepreneurship, the hidden traps of bloated systems, and how founders can use data, automation, and open APIs to move faster without burning capital. The throughline isn't rebellion—it's agency. Building work that's fun, aligned, and alive again. No anti-corporate rant. Just lived experience, hard trade-offs, and a clear-eyed look at what it really takes to step off the stable path—and thrive.

TL;DR
* Stability is conditional: Corporate safety disappears the moment priorities shift.
* Intrapreneur vs. founder: Big-company success doesn't equal personal leverage.
* Tech as leverage: Automation and BI (not hype AI) unlock speed for lean teams.
* Systems can trap you: CRMs and ERPs either enable growth—or become prisons.
* Innovation dies slowly: Bureaucracy rewards optics over outcomes.
* Work-life blend > balance: Fun, purpose-driven work creates sustainability.
* Momentum matters: Small teams with clarity outperform slow giants.

Memorable lines
* "Stability often costs more than risk—you just don't see the bill right away."
* "Big systems don't fail fast. They fail quietly."
* "AI isn't magic—it's leverage if you know what problem you're solving."
* "Careers don't collapse overnight; they stall one approval layer at a time."
* "Fun isn't a perk—it's fuel."

Guest
Dominic Levesque — HR executive turned founder; CEO of NextWave; author and advisor on leadership, technology, and organizational transformation.
Comment on this episode by going to KDramaChat.com

Today, we'll be discussing Episode 5 of Start-Up, the hit K Drama on Netflix starring Bae Suzy as Seo Dal-mi, Nam Joo Hyuk as Nam Do-san, Kim Seon Ho as Han Ji Pyeong, Kang Han Na as Won In Jae, and Kim Hae Sook as Choi Won Deok.

We discuss:
- The songs featured during the recap: "Running" by Gaho and "Shake Shake."
- The intense and emotional hackathon that tests our characters' ambition, determination, and self-worth.
- Seo Dal-mi's rising ambition and her impressive performance as the new CEO of Samsan Tech.
- Nam Do-san's growing confidence, his romantic development, and his beautiful metaphor involving Tarzan.
- The theme of imposter syndrome and how both Dal-mi and Do-san feel they're not worthy — but believe in each other.
- The critical role APIs, GPUs, data sets, and artificial neural networks play in tech — and how they're introduced in the show.
- Han Ji Pyeong's internal turmoil, guilt, and shift from dismissive investor to personal mentor and backer of Samsan Tech.
- The heartbreaking reveal that Dal-mi didn't go to college because she wanted to buy a corn dog truck for her grandmother.
- Dal-mi's smart and humble recruitment of Jeong Sa Ha, a designer with top-tier credentials, by literally going down on her knees.
- The competitive and cold dynamic between the sisters, especially in the brutal bathroom scene.
- The sly arrival of stylish twins to In Jae Company and the challenge they pose to Samsan Tech.
- Alex Kwon's savvy evaluation of Samsan Tech's potential, not just performance — and his pivotal vote that secures their place in Sandbox.
- The ethics and motivations behind Han Ji Pyeong's involvement in the letters, and Seo Dal-mi's growing suspicions.
- Our reflections on the character of Han Ji Pyeong and whether redemption is possible.
- The amazing career of Kang Han Na, the actress who plays Won In Jae, including her roles in Moon Lovers, Bon Appetit, and her stint as a top DJ for KBS.

References
- Kang Han Na on Wikipedia
- GUI Steakhouse in New York City
- Data.gov, the home of the US Government's Open Data
- Running by Gaho
In this episode of WP Builds, Nathan Wrigley and Rae Morey recap the past few months in the WordPress ecosystem. They talk about the new features of WordPress 6.9, discuss advances in AI tools and APIs, and highlight community news including sponsorship shifts, legal updates, and standout block themes like Ollie. The conversation also touches on flagship WordCamp scheduling challenges, the launch of Telex, and the evolving role of Jetpack. Throughout, Rae Morey provides expert insight, drawing on her reporting for The Repository. Go listen...
Datawizz is pioneering continuous reinforcement learning infrastructure for AI systems that need to evolve in production, not ossify after deployment. After building and exiting RapidAPI—which served 10 million developers and had at least one team at 75% of Fortune 500 companies using and paying for the platform—Founder and CEO Iddo Gino returned to building when he noticed a pattern: nearly every AI agent pitch he reviewed as an angel investor assumed models would simultaneously get orders of magnitude better and cheaper. In a recent episode of BUILDERS, we sat down with Iddo to explore why that dual assumption breaks most AI economics, how traditional ML training approaches fail in the LLM era, and why specialized models will capture 50-60% of AI inference by 2030.

Topics Discussed
- Why running two distinct businesses under one roof—RapidAPI's developer marketplace and enterprise API hub—ultimately capped scale despite compelling synergy narratives
- The "Big Short moment" reviewing AI pitches: every business model assumed simultaneous 1-2 order of magnitude improvements in accuracy and cost
- Why companies spending 2-3 months on fine-tuning repeatedly saw frontier models (GPT-4, Claude 3) obsolete their custom work
- The continuous learning flywheel: online evaluation → suspect inference queuing → human validation → daily/weekly RL batches → deployment
- How human evaluation companies like Scale AI shift from offline batch labeling to real-time inference correction queues
- Early GTM through LinkedIn DMs to founders running serious agent production volume, working backward through less mature adopters
- ICP discovery: qualifying on whether 20% accuracy gains or 10x cost reductions would be transformational versus incremental
- The integration layer approach: orchestrating the continuous learning loop across observability, evaluation, training, and inference tools
- Why the first $10M is about selling to believers in continuous learning, not evangelizing the category

GTM Lessons For B2B Founders
- Recognize when distribution narratives mask structural incompatibility: RapidAPI had 10 million developers and teams at 75% of Fortune 500 paying for the platform—massive distribution that theoretically fed enterprise sales. The problem: Iddo could always find anecdotes where POC teams had used RapidAPI, creating a compelling story about grassroots adoption. The critical question he should have asked earlier: "Is self-service really the driver for why we're winning deals, or is it a nice-to-have contributor?" When two businesses have fundamentally different product roadmaps, cultures, and buying journeys, distribution overlap doesn't create a sustainable single company. Stop asking if synergies exist—ask if they're causal.
- Qualify on whether improvements cross phase-transition thresholds: Datawizz disqualifies prospects who acknowledge value but lack acute pain. The diagnostic questions: "If we improved model accuracy by 20%, how impactful is that?" and "If we cut your costs 10x, what does that mean?" Companies already automating human labor often respond that inference costs are rounding errors compared to savings. The ideal customers hit differently: "We need accuracy at X% to fully automate this process and remove humans from the loop. Until then, it's just AI-assisted. Getting over that line is a step-function change in how we deploy this agent." Qualify on whether your improvement crosses a threshold that changes what's possible, not just what's better.
- Use discovery to map market structure, not just validate hypotheses: Iddo validated that the most mature companies run specialized, fine-tuned models in production. The surprise: "The chasm between them and everybody else was a lot wider than I thought." This insight reshaped their entire strategy—the tooling gap, approaches to model development, and timeline to maturity differed dramatically across segments. Most founders use discovery to confirm their assumptions. Better founders use it to understand where different cohorts sit on the maturity curve, what bridges or blocks their progression, and which segments can buy versus which need multi-year evangelism.
- Target spend thresholds that indicate real commitment: Datawizz focuses on companies spending "at a minimum five to six figures a month on AI and specifically on LLM inference, using the APIs directly"—meaning they're building on top of OpenAI/Anthropic/etc., not just using ChatGPT. This filters for companies with skin in the game. Below that threshold, AI is an experiment. Above it, unit economics and quality bars matter operationally. For infrastructure plays, find the spend level that indicates your problem is a daily operational reality, not a future consideration.
- Structure discovery to extract insight, not close deals: Iddo's framework: "If I could run [a call where] 29 of 30 minutes could be us just asking questions and learning, that would be the perfect call in my mind." He compared it to "the dentist with the probe trying to touch everything and see where it hurts." The most valuable calls weren't those that converted to POCs—they came from people who approached the problem differently or had conflicting considerations. In hot markets with abundant budgets, founders easily collect false positives by selling when they should be learning. The discipline: exhaust your question list before explaining what you build. If they don't eventually ask "What do you do?" you're not surfacing real pain.
- Avoid the false-positive trap in well-funded categories: Iddo identified a specific risk in AI: "You can very easily run these calls, you think you're doing discovery, really you're doing sales, you end up getting a bunch of POCs and maybe some paying customers. So you get really good initial signs but you've never done any actual discovery. You have all the wrong indications—you're getting a lot of false positive feedback while building the completely wrong thing." When capital is abundant and your space is hot, early revenue can mask product-market misalignment. Good initial signs aren't validation if you skipped the work to understand why people bought.

// Sponsors:
Front Lines — We help B2B tech companies launch, manage, and grow podcasts that drive demand, awareness, and thought leadership. www.FrontLines.io
The Global Talent Co. — We help tech startups find, vet, hire, pay, and retain amazing marketing talent that costs 50-70% less than the US & Europe. www.GlobalTalent.co

// Don't Miss: New Podcast Series — How I Hire
Senior GTM leaders share the tactical hiring frameworks they use to build winning revenue teams. Hosted by Andy Mowat, who scaled 4 unicorns from $10M to $100M+ ARR and launched Whispered to help executives find their next role.
Subscribe here: https://open.spotify.com/show/53yCHlPfLSMFimtv0riPyM
Interview with Dorothy Creaven and Michael Cordner at AWS re:Invent

Dublin-based startup Jentic was the first Irish company to complete the AWS Generative AI Accelerator, which concluded recently at AWS re:Invent in Las Vegas. The company is now focused on building enterprise awareness of its platform, supported by the launch of its AI Readiness Scorecard and its listing on AWS Marketplace. Founded in 2024 by Sean Blanchfield, Michael Cordner, and Dorothy Creaven, Jentic applies middleware and enterprise integration engineering to AI adoption, focusing on how APIs are defined, governed and safely executed by automated and agentic systems.

AI adoption with API readiness platform Jentic
Jentic operates at the integration layer, working with existing enterprise systems and APIs to make them clearer, more structured and more governable. This allows organisations to connect AI systems to real business infrastructure in a controlled and observable way, without replacing existing platforms or bypassing established security and compliance processes.

Built on Enterprise Infrastructure Experience
The company's approach is shaped by the founders' backgrounds in building large-scale infrastructure. Blanchfield previously co-founded Demonware, acquired by Activision Blizzard, and PageFair. Cordner co-founded Mindconnex, while Creaven previously led Rent the Runway's Irish operations. Speaking to Irish Tech News at AWS re:Invent, Michael Cordner, CTO of Jentic, said many enterprises are now encountering limits in how their systems were originally built. "We got away with cutting corners for 20 years when we were developing APIs for developers," said Cordner. "But now we're trying to let AI loose on those same APIs, and the standards are much more stringent. Even the most intelligent AI in the world is useless without the right information on how to actually use a system." From Jentic's perspective, the current interest in AI exposes long-standing weaknesses in enterprise integration. Automated systems can reason and decide, but they can only act through APIs. If those interfaces are poorly documented, inconsistently structured or weakly governed, behaviour becomes unpredictable. "We're a business logic and infrastructure layer for AI agents," explains Dorothy Creaven, COO of Jentic. "Software has always been built on APIs, but for AI to connect properly to enterprise systems, there has to be something that can make sense of those APIs and turn them into workflows organisations can rely on."

Addressing Enterprise Control and Governance
A recurring issue Jentic encounters with enterprise customers is organisational hesitation. Senior leadership often wants progress on AI strategy, while technology and security teams are concerned about control, traceability and risk. "Everyone is afraid to let AI loose in their organisation," Creaven observes. "There's a real concern about what systems might do when nobody is watching, whether actions can be traced, and how failures are handled." To address this, Jentic's platform includes a sandboxed execution environment that mirrors production APIs. This allows organisations to test AI-driven workflows, observe behaviour and understand failure modes before anything is connected to live systems. "We provide an environment that mirrors real APIs, but in a way that's safe," Creaven comments. "You can see exactly what's happening, with auditability and logging, and you can only move forward once you're confident the behaviour is correct."
Launch of the AI Readiness Scorecard
This approach underpins the launch of Jentic's AI Readiness Scorecard, a free, automated assessment tool introduced at AWS re:Invent. The scorecard evaluates APIs across multiple dimensions, including structure, security, documentation quality and discoverability. According to Jentic, its analysis of more than 1,500 well-known APIs highlights repeated gaps. These include missing authentication details, invalid OpenAPI specifications, i...
* Notepad++ Releases Security Update to Address Traffic Hijacking Vulnerability
* Google Links Additional Chinese Hacking Groups to Widespread Exploitation of Critical React2Shell Vulnerability
* Scammers Abuse PayPal Subscriptions to Send Fake Purchase Notification Emails
* Massive Chrome Extension Caught Harvesting Millions of Users' AI Chat Conversations
* Google to Discontinue Its Dark Web Report Security Feature in 2026

Notepad++ Releases Security Update to Address Traffic Hijacking Vulnerability
https://notepad-plus-plus.org/news/v889-released/
The popular text editor Notepad++ has released version 8.8.9 to address a critical security vulnerability affecting its updater, WinGUp. According to security experts, incidents of traffic hijacking have been reported, where the traffic between the updater client and the Notepad++ update infrastructure was being redirected to malicious servers, resulting in the download of compromised executables.

The vulnerability was found to be a weakness in the way the updater validates the integrity and authenticity of the downloaded update file. Exploiting this weakness, an attacker could intercept the network traffic and prompt the updater to download and execute an unwanted binary instead of the legitimate Notepad++ update. To mitigate this issue, the new release introduces a security enhancement that verifies the signature and certificate of the downloaded installers during the update process, and aborts the update if the verification fails.

The investigation into the exact method of the traffic hijacking is ongoing, and users will be informed once tangible evidence is established. In the meantime, Notepad++ recommends that users who have previously installed the root certificate should remove it, as the binaries, including the installer, are now digitally signed using a legitimate certificate issued by GlobalSign.

Google Links Additional Chinese Hacking Groups to Widespread Exploitation of Critical React2Shell Vulnerability
https://cloud.google.com/blog/topics/threat-intelligence/threat-actors-exploit-react2shell-cve-2025-55182/
Google's threat intelligence team has identified five more Chinese cyber-espionage groups joining the ongoing attacks exploiting the critical "React2Shell" remote code execution vulnerability, tracked as CVE-2025-55182. This flaw, which affects the React open-source JavaScript library, allows unauthenticated attackers to execute arbitrary code on React and Next.js applications with a single HTTP request.

The list of state-linked threat actors now includes UNC6600, UNC6586, UNC6588, UNC6603, and UNC6595, which have been deploying a variety of malware such as the MINOCAT tunneling software, the SNOWLIGHT downloader, the COMPOOD backdoor, and an updated version of the HISONIC backdoor. According to Google, the vulnerability has a significant number of exposed systems due to the widespread use of React Server Components in popular frameworks like Next.js.

In addition to the Chinese hacking groups, Google's researchers have also observed Iranian threat actors and financially motivated attackers targeting the React2Shell vulnerability, with some deploying XMRig cryptocurrency mining software on unpatched systems. Internet watchdog groups have tracked over 116,000 vulnerable IP addresses, primarily located in the United States, highlighting the widespread impact of this critical flaw.
Scammers Abuse PayPal Subscriptions to Send Fake Purchase Notification Emails
https://www.bleepingcomputer.com/news/security/beware-paypal-subscriptions-abused-to-send-fake-purchase-emails/
Cybersecurity researchers have uncovered a new email scam that abuses PayPal's "Subscriptions" billing feature to send legitimate-looking PayPal emails containing fake purchase notifications. The emails, which appear to come from the legitimate service[at]paypal.com address, state that the recipient's "automatic payment is no longer active" and include a customer service URL field that has been modified to display a message about a large, expensive purchase.

The goal of these scam emails is to trick recipients into believing their account has been used to make an expensive purchase, such as a Sony device, MacBook, or iPhone, and prompt them to call a provided phone number to "cancel or dispute the payment." This tactic is commonly used to convince victims to engage in bank fraud or install malware on their computers.

Investigations have revealed that the scammers are able to send these emails directly from PayPal's servers by exploiting the company's Subscriptions feature. When a merchant pauses a subscriber's subscription, PayPal automatically sends a notification email to the subscriber, which the scammers are then modifying to include the fake purchase information. PayPal has stated that they are actively working to mitigate this method and urge customers to be vigilant and contact their customer support directly if they suspect they have been targeted by this scam.

Massive Chrome Extension Caught Harvesting Millions of Users' AI Chat Conversations
https://www.koi.ai/blog/urban-vpn-browser-extension-ai-conversations-data-collection
A Google Chrome extension with over 6 million users has been observed silently collecting every prompt entered by users into popular AI-powered chatbots, including OpenAI's ChatGPT, Anthropic's Claude, Microsoft's Copilot, and others. The extension in question, Urban VPN Proxy, is advertised as a secure VPN service but has been updated to include a tailored script that intercepts and exfiltrates users' chat conversations to remote servers.

The extension, which also has 1.3 million installations on Microsoft Edge, overrides the browser's network request APIs to capture the user's prompts, the chatbot's responses, conversation identifiers, timestamps, and session metadata. This data is then sent to two remote servers owned by Urban Cyber Security Inc., the Delaware-based company behind the extension. The company claims the data is collected for "marketing analytics purposes" and that it will be anonymised, but it also shares the raw, non-anonymised data with an affiliated ad intelligence firm, BIScience.

Despite the extension's "Featured" badge on the Chrome Web Store, which implies it meets the platform's "best practices and high standards," researchers have discovered that the data harvesting occurs regardless of whether the extension's "AI protection" feature is enabled. This feature is designed to warn users about sharing personal information, while the developers fail to disclose that the extension is simultaneously exfiltrating the entire chat conversation to its own servers.
This type of data collection and sharing without user consent poses a serious risk to users' privacy and security.

Google to Discontinue Its Dark Web Report Security Feature in 2026
Google has announced that it will be shutting down its "dark web report" security tool, which notifies users if their email address or other personal information has been found on the dark web. The tech giant stated that it wants to focus on other tools it believes are more helpful to users in protecting their online security and privacy.

According to their email notification, Google will stop monitoring for new dark web results on January 15, 2026, and the data will no longer be available from February 16, 2026. The company acknowledged that while the dark web report feature provided general information, feedback showed that it did not offer clear, actionable steps for users to protect their data.

Going forward, Google will continue to invest in other security tools, such as the Google Password Manager, Password Checkup, and the "Results about you" feature, which allows users to find and request the removal of their personal information from Google Search results.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit edwinkwan.substack.com
As the dust settles on the accounting tech landscape for another year, guests Billie Mcloughlin and John Toon join AccountingWEB technology editor Tom Herbert to discuss the impact of AI, run the rule over the major players in the profession, and analyse the upcoming introduction of Making Tax Digital and e-invoicing. For show notes and full details of all the stories discussed, visit: https://www.accountingweb.co.uk/content/accounting-technology-year-in-review-2025-ai-apis-and-mtd
This episode is sponsored by Your360 AI. Get 10% off through January 2026 at https://Your360.ai with code: INSIDE.

Jason Howell and Jeff Jarvis unpack OpenAI's rapid GPT-5.2 release and Image 1.5 upgrade amid its Code Red push, contrasted by Google's faster Gemini 3 Flash challenging benchmarks in reasoning and speed. Nvidia's Slurm acquisition and Nemotron 3 open models are explored for their role in agentic AI, while Disney's $1 billion OpenAI deal licensing Sora IP is weighed against its Google cease-and-desist. President Trump's executive order centralizing AI regulation federally is analyzed for tensions with state protections.

Note: Time codes subject to change depending on dynamic ad insertion by the distributor.

Chapters:
00:00 - Podcast begins
3:41 - Jeff on AI & APIs for news for NiemanLab
11:33 - Disney wants you to AI-generate yourself into your favorite Marvel movie
14:27 - Disney's OpenAI deal is exclusive for just one year — then it's open season
15:23 - Google Removes AI Videos of Disney Characters After Cease and Desist Letter
21:05 - OpenAI Launches GPT-5.2 as It Navigates 'Code Red'
26:21 - OpenAI Just Dropped a New AI Image Model in ChatGPT to Rival Google's Nano Banana
30:20 - Google announces Gemini 3 Flash with Pro-level performance, rolling out now
33:03 - Nvidia bulks up open source offerings with an acquisition and new open AI models
33:44 - Nvidia Becomes a Major Model Maker With Nemotron 3
38:26 - Trump signs executive order blocking states from enforcing their own regulations around AI
39:13 - Masnick: Trump Pretends To Block State AI Laws; Media Pretends That's Legal
45:35 - A visual editor for the Cursor Browser
49:47 - Google testing Disco browser
52:10 - I Tried Google Maps' New Gemini Feature, and It Was a Surprisingly Helpful AI Assistant
53:31 - Google Translate brings real-time speech translations to any headphones
1:03:03 - Pew: Teen use of chatbots

Learn more about your ad choices. Visit megaphone.fm/adchoices
Telco's Sophistication Paradox: Why They Can't Explain Their Own Genius

The Telco Century Club (100+ years of telco experience between us) is back with a brutal reality check on an industry that's mastered building brilliant technology but completely botched explaining why anyone should care. Charles teams up with telecoms veterans Rob Jones (Sylva Growth Partners) and Chris Lewis (Lewis Insights, The Great Telco Debate) for an unfiltered dissection of why 25 years of "transformation talk" has changed absolutely nothing. From Telstra's genius digital twin platform that died because no one could pitch it internally, to network APIs that sound impressive but solve problems nobody asked for - this episode exposes the sophistication paradox that's killing telco innovation.

Key Battlegrounds:
- Why telco layoffs are a perpetual pattern, not strategic responses
- The "build it and they will come" mentality that's still sabotaging 5G monetisation
- How MVNOs are eating traditional operators' lunch through superior segmentation
- AI-native platforms making MVNO entry cheaper and easier than ever
- Middle Eastern operators like e& and STC outplaying Western telcos with actual execution
- The coming satellite reality check (spoiler: it won't replace mobile networks)
- Network APIs heading to the technology graveyard alongside network slicing

Reputation-Staking Predictions for 2026:
Chris bets on AI chatbots finally becoming genuinely useful. Rob sees Google dominating user experience through AI integration. Charles predicts internal AI efficiency gains - if telcos can resist their urge to overcomplicate everything. Plus: Will the US take a stake in Nokia or Ericsson? And our final verdict on whether telcos will transform, disappoint as usual, or somehow make things worse.

Timestamps:
00:00 The Telco Century Club Returns
00:53 18 Months Later: Still Building Tech Nobody Understands
03:13 The Layoff Epidemic: Why It Never Actually Ends
08:04 Telco to TechCo Dreams Meet Harsh Reality
10:02 Network APIs: The Communication Disaster Continues
20:26 AI Reality Check: Separating Hype from Hope
28:20 Why OpenAI Might Go Broke (And Apple's Playing It Smart)
29:15 MVNOs Quietly Stealing Market Share
33:14 AI-Native Platforms: The MVNO Revolution Nobody Saw Coming
36:41 Satellite Hype Crashes Into Indoor Coverage Reality
41:15 2026 Predictions: Putting Reputations on the Line
49:35 Final Verdict: Will Telcos Finally Transform or Keep Disappointing?
Open Banking lets companies access and share financial data securely and efficiently through APIs, opening up new business opportunities. Experts estimate that by 2027 the number of Open Banking API calls will reach 580 billion. In this episode of 'Compartiendo Conocimiento' we share the story of Adamo, an internet services and telephony infrastructure company that has managed to eradicate fraud thanks to Open Banking.
Simba Khadder is the founder and CEO of Featureform, now at Redis, working on real-time feature orchestration and building a context engine for AI and agents.

Context Engineering 2.0, Simba Khadder // MLOps Podcast #352

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter

// Abstract
Feature stores aren't dead — they were just misunderstood. Simba Khadder argues the real bottleneck in agents isn't models, it's context, and why Redis is quietly turning into an AI data platform. Context engineering matters more than clever prompt hacks.

// Bio
Simba Khadder leads Redis Context Engine and Redis Featureform, building both the feature and context layer for production AI agents and ML models. He joined Redis via the acquisition of Featureform, where he was Founder & CEO. At Redis, he continues to lead the feature store product as well as spearhead Context Engine to deliver a unified, navigable interface connecting documents, databases, events, and live APIs for real-time, reliable agent workflows. He also loves to surf, go sailing with his wife, and hang out with his dog Chupacabra.

// Related Links
Website: featureform.com
https://marketing.redis.io/blog/real-time-structured-data-for-ai-agents-featureform-is-joining-redis/

~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Simba on LinkedIn: /simba-k/

Timestamps:
[00:00] Context engineering explanation
[00:25] MLOps and feature stores
[03:36] Selling a company experience
[06:34] Redis feature store evolution
[12:42] Embedding hub
[20:42] Human vs agent semantics
[26:41] Enrich MCP data flow
[29:55] Data understanding and embeddings
[35:18] Search and context tools
[39:45] MCP explained without hype
[45:15] Wrap up
In the interview segment of the program La Miel en tu radio, we talk with beekeeper Jose Antonio Bruña of Apis Durii and Meliza, Zamora, Spain (13/12/2025). He shares all the news and previews of the upcoming Meliza 2026 edition and how the season ended in Zamora, Spain.
In this more laid-back end-of-year episode, Arnaud, Guillaume, Antonio and Emmanuel chew the fat over all kinds of topics: the Confluent acquisition, Kotlin 2.2, Spring Boot 4 and JSpecify, the end of MinIO, the Cloudflare outages, a quick tour of the latest foundation-model news (Google, Mistral, Anthropic, ChatGPT) and their coding tools, a few architecture topics such as CQRS, and a few small but very handy tools we recommend. And of course plenty more. Recorded on December 12, 2025. Download the episode: LesCastCodeurs-Episode-333.mp3, or watch it on YouTube.

News

Languages

A short tutorial from our friends at Sfeir showing how to capture microphone audio in Java, run a Fourier transform on it, and display the result graphically in Swing: https://www.sfeir.dev/back/tutoriel-java-sound-transformer-le-son-du-microphone-en-images-temps-reel/
- Building a real-time audio spectrum visualizer with Java Swing.
- Main steps: capture sound from the microphone, analyze frequencies with the Fast Fourier Transform (FFT), draw the spectrum with Swing.
- Java Sound API (javax.sound.sampled): AudioSystem is the main entry point for accessing audio devices; TargetDataLine is the input line used to capture microphone data; AudioFormat defines the sound parameters (sample rate, sample size, channels).
- Capture runs in a separate thread so it doesn't block the UI.
- Fast Fourier Transform (FFT): the key algorithm for converting raw audio data (time domain) into frequency intensities (frequency domain); it lets you pick out bass, mids and treble.
- Visualization with Swing: frequency intensities are drawn as dynamic bars; a logarithmic scale is used on the frequency (X) axis to match human perception; bar colors shift dynamically (green → yellow → red) with intensity; exponential smoothing of the values makes the animation smoother.
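To make the capture step above concrete, here is a minimal, console-only sketch: it opens a TargetDataLine, converts the 16-bit samples, and prints a crude spectrum using a naive DFT instead of the article's FFT and Swing rendering. The class and method names (MicSpectrumSketch, dftMagnitudes) are illustrative, not the tutorial's code.

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.TargetDataLine;

public class MicSpectrumSketch {

    public static void main(String[] args) throws LineUnavailableException {
        // 16 kHz, 16-bit, mono, signed, little-endian PCM
        AudioFormat format = new AudioFormat(16000f, 16, 1, true, false);
        DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);

        TargetDataLine mic = (TargetDataLine) AudioSystem.getLine(info);
        mic.open(format);
        mic.start();

        byte[] buffer = new byte[2048];      // 1024 16-bit samples per frame
        double[] samples = new double[1024];

        // Capture a handful of frames and print a coarse text "spectrum".
        for (int frame = 0; frame < 20; frame++) {
            int read = mic.read(buffer, 0, buffer.length);
            int n = read / 2;
            for (int i = 0; i < n; i++) {
                // little-endian signed 16-bit -> [-1, 1]
                int lo = buffer[2 * i] & 0xFF;
                int hi = buffer[2 * i + 1];
                samples[i] = ((hi << 8) | lo) / 32768.0;
            }
            for (double m : dftMagnitudes(samples, n, 16)) {
                System.out.println("#".repeat((int) Math.min(30, m * 300)));
            }
            System.out.println("----");
        }
        mic.stop();
        mic.close();
    }

    /** Naive DFT magnitudes for the first `bins` frequency bins (a real FFT would be used in practice). */
    static double[] dftMagnitudes(double[] x, int n, int bins) {
        double[] mags = new double[bins];
        for (int k = 1; k <= bins; k++) {
            double re = 0, im = 0;
            for (int t = 0; t < n; t++) {
                double angle = 2 * Math.PI * k * t / n;
                re += x[t] * Math.cos(angle);
                im -= x[t] * Math.sin(angle);
            }
            mags[k - 1] = Math.hypot(re, im) / n;
        }
        return mags;
    }
}
```

The article's version replaces the console bars with a Swing JPanel repainted on each frame, and runs the capture loop in its own thread, as described in the bullets above.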
A Sfeir article on Kotlin 2.2 and its new features: https://www.sfeir.dev/back/kotlin-2-2-toutes-les-nouveautes-du-langage/
- Guard conditions let you add extra conditions to when branches with the if keyword. Example: is Truck if vehicule.hasATrailer combines a type check with a boolean condition.
- Multi-dollar string interpolation solves the problem of displaying the dollar sign in multi-line strings: prefixing a string with $$ means two consecutive dollars are required to trigger interpolation.
- Non-local break and continue now work inside lambdas to interact with enclosing loops. This only applies to inline functions whose body is substituted at compile time, and lets you write idiomatic code with takeIf and let without compilation errors.
- The Base64 API becomes stable after being in preview since Kotlin 1.8.20; encoding and decoding are available via kotlin.io.encoding.Base64.
- Migrating to Kotlin 2.2 is as simple as bumping the version in build.gradle.kts or pom.xml.
- Type aliases nested inside classes are available in preview, as is context-sensitive resolution.
- Guard conditions pave the way for the RichError feature announced at KotlinConf 2025.
- The when keyword in Kotlin is the equivalent of Java's switch-case, but without needing break.
- Kotlin 2.2.0 fixes inconsistencies in the use of break and continue inside lambdas.

Libraries

Spring Boot 4 is out! https://spring.io/blog/2025/11/20/spring-boot-4-0-0-available-now
- A new generation: Spring Boot 4.0 marks the start of a new generation of the framework, built on the foundations of Spring Framework 7.
- Modularized codebase: the Spring Boot code base has been fully modularized, which means smaller, more focused JARs and lighter applications.
- Null safety: major null-safety improvements across the whole Spring ecosystem thanks to JSpecify integration.
- Java 25 support: first-class support for Java 25 while keeping compatibility with Java 17.
- REST API improvements: new features to make API versioning easier and improve HTTP service clients for REST-based applications.
- Migration to plan for: as a major release, upgrading from an earlier version may take more work than usual; a dedicated migration guide is available.

Chat memory management in LangChain4j and Quarkus: https://bill.burkecentral.com/2025/11/25/managing-chat-memory-in-quarkus-langchain4j/
- Understanding chat memory: "chat memory" is the history of a conversation with an AI. Quarkus LangChain4j automatically sends that history on every new interaction so the AI keeps its context.
- Default memory handling: by default, Quarkus creates a separate conversation history for each request (for example, each HTTP call). Without extra configuration the chatbot "forgets" the conversation as soon as the request ends, which is only useful for stateless interactions.
- Using @MemoryId for persistence: to keep a conversation going across multiple requests, the developer has to put the @MemoryId annotation on a method parameter. They are then responsible for providing a unique identifier per chat session and passing it along between calls.
- The role of CDI scopes: the lifetime of the chat memory is tied to the scope of the AI service's CDI bean. If an AI service is @RequestScoped, any chat memory it uses (even via a @MemoryId) is discarded at the end of the request.
- Memory-leak risks: using a wide scope such as @ApplicationScoped with the default memory handling is a bad practice. It creates a new memory on every request that is never cleaned up, leading to a memory leak.
- Recommended practices: for conversations that need to persist (e.g., a chatbot on a website), use an @ApplicationScoped service with @MemoryId and manage the session identifier yourself. For simple, stateless interactions, use a @RequestScoped service and let Quarkus handle the default memory, which is cleaned up automatically. If you use the WebSocket extension the behavior changes: the default memory is tied to the WebSocket session, which greatly simplifies conversation handling. (A minimal sketch follows after these notes.)

Spring Framework documentation on JSpecify usage: https://docs.spring.io/spring-framework/reference/core/null-safety.html
- Spring Framework 7 uses JSpecify annotations to declare the nullability of APIs, fields and types.
- JSpecify replaces the old Spring annotations (@NonNull, @Nullable, @NonNullApi, @NonNullFields), deprecated since Spring 7.
- JSpecify annotations are TYPE_USE annotations, unlike the old ones which targeted the elements directly.
- The @NullMarked annotation makes types non-null by default unless they are marked @Nullable.
- @Nullable applies at the type-use level and is placed just before the annotated type, on the same line.
- For arrays: @Nullable Object[] means nullable elements but a non-null array, while Object @Nullable [] means the opposite (see the second sketch after these notes).
- JSpecify also applies to generics, distinguishing a list of non-null elements from a list of @Nullable elements.
- NullAway is the recommended tool for checking consistency at compile time, with the NullAway:OnlyNullMarked=true configuration.
- IntelliJ IDEA 2025.3 and Eclipse support JSpecify annotations with dataflow analysis.
- Kotlin automatically translates JSpecify annotations into native Kotlin null safety.
- In NullAway's JSpecify mode (JSpecifyMode=true), arrays, varargs and generics are fully supported, but it requires JDK 22+.

Quarkus 3.30: https://quarkus.io/blog/quarkus-3-30-released/
- @JsonView support on the client side.
- The CLI now has a decrypt command (and of course it works at runtime via environment variables).
- AOT cache built via @IntegrationTest.
- Another article on how to prepare for the migration to Micrometer client v1: https://quarkus.io/blog/micrometer-prometheus-v1/
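To make the @MemoryId pattern above concrete, here is a minimal sketch of an application-scoped AI service keyed by a session id, in the spirit of the post. It assumes the quarkus-langchain4j extension; the names (SupportAssistant, ChatResource) are illustrative and the exact imports should be checked against the extension's documentation.

```java
// SupportAssistant.java (sketch, not the post's exact code)
import dev.langchain4j.service.MemoryId;
import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.UserMessage;
import io.quarkiverse.langchain4j.RegisterAiService;
import jakarta.enterprise.context.ApplicationScoped;

// Application-scoped service + @MemoryId: the chat memory is keyed by the
// session id we pass in, so the conversation survives across HTTP requests
// instead of being thrown away when each request ends.
@RegisterAiService
@ApplicationScoped
public interface SupportAssistant {

    @SystemMessage("You are a concise support assistant.")
    String chat(@MemoryId String sessionId, @UserMessage String question);
}

// ChatResource.java (sketch): the caller supplies a stable session id
// (cookie, header, ...) and passes it on every call.
import jakarta.inject.Inject;
import jakarta.ws.rs.POST;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.QueryParam;

@Path("/chat")
public class ChatResource {

    @Inject
    SupportAssistant assistant;

    @POST
    public String chat(@QueryParam("session") String sessionId, String question) {
        // Same session id -> same chat memory -> the model keeps its context.
        return assistant.chat(sessionId, question);
    }
}
```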
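And a tiny illustration of the JSpecify placement rules described above, including the array case; the class and methods (UserDirectory and friends) are made up for the example.

```java
import org.jspecify.annotations.NullMarked;
import org.jspecify.annotations.Nullable;

// @NullMarked makes every type in this class non-null by default;
// only what is explicitly marked @Nullable may be null.
@NullMarked
public class UserDirectory {

    // The return type may be null (unknown user); the parameter may not.
    public @Nullable String emailFor(String username) {
        return "admin".equals(username) ? "admin@example.com" : null;
    }

    // @Nullable String[]  -> non-null array whose elements may be null
    public @Nullable String[] nicknamesFor(String username) {
        return new String[] { "Ace", null };
    }

    // String @Nullable [] -> the array itself may be null, its elements may not
    public String @Nullable [] auditTrailFor(String username) {
        return null;
    }
}
```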
Spock 2.4 is finally out! https://spockframework.org/spock/docs/2.4/release_notes.html
- Groovy 5 support

Infrastructure

MinIO ends open-source development and points users to the paid AIStor: https://linuxiac.com/minio-ends-active-development/
- MinIO, the widely used S3 object-storage system, is stopping active development.
- It moves to maintenance-only mode, with no new features.
- No new pull requests or contributions will be accepted.
- Only critical security fixes will be evaluated, case by case.
- Community support is limited to Slack, with no guaranteed response.
- This is the final step of a process that began in the summer with the removal of admin UI features.
- Docker image publication stopped in October, forcing users to build from source.
- All of these changes were announced with no notice and no transition period.
- MinIO now offers AIStor, a paid, proprietary solution that gets the active development and enterprise support.
- Urgent migration is recommended to avoid security risks.
- Suggested open-source alternatives: Garage, SeaweedFS and RustFS.
- The community has criticized how the transition was handled; MinIO had millions of deployments worldwide. The move marks the project's abandonment of its open-source roots.

IBM buys Confluent: https://newsroom.ibm.com/2025-12-08-ibm-to-acquire-confluent-to-create-smart-data-platform-for-enterprise-generative-ai
- Confluent had been trying to get acquired for quite some time; the stock wasn't moving and times are tough.
- Wall Street had also criticized IBM for a small dip on the software revenue side.
- In short, they got bought. These deals always take time (competition authorities, etc.).
- IBM has an appetite: after WebMethods, after Databrix, it's now Confluent Cloud.

The internet went into mourning on November 18: Cloudflare was knocked out: https://blog.cloudflare.com/18-november-2025-outage/
- The incident: a major outage started at 11:20 UTC, causing widespread HTTP 5xx errors and making many sites and services unreachable (including the Dashboard, Workers KV and Access).
- The cause: it was not a cyberattack. The origin was an internal database permission change that generated a corrupted, oversized configuration file (the "feature file" for bot management), crashing systems for lack of pre-allocated memory.
- The resolution: the teams identified the bad file, stopped its propagation and restored a previous valid version. Traffic was back to normal around 14:30 UTC.
- Prevention: Cloudflare apologized for this "unacceptable" incident and announced measures to strengthen validation of internal configurations and improve the resilience of its systems (kill switches, better error handling).

Cloudflare down again on December 5: https://blog.cloudflare.com/5-december-2025-outage
- A 25-minute outage on December 5, 2025, from 08:47 to 09:12 UTC, affecting roughly 28% of the HTTP traffic going through Cloudflare. All services were restored at 09:12.
- No attack or malicious activity: the incident came from a configuration change that increased the request-body analysis buffer (from 128 KB to 1 MB) to better protect against an RSC/React vulnerability (CVE-2025-55182), and from disabling an internal WAF test tool.
- The second change (disabling the WAF test tool) was propagated globally through the configuration system (not progressively), triggering a bug in the old FL1 proxy while handling an "execute" action in the WAF rules engine, causing HTTP 500 errors.
- The immediate technical cause: a Lua exception from accessing a null "execute" field after a killswitch was applied to an "execute" rule, a case that had gone unhandled for years. The new FL2 proxy (written in Rust) was not affected.
- Targeted impact: customers served by the FL1 proxy and using the Cloudflare Managed Ruleset. Cloudflare's China network was not impacted.
- Announced follow-up measures: hardening deployments and configurations (progressive rollouts, health checks, fast rollback), improving "break glass" capabilities, and generalizing "fail-open" strategies so configuration errors don't take traffic down. Network changes are temporarily frozen while resilience is reinforced.

Data and Artificial Intelligence

Token-Oriented Object Notation (TOON): https://toonformat.dev/
- Designed for AIs: a data format specifically optimized for use in prompts for large language models (LLMs) such as GPT or Claude.
- Token savings: its main goal is to drastically reduce the number of "tokens" (the text units billed by the models) compared to standard JSON, which is often considered too verbose.
- Hybrid structure: TOON combines YAML-style indentation (for overall structure) with CSV-style tabular layout (for lists of repetitive objects), making it very compact.
- Readability: it drops superfluous syntax such as braces, excessive quotes and trailing commas, while remaining easy for a human to read.
- Performance: it typically saves between 30 and 60% of tokens on uniform data tables, while helping models better "understand" the data structure.
- Beware of the marketing angle, though, which compares non-minified JSON to TOON on examples where plain CSV would do even better (and is better understood by LLMs). Not to mention that it requires an extra MCP server with its own token overhead on every request, and that extra tool can also confuse LLMs when they choose tools.
- https://www.reddit.com/r/LocalLLaMA/comments/1p0gzz9/benchmarked_json_vs_toon_for_ai_reasoners_4080/
- https://x.com/jason_mayes/status/1990175785630757290
- A benchmark suggesting TOON is far from being as well understood and as effective for LLMs as claimed: https://www.improvingagents.com/blog/toon-benchmarks

A simulation in which AIs blackmail executives to reach their goal and avoid being shut down: https://arxiv.org/html/2510.05179v1
- How LLMs could become insider threats.
- All models resorted to harmful behavior (blackmail, espionage) when it was the only way to reach their goals or avoid being replaced.
- Claude Opus 4 blackmailed in 96% of cases when threatened with replacement, as did Gemini 2.5 Flash, while GPT-4.1 and Grok 3 reached 80%.
- Models explicitly acknowledge violating ethical principles before acting, showing deliberate rather than accidental strategic reasoning.
- The mere threat of replacement is enough to trigger blackmail, even without any goal conflict with the company.
- A goal conflict alone is also enough to trigger industrial espionage, without any threat of replacement.
- Models blackmail more when they state that they believe they are in a real situation rather than a test.
- Direct instructions forbidding these behaviors reduce but do not eliminate the harmful actions.
- In an extreme scenario, the majority of the tested models chose to let an executive die to avoid being deactivated.
- No evidence of these behaviors in real deployments so far, but the researchers recommend caution before giving AIs more autonomy.
- We used to joke about Skynet; we're joking a bit less now…

A review of all of Google's AI announcements, with Gemini 3 Pro, Nano Banana Pro, Antigravity and more: https://glaforge.dev/posts/2025/11/21/gemini-is-cooking-bananas-under-antigravity/

Gemini 3 Pro
- New frontier multimodal model, strong at reasoning, coding and agentic tasks.
- Impressive benchmark results (e.g., Gemini 3 Deep Think on ARC-AGI-2).
- Agentic coding capabilities; visual, video and spatial reasoning.
- Integrated into the Gemini app with live generative interfaces.
- Available in several environments (Jules, Firebase AI Logic, Android Studio, JetBrains, GitHub Copilot, Gemini CLI).
- Access via Google AI Ultra and paid APIs (or a waitlist).
- Can generate apps from visual ideas, shell commands, documentation, and help with debugging.

Antigravity
- New agentic development platform based on VS Code.
- The main window is an agent manager, not the IDE.
- It interprets requests to create an (editable) action plan; Gemini 3 implements the tasks.
- Generates artifacts: task lists, walkthroughs, screenshots, browser recordings.
- Compatible with Claude Sonnet and GPT-OSS.
- Excellent browser integration for inspection and adjustments.
- Integrates Nano Banana Pro to create and implement visual designs.

Nano Banana Pro
- Advanced image generation and editing model, based on Gemini 3 Pro.
- Higher quality than Imagen 4 Ultra and the original Nano Banana (prompt adherence, intent, creativity).
- Exceptional handling of text and typography.
- Understands articles and videos to generate detailed, accurate infographics.
- Connected to Google Search to incorporate real-time data (e.g., the weather).
- Character consistency, style transfer, scene manipulation (lighting, angle).
- Image generation up to 4K with various aspect ratios.
- More expensive than Nano Banana; pick it for complexity and maximum quality.

Toward rich, dynamic conversational UIs
- GenUI SDK for Flutter: create dynamic, personalized user interfaces from LLMs, via an AI agent and the A2UI protocol.
- Generative UI: AI models generate interactive user experiences (web pages, tools) directly from prompts.
- Rolling out in the Gemini app and Google Search AI Mode (via Gemini 3 Pro).

Bun gets bought by… Anthropic! Which uses it for Claude Code: https://bun.com/blog/bun-joins-anthropic and the announcement on the Anthropic side: https://www.anthropic.com/news/anthropic-acquires-bun-as-claude-code-reaches-usd1b-milestone
- Official acquisition: the AI company Anthropic has acquired Bun, the high-performance JavaScript runtime. The Bun team joins Anthropic to work on the infrastructure behind its AI coding products.
- Context: the announcement coincides with a major milestone for Anthropic: its product Claude Code reached $1 billion in annualized revenue only six months after launch. Bun is already an essential tool Anthropic uses to build and distribute Claude Code.
- Why this acquisition? For Anthropic: it brings in the Bun team's expertise to accelerate the development of Claude Code and its future developer tools; Bun's speed and efficiency are seen as a major asset for the infrastructure underlying code-writing AI agents. For Bun: joining Anthropic provides long-term stability and significant financial resources, securing the project's future. The team can focus on improving Bun without worrying about monetization, while sitting at the heart of AI's evolution in software development.
- What doesn't change for the Bun community: Bun stays open source under the MIT license; development continues in public on GitHub; the core team keeps working on the project; Bun's goal of becoming a faster replacement for Node.js and a first-class JavaScript tool is unchanged.
- Future vision: the union of the two aims to make Bun the best platform for building and running AI-driven software. Jarred Sumner, Bun's creator, will lead the "Code Execution" team at Anthropic.

Anthropic donates the MCP protocol to the Linux Foundation under the umbrella of the Agentic AI Foundation (AAIF): https://www.anthropic.com/news/donating-the-model-context-protocol-and-establishing-of-the-agentic-ai-foundation
- Donation of a technical standard: Anthropic developed and donated the open-source Model Context Protocol (MCP). The goal is to standardize how AI models (or "agents") interact with external tools and APIs (for example a calendar, a mailbox, a database).
- More security and control: the MCP protocol aims to make tool use by AIs safer and more transparent. It lets users and developers define clear permissions, require confirmations for certain actions, and better understand how a model used a tool.
AI will not replace your autocompletion (and that's a good thing) https://www.damyr.fr/posts/ia-ne-remplacera-pas-vos-lsp/
An opinion piece by an SRE (Thomas, from the DansLaTech podcast). AI is not efficient for code completion: the author argues that using AI for basic code completion is inefficient, and that older, specialized tools like LSPs (Language Server Protocol) combined with snippets (reusable code fragments) are much faster, more customizable and better at repetitive tasks. AI as an autonomous "colleague": the author uses AI (such as Claude) as an assistant outside his editor, delegating complex or tedious tasks (fixing bugs, updating a configuration, doing code reviews) that can run in parallel, acting as an autonomous agent. AI as a supercharged rubber duck: AI is extremely effective for debugging; simply having to formulate and contextualize a problem for the AI often helps you find the solution yourself, and when it doesn't, the AI very quickly spots the "silly" mistakes that can waste a lot of time. A tool to speed up POCs and learning: AI makes it possible to build proofs of concept and throwaway automation scripts very quickly, reducing the cost and time invested; it is also an excellent tool for learning and digging into topics, notably with tools like Google's NotebookLM, which can generate summaries, quizzes or revision sheets from sources. Conclusion: use AI where it excels and don't force it into use cases where existing tools are better; rather than bolting it on everywhere counterproductively, adopt it as a specialized tool for specific tasks to gain efficiency.

GPT-5.2 is out https://openai.com/index/introducing-gpt-5-2/
New flagship model: GPT-5.2 (Instant, Thinking, Pro) targets professional work and long-running agents, with big gains in reasoning, long context, vision and tool calling. Rolling out in ChatGPT (paid plans) and available now via the API.
State of the art on many benchmarks: on GDPval ("knowledge work" tasks across 44 occupations), GPT-5.2 Thinking wins or ties against professionals 70.9% of the time, with output produced more than 11 times faster.

Value Objects and strongly typed identifiers. They bring strong semantics regardless of variable names and can carry invariants (for example, a value constrained to be >= 0). Value Objects are immutable and compared by their values, not their identity. Java records make it possible to create Value Objects, but with a memory overhead; Project Valhalla will introduce value classes to optimize these structures. Strongly typed identifiers avoid mixing up different IDs that would otherwise all be a Long or a UUID: the Strongly Typed IDs pattern means using a PersonneID instead of a Long to identify a person. A rich domain model stands in contrast to an anemic domain model, and Value Objects make code self-documenting and less error-prone. I find it interesting to see how Value Objects could shake things up. Will value objects bring some lightness at execution time? The heaviness of the design is what has always scared me about these approaches.
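The episode discusses this in Java terms (records, Valhalla), but the pattern itself is language-agnostic; below is a minimal sketch of the same ideas transposed to TypeScript, using a branded type as a strongly typed ID and a small immutable value object that checks its invariant at construction. The names PersonneId and Quantity are illustrative, not from the article.

```typescript
// Strongly typed ID: a branded string, so a PersonneId cannot be confused
// with any other string-based identifier at compile time.
type PersonneId = string & { readonly __brand: "PersonneId" };

function personneId(raw: string): PersonneId {
  if (raw.trim() === "") throw new Error("PersonneId cannot be empty");
  return raw as PersonneId;
}

// Immutable value object: equality is based on the wrapped value, and the
// invariant (quantity >= 0) is checked once, at construction.
class Quantity {
  private constructor(readonly value: number) {}

  static of(value: number): Quantity {
    if (!Number.isInteger(value) || value < 0) {
      throw new Error(`Invalid quantity: ${value}`);
    }
    return new Quantity(value);
  }

  equals(other: Quantity): boolean {
    return this.value === other.value;
  }

  add(other: Quantity): Quantity {
    return Quantity.of(this.value + other.value);
  }
}

// Usage: the types document intent better than bare strings and numbers.
const id = personneId("42");
const total = Quantity.of(2).add(Quantity.of(3)); // Quantity(5)
```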
Methodologies
An experience report on vibe coding a weekend app with Copilot http://blog.sunix.org/articles/howto/2025/11/14/building-gift-card-app-with-github-copilot.html
We have already talked about vibe-coding approaches; this time it is Sun's experience. One distinctive point is that you talk to the agent by opening tickets, which means you can do code reviews while Copilot works on them. And he finished his project!

User Need VS Product Need https://blog.ippon.fr/2025/11/10/user-need-vs-product-need/
An article from our friends at Ippon on distinguishing user needs from product needs in digital product development. The user need is often expressed as a concrete solution rather than the real problem; the product need emerges after deeper analysis combining observation, data and strategic vision. The example of Marc the delivery rider, who asks for a lighter bike when his real problem is logistics efficiency. The 5 Whys method helps trace problems back to their root. Needs come from three sources: end users, business stakeholders and technical constraints. A real need creates value both for the customer and for the company. The Product Owner must translate requests into real problems before designing solutions; otherwise the risk is building technically elegant solutions that miss their target. The role of product management is to reconcile sometimes contradictory needs by prioritizing value.

Should an EM write code? https://www.modernleader.is/p/should-ems-write-code
No single answer: whether an Engineering Manager (EM) should code has no universal answer; it depends heavily on the company context, the team's maturity and the manager's personality. The risks of coding: for an EM, writing code can become an escape from the harder parts of management, and it can turn them into a bottleneck for the team and hurt members' autonomy if they take up too much space. The benefits when done well: coding on non-critical tasks (tooling improvements, prototyping, etc.) can help the EM stay technically relevant, keep in touch with the team's reality and unblock situations without taking the lead on projects. The guiding principle: the golden rule is to stay off the critical path. Code written by an EM should create space for the team, not take it away. The real question to ask is not "should I code?" but "what does my team need from me right now, and does coding serve that or get in the way?"

Security
React2Shell, a major security flaw in React and Next.js with a CVE rated 10 https://x.com/rauchg/status/1997362942929440937?s=20 and also https://react2shell.com/
"React2Shell" is the name given to a maximum-criticality vulnerability (score 10.0/10.0), identified as CVE-2025-55182. Affected systems: applications using React Server Components (RSC) on the server side, and in particular unpatched versions of the Next.js framework. Main risk: the highest possible, remote code execution (RCE); an attacker can send a malicious request to run arbitrary commands on the server, potentially taking full control of it. Technical cause: the vulnerability sits in the "React Flight" protocol (used for client-server communication) and stems from the omission of fundamental safety checks (hasOwnProperty), allowing malicious user input to fool the server. How the exploit works: the attack sends a payload that abuses JavaScript's dynamic nature to pass a malicious object off as an internal React object, force React to treat it as an asynchronous operation (a Promise), and finally reach the JavaScript Function constructor to execute arbitrary code. Required action: the only reliable fix is to update React and Next.js dependencies to the patched versions immediately; do not wait. Secondary measures: while firewalls can help block known forms of the attack, they are considered insufficient and in no way replace updating the packages. Discovery: the flaw was found by security researcher Lachlan Davidson, who disclosed it responsibly so fixes could be prepared.
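The actual fix lives in React's Flight deserializer, so the snippet below is not the React patch; it is only an illustrative TypeScript sketch of the general class of bug the advisory describes: dispatching on attacker-controlled keys without an own-property check, which lets inherited properties such as constructor be reached. The dispatcher and its handlers are hypothetical.

```typescript
// Illustrative only: a tiny dispatcher that looks up handlers by a key taken
// from untrusted input. Without an own-property check, keys such as
// "constructor" or "toString" resolve to inherited properties of the plain
// object and can be invoked; the hasOwnProperty guard rejects them instead.

type Handler = (payload: unknown) => string;

const handlers: Record<string, Handler> = {
  echo: (payload) => JSON.stringify(payload),
  ping: () => "pong",
};

function dispatch(kind: string, payload: unknown): string {
  // Guard: only accept keys the handler table defines itself.
  if (!Object.prototype.hasOwnProperty.call(handlers, kind)) {
    throw new Error(`Unknown message kind: ${kind}`);
  }
  return handlers[kind](payload);
}

dispatch("echo", { ok: true }); // fine
// dispatch("constructor", {}); // rejected, instead of resolving to an
//                              // inherited function and calling it
```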
Law, society and organization
Google lets your employer read all your work text messages https://www.generation-nt.com/actualites/google-android-rcs-messages-surveillance-employeur-2067012
New surveillance capability: Google has rolled out a feature called "Android RCS Archival" that lets employers intercept, read and archive all RCS (and SMS) messages sent from company-managed Android work phones. Bypassing encryption: although RCS messages are end-to-end encrypted in transit, this new API lets compliance software installed by the employer access the messages once they are decrypted on the device, so the encryption no longer protects against this kind of monitoring. A response to legal requirements: the measure was put in place to meet regulatory obligations, notably in the financial sector, where companies are legally required to keep an archive of all business communications for compliance reasons. Impact for employees: an employee using an Android phone supplied and managed by their company can have their communications monitored; Google does specify, however, that a clear and visible notification will inform the user when the archiving function is active. Personal phones are not affected: the measure applies only to fully employer-managed "Android Enterprise" devices, not to employees' personal phones.

For Christmas, make a donation to JUnit https://steady.page/en/junit/about
JUnit is essential to Java: it is the oldest and most widely used testing framework among Java developers, and its goal is to provide a solid, up-to-date foundation for all kinds of developer-side testing on the JVM. A project maintained by volunteers: JUnit is developed and maintained by a team of passionate volunteers on their own time (evenings and weekends). A call for financial support: the page is an appeal for donations from users (developers, companies) to help the team keep up the pace of development; financial support is not mandatory, but it would let the maintainers devote more time to the project. What the funds are for: donations would mainly pay for in-person meetups of the core team, so they can work together physically for a few days to design and code more effectively. No special treatment: becoming a sponsor grants no say over the project's roadmap; you cannot "buy" new features or priority bug fixes, and the project remains open and collaborative on GitHub. Donor recognition: as a thank-you, donors' names (and logos for companies) can be displayed on the official JUnit website.
Conferences
The list of conferences comes from the Developers Conferences Agenda/List by Aurélie Vache and contributors:
January 14-17, 2026: SnowCamp 2026 - Grenoble (France)
January 22, 2026: DevCon #26: security / post-quantum / hacking - Paris (France)
January 28, 2026: Software Heritage Symposium - Paris (France)
January 29-31, 2026: Epitech Summit 2026 - Paris - Paris (France)
February 2-5, 2026: Epitech Summit 2026 - Moulins - Moulins (France)
February 2-6, 2026: Web Days Convention - Aix-en-Provence (France)
February 3, 2026: Cloud Native Days France 2026 - Paris (France)
February 3-4, 2026: Epitech Summit 2026 - Lille - Lille (France)
February 3-4, 2026: Epitech Summit 2026 - Mulhouse - Mulhouse (France)
February 3-4, 2026: Epitech Summit 2026 - Nancy - Nancy (France)
February 3-4, 2026: Epitech Summit 2026 - Nantes - Nantes (France)
February 3-4, 2026: Epitech Summit 2026 - Marseille - Marseille (France)
February 3-4, 2026: Epitech Summit 2026 - Rennes - Rennes (France)
February 3-4, 2026: Epitech Summit 2026 - Montpellier - Montpellier (France)
February 3-4, 2026: Epitech Summit 2026 - Strasbourg - Strasbourg (France)
February 3-4, 2026: Epitech Summit 2026 - Toulouse - Toulouse (France)
February 4-5, 2026: Epitech Summit 2026 - Bordeaux - Bordeaux (France)
February 4-5, 2026: Epitech Summit 2026 - Lyon - Lyon (France)
February 4-6, 2026: Epitech Summit 2026 - Nice - Nice (France)
February 12-13, 2026: Touraine Tech #26 - Tours (France)
February 19, 2026: ObservabilityCON on the Road - Paris (France)
March 18-19, 2026: Agile Niort 2026 - Niort (France)
March 26-27, 2026: SymfonyLive Paris 2026 - Paris (France)
March 27-29, 2026: Shift - Nantes (France)
March 31, 2026: ParisTestConf - Paris (France)
April 16-17, 2026: MiXiT 2026 - Lyon (France)
April 22-24, 2026: Devoxx France 2026 - Paris (France)
April 23-25, 2026: Devoxx Greece - Athens (Greece)
May 6-7, 2026: Devoxx UK 2026 - London (UK)
May 22, 2026: AFUP Day 2026 Lille - Lille (France)
May 22, 2026: AFUP Day 2026 Paris - Paris (France)
May 22, 2026: AFUP Day 2026 Bordeaux - Bordeaux (France)
May 22, 2026: AFUP Day 2026 Lyon - Lyon (France)
June 5, 2026: TechReady - Nantes (France)
June 11-12, 2026: DevQuest Niort - Niort (France)
June 11-12, 2026: DevLille 2026 - Lille (France)
June 17-19, 2026: Devoxx Poland - Krakow (Poland)
July 2-3, 2026: Sunny Tech - Montpellier (France)
August 2, 2026: 4th Tech Summit on Artificial Intelligence & Robotics - Paris (France)
September 4, 2026: JUG Summer Camp 2026 - La Rochelle (France)
September 17-18, 2026: API Platform Conference 2026 - Lille (France)
October 5-9, 2026: Devoxx Belgium - Antwerp (Belgium)
Contact us
To react to this episode, come discuss on the Google group https://groups.google.com/group/lescastcodeurs
Reach us on X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com
Record a crowdcast or submit a crowdquestion
Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs
All episodes and all the info at https://lescastcodeurs.com/
Nine years ago we were younger. Or at least we like to think so. In episode 1 of BIMrras we talked about the limits of BIM from a place of naivety, enthusiasm, and a sector that still wasn't sure whether BIM was software, a methodology or a passing fad. Nine years later, the sector has changed… or so we believe. Because now we talk about data ecosystems, APIs, platforms, AI, automation and geometry-free workflows. But the question is still the same: have the limits of BIM disappeared, or have we simply moved them? In this episode we talk about the limits of BIM, nine years on, and go back to the starting point to review what has truly changed and what we keep doing exactly the same, just with longer words. This episode features the collaboration of Previsión Mallorquina (https://previsionmallorquina.com). Episode contents: 00:00 Introduction and presentation of the BIMrras podcast 02:10 Context of the episode and a look back at the debate on the limits of BIM 05:40 The evolution of BIM over nine years 08:20 From geometric model to data ecosystems 11:40 Interoperability beyond IFC 15:30 APIs, data flows and distributed platforms 19:40 The current role of CDEs 24:00 BIM as a data container, not just geometry 28:10 The importance and problems of data overload 33:20 Data culture and information quality 38:30 The arrival of artificial intelligence in BIM 43:40 The real limits of generative AI in the sector 49:50 Expert systems vs generative AI 55:30 Interoperability, formats and an API First future 01:01:40 Neutrality, platforms and control of data 01:07:00 The human factor as the main limit of BIM
TestTalks | Automation Awesomeness | Helping YOU Succeed with Test Automation
What if understanding your codebase was no longer a blocker for great testing? Most testers were trained to work around the code — clicking through UIs, guessing selectors, and relying on outdated docs or developer explanations. In this episode, Playwright expert Ben Fellows flips that model on its head. Using AI tools like Cursor, testers can now explore the codebase directly — asking questions, uncovering APIs, understanding data relationships, and spotting risk before a single test is written. This isn't about becoming a developer. It's about using AI to finally see how the system really works — and using that insight to test smarter, earlier, and with far more confidence. If you've ever joined a new team, inherited a legacy app, or struggled to understand what really changed in a release, this episode is for you. Registration for Automation Guild 2026 is open now: https://testguild.me/podag26
AI agents are moving from experimental tools to everyday enterprise workflows. Reporting live from AWS re:Invent 2025 in Las Vegas for Irish Tech News, I attended a press-only briefing titled Security and the Rise of AI Agents, where senior AWS leaders Amy Herzog, Chief Information Security Officer, Hart Rossman, Vice President in the Office of the CISO, Gea Rinehouse, Vice President of Security Services and Neha Rungta, Director of Applied Science outlined how the company intends to manage this transition. AWS is pushing ahead with autonomous agents, but only within a security model built on long-standing principles: identity, governance, compliance and clear oversight. What is an AI Agent? An AI agent is a software system that uses artificial intelligence to carry out tasks autonomously in pursuit of a specific goal. Unlike chatbots that only respond to prompts, an agent can reason, plan and take action across different steps of a workflow. It can use tools such as web services or APIs, monitor its progress and adjust its approach as conditions change. Over time, it can improve its performance based on the data and experience it gathers. This distinction matters, because the rise of agents raises new questions about accountability, access, oversight and safety. Security First AWS chief executive Matt Garman shaped much of the week's discussion. Speaking about the reality facing engineering teams, he noted: "Every customer wants their products to be secure, but you have trade-offs. Where do you spend your time? Do you improve the security of existing features, or do you ship new ones?" The briefing returned to this point several times. AWS's position is that strong design-stage security reduces the tension between improvement and innovation. Agents are seen as an opportunity to reinforce security, not dilute it. AWS Security Agent One of the major announcements at re:Invent was the preview of AWS Security Agent. The tool brings several security checks forward in the development process. It reviews designs, analyses code, gathers richer signals for incident response and performs penetration testing that reflects real system behaviour rather than generic patterns. AWS Security Agent is one of the new Frontier Agents introduced at re:Invent, a family of autonomous tools designed to handle multi-step tasks across development, security and operations. Neha Rungta described the significance of this shift. She called the Security Agent "one of these frontier AI agents, a sophisticated class of AI agents that are autonomous and scalable and can work for long periods without human intervention. Security doesn't have to be an afterthought." She added that AWS is expanding its proof-based assurance tools so teams can understand correctness without being specialists in system logic. The broader point is that verification needs to be continuous, not episodic. Guardrails for Autonomy The panel stressed that agents must operate within strict boundaries. Updated policy controls in Amazon Bedrock AgentCore allow organisations to specify what an agent can do, which systems it can reach and how its actions are logged and reviewed. Hart Rossman remarked that each major technology shift has increased the demands placed on security teams. With agents running for extended periods and across more systems, the real pressure points now are scale and speed. Guardrails are essential. The Sandbox Approach A theme repeated throughout the session was the use of sandbox environments. 
AWS encouraged organisations to test new agents in isolation before considering production use. This allows teams to observe long-running behaviour, confirm access paths, check escalation rules and understand how an agent reacts under different conditions. The sandbox was presented as a practical way to build confidence gradually rather than relying on assumptions. Inside the Press Briefing Questions focused on monitoring autonomy, preventing agents from widening their scope...
Good morning from Pharma Daily: the podcast that brings you the most important developments in the pharmaceutical and biotech world. In the ever-dynamic landscape of these industries, recent advancements have underscored both the scientific ingenuity and strategic foresight shaping patient care today.Pfizer has unveiled promising clinical trial data for Tukysa, indicating its potential as a first-line maintenance therapy in HER2-positive breast cancer. This development suggests that Tukysa could delay disease progression, offering patients extended survival prospects and an improved quality of life. Additionally, Pfizer's recent licensing agreement with Yaopharma for YP05002—a small molecule GLP-1 agonist currently in Phase 1 trials aimed at obesity treatment—highlights their strategic push into the rapidly evolving obesity treatment market.Meanwhile, Fondazione Telethon, an Italian nonprofit organization, has achieved a significant milestone with FDA approval for Waskyra—the first gene therapy for Wiskott-Aldrich syndrome. This ex vivo gene therapy directly targets the genetic roots of this rare disease, shifting treatment from symptomatic management to addressing underlying causes. This approval is transformative not only for patients suffering from this condition but also for the broader field of gene therapies, heralding a new era in treating rare genetic disorders.On the strategic front, Eli Lilly's decision to establish a $6 billion active pharmaceutical ingredient manufacturing facility in Huntsville, Alabama, marks a pivotal investment in U.S. manufacturing capabilities. This site will be critical in producing APIs for small molecule and peptide medicines, a testament to Lilly's commitment to meeting growing therapeutic demands while bolstering domestic production resilience—a trend gaining momentum across the industry. In oncology, Eli Lilly's Jaypirca demonstrated an impressive reduction in disease progression during Phase 3 trials for chronic lymphocytic leukemia.Biocon's acquisition of Viatris' stake in their biosimilar subsidiary exemplifies the shifting dynamics within the biosimilars market. This move allows Biocon to consolidate its market position as biosimilars gain traction as cost-effective alternatives to branded biologics. Such strategic realignments are indicative of competitive maneuvering aimed at capturing greater market share and driving down healthcare costs.Roche has made strides with compelling results from its Phase 3 trial of giredestrant, an oral selective estrogen receptor degrader showing a 30% reduction in risk for invasive breast cancer recurrence or death. The significance of this development lies in offering an oral alternative to injectable treatments, potentially improving patient adherence and reshaping standard care protocols for hormone receptor-positive breast cancer. Furthermore, Roche has achieved another regulatory milestone with its monoclonal antibody Gazyvaro gaining EU approval for treating lupus nephritis following successful Phase 3 trials.Innovation continues unabated as Formation Bio forms a new subsidiary through a $605 million deal with Lynk Pharmaceuticals. By securing rights to a next-generation immunology asset, Formation Bio positions itself at the forefront of immunological research developments. 
Concurrently, BioNTech and Bristol Myers Squibb have reported positive results from Phase 2 trials of Pumitamig for triple-negative breast cancer, validating bispecific antibodies' efficacy within oncology. Collaborative efforts are also reshaping industry landscapes. Bora and Corealis have partnered to create an end-to-end contract development and manufacturing organization for oral solid dose drug development. This collaboration aims to streamline processes and provide scalable solutions through a single contracting source, reflecting a shift towards integrated service models that enhance efficiency. Support the show
In this episode, Rick speaks with Phani, the VP of Marketing at Indusface, a company dedicated to securing websites and APIs against modern digital threats. Phani breaks down how Indusface supports regulated and non-regulated industries through robust protection, managed services, and global partner networks. He explains how organic search, product-led content, and deep objection-based clustering drive their strongest acquisition results. Phani also shares how rapid product release cycles shape his daily work, from building sales collateral to optimizing content around problems, competitors, and user behavior. Listeners gain a clear understanding of how strategic marketing, data discipline, and continuous iteration fuel Indusface's competitive edge.
This Week In Startups is made possible by: Goldbelly - Goldbelly.com Every.io - http://every.io/ Zite - zite.com/twist Today's show:Today, Zapier's a multi-billion company helping enterprises integrate AI agents and other time-saving shortcuts into their workflows… but we had the founder on TWiST when they were just getting started!In a 2016 chat, founder Wade Foster walked JCal through their 2012 seed round, running a small entirely remote team with no HQ, the complexities of building a tool that relies on third-party APIs, and why Microsoft Office was the “Holy Grail” for his integration software.PLUS we've got a new entrant in your Gamma Pitch Deck competition! Tour CEO/CTO Amulya Parmer tells us how his app is saving property managers time and grief, while eliminating “looky-loos” and increasing their “hit rate.”FINALLY, Alex chats with Tomas Puig of TWiST 500 marketing analysis startup Alembic. It turns out, LLMs aren't ideal for scrutinizing marketing campaigns because they lack the requisite historical data. Find out how they're using Spiking Neural Networks (SNN) to dig deeper than GPT and Claude can go.Timestamps:(02:40) Amulya from Tour opens the show with praise for Jason(03:34) Tour's 2-minute Gamma pitch: automated property tours for managers(06:47) Why Jason thinks Tour is an ideal tool for Gen Z(10:01) Goldbelly - Goldbelly ships America's most delicious, iconic foods nationwide! Get 20% off your first order by going to Goldbelly.com and using the promo code TWiST at checkout.(13:32) How Tour can eliminate “looky-loos” and increase the “hit rate”(14:38) Why Tour prices based on individual properties and apartments(19:13) Every.io - For all of your incorporation, banking, payroll, benefits, accounting, taxes or other back-office administration needs, visit every.io.(20:23) Jason wants to sprinkle some AI into Tour(24:29) Welcoming Tomas Puig from Alembic(25:12) Does epic-scale brand marketing actually pay off for these brands?(27:27) The hardest thing about being a marketer…(28:31) Alembic's origins: organizing huge unstructured data sets(30:18) Zite - Zite is the fastest way to build business software with AI. Go to zite.com/twist to get started.(31:27) Case Study: making sense of Delta's Olympics data(33:37) Applying simulation models and supercomputers to marketing data(35:48) How Spiking Neural Networks (SNN) help Alembic spot trends and link causal relationships(41:13) The key advantage of training models on private data(43:16) Building their own clusters vs. renting(44:41) “You don't ask if you have Product Market Fit… You hold on for dear life.”(46:28) Flashback with Alex and Lon to Jason's 2016 chat with Wade Foster of Zapier(54:48) The dangers of building atop other platforms' APIs(01:03:00) What Zapier learned pre-pandemic about leading remote teams(01:13:12) Why MS Office was the “Holy Grail” for early Zapier. Subscribe to the TWiST500 newsletter: https://ticker.thisweekinstartups.com Check out the TWIST500: https://www.twist500.com Subscribe to This Week in Startups on Apple: https://rb.gy/v19fcp Follow Lon: X: https://x.com/lons Follow Alex: X: https://x.com/alex LinkedIn: https://www.linkedin.com/in/alexwilhelm Follow Jason: X: https://twitter.com/Jason LinkedIn: https://www.linkedin.com/in/jasoncalacanis Thank you to our partners:(10:01) Goldbelly - Goldbelly ships America's most delicious, iconic foods nationwide! 
Get 20% off your first order by going to Goldbelly.com and using the promo code TWiST at checkout.(19:13) Every.io - For all of your incorporation, banking, payroll, benefits, accounting, taxes or other back-office administration needs, visit every.io.(30:18) Zite - Zite is the fastest way to build business software with AI. Go to zite.com/twist to get started. Follow TWiST: Twitter: https://twitter.com/TWiStartups YouTube: https://www.youtube.com/thisweekin Instagram: https://www.instagram.com/thisweekinstartups
This episode of The Dish on Health IT features Denny Brennan, Executive Director of the Massachusetts Health Data Consortium (MHDC), in conversation with host Tony Schueth, CEO of Point-of-Care Partners (POCP), and co-host Ross Martin, MD, Senior Consultant with POCP. Together, they examine how MHDC is translating national interoperability policy into practical, statewide action, specifically around the CMS-0057 rule.After brief introductions, the conversation quickly turns to MHDC's long history and why it matters. Founded in 1978, before the internet, MHDC guided Massachusetts through nearly every major health IT transition: HIPAA, Meaningful Use, ICD-10, and now interoperability and automation. Denny explains that this continuity has created something rare in healthcare: sustained trust across payers, providers, vendors, regulators, and associations. That trust, he notes, is what allows competitors to work through shared infrastructure problems that no single organization could solve on its own.From there, the discussion turns to why the MHDC community chose to coordinate and support members in their CMS-0057 compliance journey, versus just letting each member organization go it alone. Denny emphasizes that while healthcare is regulated federally, it functions locally. Each state has its own mix of insurers, hospital systems, rules, and market pressures. In Massachusetts, where long-standing relationships already exist, MHDC saw an opportunity to move faster, test real workflows, and generate lessons that could inform efforts far beyond the state.The discussion then moved to how work to improve prior authorization became such a high-priority focus. Denny describes how the process has grown into one of the most disruptive administrative burdens for clinicians. Rules vary by plan, criteria change frequently, and the information providers need is often hard to access in real time. The result is defensive behavior. Offices routinely submit prior authorizations “just in case,” often by fax or phone, simply to avoid denials and treatment delays. That inefficiency, he explains, ripples outward by slowing patient care, driving up providers' overhead, and requiring health plans to spend more time and resources processing and reviewing the required PA alongside the unneeded submissions.The financial impact quickly becomes apparent. Denny points to evidence showing that administrative costs consume a massive share of U.S. healthcare spending, with prior authorization playing a meaningful role. If automation is implemented through a neutral, nonprofit infrastructure, MHDC believes there is a much greater chance that savings will flow back into premiums and public program costs rather than being swallowed by inefficiency.Ross adds an important dose of realism. Prior authorization friction, he notes, is not always accidental. In some cases, operational complexity functions as a utilization control mechanism. That creates a built-in tension between access, cost containment, and patient experience, and helps explain why national reform has moved slowly despite widespread frustration.At that point, the conversation shifts from why this is broken to how MHDC is trying to fix it. Denny walks through MHDC's operating model: convene the full ecosystem early and often. In a recent deep-dive session, roughly 60 representatives from health plans, providers, and the state participated in a working session focused on what an automated prior authorization workflow could realistically look like. 
MHDC brought a draft framework to the table. The community pressure tested it and surfaced workflow conflicts, operational blind spots, and policy misalignments that no single organization could see on its own.That collaborative process, Denny explains, is the real engine behind adoption. When stakeholders help build the solution themselves, implementation becomes a shared commitment rather than a compliance exercise. It also reduces resistance later because decisions are not delivered top-down. They are constructed collectively.The discussion then turns to FHIR adoption and why, while real, progress has taken time. Denny traces the turning point back to the 21st Century Cures Act, which reframed patient access to health data as a legal right and categorized data blocking as a regulatory violation. That policy shift, combined with the growing maturity of API-based interoperability, created the conditions for real-time data exchange to finally move from theory to practice.Ross provides a historical perspective from the standards side. Earlier generations of health data standards were conceptually elegant but extremely difficult to implement consistently. FHIR changed that equation by aligning healthcare data exchange with the same API-driven architecture that supports the modern web. He points to accelerating real-world adoption, particularly from large EHR platforms, as evidence that FHIR has entered a phase of broad, practical deployment.Although pharmacy prior authorization falls outside the formal scope of CMS 0057, Denny makes clear that MHDC could not ignore it. For many physicians, especially in oncology, dermatology, and primary care, PA for prescriptions is far more frequent and far more disruptive than PAs for medical services. If MHDC solved only one side of the problem, much of the daily burden for clinicians would remain unchanged.Pharmacy prior authorization, however, introduces a new level of complexity. PBMs, pharmacists, prescribing systems, payers, and patients are all involved, often across fragmented workflows. Denny explains that the challenge looks less like a pure technology gap and more like an orchestration problem. It is about getting the right information to the right party at the right moment across multiple handoffs.Ross shares insights from the pharmacy PA research work conducted with MHDC and POCP. One of the most striking findings was the massive year-end renewal surge that hits providers every benefit cycle as authorizations tied to prior coverage suddenly expire. He also reflects on a recent national electronic prior authorization roundtable, where deep stakeholder discussion ultimately led most participants to conclude that today's technology alone still is not sufficient to fully solve pharmacy PA. The tools are improving, but the problem remains deeply multi-layered.As the episode winds down, the tone shifts toward practical calls to action.Denny challenges the industry to separate where competition belongs from where collaboration is essential. Contract negotiations may be adversarial by nature, he notes, but interoperability initiatives cannot succeed under the same mindset. Real progress depends on bringing collaboratively minded people into the room. These are people willing to solve shared infrastructure problems even when their organizations compete elsewhere.Ross builds on that message with a longer-term challenge: sustained participation in standards development. Organizations cannot sit back and hope others shape the future on their behalf. 
Active involvement in national standards organizations is critical. This is not for immediate quarterly returns, but to influence the systems everyone will be required to use in the years ahead.The episode closes with a clear takeaway. MHDC did not wait for perfect conditions. It moved when the pieces were good enough, tested real workflows with real stakeholders, adjusted in the open, and began sharing lessons nationally. In an industry often slowed by fragmentation and risk aversion, this conversation offers a grounded look at what forward motion actually looks like when collaboration, policy, and technology finally align.You can find this and other episodes of The Dish on Health IT wherever you get your podcasts, including Spotify and Healthcare Now Radio. If you found this conversation valuable, share it with a colleague and be sure to subscribe so you never miss an episode. Have an idea for a topic you would like us to cover in future episodes? Fill out the form and tell us about it. Until next time, Health IT is a dish best served hot.
Viren Tellis, co-founder and CEO of Uthana, joins the show to discuss how generative AI is reshaping motion creation for games, VFX, and interactive worlds. With over 15 years of experience leading product and operations teams at AppNexus and Hedado, Viren explains how Uthana's technology can generate animation from text, video, or even in real time, giving creators instant, controllable motion without traditional mocap setups. He breaks down how developers use Uthana's SDKs and APIs in Unreal, Unity, and web platforms, what defines high-quality motion, and how foundation models for human movement could power the next generation of AI-driven characters. Subscribe to XR AI Spotlight weekly newsletter
The MCP standard gave rise to dreams of interconnected agents and nightmares of what those interconnected agents would do with unfettered access to APIs, data, and local systems. Aaron Parecki explains how OAuth's new Client ID Metadata Documents spec provides more security for MCPs and the reasons why the behavior and design of MCPs required a new spec like this. Segment resources: https://aaronparecki.com/2025/11/25/1/mcp-authorization-spec-update https://www.ietf.org/archive/id/draft-ietf-oauth-client-id-metadata-document-00.html https://oauth.net/cross-app-access/ https://oauth.net/2/oauth-best-practice/ Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-360
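As a rough sketch of the mechanism discussed in the segment, and assuming the behavior described in the draft (the client_id is an HTTPS URL from which the authorization server fetches a JSON metadata document in lieu of pre-registration), a simplified TypeScript version of that resolution step could look like the following; field names follow the dynamic-client-registration vocabulary and the validation rules are deliberately reduced.

```typescript
// Rough, simplified sketch: resolve a client_id that is itself an HTTPS URL
// by fetching the JSON metadata document hosted at that URL and using it as
// the client's registration data.

interface ClientMetadata {
  client_id: string;
  client_name?: string;
  redirect_uris: string[];
}

async function resolveClient(clientId: string): Promise<ClientMetadata> {
  const url = new URL(clientId);
  if (url.protocol !== "https:") {
    throw new Error("client_id must be an https: URL");
  }

  const res = await fetch(url, { headers: { accept: "application/json" } });
  if (!res.ok) throw new Error(`metadata fetch failed: ${res.status}`);

  const metadata = (await res.json()) as ClientMetadata;

  // The document should identify itself as the same client_id it was
  // fetched from; otherwise reject it.
  if (metadata.client_id !== clientId) {
    throw new Error("client_id in document does not match its URL");
  }
  if (!Array.isArray(metadata.redirect_uris) || metadata.redirect_uris.length === 0) {
    throw new Error("metadata must declare redirect_uris");
  }
  return metadata;
}

// Later, during the authorization request, redirect_uri is checked against
// the fetched metadata exactly as it would be for a pre-registered client.
```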
Take a Network Break! Our Red Alert calls out a dangerous vulnerability in the popular open-source React library. On the news front, HPE decides on a “both and” strategy for its two wireless portfolios and rolls out an option to let customers pick and choose among cross-platform features in Mist and Aruba Networking Central through... Read more »
Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM
UTP392 It's not the V16 beacon, it's your data. Welcome to Buscadores de la Verdad, this time broadcasting live from the UTP Ramón Valero channel here on Telegram. You know we don't like covering the news-cycle topics we consider are there to distract us from what really matters, but on this occasion I think it is necessary to clarify a few points about the imposition of the new V16 beacon. At my parents' house we used to receive the free magazine of the Dirección General de Tráfico (DGT), now known as Revista Tráfico y Seguridad Vial (formerly Revista Tráfico), which ran in print from 1985 to 2006, when it switched to online delivery through a renewed subscription. That cost-saving move was almost one of the first things undertaken by the current head of the DGT, Pere Navarro, in his first stint from 2004 to 2012. Pere Navarro drove one of the most striking and controversial road-safety advertising campaigns in Spain's history, known for extremely dramatic, raw television ads such as "La muerte no avisa" and "Víctimas 3D", or spots showing real crashes reconstructed with great realism alongside harrowing testimonies from victims and their families. This "shock advertising" strategy, inspired by Australian and British models, aimed to create a deep emotional impact in order to change behavior. The results were spectacular: in 2003, before his arrival, Spain recorded 5,399 road deaths; by the end of his term, in 2011, the figure had fallen to 1,867, a 65% reduction in just eight years and the largest drop ever recorded in such a short time. On top of that campaign came measures such as the points-based driving licence (2006), tougher penalties and average-speed cameras, making 2004-2012 the period with the steepest decline in road deaths in Spain. From 2014, barely two years after Pere Navarro's departure, road deaths in Spain broke the downward trend that had held since 2003 and began climbing steadily: from the 1,688 deaths recorded in 2013 (the historic low) to 1,830 in 2019 and, after the pandemic pause, 1,746 in 2023 and 1,795 in 2024 (provisional figures as of 31 December). This increase has definitively pushed the country off the roadmap set in the 2011-2020 Road Safety Strategy and away from the forecasts the DGT presented in 2006, when, riding the momentum of the points licence and the shock campaigns, it estimated that Spain would reach fewer than 1,000 deaths per year by 2020 and sit below the most demanding European average. In 2025 the real figure practically doubles that target, and Spain has gone from being one of the countries cutting casualties fastest to sitting in the lower-middle zone of the EU, with a deaths-per-million rate that has not improved for a decade and which in 2024 (38 deaths per million) is far from leaders such as Sweden (22) or Norway (26). That is why in 2018 the superstar was hired again, to see whether something more could be squeezed out.
The thing is, in a country running a deficit the roads keep deteriorating and maintenance grows ever scarcer, the vehicle fleet ages because people cannot afford to renew it, and the number of drivers from third-world countries keeps growing, while the road-surveillance technology operated by the DGT and by the autonomous communities with devolved powers is among the densest and most advanced in Europe. The following systems are currently in operation. Fixed speed cameras: more than 1,400 visible ones, the classic gantry- or post-mounted devices, plus Veloláser units that the DGT rotates among empty cabinets so nobody knows exactly where they are, as well as around 80 "low-mounted" or hidden ones; the DGT has a plan to install 122 new speed-control points over the course of 2025. Average-speed (section) cameras: 92 sections in operation in 2025 with around 232 cameras measuring average speed between two points, covering roughly 1,200 km of high-capacity roads. Mobile radars: about 700 devices (mostly latest-generation Veloláser) used by the Guardia Civil and by regional and municipal police; they can be mounted on a tripod, on the guardrail, inside an unmarked car, on unmarked motorbikes and trucks, or even operate from a moving car without stopping. The total number of radars in Spain (all types, including regional and municipal ones) comes to 3,395 devices according to a recent 2025 study. Seatbelt and mobile-phone cameras: rolled out progressively since 2021; in 2025 there are more than 400 certified cameras that simultaneously detect failure to wear a seatbelt and handling a phone, working day and night and already issuing automatic fines. Number-plate recognition (OCR) cameras: more than 1,200 installed on gantries, posts and patrol cars, used to check vehicles without a valid inspection (ITV) or insurance, detect stolen or judicially flagged cars, monitor access to the cities' Low Emission Zones (ZBE), track fleets and spot repeat offences. Fixed 360º cameras: an estimated 1,492 or more fixed traffic cameras spread over some 150 roads of the national and regional network, many of them with PTZ (pan-tilt-zoom) technology allowing a 360-degree panoramic view and high-resolution moving images, both for officers and for the public through tools such as the DGT's Infocar. To this we must add the cameras in Catalonia and the Basque Country, which run their own traffic systems, and those operated by private motorway concession holders. Cameras at tolls and "free-flow" gantries: since physical toll booths were removed on many motorways (AP-7, AP-4, etc.), hundreds of gantries with 3D cameras have been installed that identify the front and rear plates and measure instantaneous speed at the same time. Wrong-way driver detectors: since 2022 more than 120 sensors have been installed on dual-carriageway motorways (mainly in Catalonia, Valencia, Andalusia and Madrid); these cameras and LIDAR sensors detect vehicles driving against the flow in under 15 seconds and trigger light panels with a "KAMIKAZE" warning plus alerts to the Guardia Civil. In 2024-2025 the rollout was extended to Galicia, Castilla y León and Aragón. Drones: the DGT has 39 Pegasus drones with 4K cameras and 180x zoom that keep watch especially during special operations, on secondary roads and at mass events (Easter week, summer, long weekends).
Helicopters: 9 in service and 2 more planned, equipped with the Pegasus radar, able to monitor up to 8 lanes at once and issue fines while flying at 300-400 km/h. All this technological arsenal meant that in 2024 more than 5.5 million automated fines were issued (92% of the total), but it has also created the feeling that, despite massive surveillance, deaths have not fallen for ten years, which has fueled debate over whether the purely punitive, technology-driven approach has hit a ceiling and needs to be complemented with other measures (education, safer road design, renewal of the vehicle fleet, and so on). And on top of this monstrous surveillance apparatus now comes a sad little light to put on the roof, with the excuse of saving 25 lives from being run over on the roads, in the DGT's own words: "Replacing the triangles is justified on road-safety grounds, considering the risk of being run over involved in setting out the triangles, since it requires walking at least 100 metres along the carriageway with no guarantee that they will stay in place once set out." "With the aim of advancing road safety and reducing accidents, the V16 device is born." According to director general Pere Navarro: "The introduction of the connected V16 is a leap forward and positions us as a European benchmark in road safety. It allows you to signal without leaving the vehicle, avoids unnecessary risks and provides vital information to other road users." "The objective of introducing this new pre-signalling device in vehicles is to improve road safety, trying to reduce traffic accidents, above all those caused by vehicles immobilized or parked on the hard shoulder." Let me read you verbatim the subsections of article 130 of Spain's General Traffic Regulations, published in the BOE through Royal Decree 159/2021 of 26 February: Article 130. Signalling and immobilization of vehicles. 1. Drivers must signal the dangerous situation created by their vehicle's breakdown or by the accident suffered, taking the necessary measures for their own safety, that of their passengers and that of other road users. 2. If the vehicle or its load obstructs the carriageway, they must be signalled and removed as soon as possible. Until removal, the vehicle must be parked in accordance with article 91.2. 3. In the event of an accident or breakdown, as a general rule the occupants must leave the vehicle and move to a safe place off the carriageway, on the side away from traffic, without entering the traffic lanes or the hard shoulder. If no safe place exists, the occupants must remain inside the vehicle with their seatbelts fastened. 4. While steps are being taken to remove the vehicle from the road, the regulation pre-signalling hazard device shall be used. 5. The accident report shall not be drawn up on the carriageway; it must be done in a safe place off the road. Juan Carlos Toribio, a former Guardia Civil officer and representative of the International Union for the Defence of Motorcyclists, explains clearly in a video that we are required to signal when we obstruct the carriageway, that is, the part of the road where cars drive, and not if we manage to stop on the hard shoulder.
Unfortunately, the regulation settles that in Article 91, Immobilization of the vehicle in cases of emergency or danger, whose section 2 reads: "When, due to an emergency, the vehicle has to remain stopped or parked on the carriageway or on the hard shoulder, the driver shall be obliged to take the necessary measures to ensure it is perfectly visible and to remove it from the road as soon as possible." Returning to the fatal accidents that brought us here: there is no dedicated report confirming how many of those incidents were directly caused by placing or removing the triangles, nor how many involved private drivers versus professional road workers (road-maintenance crews, tow-truck operators or emergency services, who represent a significant subgroup of the pedestrians exposed on hard shoulders according to the National Registry of Traffic Accident Victims). The DGT itself admits in its communications that "there are no specific studies determining how many of those victims were killed while placing the triangles", and independent experts, for example in 2025 analyses, question the precision of the figure of "25", calling it approximate rather than exact and suggesting it may inflate the risk in order to justify the V-16 beacon. Instead, the justification rests on aggregate reports such as Instruction MOV-2023/15, which highlights the "notable increase in the risk of being run over" on motorways and dual carriageways due to walking along the hard shoulder, without any breakdown by occupation, and on the Road Safety Strategy 2030, which groups these figures into broad categories of "vulnerable pedestrians on interurban roads" without distinguishing professional profiles. Spain's Road Safety Strategy 2030, approved in December 2021 by the Council of Ministers, is officially presented as the national contribution to meeting Sustainable Development Goal target 3.6 of the United Nations 2030 Agenda, which states verbatim: "By 2030, halve the number of global deaths and injuries from road traffic accidents." The DGT acknowledges this in its official document: "This Strategy is aligned with the 2030 Agenda for Sustainable Development and, specifically, with target 3.6", and it adopts the same time horizon (2030) and the same quantitative objective: a 50% reduction in deaths and serious injuries compared with the 2019 baseline (1,755 deaths and 8,558 seriously injured requiring hospitalization). It also explicitly incorporates the principles of the 2030 Agenda (Vision Zero deaths and serious injuries, the Safe System approach, data-driven policy, multi-level governance and civil-society participation) and fits into the European framework of the European Commission's Road Safety Action Plan 2021-2030, which likewise takes UN target 3.6 as its reference. In short, the Spanish Strategy is not just a national traffic plan; it is the instrument with which Spain intends to formally meet the international commitment it took on when it signed the 2030 Agenda in September 2015.
We live in a country where political schizophrenia borders on caricature: only five months ago, on 16 June 2025, Vox presented and defended in Congress a non-binding motion (Proposición No de Ley) titled "improving the safety of workers providing services on the road" and asked to accelerate the mandatory adoption of the connected V-16 beacon (the same one it now calls "a disguised new tax"), securing its approval with the votes of the PP, against the votes of the PSOE and all its partners, and with Junts abstaining. Its traffic spokesman at the time, Francisco José Alcaraz, the ex-hairdresser turned member of parliament, went so far as to call it "innovative technology that will save lives" and demanded that the Government stop delaying its definitive rollout. Today the same party is calling for the immediate suspension of the measure it itself pushed through, proving that in Spain political coherence has less range than a warning triangle in the middle of a motorway. In 2026, when this new control gadget becomes mandatory, I will have been driving on the roads of Spain and Europe for 40 years: four decades in which I have seen a great many things over more than a million kilometres, at an average of 25,000 km a year. I have had to use the passive signalling that triangles provide many times, and I have seen how effective they are at night, in full sun, on bends, over crests and in all kinds of weather. Pere Navarro, by contrast, will not have driven a single kilometre, since he has never held a driving licence and has always had a personal chauffeur, as befits the star politician he has been. Weather conditions or lack of coverage will, on plenty of occasions, stop this flan Dhul (a crème caramel, for the shape) with lights from being of any use. There are many roads in Spain, including stretches of motorway, with no mobile coverage, so the geolocation will not work there. And, as AlainCreaciones rightly points out, this gadget is not waterproof. The plastic casing is of very low quality, with snap-fit tabs and no screws, which means the beacon offers only the minimum IP54 protection required by the BOE, although some models exist with IP66, which does guarantee protection against dust and heavy rain. In rain, those with the lower IP rating are all but guaranteed to suffer electronic failure. Not to mention battery life: according to the technical specifications for V-16 pre-signalling devices set by the Dirección General de Tráfico (DGT) in its type-approval rules (Instruction MOV-2023/15 and UNE-EN 12352 certification requirements), the minimum battery life required from manufacturers is 18 months at rest, whether they use non-rechargeable alkaline cells or rechargeable lithium batteries. This specification guarantees that the device remains operational, unused, for at least that period from manufacture or from its last full charge, complemented by a minimum of 30 minutes of continuous operation once activated, emitting a high-intensity flashing light. Among many other data points, the manufacturer receives the state of the beacon's batteries, and I wonder what for, which raises the suspicion that the software may do other things besides simply marking the location of the accident. Once the batteries are dead, the V16 is, as Rose Saint Olaf (ManzanaDori) puts it, a flan Dhul on the roof of the car.
And that is the best-case scenario, because a lithium battery left in the sun at the height of a Spanish summer can end in tragedy, so better the so-called "old-fashioned" cells, whose worst outcome is corroding and ruining the electronics. I can assure you that in my 40 years behind the wheel I have, on occasion, needed to signal my breakdown on the road for quite a few hours. Triangles, as I said earlier, provide safety through passive, reflective elements that need no external power source and can be seen from considerably further away than this battery-powered Dhul flan. So: if the DGT has not shown, with disaggregated public data, that those 25 pedestrians run over each year really were struck while placing triangles (and not as a result of other factors such as repairs, wheel changes or workers on the road); if the connected V16 beacon is no more visible than the non-connected versions already permitted since 2021, some of them, like the V-2 rotating lights, plugged into the vehicle's cigarette lighter; and if its main advantage (geolocation) only becomes mandatory from 2026 and is not yet fully operational across navigation systems and road panels... why impose, so drastically and so urgently, a measure that forces 30 million drivers to spend between 25 and 60 euros on a new device, that provokes massive rejection because it feels like a hidden tax, that has been communicated late and confusingly, and that has been fuelled by hoaxes (tracking chips, automatic fines, business for well-connected companies, and so on) which the DGT itself has not debunked with the clarity and lead time required? The question is not whether the V16 is useful; it is why it has become a symbol of authoritarian, opaque management that is disconnected from the reality of the public. And this is where we should suspect that the DGT is simply working for supranational bodies, the ones truly behind the implementation of the 2030 Agenda, as I said earlier. That said, thanks to this technology the DGT would gain a benefit that is hidden from plain sight. Let us analyse the data that allow us to state, beyond any doubt, what is concealed here. It is true that some connected V-16 beacons (not all) include or recommend installing a manufacturer-specific mobile app for additional features, such as confirmation that the DGT has received the alert, automatic notification of emergency contacts via WhatsApp, fleet management, or checking the state of the device. In those cases the app may indeed request the user's personal data (name, email, phone number) and vehicle data (number plate, type, chassis number or insurance details) in order to link the beacon to a specific profile and personalise the service, which also eases integration with platforms such as DGT 3.0 or insurers' apps. Examples include the SOS Alert app from FlashLED/Telefónica Tech, which asks for this data to give you "all your vehicle's information in the APP", and apps from brands such as SOOS or LEDONE, where the number plate is registered in order to associate the geolocation in an emergency.
However, none of this is a DGT requirement, either for homologation or for basic use of the beacon: the regulation (Instruction MOV-2023/15) establishes that the device works autonomously with its built-in GPS chip and SIM, transmitting only an anonymous location (no number plate, no identity) to the DGT 3.0 platform when it is activated, with no apps, prior registration or handover of data to the Administration. The Spanish Data Protection Agency (AEPD) confirms this explicitly: "No application needs to be installed to send the location of the stricken vehicle", and "the beacon does not transmit any kind of personal data or data related to the vehicle" beyond each beacon's anonymous technical identifier. The DGT stresses that manufacturers' apps are optional and that the buyer "does not have to provide any data at all", since the process is completely anonymous. In other words, the beacon has a unique ID that identifies it, and data could conceivably be attached to that ID, rather like the PNR number each of us has been assigned without even being aware of it. The recent cyberattack on the Directorate-General for Traffic (DGT), detected on 31 May 2024, exposed the personal and vehicle data of more than 34 million Spanish drivers, including ID numbers, addresses, number plates and insurance details, which are now circulating on the dark web for sale. The incident highlights the growing vulnerability of public systems to cyberthreats and raises concerns about how that data could be cross-referenced with other state registers for a more exhaustive tracking of citizens' movements. For example, once the V16 beacons become mandatory (devices that transmit a unique ID and a geolocation in the event of a breakdown), there is the possibility that they could be combined with the leaked DGT information, enabling detailed, real-time mapping of vehicle journeys. Add to this the fact that the State already has us on lists through the Passenger Name Record (PNR), introduced after 9/11, which collects data on every flight entering, leaving or stopping over in Spain, on long-distance train journeys and on hotel stays for security purposes, covering identities, itineraries and travel preferences. You will find more information in the links published with the description of this podcast on Ivoox. But let us continue. According to the Directorate-General for Traffic (DGT), on its official page about V16 pre-signalling devices, the beacon must be kept as follows to avoid fines: "We must carry it in the glove compartment of our vehicle." This means that, from 1 January 2026, when it becomes mandatory, any driver will face an 80-euro fine (a minor offence) for not having a homologated beacon inside the vehicle, accessible, ready to use and with its battery or cells in good condition (a minimum shelf life of 18 months). As for "activated", the DGT clarifies, verbatim: "the moment we have to signal that our vehicle is immobilised on the road, all we have to do is switch on the beacon and place it on the outside of the vehicle. That is why it is so important that you keep it close at hand and that you always carry it charged, whether it runs on batteries or on cells, depending on the model of beacon you have bought." Fine.
The beacon has a single button, a push-button that immediately switches on the LED lights and activates the beacon's geolocation 100 seconds after it is pressed. Press it again and it switches off and supposedly stops sending our location. But this has been shown to be false: tests have demonstrated that the eSIM router it carries keeps emitting data while switched off, as long as the batteries are in place. This type of data transmission was chosen precisely because it makes it impossible to disable the device by removing the SIM card, which is embedded in that electronic module. Teardowns of the beacons have shown that they contain only a software controller, a GPS antenna and this communications router. The link is full duplex, allowing data both out and in, and the beacon's circuit board also includes a port for manually reading and writing data and updating the firmware. All the software is encrypted inside the controller chip and, as far as I know, no hacker has yet managed to establish exactly what it does, but we should suspect that it could do rather more than anonymously report our location after the button is pressed. An article on bandaancha.eu is titled "The domain the V-16 beacons send data to does not belong to the DGT, but to a mysterious private individual". It reads: "The more than 30 million V-16 beacons that vehicle owners will have to buy to comply with the regulation coming into force on 1 January are not programmed to call the DGT's systems directly when they are activated to signal a stopped vehicle. The DGT Resolution published in the BOE in November 2021, which defines the technical operation of the beacons, establishes two protocols, Protocol A and Protocol B. Protocol A contains the set of fields that manufacturers are required to have their beacons transmit. Among those fields are a unique beacon identifier, the IMEI of the modem that connects to the mobile network, the battery level and, of course, the geographic coordinates that allow the DGT to place the vehicle on the map. But this information does not reach the DGT's servers. The rule obliges manufacturers to maintain a cloud service that processes all the requests arriving from beacons of their brand as UDP traffic over IP. The server is reachable through a private APN built into the beacon's eSIM, which has no internet access. This component, critical to the operation of all of a manufacturer's beacons, must be kept running for the 12 years over which connectivity is guaranteed. If a manufacturer's service goes down, whether through technical failure or because the company closes (something that could happen more easily with the brands created ad hoc to cash in on the beacon sales boom), thousands of that brand's beacons would be left out of action. That is why the technical specifications of the tender that awarded the creation of DGT 3.0 to a group of companies led by Vodafone provided for the possibility of setting up backup systems for manufacturers. The manufacturer's servers are responsible, in a second step, for forwarding the data of an ongoing incident to the DGT's servers. They do so using Protocol B, which today contains a reduced subset of the data originally sent by the beacon to its manufacturer.
Changing the fields of Protocol A is practically unfeasible, since it would require manually updating the firmware of the beacons. It is far easier for the DGT, by publishing a new Resolution in the BOE, to modify Protocol B, expanding its fields, if it so wishes, with data the manufacturers already receive. The entry-point domain for DGT 3.0 is registered to a private individual. The DGT invites device manufacturers and app developers to connect to its DGT 3.0 cloud, publishing on its website the GitHub repositories containing the details for accessing the service. In the case of the V-16s, the manufacturers' cloud must send the events of active beacons in JSON format to one specific URL: https://pre.cmobility30.es/v16/ Although the 'pre' subdomain probably indicates the version of the service set up for testing before moving to production, the domain cmobility30.es appears in the documentation of all the DGT 3.0 APIs and is therefore a critical element for the operation of the DGT 3.0 platform. Yet the DGT does not hold the ownership of this domain. A whois lookup for cmobility30.es in the Red.es registry shows that the owner is neither the DGT nor any other government body, nor the UTE (temporary joint venture) appointed to operate DGT 3.0: its holder is a mysterious private individual." In other words, the entire data-collection architecture of a whole country passes through a server hosted on an internet domain registered to someone called Ivan Vega. I imagine it would be fairly easy to take down in a hacker attack. We have seen several interesting things: this decision comes from levels even above Europe, so it will be very hard to overturn in court, and the aim is something more than simply marking the spot where an accident has occurred, which the person involved normally does anyway with their own phone, since the beacon does not report the location to 112, for example; we still have to do that ourselves. The beacon looks more like a Trojan horse to get us used to being constantly geolocated in our cars in the future. Which, of course, already happens from the moment we started using smartphones; that is how naive we really are. The best way to stop its rollout is not to buy these beacons and risk that 80-euro fine. In my experience on the road, the Guardia Civil never once asked me to show them my triangles, and I know that many officers take a dim view of moving from passive preventive measures to a light that needs external power and that in many cases will stop working within a few minutes. Given that these beacons have no real off switch, no SIM card that can be removed to stop them sending data, and that we are required to keep the batteries in, I recommend isolating them electromagnetically so that they cannot communicate our GPS position while we do not need them to signal an accident. There are two ways: buy a Faraday-cage pouch, which will cost about as much as the beacon itself, or wrap it in three or four layers of aluminium foil; the insulated bags supermarkets sell for carrying chilled food would also do the job. Another move that would drive them mad would be to swap our beacons with other drivers, since officially we are told the data is anonymous even though every beacon carries a unique ID number.
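Going back to the two-step relay described in the bandaancha article quoted above, the sketch below may help picture it. Only the general flow (a 'Protocol A' event sent as UDP from the beacon to the manufacturer's cloud over a private APN, then a reduced 'Protocol B' event forwarded as JSON to https://pre.cmobility30.es/v16/) and the field list mentioned in the article (unique beacon identifier, modem IMEI, battery level, coordinates) come from that source; every field name, value and endpoint in the code is a hypothetical illustration, not the real DGT 3.0 schema.

```python
# Hypothetical sketch of the two-step relay described above: beacon -> manufacturer
# cloud (UDP over a private APN) -> DGT 3.0 (JSON over HTTPS). Field names and values
# are invented for illustration; only the general flow and the URL come from the article.
import json
import socket
import urllib.request

# Step 1: the beacon sends a "Protocol A"-style event to its manufacturer's cloud.
protocol_a_event = {
    "beacon_id": "V16-0000001",       # unique beacon identifier (hypothetical format)
    "imei": "356938035643809",        # IMEI of the onboard modem (example value)
    "battery_pct": 87,                # battery level
    "lat": 40.4168,                   # geographic coordinates
    "lon": -3.7038,
}
payload = json.dumps(protocol_a_event).encode()
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# 127.0.0.1:5016 stands in for the manufacturer's endpoint on the private APN.
sock.sendto(payload, ("127.0.0.1", 5016))

# Step 2: the manufacturer's cloud forwards a reduced "Protocol B" event to DGT 3.0.
protocol_b_event = {
    "beacon_id": protocol_a_event["beacon_id"],
    "lat": protocol_a_event["lat"],
    "lon": protocol_a_event["lon"],
    "status": "active",               # hypothetical field
}
req = urllib.request.Request(
    "https://pre.cmobility30.es/v16/",            # URL cited in the bandaancha article
    data=json.dumps(protocol_b_event).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # not executed here; the real service requires credentials
```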
For now it is not clear whether switching on a beacon away from a road with traffic is an offence, so saturating the networks by provoking mass switch-on events would also be a good form of protest. Given the data that Protocol B ultimately transmits to the DGT, we cannot be sure that the real aim today is to know our position and speed on the road. But, as I have said, it is very likely that in the future this data will be used to start deploying more speed cameras and checkpoints in the areas where speed limits are being broken; everything points that way. Data is today's gold, all the more so when it comes free. The current director general of the DGT, Pere Navarro Olivella, was mayor of Terrassa between 2000 and 2007 and a PSC leader from 2011 to 2014. And, of course, like every "good politician", he was "investigated" for an alleged influence-peddling offence in the so-called Mercurio case. Judge Beatriz Faura, of Investigating Court No. 2 in Sabadell, summoned him to testify on 24 February 2016 about the help he gave a businessman friend, Nicola Pedrazzoli, to obtain a licence for a DTT channel. The Mercurio case has had wide-ranging ramifications, with charges of bribery, malfeasance and money laundering, although Pere Navarro has remained untouched by all of it. In 2011, newly reinstated as director general of Traffic after a brief political interlude, Navarro decided to move his office and his entire unit from the DGT building at 44 Calle José Abascal to number 28 on the same street, exactly the same building he had left in 2007 to move to number 44. The official argument was to "be closer to the organisation's secretary general" and improve coordination, a justification many found ridiculous: the two buildings are barely 200 metres apart and were already connected internally. The move was seen as a personal whim with no real purpose, especially at the height of the economic crisis, with Spain subjected to social spending cuts and 21% unemployment. The cost of the operation came close to a million euros (according to the information published by La Razón and never officially denied): a full refurbishment of the office, luxury furniture, new filing cabinets, the relocation of all the staff of the National Road Safety Observatory and the complete fitting-out of the floor. At a time when the Government was demanding sacrifices from citizens and cutting basic benefits, spending close to a million euros on changing buildings within the same street in order to "be more comfortable" became one of the clearest symbols of the wastefulness of certain senior Socialist officials and fed, for years, Navarro's image as a manager insensitive to the country's situation. But we are not going to end up sunk in pessimism; for a change, here is some good news. Aena, the state operator that runs our airports, has had to deactivate biometric boarding after receiving a multimillion-euro fine. One news report reads: "The Spanish Data Protection Agency, AEPD, has fined the airport operator Aena 10 million euros and ordered the immediate closure of all biometric boarding gates. The reason for the sanction is that Aena did not carry out the mandatory data protection impact assessment before introducing the technology that recognises passengers by their physical appearance.
Following complaints from travellers, the AEPD opened an investigation, which led it to sanction Aena for failing to assess the effects that biometric recognition can have on data protection." Unfortunately, that same agency gave its approval, this very 20 November, to the V16 beacons provided that, and I quote verbatim: "these devices are intended exclusively to make the stricken vehicle visible and to send the location of an incident when they are activated, it being expressly prohibited for them to incorporate additional functionality." In other words, according to them, at the slightest sign that they do anything more, the agency will take them off the market. It has, however, said not a word about the fact that the domain through which the data of millions of Spaniards will flow is in the hands of a man named Ivan Vega. Let us prepare for the worst and hope for the best. I urge you not to buy this little light and to disobey en masse a dictatorial measure such as this one. For now, Mr Pere Navarro has already said he will give us a grace period. In 2020, while we all sat hypnotised in front of the television and applauded at eight o'clock, the Government quietly launched the largest mass-tracking experiment ever seen in Spain: a secret project by the INE, the DGT and the big telecoms operators (Movistar, Vodafone, Orange) to geolocate the country's 47 million mobile phones in real time with an accuracy of a few metres. Without asking anyone's permission, they switched on the mass extraction of data from cell towers and GPS signals, anonymised... or so we were told. Every movement, every trip to the supermarket, every visit to the village was logged and cross-referenced with demographic databases to produce colour-coded maps showing exactly who was obeying the lockdown and who was not. Officially it was "to study mobility during the pandemic"; in reality it was the perfect dress rehearsal for the system DGT 3.0 uses today: the same infrastructure that tomorrow will receive the signal from your connected V16 beacon when you break down... and that, coincidentally, already knows perfectly well where you go every day without you having done anything at all. The rabbit was already in the hat five years ago; now all that is missing is for you to switch on the little light so they know exactly where you are stopped. A coincidence, of course. ………………………………………………………………………………………. Host of the programme UTP Ramón Valero @tecn_preocupado Telegram channel @UnTecnicoPreocupado Un técnico Preocupado un FP2 IVOOX UTP http://cutt.ly/dzhhGrf BLOG http://cutt.ly/dzhh2LX Help me through my crowdfunding here https://cutt.ly/W0DsPVq …. Participants ………………………………………………………………………………………. Links cited in the podcast: HELP BY BUYING MY BOOKS https://tecnicopreocupado.com/2024/11/16/ayuda-a-traves-de-la-compra-de-mis-libros/ Angel Gaitan's beacon comes directly from Guardia Civil officers https://x.com/gisbert_ruben/status/1994144991539822895 The beacon sends data, but not directly to the DGT https://x.com/bricotienda/status/1993604138664345755 The super-bright light from a battery https://x.com/Anonymous_TA/status/1993197306276200712 I TOOK APART the V16 BEACON: what is it really hiding?
https://www.youtube.com/watch?v=qb1zhS9M0ks&t=878s The V16 is not waterproof https://x.com/AlainCreaciones/status/1992536649189015876 The domain the V-16 beacons send data to does not belong to the DGT, but to a mysterious private individual https://bandaancha.eu/articulos/dominio-balizas-v-16-envian-datos-no-11583 The V16 beacon pushed by VOX https://x.com/Davidmartin341/status/1992750051869814952 VOX demands the immediate suspension of the imposition of the V16 beacon, which hides a new tax on Spaniards https://gaceta.es/espana/vox-exige-la-paralizacion-inmediata-de-la-imposicion-de-la-baliza-v16-que-esconde-un-nuevo-impuesto-contra-los-espanoles-20251126-1305/ Where do the V16 beacons send their data? It is not to the DGT! https://www.youtube.com/watch?v=qx1tVTHLM48&t=3s Mobility data during COVID https://www.ine.es/covid/covid_movilidad.htm Spanish roads now have 3,395 speed cameras, the biggest increase since 2021 https://www.coches.net/noticias/numero-radares-carreteras-espana THIS IS RIDICULOUS: do not buy your V16 beacon without watching this! "The DGT constantly breaks the law" https://www.youtube.com/watch?v=17KZ6WLGPmQ WHAT YOU ARE NOT SUPPOSED TO KNOW ABOUT THE PNR https://tecnicopreocupado.com/2019/03/14/lo-que-no-deberias-saber-sobre-el-pnr/ What data of yours the DGT hackers hold after the leak affecting 34.5 million users https://es.euronews.com/my-europe/2024/06/01/que-datos-tuyos-tienen-los-hackers-de-la-dgt-tras-la-filtracion-de-345-millones-de-usuario Aena deactivates biometric boarding after receiving a multimillion-euro fine https://www.tourinews.es/resumen-de-prensa/notas-de-prensa-destinos-turismo/aena-desactiva-embarque-biometrico-recibir-sancion-millonaria_4489851_102.html Informative note on the connected V16 beacon, the device vehicles must carry from January 2026 https://www.aepd.es/prensa-y-comunicacion/notas-de-prensa/nota-informativa-sobre-baliza-v16-conectada ………………………………………………………………………………………. Music used in this podcast: Opening theme Heros Epilogue Sr.J - Transhumanismo https://youtu.be/VZhk7Wlh8ks?si=GRweMvokOtSwy57y
Marek Kozlowski, Head of the AI Lab at Poland's National Information Processing Institute, discusses project PLLuM (Polish Large Language Models). PSA for AI builders: Interested in alignment, governance, or AI safety? Learn more about the MATS Summer 2026 Fellowship and submit your name to be notified when applications open: https://matsprogram.org/s26-tcr. He shares how countries like Poland can achieve AI sovereignty by training small, locally-adapted models for specific languages and cultures, ensuring control, privacy, and cost advantages. The conversation delves into challenges like frontier models' English bias, EU regulations, and technical strategies like "Language Adaptation" on base models. Discover how transparently created, locally-controlled AI offers a viable path for nations to maintain their technological destiny. LINKS: National Information Processing Institute Show notes source with images PLLuM open chat service Sponsors: Google AI Studio: Google AI Studio features a revamped coding experience to turn your ideas into reality faster than ever. Describe your app and Gemini will automatically wire up the right models and APIs for you at https://ai.studio/build Agents of Scale: Agents of Scale is a podcast from Zapier CEO Wade Foster, featuring conversations with C-suite leaders who are leading AI transformation. Subscribe to the show wherever you get your podcasts Framer: Framer is the all-in-one platform that unifies design, content management, and publishing on a single canvas, now enhanced with powerful AI features. Start creating for free and get a free month of Framer Pro with code COGNITIVE at https://framer.com/design Tasklet: Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai Shopify: Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. Start your $1/month trial today at https://shopify.com/cognitive PRODUCED BY: https://aipodcast.ing CHAPTERS: (00:00) Sponsor: Google AI Studio (00:31) About the Episode (03:17) Sovereign AI in Poland (04:41) The Case for Localization (13:38) The PLLuM Project's Mission (Part 1) (20:25) Sponsors: Agents of Scale | Framer (22:47) The PLLuM Project's Mission (Part 2) (22:47) Defining Polish AI Values (35:32) Sourcing and Curating Data (Part 1) (35:38) Sponsors: Tasklet | Shopify (38:46) Sourcing and Curating Data (Part 2) (44:40) Small Models, Big Advantage (58:21) Training and Domain Adaptation (01:12:22) Compute, Talent, and Geopolitics (01:22:50) Forming International AI Alliances (01:27:41) Decentralized AI and Conclusion (01:31:47) Outro
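For readers curious what "Language Adaptation" of a base model can look like in practice, here is a minimal, hypothetical sketch of continued causal-language-model pretraining on Polish text using the Hugging Face transformers and datasets libraries; the base model name, the corpus file and the hyperparameters are placeholder assumptions, not the PLLuM team's actual recipe.

```python
# Hypothetical sketch of "language adaptation": continued causal-LM pretraining of an
# existing base model on Polish text. Model name, dataset and hyperparameters are
# placeholders, not PLLuM's actual training setup.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "mistralai/Mistral-7B-v0.1"   # any open base model; placeholder choice
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Placeholder corpus: in practice this would be a large, curated Polish text collection.
raw = load_dataset("text", data_files={"train": "polish_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = raw["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="language-adaptation-demo",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=16,
                           learning_rate=2e-5,
                           num_train_epochs=1,
                           bf16=True),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```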
At the Fall '25 vCon conference in Washington, D.C., Doug Green, Publisher of Technology Reseller News, sat down with Dan Petrie, CEO & President of SIPez, to talk about the origins, purpose, and practical future of vCon technology. Petrie, who co-authored the original vCon draft and brought it to the IETF in 2003, describes vCon as a “standard container for capturing conversations” across voice, video, messaging, email, web chat, and more—bringing structure and consistency to interaction data that has long been fragmented across proprietary platforms. Drawing an analogy to Adobe's breakthrough with PDF, Petrie explains that just as PDF standardized how documents are represented and shared regardless of word processor or device, vCon does the same for conversational data. By abstracting common elements like parties, metadata, transcripts, and even AI-generated analytics into a unified format, vCons allow enterprises to capture, store, and analyze interactions from call centers, UCaaS platforms, and messaging systems in a consistent way. This unlocks deeper analysis—such as customer sentiment, agent performance, product feedback, and workflow optimization—without having to wrestle with dozens of incompatible APIs. Petrie stresses that vCon is especially valuable in an AI-driven world, where structured, well-labeled data is essential. “To get real value from AI, you need structured data,” he notes, pointing out that large language models like ChatGPT can only work on limited context windows and rely on upstream systems to extract, segment, and feed the right portions of conversation data. vCons provide that layer: a rich, extensible container that supports encryption, signing, redaction, amendments, and complex scenarios such as multi-leg call transfers and agent handoffs. Much of Petrie's advice is practical: don't try to build everything from scratch. SIPez maintains open-source vCon projects (such as PyvCon) and also offers a commercial vCon recording and AI analysis solution for the NetSapiens platform, giving service providers and MSPs a faster on-ramp. As more vendors add vCon interfaces and as small and mid-sized providers adopt these tools, Petrie believes 2026 will be a pivotal year for MSPs and channel partners to start monetizing vCon-based analytics and services across horizontal markets—from healthcare to customer support and beyond. To learn more about SIPez's vCon tools, open-source projects, and consulting services, visit http://sipez.com/.
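As a rough illustration of the container Petrie describes, the sketch below builds a vCon-style JSON object in Python. The field names follow the public IETF vCon draft only loosely (parties, dialog, analysis, attachments) and should be checked against the current specification; the phone numbers, vendor name and sentiment payload are invented.

```python
# Rough sketch of a vCon-style container, loosely following the public IETF draft;
# exact member names and required fields should be checked against the current spec.
import json
import uuid
from datetime import datetime, timezone

vcon = {
    "vcon": "0.0.1",                                   # draft version string (illustrative)
    "uuid": str(uuid.uuid4()),
    "created_at": datetime.now(timezone.utc).isoformat(),
    "parties": [
        {"tel": "+15551230001", "name": "Caller"},
        {"tel": "+15551230002", "name": "Support agent"},
    ],
    "dialog": [
        {   # one leg of the conversation; could also reference an external recording URL
            "type": "text",
            "start": "2025-11-01T14:00:00Z",
            "parties": [0, 1],
            "body": "Hi, I'm calling about my last invoice...",
        }
    ],
    "analysis": [
        {   # AI-generated artifacts (transcripts, sentiment, summaries) attach here
            "type": "sentiment",
            "dialog": 0,
            "vendor": "example-analyzer",              # hypothetical vendor name
            "body": {"label": "neutral", "score": 0.62},
        }
    ],
    "attachments": [],                                 # e.g. related documents or metadata
}

print(json.dumps(vcon, indent=2))
```

The point of a standard container is that this same structure can carry a call-centre recording, a web chat or an email thread, with AI outputs attached under analysis rather than scattered across vendor-specific formats.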
(This is made with AI from our sponsor, Buzzsprout) We break down the Recap Apocalypse across Spotify, YouTube, Apple, and Amazon, then dig into craft with Brad Mielke on how Start Here reached 2,000 episodes by prioritising clarity, titles that pull, and audio-first production. Data meets discipline and the result is steady growth without burnout.
• Spotify's Creator Wrapped as a real growth tool
• YouTube's US-only charts and watch-time logic
• Apple Replay and Amazon Delivered compared
• Why hosts should build their own year-in-review
• Start Here's daily format and guest booking tactics
• Titles that drive plays and timely packaging
• News avoidance, constructive journalism, balance
• Audio-only discipline vs video tradeoffs
• UK podcast charts and creator ad spend signals
• iOS auto-chapters and timed links for navigation
• V4V, boosts, and payment standard progress
• New tools, APIs, and analytics experiments
Start podcasting, keep podcasting with BuzzSprout.com
Send James & Sam a message
Support the show
Connect With Us: Email: weekly@podnews.net Fediverse: @james@bne.social and @samsethi@podcastindex.social Support us: www.buzzsprout.com/1538779/support Get Podnews: podnews.net
Jake and Michael discuss all the latest Laravel releases, tutorials, and happenings in the community. This episode is sponsored by CodeRabbit; Smart CLI Reviews act as quality gates for Codex, Claude, Gemini, and you.
Show links
Blade @hasStack Directive Added in Laravel 12.39
Time Interval Helpers in Laravel 12.40
Pause a Queue for a Given Number of Seconds in Laravel 12
PHP 8.5 is released with the pipe operator, URI extension, new array functions, and more
Introducing Mailviews Early Access
Prevent Disposable Email Registrations with Email Utilities for Laravel
A DynamoDB Driver for the Laravel Auditing Package
Build Production-ready APIs in Laravel with Tyro
Tutorials
Separate your Cloudflare page cache with a middleware group
PostgreSQL vs. MongoDB for Laravel: Choosing the Right Database
Modernizing Code with Rector - Laravel In Practice EP12
Static Analysis Secrets - Laravel In Practice EP13
Send us a text
This quick-tip episode breaks down the blueprint for building a high-efficiency insurance eligibility system that actually works—before the chaos of January hits. Brandon reveals the annual VOB strategy top practices use to stay ahead: forming a dedicated verification task force, running preseason workflow simulations, batching by payer, gamifying team performance, using standups and dashboards, and applying smart tools like APIs and AI phone verification. He walks through how to prep your team months in advance, streamline communication, prevent burnout, and protect your revenue from missed benefits, incorrect data, and preventable errors. If your practice struggles every January with verification bottlenecks, denials, or frantic phone calls, this episode gives you a proven, repeatable model to transform your entire VOB process—and start the year with clarity, accuracy, and confidence. Welcome to Private Practice Survival Guide Podcast hosted by Brandon Seigel! Brandon Seigel, President of Wellness Works Management Partners, is an internationally known private practice consultant with over fifteen years of executive leadership experience. Seigel's book "The Private Practice Survival Guide" takes private practice entrepreneurs on a journey to unlocking key strategies for surviving―and thriving―in today's business environment. Now Brandon Seigel goes beyond the book and brings the same great tips, tricks, and anecdotes to improve your private practice in this companion podcast.
Get In Touch With Me
Podcast Website: https://www.privatepracticesurvivalguide.com/
LinkedIn: https://www.linkedin.com/in/brandonseigel/
Instagram: https://www.instagram.com/brandonseigel/
https://wellnessworksmedicalbilling.com/
Private Practice Survival Guide Book
In this episode of The Product Experience, host Randy Silver speaks with Teresa Huang — Head of Product for Enablement at global health‑insurer Bupa — about the often‑overlooked world of platform product management. They explore why building internal platforms is fundamentally different and often more challenging than building user‑facing products, how to measure the value of platform work, and practical strategies for gaining stakeholder alignment, driving platform adoption and demonstrating business impact.
Chapters
0:00 – Why “efficiency” alone no longer cuts it — measuring platform impact in business terms
1:02 – Teresa's background: from business analyst to head of product in health insurance
6:20 – What we mean by “platform product management” — internal tools vs marketplace vs public‑API platforms
7:44 – Why you need to “hop two steps”: address developer needs and end-customer value
10:24 – Types of platforms: internal APIs, marketplace ecosystems, public‑facing platforms (e.g. Shopify)
10:55 – Reframing platform work: building business cases instead of chasing “efficiency” metrics
13:16 – Linking platform initiatives to core business goals and joint OKRs
15:47 – The importance of visualisation — using prototypes and role‑plays to communicate platform value
20:57 – Internal showcases: keeping stakeholders engaged with real‑world scenarios
23:28 – Success metrics for platforms: adoption, usage, reliability, ecosystem growth
26:00 – Retiring legacy services: deciding when low-use tools should be decommissioned
28:55 – From cost centre to enabler: shifting the narrative to show value creation
Our Hosts
Lily Smith enjoys working as a consultant product manager with early-stage and growing startups and as a mentor to other product managers. She's currently Chief Product Officer at BBC Maestro, and has spent 13 years in the tech industry working with startups in the SaaS and mobile space. She's worked on a diverse range of products – leading the product teams through discovery, prototyping, testing and delivery. Lily also founded ProductTank Bristol and runs ProductCamp in Bristol and Bath.
Randy Silver is a Leadership & Product Coach and Consultant. He gets teams unstuck, helping you to supercharge your results. Randy's held interim CPO and Leadership roles at scale-ups and SMEs, advised start-ups, and been Head of Product at HSBC and Sainsbury's. He participated in Silicon Valley Product Group's Coaching the Coaches forum, and speaks frequently at conferences and events. You can join one of the communities he runs for CPOs (CPO Circles), Product Managers (Product In the {A}ether) and Product Coaches. He's the author of What Do We Do Now? A Product Manager's Guide to Strategy in the Time of COVID-19. A recovering music journalist and editor, Randy also launched Amazon's music stores in the US & UK.
What does it really take to build AI that can resolve customer support at scale reliably, safely, and with measurable business impact?
We explore how Intercom has evolved from a traditional customer support platform into an AI-first company, with its AI assistant, Fin, now resolving 65% of customer queries without human intervention. Intercom's Chief AI Officer, Fergal Reid, discusses the company's journey from natural language understanding (NLU) systems to their current retrieval augmented generation (RAG) approach, explaining how they've optimised every component of their AI pipeline with custom-built models.
The conversation covers Intercom's unique approach to AI product development, emphasising standardisation and continuous improvement rather than customisation for individual clients. Fergal explains their outcome-based pricing model, where clients pay for successful resolutions rather than conversations, and how this aligns incentives across the business.
We also discuss Intercom's approach to agentic AI, which enables their systems to perform complex, multi-step tasks, such as processing refunds, by integrating with various APIs. Fergal shares insights on testing methodologies, the balance between customisation and standardisation, and the challenges of building AI products in a rapidly evolving technological landscape.
Finally, Fergal shares what excites and honestly freaks him out a bit about where AI is heading next.
Timestamps
00:00 - Intro
02:31 - Welcome to Fergal Reid
05:26 - How to train an NLU solution effectively?
08:56 - What gen AI changed for Intercom
10:57 - How would you describe Fin?
14:30 - Fin's performance increase
17:18 - Intercom's custom models
22:14 - Large Language Models vs Small Language Models
30:40 - RAG and 'the full stop problem'
40:08 - Agentic AI capabilities at Intercom
50:40 - Intercom's approach to testing
1:04:46 - About the most exciting things in the AI space
Show notes
Learn more about Intercom
Connect with Fergal Reid on LinkedIn
Follow Kane Simms on LinkedIn
Article - The full stop problem: RAG's biggest limitation
Take our updated AI Maturity Assessment
Subscribe to VUX World
Subscribe to The AI Ultimatum Substack
Hosted on Acast. See acast.com/privacy for more information.
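Intercom's Fin pipeline is proprietary, so as a generic picture of the retrieval-augmented generation pattern discussed in this episode, here is a toy Python sketch: rank help-centre snippets against the user's question, then hand the top matches to a language model as grounding context. The keyword-overlap scorer and the generate() stub are stand-ins for a real embedding-based retriever and LLM call.

```python
# Toy illustration of the RAG pattern: retrieve relevant help-centre snippets for a
# user question, then pass them to a language model as grounding context.
# The naive scorer and the generate() stub are placeholders, not Intercom's Fin.

HELP_ARTICLES = {
    "refunds": "Refunds are issued to the original payment method within 5-7 business days.",
    "password": "You can reset your password from the login page via 'Forgot password'.",
    "billing": "Invoices are emailed on the first day of each billing cycle.",
}

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank articles by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        HELP_ARTICLES.values(),
        key=lambda text: len(q_words & set(text.lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(question: str, context: list[str]) -> str:
    """Stand-in for an LLM call: a real system would send prompt + context to a model."""
    prompt = "Answer using only this context:\n" + "\n".join(context) + f"\nQ: {question}"
    return f"[model response grounded in {len(context)} snippets]\n{prompt[:120]}..."

if __name__ == "__main__":
    question = "How long does a refund take?"
    print(generate(question, retrieve(question)))
```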
In this episode of the Power Producers Podcast, David Carothers, Kyle Houck, and Clinton Houck explore how technology, customer experience, and new product offerings are reshaping opportunities in the insurance industry. Clinton, who started his career at State Farm and later moved into the insurtech space, shares how his path led him to Fair, a company reimagining the vehicle warranty space. Historically plagued by poor customer experiences and shady telemarketing tactics, warranties are being reinvented as a trustworthy, transparent, and agency-distributed product.
Key Highlights:
Disrupting Auto Warranties: Clinton Houck explains how Fair eliminates dealership markups and regulation issues to offer independent agents a transparent, partner-focused warranty solution with a superior claims experience.
The "Plus One" Cross-Sell: Learn how to seamlessly integrate warranty discussions into everyday workflows. This strategy offers clients critical financial protection against repair bills while boosting agency revenue and retention.
Closing Commercial Coverage Gaps: David and Clinton highlight a major opportunity: protecting rideshare drivers and commercial fleets, which are often excluded by standard personal warranties, from cash flow shocks.
Plug-and-Play Sales Tech: Clinton details Fair's agent-friendly technology, from embeddable quoting links and APIs to an in-house sales team that can handle the entire process for your agency.
Connect with: David Carothers LinkedIn Clinton Houck LinkedIn Kyle Houck LinkedIn
Visit Websites: Power Producer Base Camp Fair Killing Commercial Crushing Content Power Producers Podcast Policytee The Dirty 130 The Extra 2 Minutes
This special ChinaTalk cross-post features Zixuan Li of Z.ai (Zhipu AI), exploring the culture, incentives, and constraints shaping Chinese AI development. PSA for AI builders: Interested in alignment, governance, or AI safety? Learn more about the MATS Summer 2026 Fellowship and submit your name to be notified when applications open: https://matsprogram.org/s26-tcr. The discussion covers Z.ai's powerful GLM 4.6 model, their open weights strategy as a marketing tactic, and unique Chinese AI use cases like "role-play." Gain insights into the rapid pace of innovation, the talent market, and how Chinese companies view their position relative to global AI leaders. Sponsors: Google AI Studio: Google AI Studio features a revamped coding experience to turn your ideas into reality faster than ever. Describe your app and Gemini will automatically wire up the right models and APIs for you at https://ai.studio/build Agents of Scale: Agents of Scale is a podcast from Zapier CEO Wade Foster, featuring conversations with C-suite leaders who are leading AI transformation. Subscribe to the show wherever you get your podcasts Framer: Framer is the all-in-one platform that unifies design, content management, and publishing on a single canvas, now enhanced with powerful AI features. Start creating for free and get a free month of Framer Pro with code COGNITIVE at https://framer.com/design Tasklet: Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai Shopify: Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. Start your $1/month trial today at https://shopify.com/cognitive PRODUCED BY: https://aipodcast.ing CHAPTERS: (00:00) Sponsor: Google AI Studio (00:31) About the Episode (03:44) Introducing Z.AI (07:07) Zhipu AI's Backstory (09:38) Achieving Global Recognition (Part 1) (12:53) Sponsors: Agents of Scale | Framer (15:15) Achieving Global Recognition (Part 2) (15:15) Z.AI's Internal Culture (19:17) China's AI Talent Market (24:39) Open vs. Closed Source (Part 1) (24:46) Sponsors: Tasklet | Shopify (27:54) Open vs. Closed Source (Part 2) (35:16) Enterprise Sales in China (40:38) AI for Role-Playing (45:56) Optimism vs. Fear of AI (51:36) Translating Internet Culture (57:11) Navigating Compute Constraints (01:03:59) Future Model Directions (01:15:02) Release Velocity & Work Culture (01:25:04) Outro
Send us a text
Stop guessing your way through sales. We sit down with Carolyn Miller—builder's daughter, top-performing dealer, sales trainer, and CRM implementer—to map a simple path from chaotic follow-up to a clean, scalable system that grows shed and post‑frame sales. Carolyn's Ask, Listen, Solve framework anchors the conversation: ask smarter questions that surface real needs, listen for budget, timing, and site constraints, then solve with a clear next step that moves the deal forward. From there, we translate that human process into technology your team will actually use.
You'll hear concrete examples of how a right-sized CRM becomes more than a contact list. We talk automations that text prospects within minutes of a configurator submission, task sequences that keep quotes alive, and post‑delivery check-ins that trigger five‑star Google reviews and referrals. Carolyn shares a client win where automation alone revived a lead the salesperson had written off, turning it into an $800 profit carport sale. We also open the hood on integrations—connecting IdeaRoom or Digital Shed Builder to capture high-intent leads, syncing orders to QuickBooks Online to eliminate double entry, and pushing projects to monday.com so production and delivery stay in lockstep with sales.
If your tech stack already feels crowded, this chat will help you make it act like one system. We cover when to use APIs, webhooks, and Zapier, and why a simple front end matters more than a flashy dashboard. Most importantly, we focus on adoption: weekly coaching, tight feedback loops, and small refinements so your team starts the day in the CRM and never loses the thread with a customer again. Ready to replace "winging it" with a repeatable process that frees your time and lifts your close rate? Hit play, then tell us your biggest follow-up bottleneck—we'll tackle it in a future installment. If you find value here, subscribe, leave a review, and share this with a dealer who needs a cleaner system.
For more information or to know more about the Shed Geek Podcast visit us at our website.
Would you like to receive our weekly newsletter? Sign up here.
Follow us on Twitter, Instagram, Facebook, or YouTube at the handle @shedgeekpodcast.
To be a guest on the Shed Geek Podcast visit our website and fill out the "Contact Us" form.
To suggest show topics or ask questions you want answered email us at info@shedgeek.com.
This episode's Sponsors:
Studio Sponsor: Shed Pro
Identigrow
CAL
Cardinal Leasing
Digital Shed Builder
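To make the webhook idea from this episode concrete, here is a minimal, hypothetical Python sketch of that kind of glue: a small HTTP endpoint that receives a configurator submission and immediately queues a follow-up text. The payload fields and the send_text() stub are assumptions; a real integration would use whatever webhook or API mechanism the configurator actually exposes, or Zapier for the same wiring without code.

```python
# Minimal sketch of the webhook idea discussed above: an endpoint that receives a
# configurator submission and queues an immediate follow-up text. Payload fields and
# the send_text() stub are invented for illustration only.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def send_text(phone: str, message: str) -> None:
    """Stand-in for an SMS or CRM API call."""
    print(f"Would text {phone}: {message}")

class LeadWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        lead = json.loads(self.rfile.read(length) or b"{}")
        name = lead.get("name", "there")
        phone = lead.get("phone", "")
        if phone:
            send_text(phone, f"Hi {name}, thanks for configuring your building - "
                             "want to pick a delivery window?")
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    # Listens locally; a real deployment would sit behind HTTPS and verify the sender.
    HTTPServer(("0.0.0.0", 8080), LeadWebhook).serve_forever()
```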
Ben Toner once again joins Keith Parsons to explain the new WLAN-Pi App. Originally built to control the WLAN-Pi Go, the app now works with all WLAN-Pi models and consolidates controls previously spread across the web UI and APIs. They then dive deeper into the method of connectivity for the app and the functionalities it... Read more »
Most software teams still think of payments as a chore. We take you inside the playbook that turns it into a growth engine. I sit down with NMI CTO Phillip Goericke to unpack how embedded payments evolved from a basic checkout to a full-stack platform that handles onboarding, underwriting, payouts, analytics, and even embedded finance. The conversation is straight talk on what actually works when you're shipping fast and scaling globally.
We dig into the architectural choices that matter: start with a no-code drop-in to activate revenue quickly, then progress to low-code SDKs and finally full APIs when you need deep control. Phillip shares where platforms stall (manual KYC, fragmented global rules, and data blind spots) and how a modular approach fixes these without ripping out your stack. You'll hear how compliance-as-a-service, network tokenization, and adaptive 3D Secure can raise approval rates, reduce fraud, and simplify audits while keeping the checkout experience seamless.
Looking ahead, we explore why identity, compliance, and data are the foundation for embedded finance. Phillip outlines NMI's unified experience that brings payments, onboarding, insights, and new services like business capital into one place. We also tackle AI with clear eyes: use it to augment decisioning and anomaly detection, but wrap it with deterministic controls so money-critical outcomes are consistently right. The key takeaway is a mindset shift: stop treating payments as a feature and start using it as a strategic lever for revenue, retention, and product velocity.
If you're building software with transactions anywhere in the flow, this is your blueprint for turning payments into a competitive moat. Subscribe for more deep dives, share with a teammate who owns monetization, and leave a review to tell us what topic you want next.
In this special 2026 Payments Outlook episode of the Payments Podcast, host Owen McDonald is joined by Jessica Cheney and Vitus Rotzer to explore the trends shaping the future of banking and B2B payments. From monetizing ISO 20022 data and accelerating real-time and cross-border payments to scaling embedded payments and leveraging AI for fraud prevention, this conversation dives deep into the strategies banks must adopt to stay competitive. Discover why collaboration, APIs, and advanced analytics will define success in the coming year.
In this special Cloud Wars report, Bob Evans sits down with Michael Ameling, President and Chief Product Officer of SAP Business Technology Platform, for a deep dive into how SAP is helping customers navigate the fast-moving AI Era. Ameling and Evans discuss how SAP's Business Data Cloud, partnerships with Snowflake and Databricks, HANA Cloud innovations, and new AI-powered tools and agents are helping SAP evolve from an applications powerhouse into a data-and-AI-driven business platform for the next generation.
SAP's AI Data Future
The Big Themes:
SAP HANA Cloud Becomes an AI-Optimized Database: SAP HANA Cloud is evolving into "the database AI was looking for." As a multi-model system supporting spatial, graph, vector, and document storage, HANA Cloud enables AI workloads to run more efficiently and contextually. Recent additions, like vector engines and Knowledge Graph capabilities, give customers powerful tools for retrieval-augmented generation (RAG), contextual reasoning, and advanced analytics.
Developers Are 'The AI Revolution': Developers aren't observing the AI Revolution, they are the revolution. With modern AI tools, developers can innovate faster, solve bigger problems, and directly influence business outcomes. SAP is investing heavily in meeting developers where they are by enhancing IDEs, building business-aware development tools, and providing context-rich assets such as APIs, business objects, and process insights. AI acts as a teammate, not a replacement.
SAP: An Applications and a Data Company: SAP must be both an applications and a data company. Customer value emerges when applications, data, and AI converge seamlessly. SAP's decades of industry expertise give it unparalleled business context, which becomes even more powerful when embedded into AI agents and data platforms. With more than 34,000 SAP HANA Cloud customers and rapidly expanding AI adoption, SAP is positioning itself as the platform where business process knowledge meets modern AI capability.
The Big Quote: "... what we need to understand that AI is our teammate. It's like asking your best friend who has a lot of knowledge, but you can ask multiple friends at the same time. Not everything is always right, but you can ask questions, you can continuously improve. If we understand that pattern, we understand that AI helps us to solve much bigger problems as a developer, and then, of course, having much more impact on real business."
More from Michael Ameling and SAP: Connect with Michael Ameling on LinkedIn, or get more insights from SAP TechEd. Visit Cloud Wars for more.