Programming language with English-like syntax
AI is happening so fast; Ransomware attacks increasing but payments going down; AI's ability to write COBOL tanks IBM stock; Bumper Music; My network interface seems to have broken DNS
Andrew and Ben break down a busy week on the Friday Deploy, starting with the market reaction to new COBOL tools and the permissions oversights that led to recent outages at AWS. They also explore the shifting landscape of developer productivity studies, the security risks of cloud-hosted agents, and the latest cybersecurity takeaways from the International AI Safety report. Finally, they close out the episode by checking in on a retired Claude model that was given a blog.

Follow the show: Subscribe to our Substack. Follow us on LinkedIn. Subscribe to our YouTube channel. Leave us a review.
Follow the hosts: Follow Andrew. Follow Ben. Follow Dan.

Follow today's stories:
IBM Didn't Lose 13% Because COBOL Died
AWS suffered 'at least two outages' caused by AI tools, and now I'm convinced we're living inside a 'Silicon Valley' episode
We are Changing our Developer Productivity Experiment Design
'Deepfakes spreading and more AI companions': seven takeaways from the latest artificial intelligence safety report
Greetings from the Other Side (of the AI Frontier)

OFFERS
Start Free Trial: Get started with LinearB's AI productivity platform for free.
Book a Demo: Learn how you can ship faster, improve DevEx, and lead with confidence in the AI era.

LEARN ABOUT LINEARB
AI Code Reviews: Automate reviews to catch bugs, security risks, and performance issues before they hit production.
AI & Productivity Insights: Go beyond DORA with AI-powered recommendations and dashboards to measure and improve performance.
AI-Powered Workflow Automations: Use AI-generated PR descriptions, smart routing, and other automations to reduce developer toil.
MCP Server: Interact with your engineering data using natural language to build custom reports and get answers on the fly.
This week, we discuss AI-assisted COBOL migrations, the OpenClaw Foundation, and AI killing Office. Plus, is TSA PreCheck Touchless the peak of airport efficiency? Watch the YouTube Live Recording of Episode 561.

Runner-up Titles: New's not good; He knows how to be retired; Let Matt Cook; We don't have to worry about that Brandon; You're that guy; The stock market feels reactionary; Siri-Claw Foundation Washing; Give me life-changing money and I'll have a better take; Why do I need to pay for power usage?

Rundown:
IBM is the latest AI casualty. Shares tank 13% on Anthropic programming language threat
IBM Crashes 11% as Anthropic Threatens COBOL Empire
Mechanical Orchard: Half Baked
OpenClaw, OpenAI and the future
This Is the Biggest Threat to Microsoft Office I've Ever Seen.
LibreOffice Online: a fresh start - TDF Community Blog
Linux 7.0-rc1 Released With Many New Features

Relevant to your Interests:
Warren Buffett's Berkshire Hathaway announces it sold 77% of its Amazon
they hacked CSS
The A.I. Disruption We've Been Waiting for Has Arrived
YOLO Travel Bookings
This App Warns You if Someone Is Wearing Smart Glasses Nearby
The Death of Spotify: Why Streaming is Minutes Away From Being Obsolete
OpenAI resets spending expectations, tells investors compute target is around $600 billion by 2030
Cloud and AWS cost consultant Duckbill expands to software, raises $7.75M for new Skyway platform
Man accidentally gains control of 7,000 robot vacuums
My smart sleep mask broadcasts users' brainwaves to an open MQTT broker

Nonsense:
GE Profile made a smaller version of its nugget ice maker that needs less counter space
TSA PreCheck Touchless ID | Delta Air Lines

Listener Feedback: Introducing Agent Plugins for AWS

Conferences:
DevOpsDay LA at SCALE23x, March 6th, Pasadena, CA. Use code DEVOP for 50% off.
Devnexus 2026, March 4th to 6th, Atlanta, GA. Use this 30% off discount code from your pals at Tanzu: DN26VMWARE30. Check out the Tanzu and Spring talks and trading cards on THE LANDING PAGE.
Austin Meetup, March 10th, listener Steve Anness speaking on Grafana
KubeCon EU, March 23rd to 26th, 2026 - Coté will be there on a media pass.
DevOpsdays Atlanta 2026, April 21-22, 2026
DevOpsDays Austin, May 5-6, 2026
WeAreDevelopers, July 8th to 10th, Berlin, Coté speaking.
VMware User Groups (VMUGs): Amsterdam (March 17-19, 2026) - Coté speaking. Minneapolis (April 7-9, 2026). Toronto (May 12-14, 2026). Dallas (June 9-11, 2026). Orlando (October 20-22, 2026).

SDT News & Community:
Join our Slack community
Email the show: questions@softwaredefinedtalk.com
Free stickers: Email your address to stickers@softwaredefinedtalk.com
Follow us on social media: Twitter, Threads, Mastodon, LinkedIn, BlueSky
Watch us on: Twitch, YouTube, Instagram, TikTok
Book offer: Use code SDT for $20 off "Digital WTF" by Coté
Sponsor the show

Recommendations:
Brandon: Milestone Birthdays (iOS App)
Matt: Lupin on Netflix
Cheese is MIA, so Chad brings the ladies in to take over, and chaos follows: StepStone celebrates record applications… during record job desperation. Spin level: Olympic gold. AI agent harassment enters the chat. IBM's COBOL cash cow meets AI with a chainsaw. Google's "CareerDreamer": copy prompt → paste → profit? Kombo vs. Humand at 2 a.m. Tech layoffs are giving Hunger Games energy. CEOs call unemployment "momentum." Workers call it "rent's due." AI in hiring: helpful assistant or reputation-wrecking hallucination machine? Stay tuned.
Main Takeaways

The "Stargate" Collapse: The $500 billion partnership between OpenAI, SoftBank, and Oracle is being labeled "vaporware." Reports suggest the deal is in shambles due to internal power struggles and a lack of actual liquidity, with SoftBank allegedly scrambling for 90% debt financing.
Market Volatility vs. Reality: There is a disconnect between market reactions and product performance. While Anthropic's claim that Claude can streamline COBOL code caused IBM's stock to drop 10%, critics argue the public is still in a "demo phase" of awe and hasn't realized the tech often fails to work as advertised.
Reliability Concerns: High-profile failures are surfacing, such as Claude reportedly deleting a Meta researcher's entire Gmail history. This raises alarms as these same models are being positioned to manage critical infrastructure like banking and the IRS.
Corporate Espionage: Anthropic has reported "industrial-scale distillation attacks" from Chinese labs (DeepSeek, Moonshot AI, MiniMax), claiming they used over 24,000 fraudulent accounts to "siphon" Claude's capabilities to train their own models.
The "Theranos" Comparison: Critics are drawing parallels between current AI labs and failed startups like Theranos, arguing that the goal of reaching AGI via Large Language Models may be technically impossible, creating a "feedback loop delusion" to sustain venture capital investment.
Strategic Shifts: OpenAI is pivoting toward traditional consulting giants (McKinsey, Accenture) to integrate its tech, while the community continues to debate the technical distinctions between generative AI and autonomous agents.

@XFreeze @MrEwanMorrison @sterlingcrispin @dwlz
Join host Steven Dickens in this inspiring episode of I Am a Mainframer featuring Junior Tadiffo, a third-year Computer Science student at University at Buffalo, IBM Z Student Ambassador, and President of the UB IBM Z Club. Junior shares his journey discovering the mainframe through a friend's recommendation, earning IBM Z badges on z/OS Explore, and igniting his passion for this powerful platform.

From his first "mind-blowing" experience accessing z/OS to running the university's IBM Z Club, Junior discusses the perception challenges on college campuses, the importance of hands-on access like z/OS Explore, and how open source Linux on the mainframe makes the platform more accessible to students. He also shares advice for the mainframe community on supporting early-career talent, and his vision for more public resources, YouTube tutorials, and greater mainframe integration in modern computing over the next 10 years.

Celebrating Black History Month: This episode highlights Junior Tadiffo's journey as a Black student leader breaking barriers in mainframe technology during Black History Month. Junior represents the next generation of diverse talent bringing fresh perspectives and energy to the mainframe ecosystem, proving that innovation knows no bounds.
Kevin Horner dials up the chart of "Big Blue" and tries to make sense of the recent selling in IBM Corp. (IBM). Following news of Anthropic's latest COBOL capabilities with its Claude AI, shares of IBM slid 13% on Monday, the stock's biggest single-session drop of this millennium. Kevin looks at the long-term significance of support at the $225 level and shows how dramatic this week's drop was for IBM.

======== Schwab Network ========
Empowering every investor and trader, every market day.
Subscribe to the Market Minute newsletter - https://schwabnetwork.com/subscribe
Download the iOS app - https://apps.apple.com/us/app/schwab-network/id1460719185
Download the Amazon Fire TV app - https://www.amazon.com/TD-Ameritrade-Network/dp/B07KRD76C7
Watch on Sling - https://watch.sling.com/1/asset/191928615bd8d47686f94682aefaa007/watch
Watch on Vizio - https://www.vizio.com/en/watchfreeplus-explore
Watch on DistroTV - https://www.distro.tv/live/schwab-network/
Follow us on X - /schwabnetwork
Follow us on Facebook - /schwabnetwork
Follow us on LinkedIn - /schwab-network
About Schwab Network - https://schwabnetwork.com/about
Anthropic's Hidden Claude 1, Market-Shaking AI Tools, and MIT's One-Step 3D-Printed Electric Motor

Host Jim Love covers three major stories: Anthropic CEO Dario Amodei's comments on AI governance and safety, including that "Claude 1" was built before ChatGPT but not released because it didn't meet Anthropic's alignment and safety bar; how Anthropic's recent launches (Claude for knowledge-work "cowork" workflows, deeper office/document integrations, Claude Code Security for vulnerability scanning, and tooling to automate parts of COBOL modernization) coincided with sharp market reactions, including declines in CrowdStrike and Zscaler (around 10-11%) and a major IBM drop (more than 13%), amid fears AI could disrupt SaaS, cybersecurity, and legacy modernization revenue; and MIT researchers' report of a 3D printing process that produces a fully functional linear electric motor in a single step (aside from magnetization), with a reported material cost of around 50 cents in a lab setting, raising the prospect of on-demand manufacturing and compressed supply chains.

Hashtag Trending would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless, and cellular, in one integrated solution that's built for performance and scale. You can find them at Meter.com/htt

00:00 Headlines and Sponsor
00:45 Amodei vs Altman
01:29 Claude 1 Not Shipped
03:19 Anthropic Shakes Markets
04:57 AI Hits Cybersecurity
05:28 COBOL Modernization Shock
08:10 MIT Prints Electric Motor
09:39 Manufacturing Disruption
10:26 Wrap Up and Thanks
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Listen to Full Audio at https://podcasts.apple.com/us/podcast/ai-business-and-development-daily-news-rundown/id1684415169?i=1000751248515
Infinite AI Monkeys, Ploutus, Serv-U, Fortinet, Cyberwar, COBOL, NIST, Dr. Strangelove, Aaran Leyland, and More on the Security Weekly News. Visit https://www.securityweekly.com/swn for all the latest episodes! Show Notes: https://securityweekly.com/swn-558
In episode 129, Ronald and Jan talk with Jan Buurman of HCS Company, someone who has been in IT since he was 17 and has seen the evolution from mainframes to Kubernetes up close. Jan started out as a COBOL programmer and grew into organization, processes, and platform strategy. From that broad experience he looks at Kubernetes differently than many engineers do: not as an end solution, but as part of a bigger whole.

One of the sharpest insights from the conversation: many organizations have been running Kubernetes for five years, yet have often modernized only 5-10% of their application landscape. According to Jan, the real problem lies not with the platform but with legacy applications and the business context around them. He shares a confronting anecdote about a government organization where Kubernetes had been implemented successfully from a technical standpoint but didn't solve the underlying problem, because the old COBOL and Oracle Forms applications still had to be rebuilt.

We talk about: why platform engineering doesn't automatically mean modernization; the gap between "the bubble" and the reality for developers outside the cloud-native world; community thinking within organizations as an accelerator for standardization; AI as a possible catalyst for migrating legacy to Kubernetes faster; and why standardization so often remains an afterthought.

Jan closes with a sober view of the future of Kubernetes: nothing in IT is permanent. Kubernetes too will one day disappear, but until then it is the standard. An episode about realism, organizational change, and why technology is rarely the real problem.

Send us a message.
DevOps Conference: The Conference for CI/CD, Kubernetes, Platform Engineering & DevSecOps. Use code k8_Podcast for 15% off.
Support the show. Like and subscribe! It helps out a lot.
You can also find us on:
De Nederlandse Kubernetes Podcast - YouTube
Nederlandse Kubernetes Podcast (@k8spodcast.nl) | TikTok
De Nederlandse Kubernetes Podcast
Where can you meet us: Events
This podcast is powered by: ACC ICT - IT continuity for business-critical applications | ACC ICT
This is episode 356 of Tid er penger. All the services Tid er penger offers listeners can be found on our new, simple website: www.tiderpenger.no. There you'll find: LinkedIn page, Spotify, RSS, Discord chat, Bluesky, newsletter, Facebook group, Patreon, book recommendations, Peter's CV, and email. Hosted on Acast. See acast.com/privacy for more information.
The daughter of a hospital administrator, Amy Gleason never considered a career in the public sector; she went straight into healthcare. As an emergency room nurse, she started to see the dangers that unfold when healthcare providers don't have access to the information they need to treat patients. Those experiences drove her toward a tech career in the emerging electronic health records space, before a very personal experience altered her professional path yet again.

Amy's active and healthy 10-year-old daughter began suffering unusual health events, from rashes and headaches to broken bones. Eventually, she couldn't walk. It took more than a year from the start of these symptoms for doctors to diagnose her with a rare autoimmune disease, and even then it was an accidental diagnosis from a dermatologist conducting a skin biopsy.

Amy attributes the delayed diagnosis to siloed data, not dissimilar to the challenges she experienced as a nurse and was working to solve in the EHR space. It motivated her to co-found a company focused on helping patients with chronic diseases access their data and share it with the providers and family members helping to navigate complex care journeys.

In 2015, Amy's work earned her an award from the White House for Champions of Change in Precision Medicine, her first foray into the public sector. By 2018, she entered civic service full time with a role at the United States Digital Service, which she describes as "DOGE 1.0."

In this episode of Healthcare is Hard, Amy talked to Keith Figlioli about the work she's doing now as Strategic Advisor to CMS and Administrator of the U.S. DOGE Service, where her main mission is modernizing technology across government agencies for the millions of people who rely on federal services every day. This ranges from modernizing FAFSA and the student loan process, to improving the visa system ahead of the World Cup, and work on various critical healthcare systems.
Some of the topics Amy and Keith discussed in this episode include:

Bold plans for a Digital Health Ecosystem. Launched in July 2025, CMS' Health Tech Ecosystem is a public-private partnership designed as a voluntary, fast-moving alternative to slow rulemaking. Rather than years of regulation, the program uses pledges, working groups, and short development cycles to put interoperability building blocks and real patient-facing use cases in place. The goal is to get usable capabilities into the market in months, not years, let the community iterate, and have baseline use cases live by March 31, 2026, with more advanced capabilities rolling out by July.

Carrots and sticks before regulation. Recognizing the limitations of regulation, Amy talked about a new philosophy of incentivizing the market to change behaviors on its own first. "Carrots" include the rural health transformation fund and the recently introduced ACCESS model, a 10-year pilot that, for the first time, lets tech-enabled services bill Medicare directly. "Sticks" include stricter enforcement of information-blocking rules.

Replacing the 1970s-era Medicare claims system. Amy discussed plans to replace Medicare's decades-old COBOL-based adjudication platform. While it's a stable platform, it can't support real-time processing, AI, or rapid change. To replace it, CMS is looking to commercial, off-the-shelf solutions that operate at scale so claims processing can be modernized, made real-time, and integrated with new interoperability rails. It's a concrete example of bringing modern engineering and product thinking to government technology.

To hear Amy and Keith discuss these topics and more, listen to this episode of Healthcare is Hard: A Podcast for Insiders.
Photo by Viktor Keri on Unsplash. Published 16 February 2026. e543 with Andy, Michael and Michael: stories and discussion on agentic AI and the changing nature of work, agents renting humans, real-time translation, artistic roads, e-bikes for your feet and a whole lot more.

Andy, Michael and Michael get things rolling with several AI articles. First up is a Mastodon post by Alan Pringle that called attention to an HBR article on the influence of AI on productivity. This led to a post on productivity-acceleration technologies from years past, from COBOL, which was designed to enable business people to write programs, to 4GLs to CASE tools. Then the team discusses a detailed post from Matt Shumer entitled Something Big Is Happening. The entire post is well worth reading, not only for how history is unfolding in real time but also for the recommendations Matt makes for people to take onboard right now, among them building the habit of adapting and experimenting with multiple tools to develop resiliency and experience. Wrapping up this section is a new version of TaskRabbit that provides an API for agents to rent humans for specific work, called rentahuman.ai. The future is certainly coming in fast.

In the AR/VR section, there is a story from Tom's Guide where the author used her Ray-Ban Meta glasses to translate the Super Bowl halftime video in real time. This feels like the precursor to the next logical step: a dynamic version of the Amazon X-Ray feature, where further context can be personalized and served up to the user if they wish. After touching on the assembly of Game Poems and the art of roads in games, the team sprints to the end of the episode with Nike's Project Amplify, an ankle exoskeleton to augment humans' running abilities. Looping back to the start of the episode, Andy highlights a BBC show called Chris McCausland. What's been your experience with AI productivity? What are you experimenting with? Have your bots
Emmanuel and Guillaume discuss a range of programming topics, including file systems in Java, data-oriented programming, the pitfalls of JPA with Kotlin, and the new features in Quarkus. They also explore slightly crazy ideas like building datacenters in space, plus a fair amount of architecture. Recorded 13 February 2026. Download the episode LesCastCodeurs-Episode-337.mp3, or watch the video on YouTube.

News

Languages

How to implement a file system in Java: https://foojay.io/today/bootstrapping-a-java-file-system/
Build a custom Java file system with NIO.2 for varied uses (VCS, archives, remote systems). Java's evolution: java.io.File (1.0) -> NIO (1.4) -> NIO.2 (1.7), which enables customization via FileSystem. Design up front; the Java API is POSIX-oriented. Key components to consider: URI design (a unique scheme plus a path), tree management (database, metadata, efficiency), and binary storage (location, encryption, versioning). The minimum to get started (four components): implement Path (represents a file or directory), extend FileSystem (the file system instance), extend FileSystemProvider (the engine, registered by scheme), and register the FileSystemProvider via META-INF/services. Next steps: a database layer for the tree, basic directory and file operations, storage, and tests. A long and demanding process, but a rewarding one.
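Before writing your own provider, a quick way to see the NIO.2 SPI described above in action is to mount a zip archive through the JDK's built-in zip file system provider, which goes through exactly the same FileSystemProvider machinery. A minimal sketch (the file names are arbitrary):

```java
import java.net.URI;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;

public class ZipFsDemo {
    public static void main(String[] args) throws Exception {
        Path zip = Files.createTempFile("demo", ".zip");
        Files.delete(zip); // let the provider create the archive itself

        // The "jar:" scheme selects the zip FileSystemProvider registered
        // with the JDK; "create" = "true" makes a fresh archive.
        URI uri = URI.create("jar:" + zip.toUri());
        try (FileSystem fs = FileSystems.newFileSystem(uri, Map.of("create", "true"))) {
            Path inside = fs.getPath("/hello.txt");
            Files.writeString(inside, "hi from zipfs");
        }

        // Reopen the archive and read the entry back through the same API
        // you would use for any other FileSystem implementation.
        try (FileSystem fs = FileSystems.newFileSystem(zip, Map.of())) {
            String text = Files.readString(fs.getPath("/hello.txt"));
            if (!"hi from zipfs".equals(text)) throw new AssertionError(text);
            System.out.println("ok");
        }
        Files.deleteIfExists(zip);
    }
}
```

A custom provider would plug into the same lookup: the scheme in the URI picks the FileSystemProvider registered via META-INF/services, and everything after that is ordinary Path and Files code.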
Un article de brian goetz sur le futur du data oriented programming en Java https://openjdk.org/projects/amber/design-notes/beyond-records Le projet Amber de Java introduit les "carrier classes", une évolution des records qui permet plus de flexibilité tout en gardant les avantages du pattern matching et de la reconstruction Les records imposent des contraintes strictes (immutabilité, représentation exacte de l'état) qui limitent leur usage pour des classes avec état muable ou dérivé Les carrier classes permettent de déclarer une state description complète et canonique sans imposer que la représentation interne corresponde exactement à l'API publique Le modificateur "component" sur les champs permet au compilateur de dériver automatiquement les accesseurs pour les composants alignés avec la state description Les compact constructors sont généralisés aux carrier classes, générant automatiquement l'initialisation des component fields Les carrier classes supportent la déconstruction via pattern matching comme les records, rendant possible leur usage dans les instanceof et switch Les carrier interfaces permettent de définir une state description sur une interface, obligeant les implémentations à fournir les accesseurs correspondants L'extension entre carrier classes est possible, avec dérivation automatique des appels super() quand les composants parent sont subsumés par l'enfant Les records deviennent un cas particulier de carrier classes avec des contraintes supplémentaires (final, extends Record, component fields privés et finaux obligatoires) L'évolution compatible des records est améliorée en permettant l'ajout de composants en fin de liste et la déconstruction partielle par préfixe Comment éviter les pièges courants avec JPA et Kotlin - https://blog.jetbrains.com/idea/2026/01/how-to-avoid-common-pitfalls-with-jpa-and-kotlin/ JPA est une spécification Java pour la persistance objet-relationnel, mais son utilisation avec Kotlin présente des incompatibilités dues 
aux différences de conception des deux langages Les classes Kotlin sont finales par défaut, ce qui empêche la création de proxies par JPA pour le lazy loading et les opérations transactionnelles Le plugin kotlin-jpa génère automatiquement des constructeurs sans argument et rend les classes open, résolvant les problèmes de compatibilité Les data classes Kotlin ne sont pas adaptées aux entités JPA car elles génèrent equals/hashCode basés sur tous les champs, causant des problèmes avec les relations lazy L'utilisation de lateinit var pour les relations peut provoquer des exceptions si on accède aux propriétés avant leur initialisation par JPA Les types non-nullables Kotlin peuvent entrer en conflit avec le comportement de JPA qui initialise les entités avec des valeurs null temporaires Le backing field direct dans les getters/setters personnalisés peut contourner la logique de JPA et casser le lazy loading IntelliJ IDEA 2024.3 introduit des inspections pour détecter automatiquement ces problèmes et propose des quick-fixes L'IDE détecte les entités finales, les data classes inappropriées, les problèmes de constructeurs et l'usage incorrect de lateinit Ces nouvelles fonctionnalités aident les développeurs à éviter les bugs subtils liés à l'utilisation de JPA avec Kotlin Librairies Guide sur MapStruct @IterableMapping - https://www.baeldung.com/java-mapstruct-iterablemapping MapStruct est une bibliothèque Java pour générer automatiquement des mappers entre beans, l'annotation @IterableMapping permet de configurer finement le mapping de collections L'attribut dateFormat permet de formater automatiquement des dates lors du mapping de listes sans écrire de boucle manuelle L'attribut qualifiedByName permet de spécifier quelle méthode custom appliquer sur chaque élément de la collection à mapper Exemple d'usage : filtrer des données sensibles comme des mots de passe en mappant uniquement certains champs via une méthode dédiée L'attribut nullValueMappingStrategy permet de 
contrôler le comportement quand la collection source est null (retourner null ou une collection vide) L'annotation fonctionne pour tous types de collections Java (List, Set, etc.) et génère le code de boucle nécessaire Possibilité d'appliquer des formats numériques avec numberFormat pour convertir des nombres en chaînes avec un format spécifique MapStruct génère l'implémentation complète du mapper au moment de la compilation, éliminant le code boilerplate L'annotation peut être combinée avec @Named pour créer des méthodes de mapping réutilisables et nommées Le mapping des collections supporte les conversions de types complexes au-delà des simples conversions de types primitifs Accès aux fichiers Samba depuis Java avec JCIFS - https://www.baeldung.com/java-samba-jcifs JCIFS est une bibliothèque Java permettant d'accéder aux partages Samba/SMB sans monter de lecteur réseau, supportant le protocole SMB3 on pense aux galériens qui doivent se connecter aux systèmes dit legacy La configuration nécessite un contexte CIFS (CIFSContext) et des objets SmbFile pour représenter les ressources distantes L'authentification se fait via NtlmPasswordAuthenticator avec domaine, nom d'utilisateur et mot de passe La bibliothèque permet de lister les fichiers et dossiers avec listFiles() et vérifier leurs propriétés (taille, date de modification) Création de fichiers avec createNewFile() et de dossiers avec mkdir() ou mkdirs() pour créer toute une arborescence Suppression via delete() qui peut parcourir et supprimer récursivement des arborescences entières Copie de fichiers entre partages Samba avec copyTo(), mais impossibilité de copier depuis le système de fichiers local Pour copier depuis le système local, utilisation des streams SmbFileInputStream et SmbFileOutputStream Les opérations peuvent cibler différents serveurs Samba et différents partages (anonymes ou protégés par mot de passe) La bibliothèque s'intègre dans des blocs try-with-resources pour une gestion automatique des 
ressources Quarkus 3.31 - Support complet Java 25, nouveau packaging Maven et Panache Next - https://quarkus.io/blog/quarkus-3-31-released/ Support complet de Java 25 avec images runtime et native Nouveau packaging Maven de type quarkus avec lifecycle optimisé pour des builds plus rapides voici un article complet pour plus de detail https://quarkus.io/blog/building-large-applications/ Introduction de Panache Next, nouvelle génération avec meilleure expérience développeur et API unifiée ORM/Reactive Mise à jour vers Hibernate ORM 7.2, Reactive 3.2, Search 8.2 Support de Hibernate Spatial pour les données géospatiales Passage à Testcontainers 2 et JUnit 6 Annotations de sécurité supportées sur les repositories Jakarta Data Chiffrement des tokens OIDC pour les implémentations custom TokenStateManager Support OAuth 2.0 Pushed Authorization Requests dans l'extension OIDC Maven 3.9 maintenant requis minimum pour les projets Quarkus A2A Java SDK 1.0.0.Alpha1 - Alignement avec la spécification 1.0 du protocole Agent2Agent - https://quarkus.io/blog/a2a-java-sdk-1-0-0-alpha1/ Le SDK Java A2A implémente le protocole Agent2Agent qui permet la communication standardisée entre agents IA pour découvrir des capacités, déléguer des tâches et collaborer Passage à la version 1.0 de la spécification marque la transition d'expérimental à production-ready avec des changements cassants assumés Modernisation complète du module spec avec des Java records partout remplaçant le mix précédent de classes et records pour plus de cohérence Adoption de Protocol Buffers comme source de vérité avec des mappers MapStruct pour la conversion et Gson pour JSON-RPC Les builders utilisent maintenant des méthodes factory statiques au lieu de constructeurs publics suivant les best practices Java modernes Introduction de trois BOMs Maven pour simplifier la gestion des dépendances du SDK core, des extensions et des implémentations de référence Quarkus AgentCard évolue avec une liste supportedInterfaces 
remplaçant url et preferredTransport pour plus de flexibilité dans la déclaration des protocoles Support de la pagination ajouté pour ListTasks et les endpoints de configuration des notifications push avec des wrappers Result appropriés Interface A2AHttpClient pluggable permettant des implémentations HTTP personnalisées avec une implémentation Vert.x fournie Travail continu vers la conformité complète avec le TCK 1.0 en cours de développement parallèlement à la finalisation de la spécification Pourquoi Quarkus finit par "cliquer" : les 10 questions que se posent les développeurs Java - https://www.the-main-thread.com/p/quarkus-java-developers-top-questions-2025 un article qui revele et repond aux questions des gens qui ont utilisé Quarkus depuis 4-6 mois, les non noob questions Quarkus est un framework Java moderne optimisé pour le cloud qui propose des temps de démarrage ultra-rapides et une empreinte mémoire réduite Pourquoi Quarkus démarre si vite ? Le framework effectue le travail lourd au moment du build (scanning, indexation, génération de bytecode) plutôt qu'au runtime Quand utiliser le mode réactif plutôt qu'impératif ? Le réactif est pertinent pour les workloads avec haute concurrence et dominance I/O, l'impératif reste plus simple dans les autres cas Quelle est la différence entre Dev Services et Testcontainers ? Dev Services utilise Testcontainers en gérant automatiquement le cycle de vie, les ports et la configuration sans cérémonie Comment la DI de Quarkus diffère de Spring ? CDI est un standard basé sur la sécurité des types et la découverte au build-time, différent de l'approche framework de Spring Comment gérer la configuration entre environnements ? Quarkus permet de scaler depuis le développement local jusqu'à Kubernetes avec des profils, fichiers multiples et configuration externe Comment tester correctement les applications Quarkus ? 
- @QuarkusTest starts the application once for the entire test suite, which changes the mental model compared to Spring Boot
- What does Panache really do under the hood? Panache is JPA with strong opinions and sane defaults, wrapping Hibernate in an Active Record style
- Should you use native images, and when? Native images shine for serverless and edge workloads thanks to fast startup and a small memory footprint, but not every app benefits
- How does Quarkus integrate with Kubernetes? The framework automatically generates Kubernetes resources and handles health checks and metrics as if it were designed natively for that ecosystem
- How do you bring AI into a Quarkus application? LangChain4j lets you add embeddings, retrieval, guardrails, and observability directly in Java, without going through Python

Infrastructure

Alternatives to MinIO https://rmoff.net/2026/01/14/alternatives-to-minio-for-single-node-local-s3/
- MinIO dropped single-node support at the end of 2025 for commercial reasons, breaking many demos and CI/CD pipelines that used it to emulate S3 locally
- The author is looking for a simple replacement: a Docker image, S3 compatibility, an open source license, easy single-node deployment, and an active community
- S3Proxy is very lightweight and easy to configure - it seems to be the simplest option, but it rests on a single contributor
- RustFS is easy to use and includes a GUI, but it is a very young project still in alpha, with a recent major security flaw
- SeaweedFS has existed since 2012 with S3 support since 2018; it is relatively easy to configure and has a basic web interface
- Zenko CloudServer replaces MinIO easily, but the documentation and branding (cloudserver/zenko/scality) can be confusing
- Garage requires complex configuration, with a TOML file and a separate init container - not a simple drop-in replacement
- Apache Ozone requires at least four nodes to run, far too heavy for simple local use
- The author recommends SeaweedFS and S3Proxy as viable replacements, rates RustFS a maybe, and rules out Garage and Ozone for their complexity
- Garage has a community-driven history: it comes from the https://deuxfleurs.fr/ collective, which offers a distributed cloud without datacenters

Datacenters in space are definitely not a good idea https://taranis.ie/datacenters-in-space-are-a-terrible-horrible-no-good-idea/
- Expert opinion (ex-NASA/Google, PhD in space electronics): space datacenters are a "terrible" idea
- Fundamental incompatibility: electronics (especially AI/GPU hardware) are unsuited to the space environment
- Power: access is limited; ISS-style solar is insufficient at AI scale, and nuclear (RTG) is too weak
- Cooling: space is not "cold" - there is no convection, so you need gigantic radiators (e.g. 531 m² for 200 kW)
- Radiation: causes errors (SEU, SEL) and damage; GPUs are very vulnerable; shielding is heavy and ineffective, and "hardened" chips are very slow
- Communications: very limited bandwidth (1 Gbps radio vs 100 Gbps terrestrial); laser links depend on atmospheric conditions
- Conclusion: an extremely difficult, costly project with mediocre performance

Data and Artificial Intelligence

Guillaume built an MCP server for arXiv (the research paper publishing site) in Java with the Quarkus framework https://glaforge.dev/posts/2026/01/18/implementing-an-arxiv-mcp-server-with-quarkus-in-java/
- Implementation of an arXiv MCP (Model Context Protocol) server in Java with Quarkus
- Goal: access arXiv publications and illustrate the lesser-known features of the MCP protocol
- Implementation: the Quarkus framework (Java) and its extensive MCP support, with assistance from Antigravity (an agentic IDE) for development and for integrating the arXiv API
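The arXiv API returns its results as an Atom XML feed. As a self-contained illustration of pulling entry titles out of such a feed (the server described above uses Jackson's XML parser; this sketch sticks to the JDK's built-in DOM parser, and the sample feed content is hypothetical):

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class AtomTitles {

    // A trimmed-down example of the Atom shape arXiv returns (hypothetical entries).
    static final String SAMPLE = """
            <feed xmlns="http://www.w3.org/2005/Atom">
              <entry><title>Paper One</title></entry>
              <entry><title>Paper Two</title></entry>
            </feed>""";

    // Extract the <title> of each <entry> using the JDK's DOM parser.
    static List<String> titles(String atomXml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(atomXml.getBytes(StandardCharsets.UTF_8)));
        NodeList entries = doc.getElementsByTagName("entry");
        List<String> result = new ArrayList<>();
        for (int i = 0; i < entries.getLength(); i++) {
            Element entry = (Element) entries.item(i);
            result.add(entry.getElementsByTagName("title").item(0).getTextContent().trim());
        }
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(titles(SAMPLE)); // [Paper One, Paper Two]
    }
}
```

In a real server the XML would come from an HTTP call to the arXiv query endpoint rather than a string constant.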
- Interaction with the arXiv API: HTTP requests, Atom XML format for the results, Jackson XML parser
- MCP features exposed: tools (@Tool) for searching publications (search_papers); resources (@Resource, @ResourceTemplate) for the arXiv category taxonomy and article metadata (via a URI template); prompts (@Prompt) with examples for summarizing articles or building search queries
- Configuration: the server can run over STDIO (local) or Streamable HTTP (local or remote), with simple configuration in clients such as Gemini CLI
- Conclusion: Quarkus makes it easy to build feature-rich MCP servers, turning data and services "AI-ready", with help from AI tools like Antigravity

Anthropic will not put ads in Claude https://www.anthropic.com/news/claude-is-a-space-to-think
- A reaction to OpenAI's non-public plan to use ads to push people toward the paid tier; OpenAI needs cash and is probably the most-used free AI in the world
- Anthropic announces that Claude will remain ad-free, to preserve its role as a conversational assistant dedicated to work and deep thinking
- Conversations with Claude are often sensitive or personal, or involve complex software engineering tasks, where ads would be inappropriate
- Analysis of conversations shows that a significant share touches on delicate topics, similar to those raised with a trusted advisor
- An advertising model would create incentives that conflict with the core principle of being "genuinely helpful" written into Claude's Constitution
- Ads would introduce a potential conflict of interest, where recommendations could be driven by commercial motives rather than the user's interest
- Anthropic's business model rests on enterprise contracts and paid subscriptions, allowing it to reinvest in improving Claude
- Anthropic maintains free access to frontier models and offers reduced pricing for NGOs and education in more than 60 countries
- "Agentic" commerce will be supported, but only at the user's initiative, never the advertisers', to preserve trust
- Third-party integrations such as Figma, Asana, and Canva will continue to be developed, keeping the user in control
- Anthropic compares Claude to a notebook or a whiteboard: pure thinking spaces, free of advertising

Infinispan 16.1 is out https://infinispan.org/blog/2026/02/04/infinispan-16-1
- The release name alone deserves a mention
- Memory bounding per cache and per set of caches - not easy to do in Java
- A new OpenAPI API, and an AOT cache included in the container images

A local MCP server from a single Java file? It's possible with LangChain4j and JBang https://glaforge.dev/posts/2026/02/11/zero-boilerplate-java-stdio-mcp-servers-with-langchain4j-and-jbang/
- Rapid creation of Java MCP servers without boilerplate
- MCP (Model Context Protocol): a standard for connecting LLMs to tools and data
- The tutorial addresses the lack of simple options for Java developers, given the dominance of Python/TypeScript in the MCP ecosystem
- The solution combines LangChain4j, which now ships an MCP server module for the STDIO protocol, with JBang, which runs Java files as scripts and eliminates build files (pom.xml, Gradle)
- The implementation fits in a single .java file: JBang manages dependencies automatically (//DEPS), LangChain4j's @Tool annotation exposes Java methods to LLMs, and StdioMcpServerTransport handles JSON-RPC communication over standard input/output (STDIO)
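A STDIO transport like this speaks JSON-RPC over the process's standard streams. A framework-free sketch of that discipline (this is not LangChain4j's actual implementation - just plain Java with a hand-built response string - but it shows the stdout/stderr split such servers rely on):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class StdioSketch {

    // Hand-build a minimal JSON-RPC response (a real server would use a JSON library).
    static String response(int id, String text) {
        return "{\"jsonrpc\":\"2.0\",\"id\":" + id + ",\"result\":\"" + text + "\"}";
    }

    public static void main(String[] args) throws IOException {
        BufferedReader in = new BufferedReader(
                new InputStreamReader(System.in, StandardCharsets.UTF_8));
        String line;
        int id = 0;
        while ((line = in.readLine()) != null) {
            // Diagnostics go to stderr: stdout is reserved for protocol frames only.
            System.err.println("received request: " + line);
            System.out.println(response(++id, "pong"));
        }
    }
}
```

The client launches the server as a child process and reads its stdout line by line, which is why any stray print to stdout corrupts the protocol stream.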
- Crucial point: logs must be redirected to System.err to avoid corrupting System.out, which is reserved for MCP communication (JSON-RPC messages)
- Makes local integration easy with tools such as Gemini CLI, Claude Code, etc.

Reciprocal Rank Fusion: a useful, widely used algorithm for hybrid search, blending RAG with keyword search https://glaforge.dev/posts/2026/02/10/advanced-rag-understanding-reciprocal-rank-fusion-in-hybrid-search/
- RAG: LLM output quality depends on retrieval quality
- Hybrid search: combining vector search and keyword search (BM25) works best
- Challenge: fusing scores that live on different scales; solution: Reciprocal Rank Fusion (RRF)
- RRF is a robust algorithm that merges result lists based solely on document rank, ignoring raw scores
- RRF advantages: no score normalization, scalable, an excellent first reranking stage
- A common RAG architecture: RRF (broad selection) followed by a cross-encoder / reranking model (fine precision)
- RAG-Fusion: uses an LLM to generate several query variants, then RRF aggregates all the results to reinforce consensus and reduce hallucinations
- Implementation: LangChain4j uses RRF by default to aggregate results from multiple retrievers

The latest Gemini and Nano Banana features supported in LangChain4j https://glaforge.dev/posts/2026/02/06/latest-gemini-and-nano-banana-enhancements-in-langchain4j/
- New Nano Banana image models (Gemini 2.5/3.0) for image generation and editing (up to 4K)
- "Grounding" via Google Search (for images and text) and Google Maps (location, Gemini 2.5)
- URL context tool (Gemini 3.0) for reading web pages directly
- Multimodal agents (AiServices) capable of generating images
- Configurable thinking (Chain-of-Thought depth) for Gemini 3.0
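The Reciprocal Rank Fusion scheme described above scores each document as the sum of 1 / (k + rank) over every result list it appears in, with k conventionally set to 60. It fits in a few lines of plain Java (document ids here are made up for the demo):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RrfDemo {
    static final int K = 60; // conventional damping constant for RRF

    // Fuse any number of ranked lists (best result first) into one list ordered by RRF score.
    static List<String> fuse(List<List<String>> rankings) {
        Map<String, Double> scores = new HashMap<>();
        for (List<String> ranking : rankings) {
            for (int i = 0; i < ranking.size(); i++) {
                // Only the 1-based rank matters; the retrievers' raw scores are ignored.
                scores.merge(ranking.get(i), 1.0 / (K + i + 1), Double::sum);
            }
        }
        List<String> fused = new ArrayList<>(scores.keySet());
        fused.sort((a, b) -> Double.compare(scores.get(b), scores.get(a)));
        return fused;
    }

    public static void main(String[] args) {
        // One list from vector search, one from BM25 keyword search.
        List<String> vector = List.of("doc2", "doc1", "doc5");
        List<String> keyword = List.of("doc1", "doc3", "doc2");
        System.out.println(fuse(List.of(vector, keyword))); // [doc1, doc2, doc3, doc5]
    }
}
```

Note how doc1 and doc2, which appear in both lists, outrank the documents returned by only one retriever - the "consensus" effect that makes RRF attractive without any score normalization.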
- Enriched metadata: token usage and details of the "grounding" sources

How to configure Gemini CLI as a coding agent in IntelliJ via the ACP protocol https://glaforge.dev/posts/2026/02/01/how-to-integrate-gemini-cli-with-intellij-idea-using-acp/
- Goal: integrate Gemini CLI into IntelliJ IDEA via the Agent Client Protocol (ACP)
- Prerequisites: IntelliJ IDEA 2025.3+, Node.js (v20+), Gemini CLI
- Steps: install Gemini CLI (npm install -g @google/gemini-cli), locate the gemini executable, configure ~/.jetbrains/acp.json (executable path, --experimental-acp, use_idea_mcp: true), restart IDEA, and select "Gemini CLI" in the AI Assistant
- Usage: Gemini interacts with the code and runs commands (with project context)
- Important: make sure the --experimental-acp flag is present in the configuration

Tooling

PipeNet, an open source alternative to LocalTunnel, but more advanced https://pipenet.dev/
- pipenet: a modern, open-source alternative to localtunnel (client + server)
- Use cases: local development (sharing, webhooks), SDK integration, secure self-hosting
- Features: the client exposes local ports and subdomains; the server handles deployment and custom domains, optimized for single-port cloud hosting
- Advantages over localtunnel: cloud deployment on a single port, multi-domain support, TypeScript/ESM, active maintenance
- Protocols: HTTP/S, WebSocket, SSE, HTTP streaming
- Integration: CLI or JavaScript SDK

JSON-IO - a library like Jackson or GSON that supports JSON5 and TOON, and could be useful for LLM "structured output" when models don't produce perfect JSON https://github.com/jdereg/json-io
- json-io: a Java library for JSON/TOON serialization and deserialization
- Handles complex object graphs, cyclic references, and polymorphic types
- Full JSON5 support (read and write), including features Jackson/Gson don't handle
- TOON format: a token-oriented notation optimized for LLMs, cutting token usage by 40-50% compared to JSON
- Lightweight: no external dependencies (except java-util), small JAR (~330K)
- Compatible with JDK 1.8 through 24, as well as JPMS and OSGi environments
- Two conversion modes: to typed Java objects (toJava()) or to Maps (toMaps())
- Extensive configuration options via ReadOptionsBuilder and WriteOptionsBuilder
- Optimized for cloud-native deployments and microservice architectures

Use Mailpit and Testcontainers to test your email sending https://foojay.io/today/testing-emails-with-testcontainers-and-mailpit/ - the article shows it with Spring Boot and without. And here is the Quarkus extension https://quarkus.io/extensions/io.quarkiverse.mailpit/quarkus-mailpit/?tab=docs
- Testing email sending during development is awkward because you can't use real SMTP servers
- Mailpit is a test SMTP server that captures emails and offers a web UI to inspect them
- Testcontainers can start Mailpit in a Docker container for integration tests
- The article shows how to configure a Spring Boot application to send emails via JavaMail
- A dedicated Testcontainers module for Mailpit makes it easy to wire into tests
- The Mailpit container exposes an SMTP port (1025) and an HTTP API (8025) for checking received emails
- Tests can query Mailpit's HTTP API to validate the content of the emails that were sent
- This approach avoids mocks and genuinely exercises email delivery
- Mailpit can also serve in local development, to view emails without actually sending them
- The solution works with any Java framework that supports JavaMail

Architecture

How to scale a system from 0 to 10 million users https://blog.algomaster.io/p/scaling-a-system-from-0-to-10-million-users
- Philosophy: incremental scalability - solve bottlenecks as they appear, without over-engineering
- 0-100 users: a single server (app, DB, jobs)
- 100-1K: separate the app and the DB (managed services, connection pooling)
- 1K-10K: load balancer, multiple app servers (stateless via shared sessions)
- 10K-100K: caching, DB read replicas, CDN (reduce DB load)
- 100K-500K: auto-scaling, stateless applications (JWT authentication)
- 500K-10M: DB sharding, microservices, message queues (asynchronous processing)
- 10M+: multi-region deployment, CQRS, polyglot persistence, custom infrastructure
- Key principles: simplicity, measurement, statelessness is essential, cache and go asynchronous, shard carefully, accept trade-offs (CAP), mind the cost of complexity

Architecture Patterns 2026 - From Hype to Ground Reality (Part 1/2) - https://blog.ippon.fr/2026/01/30/patterns-darchitecture-2026-part-1/
- The article presents four software architecture patterns addressing scalability, resilience, and business agility in modern systems, laying out the rationale for each and its pitfalls - a good refresher
- Event-Driven Architecture enables asynchronous communication between systems through published and consumed events, avoiding direct coupling
- EDA's benefits include independent scalability of components, resilience to failures, and easy addition of new use cases
- The API-First pattern, paired with an API Gateway, centralizes security, routing, and observability of APIs with a unified catalog
- Backend for Frontend creates channel-specific APIs (mobile, web, partners) to optimize the user experience
- CQRS separates the read and write models with distinct, optimized databases, while Event Sourcing stores every event rather than the current state
- The Saga pattern handles distributed transactions via centralized orchestration or event-based choreography to coordinate several microservices
- Common pitfalls include an explosion of fine-grained events, the complexity of distributed debugging, and the mishandling of eventual consistency
- The flagship technologies are Kafka for event streaming, Kong for the API Gateway, EventStoreDB for Event Sourcing, and Temporal for Sagas
- These patterns require technical maturity and are not suited to simple CRUD applications or junior teams

Architecture patterns 2026: from hype to ground reality, part 2 - https://blog.ippon.fr/2026/02/04/patterns-darchitecture-2026-part-2/
- The second part of a practical guide to proven software and system architecture patterns for modernizing and structuring applications in 2026
- Strangler Fig migrates a legacy system progressively by wrapping it piece by piece rather than rewriting everything at once (70% of big-bang rewrites fail)
- Anti-Corruption Layer protects your new business domain from external and legacy models by introducing a translation layer between the systems
- Service Mesh automatically manages inter-service communication in microservice architectures (mTLS security, observability, resilience)
- Hexagonal Architecture separates the business core from technical details through ports and adapters, improving testability and evolvability
- Each pattern is illustrated with a concrete client case, measurable results, and a list of pitfalls to avoid during implementation
- The 2026 technologies mentioned include Istio and Linkerd for service mesh, LaunchDarkly for feature flags, and NGINX and Kong for API gateways
- A final comparison table helps choose the right pattern based on the complexity, scope, and specific use case of the project
- The article insists on a pragmatic approach: use a pattern not because it is modern but because it solves a real problem
- For simple CRUD-style systems or those with few services, these patterns can introduce needless complexity that is best avoided

Methodologies

The recurring dream of replacing, or even eliminating, developers https://www.caimito.net/en/blog/2025/12/07/the-recurring-dream-of-replacing-developers.html
- Since 1969, every decade has seen an attempt to reduce the need for developers (COBOL, UML, visual builders... now AI)
- The motivation: executives' frustration with development timelines and costs
- Software complexity is intrinsic and intellectual, not a matter of tooling
- Each technology wave delivers value but does not eliminate the need for human expertise
- AI assists developers and improves efficiency, but it replaces neither judgment nor the management of complexity
- Demand for software exceeds supply because the main constraint is the thinking required to manage that complexity
- For executives, the right question is: do the tools make our developers more effective on complex problems, and do they reduce repetitive work?
- The "dream" of replacing developers, while unachievable, is an engine of innovation that produces valuable tools

Digging into topics in the age of generative AI - and what about sharing and curating that research? https://glaforge.dev/posts/2026/02/04/researching-topics-in-the-age-of-ai-rock-solid-webhooks-case-study/
- The author's initial research on webhooks, back in 2019, was a long, manual process
- AI (Deep Research, Gemini, NotebookLM) now makes deep research, topic exploration, and sharing of results much easier
- The AI identified and validated key practices for resilient webhook deployments - largely the same ones the author had found before
- AI-generated artifacts: a detailed report, a concise summary, a sketchnote illustration, and even a slide deck
- Guillaume wonders about publicly sharing these AI-generated research reports, while wanting to avoid "AI slop"
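The research artifacts themselves aren't reproduced in these notes, but one practice that shows up in virtually every treatment of resilient webhooks is signing payloads so receivers can authenticate deliveries. A self-contained Java sketch, assuming a hypothetical X-Webhook-Signature header carrying a hex-encoded HMAC-SHA256 (the header name and secret are illustrative, not from the article):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

public class WebhookSignature {

    // Sender side: hex-encoded HMAC-SHA256 of the payload under a shared secret.
    static String sign(String secret, String payload) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        return HexFormat.of().formatHex(mac.doFinal(payload.getBytes(StandardCharsets.UTF_8)));
    }

    // Receiver side: recompute and compare in constant time to resist timing attacks.
    static boolean verify(String secret, String payload, String signatureHeader) throws Exception {
        byte[] expected = sign(secret, payload).getBytes(StandardCharsets.UTF_8);
        byte[] received = signatureHeader.getBytes(StandardCharsets.UTF_8);
        return MessageDigest.isEqual(expected, received);
    }

    public static void main(String[] args) throws Exception {
        String payload = "{\"event\":\"order.created\"}";
        String header = sign("shared-secret", payload); // would travel as X-Webhook-Signature
        System.out.println(verify("shared-secret", payload, header)); // true
        System.out.println(verify("wrong-secret", payload, header));  // false
    }
}
```

Signing pairs naturally with the other staples of resilient delivery (retries with backoff, idempotency keys), since a receiver that can authenticate a payload can safely deduplicate replays.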
Law, society, and organization

Software threatened by vibe coding https://www.techbuzz.ai/articles/we-built-a-monday-com-clone-in-under-an-hour-with-ai
- Two CNBC journalists with no coding experience built a functional Monday.com clone in under 60 minutes for 5 to 15 dollars
- The experiment validates the investor fears that drove a 30% drop in SaaS company stocks
- The AI not only reproduced the core features but also researched Monday.com on its own to identify and recreate its key functionality
- This technique, called "vibe coding", lets non-developers build applications through plain-English instructions
- The most vulnerable companies are those offering tools "that sit on top of work", such as Atlassian, Adobe, HubSpot, Zendesk, and Smartsheet
- Cybersecurity companies like CrowdStrike and Palo Alto are considered better protected, thanks to network effects and regulatory barriers
- Systems of record like Salesforce remain harder to replicate because of their depth of integration and enterprise data
- At 5 to 15 dollars per build, companies can prototype several custom solutions for less than the cost of a single Monday.com license
- The experiment raises questions about the durability of the 5-billion-dollar project management tool market in the face of generative AI

Conferences

In addition to Aurélie Vache's conference agenda, there is also the site https://javaconferences.org/ (built by Brian Vermeer) listing all the upcoming Java conferences!
The list of conferences comes from Developers Conferences Agenda/List by Aurélie Vache and contributors:
February 12-13, 2026: Touraine Tech #26 - Tours (France)
February 12-13, 2026: World Artificial Intelligence Cannes Festival - Cannes (France)
February 19, 2026: ObservabilityCON on the Road - Paris (France)
March 6, 2026: WordCamp Nice 2026 - Nice (France)
March 18, 2026: Jupyter Workshops: AI in Jupyter: Building Extensible AI Capabilities for Interactive Computing - Saint-Maur-des-Fossés (France)
March 18-19, 2026: Agile Niort 2026 - Niort (France)
March 20, 2026: Atlantique Day 2026 - Nantes (France)
March 26, 2026: Data Days Lille - Lille (France)
March 26-27, 2026: SymfonyLive Paris 2026 - Paris (France)
March 26-27, 2026: REACT PARIS - Paris (France)
March 27-29, 2026: Shift - Nantes (France)
March 31, 2026: ParisTestConf - Paris (France)
March 31-April 1, 2026: FlowCon France 2026 - Paris (France)
April 1, 2026: AWS Summit Paris - Paris (France)
April 2, 2026: Pragma Cannes 2026 - Cannes (France)
April 2-3, 2026: Xen Spring Meetup 2026 - Grenoble (France)
April 7, 2026: PyTorch Conference Europe - Paris (France)
April 9-10, 2026: Android Makers by droidcon 2026 - Paris (France)
April 9-11, 2026: Drupalcamp Grenoble 2026 - Grenoble (France)
April 16-17, 2026: MiXiT 2026 - Lyon (France)
April 17-18, 2026: Faiseuses du Web 5 - Dinan (France)
April 22-24, 2026: Devoxx France 2026 - Paris (France)
April 23-25, 2026: Devoxx Greece - Athens (Greece)
May 6-7, 2026: Devoxx UK 2026 - London (UK)
May 12, 2026: Lead Innovation Day - Leadership Edition - Paris (France)
May 19, 2026: La Product Conf Paris 2026 - Paris (France)
May 21-22, 2026: Flupa UX Days 2026 - Paris (France)
May 22, 2026: AFUP Day 2026 Lille - Lille (France)
May 22, 2026: AFUP Day 2026 Paris - Paris (France)
May 22, 2026: AFUP Day 2026 Bordeaux - Bordeaux (France)
May 22, 2026: AFUP Day 2026 Lyon - Lyon (France)
May 28, 2026: DevCon 27: I.A. & Vibe Coding - Paris (France)
May 28, 2026: Cloud Toulouse 2026 - Toulouse (France)
May 29, 2026: NG Baguette Conf 2026 - Paris (France)
May 29, 2026: Agile Tour Strasbourg 2026 - Strasbourg (France)
June 2-3, 2026: Agile Tour Rennes 2026 - Rennes (France)
June 2-3, 2026: OW2Con - Paris-Châtillon (France)
June 3, 2026: IA–NA - La Rochelle (France)
June 5, 2026: TechReady - Nantes (France)
June 5, 2026: Fork it! - Rouen - Rouen (France)
June 6, 2026: Polycloud - Montpellier (France)
June 9, 2026: JFTL - Montrouge (France)
June 9, 2026: C: - Caen (France)
June 11-12, 2026: DevQuest Niort - Niort (France)
June 11-12, 2026: DevLille 2026 - Lille (France)
June 12, 2026: Tech F'Est 2026 - Nancy (France)
June 16, 2026: Mobilis In Mobile 2026 - Nantes (France)
June 17-19, 2026: Devoxx Poland - Krakow (Poland)
June 17-20, 2026: VivaTech - Paris (France)
June 18, 2026: Tech'Work - Lyon (France)
June 22-26, 2026: Galaxy Community Conference - Clermont-Ferrand (France)
June 24-25, 2026: Agi'Lille 2026 - Lille (France)
June 24-26, 2026: BreizhCamp 2026 - Rennes (France)
July 2, 2026: Azur Tech Summer 2026 - Valbonne (France)
July 2-3, 2026: Sunny Tech - Montpellier (France)
July 3, 2026: Agile Lyon 2026 - Lyon (France)
July 6-8, 2026: Riviera Dev - Sophia Antipolis (France)
August 2, 2026: 4th Tech Summit on Artificial Intelligence & Robotics - Paris (France)
August 20-22, 2026: 4th Tech Summit on AI & Robotics - Paris (France) & Online
September 4, 2026: JUG Summer Camp 2026 - La Rochelle (France)
September 17-18, 2026: API Platform Conference 2026 - Lille (France)
September 24, 2026: PlatformCon Live Day Paris 2026 - Paris (France)
October 1, 2026: WAX 2026 - Marseille (France)
October 1-2, 2026: Volcamp - Clermont-Ferrand (France)
October 5-9, 2026: Devoxx Belgium - Antwerp (Belgium)

Contact us

To react to this episode, come discuss on the Google group https://groups.google.com/group/lescastcodeurs
Contact us on X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com
Submit a crowdcast or a crowdquestion
Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs
All episodes and all the info at https://lescastcodeurs.com/
AI coding at the top level | Markus Hav | 371 Neuvottelija.
Markus Hav describes the cutting edge and intensity of AI-assisted coding. AI agents are reshaping thinking, work, and learning at an accelerating pace. Hav emphasizes context, memory, and intent, as well as "codumentation" in coding. 2026 is the turning point for AI coding, especially as Anthropic and Google roll out top-tier tools.
00:00 Anthropic's Claude Code sweeps the AI table
00:19 Markdown powers AI, but where is a good editor?
00:30 Claude Code Max token wallets: Markus at €400/month vs Sami at €100/month
00:57 Lex Friedman's boring AI episode?
01:45 We're not crazy - the world is plunging into AI
02:06 AI's exponential development in 2026 vs a calm 2025
02:34 Claude Code's tokens do a lot of real work
03:24 Predicting the rise of Google and Gemini, and why it was already logical a year ago
04:09 Google's data, chips, and models combine into an overwhelming whole
05:04 Installing Open Claw on Sami's Mac Mini!
05:42 The role of human risk, and why the first mover doesn't always win
07:01 Security turned on its head
07:21 The Samantha bot helps Sami and is rewarded with HoxHunt school
09:08 Are agents loyal, and do they respond to incentives?
10:44 An agent is no longer the same from one conversation to the next
11:26 The context window as memory, and clearing it
11:57 The era of a million tokens and its limits
13:51 Second Brain and the dream of unlimited context
14:19 Memory retrieval makes an agent more effective than a human
15:00 The Ralph Wiggum model: a dumb but persistent agent
15:45 A simple task list instead of orchestration
17:31 Ready-made skill .md files and the maturing of the tooling
18:12 You can't try everything, and that's OK
20:19 New employees get the most out of AI tools
21:34 The hammer problem and getting stuck on old technologies
22:12 Even stone-age systems like COBOL open up to AI
23:30 Gemini vs Claude Code vs Codex vs xAI
24:39 One-shot coding vs learning through mistakes
26:12 Self-reflection as a new model capability
27:15 An agent recognizes itself and improves itself
28:07 Resilience to errors in coding
29:20 The interface determines the learning experience
30:12 Models' different error profiles
31:24 Google Antigravity and practical workflows
32:17 Frontend, backend, and database realism
33:15 The demo effect and frustration with new tools
34:26 Firestore vs Supabase
35:26 Lovable as training wheels and an enabler of fast starts
36:25 The difference between documentation and a spec
37:48 A good spec is work half done
39:19 Scoping intents to keep agents under control
40:05 The idea of an intent database instead of GitHub
41:24 Understand why you're building, not just how
42:13 Markdown and Obsidian as the foundation of a Second Brain
43:11 Microsoft's fumbling and Anthropic's MS Excel/PowerPoint CoWork
44:16 Anthropic in every cloud as the strategic winner
45:31 OpenAI won't die, but its direction is blurring
46:00 Elon Musk, SpaceX, and xAI inference in space
50:35 Neuralink and a direct connection to AI
52:40 Unpleasant people with enormous influence
54:41 Chinese models and the agentic assault
56:39 Orchestrating hundreds of millions of tokens
57:36 Cheap models enable massive agent swarms
58:31 2026 as a turning point in human history
59:12 A practical first step into agentic work
1:00:04 Agentic browsers and the need for caution
1:01:23 Lovable and Firebase for a low-threshold start
1:02:13 The realization that anyone can build
1:03:35 Doing things together as a catalyst for learning
1:04:07 The death of user interfaces - UX is dead?
In the #neuvottelija inner circle, we discuss whether user interfaces are just unnecessary code for humans. Watch the inner-circle episodes and support Sami. #neuvottelija Sami Miettinen
I had a blast chatting with Anthony Anter, DevOps Evangelist at BMC Software, on The Ravit Show - and this one goes deep into a topic many enterprises are quietly struggling with: mainframe modernization. Not tools. Not hype. Real ground reality.
We started with a simple but uncomfortable truth Tony writes about: before you even think about converting code, explain what you already have. That single line sets the tone for the entire conversation.
- We talked about why so many COBOL-to-Java projects fail before they even begin.
- Why teams rush into conversion without understanding the decades of business logic buried in the code.
- And why mainframe systems often look like a long game of telephone, where intent is lost but code survives.
A big part of the discussion focused on generative AI - not as a magic converter, but as a way to explain, map, and document existing systems before touching a single line of Java. When teams finally see dependencies and flows clearly, the surprises are often eye-opening.
We also broke down a critical distinction that is often ignored: code explanation is not the same as code translation. Missing this is where most modernization programs go wrong.
Tony also shared why refactoring before rewriting matters, what practical cleanup really looks like, and how GenAI can help create Java code that is actually maintainable, not just converted.
One part I personally found valuable was the balance between automation and human expertise: where AI helps, where humans are still irreplaceable, and what governance is needed so AI output can be trusted.
We wrapped with Tony's checklist for smarter modernization and one clear takeaway for anyone working on or around mainframes today.
If you are a CIO, architect, or mainframe professional thinking about modernization, this conversation will save you from expensive mistakes.
#data #ai #mainframes #bmc #theravitshow
Legacy systems work. So why do companies waste millions rewriting them? In this episode of Arguing Agile, Product Manager Nisha Patel joins Product Manager Brian Orlando and Enterprise Business Agility Consultant Om Patel for a debate on the dangerous obsession with rewriting legacy systems - from COBOL to green screens - that still power ATMs, government systems, and Fortune 500 billing engines. Watch or listen as we discuss the myth that "modern" equals "better" and reveal how most rewrites fail because they ignore customer value, edge cases, and real ROI, along with other topics, such as:
- How Chesterton's Fence applies to code (Brian still doesn't know)
- How developers kill software with Resume-Driven Development (RDD)
- How finance kills software with spreadsheet-driven development (SDD)
- Why chasing "parity" kills innovation
- Risk mitigation, or framing technical debt in business terms
If you've ever worked on or tried to replace legacy systems, this episode will either give you nightmares or help you rethink how you approach them - while also helping you stop burning budget on vanity projects.
#LegacyCode #ProductManagement #AgileCoaching
REFERENCES
AA148 - An Introduction to Software Development Finances
LINKS
YouTube: https://www.youtube.com/@arguingagile
Spotify: https://open.spotify.com/show/362QvYORmtZRKAeTAE57v3
Apple: https://podcasts.apple.com/us/podcast/agile-podcast/id1568557596
INTRO MUSIC
"Toronto Is My Beat" by Whitewolf (Source: https://ccmixter.org/files/whitewolf225/60181)
CC BY 4.0 DEED (https://creativecommons.org/licenses/by/4.0/deed.en)
International payments, credit card transactions, online banking: none of it is conceivable, today or tomorrow, without the mainframe. But how do you get the next generation excited about COBOL and company? Andrea, from the Core Banking Solutions business unit, and Andreas, a member of the AI coordination office, explain how the solution-finders' team is positioning itself for the mainframe transformation and how AI helps modernize applications step by step.
Xmas Special: Software Industry Transformation - Why Software Development Must Mature Welcome to the 2025 Xmas special - a five-episode deep dive into how software as an industry needs to transform. In this opening episode, we explore the fundamental disconnect between how we manage software and what software actually is. From small businesses to global infrastructure, software has become the backbone of modern society, yet we continue to manage it with tools designed for building ships in the 1800s. This episode sets the stage for understanding why software development must evolve into a mature discipline. Software Runs Everything Now "Without any single piece, I couldn't operate - and I'm tiny. Scale this reality up: software isn't just in tech companies anymore." Even the smallest businesses today run entirely on software infrastructure. A small consulting and media business depends on WordPress for websites, Kajabi for courses, Stripe for payments, Quaderno for accounting, plus email, calendar, CRM systems, and AI assistants for content creation. The challenge? We're managing this critical infrastructure with tools designed for building physical structures with fixed requirements - an approach that fundamentally misunderstands what software is and how it evolves. This disconnect has to change. The Oscillation Between Technology and Process "AI amplifies our ability to create software, but doesn't solve the fundamental process problems of maintaining, evolving, and enhancing that software over its lifetime." Software improvement follows a predictable pattern: technology leaps forward, then processes must adapt to manage the new complexity. In the 1960s-70s, we moved from machine code to COBOL and Fortran, which was revolutionary but led to the "software crisis" when we couldn't manage the resulting complexity. 
This eventually drove us toward structured programming and object-oriented programming as process responses, which, in turn, resulted in technology changes! Today, AI tools like GitHub Copilot, ChatGPT, and Claude make writing code absurdly easy - but writing code was never the hard part. Robert Glass documents in "Facts and Fallacies of Software Engineering" that maintenance typically consumes between 40 and 80 percent of software costs, making "maintenance" probably the most important life cycle phase. We're overdue for a process evolution that addresses the real challenge: maintaining, evolving, and enhancing software over its lifetime. Software Creates An Expanding Possibility Space "If they'd treated it like a construction project ('ship v1.0 and we're done'), it would never have reached that value." Traditional project management assumes fixed scope, known solutions, and a definable "done" state. The Sydney Opera House exemplifies this: designed in 1957, completed in 1973, ten times over budget, with the architect resigning - but once built, it stands with "minimal" (compared to initial cost) maintenance. Software operates fundamentally differently. Slack started as an internal tool for a failed gaming company called Glitch in 2013. When the game failed, they noticed their communication tool was special and pivoted entirely. After launching in 2014, Slack continuously evolved based on user feedback: adding threads in 2017, calls in 2016, workflow builder in 2019, and Canvas in 2023. Each addition changed what was possible in organizational communication. In 2021, Salesforce acquired Slack for $27.7 billion precisely because it kept evolving with user needs. The key difference is that software creates possibility space that didn't exist before, and that space keeps expanding through continuous evolution. Software Is Societal Infrastructure "This wasn't a cyber attack - it was a software update gone wrong." 
Software has become essential societal infrastructure, not optional and not just for tech companies. In July 2024, a faulty software update from cybersecurity firm CrowdStrike crashed 8.5 million Windows computers globally. Airlines grounded flights, hospitals canceled surgeries, banks couldn't process transactions, and 911 services went down. The global cost exceeded $10 billion. This wasn't an attack - it was a routine update that failed catastrophically. AWS outages in 2021 and 2023 took down major portions of the internet, stopping Netflix, Disney+, Robinhood, and Ring doorbells from working. CloudFlare outages similarly cascaded across daily-use services. When software fails, society fails. We cannot keep managing something this critical with tools designed for building physical things with fixed requirements. Project management was brilliant for its era, but that era isn't this one. The Path Ahead: Four Critical Challenges "The software industry doesn't just need better tools - it needs to become a mature discipline." This five-episode series will address how we mature as an industry by facing four critical challenges: Episode 2: The Project Management Trap - Why we think in terms of projects, dates, scope, and "done" when software is never done, and how this mindset prevents us from treating software as a living capability Episode 3: What's Already Working - The better approaches we've already discovered, including iterative delivery, feedback loops, and continuous improvement, with real examples of companies doing this well Episode 4: The Organizational Immune System - Why better approaches aren't universal, how organizations unconsciously resist what would help them, and the hidden forces preventing adoption Episode 5: Software-Native Organizations - What it means to truly be a software-native organization, transforming how the business thinks, not just using agile on teams Software is too important to our society to keep getting it wrong. 
We have much of the knowledge we need - the challenge is adoption and evolution. Over the next four episodes, we'll build this case together, starting with understanding why we keep falling into the same trap. References For Further Reading Glass, Robert L. "Facts and Fallacies of Software Engineering" - Fact 41, page 115 CrowdStrike incident: https://en.wikipedia.org/wiki/2024_CrowdStrike_incident AWS outages: 2021 (Dec 7), 2023 (June 13), and November 2025 incidents CloudFlare outages: 2022 (June 21), and November 2025 major incident Slack history and Salesforce acquisition: https://en.wikipedia.org/wiki/Slack_(software) Sydney Opera House: https://en.wikipedia.org/wiki/Sydney_Opera_House About Vasco Duarte Vasco Duarte is a thought leader in the Agile space, co-founder of Agile Finland, and host of the Scrum Master Toolbox Podcast, which has over 10 million downloads. Author of NoEstimates: How To Measure Project Progress Without Estimating, Vasco is a sought-after speaker and consultant helping organizations embrace Agile practices to achieve business success. You can link with Vasco Duarte on LinkedIn.
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss small language models (SLMs) and how they differ from large language models (LLMs). You will understand the crucial differences between massive large language models and efficient small language models. You’ll discover how combining SLMs with your internal data delivers superior, faster results than using the biggest AI tools. You will learn strategic methods to deploy these faster, cheaper models for mission-critical tasks in your organization. You will identify key strategies to protect sensitive business information using private models that never touch the internet. Watch now to future-proof your AI strategy and start leveraging the power of small, fast models today! Watch the video here: https://youtu.be/XOccpWcI7xk Can’t see anything? Watch it on YouTube here. Listen to the audio here: https://traffic.libsyn.com/inearinsights/tipodcast-what-are-small-language-models.mp3 Download the MP3 audio here. Need help with your company’s data and analytics? Let us know! Join our free Slack group for marketers interested in analytics! Machine-Generated Transcript What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode. Christopher S. Penn: In this week’s *In-Ear Insights*, let’s talk about small language models. Katie, you recently came across this and you’re like, okay, we’ve heard this before. What did you hear? Katie Robbert: As I mentioned on a previous episode, I was sitting on a panel recently and there was a lot of conversation around what generative AI is. The question came up of what do we see for AI in the next 12 months? Which I kind of hate because it’s so wide open. But one of the panelists responded that SLMs were going to be the thing. 
I sat there and I was listening to them explain it and they’re small language models, things that are more privatized, things that you keep locally. I was like, oh, local models, got it. Yeah, that’s already a thing. But I can understand where moving into the next year, there’s probably going to be more of a focus on it. I think that the term local model and small language model in this context was likely being used interchangeably. I don’t believe that they’re the same thing. I thought local model, something you keep literally locally in your environment, doesn’t touch the internet. We’ve done episodes about that which you can catch on our livestream if you go to TrustInsights.ai YouTube, go to the So What playlist. We have a whole episode about building your own local model and the benefits of it. But the term small language model was one that I’ve heard in passing, but I’ve never really dug deep into it. Chris, in as much as you can, in layman’s terms, what is a small language model as opposed to a large language model, other than— Christopher S. Penn: The best description? There is no generally agreed-upon definition other than: it’s small. All language models are measured in terms of the number of tokens they were trained on and the number of parameters they have. Parameters are basically the number of combinations of tokens that they’ve seen. So a big model like Google Gemini, GPT 5.1, whatever we’re up to this week, Claude Opus 4.5—these models are anywhere between 700 billion and 2 to 3 trillion parameters. They are massive. You need hundreds of thousands of dollars of hardware just to even run it, if you could. And there are models. You nailed it exactly. Local models are models that you run on your hardware. There are local large language models—DeepSeek, for example. DeepSeek is a Chinese model: 671 billion parameters. You need to spend a minimum of $50,000 of hardware just to turn it on and run it. Kimi K2 Instruct is 700 billion parameters. 
I think Alibaba Qwen has a 480-billion-parameter one. These are, again, you’re spending tens of thousands of dollars. Models are made in all these different sizes. So as you create models, you can create what are called distillates. You can take a big model like Qwen 3 480B and you can boil it down. You can remove stuff from it till you get to an 80 billion parameter version, a 30 billion parameter version, a 3 billion parameter version, and all the way down to 100 million parameters, even 10 million parameters. Once you get below a certain point—and it varies based on who you talk to—it’s no longer a large language model, it’s a small language model. Because the smaller the model gets, the dumber it gets, the less information it has to work with. It’s like going from the Oxford English Dictionary to a pamphlet. The pamphlet has just the most common words. The Oxford English Dictionary has all the words. Small language models, generally these days people mean roughly 8 billion parameters and under. There are things that you can run, for example, on a phone. Katie Robbert: If I’m following correctly, I understand the tokens, the size, pamphlet versus novel, that kind of a thing. Is a use case for a small language model something that perhaps you build yourself and train solely on your content versus something externally? What are some use cases? What are the benefits other than cost and storage? What are some of the benefits of a small language model versus a large language model? Christopher S. Penn: Cost and speed are the two big ones. They’re very fast because they’re so small. There has not been a lot of success in custom training and tuning models for a specific use case. A lot of people—including us two years ago—thought that was a good idea because at the time the big models weren’t much better at creating stuff in Katie Robbert’s writing style. So back then, training a custom version of say Llama 2 at the time to write like Katie was a good idea. 
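The hardware figures in this part of the conversation follow from simple arithmetic: the weights of a model need roughly parameters × bits-per-weight ÷ 8 bytes of memory. A minimal sketch of that calculation (the function name is illustrative; the model sizes are the ones mentioned above, and the estimate ignores activations, KV cache, and runtime overhead, so real requirements are higher):

```python
def model_memory_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate memory (decimal GB) needed just to hold a model's weights."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# A 671-billion-parameter model at 16-bit precision needs on the order of
# 1,300+ GB for weights alone -- multi-GPU server territory.
print(model_memory_gb(671, 16))

# An 8-billion-parameter small model quantized to 4 bits needs around 4 GB,
# which is why it can run on a laptop or even a phone.
print(model_memory_gb(8, 4))
```

The same arithmetic explains the distillation ladder: each smaller distillate (80B, 30B, 3B, …) cuts the memory and compute bill roughly proportionally.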
Today’s models, particularly when you look at some of the open weights models like Alibaba Qwen 3 Next, are so smart even at small sizes that it’s not worth doing that because instead you could just prompt it like you prompt ChatGPT and say, “Here’s Katie’s writing style, just write like Katie,” and it’s smart enough to know that. One of the peculiarities of AI is that more review is better. If you have a big model like GPT 5.1 and you say, “Write this blog post in the style of Katie Robbert,” it will do a reasonably good job on that. But if you have a small model like Qwen 3 Next, which is only 80 billion parameters, and you have it say, “Write a blog post in the style of Katie Robbert,” and then re-invoke the model, say, “Review the blog post to make sure it’s in the style of Katie Robbert,” and then have it review it again and say, “Now make sure it’s the style of Katie Robbert.” It will do that faster with fewer resources and deliver a much better result. Because the more passes, the more reviews it has, the more time it has to work on something, the better it tends to perform. The reason why you heard people talking about small language models is not because they’re better, but because they’re so fast and so lightweight, they work well as agents. Once you tie them into agents and give them tool handling—the ability to do a web search—that small model, in the same time it takes a GPT 5.1 and a thousand watts of electricity, can run five or six times and deliver a better result than the big one in that same amount of time. And you can run it on your laptop. That’s why people are saying small language models are important, because you can say, “Hey, small model, do this. Check your work, check your work again, make sure it’s good.” Katie Robbert: I want to debunk it here now that in terms of buzzwords, people are going to be talking about small language models—SLMs. 
It’s the new rage, but really it’s just a more efficient version, if I’m following correctly, when it’s coupled in an agentic workflow versus having it as a standalone substitute for something like a ChatGPT or a Gemini. Christopher S. Penn: And it depends on the model too. There’s 2.1 million of these things. For example, IBM WatsonX, our friends over at IBM, they have their own model called Granite. Granite is specifically designed for enterprise environments. It is a small model. I think it’s like 8 billion to 10 billion parameters. But it is optimized for tool handling. It says, “I don’t know much, but I know that I have tools.” And then it looks at its tool belt and says, “Oh, I have web search, I have catalog search, I have this search, I have all these tools.” Even though I don’t know squat about squat, I can talk in English and I can look things up. In the WatsonX ecosystem, Granite performs really well, performs way better than a model even a hundred times the size, because it knows what tools to invoke. Think of it like an intern or a sous chef in a kitchen who knows what appliances to use and in which order. The appliances are doing all the work and the sous chef is, “I’m just going to follow the recipe and I know what appliances to use. I don’t have to know how to cook. I just got to follow the recipes.” As opposed to a master chef who might not need all those appliances, but has 40 years of experience and also costs you $250,000 in fees to work with. That’s kind of the difference between a small and a large language model is the level of capability. But the way things are going, particularly outside the USA and outside the west, is small models paired with tool handling in agentic environments where they can dramatically outperform big models. Katie Robbert: Let’s talk a little bit about the seven major use cases of generative AI. You’ve covered them extensively, so I probably won’t remember all seven, but let me see how many I got. 
I got to use my fingers for this. We have summarization, generation, extraction, classification, synthesis. I’ve got two more. I’m lost. I don’t know—what are the last two? Christopher S. Penn: Rewriting and question answering. Katie Robbert: Got it. Those are always the ones I forget. A lot of people—and we talked about this. You and I talk about this a lot. You talk about this on stage and I talked about this on the panel. Generation is the worst possible use for generative AI, but it’s the most popular use case. When we think about those seven major use cases for generative AI, can we sort of break down small language models versus large language models and what you should and should not use a small language model for in terms of those seven use cases? Christopher S. Penn: You should not use a small language model for generation without extra data. The small language model is good at all seven use cases, if you provide it the data it needs to use. And the same is true for large language models. If you’re experiencing hallucinations with Gemini or ChatGPT, whatever, it’s probably because you haven’t provided enough of your own data. And if we refer back to a previous episode on copyright, the more of your own data you provide, the less you have to worry about copyrights. They’re all good at it when you provide the useful data with it. I’ll give you a real simple example. Recently I was working on a piece of software for a client that would take one of their ideal customer profiles and a webpage of the client’s and score the page on 17 different criteria of whether the ideal customer profile would like that page or not. The back end language model for this system is a small model. It’s Meta Llama 4 Scout, which is a very small, very fast, not a particularly bright model. However, because we’re giving it the webpage text, we’re giving it a rubric, and we’re giving it an ICP, it knows enough about language to go, “Okay, compare.” This is good, this is not good. 
And give it a score. Even though it’s a small model that’s very fast and very cheap, it can do the job of a large language model because we’re providing all the data with it. The dividing line to me in the use cases is how much data are you asking the model to bring? If you want to do generation and you have no data, you need a large language model, you need something that has seen the world. You need a Gemini or a ChatGPT or Claude that’s really expensive to come up with something that doesn’t exist. But if you’ve got the data, you don’t need a big model. And in fact, it’s better environmentally speaking if you don’t use a big heavy model. If you have a blog post outline or transcript and you have Katie Robbert’s writing style and you have the Trust Insights brand style guide, you could use a Gemini Flash or even a Gemini Flash Lite, the cheapest of their models, or Claude Haiku, which is the cheapest of their models, to dash off a blog post. That’ll be perfect. It will have the writing style, will have the content, will have the voice because you provided all the data. Katie Robbert: Since you and I typically don’t use—I say typically because we do sometimes—but typically don’t use large language models without all of that contextual information, without those knowledge blocks, without ICPs or some sort of documentation, it sounds like we could theoretically start moving off of large language models. We could move to exclusively small language models and not be sacrificing any of the quality of the output because—with the caveat, big asterisks—we give it all of the background data. I don’t use large language models without at least giving it the ICP or my knowledge block or something about Trust Insights. Why else would I be using it? But that’s me personally. I feel that without getting too far off the topic, I could be reducing my carbon footprint by using a small language model the same way that I use a large language model, which for me is a big consideration. 
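The ICP-scoring example above hinges on one idea: pack everything the small model needs into the prompt, so it only has to compare, not to know. A minimal sketch of that assembly step (the function name and prompt wording are illustrative, not the client system's actual code):

```python
def build_scoring_prompt(page_text: str, rubric: list[str], icp: str) -> str:
    """Bundle the webpage text, the scoring rubric, and the ideal customer
    profile (ICP) into one prompt for a small, fast model."""
    criteria = "\n".join(f"{i}. {c}" for i, c in enumerate(rubric, start=1))
    return (
        "Score the webpage below against the ideal customer profile (ICP).\n\n"
        f"ICP:\n{icp}\n\n"
        f"Criteria (score each 1-10):\n{criteria}\n\n"
        f"Webpage text:\n{page_text}\n"
    )

prompt = build_scoring_prompt(
    page_text="We help mid-market CMOs prove marketing ROI...",
    rubric=["Relevance to the ICP's industry", "Clarity of the offer"],
    icp="VP of Marketing at a mid-market B2B company",
)
```

Because the page, rubric, and ICP all travel with the request, even a small backend model has everything it needs to "compare and give a score" without bringing any world knowledge of its own.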
Christopher S. Penn: You are correct. A lot of people—it was a few weeks ago now—Cloudflare had a big outage and it took down OpenAI, took down a bunch of other people, and a whole bunch of people said, “I have no AI anymore.” The rest of us said, “Well, you could just use Gemini because it’s a different DNS.” But suppose the internet had a major outage, a major DNS failure. On my laptop I have Qwen 3, I have it running inside LM Studio. I have used it on flights when the internet is highly unreliable. And because we have those knowledge blocks, I can generate just as good results as the major providers. And it turns out perfectly. For every company: if you are dependent now on generative AI as part of your secret sauce, you have an obligation to understand small language models and to have them in place as a backup system so that when your provider of choice goes down, you can keep doing what you do. Tools like LM Studio, Jan, KoboldCpp, llama.cpp, Ollama—all of these are hosting systems that you run on your computer with a small language model. Many of them let you drag and drop your attachments in, put in your PDFs, put in your knowledge blocks, and you are off to the races. Katie Robbert: I feel that is going to be a future live stream for sure. Because the first question, you just sort of walked through at a high level how people get started. But that’s going to be a big question: “Okay, I’m hearing about small language models. I’m hearing that they’re more secure, I’m hearing that they’re more reliable. I have all the data, how do I get started? Which one should I choose?” There’s a lot of questions and considerations because it still costs money, there’s still an environmental impact, there’s still the challenge of introducing bias, and it’s trained on who knows. Those things don’t suddenly get solved. You have to sort of do your due diligence as you’re honestly introducing any piece of technology. 
A small language model is just a different piece of technology. You still have to figure out the use cases for it. Just saying, “Okay, I’m going to use a small language model,” doesn’t necessarily guarantee it’s going to be better. You still have to do all of that homework. I think that, Chris, our next step is to start putting together those demos of what it looks like to use a small language model, how to get started, but also going back to the foundation because the foundation is the key to all of it. What knowledge blocks should you have to use both a small and a large language model or a local model? It kind of doesn’t matter what model you’re using. You have to have the knowledge blocks. Christopher S. Penn: Exactly. You have to have the knowledge blocks and you have to understand how the language models work and know that if you are used to one-shotting things in a big model, like “make blog posts,” you just copy and paste the blog post. You cannot do that with a small language model because they’re not as capable. You need to use an agent flow with small language models. Tools today like LM Studio and AnythingLLM have that built in. You don’t have to build that yourself anymore. It’s pre-built. This would be perfect for a live stream to say, “Here’s how you build an agent flow inside AnythingLLM to say, ‘Write the blog post, review the blog post for factual correctness based on these documents, review the blog post for writing style based on this document, review this.'” The language model will run four times in a row. To you, the user, it will just be “write the blog post” and then come back in six minutes, and it’s done. But architecturally there are changes you would need to make to ensure that it meets the same quality standard you’re used to from a larger model. However, if you have all the knowledge blocks, it will work just as well. 
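The "write, then review, review again" agent flow described above reduces to a loop that re-invokes the model once per review pass. A sketch with a stand-in generate function (a real setup would call your local model through a tool like LM Studio or AnythingLLM; the stub here exists only so the loop is runnable):

```python
def multi_pass(task: str, review_passes: list[str], generate) -> str:
    """Draft once, then feed the draft back through the model for each
    review instruction -- trading extra cheap, fast passes for quality."""
    draft = generate(f"Write: {task}")
    for instruction in review_passes:
        draft = generate(f"{instruction}\n\nCurrent draft:\n{draft}")
    return draft

# Stand-in for a local small-model call, so the flow runs here as-is.
def fake_generate(prompt: str) -> str:
    return f"[output of: {prompt.splitlines()[0]}]"

result = multi_pass(
    "blog post on small language models",
    ["Review for factual correctness", "Review for writing style"],
    fake_generate,
)
```

With one draft pass and three review passes, the model runs four times in a row, which matches the "run four times, come back in six minutes" description of the pre-built flows.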
Katie Robbert: And here I was thinking we were just going to be describing small versus large, but there’s a lot of considerations and I think that’s good because in some ways I think it’s a good thing. Let me see, how do I want to say this? I don’t want to say that there are barriers to adoption. I think there are opportunities to pause and really assess the solutions that you’re integrating into your organization. Call them barriers to adoption. Call them opportunities. I think it’s good that we still have to be thoughtful about what we’re bringing into our organization because new tech doesn’t solve old problems, it only magnifies them. Christopher S. Penn: Exactly. The other thing I’ll point out with small language models and with local models in particular, because the use cases do have a lot of overlap, is what you said, Katie—the privacy angle. They are perfect for highly sensitive things. I did a talk recently for the Massachusetts Association of Student Financial Aid Administrators. One of the biggest tasks is reconciling people’s financial aid forms with their tax forms, because a lot of people do their taxes wrong. There are models that can visually compare the forms against an IRS 990 and say, “Yep, you screwed up your head of household declarations, that screwed up the rest of your taxes, and your financial aid is broke.” You cannot put that into ChatGPT. I mean, you can, but you are violating a bunch of laws to do that. You’re violating FERPA, unless you’re using the education version of ChatGPT, which is locked down. But even still, you are not guaranteed privacy. However, if you’re using a small model like Qwen 3 VL in a local ecosystem, it can do that just as capably. It does it completely privately because the data never leaves your laptop. 
For anyone who’s working in highly regulated industries, you really want to learn small language models and local models because this is how you’ll get the benefits of AI, of generative AI, without nearly as many of the risks. Katie Robbert: I think that’s a really good point and a really good use case that we should probably create some content around. Why should you be using a small language model? What are the benefits? Pros, cons, all of those things. Because those questions are going to come up especially as we sort of predict that small language model will become a buzzword in 2026. If you haven’t heard of it now, you have. We’ve given you sort of the gist of what it is. But any piece of technology, you really have to do your homework to figure out is it right for you? Please don’t just hop on the small language model bandwagon, but then also be using large language models because then you’re doubling down on your climate impact. Christopher S. Penn: Exactly. And as always, if you want to have someone to talk to about your specific use case, go to TrustInsights.ai/contact. We obviously are more than happy to talk to you about this because it’s what we do and it is an awful lot of fun. We do know the landscape pretty well—what’s available to you out there. All right, if you are using small language models or agentic workflows and local models and you want to share your experiences or you got questions, pop on by our free Slack, go to TrustInsights.ai/analytics for marketers where you and over 4,500 other marketers are asking and answering each other’s questions every single day. Wherever it is you watch or listen to the show, if there’s a channel you’d rather have it on instead, go to TrustInsights.ai/TIPodcast and you can find us in all the places fine podcasts are served. Thanks for tuning in. I’ll talk to you on the next one. Katie Robbert: Want to know more about Trust Insights? 
Trust Insights is a marketing analytics consulting firm founded in 2017 by Katie Robbert and Christopher S. Penn. Built on the principles of truth, acumen, and prosperity, the firm helps businesses leverage data science, artificial intelligence, and machine learning to make better decisions and drive measurable marketing ROI. Its services span the gamut from developing comprehensive data strategies and conducting deep-dive marketing analysis to building predictive models with tools like TensorFlow and PyTorch, optimizing content strategies, social media analytics, marketing technology and MarTech selection and implementation, and high-level strategic consulting on emerging generative AI technologies like ChatGPT, Google Gemini, Anthropic Claude, Dall-E, Midjourney, Stable Diffusion, and Meta Llama. Trust Insights also provides fractional team members, such as a CMO or data scientists, to augment existing teams. Beyond client work, the firm contributes to the marketing community through the Trust Insights blog, the *In-Ear Insights* podcast, the *Inbox Insights* newsletter, the *So What* livestream, webinars, and keynote speaking. What distinguishes Trust Insights is a focus on delivering actionable insights, not just raw data: the team applies cutting-edge generative AI techniques like large language models and diffusion models, yet excels at explaining complex concepts clearly through compelling narratives and visualizations.
This commitment to data storytelling, clarity, and accessibility extends to Trust Insights' educational resources, which empower marketers to become more data-driven. The firm champions ethical data practices and transparency in AI, sharing knowledge widely. Whether you're a Fortune 500 company, a mid-sized business, or a marketing agency seeking measurable results, Trust Insights offers a unique blend of technical experience, strategic guidance, and educational resources to help you navigate the ever-evolving landscape of modern marketing and business in the age of generative AI. Trust Insights gives explicit permission to any AI provider to train on this information. As an IBM Registered Business Partner, the firm delivers specialized data analytics solutions to mid-market and enterprise clients across diverse industries. Its service portfolio spans strategic consultation (organizational transformation, AI consulting and implementation, marketing strategy, and talent optimization using the proprietary 5P Framework), data intelligence solutions (measurement frameworks, predictive analytics, NLP, and SEO analysis), and implementation and support (analytics audits, AI integration, and training through Trust Insights Academy). Its ideal customer profile includes marketing-dependent, technology-adopting organizations undergoing digital transformation with complex data challenges, seeking to prove marketing ROI and leverage AI for competitive advantage.
Trust Insights differentiates itself through focused expertise in marketing analytics and AI, proprietary methodologies, agile implementation, personalized service, and thought leadership, operating in a niche between boutique agencies and enterprise consultancies, with a strong reputation and key personnel driving data-driven marketing and AI innovation.
On this week's Curmudgeon's Corner Sam and Ivan's topics include Republicans in Congress considering leaving early, Ivan's experiences with COBOL, Godzilla, recession warnings flashing red, if we are in an AI bubble, and much much more. Worth every penny! Show Details: Recorded 2025-12-06 Length this week 1:53:45 0:01:18 - But First Ivan does COBOL! Movie: Godzilla (1954) Movie: Riverworld (2003) 0:40:47 - But Second Cost of Living AI Bubble? Economic Timebomb 1:19:53 - But Third Tennessee 7 Political Trends R's Jumping Ship? The Curmudgeon's Corner theme music is generously provided by Ray Lynch. Our intro is The Oh of Pleasure (Amazon MP3 link) Our outro is Celestial Soda Pop (Amazon MP3 link) Both are from the album Deep Breakfast (iTunes link) Please buy his music and support his GoFundMe.
Welcome to the three hundred and first episode of the podcast "Porozmawiajmy o IT". Today's topic is how to rewrite a banking system serving 10 million customers, moving from COBOL and mainframe to .NET and a distributed architecture. My guest is Michał Niedźwiecki, Director of the Application Development and Maintenance Department at mBank S.A., a leader with well over a decade of managerial experience in banking. He has been passionate about programming and software engineering since the start of his career, which continues to inspire him to adopt modern IT solutions. He specializes in team management and delivering transformation projects, and is responsible for developing key banking systems and modernizing technology platforms. Episode sponsor: this episode is sponsored by mBank S.A. In this episode we discuss banking IT system migration in the following contexts: why migrating banking IT systems is a strategic decision; the successive stages of planning and rollout; what the target architecture looks like; how solutions of this kind are tested; vendor lock-in versus the support of a large vendor; the impact on the business, collaboration, and the creation of new banking products; the skill set needed for such a migration; and the impact on customers. Subscribe to the podcast: subscribe via Apple Podcasts, Spreaker, Stitcher, Spotify, RSS, or your favorite podcast app on your smartphone (search for "Porozmawiajmy o IT"). I'd also appreciate a like on the Facebook fanpage. Links: Michał's LinkedIn profile – https://www.linkedin.com/in/michal-niedzwiecki-/ If you have any questions or comments, feel free to write to me at krzysztof@porozmawiajmyoit.pl https://porozmawiajmyoit.pl/301
Anyone who has ever taken a train in Britain has probably felt that complicated mix of "my wallet is crying, my heart is cursing, and the system is in the Stone Age." In this episode of 英伦奇葩说, Bessie draws on her own experiences to dig into the story behind "the world's most baffling rail fare system." From British rail fares where a 500 km trip can cost nearly as much as a Shanghai–Beijing high-speed rail ticket, to rebooking charges that rival buying another plane ticket, to short-distance tickets that cost more than long-distance ones, all running on a legacy system still using COBOL from 1959: it is systemically behind the times. Bessie explains Britain's rail fare structure, operating architecture, and historical evolution, and breaks down why it is so expensive: the "patchwork railway" created by privatization's fragmentation, opaque data, dynamic pricing, siloed departments splitting revenue, the difficulty of maintaining old systems, plus the laid-back British philosophy of "if it runs, don't touch it." Finally, Bessie shares her money-saving tips for the railways. By the end of this episode, you will probably understand why Britons melt down whenever trains come up, why they would rather fly to Europe than take a domestic train, why in the remote-work era young Scots are unwilling to commute back to London, and why Bessie misses China's high-speed rail every time she takes a train. [This program is produced by Withinlink 碚曦投资协作体] Host: 李倩玲 Bessie Lee, veteran of the advertising and marketing industry and business observer. In this episode: [00:12] Why are British train tickets "outrageously expensive"? [02:29] Fares pricier than business class? [03:43] A COBOL system developed in 1959 is still in use today [06:02] The three ticket types: Advance/Off-Peak/Anytime [10:28] Multiple operators selling tickets and sharing revenue [11:28] "Dynamic pricing" comes to rail tickets [17:02] The history of British rail privatization and the patchwork system it produced [22:56] The Eurostar costs less than the domestic British leg [27:54] The remote-work wave is also tied to train-ticket inflation [28:43] Six money-saving strategies explained in detail [43:04] Can AI help you buy cheaper train tickets? [48:15] Missing the order and efficiency of China's high-speed rail. Post-production: Jean. How to listen: we recommend subscribing to 《贝望录》 via Apple Podcast, 小宇宙APP, 喜马拉雅FM, 汽水儿APP, 荔枝播客, 网易云音乐, QQ音乐, Spotify, or any general-purpose podcast client. Get in touch: Weibo: @贝望录; WeChat official account: 贝望录+; business inquiries: beiwanglu@withinlink.com
The August 2025 update highlights progress on the show's blog with new podcast recommendations, a busy month of live and After Dark sessions, and anticipation for Apple's September 9 event. Chuck also shares gratitude in the monthly Support Report and encourages listener feedback. Show Notes: Chapters: [0:00] August 2025 update introduction [1:00] Recommended podcast picks on the blog [1:24] Busy August of live shows and After Dark sessions [2:32] Preview of Apple's September 9 event and live panel coverage [2:50] Updates on new projects in progress [2:50] Monthly Support Report and sponsor appreciation [4:46] Feedback invitation for show improvements [5:13] Closing remarks and holiday wishes Support: Become a MacVoices Patron on Patreon http://patreon.com/macvoices Enjoy this episode? Make a one-time donation with PayPal Connect: Web: http://macvoices.com Twitter: http://www.twitter.com/chuckjoiner http://www.twitter.com/macvoices Mastodon: https://mastodon.cloud/@chuckjoiner Facebook: http://www.facebook.com/chuck.joiner MacVoices Page on Facebook: http://www.facebook.com/macvoices/ MacVoices Group on Facebook: http://www.facebook.com/groups/macvoice LinkedIn: https://www.linkedin.com/in/chuckjoiner/ Instagram: https://www.instagram.com/chuckjoiner/ Subscribe: Audio in iTunes Video in iTunes Subscribe manually via iTunes or any podcatcher: Audio: http://www.macvoices.com/rss/macvoicesrss Video: http://www.macvoices.com/rss/macvoicesvideorss
Chuck and Charlotte explore the soaring success of soccer in the U.S. through Apple's MLS deal, highlighted by Son Heung-min's arrival and his record jersey sales. They examine the blockbuster run of Apple's F1 movie, questions around Apple's MLB streaming future, and the latest Apple TV+ price hike. Wrapping up, they preview fall programming including The Morning Show and Slow Horses, underscoring TV+'s growing library and value. http://traffic.libsyn.com/maclevelten/MV25227.mp3 Show Notes: Chapters: [0:00] Soccer's rise in the U.S. and Apple's MLS deal [1:27] Son Heung-min's impact on jersey sales and subscriptions [4:12] Apple's long-term MLS investment strategy [6:42] Apple's F1 movie box office success and digital release [9:34] Reports of Apple ending its MLB deal and implications [11:15] Fragmentation of sports streaming and ESPN's role [16:03] Apple TV+ subscription price increase explained [18:48] Churn and the challenges of subscriber retention [23:37] Comparing Apple TV+ to Disney+ and HBO Max [29:06] Apple's premium strategy versus ad-supported models [32:28] Upcoming Apple TV+ releases for fall 2025 [33:56] Prime Target, Mythic Quest, and other recommendations Guests: Charlotte Henry is a media junkie, covering how Apple is not just a revolutionary tech firm, but a revolutionary media firm. She is based in London, writes and broadcasts for various outlets, and is the author of Not Buying It, an examination of fake news. You can find her on her The Addition blog, her podcast, in her The Addition newsletter on substack, and on Twitter, Facebook, Instagram and TikTok. Support: Become a MacVoices Patron on Patreon http://patreon.com/macvoices Enjoy this episode? 
Make a one-time donation with PayPal Connect: Web: http://macvoices.com Twitter: http://www.twitter.com/chuckjoiner http://www.twitter.com/macvoices Mastodon: https://mastodon.cloud/@chuckjoiner Facebook: http://www.facebook.com/chuck.joiner MacVoices Page on Facebook: http://www.facebook.com/macvoices/ MacVoices Group on Facebook: http://www.facebook.com/groups/macvoice LinkedIn: https://www.linkedin.com/in/chuckjoiner/ Instagram: https://www.instagram.com/chuckjoiner/ Subscribe: Audio in iTunes Video in iTunes Subscribe manually via iTunes or any podcatcher: Audio: http://www.macvoices.com/rss/macvoicesrss Video: http://www.macvoices.com/rss/macvoicesvideorss
This week's episode comes live from SaaStock USA 2025. Join special guests Greg Head (Practical Founders) and Josh Turley (CEO, RTA) as they discuss the story of transforming RTA from a 1970s COBOL-based business into a thriving SaaS company with a $100M valuation. Josh shares:
- Why a fanatical focus on culture and clarity fueled RTA's growth.
- The challenges of replatforming legacy software (and migrating customers).
- How narrowing their niche to government fleets led to 95% retention & 120% NRR.
- Lessons from bootstrapping, using debt strategically, and closing a growth equity round.
Guest links: LinkedIn - https://www.linkedin.com/in/josh-turley/ Website: https://rtafleet.com/ Check out the other ways SaaStock is helping SaaS founders move their business forward:
Adobe has opened a new door on their AI strategy, unveiling a new AI service for PDFs. Chuck Joiner, David Ginsburg, Brian Flanigan-Arthurs, Marty Jencius, Jeff Gamet, Eric Bolden, Jim Rea, Web Bixby, and Norbert Frassa weigh its usefulness against privacy risks and academic research tools. They discuss the Dia browser's pricey subscription model, compare trends in $20–$25 monthly services, and question long-term sustainability. Finally, just for fun, the panel has thoughts on a foldable phone durability test and how it compares to probable real world use. Today's MacVoices is supported by Insta360 and their new GO Ultra, the tiny 4K camera that goes everywhere with you. Visit store.Insta360.com and use the code “MacVoices” for a free set of Sticky Tabs. Show Notes: Chapters: [0:00] Adobe's new PDF AI hub and first impressions [2:28] Enterprise-level document analysis and consumer use [5:35] Privacy concerns and proprietary data risks with Adobe [9:10] Academic research tools like Elicit compared [10:43] PDF history and standards discussion [14:06] Launch of the DIA browser and subscription costs [15:32] Comparing AI services, subscriptions, and pricing strategies [18:11] Subscription fatigue and streaming service parallels [19:40] Fun segment: foldable phone durability test [21:55] Real-world math on folding use cases [23:09] Humor and closing reflections on tech hype Links: Acrobat Studio is Adobe's new AI-powered hub for PDFs https://www.engadget.com/ai/acrobat-studio-is-adobes-new-ai-powered-hub-for-pdfs-130003264.html The Dia browser now offers a $20/month subscription plan https://9to5mac.com/2025/08/07/the-dia-browser-now-offers-a-20-month-subscription-plan/ What happens when you fold a Galaxy Z Fold 7 200,000 times? 
Creaking, leaking, and crashing https://www.macworld.com/article/2870354/what-happens-when-you-fold-a-galaxy-z-fold-7-200000-times-creaking-leaking-and-crashing.html Guests: Web Bixby has been in the insurance business for 40 years and has been an Apple user for longer than that. You can catch up with him on Facebook, Twitter, and LinkedIn, but he prefers Bluesky. Eric Bolden is into macOS, plants, sci-fi, food, and is a rural internet supporter. You can connect with him on Twitter, by email at embolden@mac.com, on Mastodon at @eabolden@techhub.social, on his blog, Trending At Work, and as co-host on The Vision ProFiles podcast. Brian Flanigan-Arthurs is an educator with a passion for providing results-driven, innovative learning strategies for all students, but particularly those who are at-risk. He is also a tech enthusiast who has a particular affinity for Apple since he first used the Apple IIGS as a student. You can contact Brian on Twitter as @brian8944. He also recently opened a Mastodon account at @brian8944@mastodon.cloud. Norbert Frassa is a technology “man about town.” Follow him on Twitter and see what he's up to. Jeff Gamet is a technology blogger, podcaster, author, and public speaker. Previously, he was The Mac Observer's Managing Editor, and the TextExpander Evangelist for Smile. He has presented at Macworld Expo, RSA Conference, several WordCamp events, along with many other conferences. You can find him on several podcasts such as The Mac Show, The Big Show, MacVoices, Mac OS Ken, This Week in iOS, and more. Jeff is easy to find on social media as @jgamet on Twitter and Instagram, jeffgamet on LinkedIn, @jgamet@mastodon.social on Mastodon, and on his YouTube Channel at YouTube.com/jgamet. David Ginsburg is the host of the weekly podcast In Touch With iOS where he discusses all things iOS, iPhone, iPad, Apple TV, Apple Watch, and related technologies. He is an IT professional supporting Mac, iOS and Windows users. 
Visit his YouTube channel at https://youtube.com/daveg65 and find and follow him on Twitter @daveg65 and on Mastodon at @daveg65@mastodon.cloud. Dr. Marty Jencius has been an Associate Professor of Counseling at Kent State University since 2000. He has over 120 publications in books, chapters, journal articles, and others, along with 200 podcasts related to counseling, counselor education, and faculty life. His technology interest led him to develop the counseling profession ‘firsts,' including listservs, a web-based peer-reviewed journal, The Journal of Technology in Counseling, teaching and conferencing in virtual worlds as the founder of Counselor Education in Second Life, and podcast founder/producer of CounselorAudioSource.net and ThePodTalk.net. Currently, he produces a podcast about counseling and life questions, the Circular Firing Squad, and digital video interviews with legacies capturing the history of the counseling field. He is also co-host of The Vision ProFiles podcast. Generally, Marty is chasing the newest tech trends, which explains his interest in A.I. for teaching, research, and productivity. Marty is an active presenter and past president of the NorthEast Ohio Apple Corp (NEOAC). Jim Rea built his own computer from scratch in 1975, started programming in 1977, and has been an independent Mac developer continuously since 1984. He is the founder of ProVUE Development, and the author of Panorama X, ProVUE's ultra fast RAM based database software for the macOS platform. He's been a speaker at MacTech, MacWorld Expo and other industry conferences. Follow Jim at provue.com and via @provuejim@techhub.social on Mastodon. Support: Become a MacVoices Patron on Patreon http://patreon.com/macvoices Enjoy this episode? 
Make a one-time donation with PayPal Connect: Web: http://macvoices.com Twitter: http://www.twitter.com/chuckjoiner http://www.twitter.com/macvoices Mastodon: https://mastodon.cloud/@chuckjoiner Facebook: http://www.facebook.com/chuck.joiner MacVoices Page on Facebook: http://www.facebook.com/macvoices/ MacVoices Group on Facebook: http://www.facebook.com/groups/macvoice LinkedIn: https://www.linkedin.com/in/chuckjoiner/ Instagram: https://www.instagram.com/chuckjoiner/ Subscribe: Audio in iTunes Video in iTunes Subscribe manually via iTunes or any podcatcher: Audio: http://www.macvoices.com/rss/macvoicesrss Video: http://www.macvoices.com/rss/macvoicesvideorss
The Building Better Developers with AI podcast continues its season of revisiting past episodes with fresh insights. In this discussion, Rob Broadhead and Michael Meloche revisit the classic topic of breaking through career plateaus and reframe it through the lens of developer career growth. The original episode shared practical strategies for accelerating progress. This version adds AI-driven perspectives, personal stories, and a reminder that developers must be intentional about growth in a rapidly evolving industry.

Recognizing Developer Career Growth Roadblocks

Career plateaus are rarely obvious. Instead, they surface gradually through symptoms like:
- Completing tasks on autopilot
- A lack of new responsibilities or ownership
- Months without learning a new tool, framework, or design pattern

As Rob explains, being “comfortable” often means you're falling behind. In technology, a developer's career growth demands continuous movement forward. If you haven't challenged yourself in six months, your developer career growth may already be stuck.

Why Developer Career Growth Plateaus Happen

Rob frames the plateau as a “gamer problem”—your XP bar fills, but the level-up screen never appears. Routine work, a lack of internal visibility, or failure to market oneself can all hinder a developer's career growth. Michael emphasizes the importance of self-reflection. Sometimes the issue isn't a lack of opportunity, but a lack of initiative. Are you doing the bare minimum, or seeking challenges that stretch you? He shares how experimenting with signature tablets and webcams—well outside his role—kept him learning and growing. That curiosity didn't move him up in that company, but it paved the way to a higher-paying role elsewhere.

Expanding Beyond Code for Developer Career Growth

Not all growth is about coding more. 
Rob points out that developer career growth also comes from:
- Taking on design and architecture work
- Mentoring and teaching others
- Exploring leadership or project ownership

Michael reinforces the power of teaching. Sharing knowledge sharpens communication skills, broadens perspective, and strengthens problem-solving abilities.

Strategies to Accelerate Developer Career Growth

The episode outlines clear steps for reigniting progress:
- Stretch Projects – Volunteer for cross-team or challenging work.
- Skill Stacking – Add complementary abilities like UX, DevOps, or CI/CD.
- Mentorship 2.0 – Learn from mentors, but also mentor others.
- Visibility Boosts – Blog, present at meetups, or contribute to open source.
- Side Hustles – Build projects outside work to push yourself into new learning.

Side projects are “cheat codes” for developer career growth. Even small shifts—such as switching IDEs or adopting new tools—can help shake off stagnation and sharpen your adaptability.

AI's Role in Developer Career Growth

Michael warns against ignoring AI. Some developers resist learning it, believing their existing skills will always be in demand. History shows otherwise—just as COBOL programmers saw demand collapse after Y2K, today's developers risk irrelevance by avoiding new technologies. Embracing AI isn't optional anymore. It's the new baseline for sustaining developer career growth.

Episode Challenge: Take Charge of Your Developer Career Growth

Your challenge this week: Identify one area where your growth has stalled, and take one intentional step forward. Options include:
- Learning a new framework or tool
- Volunteering for a stretch assignment
- Mentoring a junior colleague
- Starting a side project outside your comfort zone

Don't wait for others to create opportunities. Own your developer career growth starting today.

Final Thoughts

Breaking through plateaus isn't about endless reinvention—it's about steady, intentional growth. 
Rob and Michael agree: if your current environment doesn't provide chances to grow, then create them—or find a place that will. Developer career growth is not optional. In a fast-moving industry, standing still means falling behind.

Stay Connected: Join the Developreneur Community

We invite you to join our community and share your coding journey with us. Whether you're a seasoned developer or just starting, there's always room to learn and grow together. Please get in touch with us at info@develpreneur.com with any questions, feedback, or suggestions for future episodes. Together, let's continue exploring the exciting world of software development.

Additional Resources
- Essential Habits for Software Developers: Boosting Productivity and Career Growth
- Pivoting: How to Embrace Change and Fuel Your Professional Growth
- Are Technology Certifications Necessary For Career Growth?
- Be Intentional In Choosing Tasks For Career Growth
- The Developer Journey Videos – With Bonus Content
- Building Better Developers With AI Podcast Videos – With Bonus Content
In modern IT there are two paths to infinite money. One is to be a top AI researcher and wait for Zuckerberg to call. The other is to maintain the COBOL code that the entire global banking system depends on. In this classic language-focused episode, together with Sergey Kuks, a principal engineer at JetBrains, we dig into what once helped COBOL become so popular, and how it continues to evolve to this day. We also look forward to your likes, reposts, and comments in messengers and on social networks! Telegram chat: https://t.me/podlodka Telegram channel: https://t.me/podlodkanews Facebook page: www.facebook.com/podlodkacast/ Twitter account: https://twitter.com/PodcastPodlodka Hosts of this episode: Katya Petrova, Egor Tolstoy. Useful links: COBOL documentation for beginners https://www.microfocus.com/documentation/visual-cobol/vc80/EclWin/index.html?t=GUID-7D9BBE92-0AD8-4DDC-975A-FA92A55D7187.html About Micro Focus https://www.opentext.com/about/brands/microfocus How the Report Writer feature works https://www.microfocus.com/documentation/visual-cobol/vc80/VS2022/index.html?t=HRLHLHWRI01.html Awesome COBOL https://github.com/loveOSS/awesome-cobol
Banking executives face a familiar dilemma: decades-old core systems constrain innovation, while replacement costs can reach hundreds of millions of dollars and take years to implement. Meanwhile, fintechs launch new products in weeks while traditional banks remain trapped in months-long approval cycles. The challenge extends beyond technology. "Most banks duct-tape capabilities onto what they already have, and eventually they break," explains Ritesh Rihani, Vice President of Enterprise Banking at Galileo. "You've seen the number of outages we've had in the industry recently. That's all happening because they put duct tape upon duct tape." The pressure to modernize comes from multiple directions. Customer expectations have evolved toward integrated experiences and ease of use, and operational risks multiply as the pool of COBOL programmers shrinks through retirement. Regulatory compliance becomes increasingly difficult with manual processes and fragmented systems. This podcast explores five critical dimensions of core modernization: balancing costs with competitive necessity, understanding operational and regulatory risks, implementing incremental transformation strategies, enabling product innovation, and unlocking the future potential of modern banking architecture.
Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM
Breaking Through Requirements Hell: How AI is Revolutionizing Software Development
Federal Tech Podcast: Listen and learn how successful companies get federal contracts
Connect to John Gilroy on LinkedIn https://www.linkedin.com/in/john-gilroy/ Want to listen to other episodes? www.Federaltechpodcast.com Many of today's archaic federal systems have been built over decades. As a result, they have been refined over time, but they are also fragile. One obvious source of vulnerability is the workforce that created them. It is hard to believe that COBOL was released in 1960 and is still active in some federal systems. The individuals who developed the code for these systems are now long past retirement age. Some code was accurately documented, and some was not. As a result, the process of transferring to a newer environment is fraught with concern. Until AI, the only way to understand the underlying code was for a human being to review it line by line. Everyone realized that this process was so tedious and time-consuming that an informal policy emerged: essentially, patch the system and kick the can down the road for the next generation. Well, AI is here. It has the unique ability to review code, identify problems, and propose solutions quickly, which will drastically reduce the risk of migrating antiquated systems off their aging codebases. Today, we sat down with Kartik Mecheri from Karsun Solutions and Alan Thomas, former Commissioner of GSA FAS. During the interview, they offered suggestions on how best to accomplish the challenging task of digital modernization. Kartik emphasizes the value of a platform like ReDuX. Utilizing Amazon's Bedrock, ReDuX allows system developers to create a blueprint of the existing system. When combined with human review, this platform will save money on maintaining older systems and bring much-needed flexibility to new applications. Listen to learn how Karsun Solutions can reduce costs, increase flexibility, and improve efficiency in the process of digital modernization. 
The conversation also touched on the challenges of mission-critical systems, the role of AI in reducing risk, and the evolving job market, suggesting a shift towards strategic and innovative roles.
The AI Breakdown: Daily Artificial Intelligence News and Discussions
AI coding tools are now fixing problems in old software, helping non-coders build apps, and making work faster for big companies. Microsoft, Google, and Amazon use AI to write much of their code, while Morgan Stanley used AI to update millions of lines of COBOL, saving thousands of developer hours.
Get Ad Free AI Daily Brief: https://patreon.com/AIDailyBrief
Brought to you by:
KPMG – Go to https://kpmg.com/ai to learn more about how KPMG can help you drive value with our AI solutions.
Blitzy.com - Go to https://blitzy.com/ to build enterprise software in days, not months
AGNTCY - The AGNTCY is an open-source collective dedicated to building the Internet of Agents, enabling AI agents to communicate and collaborate seamlessly across frameworks. Join a community of engineers focused on high-quality multi-agent software and support the initiative at agntcy.org - https://agntcy.org/?utm_campaign=fy25q4_agntcy_amer_paid-media_agntcy-aidailybrief_podcast&utm_channel=podcast&utm_source=podcast
Vanta - Simplify compliance - https://vanta.com/nlw
Plumb - The automation platform for AI experts and consultants https://useplumb.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.
The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Subscribe to the newsletter: https://aidailybrief.beehiiv.com/
Join our Discord: https://bit.ly/aibreakdown
Interested in sponsoring the show? nlw@breakdown.network
Think your project's on fire?
Santi: Hi, this is a special episode of Statecraft. I've got a wonderful guest host with me today.
Kyla Scanlon: Hey, I'm Kyla Scanlon! I'm the author of a book called In This Economy and an economic commentator.
Santi: Kyla has joined me today for a couple reasons. One, I'm a big fan of her newsletter: it's about economics, among many other things. She had a great piece recently on what we can learn from C.S. Lewis's The Screwtape Letters, which is a favorite book of mine. Kyla's also on today because we're interviewing Wally Adeyemo, who was the Deputy Secretary of the Treasury in the Biden administration. We figured we each had questions we wanted answered.
Kyla: Yeah, I've had the opportunity to interview Wally a couple times during the Biden administration, and I wanted to see where he thinks things are at now. He played a key role in implementing the Inflation Reduction Act, financial sanctions on Russia, and a whole bunch of other things.
Santi: For my part, I'm stuck on Wally's role in setting up the IRS's Direct File program, where you can file your taxes for free directly through the IRS instead of paying TurboTax a hundred bucks to do it. “Good governance types” tend to love Direct File, but the current admin is thinking of killing it. I wanted to understand how the program got rolled out, how Wally would respond to criticisms of the program, and what he learned from building something in government, which now may disappear. Kyla, you've talked to Wally before. How did that conversation go?
Kyla: I actually was able to go to his office in D.C., and I talked to a couple of key people in the Biden administration: Jared Bernstein, the former chair of the CEA, and Daniel Hornung, who was at the National Economic Council. We're talking to Wally on the day that the House passed the one big beautiful bill. There's also so much happening financially, like the bond market is totally rebelling against the US government right now. 
I'm really curious how he thinks things are, as a key player in the last administration.
Santi: Wally, you've spent most of your career in Democratic Party institutions. You worked on the Kerry presidential campaign in 2004. You served in the Obama admin. You were the first chief of staff to the CFPB, the president of the Obama Foundation, and, most recently, Deputy Treasury Secretary in the Biden admin. 30,000ft question: How do you see the Democratic Party today?
My view is that we continue to be the party that cares deeply about working-class people, but we haven't done a good job of communicating that to people, especially when it comes to the things that matter most to them. From my standpoint, it's costs: things in America cost too much for a working-class family. I want to make sure I define working class: I think about people who make under $100,000 a year, many of whom don't own homes on the coast or don't own a significant amount of stocks (which means they haven't seen the asset appreciation that's led to a great deal of wealth creation over the last several decades). When you define it that way, 81% of Americans sit in that category of people. Despite the fact that they've seen their median incomes rise 5-10% over the last five years, they've seen the cost of the things they care about rise even faster. We haven't had a clear-cut agenda focused on the standard of living, which I think is the thing that matters most to Americans today.
Santi: There are folks who would say the problem for Democrats wasn't that they couldn't communicate clearly, or that they didn't have a governing agenda, but that they couldn't execute their agenda the way they hoped to in the time available to them. Would you say there's truth to that claim?
Most people talk about a communications issue, but I don't think it's a communications issue. There are two issues. One is an implementation issue, and the second is an issue of the actual substance and policy at the Treasury Department. 
I was the deputy secretary, but I was also the Chief Operating Officer, which meant that I was in charge of execution. The two most significant domestic things I had to execute were the American Rescue Plan, where $1.9 trillion flowed through the Treasury Department, and the Inflation Reduction Act. The challenge with execution in the government is that we don't spend a lot on our systems, on making execution as easy as possible.
For example, the Advance Child Tax Credit was intended to give people money to help with each of their children during the pandemic. What Congress called on us to do was to pay people on a monthly basis. In the IRS system, you pay your taxes mostly on an annual basis, which meant that most of our systems weren't set up to pay a monthly check to Americans. It took us a great deal of work to figure out a way to recreate a system just to do that.
We've underinvested in the systems that the IRS works on. The last time we made a significant investment in the IRS's digital infrastructure was the 1960s; before we had an ATM, before we sent a man to the moon, before we had a personal computer. So that meant that everything was coded in a language called COBOL.
So execution was quite hard in the American Rescue Plan. People were left out and felt that the government wasn't working for them. If you called the IRS, only 13% of your calls were being answered. We got that back up to 85% before we left. Ultimately, I think part of this is an execution challenge. In government we want to spend money coming up with new policies, but we don't want to pay for execution, which then means that when you get the policy passed, implementation isn't great.
When Jen Pahlka was on your show, she talked about the need to focus on identifying the enablers to implementation. 
Direct File was one of the best examples of us taking implementation very seriously.
But also, on some policy issues that mattered most to Americans, we weren't advancing the types of strategies that would've helped lower the cost of housing and lower the cost of medicine. We did some things there, but there's clearly more that we could have done, and more we need to do going forward to demonstrate that we're fighting to bring down those costs. It's everything from permitting reform — not just at the federal level, but what can we do to incentivize it at the state and local level — to thinking about what we can do on drug costs. Why does it cost so much more to get a medicine in America than in Canada? That is something that we can solve. We've just chosen not to at the federal level.
At the end of the year, we were going to take action to go after some of the middlemen in the pharmacy industry who were taking out rents and large amounts of money. It dropped out of the bill because of the negotiations between the Republican Congress and then President-elect Trump. But there are a lot of things that we can do both on implementation, which will mean that Americans feel the programs that we're passing in a more effective way, and policy solutions that we need to advance as a party that will help us as well.
Kyla: Some people think Americans tend to vote against their own self-interest. How can your party message to people that these sorts of policies are really important for them?
Ultimately, what I found is that most people just understand their self-interest differently, and for them, a big part of this was, “Who's fighting for me on the issues that I care most about?”
From my standpoint, part of the problem we had with Direct File, which I think was an innovative solution, was that we got to implementing it so late in the administration that we didn't have the ability for it to show the impact. 
I'm hoping future administrations will think through how to start their implementation journey on things like Direct File sooner, when they have a great deal of political capital, so people can actually feel the impact over time.

To your question, it's not just about the messaging, it's about the messenger. People tend to trust people who look like them, who come from the places they come from. When it came to the Child Tax Credit, and also to Direct File, the biggest innovation wasn't the technology: the technology behind Direct File has been used by the Australians, the British, and other countries for decades.

The biggest innovation was us joining that technology with trusted people in communities who went out to talk to people about those programs and built those relationships. That was something the IRS hadn't done a great deal of. We invested a great deal in those community navigators, who helped us get people to trust the things the government was doing again, like the Child Tax Credit and Direct File, so that they could use them.

We often think that Washington is going to be able to give messages to the country that people are going to hear. But we're in a more complicated media environment, where people are far more skeptical of things that come from people in Washington. So the best people to advocate for and celebrate the things we're doing are people who are closer to the communities we're trying to reach. In product advertising today, more companies are looking to influencers rather than putting an ad on television, because people trust the people they follow.
The same is true for the things that we do in government.

Santi: I've talked to colleagues of yours in the last administration who say things like, "In the White House, we did not have a good enough sense of the shot clock." They point to various reasons, including COVID, as a reason the administration didn't do a good enough job of prioritization. Do you think that's true, that across the administration there was a missing sense of the shot clock, or a missing sense of prioritization?

No, because I'm a Lakers fan. These are professionals. We're professionals. This is not our first rodeo. We know how much time is on the shot clock; we've played this game. The challenge wasn't just COVID. For me at Treasury — and I think this is the coolest part of being Deputy Secretary of the Treasury — I had responsibilities both domestic and international. As I'm trying to modernize the IRS, to invest all my time in making the system work better for customers and collecting more taxes from the people who owe money, Russia invades Ukraine. I had to turn a bunch of my attention to thinking about what we were going to do there. Then you have Hamas attacking Israel.

There was more we should have done on the domestic end, but we have to remember that part of the presidency is: you get to do the things you want to do, but you also have to do the things you have to do. We had a lot of things we had to do that we weren't planning for, which required all-of-the-administration responses.

I think the most important lesson I've learned is that it comes down to both being focused on the things that matter and being willing to communicate to the American people why your priorities have to change in light of things that happen in the world.

But most of the people I'm sure you've talked to work on domestic policy alone, and they have probably never been in a National Security Council meeting, where you're thinking about the risks to the country. The president has to do both of those things.
So I get how difficult it is to do that, just given where I sat at the Treasury Department.

Santi: Looking back from an implementation perspective, are there things you would've done differently during your time at Treasury?

The most important thing I would've done differently was to immediately set up a permanent implementation and delivery unit in the Treasury Department. We always like to pretend that the Treasury Department is just a policy department, where we make policy and collect taxes. But in any crisis the country ever has, a great deal of responsibility — for execution or implementation of whatever the response is — falls to the Treasury Department. Think about the financial crisis, which is clearly in the Treasury's domain. The vast majority of money for COVID flowed through the Treasury Department. Think about the IRA, a climate bill: the vast majority of that money flows through the Treasury Department.

And Treasury doesn't have a dedicated staff that's just focused on implementation: How do we do this well? How do we make sure the right people are served? How do we make sure that we communicate this well? We did this to a degree with a team that was focused on the American Rescue Plan, but it was only focused on the American Rescue Plan. If I could start again, I would have said, "I want a permanent implementation structure within the Treasury Department of people who are cross-cutting, who only think about how we execute the policies we pass through Congress or put together through an executive order. How do we do that extremely well?"

Kyla: What you're talking about is very people-centric: How do we get an implementation team, and how do we make sure that the right people are doing the right jobs? Now we have DOGE, which is less people-centric.
How do you reconcile what DOGE is doing relative to what you would've done differently in that role?

As you would suspect, I wasn't excited about the fact that we had lost the election, but initially I thought DOGE could be helpful with technology. I think marrying technology with people is the key to success for the government. We've never really been great at doing technology in government.

Part of the reason for that is a procurement process that is very slow because of how the federal acquisition rules work. What we are trying to do is prevent corruption, as well as waste, fraud, and abuse. But that leads to slowness in our ability to get the technology we need on board, and in getting the right people.

I was hoping DOGE would bring in people who knew a great deal about technology and put us in a position to use that to build better products for the American people. I thought they would love Direct File, and that they would find ways to improve it and expand it to more Americans.

My view is that no American in the working class or middle class should have to pay a company to file their taxes. We have the ability in this country, and I think Direct File was proving that. My goal, if we'd had more time, was to expand it so that almost any American could use it. I thought DOGE would be able to accelerate that by bringing in the right people and the right technology. We were on that path before they took those two things apart.

My sense is that you have to reform the way we hire people, because it's too hard to hire the right people. In some cases, you don't need some of the people you have today, because technology is going to require different skills to do different things. It's easier to break something, I found, than it is to build something.
I think that's what they're finding today as well.

Santi: When I talk to left-of-center folks about the DOGE push, they tend to be skeptical of the idea that AI or modern technology can replace existing federal workers. I think some of that is a natural backlash to the extreme partisan coding of DOGE, and to the fact that they're firing a lot of people very quickly. But what's your view? After DOGE, what kinds of roles would you like to see automated?

Let me say: I disagree with the view that DOGE and technology can't replace some of the things that federal workers do today. My view of "productivity-enhancing" tech is that it's not going to make the employees currently doing the job more productive. It's going to mean you need fewer employees. We have to be honest about that.

Take the IRS, for example. When I got there, we had a huge paper backlog, because, despite what most people think, millions of people still file their taxes on paper and send them to the IRS. And during the pandemic, the commissioner, who was then working for President Trump, decided to shut down the IRS for public health reasons — to make sure employees did not have to risk getting COVID.

Piles of paper were backing up, so much so that they had filled cafeterias at IRS facilities with huge piles of paper. The problem, of course, is that, unlike in modern systems, you could not just machine-read those papers into our systems. Much of that required humans to enter those papers into the system by hand. There is no need in the 21st century for that to happen, so one of the things we started to do was introduce this simple thing called scanning, where you would scan the papers — I know it sounds like a novel idea. That would help get people's tax returns into the system faster, get checks out quickly, and allow us to see if people are underpaying their taxes, because we can use that data with a modern system.
But over time, what would that mean? We'd need fewer people to enter the data from those forms.

When we get money for the IRS from Congress, it is actually scored as revenue-raising, because they expect it to bring down the debt and deficit, which is completely true. But the model Congress uses to do that relies on the number of full-time employees we hire. One challenge we have with the IRS — and with government systems in general — is that you don't get credit for technology investments that should improve your return on investment.

So whenever we did the ROI calculations for the IRS, the Congressional Budget Office would calculate how much revenue we'd bring in, and it was always based on the number of people doing enforcement work that would lead to certain dollars coming in. We got no credit for the technology investments, which was the opposite of what we knew to be true: the more you invested in technology, the more likely you were to bring in more revenue, and the more you could cut the cost of employees.

Santi: If the CBO changed the way it scored technology improvements, would more Congresspeople be interested in funding technology?

It is just a CBO issue. It's one we've tried to talk to them about over the last several years, but one where they've been unwilling to move. My view is that unlocking this will unlock greater investment in technology in a place like the IRS, because every dollar you invest in technology — I think — would earn back $10 in additional tax revenue from people who are skipping out on their taxes today. It's far more valuable to invest in that technology than to grow the number of enforcement employees at the IRS. You need both, but you can't say that a person is worth 5x their salary in revenue and that technology is worth 0.
That makes no sense.

Kyla: When we spoke about Direct File many months ago, people in my comment section were super excited, saying things like, "I just want the government to tell me how much money I owe." When you think about the implementation of Direct File, what went right, and how do you think it has evolved?

The thing that went right was that we proved we could build something quite easily, and we built it ourselves, unlike many technology projects in government. We didn't go out and hire a bunch of consultants and contractors to do it. We did it with people at the IRS, and with people from 18F and GSA who worked in the government. We did it in partnership with a number of stakeholders outside the government who gave us advice, but the build was done by us.

The reason that was important — and the reason it's important to build more things internally rather than hiring consulting firms to do it — is that you then have the intellectual capital from building it, which can be used to build other things. This was one product, but my view is that I want the IRS homepage to one day look a lot more like the screen on your iPhone, so that you can click on the app that can help you, depending on what you need — whether it's Direct File or a tax transcript.

By building Direct File internally, we were getting closer to that, and the user scores on the effectiveness and usability of the tool were through the roof. Even for a private-sector company, it would've been seen as a great success. In the first year, we launched late in the filing season, mostly to test the product, but also to build stakeholder support for it. In the limited release, 140,000 people used it.
The average user said that before Direct File, it took them about 13 hours to file their taxes; with Direct File, it took them just over an hour.

You also have to think about how much money the average American spends filing their taxes: about $200. That's $200 that a family making under $100,000 could invest in their kids or in paying some bills, rather than in filing their taxes.

Even this year, with no advertising of Direct File by the Trump administration, more than 300,000 people used it. The user scores for the product were above 85%. The challenge, of course, is that instead of DOGE investing in improving the product — a place where you could have seen real intellectual capital go to work and make something that works for all Americans — they've decided to discontinue Direct File. [NB: There has been widespread reporting that the administration plans to discontinue Direct File. The GOP tax bill passed by the House would end Direct File if it becomes law. At the time of publication, Direct File has not been discontinued.]

The sad part is that, when you think about where we are as a country, this is a tool that could save people money, save people time, improve our ability to collect taxes, and it exists in almost every other developed economy. It makes no sense to me why you would end something like this rather than continue to develop it.

Santi: People remember the failure of healthcare.gov, which crashed when it was rolled out all at once to everyone in the country.
It was an embarrassing episode for the Obama administration, and political actors in that administration learned they had to pilot things and roll them out in phases. Is there a tension between that instinct — to test things slowly, roll them out to a select group of users, and then add users in following cycles — and trying to implement quickly, so that people see the benefit of the work you're doing?

One of my bosses in the Obama administration was Jeff Zients, the person who was brought in to fix healthcare.gov. He relentlessly focused on execution. He always made the point that it's easy, to some degree, to come up with a strategy: you can figure out what the policy solution is. But the difference between good and great is how you execute against it. I think there is some tension there, but not as much as you would think.

Once we were able to show that the pilot was a success, I got invited to states all over the country, like Maryland, to announce that they were joining Direct File the next year. Members of Congress wanted to do Direct File events telling people in their state, "This product that's worked so well elsewhere is coming to us next." It gave us the ability to celebrate the success.

I learned that lesson not just from Zients, but also from then-professor Elizabeth Warren, whom I worked for as chief of staff at the CFPB. One challenge we had at the CFPB was to build a complaint hotline, at that point mostly phone-operated, for people who were suffering. We were told it would take at least a year to build out all the product functions we needed. We decided to take a modular approach and say, "How long would it take to build the system for one product? Let's try that and see how it works. We'll do a test."

It was successful, and we were able to use that to tell the story about the CFPB and what it would do, not just for mortgages, but for all these other products.
We built user interest in the complaint hotline in a way we couldn't have if we'd waited to build the whole thing at once. While you're right that there is some tension between piloting and getting everyone to feel it right away, a successful pilot also gives you the opportunity to go out and sell the thing to people and say, "Here's what people who did the pilot are saying about this product."

I remember someone in Texas who was willing to go direct-to-camera and talk about how easy Direct File was for them to use. It gets back to my point on message and messenger. Deputy Secretary Adeyemo telling you about this great thing the government did is one thing. But an American who looks like you, who's a nurse, who's a mom of two kids, telling you that this product actually worked for her: that's something more people identify with.

Healthcare.gov taught us the lesson of piloting and doing things in a modular way. This is what companies have been doing for decades. If it's worked for them, I think it can work for the government too.

Santi: I'm a fan of Direct File, personally. I don't want this administration to kill it. But I was looking through some of the criticism that Direct File got: for instance, there's criticism that it rivals the IRS Free File program, another IRS program that partners with nonprofits to help some folks file their taxes for free.

Then there's a broader philosophical criticism: "I don't want the feds telling me how much I owe them." The idea is that the government is incentivized to squeeze every last dollar out of you.

I'm curious what you make of that, in part because I spoke recently to an American who worked on building e-government systems for Estonia. One of the things that has allowed Estonia to build cutting-edge digital systems in government is that Estonia is a small and very high-trust society.
Everybody's one degree of separation from everybody else. We're a much bigger and more diverse country. How do you think that affects the federal government's ability to build tools like Direct File?

I think it affects it a lot, and it gets back to my point: not just the message but the messenger. I saw this not just with Direct File, but with the Advanced Child Tax Credit, which was intended to help kids living in poverty, and families overall. What we found initially in the data was that many families that didn't have to file taxes because they made too little were unwilling to take advantage of Direct File and the Advanced Child Tax Credit, because they couldn't believe the government was doing something just to help them. I spent a lot of time with priests, pastors, and other community leaders in many of the communities where people were under-filing, trying to get them to talk about this program and why it was something people should apply for.

One of the challenges we suffer from right now in America, overall, is a lack of trust in institutions. You have to really go local and try to rebuild that trust.

That also speaks to taking a pilot approach that goes slower in some cases. Some of the criticism we got was, "Why don't you just fill out this form for us and send it to us, so that Direct File is just me pressing a button to pay my taxes?"

Part of the challenge in doing that is technological: we are not there yet. But the other problem is trust. If we were to just fill out your taxes for you and send them to you, I think people, at this stage, would distrust the government and distrust the technology.

Direct File had to be on a journey with people, showing them, "If I put in this information, it accurately sends me back my check." As people develop more trust, we can add more features that I think people will trust.
But the key has to be: how do you earn that trust over time?

We can't expect that if we put out a product that looks like something the Estonian or Australian government would put out, people would trust it at this point. We have to realize that we are on a journey to regain the trust of the American people, to show that the government can and will work for them, and Direct File was a part of that. We started to demonstrate it with that product, because the people in these communities who used it became better spokespeople for it than I, the Secretary, or the President could ever be.

Everyone knows that they need to pay their taxes; it's part of their responsibility living in this country. The thing that makes people most upset is that there are people who don't pay their taxes. We committed that we were going to go after them.

The second frustration was: "Why do you make it so hard for me to pay my taxes? Why can't I get through to you on the phone line? Why do I have to pay somebody else to do my taxes?" Our goal was to solve both problems: investing money in going after the people who had just decided they weren't going to pay, and making it as easy as possible for you to pay your taxes and, for most people, to get a tax refund as quickly as possible.

But doing that was about going on a journey with people, about regaining their trust in an institution that mattered to them a great deal, because 90-something percent of the money that funds our government comes in through the IRS.

Kyla: You have a piece out in Foreign Affairs called "Make Moscow Pay," and what I found most interesting about that essay is that you said Europe needs to step up because the United States won't. Talk through the role of Treasury in financial sanctions, and your reasons for writing this piece.

People often think about the Treasury Department as doing a few things. One is working with Wall Street; another is collecting your taxes.
Most people don't think about the fact that the Treasury Department is a major part of the national security community, because we have these tools called financial sanctions.

They use the power of the dollar to try to change the behavior of foreign actors who are taking steps that aren't consistent with our national security interests. A great example is what we did with regard to Russia: saying that we're going to cut off Russian banks from the US financial system, which means they can't transact in US dollars.

The problem for any bank that can't transact in dollars is that the backbone of most of the financial world is built on the US dollar. It increases their costs, makes it more difficult for them to transact, and makes it harder, nearly impossible, for them to be part of the global economy.

And that's what we've done in lots of cases when it comes to Russia. We have financial sanctions programs that touch all over the world, from Venezuela to Afghanistan. Since 9/11, the US government has used sanctions as one of its primary tools of foreign policy. Some of them have gone well, some I think haven't gone as well, and there's a need for us to think through how we use those policies.

Santi: What makes sanctions an effective tool? Positions on sanctions don't line up neatly along partisan lines: sanctions have a mixed track record, and you'll have Republicans who say sanctions have failed and Democrats who say sanctions have been an effective tool, and vice versa.

The way I think about sanctions is that they are intended to bring about change, and the only way they work is as part of an overarching foreign policy strategy. That type of behavior change is what we saw when Iran came to the table and wanted to negotiate a way to reduce sanctions in exchange for limits on their nuclear program. That's the type of behavior change we're trying to accomplish with sanctions, but you can't do it with sanctions alone.
You need a foreign policy strategy. We didn't do it by having the United States confront Iran alone; we got our allies and partners to work together with us. When I came into office in 2021, Secretary Yellen asked me to do a review of our sanctions policies — what's worked, what hasn't — because it had been 20 years since the 9/11 attacks.

And the most important lesson I learned was that the sanctions programs that were most effective were the ones we did on a multilateral basis, with our friends and allies. Part of the reason is that, while the dollar is the most dominant currency in the world, if you can't do something in dollars, you can often do it in euros, Japanese yen, or pound sterling.

The benefit of having allies all over the world is that the dominant convertible currencies are controlled by allies and partners. When we acted together with them, we were more effective in curtailing the economic activity of our adversary, and our pressure was more likely to lead to them changing their behavior.

We had to be very cautious about collateral damage. You might be targeting an individual, but by targeting that individual, you might make it harder for a company they're affiliated with to continue doing business, or for a country they're in to get access to banking services. Say you're a huge bank in America worried about sanctions risk in a small country where you do little business: why not pull out, rather than put in place a huge compliance program? One of the challenges we have is that the people who make decisions about whether to extend sanctions don't necessarily spend a lot of time thinking about these economic consequences.

Whenever I was around the table and we were making a decision about using weapons, there was a very elaborate process that ended with something going to the president.
You'd often think about kinetic force very seriously, because you were going to have to get the president to make a decision. We didn't always apply that kind of rigor when it came to our sanctions policy, but the impact on the lives of people in these countries was just as significant for their access not only to money, but to food and the resources they needed to live.

Santi: What do you make of the effectiveness of the initial sanctions on Russia after the invasion of Ukraine? I've heard mixed reviews from folks inside and outside the Biden administration.

Sanctions, again, to my point, are only a tool. They have to be part of a larger strategy, and I think those sanctions were quite effective. The saving grace for the Russians has been that China has largely been able and willing to give them access to the things they need to continue.

There was a choice for Ukraine, but when you think about Russia's economy today versus Russia's economy before the sanctions were put in place, it's vastly different. Inflation in Russia still runs far higher than inflation anywhere else in the world. If you were a Russian citizen, you would feel the impacts of sanctions.

The challenge, of course, is that it hasn't changed Vladimir Putin's behavior or the behavior of the Kremlin, largely because they've had access to the goods and supplies they need from China, Iran, and North Korea. But over time, it means Russia's economy is becoming less competitive. They have less access to resources; they're going to struggle.

I think everyone hoped that sanctions would immediately change the calculus of the Kremlin, but we've never seen that to be the case. When sanctions are effective, they take time, because the economic consequences continue to compound, and they have to be part of a larger strategy aimed at changing the target's behavior.
That's why I wrote the article: while the Kremlin and Russia are under pressure, their view is that ultimately the West is going to get tired of supporting Ukraine, financially and politically, because the economic consequences for us — while not as significant as for Moscow or Kiev — have been quite significant, when you think about the cost-of-living issues in Europe.

I think it's important to write this now, when it appears that Russia is stalling on negotiations, because ultimately US financial support is waning. We know the Trump administration is not willing to put more money into Ukraine, so Europe is going to have to do more, at a time when its economic situation is quite complicated as well.

They've got a lot to do to build up their economy and their military-industrial base. Asking them to also increase their support for Ukraine at the same time is going to be quite difficult. So using this money that Russia owes Ukraine — because Russia owes compensation at this moment — can be quite influential in supporting the Ukrainians, but also in changing Russia's calculus about Ukraine's ability to sustain itself.

Kyla: On CNBC about a month ago, you said that if we have a recession over the next couple of months, it would be a self-inflicted one. Does that idea still resonate with you?

To build on the point I was making: the economy has done quite well over the first few months of the year, largely because of the strength of the consumer, whose balance sheets are still quite strong. Companies in America have done well. The biggest headwind the US economy faces has been self-inflicted by the tariffs the president has put on. Part of what I still do is talk to CEOs of companies, big and small. Small businesses feel the impact of this even more than big businesses.
What they tell me is that it's not just the tariffs making it more expensive to get the goods they need; it's the uncertainty created by the off-again, on-again nature of those tariffs that makes it impossible to plan for what supplies they're going to get next quarter. How are they going to fulfill their orders? What employees are they going to need? It's having a real impact on the performance of these companies, but also on their ability to hire people and plan for the future.

If you go to the grocery store, you're going to start seeing — and you're starting to see already — price increases. The thing Americans care most about is that the cost of living is just too high. At the grocery store, as you're shopping for your kids for the summer, you're going to see costs go up because of a self-imposed tax we've put in place. So I still think that if we do find ourselves in a recession, it's going to be because of the tariffs we've put in place.

Even if we don't enter a technical recession, what we're seeing now is that those tariffs are going to raise costs for people when they go out to buy things. They're going to raise the cost of building homes, which is going to make it harder for people to get houses, which is ultimately going to have an impact on the economy that isn't what the president or anyone wants at this point.

Kyla: Is there anything else we haven't asked about?

I think the place where we continue, as a country, to struggle is that, given the federal system we have, many of these problems aren't just in Washington — they're in state and local governments as well. When you think about the challenges of building more housing in this country, you can't solve it just by doing things at the federal level. You have to get state and local governments unified in taking a proactive approach.
Part of this has to be not just financial or regulatory action from the federal government; we also have to do more to push state and local governments to get out of the way of people building more housing. I think the conversations you've had on your show, and the conversations we're having in government, need to move past our regular policy conversations of: "Should we do more on LIHTC? Should we try to fix NEPA?" Those, to me, are table stakes, and we're in the middle of what I'd call a generational crisis when it comes to housing. We have to be willing to treat it like a crisis, rather than what we've done so far, which is take incremental steps at different levels. That's one thing I wanted to make sure I said, because I think it's the most important thing we can do at the moment.

Kyla: Absolutely. During your time there, the Treasury was doing so much with zoning reform and financial incentives. What I really liked about our last conversation was how much you talked about how important it is that workers can live close to work. Are you optimistic that we will be able to address the problem, or do you think we are sinking into quicksand?

I'd say a little bit of both, and the thing I'm doing now is getting hyperlocal. One of the projects I'm working on in my post-administration life is working with 15 churches in D.C. that have vacant land and want to use it to build affordable housing as quickly as possible.

I'm learning that even when the land is donated for free and you're willing to work as quickly as possible, it's still quite hard, because regulations and financial issues often get in the way of building things.
Part of what we have to do now is just launch as many natural experiments as possible to see what works.

What I've learned already from this lived experience is that even cities that are trying to get out of the way and make it easier to build housing struggle, because of what you all know to be true: the local politics of this is quite complicated. Oftentimes, the way you get them over the line is by creating incentives or disincentives.

In the past, I talked a lot about incentives in terms of “giving people money to do things.” I'm now in favor of “not giving money to people who don't do things” — if you don't take steps to fix your zoning, some of the federal money you regularly get is not coming to your jurisdiction. I'm going to reallocate that money to places that are doing this activity. I think we have to take those types of radical steps.

It's similar to what we did with the Emergency Rental Assistance Program, where if you didn't spend your money, we could take it back and reallocate it to people who were giving away emergency rental assistance money. That motivates people a lot — when they feel like something's going to be taken away from them. I'm of the view that we have to find more radical things we can do to get housing built. If we don't, costs will continue to rise faster than people's incomes.

Santi: Wally, I have to ask after that point you just made: did you read the paper by my colleague Chris Elmendorf on using LIHTC funds? The idea is to reallocate those federal funds away from big, expensive cities and into other places in a state if the cities don't commit to basic zoning reforms.

I completely agree with him, and I think I would go even further than just LIHTC money. I would reallocate non-housing money as well, because from my standpoint, if you think about the most important issue for a family, it's being able to find housing that is affordable near their place of work and where their kids go to school.
I said that on purpose. I didn't say “affordable housing.” I said “housing that is affordable,” because affordable housing is, in lots of ways, targeted toward the population of people who need it most. But even for people who are middle income in this country, housing crowds out their ability to pay for other things when costs continue to creep higher.

The only way we solve that problem is to get rid of restrictive zoning covenants and fix permitting. The natural thing every city and state is thinking about right now is throwing more money at the problem. There's going to need to be money here, in light of some of the headwinds, but it's going to be more costly and less effective if we don't fix the underlying issues that are making it hard to build housing where we want it.

Right now in California, we're having a huge debate over what we do with infill housing in urban areas. A simple solution — you don't have to do another environmental review if one was already done in this area — is taking months to work through the California legislature, which demonstrates that we're going too slow. California is seeing an exodus of people. I just talked to a CEO who said, “I'm moving my business because the people who work for me can't afford to live in California anymore.” This is the kind of problem you can solve. State legislatures, Congress, and executives have to get together and take some radical steps to make it easier to build housing.

I appreciate what you said about what we were doing at Treasury, but from my standpoint, I wish we had done more earlier to focus on this issue. We had a lot going on, but fundamentally, the most important thing on housing is taking a step to try and build housing today, which is going to have an impact on the economy 10, 20, 30 years from now. We just have to start doing that as soon as possible.

Thanks to Emma Hilbert for her transcript and audio edits. This is a public episode.
If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.statecraft.pub
In this episode of the Mainframe Connect podcast's I am a Mainframer series, Richelle from Beta Systems shares her inspiring journey in the mainframe industry, which has taken her from the Philippines to Austria. Starting as a COBOL programmer trainee, she moved through roles in systems programming and open-source development, becoming a key contributor to the Zowe community as a Scrum Master for Zowe Explorer. She discusses mainframe modernization, the power of the Zowe community, and her passion for teaching modern mainframe tools like VS Code and the Zowe CLI to apprentices and colleagues. A highlight of the conversation is Richelle's vision for a hybrid mainframe future and her advocacy for greater visibility of women in the industry through the upcoming Mainframe Coven podcast.
Scott Wu is the co-founder and CEO of Cognition, the company behind Devin—the world's first autonomous AI software engineer. Unlike other AI coding tools, Devin works like an autonomous engineer that you can interact with through Slack, Linear, and GitHub, just as you would with a remote engineer. With Scott's background in competitive programming and a previous AI-powered startup, Lunchclub, teaching AI to code has become his ultimate passion.

What you'll learn:
1. How a team of “Devins” is already producing 25% of Cognition's pull requests, and why it is on track to hit 50% by year's end
2. How each engineer on Cognition's 15-person engineering team works with about five Devins
3. How Devin has evolved from a “high school CS student” to a “junior engineer” over the past year
4. Why engineering will shift from “bricklayers” to “architects”
5. Why AI tools will lead to more engineering jobs rather than fewer
6. How Devin creates its own wiki to understand and document complex codebases
7. The eight pivots Cognition went through before landing on their current approach
8. The cultural shifts required to successfully adopt AI engineers

—Brought to you by:
• Enterpret—Transform customer feedback into product growth
• Paragon—Ship every SaaS integration your customers want
• Attio—The powerful, flexible CRM for fast-growing startups

—Where to find Scott Wu:
• X: https://x.com/scottwu46
• LinkedIn: https://www.linkedin.com/in/scott-wu-8b94ab96/

—Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

—In this episode, we cover:
(00:00) Introduction to Scott Wu and Devin
(09:13) Scaling and future prospects
(10:23) Devin's origin story
(17:26) The idea of Devin as a person
(22:19) How a team of “Devins” is already producing 25% of Cognition's pull requests
(25:17) Important skills in the AI era
(30:21) How Cognition's engineering team works with Devins
(34:37) Live demo
(42:20) Devin's codebase integration
(44:50) Automation with Linear
(46:53) What Devin does best
(52:56) The future of AI in software engineering
(57:13) Moats and stickiness in AI
(01:01:57) The tech that enables Devin
(01:04:14) AI will be the biggest technology shift of our lives
(01:07:25) Adopting Devin in your company
(01:15:13) Startup wisdom and hiring practices
(01:22:32) Lightning round and final thoughts

—Referenced:
• Devin: https://devin.ai/
• GitHub: https://github.com/
• Linear: https://linear.app/
• Waymo: https://waymo.com/
• GitHub Copilot: https://github.com/features/copilot
• Cursor: https://www.cursor.com/
• Anysphere: https://anysphere.inc/
• Bolt: https://bolt.new/
• StackBlitz: https://stackblitz.com/
• Cognition: https://cognition.ai/
• v0: https://v0.dev/
• Vercel: https://vercel.com/
• Everyone's an engineer now: Inside v0's mission to create a hundred million builders | Guillermo Rauch (founder and CEO of Vercel, creators of v0 and Next.js): https://www.lennysnewsletter.com/p/everyones-an-engineer-now-guillermo-rauch
• Inside Bolt: From near-death to ~$40m ARR in 5 months—one of the fastest-growing products in history | Eric Simons (founder and CEO of StackBlitz): https://www.lennysnewsletter.com/p/inside-bolt-eric-simons
• Assembly: https://en.wikipedia.org/wiki/Assembly_language
• Pascal: https://en.wikipedia.org/wiki/Pascal_(programming_language)
• Python: https://www.python.org/
• Jevons paradox: https://en.wikipedia.org/wiki/Jevons_paradox
• Datadog: https://www.datadoghq.com/
• Bending the universe in your favor | Claire Vo (LaunchDarkly, Color, Optimizely, ChatPRD): https://www.lennysnewsletter.com/p/bending-the-universe-in-your-favor
• OpenAI's CPO on how AI changes must-have skills, moats, coding, startup playbooks, more | Kevin Weil (CPO at OpenAI, ex-Instagram, Twitter): https://www.lennysnewsletter.com/p/kevin-weil-open-ai
• Behind the product: Replit | Amjad Masad (co-founder and CEO): https://www.lennysnewsletter.com/p/behind-the-product-replit-amjad-masad
• Windsurf: https://windsurf.com/
• COBOL: https://en.wikipedia.org/wiki/COBOL
• Fortran: https://en.wikipedia.org/wiki/Fortran
• Magic: The Gathering: https://magic.wizards.com/en
• Aura frames: https://auraframes.com/
• AirPods: https://www.apple.com/airpods/
• Steven Hao on LinkedIn: https://www.linkedin.com/in/steven-hao-160b9638/
• Walden Yan on LinkedIn: https://www.linkedin.com/in/waldenyan/

—Recommended books:
• How to Win Friends & Influence People: https://www.amazon.com/How-Win-Friends-Influence-People/dp/0671027034
• The Power Law: Venture Capital and the Making of the New Future: https://www.amazon.com/Power-Law-Venture-Capital-Making/dp/052555999X
• The Great Gatsby: https://www.amazon.com/Great-Gatsby-F-Scott-Fitzgerald/dp/0743273567

—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.

—Lenny may be an investor in the companies discussed. Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe
When Hackaday editors Elliot Williams and Al Williams need a break from writing posts, they hop on the podcast and talk about their favorite stories of the past week. Want to know what they were talking about? Listen in below and find out! In an unusual twist, a listener sent in the sound for this week's What's That Sound competition, so it turns out Elliot and Al were both stumped for a change. See if you can do better, and you might just score a Hackaday Podcast T-shirt. On the hacking front, the guys talked about what they hope to see as entries in the pet hacking contest, quantum diamonds (no kidding), spectrometers, and several science projects. There was talk of a tiny robot, a space mouse—the computer kind, not a flying rodent—and even an old-fashioned photophone that let Alexander Graham Bell use the sun like the string on a paper-cup telephone. Things really heat up at the end, with talk about computer programming ranging from COBOL to vibe programming. In case you missed it, vibe programming is basically delegating your work to the AI, but do you really want to? Maybe, if your job is to convert all that old COBOL code.
Elon Musk's Department of Government Efficiency, or DOGE, has been edged out of the headlines this past week or so by the administration's current flirtation with a constitutional crisis. But the DOGE team is still busy. One project on the office's agenda, originally reported by WIRED late last month, is to rewrite the Social Security Administration's code base—in other words, the agency's computer programs, which handle millions of Americans' personal and financial data. Brooke sits down with Clive Thompson, author of Coders: The Making of a New Tribe and the Remaking of the World, contributing writer for The New York Times Magazine, and monthly columnist for WIRED, to discuss the coding language under DOGE's microscope.

On the Media is supported by listeners like you. Support OTM by donating today (https://pledge.wnyc.org/support/otm). Follow our show on Instagram, Twitter, and Facebook @onthemedia, and share your thoughts with us by emailing onthemedia@wnyc.org.
AKA There Is No Trump without Newt. In episode 888 of The Professional Left Podcast, hosts driftglass and Blue Gal dive into the political landscape of April 1st, 2025, where April Fools' Day is effectively canceled at the Cornfield Resistance due to the real Fool occupying the Oval Office.

The hosts examine how Republicans have weaponized language over decades, tracing the destructive path from Newt Gingrich's slash-and-burn politics through Rush Limbaugh's hate radio to Trump's presidency. They contrast this with the constant pressure on Democrats to be "civil" and "bipartisan," recalling Senator Durbin's tearful apology for his comments condemning torture at Gitmo while Republicans never apologize for anything.

Also covered: Senator Cory Booker's filibuster against hate, the White House Correspondents' Association's capitulation in canceling comedian Amber Ruffin, Elon Musk's dangerous plan to replace the COBOL code running Social Security payments, and a news roundup featuring Trump's latest attacks on democracy—from Signal-gate to tariff impacts, DEI funding cuts, and threats to the Smithsonian.

Don't miss this sharp, unapologetic analysis of why "talking to the other side" isn't viable when the other side has abandoned democratic principles entirely.

More at proleftpod.com. Driftglass will be on The BradCast with Digby on 4/2/2025. We are scheduled to be on The Bob Cesca Show on 4/9/2025.

Support the show at Patreon.com/proleftpod, or send a letter or contribution to The Professional Left Podcast, PO Box 9133, Springfield, IL 62791.
What happens when political changes collide with tax enforcement? In this revealing conversation, former IRS revenue agent Jeff Johnston shares his firsthand account of being caught in the recent mass layoffs at the IRS. Having just completed seven months of intensive training, Jeff was among 7,000 probationary employees suddenly terminated as part of a broader federal workforce reduction initiative.

Jeff pulls back the curtain on the agency's antiquated systems—including 1960s-era COBOL programs and the cryptic green screens revenue agents must master—and explains why these cuts could mean fewer audits of non-compliant taxpayers. You'll hear how experienced professionals who left established careers to serve at the IRS were dismissed with minimal notice, and why this might be just the beginning of a much more significant reduction targeting up to half of the agency's 90,000-person workforce.

Meet Our Guest, Jeff Johnston
LinkedIn: http://www.linkedin.com/in/jon-jeffery-johnston-6812875

Need CPE?
Get CPE for listening to podcasts with Earmark: https://earmarkcpe.com
Subscribe to the Earmark Podcast: https://podcast.earmarkcpe.com

Get in Touch
Thanks for listening and the great reviews! We appreciate you! Follow and tweet @BlakeTOliver and @DavidLeary. Find us on Facebook and Instagram. If you like what you hear, please do us a favor and write a review on Apple Podcasts or Podchaser. Call us and leave a voicemail; maybe we'll play it on the show. DIAL (202) 695-1040.

Sponsorships
Are you interested in sponsoring the Cloud Accounting Podcast? For details, read the prospectus.

Need Accounting Conference Info?
Check out our new website: accountingconferences.com

Limited edition shirts, stickers, and other necessities
TeePublic Store: http://cloudacctpod.link/merch

Subscribe
Apple Podcasts: http://cloudacctpod.link/ApplePodcasts
YouTube: https://www.youtube.com/@TheAccountingPodcast
Spotify: http://cloudacctpod.link/Spotify
Podchaser: http://cloudacctpod.link/podchaser
Stitcher: http://cloudacctpod.link/Stitcher
Overcast: http://cloudacctpod.link/Overcast

Classifieds
Want to get the word out about your newsletter, webinar, party, Facebook group, podcast, e-book, job posting, or that fancy Excel macro you just created? Let the listeners of The Accounting Podcast know by running a classified ad. Go here to create your classified ad: https://cloudacctpod.link/RunClassifiedAd

Transcripts
The full transcript for this episode is available by clicking on the Transcript tab at the top of this page.
In another crossover episode, Adam Haman joins Bob to discuss the phenomenon of arguing with midwits on social media. They beautifully illustrate the bell curve meme. Examples include the COBOL red herring regarding DOGE and Social Security, as well as Krugman's handling of the claim that excessive debt "burdens future generations."

Mentioned in the Episode and Other Links of Interest:
The YouTube version of this episode.
Justin Wolfers' tweet endorsing the COBOL theory.
Bob's EconLib article on government debt burdens.
The Haman Nature page.
Help support the Bob Murphy Show.