Podcasts about Visual Studio Code

Free source code editor by Microsoft

  • 321 podcasts
  • 778 episodes
  • 54m average duration
  • 5 new episodes weekly
  • Latest episode: May 22, 2025
Popularity trend: 2017–2024 (chart not shown)


Latest podcast episodes about Visual Studio Code

The Jim Rutt Show
EP 300 Daniel Rodriguez on AI-Assisted Software Development

May 22, 2025 · 72:17


Jim talks with Daniel Rodriguez about the state of AI software development and its implementation in industry. They discuss Daniel's background at Microsoft & Anaconda, transformer-based technologies, software engineering as hard vs soft science, vibe coding, barriers to entry in software engineering, cognitive styles needed for programming, Daniel's history with LLMs, unit testing & test-driven development with AI, social aspects of AI adoption, quality concerns & technical debt, style consistency & aesthetics, approaches to steering LLMs through roles & personas, philosophical perspectives on LLM consciousness & intelligence, personification & interaction styles, memory & conversation history in models, agent-based systems & their historical origins, the future of agent frameworks, customer/user interaction within agent ecosystems, distributed systems, future predictions about inference costs & protocols, IDEs & linting tools, and much more.

Related: Episode Transcript · JRS EP 289: Adam Levine on AI-Powered Programming for Non-Developers

Daniel Rodriguez is Chief Architect and acting Technical Lead at r.Potential, the first enterprise platform for optimizing hybrid teams of humans and digital workers. As the venture's overall technical architect, he designs and integrates a full stack of AI systems, combining Agentforce with advanced data, simulation, and orchestration technologies to bring that vision to life. Before r.Potential, Daniel bootstrapped and scaled retrieval-augmented AI services and agentic infrastructure at Anaconda. Earlier, at Microsoft, he maintained Azure TypeScript SDKs and co-created Visual Studio Code's Jupyter and Data Wrangler extensions, expanding cloud and data-science workflows.

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 529: Microsoft Build Updates: 5 new Copilot AI updates and how to use them

May 20, 2025 · 41:49


Microsoft legit just dropped a book of AI updates at the Build Conference. We're going to go over the 5 most impactful AI-powered Microsoft Copilot updates and how they will change the future of work.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Have a question? Join the convo here.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
• GitHub Copilot's Autonomous Coding Partner Update
• Copilot Tuning for Enterprise Customization
• Introducing Agent Foundry on Azure
• Multi-Agent Orchestration in Copilot Studio
• Computer Use Automation in Copilot
• MCP Native Support in Microsoft Systems

Timestamps:
00:00 "Everyday AI: Transform Your Business"
06:42 AI Coding Assistant Evolution
09:29 Copilot Tuning for Business Leaders
10:56 Data Privacy Concerns in Cloud Use
16:52 "AI Collaboration Among Tech Giants"
20:48 "Multi-Agent Orchestration Cautions"
22:59 "Multi-Agent Orchestration in Copilot Studio"
25:27 OpenAI Copilot Access and Availability
29:38 Copilot Pro: Versatile AI Agent
35:13 Microsoft Embraces Open AI Collaboration
36:57 "Security Concerns Slow AI Rollout"
39:44 Subscribe & Review Request

Keywords: Microsoft Build 2025, AI updates, Copilot AI updates, GitHub Copilot, GitHub Copilot coding agent, autonomous coding partner, Visual Studio Code, multimodal understanding, natural language prompts, MCP protocol, Model Context Protocol, Anthropic, Microsoft 365 Copilot, business leaders, Copilot tuning, organization's internal data, low-code model tuning, task-specific agents, secure service boundary, Azure, Agent Foundry, AI agent playground, enterprise-grade AI agents, Grok, Elon Musk, Microsoft Azure, agent-to-agent protocol, A2A, multi-agent orchestration, Copilot Studio, agent collaboration, agentic memory, automated validation tools, computer use in Copilot, desktop applications, repetitive tasks, MCP native support, Windows 11, future of work, third-party applications, agentic web, security and access controls.

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info.) Ready for ROI on GenAI? Go to youreverydayai.com/partner

programmier.bar – der Podcast für App- und Webentwicklung
News 20/25: V8 // Accessibility // iMessage-Bug // VS Code Updates // JJ vs. Git // Nissan Leaf Hack

May 15, 2025 · 42:13


In this news edition we talk about changes to the V8 JavaScript engine that let you mark files with Explicit Compile Hints for immediate compilation; in the new Chrome version this can save hundreds of milliseconds. We also discuss why the WCAG is starting to rethink and re-evaluate its core topic, accessibility. Dave reports on a bug that silently swallows messages in Apple's iMessage, and what exactly XML has to do with it. Fabi now mostly uses Cursor as his IDE, but was still amazed by the latest changes and improvements to AI and Copilot support in Visual Studio Code. After the recent attempt by Evo to establish a Git alternative fizzled out, the JJ (Jujutsu) project keeps gaining momentum; Jan lays out the advantages it offers over Git and what Google has to do with it. And last but not least, Dennis reports how (white-hat) hackers managed to take control of a Nissan Leaf and everything they were able to do with it.

Write to us! Send us your topic requests and feedback: podcast@programmier.bar
Follow us! Stay up to date on future episodes and virtual meetups and join community discussions: Bluesky, Instagram, LinkedIn, Meetup, YouTube

Lenny's Podcast: Product | Growth | Career
The rise of Cursor: The $300M ARR AI tool that engineers can't stop using | Michael Truell (co-founder and CEO)

May 1, 2025 · 71:13


Michael Truell is the co-founder and CEO of Anysphere, the company behind Cursor, the fastest-growing AI code editor in the world, reaching $300 million in annual recurring revenue just two years after its launch. In this conversation, Michael shares his vision for the future, lessons learned, and advice for preparing for the fast-approaching AI future.

What you'll learn:
• Cursor's early pivot from automating CAD to automating code
• Michael's vision for "what comes after code" and how programming will evolve
• Why Cursor built their own custom AI models despite not starting there
• Key lessons from Cursor's rapid growth
• Why "taste" and logic design will become more valuable engineering skills than technical coding ability
• Why the market for AI coding tools is much larger than people realize, and why there will likely be one dominant winner
• Michael's advice for engineers and product teams preparing for the AI future

Brought to you by:
• Eppo: Run reliable, impactful experiments
• Vanta: Automate compliance. Simplify security.
• OneSchema: Import CSV data 10x faster

Where to find Michael Truell:
• X: https://x.com/mntruell
• LinkedIn: https://www.linkedin.com/in/michael-t-5b1bbb122/
• Website: https://mntruell.com/

In this episode, we cover:
(00:00) Introduction to Michael Truell and Cursor
(04:20) What comes after code
(08:32) The importance of taste
(12:39) Cursor's origin story
(18:31) Why they chose to build an IDE
(22:39) Will everyone become engineering managers?
(24:31) How they decided it was time to ship
(26:45) Reflecting on Cursor's success
(32:03) Counterintuitive lessons on building AI products
(34:02) Inside Cursor's stack
(38:42) Defensibility and market dynamics in AI
(46:13) Tips for using Cursor
(51:25) Hiring and building a strong team
(59:10) Staying focused amid rapid AI advancements
(01:02:31) Final thoughts and advice for aspiring AI innovators

Referenced:
• Cursor: https://www.cursor.com/
• Microsoft Copilot: https://copilot.microsoft.com/
• Scaling laws for neural language models: https://openai.com/index/scaling-laws-for-neural-language-models/
• MIT: https://www.mit.edu/
• Telegram: https://telegram.org/
• Signal: https://signal.org/
• WhatsApp: https://www.whatsapp.com/
• Devin: https://devin.ai/
• Visual Studio Code: https://code.visualstudio.com/
• Chromium: https://chromium.googlesource.com/chromium/src/base/
• Exploring ChatGPT (GPT) Wrappers—What They Are and How They Work: https://learnprompting.org/blog/gpt_wrappers
• OpenAI's CPO on how AI changes must-have skills, moats, coding, startup playbooks, more | Kevin Weil (CPO at OpenAI, ex-Instagram, Twitter): https://www.lennysnewsletter.com/p/kevin-weil-open-ai
• Behind the founder: Marc Benioff: https://www.lennysnewsletter.com/p/behind-the-founder-marc-benioff
• DALL-E 3: https://openai.com/index/dall-e-3/
• Stable Diffusion 3: https://stability.ai/news/stable-diffusion-3

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.

Lenny may be an investor in the companies discussed. Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe

Power Platform Boost Podcast
One of those (#57)

Apr 30, 2025 · 61:28 · Transcription available


Links from this episode:
• The Diary Of A CEO with Steven Bartlett
• Microsoft 365 Copilot Gets Smarter - Big Changes Ahead by Lisa Crosbie
• Spring Release of Microsoft 365 Copilot by Femke Cornelissen
• CRM will Die — Benioff admits it, Microsoft is building the replacement by Steve Mordue
• Microsoft Copilot Studio ❤️ MCP | Power Platform Developer Blog
• Engineered Code - Blog - Power Pages: Depending on jQuery
• Power Pages Actions in Visual Studio Code by Nick Doelman
• Git Integration is Generally Available
• Set regarding any table in Power Automate without complex conditions by Amey Holden
• Making the Move from Outbound to Realtime Marketing by Megan V. Walker
• One form to rule them all: Reuse Marketing Forms Across Pages with Javascript by Pauline Kolde
• Announcing new computer use in Microsoft Copilot Studio for UI automation by Charles Lamanna
• Automate, agentify, or nothing? by Ana Inés Urrutia
• Revolutionizing Digital Workflows: Traditional Automation vs. AI-Powered Agents by Carsten Groth
• Self-Service Disaster Recovery for Power Platform and D365 by Andrew Ly
• Microsoft Power Platform and Copilot Studio Architecture Center

Be sure to subscribe so you don't miss a single episode of Power Platform BOOST!
Thank you for buying us a coffee: buymeacoffee.com
Podcast home page: https://powerplatformboost.com
Email: hello@powerplatformboost.com
Follow us!
Twitter: https://twitter.com/powerplatboost
Instagram: https://www.instagram.com/powerplatformboost/
LinkedIn: https://www.linkedin.com/company/powerplatboost/
Facebook: https://www.facebook.com/profile.php?id=100090444536122
Mastodon: https://mastodon.social/@powerplatboost

Microsoft Business Applications Podcast
Building AI Solutions with Azure AI Foundry with Nanddeep Nachan

Apr 28, 2025 · 28:53 · Transcription available


Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM
FULL SHOW NOTES: https://www.microsoftinnovationpodcast.com/680

Microsoft's AI landscape has evolved into three distinct categories: Copilot for Microsoft 365 (M365) applications, Copilot Studio for low-code chatbot development, and Azure AI Foundry (formerly AI Studio) for pro-code flexibility with AI models. Join Nanddeep Nachan on today's Power Platform Show to learn more.

TAKEAWAYS
• Declarative agents provide the simplest approach to extending Copilot functionality without complex licensing
• Teams Toolkit in Visual Studio Code offers an easy way to create declarative agents using simple JSON configurations
• Copilot Studio gives business users a drag-and-drop interface for creating virtual assistants quickly
• Azure AI Foundry provides comprehensive tools for developers and data scientists building advanced AI solutions
• The Retrieval-Augmented Generation (RAG) pattern bridges the gap between LLMs and organization-specific data
• Contract management use cases demonstrate how AI can extract insights from millions of documents
• The Graph RAG pattern enables "global queries" that deliver insights across entire document collections
• AI Foundry solutions can be deployed directly to websites, Teams apps, or Microsoft 365 Copilot
• Despite impressive personal productivity gains, many organizations still struggle to find compelling enterprise-level use cases for Copilot

This year we're adding a new show to our lineup: The AI Advantage. We'll discuss the skills you need to thrive in an AI-enabled world.

DynamicsMinds is a world-class event in Slovenia that brings together Microsoft product managers, industry leaders, and dedicated users to explore the latest in Microsoft Dynamics 365, the Power Platform, and Copilot. Early bird tickets are on sale now, and listeners of the Microsoft Innovation Podcast get 10% off with the code MIPVIP144bff: https://www.dynamicsminds.com/register/?voucher=MIPVIP144bff

Accelerate your Microsoft career with the 90 Day Mentoring Challenge. We've helped 1,300+ people across 70+ countries establish successful careers in the Microsoft Power Platform and Dynamics 365 ecosystem. Benefit from expert guidance, a supportive community, and a clear career roadmap. A lot can change in 90 days, get started today!

Support the show. If you want to get in touch with me, you can message me here on LinkedIn. Thanks for listening.
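The "simple JSON configurations" the takeaways mention can be pictured as a small declarative document plus a sanity check. The field names below are illustrative only, not the exact Microsoft 365 Copilot declarative-agent manifest schema; consult the Teams Toolkit documentation for the real format.

```python
import json

# Hypothetical declarative-agent configuration (field names simplified).
agent_config = {
    "name": "Contract Assistant",
    "description": "Answers questions about contract documents.",
    "instructions": (
        "You help legal teams find clauses, renewal dates, and "
        "obligations in the organization's contract repository."
    ),
    "capabilities": [
        {"type": "document_search", "source": "SharePoint"},
    ],
}

def validate(config: dict) -> list[str]:
    """Return a list of problems; an empty list means the config looks complete."""
    problems = []
    for field in ("name", "description", "instructions"):
        if not config.get(field):
            problems.append(f"missing required field: {field}")
    return problems

print(json.dumps(agent_config, indent=2))
print(validate(agent_config))  # → []
```

The appeal of the declarative style is exactly what the episode describes: the whole agent is data, so low-code tooling can generate, validate, and deploy it without custom code.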

RunAs Radio
Agentic AI for IT Pros with Tim Warner

Apr 23, 2025 · 34:44


What can agentic AI do for you? Richard talks to Tim Warner about his work using next-generation agentic AI technologies to help with sysadmin tasks. Tim talks about the early lead that Cursor AI took with AI agents capable of writing and executing scripts on your behalf, as opposed to just generating code for you to cut and paste. Today, GitHub Copilot has caught up with Agent Mode in Copilot Edits; although still in preview, it points to a future where sysadmins use these tools to write better scripts for work, and get more done in less time!

Links: Cursor AI · OpenAI Operator · GitHub Copilot · Copilot Edits
Recorded February 17, 2025

Crazy Wisdom
Episode #454: From Zero to Git: A Founder's Guide to the Terminal

Apr 21, 2025 · 64:42


In this episode, I, Stewart Alsop III, sat down with AJ Beckner to walk through how non-technical founders can build a deeper understanding of their codebase using AI tools like Cursor and Claude. We explored the reality of navigating an IDE as a beginner, demystified Git and GitHub version control, and walked through practical ways to clone a repo, open it safely in Cursor, and start asking questions about your app's structure and functionality without breaking anything. AJ shared his curiosity about finding specific text in his app and how to track that down across branches. We also looked at using AI-powered tools for tasks like dependency analysis and visualizing app architecture, with a focus on empowering non-devs to gain confidence and clarity in their product's code. You can connect with AJ through Twitter at @thisistheaj. Check out this GPT we trained on the conversation!

Timestamps
00:00 – Stewart introduces Cursor as a fork of Visual Studio Code and explains the concept of an IDE to AJ, who has zero prior experience. They talk about the complexity of coding and the importance of developer curiosity.
05:00 – They walk through cloning a GitHub repository using the git clone command. Stewart highlights that AJ won't break anything and introduces the idea of a local playground for exploration.
10:00 – Stewart explains Git vs GitHub, the purpose of version control, and how to use the terminal for navigation. They begin setting up the project in Cursor using the terminal rather than GUI options.
15:00 – They realize only a README was cloned, leading to a discussion about branches, specifically the difference between main and development branches, and how to clone the right one.
20:00 – Using git fetch, they get access to the development branch. Stewart explains how to disconnect from Git safely to avoid pushing changes.
25:00 – AJ and Stewart begin exploring Cursor's AI features, including the chat interface. Stewart encourages AJ to start asking natural-language questions about the app structure.
30:00 – Stewart demonstrates how to ask for a dependency analysis and create mermaid diagrams for visualizing how app modules are connected.
35:00 – They begin identifying specific UI components, including finding and editing the home screen title. AJ uploads a screenshot to use as reference in Cursor.
40:00 – They successfully trace the UI text to an index.tsx file and discuss the layout's dependency structure. AJ learns how to use search and command-F effectively.
45:00 – They begin troubleshooting issues with Claude's GitHub integration, exploring Claude MCP servers and configuration files to fix broken tools.
50:00 – Stewart guides AJ through using npm to install missing packages, explains what Node Package Manager is, and reflects on the interconnected nature of modern development.
55:00 – Final troubleshooting steps and next steps. Stewart suggests bringing in Phil for deeper debugging. AJ reflects on how empowered he now feels navigating the codebase.

Key Insights
• You don't need to be a developer to understand your app's codebase: AJ Beckner starts the session with zero familiarity with IDEs, but through Stewart's guidance he begins navigating Cursor and GitHub confidently. The key idea is that non-technical founders can develop real intuition about their code: enough to communicate better with developers, find what they need, and build trust with the systems behind their product.
• Cursor makes AI-native development accessible to beginners: One of the biggest unlocks in this episode is seeing how Cursor, a VS Code fork with AI baked in, can answer questions about your codebase in plain English. By cloning the GitHub repo and indexing it, AJ is able to ask, "Where do I change this text in the app?" and get direct, actionable guidance. Stewart points out that this shifts the role of a founder from passively waiting on answers to actively exploring and editing.
• Version control doesn't have to be scary with the right framing: Git and GitHub come across as overwhelming to many non-engineers, but Stewart breaks it down simply: Git is the local system that keeps changes organized and non-destructive, and GitHub is the cloud-based sharing tool layered on top. Together, they allow safe experimentation, like cloning a development branch and disconnecting it from the main repo to create a playground environment.
• Branching strategies reflect how work gets done behind the scenes: The episode includes a moment of discovery: AJ cloned the main branch and only got a README. Stewart explains that the real work often lives in a "development" branch, while "main" is kept stable for production. Understanding this distinction helps AJ (and listeners) know where to look when trying to understand how features are actually being built and tested.
• Command line basics give you superpowers: Rather than relying solely on visual tools, Stewart introduces AJ to the terminal, explaining simple commands like cd, git clone, and git fetch, and emphasizes that the terminal has been the backbone of developer work for decades. It's empowering to learn that you can use just a few lines of text to download and explore an entire app.
• Modern coding is less about code and more about managing complexity: A recurring theme in the conversation is the sheer number of dependencies, frameworks, and configuration files that make up any modern app. Stewart compares this to a reflection of modern life: interconnected and layered. Understanding this complexity, rather than being defeated by it, becomes a mindset that AJ embraces as part of becoming technically fluent.
• AI will keep lowering the bar to entry, but learning fundamentals still matters: Stewart shares how internal OpenAI coding models went from being some of the worst performers two years ago to now ranking among the top 50 in the world. While this progress promises an easier future for non-devs, Stewart emphasizes the value of understanding what's happening under the hood. Tools like Claude and Cursor are incredibly powerful, but knowing what they're doing, and when to be skeptical, is still key.
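The branch discovery in this episode (cloning main and finding only a README) can be modeled in a few lines. This is a conceptual toy, not how Git actually stores data (Git uses commit objects and trees); the file names are invented for illustration.

```python
# Each branch points to a snapshot of the project's files.
branches = {
    # "main" is kept stable and nearly empty, as in the episode.
    "main": {"README.md": "# My App"},
    # The real work lives on "development".
    "development": {
        "README.md": "# My App",
        "index.tsx": "<HomeScreen />",
        "package.json": "{ ...dependencies... }",
    },
}

def checkout(branch: str) -> list[str]:
    """Return the file names you would see after switching to this branch."""
    return sorted(branches[branch])

# Cloning and looking at main shows only the README, which is exactly
# what surprised AJ; the application code is on the development branch.
print(checkout("main"))         # → ['README.md']
print(checkout("development"))  # → ['README.md', 'index.tsx', 'package.json']
```

The `git fetch` step in the episode corresponds to learning that the `development` key exists at all: the clone had it on the remote, but the working copy was pointed at `main`.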

.NET in pillole
288 - Estendere Copilot con un nostro server MCP

Apr 14, 2025 · 11:53


Microsoft has released a preview of its C# SDK for building an MCP (Model Context Protocol) server, which lets LLMs interact with external applications and data sources. Visual Studio Code already supports MCP servers, so you can expose your own code for Copilot to call and use.

https://devblogs.microsoft.com/dotnet/build-a-model-context-protocol-mcp-server-in-csharp/
https://devblogs.microsoft.com/blog/microsoft-partners-with-anthropic-to-create-official-c-sdk-for-model-context-protocol
https://www.youtube.com/watch?v=iS25RFups4A

#AI #ModelContextProtocol #copilot #dotnet #vscode #dotnetinpillole
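MCP is layered on JSON-RPC 2.0, so the messages an MCP server handles are plain JSON request/response pairs. As a rough illustration of that shape (a Python toy, not the official SDK the episode covers, which is C#; the `add` tool and its arguments are invented for the example):

```python
import json

def handle_request(raw: str) -> str:
    """Answer a single JSON-RPC 2.0 request for a hypothetical 'add' tool.
    A real MCP server also handles initialization, capability negotiation,
    and a transport (stdio or HTTP); this sketch skips all of that."""
    req = json.loads(raw)
    if req.get("method") == "tools/call" and req["params"]["name"] == "add":
        args = req["params"]["arguments"]
        result = {"content": [{"type": "text", "text": str(args["a"] + args["b"])}]}
        return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})
    # Standard JSON-RPC error for anything we don't understand.
    return json.dumps({"jsonrpc": "2.0", "id": req.get("id"),
                       "error": {"code": -32601, "message": "method not found"}})

request = json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "add", "arguments": {"a": 2, "b": 3}},
})
print(handle_request(request))
```

When VS Code (or any MCP client) is pointed at such a server, the model can discover its tools and issue `tools/call` requests like the one above on the user's behalf.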

Negocios & WordPress
226. ¿El fin de las páginas web?

Apr 8, 2025 · 58:53


✏️ Subscribe: https://youtu.be/2R8Emhy6_dw

Welcome to another exciting episode of Negocios y WordPress, where we explore the fascinating world of web development, automation, and the latest technology trends. In episode 226 we analyze the impact of artificial intelligence (AI) on the future of web pages and how web development professionals can adapt to this evolution.

The end of web pages as we know them? The episode's title is striking and somewhat apocalyptic: "The end of web pages." Rather than an absolute end, though, we are talking about a significant evolution of the web ecosystem. AI is transforming how we interact with web pages, which raises crucial questions about the future of web development and user experience design.

AI and automation: revolutionizing the web. AI no longer just automates repetitive tasks; it now positions itself as a tool capable of designing and developing websites automatically. Tools like Lovable.dev and Vercel's V0 let users generate web and mobile applications simply by talking to a chatbot. Google has launched its own version with Firebase Studio, integrating advanced AI capabilities to generate custom applications.

AI in the design process. The idea that a bot can design a web interface raises an interesting question: what will the role of human designers be? The answer seems to lie in human strategy and creativity. Designers will need to focus more on user experience (UX) and on conveying brand identity effectively through automated media.

SEO and structured data in the AI era. Traditional SEO is evolving too. AI-based search engines and virtual assistants can now access and analyze information in new and efficient ways. To stay relevant, it is crucial that developers start paying more attention to:
• Open APIs: ensure your data is accessible through robust API calls.
• Structured data: use Schema and other data schemas to make content more readable for AI.
• Review optimization: maintain and promote authentic, verified opinions and reviews.

Reflections and future strategies. Web developers and designers have a crucial role in adapting and transforming their skills. Some strategies for staying relevant:
• Focus on strategy and branding: strategy becomes a fundamental pillar. As a developer, your mission will be to help clients understand how to differentiate themselves and manage their online presence efficiently. A well-articulated branding strategy will be vital for navigating the future of marketing and web development.
• Use modern tools: adapt to the new tools AI offers. From Visual Studio Code to automated deployment with Firebase Studio, staying up to date is essential.

Conclusion: the evolution continues. The world of web development is constantly evolving. AI is redefining how we design, develop, and consume content online. Even so, the principles of good strategy, solid branding, and well-structured data will remain essential. Want to know more? Join our Telegram community to discuss this and other fascinating topics in web development and digital marketing.

Frequently Asked Questions (FAQ)
1. How is AI impacting web development? AI is automating tasks and offering new ways to generate and manage websites and applications, potentially reducing the need for repetitive manual work.
2. Will traditional web pages disappear? They won't disappear immediately, but the way we interact with them will change. Information will be consumed directly through virtual assistants and other interfaces.
3. What can developers do to adapt to these changes? Developers should focus on strategy, structured-data optimization, open APIs, and user experience. It is also important to stay current with emerging tools and technologies.

We look forward to your comments in the section below! Follow us for more episodes full of insights and innovative strategies.
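The structured-data advice in this episode can be made concrete with a schema.org JSON-LD block embedded in a page. Product and AggregateRating are real schema.org types; the plugin name and rating values below are made up for the example.

```python
import json

# A schema.org Product with review data, the kind of markup that lets
# AI assistants and search engines read a page's content reliably.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example WordPress Plugin",
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.8",
        "reviewCount": "132",
    },
}

# Pages embed this as a script tag of type application/ld+json.
snippet = f'<script type="application/ld+json">{json.dumps(product)}</script>'
print(snippet)
```

In WordPress this block would typically be emitted by an SEO plugin or a theme hook; the point is that the data is machine-readable regardless of how the visible page is rendered.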

The .NET Core Podcast
From Code to Cloud in 15 Minutes: Jason Taylor's Expert Insights And The Clean Architecture Template

Apr 4, 2025 · 62:14


RJJ Software's Software Development Service: this episode of The Modern .NET Show is supported, in part, by RJJ Software's Podcasting Services. Whether your company is looking to elevate its UK operations or reshape its US strategy, we can provide tailored solutions that exceed expectations.

Show Notes

"So I've been focused on the code to cloud journey, I like to call it, for the template. And two years ago, my goal was to provide a solution that could take you from code to cloud in 45 minutes or less. So I wanted it to be "file new project" to deploy a solution on Azure—because that's where my main focus is—within 45 minutes." — Jason Taylor

Welcome friends to The Modern .NET Show; the premier .NET podcast, focusing entirely on the knowledge, tools, and frameworks that all .NET developers should have in their toolbox. We are the go-to podcast for .NET developers worldwide, and I am your host: Jamie "GaProgMan" Taylor.

In this episode, Jason Taylor (no relation) joined us to talk about his journey from Classic ASP to .NET and Azure. He also discusses clean architecture's maintainability and his open-source Clean Architecture Solution template for ASP.NET Core, along with strategies for learning new frameworks and dealing with complexity.

"Right now the template supports PostgreSQL, SQLite, and SQL Server. If you want to support MySQL, it's relatively easy to do because there's already a Bicep module or a Terraform module that you can go in and use. So I went from 45 minutes to now I can get things up and running in, like, I don't know, two minutes of effort and 15 minutes of waiting around while I make my coffee." — Jason Taylor

Along the way, we talk about some of the complexities involved in creating a template that supports multiple different frontend technologies and .NET Aspire (which was news to me when we recorded), all the while maintaining the goal of being the simplest approach for enterprise development with Clean Architecture.

Anyway, without further ado, let's sit back, open up a terminal, type in `dotnet new podcast` and we'll dive into the core of Modern .NET.

Supporting the Show: if you find this episode useful in any way, please consider supporting the show by leaving a review (check our review page for ways to do that), sharing the episode with a friend or colleague, buying the host a coffee, or becoming a Patron of the show.

Full Show Notes: the full show notes, including links to some of the things we discussed and a full transcription of this episode, can be found at: https://dotnetcore.show/season-7/from-code-to-cloud-in-15-minutes-jason-taylors-expert-insights-and-the-clean-architecture-template/

Jason's Links:
• Jason's Clean Architecture repo on GitHub
• Jason's Northwind Traders with Clean Architecture repo on GitHub
• Connect with Jason
• Jason's RapidBlazor repo on GitHub

Other Links:
• C# Dev Kit for Visual Studio Code
• Code, Coffee, and Clever Debugging: Leslie Richardson's Microsoft Journey and the C# Dev Kit in Visual Studio Code with Leslie Richardson
• dotnet scaffold
• devcontainers
• .NET Aspire
• Azure Developer CLI
• GitHub CLI
• Obsidian

Supporting the show: leave a rating or review, buy the show a coffee, or become a patron.
Getting in Touch: via the contact page, or by joining the Discord.

Remember to rate and review the show on Apple Podcasts, Podchaser, or wherever you find your podcasts; this will help the show's audience grow. Or you can just share the show with a friend. And don't forget to reach out via our Contact page. We're very interested in your opinion of the show, so please get in touch. You can support the show by making a monthly donation on the show's Patreon page at: https://www.patreon.com/TheDotNetCorePodcast.

Music created by Mono Memory Music, licensed to RJJ Software for use in The Modern .NET Show.
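The core idea behind the Clean Architecture template discussed here is the dependency rule: domain and application code never reference infrastructure; infrastructure implements interfaces (ports) the inner layers define. A language-neutral sketch of that rule follows (the real template is C#/.NET; the entity and repository names are invented for illustration):

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class TodoItem:
    """Domain layer: a plain entity with no framework imports."""
    id: int
    title: str

class TodoRepository(Protocol):
    """Application layer defines the port it needs, not an implementation."""
    def add(self, item: TodoItem) -> None: ...
    def all(self) -> list[TodoItem]: ...

class InMemoryTodoRepository:
    """Infrastructure layer: one interchangeable implementation of the port."""
    def __init__(self) -> None:
        self._items: list[TodoItem] = []
    def add(self, item: TodoItem) -> None:
        self._items.append(item)
    def all(self) -> list[TodoItem]:
        return list(self._items)

def create_todo(repo: TodoRepository, id: int, title: str) -> TodoItem:
    """Use case: depends only on the abstract port, so swapping PostgreSQL,
    SQLite, or SQL Server (as the template supports) never touches it."""
    item = TodoItem(id, title)
    repo.add(item)
    return item

repo = InMemoryTodoRepository()
create_todo(repo, 1, "Ship the template")
print([i.title for i in repo.all()])  # → ['Ship the template']
```

This inversion is what makes the maintainability Jason describes possible: the database choice is a detail plugged in at the edge, not a decision baked into the use cases.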

HTML All The Things - Web Development, Web Design, Small Business

Choosing the right code editor can make or break a web developer's workflow. In this episode, we dive into the Top 5 Code Editors for Web Developers, exploring their strengths, quirks, and everything in between. From the widely loved Visual Studio Code to the blazing-fast newcomer Zed, we discuss which editors could suit your coding style. Whether you're a fan of Vim's keyboard mastery, WebStorm's all-in-one features, or experimenting with modern tools like Cursor, there's something here for everyone. Tune in to find the perfect fit for your development journey! Show Notes: https://www.htmlallthethings.com/podcasts/top-5-code-editors-for-web-developers

The Azure Podcast
Episode 515 - Building Copilots

Mar 27, 2025


In this episode of the Azure Podcast, Sujit and the team, including Cale, Russell, and Cynthia, are joined by special guest Matteo Pagani, a Cloud Solutions Architect in the Tech Strategy team at Microsoft. Matteo provides insights into the agentic world of Copilot, explaining how agents can enhance business processes and improve efficiency. Tune in to learn about the practical applications of these technologies and how they can be integrated into existing workflows.

Media file: https://azpodcast.blob.core.windows.net/episodes/Episode515.mp3
YouTube: https://youtu.be/qMJ88BLbTVo

Resources:
• Overview of Microsoft 365 Copilot extensibility: https://learn.microsoft.com/en-us/microsoft-365-copilot/extensibility/
• Building declarative agents with Visual Studio Code, Copilot Studio and Agent Builder: https://learn.microsoft.com/en-us/microsoft-365-copilot/extensibility/overview-declarative-agent
• Building custom engine agents with Visual Studio Code and Copilot Studio: https://learn.microsoft.com/en-us/microsoft-365-copilot/extensibility/overview-custom-engine-agent
• My blog with some fun experiments with multi-agent scenarios: https://www.developerscantina.com/

Other updates:
• Announcing GA for Azure Container Apps Serverless GPUs | Microsoft Community Hub
• https://www.linkedin.com/pulse/introducing-deep-reasoning-agent-flows-copilot-studio-charles-lamanna-n1zxc/
• Let's try GitHub Copilot Agent mode in VS Code to build a FULL app!

Emílias Podcast
Daniele Pishinin: Inspirando Mulheres na Tecnologia e no Front-end

Emílias Podcast

Play Episode Listen Later Mar 27, 2025 33:30


Daniele Pishinin, a software engineer specialized in front-end technologies, shared her journey on the Emílias Podcast. She works as a Frontend Engineer. She is also a Google Women Techmakers ambassador, a mentor at Reprograma, and a collaborator in community events such as DevOps Day Campinas 2024. During the episode, Daniele discussed topics including her motivation for entering the computing field, highlighting the impact of video games in her childhood and her choice of a career focused on front-end development. She shared good practices for accessibility and security in application development, including the use of guidelines such as those from the W3C. She also discussed the tools she uses at work, such as Visual Studio Code and React Query. Daniele also recounted challenges faced by women in technology, including harassment early in her career and the difficulties faced by women transitioning into the field. As a mentor at Reprograma, she stressed the importance of creating opportunities for women to enter the industry. On her role as a Google Women Techmakers ambassador, she emphasized the program's role in connecting women and strengthening support networks. Finally, Daniele encouraged women to persist in computing and recommended the films Estrelas Além do Tempo (Hidden Figures) and Batalhão 6888 (The Six Triple Eight), as well as the book The Brothers Karamazov.
She closed the episode by expressing her thanks and reiterating her availability for professional connections via LinkedIn.
Daniele's links:
https://www.instagram.com/danipishinin/
https://www.danipishinin.com/
https://www.linkedin.com/in/danipishinin/
Daniele's recommendations:
Estrelas Além do Tempo https://www.imdb.com/pt/title/tt4846340
Batalhão 6888 https://www.imdb.com/pt/title/tt24458622
Irmãos Karamázov https://www.goodreads.com/book/show/43176794-os-irm-os-karamazov
Interviewers:
Adolfo Neto - Professor at UTFPR https://adolfont.github.io/
Nathálya Chaves - Information Systems student at UTFPR and Emílias Podcast scholarship holder.
Editor: Allax Almeida
Episode 122 of the Emílias Podcast. The Emílias Podcast is an outreach project of UTFPR Curitiba and part of the Rede Emílias de Podcasts https://fronteirases.github.io/redeemilias . Learn all about the Emílias - Armação em Bits program at https://linktr.ee/Emilias #podcast #EMILIAS

Thoughtstuff - Tom Morgan on Microsoft Teams, Skype for Business and Office 365 Development

Audio version of video on YouTube. A New Assignment: using ClothesPilot to pack for MVP Summit App caching for your tab app Teams Toolkit for Visual Studio Code update – March 2025 Teams Phone extensibility powered by Azure Communication Services Copilot in Azure is now integrated in the Voice and Video Insights dashboard Subscribe to all my videos at: https://thoughtstuff.co.uk/video Podcast: https://thoughtstuff.co.uk/itunes, https://thoughtstuff.co.uk/spotify or https://thoughtstuff.co.uk/podcast Blog: https://blog.thoughtstuff.co.uk

.NET in pillole
285 - Prompty, un playground per i nostri prompt (dentro VS Code)

.NET in pillole

Play Episode Listen Later Mar 24, 2025 8:32


Writing, managing, debugging, and testing prompts is an increasingly common activity, and Prompty can help by giving us a playground inside Visual Studio Code: a tool that makes writing and testing prompts easier. - https://github.com/microsoft/p... - https://prompty.ai/ - https://marketplace.visualstud... #prompty #ai #azureai #openai #dotnetinpillole #vscode

Microsoft Cloud IT Pro Podcast
Episode 397 – Local LLMs: Why Every Microsoft 365 & Azure Pro Should Explore Them

Microsoft Cloud IT Pro Podcast

Play Episode Listen Later Mar 13, 2025 44:22 Transcription Available


Welcome to Episode 397 of the Microsoft Cloud IT Pro Podcast. In this episode, Scott and Ben dive into the world of local LLMs—large language models that run entirely on your device. We're going to explore why more IT pros and developers are experimenting with them, the kinds of models you can run, and how you can integrate them directly into your workflow, including in Visual Studio Code for AI-assisted coding. Your support makes this show possible! Please consider becoming a premium member for access to live shows and more. Check out our membership options.
Show Notes:
Ollama
Running LLMs Locally: A Beginner's Guide to Using Ollama
open-webui/open-webui
LM Studio
LM Studio Model Catalog
Why do people like Ollama more than LM Studio?
A Starter Guide for Playing with Your Own Local AI!
host ALL your AI locally
Run your own AI (but private)
About the sponsors
Would you like to become the irreplaceable Microsoft 365 resource for your organization? Let us know!
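Local model runners like the Ollama tool mentioned in the show notes are typically driven through a small local HTTP API. As a minimal sketch, assuming an Ollama instance is running on its default port and a model such as `llama3` has already been pulled, a completion can be requested from Python using only the standard library:

```python
import json
import urllib.request

# Ollama's default local endpoint; no API key, and nothing leaves the machine.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint.

    stream=False asks for one complete JSON reply instead of a stream
    of partial chunks.
    """
    return {"model": model, "prompt": prompt, "stream": False}


def query_local_llm(model: str, prompt: str) -> str:
    """Send a prompt to the locally running model and return its reply."""
    payload = json.dumps(build_generate_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]


# Usage (requires `ollama serve` running and `ollama pull llama3` done first):
# print(query_local_llm("llama3", "Explain local LLMs in one sentence."))
```

The privacy argument the episode makes follows directly from this design: the request never leaves localhost, so prompts and completions stay on the device.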

Camino a Moscu
Tunear Visual Studio Code. Mis Extensiones.

Camino a Moscu

Play Episode Listen Later Mar 12, 2025 16:12


I go through, one by one, my extensions in Visual Studio Code (vscode) to tailor it completely to my taste and develop PHP (among other technologies) more comfortably.
anan.jetbrains-darcula-theme
devsense.composer-php-vscode
devsense.intelli-php-vscode
devsense.phptools-vscode
devsense.profiler-php-vscode
donjayamanne.githistory
eamodio.gitlens
gieson.writetimestamp
huizhou.githd
junstyle.php-cs-fixer
lacroixdavid1.vscode-format-context-menu
maketes.git-patch-utility
mhutchie.git-graph
ms-ceintl.vscode-language-pack-es
ms-vscode-remote.remote-ssh
ms-vscode-remote.remote-ssh-edit
ms-vscode-remote.remote-wsl
ms-vscode.remote-explorer
octref.vetur
oderwat.indent-rainbow
open-southeners.php-support-utils
ronvanderheijden.phpdoc-generator
sonarsource.sonarlint-vscode
thangnc.ssh-client
whatwedo.twig
xdebug.php-debug

SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
SANS Stormcast Thursday Feb 27th: High Exfil Ports; Malicious VS Code Theme; Developer Workstation Safety; NAKIVO PoC; OpenH264 and rsync vuln;

SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast

Play Episode Listen Later Feb 27, 2025 6:45


Attackers' Use of Ephemeral Ports Attackers often use ephemeral ports to reach out to download additional resources or exfiltrate data. This can be used, with care, to detect possible compromises. https://isc.sans.edu/diary/%5BGuest%20Diary%5D%20Malware%20Source%20Servers%3A%20The%20Threat%20of%20Attackers%20Using%20Ephemeral%20Ports%20as%20Service%20Ports%20to%20Upload%20Data/31710 Compromised Visual Studio Code Extension Downloaded by Millions Amit Assaraf identified a likely compromised Visual Studio Code theme that was installed by millions of potential victims. Amit did not disclose the exact malicious behaviour, but is asking victims to contact them for details. https://medium.com/@amitassaraf/a-wolf-in-dark-mode-the-malicious-vs-code-theme-that-fooled-millions-85ed92b4bd26 ByBit Theft Due to Compromised Developer Workstation ByBit and Safe{Wallet} disclosed that the record-breaking Ethereum theft was due to a compromised Safe{Wallet} developer workstation. A replaced JavaScript file targeted ByBit and altered a transaction signed by ByBit. https://x.com/benbybit/status/1894768736084885929 https://x.com/safe/status/1894768522720350673 PoC for NAKIVO Backup Replication Vulnerability This vulnerability allows the compromise of NAKIVO backup systems. The vulnerability was patched silently in November and never disclosed by NAKIVO. Instead, watchTowr now discloses details, including a proof-of-concept exploit. https://labs.watchtowr.com/the-best-security-is-when-we-all-agree-to-keep-everything-secret-except-the-secrets-nakivo-backup-replication-cve-2024-48248/ OpenH264 Vulnerability https://github.com/cisco/openh264/security/advisories/GHSA-m99q-5j7x-7m9x rsync vulnerability exploited https://www.cisa.gov/known-exploited-vulnerabilities-catalog
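The detection idea in the first story, attackers standing up listeners on ephemeral ports to receive uploaded data, can be prototyped as a simple log filter. A minimal sketch follows; the flow-record format, field names, and thresholds here are illustrative assumptions, not taken from the ISC diary:

```python
from collections import Counter

# IANA designates 49152-65535 as the dynamic/private (ephemeral) range;
# legitimate long-lived services rarely listen there.
EPHEMERAL_START = 49152


def suspicious_destinations(connections, min_hits=3):
    """Flag destination (ip, port) pairs where the *service* side sits in
    the ephemeral range: unusual for a legitimate server, and a hint that
    a host may be uploading data to an attacker-controlled listener.

    `connections` is an iterable of (dst_ip, dst_port, bytes_out) tuples,
    e.g. parsed from firewall or flow logs (format assumed here).
    """
    hits = Counter()
    volume = Counter()
    for dst_ip, dst_port, bytes_out in connections:
        if dst_port >= EPHEMERAL_START:
            hits[(dst_ip, dst_port)] += 1
            volume[(dst_ip, dst_port)] += bytes_out
    # Repeated flows to one ephemeral-port listener stand out;
    # single hits are usually ordinary client-side traffic.
    return [
        {"dst": dst, "count": hits[dst], "bytes_out": volume[dst]}
        for dst in hits
        if hits[dst] >= min_hits
    ]


# Example: three large uploads to the same high port are flagged,
# while a one-off connection is ignored.
flows = [
    ("203.0.113.9", 51820, 5_000_000),
    ("203.0.113.9", 51820, 7_000_000),
    ("203.0.113.9", 51820, 6_500_000),
    ("198.51.100.4", 60001, 1_200),
]
alerts = suspicious_destinations(flows)
```

As the diary notes, this heuristic needs care: ephemeral destination ports also appear in plenty of benign traffic (P2P, some CDNs), so volume and repetition filters like the ones above help cut false positives.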

AI DAILY: Breaking News in AI
WHY ARE CHATBOTS SO CHATTY?

AI DAILY: Breaking News in AI

Play Episode Listen Later Feb 25, 2025 4:04


Plus: 1000 Musicians Release AI Protest Album. Like this? Get AIDAILY, delivered to your inbox, every weekday. Subscribe to our newsletter at https://aidaily.us
Why AI Chatbots Are So Unbearably Chatty
AI chatbots often produce verbose responses due to their design to predict the most probable next word in a sequence, leading to excessive and sometimes redundant information. This verbosity can stem from attempts to cover all possible interpretations of a query, resulting in lengthy and less concise answers. Additionally, chatbots may over-explain to compensate for their lack of true understanding, aiming to appear more helpful to users.
Musicians Release Silent Album to Protest UK's AI Copyright Changes
Over 1,000 musicians, including Kate Bush and Cat Stevens, have released a silent album titled "Is This What We Want?" to protest proposed UK copyright law changes. The legislation would allow AI developers to train models on artists' works without compensation, requiring creators to opt out to prevent usage. Artists argue this undermines copyright principles and threatens their livelihoods.
Alibaba Unveils QwQ-Max-Preview AI Model to Challenge DeepSeek R1 and OpenAI o1
Alibaba has introduced QwQ-Max-Preview, its latest AI reasoning model designed to compete with DeepSeek's R1 and OpenAI's o1. This release aligns with Alibaba's commitment to invest $53 billion in cloud and AI infrastructure over the next three years, aiming to enhance its position in the AI sector.
Google Launches Free AI Coding Assistant with Generous Usage Limits
Google has introduced a free version of Gemini Code Assist, its AI-powered coding tool, now available globally for individual developers. This release offers up to 180,000 code completions per month, significantly surpassing the 2,000 completions provided by competitors like GitHub Copilot. Gemini Code Assist supports 38 programming languages and integrates seamlessly with development environments such as Visual Studio Code, GitHub, and JetBrains. While the free tier is comprehensive, advanced features like productivity metrics and integration with Google Cloud services require a subscription to paid plans.
AI Chatbots in Therapy: Balancing Innovation and Ethical Concerns
AI chatbots can improve mental health care access, but raise ethical concerns. California proposes a ban on AI posing as human therapists due to misrepresentation and potential harm. User safety and trust are crucial, and AI should complement, not replace, human therapy.
Seeking Late-Night Comfort from AI: A Personal Reflection
In a moment of late-night anxiety about impending life changes, the author turned to ChatGPT for reassurance, asking, "Am I real?" While the AI provided summaries of philosophical perspectives on identity, the interaction offered only temporary relief. This experience highlights the limitations of seeking quick fixes from technology for deep-seated human concerns and underscores the importance of engaging with genuine self-reflection and human connection.

.NET in pillole
279 - Nuove funzionalità per GitHub Copilot

.NET in pillole

Play Episode Listen Later Feb 10, 2025 11:53


GitHub Copilot keeps growing, simplifying the work of the developers who use it. On February 6th, the GA release of Copilot Edits for Visual Studio Code was announced, along with the introduction of Copilot Agents in VS Code Insiders.
https://github.blog/news-insights/product-news/github-copilot-the-agent-awakens/
https://github.com/marketplace?type=apps&copilot_app=true
#copilot #github #agents #dotnetinpillole #podcast

LINUX Unplugged
600: Everyone, Everywhere, All at Once

LINUX Unplugged

Play Episode Listen Later Feb 3, 2025 68:50 Transcription Available


We celebrate 600 episodes, announce a new show feature, and officially launch the FreeBSD challenge.
Sponsored By:
Tailscale: Tailscale is a programmable networking software that is private and secure by default - get it free on up to 100 devices!
1Password Extended Access Management: 1Password Extended Access Management is a device trust solution for companies with Okta, and they ensure that if a device isn't trusted and secure, it can't log into your cloud apps.
Support LINUX Unplugged
Links:

Thoughtstuff - Tom Morgan on Microsoft Teams, Skype for Business and Office 365 Development

Audio version of video on YouTube. Enhance AI-generated bot messages Teams Toolkit for Visual Studio Code update – January 2025 SQL Server: The Polymath of Databases? Subscribe to all my videos at: https://thoughtstuff.co.uk/video Podcast: https://thoughtstuff.co.uk/itunes, https://thoughtstuff.co.uk/spotify or https://thoughtstuff.co.uk/podcast Blog: https://blog.thoughtstuff.co.uk

AI DAILY: Breaking News in AI
WILL AI CURE LONELINESS?

AI DAILY: Breaking News in AI

Play Episode Listen Later Jan 10, 2025 3:45


Plus: Can You Get Cloned In A Single Conversation? Like this? Get AIDAILY, delivered to your inbox, every weekday. Subscribe to our newsletter at https://aidaily.us
AI Companions: Asian Tech Tackles Loneliness with AI Robots at CES 2025
At CES 2025, Asian tech firms like Samsung and TCL are betting on AI companion robots to tackle loneliness. These robots combine cute designs with emotional intelligence to offer companionship, aiming to provide comfort in an increasingly isolated society. However, the true impact on human loneliness remains to be seen.
One Conversation is All it Takes for This AI to Deepfake Your Entire Personality
A new AI model can replicate a person's personality after just one conversation, creating digital clones with 85% accuracy. This technology, based on a study by Stanford and Google DeepMind, raises concerns about privacy, identity theft, and the potential for scamming using deepfake personalities.
Discover Alibaba's AI Coder: Build Apps in Minutes with Tongyi Lingma
Alibaba Cloud has introduced Tongyi Lingma, an AI-powered coding tool that can construct applications within minutes. This system, enhancing productivity by over 10 times, supports both Microsoft's Visual Studio Code and JetBrains' IDEs, potentially outmatching competitors like ChatGPT in creative coding tasks.
AI and Meaningful Work: Enhancing Human Roles in the Workplace
AI is transforming the workplace by not just automating tasks but elevating the quality of work, enabling employees to engage in more meaningful activities. This shift suggests that AI leads to job enhancement rather than mere displacement, offering a glimpse into the future of work.
Is Personal Training Making a Comeback? 2025 Trends Suggest So
Despite the rise of AI fitness apps, personal training is seeing a resurgence in 2025, thanks to the enduring appeal of Zoom-based remote sessions. This trend reflects a preference for personalized guidance over automated coaching, highlighting the human touch's value in fitness routines.
AI Job Market Insights: WEF Predicts Growth in Hands-On Roles
The World Economic Forum's 2025 report highlights a surge in demand for manual and technical skills, with AI automation expected to reduce clerical jobs. Conversely, there's a growing need for AI, big data, and cybersecurity expertise, suggesting a complex job market transformation.

Atareao con Linux
ATA 656 Configurar el gestor de archivos más rápido de Linux

Atareao con Linux

Play Episode Listen Later Dec 26, 2024 23:18


#yazi is a lightweight and very, very fast file manager for the #linux terminal. You can customize it using #lua as a scripting language. It's remarkable how much we love tinkering. Without a doubt this is directly related to my passion for Linux, and to editors like Neovim. And of course it is directly related to programming. But not only that. Surely the same happens to you with other code editors like Visual Studio Code, and of course with browsers like Firefox or Chrome. Everything I'm telling you about has something in common: plugins. Plugins let you customize the behavior of any application and adapt it exactly to your needs. You can't deny it: you love plugins, just as you love the apps on your phone. You can spend an entire Sunday afternoon installing and uninstalling apps on your phone, or installing and trying out extensions in your browser of choice. And if, on top of that, you can program your own plugins, that borders on ecstasy. I'm telling you all this because of Yazi, the fastest file manager on Linux, which also supports plugins and lets you program them in Lua, just as with Neovim. In this episode I want to talk about Yazi, how you can configure the fastest file manager on Linux, and why I ended up programming a couple of plugins to adapt it precisely to my needs. More information and links in the episode notes.

Tech45
#681: Een ergonomisch wansmaakmiddel

Tech45

Play Episode Listen Later Dec 18, 2024 90:34


Follow-up: DIGI, Sun sensors, Jared Isaacman, Blue Origin New Glenn, Deep Space Gateway, Re #677: NMBS definitively scraps wifi on Belgian trains. Topics: the year-end lists, the best (or worst) of 2️⃣0️⃣2️⃣4️⃣. Best film

AWS Morning Brief
The re:Invent Stragglers

AWS Morning Brief

Play Episode Listen Later Dec 16, 2024 5:21


AWS Morning Brief for the week of December 16th, 2024, with Corey Quinn.
Links:
Amazon Bedrock Guardrails reduces pricing by up to 85%
Amazon CloudWatch now provides centralized visibility into telemetry configurations
Amazon EC2 F2 instances, featuring up to 8 FPGAs, are generally available
Amazon SES now offers Global Endpoints for multi-region sending resilience
AWS Toolkit for Visual Studio Code now includes Amazon CloudWatch Logs Live Tail
Accelerate your AWS Graviton adoption with the AWS Graviton Savings Dashboard
Capture data changes while restoring an Amazon DynamoDB table
Understand the benefits of physical replication in Amazon RDS for PostgreSQL Blue/Green Deployments
How AWS sales uses Amazon Q Business for customer engagement
AWS Network Firewall Geographic IP Filtering launch
Issue with DynamoDB local - CVE-2022-1471

Les Cast Codeurs Podcast
LCC 319 - le ramasse-miettes-charognes

Les Cast Codeurs Podcast

Play Episode Listen Later Dec 16, 2024 70:05


In this episode, in audio and in video (youtube.com/lescastcodeurs), Guillaume and Emmanuel discuss Go's 15th anniversary, a new approach to garbage collection, LLMs in Java applications, observability, a supply-chain attack via javac, and more. Recorded December 13, 2024. Episode download: LesCastCodeurs-Episode-319.mp3
News
Languages
Go celebrates its 15th anniversary! https://go.dev/blog/15years Looks back on the 15 years: the fix for for-loop gotchas (notably, loop variables used to be scoped to the whole loop), the fact that compilation fails when a newer Go version is required (only since Go 1.21, alongside toolchain management, which itself only arrived in 2023!), and opt-in telemetry, also recent.
Building OpenJDK from source on macOS https://www.morling.dev/blog/building-openjdk-from-source-on-macos/ Surprisingly, it is not very complicated.
A paper on the mark-scavenge approach for garbage collection https://inside.java/2024/11/22/mark-scavenge-gc/ A research paper. Using reachability as a proof of liveness is not ideal: an object can be reachable yet never accessed again by the program. The regions poorest in live objects have their objects moved to another region so the region can be freed; that is classic GC behavior. Two methods exist: mark-evacuate, which works in two phases so liveness can evolve in between, and scavenge, which moves a live object as soon as it is discovered. Using ZGC, they ran experiments to see which objects were considered live and moved needlessly; the results show a high rate of objects moved for nothing. They propose a different algorithm: mark live objects but do not move them before the next GC, giving them a chance to become unreachable. This eliminates many useless relocations, since objects become unreachable within one GC cycle. Up to a 91% reduction! Particularly notable on CPU-loaded machines.
Short-lived vs. long-lived access tokens https://grayduck.mn/2023/04/17/refresh-vs-long-lived-access-tokens/ Why long-lived tokens (i.e. refresh tokens) are used together with short-lived ones in OAuth 2.0. Refresh tokens simplify revocation: only the auth server has to check revocation, while clients check expiry and signature validity. Refresh tokens travel only between endpoints, whereas access tokens get around a lot, so trust boundaries are not crossed. Because a refresh token is used infrequently, it can be protected in an enclave. Grant changes become simpler while remaining distributable. The history of access and refresh tokens also makes abuse and attacks easier to trace. The downsides: the flow is more complicated, and the auth server is a single point of failure, although that can be mitigated.
Java Advent is back https://www.javaadvent.com/calendar Behind the scenes: Java integrity by default (and its consequences for the ecosystem), Timefold (solver), JUnit 5 extensions, OpenTelemetry via Java agent vs. Micrometer, static code analysis, CQRS and modern Java features, simple Java (no compilation, no objects), fullstack dev with Quarkus as backend.
José Paumard introduces and explains Gatherers in Java 24 in this video https://inside.java/2024/11/26/jepcafe23/
Libraries
Micronaut 4.7, with LangChain4j integration https://micronaut.io/2024/11/14/micronaut-framework-4-7-0-released/
Combining the Spock test framework with Cucumber https://www.sfeir.dev/back/spock-framework-revolutionnez-vos-tests-unitaires-avec-la-puissance-du-bdd-et-de-cucumber/ Domain experts can write their tests in the Gherkin format (from Cucumber) and developers can implement the corresponding assertions through the Spock integration, for very readable tests.
Spring 6.2 https://spring.io/blog/2024/11/14/spring-framework-6-2-0-available-now
@Fallback beans, improvements to SpEL and to test support, support for escaping property placeholders, new background initialization of beans, and plenty more.
How to build a Java LLM application running 100% in Java with Jlama https://quarkus.io/blog/quarkus-jlama/ A blog post by Mario Fusco, Mr. API and Java and Drools. Uses Jlama + Quarkus + LangChain4j. Explains the advantages of the pure-Java approach, such as a single lifecycle, testing models quickly, security (everything runs in-process), a monolith (ahah), simplified observability, simplified distribution (e.g. embedded applications), etc.
Vert.x 5 in second incubation https://vertx.io/blog/eclipse-vert-x-5-candidate-2-released/ Support for Java modules (but many Vert.x modules themselves do not support them yet), io_uring support in Vert.x core, client-side load balancing; the callback model is no longer supported, long live Futures; many improvements around gRPC and more.
An article on Spring AI and audio multimodality https://spring.io/blog/2024/12/05/spring-ai-audio-modality Shows the evolution of the Spring AI APIs. Builds on the latest OpenAI models, with examples such as a voice chatbot: how to record audio and pass it to OpenAI.
How to enable experimental HTTP/3 support in Spring Boot https://spring.io/blog/2024/11/26/http3-in-reactor-2024 Netty does the heavy lifting, then Spring. The article describes the steps to use it in your Spring Boot or Spring Cloud Gateway applications, and also covers the client side (client apps), which is nice.
Infrastructure
A survey of observability offerings http://blog.ippon.fr/2024/11/18/observabilite-informatique-comprendre-les-bases-2eme-partie/ A survey of the main open-source and SaaS observability offerings, plus a few outsiders. A good starting point to scope out what would suit you. From the Ippon blog.
Web
Angular 19 released https://blog.ninja-squad.com/2024/11/19/what-is-new-angular-19.0/ Stable Signal APIs, automatic migration to signals, standalone components by default, new linkedSignal and resource APIs, big improvements to SSR and HMR.
Also an article from Sfeir on Angular 19 https://www.sfeir.dev/front/angular-19-tout-ce-quil-faut-savoir-sur-les-innovations-majeures-du-framework/ Standalone components by default (to limit dependency problems), with a strict mode to enforce it (or fail the build); flagging of unused imports; @let for local variables in templates; linkedSignal (experimental) to link signals together (cascading changes after an event); incremental hydration (content becomes progressively interactive as it loads, on the visible or needed parts of the page) plus event replay; routing and render modes for hybrid rendering; hot module replacement; etc.
The State of Frontend, the latest compilation of developer preferences on the front end https://tsh.io/state-of-frontend/ React is in the lead, followed by Vue and Svelte; Angular is only 4th. Among rendering frameworks, Next.js has an absolute majority; then come Nuxt and Astro. Zod is the preferred validation solution. For date handling, date-fns leads, followed by moment.js. For state management, React Context API is in first place, but the runners-up are all for React as well! Heavy use of lodash for all sorts of utilities. For fetching remote resources, the native Fetch API and Axios are the two winners. For deployment, Vercel comes first. For CI/CD, lots of GitHub Actions, followed by GitLab CI. In package management, despite good alternatives, NPM still takes the lion's share. Overwhelming use of Node.js as the JavaScript runtime for front-end development. As for typing, many use TypeScript, plus a bit of JSDoc, and most respondents think TypeScript has overtaken JavaScript in usage. Among native browser APIs, Fetch, Storage, and WebSockets are the most used. PWA popularity should keep making steady progress. For design systems, shadcn/ui leads, followed by Material, then Bootstrap. For styling, a good mix of plain old CSS, Tailwind, and Sass/CSS. Jest comes first among test frameworks. Three quarters of front-end developers use Visual Studio Code; JetBrains picks up the remaining quarter. For builds, Vite gets four fifths of the votes. ESLint and Prettier are the two favorites for checking code.
Sometimes you would like to try out a JavaScript library or framework without having to set up a whole project, with a build tool and everything else.
Julia Evans explore les différents cas de figure, suivant la façon dont ces librairies sont bundlées https://jvns.ca/blog/2024/11/18/how-to-import-a-javascript-library/ Certaines librairies permette de ne faire qu'un simple import dans une balise script Certaines frameworks sont distribués sous forme d'Universal Module Definition, sous CommonJS, d'ESmodule franchemet en tant que noob c'est compliqué quand même Data et Intelligence Artificielle L'impact de l'IA en entreprise et des accès aux documents un peu laxistes https://archive.ph/uPyhX l'indexing choppe tout ce qu'il peut et l'IA est tres puissante pour diriger des requetes et extraires les données qui auraient du etre plus restreintes Différentes manières de faire de l'extraction de données et de forcer la main à un LLM pour qu'il génère du JSON https://glaforge.dev/posts/2024/11/18/data-extraction-the-many-ways-to-get-llms-to-spit-json-content/ l'approche “je demande gentiment” au LLM, en faisant du prompt engineering en utilisant du function calling pour les modèles supportant la fonctionnalité, en particulier avant les approches de type “JSON mode” ou “JSON schema” ou effectivement si le modèle le supporte aussi, toujours avec un peu de prompting, mais en utilisant le “JSON mode” qui force le LLM a générer du JSON valide encore mieux avec la possibilité de spécifier un schema JSON (type OpenAPI) pour que le JSON en sortie soit “compliant” avec le schéma proposé Comment masquer les données confidentielles avec ses échanges avec les LLMs https://glaforge.dev/posts/2024/11/25/redacting-sensitive-information-when-using-generative-ai-models/ utilisation de l'API Data Loss Prevention de Google Cloud qui permet d'identifier puis de censurer / masquer (“redacted” en anglais) des informations personnelles identifiables (“PII”, comme un nom, un compte bancaire, un numéro de passeport, etc) pour des raison de sécurité, de privacy, pour éviter les brèche de données comme on en entend trop souvent parler dans les 
News Some embedding models can be used for code search https://glaforge.dev/posts/2024/12/02/semantic-code-search-for-programming-idioms-with-langchain4j-and-vertex-ai-embedding-models/ Guillaume searches for code snippets by typing a query in natural language. Some embedding models support different task types, such as question/answer, natural-language question with code as the answer, or other tasks like fact checking. The article uses a Google Cloud Vertex AI model, in Java, with LangChain4j. Google releases version 2 of Gemini Flash https://blog.google/technology/google-deepmind/google-gemini-ai-update-december-2024/ The new Gemini 2.0 Flash even beats Gemini 1.5 Pro on benchmarks, while being twice as fast as Gemini 1.5 Pro; pricing has not yet been announced, but presumably it will also be more affordable. Google presents Gemini 2 as the ideal LLM for "agents". Gemini offers true multimodality in its output (the first LLM on the market to do so): Gemini 2 can interleave text, images, and audio. Gemini 2 supports more than 100 languages, with 8 high-quality, fairly natural voices for the audio side. There is a new live speech-to-speech mode, in which you can even interrupt the LLM; this is what powers Project Astra, the mobile application shown at Google I/O that becomes a true live voice assistant on your phone. Google also announced a new experiment around programming assistants, Project Jules, which you can talk to live and share your code with, like a real pair programmer. Google presented Project Mariner, an agent packaged as a Chrome extension that lets you drive your browser as a personal research assistant, able to search the web and navigate websites to find the information you are looking for. This other article shows several demo videos of these features https://developers.googleblog.com/en/the-next-chapter-of-the-gemini-era-for-developers/ A new project called Deep Research lets you produce reports in Gemini Advanced: you give it a topic, the agent proposes an outline for a report on that topic (which you can validate or adjust), and Deep Research then searches the web for you and synthesizes its findings into a final report https://blog.google/products/gemini/google-gemini-deep-research/ Finally, Google AI Studio, besides letting you experiment with Gemini 2, also offers "starter apps" that show how to do object recognition in images, how to run searches with an agent connected to Google Maps, and so on. Google AI Studio can also share your screen, on mobile or desktop, so you can use it as an assistant that sees what you are doing and what you are coding, and can answer your questions. Methodology A GitHub article on how CPU over-utilization hurts application performance https://github.blog/engineering/architecture-optimization/breaking-down-cpu-speed-how-utilization-impacts-performance/ Surprisingly, they see performance effects of around 30%, which they attribute to staying below the thermal limit and the CPU frequency boost that follows. They therefore looked for their golden ratio, which for them sits around 60% utilization; they take parts of a Kubernetes cluster to run the workloads and add artificial CPU workloads (math-heavy tasks, for example) alongside. Security A supply-chain attack through javac https://xdev.software/en/news/detail/discovering-the-perfect-java-supply-chain-attack-vector-and-how-it-got-fixed It relies on annotation processors: annotation processors from dependencies are loaded and executed at project build time, discovered on the user classpath (via the ServiceLoader pattern). So if a dependency is compromised and an annotation processor is added or modified, you have an attack vector at compile time of the targeted project, essentially as soon as you open the IDE. As a workaround, enable -proc:none and enable annotation processors explicitly in your build tool. The JDK has also seen some improvements: the compiler reports when it executes an annotation processor, and in Java 23+ annotation processors are disabled by default. Conferences The list of conferences comes from Developers Conferences Agenda/List by Aurélie Vache and contributors: December 19, 2024: Normandie.ai 2024 - Rouen (France) January 20, 2025: Elastic{ON} - Paris (France) January 22-25, 2025: SnowCamp 2025 - Grenoble (France) January 24-25, 2025: Agile Games Île-de-France 2025 - Paris (France) January 30, 2025: DevOps D-Day #9 - Marseille (France) February 6-7, 2025: Touraine Tech - Tours (France) February 21, 2025: LyonJS 100 - Lyon (France) February 28, 2025: Paris TS La Conf - Paris (France) March 20, 2025: PGDay Paris - Paris (France) March 20-21, 2025: Agile Niort - Niort (France) March 25, 2025: ParisTestConf - Paris (France) March 26-29, 2025: JChateau Unconference 2025 - Cour-Cheverny (France) March 28, 2025: DataDays - Lille (France) March 28-29, 2025: Agile Games France 2025 - Lille (France) April 3, 2025: DotJS - Paris (France) April 10-11, 2025: Android Makers - Montrouge (France) April 10-12, 2025: Devoxx Greece - Athens (Greece) April 16-18, 2025: Devoxx France - Paris (France) April 29-30, 2025: MixIT - Lyon (France) May 7-9, 2025: Devoxx UK - London (UK) May 16, 2025: AFUP Day 2025 Lille - Lille (France) May 16, 2025: AFUP Day 2025 Lyon - Lyon (France) May 16, 2025: AFUP Day 2025 Poitiers - Poitiers (France) May 24, 2025: Polycloud - Montpellier (France) June 5-6, 2025: AlpesCraft - Grenoble (France) June 11-13, 2025: Devoxx Poland - Krakow (Poland) June 12-13, 2025: Agile Tour Toulouse - Toulouse (France) June 12-13, 2025: DevLille - Lille (France) June 24, 2025: WAX 2025 - Aix-en-Provence (France) June 26-27, 2025: Sunny Tech - Montpellier (France) July 1-4, 2025: Open edX Conference 2025 - Palaiseau (France) September 18-19, 2025: API Platform Conference - Lille (France) & Online October 2-3, 2025: Volcamp - Clermont-Ferrand (France) October 6-10, 2025: Devoxx Belgium - Antwerp (Belgium) October 16-17, 2025: DevFest Nantes - Nantes (France) November 6, 2025: dotAI 2025 - Paris (France) November 12-14, 2025: Devoxx Morocco - Marrakech (Morocco) April 23-25, 2026: Devoxx Greece - Athens (Greece) June 17, 2026: Devoxx Poland - Krakow (Poland) Contact us To react to this episode, come discuss on the Google group https://groups.google.com/group/lescastcodeurs Contact us on Twitter https://twitter.com/lescastcodeurs Record a crowdcast or ask a crowdquestion Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs All episodes and all the info at https://lescastcodeurs.com/
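The semantic code search item in the notes above boils down to three steps: embed the query, embed each candidate snippet, and rank snippets by vector similarity. The sketch below illustrates that retrieval step; it is not the Java/LangChain4j code from Guillaume's article, and the `embed` function here is a toy stand-in (a character-frequency vector) for a real embedding model.

```typescript
// Illustrative sketch of embedding-based code search. A real setup (e.g.
// LangChain4j with a Vertex AI embedding model) would call an embedding API;
// `embed` below is a toy substitute so the ranking logic can run end to end.

type Snippet = { id: string; code: string };

// Toy embedding: character-frequency vector over the lowercase alphabet.
// A production system would replace this with a real embedding model.
function embed(text: string): number[] {
  const v = new Array(26).fill(0);
  for (const ch of text.toLowerCase()) {
    const i = ch.charCodeAt(0) - 97;
    if (i >= 0 && i < 26) v[i] += 1;
  }
  return v;
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return na && nb ? dot / (Math.sqrt(na) * Math.sqrt(nb)) : 0;
}

// Rank snippets by similarity of their embeddings to the query embedding.
function searchCode(query: string, snippets: Snippet[]): Snippet[] {
  const q = embed(query);
  return [...snippets].sort(
    (x, y) =>
      cosineSimilarity(q, embed(y.code)) - cosineSimilarity(q, embed(x.code))
  );
}
```

In a real deployment the embeddings of the code corpus would be computed once and stored in a vector index, with only the query embedded at search time.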

Develpreneur: Become a Better Developer and Entrepreneur
How to Build Better Habits with Coding Standards

Develpreneur: Become a Better Developer and Entrepreneur

Play Episode Listen Later Dec 5, 2024 23:37


Season after season, the “Building Better Developers” podcast inspires tech enthusiasts to refine their craft by fostering productive habits. In a recent episode, hosts Rob Broadhead and Michael Meloche emphasized coding standards—a crucial but often overlooked pillar in software development. Here's a deep dive into their insights on how personal and team-wide coding standards can elevate your development game. Why Coding Standards Matter At their core, coding standards provide consistency and clarity. Whether you're an independent developer or part of a large team, they serve as guidelines for writing clean, maintainable, and professional code. Rob pointed out that following standards is not about adhering to rigid rules but about making life easier—for yourself and your team. Michael added a critical perspective: coding standards often extend beyond aesthetics. In industries like healthcare and finance, compliance with external standards like HIPAA or SOC is mandatory. Similarly, developers working on mobile apps must align with platform-specific requirements, such as those of the Apple App Store, to ensure their software is accepted and functions as intended. Personalizing Coding Standards The hosts encouraged listeners to start with personal coding standards before expanding to team-wide practices. Rob explained that simple habits, such as consistent indentation, intuitive variable naming, and clear function structuring, can dramatically improve readability and maintainability. He also highlighted tools like linters and formatters, which can automate the enforcement of these standards. Michael expanded on this idea, emphasizing the concept of “clean code.” By writing self-documenting code—where functions, variables, and structures clearly convey their purpose—developers can minimize reliance on inline comments. However, he noted the importance of documenting elusive bugs or unique solutions directly in the codebase to prevent future troubleshooting headaches. 
Leveraging Tools for Consistent Coding Standards The episode underscored the importance of adopting linting tools such as SonarLint, or the formatting and analysis features integrated into IDEs like Visual Studio Code. These tools can help enforce standards automatically, reducing the likelihood of human error. The hosts recommended configuring these tools for “format on save,” ensuring consistent styling across a team's codebase. Rob highlighted the productivity benefits of standardization, especially during code reviews and merges. Misaligned formats can create confusion, leading to unnecessary rework. By agreeing on a common setup and sharing IDE configurations, teams can streamline their development process and focus on meaningful changes. The Broader Impact of Standards Beyond the practicalities, coding standards contribute to a sense of professionalism and ownership. Rob likened them to a team's “stamp,” reflecting their identity and ethos. For individual developers, adhering to consistent standards fosters discipline, an essential trait for long-term growth. Michael introduced a compelling argument for balancing internal and external requirements. While personal and team standards are foundational, developers must also be mindful of external constraints, such as compliance and platform guidelines. This dual focus ensures that software not only functions well but also meets legal and industry expectations. Challenges and Takeaways: Refining Your Coding Standards The hosts concluded with a weekly challenge: dedicate 5–10 minutes daily to reviewing and refining your code according to your standards. This practice serves as a litmus test to assess whether you're following your own rules. For teams without established standards, they recommended adopting widely respected guidelines, like Google's or PEP 8 for Python, as a starting point. Bonus tips included leveraging documentation exports and linter configurations to share consistent settings across teams. 
By doing so, developers can create an environment where everyone writes code that feels cohesive and professional. Final Thoughts Coding standards might not be the flashiest aspect of development, but they are undeniably impactful. By committing to personal and team-wide practices, you can improve not just your code but also your efficiency, collaboration, and career prospects. Whether you're refining your Pomodoro technique or revisiting old projects, take a moment to reflect on your coding habits and how they align with your standards. As Rob and Michael emphasized, “Building Better Developers” is about incremental progress. Coding standards are one small step toward becoming a more disciplined and effective developer. Start today, and see the difference it makes in your workflow and your team's success. Stay Connected: Join the Develpreneur Community We invite you to join our community and share your coding journey with us. Whether you're a seasoned developer or just starting, there's always room to learn and grow together. Contact us at info@develpreneur.com with your questions, feedback, or suggestions for future episodes. Together, let's continue exploring the exciting world of software development. Additional Resources Coding Standards – A Personal Approach Look More Professional With Personal Coding Standards Coding Standards: Understanding Their Importance in Software Development Updating Developer Tools: Keeping Your Tools Sharp and Efficient Building Better Habits Videos – With Bonus Content
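As a rough illustration of the kind of check such linters automate, here is a minimal sketch. The camelCase rule and the regex below are invented for this example; they are not taken from SonarLint or any tool mentioned in the episode.

```typescript
// Toy style check: flag `let`/`const`/`var` declarations whose variable
// names are not camelCase. Real linters build a full syntax tree instead of
// matching lines with a regex; this sketch only shows the general shape of
// an automated rule: scan the source, collect findings with locations.

const CAMEL_CASE = /^[a-z][a-zA-Z0-9]*$/;

type Finding = { line: number; name: string };

function checkVariableNames(source: string): Finding[] {
  const findings: Finding[] = [];
  source.split("\n").forEach((text, i) => {
    // Match a declaration keyword and capture the identifier that follows.
    const m = text.match(/\b(?:let|const|var)\s+([A-Za-z_$][\w$]*)/);
    if (m && !CAMEL_CASE.test(m[1])) {
      findings.push({ line: i + 1, name: m[1] });
    }
  });
  return findings;
}
```

Wiring a check like this into a pre-commit hook or a "format on save" pipeline is what turns a personal standard into an enforced one.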

All JavaScript Podcasts by Devchat.tv
TypeScript Success: Integration, Type Checking, and Generics - JsJ 660

All JavaScript Podcasts by Devchat.tv

Play Episode Listen Later Dec 3, 2024 80:36


In this episode, Charles sits down with TypeScript expert Matt Pocock to dive deep into the world of TypeScript migration, learning curves, and developer challenges. They explore why having a TypeScript "wizard" is crucial for teams transitioning from JavaScript and how TypeScript's integration with development environments like Visual Studio Code has been a game changer. Charles and Matt discuss the importance of real-time type checking, the community's role in TypeScript's success, and practical strategies for migrating large codebases to TypeScript. You'll hear about Matt's journey from drama school to becoming a DevRel expert, his contributions to the XState library, and his philosophy of type-driven development. Together, they highlight TypeScript's advantages, such as enhanced code reliability and the nuanced benefits of explicit vs. inferred types. Whether you're a seasoned developer or just starting with TypeScript, this episode offers valuable insights and actionable advice to help you harness the full power of static typing in your projects. Tune in for a fascinating discussion that underscores the value of "boring" code, the need for continual learning, and the ongoing evolution of software development practices. Stay with us as we unravel the intricacies of TypeScript and share practical tips to elevate your coding journey. Socials: LinkedIn: Matt Pocock. Become a supporter of this podcast: https://www.spreaker.com/podcast/javascript-jabber--6102064/support.
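To make the explicit-versus-inferred distinction concrete, here is a small illustrative sketch; the examples are ours, not drawn from the episode.

```typescript
// Inferred: TypeScript deduces `prices` as number[] and `total` as number
// from the values alone; no annotations are needed for local values.
const prices = [9.99, 4.5, 12];
const total = prices.reduce((sum, p) => sum + p, 0);

// Explicit: annotating the public boundary of a function documents intent
// and keeps the contract stable even if the implementation changes.
interface LineItem {
  name: string;
  price: number;
  quantity: number;
}

function orderTotal(items: LineItem[]): number {
  return items.reduce((sum, item) => sum + item.price * item.quantity, 0);
}
```

Annotating function boundaries while letting local values be inferred is a common middle ground between the two styles.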

Microsoft 365 Developer Podcast
Why build declarative agents with Visual Studio Code vs Copilot Studio

Microsoft 365 Developer Podcast

Play Episode Listen Later Nov 25, 2024 36:35


In this episode, Jeremy Thake talks to Sebastien Levert. The podcast focuses on the evolution and development of tools within the Microsoft 365 ecosystem, particularly around agent building and Copilot Studio. The discussion highlights the distinction between low-code/no-code solutions for makers and pro-code tools for developers. Key tools such as Agent Builder and Teams Toolkit are explored, emphasizing their respective strengths: Agent Builder's accessibility for information workers versus Teams Toolkit's power and flexibility for professional developers. They discuss how developers can leverage Visual Studio Code, adaptive cards, and GitHub integration for building robust, scalable agents. A recurring theme is enabling seamless collaboration between makers and developers, with a focus on governance, scalability, and customization, particularly for enterprise-grade agents. The conversation also delves into advancements in API integration, such as using OpenAPI files and the emerging TypeSpec language to streamline the creation of APIs and plugins. Graph connectors and Power Platform connectors are highlighted as critical components for data access and processing within agents. To watch their Ignite breakout with all the demos discussed, watch “Developers guide to building your own agents”. To find out more, please visit https://aka.ms/extendcopilotm365 

Microsoft Cloud IT Pro Podcast
Episode 388 – Getting Started with Azure Bicep: Infrastructure as Code with a Domain Specific Language

Microsoft Cloud IT Pro Podcast

Play Episode Listen Later Nov 7, 2024 34:06 Transcription Available


Welcome to Episode 388 of the Microsoft Cloud IT Pro Podcast. In this episode, we dive into Azure Bicep, Microsoft's streamlined language for defining cloud infrastructure. If you're new to Infrastructure as Code (IaC) or looking to simplify your Azure deployments, listen in to learn how easy it is to get started with Azure Bicep. We walk through the essentials, from setting up the necessary tools such as Visual Studio Code and the Azure Bicep extension, to exploring the intuitive features that make Bicep so powerful. Discover how Bicep's functions, objects, and simplified syntax improve your workflow, offering a more readable and maintainable alternative to traditional ARM templates. Whether you're an Azure admin or a developer, this episode provides a clear path to building and managing Azure resources effectively with Bicep. Tune in and start coding your infrastructure with confidence! Like what you hear and want to support the show? Check out our membership options. Show Notes Microsoft Ignite What is Bicep? Bicep functions Quickstart: Create Bicep files with Visual Studio Code Azure/azure-quickstart-templates Decompiling ARM template JSON to Bicep Learn modules for Bicep About the sponsors Would you like to become the irreplaceable Microsoft 365 resource for your organization? Let us know!

DevOps and Docker Talk
State of Kubernetes UIs

DevOps and Docker Talk

Play Episode Listen Later Oct 18, 2024 17:08


Bret explores the spectrum of user interfaces and tools available for managing Kubernetes clusters as of Autumn 2024. This solo episode touches on both paid and open-source options, looking at their features, benefits, and drawbacks. Key tools covered include Lens, Aptakube, K8Studio, Visual Studio Code's Kubernetes extension, K9S, Portainer, and Meshery. Bret also discusses specialized tools like Headlamp and the Argo CD dashboard, and their specific use cases and advantages. ★Topics★ Lens, Aptakube, K8Studio, K9s, Kubernetes Dashboard, Portainer, Meshery, Headlamp. Creators & Guests: Cristi Cotovan - Editor, Beth Fisher - Producer, Bret Fisher - Host. (00:00) - Intro (01:43) - Paid UI Offerings (02:22) - Lens (03:42) - Aptakube and K8Studio (04:30) - Free and Open Apps (05:42) - K9s (06:45) - SaaS Offerings (07:32) - Web Dashboards (08:08) - Portainer (09:08) - Meshery (11:14) - Headlamp (13:28) - Argo CD's Web Dashboard You can also support my free material by subscribing to my YouTube channel and my weekly newsletter at bret.news! Grab the best coupons for my Docker and Kubernetes courses. Join my cloud native DevOps community on Discord. Grab some merch at Bret's Loot Box. Homepage: bretfisher.com

airhacks.fm podcast with adam bien
The AI Revolution in Java Development and Devoxx Genie

airhacks.fm podcast with adam bien

Play Episode Listen Later Oct 6, 2024 68:34


An airhacks.fm conversation with Stephan Janssen (@Stephan007) about: Stephan previously appeared on "#254 How JavaPolis and Devoxx Happened", discussion on the AI revolution in programming, development of an AI-assisted photo sharing application, creation of the Devoxx Genie IntelliJ plugin for AI-augmented programming, advantages of Claude 3.5 from Anthropic, use of local AI models in development environments, integration of AI in Java development, langchain4j and its adoption by Red Hat, development of Java-based AI tools like Lama3.java, jlama and JVector, potential for specialized AI models in software development, challenges and opportunities for junior and senior developers in AI-augmented programming, importance of understanding cloud services and cost structures when using AI, potential future of prompt-based programming and code generation, discussion on maintaining and improving AI-generated code, exciting developments in Java for AI including project valhalla and tornadovm, potential for running AI models directly on Java without external dependencies, considerations for enterprise AI adoption and integration, the need for promoting Java's capabilities in AI development, potential for Visual Studio Code port of Devoxx Genie, the challenge of maintaining AI-generated code versus keeping prompts, the concept of "prompt ops" for software development, the use of AI for code review and improvement, the potential for AI to lower the barrier to entry for new developers, and the exciting future of AI in software development Stephan Janssen on twitter: @Stephan007

Low Code Approach
Episode 63: Custom agents in Microsoft Copilot (w/ Jeremy Thake)

Low Code Approach

Play Episode Listen Later Oct 1, 2024 32:31


NOTE: This episode was recorded before September 16th, when Microsoft rebranded copilots to agents. Jeremy Thake joins the LCA crew to talk about extensibility in Microsoft Copilot. What does this mean for pro code developers using Visual Studio Code and makers using Microsoft Copilot Studio? What is the difference with Graph connectors, and how do they extend knowledge and agents? And with so many options for extensibility with agents, who is responsible for ensuring the security of that data? Check out https://adoption.microsoft.com

Hacker News Recap
September 29th, 2024 | Gavin Newsom vetoes SB 1047

Hacker News Recap

Play Episode Listen Later Sep 30, 2024 12:44


This is a recap of the top 10 posts on Hacker News on September 29th, 2024. This podcast was generated by wondercraft.ai. (00:36): Gavin Newsom vetoes SB 1047 - Original post: https://news.ycombinator.com/item?id=41690302&utm_source=wondercraft_ai (01:43): NotebookLM's automatically generated podcasts are surprisingly effective - Original post: https://news.ycombinator.com/item?id=41693087&utm_source=wondercraft_ai (02:50): Some Go web dev notes - Original post: https://news.ycombinator.com/item?id=41687707&utm_source=wondercraft_ai (04:08): Visual Studio Code is designed to fracture (2022) - Original post: https://news.ycombinator.com/item?id=41691577&utm_source=wondercraft_ai (05:25): FTC Report Confirms: Commercial Surveillance Is Out of Control - Original post: https://news.ycombinator.com/item?id=41688080&utm_source=wondercraft_ai (06:33): Map with public fruit trees - Original post: https://news.ycombinator.com/item?id=41688469&utm_source=wondercraft_ai (07:29): A Bendy RISC-V Processor - Original post: https://news.ycombinator.com/item?id=41687739&utm_source=wondercraft_ai (08:46): Sitina1 Open-Source Camera - Original post: https://news.ycombinator.com/item?id=41688018&utm_source=wondercraft_ai (09:52): FDA approves a novel drug for schizophrenia - Original post: https://news.ycombinator.com/item?id=41689138&utm_source=wondercraft_ai (11:03): When To Do What You Love - Original post: https://news.ycombinator.com/item?id=41687176&utm_source=wondercraft_ai This is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai

Atareao con Linux
ATA 631 Programar en una tablet Android

Atareao con Linux

Play Episode Listen Later Sep 26, 2024 18:09


Turn your Android tablet into an efficient development tool. I present two options: using Visual Studio Code hosted on a VPS, or installing Neovim in Termux. Both solutions let you write and edit code from your tablet, each with its own advantages and drawbacks. Want to learn to program from anywhere, with a device as portable as an Android tablet? Here you will discover how to set up a development environment using these two options. More information, links, and notes at https://atareao.es/podcast/631

PodRocket - A web development podcast from LogRocket
Exploring Node.js with David Neal

PodRocket - A web development podcast from LogRocket

Play Episode Listen Later Aug 29, 2024 27:29


David Neal, developer advocate and Asana content creator, discusses his talk, The Illustrated Guide to Node.js. David shares insights from his 10-year journey with Node.js, discussing its origins, use cases, and why it remains a vital tool for developers, giving insights into JavaScript's evolution and practical tips for navigating the Node.js ecosystem. Links https://reverentgeek.com https://twitter.com/reverentgeek https://techhub.social/@reverentgeek https://staging.bsky.app/profile/reverentgeek.com https://www.threads.net/@reverentgeek https://github.com/reverentgeek https://www.youtube.com/ReverentGeek https://www.linkedin.com/in/davidneal We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com (mailto:emily.kochanekketner@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today (https://logrocket.com/signup/?pdr). Special Guest: David Neal.

BSD Now
573: Kyua Graduation

BSD Now

Play Episode Listen Later Aug 22, 2024 54:18


What Would It Take to Recreate Bell Labs?, Human Scale Software vs Open Source, How to run Visual Studio (VS) Code Remote over SSH on FreeBSD 13 and 14, Why are some emails from Charlie Root and others are from root?, Backward compatibility has real costs even for settings, Kyua graduates, and more NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) and the BSDNow Patreon (https://www.patreon.com/bsdnow) Headlines What Would It Take to Recreate Bell Labs? (https://www.construction-physics.com/p/what-would-it-take-to-recreate-bell) Human Scale Software vs Open Source (https://posixcafe.org/blogs/2024/07/31/0/) News Roundup How to run Visual Studio (VS) Code Remote over SSH on FreeBSD 13 and 14 (https://group.miletic.net/en/blog/2024-06-14-how-to-run-visual-studio-vs-code-remote-over-ssh-on-freebsd-13-and-14) Why are some emails from Charlie Root and others are from root? (https://dan.langille.org/2024/07/27/why-are-some-emails-from-charlie-root-and-others-are-from-root/) Backward compatibility, even for settings, has real costs (https://utcc.utoronto.ca/~cks/space/blog/programming/BackwardCompatibilityHasCosts) Kyua graduates (https://jmmv.dev/2024/08/kyua-graduates.html) Tarsnap This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. Feedback/Questions 573 - Vedran - linuxulator (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/573/feedback/Vedran%20-%20linuxulator) Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv) Join us and other BSD Fans in our BSD Now Telegram channel (https://t.me/bsdnow)

The Azure Podcast
Episode 503 - Secure Future Initiative

The Azure Podcast

Play Episode Listen Later Aug 21, 2024


In this episode of the Azure Podcast, Cale, Evan, and Sujit engage in a comprehensive discussion about the Secure Future Initiative at Microsoft. They explore how this initiative influences our use of Azure and why it's beneficial for customers to consider implementing similar strategies in their own Azure environments.   Media file: https://azpodcast.blob.core.windows.net/episodes/Episode503.mp3 YouTube: https://youtu.be/TyvkKhdRR5k Resources: https://www.microsoft.com/en/microsoft-cloud/resources/secure-future-initiative#tabx6a6ce2c0327741938ac10b008d5cff64 https://learn.microsoft.com/en-us/azure/well-architected/security/design-patterns SFI Updates   Other resources: https://azure.microsoft.com/en-us/updates/v2/Volume-enhancements https://azure.microsoft.com/en-us/updates/v2/Dedicated-log-analytics-tables-in-Application-Gateway https://azure.microsoft.com/en-us/updates/v2/ANF-Double-Encryption-at-rest https://azure.microsoft.com/en-us/updates/v2/FIPS-mutability-support-in-AKS https://azure.microsoft.com/en-us/updates/v2/CNI-Powered-by-Cilium-Azure-CNI-Overlay-support-AKS https://azure.microsoft.com/en-us/updates/v2/New-features-in-AKS-extension-for-Visual-Studio-Code https://azure.microsoft.com/en-us/updates/v2/Enable-multifactor-authentication-for-your-tenant-by-15-October-2024  (also below) https://azure.microsoft.com/en-us/updates/v2/generally-available-azure-chaos-studio-supports-a-new-network-isolation-fault-for-virtual-machines https://azure.microsoft.com/en-us/updates/v2/High-Scale-mode-Container-Insights

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
AI Magic: Shipping 1000s of successful products with no managers and a team of 12 — Jeremy Howard of Answer.ai

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Aug 16, 2024 58:56


Disclaimer: We recorded this episode ~1.5 months ago, timed for the FastHTML release. It then got bottlenecked by Llama3.1, Winds of AI Winter, and SAM2 episodes, so we're a little late. Since then FastHTML was released, swyx is building an app in it for AINews, and Anthropic has also released their prompt caching API. Remember when Dylan Patel of SemiAnalysis coined the GPU Rich vs GPU Poor war? (if not, see our pod with him). The idea was that if you're GPU poor you shouldn't waste your time trying to solve GPU rich problems (i.e. pre-training large models) and are better off working on fine-tuning, optimized inference, etc. Jeremy Howard (see our “End of Finetuning” episode to catch up on his background) and Eric Ries founded Answer.AI to do exactly that: “Practical AI R&D”, which is very in-line with the GPU poor needs. For example, one of their first releases was a system based on FSDP + QLoRA that let anyone train a 70B model on two NVIDIA 4090s. Since then, they have come out with a long list of super useful projects (in no particular order, and non-exhaustive): * FSDP QDoRA: this is just as memory efficient and scalable as FSDP/QLoRA, and critically is also as accurate for continued pre-training as full weight training. * Cold Compress: a KV cache compression toolkit that lets you scale sequence length without impacting speed. * colbert-small: state of the art retriever at only 33M params. * JaColBERTv2.5: a new state-of-the-art retriever on all Japanese benchmarks. * gpu.cpp: portable GPU compute for C++ with WebGPU. * Claudette: a better Anthropic API SDK. They also recently released FastHTML, a new way to create modern interactive web apps. 
Jeremy recently released a 1 hour “Getting started” tutorial on YouTube; while this isn't AI related per se, it's close to home for any AI Engineer who is looking to iterate quickly on new products. In this episode we broke down 1) how they recruit 2) how they organize what to research 3) and how the community comes together. At the end, Jeremy gave us a sneak peek at something new that he's working on that he calls dialogue engineering: So I've created a new approach. It's not called prompt engineering. I'm creating a system for doing dialogue engineering. It's currently called AI magic. I'm doing most of my work in this system and it's making me much more productive than I was before I used it. He explains it a bit more at ~44:53 in the pod, but we'll just have to wait for the public release to figure out exactly what he means. Timestamps: * [00:00:00] Intro by Suno AI * [00:03:02] Continuous Pre-Training is Here * [00:06:07] Schedule-Free Optimizers and Learning Rate Schedules * [00:07:08] Governance and Structural Issues within OpenAI and Other AI Labs * [00:13:01] How Answer.ai works * [00:23:40] How to Recruit Productive Researchers * [00:27:45] Building a new BERT * [00:31:57] FSDP, QLoRA, and QDoRA: Innovations in Fine-Tuning Large Models * [00:36:36] Research and Development on Model Inference Optimization * [00:39:49] FastHTML for Web Application Development * [00:46:53] AI Magic & Dialogue Engineering * [00:52:19] AI wishlist & predictions Show Notes: * Jeremy Howard * Previously on Latent Space: The End of Finetuning, NeurIPS Startups * Answer.ai * Fast.ai * FastHTML * answerai-colbert-small-v1 * gpu.cpp * Eric Ries * Aaron DeFazio * Yi Tai * Less Wright * Benjamin Warner * Benjamin Clavié * Jono Whitaker * Austin Huang * Eric Gilliam * Tim Dettmers * Colin Raffel * Sebastian Raschka * Carson Gross * Simon Willison * Sepp Hochreiter * Llama3.1 episode * Snowflake Arctic * Ranger Optimizer * Gemma.cpp * HTMX * UL2 * BERT * DeBERTa * Efficient finetuning of Llama 3 with FSDP QDoRA * 
xLSTM Transcript Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO-in-Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI. Swyx [00:00:14]: And today we're back with Jeremy Howard, I think your third appearance on Latent Space. Welcome. Jeremy [00:00:19]: Wait, third? Second? Swyx [00:00:21]: Well, I grabbed you at NeurIPS. Jeremy [00:00:23]: I see. Swyx [00:00:24]: Very fun, standing outside street episode. Jeremy [00:00:27]: I never heard that, by the way. You've got to send me a link. I've got to hear what it sounded like. Swyx [00:00:30]: Yeah. Yeah, it's a NeurIPS podcast. Alessio [00:00:32]: I think the two episodes are six hours, so there's plenty to listen, we'll make sure to send it over. Swyx [00:00:37]: Yeah, we're trying this thing where at the major ML conferences, we, you know, do a little audio tour of, give people a sense of what it's like. But the last time you were on, you declared the end of fine tuning. I hope that I sort of editorialized the title a little bit, and I know you were slightly uncomfortable with it, but you just own it anyway. I think you're very good at the hot takes. And we were just discussing in our pre-show that it's really happening, that the continued pre-training is really happening. Jeremy [00:01:02]: Yeah, absolutely. 
I think people are starting to understand that treating the three ULMFiT steps of like pre-training, you know, and then the kind of like what people now call instruction tuning, and then, I don't know if we've got a general term for this, DPO, RLHF step, you know, or the task training, they're not actually as separate as we originally suggested they were in our paper, and when you treat it more as a continuum, and that you make sure that you have, you know, more of kind of the original data set incorporated into the later stages, and that, you know, we've also seen with LLAMA3, this idea that those later stages can be done for a lot longer. These are all of the things I was kind of trying to describe there. It wasn't the end of fine tuning, but more that we should treat it as a continuum, and we should have much higher expectations of how much you can do with an already trained model. You can really add a lot of behavior to it, you can change its behavior, you can do a lot. So a lot of our research has been around trying to figure out how to modify the model by a larger amount rather than starting from random weights, because I get very offended at the idea of starting from random weights. Swyx [00:02:14]: Yeah, I saw that at ICLR in Vienna, there was an outstanding paper about starting transformers from data-driven priors. 
I don't know if you saw that one, they called it sort of never trained from scratch, and I think it was kind of rebelling against like the sort of random initialization. Jeremy [00:02:28]: Yeah, I've, you know, that's been our kind of continuous message since we started Fast AI, is if you're training from random weights, you better have a really good reason, you know, because it seems so unlikely to me that nobody has ever trained on data that has any similarity whatsoever to the general class of data you're working with, and that's the only situation in which I think starting from random weights makes sense. Swyx [00:02:51]: The other trend since our last pod that I would point people to is I'm seeing a rise in multi-phase pre-training. So Snowflake released a large model called Snowflake Arctic, where they detailed three phases of training where they had like a different mixture of like, there was like 75% web in the first instance, and then they reduced the percentage of the web text by 10% each time and increased the amount of code in each phase. And I feel like multi-phase is being called out in papers more. I feel like it's always been a thing, like changing data mix is not something new, but calling it a distinct phase is new, and I wonder if there's something that you're seeing on your end. Jeremy [00:03:32]: Well, so they're getting there, right? So the point at which they're doing proper continued pre-training is the point at which that becomes a continuum rather than a phase. So the only difference with what I was describing last time is to say like, oh, there's a function or whatever, which is happening every batch. It's not a huge difference. You know, I always used to get offended when people had learning rates that like jumped. And so one of the things I started doing early on in Fast.ai was to say to people like, no, your learning rate schedule should actually be a function, not a list of numbers. 
So now I'm trying to give the same idea about training mix.Swyx [00:04:07]: There's been pretty public work from Meta on schedule-free optimizers. I don't know if you've been following Aaron Defazio and what he's doing, just because you mentioned learning rate schedules, you know, what if you didn't have a schedule?Jeremy [00:04:18]: I don't care very much, honestly. I don't think that schedule-free optimizer is that exciting. It's fine. We've had non-scheduled optimizers for ages, like Less Wright, who's now at Meta, who was part of the fast.ai community there, created something called the Ranger optimizer. I actually like having more hyperparameters. You know, as soon as you say schedule-free, then like, well, now I don't get to choose. And there isn't really a mathematically correct way of, like, I actually try to schedule more parameters rather than less. So like, I like scheduling my epsilon in my Adam, for example. I schedule all the things. But then the other thing we always did with the fastai library was make it so you don't have to set any schedules. So fastai always supported, like, you didn't even have to pass a learning rate. Like, it would always just try to have good defaults and do the right thing. But to me, I like to have more parameters I can play with if I want to, but you don't have to.Alessio [00:05:08]: And then on the more, like, less technical side, I guess, your issue with the market was some of the large research labs taking all this innovation kind of behind closed doors and whether or not that's good, which it isn't. And now we could maybe make it more available to people. And then a month after we released the episode, there was the whole Sam Altman drama and like all the OpenAI governance issues. And maybe people started to think more, okay, what happens if some of these kind of labs, you know, start to break from within, so to speak? And the alignment of the humans is probably going to fall before the alignment of the models.
So I'm curious, like, if you have any new thoughts and maybe we can also tie in some of the way that we've been building Answer.AI as like a public benefit corp and some of those aspects.Jeremy [00:05:51]: Sure. So, yeah, I mean, it was kind of uncomfortable because two days before Altman got fired, I did a small public video interview in which I said, I'm quite sure that OpenAI's current governance structure can't continue and that it was definitely going to fall apart. And then it fell apart two days later and a bunch of people were like, what did you know, Jeremy?Alessio [00:06:13]: What did Jeremy see?Jeremy [00:06:15]: I didn't see anything. It's just obviously true. Yeah. So my friend Eric Ries and I spoke a lot before that about, you know, Eric's, I think probably most people would agree, the top expert in the world on startup and AI governance. And you know, we could both clearly see that this didn't make sense to have like a so-called non-profit where then there are people working at a company, a commercial company that's owned by or controlled nominally by the non-profit, where the people in the company are being given the equivalent of stock options, like everybody there was working there expecting to make money largely from their equity. So the idea that then a board could exercise control by saying like, oh, we're worried about safety issues and so we're going to do something that decreases the profit of the company, when every stakeholder in the company, their remuneration pretty much is tied to their profit, it obviously couldn't work. So I mean, that was a huge oversight there by someone. I guess part of the problem is that the kind of people who work at non-profits and in this case the board, you know, who are kind of academics and, you know, people who are kind of true believers. I think it's hard for them to realize that 99.999% of the world is driven very heavily by money, especially huge amounts of money.
So yeah, Eric and I had been talking for a long time before that about what could be done differently, because also companies are sociopathic by design and so the alignment problem as it relates to companies has not been solved. Like, companies become huge, they devour their founders, they devour their communities and they do things where even the CEOs, you know, often of big companies tell me like, I wish our company didn't do that thing. You know, I know that if I didn't do it, then I would just get fired and the board would put in somebody else and the board knows if they don't do it, then their shareholders can sue them because they're not maximizing profitability or whatever. So what Eric's spent a lot of time doing is trying to think about how do we make companies less sociopathic, you know, how to, or more, you know, maybe a better way to think of it is like, how do we make it so that the founders of companies can ensure that their companies continue to actually do the things they want them to do? You know, when we started a company, hey, we very explicitly decided we got to start a company, not an academic lab, not a nonprofit, you know, we created a Delaware C corp, you know, the most company kind of company. But when we did so, we told everybody, you know, including our first investors, which was you, Alessio. They sound great. We are going to run this company on the basis of maximizing long-term value. And in fact, so when we did our second round, which was an angel round, we had everybody invest through a long-term SPV, which we set up where everybody had to agree to vote in line with long-term value principles. So like it's never enough just to say to people, okay, we're trying to create long-term value here for society as well as for ourselves and everybody's like, oh, yeah, yeah, I totally agree with that.
But when it comes to like, okay, well, here's a specific decision we have to make, which will not maximize short-term value, people suddenly change their mind. So you know, it has to be written into the legal documents of everybody so that there's no question that that's the way the company has to be managed. So then you mentioned the PBC aspect, Public Benefit Corporation, which I never quite understood previously. And turns out it's incredibly simple, like it took, you know, like one paragraph added to our corporate documents to become a PBC. It was cheap, it was easy, but it's got this huge benefit, which is if you're not a public benefit corporation, then somebody can come along and offer to buy you with a stated description of like turning your company into the thing you most hate, right? And if they offer you more than the market value of your company and you don't accept it, then you are not necessarily meeting your fiduciary responsibilities. So the way like Eric always described it to me is like, if Philip Morris came along and said that you've got great technology for marketing cigarettes to children, so we're going to pivot your company to do that entirely, and we're going to pay you 50% more than the market value, you're going to have to say yes. If you have a PBC, then you are more than welcome to say no, if that offer is not in line with your stated public benefit. So our stated public benefit is to maximize the benefit to society through using AI. So given that more children smoking doesn't do that, then we can say like, no, we're not selling to you.Alessio [00:11:01]: I was looking back at some of our emails. You sent me an email on November 13th about talking and then on the 14th, I sent you an email, working together to free AI was the subject line. And then that was kind of the start of the seed round. And then two days later, someone got fired.
So you know, you were having these thoughts even before we had like a public example of like why some of the current structures didn't work. So yeah, you were very ahead of the curve, so to speak. You know, people can read your awesome introduction blog on Answer.AI and the idea of having an R&D lab versus an R lab and then a D lab somewhere else. I think to me, the most interesting thing has been hiring and some of the awesome people that you've been bringing on that maybe don't fit the central casting of Silicon Valley, so to speak. Like sometimes it's like playing baseball cards, you know, people are like, oh, what teams was this person on, where did they work versus focusing on ability. So I would love for you to give a shout out to some of the awesome folks that you have on the team.Jeremy [00:11:58]: So, you know, there's like a graphic going around describing like the people at xAI, you know, the Elon Musk thing. And like they are all connected to like multiple of Stanford, Meta, DeepMind, OpenAI, Berkeley, Oxford. Look, these are all great institutions and they have good people. And I'm definitely not at all against that, but damn, there's so many other people. And one of the things I found really interesting is almost any time I see something which I think like this is really high quality work and it's something I don't think would have been built if that person hadn't built the thing right now, I nearly always reach out to them and ask to chat. And I tend to dig in to find out like, okay, you know, why did you do that thing? Everybody else has done this other thing, your thing's much better, but it's not what other people are working on. And like 80% of the time, I find out the person has a really unusual background.
So like often they'll have like, either they like came from poverty and didn't get an opportunity to go to a good school or had dyslexia and, you know, got kicked out of school in year 11, or they had a health issue that meant they couldn't go to university or something happened in their past and they ended up out of the mainstream. And then they kind of succeeded anyway. Those are the people that throughout my career, I've tended to kind of accidentally hire more of, but it's not exactly accidentally. It's like when I see somebody who's done, two people who have done extremely well, one of them did extremely well in exactly the normal way from the background entirely pointing in that direction and they achieved all the hurdles to get there. And like, okay, that's quite impressive, you know, but another person who did just as well, despite lots of constraints and doing things in really unusual ways and came up with different approaches. That's normally the person I'm likely to find useful to work with because they're often like risk-takers, they're often creative, they're often extremely tenacious, they're often very open-minded. So that's the kind of folks I tend to find myself hiring. So now at Answer.AI, it's a group of people that are strong enough that nearly every one of them has independently come to me in the past few weeks and told me that they have imposter syndrome and they're not convinced that they're good enough to be here. And it got to the point where I was like, okay, I don't think it's possible that all of you are so far behind your peers that you shouldn't get to be here. But I think part of the problem is as an R&D lab, the great developers look at the great researchers and they're like, wow, these big-brained, crazy research people with all their math and s**t, they're too cool for me, oh my God.
And then the researchers look at the developers and they're like, oh, they're killing it, making all this stuff with all these people using it and talking on Twitter about how great it is. I think they're both a bit intimidated by each other, you know. And so I have to kind of remind them like, okay, there are lots of things in this world where you suck compared to lots of other people in this company, but also vice versa, you know, for all things. And the reason you came here is because you wanted to learn about those other things from those other people and have an opportunity to like bring them all together into a single unit. You know, it's not reasonable to expect you're going to be better at everything than everybody else. I guess the other part of it is for nearly all of the people in the company, to be honest, they have nearly always been better than everybody else at nearly everything they're doing nearly everywhere they've been. So it's kind of weird to be in this situation now where it's like, gee, I can clearly see that I suck at this thing that I'm meant to be able to do compared to these other people where I'm like the worst in the company at this thing for some things. So I think that's a healthy place to be, you know, as long as you keep reminding each other about that's actually why we're here. And like, it's all a bit of an experiment, like we don't have any managers. We don't have any hierarchy from that point of view. So for example, I'm not a manager, which means I don't get to tell people what to do or how to do it or when to do it. Yeah, it's been a bit of an experiment to see how that would work out. And it's been great. So for instance, Ben Clavié, who you might have come across, he's the author of RAGatouille, he's the author of rerankers, super strong information retrieval guy. And a few weeks ago, you know, this additional channel appeared on Discord, on our private Discord, called BERT24.
And these people started appearing, as in our collab sections, we have a collab section for like collaborating with outsiders. And these people started appearing, there are all these names that I recognize, in BERT24, and they're all talking about like the next generation of BERT. And I start following along, it's like, okay, Ben decided that I think, quite rightly, we need a new BERT. Because everybody, like so many people are still using BERT, and it's still the best at so many things, but it actually doesn't take advantage of lots of best practices. And so he just went out and found basically everybody who's created better BERTs in the last four or five years, brought them all together, suddenly there's this huge collaboration going on. So yeah, I didn't tell him to do that. He didn't ask my permission to do that. And then, like, Benjamin Warner dived in, and he's like, oh, I created a whole transformers from scratch implementation designed to be maximally hackable. He originally did it largely as a teaching exercise to show other people, but he was like, I could, you know, use that to create a really hackable BERT implementation. In fact, he didn't say that. He said, I just did do that, you know, and I created a repo, and then everybody's like starts using it. They're like, oh my god, this is amazing. I can now implement all these other BERT things. And it's not just Answer.AI guys there, you know, there's lots of folks, you know, who have like contributed new data set mixes and blah, blah, blah. So, I mean, I can help in the same way that other people can help. So like, then Ben Clavié reached out to me at one point and said, can you help me, like, what have you learned over time about how to manage intimidatingly capable and large groups of people who you're nominally meant to be leading? And so, you know, I like to try to help, but I don't direct.
Another great example was Kerem, who, after our FSDP QLoRA work, decided quite correctly that it didn't really make sense to use LoRA in today's world. You want to use the normalized version, which is called DoRA. Like two or three weeks after we did FSDP QLoRA, he just popped up and said, okay, I've just converted the whole thing to DoRA, and I've also created these vLLM extensions, and I've got all these benchmarks, and, you know, now I've got training of quantized models with adapters that are as fast as LoRA, and actually, weirdly, better than fine tuning. Just like, okay, that's great, you know. And yeah, so the things we've done to try to help make these things happen as well is we don't have any required meetings, you know, but we do have a meeting for each pair of major time zones that everybody's invited to, and, you know, people see their colleagues doing stuff that looks really cool and say, like, oh, how can I help, you know, or how can I learn or whatever. So another example is Austin, who, you know, amazing background. He ran AI at Fidelity, he ran AI at Pfizer, he ran browsing and retrieval for Google's DeepMind stuff, created gemma.cpp, and he's been working on a new system to make it easier to do WebGPU programming, because, again, he quite correctly identified, yeah, so I said to him, like, okay, I want to learn about that. Not an area that I have much expertise in, so, you know, he's going to show me what he's working on and teach me a bit about it, and hopefully I can help contribute. I think one of the key things that's happened in all of these is everybody understands what Eric Gilliam, who wrote the second blog post in our series, the R&D historian, describes as a large yard with narrow fences. Everybody has total flexibility to do what they want.
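The LoRA-versus-DoRA distinction Jeremy mentions can be sketched in a few lines of numpy. This is a simplified, hypothetical rendering of the idea (shapes, scaling, and initialization are illustrative, not the DoRA paper's exact formulation): LoRA adds a low-rank update to a frozen weight, while DoRA additionally decomposes the result into a learned per-column magnitude times a normalized direction.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 16, 2                     # toy dimensions

W = rng.normal(size=(d_out, d_in))            # frozen pretrained weight
B = np.zeros((d_out, r))                      # LoRA "B", zero-initialized
A = rng.normal(size=(r, d_in))                # LoRA "A"
scale = 1.0                                   # alpha / r, illustrative

def lora_weight(W, B, A, scale):
    """LoRA: frozen weight plus a low-rank update."""
    return W + scale * (B @ A)

def dora_weight(W, B, A, scale, m):
    """DoRA-style: magnitude m times the column-normalized direction of (W + BA)."""
    V = W + scale * (B @ A)
    return m * (V / np.linalg.norm(V, axis=0, keepdims=True))

# Initialize the magnitude from W's column norms, so training starts at W
m0 = np.linalg.norm(W, axis=0, keepdims=True)

# With B = 0, both formulations reduce exactly to the frozen weight
assert np.allclose(lora_weight(W, B, A, scale), W)
assert np.allclose(dora_weight(W, B, A, scale, m0), W)
```

The useful property is visible in the second function: however the low-rank update moves the direction, the column magnitudes stay pinned to the learned `m`.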
We all understand kind of roughly why we're here, you know, we agree with the premises around, like, everything's too expensive, everything's too complicated, people are building too many vanity foundation models rather than taking better advantage of fine-tuning, like, there's this kind of general, like, sense of we're all on the same wavelength about, you know, all the ways in which current research is fucked up, and, you know, all the ways in which we're worried about centralization. We all care a lot about not just research for the point of citations, but research that actually wouldn't have happened otherwise, and actually is going to lead to real-world outcomes. And so, yeah, with this kind of, like, shared vision, people understand, like, you know, so when I say, like, oh, well, you know, tell me, Ben, about BERT24, what's that about? And he's like, you know, like, oh, well, you know, you can see from an accessibility point of view, or you can see from an actual practical impact point of view, there's far too much focus on decoder-only models, and, you know, like, BERT's used in all of these different places in industry, and so I can see, like, in terms of our basic principles, what we're trying to achieve, this seems like something important. And so I think that's, like, really helpful, that we have that kind of shared perspective, you know?Alessio [00:21:14]: Yeah. And before we maybe talk about some of the specific research, when you're, like, reaching out to people, interviewing them, what are some of the traits, like, how do these things come out, you know, usually? Is it working on side projects that you, you know, you're already familiar with? Is there anything, like, in the interview process that, like, helps you screen for people that are less pragmatic and more research-driven versus some of these folks that are just gonna do it, you know?
They're not waiting for, like, the perfect process.Jeremy [00:21:40]: Everybody who comes through the recruiting is interviewed by everybody in the company. You know, our goal is 12 people, so it's not an unreasonable amount. So the other thing to say is everybody so far who's come into the recruiting pipeline, everybody bar one, has been hired. So which is to say our original curation has been good. And that's actually pretty easy, because nearly everybody who's come in through the recruiting pipeline are people I know pretty well. So Jono Whitaker and I, you know, he worked on the Stable Diffusion course we did. He's outrageously creative and talented, and he's a super, like, enthusiastic tinkerer, just likes making things. Benjamin was one of the strongest parts of the fast.ai community, which is now the alumni. It's, like, hundreds of thousands of people. And you know, again, like, they're not people who a normal interview process would pick up, right? So Benjamin doesn't have any qualifications in math or computer science. Jono was living in Zimbabwe, you know, he was working on, like, helping some African startups, you know, but not FAANG kind of credentials. But yeah, I mean, when you actually see people doing real work and they stand out above, you know, we've got lots of Stanford graduates and OpenAI people and whatever in our alumni community as well. You know, when you stand out above all of those people anyway, obviously you've got something going for you. You know, Austin, him and I worked together on the masks study we did in the Proceedings of the National Academy of Sciences. You know, we had worked together, and again, that was a group of, like, basically the 18 or 19 top experts in the world on public health and epidemiology and research design and so forth. And Austin, you know, one of the strongest people in that collaboration.
So yeah, you know, like, I've been lucky enough to have had opportunities to work with some people who are great and, you know, I'm a very open-minded person, so I kind of am always happy to try working with pretty much anybody and some people stand out. You know, there have been some exceptions, people I haven't previously known, like Ben Clavié, actually, I didn't know before. But you know, with him, you just read his code, and I'm like, oh, that's really well-written code. And like, it's not written exactly the same way as everybody else's code, and it's not written to do exactly the same thing as everybody else's code. So yeah, and then when I chatted to him, it's just like, I don't know, I felt like we'd known each other for years, like we just were on the same wavelength, but I could pretty much tell that was going to happen just by reading his code. I think you express a lot in the code you choose to write and how you choose to write it, I guess. You know, or another example, a guy named Vik, who was previously the CEO of Dataquest, and like, in that case, you know, he's created a really successful startup. He won the first, basically, Kaggle NLP competition, which was automatic essay grading. He's got the current state-of-the-art OCR system, Surya. Again, he's just a guy who obviously just builds stuff, you know, he doesn't ask for permission, he doesn't need any, like, external resources. Actually, Kerem's another great example of this, I mean, I already knew Kerem very well because he was my best ever master's student, but it wasn't a surprise to me then when he then went off to create the world's state-of-the-art language model in Turkish on his own, in his spare time, with no budget, from scratch. This is not fine-tuning or whatever, he, like, went back to Common Crawl and did everything. Yeah, it's kind of, I don't know what I'd describe that process as, but it's not at all based on credentials.Swyx [00:25:17]: Assemble based on talent, yeah.
We wanted to dive in a little bit more on, you know, turning from the people side of things into the technical bets that you're making. Just a little bit more on BERT. I was actually, we just did an interview with Yi Tay from Reka, I don't know if you're familiar with his work, but also another encoder-decoder bet, and one of his arguments was actually people kind of over-index on the decoder-only GPT-3 type paradigm. I wonder if you have thoughts there that is maybe non-consensus as well. Yeah, no, absolutely.Jeremy [00:25:45]: So I think it's a great example. So one of the people we're collaborating with a little bit on BERT24 is Colin Raffel, who is the guy behind, yeah, most of that stuff, you know, between that and UL2, there's a lot of really interesting work. And so one of the things I've been encouraging the BERT group to do, Colin has as well, is to consider using a T5 pre-trained encoder backbone as a thing you fine-tune, which I think would be really cool. You know, Colin was also saying actually just use encoder-decoder as your BERT, you know, why don't you like use that as a baseline, which I also think is a good idea. Yeah, look.Swyx [00:26:25]: What technical arguments are people under-weighting?Jeremy [00:26:27]: I mean, Colin would be able to describe this much better than I can, but I'll give my slightly non-expert attempt. Look, I mean, think about like diffusion models, right? Like in Stable Diffusion, like we use things like U-Net. You have this kind of downward path and then in the upward path you have the cross connections, which, it's not attention, but it's like a similar idea, right? You're inputting the original encoding path into your decoding path. It's critical to make it work, right? Because otherwise in the decoding part, the model has to do so much kind of from scratch. So like if you're doing translation, like that's a classic kind of encoder-decoder example.
If it's decoder only, you never get the opportunity to find the right, you know, feature engineering, the right feature encoding for the original sentence. And it kind of means then on every token that you generate, you have to recreate the whole thing, you know? So if you have an encoder, it's basically saying like, okay, this is your opportunity, model, to create a really useful feature representation for your input information. So I think there's really strong arguments for encoder-decoder models anywhere that there is this kind of like context or source thing. And then why encoder only? Well, because so much of the time what we actually care about is a classification, you know? It's like an output. It's not generating an arbitrary length sequence of tokens. So anytime you're not generating an arbitrary length sequence of tokens, decoder models don't seem to make much sense. Now the interesting thing is, you see on like Kaggle competitions, that decoder models still are at least competitive with things like DeBERTa v3. They have to be way bigger to be competitive with things like DeBERTa v3. And the only reason they are competitive is because people have put a lot more time and money and effort into training the decoder only ones, you know? There isn't a recent DeBERTa. There isn't a recent BERT. Yeah, it's a whole part of the world that people have slept on a little bit. And this is just what happens. This is how trends happen rather than like, to me, everybody should be like, oh, let's look at the thing that has shown signs of being useful in the past, but nobody really followed up with properly. That's the more interesting path, you know, where people tend to be like, oh, I need to get citations. So what's everybody else doing? Can I make it 0.1% better, you know, or 0.1% faster? That's what everybody tends to do. Yeah.
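The "encode the source once, reuse it every step" argument can be made concrete with a toy cross-attention sketch. Everything below is hypothetical (random matrices standing in for learned projections, single head, tiny shapes): the point is that `memory` is computed one time and every decoder query just attends over it, whereas a decoder-only model has no fixed source representation to reuse.

```python
import numpy as np

rng = np.random.default_rng(0)
d, src_len, tgt_len = 16, 5, 3

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# "Encoder": compute a feature representation of the source ONCE
src_tokens = rng.normal(size=(src_len, d))
W_enc = rng.normal(size=(d, d)) / np.sqrt(d)   # stand-in for learned encoder weights
memory = src_tokens @ W_enc                    # cached, reused for every output token

def cross_attend(queries, memory):
    """Each decoder step queries the cached encoder memory."""
    scores = queries @ memory.T / np.sqrt(d)   # (tgt, src)
    weights = softmax(scores)                  # each row sums to 1
    return weights @ memory, weights

queries = rng.normal(size=(tgt_len, d))        # stand-in for decoder states
out, w = cross_attend(queries, memory)
print(out.shape, w.shape)  # (3, 16) (3, 5)
```

Generating more target tokens only adds query rows; the source encoding is never recomputed.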
So I think it's like, Yi Tay's work commercially now is interesting because here's like a whole, here's a whole model that's been trained in a different way. So there's probably a whole lot of tasks it's probably better at than GPT and Gemini and Claude. So that should be a good commercial opportunity for them if they can figure out what those tasks are.Swyx [00:29:07]: Well, if rumors are to be believed, and he didn't comment on this, but, you know, Snowflake may figure out the commercialization for them. So we'll see.Jeremy [00:29:14]: Good.Alessio [00:29:16]: Let's talk about FSDP, QLoRA, QDoRA, and all of that awesome stuff. One of the things we talked about last time, some of these models are meant to run on systems that nobody can really own, no single person. And then you were like, well, what if you could fine tune a 70B model on like a 4090? And I was like, no, that sounds great, Jeremy, but like, can we actually do it? And then obviously you all figured it out. Can you maybe tell us some of the war stories behind that, like the idea behind FSDP, which is kind of taking sharded data parallel computation, and then QLoRA, which is do not touch all the weights, just go quantize some of the model, and then within the quantized model only do certain layers instead of doing everything.Jeremy [00:29:57]: Well, do the adapters. Yeah.Alessio [00:29:59]: Yeah. Yeah. Do the adapters. Yeah. I will leave the floor to you. I think before you published it, nobody thought this was like a short term thing that we're just going to have. And now it's like, oh, obviously you can do it, but it's not that easy.Jeremy [00:30:12]: Yeah. I mean, to be honest, it was extremely unpleasant work to do. It's like not at all enjoyable. I kind of did version 0.1 of it myself before we had launched the company, or at least the kind of like the pieces. They're all pieces that are difficult to work with, right?
So for the quantization, you know, I chatted to Tim Dettmers quite a bit and, you know, he very much encouraged me by saying like, yeah, it's possible. He actually thought it'd be easy. It probably would be easy for him, but I'm not Tim Dettmers. And, you know, so he wrote bitsandbytes, which is his quantization library. You know, he wrote that for a paper. He didn't write that to be production like code. It's now like everybody's using it, at least the CUDA bits. So like, it's not particularly well structured. There's lots of code paths that never get used. There's multiple versions of the same thing. You have to try to figure it out. So trying to get my head around that was hard. And you know, because the interesting bits are all written in CUDA, it's hard to like step through it and see what's happening. And then, you know, FSDP is this very complicated library in PyTorch, which is not particularly well documented. So really the only way to understand it properly is, again, just read the code and step through the code. And then like bitsandbytes doesn't really work in practice unless it's used with PEFT, the Hugging Face library, and PEFT doesn't really work in practice unless you use it with other things. And there's a lot of coupling in the Hugging Face ecosystem where like none of it works separately. You have to use it all together, which I don't love. So yeah, trying to just get a minimal example that I can play with was really hard. And so I ended up having to rewrite a lot of it myself to kind of create this like minimal script. One thing that helped a lot was Meta had this llama-recipes repo that came out just a little bit before I started working on that. And like they had a kind of role model example of like, here's how to train FSDP with LoRA, though it didn't work with QLoRA, on Llama.
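To make the quantization side concrete: bitsandbytes' real kernels are CUDA and far more involved (NF4, double quantization, and so on), but the basic blockwise absmax idea, plus the QLoRA-style "frozen quantized base, trainable adapter" forward pass, can be sketched in numpy. This is an illustrative int8 version with made-up shapes, not the library's implementation.

```python
import numpy as np

def quantize_blockwise(x, block=64):
    """Absmax int8 quantization: one fp scale per block of `block` values."""
    flat = x.reshape(-1, block)
    scales = np.abs(flat).max(axis=1, keepdims=True) / 127.0
    q = np.round(flat / scales).astype(np.int8)
    return q, scales

def dequantize_blockwise(q, scales, shape):
    return (q.astype(np.float32) * scales).reshape(shape).astype(np.float32)

rng = np.random.default_rng(0)
W = rng.normal(size=(128, 64)).astype(np.float32)   # frozen base weight
q, s = quantize_blockwise(W)
W_hat = dequantize_blockwise(q, s, W.shape)
print("max abs reconstruction error:", np.abs(W - W_hat).max())

# QLoRA-style forward: dequantized frozen base plus a low-rank adapter.
# Only A and B would receive gradients; q and s stay frozen on disk/GPU.
A = rng.normal(size=(8, 64)).astype(np.float32)
B = np.zeros((128, 8), dtype=np.float32)
x = rng.normal(size=(64,)).astype(np.float32)
y = (W_hat + B @ A) @ x
```

The rounding error per element is bounded by half of its block's scale, which is why smaller blocks (at the cost of more scale storage) give better reconstructions.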
A lot of the stuff I discovered, the interesting stuff, would be put together by Less Wright, who's, he was actually the guy in the fast.ai community I mentioned who created the Ranger optimizer. So he's doing a lot of great stuff at Meta now. So yeah, I kind of, that helped get some minimum stuff going and then it was great once Benjamin and Jono joined full time. And so we basically hacked at that together and then Kerem joined like a month later or something. And it was like, gee, it was just a lot of like fiddly detailed engineering on like barely documented bits of obscure internals. So my focus was to see if it kind of could work and I kind of got a bit of a proof of concept working and then the rest of the guys actually did all the work to make it work properly. And, you know, every time we thought we had something, you know, we needed to have good benchmarks, right? So we'd like, it's very easy to convince yourself you've done the work when you haven't, you know, so then we'd actually try lots of things and be like, oh, in these like really important cases, the memory use is higher, you know, or it's actually slower. And we'd go in and we just find like all these things that were nothing to do with our library that just didn't work properly. And nobody had noticed they hadn't worked properly because nobody had really benchmarked it properly. So we ended up, you know, trying to fix a whole lot of different things. And even as we did so, new regressions were appearing in like transformers and stuff that Benjamin then had to go away and figure out like, oh, how come Flash Attention doesn't work in this version of transformers anymore with this set of models and like, oh, it turns out they accidentally changed this thing, so it doesn't work. You know, there's just, there's not a lot of really good performance type evals going on in the open source ecosystem.
So there's an extraordinary amount of like things where people say like, oh, we built this thing and it has this result. And when you actually check it, so yeah, there's a shitload of war stories from getting that thing to work. And it did require a particularly like tenacious group of people and a group of people who don't mind doing a whole lot of kind of like really janitorial work, to be honest, to get the details right, to check them. Yeah.

Alessio [00:34:09]: We had Tri Dao on the podcast and we talked about how a lot of it is like systems work to make some of these things work. It's not just like beautiful, pure math that you do on a blackboard. It's like, how do you get into the nitty gritty?

Jeremy [00:34:22]: I mean, flash attention is a great example of that. Like it basically is just like, oh, let's just take the attention and just do the tiled version of it, which sounds simple enough, you know, but then implementing that is challenging at lots of levels.

Alessio [00:34:36]: Yeah. What about inference? You know, obviously you've done all this amazing work on fine tuning. Do you have any research you've been doing on the inference side, how to make local inference really fast on these models too?

Jeremy [00:34:47]: We're doing quite a bit on that at the moment. We haven't released too much there yet. But one of the things I've been trying to do is also just to help other people. And one of the nice things that's happened is that a couple of folks at Meta, including Mark Saroufim, have done a nice job of creating this CUDA MODE community of people working on like CUDA kernels or learning about that. And I tried to help get that going well as well and did some lessons to help people get into it. So there's a lot going on in both inference and fine tuning performance. And a lot of it's actually happening kind of related to that. So the PyTorch team have created this torchao project on quantization.
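The "tiled version" of attention Jeremy mentions rests on the online-softmax trick: you can process scores block by block, keeping only a running max and a rescaled running denominator, and still recover the exact softmax, so the full score matrix never has to be materialized. A minimal pure-Python sketch of just that trick, nothing like the actual CUDA implementation:

```python
import math

def softmax(xs):
    # Reference two-pass softmax for comparison.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def online_softmax(xs, tile=2):
    # Stream over tiles, maintaining a running max (m) and a running
    # denominator (d) that is rescaled whenever the max grows: the core
    # numerical idea that lets FlashAttention work one tile at a time.
    m, d = float("-inf"), 0.0
    for i in range(0, len(xs), tile):
        block = xs[i:i + tile]
        new_m = max(m, max(block))
        d = d * math.exp(m - new_m) + sum(math.exp(x - new_m) for x in block)
        m = new_m
    return [math.exp(x - m) / d for x in xs]
```

In the real kernel the same rescaling is applied to partial weighted sums of the value vectors as well, which is where most of the implementation difficulty Jeremy alludes to comes from.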
And so there's a big overlap now between kind of the fast.ai and Answer.AI and CUDA MODE communities of people working on stuff for both inference and fine tuning. But we're getting close now. You know, our goal is that nobody should be merging models, nobody should be downloading merged models, everybody should be using basically quantized plus adapters for almost everything and just downloading the adapters. And that should be much faster. So that's kind of the place we're trying to get to. It's difficult, you know, because like Kerem's been doing a lot of work with vLLM, for example. These inference engines are pretty complex bits of code. They have a whole lot of custom kernel stuff going on as well, as do the quantization libraries. So we've been working on, we're also quite a bit of collaborating with the folks who do HQQ, which is a really great quantization library and works super well. So yeah, there's a lot of other people outside Answer.AI that we're working with a lot who are really helping on all this performance optimization stuff, open source.

Swyx [00:36:27]: Just to follow up on merging models, I picked up there that you said nobody should be merging models. That's interesting because obviously a lot of people are experimenting with this and finding interesting results. I would say in defense of merging models, you can do it without data. That's probably the only thing that's going for it.

Jeremy [00:36:45]: To explain, it's not that you shouldn't merge models. You shouldn't be distributing a merged model. You should distribute a merged adapter 99% of the time. And actually often one of the best things happening in the model merging world is actually that often merging adapters works better anyway. The point is, Sean, that once you've got your new model, if you distribute it as an adapter that sits on top of a quantized model that somebody's already downloaded, then it's a much smaller download for them.
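Why shipping an adapter is so much cheaper than shipping a merged model comes down to the LoRA math: the trained update is a low-rank product B·A added to the frozen base weights, so you distribute m·r + r·n numbers instead of m·n. A toy sketch of that forward pass under those assumptions (pure Python, hypothetical function names, not any library's actual API):

```python
def matmul(A, B):
    # Tiny dense matmul on lists-of-lists, just for illustration.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def lora_forward(x, W, A, B, alpha=1.0, rank=1):
    # Base weights W stay frozen (and, in QLoRA, quantized); only the
    # low-rank factors A (r x n) and B (m x r) are trained and shipped.
    base = matmul(x, W)
    delta = matmul(x, matmul(B, A))   # x @ (B @ A), the adapter's contribution
    s = alpha / rank                  # standard LoRA scaling
    return [[b + s * d for b, d in zip(br, dr)] for br, dr in zip(base, delta)]
```

Merging just folds s·(B·A) into W permanently; keeping the adapter separate preserves the small download and lets many adapters share one quantized base.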
And also the inference should be much faster because you're not having to transfer BF16 weights from HBM at all or ever load them off disk. You know, all the main weights are quantized and the only floating point weights are in the adapters. So that should make both inference and fine tuning faster. Okay, perfect.

Swyx [00:37:33]: We're moving on a little bit to the rest of the fast universe. I would have thought that, you know, once you started Answer.AI, that the sort of fast universe would be kind of on hold. And then today you just dropped fastlite and it looks like, you know, there's more activity going on in sort of Fastland.

Jeremy [00:37:49]: Yeah. So Fastland and Answerland are not really distinct things. Answerland is kind of like Fastland grown up and funded. They both have the same mission, which is to maximize the societal benefit of AI broadly. We want to create thousands of commercially successful products at Answer.AI. And we want to do that with like 12 people. So that means we need a pretty efficient stack, you know, like quite a few orders of magnitude more efficient, not just for creation, but for deployment and maintenance, than anything that currently exists. People often forget about the D part of our R&D firm. So we've got to be extremely good at creating, deploying and maintaining applications, not just models. Much to my horror, the story around creating web applications is much worse now than it was 10 or 15 years ago. If I say to a data scientist, here's how to create and deploy a web application, you know, either you have to learn JavaScript or TypeScript and all the complex libraries like React and stuff, and all the complex details around security and web protocol stuff, around how you then talk to a backend, and then all the details about creating the backend. You know, if that's your job and you have specialists who work in just one of those areas, it is possible for that to all work.
But compared to like, oh, write a PHP script and put it in the home directory that you get when you sign up to this shell provider, which is what it was like in the nineties, you know, here are those 25 lines of code and you're done, and now you can pass that URL around to all your friends, or put this .pl file inside the cgi-bin directory that you got when you signed up to this web host. So yeah, the thing I've been mainly working on the last few weeks is fixing all that. And I think I fixed it. I don't know if this is an announcement, but I'll tell you guys. So yeah, there's this thing called FastHTML, which basically lets you create a complete web application in a single Python file. Unlike excellent projects like Streamlit and Gradio, you're not working on top of a highly abstracted thing that's got nothing to do with web foundations. You're working with web foundations directly, but you're able to do it by using pure Python. There's no templating, there's no Jinja, there's no separate CSS and JavaScript files. It looks and behaves like a modern SPA web application. And you can create components for like DaisyUI, or Bootstrap, or Shoelace, or whatever fancy JavaScript and/or CSS Tailwind etc. library you like, but you can write it all in Python. You can pip install somebody else's set of components and use them entirely from Python. You can develop and prototype it all in a Jupyter notebook if you want to. It all displays correctly, so you can interactively do that. And then you mentioned fastlite, so specifically now if you're using SQLite in particular, it's like ridiculously easy to have that persistence, and all of your handlers will be passed database-ready objects automatically, that you can just call .delete, .update, .insert on. Yeah, you get sessions, you get security, you get all that. So again, like with most everything I do, it's very little code. It's mainly tying together really cool stuff that other people have written.
You don't have to use it, but a lot of the best stuff comes from its incorporation of HTMX, which to me is basically the thing that changes your browser to make it work the way it always should have. So it just does four small things, but those four small things are the things that are basically unnecessary constraints that HTML should never have had, so it removes the constraints. It sits on top of Starlette, which is a very nice kind of lower-level platform for building these kind of web applications. The actual interface matches as closely as possible to FastAPI, which is a really nice system for creating the kind of classic JSON API-type applications. And Sebastián, who wrote FastAPI, has been kind enough to help me think through some of these design decisions, and so forth. I mean, everybody involved has been super helpful. Actually, I chatted to Carson, who created HTMX, you know, about it. Some of the folks involved in Django, like everybody in the community I've spoken to, definitely realizes there's a big gap to be filled around, like, a highly scalable, web foundation-based, pure Python framework with a minimum of fuss. So yeah, I'm getting a lot of support and trying to make sure that FastHTML works well for people.

Swyx [00:42:38]: I would say, when I heard about this, I texted Alessio. I think this is going to be pretty huge. People consider Streamlit and Gradio to be the state of the art, but I think there's so much to improve, and having what you call web foundations and web fundamentals at the core of it, I think, would be really helpful.

Jeremy [00:42:54]: I mean, it's based on 25 years of thinking and work for me. So like, FastMail was built on a system much like this one, but that was in Perl. And so I spent, you know, 10 years working on that. We had millions of people using that every day, really pushing it hard. And I really always enjoyed working in that. Yeah.
So, you know, and obviously lots of other people have done like great stuff, and particularly HTMX. So I've been thinking about like, yeah, how do I pull together the best of the web framework I created for FastMail with HTMX? There's also things like PicoCSS, which is the CSS system which, by default, FastHTML comes with. Although, as I say, you can pip install anything you want to, but it makes it like super easy to, you know, so we try to make it so that just out of the box, you don't have any choices to make. Yeah. You can make choices, but for most people, you just, you know, it's like the PHP-in-your-home-directory thing. You just start typing and just by default, you'll get something which looks and feels, you know, pretty okay. And if you want to then write a version of Gradio or Streamlit on top of that, you totally can. And then the nice thing is if you then write it in kind of the Gradio equivalent, which will be, you know, I imagine we'll create some kind of pip-installable thing for that. Once you've outgrown, or if you outgrow that, it's not like, okay, throw that all away and start again with this like whole separate language. It's this kind of smooth, gentle path that you can take step-by-step, because it's all just standard web foundations all the way, you know.

Swyx [00:44:29]: Just to wrap up the sort of open source work that you're doing, you're aiming to create thousands of projects with a very, very small team. I haven't heard you mention once AI agents or AI developer tooling or AI code maintenance. I know you're very productive, but you know, what is the role of AI in your own work?

Jeremy [00:44:47]: So I'm making something. I'm not sure how much I want to say just yet.

Swyx [00:44:52]: Give us a nibble.

Jeremy [00:44:53]: All right. I'll give you the key thing. So I've created a new approach. It's not called prompt engineering. It's called dialogue engineering. But I'm creating a system for doing dialogue engineering.
It's currently called AI Magic. I'm doing most of my work in this system and it's making me much more productive than I was before I used it. So I always just build stuff for myself and hope that it'll be useful for somebody else. Think about ChatGPT with Code Interpreter, right? The basic UX is the same as a 1970s teletype, right? So if you wrote APL on a teletype in the 1970s, you typed onto a thing, your words appeared at the bottom of a sheet of paper, and you'd like hit enter and it would scroll up. And then the answer from APL would be printed out, scroll up, and then you would type the next thing. And like, which is also the way, for example, a shell works, like bash or zsh or whatever. It's not terrible, you know, like we all get a lot done in these like very, very basic teletype-style REPL environments, but I've never felt like it's optimal, and everybody else has just copied ChatGPT. So it's also the way Bard and Gemini work. It's also the way the Claude web app works. And then you add Code Interpreter. And the most you can do is to like plead with ChatGPT to write the kind of code I want. It's pretty good for very, very, very beginner users who like can't code at all, like by default now the code's even hidden away, so you never even have to see it ever happened. But for somebody who's like wanting to learn to code or who already knows a bit of code or whatever, it seems really not ideal. So okay, that's one end of the spectrum. The other end of the spectrum, which is where Sean's work comes in, is, oh, you want to do more than ChatGPT? No worries. Here is Visual Studio Code. I run it. There's an empty screen with a flashing cursor. Okay, start coding, you know. And it's like, okay, you can use systems like Sean's or like Cursor or whatever to be like, okay, Cmd-K in Cursor, like, brings up a prompt form and blah, blah, blah.
But in the end, it's like a convenience over the top of this incredibly complicated system that full-time sophisticated software engineers have designed over the past few decades, in a totally different environment, as a way to build software, you know. And so we're trying to like shoehorn AI into that. And it's not easy to do. And I think there are like much better ways of thinking about the craft of software development in a language model world, to be much more interactive, you know. So the thing that I'm building is neither of those things. It's something between the two. And it's built around this idea of crafting a dialogue, you know, where the outcome of the dialogue is the artifacts that you want, whether it be a piece of analysis or whether it be a Python library or whether it be a technical blog post or whatever. So as part of building that, I've created something called Claudette, which is a library for Claude. I've created something called Cosette, which is a library for OpenAI. They're libraries which are designed to make those APIs much more usable, much easier to use, much more concise. And then I've written AI Magic on top of those. And that's been an interesting exercise, because I did Claudette first, and I was looking at what Simon Willison did with his fantastic LLM library. And his library is designed around like, let's make something that supports all the LLM inference engines and commercial providers. I thought, okay, what if I did something different, which is like make something that's as Claude-friendly as possible and forget everything else. So that's what Claudette was. So for example, one of the really nice things in Claude is prefill. By telling Claude that this is how your response starts, there's a lot of powerful things you can take advantage of. So yeah, I created Claudette to be as Claude-friendly as possible.
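The prefill trick Jeremy describes amounts to ending the request with a partial assistant message, which the model must then continue from verbatim. A sketch of what such a request payload looks like, following the general shape of Anthropic's Messages API (an illustrative data structure only, with a hypothetical helper name, not Claudette's actual code):

```python
def build_prefilled_request(model, user_prompt, prefill):
    # Prefill: the message list ends with a partial *assistant* turn, so
    # the model continues from exactly that text. Handy for forcing JSON
    # output, a code fence, or a particular opening phrase.
    return {
        "model": model,
        "max_tokens": 1024,
        "messages": [
            {"role": "user", "content": user_prompt},
            {"role": "assistant", "content": prefill},
        ],
    }
```

For example, prefilling with `{"fruits": [` strongly constrains the continuation to valid JSON of that shape, without any extra prose before it.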
And then after I did that, and then particularly with GPT-4o coming out, I kind of thought, okay, now let's create something that's as OpenAI-friendly as possible. And then I tried to look to see, well, where are the similarities and where are the differences? And now can I make them compatible in places where it makes sense for them to be compatible, without losing out on the things that make each one special for what they are? So yeah, those are some of the things I've been working on in that space. And I'm thinking we might launch AI Magic via a course called How to Solve It With Code. The name is based on the classic Polya book, you know, How to Solve It, which is one of the classic math books of all time, where we're basically going to try to show people how to solve challenging problems that they didn't think they could solve, without doing a full computer science course, by taking advantage of a bit of AI and a bit of like practical skills, particularly for this like whole generation of people who are learning to code with and because of ChatGPT. Like I love it. I know a lot of people who didn't really know how to code, but they've created things because they use ChatGPT, but they don't really know how to maintain them or fix them or add things to them that ChatGPT can't do, because they don't really know how to code. And so this course will be designed to show you how you can like either become a developer who can supercharge their capabilities by using language models, or become a language-model-first developer who can supercharge their capabilities by understanding a bit about process and fundamentals.

Alessio [00:50:19]: Nice. That's a great spoiler. You know, I guess the fourth time you're going to be on Latent Space, we're going to talk about AI Magic. Jeremy, before we wrap, this was just a great run through everything.
What are the things that when you next come on the podcast in nine, 12 months, we're going to be like, man, Jeremy was like really ahead of it? Like, is there anything that you see in the space that maybe people are not talking about enough? You know, what's the next company that's going to fall, like have drama internally, anything in your mind?

Jeremy [00:50:47]: You know, hopefully we'll be talking a lot about FastHTML and hopefully the international community that at that point has come up around that. And also about AI Magic and about dialogue engineering. Hopefully dialogue engineering catches on, because I think it's the right way to think about a lot of this stuff. What else? Just trying to think about it all on the research side. Yeah. I think, you know, I mean, we've talked about a lot of it. Like I think encoder-decoder architectures, encoder-only architectures, hopefully we'll be talking about the whole re-interest in BERT that BERT24 stimulated.

Swyx [00:51:17]: There's a state space model that came out today that might be interesting for this general discussion. One thing that stood out to me with Cartesia's blog post was that they were talking about real-time ingestion, billions and trillions of tokens, and keeping that context, obviously in the state space that they have.

Jeremy [00:51:34]: Yeah.

Swyx [00:51:35]: I'm wondering what your thoughts are, because you've been entirely transformers the whole time.

Jeremy [00:51:38]: Yeah. No. So obviously my background is RNNs and LSTMs. Of course. And I'm still a believer in the idea that state is something you can update, you know? So obviously Sepp Hochreiter came out with xLSTM recently. Oh my God. Okay. Another whole thing we haven't talked about, just somewhat related. I've been going crazy for like a long time about like, why can I not pay anybody to save my KV cache? I just ingested the Great Gatsby or the documentation for Starlette or whatever, you know, I'm sending it as my prompt context.
Why are you redoing it every time? So Gemini is about to finally come out with KV caching, and this is something that Austin, actually, in gemma.cpp had had on his roadmap for years, well not years, months, a long time. The idea is that the KV cache is a third thing, right? So there's RAG, you know, there's in-context learning, you know, and prompt engineering, and there's KV cache creation. I think it creates like a whole new class almost of applications, or techniques, where, you know, for me, for example, I very often work with really new libraries, or I've created my own library that I'm now writing with rather than on. So I want all the docs in my new library to be there all the time. So I want to upload them once, and then we have a whole discussion about building this application using FastHTML. Well, nobody's got FastHTML in their language model yet. I don't want to send all the FastHTML docs across every time. So one of the things I'm looking at doing in AI Magic actually is taking advantage of some of these ideas, so that you can have the documentation of the libraries you're working on be kind of always available. Something over the next 12 months people will be spending time thinking about is where to use RAG, where to use fine-tuning, where to use KV cache storage, you know. And how to use state, because in state space models and xLSTM, again, state is something you update. So how do we combine the best of all of these worlds?

Alessio [00:53:46]: And Jeremy, I know before you talked about how some of the autoregressive models are not maybe a great fit for agents. Any other thoughts on like JEPA, diffusion for text, any interesting thing that you've seen pop up?

Jeremy [00:53:58]: In the same way that we probably ought to have state that you can update, i.e.
xLSTM and state space models, in the same way that a lot of things probably should have an encoder, JEPA and diffusion both seem like the right conceptual mapping for a lot of things we probably want to do. So the idea of like, there should be a piece of the generative pipeline which is like thinking about the answer and coming up with a sketch of what the answer looks like before you start outputting tokens. That's where it kind of feels like diffusion ought to fit, you know. And diffusion is, because it's not autoregressive, it's like, let's try to gradually de-blur the picture of how to solve this. So this is also where dialogue engineering fits in, by the way. So with dialogue engineering, one of the reasons it's working so well for me is I use it to kind of craft the thought process before I generate the code, you know. So yeah, there's a lot of different pieces here and I don't know how they'll all kind of exactly fit together. I don't know if JEPA is going to actually end up working in the text world. I don't know if diffusion will end up working in the text world, but they seem to be trying to solve a class of problem which is currently unsolved.

Alessio [00:55:13]: Awesome, Jeremy. This was great, as usual. Thanks again for coming back on the pod and thank you all for listening.

Yeah, that was fantastic. Get full access to Latent Space at www.latent.space/subscribe

Atareao con Linux
ATA 616 Programming videos and Rust resources

Atareao con Linux

Play Episode Listen Later Aug 5, 2024 19:24


Some resources for learning to program in #rust, plus your opinion on programming videos in #python, #bash and #rust, and other topics like #vscode. This time, I want to share some reflections and progress on the projects I have been developing. Although I have had some setbacks and unresolved debates, I have stayed committed to consolidating my programming knowledge, especially in Rust, a language I have been working with for about three years. Over the last few days, I have been working intensively on exercises and exploring new ways of learning and teaching, both for Rust and for other languages like Python and Bash. In this episode, I want to tell you about my progress with the Rustlings exercises, a series of challenges designed to improve your Rust skills. I also want to discuss the possibility of creating new videos, covering Rust as well as Python and Bash, but for that it is essential to have feedback and to know whether this would really be of interest or not. I also want to address some topics related to programming tools such as Visual Studio Code, and share some resources for anyone who wants to go deeper into Rust. As always, I am interested in hearing your opinions and comments to help decide the direction of my next videos. So, without further ado, let's get to it! More information, links and notes at https://atareao.es/podcast/616

Salesforce Developer Podcast
221: MuleSoft Anypoint Code Builder with Alex Martinez

Salesforce Developer Podcast

Play Episode Listen Later Jul 8, 2024 17:32


Join us as we welcome Alex Martinez to discuss the latest updates on Anypoint Code Builder (ACB), a new IDE based on Visual Studio Code, which is set to replace the old Eclipse-based Anypoint Studio. Alex sheds light on the significant release that occurred on June 4th, 2024, bringing notable improvements to DataWeave and MUnit. You'll hear about the benefits of transitioning to a modern IDE, the enhanced capabilities for running MUnit tests, and the introduction of the DataWeave Expression Builder. Listen in as we explore the recent enhancements in the MuleSoft ecosystem, including the ability to select different Java versions within Anypoint Studio and Anypoint Code Builder (ACB). We'll discuss the dual flavors of ACB — local and web-based IDEs — and their respective advantages, such as ease of configuration and flexibility for specific projects. This episode promises an exciting look ahead at future updates and the ongoing evolution of MuleSoft tools, sure to spark your interest in revisiting and exploring these new capabilities further.

Show Highlights:
- Overview of ACB as a Visual Studio Code-based IDE
- Enhancements in DataWeave and MUnit within ACB
- User-friendly features of the DataWeave Expression Builder
- Dual options of local and web-based IDEs in ACB
- Support for different Java versions in both Anypoint Studio and ACB
- New features, including async API support and improved API kit configurations

Links:
- MuleSoft Anypoint Code Builder (ACB) Release Notes - https://docs.mulesoft.com/release-notes/code-builder/acb-release-notes
- Anypoint Code Builder June '24 Release Playlist - https://www.youtube.com/playlist?list=PLiONnRuKRuJCtQBGW9Qcrp_tCMgBmg0-F

airhacks.fm podcast with adam bien
OpenRewrite: Transforming Java Code at Scale

airhacks.fm podcast with adam bien

Play Episode Listen Later Jul 7, 2024 47:33


An airhacks.fm conversation with Jonathan Schneider (@jon_k_schneider) about: OpenRewrite as an open-source tool for code transformation using lossless semantic trees (LSTs), recipes as programs that manipulate the LST, YAML configuration for defining recipes, dry run and in-place code modification options, separation of open-source and commercial aspects of the project, Moderne as a SaaS platform for large-scale code analysis and transformation, visualization features in Moderne including dependency usage violin charts, impact analysis capabilities, organizational structure in Moderne for managing large codebases, integration of OpenRewrite in various IDEs and tools including Amazon Q Code Transformer, IntelliJ IDEA, and Visual Studio Code, the business model of open-source and commercial offerings, the genesis of OpenRewrite from Gradle Lint in 2015-2016, recent momentum in adoption, Jonathan's background with micrometer project, discussion about IDEs including Visual Studio Code and IntelliJ IDEA, potential future topics including Micrometer and Spinnaker Jonathan Schneider on twitter: @jon_k_schneider

Atareao con Linux
ATA 603 Visual Studio Code and Neovim with AI in Docker

Atareao con Linux

Play Episode Listen Later Jun 17, 2024 21:20


How to use artificial intelligence (#IA) in #Docker with Visual Studio Code #VSCode and #Neovim to help document code in #Python and #Rust. Extensions. After my first proof of concept using local AI with Docker, I thought it was time to turn this intelligence thing into something practical that is genuinely useful, for me and for everyone, of course. In that sense, as I have mentioned on more than one occasion, the point is to delegate to the AI those tedious jobs that do not really add value for us, so we can devote our efforts to what really interests us. So, as a first step, I will start with the programming side. Clearly, the goal is not for the AI to program for me, but for the AI to take care of what I do not want to do, or rather, what I simply do not do, which is basically writing documentation and useful commit messages. I am aware that documentation is really important, but right now I am just not doing it, so something is better than nothing. That way, even if I only review the generated comments, I will already be doing a bit more. So, in this podcast episode, I will talk about the tests I have been running with Visual Studio Code and Neovim. More information, links and notes at https://atareao.es/podcast/603

Software Defined Talk
Episode 469: Amanda K. Silver on Developer Tools

Software Defined Talk

Play Episode Listen Later May 31, 2024 46:31


Matt interviews Amanda K. Silver, Corporate Vice President in the Developer Division at Microsoft. They discuss the latest developments with Visual Studio Code, GitHub Copilot, and why developers only want to see live demos. Plus, some thoughts on tiny houses and Murphy beds vs. hammocks. Show Links visualstudio.com (https://t.co/QHl5wYlaUn) Contact Amanda LinkedIn: Amanda K. Silver (https://www.linkedin.com/in/amandaksilver/) Twitter: @amandaksilver (https://twitter.com/amandaksilver?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor) SDT News & Hype Join us in Slack (http://www.softwaredefinedtalk.com/slack). Get a SDT Sticker! Send your postal address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and we will send you free laptop stickers! Follow us: Twitch (https://www.twitch.tv/sdtpodcast), Twitter (https://twitter.com/softwaredeftalk), Instagram (https://www.instagram.com/softwaredefinedtalk/), Mastodon (https://hachyderm.io/@softwaredefinedtalk), BlueSky (https://bsky.app/profile/softwaredefinedtalk.com), LinkedIn (https://www.linkedin.com/company/software-defined-talk/), TikTok (https://www.tiktok.com/@softwaredefinedtalk), Threads (https://www.threads.net/@softwaredefinedtalk) and YouTube (https://www.youtube.com/channel/UCi3OJPV6h9tp-hbsGBLGsDQ/featured). Use the code SDT to get $20 off Coté's book, Digital WTF (https://leanpub.com/digitalwtf/c/sdt), so $5 total. Become a sponsor of Software Defined Talk (https://www.softwaredefinedtalk.com/ads)! Photo Credit (https://unsplash.com/photos/brown-wooden-house-in-the-middle-of-forest-during-daytime-4S6FmLPEP6A) Special Guest: Amanda K. Silver.

Develpreneur: Become a Better Developer and Entrepreneur
Short Coding Videos: A Career Booster for Developers

Develpreneur: Become a Better Developer and Entrepreneur

Play Episode Listen Later May 14, 2024 28:46


This episode covers a key aspect of developer career advancement: short coding videos. Drawing from "Source Code for Happiness," we discuss how skills, projects, and hustles shape careers. We'll be focusing on the mindset behind creating coding video shorts, as well as offering a glimpse into the potential career boost this practice provides.

Getting Started: The Accessibility of Short Coding Videos
With the ubiquity of smartphones and PCs equipped with cameras, creating coding videos has never been more accessible. Whether you're learning Python, SQL, or building apps, the barrier to entry is minimal. You can showcase skills by recording and narrating your coding process. Documenting projects this way enhances coding abilities. Simply hit record to highlight your skills.

Verbalizing Your Thoughts: Why It Matters
Verbalizing thoughts while coding helps with problem-solving and improves the ability to articulate decisions and approaches. This practice prepares you to explain code rationale professionally and fosters crucial communication skills for collaborative work environments.

Building Your Brand: Leveraging Coding Videos for Career Growth
Coding videos are more than instructional content: they build your professional brand. Embedded in blogs, used as lead magnets, or posted on YouTube, they showcase coding skills. These videos tangibly demonstrate proficiency. They also provide valuable references for future projects and troubleshooting.

The IDE Environment Debate: Finding Your Comfort Zone
While IDE choice varies, using industry-standard options like Visual Studio Code or PyCharm offers familiarity. These align with real-world development practices potential employers recognize. Ultimately, choose an IDE based on personal comfort and efficient workflow. This ensures seamless recording sessions and effective code demonstrations.
Structuring Your Videos: Balancing Preparation and Authenticity
Creating short coding videos requires balancing preparation and spontaneity. Scripting or outlining code segments streamlines recording, while leaving room for organic exploration and problem-solving adds authenticity. Find a workflow that suits your style while ensuring clarity and coherence for viewers.

GitHub Integration: Sharing and Showcasing Your Work
Integrating GitHub with your coding videos adds professionalism and documentation. Sharing code repositories gives viewers access to a project's evolution, commit history, and supplementary materials such as README files and other documentation. This transparent approach enhances credibility and fosters community engagement.

Embracing the Potential of Short Coding Videos
In conclusion, short coding videos are a multifaceted tool for developer career advancement. Benefits include honing coding skills, building a professional brand, and fostering community engagement. Embracing this practice and integrating it into your development journey unlocks opportunities for growth and recognition in the ever-evolving tech landscape.

Stay Connected: Join the Develpreneur Community
We invite you to join our community and share your coding journey with us. Whether you're a seasoned developer or just starting, there's always room to learn and grow together. Contact us at info@develpreneur.com with your questions, feedback, or suggestions for future episodes. Together, let's continue exploring the exciting world of software development.

Additional Resources
Create A Better Professional Network Through Planning
Be Intentional In Choosing Tasks For Career Growth
Turning Extra Effort Into A Better Career – Season Review
Behind the Scenes Podcast Video

RunAs Radio
PowerShell 7.4 with Sydney Smith

RunAs Radio

Play Episode Listen Later Apr 24, 2024 32:31


Have you downloaded the latest version of PowerShell? While at the MVP Summit in Redmond, Washington, Richard sat down with Sydney Smith to discuss some of the features in PowerShell 7.4. Sydney talks about the successful delivery of PSResourceGet and PSReadLine, two long-in-development features that have reached their so-called "1.0" state. The conversation also digs into the ongoing challenge of some sysadmins sticking with PowerShell 5.1, the last of the Windows-only versions. Today, PowerShell 7 has feature parity with 5.1 and many new features that improve the quality, security, and capabilities of PowerShell. Try the latest!

Links:
PowerShell 7.4
PowerShell on GitHub
PowerShell in Visual Studio Code
Get PowerShell

Recorded March 11, 2024

Laravel News Podcast
October in February, new terminals, and modularized Laravel

Laravel News Podcast

Play Episode Listen Later Feb 29, 2024 39:02


Jake and Michael discuss all the latest Laravel releases, tutorials, and happenings in the community.

This episode is sponsored by Sentry - code breaks, fix it faster. Don't just observe, take action today!

(06:33) - With Laravel 10.44 you can add Model Scopes and Observers using PHP Attributes
(13:45) - October CMS v3.6 Ships Today, Full of New Features
(17:13) - PhpStorm is getting a brand new terminal
(20:53) - Laracon EU Videos are now out
(22:24) - Easy management of your application settings with Setting Pro
(25:52) - Create elegant Discord bots with Laracord
(29:13) - Tempo: The easiest way to work with dates in JavaScript
(30:04) - Handle money transactions in Eloquent with Laravel Wallet
(31:39) - Modularize your Laravel application with the Modular package
(34:13) - Use Google's Gemini AI in Laravel
(34:59) - Add Kanban boards to your Laravel app in seconds
(36:02) - Essential plugins for PhpStorm users
(37:30) - Six essential plugins for Visual Studio Code