Podcasts about intellij

  • 90PODCASTS
  • 147EPISODES
  • 55mAVG DURATION
  • ?INFREQUENT EPISODES
  • Apr 20, 2025LATEST

POPULARITY

20172018201920202021202220232024


Best podcasts about intellij

Latest podcast episodes about intellij

Lenny's Podcast: Product | Growth | Career
Building a magical AI code editor used by over 1 million developers in four months: The untold story of Windsurf | Varun Mohan (co-founder & CEO)

Lenny's Podcast: Product | Growth | Career

Play Episode Listen Later Apr 20, 2025 74:06


Varun Mohan is the co-founder and CEO of Windsurf (formerly Codeium), an AI-powered development environment (IDE) that has been used by over 1 million developers in just four months and has quickly emerged as a leader in transforming how developers build software. Prior to finding success with Windsurf, the company pivoted twice—first from GPU virtualization infrastructure to an IDE plugin, and then to their own standalone IDE.In this conversation, you'll learn:1. Why Windsurf walked away from a profitable GPU infrastructure business and bet the company on helping engineers code2. The surprising UI discovery that tripled adoption rates overnight.3. The secret behind Windsurf's B2B enterprise plan, and why they invested early in an 80-person sales team despite conventional startup wisdom.4. How non-technical staff at Windsurf built their own custom tools instead of purchasing SaaS products, saving them over $500k in software costs5. Why Varun believes 90% of code will be AI-generated, but engineering jobs will actually increase6. How training on millions of incomplete code samples gives Windsurf an edge, and creates a moat long-term7. Why agency is the most undervalued and important skill in the AI era—Brought to you by:• Brex—The banking solution for startups• Productboard—Make products that matter• Coda—The all-in-one collaborative workspace—Where to find Varun Mohan:• X: https://x.com/_mohansolo• LinkedIn: https://www.linkedin.com/in/varunkmohan/—Where to find Lenny:• Newsletter: https://www.lennysnewsletter.com• X: https://twitter.com/lennysan• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/—In this episode, we cover:(00:00) Varun's background(03:57) Building and scaling Windsurf(12:58) Windsurf: The new purpose-built IDE to harness magic(17:11) The future of engineering and AI(21:30) Skills worth investing in(23:07) Hiring philosophy and company culture(35:22) Sales strategy and market position(39:37) JetBrains vs. VS Code: extensibility and enterprise adoption(41:20) Live demo: building an Airbnb for dogs with Windsurf(42:46) Tips for using Windsurf effectively(46:38) AI's role in code modification and review(48:56) Empowering non-developers to build custom software(54:03) Training Windsurf(01:00:43) Windsurf's unique team structure and product strategy(01:06:40) The importance of continuous innovation(01:08:57) Final thoughts and advice for aspiring developers—Referenced:• Windsurf: https://windsurf.com/• VS Code: https://code.visualstudio.com/• JetBrains: https://www.jetbrains.com/• Eclipse: https://eclipseide.org/• Visual Studio: https://visualstudio.microsoft.com/• Vim: https://www.vim.org/• Emacs: https://www.gnu.org/software/emacs/• Lessons from a two-time unicorn builder, 50-time startup advisor, and 20-time company board member | Uri Levine (co-founder of Waze): https://www.lennysnewsletter.com/p/lessons-from-uri-levine• IntelliJ: https://www.jetbrains.com/idea/• Julia: https://julialang.org/• Parallel computing: https://en.wikipedia.org/wiki/Parallel_computing• Douglas Chen on LinkedIn: https://www.linkedin.com/in/douglaspchen/• Carlos Delatorre on LinkedIn: https://www.linkedin.com/in/cadelatorre/• MongoDB: https://www.mongodb.com/• Cursor: https://www.cursor.com/• GitHub Copilot: https://github.com/features/copilot• Llama: https://www.llama.com/• Mistral: https://mistral.ai/• Building Lovable: $10M ARR in 60 days with 15 people | Anton Osika (CEO and co-founder): https://www.lennysnewsletter.com/p/building-lovable-anton-osika• Inside Bolt: From near-death to ~$40m ARR in 5 months—one of the fastest-growing products in history | Eric Simons (founder & CEO of StackBlitz): https://www.lennysnewsletter.com/p/inside-bolt-eric-simons• Behind the product: Replit | Amjad Masad (co-founder and CEO): https://www.lennysnewsletter.com/p/behind-the-product-replit-amjad-masad• React: https://react.dev/• Sonnet: https://www.anthropic.com/claude/sonnet• OpenAI: https://openai.com/• FedRamp: https://www.fedramp.gov/• Dario Amodei on LinkedIn: https://www.linkedin.com/in/dario-amodei-3934934/• Amdahl's law: https://en.wikipedia.org/wiki/Amdahl%27s_law• How to win in the AI era: Ship a feature every week, embrace technical debt, ruthlessly cut scope, and create magic your competitors can't copy | Gaurav Misra (CEO and co-founder of Captions): https://www.lennysnewsletter.com/p/how-to-win-in-the-ai-era-gaurav-misra—Recommended book:• Fall in Love with the Problem, Not the Solution: A Handbook for Entrepreneurs: https://www.amazon.com/Fall-Love-Problem-Solution-Entrepreneurs/dp/1637741987—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.—Lenny may be an investor in the companies discussed. Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe

GOTO - Today, Tomorrow and the Future
Balancing Tech & Human Creativity • Susanne Kaiser, Michaela Greiler, Adele Carpenter, Daniel Terhorst-North & Simon Wardley

GOTO - Today, Tomorrow and the Future

Play Episode Listen Later Mar 28, 2025 26:09 Transcription Available


This interview was recorded for GOTO Unscripted.https://gotopia.techRead the full transcription of this interview hereSusanne Kaiser - Independent Tech Consultant & Author of "Architecture for Flow"Michaela Greiler - Code Reviews Expert, Trainer & ConsultantAdele Carpenter - Software Engineer at TriforkDaniel Terhorst-North - Originator of Behavior Driven Development (BDD) & Principal at Dan North & AssociatesSimon Wardley - Thought Lord, Mapper, Mostly GoodRESOURCESSusannehttps://mastodon.social/@suksrhttps://susannekaiser.netMichaelahttps://twitter.com/mgreilerhttps://michaelagreiler.comAdelehttps://bsky.app/profile/97adele.bsky.socialDanielhttps://bsky.app/profile/suksr.bsky.socialhttp://dannorth.net/blogSimonhttps://bsky.app/profile/swardley.bsky.socialhttp://blog.gardeviance.orgDESCRIPTIONExplore the rich tapestry of what it truly means to support developers.The conversation took a forward-looking turn as they examined the role of AI, not as a looming replacement, but as a powerful ally that enhances human creativity, much like past innovations that revolutionized workflows. They showcased how intuitive design—exemplified by tools like IntelliJ—can make a developer's experience seamless and enjoyable.RECOMMENDED BOOKSSusanne Kaiser • Adaptive Systems With Domain-Driven Design, Wardley Mapping & Team TopologiesSimon Wardley • Wardley MapsSimon Wardley • Wardley Mapping, The KnowledgeMatthew Casperson • DevEx as a ServiceChristian Clausen • Five Lines of CodeDavid Anderson, Marck McCann & Michael O'Reilly • TBlueskyTwitterInstagramLinkedInFacebookCHANNEL MEMBERSHIP BONUSJoin this channel to get early access to videos & other perks:https://www.youtube.com/channel/UCs_tLP3AiwYKwdUHpltJPuA/joinLooking for a unique learning experience?Attend the next GOTO conference near you! Get your ticket: gotopia.techSUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted daily!

Spring Office Hours
S3E42 - IntelliJ Tips & Tricks with Siva Reddy

Spring Office Hours

Play Episode Listen Later Dec 10, 2024 59:01


Join Dan Vega and DaShaun Carter as they welcome Siva Prasad Reddy K, Developer Advocate at JetBrains. Join us for the latest updates from the Spring ecosystem. In this episode, Siva will be sharing tips and tricks for IntelliJ to help you get the most out of your development experience. You can participate in our live stream to ask questions or catch the replay on your preferred podcast platform.Show Notes: JetBrains BlogSivaLabs

Les Cast Codeurs Podcast
LCC 318 - La faille sous la faille sous la faille

Les Cast Codeurs Podcast

Play Episode Listen Later Nov 8, 2024 83:53


Cet épisode est relativement pauvre en IA, ouaissssssss ! Mais il nous reste plein de Spring, plein de failles, plein d'OpenTelemetry, un peu de versionnage sémantique, une astuce Git et bien d'autres choses encore. Enregistré le 8 novembre 2024 Téléchargement de l'épisode LesCastCodeurs-Episode–318.mp3 News Langages Le createur de Fernflower in decompilateur qui a relancé l'outillage autour de Java 8 est mort, un hommage d'IntelliJ IDEA https://blog.jetbrains.com/idea/2024/11/in-memory-of-stiver/ les decompilateurs s'appuyaient sur des patterns reconnus et étaient fragiles et incomplets surtout quand Java 8 a changé le pattern try catch et ajouté des concepts comme les annotations le champ était moribond quand Stiver s'est lancé dommage l'article n'explique pas comment le control-flow graph est genere a partir du bytecode pour ameliorer la decompilation Librairies On peut maintenant utiliser Jakarta Data Repository dans Quarkus https://in.relation.to/2024/11/04/data-in-quarkus/ petit article avec un projet example aussi un lien sur la presentation de Jakarta Data par Gavin à Devoxx Belgique Quarkus 3.16 https://quarkus.io/guides/opentelemetry-logging logs distribués avec OpenTelemetry (preview) deserialiseurs Jackson sans reflection des améliorations dans la stack de sécurité TLS registry a ratjouté graphql client et keycloak admin client LEs logs des container devservice et des access http sont visible dans la DevUI Les extensions peuvent maintenant ecrire leur doc en markdown (c'etait juste asciidoc avant) Un artcile sur comment débuter en Spring Batch https://www.sfeir.dev/back/planifier-des-taches-avec-spring-batch/ Le support OAuth2 pour RestClient arrive dans Security 6.4 / Boot 3.4. Plus de hack de WebClient dans vos applications Spring-Web ! https://spring.io/blog/2024/10/28/restclient-support-for-oauth2-in-spring-security–6–4 RestClient a été ajouté dans Spring Framework 6.1 API Fluide Spring Security 6.4 simplifie la configuration OAuth2 avec le nouveau client HTTP synchrone RestClient. RestClient permet des requêtes de ressources sans dépendances réactives, alignant la configuration entre applications servlet et réactives. La mise à jour facilite la migration depuis RestTemplate et ouvre la voie à des scénarios avancés. Marre des microservices ? Revenez au monoliths avec Spring Modulith 1.3RC1, 1.2.5 et 1.1.10 https://spring.io/blog/2024/10/28/spring-modulith–1–3-rc1–1–2–5-and–1–1–10-released Spring Modulith 1.3 RC1, 1.2.5, and 1.1.10 sont disponibles. La version 1.3 RC1 inclut des nouvelles fonctionnalités : archiving event publication completion mode compatibilité avec MariaDB et Oracle avec JDBC-based event publication registry Possibilité d'externaliser des événements dans des MessageChannels de Spring. Expressions SpEL dans @Externalized validation d'architecture technique jMolecules. Les versions 1.2.5 et 1.1.10 apportent des correctifs et mises à jour de dépendances. Spring gRPC 0.1 est sorti https://github.com/spring-projects-experimental/spring-grpc c'est tout nouveau et explorationel si c'est un probleme qui vous gratte, ca vaut le coup de jeter un coup d'oeil et participer. Spring Boot 3.3 Integrer Spring avec Open Telemetry (OTLP protocole) https://spring.io/blog/2024/10/28/lets-use-opentelemetry-with-spring rappel de la valeur de ce standard Open Telemetry comment l'utiliser dans vos projets Spring Comment utiliser ollama avec Spring AI https://spring.io/blog/2024/10/22/leverage-the-power-of–45k-free-hugging-face-models-with-spring-ai-and-ollama permet d'acceter aux 45k modeles de Hugging faces qui supportent le deploiement sur ollama il y a un spring boot starter c'est vraiment pour debuter Cloud Google Cloud Frankfort a subit 12h d'interruption https://t.co/VueiQjhCA3 Google Cloud a subi une panne de 12 heures dans la région europe-west3 (Francfort) le 24 octobre 2024. La panne, causée par une défaillance d'alimentation et de refroidissement, a affecté plusieurs services, y compris Compute Engine et Kubernetes Engine. Les utilisateurs ont rencontré des problèmes de création de VM, des échecs d'opérations et des retards de traitement. Google a conseillé de migrer les charges de travail vers d'autres zones. il y a eu une autre zone Europeenne pas mal affectée l'année dernière et des clients ont perdu des données :sweat: Web La fin de la World Wild Web Foundation https://www.theregister.com/2024/09/30/world_wide_web_foundation_closes/ la Fondation World Wide Web ferme ses portes. Les cofondateurs estiment que les problèmes auxquels est confronté le Web ont changé et que d'autres groupes de défense peuvent désormais prendre le relais. Ils estiment également que la priorité absolue doit être donnée à la passion de Tim Berners-Lee pour redonner aux individus le pouvoir et le contrôle de leurs données et pour construire activement des systèmes de collaboration puissants (Solid Protocol - https://solidproject.org/). Release du https://www.patternfly.org/ 6 Fw opensource pour faire de UI, sponsor RH Interessant à regarder Data et Intelligence Artificielle TSMC arrête des ventes à un client chinois qui aurait revenu un processeur à Huawei et utilise dans sa puce IA https://www.reuters.com/technology/tsmc-suspended-shipments-china-firm-after-chip-found-huawei-processor-sources–2024–10–26/ Taiwan Semiconductor Manufacturing Company (TSMC) a suspendu ses livraisons à Sophgo, un concepteur de puces chinois, après la découverte d'une puce fabriquée par TSMC dans un processeur AI de Huawei (Ascend 910B). Cette découverte soulève des préoccupations concernant des violations potentielles des contrôles d'exportation des États-Unis, qui restreignent Huawei depuis 2020. Sophgo, lié à Bitmain, a nié toute connexion avec Huawei et affirme se conformer aux lois applicables. Toutefois, l'incident a conduit à une enquête approfondie de TSMC et des autorités américaines et taïwanaises Open AI et Microsoft, de l'amour à la guerre https://www.computerworld.com/article/3593206/microsoft-and-openai-good-by-bromance-hel[…]m_source=Adestra&huid=4349eeff–5b8b–493d–9e61–9abf8be5293b on a bien suivi les chants d'amour entre Sam Altman et Satia Nadella ca c'est tendu ces derniers temps deja avec le coup chez openAI où MS avait sifflé la fin de la récré “on a le code, les données, l'IP et la capacité, on peut tout recrée” OpenAi a un competiteur de Copilot et essaie de courtises ses clients les apétits d'investissements d'OpenAI et une dispute sur la valeur de la aprt de MS qui a donné des crédits cloud semble etre aui coeur de la dispute du moment Debezium 3 est sorti https://debezium.io/blog/2024/10/02/debezium–3–0-final-released/ Java 17 minimum pour les connecteurs et 21 pour le serveur, l'extension quarkus outbox et pour l'operateur nettoyage des depreciations metriques par table maintenant support for mysql 9 y compris vector data type oracle, default mining strategie changée ehcache off-heap ajouté amelioarations diverses Oracle (offline RAC node flush, max string size for Extended PostgreSQL PGVector etc (Spanner, vitess, …) NotebookLlama: une version Open Source de NotebookLM https://github.com/meta-llama/llama-recipes/tree/main/recipes/quickstart/NotebookLlama Si vous avez été impressionné par les démo de Gemini Notebook, en créant des podcasts à partir de différentes resources, testez la version llama Tutoriel étape par étape pour transformer un PDF en podcast. Outillage Vous aimez Maven? Bien évidemment! Vous aimez asciidoctor? Absolument! Alors la version 3.1.0 du plugin asciidoctor pour maven est pour vous !! https://github.com/asciidoctor/asciidoctor-maven-plugin Le plugin permet soit de convertir des documents asciidoc de manière autonome, soit de les gérer via le site maven GitHub Universe: de l'IA, de l'IA et encore de l'IA https://github.blog/news-insights/product-news/universe–2024-previews-releases/ GitHub Universe 2024 présente les nouveautés de l'année, notamment la possibilité de choisir parmi plusieurs modèles d'IA pour GitHub Copilot (Claude 3.5, Gemini 1.5 Pro, OpenAI o1). Nouvelles fonctionnalités : GitHub Spark pour créer des micro-applications, révisions de code assistées par Copilot, sécurité renforcée avec Copilot Autofix. Simplification des workflows avec les extensions GitHub Copilot Facilitation de la création d'applications IA génératives avec GitHub Models Méthodologies Les blogs de developpeurs experts Java recommandés par IntelliJ https://blog.jetbrains.com/idea/2024/11/top-java-blogs-for-experienced-programmers/ pas forcement d'accord avec toute la liste mais elle donne de bonnes options si vous voulez lire plus de blogs Java Keycloak revient au semantic versioning après avoir suivi le versionage à la Google Chrome https://www.keycloak.org/2024/10/release-updates ne pas savoir si une mise a jour était retrocompatible était problématique pour les utilisateurs aussi les librairies clientes seront délivrées séparément et supporteront toutes les versions serveur de keycloak supportés Sécurité Un exemple d'attaque de secure supply chain théorique identifiée dans le quarkiverse et les détails de la résolution https://quarkus.io/blog/quarkiverse-and-smallrye-new-release-process/ dans le quarkiverse, les choses sont automatisées pour simplifier la vie des contributeurs d'extension occasionels mais il y avait un défaut, les secrets de signature et d'accès à maven central étaient des secrets d'organisation ce qui veut dire qu'un editeur d'extension malicieux pouvait ecrire un pluging ou un test qiu lisait ses secrets et pouvait livrer de faux artifacts la solution est de séparer la construction des artifacts de l'etape de signature et de release sur maven central comme cela les cles ne sont plus accessible Avec Okta pus besoin de mot de passe quand tu as un identifiant long :face_with_hand_over_mouth: https://trust.okta.com/security-advisories/okta-ad-ldap-delegated-authentication-username/ LOL Une vulnérabilité a été découverte dans la génération de la clé de cache pour l'authentification déléguée AD/LDAP. Les conditions: MFA non utilisé Nom d'utilisateur de 52 caractères ou plus Utilisateur authentifié précédemment, créant un cache d'authentification Le cache a été utilisé en premier, ce qui peut se produire si l'agent AD/LDAP était hors service ou inaccessible, par exemple en raison d'un trafic réseau élevé L'authentification s'est produite entre le 23 juillet 2024 et le 30 octobre 2024 Fixé le 30 octobre, 2024 La revanche des imprimantes !! Linux ne les aime pas, et elles lui rendent bien. https://www.theregister.com/2024/09/26/cups_linux_rce_disclosed/ Après quelques heures / jours de rumeurs sur une faille 9.9/10 CVSS il s'avère que cela concerne que les système avec le système d'impression CUPS et cups-browsed Désactivez et/ou supprimez le service cups-browsed. Mettez à jour votre installation CUPS pour appliquer les mises à jour de sécurité lorsqu'elles sont disponibles. Envisagez de bloquer l'accès au port UDP 631 et également de désactiver le DNS-SD. Cela concerne la plupart des distributions Linux, certaines BSD, possiblement Google ChromeOS, Solaris d'Oracle et potentiellement d'autres systèmes, car CUPS est intégré à diverses distributions pour fournir la fonctionnalité d'impression. Pour exploiter cette vulnérabilité via internet ou le réseau local (LAN), un attaquant doit pouvoir accéder à votre service CUPS sur le port UDP 631. Idéalement, aucun de vous ne devrait exposer ce port sur l'internet public. L'attaquant doit également attendre que vous lanciez une tâche d'impression. Si le port 631 n'est pas directement accessible, un attaquant pourrait être en mesure de falsifier des annonces zeroconf, mDNS ou DNS-SD pour exploiter cette vulnérabilité sur un LAN. Loi, société et organisation La version 1.0 de la definition de l'IA l'Open Source est sortie https://siliconangle.com/2024/10/28/osi-clarifies-makes-ai-systems-open-source-open-models-fall-short/ L'Open Source Initiative (OSI) a clarifié les critères pour qu'un modèle d'IA soit considéré comme open-source : accès complet aux données de formation, au code source et aux paramètres d'entraînement. La plupart des modèles dits “open” comme ceux de Meta (Llama) et Stability AI (Stable Diffusion) ne respectent pas ces critères, car ils imposent des restrictions sur l'utilisation commerciale et ne rendent pas publiques les données de formation c'est au details de données de formation (donc pas forcement les données elle meme. “In particular, this must include: (1) the complete description of all data used for training, including (if used) of unshareable data, disclosing the provenance of the data, its scope and characteristics, how the data was obtained and selected, the labeling procedures, and data processing and filtering methodologies; (2) a listing of all publicly available training data and where to obtain it; and (3) a listing of all training data obtainable from third parties and where to obtain it, including for fee.” C'est en echo a la version d'open source AI de la linux fondation En parlant de cela un article sur l'open source washing dans les modèles https://www.theregister.com/2024/10/25/opinion_open_washing/ L'open washing désigne la pratique où des entreprises prétendent que leurs produits ou modèles sont open-source, bien qu'ils ne respectent pas les critères réels d'ouverture (transparence, accessibilité, partage des connaissances). De grandes entreprises comme Meta, Google et Microsoft sont souvent accusées d'utiliser cette stratégie, ce qui soulève des préoccupations concernant la clarté des définitions légales et commerciales de l'open source, surtout avec l'essor de l'IA. Rubrique débutant Un petit article fondamental sur REST https://www.sfeir.dev/rest-definition/ there de Roy Fielding en reaction aux protocoles lourds comme SOAP 5 verbes (GET PUT, POST. DELETE, PATCH) JSON mais pas que (XML et autre pas d'etat inter requete Ask Me Anything Morgan de Montréal Comment faire cohabiter plusieurs dépôts Git ? Je m'explique : dans mon entreprise, nous utilisons notre dépôt Git (Bitbucket) configuré pour notre dépôt d'entreprise. Lorsque je souhaite contribuer à un projet open source, je suis obligé de modifier ma configuration globale Git (nom d'utilisateur, email) pour correspondre à mon compte GitHub. Il arrive souvent que, lorsque je reviens pour effectuer un commit sur le dépôt d'entreprise, j'oublie que je suis en mode “open source”, ce qui entraîne l'enregistrement de mes configurations “open source” dans l'historique de Bitbucket… Comment gérez-vous ce genre de situation ? Comment gérer différents profiles git https://medium.com/@mrjink/using-includeif-to-manage-your-git-identities-bcc99447b04b Conférences La liste des conférences provenant de Developers Conferences Agenda/List par Aurélie Vache et contributeurs : 8 novembre 2024 : BDX I/O - Bordeaux (France) 13–14 novembre 2024 : Agile Tour Rennes 2024 - Rennes (France) 16–17 novembre 2024 : Capitole Du Libre - Toulouse (France) 20–22 novembre 2024 : Agile Grenoble 2024 - Grenoble (France) 21 novembre 2024 : DevFest Strasbourg - Strasbourg (France) 21 novembre 2024 : Codeurs en Seine - Rouen (France) 21 novembre 2024 : Agile Game Toulouse - Toulouse (France) 27–28 novembre 2024 : Cloud Expo Europe - Paris (France) 28 novembre 2024 : OVHcloud Summit - Paris (France) 28 novembre 2024 : Who Run The Tech ? - Rennes (France) 2–3 décembre 2024 : Tech Rocks Summit - Paris (France) 3 décembre 2024 : Generation AI - Paris (France) 3–5 décembre 2024 : APIdays Paris - Paris (France) 4–5 décembre 2024 : DevOpsRex - Paris (France) 4–5 décembre 2024 : Open Source Experience - Paris (France) 5 décembre 2024 : GraphQL Day Europe - Paris (France) 6 décembre 2024 : DevFest Dijon - Dijon (France) 19 décembre 2024 : Normandie.ai 2024 - Rouen (France) 22–25 janvier 2025 : SnowCamp 2025 - Grenoble (France) 30 janvier 2025 : DevOps D-Day #9 - Marseille (France) 6–7 février 2025 : Touraine Tech - Tours (France) 28 février 2025 : Paris TS La Conf - Paris (France) 20 mars 2025 : PGDay Paris - Paris (France) 25 mars 2025 : ParisTestConf - Paris (France) 3 avril 2025 : DotJS - Paris (France) 10–12 avril 2025 : Devoxx Greece - Athens (Greece) 16–18 avril 2025 : Devoxx France - Paris (France) 7–9 mai 2025 : Devoxx UK - London (UK) 16 mai 2025 : AFUP Day 2025 Lille - Lille (France) 16 mai 2025 : AFUP Day 2025 Lyon - Lyon (France) 16 mai 2025 : AFUP Day 2025 Poitiers - Poitiers (France) 11–13 juin 2025 : Devoxx Poland - Krakow (Poland) 12–13 juin 2025 : DevLille - Lille (France) 24 juin 2025 : WAX 2025 - Aix-en-Provence (France) 26–27 juin 2025 : Sunny Tech - Montpellier (France) 1–4 juillet 2025 : Open edX Conference - 2025 - Palaiseau (France) 18–19 septembre 2025 : API Platform Conference - Lille (France) & Online 6–10 octobre 2025 : Devoxx Belgium - Antwerp (Belgium) 9–10 octobre 2025 : Volcamp - Clermont-Ferrand (France) 16–17 octobre 2025 : DevFest Nantes - Nantes (France) 23–25 avril 2026 : Devoxx Greece - Athens (Greece) 17 juin 2026 : Devoxx Poland - Krakow (Poland) Nous contacter Pour réagir à cet épisode, venez discuter sur le groupe Google https://groups.google.com/group/lescastcodeurs Contactez-nous via twitter https://twitter.com/lescastcodeurs Faire un crowdcast ou une crowdquestion Soutenez Les Cast Codeurs sur Patreon https://www.patreon.com/LesCastCodeurs Tous les épisodes et toutes les infos sur https://lescastcodeurs.com/

Pi Tech
News: як ми пробуємо Cursor, як GitHub пробує Claude, як Claude пробує наші комп'ютери

Pi Tech

Play Episode Listen Later Nov 7, 2024 58:18


У цьому випуску наші ведучі продовжують обговорювати новини зі світу штучного інтелекту

Develpreneur: Become a Better Developer and Entrepreneur
Improving Coding Skills: Building Better Developers by Building Better Habits

Develpreneur: Become a Better Developer and Entrepreneur

Play Episode Listen Later Oct 31, 2024 23:11


In the latest episode of Building Better Developers, Rob and Michael continue their journey through Season 23, “Building Better Habits.” In Episdoe 5 the focusing on improving coding skills. Like diving into why developers should consistently refine techniques, stay updated with evolving coding standards, and explore new tools. This hosts go beyond just writing code; it's about forming habits that make continuous learning an essential part of a developer's daily routine. Tools and Techniques Rob shares insights on the importance of regularly updating coding habits and exploring new languages and techniques. While it's easy to rely on familiar coding methods, especially in stable languages like Java, doing so can lead to missed opportunities to improve coding skills and write more efficient code. Rob suggests developers make time to explore updates in the languages and tools they already use, as newer features and standards can bring significant performance and workflow benefits. The Importance of Tools for Improving Coding Skills Michael builds on this idea by discussing the importance of trying out different tools. Many developers grow accustomed to particular development environments like Visual Studio or IntelliJ. Becareful relying exclusively on one tool, because this can limit your skills as a developer. Instead, Michael recommends creating sample applications (like the kitchen sink app) to test how various development environments work. Experimenting with multiple IDEs (integrated development environments) can make developers more versatile, which is especially valuable as they shift projects or roles. Coding Standards and Best Practices Maintaining coding standards and regularly reviewing best practices are essential to improving coding skills. As Michael explains, developers' understanding of best practices should grow over time. Start by revisiting previous projects. This can be a great opportunity to enhance the quality of code. The kitchen sink app can be a helpful reference for testing and refining best practices, helping developers ensure that new projects begin with a foundation of high-quality, standardized code. Resources for Improving Coding Skills Rob offers practical resources to assist developers in their continuous learning. He mentions platforms like CodeSignal, where developers can engage in challenges and tutorials, practicing coding skills in real-time. This type of interactive platform mirrors real-world coding environments. This allows developers to get immediate feedback and making it ideal for those aiming to learn new languages or strengthen their problem-solving skills. He also encourages developers to explore podcasts, blogs, and forums to stay informed about trends. Also, look at changes in their coding languages and frameworks. Daily Commitment to Improving Coding Skills Spend 10–15 minutes daily learning something new about a coding environment, tool, or language. Rob suggests beginning with an overarching goal—such as exploring a recent update in a language—then breaking it down into small daily learning segments. This incremental approach to learning makes it easier to grasp new information without feeling overwhelmed and helps developers make continuous progress. Improving Coding Skills by Understanding Standards Michael emphasizes that improving coding skills goes beyond practice; understanding the “why” behind coding standards is essential. This deeper comprehension helps developers apply coding standards effectively and reduces the likelihood of coding “blindly” or by habit alone. With this approach, developers improve their code quality, problem-solving skills, and ability to communicate technical choices within their teams. The Episode's Challenge: A Week of Learning Challenge for Improving Coding Skills The episode wraps up with a learning challenge for the week: pick one coding or technology-related topic to explore. Rob and Michael recommend choosing a topic that complements the developer's current skill set, whether it's a new feature in a familiar language, articles on coding standards, or recent security protocols. By setting aside a small amount of time each day to learn something new, developers can gradually build knowledge and gain new insights for a week. Improving Coding Skills through Small, Daily Steps Episode 5 of Building Better Developers provides an actionable approach to continuous improvement for developers. By forming habits around learning and experimentation, Rob and Michael show that developers can stay sharp, relevant, and prepared for a fast-evolving tech industry. Their advice—taking small, daily steps toward learning something new—reminds developers that effective growth is as much about mindset as technical skill. Stay Connected: Join the Developreneur Community We invite you to join our community and share your coding journey with us. Whether you're a seasoned developer or just starting, there's always room to learn and grow together. Contact us at info@develpreneur.com with your questions, feedback, or suggestions for future episodes. Together, let's continue exploring the exciting world of software development. Additional Resources Incremental Improvement and Progress – Do It Now Mastering Skills In Under Ten Thousand Hours Self-Confidence That Comes From Incremental Improvement Implementing An Incremental Approach – Small Progress is Still Progress Building Better Habits Videos – With Bonus Content

The MongoDB Podcast
EP. 238 Advancing Developer Tools and Multi-Cluster Kubernetes: Insights from MongoDB Experts

The MongoDB Podcast

Play Episode Listen Later Oct 25, 2024 10:46


In this episode of the MongoDB Podcast, live from .local London, Shane McAllister and Luce Carter explore the latest advancements in MongoDB's developer tools and their new support for multi-cluster Kubernetes. They discuss how MongoDB integrates Kubernetes' scalability and resiliency, developer tools like GitHub Copilot, generative AI in MongoDB Charts, and IntelliJ's integration. Learn how these tools empower developers to build better, faster applications with ease. Plus, insights into the future of cloud-native applications and automation.

augustwenty podcast
Caffeine w/ DJ : Episode 4

augustwenty podcast

Play Episode Listen Later Sep 25, 2024 36:58


In episode four of Caffeine with DJ, DJ kicks off the conversation by welcoming guests Michael Hall and Gabriel Sheeley, as the trio launches into a laid-back discussion about their morning beverage choices. While DJ confesses he's sipping on a caffeine-free orange Sunkist, Michael and Gabriel opt for more traditional brews—a pour-over coffee and a trusty French press. The light-hearted banter continues with relatable talk of broken French presses, multitasking while parenting, and the challenges of remote work life. The episode quickly shifts to the tech world as the group dives into the complexities of software development. A significant portion of the conversation centers around the grievances they have with various programming practices and tools. Michael shares his initial resistance to omitting semicolons in JavaScript, only to find freedom once he let them go—until a new team brought them back into his workflow. The discussion evolves into the pros and cons of team-wide coding standards and formatting preferences. DJ chimes in, emphasizing the importance of editor consistency to avoid unnecessary code conflicts during pull requests. The developers then explore a broader range of topics, from the merits of different coding languages to the tools they use to streamline their work. Michael and Gabriel debate the differences between IntelliJ and VS Code, touching on their respective strengths in terms of built-in functionality, code quality suggestions, and testing integration. They also share their frustrations with opinionated linters that enforce strict formatting rules and how that can sometimes hinder creativity during the development process. Gabe humorously calls out a colleague for having an absurd number of browser tabs open, which leads to a broader discussion about productivity and workflow management in the remote work era. As the conversation unfolds, the team finds humor in the everyday challenges of software development, from debugging quirks to navigating complex tech stacks. Throughout the episode, DJ, Michael, and Gabriel keep the tone casual and relatable, often veering into humorous side notes about life as developers and parents. Their authentic, real-world experiences make this episode a must-listen for anyone in the tech industry, especially those who find themselves nodding along to the shared grievances about code, browsers, and caffeine choices. The mix of humor, tech talk, and relatable life experiences makes this episode a perfect blend of entertainment and insight.

Java Off-Heap
OffHeap 86. Of striking crowds and independence

Java Off-Heap

Play Episode Listen Later Aug 14, 2024 77:55


So last time we recorded we still have a running windows operating system and we haven't seen the Crowdstrike meltdown happen yet. We also hear of JRuby achieving independence, Quarkus moving into CommonHaus and Intel letting go people (are they losing the AI battle?). An interesting episode, drop in for a listen! https://www.javaoffheap.com/datadog We thank DataDogHQ for sponsoring this podcast episode DO follow us on twitter @offheap https://www.twitter.com/offheap News: Jakarta EE 11 release pushed out JDK 23 in ramp down phase 2 https://openjdk.org/projects/jdk/23/#Schedule JDK 24 in Early Access https://openjdk.org/projects/jdk/24/ Crowdstrike https://www.forbes.com/sites/kateoflahertyuk/2024/08/07/crowdstrike-reveals-what-happened-why-and-whats-changed/ Google Search Monopoly https://www.cnn.com/2024/08/05/business/google-loses-antitrust-lawsuit-doj/index.html Intel Layoffs https://techcrunch.com/2024/08/01/intel-to-lay-off-15000-employees/ Microsoft trust with OSS - https://dusted.codes/can-we-trust-microsoft-with-open-source Quarkus moving to Commonhause - https://quarkus.io/blog/quarkus-moving-to-commonhaus/ JRuby Independence Day https://blog.jruby.org/2024/07/independence-day https://news.ycombinator.com/item?id=40868321 IntelliJ 2024.2    Conferences: JChampions Conference 2024 Videos Available SpringOne Virtual Conference - August 26-28, 2024 Dev2Next Lone Tree, Colorado Sept 30 — Oct 3, 2024 https://www.dev2next.com/ Jconf.dev, Dallas, TX Sep 24, 26 (https://2024.jconf.dev/) Community Over Code NA Denver October 7-10 https://communityovercode.org/ Codemash - Jan14-17 https://codemash.org/ JakartaOne LiveStream December 3

Develpreneur: Become a Better Developer and Entrepreneur
Updating Developer Tools: Keeping Your Tools Sharp and Efficient

Develpreneur: Become a Better Developer and Entrepreneur

Play Episode Listen Later Jul 11, 2024 22:13


Welcome back to another episode of our developer journey series. Picture us with our parkas, navigating through the blizzards of the ever-evolving tech landscape. This episode is dedicated to an essential part of every developer's toolkit, "Updating Developer Tools: Keeping Your Tools Sharp and Efficient." The Constant Evolution of Developer Tools As developers, we're always in the thick of technological advancement. Whether you're working within a niche or covering a broad spectrum of tech, there's always something new on the horizon: new versions, libraries, problems, solutions, and add-ons. The saying goes, you could have ten years of experience or one year of experience ten times. We aim to ensure you accumulate a decade of progressive experience, not just repeated cycles. Importance of Up-to-Date Developer Tools Your developer tools are crucial for your growth and productivity. Many developers spend most of their time in one integrated development environment (IDE) like Eclipse, Visual Studio, or IntelliJ. If you're still using basic editors like Vi for complex tasks, it's time to upgrade. Modern IDEs offer features like syntax highlighting, autocomplete, and, increasingly, AI-assisted coding suggestions. These features can drastically reduce your time on repetitive tasks and boost your productivity. Embrace Modern Developer Tools and Debugging Modern IDEs come with powerful debugging tools. Setting up your environment to use debuggers effectively can save you countless hours. Instead of littering your code with print statements, use breakpoints and step through your code line by line. This method saves time and prevents production issues caused by excessive logging. Regularly Updating Developer Tools and Processes Twice a year, take inventory of your developer tools. Check your desktop and start menu for the applications you use frequently. Update them regularly to benefit from the latest features and improvements. Explore the extensions, plugins, and add-ons available for your IDEs and browsers. These can offer significant enhancements, from database management tools to integrations with email clients and version control systems. Automation and Efficiency with Developer Tools Evaluate your workflow periodically to identify repetitive tasks that can be automated. Numerous developer tools are available to streamline your processes. For instance, if you're dealing with complex database schemas, tools like DBVisualizer and DBSchema can simplify your life by visually representing your database, turning hours of manual work into minutes. Staying Current by Updating Developer Tools, Languages, and Environments Whether you're developing for mobile, web, or desktop, keeping up with the latest operating systems and platforms is crucial. Make sure your development environment is compatible with the latest versions. Developer tools like SonarQube can help you maintain code quality by performing static code analysis and highlighting potential vulnerabilities. Continuous Improvement and Security with Developer Tools Utilize developer tools that help you write better code. Modern IDEs can enforce coding standards, suggest improvements, and highlight potential security issues. Integrate these tools into your workflow to ensure you write clean, maintainable, and secure code. In summary, keeping your developer tools and processes up to date is essential for becoming a better developer. Regularly review and update your IDEs, libraries, and other tools. Embrace new features and automation to boost your productivity. Stay current with the latest platforms and use tools that help you write secure and efficient code. By investing time in maintaining and improving your toolkit, you'll be better prepared for the challenges of the ever-evolving tech landscape. Stay tuned for more insights and tips on our developer journey series. Your feedback and thoughts are always welcome as we explore and navigate this dynamic field together. Stay Connected: Join the Developreneur Community We invite you to join our community and share your coding journey with us. Whether you're a seasoned developer or just starting, there's always room to learn and grow together. Contact us at info@develpreneur.com with your questions, feedback, or suggestions for future episodes. Together, let's continue exploring the exciting world of software development. Additional Resources Navigating Communication Tools in Modern Workplaces Building a Portable Development Environment That is OS-agnostic Modern Tools For Monetizing Content The Developer Journey Videos – With Bonus Content Behind the Scenes Podcast Video – With Bonus Content

PHPUgly
386: PHPever Dream

PHPUgly

Play Episode Listen Later Jun 14, 2024 59:20


Links from the show:https://medium.com/better-programming/how-i-used-rust-to-enhance-my-php-8c84218e69edRedis Adopts Dual Source-Available Licensing - RedisReddit - Dive into anythingX (Formerly Twitter) Officially Allows Porn Under Updated PolicyUpdates for security issue affecting IntelliJ-based IDEs 2023.1+ and JetBrains GitHub Plugin | The Security BlogTwitter makes Likes private: X policy change follows update on pornTop 50+ AWS Services Explained in 10 Minutes - YouTubeThis episode of PHPUgly was sponsored by:Mailtrap.ioThis episode is sponsored by Mailtrap, an Email Delivery Platform that developers love. An email-sending solution with industry-best analytics, SMTP, and email API, SDKs for major programming languages, and 24/7 human support. Try for Free at MAILTRAP.IOhttps://l.rw.rw/phppodcastHoneybadger.ioBuilt for Developers. Monitoring doesn't have to be so complicated. That's why we built the monitoring tool we always wanted: a tool that's there when you need it, and gets out of your. Everything you need to keep production happy so that you can keep shipping. Deploy with confidence and be your team's DevOps hero.https://www.honeybadger.io/php[architect]php[architect] magazine is the only technical journal dedicated exclusively to the world of PHP. We are committed to spreading knowledge of best practices in PHP. With that purpose, the brand has expanded into producing a full line of books, hosting online and in-person web training, as well as organizing multiple conferences per year.https://www.phparch.comPHPUgly streams the recording of this podcast live. Typically every Thursday. Come and join us, and subscribe to our Youtube Channel, Twitch, or Twitter. Also, be sure to check out our Patreon Page.Twitter Account https://twitter.com/phpuglyMastodon Account https://phparch.social/@phpuglyHost:Eric Van Johnson | Mastodon: @eric@phpartch.socialJohn Congdon | Mastodon: @john@phpartch.socialStreams:Youtube ChannelTwitchPowered by RestreamPatreon PagePHPUgly Anthem by Harry Mack / Harry Mack Youtube ChannelThanks to all of our Patreon Sponsors:***** PATREON SUPPORTS SPONSOR LEVEL **Honeybadger (https://honeybader.io)** Patreon Supports **ButteryCrumpetFrank WDavid QShawnBoštjan OMarcusS FRodrigo CéBillyDarryl HKnut E B-HDmitri GElgimboMikePageDevKenrick BKalen JPeter AHolly SAlex BKevin YWayneJeroen FahinkleChris CSteve MRobert SEmily JJoe FAndrew WulrikJohn CEric MEd GlilHermitChampJeffrey DChris BTore BBek JDonald GPaul KDustin UMel SSeba RCoriNathan PLincoln M

Mosen At Large, with Jonathan Mosen
Episode 285:The positives and negatives of airline travel when blind, how's Voice Dream Reader doing as a subscription product, and the pros and cons of new Windows computers with ARM processors

Mosen At Large, with Jonathan Mosen

Play Episode Listen Later Jun 8, 2024 119:24


Here are the topics covered in this episode, and the time in the file for each. Welcome to 285 0:00 Something you should know about the PhonakTV Connect 3:07 Travelling on airlines when you're blind 11:48 Microsoft Natural Voices for SAPI 5 31:02 Check out the project here The VictorReader Stream 3 33:08 ID.ME 40:16 Cochlea implants, Android, and readers 42:16 Accessibility issues on the ElevenLabs website 44:31 No one should subscribe to Voice DreamReader 45:52 New Windows ARM computers 48:36 Glen Gordon from Vispero discusses JAWS and ARM processors 57:21 An update on Instacart 1:24:45 AI and how it got me in trouble at work 1:27:05 Missing phone calls on my iPhone 1:34:15 Google doesn't seem committed to accessibility 1:35:08 using IntelliJ as a blind person 1:39:04 The Rabbit R1 isn't accessible 1:40:29 iOS poor performance in Safari 1:47:54 Mantis carrying cases and Instacart accessibility 1:50:24 Thoughts on Glide 1:54:14 The Myriad broadcasting software 1:56:18 What was that thing you were talking to? 1:57:46 Closing and contact info 1:58:42

Programmers Quickie
Intellij 2023.3 Should you upgrade

Programmers Quickie

Play Episode Listen Later May 31, 2024 2:00


Les Cast Codeurs Podcast
LCC 312 - Dans la ferme de Mathurin IA IA IO !

Les Cast Codeurs Podcast

Play Episode Listen Later May 21, 2024 113:38


Dans ce long…. épisode, Emmanuel, Guillaume et Arnaud discutent de l'actualité avec Chicori (un runtime WASM en Java), Jakarta Data, Quarkus 3.10, Spring AI, Hibernate 6.5, mais aussi quelques retours aux basiques (timezones, rate limiting, …). Gros focus sur les nouveautés annoncées à Google I/O 2024 et dans l'écosystème IA en général avec les annonces d'OpenAI, Claude, Grok et d'autres. Différents outils sont aussi couverts comme Git, IntelliJ, ASDF, BLD, S3. Et enfin des sujets sur la haute disponibilité de Keycloak, la ré-indexation sans downtime, les challenges des implémentations alternatives, le mode vigilant dans GitHub, Redis et les changements de license, et les investissements de Microsoft et AWS en France dans le cadre du programme #ChooseFrance. N'hésitez pas à nous soumettre vos questions sur https://lescastcodeurs.com/ama nous y répondrons dans les prochains épisodes. Enregistré le 17 mai 2024 Téléchargement de l'épisode LesCastCodeurs-Episode-312.mp3 News Langages Un runtime WASM en Java https://github.com/dylibso/chicory Projet tout nouveau, encore loin de la maturité Mais intéressant à suivre pour exécuter du code WebAssembly dans une application Java le projet n'a pas 15 jours non plus quand même :) Faire tourner des plugins WASM dans la JVM (e.g. plugins) On peut faire des heap dump en cas de OutOfMemoryException en compilation native https://quarkus.io/blog/heapdump-oome-native/ depuis JDK 21 Un exemple avec Quarkus Et le GC epsilon 100 exercices pour se mettre à Rust https://rust-exercises.com/ Librairies Hibernate 6.5 est sorti https://in.relation.to/2024/04/25/orm-650/ cache full pour les entités et leur collections (le défaut est shallow) Java record pour les @IdClass Les filtres peuvent être auto activés par défaut (vs à faire sur chaque session). Les filtres sont pas mal pour gérer par exemple des soft delete Keybased pagination pour éviter les trous de résultant en cas de modification d'entités en parallèle de.une recherche paginée. S.appuie sur une clé unique et ordonnée genre ISBN Une tech preview de Jakarta Data En parlant de Jakarta Data, deux articles sur le sujet https://in.relation.to/2024/04/01/jakarta-data-1/ https://in.relation.to/2024/04/18/jakarta-data-1/ concept de repository pas lié à une entité mais à une relation logique entre les recherches interagit via stateless session et est un bean CDI Code généré bien sur 4 opérateur crud et les requêtes save est up sert Type sage au sens ou le nom des méthodes n'est pas la logique de recherche Annotation et nom des paramètres et c'est type safe via un annotation processor ou string dans @Query qui est type safe aussi via le processeur discute plus de type safety et pagination Quarkus 3.10 avec quelques nouveautés https://quarkus.io/blog/quarkus-3-10-0-released/ flyway 10 arrive avec support natif Hibernate search supporte le standalone POJO mapper notamment pour elastic search (pas que ORM) Modification des propriétés Quarkus.package automatiquement remplacées par quarkus update et Quarkus 3.9 a fait son grand renommage réactif https://quarkus.io/blog/quarkus-3-9-1-released/ Clarifier que les extensions réactive n'imposent pas des apis réactives et seulement leur cœur implémenté en réactif ou offre optionellement des apis reacrive Les gens pensaient à tors que les réactives imposaient le modèle de programmation la encore quarkus update à la rescousse Un article sur l'api structured output pour Spring AI https://spring.io/blog/2024/05/09/spring-ai-structured-output un article descriptif sur quand cette api est utilisée Et les détails de son usage Comment passer une TimeZone dans spring boot et ce que cela impacte en terme de composants https://www.baeldung.com/spring-boot-set-default-timezone du basique mais toujours utile Task ou app Programmatiquement Sur certains lifecycles de Spring Infrastructure Un article et la vidéo de Devoxx France sur la haute disponibilité de Keycloak, comment c'est implémenté https://www.keycloak.org/2024/05/keycloak-at-devoxx-france-2024-recap l'infra d'identité est une infra clé Donc gérer la haute disponibilité est critique C'est un article qui pointe sur une vidéo de Devoxx France et la doc de keycloak sur comment tout cela est implémenté Cloud Comment se ruiner avec des buckets S3 https://medium.com/@maciej.pocwierz/how-an-empty-s3-bucket-can-make-your-aws-bill-explode-934a383cb8b1 Amazon fait payer pour les requêtes non autorisées Il suffit de connaître le nom d'un bucket pour faire payer son propriétaire Amazon travaille pour fournir une solution / un fix. il est tombé par hasard sur un nom de bucket utilisé « pour de faux » par un outil open source populaire Bien rajouter un suffixe à ses buckets peut réduire le risque Mais pas l'éliminer un fix a été livré par amazon https://aws.amazon.com/about-aws/whats-new/2024/05/amazon-s3-no-charge-http-error-codes/ Data et Intelligence Artificielle Guillaume résume GoogleIO https://x.com/techcrunch/status/1790504691945898300?s=61&t=WImtt07yTQMhhoNPN6lYEw AI overview plus besoin d'aller sur les sites Google I/O 2024 Google I/O 2024 résumé en vidéo de 10 minutes https://www.youtube.com/watch?v=WsEQjeZoEng et en 100 bullet points https://blog.google/technology/ai/google-io-2024-100-announcements/ Message de Sundar Pichai https://blog.google/inside-google/message-ceo/google-io-2024-keynote-sundar-pichai/#creating-the-future Project Astra, un assistant universel, sur smartphone avec qui on peut avoir une conversation normale et à qui montrer avec la caméra ce qui nous entoure https://www.theverge.com/2024/5/14/24156296/google-ai-gemini-astra-assistant-live-io Nouveau modèle Gemini 1.5 Flash, quasi aussi performant que le nouveau Gemini 1.5 Pro, mais beaucoup plus rapide (premiers tokens dans la seconde) et aussi moins cher https://blog.google/technology/developers/gemini-gemma-developer-updates-may-2024/ Gemini 1.5 Pro est Gemini 1.5 Flash sont disponibles avec une fenêtre de contexte d'un million de tokens, mais il y a une liste d'attente pour tester une fenêtre de 2 millions de tokens https://aistudio.google.com/app/waitlist/97595554 https://cloud.google.com/earlyaccess/cloud-ai?e=48754805&hl=en PaliGemma un nouveau modèle de vision ouvert dans la famille Gemma (pour faire du Q&A du sous-titrage) et preview de Gemma 2, avec une version à 27 milliards de paramètres https://developers.googleblog.com/en/gemma-family-and-toolkit-expansion-io-2024/ Gemini disponible dans les IDEs : Android Studio, IDX, Firebase, Colab, VSCode, Cloud and Intellj Gemini AI Studio enfin disponible en Europe Gemini supporte le parallel function calling et l'extraction de frame dans les vidéos Trillium, la 6ème version des TPU (Tensor Processing Unit), les processeurs spécifiques ML dans Google Cloud, 5 fois plus puissant que la génération précédente et 67% plus efficace en énergie https://cloud.google.com/blog/products/compute/introducing-trillium-6th-gen-tpus Le projet NotebookLM rajoute une fonctionnalité de Audio Overview qui permet de discuter avec son corpus de documents avec une conversation vocale https://notebooklm.google.com/ On peut appliquer le “grounding” avec Google Search pour l'API Gemini, pour que le modèle Gemini puisse chercher des informations complémentaires dans Google Search https://cloud.google.com/blog/products/ai-machine-learning/vertex-ai-io-announcements Annonce de Imagen 3, la future version de du modèle de génération d'images Imagen qui améliore la qualité et possède un très bon support du texte dans les images (objectif de disponibilité à l'été) https://blog.google/technology/ai/google-generative-ai-veo-imagen-3/#Imagen-3 https://deepmind.google/technologies/imagen-3/ DeepMind annonce Veo, un nouveau modèle de génération de vidéo très convaincant qui peut faire des vidéos en 1080p de 60s, mais en combinant plusieurs prompts successifs, il peut générer des vidéos plus longues qui s'enchainent https://deepmind.google/technologies/veo/ VideoFX, ImageFX et MusicFX, des expérimentations de Google AI intégrant Imagen 3 et Veo (pas encore disponibles en Europe) https://blog.google/technology/ai/google-labs-video-fx-generative-ai/ Gemini Advanced https://blog.google/products/gemini/google-gemini-update-may-2024/#context-window Les utilisateurs de Gemini Advanced (l'application web) utilisent Gemini 1.5 Pro avec la fenêtre de contexte de 1 million de tokens, la possibilité de charger des documents de Google Drive, et bientôt la possibilité de générer des graphiques. Gemini Advanced rajoute aussi la capacité de générer des itinéraires de voyage (avec intégration de Google Flights, etc) Fonctionnalité Gemini Live pour avoir une conversation vocale naturelle avec Gemini https://blog.google/products/gemini/google-gemini-update-may-2024/#gemini-live Gem : des plugins pour Gemini Advanced pour créer ses propres assistants personnalisés https://blog.google/products/gemini/google-gemini-update-may-2024/#personalize-gems Ask Photos, on peut poser à Google Photos des questions plus complexes comme “quelle est ma plaque d'immatriculation” et Photos devine que parmi toutes les photos de voitures lequelle est certainement la nôtre et extrait le numéro de plaque https://blog.google/products/photos/ask-photos-google-io-2024/ Même dans Google Messages vous pourrez échanger avec Gemini Google Search https://blog.google/products/search/generative-ai-google-search-may-2024/ Rajout d'un modèle Gemini spécial search intégré qui permet à Google Search de répondre aux questions de la barre de recherche avec une raisonnement multi-étapes, en étant capable de faire de la planification, en mode multimodal (texte, image, vidéo, audio) Planning de repas et de voyage, supporté dans Gemini, va arriver aussi dans Search Gemini 1.5 Pro est disponible dans le panneau latéral de Gmail, Docs, Sheets, Drive https://blog.google/products/workspace/google-gemini-workspace-may-2024-updates/ SynthID va même fonctionner pour du texte https://deepmind.google/discover/blog/watermarking-ai-generated-text-and-video-with-synthid/ Gemini Nano bientôt disponible dans les prochaines version de Chrome, pour utiliser le LLM directement dans le navigateur Android Seconde béta d'Android 15 https://android-developers.googleblog.com/2024/05/the-second-beta-of-android-15.html Private space pour garder des apps secures avec un niveau d'authentification supplémentaire Google collabore avec Samsung et Qualcomm sur la réalité augmentée dans Android https://developers.googleblog.com/en/google-ar-at-io-2024-new-geospatial-ar-features-and-more/ Project Gameface arrive sur Android (pour diriger Android avec les yeux, avec les expressions du visage, pour l'accessibilité) https://developers.googleblog.com/en/project-gameface-launches-on-android/ Gemini Nano va passer en multimodal, pas juste du texte Circle to search étendu à 100 millions de téléphones supplémentaires supportant Nano et va permettre de poser des questions, par exemple pour l'aide aux devoirs des enfants https://blog.google/products/android/google-ai-android-update-io-2024/#circle-to-search Detect phone scam on device with Gemini Nano Talkback, l'application pour l'accessibilité dans Android, va tirer parti de la multimodalité de Gemini Nano Bientôt de la génération d'image qu'on pourra intégrer dans ses mails, ses messages Wear OS https://android-developers.googleblog.com/2024/05/whats-new-in-wear-os-io-24.html Travail sur l'économie d'énergie pour faire durer les montres plus longtemps avant la prochaine recharge. Par exemple, 20% de consommation en moins lorsqu'on court un marathon ! Plus de type de données pour les activités physiques Project IDX accessible sans liste d'attente https://developers.googleblog.com/en/start-building-with-project-idx-today/ Firebase annonce 3 nouveaux produits https://developers.googleblog.com/en/whats-new-in-firebase-io-24/ Data Connect, un backend-as-a-service avec PostgreSQL https://firebase.google.com/products/data-connect App Hosting, hosting d'application Next et Angular https://firebase.google.com/products/app-hosting Genkit, a GenAI framework for app developers https://firebase.google.com/products/genkit Dart 3.4 avec support de Wasm comme target de compilation https://medium.com/dartlang/dart-3-4-bd8d23b4462a OpenAI lance son nouveau modèle: gpt-4o http://openai.com/index/hello-gpt-4o/ https://x.com/openaidevs/status/1790083108831899854?s=46&t=GLj1NFxZoCFCjw2oYpiJpw Audio, vision et reconnaissance de texte en realtime Plus rapide et 50% moins cher que son prédécesseur 4-turbo https://claude.ai/ est disponible en europe Claude, le modèle est créé par Anthropic: Claude est un assistant IA basé sur un grand modèle de langage entraîné selon des principes éthiques stricts. Il accorde une grande importance à l'honnêteté, l'impartialité et le respect de l'être humain. Son raisonnement repose sur une compréhension profonde des concepts plutôt que sur de simples associations statistiques. Il cherche activement à corriger les éventuels biais ou erreurs. Claude est polyvalent et peut s'adapter à différents styles de communication et niveaux de complexité selon le contexte. Il maîtrise de nombreux domaines académiques et scientifiques. Il est capable d'introspection sur ses propres processus de pensée et ses limitations. La vie privée et la confidentialité sont des priorités pour lui. Claude continue d'apprendre et de s'améliorer grâce aux interactions avec les humains. Son but est d'être un assistant fiable, éthique et bienveillant. quelqu'un sait comment ils font pour raisonner et pas juste LLM statistiquer? Comment ils prouvent cela ? C'est du code à part? Grok le modèle de X/Twitter/Musk est aussi dispo en Europe https://x.com/x/status/1790917272355172401?s=46&t=GLj1NFxZoCFCjw2oYpiJpw un truc unique c'est qu'il utilise les tweet comme reference sur ce qu'il dit. Par exemple demande les meilleurs Java Champions et c'est sur les tweet recents , probablement une sorte de RAG ou une sorte de fine tuning sur les derniers tweets, je ne sais pas L'algorithm des modeles de diffusion expliqués https://x.com/emmanuelbernard/status/1787565568020619650 deux articles, un general et lisible l'autre plus abscon mais avec certains details interessants sur le downsizing étapes ajout de bruit à des images (learning) pour après appliquer le process opposé le reverse diffusion process On prédit le bruit à enlever, on l'enlève et on repère le processus. Et tout cela est influencé par le prompt. Reindexation sans downtime des données de documentation de Quarkus, en quarkus bien sûr https://quarkus.io/blog/search-indexing-rollover/ utilise hibernate search Utilisé Elasticsearch / opensearch Article qui explique une des approches pour reindexer sans downtime via index alias Outillage Un article qui parle de l'outil de build bld, peu connu, qui permet d'écrire ses builds simplement dans une classe Java https://sombriks.com/blog/0070-build-with-bld-and-why-it-matters/ IntelliJ 2024.1 est sorti https://blog.jetbrains.com/idea/2024/05/what-s-new-in-intellij-idea-ultimate-2024-1/ complétion de ligne entière (deep learning) Assistant AI amélioré Spring Boot support amélioré sur bean completion et génération de diagramme Support de dev containers simplifié Amélioration support quarkus avec notamment icône dev ui et config des tests Support OpenRewrite Server wiremock et plein d'autres choses En version beta public, Homebrew permet de vérifier la provenance des packages (bottles) https://blog.trailofbits.com/2024/05/14/a-peek-into-build-provenance-for-homebrew/ Basé sur le système “build provenance” de sigstore https://docs.sigstore.dev/verifying/attestation/#validate-in-toto-attestations qui repose sur les attestations in-toto https://in-toto.io/ Mettez à jour git en version 2.45.1 pour fixer des failles de sécurité https://github.blog/2024-05-14-securing-git-addressing-5-new-vulnerabilities/ CVE-2024-32002 (Critique, Windows & macOS) : Les repos Git avec des sous-modules peuvent tromper Git pour lui faire exécuter un hook (élément de script) à partir du répertoire .git/ pendant une opération de clonage, permettant l'exécution de code à distance (Remote Code Execution). CVE-2024-32004 (Important, machines multi-utilisateurs) : Un attaquant peut concevoir un repo local qui exécute du code arbitraire lors du clonage. CVE-2024-32465 (Important, toutes les configurations) : Le clonage à partir de fichiers .zip contenant des repos Git peut contourner les protections, et potentiellement exécuter des hooks malveillants. CVE-2024-32020 (Faible, machines multi-utilisateurs) : Les clones locaux sur le même disque peuvent permettre à des utilisateurs non approuvés de modifier des fichiers liés physiquement (hard link) dans la base de données des objets du repo cloné. CVE-2024-32021 (Faible, machines multi-utilisateurs) : Le clonage d'un repo local avec des liens symboliques (symlinks) peut entraîner la création de liens physiques vers des fichiers arbitraires dans le répertoire objects/. Architecture Visualisation des algorithmes de rate limitation https://smudge.ai/blog/ratelimit-algorithms Méthodologies Le problème de l'implémentation alternative https://pointersgonewild.com/2024/04/20/the-alternative-implementation-problem/ Article par un développeur qui a développé des Just-in-Time compiler pour différents langages Remarqué que développer une implémentation alternative d'un langage (par exemple) n'a jamais vraiment rencontré le succès Les gens préfèrent l'original à une alternative qui est dépendante de / a peine à suivre l'implémentation d'origine Pour son cas, sur le JIT, il a travaillé sur un JIT intégré directement dans CRuby (plutôt que faire son implémentation alternative comme TruffleRuby), et sont JIT est intégré maintenant dedans directement Plus facile de rejoindre / s'intégrer au projet plutôt que d'être une alternative pour laquelle il faut convaincre les gens de l'adopter Le mode vigilant dans GitHub https://x.com/emmanuelbernard/status/1790026210619068435 c'est la suite du blog wsur la signature des commits que j'ai fait ul y a quelques temps https://emmanuelbernard.com/blog/2023/11/27/git-signing-ssh/ Maintenant, GitHub rajoute de plus en plus d'infos si les signatures ne matchent pas ou ne sont pas présentes Loi, société et organisation Une perspective sur Redis et les changements de license par un devrel AWS OpenSearch https://www.infoworld.com/article/3715247/the-end-of-vendor-backed-open-source.html les sociétés regardent l'impact légal des licenses source available pour elles même en usage interne Ça casse l'écosystème de spécialisations au dessus du produit (logz.io au dessus d'elastic démarré avant le changement de license) Redis top 10 contribs à AWS et Alibaba er Huawei et 3 redis. Donc c'est pas redis qui contribue tout. La plupart des ingénieurs de redislab ne bossent pas sur redis OSS, mais sur cloud et entreprise Peut être la fin des single vendor oss Il n'y a que les cloud providers qui peuvent fournir du OSS sans affecter leur structure du coût C'est un ex AWS en fait. Maintenant indépendant Microsoft va investir 4 milliards en France (datacenters et IA) https://news.microsoft.com/fr-fr/2024/05/13/microsoft-announces-the-largest-investment-to-date-in-france-to-accelerate-the-adoption-of-ai-skilling-and-innovation/ Il ne sont pas les seuls dans le cadre du programme #chooseFrance https://www.info.gouv.fr/actualite/choose-france-un-record-de-15-milliards-deuros-dinvestissements-etrangers Mais cela n'est pas sans laisser de questions sur l'avenir de notre activité avec les US qui externalisent désormais leur silicon valley https://www.cybernetica.fr/la-france-laboratoire-de-la-silicon-valley-2-0/ Outils de l'épisode ASDF un gestionnaire de version multi-runtime https://asdf-vm.com Arnaud l'avait recommandé mais je restais sur rvm apres des deboires, je suis passé a asdf, qui fonctionne mais pour le jdk j'utilise sdkman pour les javaistes ca parrait plus poussé Conférences Les videos de Devoxx France sont en ligne https://www.youtube.com/playlist?list=PLTbQvx84FrARars1vXos7mlPdvYJmsEoK La liste des conférences provenant de Developers Conferences Agenda/List par Aurélie Vache et contributeurs : 16-17 mai 2024 : Newcrafts Paris - Paris (France) 22 mai 2024 : OpenInfra Day France - Palaiseau (France) 22-25 mai 2024 : Viva Tech - Paris (France) 24 mai 2024 : AFUP Day Nancy - Nancy (France) 24 mai 2024 : AFUP Day Poitiers - Poitiers (France) 24 mai 2024 : AFUP Day Lille - Lille (France) 24 mai 2024 : AFUP Day Lyon - Lyon (France) 28-29 mai 2024 : Symfony Live Paris - Paris (France) 1 juin 2024 : PolyCloud - Montpellier (France) 6 juin 2024 : WAX 2024 - Aix-en-Provence (France) 6-7 juin 2024 : DevFest Lille - Lille (France) 6-7 juin 2024 : Alpes Craft - Grenoble (France) 7 juin 2024 : Fork it! Community - Rouen (France) 11 juin 2024 : Cloud Toulouse - Toulouse (France) 11-12 juin 2024 : OW2con - Paris (France) 11-12 juin 2024 : PGDay Lille - Lille (France) 12-14 juin 2024 : Rencontres R - Vannes (France) 13-14 juin 2024 : Agile Tour Toulouse - Toulouse (France) 14 juin 2024 : DevQuest - Niort (France) 18 juin 2024 : Mobilis In Mobile 2024 - Nantes (France) 18 juin 2024 : BSides Strasbourg 2024 - Strasbourg (France) 18 juin 2024 : Tech & Wine 2024 - Lyon (France) 19-20 juin 2024 : AI_dev: Open Source GenAI & ML Summit Europe - Paris (France) 19-21 juin 2024 : Devoxx Poland - Krakow (Poland) 26-28 juin 2024 : Breizhcamp 2024 - Rennes (France) 27 juin 2024 : DotJS - Paris (France) 27-28 juin 2024 : Agi Lille - Lille (France) 4-5 juillet 2024 : Sunny Tech - Montpellier (France) 8-10 juillet 2024 : Riviera DEV - Sophia Antipolis (France) 6 septembre 2024 : JUG Summer Camp - La Rochelle (France) 6-7 septembre 2024 : Agile Pays Basque - Bidart (France) 17 septembre 2024 : We Love Speed - Nantes (France) 19-20 septembre 2024 : API Platform Conference - Lille (France) & Online 25-26 septembre 2024 : PyData Paris - Paris (France) 26 septembre 2024 : Agile Tour Sophia-Antipolis 2024 - Biot (France) 2-4 octobre 2024 : Devoxx Morocco - Marrakech (Morocco) 7-11 octobre 2024 : Devoxx Belgium - Antwerp (Belgium) 10 octobre 2024 : Cloud Nord - Lille (France) 10-11 octobre 2024 : Volcamp - Clermont-Ferrand (France) 10-11 octobre 2024 : Forum PHP - Marne-la-Vallée (France) 11-12 octobre 2024 : SecSea2k24 - La Ciotat (France) 16 octobre 2024 : DotPy - Paris (France) 17-18 octobre 2024 : DevFest Nantes - Nantes (France) 17-18 octobre 2024 : DotAI - Paris (France) 30-31 octobre 2024 : Agile Tour Nantais 2024 - Nantes (France) 30-31 octobre 2024 : Agile Tour Bordeaux 2024 - Bordeaux (France) 31 octobre 2024-3 novembre 2024 : PyCon.FR - Strasbourg (France) 6 novembre 2024 : Master Dev De France - Paris (France) 7 novembre 2024 : DevFest Toulouse - Toulouse (France) 8 novembre 2024 : BDX I/O - Bordeaux (France) 13-14 novembre 2024 : Agile Tour Rennes 2024 - Rennes (France) 21 novembre 2024 : DevFest Strasbourg - Strasbourg (France) 28 novembre 2024 : Who Run The Tech ? - Rennes (France) 3-5 décembre 2024 : APIdays Paris - Paris (France) 4-5 décembre 2024 : Open Source Experience - Paris (France) 22-25 janvier 2025 : SnowCamp 2025 - Grenoble (France) 16-18 avril 2025 : Devoxx France - Paris (France) Nous contacter Pour réagir à cet épisode, venez discuter sur le groupe Google https://groups.google.com/group/lescastcodeurs Contactez-nous via twitter https://twitter.com/lescastcodeurs Faire un crowdcast ou une crowdquestion Soutenez Les Cast Codeurs sur Patreon https://www.patreon.com/LesCastCodeurs Tous les épisodes et toutes les infos sur https://lescastcodeurs.com/

Programmers Quickie
Intellij 2024.1

Programmers Quickie

Play Episode Listen Later Mar 30, 2024 1:21


airhacks.fm podcast with adam bien
No Dependencies--Or How Rife 2 and Bld Happened

airhacks.fm podcast with adam bien

Play Episode Listen Later Mar 3, 2024 69:28


An airhacks.fm conversation with Geert Bevin (@gbevin) about: Yamaha DX7 - Geerts first synthesizer, Classic Minimoog analog synthesizer by Bob Moog, First synthesizer was a Yamaha DX7 successor, Early music production using Amiga computer and MIDI, iOS and macOS development with focus on audio DSP and synthesis, Returning to Java development after years away, high productivity with Java, New Java release cadence and experimental features, Rife web framework and bidirectional template engine, BLD build tool and philosophy, Kotlin Multiplatform capabilities, SwiftUI DX compared to Java and IntelliJ, Moog Music website, Jean Michel Jarre, Geert personal website, Rife 2 GitHub project, Bld GitHub project Geeert's website: uwyn.com Geert Bevin on twitter: @gbevin

Enterprise Java Newscast
Stackd 70: Natural Born Advocate

Enterprise Java Newscast

Play Episode Listen Later Feb 22, 2024 79:43


Kito Mann, Josh Juneau and special guest Grace Jansen, Java Champion and Advisory Developer Advocate at IBM, chat about Broadcom's Acquisition of VMWare, Jakarta EE 11 and the revamped Jakarta EE tutorial, Lit 3.0, JoinFaces, Liberty Tools for IntelliJ, JetBrains AI Service, Quarks and LangChain4J, and JDK 22. They also pick Grace's brain about how she got into IT, developer advocacy, getting an MBA, mentoring women in tech, and more. About Special Guest Grace Jansen Grace Jansen is a Java Champion, Software Engineer and Advisory Developer Advocate at IBM. Her interests include Java, open source, cloud technologies and developer experience. Especially excited about driving more diversity and inclusivity in the tech industry. Global and Industry News  - Broadcom Acquisition - VMWare Complete (https://investors.broadcom.com/news-releases/news-release-details/broadcom-completes-acquisition-vmware) Server Side Java  - Jakarta EE 11 (https://jakarta.ee/specifications/platform/11/)  - New Jakarta EE Docs Live (https://jakarta.ee/learn/docs/jakartaee-tutorial/current/index.html)  - Ways to contribute to Jakarta EE (https://youtu.be/VdJ-H2VNusw?si=OdybmgW6y80P7cfl) Frontend   - Lit Launch Day: Lit 3.0, Labs graduations, a compiler and more! (https://lit.dev/blog/2023-10-10-lit-3.0/)  - JoinFaces (http://joinfaces.org/) Tools  - Kotlin Advent of Code:  (https://www.youtube.com/playlist?list=PLlFc5cFwUnmzk0wvYW4aTl57F2VNkFisU)  - Java Advent Calendar (https://www.javaadvent.com/calendar)  - JakartaOne Livestream on YouTube (https://www.youtube.com/watch?v=F_FqLnyGQEY&list=PLutlXcN4EAwDNN2lVNlfV3u2rBvJMOFCn)  - Liberty Tools for IntelliJ (https://plugins.jetbrains.com/plugin/14856-liberty-tools  - Apache Netbeans 20 Released (https://github.com/apache/netbeans/releases/tag/20)  - JetBrains AI Service and In-IDE AI Assistant (https://www.jetbrains.com/ai/) AI/ML  - LangChain4J (https://github.com/langchain4j/langchain4j)  - When Quarkus meets LangChain4j (https://quarkus.io/blog/quarkus-meets-langchain4j/) Java Platform  - JDK 22 (https://openjdk.org/projects/jdk/22/) Picks   - SnagIt (https://www.techsmith.com/screen-capture.html)  - VisionOS Developer Tools (https://developer.apple.com/visionos/) Other Pubhouse Network podcasts   - OffHeap (https://javaoffheap.com)  - Java Pubhouse (https://javapubhouse.com) Events  - FOSDEM - 3-4 Jan 2024 (https://fosdem.org/2024/)  - Codemash - Jan 9-12, 2024, Sandusky, OH, USA (https://jchampionsconf.com/https://codemash.org/)  - Geekle Java Dev Summit - Jan 16-17, 2024, online (https://events.geekle.us/java24/)  - VoxxedDays Cern - Jan 22nd-23rd, 2024, Switzerland (https://cern.voxxeddays.com/)  - JChampionsConf - Jan 25-30, 2024, online (https://jchampionsconf.com/)  - Jfokus - Feb 5-7 2024, Sweden (https://www.jfokus.se/)  - VoxxedDays Zurich - 7th March 2024 (https://voxxeddays.com/zurich/)  - DevNexus - Apr 9-12, 2024, Atlanta, GA, USA (https://devnexus.org/)  - dev2next Conference - September 30 to October 3, 2024, Lone Tree, Colorado (https://www.dev2next.com/)    

javaswag
#56 - Владмирир Долженко - производительность IDE, Котлин плагин и новый компилятор К2 Type episode Kind page

javaswag

Play Episode Listen Later Jan 22, 2024 117:42


В 56 выпуске подкаста Javaswag поговорили с Владимиром Долженко о производительности IDE, Котлин плагине и новом компиляторе К2 00:01:13 Путь в компиляторы, фризы и IntelliJ как платформа 00:13:20 Опыт до Jetbrains 00:20:05 Бенчмарки в Идее, метрики и на что смотреть 00:29:00 Подсветка кода и Android Studio 00:40:38 Рынок IDE, Google IDX, VsCode и К2 - новый Котлин компилятор 01:05:20 Компилятор и микросервисы 01:09:15 К2 быстрее? 01:24:00 Флаки тесты и локальные процессы в командах 01:27:58 Монорепозиторий и перенос плагинов 01:33:20 Analysis API внутри IDEA 01:39:10 Счастливое будущее K2 и дата релиза 01:47:50 Отвечаем на предыдущее непопулярное мнение 01:51:30 Непопулярное мнение “Разработчики не умеют использовать инструменты, которые у них есть” Гость - https://twitter.com/dolzhenko Ссылки: Доклад Светланы Исаковой что нужно знать о новом компиляторе Серия статей о перформансе от А. Шипилёва - JVM Anatomy Quarks Доклад А. Шипилёва про перформанс в целом Анонс IntelliJ IDEA's K2 Kotlin Mode Now in Alpha! Видео про молоток Кип сейф! 🖖

airhacks.fm podcast with adam bien
The Long Road to Java and Kotlin

airhacks.fm podcast with adam bien

Play Episode Listen Later Dec 17, 2023 65:39


An airhacks.fm conversation with Anton Arhipov (@antonarhipov) about: playing sports games on Pentium 233 MHz the 2014 JavaOne Rockstar awards about NetBeans, Eclipse, and IntelliJ., enjoying sports games and destroying joysticks, practicing competitive swimming, swim training, starting to program in Turbo Pascal at Maelardalen University, ship simulation with Java for Vasa Museum, joining a company which maintains RefactorIT, working with Java EE and WebLogic and JRockit, joining ZeroturnAround and working on JRebel, Rebel and LiveRebel, working on a profiler, JetBrain's MPS, DevRel for TeamCity, AppCode features are appearing in fleet, Fleet is built on common UI principles, the rendering engine Skia, Kotlin and Jetpack Compose, Circles by Anton Anton Arhipov on twitter: @antonarhipov

airhacks.fm podcast with adam bien
A Deeper Dive Into Debugging

airhacks.fm podcast with adam bien

Play Episode Listen Later Aug 14, 2023 61:42


An airhacks.fm conversation with Shai Almog (@debugagent) about: method breakpoints on exit and tracepoints, method breakpoints on pattern, tracepoints and logpoints, field watch points, a watchpoints steps on a different location, stop on exception and filtering, suspending threads on a breakpoint, jdb ships with Java, RR the time travelling debugger created by Mozilla, render library by IntelliJ, logging is a major cost factor, using patterns in logs, writing tests for log statements, btrace - the dtrace for java, JaCoCo agent, kubernetes is not for startups, vendor neutral is lost in the clouds, Mozilla rr project, Shai Almog (Debugagent) on Youtube Shai Almog on twitter: @debugagent

Oracle University Podcast
Oracle NoSQL Database Cloud Service

Oracle University Podcast

Play Episode Listen Later Jul 18, 2023 13:01


High availability, data model flexibility, elastic scalability… If these words have piqued your interest, then this is the episode for you!   Join Lois Houston and Nikita Abraham, along with Autumn Black, as they discuss how Oracle NoSQL Database Cloud Service is designed for today's most demanding applications, ensuring fast and predictable single-digit millisecond response times at scale, even as throughput requirements change.   Oracle MyLearn: https://mylearn.oracle.com/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ Twitter: https://twitter.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Ranbir Singh, and the OU Studio Team for helping us create this episode.   -------------------------------------------------------   Episode Transcript: 00;00;00;00 - 00;00;38;28 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started. Hello and welcome to the Oracle University Podcast. I'm Nikita Abraham, Principal Technical Editor with Oracle University, and with me is Lois Houston, Director of Product Innovation and Go to Market Programs.   00;00;39;01 - 00;01;04;26 Hi there! Many of today's modern applications face a lot of different challenges. These could be anything from managing data models that are continuously evolving to providing UIs that are highly responsive. Yeah, and that's because in today's environment, no one wants to go to an interface where they have to wait for minutes or even seconds, really. Exactly. So innovations have to happen rapidly and they have to keep up with the demands of the customer base.   00;01;05;01 - 00;01;27;25 Since we discussed the MySQL database in our last episode, we thought it would only make sense to talk about NoSQL next. And we're very happy to have Oracle Database Specialist Autumn Black joining us again. Hi Autumn. What can you tell us about Oracle NoSQL Database Cloud Service? The Oracle NoSQL Database Cloud Service really offers a lot of great properties.   00;01;27;28 - 00;01;53;28 It's serverless, always on, and fully managed by Oracle. Developers can focus on application development without having to worry about managing servers, storage expansion, cluster deployment, software installation, or even backups. It's fully elastic. You just provision the throughput and storage capacity your application requires. Resources are automatically allocated and they scale accordingly to meet the dynamic workload requirements that you have.   00;01;54;01 - 00;02;18;16 The service provides predictable low latency for all types of application workload, whether it's at the peak workload or at a very low level of the workload. The latency is always at sub 10 milliseconds. Flexible data model supports document, columnar, and key value formats to capture any of the kinds of data you may have. These data models can interoperate with each other using a single application interface.   00;02;18;18 - 00;02;49;04 It offers developer-friendly APIs, and it's integrated with popular application development tools. It comes with enterprise-grade security. It's cost-effective. Lastly, and most importantly, it makes hybrid cloud or multi-cloud deployment with Oracle NoSQL Database extremely easy. This enables enterprises to expand their business operations, open up new business potential and opportunities. Autumn, you said Oracle NoSQL Database Cloud Service is fully managed by Oracle.   00;02;49;06 - 00;03;09;17 What does that exactly mean? What are the things that Oracle really handles? Oracle's responsible for the back-end software and hardware. The developer gets to focus on the application and who can use that application. Today's modern developers are not interested in what's going on behind the scenes. They want to be sure that they can get what they want when they want it.   00;03;09;20 - 00;03;36;03 Could you tell us about some use cases where we'd see Oracle NoSQL Database Cloud Service being used? We see it in mobile applications. We see it in the Internet of Things. We see it in a 360-degree customer view that's extremely prevalent in banking applications. We see it in the user profile management catalog data, such as what you would see maybe on Amazon, content management, online advertising, and big-time real data, social media.   00;03;36;05 - 00;03;59;20 And we also see it being used in the area of gaming. We've heard that Oracle NoSQL Database Cloud Service is extremely developer-friendly. How exactly is it developer-friendly? You can deploy your application to the data store in a matter of minutes. We have SDKs available in all the popular languages. We have Java, Python, Node.js, Go. We also have Spring.   00;03;59;22 - 00;04;24;24 We have open standards for our APIs, and we use the SQL language. Many, many folks already know and understand the SQL language and they can easily adapt to our APIs. We basically allow for simple and complex data types. We have our developer tools in Eclipse, IntelliJ, and Visual Studios. And for those who want to play with the service a little bit, you have full access to the service through the UI console.   00;04;24;27 - 00;04;48;05 Does Oracle NoSQL Database Cloud Service support different data models? You get to select that depending upon what you need. There's fixed schema, right? Sometimes referred to as a columnar store, sometimes referred to as just straight schema. And a JSON document? Sometimes it's referred to as just documents, and sometimes it's referred to as a JSON document. Other times, it gets referred to as schema lists.   00;04;48;08 - 00;05;07;17 We can store both of these within the same table, and we also have the ability to store a key value, which in some sense is really nothing more than a very simplified JSON document where you sort of have a primary key known as your key. Then you have your value and then oftentimes this value is really nothing more than a JSON document.   00;05;07;20 - 00;05;32;10 We have complete interoperability between either of these different types of data models. And what's the pricing like? Our pricing model is fairly straightforward. You basically pay for what you provision and different workflows can provision different amounts. The provision occurs on a per-table level that we have. We call it a read unit or write unit and storage. And each of those different parameters can be adjusted differently for each of the tables that you have.   00;05;32;12 - 00;05;44;08 So, it's a very, very simple model, very straightforward. And the cost for each one of those is extremely inexpensive.   00;05;44;10 - 00;06;14;09 Have you been wanting to earn an Oracle certification? Well, there's never been a better time than the present. Now through August 31st, you can choose from over 20 different Oracle certifications and take up to four exams for free, including foundation, associate, and professional-level Oracle Cloud Infrastructure certifications. You can also learn and get certified on Oracle Cloud Applications Business Processes for Human Capital Management, Financials, Customer Experience, Supply Chain, and Procurement.   00;06;14;13 - 00;06;40;25 And did I mention this was all free? Oracle Cloud training and certifications empower you to explore limitless possibilities in the cloud landscape. Gain the knowledge and skills needed to design, deploy, secure, and operate modern cloud infrastructure and applications with confidence. You can go to education.oracle.com for more details. What are you waiting for? Get certified today.   00;06;40;27 - 00;07;03;24 Welcome back. Now that we've covered all of that, let's get down to business. What is Oracle NoSQL Database Cloud Service? How does it help one's business? Well, really, it's a client server architecture. On the client side, an application interacts with NoSQL drivers, or sometimes more commonly referred to as our SDK. So, you develop the application.   00;07;03;25 - 00;07;31;08 And then at compile time, you include the SDK in that. Drivers are available for Java, Python, Node.js, Go, Spring, and C-Sharp. So then the application runs and it does database operations like insert, update, query, delete against the NoSQL tables that are stored on the server side. So from the developer's point of view, that's all they need to know. It's extremely simple. NoSQL tables can be created in seconds.   00;07;31;10 - 00;08;01;08 Developers can start right away to develop and deploy their applications. Businesses can focus on rapid innovations to better serve their customer's needs and expectations. So what you're saying is developers and IT don't need to manage any computing, infrastructure, or software updates. The NoSQL service comes with the high availability built in. So, all of the underlying compute services and software maintenance, all of that is fully managed by Oracle, which is hosting the NoSQL tables.   00;08;01;11 - 00;08;24;10 Database administrators will work on the authentication, the roles, the privileges, and the other types of access that are used to access the tables. Autumn, earlier on, we spoke about how easy it is to use Oracle NoSQL Database Cloud Service to develop your application. Can you talk about how NoSQL drivers connect to tables and perform database operations? So, each table has a couple of different key components.   00;08;24;12 - 00;08;49;16 The first one is the data component. So, it consists of a table definition or flexible data model, supports columnar key value or document models. We support many different data types, including integer, string, binary, long double, array, JSON, record, and a few others. Each column can be defined by a particular data type. So that could be integers, strings, and JSON columns, right?   00;08;49;16 - 00;09;06;21 Then we have a primary key which is identified. The primary key sort of doubles as the shard key and the shard key can be either a sort of complete copy of the primary key, or you can have a multicolumn key where some of those columns is your primary key and then other columns in that are your shard key.   00;09;06;24 - 00;09;30;25 The other component of a table is the capacity that is provisioned. What's an example of this? We have three units. We have write units, read units, and we have storage. Storage is expressed in gigabytes and this is the maximum amount of storage that is allocated for this particular table. Reads and writes are expressed in units. Each of those units represents a one kilobyte read or write for every second.   00;09;30;27 - 00;09;52;17 So, if you have an application where you wanted to do multiple reads a second, then you would allocate multiple read units for that particular table. And these can be dynamically adjusted. What are the different ways in which you can experience Oracle NoSQL Database Cloud Service? The first is accessing the cloud service to the service UI, probably more commonly referred to as the UI console.   00;09;52;19 - 00;10;20;12 The UI console provides basic capability. You can create and delete tables and then you can execute very simple, straightforward SQL statements against your table. If you want a richer experience, where you have an application that's developed. And then through all of our SDKs, you'll have the full experience and you'll be able to do the complete set of SQL statements that we offer. And then finally, we have what we call the NoSQL Cloud Simulator, essentially a single process version of our cloud.   00;10;20;15 - 00;10;40;06 It's very easy to use. Many of our customers use that for test and development, and the beauty with using that for testing development is there's no cost to you. There's no cost from the cloud service. You can develop your application, you can test it, and once you're ready to deploy it in the cloud, you might be able to do part of it through the UI or you could do it through your application.   00;10;40;06 - 00;11;07;14 Autumn, what makes Oracle NoSQL Database Cloud Service different? Well, there's a seamless multimodel. You have key value, fixed schema, you have schema-less, all at the same data store. The seamless SQL interoperates amongst any of these different kinds of data models. You have tunable ACID, you have shard local full ACID, you have parent tables for easy multiobject ACID. You can carefully balance the trade-offs of ACID and scale.   00;11;07;16 - 00;11;29;16 There is a fully managed cloud service. The throughput is provisioned, there is no lock-in. You have the ability to run as a fully managed service or you can take the NoSQL Database, run it anywhere. And if you want to run it in a hybrid environment, you can do that as well. Are there some additional resources that Oracle has, you know, where people can go to get more information?   00;11;29;18 - 00;11;56;04 I know we have the Developers Blog and blogs.oracle.com, but is there anything else? We have the Cloud Customer Connect, the developer community that we have for NoSQL. This gets monitored. We really have developers around the globe on the team so really this does get 24/7 coverage and we try to be very responsive to any types of questions that get posted on the forum and then we supply answers for all those questions.   00;11;56;06 - 00;12;18;19 Thank you so much, Autumn, for joining us through these last two episodes. We've learned so much from you. Learn more about the service by visiting mylearn.oracle.com and taking a look at our Oracle Cloud Data Management Foundations Workshop. You can also take skill checks to test your understanding of this topic. That brings us to the end of this episode.   00;12;18;19 - 00;12;41;26 In our next episode, we'll be joined by a very special guest who will talk about MySQL Document Store, a NoSQL solution that's built on top of MySQL. More on that next week. Until then, this is Lois Houston and Nikita Abraham signing off. That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes.   00;12;41;29 - 00;15;14;24 We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

A Bootiful Podcast
Jetbrains' Anton Arhipov on IntelliJ, Java, and so much more

A Bootiful Podcast

Play Episode Listen Later Jul 6, 2023 24:40


Hi, Spring fans! Welcome to another installment of _A Bootiful Podcast_! This week [Josh Long](https://youtube.com/@coffeesoftware) talks to [Jetbrain's developer advocate Anton Arhipov](https://twitter.com/antonarhipov), recorded live from the amazing Spring IO show in Barcelona, Spain!

airhacks.fm podcast with adam bien
A Gentle Introduction to Debugging

airhacks.fm podcast with adam bien

Play Episode Listen Later Jun 21, 2023 63:59


An airhacks.fm conversation with Shai Almog (@debugagent) about: about the name Codename One, JavaLobby became DZone, JavaBlogs and java.net, joining lightrun and developer's observability, the theory of debugging, lightrun: breakpoints which don't break, debugging in production has access to the entire data, lightrun creates snapshots - breakpoints which don't step, time travelling debugging, chrononsystems: DVR with Java, translate java runtimes to charts with appmap, the logging breakpoints: logpoints and tracepoints, exception breakpoints are hard to use, NetBeans debugger is great, exception breakpoints are useful with filters, field watchpoint is not a watch, renderers in Intellij, toString, performance and circular dependencies, memory debugging and MAT by SAP Shai Almog on twitter: @debugagent

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Debugging the Internet with AI agents – with Itamar Friedman of Codium AI and AutoGPT

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later May 25, 2023 62:36


We are hosting the AI World's Fair in San Francisco on June 8th! You can RSVP here. Come meet fellow builders, see amazing AI tech showcases at different booths around the venue, all mixed with elements of traditional fairs: live music, drinks, games, and food! We are also at Amplitude's AI x Product Hackathon and are hosting our first joint Latent Space + Practical AI Podcast Listener Meetup next month!We are honored by the rave reviews for our last episode with MosaicML! They are also welcome on Apple Podcasts and Twitter/HN/LinkedIn/Mastodon etc!We recently spent a wonderful week with Itamar Friedman, visiting all the way from Tel Aviv in Israel: * We first recorded a podcast (releasing with this newsletter) covering Codium AI, the hot new VSCode/Jetbrains IDE extension focused on test generation for Python and JS/TS, with plans for a Code Integrity Agent. * Then we attended Agent Weekend, where the founders of multiple AI/agent projects got together with a presentation from Toran Bruce Richards on Auto-GPT's roadmap and then from Itamar on Codium's roadmap* Then some of us stayed to take part in the NextGen Hackathon and won first place with the new AI Maintainer project.So… that makes it really hard to recap everything for you. But we'll try!Podcast: Codium: Code Integrity with Zero BugsWhen it launched in 2021, there was a lot of skepticism around Github Copilot. Fast forward to 2023, and 40% of all code is checked in unmodified from Copilot. Codium burst on the scene this year, emerging from stealth with an $11m seed, their own foundation model (TestGPT-1) and a vision to revolutionize coding by 2025.You might have heard of "DRY” programming (Don't Repeat Yourself), which aims to replace repetition with abstraction. Itamar came on the pod to discuss their “extreme DRY” vision: if you already spent time writing a spec, why repeat yourself by writing the code for it? If the spec is thorough enough, automated agents could write the whole thing for you.Live Demo Video SectionThis is referenced in the podcast about 6 minutes in.Timestamps, show notes, and transcript are below the fold. We would really appreciate if you shared our pod with friends on Twitter, LinkedIn, Mastodon, Bluesky, or your social media poison of choice!Auto-GPT: A Roadmap To The Future of WorkMaking his first public appearance, Toran (perhaps better known as @SigGravitas on GitHub) presented at Agents Weekend:Lightly edited notes for those who want a summary of the talk:* What is AutoGPT?AutoGPT is an Al agent that utilizes a Large Language Model to drive its actions and decisions. It can be best described as a user sitting at a computer, planning and interacting with the system based on its goals. Unlike traditional LLM applications, AutoGPT does not require repeated prompting by a human. Instead, it generates its own 'thoughts', criticizes its own strategy and decides what next actions to take.* AutoGPT was released on GitHub in March 2023, and went viral on April 1 with a video showing automatic code generation. 2 months later it has 132k+ stars, is the 29th highest ranked open-source project of all-time, a thriving community of 37.5k+ Discord members, 1M+ downloads.* What's next for AutoGPT? The initial release required users to know how to build and run a codebase. They recently announced plans for a web/desktop UI and mobile app to enable nontechnical/everyday users to use AutoGPT. They are also working on an extensible plugin ecosystem called the Abilities Hub also targeted at nontechnical users.* Improving Efficacy. AutoGPT has many well documented cases where it trips up. Getting stuck in loops, using instead of actual content incommands, and making obvious mistakes like execute_code("writea cookbook"'. The plan is a new design called Challenge Driven Development - Challenges are goal-orientated tasks or problems thatAuto-GPT has difficulty solving or has not yet been able to accomplish. These may include improving specific functionalities, enhancing the model's understanding of specific domains, or even developing new features that the current version of Auto-GPT lacks. (AI Maintainer was born out of one such challenge). Itamar compared this with Software 1.0 (Test Driven Development), and Software 2.0 (Dataset Driven Development).* Self-Improvement. Auto-GPT will analyze its own codebase and contribute to its own improvement. AI Safety (aka not-kill-everyone-ists) people like Connor Leahy might freak out at this, but for what it's worth we were pleasantly surprised to learn that Itamar and many other folks on the Auto-GPT team are equally concerned and mindful about x-risk as well.The overwhelming theme of Auto-GPT's roadmap was accessibility - making AI Agents usable by all instead of the few.Podcast Timestamps* [00:00:00] Introductions* [00:01:30] Itamar's background and previous startups* [00:03:30] Vision for Codium AI: reaching “zero bugs”* [00:06:00] Demo of Codium AI and how it works* [00:15:30] Building on VS Code vs JetBrains* [00:22:30] Future of software development and the role of developers* [00:27:00] The vision of integrating natural language, testing, and code* [00:30:00] Benchmarking AI models and choosing the right models for different tasks* [00:39:00] Codium AI spec generation and editing* [00:43:30] Reconciling differences in languages between specs, tests, and code* [00:52:30] The Israeli tech scene and startup culture* [01:03:00] Lightning RoundShow Notes* Codium AI* Visualead* AutoGPT* StarCoder* TDD (Test-Driven Development)* AST (Abstract Syntax Tree)* LangChain* ICON* AI21TranscriptAlessio: [00:00:00] Hey everyone. Welcome to the Latent Space podcast. This is Alessio, Partner and CTO-in-Residence at Decibel Partners. I'm joined by my co-host, Swyx, writer and editor of Latent Space.Swyx: Today we have a special guest, Tamar Friedman, all the way from Tel Aviv, CEO and co-founder of Codium AI. Welcome.Itamar: Hey, great being here. Thank you for inviting me.Swyx: You like the studio? It's nice, right?Itamar: Yeah, they're awesome.Swyx: So I'm gonna introduce your background a little bit and then we'll learn a bit more about who you are. So you graduated from Teknion Israel Institute of Technology's kind of like the MIT of of Israel. You did a BS in CS, and then you also did a Master's in Computer Vision, which is kind of relevant.You had other startups before this, but your sort of claim to fame is Visualead, which you started in 2011 and got acquired by Alibaba Group You showed me your website, which is the sort of QR codes with different forms of visibility. And in China that's a huge, huge deal. It's starting to become a bigger deal in the west. My favorite anecdote that you told me was something about how much sales use you saved or something. I forget what the number was.Itamar: Generally speaking, like there's a lot of peer-to-peer transactions going on, like payments and, and China with QR codes. So basically if for example 5% of the scanning does not work and with our scanner we [00:01:30] reduce it to 4%, that's a lot of money. Could be tens of millions of dollars a day.Swyx: And at the scale of Alibaba, it serves all of China. It's crazy. You did that for seven years and you're in Alibaba until 2021 when you took some time off and then hooked up with Debbie, who you've known for 25 years, to start Codium AI and you just raised your $11 million seed rounds with TlB Partners and Vine. Congrats. Should we go right into Codium? What is Codium?Itamar: So we are an AI coding assistant / agent to help developers reaching zero bugs. We don't do that today. Right now, we help to reduce the amount of bugs. Actually you can see people commenting on our marketplace page saying that they found bugs with our tool, and that's like our premise. Our vision is like for Tesla zero emission or something like that, for us it's zero bugs.We started with building an IDE extension either in VS Code or in JetBrains. And that actually works alongside the main panel where you write your code and I can show later what we do is analyze the code, whether you started writing it or you completed it.Like you can go both TDD (Test-Driven Development) or classical coding. And we offer analysis, tests, whether they pass or not, we further self debug [00:03:00] them and make suggestions eventually helping to improve the code quality specifically on code logic testing.Alessio: How did you get there? Obviously it's a great idea. Like, what was the idea, maze? How did you get here?Itamar: I'll go back long. So, yes I was two and a half times a CTO, VC backed startup CTO where we talked about the last one that I sold to Alibaba. But basically I'm like, it's weird to say by 20 years already of R&D manager, I'm not like the best programmer because like you mentioned, I'm coming more from the machine learning / computer vision side, one, one of the main application, but a lot of optimization. So I'm not necessarily the best coder, but I am like 20 year R&D manager. And I found that verifying code logic is very hard thing. And one of the thing that really makes it difficult to increase the development velocity.So you have tools related to checking performance.You have tools for vulnerabilities and security, Israelis are really good at that. But do you have a tool that actually helps you test code logic? I think what we have like dozens or hundreds, even thousands that help you on the end to end, maybe on the microservice integration system. But when you talk about code level, there isn't anything.So that was the pain I always had, especially when I did have tools for that, for the hardware. Like I worked in Mellanox to be sold to Nvidia as a student, and we had formal tools, et cetera. [00:04:30] So that's one part.The second thing is that after being sold to Alibaba, the team and I were quite a big team that worked on machine learning, large language model, et cetera, building developer tools relate with, with LLMs throughout the golden years of. 2017 to 2021, 2022. And we saw how powerful they became.So basically, if I frame it this way, because we develop it for so many use cases, we saw that if you're able to take a problem put a framework of a language around it, whether it's analyzing browsing behavior, or DNA, or etc, if you can put a framework off a language, then LLMs take you really far.And then I thought this problem that I have with code logic testing is basically a combination of a few languages: natural language, specification language, technical language. Even visual language to some extent. And then I quit Alibaba and took a bit of time to maybe wrap things around and rest a bit after 20 years of startup and corporate and joined with my partner Dedy Kredo who was my ever first employee.And that's how we like, came to this idea.Alessio: The idea has obviously been around and most people have done AST analysis, kinda like an abstract syntax tree, but it's kind of hard to get there with just that. But I think these models now are getting good enough where you can mix that and also traditional logical reasoning.Itamar: Exactly.Alessio: Maybe talk a little bit more about the technical implementation of it. You mentioned the agent [00:06:00] part. You mentioned some of the model part, like what happens behind the scenes when Codium gets in your code base?Itamar: First of all, I wanna mention I think you're really accurate.If you try to take like a large language model as is and try to ask it, can you like, analyze, test the code, etc, it'll not work so good. By itself it's not good enough on the other side, like all the traditional techniques we already started to invent since the Greek times. You know, logical stuff, you mentioned ASTs, but there's also dynamic code analysis, mutation testing, etc. There's a lot of the techniques out there, but they have inefficiencies.And a lot of those inefficiencies are actually matching with AI capabilities. Let me give you one example. Let's say you wanna do fuzzy testing or mutation testing.Mutation testing means that you either mutate the test, like the input of the test, the code of the test, etc or you mutate the code in order to check how good is your test suite.For example, if I mutate some equation in the application code and the test finds a bug and it does that at a really high rate, like out of 100 mutation, I [00:07:30] find all of the 100 problems in the test. It's probably a very strong test suite.Now the problem is that there's so many options for what to mutate in the data, in the test. And this is where, for example, AI could help, like pointing out where's the best thing that you can mutate. Actually, I think it's a very good use case. Why? Because even if AI is not 100% accurate, even if it's 80% accurate, it could really take you quite far rather just randomly selecting things.So if I wrap up, just go back high level. I think LLM by themselves cannot really do the job of verifying code logic and and neither can the traditional ones, so you need to merge them. But then one more thing before maybe you tell me where to double click. I think with code logic there's also a philosophy question here.Logic different from performance or quality. If I did a three for in loop, like I loop three things and I can fold them with some vector like in Python or something like that. We need to get into the mind of the developer. What was the intention? Like what is the bad code? Not what is the code logic that doesn't work. It's not according to the specification. So I think like one more thing that AI could really help is help to match, like if there is some natural language description of the code, we can match it. Or if there's missing information in natural language that needs [00:09:00] to be asked for the AI could help asking the user.It's not like a closed solution. Rather open and leaving the developer as the lead. Just like moving the developer from, from being the coder to actually being like a pilot that that clicks button and say, ah, this is what I meant, or this is the fix, rather actually writing all the code.Alessio: That makes sense. I think I talked about it on the podcast before, but like the switch from syntax to like semantics, like developers used to be focused on the syntax and not the meaning of what they're writing. So now you have the models that are really good at the syntax and you as a human are supposed to be really good at the semantics of what you're trying to build.How does it practically work? So I'm a software developer, I want to use Codium, like how do I start and then like, how do you make that happen in the, in the background?Itamar: So, like I said, Codium right now is an IDE extension. For example, I'm showing VS code. And if you just install it, like you'll have a few access points to start Codium AI, whether this sidebar or above every component or class that we think is very good to check with Codium.You'll have this small button. There's other way you can mark specific code and right click and run code. But this one is my favorite because we actually choose above which components we suggest to use code. So once I click it code, I starts analyzing this class. But not only this class, but almost everything that is [00:10:30] being used by the call center class.But all and what's call center is, is calling. And so we do like a static code analysis, et cetera. What, what we talked about. And then Codium provides with code analysis. It's right now static, like you can't change. It can edit it, and maybe later we'll talk about it. This is what we call the specification and we're going to make it editable so you can add additional behaviors and then create accordingly, test that will not pass, and then the code will, will change accordingly. So that's one entrance point, like via natural language description. That's one of the things that we're working on right now. What I'm showing you by the way, could be downloaded as is. It's what we have in production.The second thing that we show here is like a full test suite. There are six tests by default but you can just generate more almost as much as you want every time. We'll try to cover something else, like a happy pass edge case et cetera. You can talk with specific tests, okay? Like you can suggest I want this in Spanish or give a few languages, or I want much more employees.I didn't go over what's a call center, but basically it manages like call center. So you can imagine, I can a ask to make it more rigorous, etc, but I don't wanna complicate so I'm keeping it as is.I wanna show you the next one, which is run all test. First, we verify that you're okay, we're gonna run it. I don't know, maybe we are connected to the environment that is currently [00:12:00] configured in the IDE. I don't know if it's production for some reason, or I don't know what. Then we're making sure that you're aware we're gonna run the code that and then once we run, we show if it pass or fail.I hope that we'll have one fail. But I'm not sure it's that interesting. So I'll go like to another example soon, but, but just to show you what's going on here, that we actually give an example of what's a problem. We give the log of the error and then you can do whatever you want.You can fix it by yourself, or you can click reflect and fix, and what's going on right now is a bit a longer process where we do like chain of thought or reflect and fix. And we can suggest a solution. You can run it and in this case it passes. Just an example, this is a very simple example.Maybe later I'll show you a bug. I think I'll do that and I'll show you a bug and how we recognize actually the test. It's not a problem in the test, it's a problem in the code and then suggest you fix that instead of the code. I think you see where I'm getting at.The other thing is that there are a few code suggestion, and there could be a dozen of, of types that could be related to performance modularity or I see this case there is a maintainability.There could also be vulnerability or best practices or even suggestion for bugs. Like if we noticed, if we think one of the tests, for example, is failing because of a bug. So just code presented in the code suggestion. Probably you can choose a few, for example, if you like, and then prepare a code change like I didn't show you which exactly.We're making a diff now that you can apply on your code. So basically what, what we're seeing here is that [00:13:30] there are three main tabs, the code, the test and the code analysis. Let's call spec.And then there's a fourth tab, which is a code suggestion, if you wanna look at analytics, etc. Mm-hmm. Right now code okay. This is the change or quite a big change probably clicked on something. So that's the basic demo.Right now let's be frank. Like I wanted to show like a simple example. So it's a call center. All the inputs to the class are like relatively simple. There is no jsm input, like if you're Expedia or whatever, you have a J with the hotels, Airbnb, you know, so the test will be almost like too simple or not covering enough.Your code, if you don't provide it with some input is valuable, like adjacent with all information or YAMA or whatever. So you can actually add input data and the AI or model. It's actually by the way, a set of models and algorithms that will use that input to create interesting tests. And another thing is many people have some reference tests that they already made. It could be because they already made it or because they want like a very specific they have like how they imagine the test. So they just write one and then you add a reference and that will inspire all the rest of the tests. And also you can give like hints. [00:15:00] This is by the way plan to be like dynamic hints, like for different type of code.We will provide different hints. So we can help you become a bit more knowledgeable about how to test your code. So you can ask for like having a, a given one then, or you can have like at a funny private, like make different joke for each test or for example,Swyx: I'm curious, why did you choose that one? This is the pirate one. Yeah.Itamar: Interesting choice to put on your products. It could be like 11:00 PM of people sitting around. Let's choose one funny thingSwyx: and yeah. So two serious ones and one funny one. Yeah. Just for the listening audience, can you read out the other hints that you decided on as well?Itamar: Yeah, so specifically, like for this case, relatively very simple class, so there's not much to do, but I'm gonna go to one more thing here on the configuration. But it basically is given when then style, it's one of the best practices and tests. So even when I report a bug, for example, I found a bug when someone else code, usually I wanna say like, given, use this environment or use that this way when I run this function, et cetera.Oh, then it's a very, very full report. And it's very common to use that in like in unit test and perform.Swyx: I have never been shown this format.Itamar: I love that you, you mentioned that because if you go to CS undergrad you take so many courses in development, but none of them probably in testing, and it's so important. So why would you, and you don't go to Udemy or [00:16:30] whatever and, and do a testing course, right? Like it's, it's boring. Like people either don't do component level testing because they hate it or they do it and they hate it. And I think part of it it's because they're missing tool to make it fun.Also usually you don't get yourself educated about it because you wanna write your code. And part of what we're trying to do here is help people get smarter about testing and make it like easy. So this is like very common. And the idea here is that for different type of code, we'll suggest different type of hints to make you more knowledgeable.We're doing it on an education app, but we wanna help developers become smarter, more knowledgeable about this field. And another one is mock. So right now, our model decided that there's no need for mock here, which is a good decision. But if we would go to real world case, like, I'm part of AutoGPT community and there's all of tooling going on there. Right? And maybe when I want to test like a specific component, and it's relatively clear that going to the web and doing some search and coming back, I don't really need to do that. Like I know what I expect to do and so I can mock that part of using to crawl the web.A certain percentage of accuracy, like around 90, we will decide this is worth mocking and we will inject it. I can click it now and force our system to mock this. But you'll see like a bit stupid mocking because it really doesn't make sense. So I chose this pirate stuff, like add funny pirate like doc stringing make a different joke for each test.And I forced it to add mocks, [00:18:00] the tests were deleted and now we're creating six new tests. And you see, here's the shiver me timbers, the test checks, the call successful, probably there's some joke at the end. So in this case, like even if you try to force it to mock it didn't happen because there's nothing but we might find here like stuff that it mock that really doesn't make sense because there's nothing to mock here.So that's one thing I. I can show a demo where we actually catch a bug. And, and I really love that, you know how it is you're building a developer tools, the best thing you can see is developers that you don't know giving you five stars and sharing a few stuff.We have a discord with thousands of users. But I love to see the individual reports the most. This was one of my favorites. It helped me to find two bugs. I mentioned our vision is to reach zero bugs. Like, if you may say, we want to clean the internet from bugs.Swyx: So debugging the internet. I have my podcast title.Itamar: So, so I think like if we move to another exampleSwyx: Yes, yes, please, please. This is great.Itamar: I'm moving to a different example, it is the bank account. By the way, if you go to ChatGPT and, and you can ask me what's the difference between Codium AI and using ChatGPT.Mm-hmm. I'm, I'm like giving you this hard question later. Yeah. So if you ask ChatGPT give me an example to test a code, it might give you this bank account. It's like the one-on-one stuff, right? And one of the reasons I gave it, because it's easy to inject bugs here, that's easy to understand [00:19:30] anyway.And what I'm gonna do right now is like this bank account, I'm gonna change the deposit from plus to minus as an example. And then I'm gonna run code similarly to how I did before, like it suggests to do that for the entire class. And then there is the code analysis soon. And when we announce very soon, part of this podcast, it's going to have more features here in the code analysis.We're gonna talk about it. Yep. And then there is the test that I can run. And the question is that if we're gonna catch the bag, the bugs using running the test, Because who knows, maybe this implementation is the right one, right? Like you need to, to converse with the developer. Maybe in this weird bank, bank you deposit and, and the bank takes money from you.And we could talk about how this happens, but actually you can see already here that we are already suggesting a hint that something is wrong here and here's a suggestion to put it from minus to to plus. And we'll try to reflect and, and fix and then we will see actually the model telling you, hey, maybe this is not a bug in the test, maybe it's in the code.Swyx: I wanna stay on this a little bit. First of all, this is very impressive and I think it's very valuable. What user numbers can you disclose, you launched it and then it's got fairly organic growth. You told me something off the air, but you know, I just wanted to show people like this is being adopted in quite a large amount.Itamar:  [00:21:00] First of all, I'm a relatively transparent person. Like even as a manager, I think I was like top one percentile being transparent in Alibaba. It wasn't five out of five, which is a good thing because that's extreme, but it was a good, but it also could be a bad, some people would claim it's a bad thing.Like for example, if my CTO in Alibaba would tell me you did really bad and it might cut your entire budget by 30%, if in half a year you're not gonna do like much better and this and that. So I come back to a team and tell 'em what's going on without like trying to smooth thing out and we need to solve it together.If not, you're not fitting in this team. So that's my point of view. And the same thing, one of the fun thing that I like about building for developers, they kind of want that from you. To be transparent. So we are on the high numbers of thousands of weekly active users. Now, if you convert from 50,000 downloads to high thousands of weekly active users, it means like a lot of those that actually try us keep using us weekly.I'm not talking about even monthly, like weekly. And that was like one of their best expectations because you don't test your code every day. Right now, you can see it's mostly focused on testing. So you probably test it like once a week. Like we wanted to make it so smooth with your development methodology and development lifecycle that you use it every day.Like at the moment we hope it to be used weekly. And that's what we're getting. And the growth is about like every two, three weeks we double the amount of weekly and downloads. It's still very early, like seven weeks. So I don't know if it'll keep that way, but we hope so. Well [00:22:30] actually I hope that it'll be much more double every two, three weeks maybe. Thanks to the podcast.Swyx: Well, we, yeah, we'll, we'll add you know, a few thousand hopefully. The reason I ask this is because I think there's a lot of organic growth that people are sharing it with their friends and also I think you've also learned a lot from your earliest days in, in the private beta test.Like what have you learned since launching about how people want to use these testing tools?Itamar: One thing I didn't share with you is like, when you say virality, there is like inter virality and intra virality. Okay. Like within the company and outside the company. So which teams are using us? I can't say, but I can tell you that a lot of San Francisco companies are using us.And one of the things like I'm really surprised is that one team, I saw one user two weeks ago, I was so happy. And then I came yesterday and I saw 48 of that company. So what I'm trying to say to be frank is that we see more intra virality right now than inter virality. I don't see like video being shared all around Twitter. See what's going on here. Yeah. But I do see, like people share within the company, you need to use it because it's really helpful with productivity and it's something that we will work about the [00:24:00] inter virality.But to be frank, first I wanna make sure that it's helpful for developers. So I care more about intra virality and that we see working really well, because that means that tool is useful. So I'm telling to my colleague, sharing it on, on Twitter means that I also feel that it will make me cool or make me, and that's something maybe we'll need, still need, like testing.Swyx: You know, I don't, well, you're working on that. We're gonna announce something like that. Yeah. You are generating these tests, you know, based on what I saw there. You're generating these tests basically based on the name of the functions. And the doc strings, I guess?Itamar:So I think like if you obfuscate the entire code, like our accuracy will drop by 50%. So it's right. We're using a lot of hints that you see there. Like for example, the functioning, the dog string, the, the variable names et cetera. It doesn't have to be perfect, but it has a lot of hints.By the way. In some cases, in the code suggestion, we will actually suggest renaming some of the stuff that will sync, that will help us. Like there's suge renaming suggestion, for example. Usually in this case, instead of calling this variable is client and of course you'll see is “preferred client” because basically it gives a different commission for that.So we do suggest it because if you accept it, it also means it will be easier for our model or system to keep improving.Swyx: Is that a different model?Itamar: Okay. That brings a bit to the topic of models properties. Yeah. I'll share it really quickly because Take us off. Yes. It's relevant. Take us off. Off. Might take us off road.I think [00:25:30] like different models are better on different properties, for example, how obedient you are to instruction, how good you are to prompt forcing, like to format forcing. I want the results to be in a certain format or how accurate you are or how good you are in understanding code.There's so many calls happening here to models by the way. I. Just by clicking one, Hey Codium AI. Can you help me with this bank account? We do a dozen of different calls and each feature you click could be like, like with that reflect and fix and then like we choose the, the best one.I'm not talking about like hundreds of models, but we could, could use different APIs of open AI for example, and, and other models, et cetera. So basically like different models are better on different aspect. Going back to your, what we talked about, all the models will benefit from having those hints in, in the code, that rather in the code itself or documentation, et cetera.And also in the code analysis, we also consider the code analysis to be the ground truth to some extent. And soon we're also going to allow you to edit it and that will use that as well.Alessio: Yeah, maybe talk a little bit more about. How do I actually get all these models to work together? I think there's a lot of people that have only been exposed to Copilot so far, which is one use case, just complete what I'm writing. You're doing a lot more things here. A lot of people listening are engineers themselves, some of them build these tools, so they would love to [00:27:00] hear more about how do you orchestrate them, how do you decide which model the what, stuff like that.Itamar: So I'll start with the end because that is a very deterministic answer, is that we benchmark different models.Like every time this there a new model in, in town, like recently it's already old news. StarCoder. It's already like, so old news like few days ago.Swyx: No, no, no. Maybe you want to fill in what it is StarCoder?Itamar: I think StarCoder is, is a new up and coming model. We immediately test it on different benchmark and see if, if it's better on some properties, et cetera.We're gonna talk about it like a chain of thoughts in different part in the chain would benefit from different property. If I wanna do code analysis and, and convert it to natural language, maybe one model would be, would be better if I want to output like a result in, in a certain format.Maybe another model is better in forcing the, a certain format you probably saw on Twitter, et cetera. People talk about it's hard to ask model to output JSON et cetera. So basically we predefine. For different tasks, we, we use different models and I think like this is for individuals, for developers to check, try to sync, like the test that now you are working on, what is most important for you to get, you want the semantic understanding, that's most important? You want the output, like are you asking for a very specific [00:28:30] output?It's just like a chat or are you asking to give a output of code and have only code, no description. Or if there's a description of the top doc string and not something else. And then we use different models. We are aiming to have our own models in in 2024. Being independent of any other third party, like OpenAI or so, but since our product is very challenging, it has UI/UX challenges, engineering challenge, statical and dynamical analysis, and AI.As entrepreneur, you need to choose your battles. And we thought that it's better for us to, to focus on everything around the model. And one day when we are like thinking that we have the, the right UX/UI engineering, et cetera, we'll focus on model building. This is also, by the way, what we did in in Alibaba.Even when I had like half a million dollar a month for trading one foundational model, I would never start this way. You always try like first using the best model you can for your product. Then understanding what's the glass ceiling for that model? Then fine tune a foundation model, reach a higher glass ceiling and then training your own.That's what we're aiming and that's what I suggest other developers like, don't necessarily take a model and, and say, oh, it's so easy these days to do RLHF, et cetera. Like I see it's like only $600. Yeah, but what are you trying to optimize for? The properties. Don't try to like certain models first, organize your challenges.Understand the [00:30:00] properties you're aiming for and start playing with that. And only then go to train your own model.Alessio: Yeah. And when you say benchmark, you know, we did a one hour long episode, some benchmarks, there's like many of them. Are you building some unique evals to like your own problems? Like how are you doing that? And that's also work for your future model building, obviously, having good benchmarks. Yeah.Itamar:. Yeah. That's very interesting. So first of all, with all the respect, I think like we're dealing with ML benchmark for hundreds of years now.I'm, I'm kidding. But like for tens of years, right? Benchmarking statistical creatures is something that, that we're doing for a long time. I think what's new here is the generative part. It's an open challenge to some extent. And therefore, like maybe we need to re rethink some of the way we benchmark.And one of the notions that I really believe in, I don't have a proof for that, is like create a benchmark in levels. Let's say you create a benchmark from level one to 10, and it's a property based benchmark. Let's say I have a WebGPT ask something from the internet and then it should fetch it for me.So challenge level one could be, I'm asking it and it brings me something. Level number two could be I'm asking it and it has a certain structure. Let's say for example, I want to test AutoGPT. Okay. And I'm asking it to summarize what's the best cocktail I could have for this season in San Francisco.So [00:31:30] I would expect, like, for example, for that model to go. This is my I what I think to search the internet and do a certain thing. So level number three could be that I want to check that as part of this request. It uses a certain tools level five, you can add to that. I expect that it'll bring me back something like relevance and level nine it actually prints the cocktail for me I taste it and it's good. So, so I think like how I see it is like we need to have data sets similar to before and make sure that we not fine tuning the model the same way we test it. So we have one challenges that we fine tune over, right? And few challenges that we don't.And the new concept may is having those level which are property based, which is something that we know from software testing and less for ML. And this is where I think that these two concepts merge.Swyx: Maybe Codium can do ML testing in the future as well.Itamar: Yeah, that's a good idea.Swyx: Okay. I wanted to cover a little bit more about Codium in the present and then we'll go into the slides that you have.So you have some UI/UX stuff and you've obviously VS Code is the majority market share at this point of IDE, but you also have IntelliJ right?Itamar: Jet Brains in general.Swyx: Yeah. Anything that you learned supporting JetBrains stuff? You were very passionate about this one user who left you a negative review.What is the challenge of that? Like how do you think about the market, you know, maybe you should focus on VS Code since it's so popular?Itamar: Yeah. [00:33:00] So currently the VS Code extension is leading over JetBrains. And we were for a long time and, and like when I tell you long time, it could be like two or three weeks with version oh 0.5, point x something in, in VS code, although oh 0.4 or so a jet brains, we really saw the difference in, in the how people react.So we also knew that oh 0.5 is much more meaningful and one of the users left developers left three stars on, on jet brands and I really remember that. Like I, I love that. Like it's what do you want to get at, at, at our stage? What's wrong? Like, yes, you want that indication, you know, the worst thing is getting nothing.I actually, not sure if it's not better to get even the bad indication, only getting good ones to be re frank like at, at, at least in our stage. So we're, we're 9, 10, 10 months old startup. So I think like generally speaking We find it easier and fun to develop in vs code extension versus JetBrains.Although JetBrains has like very nice property, when you develop extension for one of the IDEs, it usually works well for all the others, like it's one extension for PyCharm, and et cetera. I think like there's even more flexibility in the VS code. Like for example, this app is, is a React extension as opposed that it's native in the JetBrains one we're using. What I learned is that it's basically is almost like [00:34:30] developing Android and iOS where you wanna have a lot of the best practices where you have one backend and all the software development like best practices with it.Like, like one backend version V1 supports both under Android and iOS and not different backends because that's crazy. And then you need all the methodology. What, what means that you move from one to 1.1 on the backend? What supports whatnot? If you don't what I'm talking about, if you developed in the past, things like that.So it's important. And then it's like under Android and iOS and, and you relatively want it to be the same because you don't want one developer in the same team working with Jet Brains and then other VS code and they're like talking, whoa, that's not what I'm seeing. And with code, what are you talking about?And in the future we're also gonna have like teams offering of collaboration Right now if you close Codium Tab, everything is like lost except of the test code, which you, you can, like if I go back to a test suite and do open as a file, and now you have a test file with everything that you can just save, but all the goodies here it's lost. One day we're gonna have like a platform you can save all that, collaborate with people, have it part of your PR, like have suggested part of your PR. And then you wanna have some alignment. So one of the challenges, like UX/UI, when you think about a feature, it should, some way or another fit for both platforms be because you want, I think by the way, in iOS and Android, Android sometimes you don't care about parity, but here you're talking about developers that might be on the same [00:36:00] team.So you do care a lot about that.Alessio: Obviously this is a completely different way to work for developers. I'm sure this is not everything you wanna build and you have some hint. So maybe take us through what you see the future of software development look like.Itamar: Well, that's great and also like related to our announcement, what we're working on.Part of it you already start seeing in my, in my demo before, but now I'll put it into a framework. I'll be clearer. So I think like the software development world in 2025 is gonna look very different from 2020. Very different. By the way. I think 2020 is different from 2000. I liked the web development in 95, so I needed to choose geocities and things like that.Today's much easier to build a web app and whatever, one of the cloud. So, but I think 2025 is gonna look very different in 2020 for the traditional coding. And that's like a paradigm I don't think will, will change too much in the last few years. And, and I'm gonna go over that when I, when I'm talking about, so j just to focus, I'm gonna show you like how I think the intelligence software development world look like, but I'm gonna put it in the lens of Codium AI.We are focused on code integrity. We care that with all this advancement of co-generation, et cetera, we wanna make sure that developers can code fast with confidence. That they have confidence on generated code in the AI that they are using that. That's our focus. So I'm gonna put, put that like lens when I'm going to explain.So I think like traditional development. Today works like creating some spec for different companies, [00:37:30] different development teams. Could mean something else, could be something on Figma, something on Google Docs, something on Jira. And then usually you jump directly to code implementation. And then if you have the time or patience, or will, you do some testing.And I think like some people would say that it's better to do TDD, like not everyone. Some would say like, write spec, write your tests, make sure they're green, that they do not pass. Write your implementation until your test pass. Most people do not practice it. I think for just a few, a few reason, let them mention two.One, it's tedious and I wanna write my code like before I want my test. And I don't think, and, and the second is, I think like we're missing tools to make it possible. And what we are advocating, what I'm going to explain is actually neither. Okay. It's very, I want to say it's very important. So here's how we think that the future of development pipeline or process is gonna look like.I'm gonna redo it in steps. So, first thing I think there do I wanna say that they're gonna be coding assistance and coding agents. Assistant is like co-pilot, for example, and agents is something that you give it a goal or a task and actually chains a few tasks together to complete your goal.Let's have that in mind. So I think like, What's happening right now when you saw our demo is what I presented a few minutes ago, is that you start with an implementation and we create spec for you and test for you. And that was like a agent, like you didn't converse with it, you just [00:39:00] click a button.And, and we did a, a chain of thought, like to create these, that's why it's it's an agent. And then we gave you an assistant to change tests, like you can converse it with it et cetera. So that's like what I presented today. What we're announcing is about a vision that we called the DRY. Don't repeat yourself. I'm gonna get to that when I'm, when I'm gonna show you the entire vision. But first I wanna show you an intermediate step that what we're going to release. So right now you can write your code. Or part of it, like for example, just a class abstract or so with a coding assistant like copilot and maybe in the future, like a Codium AI coding assistant.And then you can create a spec I already presented to you. And the next thing is that you going to have like a spec assistant to generate technical spec, helping you fill it quickly focused on that. And this is something that we're working on and, and going to release the first feature very soon as part of announcement.And it's gonna be very lean. Okay? We're, we're a startup that going bottom up, like lean features going to more and more comprehensive one. And then once you have the spec and implementation, you can either from implementation, have tests, and then you can run the test and fix them like I presented to you.But you can also from spec create tests, okay? From the spec directly to tests. [00:40:30]So then now you have a really interesting thing going on here is that you can start from spec, create, test, create code. You can start from test create code. You can start from a limitation. From code, create, spec and test. And actually we think the future is a very flexible one. You don't need to choose what you're practicing traditional TDD or whatever you wanna start with.If you have already some spec being created together with one time in one sprint, you decided to write a spec because you wanted to align about it with your team, et cetera, and now you can go and create tests and implementation or you wanted to run ahead and write your code. Creating tests and spec that aligns to it will be relatively easy.So what I'm talking about is extreme DRY concept; DRY is don't repeat yourself. Until today when we talked about DRY is like, don't repeat your code. I claim that there is a big parts of the spec test and implementation that repeat himself, but it's not a complete repetition because if spec was as detailed as the implementation, it's actually the implementation.But the spec is usually in different language, could be natural language and visual. And what we're aiming for, our vision is enabling the dry concept to the extreme. With all these three: you write your test will help you generate the code and the spec you write your spec will help you doing the test and implementation.Now the developers is the driver, okay? You'll have a lot [00:42:00] of like, what do you think about this? This is what you meant. Yes, no, you wanna fix the coder test, click yes or no. But you still be the driver. But there's gonna be like extreme automation on the DRY level. So that's what we're announcing, that we're aiming for as our vision and what we're providing these days in our product is the middle, is what, what you see in the middle, which is our code integrity agents working for you right now in your id, but soon also part of your Github actions, et cetera, helping you to align all these three.Alessio: This is great. How do you reconcile the difference in languages, you know, a lot of times the specs is maybe like a PM or it's like somebody who's more at the product level.Some of the implementation details is like backend developers for something. Frontend for something. How do you help translate the language between the two? And then I think in the one of the blog posts on your blog, you mentioned that this is also changing maybe how programming language themselves work. How do you see that change in the future? Like, are people gonna start From English, do you see a lot of them start from code and then it figures out the English for them?Itamar: Yeah. So first of all, I wanna say that although we're working, as we speak on managing we front-end frameworks and languages and usage, we are currently focused on the backend.So for example, as the spec, we won't let you input Figma, but don't be surprised if in 2024 the input of the spec could be a Figma. Actually, you can see [00:43:30] demos of that on a pencil drawing from OpenAI and when he exposed the GPT-4. So we will have that actually.I had a blog, but also I related to two different blogs. One, claiming a very knowledgeable and respectful, respectful person that says that English is going to be the new language program language and, and programming is dead. And another very respectful person, I think equally said that English is a horrible programming language.And actually, I think both of are correct. That's why when I wrote the blog, I, I actually related, and this is what we're saying here. Nothing is really fully redundant, but what's annoying here is that to align these three, you always need to work very hard. And that's where we want AI to help with. And if there is inconsistency will raise a question, what do, which one is true?And just click yes or no or test or, or, or code that, that what you can see in our product and we'll fix the right one accordingly. So I think like English and, and visual language and code. And the test language, let's call it like, like that for a second. All of them are going to persist. And just at the level of automation aligning all three is what we're aiming for.Swyx: You told me this before, so I I'm, I'm just actually seeing Alessio's reaction to it as a first time.Itamar: Yeah, yeah. Like you're absorbing like, yeah, yeah.Swyx: No, no. This is, I mean, you know, you can put your VC hat on or like compare, like what, what is the most critical or unsolved question presented by this vision?Alessio: A lot of these tools, especially we've seen a lot in the past, it's like the dynamic nature of a lot of this, you know?[00:45:00] Yeah. Sometimes, like, as you mentioned, sometimes people don't have time to write the test. Sometimes people don't have time to write the spec. Yeah. So sometimes you end up with things. Out of sync, you know? Yeah. Or like the implementation is moving much faster than the spec, and you need some of these agents to make the call sometimes to be like, no.Yeah, okay. The spec needs to change because clearly if you change the code this way, it needs to be like this in the future. I think my main question as a software developer myself, it's what is our role in the future? You know? Like, wow, how much should we intervene, where should we intervene?I've been coding for like 15 years, but if I've been coding for two years, where should I spend the next year? Yeah. Like focus on being better at understanding product and explain it again. Should I get better at syntax? You know, so that I can write code. Would love have any thoughts.Itamar: Yeah. You know, there's gonna be a difference between 1, 2, 3 years, three to six, six to 10, and 10 to 20. Let's for a second think about the idea that programming is solved. Then we're talking about a machine that can actually create any piece of code and start creating, like we're talking about singularity, right?Mm-hmm. If the singularity happens, then we're talking about this new set of problems. Let's put that aside. Like even if it happens in 2041, that's my prediction. I'm not sure like you should aim for thinking what you need to do, like, or not when the singularity happens. So I, [00:46:30] I would aim for mm-hmm.Like thinking about the future of the next five years or or, so. That's my recommendation because it's so crazy. Anyway. Maybe not the best recommendation. Take that we're for grain of salt. And please consult with a lawyer, at least in the scope of, of the next five years. The idea that the developers is the, the driver.It actually has like amazing team members. Agents that working for him or her and eventually because he or she's a driver, you need to understand especially what you're trying to achieve, but also being able to review what you get. The better you are in the lower level of programming in five years, it it mean like real, real program language.Then you'll be able to develop more sophisticated software and you will work in companies that probably pay more for sophisticated software and the more that you're less skilled in, in the actual programming, you actually would be able to be the programmer of the new era, almost a creator. You'll still maybe look on the code levels testing, et cetera, but what's important for you is being able to convert products, requirements, et cetera, to working with tools like Codium AI.So I think like there will be like degree of diff different type developers now. If you think about it for a second, I think like it's a natural evolution. It's, it's true today as well. Like if you know really good the Linux or assembly, et cetera, you'll probably work like on LLVM Nvidia [00:48:00] whatever, like things like that.Right. And okay. So I think it'll be like the next, next step. I'm talking about the next five years. Yeah. Yeah. Again, 15 years. I think it's, it's a new episode if you would like to invite me. Yeah. Oh, you'll be, you'll be back. Yeah. It's a new episode about how, how I think the world will look like when you really don't need a developer and we will be there as Cody mi like you can see.Mm-hmm.Alessio: Do we wanna dive a little bit into AutoGPT? You mentioned you're part of the community. Yeah.Swyx: Obviously Try, Catch, Finally, Repeat is also part of the company motto.Itamar: Yeah. So it actually really. Relates to what we're doing and there's a reason we have like a strong relationship and connection with the AutoGPT community and us being part part of it.So like you can see, we're talking about agent for a few months now, and we are building like a designated, a specific agent because we're trying to build like a product that works and gets the developer trust to have developer trust us. We're talking about code integrity. We need it to work. Like even if it will not put 100% it's not 100% by the way our product at all that UX/UI should speak the language of, oh, okay, we're not sure here, please take the driving seat.You want this or that. But we really not need, even if, if we're not close to 100%, we still need to work really well just throwing a number. 90%. And so we're building a like really designated agents like those that from code, create tests.So it could create tests, run them, fix them. It's a few tests. So we really believe in that we're [00:49:30] building a designated agent while Auto GPT is like a swarm of agents, general agents that were supposedly you can ask, please make me rich or make me rich by increase my net worth.Now please be so smart and knowledgeable to use a lot of agents and the tools, et cetera, to make it work. So I think like for AutoGPT community was less important to be very accurate at the beginning, rather to show the promise and start building a framework that aims directly to the end game and start improving from there.While what we are doing is the other way around. We're building an agent that works and build from there towards that. The target of what I explained before. But because of this related connection, although it's from different sides of the, like the philosophy of how you need to build those things, we really love the general idea.So we caught it really early that with Toran like building it, the, the maker of, of AutoGPT, and immediately I started contributing, guess what, what did I contribute at the beginning tests, right? So I started using Codium AI to build tests for AutoGPT, even, even finding problems this way, et cetera.So I become like one of the, let's say 10 contributors. And then in the core team of the management, I talk very often with with Toran on, on different aspects. And we are even gonna have a workshop,Swyx: a very small [00:49:00] meetingItamar: work meeting workshop. And we're going to compete together in a, in a hackathons.And to show that AutoGPT could be useful while, for example, Codium AI is creating the test for it, et cetera. So I'm part of that community, whether is my team are adding tests to it, whether like advising, whether like in in the management team or whether to helping Toran. Really, really on small thing.He is the amazing leader like visionaire and doing really well.Alessio: What do you think is the future of open source development? You know, obviously this is like a good example, right? You have code generating the test and in the future code could actually also implement the what the test wanna do. So like, yeah.How do you see that change? There's obviously not enough open source contributors and yeah, that's one of the, the main issue. Do you think these agents are maybe gonna help us? Nadia Eghbal has this  great book called like Working in Public and there's this type of projects called Stadium model, which is, yeah, a lot of people use them and like nobody wants to contribute to them.I'm curious about, is it gonna be a lot of noise added by a lot of these agents if we let them run on any repo that is open source? Like what are the contributing guidelines for like humans versus agents? I don't have any of the answers, but like some of the questions that I've been thinking about.Itamar: Okay. So I wanna repeat your question and make sure I understand you, but like, if they're agents, for example, dedicated for improving code, why can't we run them on, mm-hmm.Run them on like a full repository in, in fixing that? The situation right now is that I don't think that right now Auto GPT would be able to do that for you. Codium AI might but it's not open sourced right now. And and like you can see like in the months or two, you will be able to like running really quickly like development velocity, like our motto is moving fast with confidence by the way.So we try to like release like every day or so, three times even a day in the backend, et cetera. And we'll develop more feature, enable you, for example, to run an entire re, but, but it's not open source. So about the open source I think like AutoGPT or LangChain, you can't really like ask please improve my repository, make it better.I don't think it will work right now because because let me like. Softly quote Ilya from Open AI. He said, like right now, let's say that a certain LLM is 95% accurate. Now you're, you're concatenating the results. So the accuracy is one point like it's, it's decaying. And what you need is like more engineering frameworks and work to be done there in order to be able to deal with inaccuracies, et cetera.And that's what we specialize in Codium, but I wanna say that I'm not saying that Auto GPT won't be able to get there. Like the more tools and that going to be added, the [00:52:30] more prompt engineering that is dedicated for this, this idea will be added by the way, where I'm talking with Toran, that Codium, for example, would be one of the agents for Auto GPT.Think about it AutoGPT is not, is there for any goal, like increase my net worth, though not focused as us on fixing or improving code. We might be another agent, by the way. We might also be, we're working on it as a plugin for ChatGPT. We're actually almost finished with it. So that's like I think how it's gonna be done.Again, open opensource, not something we're thinking about. We wanted to be really good before weSwyx: opensource it. That was all very impressive. Your vision is actually very encouraging as well, and I, I'm very excited to try it out myself. I'm just curious on the Israel side of things, right? Like you, you're visiting San Francisco for a two week trip for this special program you can tell us about. But also I think a lot of American developers have heard that, you know, Israel has a really good tech scene. Mostly it's just security startups. You know, I did some, I was in some special unit in the I D F and like, you know, I come out and like, I'm doing the same thing again, but like, you know, for enterprises but maybe just something like, describe for, for the rest of the world.It's like, What is the Israeli tech scene like? What is this program that you're on and what shouldItamar: people know? So I think like Israel is the most condensed startup per capita. I think we're number one really? Or, or startup pair square meter. I think, I think we're number one as well because of these properties actually there is a very strong community and like everyone are around, like are [00:57:00] working in a.An entrepreneur or working in a startup. And when you go to the bar or the coffee, you hear if it's 20, 21, people talking about secondary, if it's 2023 talking about like how amazing Geni is, but everyone are like whatever are around you are like in, in the scene. And, and that's like a lot of networking and data propagation, I think.Somehow similar here to, to the Bay Area in San Francisco that it helps, right. So I think that's one of our strong points. You mentioned some others. I'm not saying that it doesn't help. Yes. And being in the like idf, the army, that age of 19, you go and start dealing with technology like very advanced one, that, that helps a lot.And then going back to the community, there's this community like is all over the world. And for example, there is this program called Icon. It's basically Israelis and in the Valley created a program for Israelis from, from Israel to come and it's called Silicon Valley 1 0 1 to learn what's going on here.Because with all the respect to the tech scene in Israel here, it's the, the real thing, right? So, so it's an non-profit organization by Israelis that moved here, that brings you and, and then brings people from a 16 D or, or Google or Navon or like. Amazing people from unicorns or, or up and coming startup or accelerator, and give you up-to-date talks and, and also connect you to relevant people.And that's, that's why I'm here in addition to to, you know, to [00:58:30] me and, and participate in this amazing podcast, et cetera.Swyx: Yeah. Oh, well, I, I think, I think there's a lot of exciting tech talent, you know, in, in Tel Aviv, and I, I'm, I'm glad that your offer is Israeli.Itamar: I, I think one of thing I wanted to say, like yeah, of course, that because of what, what what we said security is, is a very strong scene, but a actually water purification agriculture attack, there's a awful other things like usually it's come from necessity.Yeah. Like, we have big part of our company of our state is like a desert. So there's, there's other things like ai by the way is, is, is big also in Israel. Like, for example, I think there's an Israeli competitor to open ai. I'm not saying like it's as big, but it's ai 21, I think out of 10.Yeah. Out. Oh yeah. 21. Is this really? Yeah. Out of 10 like most, mm-hmm. Profound research labs. Research lab is, for example, I, I love, I love their. Yeah. Yeah.Swyx: I, I think we should try to talk to one of them. But yeah, when you and I met, we connected a little bit Singapore, you know, I was in the Singapore Army and Israeli army.We do have a lot of connections between countries and small countries that don't have a lot of natural resources that have to make due in the world by figuring out some other services. I think the Singapore startup scene has not done as well as the Israeli startup scene. So I'm very interested in, in how small, small countries can have a world impact essentially.Itamar: It's a question we're being asked a lot, like why, for example, let's go to the soft skills. I think like failing is a bad thing. Yeah. Like, okay. Like sometimes like VCs prefer to [01:00:00] put money on a, on an entrepreneur that failed in his first startup and actually succeeded because now that person is knowledgeable, what it mean to be, to fail and very hungry to, to succeed.So I think like generally, like there's a few reason I think it's hard to put the finger exactly, but we talked about a few things. But one other thing I think like failing is not like, this is my fourth company. I did one as, it wasn't a startup, it was a company as a teenager. And then I had like my first startup, my second company that like, had a amazing run, but then very beautiful collapse.And then like my third company, my second startup eventually exit successfully to, to Alibaba. So, so like, I think like it's there, there are a lot of trial and error, which is being appreciated, not like suppressed. I guess like that's one of the reason,Alessio: wanna jump into lightning round?Swyx: Yes. I think we send you into prep, but there's just three questions now.We've, we've actually reduced it quite a bit, but you have it,Alessio: so, and we can read them that you can take time and answer. You don't have to right away. First question, what is a already appin in AI that Utah would take much longer than an sItamar: Okay, so I have to, I hope it doesn't sound like arrogant,

airhacks.fm podcast with adam bien
How JPA Buddy Happened

airhacks.fm podcast with adam bien

Play Episode Listen Later Mar 25, 2023 69:08


An airhacks.fm conversation with Aleksey Stukalov (@AlekseyStukalov) about: the deepest hole-11km, starting with Pascal, math - and there is nothing to argue about, integral and differential equations, writing an article about magnetic impulses, 1c - consulting company, implementing accounting software, learning C#, Silverlight and LINQ, from C# to Java, starting CUBA and jmix, cuba and jmix are like MS Access for Java, building a banking system for Paraguay, 25k developers is using CUBA, starting JPA Buddy, selling on IntelliJ marketplace, JPA Buddy - the Vlad automation tool, "#117 Java Persistence: From DB over JDBC to Transactions" episode with Vlad Mihalcea, JPA buddy is a IntelliJ plugin, the Jetbrains MPS (Meta Programming System), Intellij is an interesting distribution channel, DTO generation, mapping entities to DTOs, implementing projections with JPA, JSON-P with Java Records Aleksey Stukalov on twitter: @AlekseyStukalov

GOTO - Today, Tomorrow and the Future
Getting to Know IntelliJ IDEA • Trisha Gee & Helen Scott

GOTO - Today, Tomorrow and the Future

Play Episode Listen Later Mar 24, 2023 44:27 Transcription Available


This interview was recorded for the GOTO Book Club.gotopia.tech/bookclubRead the full transcription of the interview hereTrisha Gee - Lead Developer Evangelist at Gradle, Java Champion & Co-Author of "Getting to Know IntelliJ IDEA"Helen Scott - Developer Advocate at JetBrains & Co-Author of "Getting to Know IntelliJ IDEA"BLOG POSTShelenjoscott.comtrishagee.comDESCRIPTIONWe're frequently taught to use a text editor when we're learning to write code so that we understand the fundamentals. However, if we treat our IDE as a text editor, we are doing ourselves a disservice. As professional developers, we no longer need to learn the fundamentals; we need to deliver working applications. We can use the features of an IDE to help us with this.IntelliJ IDEA is an extremely fully-featured IDE that can help professional developers with almost any task they need to perform, and this can be overwhelming to get to grips with. Getting to Know IntelliJ IDEA uses two approaches to help newcomers and experienced users alike:• Tutorials that walk through writing code and developing applications that show when, why and how to use IntelliJ IDEA features to create working applications.• A questions-and-answers approach that demonstrates which features can be used to solve the problems that professional developers face.Seeing how to use IntelliJ IDEA from these different angles not only showcases the most useful features but also teaches multiple approaches for using these features. No matter which technologies you use or how you like to work, reading this book will help you find an approach that enables you to work comfortably and productively with IntelliJ IDEA.* Book description: © leanpub.comThe interview is based on Trisha's & Helen's co-authored book "Getting to Know IntelliJ IDEA"RECOMMENDED BOOKSTrisha Gee • Getting to Know IntelliJ IDEATrisha Gee, Kathy Sierra & Bert Bates • Head First JavaKevlin Henney & Trisha Gee • 97 Things Every Java Programmer Should KnowMichael Nygard • Release It! 2nd EditionAditya Y. Bhargava • Grokking AlgorithmsFord, Richards, Sadalage & Dehghani • Software Architecture: The Hard PartsTwitterLinkedInFacebookLooking for a unique learning experience?Attend the next GOTO conference near you! Get your ticket: gotopia.techSUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted almost daily

Exploring Axon
Introducing ”Inspector Axon”

Exploring Axon

Play Episode Listen Later Mar 3, 2023 31:33


Sara spoke with a colleague, Mitchell Herrijgers, one of Axon Framework's engineers and one of AxonIQ's Solutions Architects. Mitchell has been leading a project for a tool internally called "Inspector Axon." This tool helps developers monitor various things within their Axon Framework-based application and gauge metrics, find out about various event processors, and much more. The name will most likely change once it is publicly available, but for now, we are referring to it as "Inspector Axon." Mitchell also worked on a different project last year: the IntelliJ plugin for Axon Framework. He also briefly talked about that at the end of the session.  If you are interested in becoming one of our private beta testers for Inspector Axon, please watch our social media for an announcement, or you can reach us through our website.    Connect with Mitchell on LinkedIn.  Connect with Sara on LinkedIn and Twitter.    For more information about us, visit axoniq.io  

Enterprise Java Newscast
Stackd 62: Make JavaOne Again

Enterprise Java Newscast

Play Episode Listen Later Jan 22, 2023 102:28


Overview Josh, Kito, Danno, and Ian are back, discussing the first JavaOne in many years, and welcoming special guest Emily Jiang, Liberty Cloud Native Architect and Chief Advocate at IBM. They discuss JavaOne announcements, ways to avoid cold starts in Java, JDK virtual threads with Helidon Níma and Quarkus, Open Liberty, new releases from NetBeans, Eclipse and IntelliJ, Microprofile, Jakarta EE, Angular 15, Kito's SpeakerTrax project, and much more. We Thank DataDog for sponsoring this podcast! https://www.pubhouse.net/datadog JavaOne Recap  - GraalVM JIT and native image to be included with OpenJDK  - Liberty InstantOn using Linux CRIU (https://openliberty.io/blog/2022/09/29/instant-on-beta.html)  - Oracle will create JavaFX builds  - Java SE Subscription Enterprise Performance Pack, a drop-in replacement for JDK 8 with JDK 17 (https://blogs.oracle.com/java/post/introducing-the-java-se-subscription-enterprise-performance-pack)  - Generational ZGC (https://bugs.openjdk.org/browse/JDK-8272979)  - Announcing Java Management Service (https://blogs.oracle.com/java/post/announcing-java-management-service)  - JavaOne Content Feed (https://inside.java/javaone/)  - JavaOne Keynotes and Select Sessions on YouTube (https://www.youtube.com/playlist?list=PLX8CzqL3ArzU2i-ogxRAHYIZ8du6GwZyu) Web  - Angular v15 is now available! (https://blog.angular.io/angular-v15-is-now-available-df7be7f2f4c8) Server Side Java  - Please welcome.. Helidon Níma (https://medium.com/helidon/please-welcome-helidon-n%C3%ADma-9a882c5b6f1e)    - Virtual threads and what it means for servers (do we really need reactive programming models anymore)?  - Quarkus Virtual Threads (https://quarkus.io/guides/virtual-threads)  - Starting Quarkus 3 (https://quarkus.io/blog/road-to-quarkus-3/)  - Open Liberty (https://openliberty.io/)   IDEs and Tools  - NetBeans 16 Release Candidate Stage (now released) (https://netbeans.apache.org/download/nb16/index.html)  - Intellij 2022.3 (https://blog.jetbrains.com/idea/2022/11/intellij-idea-2022-3-beta/)  - Eclipse IDE 2022-09 (https://projects.eclipse.org/releases/2022-09) Java Platform  - Java: Developing smaller Docker images with jdeps and jlink | by Joe Honour | Level Up Coding (https://levelup.gitconnected.com/java-developing-smaller-docker-images-with-jdeps-and-jlink-d4278718c550) Emily Jiang  - Java Champion & Liberty Cloud Native Architect and Chief Advocate at IBM  - Microprofile specs: MicroProfile Config, Fault Tolerance  - Jakarta EE specs: Jakarta Config, Jakarta Context and Dependency Injection (CDI), Jakarta Interceptors  - Books:     Practical Cloud-Native Java Development with MicroProfile | Packt (https://www.amazon.com/Practical-Cloud-Native-Java-Development-MicroProfile/dp/1801078807)    97 Things Every Java Programmer Should Know [Book] (https://www.oreilly.com/library/view/97-things-every/9781491952689/) Other  - Kito's new speaker profile powered by SpeakerTrax: https://kitomann.com  - Twitter demolition (https://twitter.com/CCSewell/status/1592257949971673088)  - InterPlanetary File System  (https://ipfs.tech/) Picks   - Webcomponents.dev (Kito) https://webcomponents.dev/  - Podman Desktop is Released (Josh) https://podman-desktop.io/  - Run everywhere! (Emily)  - Github CoPilot (Ian) https://github.com/features/copilot  - Mastodon (Danno) https://joinmastodon.org/ Other Pubhouse Network podcasts (do not remove)  - Breaking into Open Source (https://www.pubhouse.net/breaking-into-open-source)  - OffHeap (https://www.javaoffheap.com/)  - Java Pubhouse (https://www.javapubhouse.com/) Events  - Codemash - Jan 12-15 Sandusky, OH, USA (https://codemash.org/)  - jChampions Conf - January 2023, Online (https://jchampionsconf.com/)  - DevNexus 2023 - April 4-6 2023, Atlanta, GA, USA (https://devnexus.com/call-for-papers)  

Les Cast Codeurs Podcast
LCC 290 - Mettre tes lunettes dans ta base de données

Les Cast Codeurs Podcast

Play Episode Listen Later Jan 14, 2023 75:48


Guillaume et Arnaud discutent de tech en cette nouvelle année 2023. GraalVM dans OpenJDK, Rust, Webassembly, containers. postgres, ChatGPT, le rôle de l'architecte et la ribambelle de rétrospective 2022. Enregistré le 13 janvier 2023 Téléchargement de l'épisode LesCastCodeurs-Episode–290.mp3 News Langages OpenJDK propose projet Galahad : pour fusionner dans OpenJDK certaines parties de GraalVM community edition https://www.infoq.com/news/2022/12/openjdk-galahad-Dec22/ https://www.infoq.com/articles/graalvm-java-compilers-openjdk/ Alex Snaps partage un article sur Rust pour le développeur Java https://wcgw.dev/posts/2023/rusty-java-intro/ Google a sorti sa formation interne sur Rust en libre accès https://google.github.io/comprehensive-rcust/ Paul King du projet Apache Groovy partage sa rétrospective de l'année 2022 https://blogs.apache.org/groovy/entry/apache-groovy–2022-year-in Webassembly pour le developpeur Java https://www.javaadvent.com/2022/12/webassembly-for-the-java-geek.html Un article assez critique sur TypeScript https://dev.to/wiseai/17-compelling-reasons-to-start-ditching-typescript-now–249b On voit souvent des articles plutôt positif sur TypeScript, mais est-ce que tout est tout rose tout le temps, pas forcément ! L'article cite 17 problèmes avec TypeScript, dont la courbe d'apprentissage, la baisse de productivité, la verbosité des types, le manque de flexibilité, le fait que ce n'est pas vraiment un sur-ensemble de JavaScript, la lenteur du temps de compilation… basé sur son talk sur le même thème qu'il a déjà présenté à Devoxx Maroc et Belgique Alex a également écrit une deuxième partie faisant suite à son article, dans lequel il parle un peu plus d'ownership, de borrowing, du trait Drop, etc. (càd sur la gestion mémoire) https://wcgw.dev/posts/2023/rusty-java–2/ Librairies Sortie du Micronaut 3.8 https://micronaut.io/2022/12/27/micronaut-framework–3–8–0-released/ support de GraalVM 22.3.0 possibilité d'annoter les records avec @RequestBean (pour binder les paramètres de requête et autre, aux paramètres de la méthode du controleur) amélioration du CorsFilter pour éviter certaines attaques également des améliorations sur le support de CRaC (Coordinated Restore at Checkpoint) et plein d'autres upgrades de versions, nouveaux plugins, et améliorations mineures Swing n'est pas mort ! Un nouveau DSL Java open source pour Swing dénommé Sierra, pour faciliter la création d'interfaces graphiques Swing https://github.com/HTTP-RPC/Sierra Infrastructure Comprendre root dans et en dehors des containers https://www.redhat.com/en/blog/understanding-root-inside-and-outside-container un article pas recent mais utile c'est quoi un container rootless on peut etre root et lancer le moteur de container on peut etre root dans le container lui meme quand on run en root le moteur, l'utilisateur exterieur et interieur sont mappés (meme # d'UID) ; par contre en non root, le UID de l'utilisateur du container est mappé sur un nouvel UID c'est top car les utilisateurs dedans et dehors ne sont pas mappés donc moins de risque en cas de sortie de bac a sable (sandbox escape) c'est le cas pour podman mais pour docker il y a un ajout: docker a un démon (root ou pas) et une CLI qui appelle ce demon (root ou pas), ce qui importe c'est le demon pour les risques de sécu l'idéal c'est de tourner non root le moteur et dans le container (meme si encore beaucoup d'images s'attendent a être root les folles) Cloud Kubernetes 1.26 avec notamment une de corrélation de l'hébergement de la Registry par Google https://www.infoq.com/news/2022/12/kubernetes–1–26/?utm_campaign=infoq_content&utm_source=twitter&utm_medium=feed&utm_term=Devops Web Evan You, le créateur de Vue.js revient sur l'année 2022 https://blog.vuejs.org/posts/2022-year-in-review.html C'est la grande migration de Vue 2 vers Vue 3 Migration de l'API Composition de Vue 3 vers l'API Options de Vue 2 (mais supporté encore en 3) La documentation de Vue propose Vue 3 par défaut depuis février Pendant la phase de transition, gros focus sur l'outillage et l'expérience développeur L'écosystème a bien adopté Vue 3 et continue de le faire au fur et à mesure Pour 2023, espère faire plus de releases mineures régulières, et travail sur le “vapor mode” qui propose une stratégie de compilation plus rapide Data Un article de Stephan Schmidt qui suggère d'utiliser PostgreSQL… pour tout ! https://www.amazingcto.com/postgres-for-everything/ pour du caching à la place de REDIS comme une queue de messages pour stocker des documents JSON au lieu de MongoDB pour faire des requêtes géo-spatiales pour le full-text search à la place d'ElasticSearch pour générer du JSON directement en base comme stockage / adaptateur pour GraphQL ou pour Timescale (base de données time-series) Outillage ChatGPT en action sur le design d'un nouveau langage de programmation https://judehunter.dev/blog/chatgpt-helped-me-design-a-brand-new-programming-language ChatGPT, on lui attribue plus de magie qu'il n'en a https://arxiv.org/pdf/2212.03551.pdf Github rajoute le scan des secrets dans vos répos publics aussi https://github.blog/2022–12–15-leaked-a-secret-check-your-github-alerts-for-free/ ce n'est plus seulement pour les organisations des entreprises aussi accessible pour les répos publics permet d'éviter de leaker des clés d'API et autre Les nouveautés de Java sur Visual Studio Code https://foojay.io/today/java-on-visual-studio-code-update-december–2022/ amélioration visuelles pour les extensions Spring Boot et aussi pour la visualisation de la mémoire utilisée complétion “post-fix” comme dans IntelliJ plus de raccourcis pour générer du code support de Lombok intégré support de l'annotation processing de Gradle meilleure visualisation des erreurs de build 2 millions de développeurs utilisent Visual Studio Code pour Java Encore un guide pour sortir de Vi https://thevaluable.dev/vim-advanced/ Le client HTTP de IntelliJ peut maintenant être utilisé en ligne de commande et dans un environnement d'intégration continue https://blog.jetbrains.com/idea/2022/12/http-client-cli-run-requests-and-tests-on-ci/ Architecture L'évolution du rôle de l'architecte https://www.infoq.com/articles/architecture-architecting-role/ Le (très long) rapport des tendances 2023 par Didier Girard et Olivier Rafal https://www.linkedin.com/pulse/rapport-tendances–2023-didier-girard/?trackingId=wu9pJ4wNQAOKjh11R2UyjA%3D%3D un prisme tech/orga/culture pour préparer l'entreprise aux enjeux un prisme produits/plateformes/data pour structurer notre approche d'un SI moderne. couvre des tonnes de sujets de l'intelligence artificielle, les données, le cloud, le web1/2/3, mais aussi l'organisation des équipes, les rôles, etc. Loi, société et organisation Twitter n'apprécie guère Mastodon, et bride les tweets avec des liens vers Mastodon. La liberté d'expression façon Elon Musk ! https://twitter.com/bluxte/status/1603656787097534464 Statement de Mastodon sur le fait que Twitter bannit les liens vers Mastodon https://blog.joinmastodon.org/2022/12/twitter-suspends-mastodon-account-prevents-sharing-links/ Et finalement Twitter est revenu en arrière sur son changement des conditions d'utilisation Dans la famille “les informaticiens ont des supers passions”, je voudrais Cédric Champeau, qui nous fait une magnifique rétrospective de ces clichés d'astrophotographie https://melix.github.io/blog//2022/12/astrophoto–2022.html Conférences La liste des conférences provenant de Developers Conferences Agenda/List par Aurélie Vache et contributeurs : 19 janvier 2023 : Archilocus - Bordeaux (France) 19–20 janvier 2023 : Touraine Tech - Tours (France) 25–28 janvier 2023 : SnowCamp - Grenoble (France) 31 janvier 2023 : Duck Conf - Paris (France) 2 février 2023 : Very Tech Trip - Paris (France) 2 février 2023 : AgiLeMans - Le Mans (France) 9–11 février 2023 : World AI Cannes Festival - Cannes (France) 16–19 février 2023 : PyConFR - Bordeaux (France) 7 mars 2023 : Kubernetes Community Days France - Paris (France) 23–24 mars 2023 : SymfonyLive Paris - Paris (France) 23–24 mars 2023 : Agile Niort - Niort (France) 1–2 avril 2023 : JdLL - Lyon 3e (France) 5–7 avril 2023 : FIC - Lille Grand Palais (France) 12–14 avril 2023 : Devoxx France - Paris (France) 20–21 avril 2023 : Toulouse Hacking Convention 2023 - Toulouse (France) 4–6 mai 2023 : Devoxx Greece - Athens (Greece) 10–12 mai 2023 : Devoxx UK - London (UK) 12 mai 2023 : AFUP Day - lle & Lyon (France) 25–26 mai 2023 : Newcrafts Paris - Paris (France) 26 mai 2023 : Devfest Lille - Lille (France) 27 mai 2023 : Polycloud - Montpellier (France) 7 juin 2023 : Serverless Days Paris - Paris (France) 15–16 juin 2023 : Le Camping des Speakers - Baden (France) 29–30 juin 2023 : Sunny Tech - Montpellier (France) 19 septembre 2023 : Salon de la Data Nantes - Nantes (France) & Online 21–22 septembre 2023 : API Platform Conference - Lille (France) & Online 2–6 octobre 2023 : Devoxx Belgium - Antwerp (Belgium) 12 octobre 2023 : Cloud Nord - Lille (France) 12–13 octobre 2023 : Volcamp 2023 - Clermont-Ferrand (France) 6–7 décembre 2023 : Open Source Experience - Paris (France) Nous contacter Pour réagir à cet épisode, venez discuter sur le groupe Google https://groups.google.com/group/lescastcodeurs Contactez-nous via twitter https://twitter.com/lescastcodeurs Faire un crowdcast ou une crowdquestion Soutenez Les Cast Codeurs sur Patreon https://www.patreon.com/LesCastCodeurs Tous les épisodes et toutes les infos sur https://lescastcodeurs.com/

Screaming in the Cloud
Snyk and the Complex World of Vulnerability Intelligence with Clinton Herget

Screaming in the Cloud

Play Episode Listen Later Nov 17, 2022 38:39


About ClintonClinton Herget is Field CTO at Snyk, the leader is Developer Security. He focuses on helping Snyk's strategic customers on their journey to DevSecOps maturity. A seasoned technnologist, Cliton spent his 20-year career prior to Snyk as a web software developer, DevOps consultant, cloud solutions architect, and engineering director. Cluinton is passionate about empowering software engineering to do their best work in the chaotic cloud-native world, and is a frequent conference speaker, developer advocate, and technical thought leader.Links Referenced: Snyk: https://snyk.io/ duckbillgroup.com: https://duckbillgroup.com TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is brought to us in part by our friends at Pinecone. They believe that all anyone really wants is to be understood, and that includes your users. AI models combined with the Pinecone vector database let your applications understand and act on what your users want… without making them spell it out.Make your search application find results by meaning instead of just keywords, your personalization system make picks based on relevance instead of just tags, and your security applications match threats by resemblance instead of just regular expressions. Pinecone provides the cloud infrastructure that makes this easy, fast, and scalable. Thanks to my friends at Pinecone for sponsoring this episode. Visit Pinecone.io to understand more.Corey: This episode is bought to you in part by our friends at Veeam. Do you care about backups? Of course you don't. Nobody cares about backups. Stop lying to yourselves! You care about restores, usually right after you didn't care enough about backups.  If you're tired of the vulnerabilities, costs and slow recoveries when using snapshots to restore your data, assuming you even have them at all living in AWS-land, there is an alternative for you. Check out Veeam, thats V-E-E-A-M for secure, zero-fuss AWS backup that won't leave you high and dry when it's time to restore. Stop taking chances with your data. Talk to Veeam. My thanks to them for sponsoring this ridiculous podcast.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. One of the fun things about establishing traditions is that the first time you do it, you don't really know that that's what's happening. Almost exactly a year ago, I sat down for a previous promoted guest episode much like this one, With Clinton Herget at Snyk—or Synic; however you want to pronounce that. He is apparently a scarecrow of some sorts because when last we spoke, he was a principal solutions engineer, but like any good scarecrow, he was outstanding in his field, and now, as a result, is a Field CTO. Clinton, Thanks for coming back, and let me start by congratulating you on the promotion. Or consoling you depending upon how good or bad it is.Clinton: You know, Corey, a little bit of column A, a little bit of column B. But very glad to be here again, and frankly, I think it's because you insist on mispronouncing Snyk as Synic, and so you get me again.Corey: Yeah, you could add a couple of new letters to it and just call the company [Synack 00:01:27]. Now, it's a hard pivot to a networking company. So, there's always options.Clinton: I acknowledge what you did there, Corey.Corey: I like that quite a bit. I wasn't sure you'd get it.Clinton: I'm a nerd going way, way back, so we'll have to go pretty deep in the stack for you to stump me on some of this stuff.Corey: As we did with the, “I wasn't sure you'd get it.” See that one sailed right past you. And I win. Chalk another one up for me and the networking pun wars. Great, we'll loop back for that later.Clinton: I don't even know where I am right now.Corey: [laugh]. So, let's go back to a question that one would think that I'd already established a year ago, but I have the attention span of basically a goldfish, let's not kid ourselves. So, as I'm visiting the Snyk website, I find that it says different words than it did a year ago, which is generally a sign that is positive; when nothing's been updated including the copyright date, things are going really well or really badly. One wonders. But no, now you're talking about Snyk Cloud, you're talking about several other offerings as well, and my understanding of what it is you folks do no longer appears to be completely accurate. So, let me be direct. What the hell do you folks do over there?Clinton: It's a really great question. Glad you asked me on a year later to answer it. I would say at a very high level, what we do hasn't changed. However, I think the industry has certainly come a long way in the past couple years and our job is to adapt to that Snyk—again, pronounced like a pair of sneakers are sneaking around—it's a developer security platform. So, we focus on enabling the people who build applications—which as of today, means modern applications built in the cloud—to have better visibility, and ultimately a better chance of mitigating the risk that goes into those applications when it matters most, which is actually in their workflow.Now, you're exactly right. Things have certainly expanded in that remit because the job of a software engineer is very different, I think this year than it even was last year, and that's continually evolving over time. As a developer now, I'm doing a lot more than I was doing a few years ago. And one of the things I'm doing is building infrastructure in the cloud, I'm writing YAML files, I'm writing CloudFormation templates to deploy things out to AWS. And what happens in the cloud has a lot to do with the risk to my organization associated with those applications that I'm building.So, I'd love to talk a little bit more about why we decided to make that move, but I don't think that represents a watering down of what we're trying to do at Snyk. I think it recognizes that developer security vision fundamentally can't exist without some understanding of what's happening in the cloud.Corey: One of the things that always scares me is—and sets the spidey sense tingling—is when I see a company who has a product, and I'm familiar—ish—with what they do. And then they take their product name and slap the word cloud at the end, which is almost always codes to, “Okay, so we took the thing that we sold in boxes in data centers, and now we're making a shitty hosted version available because it turns out you rubes will absolutely pay a subscription for it.” Yeah, I don't get the sense that at all is what you're doing. In fact, I don't believe that you're offering a hosted managed service at the moment, are you?Clinton: No, the cloud part, that fundamentally refers to a new product, an offering that looks at the security or potentially the risks being introduced into cloud infrastructure, by now the engineers who were doing it who are writing infrastructure as code. We previously had an infrastructure-as-code security product, and that served alongside our static analysis tool which is Snyk Code, our open-source tool, our container scanner, recognizing that the kinds of vulnerabilities you can potentially introduce in writing cloud infrastructure are not only bad to the organization on their own—I mean, nobody wants to create an S3 bucket that's wide open to the world—but also, those misconfigurations can increase the blast radius of other kinds of vulnerabilities in the stack. So, I think what it does is it recognizes that, as you and I think your listeners well know, Corey, there's no such thing as the cloud, right? The cloud is just a bunch of fancy software designed to abstract away from the fact that you're running stuff on somebody else's computer, right?Corey: Unfortunately, in this case, the fact that you're calling it Snyk Cloud does not mean that you're doing what so many other companies in that same space do it would have led to a really short interview because I have no faith that it's the right path forward, especially for you folks, where it's, “Oh, you want to be secure? You've got to host your stuff on our stuff instead. That's why we called it cloud.” That's the direction that I've seen a lot of folks try and pivot in, and I always find it disastrous. It's, “Yeah, well, at Snyk if we run your code or your shitty applications here in our environment, it's going to be safer than if you run it yourself on something untested like AWS.” And yeah, those stories hold absolutely no water. And may I just say, I'm gratified that's not what you're doing?Clinton: Absolutely not. No, I would say we have no interest in running anyone's applications. We do want to scan them though, right? We do want to give the developers insight into the potential misconfigurations, the risks, the vulnerabilities that you're introducing. What sets Snyk apart, I think, from others in that application security testing space is we focus on the experience of the developer, rather than just being another tool that runs and generates a bunch of PDFs and then throws them back to say, “Here's everything you did wrong.”We want to say to developers, “Here's what you could do better. Here's how that default in a CloudFormation template that leads to your bucket being, you know, wide open on the internet could be changed. Here's the remediation that you could introduce.” And if we do that at the right moment, which is inside that developer workflow, inside the IDE, on their local machine, before that gets deployed, there's a much greater chance that remediation is going to be implemented and it's going to happen much more cheaply, right? Because you no longer have to do the round trip all the way out to the cloud and back.So, the cloud part of it fundamentally means completing that story, recognizing that once things do get deployed, there's a lot of valuable context that's happening out there that a developer can really take advantage of. They can say, “Wait a minute. Not only do I have a Log4Shell vulnerability, right, in one of my open-source dependencies, but that artifact, that application is actually getting deployed to a VPC that has ingress from the internet,” right? So, not only do I have remote code execution in my application, but it's being put in an enclave that actually allows it to be exploited. You can only know that if you're actually looking at what's really happening in the cloud, right?So, not only does Snyk cloud allows us to provide an additional layer of security by looking at what's misconfigured in that cloud environment and help your developers make remediations by saying, “Here's the actual IAC file that caused that infrastructure to come into existence,” but we can also say, here's how that affects the risk of other kinds of vulnerabilities at different layers in the stack, right? Because it's all software; it's all connected. Very rarely does a vulnerability translate one-to-one into risk, right? They're compound because modern software is compound. And I think what developers lack is the tooling that fits into their workflow that understands what it means to be a software engineer and actually helps them make better choices rather than punishing them after the fact for guessing and making bad ones.Corey: That sounds awesome at a very high level. It is very aligned with how executives and decision-makers think about a lot of these things. Let's get down to brass tacks for a second. Assume that I am the type of developer that I am in real life, by which I mean shitty. What am I going to wind up attempting to do that Snyk will flag and, in other words, protect me from myself and warn me that I'm about to commit a dumb?Clinton: First of all, I would say, look, there's no such thing as a non-shitty developer, right? And I built software for 20 years and I decided that's really hard. What's a lot easier is talking about building software for a living. So, that's what I do now. But fundamentally, the reason I'm at Snyk, is I want to help people who are in the kinds of jobs that I had for a very long time, which is to say, you have a tremendous amount of anxiety because you recognize that the success of the organization rests on your shoulders, and you're making hundreds, if not thousands of decisions every day without the right context to understand fully how the results of that decision is going to affect the organization that you work for.So, I think every developer in the world has to deal with this constant cognitive dissonance of saying, “I don't know that this is right, but I have to do it anyway because I need to clear that ticket because that release needs to get into production.” And it becomes really easy to short-sightedly do things like pull an open-source dependency without checking whether it has any CVEs associated with it because that's the version that's easiest to implement with your code that already exists. So, that's one piece. Snyk Open Source, designed to traverse that entire tree of dependencies in open-source all the way down, all the hundreds and thousands of packages that you're pulling in to say, not only, here's a vulnerability that you should really know is going to end up in your application when it's built, but also here's what you can do about it, right? Here's the upgrade you can make, here's the minimum viable change that actually gets you out of this problem, and to do so when it's in the right context, which is in you know, as you're making that decision for the first time, right, inside your developer environment.That also applies to things like container vulnerabilities, right? I have even less visibility into what's happening inside a container than I do inside my application. Because I know, say, I'm using an Ubuntu or a Red Hat base image. I have no idea, what are all the Linux packages that are on it, let alone what are the vulnerabilities associated with them, right? So, being able to detect, I've got a version of OpenSSL 3.0 that has a potentially serious vulnerability associated with it before I've actually deployed that container out into the cloud very much helps me as a developer.Because I'm limiting the rework or the refactoring I would have to do by otherwise assuming I'm making a safe choice or guessing at it, and then only finding out after I've written a bunch more code that relies on that decision, that I have to go back and change it, and then rewrite all of the things that I wrote on top of it, right? So, it's the identifying the layer in the stack where that risk could be introduced, and then also seeing how it's affected by all of those other layers because modern software is inherently complex. And that complexity is what drives both the risk associated with it, and also things like efficiency, which I know your audience is, for good reason, very concerned about.Corey: I'm going to challenge you on aspect of this because on the tin, the way you describe it, it sounds like, “Oh, I already have something that does that. It's the GitHub Dependabot story where it winds up sending me a litany of complaints every week.” And we are talking, if I did nothing other than read this email in that day, that would be a tremendously efficient processing of that entire thing because so much of it is stuff that is ancient and archived, and specific aspects of the vulnerabilities are just not relevant. And you talk about the OpenSSL 3.0 issues that just recently came out.I have no doubt that somewhere in the most recent email I've gotten from that thing, it's buried two-thirds of the way down, like all the complaints like the dishwasher isn't loaded, you forgot to take the trash out, that baby needs a change, the kitchen is on fire, and the vacuuming, and the r—wait, wait. What was that thing about the kitchen? Seems like one of those things is not like the others. And it just gets lost in the noise. Now, I will admit to putting my thumb a little bit on the scale here because I've used Snyk before myself and I know that you don't do that. How do you avoid that trap?Clinton: Great question. And I think really, the key to the story here is, developers need to be able to prioritize, and in order to prioritize effectively, you need to understand the context of what happens to that application after it gets deployed. And so, this is a key part of why getting the data out of the cloud and bringing it back into the code is so important. So, for example, take an OpenSSL vulnerability. Do you have it on a container image you're using, right? So, that's question number one.Question two is, is there actually a way that code can be accessed from the outside? Is it included or is it called? Is the method activated by some other package that you have running on that container? Is that container image actually used in a production deployment? Or does it just go sit in a registry and no one ever touches it?What are the conditions required to make that vulnerability exploitable? You look at something like Spring Shell, for example, yes, you need a certain version of spring-beans in a JAR file somewhere, but you also need to be running a certain version of Tomcat, and you need to be packaging those JARs inside a WAR in a certain way.Corey: Exactly. I have a whole bunch of Lambda functions that provide the pipeline system that I use to build my newsletter every week, and I get screaming concerns about issues in, for example, a version of the markdown parser that I've subverted. Yeah, sure. I get that, on some level, if I were just giving it random untrusted input from the internet and random ad hoc users, but I'm not. It's just me when I write things for that particular Lambda function.And I'm not going to be actively attempting to subvert the thing that I built myself and no one else should have access to. And looking through the details of some of these things, it doesn't even apply to the way that I'm calling the libraries, so it's just noise, for lack of a better term. It is not something that basically ever needs to be adjusted or fixed.Clinton: Exactly. And I think cutting through that noise is so key to creating developer trust in any kind of tool that scanning an asset and providing you what, in theory, are a list of actionable steps, right? I need to be able to understand what is the thing, first of all. There's a lot of tools that do that, right, and we tend to mock them by saying things like, “Oh, it's just another PDF generator. It's just another thousand pages that you're never going to read.”So, getting the information in the right place is a big part of it, but filtering out all of the noise by saying, we looked at not just one layer of the stack, but multiple layers, right? We know that you're using this open-source dependency and we also know that the method that contains the vulnerability is actively called by your application in your first-party code because we ran our static analysis tool against that. Furthermore, we know because we looked at your cloud context, we connected to your AWS API—we're big partners with AWS and very proud of that relationship—but we can tell that there's inbound internet access available to that service, right? So, you start to build a compound case that maybe this is something that should be prioritized, right? Because there's a way into the asset from the outside world, there's a way into the vulnerable functions through the labyrinthine, you know, spaghetti of my code to get there, and the conditions required to exploit it actually exist in the wild.But you can't just run a single tool; you can't just run Dependabot to get that prioritization. You actually have to look at the entire holistic application context, which includes not just your dependencies, but what's happening in the container, what's happening in your first-party, your proprietary code, what's happening in your IAC, and I think most importantly for modern applications, what's actually happening in the cloud once it gets deployed, right? And that's sort of the holy grail of completing that loop to bring the right context back from the cloud into code to understand what change needs to be made, and where, and most importantly why. Because it's a priority that actually translates into organizational risk to get a developer to pay attention, right? I mean, that is the key to I think any security concern is how do you get engineering mindshare and trust that this is actually what you should be paying attention to and not a bunch of rework that doesn't actually make your software more secure?Corey: One of the challenges that I see across the board is that—well, let's back up a bit here. I have in previous episodes talked in some depth about my position that when it comes to the security of various cloud providers, Google is number one, and AWS is number two. Azure is a distant third because it figures out what Crayons tastes the best; I don't know. But the reason is not because of any inherent attribute of their security models, but rather that Google massively simplifies an awful lot of what happens. It automatically assumes that resources in the same project should be able to talk to one another, so I don't have to painstakingly configure that.In AWS-land, all of this must be done explicitly; no one has time for that, so we over-scope permissions massively and never go back and rein them in. It's a configuration vulnerability more than an underlying inherent weakness of the platform. Because complexity is the enemy of security in many respects. If you can't fit it all in your head to reason about it, how can you understand the security ramifications of it? AWS offers a tremendous number of security services. Many of them, when taken in some totality of their pricing, cost more than any breach, they could be expected to prevent. Adding more stuff that adds more complexity in the form of Snyk sounds like it's the exact opposite of what I would want to do. Change my mind.Clinton: I would love to. I would say, fundamentally, I think you and I—and by ‘I,' I mean Snyk and you know, Corey Quinn Enterprises Limited—I think we fundamentally have the same enemy here, right, which is the cyclomatic complexity of software, right, which is how many different pathways do the bits have to travel down to reach the same endpoint, right, the same goal. The more pathways there are, the more risk is introduced into your software, and the more inefficiency is introduced, right? And then I know you'd love to talk about how many different ways is there to run a container on AWS, right? It's either 30 or 400 or eleventy-million.I think you're exactly right that that complexity, it is great for, first of all, selling cloud resources, but also, I think, for innovating, right, for building new kinds of technology on top of that platform. The cost that comes along with that is a lack of visibility. And I think we are just now, as we approach the end of 2022 here, coming to recognize that fundamentally, the complexity of modern software is beyond the ability of a single engineer to understand. And that is really important from a security perspective, from a cost control perspective, especially because software now creates its own infrastructure, right? You can't just now secure the artifact and secure the perimeter that it gets deployed into and say, “I've done my job. Nobody can breach the perimeter and there's no vulnerabilities in the thing because we scanned it and that thing is immutable forever because it's pets, not cattle.”Where I think the complexity story comes in is to recognize like, “Hey, I'm deploying this based on a quickstart or CloudFormation template that is making certain assumptions that make my job easier,” right, in a very similar way that choosing an open-source dependency makes my job easier as a developer because I don't have to write all of that code myself. But what it does mean is I lack the visibility into, well hold on. How many different pathways are there for getting things done inside this dependency? How many other dependencies are brought on board? In the same way that when I create an EKS cluster, for example, from a CloudFormation template, what is it creating in the background? How many VPCs are involved? What are the subnets, right? How are they connected to each other? Where are the potential ingress points?So, I think fundamentally, getting visibility into that complexity is step number one, but understanding those pathways and how they could potentially translate into risk is critically important. But that prioritization has to involve looking at the software holistically and not just individual layers, right? I think we lose when we say, “We ran a static analysis tool and an open-source dependency scanner and a container scanner and a cloud config checker, and they all came up green, therefore the software doesn't have any risks,” right? That ignores the fundamental complexity in that all of these layers are connected together. And from an adversaries perspective, if my job is to go in and exploit software that's hosted in the cloud, I absolutely do not see the application model that way.I see it as it is inherently complex and that's a good thing for me because it means I can rely on the fact that those engineers had tremendous anxiety, we're making a lot of guesses, and crossing their fingers and hoping something would work and not be exploitable by me, right? So, the only way I think we get around that is to recognize that our engineers are critical stakeholders in that security process and you fundamentally lack that visibility if you don't do your scanning until after the fact. If you take that traditional audit-based approach that assumes a very waterfall, legacy approach to building software, and recognize that, hey, we're all on this infinite loop race track now. We're deploying every three-and-a-half seconds, everything's automated, it's all built at scale, but the ability to do that inherently implies all of this additional complexity that ultimately will, you know, end up haunting me, right? If I don't do anything about it, to make my engineer stakeholders in, you know, what actually gets deployed and what risks it brings on board.Corey: This episode is sponsored in part by our friends at Uptycs. Attackers don't think in silos, so why would you have siloed solutions protecting cloud, containers, and laptops distinctly? Meet Uptycs - the first unified solution that prioritizes risk across your modern attack surface—all from a single platform, UI, and data model. Stop by booth 3352 at AWS re:Invent in Las Vegas to see for yourself and visit uptycs.com. That's U-P-T-Y-C-S.com. My thanks to them for sponsoring my ridiculous nonsense.Corey: When I wind up hearing you talk about this—I'm going to divert us a little bit because you're dancing around something that it took me a long time to learn. When I first started fixing AWS bills for a living, I thought that it would be mostly math, by which I mean arithmetic. That's the great secret of cloud economics. It's addition, subtraction, and occasionally multiplication and division. No, turns out it's much more psychology than it is math. You're talking in many aspects about, I guess, what I'd call the psychology of a modern cloud engineer and how they think about these things. It's not a technology problem. It's a people problem, isn't it?Clinton: Oh, absolutely. I think it's the people that create the technology. And I think the longer you persist in what we would call the legacy viewpoint, right, not recognizing what the cloud is—which is fundamentally just software all the way down, right? It is abstraction layers that allow you to ignore the fact that you're running stuff on somebody else's computer—once you recognize that, you realize, oh, if it's all software, then the problems that it introduces are software problems that need software solutions, which means that it must involve activity by the people who write software, right? So, now that you're in that developer world, it unlocks, I think, a lot of potential to say, well, why don't developers tend to trust the security tools they've been provided with, right?I think a lot of it comes down to the question you asked earlier in terms of the noise, the lack of understanding of how those pieces are connected together, or the lack of context, or not even frankly, caring about looking beyond the single-point solution of the problem that solution was designed to solve. But more importantly than that, not recognizing what it's like to build modern software, right, all of the decisions that have to be made on a daily basis with very limited information, right? I might not even understand where that container image I'm building is going in the universe, let alone what's being built on top of it and how much critical customer data is being touched by the database, that that container now has the credentials to access, right? So, I think in order to change anything, we have to back way up and say, problems in the cloud or software problems and we have to treat them that way.Because if we don't if we continue to represent the cloud as some evolution of the old environment where you just have this perimeter that's pre-existing infrastructure that you're deploying things onto, and there's a guy with a neckbeard in the basement who is unplugging cables from a switch and plugging them back in and that's how networking problems are solved, I think you missed the idea that all of these abstraction layers introduced the very complexity that needs to be solved back in the build space. But that requires visibility into what actually happens when it gets deployed. The way I tend to think of it is, there's this firewall in place. Everybody wants to say, you know, we're doing DevOps or we're doing DevSecOps, right? And that's a lie a hundred percent of the time, right? No one is actually, I think, adhering completely to those principles.Corey: That's why one of the core tenets of ClickOps is lying about doing anything in the console.Clinton: Absolutely, right? And that's why shadow IT becomes more and more prevalent the deeper you get into modern development, not less and less prevalent because it's fundamentally hard to recognize the entirety of the potential implications, right, of a decision that you're making. So, it's a lot easier to just go in the console and say, “Okay, I'm going to deploy one EC2 to do this. I'm going to get it right at some point.” And that's why every application that's ever been produced by human hands has a comment in it that says something like, “I don't know why this works but it does. Please don't change it.”And then three years later because that developer has moved on to another job, someone else comes along and looks at that comment and says, “That should really work. I'm going to change it.” And they do and everything fails, and they have to go back and fix it the original way and then add another comment saying, “Hey, this person above me, they were right. Please don't change this line.” I think every engineer listening right now knows exactly where that weak spot is in the applications that they've written and they're terrified of that.And I think any tool that's designed to help developers fundamentally has to get into the mindset, get into the psychology of what that is, like, of not fundamentally being able to understand what those applications are doing all of the time, but having to write code against them anyway, right? And that's what leads to, I think, the fear that you're going to get woken up because your pager is going to go off at 3 a.m. because the building is literally on fire and it's because of code that you wrote. We have to solve that problem and it has to be those people who's psychology we get into to understand, how are you working and how can we make your life better, right? And I really do think it comes with that the noise reduction, the understanding of complexity, and really just being humble and saying, like, “We get that this job is really hard and that the only way it gets better is to begin admitting that to each other.”Corey: I really wish that there were a better way to articulate a lot of these things. This the reason that I started doing a security newsletter; it's because cost and security are deeply aligned in a few ways. One of them is that you care about them a lot right after you failed to care about them sufficiently, but the other is that you've got to build guardrails in such a way that doing the right thing is easier than doing it the wrong way, or you're never going to gain any traction.Clinton: I think that's absolutely right. And you use the key term there, which is guardrails. And I think that's where in their heart of hearts, that's where every security professional wants to be, right? They want to be defining policy, they want to be understanding the risk posture of the organization and nudging it in a better direction, right? They want to be talking up to the board, to the executive team, and creating confidence in that risk posture, rather than talking down or off to the side—depending on how that org chart looks—to the engineers and saying, “Fix this, fix that, and then fix this other thing.” A, B, and C, right?I think the problem is that everyone in a security role or an organization of any size at this point, is doing 90% of the latter and only about 10% of the former, right? They're acting as gatekeepers, not as guardrails. They're not defining policy, they're spending all of their time creating Jira tickets and all of their time tracking down who owns the piece of code that got deployed to this pod on EKS that's throwing all these errors on my console, and how can I get the person to make a decision to actually take an action that stops these notifications from happening, right? So, all they're doing is throwing footballs down the field without knowing if there's a receiver there, right, and I think that takes away from the job that our security analysts really shouldn't be doing, which is creating those guardrails, which is having confidence that the policy they set is readily understood by the developers making decisions, and that's happening in an automated way without them having to create friction by bothering people all the time. I don't think security people want to be [laugh] hated by the development teams that they work with, but they are. And the reason they are is I think, fundamentally, we lack the tooling, we lack—Corey: They are the barrier method.Clinton: Exactly. And we lacked the processes to get the right intelligence in a way that's consumable by the engineers when they're doing their job, and not after the fact, which is typically when the security people have done their jobs.Corey: It's sad but true. I wish that there were a better way to address these things, and yet here we are.Clinton: If only there were better way to address these things.Corey: [laugh].Clinton: Look, I wouldn't be here at Snyk if I didn't think there were a better way, and I wouldn't be coming on shows like yours to talk to the engineering communities, right, people who have walked the walk, right, who have built those Terraform files that contain these misconfigurations, not because they're bad people or because they're lazy, or because they don't do their jobs well, but because they lacked the visibility, they didn't have the understanding that that default is actually insecure. Because how would I know that otherwise, right? I'm building software; I don't see myself as an expert on infrastructure, right, or on Linux packages or on cyclomatic complexity or on any of these other things. I'm just trying to stay in my lane and do my job. It's not my fault that the software has become too complex for me to understand, right?But my management doesn't understand that and so I constantly have white knuckles worrying that, you know, the next breach is going to be my fault. So, I think the way forward really has to be, how do we make our developers stakeholders in the risk being introduced by the software they write to the organization? And that means everything we've been talking about: it means prioritization; it means understanding how the different layers of the stack affect each other, especially the cloud pieces; it means an extensible platform that lets me write code against it to inject my own reasoning, right? The piece that we haven't talked about here is that risk calculation doesn't just involve technical aspects, there's also business intelligence that's involved, right? What are my critical applications, right, what actually causes me to lose significant amounts of money if those services go offline?We at Snyk can't tell that. We can't run a scanner to say these are your crown jewel services that can't ever go down, but you can know that as an organization. So, where we're going with the platform is opening up the extensible process, creating APIs for you to be able to affect that risk triage, right, so that as the creators have guardrails as the security team, you are saying, “Here's how we want our developers to prioritize. Here are all of the factors that go into that decision-making.” And then you can be confident that in their environment, back over in developer-land, when I'm looking at IntelliJ, or, you know, or on my local command line, I am seeing the guardrails that my security team has set for me and I am confident that I'm fixing the right thing, and frankly, I'm grateful because I'm fixing it at the right time and I'm doing it in such a way and with a toolset that actually is helping me fix it rather than just telling me I've done something wrong, right, because everything we do at Snyk focuses on identifying the solution, not necessarily identifying the problem.It's great to know that I've got an unencrypted S3 bucket, but it's a whole lot better if you give me the line of code and tell me exactly where I have to copy and paste it so I can go on to the next thing, rather than spending an hour trying to figure out, you know, where I put that line and what I actually have to change it to, right? I often say that the most valuable currency for a developer, for a software engineer, it's not money, it's not time, it's not compute power or anything like that, it's the right context, right? I actually have to understand what are the implications of the decision that I'm making, and I need that to be in my own environment, not after the fact because that's what creates friction within an organization is when I could have known earlier and I could have known better, but instead, I had to guess I had to write a bunch of code that relies on the thing that was wrong, and now I have to redo it all for no good reason other than the tooling just hadn't adapted to the way modern software is built.Corey: So, one last question before we wind up calling it a day here. We are now heavily into what I will term pre:Invent where we're starting to see a whole bunch of announcements come out of the AWS universe in preparation for what I'm calling Crappy Cloud Hanukkah this year because I'm spending eight nights in Las Vegas. What are you doing these days with AWS specifically? I know I keep seeing your name in conjunction with their announcements, so there's something going on over there.Clinton: Absolutely. No, we're extremely excited about the partnership between Snyk and AWS. Our vulnerability intelligence is utilized as one of the data sources for AWS Inspector, particularly around open-source packages. We're doing a lot of work around things like the code suite, building Snyk into code pipeline, for example, to give developers using that code suite earlier visibility into those vulnerabilities. And really, I think the story kind of expands from there, right?So, we're moving forward with Amazon, recognizing that it is, you know, sort of the de facto. When we say cloud, very often we mean AWS. So, we're going to have a tremendous presence at re:Invent this year, I'm going to be there as well. I think we're actually going to have a bunch of handouts with your face on them is my understanding. So, please stop by the booth; would love to talk to folks, especially because we've now released the Snyk Cloud product and really completed that story. So, anything we can do to talk about how that additional context of the cloud helps engineers because it's all software all the way down, those are absolutely conversations we want to be having.Corey: Excellent. And we will, of course, put links to all of these things in the [show notes 00:35:00] so people can simply click, and there they are. Thank you so much for taking all this time to speak with me. I appreciate it.Clinton: All right. Thank you so much, Corey. Hope to do it again next year.Corey: Clinton Herget, Field CTO at Snyk. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry comment telling me that I'm being completely unfair to Azure, along with your favorite tasting color of Crayon.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.

airhacks.fm podcast with adam bien
Low Code, No Code, WYSIWYG …and some CRaC

airhacks.fm podcast with adam bien

Play Episode Listen Later Nov 13, 2022 61:08


An airhacks.fm conversation with John Ceccarelli (@jceccarelli1) about: Macintosh 512K, writing short stories and playing Dark Castle, studying European politics, enjoying Brno and Prague, learning Czech from a communist book, technical writing for Sun Microsystems, working on NetBeans Matisse, WYSIWYG precision is challenging, NetBeans Visual Web Pack was extremely popular, Sun's JSF woodstock, separation of generated and implemented code is challenging, explaining AWS Lambdas with EJBs, visual representation of complex code is challenging, NetBeans vs. IntelliJ strategies, Installing Java Support in Visual Studio Code, working on JVM internals at Azul Systems, Azul JVMs Zulu vs. Prime, the Falcon JIT, optimising JVM for Apache Cassandra, the Renaissance Suite, memento and openJDK CRaC, Azul's CRAC optimization, crowdourcing the optimizations, quarkus on Azul's CRaC, Azul Prime is based on LLVM, Foojay and azul John Ceccarelli on twitter: @jceccarelli1

Donau Tech Radio - DTR
iPads, Alexa Skills, Spring Native, IntelliJ Http Scratch

Donau Tech Radio - DTR

Play Episode Listen Later Oct 24, 2022 90:53


Diese Episode startet mit einem größere Consumer Block über, wer hättest es gedacht, Apples neueste Produktvorstellungen aus dieser Woche: die neuen iPad(Pro)s. Danach geht es weiter mit Kommentaren von Tom und André zum iPad OS 16.1 und macOS Ventura Release dass die kommende Woche über die Bühne gehen soll. In der zweiten Hälfte geht es um Developer Themen. Tom erzählt von seinen Versuchen mit dem Development von neuen Alexa Skills und damit einhergehend über erste Erfahrungen mit Spring Native und Graal VM. Zu guter Letzt gibt es noch einen kleinen Tipp Richtung HTTP Requests für Projekte mit IntelliJ.

FOCUS ON: DevOps
E42 – Entwicklungsumgebungen

FOCUS ON: DevOps

Play Episode Listen Later Sep 28, 2022 69:52


In dieser 42. Folge des Focus On DevOps Podcasts versuchen wir nicht die Antwort auf ‚die Frage nach dem Leben, dem Universum und dem ganzen Rest‘ zu finden, sondern widmen uns vielmehr einem anderen, aber mindestens ebenso wichtigen Thema: Entwicklungsumgebungen. Ob mit VIM, Visual Studio, IntelliJ und Co – jede:r Softwareentwickler:in baut sich aus verschiedenen Tools die Umgebung zusammen, die am besten für den jeweiligen Anwendungsbereich funktioniert. Doch was macht eigentlich eine gute Entwicklungsumgebung aus? Um dieser Frage möglichst objektiv auf den Grund zu gehen, hat sich Enrico mit dem (leiderprobten) ABAP-Entwickler Markus Rest und Christian Weichel von GitPod zusammengesetzt.

FOCUS ON: DevOps
E42 – Entwicklungsumgebungen

FOCUS ON: DevOps

Play Episode Listen Later Sep 28, 2022 69:52


In dieser 42. Folge des Focus On DevOps Podcasts versuchen wir nicht die Antwort auf ‚die Frage nach dem Leben, dem Universum und dem ganzen Rest‘ zu finden, sondern widmen uns vielmehr einem anderen, aber mindestens ebenso wichtigen Thema: Entwicklungsumgebungen. Ob mit VIM, Visual Studio, IntelliJ und Co – jede:r Softwareentwickler:in baut sich aus verschiedenen Tools die Umgebung zusammen, die am besten für den jeweiligen Anwendungsbereich funktioniert. Doch was macht eigentlich eine gute Entwicklungsumgebung aus? Um dieser Frage möglichst objektiv auf den Grund zu gehen, hat sich Enrico mit dem (leiderprobten) ABAP-Entwickler Markus Rest und Christian Weichel von GitPod zusammengesetzt.

Les Cast Codeurs Podcast
LCC 282 - Apérikube apomorphique - partie 2

Les Cast Codeurs Podcast

Play Episode Listen Later Jul 19, 2022 51:30


Cet épisode marathon sera découpé en deux morceaux pour éviter à vos oreilles une écoute marathon. Cette deuxième partie couvre des sujets d'architecture et de loi société et organisation ainsi que les conférences à venir. Logging, Migration Java 8 vers 11, Xerox Park, (manque de) sécurité, courbes elliptiques, sondage développeurs. Enregistré le 8 juillet 2022 Téléchargement de l'épisode LesCastCodeurs-Episode–282.mp3 News Architecture Pour ou contre le logging Contre puis pour tous les langages et plateformes utilisent les logs debugging, tracing, journaling, monitoring, and printing errors impact sur les performances (allocation supérieure sur un log que sur le code métier log = mémoire, CPU (GC), I/O risque de securité (dépendances et fonctionnalités sans besoin) format des log: pour lecture humaine main volume impose traitement automatique log level la bonne abstraction (souvent trop et pas ce que l'on veut à la fois debugging -> utiliser un debugger ; journaling -> event sourcing ou solution dédiée ; tracing > open tracing ; monitoring -> monitoring solution via metrics et health check bons usages de logging: en dev (println), fin de jobs automatiques, erreurs non récupérables ou innatendues, pas les erreurs utilisateur (logger les erreurs qui cachent un bug), dans les container, Sébastien utilise System.out et System.err vu que les logs sont gérés par la plateforme la réponse pour maintenant les logs peuvent etre structurés performance, on peut éviter les concatenations de String (parameterized logging), memory allocation est bien meilleure depuis 2012 (e.g. Shenandoah), vu des problèmes dans des cas plus rare de genre MDC.getCopyOfContextMap disk I/O: ok mais disque cape a 200 MiB/s donc bon…: si c;est le cas, sépare I/O log du reste (disque vs network par exemple) gros fan de logs structures via JSON ; log line sur console et JSON en fichier log plus de manière conditionelle tracing théoriquement bon mais limite dans son contexte métier et peu d'infos passables system.out problème de scalabilité d'usage, etc et appel blocant println (async usage n'est pas bon) LinkedIn et sa migration de Java 8 à 11 1000 apps sur 320k hosts Migration Java 8 vers 11 avec en vue G1 regardé depuis 2018 Jetty, Hadoop, Play, Samza: focalisé sur Jetty Mettre a jour le système de build, 2. Faire des tests de performance 3. Automatiser la migration mise. a jour vers gradle 5 G1 80% des applis CMS 20% pris 20 apps representatives focalisé sur les applications avec les tailles de piles les plus grosses de équipera jusquà 200% plus de latence et throughput: zones G1, Shenandoah et ZGC automatisé la migration du reste et tourné les builds de tests qui ont identifié les problèmes de migration quelques problèmes: suppression de certaines classes Java EE, changement du type de classloader par défaut, casting de classe plus stricte ils ont utilisé -release 8 et ont limité les usages des features Java 11 les options de CLI de la JVM ont beaucoup changé LinkedIn fait du microsercices ce qui veut dire que beaucoup de repositories sont liés à d'autre par un graphe de dépendance: euh c'est pas le principe des microservices d'éviter ça??? mise a jour de 500 librairies 3/4 de l'année Quelques challenges vus La JVM respecte groups et donc moins de thread GC sont crées aussi ils pouvaient piquer des cycles CPUs avant et plus maintenant Java 11 a un usage de mémoire hors pile plus important reduction de latence p99 par 10% et le throughput par 20% sans changer le type de GC C'est un bon retour qui sent le type de développement de la vrai vie Méthodologies Un article sur Xerox park et comment ils ont inventé le futur article de 1985 Xerox achète un constructeur de mainframe, et ils ont crée un lab de recherche pour aider les usages Macintosh et la souris et les fenêtres, les cartes météos colorées, imprimante laser, réseaux d'ordinateurs, lasers semi-conducteurs qui lisent les disques optiques, langages de programmation structurés developer l'architecture de l'information project proposes et faite en bottom up PARC construisait ses propres hardware ce qui a créer des inventions et qui devaient etre construits pour 100 utilisateurs (scale) recherche en construisant concrètement, pas en papier théorique académique bit map, distributed computing, email, frame buffer, LAN, object oriented programming Cree Alto un ordinateur « personnel » qui a permis aux chercheurs de tester leurs idées, beaucoup en avaient un. donc ils ont du inventer le LAN et Ethernet (packet) via une personne avec passe de radio amateur (medium partagé et non reliable premier projet distribué. (Un protocole d'impression) antialiasing : ils amélioraient en testant pour de vrai un gars a construit un proto de souris pour prouver que les curseurs étaient plus efficace: tests avec des dans la rue et IO a perdu :D concept de modal (insert, delete) vers comportement non modal, plus simple pour l'utilisateur small talk: un langage si simple qu'un enfant peut l'utiliser (simulation based programming) overlapping windows ont été développées en small talk autre groupe strong type system Xerox ne savait pas convertir ces recherches en produits et les amener sur le marcher (sauf l'imprimante laser) Sécurité Travis CI fuit encore des mots de passe permet d'accéder au compte privé des développeurs open source qui ont mis en place travisCI c'est la quatrième fois token offre accès lecture et écriture aux repos risque d'attaque de supply chain tokens github, AWS ou DockerHub apr exemple mais aussi les bases de données utilisées dans la CI via l'API TravisCI HDMI peut-être un vecteur d'attaque et d'infection de vos ordinateurs Un hack d'un adaptateur HDMI peut potentiellement infecter un video-projecteur, et qui à son tour pourra réinfecter les prochains ordinateurs qui s'y connecteront Cet article propose de construire une sorte de connecteur qui sert de firewall HDMI pour éviter ce genre d'infection il y a des préservatifs USB aussi qui ne laissent passer que la puissance et pas les données Un guide pour protéger son macOS Une suite de conseils comme de faire une installation toute fraiche, de mettre les mises à jour logicielle automatiques, n'autoriser que les applications signées, appliquer le chiffrement du disque… Mais aussi utiliser par exemple un gestionnaire de mot de passe, éviter les extensions de navigateur, faire tourner un firewall Et des liens vers des guides de sécurités plus avancés un truc que je n'ai pas fait mais qui me tente c'est un outbound firewall comme little snitch ou lulu Comment choisir un algorithme de courbes elliptiques un article qui détaille le pour et le contre de certaines courbes elliptiques cas d'usage, notamment gouvernemental faiblesses (timing attaques etc) pour les curieux mais la première courbe citée est celle la plus utilisée en ce moment Loi, société et organisation Stackoverflow sort son sondage sur les développeurs 70% apprennent a coder en ligne (les plus de 45 ans dans les bouquins) stackoverflow derrière la doc technique puis les blogs ; video 60% des gens ; podcast 7,21% damn! presque 60% ont moins de 10 ans d'expérience ; si t'es pas VP ou CxO a 17 ans d'expérience, tu as raté ta vie 9% cloud infra engineers 22% ont neuro atypiques Docker passe dans la catégorie outil fondamental (69% d'usage) les frameworks 3D genre Unity 3D ou Unreal Engine sont des outils que des non développeurs pro apprennent Rust technologie la plus aimée, Rust et Python en plus demandées Java 6eme position mais 4ème pour ceux qui apprenent Angular.is en framework le plus redouté / react.is le plus demandé Docker et Kube sont les plus aimés et demandé indépendants on augmenté de 5% et 4% pour les temples pleins 85% des dev sont dans une orga partiellement distancié le 62% des devs pro cherchent des réponses pendant plus de 30 minutes par jour, 25% 11h Azure prend la deuxième place des cloud, OVH 3,7% Spring framework le plus populaire de Java VSCode 74%, IntelliJ 28%, vim 23%, Eclipse 12%, EMacs 4,5% pleins d'outils asynchrone (tickets etc) que je ne connais pas salaires ont augmenté de 23% en median JavaScript change de licence open source toujours la licence Ecma international license, assez restrictive qui interdit le fork, mais avec certaines provisions pour l'intégration et la reproduction mais aussi une nouvelle licence dérivée de la W3C Document & Software License, un peu plus ouverte, qui permet d'intégrer et s'intégrer plus facilement avec les autres standards du Web Conférences de la part de Youen Cette année Codeurs en Seine, c'est le 17 novembre et le cfp est ouvert N'hésitez pas à amener un peu de JVM dans l'appel à orateur. (ca commence à se faire rare). Pour rappel : codeurs en seine c'est 1000 personnes autour des métiers du développement dans une des plus grande salle de Rouen, le kindarena. Nous contacter Soutenez Les Cast Codeurs sur Patreon https://www.patreon.com/LesCastCodeurs Faire un crowdcast ou une crowdquestion Contactez-nous via twitter https://twitter.com/lescastcodeurs sur le groupe Google https://groups.google.com/group/lescastcodeurs ou sur le site web https://lescastcodeurs.com/

Message à caractère informatique
#71 – Le jaune sombre des bases de données froides

Message à caractère informatique

Play Episode Listen Later Jun 17, 2022 62:57


Dans cet épisode nous parlons de Matomo, une alternative à Google Analytics sur Clever Cloud et de notre nouveau Ticket Center. Il sera également question de Franglais, de NixOS, de la version 0.22 de Vector, de couleurs, de CSS, de prédiction, de Quickwit 0.3 et d'éditeur de tableau dans IntelliJ... Et nous finirons en musique (enfin on espère !) avec la recommandation de notre invité.

Tech Writer koduje
#40 Tech Writer spełnia swoje marzenia, czyli co i jak można zautomatyzować

Tech Writer koduje

Play Episode Listen Later Mar 17, 2022 54:24


Jedni marzą o drogim samochodzie a drudzy o ekskluzywnych wakacjach w ciepłych krajach. A o czym marzą Tech Writerzy? Odpowiedź znaleźliśmy w newsletterze "Write the Docs" z marca 2022. Okazuje się, że technoskrybowie marzą o tym, żeby pewne elementy ich pracy były zautomatyzowane. Jest to temat bliski naszemu sercu, dlatego postanowiliśmy zmierzyć się z listą życzeń z newslettera. Bazując na swoim doświadczeniu oraz zdobytych informacjach, staramy się zaproponować praktyczne rozwiązania, które przybliżą nasze koleżanki i kolegów po fachu do wymarzonej automatyzacji. Dźwięki wykorzystane w audycji pochodzą z kolekcji "107 Free Retro Game Sounds" dostępnej na stronie https://dominik-braun.net, udostępnianej na podstawie licencji Creative Commons license CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/). Informacje dodatkowe: Newsletter "Write the Docs", marzec 2022: https://www.writethedocs.org/blog/newsletter-march-2022/ TestCafe: https://testcafe.io/ ImageMagick: https://imagemagick.org/index.php "Simplified User Interface: The Beginner's Guide": https://www.techsmith.com/blog/simplified-user-interface/ Screen Capture API: https://developer.mozilla.org/en-US/docs/Web/API/Screen_Capture_API "Sharing Screens with the New Javascript Screen Capture API": https://fjolt.com/article/javascript-screen-capture-api Biblioteka Pillow: https://pillow.readthedocs.io/en/stable/ Selenium WebDriver: https://www.selenium.dev/documentation/webdriver/ Conventional commits: https://www.conventionalcommits.org Vale: https://github.com/errata-ai/vale "Documentation as code: Part 3: A Linting How To - The Vale Linter in action (Demo)", Tag1: https://www.tag1consulting.com/blog/documentation-code-linting-part3 "Documentation testing", GitLab: https://docs.gitlab.com/14.8/ee/development/documentation/testing.html Alex: https://alexjs.com/ LanguageTool: https://languagetool.org/pl Schematron: https://www.schematron.com/ "Creative writing with GitHub copilot", Chris Ward: https://www.youtube.com/watch?v=V_CmYyvaMqE "Lint, Lint and Away! Linters for the English Language", Chris Ward: https://dzone.com/articles/lint-lint-and-away-linters-for-the-english-languag Code Spell Checker: https://marketplace.visualstudio.com/items?itemName=streetsidesoftware.code-spell-checker Gremlins Checker: https://marketplace.visualstudio.com/items?itemName=nhoizey.gremlins "Meet Grazie: the ultimate spelling, grammar, and style checker for IntelliJ IDEA", IntelliJ: https://blog.jetbrains.com/idea/2019/11/meet-grazie-the-ultimate-spelling-grammar-and-style-checker-for-intellij-idea/ Pandoc: https://pandoc.org/ "DITA as code - a modern approach to the classic standard", Tech Writer koduje: https://techwriterkoduje.pl/dita-as-code AutoIt: https://www.autoitscript.com/site/ Bitnami: https://github.com/bitnami

Beyond Coding
Product Lines in Software with Rachel Appel

Beyond Coding

Play Episode Listen Later Feb 2, 2022 44:23


You'll see lots of product lines when it comes to physical products. For example the automotive industry. But in software, product lines also exist. Think of JetBrains and their IDEs. Some of the topics we cover this episode, in order: ☑️ The ultimate way to reuse software ☑️ Planning for a product line ☑️ Product suites and product lines ☑️ The IntelliJ open source edition just made sense Joining us for this episode is Rachel Appel, Developer Advocate at JetBrains More on Rachel: https://rachelappel.com

Thinking Elixir Podcast
82: Reviewing Elixir with José Valim - Part 1

Thinking Elixir Podcast

Play Episode Listen Later Jan 18, 2022 60:13


José Valim joins us to kick off part 1 of a 5 part series as we count down to the 10 year anniversary of the Elixir project we know and love. In Part 1, we talk through the Elixir releases of 1.1, 1.2, and 1.3. We cover the significant changes in each release, what instigated the change, the thinking behind it, and more. We cover some long-present features that people may not be aware of and recall what these changes meant to us. Show Notes online - http://podcast.thinkingelixir.com/82 (http://podcast.thinkingelixir.com/82) Elixir Community News - https://dashbit.co/blog/elixir-and-machine-learning-nx-v0.1 (https://dashbit.co/blog/elixir-and-machine-learning-nx-v0.1) – Nx (Numerical Elixir) v0.1 released - https://twitter.com/peterlodewijk/status/1479838598790893571 (https://twitter.com/peterlodewijk/status/1479838598790893571) – Project called Numerl is announced for Erlang. - https://www.info.ucl.ac.be/~pvr/Losseau457814002022.pdf (https://www.info.ucl.ac.be/~pvr/Losseau_45781400_2022.pdf) – PDF to the Numerl paper titled "Efficient Vector and Matrix Computation for Erlang" - https://twitter.com/theerlef/status/1480632969140310017 (https://twitter.com/theerlef/status/1480632969140310017) – Lambda Days conference moved to July 2022 - https://github.com/KronicDeth/intellij-elixir/releases/tag/v12.1.0 (https://github.com/KronicDeth/intellij-elixir/releases/tag/v12.1.0) – Updates to the IntelliJ plugin that supports Elixir - https://twitter.com/kronicdeth/status/1479538579298820098 (https://twitter.com/kronicdeth/status/1479538579298820098) – IntelliJ Elixir plugin tweet announcement - https://github.com/elixir-lang/ex_doc/blob/main/CHANGELOG.md#v0270-2022-01-11 (https://github.com/elixir-lang/ex_doc/blob/main/CHANGELOG.md#v0270-2022-01-11) – ExDoc v0.27 was released - https://twitter.com/josevalim/status/1480815810503495683 (https://twitter.com/josevalim/status/1480815810503495683) – Tweet threat about ExDoc v0.27 showing features - https://hexdocs.pm/nx/Nx.html (https://hexdocs.pm/nx/Nx.html) – Nx library docs using the new ExDoc - https://twitter.com/brooklinjmyers/status/1480614054360018951 (https://twitter.com/brooklinjmyers/status/1480614054360018951) – Brooklin Myers announced he's joining DockYard to help create an Elixir Developer Bootcamp - https://www.elixirnewbie.com/ (https://www.elixirnewbie.com/) – Brooklin Myers Elixir Newbie Podcast Do you have some Elixir news to share? Tell us at @ThinkingElixir (https://twitter.com/ThinkingElixir) or email at show@thinkingelixir.com (mailto:show@thinkingelixir.com) Discussion Resources - https://github.com/elixir-lang/elixir/blob/v1.1/CHANGELOG.md (https://github.com/elixir-lang/elixir/blob/v1.1/CHANGELOG.md) – Changelog for v1.1 - https://groups.google.com/g/elixir-lang-core/c/yd90kWN99FE/m/x2WZu7iqPcgJ (https://groups.google.com/g/elixir-lang-core/c/yd90kWN99FE/m/x2WZu7iqPcgJ) – Discussion thread about maps in Elixir - https://github.com/elixir-lang/elixir/blob/v1.2/CHANGELOG.md (https://github.com/elixir-lang/elixir/blob/v1.2/CHANGELOG.md) – Changelog for v1.2 - https://www.youtube.com/watch?v=X25xOhntr6s (https://www.youtube.com/watch?v=X25xOhntr6s) – ElixirConf 2015 - Keynote - Elixir Should Take Over the World by Jessica Kerr - https://hexdocs.pm/elixir/Protocol.html#module-consolidation (https://hexdocs.pm/elixir/Protocol.html#module-consolidation) – Protocol consolidation - https://github.com/elixir-lang/elixir/blob/v1.3/CHANGELOG.md (https://github.com/elixir-lang/elixir/blob/v1.3/CHANGELOG.md) – Changelog for v1.3 - https://github.com/elixir-lang/elixir/blob/v1.3/CHANGELOG.md#mix-escriptinstall (https://github.com/elixir-lang/elixir/blob/v1.3/CHANGELOG.md#mix-escriptinstall) – Mix escript install - mix test --stale - https://elixirforum.com/t/how-to-describe-many-contexts-in-exunit-without-a-hierarchy/1551 (https://elixirforum.com/t/how-to-describe-many-contexts-in-exunit-without-a-hierarchy/1551) – How to Describe Many Contexts in ExUnit without a Hierarchy Guest Information - https://twitter.com/josevalim (https://twitter.com/josevalim) – on Twitter - https://github.com/josevalim (https://github.com/josevalim) – on Github - https://dashbit.co/ (https://dashbit.co/) – Dashbit website and blog Find us online - Message the show - @ThinkingElixir (https://twitter.com/ThinkingElixir) - Email the show - show@thinkingelixir.com (mailto:show@thinkingelixir.com) - Mark Ericksen - @brainlid (https://twitter.com/brainlid) - David Bernheisel - @bernheisel (https://twitter.com/bernheisel) - Cade Ward - @cadebward (https://twitter.com/cadebward)

Whiskey Web and Whatnot
Decentralized Gaming, IntelliJ, Twitch, and the Shortcomings of Modern VR with Rob Cary

Whiskey Web and Whatnot

Play Episode Listen Later Jan 13, 2022 54:16


What do web development, Twitch, VR, and blockchain all have in common? More than you might think. After years as a game developer, Ship Shape's longest-tenured employee Rob Cary was bound to put those unlikely virtual dots together and today he's here to share a few original insights.  After meeting Robbie Wagner in an elementary school play as a couple of accountants, their lives continued to overlap. From a choice in college to a knack for web development, making sweet beats, and ultimately, their careers, an intro to Rob may sound identical to an intro to Robbie. But unlike Robbie Wagner, Rob Cary has years of VR experience under his belt. Not to mention, some interesting ideas about the future of our virtual metaverse.  In this episode, Robbie, Chuck, and Rob discuss the wonders of WebStorm and IntelliJ, what on earth decentralized gaming is, how VR has transformed the gaming world, and the mostly unknown link between Twitch and web development.  Key Takeaways [00:28] - Introduction to Rob. [01:27] - A whiskey review.  [06:57] - Rob, Robbie, or both?  [13:05] - Rob's technical background and the state of decentralized gaming. [16:12] - A game of Stumped. [24:50] - What Rob likes about WebStorm and IntelliJ. [30:16] - A conversation about the VR universe, how it's transformed, and where we're headed.   [39:10] - Why NFTs are everywhere.  [43:20] - Rob's hobbies outside of gaming and websites.  Quotes [26:35] - “VS Code is one of the few examples I've seen of an IDE that's really universally been adopted really quickly.” ~ @r0bc4ry [34:50] - “Some of the things you can do on VR, you could just never do in a traditional game. The technology has a ton of promise, there are just fundamental issues that still are being worked on that I think need to be fixed.” ~ @r0bc4ry Links Rob Cary React Ember Dojo Vuori Lululemon Widow Jane Rye Mash, Oak & Applewood-Aged Whiskey Twitter Whiskey Web and Whatnot: Next.js 12, React vs. Svelte, and the Future of Frameworks with Wes Bos Virginia Tech StarCraft Zoom Unity Blockchain Halo Syntax Web3 DoorDash Uber Eats Async/await NativeScript BlueJ JSON IntelliJ IDEA WebStorm Visual Studio Code Atom yarn install Sublime Text TextWrangler PHPStorm Vim EmberConf JetBrains Rider Oculus Rec Room Wii Nintendo Call of Duty Medal Of Honor Vader Immortal Ethereum adidas NFTs Pepsi NFTs Macy's Thanksgiving Day Parade NFTs Twitch esports Diablo II Hellgate  Connect with our hosts Robbie Wagner Chuck Carpenter Ship Shape Subscribe and stay in touch Apple Podcasts Spotify Google Podcasts Whiskey Web and Whatnot Top-Tier, Full-Stack Software Consultants This show is brought to you by Ship Shape. Ship Shape's software consultants solve complex software and app development problems with top-tier coding expertise, superior service, and speed. In a sea of choices, our senior-level development crew rises above the rest by delivering the best solutions for fintech, cybersecurity, and other fast-growing industries. Check us out at shipshape.io.

Enterprise Java Newscast
COVID as the new normal (and new releases)

Enterprise Java Newscast

Play Episode Listen Later Sep 24, 2021


Recorded Date 9/3/2021 Description Another great episode with the entire band -- Kito, Daniel, Ian, and Josh. They discuss COVID as the new normal, NetBeans, Kotlin, IntelliJ, PrimeBlocks, Java 17 (including Spring's announcement), Docker requiring...

The Business of Open Source
Why Companies Go Cloud-Native with Austin Adams and Zach Arnold

The Business of Open Source

Play Episode Listen Later Jun 17, 2020 40:36


Some of the highlights of the show include The diplomacy that's required between software engineers and management, and why influence is needed to move projects forward to completion. Driving factors behind Ygrene's Kubernetes migration, which included an infrastructure bottleneck, a need to streamline deployment, and a desire to leverage their internal team of cloud experts. Management's request to ship code faster, and why it was important to the organization.  How the company's engineers responded to the request to ship code faster, and overcame disconnects with management. How the team obtained executive buy-in for a Kubernetes migration. Key cultural changes that were required to make the migration to Kubernetes successful. How unexpected challenges forced the team to learn the “depths of Kubernetes,” and how it helped with root cause analysis. Why the transition to Kubernetes was a success, enabling the team to ship code faster, deliver more value, secure more customers, and drive more revenue.  Links: HerdX: https://www.herdx.com/ Ygrene: https://ygrene.com/ Austin Twitter: https://twitter.com/_austbot Austin LinkedIn: https://www.linkedin.com/in/austbot/ Arnold's book on publisher site: https://www.packtpub.com/cloud-networking/the-kubernetes-workshop  Arnold's book on Amazon: https://www.amazon.com/Kubernetes-Workshop-Interactive-Approach-Learning/dp/1838820752/ TranscriptAnnouncer: Welcome to The Business of Cloud Native podcast where we explore how end users talk and think about the transition to Kubernetes and cloud-native architectures.Emily: Welcome to The Business of Cloud Native. My name is Emily Omier, and I am here with Austin Adams and Zack Arnold, and we are here to talk about why companies go cloud-native.Austin: So, I'm currently the CTO of a small Agrotech startup called HerdX. And that means I spend my days designing software, designing architecture for how distributed systems talk, and also leading teams of engineers to build proof-of-concepts and then production systems as they take over the projects that I've designed. Emily: And then, what did you do at Ygrene? Austin: I did the exact same thing, except for without the CTO title. And I also had other higher-level engineers working with me at Ygrene. So, we made a lot of technical decisions together. We all migrated to Kubernetes together, and Zack was a chief proponent of that, especially with the culture change. So, I focused on the designing software that teams of implementation engineers could take over and actually build out for the long run. And I think Zack really focused on—oh, I'll let Zack say what he focused on. [laughs].Emily: Go for it, Zach.Zach: Hello. I'm Zack. I also no longer work for Ygrene, although I have a lot of admiration and respect for the people who do. It was a fantastic company. So, Austin called me up a while back and asked me to think about participating in a DevOps engineering role at Ygrene. And he sort of said at the outset, we don't really know what it looks like, and we're pretty sure that we just created a position out of a culture, but would you be willing to embody it? And up until this point, I'd had cloud experience, and I had had software engineering experience, but I didn't really spend a ton of time focused on the actual movement of software from developer's laptops to production with as few hiccups, and as many tests, and as much safety as possible in between. So, I always told people the role felt like it was three parts. It was part IT automation expert, part software engineer, and then part diplomat. And the diplomacy was mostly in between people who are more operations focused. So, support engineers, project managers, and people who were on-call day in and day out, and being a go-between higher levels of management and software engineers themselves because there's this awkward, coordinated motion that has to really happen at a fine-grained level in order to get DevOps to really work at a company. What I mean by that is, essentially, Dev and Ops seem to on the surface have opposing goals, the operation staff, it's job is to maintain stability, and the development side's job is to introduce change, which invariably introduces instability. So, that dichotomy means that being able to simultaneously satisfy both desires is really a goal of DevOps, but it's difficult to achieve at an organizational level without dealing with some pretty critical cultural components. So, what do I spend my day on? The answer to that question is, yes. It really depends on the day. Sometimes it's cloud engineers. Sometimes it's QA folks, sometimes it's management. Sometimes I'm heads-down writing software for integrations in between tools. And every now and again, I get to contribute to open-source. So, a lot of different actual daily tasks take place in my position.Emily: Tell me a little bit more about this diplomacy between software engineers and management.Zach: [laughs]. Well, I'm not sure who's going to be listening in this amazing audience of ours, but I assume, because people are human, that they have capital O-pinions about how things should work, especially as it pertains to either software development lifecycle, the ITIL process of introducing change into a datacenter, into a cloud environment, compliance, security. There's lots of, I'll call them thought frameworks that have a very narrow focus on how we should be doing something with respect to software. So, diplomacy is the—well, I guess in true statecraft, it's being able to work in between countries. But in this particular case, diplomacy is using relational equity or influence, to be able to have every group achieve a common and shared purpose. At the end of the day, in most companies the goal is actually to be able to produce a product that people would want to pay for, and we can do so as quickly and as efficiently as possible. To do that, though, it again requires a lot of people with differing goals to work together towards that shared purpose. So, the diplomacy looks like, aside from just having way too many meetings, it actually looks like being able to communicate other thought frameworks to different stakeholders and being able to synthesize all of the different narrow-focused frameworks into a common shared, overarching process. So, I'll give you a concrete example because it feels like I just spewed a bunch of buzzwords. A concrete example would be, let's say in the common feature that's being delivered for ABC Company, for this feature it requires X number of hours of software development; X number of hours of testing; X number of hours of preparing, either capacity planning, or fleet size recommendations, or some form of operational pre-work; and then the actual deployment, and running, and monitoring. So, in the company that I currently work for, we just described roughly 20 different teams that would have to work together in order to achieve the delivery of this feature as rapidly as possible. So, the process of DevOps and the diplomacy of DevOps, for me looks like—aside from trying to automate as much as humanly possible and to provide what I call interface guarantees, which are basically shared agreements of functionality between two teams. So, the way that the developers will speak to the QA engineers is through Git. They develop new software, and they push it into shared code repositories, the way that the QA engineers will speak to people who are going to be handling the deployments—or at management in this particular case—is going to be through a well-formatted XML test file. So, providing automation around those particular interfaces and then ensuring that everyone's shared goals are met at the particular period of time where they're going to be invoked over the course of the delivery of that feature, is the “subtle art,”—air quotes, you can't see but—to me of DevOps diplomacy. That kind of help?Emily: Yeah, absolutely. Let's take, actually, just a little bit of a step back. Can you talk about what some of the business goals were behind moving to Kubernetes for Ygrene? Who was the champion of this move? Was it business stakeholders saying, “Hey, we really need this to change,” or engineering going to business stakeholders? Who needed a change. I believe that the desire for Kubernetes came from a bottleneck of infrastructure. Not so much around performance, such as the applications weren't performing due to scale. We had projected scale that we were coming to where it would cause a problem potentially, but it was also in the ease of deployment. It had a very operations mindset as Zack was saying, our infrastructure was almost entirely managed—of the core applications set—by outsourcing. And so, we depended on them to innovate, we depended on them to spin up new environments and services. But we also have this internal competing team that always had this cloud background. And so, what we were trying to do was lessen the time between idea to deployment by utilizing platforms that were more scalable, more flexible, and all the things that Docker gives with the Dev/Prod Parity, the ease of packaging your environment together so that small team can ship an entire application. And so, I think our main goal with that was to take that team that already had a lot of cloud experience, and give them more power to drive the innovation and not be bottlenecked just by what the outsourcing team could do. Which, by the way, just for the record, the outsourcing team was an amazing team, but they didn't have the Kubernetes or cloud experience, either. So, in terms of a hero or champion of it, it just started as an idea between me and the new CTO, or CIO that came in, talking about how can we ship code faster? So, one of the things that happened in my career was the desire for a rapid response team which, that sounds like a buzzword or something, but it was this idea that Ygrene was shipping software fairly slow, and we wanted to get faster. So, really the CIO, and one of the development managers, they were the really big champions of, “Hey, let's deliver value to the business faster.” And they had the experience to ask their engineers how to make that happen, and then trust Zack and I through this process of delivering Kubernetes, and Istio, and container security, and all these different things that eventually got implemented.Emily: Why do you think shipping code faster matters?Austin: I think, for this company, why it mattered was the PACE financing industry is relatively new. And while financing has some old established patterns, I feel like there's still always room for innovation. If you hear the early days of the Bridgewater Financial Hedge Fund, they were a source of innovation and they used technology to deliver new types of assets and things like that. And so, our team at Ygrene was excellent because they wanted to try new things. They wanted to try new patterns of PACE financing, or ways of getting in front of the customer, or connections with different analytics so they could understand their customer better. So, it was important to be able to try things, experiment to see what was going to be successful. To get things out into the real world to know, okay, this is actually going to work, or no, this isn't going to work. And then, also, one of the things within financing is—especially newer financing—is there's a lot of speed bumps along the way. Compliance laws can come into effect, as well as working with cities and governments that have specialized rules and specialized things that they need—because everyone's an expert when it comes to legislation, apparently—they decide that they need X, and they give us a time when we have to get it done. And so, we actually have another customer out there, which is the legislative bodies. So, they have to get the software—their features that are needed within the financing system out by certain dates, or we're no longer eligible to operate in those counties. So, one of it was a core business risk, so we needed to be able to deliver faster. The other was how can we grow the business?Emily: Zach, this might be a question for you. Was there anything that was lost in translation as you were explaining what engineering was going to do in order to meet this goal of shipping code faster, of being more agile, when you were talking to C level management? How did they understand, and did anything get lost in translation?Zach: One of the largest disconnects, both on a technical and from a high level speaking to management issue I had was explaining how we were no longer going to be managing application servers as though they were pets. When you come from an on-premise setup, and you've got your VMware ESXi, and you're managing virtual machines, the most important thing that you have is backups because you want to keep those machines exactly as they are, and you install new software on those machines. When Kubernetes says, I'm going to put your pods wherever they fit on the cluster, assuming it conforms with the scheduling pattern, and if a node dies, it's totally fine, I'm going to spin a new one up for you, and move pods around and ensure that the application is exactly as you had stated—as in, it's in its desired state—that kind of thinking from switching from infrastructure as pets to infrastructure as cattle, is difficult to explain to people who have spent their careers in building and maintaining datacenters. And I think a lot—well, it's not guaranteed that this is across the board, but if you want to talk about a generational divide, people that usually occupy the C level office chairs are familiar with—in their heyday of their career—a datacenter-based setup. In a cloud-based consumption model where it really doesn't matter—I can just spin up anything anywhere—when you talk about moving from reasoning about your application as the servers it comprises and instead talking about your application as the workload it comprises, it becomes a place where you have to really, really concretely explain to people exactly how it's going to work that the entire earth will not come crashing down if you lose a server, or if you lose a pod, or if a container hiccups and gets restarted by Kubernetes on that node. I think that was the real key one. And the reason why that actually became incredibly beneficial for us is because once we actually had that executive buy-off when it came to, while I still may not understand, I trust that you know what you're doing and that this infrastructure really is replaceable, it allowed us to get a little bit more aggressive with how we managed our resources. So, now using Horizontal Pod Autoscaling, using the Kubernetes Cluster Autoscaler, and leveraging Amazon EC2 Spot Fleets, we were only ever paying for the exact amount of infrastructure that was required to run our business. And I think that is usually the thing that translates the best to management and non-technical leadership. Because when it comes down to if I'm aware that using this tool, and using a cloud-native approach to running my application, I am only ever going to be paying for the computational resource that I need in that exact minute to run my business, then the budget discussions become a lot easier, because everyone is aware that this is your exact run-rate when it comes to technology. Does that make sense? Emily: Absolutely. How important was having that executive buy-in? My understanding is that a lot of companies, they think that they're going to get all these savings from Kubernetes, and it doesn't always materialize. So, I'm just curious, it sounds like it really did for Ygrene.Zach: There was two things that really worked well for us when this transformation was taking place. The first was, Ygrene was still growing, so if the budget grew alongside of the growth of the company, nobody noticed. So, that was one really incredible thing that happened that, I think, now having had different positions in the industry, I don't know if I appreciated that enough because if you're attempting to make a cost-neutral migration to the Cloud, or to adopt cloud-native management principles, you're going to probably move too little, too late. And when that happens, you run the risk of really doing a poor job of adopting cloud-native, and then scrapping that project, because it never materialized the benefit, as you just described, that some people didn't experience. And the other benefit that we had, I think was the fact that because there were enough incredibly senior technical people—and again, I learned everything from these people—working with us, and because we were all, for the most part, on the same page when it came to this migration, it was easy to have a unified front with our management because every engineer saw the value of this new way of running our infrastructure and running our application. In one non—and this obviously helps with our engineers—one non-monetary benefit that helped really get the buy-in was the fact that, with Kubernetes, our on-call SEV-1 pages went down, I want to say, by over 40 percent which was insane because Kubernetes was automatically intervening in the case where servers went down. JVMs run out of memory, exceptions cause strange things, but a simple restart usually fixes the vast majority of them. Well, now Kubernetes was doing this and we didn't need to wake somebody up in order to keep the machine running.Emily: From when you started this transition to when you, I should say, when you probably left the company, but what were some of the surprises, either surprises for you, or surprises for other people in the organization?Austin: The initial surprise was the yes that we got. So, initially I pitched it and started talking about it, and then the culture started changing to where we realized we really needed to change, and bringing Zack on and then getting the yes from management was the initial surprise. And—Emily: Why was that a surprise?Austin: It was just surprising because, when you work as an engineer—I mean, none of us were C suite, or Dev managers, or anything. We were just highly respected engineers working in the HQ. So, it was just a surprise that what we felt was a semi-crazy idea at the time—because Kubernetes was a little bit earlier. I mean, EKS wasn't even a thing from Amazon. We ran our Kubernetes clusters from the hip, which is using kops, which is—kops is a great tool, but obviously it wasn't managed. It was managed by us, mainly by Zach and his team, to be honest. So, that was a surprise that they would trust a billion-dollar financing engine to run on the proposal of two engineers. And then, the next ones were just how much the single-server, vertical scaling, and depending on running on the same server was into our applications. So, as we started to look at the core applications and moving them into a containerized environment, but also into an environment that can be spun up and spun down, looking at the assumptions the application was making around being on the same server; having specific IP addresses, or hostnames; and things like that, where we had to take those assumptions out and make things more flexible. So, we had to remove some stateful assumptions in the applications, that was a surprise. We also had to enforce more of the idea of idempotency, especially when introducing Istio, and [00:21:44 retryable] connections and retryable logic around circuit breaking and service-to-service communication. So, some of those were the bigger surprises, is the paradigm shift between, “Okay, we've got this service that's always going to run on the same machine, and it's always going to have local access to its files,” to, “Now we're on a pod that's got a volume mounted, and there's 50 of them.” And it's just different. So, that was a big—[laughs], that was a big surprise for us.Emily: Was there anything that you'd call a pleasant surprise? Things that went well that you anticipated to be really difficult?Zach: Oh, my gosh, yes. When you read through Kubernetes for the first time, you tend to have this—especially if somebody else told you, “Hey, we're going to do this,” this sinking feeling of, “Oh my god, I don't even know nothing,” because it's so immense in its complexity. It requires a retooling of how you think, but there have been lots of open-source community efforts to improve the cluster lifecycle management of Kubernetes, and one such project that really helped us get going—do you remember this Austin?—was kops.Austin: Yep. Yep, kops is great.Zach: I want to say Justin Santa Barbara was the original creator of that project, and it's still open source, and I think he still maintains it. But to have a production-ready, and we really mean production-ready: it was private, everything was isolated, the CNI was provisioned correctly, everything was in the right place, to have a fully production-ready Kubernetes cluster ready to go within a few hours of us being able to learn about this tool in AWS was huge because then we could start to focus on what we didn't even understand inside of the cluster. Because there were lots of—Kubernetes is—there's two sides of it, and both of them are confusing. There's the infrastructure that participates in the cluster, and there's the actual components inside of the cluster which get orchestrated to make your application possible. So, not having to initially focus on the infrastructure that made up the cluster, so we could just figure out the difference between our butt and the hole in the ground, when it came to our application inside of Kubernetes was immensely helpful to us. I mean, there are a lot of tools these days that do that now: GKE, EKS, AKS, but we got into Kubernetes right after it went GA, and this was huge to help with that.Emily: Can you tell me also a little bit about the cultural changes that had to happen? And what were these cultural changes, and then how did it go?Zach: As Austin said, the notion of—I think a lot—and I don't want to offer this as a sweeping statement—but I think the vast majority of the engineers that we had in Seattle, in San Jose, and in Petaluma where the company was headquartered, I think, even if they didn't understand what the word idempotent meant, they understood more or less how that was going to work. The larger challenge for us was actually in helping our contractors, who actually made up the vast majority of our labor force towards the end of my tenure there, how a lot of these principles worked in software. So, take a perfect example: part of the application is written in Ruby on Rails, and in Ruby on Rails, there's a concept of one-off tasks called rake tasks. When you are running a single server, and you're sending lots of emails that have attachments, those attachments have to be on the file system. And this is the phrase I always said to people, as we refactor the code together, I repeated the statement, “You have to pretend this request is going to start on one server and finish on a different one, and you don't know what either of them are, ahead of time.” And I think using just that simple nugget really helped, culturally, start to reshape this skill of people because when you can't use or depend on something like the file system, or you can't depend on that I'm still on the same server, you begin to break your task into components, and you begin to store those components in either a central database or a central file system like Amazon S3. And adopting those parts of, I would call, cloud-native engineering were critical to the cultural adoption of this tool. I think the other thing was, obviously, lots of training had to take place. And I think a lot of operational handoff had to take place. I remember for, basically, a fairly long stretch of time, I was on-call along with whoever was also on-call because I had the vast majority of the operational knowledge of Kubernetes for that particular team. So, I think there was a good bit of rescaling and mindset shift from the technical side of being able to adopt a cloud-native approach to software building. Does that make sense?Emily: Absolutely. What do you think actually were some of the biggest challenges or the biggest pain points? Zach: So, challenges of cultural shift, or challenges of specifically Kubernetes adoption?Emily: I was thinking challenges of Kubernetes adoption, but I'm also curious about the cultural shift if that's one of the biggest pain points.Zach: It really was for us. I think—because now it wouldn't—if you wanted to take out Kubernetes and replace it with Nomad there? All of the engineers would know what you're talking about. It wouldn't take but whatever the amount of time it would to migrate your Kubernetes manifests to Nomad HCL files. So, I do think the rescaling and the mindset shift, culturally speaking, was probably the thing that helped solidify it from an engineering level. But Kubernetes adoption—or at least problems in Kubernetes adoption, there was a lot of migration horror stories that we encountered. A lot of cluster instability in earlier versions of Kubernetes prevented any form of smooth upgrades. I had to leave—it was with my brother's—it was his wedding, what was it—oh, rehearsal dinner, that's what it was. I had to leave his rehearsal dinner because the production cluster for Ygrene went down, and we needed to get it back up. So, lots of funny stories like that. Or Nordstrom did a really fantastic talk on this in KubeCon in Austin in 2017. But the [00:28:57 unintelligible] split-brain problem where suddenly the consensus in between all of the Kubernetes master nodes began to fail for one reason or another. And because they were serving incorrect information to the controller managers, then the controller managers were acting on incorrect information and causing the schedulers to do really crazy things, like delete entire deployments, or move pods, or kill nodes, or lots of interesting things. I think we unnecessarily bit off a little bit too much when it came to trying to do tricky stuff when it came to infrastructure. We introduced a good bit of instability when it came to Amazon EC2 Spot that I think, all things considered, I would have revised the decision on that. Because we faced a lot of node instability, which translated into application instability, which would cause really, really interesting edge cases to show up basically only in production.Austin: One of the more notable ones—and I think this is the symptom of one of the larger challenges was during testing, one of our project managers that also helped out in the testing side—technical project managers—which we nicknamed the Edge Case Factory, because she was just, anointed, or somehow had this superpower to find the most interesting edge cases, and things that never went wrong for anyone else always went wrong for her, and it really helped us build more robust software for sure, but there's some people out there with mutant powers to catch bugs, and she was one of them. We had two clusters, we had lower environment clusters, and then we had production cluster. The production cluster hosted two namespaces: the staging namespace, which is supposed to be an exact copy of production; and then the production namespace, so that you can smoke-test legitimate production resources, and blah blah blah. So, one time, we started to get some calls that, all of a sudden, people were getting the staging environment underneath the production URL. Zach: Yeah.Austin: And we were like, “Uh… excuse me?” It comes down to—we eventually figured it out. It was something within the networking layer. But it was this thing, as we rolled along, the deeper understanding of, okay, how does this—to use a term that Zack Arnold coined—this benevolent botnet, how does this thing even work, at the most fundamental and most detailed levels? And so, as problems and issues would occur, pre-production or even in production, we had to really learn the depths of Kubernetes. And I think the reason we had to learn it at that stage was because of how new Kubernetes was, all things considered. But I think now with a lot more of the managed systems, I would say it's not necessary, but it's definitely helpful to really know how Kubernetes works down in the depths. So, that was one of the big challenges was, to put it succinctly, when an issue comes up, knowing really what's going on under the hood, really, really helped us as we discovered and learned things about Kubernetes.Zach: And what you're saying, Austin, was really illuminated by the fact that the telemetry that we had in production was not sufficient, in our minds, at least until very recently, to be able to adequately capture all the data necessary to accurately do root cause analyses on particular issues. In early days, there was far too much root cause analysis by, “It was probably this,” and then we moved on. Now having actually taken the time to instrument tracing, to instrument metrics, to instrument logs with correlation, we used, eventually, Datadog, but working our way through the various telemetry tools to achieve this, we really struggled being able to give accurate information to stakeholders about what was really going wrong in production. And I think Austin was probably the first person in the headquarters side of the company—I'm not entirely certain about some of our satellite dev offices—but to really champion a data-driven way of actually running software. Which, it seems trivial now because obviously that's how a lot of these tools work out of the box. But for us, it was really like, “Oh, I guess we really do need to think about the HTTP error rate.” [laughs].Emily: So, taking another step back here, do you think that Ygrene got everything that it expected, or that it wanted out of moving to Kubernetes?Austin: I think we're obviously playing up some of the challenges that we had because it was our day-to-day, but I do believe that trust in the dev team grew, we were able to deploy code during the day, which we could have done that in the beginning, even with vertically scaled infrastructure, we would have done it with downtime, but it really was that as we started to show that Kubernetes and these cloud-native tools like Fluentd, Prometheus, Istio, and other things like that when you set them up properly, they do take a lot of the risk out. It added trust in the development team. It gave more responsibility to the developers to manage their own code in production, which is the DevOps culture, the DevOps mindset. And I think in the end, we were able to ship code faster, we were able to deliver more value, we were able to go into new jurisdictions and markets quicker, to get more customers, and to ultimately increase the amount of revenue that Ygrene had. So, it built a bridge between the data science side of things, the development side of things, the project management side of things, and the compliance side of things. So, I definitely think they got a lot out of trusting us with this migration. I think that were we to continue, probably Zack and I even to this day, we would have been able to implement more, and more, and more. Obviously, I left the company, Zach left the company to pursue other opportunities, but I do believe we left them in a good spot to take this ecosystem that was put in place and run with it. To continue to innovate and do experiments to get more business.Zach: Emily, I'd characterize it with an anecdote. After our Chief Information Officer left the company, our Chief Operating Officer actually took over the management of the Technology Group, and aside from basically giving dev management carte blanche authority to do as they needed to, I think there was so much trust there that we didn't have at the beginning of our journey with technology and Ygrene. And it was characterized in, we had monthly calls with all of the regional account managers, which are basically our out-of-office sales staff. And generally, the project managers from our group would have to sit in those meetings and hear just about how terrible our technology was relative to the competition, either lacking in features, lacking in stability, lacking in design quality, lacking in user interface design, or way overdoing the amount of compliance we had to have. And towards the end of my tenure, those complaints dropped to zero, which I think was really a testament to the fact that we were running things stably, the amount of on-call pages went down tremendously, the amount of user-impacting production outages was dramatically reduced, and I think the overall quality of software increased with every release. And to be able to say that, as a finance company, we were able to deploy 10 times during the day if we needed to, and not because it was an emergency, but because it was genuinely a value-added feature for customers. I think that that really demonstrated that we reached a level of success adopting Kubernetes and cloud-native, that really helped our business win. And we positioned them, basically, now to make experiments that they thought would work from a business sense we implement the technology behind it, and then we find out whether or not we were right.Emily: Let's go ahead and wrap up. We're nearing the top of the hour, but just two questions for both of you. One is, where could listeners find you or connect with you? And the second one is, do you have a can't-live-without engineering tool?Austin: Yeah, so I'll go first. Listeners can find me on Twitter @_austbot, or on LinkedIn. Those are really the only tools I use. And I can't really live without Prometheus and Grafana. I really love being able to see everything that's happening in my applications. I love instrumentation. I'm very data-driven on what's happening inside. So, obviously Kubernetes is there, but it's almost become that Kubernetes is the Cloud. I don't even think about it anymore. It's these other tools that help us monitor and create active monitoring paradigms in our application so we can deploy fast, and know if we broke something. Zach: And if you want to stay in contact with me, I would recommend not using Twitter, I lost my password and I'm not entirely certain how to get it back. I don't have a blue checkmark, so I can't talk to Twitter about that. I probably am on LinkedIn… you know what, you can find me in my house. I'm currently working. The engineering tool that I really can't live without, I think my IDE. I use IntelliJ by JetBrains, and—Austin: Yeah, it's good stuff.Zach: —I think I wouldn't be able to program without it. I fear for my next coding interview because I'll be pretending that there's type ahead completion in a Google Doc, and it just won't work. So, yeah, I think that would be the tool I'd keep forever.Austin: And if any of Zach's managers are listening, he's not planning on doing any coding interviews anytime soon.Zach: [laughs]. Yes, obviously.Emily: Well, thank you so much. Zach: Emily Omier, thank you so much for your time.Austin: Right, thanks.Austin: And don't forget Zack is an author. He and his team worked very hard on that book.Emily: Zack, do you want to give a plug to your book?Zach: Oh, yeah. Some really intelligent people that, for some reason, dragged me along, worked on a book. Basically it started as an introduction to Kubernetes, and it turned into a Master's Course on Kubernetes. It's from Packt Publishing and yeah, you can find it there, amazon.com or steal it on the internet. If you're looking to get started with Kubernetes I cannot recommend the team that worked on this book enough. It was a real honor to be able to work with people I consider to be heavyweights in the industry. It was really fun.Emily: Thank you so much.Announcer: Thank you for listening to The Business of Cloud Native podcast. Keep up with the latest on the podcast at thebusinessofcloudnative.com and subscribe on iTunes, Spotify, Google Podcasts, or wherever fine podcasts are distributed. We'll see you next time.This has been HumblePod production. Stay humble.

The History of Computing

Visual Basic Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today we're going to cover an important but often under appreciated step on the path to ubiquitous computing: Visual Basic. Visual Basic is a programming language for Windows. It's in most every realistic top 10 of programming languages of all time. It's certainly split into various functional areas over the last decade or so, but it was how you did a lot of different tasks in Windows automation and programming for two of the most important decades through a foundational period of the PC movement. But where did it come from? Let's go back to 1975. This was a great year. The Vietnam War ended, Sony gave us Betamax, JVC gave us VHS. Francisco Franco died. I don't wish ill on many, but if I could go back in time and wish ill on him, I would. NASA launched a joint mission with the Soviet Union. The UK voted to stay the EU. Jimmy Hoffa disappears. And the Altair ships. Altair Basic is like that lego starter set you buy your kid when you think they're finally old enough to be able to not swallow the smallest pieces. From there, you buy them more and more, until you end up stepping on those smallest pieces and cursing. Much as I used to find myself frequently cursing at Visual Basic. And such is life. Or at least, such is giving life to your software ideas. No matter the language, there's often plenty of cursing. So let's call the Altair a proto-PC. It was underpowered, cheap, and with this Microsoft Basic programming language you could, OMG, feed it programs that would blink lights, or create early games. That was 1978. And based largely on the work of John Kemeny and Thomas Kurts, the authors of the original BASIC in 1964, at Dartmouth College. As the PC revolution came, BASIC was popular on the Apple II and original PCs with QuickBASIC coming in 1985, and an IDE, or Integrated Development Environment, for QuickBASIC shipped in 2.0. At the time Maestro was the biggest IDE in use, but they'd been around since Microsoft released the first in 1974. Next, you could compile these programs into DOS executables, or .exe files in 3.0 and 4.0 brought debugging in the IDE. Pretty sweet. You could run the interpreter without ever leaving the IDE! No offense to anyone but Apple was running around the world pitching vendors to build software for the Mac, but had created an almost contentious development environment. And it showed from the number of programs available for the Mac. Microsoft was obviously investing heavily in enabling developers to develop in a number of languages and it showed; Microsoft had 4 times the software titles. Many of which were in BASIC. But the last version of QuickBASIC as it was known by then came in 4.5, in 1988, the year the Red Army withdrew from Afghanistan - probably while watching Who Framed Roger Rabbit on pirated VHS tapes. But by the late 80s, use began to plummet. Much as my daughters joy of the legos began to plummet when she entered tweenhood. It had been a huge growth spurt for BASIC but the era of object oriented programming was emerging. But Microsoft was in an era of hyper growth. Windows 3.0 - and what's crazy is they were just entering the buying tornado. 1988, the same year as the final release of QuickBASIC, Alan Cooper created a visual programming language he'd been calling Ruby. Now, there would be another Ruby later. This language was visual and Apple had been early to the market on Visual programming, with the Mac - introduced in 1984. Microsoft had responded with Windows 1.0 in 1985. But the development environment just wasn't very… Visual. Most people at the time used Windows to open a Window of icky text. Microsoft leadership knew they needed something new; they just couldn't get it done. So they started looking for a more modern option. Cooper showed his Ruby environment to Bill Gates and Gates fell in love. Gates immediately bought the product and it was renamed to Visual Basic. Sometimes you build, sometimes you partner, and sometimes you buy. And so in 1991, Visual Basic was released at Comdex in Atlanta, Georgia and came around for DOS the next year. I can still remember writing a program for DOS. They faked a GUI using ASCII art. Gross. VB 2 came along in 1992, laying the foundations for class modules. VB 3 came in 93 and brought us the JET database engine. Not only could you substantiate an object but you had somewhere to keep it. VB 4 came in 95 because we got a 32-bit option. That adds a year or 6 for every vendor. The innovations that Visual Basic brought to Windows can still be seen today. VBX and DLL are two of the most substantial. A DLL is a “dynamic link library” file that holds code and procedures that Windows programs can then consume. DLL allow multiple programs to use that code, saving on memory and disk space. Shared libraries are the cornerstone of many an object-oriented language. VBX isn't necessarily used any more as they've been replaced with OCXs but they're similar and the VBX certainly spawned the innovation. These Visual Basic Extensions, or VBX for short, were C or C++ components that were assembled into an application. When you look at applications you can still see DLLs and OCXs. VB 4 was when we switched from VBX to OCX. VB 5 came in 97. This was probably the most prolific, both for software you wanted on your computer and malware. We got those crazy ActiveX controls in VB 5. VB 6 came along in 1998, extending the ability to create web apps. And we sat there for 10 years. Why? The languages really started to split with the explosion of web tools. VBScript was put into Active Server Pages . We got the .NET framework for compiled web pages. We got Visual Basic for Applications, allowing Office to run VB scripts using VBA 7. Over the years the code evolved into what are now known as Unified Windows Platform apps, written in C++ with WinRT or C++ with CX. Those shared libraries are now surfaced in common APIs and sandboxed given that security and privacy have become a much more substantial concern since the Total Wave of the Internet crashed into our lego sets, smashing them back to single blocks. Yah, those blocks hurt when you step on them. So you look for ways not to step on them. And controlling access to API endpoints with entitlements is a pretty good way to walk lightly. Bill Gates awarded Cooper the first “Windows Pioneer Award” for his work on Visual Basic. Cooper continued to consult with companies, with this crazy idea of putting users first. He was an earlier proponent of User Experience and putting users first when building interfaces. In fact, his first book was called “About Face: The Essentials of User Interface Design.” That was published in 1995. He still consults and trains on UX. Honestly, Alan Cooper only needs one line on his resume: “The Father of Visual Basic.” Today Eclipse and Visual Studio are the most used IDEs in the world. And there's a rich ecosystem of specialized IDEs. The IDE gives code completion, smart code completion, code search, cross platform compiling, debugging, multiple language support, syntax highlighting, version control, visual programming, and so much more. Much of this isn't available on every platform or for every IDE, but those are the main features I look for - like the first time I cracked open IntelliJ. The IDE is almost optional in functional programming - but In an era of increasingly complex object-oriented programming where classes are defined in hundreds or thousands of itty bitty files, a good, smart, feature-rich IDE is a must. And Visual Studio is one of the best you can use. Given that functional programming is dead, there's no basic remaining in any of the languages you build modern software in. The explosion of object-orientation created flaws in operating systems, but we've matured beyond that and now get to find all the new flaws. Fun right? But it's important to think, from Alan Kay's introduction of Smalltalk in 1972, new concepts in programming in programming had been emerging and evolving. The latest incarnation is the API-driven programming methodology. Gone are the days when we accessed memory directly. Gone are the days when the barrier of learning to program was understanding functional and top to bottom syntax. Gone are the days when those Legos were simple little sets. We've moved on to building Death Stars out of legos with more than 3500 pieces. Due to increasingly complex apps we've had to find new techniques to keep all those pieces together. And as we did we learned that we needed to be much more careful. We've learned to write code that is easily tested. And we've learned to write code that protects people. Visual Basic was yet another stop towards the evolution to modern design principals. We've covered others and we'll cover more in coming episodes. So until next time, think of the continuing evolution and what might be next. You don't have to be in front of it, but it does help to have a nice big think on how it can impact projects you're working on today. So thank you for tuning in to yet another episode of the History of Computing Podcast. We're so lucky to have you. Have a great day!

The History of Computing
Smalltalk and Object-Oriented Programming

The History of Computing

Play Episode Listen Later Sep 29, 2019 12:22


Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today we're going to cover the first real object-oriented programming language, Smalltalk. Many people outside of the IT industry would probably know the terms Java, Ruby, or Swift. But I don't think I've encountered anyone outside of IT that has heard of Smalltalk in a long time. And yet… Smalltalk influenced most languages in use today and even a lot of the base technologies people would readily identify with. As with PASCAL from Episode 3 of the podcast, Smalltalk was designed and created in part for educational use, but more so for constructionist learning for kids. Smalltalk was first designed at the Learning Research Group (LRG) of Xerox PARC by Alan Kay, Dan Ingalls, Adele Goldberg, Ted Kaehler, Scott Wallace, and others during the 1970s. Alan Kay had coined the term object-oriented programming was coined by Alan Kay in the late 60s. Kay took the lead on a project which developed an early mobile device called the Dynabook at Xerox PARC, as well as the Smalltalk object-oriented programming language. The first release was called Smalltalk-72 and was really the first real implementation of this weird new programming philosophy Kay had called object-oriented programming. Although… Smalltalk was inspired by Simula 67, from Norwegian developers Kirsten Nygaard and Ole-johan Dahl. Even before that Stewart Nelson and others from MIT had been using a somewhat object oriented model when working on Lisp and other programs. Kay had heard of Simula and how it handled passing messages and wrote the initial Smalltalk in a few mornings. He'd go on work with Dan Ingalls to help with implementation and Adele Goldberg to write documentation. This was Smalltalk 71. Object oriented program is a programming language model where programs are organized around data, also called objects. This is a contrast to programs being structured around functions and logic. Those objects could be data fields, attributes, behaviors, etc. For example, a product you're selling can have a sku, a price, dimensions, quantities, etc. This means you figure out what objects need to be manipulated and how those objects interact with one another. Objects are generalized as a class of objects. These classes define the kind of data and the logic used when manipulating data. Within those classes, there are methods, which define the logic and interfaces for object communication, known as messages. As programs grow and people collaborate on them together, an object-oriented approach allows projects to more easily be divided up into various team members to work on different parts. Parts of the code are more reusable. The way programs are played out is more efficient. And in turn, the code is more scalable. Object-oriented programming is based on a few basic principals. These days those are interpreted as encapsulation, abstraction, inheritance, and polymorphism. Although to Kay encapsulation and messaging are the most important aspects and all the classing and subclassing isn't nearly as necessary. Most modern languages that matter are based on these same philosophies, such as java, javascript, Python, C++, .Net, Ruby. Go, Swift, etc. Although Go is arguably not really object-oriented because there's no type hierarchy and some other differences, but when I look at the code it looks object-oriented! So there was this new programming paradigm emerging and Alan Kay really let it shine in Smalltalk. At the time, Xerox PARC was in the midst of revolutionizing technology. The MIT hacker ethic had seeped out to the west coast with Marvin Minsky's AI lab SAIL at Stanford and got all mixed into the fabric of chip makers in the area, such as Fairchild. That Stanford connection is important. The Augmentation Research Center is where Engelbart introduced the NLS computer and invented the Mouse there. And that work resulted in advances like hypertext links. In the 60s. Many of those Stanford Research Institute people left for Xerox PARC. Ivan Sutherland's work on Sketchpad was known to the group, as was the mouse from NLS, and because the computing community that was into research was still somewhat small, most were also aware of the graphic input language, or GRAIL, that had come out of Rand. Sketchpad's had handled each drawing elements as an object, making it a predecessor to object-oriented programming. GRAIL ran on the Rand Tablet and could recognize letters, boxes, and lines as objects. Smalltalk was meant to show a dynamic book. Kinda' like the epub format that iBooks uses today. The use of similar objects to those used in Sketchpad and GRAIL just made sense. One evolution led to another and another, from Lisp and the batch methods that came before it through to modern models. But the Smalltalk stop on that model railroad was important. Kay and the team gave us some critical ideas. Things like overlapping windows. These were made possibly by the inheritance model of executions, a standard class library, and a code browser and editor. This was one of the first development environments that looked like a modern version of something we might use today, like an IntelliJ or an Eclipse for Java developers. Smalltalk was the first implementation of the Model View Controller in 1979, a pattern that is now standard for designing graphical software interfaces. MVC divides program logic into the Model, the View, and the Controller in order to separate internal how data is represented from how it is presented as decouples the model from the view and the controller allow for much better reuse of libraries of code as well as much more collaborative development. Another important thing happened at Xerox in 1979, as they were preparing to give Smalltalk to the masses. There are a number of different interpretations to stories about Steve Jobs and Xerox PARC. But in 1979, Jobs was looking at how Apple would evolve. Andy Hertzfeld and the original Mac team were mostly there at Apple already but Jobs wanted fresh ideas and traded a million bucks in Apple stock options to Xerox for a tour of PARC. The Lisa team came with him and got to see the Alto. The Alto prototype was part of the inspiration for a GUI-based Lisa and Mac, which of course inspired Windows and many advances since. Smalltalk was finally released to other vendors and institutions in 1980, including DEC, HP, Apple, and Berkely. From there a lot of variants have shown up. Instantiations partnered with IBM and in 1984 had the first commercial version at Tektronix. A few companies tried to take SmallTalk to the masses but by the late 80s SQL connectivity was starting to add SQL support. The Smalltalk companies often had names with object or visual in the name. This is a great leading indicator of what Smalltalk is all about. It's visual and it's object oriented. Those companies slowly merged into one another and went out of business through the 90s. Instantiations was acquired by Digitalk. ParcPlace owed it's name to where the language was created. The biggest survivor was ObjectShare, who was traded on NASDAQ, peaking at $24 a share until 1999. In a LA Times article: “ObjectShare Inc. said its stock has been delisted from the Nasdaq national market for failing to meet listing requirements. In a press release Thursday, the company said it is appealing the decision.” And while the language is still maintained by companies like Instantiations, in the heyday, there was even a version from IBM called IBM VisualAge Smalltalk. And of course there were combo-language abominations, like a smalltalk java add on. Just trying to breathe some life in. This was the era where Filemaker, Foxpro, and Microsoft Access were giving developers the ability to quickly build graphical tools for managing data that were the next generation past what Smalltalk provided. And on the larger side products like JDS, Oracle, Peoplesoft, really jumped to prominence. And on the education side, the industry segmented into learning management systems and various application vendors. Until iOS and Google when apps for those platforms became all the rage. Smalltalk does live on in other forms though. As with many dying technologies, an open source version of Smalltalk came along in 1996. Squeak was written by Alan Kay, Dan Ingalls, Ted Kaehler, Scott Wallace, John Maloney, Andreas Raab, Mike Rueger and continues today. I've tinkerated with Squeak here and there and I have to say that my favorite part is just getting to see how people who actually truly care about teaching languages to kids. And how some have been doing that for 40 years. A great quote from Alan Kay, discussing a parallel between Vannevar Bush's “As We May Think” and the advances they made to build the Dynabook: If somebody just sat down and implemented what Bush had wanted in 1945, and didn't try and add any extra features, we would like it today. I think the same thing is true about what we wanted for the Dynabook. There's a direct path with some of the developers of Smalltalk to deploying MacBooks and Chromebooks in classrooms. And the influences these more mass marketed devices have will be felt for generations to come. Even as we devolve to new models from object-oriented programming, and new languages. The research that went into these early advances and the continued adoption and research have created a new world of teaching. At first we just wanted to teach logic and fundamental building blocks. Now kids are writing code. This might be writing java programs in robotics classes, html in Google Classrooms, or beginning iOS apps in Swift Playgrounds. So until the next episode, think about this: Vannevar Bush pushed for computers to help us think, and we have all of the worlds data at our fingertips. With all of the people coming out of school that know how to write code today, with the accelerometers, with the robotics skills, what is the next stage of synthesizing all human knowledge and truly making computers help with As we may think. So thank you so very much for tuning into another episode of the History of Computing Podcast. We're lucky to have you. Have a great day!

The REPL
22: Cursive IDE with Colin Fleming

The REPL

Play Episode Listen Later Apr 2, 2019 41:21


Colin Fleming talks about building Cursive IDE on the IntelliJ platform. Cursive IDE The IntelliJ Platform

Develpreneur: Become a Better Developer and Entrepreneur
Integrated Development Environment Tools (IDE) - Free and Low Cost

Develpreneur: Become a Better Developer and Entrepreneur

Play Episode Listen Later Nov 9, 2018 26:26


One of the most significant choices a developer makes is the integrated development environment (IDE) they use.  Nevertheless, there are new options that become available each year and niche solutions that may improve productivity.  It never hurts to review what is out there and avoid getting in a rut. Eclipse http://www.eclipse.org/ The odds are that this will look familiar, even if you have not used it.  Eclipse is the basis for a large number of IDE's due to its flexibility and expandability.  Few (if any) languages are not supported by at least one plugin on Eclipse.  This ability to expand it from the core along with a healthy set of standard tools like version control, syntax highlighting, project-based search, integrated debugging, and more combine to make a solid IDE.  It is free in many forms and more than worth the time to see if it may be the best solution for your needs. Visual Studio https://visualstudio.microsoft.com/downloads/ This should be your default IDE if you live on Windows and develop for those systems.  Although Visual Studio has grown to become a solid challenger to Eclipse in many areas, it still does not have the breadth of language support.  That being said, it is a better solution (IMHO) for any C# or other .NET related development.  It provides some tools and debugging features that make it perfect for a solo developer of teams of any size. Android Studio https://developer.android.com/studio/ This IDE is focused on, you guessed it, developing for Android devices.  It is Eclipse-based and the best alternative (IMHO) for building applications on those target platforms.  It does allow for plugins and can be used for more as well. Xcode https://developer.apple.com/xcode/ This application is effectively the default IDE for building iOS applications on mobile or Mac desktops.  It is not as extensible as the other IDEs on this list.  However, it is packed with features and makes developing native Apple applications easier than any other option out there.  XCode has at times been a little behind the others in modern features but has grown in the last few years to include everything you need for your mobile or desktop development in the Apple world. Cloud9 https://aws.amazon.com/cloud9/ Amazon bought this IDE/Service after realizing it was a perfect fit for their Cloud services.  Cloud9 is an IDE that includes connections to a virtual machine for your development and deployment.  That consists of a browser-based IDE so you can remotely do all of your code writing.  Better yet, it provides ways to quickly create a development environment for a substantial number of languages and environments. Aptana Studio http://www.aptana.com/ Although this is yet another Eclipse-based tool, it includes a cross-platform mobile development framework that is one of the best.  The primary coding is javascript-based and makes it easy to create applications targeted for Android, Apple, or other platforms.  It is also a robust IDE without additional extensions for web application development if you primarily use HTML, JavaScript, CSS, and maybe PHP. IntelliJ Idea (JetBrains) https://www.jetbrains.com/ The JetBrains family of IDEs cover a surprising number of environments.  Their tools are all high quality, easy to love, and one of (if not THE) the best solutions for their niche products.  For example, many Java developers prefer IntelliJ for those coding needs over all other Java IDEs.  This is the only solution on this list that often requires a paid license.  However, I think it is money well spent if you try one of their applications and like it. NetBeans https://netbeans.org/ This tool has been moved to an Apache project and is a little behind the other options we have covered.  However, they have a new version that is more what one expects in a modern tool.  It is not quite as popular as some of the other options.  Thus, the user community has not contributed to the level you see in an Eclipse or Visual Studio. Codenvy https://codenvy.com/ This solution is similar to Cloud9 except it uses Docker containers for your development environment.  That means you can quickly convert development to production deployments as is typical for a Docker solution.  It also uses Eclipse CHE which is a highly impressive browser-based IDE.  If you are looking for a remote development solution, then you must check this one out.

The Frontside Podcast
097: Language Server Protocol with Erich Gamma and Dirk Baeumer

The Frontside Podcast

Play Episode Listen Later Mar 1, 2018 41:42


Erich Gamma: @ErichGamme Dirk Baeumer: @dbktw Show Notes 01:11 - The Design Patterns Book 02:45 - The Eclipse Project 09:24 - Language Server Protocol: Overview 15:16 - What can you do with a server that implements the LSP? Incremental usage? 20:12 - Keeping the Tools in Sync and Refactoring Support 24:33 - Keeping it Performant 29:41 - What kind of proliferation of codesmart tools are there that implement the LSP? 34:51 - What are the challenges encountered trying to build abstractions that work for 40 different languages? Resources Visual Studio Code Transcript CHARLES: Hello everybody and welcome to The Frontside Podcast Episode 97. My name is Charles Lowell. I'm a developer here at The Frontside and your podcast host-in-training. And with me today, we have two very special guests. They have been working on technologies that have run very parallel to my entire career as a software developer. And we're going to talk about that. So with us today are Erich Gamma and Dirk Baeumer who are developers on the team developing VS Code, which if you're in the frontend space is taking that area of development by storm. It's just amazing, some of the things they can do. Lots of people are using it every day. Lots of people are trying it. And so, we're going to talk about the technologies that underlie that and the story of how it came to be. So, welcome Erich and welcome Dirk. ERICH: Hello from Zurich. CHARLES: Alright. Zurich to Albuquerque. Here we go. As a first start, I would have to say my first contact with this story, I at least have to mention it because – and this is for Erich – you wrote a book that was very, very instrumental in my formation as a young developer. I think I was about 22 years old when I read ‘Design Patterns'. And I don't know. I still carry a lot of those things with me to this day, even though a lot of things have changed about the way that we do development. I still carry a lot of those lessons, I think especially things like the state pattern and the strategy pattern, and stuff like that. I want to move onto other things, but I was hoping that we could talk just a little bit about, what are the things that you find still kind of relevant today? ERICH: Well, now as you said, some of the things are kind of timeless and we're lucky to have found these things. And I still love all the patterns. But I must say, things have changed, right? So, at that time, we thought objects are very cool. And as we have evolved, all of a sudden we think, “Oh, functions are actually very cool, too,” right? Closures and so on. So, I think we got more broader and of course if you use functional programming, you have many more patterns available as you program. So, I feel some of the object thinking still applies. But that's not the only thing that counts anymore. Today it's functions, stateless, immutability, and all those things within functional programming which is [straight] and which [inaudible] in our team. CHARLES: Yeah, yeah. I would love to see an update to how do these concepts transfer into functional programming. But anyway, just wanted to say thank you for that. And it was about the same time that, a few years after, I don't know the exact same timing, I want to wind back. Because we're going to talk about VS Code but before VS Code, there was a project that both of you all worked on called Eclipse, which I also used. Because at the very beginning of my career, I did a lot of Java development. And it really opened my eyes into a level of what tooling could do for you that I didn't see before. And I was wondering how did you arrive to there? Because before that, I was using Emacs and Vim and Joe's Editor and things that were editing the text files. And how did you kind of arrive at that problem? Because I feel like it's very similar to the one that VS Code solves, but this was what, 15 years ago? ERICH: I think it's older, right? CHARLES: Really? DIRK: It's 17, 18, yeah. Yeah, yeah. It was end of the millennium, right? So to be honest, Eclipse wasn't the first development tool we worked on. Then, we worked on the company ObjectTechnologyNational. They worked on Smalltalk tools. And of course, Smalltalk had a great IDE experience, right? So back then, Java became popular. One idea was, how can you preserve the great Smalltalk coding experience? [Inaudible] CHARLES: Ah, okay. DIRK: [Inaudible] and find all references, method-level history versioning, and so on. So, that was the input that got Eclipse kicked off. And one idea we had at that time, Eclipse is our opportunity to make everything right. And as we have seen now, when we did VS Code, we could even improve what we have [inaudible] at that time. So an example, in Eclipse we thought plugins are very cool and we have kind of a microkernel. And you load all of the plugins in the same process, they have a rich API, and so on, which is great. But we found over time, if you have lots of plugins and they do bad things and they run in the same process, it's not the best thing. CHARLES: Ah. Right. And so… DIRK: [Inaudible] have a different architecture. We believe now in isolation, separation. So, we now run extensions in separate process that communicates through RPC with the IDE so that we are in full control. And we can always say you can save the tool, save the document, no matter how bad a plugin behaves and decides to do an endless loop. Because in a separate process, the hope is still one CPU is open, available for you, that it can be safe from the other process. So, that's some example, right? Eclipse has done many things right, but the multi-process architecture I think is a major switch. And the other major switch is at Eclipse time you think Java is cool. Everything has to be in Java. CHARLES: Right. DIRK: No longer think like that, and that brings up this other topic of then the language servers that we can also talk at some point. CHARLES: Right, because that's the thing, is VS Code – now I've primarily been exposed to it through JavaScript and TypeScript development. But it really, it's designed to support all kinds of different languages. So, the C++ support is really good. The C support is really good. And I assume the Java support is really good. Is it safe to say? Because I only ever used Eclipse in the context of Java. Did Eclipse gain kind of a wider acceptance further beyond Java and C++? ERICH: Yeah. I think it's fair to say Eclipse has a rich ecosystem. Yeah. I think with all the tools. And it will be interesting to see that you can close the loop, because for Visual Studio Code, when you do Java development, you actually run Eclipse behind the scenes. That's how we kind of smiled at each other, Dirk and I, when he said, “Now we close the loop.” We started with all JavaScript and then we integrate Eclipse using this language server protocol and that's how we close the loop. CHARLES: Ah. DIRK: So maybe one thing I would like to add is that when you look at Eclipse and the tool and framework landscape that existed in the Java time, at that point in time when we started with Eclipse, it was very well-defined. There was Java. It was a well-defined set of libraries you were using and frameworks you were using. And if you look at the programming and tool landscape you have today, in months you see a new framework for JavaScript popping up or there's something else or another cool X, Y, Z thing. So, the tooling you build today has to be a lot more open to these new inventions, especially since they occur in a higher frequency than they did in the past. And that had influence on how we architected Visual Studio Code to give people a lower barrier of integrating their stuff into Visual Studio Code than you typically have in Eclipse. In Eclipse you needed to program in Java. With the LSP you can program in any programming language. In Eclipse, if you really want to try to do something nice with code complete and stuff like that, you had to hook up a lot of stuff. So, we raised that to another abstraction layer where we more talked about what people provide on data and we do a lot more for them in the user interface than compared for example to Eclipse, which lowers the barrier for people to integrate languages in Visual Studio Code than the barrier you had to integrate something in Eclipse. And so, [inaudible] for that one was that there are a lot more tools and programming languages out there that have importance than 10 years ago. ERICH: I'll give you an example. So, when we did C support in Eclipse, and it was also the team that seeded it. Of course, it took over and has now a great community behind it in Eclipse. But you wrote the C tooling in Java. And of course, that means you built the parser in Java and then of course, there are great C parsers around, C frameworks. But also it means you cannot dogfood what you write. You write Java but you don't program in C++. I think which is what makes VS Code so appealing is we are a very aggressive dogfooder. We want to use ourself and of course [inaudible]. That's why [inaudible] is very good. The C++ guide, they programmed C++ and they write in C++ so that's how they make it very good, that you have this feedback loop. CHARLES: And so, what's an example? We've talked about this low barrier of entry. So, if I were wanting to say, I do mostly programming in JavaScript. Let's say I wanted to add, I know all of this already exists, this infrastructure already exists, but let's say I wanted to add smart editing to JavaScript source files. What would that process look like for me as a JavaScript developer? DIRK: To be fair, whenever it comes to language services, it's never easy. But [inaudible] lower the bar. A language always means you have to do parsing, you have to do [9:59], type bindings. You have to make it fast, scale high up, and so on. So, this is never easy. But I think if you think about the different steps you can do, the first thing, let's not take JavaScript. Let's take a new language. CHARLES: Okay. DIRK: Your new cool language. CHARLES: Or maybe we take a Lisp or something where writing the parser is very easy. DIRK: Even that, you have to resolve symbols and so on. CHARLES: Okay, okay. DIRK: Even the parsing [inaudible]. But yeah, let's take a fancy language like Lisp or whatever. So, the first level I think is you want to get some nice coloring. That's the first level. CHARLES: Yes. DIRK: So, you get some coloring. And what we do there actually in VS Code is we tap into the community from TextMate. So, we use TextMate grammars to support colors in languages, which gives us access to a long [10:51 tail] of languages. So, to change the [10:54], if your language is not too exotic, you will find the grammar that describes how to color, what the tokens are in your language, and then you can get your language colored. That's step one. The next step is of course you want to get smarts like IntelliSense and so on. Ideally of course you can say, “Well, maybe there is something already around that has abstracted the parser and you can use this library.” CHARLES: Right. Because there actually are a bunch of JavaScript parsers written in JavaScript. I know I keep coming back to JavaScript, but let's assume with this language that we've got. I may not have to write a parser but I've got one. ERICH: You've got one, exactly. You've got one, right, and then technically it's not in the same language as the tool. So, that's why I don't want to go too much into JavaScript because for instance VS Code is written in TypeScript, which [transpiles] to JavaScript, which moves a little bit, makes it not as convincing as it could be. So, let's say it's a different language. Your fancy language is written, has a parser in your fancy language, which is different than the language of VS Code which is JavaScript. CHARLES: Right. ERICH: So, then the next level is to say, “Okay, well you have your code you encapsulate it in a server that you can talk to through some protocol.” And now the challenge is what protocol do you talk to? Typically in the language, the library you get, it will use some ASTs, symbols, type bindings. And what Dirk mentioned with lowering the bar is that assuming you have those ASTs, the way you talk then with our tool is through a protocol that is not at the level of the ASTs but at a higher level. CHARLES: A higher level than the ASTs. ERICH: No, yeah. A higher or simpler level. Let's give you an example. You want to find the definition of a symbol in your fancy language. The way the protocol works is you only tell it, in this document with the URI, at this position, I want to find the definition of the symbol that is this position. The request goes over the wire to the other process. Document URI, and the textual position. And what comes back of course now in the server you used AST, you find the symbol, you find the binding of the symbol which means it gives a definition for it. Of course you use your AST to analyze it. But then what gets back to send over the wire is yet another document, the reference, and the position. CHARLES: I see. So, you're really like pinpointing a point in just the raw bytes of the document. And you're saying, “Look, what is here?” And you just want to delegate that completely and totally to this other process. So, the IDE itself doesn't know anything about the document? ERICH: It knows about the document, right? CHARLES: I mean, it knows about the textual positions of the documents and the stream of characters, but not the meaning. DIRK: True. The smarts are in the server. And you talk to the smarts at the level of documents and positions. And the [good thing is] it's a protocol, is at this level it makes it easy to integrate into one editor, which is VS Code, but also into other editors. So, that's why we came up with the idea to have a common language server protocol which allows to provide a language not only for one editor but also for many editors. That was a challenge we had in VS Code. Remember when we started, we were kind of late to the game. We said, “VS Code should be in between an IDE and an editor.” But what we liked from an IDE is of course code understanding, IntelliSense. Go to definition, find all references. But how do you get that for a long tail of languages? We cannot do it all ourselves. So, we need to get a community to tap into. [Similar to] like TextMate grammars are kind of a lingua franca for coloring. So, we are looking for the lingua franca for language smarts. And that's what the language server protocol is, which means you can integrate it in different IDEs and once you've written a language server you can reuse it. CHARLES: I guess I've got two questions. What are the kind of things that I can do with a server that implements the language server protocol? And then I guess the – so we've talked about being able to find a reference. And is there a way you can incrementally implement certain parts of the protocol as you go along? ERICH: Yeah. DIRK: Yeah, basically you can. The protocol on the server and the client side talks about capabilities. The server can for example say, “I am only supporting code complete and go to definition and find all references.” And for example, something like, “Implementation hierarchy or document symbols or outline view is not supported.” And then the client adapts dynamically to the capabilities of the server. CHARLES: Okay. DIRK: That's one thing. And the set of capabilities is not fixed. So, we add them. We just added four or five new capabilities to the protocol last week. So of course, we listen to requests that come from other IDEs, what they would like to see in the protocols that we see in Visual Studio Code, we would like to extend. And that's the way we move the protocol forward. CHARLES: Okay. DIRK: It's capability-based and not so to speak version-based. So, [inaudible] versioning at the end of today. CHARLES: Right. You can incrementally say, “I'm going to have,” if I'm starting to write a server, I can say, “Well, I'm going to only start with just find definition at point.” And that's the only thing that my server can do. ERICH: Well, there are some basics, right? Keep in mind you have two processes. And once the user opens an editor, the truth is in the buffering memory on the one process. The basic thing you have to in a language, so you have to support the synchronization of [inaudible]. Once you open a file in the editor, then the truth in the buffer, and then you have to sync it over. CHARLES: Right. ERICH: [Inaudible] close the truth on the file system and you also have to tell this to the server. Because the server has to know where the truth is. DIRK: That's correct. These two open/close handshake methods and change methods, this is the minimum you have to implement. But for example, for Node itself, we provide libraries that help you with this. And the protocol is not very complicated. It's a buffer. Then it's change events. Either it's an insert, a delete, or an edit. CHARLES: So, let me try and get this straight in my head. I think I understand. The problem is that the VS Code, or your code editor, it's actually making changes to the buffer, and it needs to communicate those changes to the server. Or does the server actually make the changes itself? DIRK: The editor does make the changes. So, the protocol is spec'd in a way that as soon as an editor opens a document, the ownership travels from the server for the content to the tool. And the server is basically not allowed to read the state of that content from disk anymore, or get it [inaudible]. CHARLES: Aha. DIRK: Therefore, the client guarantees that everything the user does in that document is notified to the server, so that the server can move the document forward. CHARLES: Okay. DIRK: [Inaudible] we see the close event, that basically with the close, transfers the ownership of the document back to the language server. And it is allowed to re-read that content from disk if it wants. CHARLES: Okay. ERICH: Here, the protocol is really data-driven. Dirk mentioned that earlier, right? So, basically what flows between the server and the tool is data. So, what do we mean by data? You ask for IntelliSense or completions at the line. What follows is just the data. A list of completions that flows then from the server to the client. And then the client decides what to do with this data and decides to modify the document by inserting the completion proposed that the user selected. CHARLES: Right. And then if it decides to make any updates, it needs to send those to the server. DIRK: Exactly. CHARLES: So, if I actually insert the method that I want to call there, I'm going to be inserting nine characters, and I need to tell the server, “Hey, I just inserted nine characters to this document,” something like that? ERICH: Exactly. CHARLES: Okay. And so now how, because I remember now one of the coolest things about the class of tools of Eclipse that I hadn't really seen in the more lightweight editors – I went from Java, like so many of my generation went from Java to Ruby and then to JavaScript – once I moved out of the Java world, one of the things that I had come to expect from my tools was that they would help me make modifications to my codebase at a very high level. So, I would be able, if I had some class that was imported into say five modules in my codebase, I could say, “I want to change the name of this class,” and then it would find the references and then make the updates to those things. So, how do you manage that? So, if I have a class called ‘Person' that I want to change to ‘User', if I change it to ‘User' then it's going to break in those five different places unless I rename it to ‘User'. That's something that was very doable in the Java world. How do you keep the code editor, the tool I guess is what you were calling it, in sync? Like the server is going to make that change or does it just come back with data and says, “Here's the references if you wanted it to change”? ERICH: Yeah, yeah, yeah. So, two things you mentioned, right? Java and JavaScript or course. Java is a typed language which means you have better understanding of the code and what the reference is. In JavaScript which is typeless, you cannot know it as much, so that's actually why we developed also, we're using TypeScript. VS Code is actually [written] in TypeScript which allows you to do these kinds of things like refactorings. But if you look at the language server protocol, it has support for rename. And the way how rename is done is again it just documents positions. You say, “At this position, I want to rename the symbol with this other name.” And then you tell this to server and the server will handle the rename by giving you back a list of positions that need to be updated. CHARLES: Ah, okay. So now, I'm starting to understand what you're talking about when you say data-driven. It's literally just telling the tool – the tool proposes, “I want to do this rename.” And then the server provides all of the information that is required to actually do the rename. But it doesn't actually do the rename itself. It just provides the data. DIRK: A couple of reasons for it. The data effects, at the end of day, it's again, edit, and it's more or less the same edits the client sends to the server when the user types in the document. This is the protocol. On top of it, something that you can create a file or rename a file, this comes as a result back to the client. And then there, since it is a client/server architecture, the whole process is async. So, we have to give the client the change to revalidate if that edit structure that comes is still valid. If it is still valid, the client basically applies it. And by applying these edits to these documents, they will automatically flow back to the server until the client either closes these documents again or saves them. So, the reason being is that some of the tools may even show you a preview. You can only select some of them and apply them. So, there's always an interaction in these refactorings and to make that possible, as Erich mentioned, the whole protocol is data-driven. We don't go the server and say, “Okay, do that rename,” and he writes that back to disk. It computes a set of transformations to bring the current state of the workspace into that new state after the refactoring. CHARLES: I see. ERICH: [Inaudible] be fully transparent. Actually, no. Refactorings, Dirk [inaudible] refactorings for Eclipse so we can go deep on that. What we don't support right now in the protocol, we support edits in the buffer but when you want to rename a class in Java, you also want to rename the file. And that's something we're currently working on to support in the specification of the language server protocol. So, we don't have that yet. But we support code actions, quick fixes, that you like from Eclipse probably. And you can use then to do refactorings like extract method, extract constant or extract local variable, things like that you can do at the level of the language server protocol. CHARLES: Wow. That is… ERICH: I think [inaudible] right now. Let me go back to the Java thing. The Java language server actually has the support for refactorings. And there is now a language server protocol implementation of this Java provided by Eclipse. So, all the support you had in Eclipse for Java or most of the support is now also enabled in VS Code. CHARLES: Right. ERICH: [We don't] really have to reimplement it because you can reuse. And that's the big thought we have. You want to reuse language smarts as much as possible because they are so hard to implement. CHARLES: Right. And so, you can do that because you're providing this abstraction between the tool and the actual smarts, which is really, really cool. I do have to… how do you make it fast? Because you're describing this tool, this client and this server, and they're syncing. They're keeping this distributed state in sync and you know, how do you keep that from coming too chatty? Or is it something that you have to consider? Or is it just, maybe I'm overthinking it because I haven't dealt with it? DIRK: So, at the end of the day, it is chatty. But it is made performant in the way that it's very incremental and partly event-based. So for example, if you type in the document in the editor, you can either decide to [inaudible] sync the full content of the document, which we do not recommend but for some basic exploration, that is something people do. And we have [inaudible] the delta-encoded mechanism. So, we sync the buffer once and then after that you only get the edits the user does. These are chatty of course since the user types them, we debounce them and collapse them on the client side and only send them if we know that the server really needs to know them because we have another request we are asking the server or after a certain timeout. So, there are smarts behind it. But the protocol is kept performant by making it an incremental protocol at the end of the day, and not sending too much data back and forth. ERICH: Right. We don't serialize ASTs. We serialize positions, a list of items for completions. And actually, the transport is just JSON RPC. CHARLES: Okay. ERICH: And actually, someone, there is different usage now for language server protocol. And there is one host, Eclipse J, which brings it again back to Eclipse. They actually run language servers remote. CHARLES: Interesting. ERICH: And if you use it, you can run it on the browser, you get IntelliSense, and of course I guess it depends on how far away you are from the server. But it seems to work, according to feedback we've heard. CHARLES: Really? ERICH: The feedback we heard from them [is pleasant]. So, they use many of the language servers. CHARLES: So, is this a product that they have where the language server is running in the cloud and you send – your entire codebase essentially goes over to the language server and you can export the smarts to the cloud? ERICH: It's one step at a time. So, Eclipse J is kind of, they have what they call cloud workspace, which means the workspace is in the cloud. And [inaudible] code smarts of the workspace in the cloud, they can run the language servers in the cloud. It's a [inaudible]. One user has one workspace, has one language server. CHARLES: That sounds amazing. And if they can make it performant. ERICH: We have done cloud IDEs, right? If you look at the history from Visual Studio Code, you also had our stuff running in the cloud at some point. That's how we started. Before we pivoted to VS Code, we built – our exploration was, that's why the project is six years old. The first two years, we explored how far you can get coding done in the browser. CHARLES: Right. ERICH: And we had some [inaudible] there. CHARLES: So, I've played around with a lot of cloud IDEs and I've found them to be neat, because every few years it comes along. But yeah, it does seem that there are certain challenges that it's nice to have a client running and just be able to have the files locally. And is that a performance thing or if VS Code is written in TypeScript, theoretically it could run in a browser, right? ERICH: Of course. The [inaudible] there still runs in the browser. Then it's used by many tools that run in the browser. Like actually, if you want to edit your source code in the browser, there it's using the same editor that's running VS Code. So, that's how we started. Cloud IDEs, yeah we were at this point. We had our cloud IDE. We could edit websites in the browser, source control them, have a command line, deploy them. What we found is it's great for some scenarios like code reviews or doing small tweaks to files. But when it comes to really development, you use so many other tools. And you want to just have them. And [inaudible] a long tool chain problem. So, as a developer, you just want to use other tools as well. And that's why you can't have them all in the cloud. CHARLES: Right. ERICH: And [inaudible] we said at some point, it was a great lesson we had that you can program in the browser. But now we want to go to have a really [seven by 24] coding, you want to have a desktop experience. So, what we then did, we moved over the code we had run in the browser using a shell, the Electron shell, and can run it on the desktop. CHARLES: But there's theoretically, you could be running your language server for example in the cloud, but everything else on the desktop. ERICH: Yeah. Some people do that. DIRK: Right. CHARLES: Okay. Wow. It's crazy. It's heady stuff. We've talked about the barrier to implement the code smarts is much lower than it has been in the past. What kind of proliferation of code smart tools are there now that implement the language server protocol? Like how many different languages would you say have airtight…? DIRK: So now, [inaudible] time where we don't count anymore. You tell us a language and I can look it up, whether it's supported. Tell me a language and I can tell you whether – no, we have a website. CHARLES: Okay. DIRK: And when I look at it, we have about 40 languages. CHARLES: Wow. That's probably about, pretty much every mainstream language. DIRK: Yeah. I cannot find what isn't there. CHARLES: Yeah. It almost kind of begs the question, is this going to be the new bar for a language? Because I remember when I was starting out, really you just needed to have some interpreter or some compiler to have “a language”. And nowadays, it's not just the language. You need to have a command line tool for managing your dependencies. And you need to have a package system with a public repository where people can publish reusable units of code. And what's become expected out of a language to succeed has upped. Is having a language server implementation going to be part of the bar, the new bar, for “Hey, I'm thinking about creating a language”? I haven't really arrived until I have a package manager, I have a command line for resolving dependencies, I have documentation, and I have a language server. DIRK: I personally think that is our dream at the end of the day, to get there. We know about languages that do so. So, a lot of these language servers come for example from the people that developed the language. For example, the WASP guys, they do the compiler and they actively work on their language server as well. So, at the end of the day, the advantage of that approach since the WASP language server is written in WASP and runs in WASP, they can reuse so much code that they already have written in WASP. That's easy for them to package that up in the server and basically the people that maintain the compiler, at least the same team, maintains the language server at the end of the day. ERICH: And that's why we call [those] a win-win for the language provider. Because if you implement the language server using the language server protocol, then it can be integrated easily by the tool provider. And it's a win for the tool provider since there is a common protocol across all these languages you have to support. You can write an implementation once and again benefit and support many different languages, which makes the matrix problem one language support for each tool into more a vector, right? It reduces the matrix into a vector. You only write language servers that get integrated into different tools. CHARLES: Right. DIRK: And [inaudible] especially I think appealing for new languages that come out, because it lowers the bar for them to get into existing tools. Because if they write a language server speaking the language server protocol integrating that at the end of the day in Visual Studio Code is basically packaging up an extension for Visual Studio Code and writing 20 lines of code. CHARLES: Yeah. DIRK: And same [inaudible] for other IDEs that exist where people implemented the language protocol client side for the tool, for example. For vim or for Atom. CHARLES: Yeah. DIRK: So, new languages I think definitely, we see that trend go onto the language server protocol because that gives them an entry point into a large tool community. CHARLES: Yeah. I'm really excited about it. I'm actually an Emacs user. And that's actually how I found out about LSP, was in my Emacs newsfeed I saw that someone was starting on LSP support, and got digging into it. And I think that one of the problems that has plagued not only Emacs but all these editors is what you're describing where for example the JavaScript support was really great – is really great – in Emacs. There's refactorings. There's IntelliSense, code completion, all that stuff. But that's because someone wrote an entire JavaScript parser and code smart system in Elisp, which is just an absurd hurdle to jump over, to expected. And so, what you expect out of your editing experience, like when I went to try – if I were to go to try Python, well it's not nearly as good as what I'm expecting. And so yeah, I think it's exciting to hear what you're describing where with having some shared set of abstractions, you can offload all of that code smart onto the community that's building these new tools so that they're really easy to integrate into your environment. I think it's really exciting. Although it does make me ask – and I think we've got time for one more question – is we've been talking about all these different languages. Java, C++, JavaScript, TypeScript, Ruby, Python, et cetera, all these, the 40 languages that you talked about that have this implementation. What are the challenges that you've encountered trying to build abstractions that work for 40 different languages? All with their different syntax, all with their different conventions. It sounds like when aside from the fact that you've actually done it, I would say it's impossible. So, I'm curious. What were the unique challenges to solve there? DIRK: I think we already touched that at the beginning, the appealing stuff of the LSP is that it's not talking about the programming language itself. It's talking about things I can do with source code. For example, requesting code complete, go to definition, find all references. And the data that flows between the client and the server is not in terms of the programming language itself. It's about editor abstractions. We talk about documents and positions. We talk about edits that are applied to documents. We talk about snippets and stuff like that. And these abstractions, since they are programing-language-neutral, are a lot easier to implement for different editors. And the [inaudible] where the [inaudible] would speak AST nodes and symbols and functions and classes and methods, that at the end of the day, would not work. Because if I ask go to definition, the result is not a function or a variable definition. It's simply a position in the document with a hint which range to select. CHARLES: Okay. Yeah. ERICH: [Inaudible] places. In only a few places, we have to really abstract across languages. Like for instance, completions. When you do completions, you don't know, is it a variable? Is it a function or a method? That's where we have to abstract. But that's one of the few places. But again, it's an enumeration. DIRK: Yeah. And that's only to present an [icon]. ERICH: Yes. DIRK: It's only to give you a nice icon in front, because when you insert it, what comes back for completion item is basically a textual edit or a bunch of textual edits that when you select that completion item, we take these edits and apply them to the document buffer. And whether you edit a functional programming language or some other stuff, Prolog or whatsoever, it does not matter at the end of the day. CHARLES: Yeah. That simplicity, and treating it at that simple of a level is what unlocks all those superpowers. ERICH: It unlocks lowering the bar. But of course, if you look at some [of the demands], refactorings, whatever, they cannot easily be funneled. Not all of them can funnel to this low-level abstraction. Then of course, the criticism of the LSP protocol is that if you have already a very rich language service, you might not get it all through the LSP. DIRK: That's true. ERICH: And the [inaudible], that criticism we see of the LSP. But it's a tradeoff, like so many things in software. DIRK: Yeah. But what we learned there looking at different types of refactorings, it's more the set of input parameters that vary much between languages. The result of a refactoring can for every programming language that is at least document-based, [inaudible] in that lingo the LSP speaks. Because at the end of the day, it's textual edits to a document, right? ERICH: So, many people like LSP but there are people that don't like it. And people that have rich language services like IntelliJ, [Cool Tool], and [inaudible], even with LSP we would only get 20% of [our cool] features. Which is a little bit downgraded and not really true. But you see, it's a tradeoff. CHARLES: Right. ERICH: And if you want to [inaudible] language available broadly, I highly recommend it packaged as a language server. Your chances that it gets used, supported by different tools, is much higher than anything else. CHARLES: Right, right. So, it's kind of like, what's the UNIX thing? The universal text interface and how it seems counterintuitive but it actually just means you can literally compose anything. Because so few assumptions are made. ERICH: I would just recommend, [inaudible], go to the website that we have about the language server protocol. I'm pretty sure it will be in the introduction or whatever. It's microsoft.github.io/language-server-protocol and then you see the implementations, all the implementation of languages, who integrates language servers, and also what kind of libraries are available, if you want to implement your language server. DIRK: And a full specification. ERICH: And the specification is there as well. Yeah. CHARLES: Yeah. If you want to go ahead and do it yourself. Well, thank you so much, Erich. Thank you so much, Dirk, for coming on the show to talk about the language server protocol. It's very exciting to me and I think it's exciting for development in general because I just think by having – even if it's 20, 30, 50% code smarts for ever single language, just the billions and billions of hours that you are going to save developers over the next, over the coming years, it's a great feeling to think about. So, thank you for all your work and thank you for coming on the show. ERICH: You're welcome. DIRK: Yeah. It was fun talking to you. ERICH: Yeah. [Inaudible] CHARLES: Yeah. If people want to continue the conversation, is there a good way that they can get in touch with you? DIRK: Usually GitHub Issues. So, where the language protocol is, it's a project on GitHub. Simply find issues. We accept pull requests. I think that's the way we communicate. CHARLES: Awesome. Again, if you want to get in touch with us, you can get in touch with us at contact@frontside.io or you can reach out to us on Twitter. We're @TheFrontside. So, thank you everybody for listening. And we will see you next time.

BSD Now
170: Sandboxing Cohabitation

BSD Now

Play Episode Listen Later Nov 30, 2016 76:24


This week on the show, we've got some new info on the talks from EuroBSDCon, a look at sharing a single ZFS pool between Linux and BSD, Sandboxing and much more! Stay tuned for your place to B...SD! This episode was brought to you by Headlines EuroBSDcon 2016 Presentation Slides (https://2016.eurobsdcon.org/PresentationSlides/) Due to circumstances beyond the control of the organizers of EuroBSDCon, there were not recordings of the talks given at the event. However, they have collected the slide decks from each of the speakers and assembled them on this page for you Also, we have some stuff from MeetBSD already: Youtube Playlist (https://www.youtube.com/playlist?list=PLb87fdKUIo8TAMC2HJLZ7H54edD2BeGWv) Not all of the sessions are posted yet, but the rest should appear shortly MeetBSD 2016 Trip Report: Domagoj Stolfa (https://www.freebsdfoundation.org/blog/meetbsd-2016-trip-report-domagoj-stolfa/) *** Cohabiting FreeBSD and Gentoo Linux on a Common ZFS Volume (https://ericmccorkleblog.wordpress.com/2016/11/15/cohabiting-freebsd-and-gentoo-linux-on-a-common-zfs-volume/) Eric McCorkle, who has contributed ZFS support to the FreeBSD EFI boot-loader code has posted an in-depth look at how he's setup dual-boot with FreeBSD and Gentoo on the same ZFS volume. He starts by giving us some background on how the layout is done. First up, GRUB is used as the boot-loader, allowing boot of both Linux and BSD The next non-typical thing was using /etc/fstab to manage mount-points, instead of the typical ‘zfs mount' usage, (apart from /home datasets) data/home is mounted to /home, with all of its child datasets using the ZFS mountpoint system data/freebsd and its child datasets house the FreeBSD system, and all have their mountpoints set to legacy data/gentoo and its child datasets house the Gentoo system, and have their mountpoints set to legacy as well So, how did he set this up? He helpfully provides an overview of the steps: Use the FreeBSD installer to create the GPT and ZFS pool Install and configure FreeBSD, with the native FreeBSD boot loader Boot into FreeBSD, create the Gentoo Linux datasets, install GRUB Boot into the Gentoo Linux installer, install Gentoo Boot into Gentoo, finish any configuration tasks The rest of the article walks us through the individual commands that make up each of those steps, as well as how to craft a GRUB config file capable of booting both systems. Personally, since we are using EFI, I would have installed rEFInd, and chain-loaded each systems EFI boot code from there, allowing the use of the BSD loader, but to each their own! HardenedBSD introduces Safestack into base (https://hardenedbsd.org/article/shawn-webb/2016-11-27/introducing-safestack) HardenedBSD has integrated SafeStack into its base system and ports tree SafeStack (http://clang.llvm.org/docs/SafeStack.html) is part of the Code Pointer Integrity (CPI) project within clang. “SafeStack is an instrumentation pass that protects programs against attacks based on stack buffer overflows, without introducing any measurable performance overhead. It works by separating the program stack into two distinct regions: the safe stack and the unsafe stack. The safe stack stores return addresses, register spills, and local variables that are always accessed in a safe way, while the unsafe stack stores everything else. This separation ensures that buffer overflows on the unsafe stack cannot be used to overwrite anything on the safe stack.” “As of 28 November 2016, with clang 3.9.0, SafeStack only supports being applied to applications and not shared libraries. Multiple patches have been submitted to clang by third parties to add support for shared libraries.” SafeStack is only enabled on AMD64 *** pledge(2)… or, how I learned to love web application sandboxing (https://learnbchs.org/pledge.html) We've talked about OpenBSD's sandboxing mechanism pledge() in the past, but today we have a great article by Kristaps Dzonsons, about how he grew to love it for Web Sandboxing. +First up, he gives us his opening argument that should make most of you sit up and listen: I use application-level sandboxing a lot because I make mistakes a lot; and when writing web applications, the price of making mistakes is very dear. In the early 2000s, that meant using systrace(4) on OpenBSD and NetBSD. Then it was seccomp(2) (followed by libseccomp(3)) on Linux. Then there was capsicum(4) on FreeBSD and sandbox_init(3) on Mac OS X. All of these systems are invoked differently; and for the most part, whenever it came time to interface with one of them, I longed for sweet release from the nightmare. Please, try reading seccomp(2). To the end. Aligning web application logic and security policy would require an arduous (and usually trial-and-error or worse, copy-and-paste) process. If there was any process at all — if the burden of writing a policy didn't cause me to abandon sandboxing at the start. And then there was pledge(2). This document is about pledge(2) and why you should use it and love it. “ +Not convinced yet? Maybe you should take his challenge: Let's play a drinking game. The challenge is to stay out of the hospital. 1.Navigate to seccomp(2). 2. Read it to the end. 3. Drink every time you don't understand. For capsicum(4), the challenge is no less difficult. To see these in action, navigate no further than OpenSSH, which interfaces with these sandboxes: sandbox-seccomp-filter.c or sandbox-capsicum.c. (For a history lesson, you can even see sandbox-systrace.c.) Keep in mind that these do little more than restrict resources to open descriptors and the usual necessities of memory, signals, timing, etc. Keep that in mind and be horrified. “ Now Kristaps has his theory on why these are so difficult (NS..), but perhaps there is a better way. He makes the case that pledge() sits right in that sweet-spot, being powerful enough to be useful, but easy enough to implement that developers might actually use it. All in all, a nice read, check it out! Would love to hear other developer success stories using pledge() as well. *** News Roundup Unix history repository, now on GitHub (http://www.osnews.com/story/29513/Unix_history_repository_now_on_GitHub) OS News has an interesting tidbit on their site today, about the entire commit history of Unix now being available online, starting all the way back in 1970 and bringing us forward to today. From the README The history and evolution of the Unix operating system is made available as a revision management repository, covering the period from its inception in 1970 as a 2.5 thousand line kernel and 26 commands, to 2016 as a widely-used 27 million line system. The 1.1GB repository contains about half a million commits and more than two thousand merges. The repository employs Git system for its storage and is hosted on GitHub. It has been created by synthesizing with custom software 24 snapshots of systems developed at Bell Labs, the University of California at Berkeley, and the 386BSD team, two legacy repositories, and the modern repository of the open source FreeBSD system. In total, about one thousand individual contributors are identified, the early ones through primary research. The data set can be used for empirical research in software engineering, information systems, and software archaeology. This is a fascinating find, especially will be of value to students and historians who wish to look back in time to see how UNIX evolved, and in this repo ultimately turned into modern FreeBSD. *** Yandex commits improvements to FreeBSD network stack (https://reviews.freebsd.org/D8526) “Rework ip_tryforward() to use FIB4 KPI.” This commit brings some code from the experimental routing branch into head As you can see from the graphs, it offers some sizable improvements in forwarding and firewalled packets per second commit (https://svnweb.freebsd.org/base?view=revision&revision=309257) *** The brief history of Unix socket multiplexing – select(2) system call (https://idea.popcount.org/2016-11-01-a-brief-history-of-select2/) Ever wondered about the details of socket multiplexing, aka the history of select(2)? Well Marek today gives a treat, with a quick look back at the history that made today's modern multiplexing possible. First, his article starts the way all good ones do, presenting the problem in silent-movie form: In mid-1960's time sharing was still a recent invention. Compared to a previous paradigm - batch-processing - time sharing was truly revolutionary. It greatly reduced the time wasted between writing a program and getting its result. Batch-processing meant hours and hours of waiting often to only see a program error. See this film to better understand the problems of 1960's programmers: "The trials and tribulations of batch processing". Enter the wild world of the 1970's, and we've now reached the birth of UNIX which tried to solve the batch processing problem with time-sharing. These days when a program was executed, it could "stall" (block) only on a couple of things1: + wait for CPU + wait for disk I/O + wait for user input (waiting for a shell command) or console (printing data too fast)“ Jump forward another dozen years or so, and the world changes yet again: This all changed in 1983 with the release of 4.2BSD. This revision introduced an early implementation of a TCP/IP stack and most importantly - the BSD Sockets API.Although today we take the BSD sockets API for granted, it wasn't obvious it was the right API. STREAMS were a competing API design on System V Revision 3. Coming in along with the sockets API was the select(2) call, which our very own Kirk McKusick gives us some background on: Select was introduced to allow applications to multiplex their I/O. Consider a simple application like a remote login. It has descriptors for reading from and writing to the terminal and a descriptor for the (bidirectional) socket. It needs to read from the terminal keyboard and write those characters to the socket. It also needs to read from the socket and write to the terminal. Reading from a descriptor that has nothing queued causes the application to block until data arrives. The application does not know whether to read from the terminal or the socket and if it guesses wrong will incorrectly block. So select was added to let it find out which descriptor had data ready to read. If neither, select blocks until data arrives on one descriptor and then awakens telling which descriptor has data to read. [...] Non-blocking was added at the same time as select. But using non-blocking when reading descriptors does not work well. Do you go into an infinite loop trying to read each of your input descriptors? If not, do you pause after each pass and if so for how long to remain responsive to input? Select is just far more efficient. Select also lets you create a single inetd daemon rather than having to have a separate daemon for every service. The article then wraps up with an interesting conclusion: > CSP = Communicating sequential processes In this discussion I was afraid to phrase the core question. Were Unix processes intended to be CSP-style processes? Are file descriptors a CSP-derived "channels"? Is select equivalent to ALT statement? I think: no. Even if there are design similarities, they are accidental. The file-descriptor abstractions were developed well before the original CSP paper. It seems that an operating socket API's evolved totally disconnected from the userspace CSP-alike programming paradigms. It's a pity though. It would be interesting to see an operating system coherent with the programming paradigms of the user land programs. A long (but good) read, and worth your time if you are interested in the history how modern multiplexing came to be. *** How to start CLion on FreeBSD? (https://intellij-support.jetbrains.com/hc/en-us/articles/206525024-How-to-start-CLion-on-FreeBSD) CLion (pronounced "sea lion") is a cross-platform C and C++ IDE By default, the Linux version comes bundled with some binaries, which obviously won't work with the native FreeBSD build Rather than using Linux emulation, you can replace these components with native versions pkg install openjdk8 cmake gdb Edit clion-2016.3/bin/idea.properties and change run.processes.with.pty=false Start CLion and open Settings | Build, Execution, Deployment | Toolchains Specify CMake path: /usr/local/bin/cmake and GDB path: /usr/local/bin/gdb Without a replacement for fsnotifier, you will get a warning that the IDE may be slow to detect changes to files on disk But, someone has already written a version of fsnotifier that works on FreeBSD and OpenBSD fsnotifier for OpenBSD and FreeBSD (https://github.com/idea4bsd/fsnotifier) -- The fsnotifier is used by IntelliJ for detecting file changes. This version supports FreeBSD and OpenBSD via libinotify and is a replacement for the bundled Linux-only version coming with the IntelliJ IDEA Community Edition. *** Beastie Bits TrueOS Pico – FreeBSD ARM/RPi Thin Clients (https://www.trueos.org/trueos-pico/) A Puppet package provider for FreeBSD's PkgNG package manager. (https://github.com/xaque208/puppet-pkgng) Notes from November London *BSD meetup (http://mailman.uk.freebsd.org/pipermail/ukfreebsd/2016-November/014059.html) SemiBug meeting on Dec 20th (http://lists.nycbug.org/pipermail/semibug/2016-November/000131.html) Feedback/Questions Erno - SSH without password (http://pastebin.com/SMvxur9v) Jonathan - Magical ZFS (http://pastebin.com/5ETL7nmj) George - TrueOS (http://pastebin.com/tSVvaV9e) Mohammad - Jails IP (http://pastebin.com/T8nUexd1) Gibheer - BEs (http://pastebin.com/YssXXp70) ***

.NET Rocks!
Programming in Python with Kathleen Dollard

.NET Rocks!

Play Episode Listen Later Oct 1, 2015 53:28


Kathleen Dollard has been exploring different development environments and wants to tell the world! Carl and Richard talk to Kathleen about her experience using Python and Django with JetBrain's IntelliJ development environment. As Kathleen says, it's the whole development suite, not just a given language, that you have to evaluate as a whole. But if you're going to live in the dynamic language like Python, you need to take testing seriously - and Kathleen dives into her experience of doing semantic testing to build resilient tests that are easy to read and have comprehensive coverage. The conversation turns to the story of how all this came to pass - Kathleen's new job and new leadership role with a consulting firm has pushed her to try new things!Support this podcast at — https://redcircle.com/net-rocks/donations