Podcasts about OSI

  • 586 podcasts
  • 1,470 episodes
  • 47m average episode duration
  • 5 new episodes weekly
  • Latest episode: May 28, 2025


Latest podcast episodes about OSI

Living The Next Chapter: Authors Share Their Journey
E544 - Derrick Jackson - Shadow One - Air Force Office of Special Investigations, world of criminal investigations and counterintelligence

May 28, 2025 · 50:52


Episode 544 - Derrick Jackson - Shadow One - Air Force Office of Special Investigations, world of criminal investigations and counterintelligence.
About the author: Derrick Jackson joined the U.S. Air Force and served as a jet engine specialist on the F-15 Eagle, C-5 Galaxy, C-141 Starlifter and C-17 Globemaster. After 10 years of service, he was recruited to become a Special Agent with the Air Force Office of Special Investigations. His first assignments were as a criminal investigator at Tyndall AFB and Osan Air Base, Republic of Korea. He then volunteered to join OSI's Special Missions Branch at Hurlburt Field, FL to provide counterintelligence services for Air Force Special Operations Command missions worldwide. After a brief stint at Bolling AFB, DC with the Protective Service Detachment, providing security for foreign dignitaries, Agent Jackson became the Chief of the Economic Crimes Branch at Joint Base Andrews. In 2014, Special Agent Jackson retired from the Air Force after 21 years of service.
Book: Shadow One - Torn between the love of his life and his career, Air Force Staff Sergeant Devin Jackson is recruited to become a Special Agent with the Office of Special Investigations. When the agents uncover an international human trafficking and drug smuggling ring, the crime syndicate decides to strike back, and soon the hunters become the prey. Once the pressure mounts, the team begins to crack and questions whether one of their own has betrayed them. As Devin struggles to find balance between the disturbing reality of trafficking and his personal life, disaster strikes, and he fails to protect the person closest to him. Depression, self-doubt, and grief overcome him until an old friend arrives back on the scene and provides the healing he needs to seek revenge and bring the criminals to justice. https://a.co/d/hhTERZ2
Support the show
https://livingthenextchapter.com/podcast produced by: https://truemediasolutions.ca/
Coffee refills are always appreciated, refill Dave's cup here, and thanks! https://buymeacoffee.com/truemediaca

The Cryptonaut Podcast
#390: Jailbreak Area51! Part 2: Enter The Quataloid

May 19, 2025 · 70:26


While on a camping trip in an abandoned mining town just outside the Nellis Air Force Range, a husband and father of two was killed by a bizarre, bug-like being who had escaped from the S-2 Annex of the notorious Area 51. Following the escape, a pair of OSI officers were tasked with finding out how the mantis-like monstrosity had managed to get out and, more pressing still, what had become of it.
The Cryptonaut Podcast Patreon: https://www.patreon.com/cryptonautpodcast
The Cryptonaut Podcast Merch Stores: Hellorspace.com - Cryptonautmerch.com
Stay Connected with the Cryptonaut Podcast: Website - Instagram - TikTok - YouTube - Twitter - Facebook

Control Intelligence
Patrick Bunn: laying the groundwork for future generations of automation

May 19, 2025 · 33:14


This week's guest on Control Intelligence is Patrick Bunn, owner of Bunn Automation Consulting in Birmingham, Alabama. Bunn's career has included valuable experience with CMC Steel Alabama, Revere Control Systems, EC&S, Hargrove Controls & Automation and Enercon, as well as his volunteer work with younger people in technology, engineering and automation. He will be one of the presenters at OT SCADA CON this July in Houston, where he'll be discussing industrial network protocols and how to use Wireshark software for troubleshooting, as well as the OSI model, which he covered in a very popular article for Control Design. He will speak at the event on July 24 at 3 p.m.

The Birth Trauma Mama Podcast
Ep. 166: 4th Degree Tear, Rectovaginal Fistula, & Ongoing Recovery feat. Scarlett

May 16, 2025 · 53:31


In this week's Listener Series episode of The Birth Trauma Mama Podcast, Scarlett bravely shares her story: a layered, still-unfolding journey through birth trauma, postpartum hemorrhage, and complex pelvic floor injuries that continue to impact her life more than five years later. She speaks candidly about the realities of:

Ones Ready
Ep 470: From AF Security Forces to FBI - Endex Archery's Jay Joins Us!

May 12, 2025 · 56:49


Send us a text. In this epic Ones Ready episode, we sit down with Jay from Endex Archery, whose resume reads like a military fever dream: SERE drop, Security Forces, Combat Arms, OSI Agent, tier-one special mission unit… and then casually slides into the FBI. This man literally said "nah" to survival school and then sprinted straight into a career that landed him fighting terror, investigating spies, and now helping vets heal through archery.
Jay opens up about losing six teammates to a VBIED in Afghanistan, how that tragedy turned into his life's mission, and why a bow and arrow saved his soul. We talk survivor's guilt, FBI hostage rescue, sneaky pull-up bars, and why he's built an organization to help other veterans shoot their way back into mental clarity. Also: hilarious stories about SERE instructors, Air Force recruiter lies, and the absolute dumpster fire that is military admin.
If you've ever thought you couldn't pivot, couldn't overcome, or couldn't create something powerful out of pain, this episode is your proof otherwise.

Ones Ready
AFSPECWAR Q&A Live Stream - Late Night Love with Aaron

May 11, 2025 · 63:27


Send us a text. Just a quick hitter answering all your AFSPECWAR questions. In this conversation, Aaron discusses various topics related to the Air Force, including the current state of the Air Force amidst budget cuts, the enduring demand for special operations forces, and the importance of military brotherhood. He reflects on a recent retirement event, shares thoughts on U.S. foreign policy obligations, and expresses gratitude for the community's support. The conversation also touches on future training initiatives, the special warfare pipeline, and interactions with the OSI, emphasizing the importance of collaboration and community engagement. Aaron also discusses various aspects of military life, including the importance of engaging with local communities during deployments, the differences between enlisted personnel and officers, and the evolving nature of drone warfare. He shares insights on physical preparation, experiences with unidentified aerial phenomena, and the advancements in military technology. Additionally, he addresses changes in training pipelines and the considerations for age when enlisting in the military.
Takeaways:
• The Air Force is always evolving and adapting to changes.
• Special operations jobs will always be in demand due to their critical roles.
• Military brotherhood is a cherished aspect of service that many miss after retirement.
• The U.S. is not obligated to intervene in every foreign conflict.
• Community engagement and gratitude are vital for morale and support.
• Future training initiatives are being planned to better prepare candidates.
• Understanding the special warfare pipeline is crucial for aspiring candidates.
• Collaboration with OSI enhances mission effectiveness and safety.
• Everyone has a role to play in the military, regardless of their specific job.
• Open communication with the community is essential for growth and improvement.
• Engaging with local communities is crucial during deployments.
• There are significant differences between enlisted personnel and officers.
• The Air Force leads in drone warfare tactics and strategies.
• Physical preparation is essential for military readiness.
• Unidentified aerial phenomena can be perplexing and concerning.
Support the show
Join this channel to get access to perks. Buzzsprout subscription page available.
Collabs: Ones Ready - OnesReady.com; 18A Fitness - Promo Code: 1Ready; ATACLete - follow the URL (no promo code); CardoMax - Promo Code: ONESREADY; Danger Close Apparel - Promo Code: ONESREADY; DFND Apparel - Promo Code: ONESREADY; Hoist - Promo Code: ONESREADY; Kill Cliff - Pro...

Les Cast Codeurs Podcast
LCC 325 - Trier le hachis des concurrents

May 9, 2025 · 109:42


Big episode covering a wide range of topics: Java, Scala, Micronaut, NodeJS, AI and developer skills, sampling in LLMs, DTOs, vibe coding, the changes at Broadcom and Red Hat, and several news items about open source licenses. Recorded May 7, 2025. Download the episode (LesCastCodeurs-Episode-325.mp3) or watch it on YouTube.

News

Languages

- For JavaOne and the launch of Java 24, Oracle has released a new site with video resources for learning the language: https://learn.java/ . It is aimed mostly at beginners and teachers, covers the syntax including recent additions such as records and pattern matching, and is not the trendiest site in the world.

- Martin Odersky shares a long article on the state of the Scala ecosystem and the evolution of the language: https://www.scala-lang.org/blog/2025/03/24/evolving-scala.html . Stability and the need to evolve: Scala holds its position (around 14th worldwide) on solid technical foundations, but must evolve to stay relevant against the competition. Priorities: improving the safety/usability pair, polishing the language (removing "rough edges"), and simplifying things for beginners. Continuous innovation: freezing features is ruled out; innovation is key to Scala's value, and the language must remain general-purpose rather than tied to a specific framework. Challenges and progress: tooling (IDEs, build tools such as sbt, scala-cli, Mill) and the learnability of the ecosystem are points of attention, with improvements under way (a teaching partnership, simple platforms).

- Strings get even faster: https://inside.java/2025/05/01/strings-just-got-faster/ . In JDK 25 the performance of String::hashCode has been improved so that it is mostly constant-foldable: if strings are used as keys in a static, immutable Map, significant performance gains are likely. The improvement relies on the internal @Stable annotation applied to the private String.hash field, which lets the VM read the hash value once and treat it as a constant whenever it is not the default value (zero). String::hashCode calls can then be replaced by the already-known hash value, optimizing lookups in immutable Maps. One edge case is a hash code of zero, where the optimization does not work (for example, the empty string ""). Although @Stable is internal to the JDK, a new JEP (JEP 502: Stable Values (Preview)) is under development to let users benefit indirectly from similar functionality (see the short sketch at the end of this section).

- AtomicHash, a Java implementation of a HashMap that is thread-safe, atomic and non-blocking: https://github.com/arxila/atomichash , implemented as an immutable variant of a concurrent hash trie.
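As a concrete illustration of the JDK 25 String::hashCode item above (an editor-added sketch, not something from the episode), here is a minimal Java example of the pattern it describes: a static, immutable Map keyed by String constants, which is exactly the kind of lookup a constant-foldable hash can speed up. The class name and values are invented for the example.

```java
import java.util.Map;

public class HttpStatusNames {

    // Static, immutable Map with String keys. With JDK 25's @Stable-backed
    // String.hash, the JIT can treat the hash of constant keys as a known
    // constant, so lookups such as CODES.get("OK") can be heavily optimized.
    private static final Map<String, Integer> CODES = Map.of(
            "OK", 200,
            "NOT_FOUND", 404,
            "SERVER_ERROR", 500);

    static int code(String name) {
        // Edge case noted above: a String whose hash is 0 (such as "")
        // does not benefit from the optimization.
        return CODES.getOrDefault(name, -1);
    }

    public static void main(String[] args) {
        System.out.println(code("OK"));     // 200
        System.out.println(code("TEAPOT")); // -1
    }
}
```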
Libraries

- Micronaut 4.8.0 released: https://micronaut.io/2025/04/01/micronaut-framework-4-8-0-released/ . The platform BOM (Bill of Materials) is updated. Micronaut Core improvements: Micronaut SourceGen is integrated for internal metadata and bytecode expression generation, with many improvements to SourceGen itself; dependency-injection tracing to ease debugging at startup and during bean creation; a new definitionType member in the @Client annotation to make it easier to share interfaces between client and server; merge support in Bean Mappers via the @Mapping annotation; a new liveness probe that detects deadlocked threads via ThreadMXBean. Improved Kubernetes integration: the Kubernetes Java client is updated to 22.0.1, and a new Micronaut Kubernetes Client OpenAPI module offers an alternative to the official client with fewer dependencies, unified configuration, filter support and Native Image compatibility. A new server runtime based on the JDK's built-in HTTP server allows applications with no external server dependencies. Micronaut Micrometer gains a module to instrument data sources (traces and metrics), plus a condition member in the @MetricOptions annotation to toggle metrics via an expression. Consul watches are supported in Micronaut Discovery Client to detect distributed configuration changes. Source code can be generated from a JSON schema via the build plugins (Gradle and Maven).

Web

- Node v24.0.0 becomes the Current release: https://nodejs.org/en/blog/release/v24.0.0 . V8 is updated to 13.6, bringing new JavaScript features such as Float16Array, explicit resource management (using), RegExp.escape, WebAssembly Memory64 and Error.isError. npm 11 is included, with performance, security and compatibility improvements for modern JavaScript packages. On Windows the build switches from MSVC to ClangCL. AsyncLocalStorage now uses AsyncContextFrame by default, for more efficient async context handling. URLPattern is available globally, with no explicit import needed for URL matching. Permission model improvements: the experimental --experimental-permission flag becomes --permission, signalling greater stability. Test runner improvements: subtests are now awaited automatically, simplifying test writing and reducing unhandled-promise errors. Undici 7 is integrated, improving the HTTP client's performance and support for modern HTTP features. Deprecations and removals: url.parse() is deprecated in favour of the WHATWG URL API; tls.createSecurePair is removed; SlowBuffer is deprecated; instantiating REPL without new is deprecated; using the Zlib classes without new is deprecated; passing args to spawn and execFile in child_process is deprecated. Node.js 24 is currently the "Current" release and will become LTS in October 2025; testing it now to evaluate its impact on your applications is recommended.

Data and Artificial Intelligence

- Learning to code remains crucial, and AI is there to help: https://kyrylo.org/software/2025/03/27/learn-to-code-ignore-ai-then-use-ai-to-code-even-better.html . Learning to code stays essential despite AI. AI can assist programming, but a solid foundation is crucial for understanding and controlling the code; it avoids dependence on AI and reduces the risk of being replaced by AI tools accessible to everyone. AI is a tool, not a substitute for mastering the fundamentals.

- A great article from Anthropic that tries to understand how the "thinking" of LLMs works: https://www.anthropic.com/research/tracing-thoughts-language-model . Black-box effect: the internal strategies of AIs such as Claude are opaque to developers and users. Goal: understand the internal "reasoning" to verify capabilities and intentions. Method: inspired by neuroscience, building an "AI microscope" (looking at which internal circuits activate). Technique: identifying internal concepts ("features") and "circuits". Multilingualism: evidence of a conceptual "language of thought" common to all languages before translating into a particular one. Planning: the ability to anticipate (for example rhymes in poetry), not just word-by-word (token-by-token) generation. Unfaithful reasoning: the model can fabricate plausible arguments ("bullshitting") for a given conclusion. Multi-step logic: it combines distinct facts rather than merely memorizing. Hallucinations: refusal by default; it answers when "knowledge" activates, otherwise it risks hallucinating. "Jailbreaks": a tension between grammatical coherence (which pushes it to continue) and safety (which should make it refuse). Bottom line: the methods are still limited but promising for AI transparency and reliability.

- The "S" in MCP stands for Security (or not!): https://elenacross7.medium.com/%EF%B8%8F-the-s-in-mcp-stands-for-security-91407b33ed6b . The MCP specification, which gives LLMs access to various tools and functions, was perhaps adopted a little quickly, before it was really ready on the security front. The article lists four possible attack types: command-injection vulnerabilities, tool-poisoning attacks, silent redefinition of a tool, and cross-server tool shadowing. For now MCP is not secure: no authentication standard, no context encryption, no tool integrity verification. Based on the InvariantLabs article: https://invariantlabs.ai/blog/mcp-security-notification-tool-poisoning-attacks

- Infinispan 15.2 released, ahead of rolling upgrades to 16.0: https://infinispan.org/blog/2025/03/27/infinispan-15-2 . Redis JSON and Lua script support; JVM metrics can be disabled; new console (PatternFly 6); improved docs (metrics and logs); JDK 17 minimum, JDK 24 supported; end of the native server (for performance reasons).

- Guillaume shows how to build an MCP HTTP Server-Sent Events server with the Java reference implementation and LangChain4j: https://glaforge.dev/posts/2025/04/04/mcp-client-and-server-with-java-mcp-sdk-and-langchain4j/ . It is written in Java with the reference implementation, which also underlies the Spring Boot integration (while remaining independent of Spring). The MCP server is exposed as a servlet in Jetty; the MCP client is built with LangChain4j's MCP module, which is only semi-independent of Spring in that it depends on Reactor and its interfaces. There is a conversation on Anthropic's GitHub about finding a solution, but it does not look simple.

- The fallacies behind the quote "AI won't replace you, but humans using AI will": https://platforms.substack.com/cp/161356485 . The automation vs. augmentation fallacy: it focuses on improving existing tasks with AI instead of considering how the value of those tasks changes in a new system. The productivity-gains fallacy: higher productivity does not always translate into more value for workers, because the value created can be captured elsewhere in the system. The static-jobs fallacy: jobs are organizational constructs that AI can redefine, making traditional roles obsolete. The "me vs. someone using AI" fallacy: competition shifts when AI changes the fundamental constraints of an industry, making existing skills less relevant. The workflow-continuity fallacy: AI can lead to a complete reimagining of workflows, eliminating the need for certain skills. The neutral-tools fallacy: AI tools are not neutral; they can redistribute organizational power by changing how decisions are made and executed. The stable-salary fallacy: keeping a job does not guarantee a stable salary, since the value of the work can fall as AI capabilities grow. The stable-company fallacy: integrating AI requires restructuring the company and does not happen in an organizational vacuum.

- Understanding "sampling" in LLMs: https://rentry.co/samplers . Explains why LLMs use tokens; the different sampling methods, that is, how tokens are chosen; hyperparameters such as temperature and top-p and how they influence each other; and tokenization algorithms such as Byte Pair Encoding and SentencePiece.

- One less: OpenAI is to buy Windsurf for 3 billion dollars: https://www.bloomberg.com/news/articles/2025-05-06/openai-reaches-agreement-to-buy-startup-windsurf-for-3-billion . The deal is not yet finalized. Windsurf was valued at 1.25 billion last year, while OpenAI recently raised 40 billion, taking its valuation to 300 billion. The goal for OpenAI is to enter the coding-assistant market, where it is currently absent.

- Docker Desktop gets into AI? A new feature in Docker Desktop 4.4 on macOS, Docker Model Runner: https://dev.to/docker/run-genai-models-locally-with-docker-model-runner-5elb . It runs models natively and locally (https://docs.docker.com/model-runner/) and also MCP servers (https://docs.docker.com/ai/mcp-catalog-and-toolkit/).

Tooling

- JetBrains defends removing negative reviews of its AI assistant: https://devclass.com/2025/04/30/jetbrains-defends-removal-of-negative-reviews-for-unpopular-ai-assistant/?td=rt-3a . JetBrains' AI Assistant, launched in July 2023, has been downloaded more than 22 million times but is rated only 2.3 out of 5. Users noticed that some negative reviews were being deleted, which triggered a backlash on social media. A JetBrains employee explained that the reviews were removed either because they mentioned problems that had already been fixed, or because they violated the policy on "profanity, etc." The company acknowledged it could have handled the situation better; a representative said: "Removing several reviews at once without notice looked suspicious. We should at least have published a notice and given the authors more details." Problems reported by users include limited support for third-party model providers, noticeable latency, frequent slowdowns, core features locked to JetBrains' cloud services, an inconsistent user experience and insufficient documentation. A common complaint is that the AI Assistant installs itself without permission; one Reddit user called it "an annoying plugin that repairs/reinstalls itself like a phoenix". JetBrains recently introduced a free tier and a new AI agent called Junie, meant to work alongside the AI Assistant, probably in response to the competition, but it is more expensive to run. The company has committed to exploring new approaches for handling major updates differently and is considering per-version reviews, or marking reviews as "Resolved" with links to the corresponding issues instead of deleting them. Unlike competitors such as Microsoft, AWS or Google, JetBrains sells only developer tools and services and has no separate cloud business to fall back on.

- Make the images in your READMEs and Markdown files work with GitHub's dark mode: https://github.blog/developer-skills/github/how-to-make-your-images-in-markdown-on-github-adjust-for-dark-mode-and-light-mode/ . Only a few lines of plain HTML are needed.

Architecture

- So, DTOs: good or bad? https://codeopinion.com/dtos-mapping-the-good-the-bad-and-the-excessive/ . Purpose of DTOs: transferring data between the layers of an application, often mapping between different representations (for example between the database and the user interface). Frequent overuse: DTOs are often used excessively, notably for HTTP APIs that merely mirror the database entities, missing the chance to compose richer data. Real value: managing the coupling between layers and composing data from multiple sources into shapes optimized for specific use cases. Decoupling: use DTOs to decouple internal data models from external contracts (such as APIs), allowing independent evolution and versioning. Example with CQRS: in Command Query Responsibility Segregation, query responses act as DTOs tailored to the UI's needs and can include data from several sources. Protecting internal data: DTOs help distinguish and protect internal (private) data models from external (public) change. Avoiding excess: beware of pointless mapping layers (mapping one DTO to another DTO) that add no value. Targeted creation: only create DTOs when they solve concrete problems, such as managing coupling or enabling data composition.

Methodologies

- Even Guillaume is trying "vibe coding": https://glaforge.dev/posts/2025/05/02/vibe-coding-an-mcp-server-with-micronaut-and-gemini/ . According to Andrej Karpathy, vibe coding means knocking together a proof of concept, a throwaway weekend app: https://x.com/karpathy/status/1886192184808149383 . Simon Willison, however, objects that some people confuse coding with AI assistance with vibe coding: https://simonwillison.net/2025/May/1/not-vibe-coding/ . Guillaume had fun building an MCP server with Micronaut, using Gemini, Google's AI. Unlike Quarkus or Spring Boot, Micronaut does not yet have a module or specific support to make building an MCP server easier.

Security

- A 10/10 security flaw in Tomcat: https://www.it-connect.fr/apache-tomcat-cette-faille-activement-exploitee-seulement-30-heures-apres-sa-divulgation-patchez/ . A critical vulnerability (CVE-2025-24813) affects Apache Tomcat and allows remote code execution. It was actively exploited only 30 hours after its disclosure on March 10, 2025. The attack requires no authentication and is particularly simple to carry out: a PUT request with a base64-encoded serialized Java payload, followed by a GET request. The base64 encoding bypasses most security filters. Vulnerable servers use file-based session storage, a widespread configuration. Affected versions: 11.0.0-M1 to 11.0.2, 10.1.0-M1 to 10.1.34, and 9.0.0.M1 to 9.0.98. Recommended updates: 11.0.3+, 10.1.35+ and 9.0.99+. Experts expect more sophisticated attacks in the next phases of exploitation (uploading a config file or a JSP).

- Hardening an SSH server: https://ittavern.com/ssh-server-hardening/ . An article listing the key settings for securing an SSH server, for example disabling password authentication, changing the port, disabling root login, and forcing SSH protocol 2, plus some less well-known options such as MaxStartups, which limits the number of concurrent unauthenticated connections. Port knocking is a useful technique but requires clients that are aware of the protocol.

- Oracle admits that its customers' IAM identities leaked: https://www.theregister.com/2025/04/08/oracle_cloud_compromised/ . Oracle confirmed to some customers that its public cloud was compromised, after previously denying any intrusion. A hacker claimed to have breached two Oracle authentication servers and stolen around six million records, including private security keys, encrypted credentials and LDAP entries. The exploited flaw appears to be CVE-2021-35587 in Oracle Access Manager, which Oracle had not patched on its own systems. In early March the attacker created a text file on login.us2.oraclecloud.com containing their email address to prove the access. According to Oracle, an old server holding eight-year-old data was compromised, but one customer says login data as recent as 2024 was stolen. Oracle faces a lawsuit in Texas over this data breach. The intrusion is distinct from another attack against Oracle Health, on which the company declines to comment. Oracle could face sanctions under the European GDPR, which requires notifying affected parties within 72 hours of discovering a data leak. Oracle's behaviour of denying and then quietly admitting the intrusion is unusual in 2025 and could lead to further class actions.

- A very popular GitHub Action compromised: https://www.stepsecurity.io/blog/harden-runner-detection-tj-actions-changed-files-action-is-compromised . In March 2025 the widely used tj-actions/changed-files action was compromised; modified versions of the action exposed CI/CD secrets in build logs. Attack method: a compromised PAT was used to redirect several version tags to a commit containing malicious code. Details of the malicious code: the injected code ran a base64-encoded Node.js function that downloaded a Python script; the script scanned the GitHub runner's memory for secrets (tokens, keys and so on) and exposed them in the logs, and in some cases the data was also sent out via a network request. Exposure window: the compromised versions were live between March 12 and 15, 2025; any repository, especially a public one, that used the action during that period should be considered potentially exposed. Detection: the malicious activity was spotted by analysing unusual behaviour during workflow runs, such as unexpected network connections. Response: GitHub removed the compromised action, which has since been cleaned up. Potential impact: any secret that appeared in logs must be considered compromised, even in private repositories, and regenerated without delay.

Law, society and organization

- Y Combinator startups are the fastest-growing in the fund's history: https://www.cnbc.com/2025/03/15/y-combinator-startups-are-fastest-growing-in-fund-history-because-of-ai.html . Early-stage companies in Silicon Valley are growing significantly thanks to artificial intelligence. Y Combinator CEO Garry Tan says the latest cohort as a whole grew 10% per week for nine months. AI lets developers automate repetitive tasks and generate code with large language models. For about 25% of the current YC startups, 95% of their code was written by AI. This revolution lets companies grow with fewer staff, some reaching 10 million dollars in revenue with fewer than 10 employees. The "growth at all costs" mindset has been replaced by renewed interest in profitability. Around 80% of the companies presented at demo day were AI-centric, with a few robotics and semiconductor startups. Y Combinator invests 500,000 dollars in startups in exchange for equity, followed by a three-month program.

- Red Hat middleware (ex-JBoss) joins IBM: https://markclittle.blogspot.com/2025/03/red-hat-middleware-moving-to-ibm.html . Red Hat's middleware activities (including JBoss, Quarkus, etc.) are being transferred to IBM, into the unit dedicated to data security, IAM and runtimes. The change follows Red Hat's strategic decision to focus more on hybrid cloud and artificial intelligence. Mark Little explains that the transfer had become inevitable, Red Hat having reduced its middleware investment in recent years. The integration aims to strengthen innovation around Java by uniting Red Hat's and IBM's efforts. The middleware products will remain open source and customers will keep the usual support, unchanged. Mark Little says that projects such as Quarkus will continue to be supported and that this evolution is good for the Java community.

- One year of Commonhaus: https://www.commonhaus.org/activity/253.html . One year in, having started with the communities they knew well, they now host 14 projects and can accept more. Trust, lightweight governance and protecting the projects' future; automation of the administrative work, stability without complexity, developers at the centre of decision-making. They need members and (financial) supporters, and they want to welcome projects beyond the circle of the Java Champions.

- Spring Cloud Data Flow becomes a commercial product and will no longer be maintained as open source: https://spring.io/blog/2025/04/21/spring-cloud-data-flow-commercial . Perhaps under Broadcom's influence, Spring is starting to move components of the Spring portfolio to a proprietary model. They say few people used it in OSS form and that most usage came through the Tanzu platform; maintaining it as open source costs time they are not spending on those projects.

- The CNCF protects the NATS project, in the foundation since 2018, after Synadia, the company contributing to it, sought to take back control: https://www.cncf.io/blog/2025/04/24/protecting-nats-and-the-integrity-of-open-source-cncfs-commitment-to-the-community/ . The CNCF protects open source projects under neutral governance. Synadia vs CNCF: Synadia wanted to pull NATS out under a non-open-source license (BUSL), and the CNCF accuses Synadia of an illegitimate "claw back". Synadia's claims: the nats.io domain and the GitHub organization. The NATS trademark was never transferred by Synadia (a promise broken despite the CNCF's help). Synadia's pushback: it considers the CNCF rules "too vague". Internal vote: Synadia's maintainers voted to leave the CNCF, without the community. CNCF support: major investment (audits, legal), community success (more than 700 organizations). Future of NATS under the CNCF: staying under Apache 2.0 with open governance. CNCF actions: health check, call for maintainers, cancellation of Synadia's trademark application, rejection of its demands. In the end there seems to be a good outcome: https://www.cncf.io/announcements/2025/05/01/cncf-and-synadia-align-on-securing-the-future-of-the-nats-io-project/ . The CNCF and Synadia reached an agreement to secure the future of NATS.io. Synadia will hand over its two NATS trademark registrations to the Linux Foundation to strengthen the project's open governance. The project's infrastructure and assets stay with the CNCF, guaranteeing long-term stability and open source development under the Apache-2.0 license. The Linux Foundation, through Todd Moore, recognizes Synadia's contributions and continued support; Derek Collison, Synadia's CEO, reaffirms his company's commitment to NATS and to collaborating with the Linux Foundation and the CNCF. NATS is widely adopted, considered critical infrastructure, and enjoys strong community support for its open source nature and Synadia's ongoing involvement.
- Finally, Redis returns to an OSI open source license, the AGPL: https://foojay.io/today/redis-is-now-available-under-the-agplv3-open-source-license/ . Redis moves to the AGPLv3 open source license to counter exploitation by cloud providers that do not contribute back. The earlier switch to the SSPL had hurt the relationship with the open source community. Salvatore Sanfilippo (antirez) has returned to Redis. Redis 8 adopts the AGPL, integrates the Redis Stack features (JSON, Time Series, etc.) and introduces "vector sets" (the vector computation support developed by Salvatore). These changes aim to strengthen Redis as a platform developers love, in line with Salvatore's original vision.

Conferences

The list of conferences comes from the Developers Conferences Agenda/List by Aurélie Vache and contributors:
6-7 May 2025: GOSIM AI Paris - Paris (France)
7-9 May 2025: Devoxx UK - London (UK)
15 May 2025: Cloud Toulouse - Toulouse (France)
16 May 2025: AFUP Day 2025 Lille - Lille (France)
16 May 2025: AFUP Day 2025 Lyon - Lyon (France)
16 May 2025: AFUP Day 2025 Poitiers - Poitiers (France)
22-23 May 2025: Flupa UX Days 2025 - Paris (France)
24 May 2025: Polycloud - Montpellier (France)
24 May 2025: NG Baguette Conf 2025 - Nantes (France)
3 June 2025: TechReady - Nantes (France)
5-6 June 2025: AlpesCraft - Grenoble (France)
5-6 June 2025: Devquest 2025 - Niort (France)
10-11 June 2025: Modern Workplace Conference Paris 2025 - Paris (France)
11-13 June 2025: Devoxx Poland - Krakow (Poland)
12 June 2025: Positive Design Days - Strasbourg (France)
12-13 June 2025: Agile Tour Toulouse - Toulouse (France)
12-13 June 2025: DevLille - Lille (France)
13 June 2025: Tech F'Est 2025 - Nancy (France)
17 June 2025: Mobilis In Mobile - Nantes (France)
19-21 June 2025: Drupal Barcamp Perpignan 2025 - Perpignan (France)
24 June 2025: WAX 2025 - Aix-en-Provence (France)
25-26 June 2025: Agi'Lille 2025 - Lille (France)
25-27 June 2025: BreizhCamp 2025 - Rennes (France)
26-27 June 2025: Sunny Tech - Montpellier (France)
1-4 July 2025: Open edX Conference - 2025 - Palaiseau (France)
7-9 July 2025: Riviera DEV 2025 - Sophia Antipolis (France)
5 September 2025: JUG Summer Camp 2025 - La Rochelle (France)
12 September 2025: Agile Pays Basque 2025 - Bidart (France)
18-19 September 2025: API Platform Conference - Lille (France) & Online
23 September 2025: OWASP AppSec France 2025 - Paris (France)
25-26 September 2025: Paris Web 2025 - Paris (France)
2-3 October 2025: Volcamp - Clermont-Ferrand (France)
3 October 2025: DevFest Perros-Guirec 2025 - Perros-Guirec (France)
6-10 October 2025: Devoxx Belgium - Antwerp (Belgium)
7 October 2025: BSides Mulhouse - Mulhouse (France)
9-10 October 2025: Forum PHP 2025 - Marne-la-Vallée (France)
9-10 October 2025: EuroRust 2025 - Paris (France)
16 October 2025: PlatformCon25 Live Day Paris - Paris (France)
16-17 October 2025: DevFest Nantes - Nantes (France)
30-31 October 2025: Agile Tour Bordeaux 2025 - Bordeaux (France)
30-31 October 2025: Agile Tour Nantais 2025 - Nantes (France)
30 October - 2 November 2025: PyConFR 2025 - Lyon (France)
4-7 November 2025: NewCrafts 2025 - Paris (France)
6 November 2025: dotAI 2025 - Paris (France)
7 November 2025: BDX I/O - Bordeaux (France)
12-14 November 2025: Devoxx Morocco - Marrakech (Morocco)
13 November 2025: DevFest Toulouse - Toulouse (France)
15-16 November 2025: Capitole du Libre - Toulouse (France)
20 November 2025: OVHcloud Summit - Paris (France)
21 November 2025: DevFest Paris 2025 - Paris (France)
27 November 2025: Devfest Strasbourg 2025 - Strasbourg (France)
28 November 2025: DevFest Lyon - Lyon (France)
5 December 2025: DevFest Dijon 2025 - Dijon (France)
10-11 December 2025: Devops REX - Paris (France)
10-11 December 2025: Open Source Experience - Paris (France)
28-31 January 2026: SnowCamp 2026 - Grenoble (France)
2-6 February 2026: Web Days Convention - Aix-en-Provence (France)
23-25 April 2026: Devoxx Greece - Athens (Greece)
17 June 2026: Devoxx Poland - Krakow (Poland)

Contact us

To react to this episode, come and chat on the Google group https://groups.google.com/group/lescastcodeurs
Contact us on X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com
Send a crowdcast or a crowdquestion
Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs
All the episodes and all the info at https://lescastcodeurs.com/

The Flip
Why the NFL is Bringing American Football to Africa

Apr 23, 2025 · 6:37


There are 1,696 active players in the NFL. Just 138 are African. But if it were up to Osi Umenyiora, 11-year veteran and 2-time Super Bowl Champion, there would be many more. Osi is the Founder of The Uprise, the NFL's lead in Africa, and he's pioneering American football on the African continent. At the NFL's camp in Lagos, Nigeria, young athletes are vying for a shot to join the NFL Academy in London or to go straight to the League through the International Player Pathway Program. But many of them have never played American football before. So why is the NFL hosting camps in Africa? Is there really any shot of these players making it to the NFL?
00:00 - The NFL is in Lagos, Nigeria
00:30 - Osi Umenyiora is bringing football to Africa
02:05 - The NFL wants the best talent in the world
03:55 - Creating opportunities for African talent

The Current
Process Palooza Podcast Ep. 8: Lean Six Sigma vs. AI - The Ultimate Webinar Showdown

Apr 17, 2025 · 27:14


In this episode, we discuss an exciting offering at UC San Diego's Office of Strategic Initiatives: a new webinar series to foster conversations about continuous improvement, artificial intelligence, and more. Joining us is Laura De La Peña, who previews the upcoming webinar 'Lean Six Sigma vs. Artificial Intelligence.' We explore how Lean Six Sigma and AI can complement each other's strengths and weaknesses in process improvement. You can join the April 24th webinar and get insights on how these methodologies can enhance efficiency and effectiveness in both academia and industry. Simply email osi@ucsd.edu! Also, don't forget to register for Process Palooza 2025 at UC San Diego on August 6-7! Head to processpalooza.ucsd.edu for more information!
00:00 Welcome to Process Palooza Podcast
01:03 Introducing Triton GPT and OSI's Role
02:20 Upcoming AI Essentials Webinar
03:44 Interview with Laura De La Peña
05:25 Lean Six Sigma vs. AI: A Deep Dive
07:57 The Future of AI and Process Improvement
15:06 Triton GPT: Development and Training
24:31 Public Access and Benefits of Webinars
25:56 Conclusion and Upcoming Events

Historia Jakiej Nie Znacie
Dlaczego historia Polski musi się powtarzać, a nasi bohaterowie tak często musieli ginąć? gość Jan Maciejewski

Apr 16, 2025 · 71:28


Prince Poniatowski drowning in the waters of the Elster, the finale of the Kościuszko Uprising drenched in blood during the Massacre of Praga, and the slaughter of the capital 150 years later during the 1944 uprising. Add to that the hopeless independence risings and the often suicidal fight of proud Poles. It reached its zenith during World War II, when our ancestors died in the interest of perfidious great powers. Why were Poles so eager to say "nic to!" ("it's nothing!"), putting their lives on the line in the name of honor and the martyrdom of the nation? Why did they sometimes do so while, in essence, shedding blood for the interests of foreign empires and of leaders who exploited their naivety, such as Napoleon or, later, the Western Allies? Do we really need to honor today the heroes who laid down their lives on the altar of the fatherland? That is the subject of today's episode, whose guest is Jan Maciejewski, author of the book "Nic to! Dlaczego historia Polski musi się powtarzać?" ("It's nothing! Why must Polish history repeat itself?").

DejaVue
Open Source Sustainability (with Daniel Roe, Chad Whitacre & Rijk van Zanten)

Apr 6, 2025 · 60:24 · Transcription available


Join hosts Michael Thiessen and Alexander Lichter for a special episode of DejaVue, a fascinating panel discussion on open source sustainability with three open source enthusiasts: Daniel Roe (Nuxt Team Lead), Chad Whitacre (Head of Open Source at Sentry), and Rijk van Zanten (CTO and co-founder of Directus). The panelists dive deep into what sustainability truly means in open source, get into the weeds of different licensing models, debate whether open source functions as a gift economy, and discuss the challenges of project governance. The panel also tackles important questions about leadership structures in open source projects, the role of companies in funding development, and practical ways everyone can contribute to making the ecosystem more sustainable, whether financially or through other meaningful contributions. Enjoy the episode!
Our Guests: Daniel Roe - Nuxt Team Lead (Bluesky, Web); Chad Whitacre - Head of Open Source at Sentry (Bluesky, Twitter, Web); Rijk van Zanten - CTO and co-founder of Directus (Bluesky, Twitter, Web)
Chapters: (00:00) Welcome to DejaVue; (04:06) What is Open Source Sustainability; (12:51) Open Source as a gift economy; (19:20) The Projects and their Licenses; (29:50) Sentry is not Open Source; (34:53) Open Source Definition and OSI; (37:09) Why people adopt open source software?; (39:44) Open Source Governance; (47:50) Stewarding an open source team; (52:22) Open Source Leadership; (55:40) What can YOU do to help open source?
Links and Resources: Mastering Nuxt*, xkcd on standards, Jacob Kaplan-Moss: Paying Maintainers is Good, Sentry, Directus, Nuxt, BSL, Fair Source, MIT, AGPL, OSI, BDFL, Nuxt Governance, Open Source Steward in the CRA, Daniel Roe: Contributing to Nuxt, Open Source Pledge
Your Hosts: Alexander Lichter (Bluesky, YouTube, Website) and Michael Thiessen (Twitter, YouTube, Website)
Links marked with * are affiliate links. We get a small commission when you register for the service through our link. This helps us to keep the podcast running. We only include affiliate links for services mentioned in the episode or that we use ourselves.

The Protectors
534 | Derrick Jackson | BOOK DISCUSSION: "Shadow One"

Apr 4, 2025 · 21:17 · Transcription available


Send us a text. Derrick Jackson shares his journey from Air Force jet engine mechanic to OSI special agent, revealing the intense personal and professional challenges of federal law enforcement, and how these experiences inspired his crime thriller novel "Shadow One."
• Starting as an enlisted jet engine mechanic before getting recruited to OSI
• Discussing the reality of FLETC (Federal Law Enforcement Training Center) and the demanding training process
• Transitioning from mechanic to criminal investigator and the mental shift required
• Balancing investigative work with personal life and relationships
• Using music to inspire creative writing and developing characters
• Exploring how traumatic cases affect agents psychologically
• Turning real-life experiences into a crime thriller novel
• Finding the courage to follow your own path despite others' doubts
Find Derrick Jackson's book "Shadow One" on Amazon, Barnes & Noble, Books A Million, and Walmart online.
Support the show. Make sure to check out Jason on IG @drjasonpiccolo

Z pasją o mocnych stronach
#259 Czy każdy produkt ma osobę, która się nim opiekuje? Rozmowa z Natalią Cholewą.

Mar 20, 2025 · 57:59


Each of us comes into contact with products, and every product is designed in some way. Who does that, and what does this role involve? I talk with Natalia Cholewa about the differences between the work of a Product Owner, a Product Manager and a Project Manager. It was interesting to hear about the bright sides of this work, but also about its challenges. Natalia supports product owners. We talked openly about what it is like to work on a product. I invite you to listen and to join the conversation!
Guest: Natalia Cholewa, a mom, wife, daughter and friend; professionally, a product person. Top 5 (CliftonStrengths): Osiąganie (Achiever), Ukierunkowanie (Focus), Dyscyplina (Discipline), Bezstronność (Consistency), Bliskość (Relator). Instagram.
Intentional newsletter: every week I send out a letter in which I invite you to a conversation and to asking yourself important questions. Links.
Summary. Who is a Product Owner? Dominik: Let's start with the basics, how do you think about the individual roles? Natalia: In Poland they are somewhat distorted. Job ads use the names Product Owner, Business Analyst and even Product Manager interchangeably. All of these roles deal with meeting business needs, however those are defined. The Business Analyst is closer to the team than the Product Owner, if both roles exist in the company. The Product Owner decides on the directions of product development. The Product Manager is the person who puts product development first; they set […] The post #259 Does every product have someone who takes care of it? A conversation with Natalia Cholewa. appeared first on Near-Perfect Performance.

WP Tavern
#161 – Robert Jacobi on WordPress, Security, and the OSI Model

Mar 18, 2025 · 43:44


On the podcast today we have Robert Jacobi and he's here to talk about his tech journey, and his role at Black Wall, formerly BotGuard. We talk about the OSI model, explaining how computer networks communicate through seven layers, from application to physical. Robert shares insights into Black Wall's focus on preventing bot attacks at a layer far from the website, mitigating risks before they hit the hosting company. There's also a brief discussion of WordPress plugins and the complexity of online security, with a nod to the hope of increasing listeners' understanding of these intricate processes. If you've ever wondered about the unseen layers of internet security and infrastructure, or the strategic moves involved in rebranding a tech company, this episode is for you.
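For readers who want the seven layers Robert refers to laid out explicitly, here is a short, editor-added Java sketch (not material from the episode; the protocols in the comments are common textbook pairings, not anything specific to Black Wall):

```java
/** The seven OSI layers, from the top (closest to the user) down to the wire. */
public enum OsiLayer {
    APPLICATION(7),   // e.g. HTTP, DNS: what a website or API actually speaks
    PRESENTATION(6),  // e.g. TLS encryption, character encodings
    SESSION(5),       // managing dialogs between endpoints
    TRANSPORT(4),     // e.g. TCP, UDP: ports, segmentation, reliability
    NETWORK(3),       // e.g. IP: addressing and routing between networks
    DATA_LINK(2),     // e.g. Ethernet, Wi-Fi: frames on the local link
    PHYSICAL(1);      // cables, radio, light: the raw signal

    private final int number;

    OsiLayer(int number) {
        this.number = number;
    }

    public int number() {
        return number;
    }
}
```

One way to read the "layer far from the website" framing is that the bot filtering happens well before traffic ever reaches the application layer where WordPress itself runs.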

Jukebox
#161 – Robert Jacobi on WordPress, Security, and the OSI Model

Mar 18, 2025 · 43:44


On the podcast today we have Robert Jacobi and he's here to talk about his tech journey, and his role at Black Wall, formerly BotGuard. We talk about the OSI model, explaining how computer networks communicate through seven layers, from application to physical. Robert shares insights into Black Wall's focus on preventing bot attacks at a layer far from the website, mitigating risks before they hit the hosting company. There's also a brief discussion of WordPress plugins and the complexity of online security, with a nod to the hope of increasing listeners' understanding of these intricate processes. If you've ever wondered about the unseen layers of internet security and infrastructure, or the strategic moves involved in rebranding a tech company, this episode is for you.

20 Years Done Podcast
Episode 103 - Derrick Jackson

Mar 17, 2025 · 46:29


I'm joined by Derrick Jackson to discuss his 21-year career as an aircraft maintainer and OSI Special Agent. Derrick has authored his first novel, Shadow One, loosely based on his career. Derrick and I discuss myths around OSI, human trafficking in South Korea and the impacts on security clearance, counterintelligence, and crime.

Packet Pushers - Full Podcast Feed
N4N017: Routing Fundamentals

Mar 13, 2025 · 49:48


On today's N Is For Networking, we explore the fundamentals of routing, focusing on layer 3 of the OSI model. We explain the concepts of routers, routing tables, and routing protocols, and discuss why it’s important to have a firm grasp of these concepts before you tackle advanced topics such as VXLAN and EVPN. Today's... Read more »
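To make the routing-table idea concrete, here is a small, self-contained Java sketch (an editor illustration, not from the episode): each route pairs an IPv4 prefix with a next hop, and the lookup picks the most specific matching prefix, the longest-prefix-match rule routers follow. The addresses and hop names are made up.

```java
import java.util.ArrayList;
import java.util.List;

public class RoutingTableDemo {

    /** One IPv4 route: a network prefix, its length, and where matching traffic goes. */
    record Route(int network, int prefixLen, String nextHop) {
        boolean matches(int address) {
            if (prefixLen == 0) return true;            // default route 0.0.0.0/0
            int mask = -1 << (32 - prefixLen);          // e.g. /24 -> 255.255.255.0
            return (address & mask) == (network & mask);
        }
    }

    private final List<Route> routes = new ArrayList<>();

    void add(String cidr, String nextHop) {
        String[] parts = cidr.split("/");
        routes.add(new Route(toInt(parts[0]), Integer.parseInt(parts[1]), nextHop));
    }

    /** Longest-prefix match: the most specific matching route wins. */
    String lookup(String destination) {
        int addr = toInt(destination);
        Route best = null;
        for (Route r : routes) {
            if (r.matches(addr) && (best == null || r.prefixLen() > best.prefixLen())) {
                best = r;
            }
        }
        return best == null ? "drop (no route)" : best.nextHop();
    }

    private static int toInt(String dotted) {
        int value = 0;
        for (String octet : dotted.split("\\.")) {
            value = (value << 8) | Integer.parseInt(octet);
        }
        return value;
    }

    public static void main(String[] args) {
        RoutingTableDemo table = new RoutingTableDemo();
        table.add("0.0.0.0/0", "ISP gateway");         // default route
        table.add("10.0.0.0/8", "core router");
        table.add("10.1.2.0/24", "branch office");     // more specific than 10.0.0.0/8
        System.out.println(table.lookup("10.1.2.50")); // branch office
        System.out.println(table.lookup("10.9.9.9"));  // core router
        System.out.println(table.lookup("8.8.8.8"));   // ISP gateway
    }
}
```

Real routers layer metrics, administrative distance and fast hardware lookup structures on top of this core idea; the sketch shows only the prefix-match essentials.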

Packet Pushers - Fat Pipe
N4N017: Routing Fundamentals

Mar 13, 2025 · 49:48


On today's N Is For Networking, we explore the fundamentals of routing, focusing on layer 3 of the OSI model. We explain the concepts of routers, routing tables, and routing protocols, and discuss why it’s important to have a firm grasp of these concepts before you tackle advanced topics such as VXLAN and EVPN. Today's... Read more »

Open at Intel
Open Source and Public Policy: A Conversation with Deb Bryant

Mar 13, 2025 · 20:23


Deb Bryant discusses her career journey and the significant role of open source software in public policy, particularly in the US and Europe. She highlights her work with the Open Source Initiative, Oregon State University, and Red Hat, emphasizing the importance of open source in government operations and cybersecurity. Deb also addresses the challenges and evolution of open source policies, the critical need for sustainability in open source projects, and her current focus on AI's impact on the ecosystem. She concludes by advocating for harmonized international regulations and human-centered AI approaches. 00:00 Introduction 00:44 Government and Open Source Software 01:38 Experiences in the Private Sector 02:14 Open Source in Public Policy 04:31 Cybersecurity and Open Source 07:42 Sustainability in Open Source 15:05 Future of Open Source and AI 18:53 Conclusion and Final Thoughts Guest: Deb Bryant, Director, US Policy and Founder, Open Policy Alliance Open Source Initiative Throughout her career, Deborah has lent her voice to supporting open source projects and developers, building bridges between academia, industry, non-profits, and government along the way. Today she provides guidance to open source foundations seeking to support public policy development in open technology domains. She has worked in emerging technology and has been an advocate of free and open source software and the community that makes it so since the 1990s. Deborah is board director emeritus at the Open Source Initiative (OSI); serves on the DemocracyLab board; serves on the advisory boards of Open Source Elections Technology Foundation and the OASIS Open Project, and as an advisor to the Brandeis University Open Technology Management program. She also represents OSI as a member of the Digital Public Goods Alliance. For eight years prior to her reentry into the nonprofit world, she led one of the world's largest open source program offices (OSPO) at Red Hat where her global team was responsible for the company's strategy and stewardship in open source software communities. While at Red Hat she served on the Eclipse Foundation board for two years. Deborah's published academic research includes the Use of Open Source in Cybersecurity in the Energy Industry and Collaborative Models for Creating Software in the Public Sector.  

Cyber Work
CompTIA Network+ training: What to expect in a boot camp | Instructor Tommy Gober

Mar 10, 2025 · 17:41


Get your FREE Cybersecurity Salary Guide: https://www.infosecinstitute.com/form/cybersecurity-salary-guide-podcast/?utm_source=youtube&utm_medium=podcast&utm_campaign=podcast
In this episode of Cyber Work Hacks, Infosec Boot Camp Instructor Tommy Gober walks us through what his Network+ training course is like. He talks about the supportive learning environment and explains how the boot camp is designed for those new to the field. He covers the structure of the five-day program, touching on topics like the OSI model, binary number systems, networking hardware and more. He also shares his insights on the benefits of boot camp learning versus self-study or traditional schooling, and offers tips for taking the Network+ exam.
0:00 Introduction
1:19 Free cybersecurity salary guide
2:54 Boot camp training vs. other learning methods
4:23 In-person and online boot camp experience
9:13 Network+ boot camp daily breakdown
11:56 Network+ exam preparation and testing strategies
14:55 Final thoughts and wrap up
View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast/?utm_source=youtube&utm_medium=podcast&utm_campaign=podcast
About Infosec: Infosec's mission is to put people at the center of cybersecurity. We help IT and security professionals advance their careers with skills development and certifications while empowering all employees with security awareness and phishing training to stay cyber-safe at work and home. More than 70% of the Fortune 500 have relied on Infosec Skills to develop their security talent, and more than 5 million learners worldwide are more cyber-resilient from Infosec IQ's security awareness training. Learn more at infosecinstitute.com.
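The binary and subnetting arithmetic mentioned above is easy to rehearse in code. Here is a brief, editor-added Java sketch (not Infosec course material) that prints an IPv4 address in binary and derives its network address from a prefix length, the kind of drill the Network+ exam expects:

```java
public class SubnetDrill {

    /** Convert a dotted-quad IPv4 address to a 32-bit integer. */
    static int toInt(String dotted) {
        int value = 0;
        for (String octet : dotted.split("\\.")) {
            value = (value << 8) | Integer.parseInt(octet);
        }
        return value;
    }

    /** Format a 32-bit value as dotted binary, octet by octet. */
    static String toBinary(int address) {
        StringBuilder sb = new StringBuilder();
        for (int shift = 24; shift >= 0; shift -= 8) {
            int octet = (address >>> shift) & 0xFF;
            sb.append(String.format("%8s", Integer.toBinaryString(octet)).replace(' ', '0'));
            if (shift > 0) sb.append('.');
        }
        return sb.toString();
    }

    static String toDotted(int address) {
        return ((address >>> 24) & 0xFF) + "." + ((address >>> 16) & 0xFF) + "."
                + ((address >>> 8) & 0xFF) + "." + (address & 0xFF);
    }

    public static void main(String[] args) {
        String host = "192.168.10.77";
        int prefix = 26;                               // /26 -> 255.255.255.192
        int mask = prefix == 0 ? 0 : -1 << (32 - prefix);
        int addr = toInt(host);

        System.out.println("Host:    " + toBinary(addr));
        System.out.println("Mask:    " + toBinary(mask));
        System.out.println("Network: " + toDotted(addr & mask) + "/" + prefix);
        // Prints: Network: 192.168.10.64/26
    }
}
```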

EECO Asks Why Podcast
Unlocking the Power of Ethernet in Industrial Manufacturing

EECO Asks Why Podcast

Play Episode Listen Later Mar 4, 2025 21:47 Transcription Available


This episode highlights Ethernet's essential role in smart manufacturing, emphasizing its reliability, scalability, and interoperability as key components of modern industrial connectivity. Listeners will gain insights into Ethernet's network levels, topologies, and the importance of choosing the right communication protocol for their manufacturing needs. • Ethernet enhances real-time data flow from the factory floor to business systems • Four primary network levels with distinct functions explained • Overview of network topologies: star, ring, trunk • Introduction to the OSI model and its relevance to Ethernet • Comparison of Ethernet and PROFINET protocols • The importance of interoperability in industrial networks • Future outlook on Ethernet as a cornerstone of industrial connectivity. Remember to keep asking why... Smart Manufacturing Resources: Smart Manufacturing Guide; read our latest article on Smart Manufacturing here: https://eecoonline.com/inspire/smart_manufacture_concept. Online Account Registration: Video Explanation of Registering for an Account; Register for an Account. Other Resources to help with your journey: Installed Asset Analysis Support; System Planning Support; Schedule your Visit to a Lab in North or South Carolina; Schedule your Visit to a Lab in Virginia. Submit your questions and feedback to: podcast@eecoaskwhy.com. Follow EECO on LinkedIn. Host: Chris Grainger
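For listeners who want a concrete anchor for the OSI discussion, here is a minimal, illustrative Python sketch (not from the episode) that encodes the seven layers as a lookup table with common examples. The example technologies are generic placeholders of our own; where a specific industrial protocol such as PROFINET sits in the stack is worth confirming against its own specification.

# The seven OSI layers as a simple lookup table, listed top to bottom.
OSI_LAYERS = {
    7: ("Application",  "HTTP, OPC UA"),
    6: ("Presentation", "TLS, data encoding"),
    5: ("Session",      "connection management"),
    4: ("Transport",    "TCP, UDP"),
    3: ("Network",      "IP, routing"),
    2: ("Data Link",    "Ethernet frames, MAC addresses, VLANs"),
    1: ("Physical",     "copper, fiber, connectors"),
}

def describe(layer: int) -> str:
    name, examples = OSI_LAYERS[layer]
    return f"Layer {layer} ({name}): {examples}"

if __name__ == "__main__":
    for layer in sorted(OSI_LAYERS, reverse=True):
        print(describe(layer))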

LEO Radio
Episode 32: EXPERT ANALYSIS of an Officer Involved Shooting

LEO Radio

Play Episode Listen Later Mar 3, 2025 48:59


Lieutenant Joey Sperlazza, Ed.S. is joined by J. Harris Academy of Police Training Instructor Lieutenant Collin Congleton, JD in LEO Radio Episode 32. Collin is a subject matter expert in Use of Force. Collin founded and now supervises the Force Investigation Unit for a major urban Police Department in New Jersey, is a United States Marine Corps combat veteran of Operation Iraqi Freedom and a nearly 20 year veteran law enforcement professional. He earned a B.S. in National Security Studies (Summa Cum Laude) from New Jersey City University and a Juris Doctorate from Rutgers School of Law (Newark) where he was awarded the prestigious Judge J. Skelly Wright Prize for work in civil rights. He also completed the Certified Public Manager Program at Rutgers School of Public Affairs and Administration, where his research work on police training reform was nominated for the nationally recognized American Academy of CPM's Askew Award. Some of his certifications include: FBI certified hostage and crisis negotiator; Force Science Institute certified force analyst; Gracie University certified Instructor of Gracie Survival Tactics (GST) Levels I & II; NJ PTC certified firearms instructor; NJ certified ICAT and ABLE instructor. Collin is also heavily involved in volunteer work supporting veterans and is an avid practitioner of Brazilian Jiujitsu. Full bios for Joey and Collin are available at JharrisTraining.com. Joey and Collin analyze three incidents in this episode of LEO Radio: • NYPD Officer Involved Shooting of Christopher Ferguson: The New York State Attorney General Office of Special Investigation is currently conducting an investigation into the death of Christopher Ferguson who died on December 5, 2024 following an encounter with members of the New York City Police Department (NYPD 77th Precinct) in Brooklyn. OSI has released videos from body-worn cameras that officers were equipped with during the incident and video from a dashboard camera. The full footage is available on the NYC AG website (https://ag.ny.gov/osi/footage/christopher-ferguson) and on the NYPD YouTube channel (https://youtu.be/I1oAPLhD9GU?si=JsdFy0uw8TZNIw-h) • Cranford, NJ Police Officer Involved Shooting of Kevin Harlfinger: According to the preliminary investigation, officers from the Cranford Police Department were pursuing a stolen vehicle when it crashed in the area of a southbound exit on the Garden State Parkway. After the crash, the driver of the vehicle, Mr. Harlfinger, exchanged gunfire with Cranford Police Officers Christopher Folinusz and Ryan McSharry. Mr. Harlfinger was pronounced deceased at the scene at approximately 12:04 a.m., December 29, 2023. A firearm was recovered near Mr. Harlfinger. Officer Folinusz sustained a non-fatal gunshot wound, was treated at a hospital, and later released. The full footage is available on the NJ AG Website: https://njoag.app.box.com/s/q0so00a1u4lbh1x005yy7gch4nuewvf5 • Cleveland Police Officer Involved Shooting of Tamir Rice: On November 22, 2014, 12-year-old Tamir Rice, an African-American boy, was fatally shot by Cleveland police officer Timothy Loehmann while playing with a toy gun at a recreation center; the incident sparked widespread controversy due to the rapid shooting despite the caller mentioning the gun was likely fake and the victim's young age, highlighting issues of police brutality and racial profiling within the Black Lives Matter movement. Tamir Rice was playing with a replica airsoft pistol at a park when someone called 911 to report a person with a gun. 
The video was released by the Cleveland Police Department, and the incident is well covered with dispatch and 911 audio by News 5 Cleveland: https://www.news5cleveland.com/news/local-news/cleveland-metro/tamir-rice-shooitng-video-warning-cleveland-police-release-surveillance-video LEO Radio is a podcast produced by the J. Harris Academy of Police Training. As always, the items described in this podcast are for information only.

Popołudniowa rozmowa w RMF FM
Wasyl Bodnar o umowie USA z Ukrainą: Nie będzie dotyczyć tylko minerałów

Popołudniowa rozmowa w RMF FM

Play Episode Listen Later Feb 26, 2025 22:47


"Osiągnęliśmy porozumienie. Przez tydzień trwała rozmowa na poziomie ministra ds. gospodarki z naszej strony i ds. skarbu ze strony amerykańskiej. Co do pieniędzy, zobaczymy (...). Z mojej wiedzy to będzie inna umowa, nie tylko o minerałach" - powiedział w Popołudniowej rozmowie w RMF FM ambasador Ukrainy w Polsce Wasyl Bodnar, odnosząc się do rozmów z USA dotyczących ewentualnych warunków dalszej pomocy. "Nam dawali trzy dni. Eksperci - miesiąc. Przedwczoraj minęły trzy lata" - dodał dyplomata, który zapewnił również, że Ukraina "w żadnym razie nie zapomniała o pomocy ze strony Polski".

Z pasją o mocnych stronach
#257 Słownik talentów – sezon 2 – Rywalizacja (Competition)

Z pasją o mocnych stronach

Play Episode Listen Later Feb 20, 2025 90:47


People whose talent is Competition (Rywalizacja) measure their progress against the results of others. They always fight for first place and love contests. "Competition is often understood as a lack of cooperation, but it is about something completely different: fuel for action." It would be hard for me to sum up the episode about the Competition talent better than the people who spoke about it did. In our conversation you will hear about a mature Competition talent which, thanks to various talent partnerships, genuinely helps achieve the intended goals. It is worth listening if you have this talent high in your profile, but also if you have someone with the Competition talent around you; you can learn what a mature talent means. I invite you to listen and to join the conversation! Guests of the episode: Katarzyna Ostrowska – works on projects in photography, video, graphic design and marketing, directing her creativity toward the sports industry. Top 5: Zbieranie, Rozwaga, Rywalizacja, Odkrywczość, Wizjoner. Instagram. Jan Dąbrowski – works in the investment real estate industry. Top 5: Ukierunkowanie, Uczenie się, Analityk, Osiąganie, Rywalizacja. LinkedIn, Facebook. Mariusz Szydłowski – builds process architecture (helps process owners optimize their processes). Top 5: CZAR, Komunikatywność, Rywalizacja, Osiąganie, Ukierunkowanie. LinkedIn, Facebook. Intentional newsletter: every week I send out a letter in which I invite you to a conversation and to ask yourself important questions. Links. Summary: Competition in professional life. Dominik: How is the Competition talent visible in your work? […] The post #257 Słownik talentów – sezon 2 – Rywalizacja (Competition) appeared first on Near-Perfect Performance.

BarCode
NeuroVector

BarCode

Play Episode Listen Later Feb 17, 2025 44:57 Transcription Available


In this conversation, Chris Glanden and Matt Canham delve into the realms of deep fakes, cognitive security, and the implications of AI technology on human agency. They explore the nuances of cognitive security, differentiating it from social engineering, and discuss the potential vulnerabilities that arise as AI continues to evolve. The conversation also touches on the OSI model and how cognitive security can be integrated into existing frameworks, highlighting the importance of understanding cognitive attacks and their implications for both humans and AI systems. In this conversation, Chris Glanden and Matt Canham delve into the evolving landscape of cognitive security, emphasizing the need for security practitioners to adopt a systems-thinking approach. They discuss the implications of AI and direct neural interfaces on security practices, the role of DeepSeek in shaping AI interactions, and insights from the Cognitive Security Institute's meetings. The conversation also touches on emerging trends in cognitive warfare and concludes with a thematic drink inspired by the complexities of the field. TIMESTAMPS: 00:00 Introduction to Deep Fakes and AI Technology 02:28 Understanding Cognitive Security 09:58 Differentiating Cognitive Security from Social Engineering 19:05 Exploring the OSI Model and Cognitive Security Layers 21:48 Bringing Security Back to Earth 24:26 The Role of Cognitive Security in Modern Threats 25:02 AI's Impact on Security Practices 30:36 DeepSeek and Its Implications 33:47 Insights from the Cognitive Security Institute 41:28 Emerging Trends in Cognitive Warfare 45:43 The Complexity Cocktail: A Thematic Conclusion SYMLINKS: Dr. Matthew Canham – Home – https://www.canham.ai/ The official website of Dr. Matthew Canham, showcasing his expertise in human–AI integration, cognitive security, and updates on his latest projects and research. Research – https://www.canham.ai/research A dedicated section highlighting Dr. Canham's research initiatives and academic contributions in the field of human–AI integration and cognitive security. LinkedIn – https://www.linkedin.com/ Dr. Canham's professional networking profile where you can learn more about his career achievements, collaborations, and thought leadership. Cognitive Security: Exploring the Human Layer w/ Dr. Matthew Canham | CSI Talks #1 – https://youtu.be/OGmvoj5Dj_A A YouTube video where Dr. Canham elaborates on cognitive security and human–AI integration, aligning closely with the conversation's focus on evolving security threats. Cognitive Security – Army Cyber Institute – https://cyber.army.mil/Research/Research-Labs/Cognitive-Security/ An official U.S. Department of Defense page describing research into cognitive security—protecting decision-making under adversarial conditions. This resource underscores the growing institutional focus on the subject. Apple's Mind-Blowing Invention: AirPods That Can Read Your Thoughts – https://digialps.com/apples-mind-blowing-invention-airpods-that-can-read-your-thoughts/ An article that examines Apple's patent for AirPods designed to detect brain signals, di CONNECT WITH US www.barcodesecurity.com Become a Sponsor Follow us on LinkedIn Tweet us at @BarCodeSecurity Email us at info@barcodesecurity.com
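As a hedged illustration of the "human layer" idea raised in the OSI discussion (our own sketch, not the guest's framework), the Python below treats the cognitive dimension as a hypothetical layer 8 stacked on the classic seven, so cognitive attacks can be triaged alongside network ones. The class name, layer labels, and sample incidents are invented for the example.

from dataclasses import dataclass

OSI_PLUS_COGNITIVE = {
    1: "Physical", 2: "Data Link", 3: "Network", 4: "Transport",
    5: "Session", 6: "Presentation", 7: "Application",
    8: "Cognitive (human perception, trust, decision-making)",  # hypothetical extension
}

@dataclass
class Incident:
    description: str
    layer: int  # the layer the attack primarily targets

    def summary(self) -> str:
        return f"{self.description} -> layer {self.layer}: {OSI_PLUS_COGNITIVE[self.layer]}"

if __name__ == "__main__":
    for incident in [
        Incident("SYN flood against a public web server", 4),
        Incident("Deepfake voice call impersonating the CFO", 8),
    ]:
        print(incident.summary())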

Packet Pushers - Full Podcast Feed
N4N012: Russ White On Why We Need Network Models

Packet Pushers - Full Podcast Feed

Play Episode Listen Later Feb 6, 2025 49:21


Why do we need network models such as OSI? Network architect and author Russ White joins Holly and Ethan to talk about how network models can help engineers ask intelligent questions and understand networking problems. And OSI isn't your only option – Russ digs into the RINA model and how it compares to OSI (which we covered... Read more »
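As a rough, hedged illustration of the contrast the episode explores (our own sketch, not Russ White's material), the Python below lists OSI's seven distinct layers next to RINA's idea of a single kind of layer repeated recursively at different scopes; the scope names are invented for the example.

# OSI: a fixed stack of seven different layers.
OSI = ["Physical", "Data Link", "Network", "Transport",
       "Session", "Presentation", "Application"]

def rina_stack(scopes):
    # RINA, very loosely: the same kind of layer (a DIF) repeated once per scope.
    return [f"DIF over scope: {scope}" for scope in scopes]

if __name__ == "__main__":
    print("OSI layers:", " / ".join(OSI))
    print("RINA-style layers:", " / ".join(
        rina_stack(["point-to-point link", "campus network", "provider backbone"])))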

Packet Pushers - Fat Pipe
N4N012: Russ White On Why We Need Network Models

Packet Pushers - Fat Pipe

Play Episode Listen Later Feb 6, 2025 49:21


Why do we need network models such as OSI? Network architect and author Russ White joins Holly and Ethan to talk about how network models can help engineers ask intelligent questions and understand networking problems. And OSI isn't your only option – Russ digs into the RINA model and how it compares to OSI (which we covered... Read more »

Consciousness Is All There Is
Dr. Tony Nader and Osi Atikpoh Discuss a Groundbreaking Perspective on Consciousness

Consciousness Is All There Is

Play Episode Listen Later Feb 4, 2025 49:33


Episode hosted by Osi Atikpoh of A Song Called Life Podcast. In Episode #215, neuroscientist Tony Nader joins Osi to discuss a groundbreaking perspective on consciousness, exploring how it serves as the essence of all existence and offering practical tools and methods to connect with our authentic inner nature. Tony Nader, M.D., Ph.D., MARR, is a medical doctor trained at Harvard University and Massachusetts Institute of Technology (Ph.D. in neuroscience), and a globally recognised Vedic scholar. As Maharishi Mahesh Yogi's successor, Dr Nader is head of the international Transcendental Meditation® organisations in over 100 countries. From the Americas to Asia, from Europe to Africa, Dr Nader guides the Transcendental Meditation program and its advanced practices, and the practical applications of this technology in all areas of national life – education, health, business, defense, agriculture, and more. Connect with Osi Atikpoh: iTunes https://podcasts.apple.com/us/podcast/a-song-called-life/id1562899908 Spotify https://creators.spotify.com/pod/show/osi-atikpoh Instagram https://www.instagram.com/asongcalledlifepodcast To order Dr Tony Nader's book Consciousness Is All There Is: https://www.drnaderbooks.com or use your favorite bookseller. Website: https://www.drtonynader.com Instagram https://www.instagram.com/drtonynader Facebook https://www.facebook.com/DrTonyNader YouTube https://www.youtube.com/@DrTonyNader X (Twitter) https://twitter.com/DrTonyNader TikTok https://www.tiktok.com/@drtonynader Linked In https://www.linkedin.com/in/drtonynader Transcendental Meditation website https://www.tm.org Link to original episode: iTunes https://podcasts.apple.com/us/podcast/episode-215-dr-tony-nader/id1562899908?i=1000683778810 Spotify https://creators.spotify.com/pod/show/osi-atikpoh/episodes/Episode-215-Dr--Tony-Nader-e2td2qv

OpenObservability Talks
Open Source AI: Perspectives from the OSI - OpenObservability Talks S5E08

OpenObservability Talks

Play Episode Listen Later Jan 30, 2025 61:06


We all know pretty well what open source means and what AI means. But what does open source AI mean? Is there even such a thing? Join us for an intriguing episode as we host Stefano Maffulli, Executive Director of the Open Source Initiative (OSI). The OSI has been the steward of the Open Source Definition for over two decades, and has recently launched its first draft of the Open Source AI Definition—OSAID 1.0—following intense community discussions, and amid corporate and governmental policy making and opinion shaping efforts. In this engaging fireside chat, host Dotan Horovits and Stefano delve into the evolving role of open source in the age of AI. Stefano shares insights into how open source principles are being applied to artificial intelligence, the challenges of defining openness in this rapidly advancing field, and the impact of OSI's work in shaping the future of AI innovation. Whether you're a developer, data scientist, FOSS advocate, or simply curious about the intersection of open source and AI, this is an episode you won't want to miss! Stefano Maffulli is a leader in the global Open Source community and has experience leading global projects across organisations, partners, and countries. Stefano is a skilled writer and speaker on community building and an active Open Source contributor. The episode was live-streamed on 21 January 2025, and the video is available at https://youtube.com/live/DIcv2YbFC6c You can read the recap blog here: https://medium.com/p/5ba89e11c26d OpenObservability Talks episodes are released monthly, on the last Thursday of each month and are available for listening on your favorite podcast app and on YouTube. We live-stream the episodes on Twitch and YouTube Live - tune in to see us live, and chime in with your comments and questions on the live chat. ⁠⁠https://www.youtube.com/@openobservabilitytalks⁠   https://www.twitch.tv/openobservability⁠ Show Notes: 00:44 - Episode and guest intro 02:35 - What's the OSI's role today 07:33 - Trend of open source projects turning to the dark side 17:38 - AI challenges the open source definition 20:39 - How is open source different in AI than in software? 25:41 - The broken social contract of data 34:17 - White paper by the OSI and Open Future 40:23 - Open Source AI Definition v1.0, industry feedback and roadmap 49:55 - The EU AI Act and legislative work for open source AI 55:31 - What's next from the OSI Resources: Open Source Initiative (OSI): https://opensource.org/ Open Future and OSI - white paper on open source AI: https://opensource.org/data-governance-open-source-ai Mozilla and Eleuther - white paper on open source AI: https://foundation.mozilla.org/en/research/library/towards-best-practices-for-open-datasets-for-llm-training/ Dark Side of Open Source: The Community Strikes Back: https://medium.com/p/400a1a5c679e Socials: Twitter:⁠ https://twitter.com/OpenObserv⁠ YouTube: ⁠https://www.youtube.com/@openobservabilitytalks⁠ Dotan Horovits ============ Twitter: https://twitter.com/horovits LinkedIn: https://www.linkedin.com/in/horovits/ Mastodon: https://fosstodon.org/@horovits BlueSky: https://bsky.app/profile/horovits.bsky.social Stefano Maffulli ============= Linkedin: https://www.linkedin.com/in/maffulli Mastodon: @ed@opensource.org Bluesky: @maffulli.net Threads: smaffulli

Cyber Work
CompTIA Network+: Is it necessary for a cybersecurity career? | Guest Tommy Gober

Cyber Work

Play Episode Listen Later Jan 27, 2025 10:45


Get your FREE Cybersecurity Salary Guide: https://www.infosecinstitute.com/form/cybersecurity-salary-guide-podcast/?utm_source=youtube&utm_medium=podcast&utm_campaign=podcast Infosec Instructor Tommy Gober joins Cyber Work Hacks to discuss the CompTIA Network+ certification. Despite often being bypassed in favor of Security+, Gober explains why Network+ is fundamental for a robust cybersecurity knowledge base. Learn about critical networking concepts like the OSI model, IP addresses and protocols, which are vital for understanding how cyberattacks work. Discover how strengthening your networking proficiency can enhance your cybersecurity career, even if you don't aim to become a network admin. Gober also shares top tips for excelling in the Network+ exam, including mastering port numbers and subnetting. Don't miss this enriching episode designed to boost your cybersecurity skills! 0:00 Introduction 0:50 Cybersecurity salary ebook 1:44 Overview of Network+ certification 2:55 Deep dive into networking concepts 5:15 Integrating Network+ with Security+ 7:03 Essential networking skills for cybersecurity 9:03 Top tips for Network+ exam preparation 10:02 Final thoughts – View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast/?utm_source=youtube&utm_medium=podcast&utm_campaign=podcast About Infosec: Infosec's mission is to put people at the center of cybersecurity. We help IT and security professionals advance their careers with skills development and certifications while empowering all employees with security awareness and phishing training to stay cyber-safe at work and home. More than 70% of the Fortune 500 have relied on Infosec Skills to develop their security talent, and more than 5 million learners worldwide are more cyber-resilient from Infosec IQ's security awareness training. Learn more at infosecinstitute.com.
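The exam tips on port numbers and subnetting can be checked with nothing more than Python's standard library. This is an illustrative sketch, not material from the episode; the sample CIDR block and the handful of ports listed are our own placeholders.

import ipaddress

# A few well-known ports worth memorizing; extend as needed.
WELL_KNOWN_PORTS = {22: "SSH", 53: "DNS", 80: "HTTP", 443: "HTTPS", 3389: "RDP"}

def subnet_facts(cidr: str) -> dict:
    # Return the facts a typical subnetting question asks for.
    net = ipaddress.ip_network(cidr, strict=False)
    return {
        "network": str(net.network_address),
        "broadcast": str(net.broadcast_address),
        "netmask": str(net.netmask),
        "usable_hosts": max(net.num_addresses - 2, 0),
    }

if __name__ == "__main__":
    print(subnet_facts("192.168.10.37/26"))  # network 192.168.10.0, 62 usable hosts
    print(WELL_KNOWN_PORTS[443])             # HTTPS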

Zināmais nezināmajā
Zinātnieki iemācījušies mākslīgi iegūt zirnekļa zīdu

Zināmais nezināmajā

Play Episode Listen Later Jan 23, 2025 40:57


For several decades now, scientists have been looking for a way to imitate something that nature produces so seemingly effortlessly: spider silk. What does this animal so skilfully build its webs from? It has been said more than once that this biomaterial outperforms even steel and the toughest human-made polymers in both stiffness and elasticity. How can natural spider silk be produced artificially, and what purpose would it serve? In this episode of Zināmais nezināmajā the question is explained by Kristaps Jaudzems, dean of the Faculty of Medicine and Life Sciences of the University of Latvia, professor, chemist and leading researcher at the Latvian Institute of Organic Synthesis (OSI); Viktors Romaņuks, doctoral student at the Faculty of Medicine and Life Sciences of the University of Latvia and research assistant at the Institute of Organic Synthesis; and Jekaterina Romanova, head of the startup "PrintyMed". Spider researcher Inese Cera tells more about how a spider spins its web and what it consists of. Spider silk is one of the most durable natural materials, and for several decades scientists have been actively studying its properties and looking for ways to imitate it. This biomaterial stands out for its unique combination of stiffness and elasticity, which exceeds even the properties of Kevlar and steel. Moreover, spider silk is biocompatible and biodegradable, making it a promising material not only for materials science but also for the medical and cosmetics industries. However, obtaining spider silk proteins (spidroins) from spiders at industrial scale is impossible because of their cannibalistic and territorial lifestyle. This limitation has driven the development of alternative, biotechnological methods for producing spidroins. OSI and its collaboration partners, led by Kristaps Jaudzems, have developed an innovative method for producing artificial spider silk fibres with improved physical properties, whose elasticity is almost twice that of natural spider silk. The Latvian Academy of Sciences has recognised this discovery as one of the most outstanding achievements in science in 2024. The study was carried out in collaboration between researchers at the Latvian Institute of Organic Synthesis, the University of Latvia, Riga Technical University and the Karolinska Institute in Sweden. Before we learn how the scientists managed to create spider silk, the science news: beans on Mars, a brain implant that flies a virtual drone, the history of the US presidential inauguration ceremony, and events in Latvian science, including ornithology. Bean-like geological formations spotted in satellite images of Mars could point to signs of water and life. Hearing that sentence, some may ask: "What beans on Mars?" The photographs, which have appeared on the web more than once over the past week, show objects remarkably similar to edible beans, but they are not beans at all: they are geological formations that are in fact frozen sand dunes. The image can also be viewed on the Live Science website. A brain implant allows a paralysed person to control a virtual drone with the power of thought. The person in question has tetraplegia, that is, paralysis of all four limbs. So, just by looking at the drone on a screen, the person simply thinks about moving their fingers. The person's brain signals are interpreted by an artificial intelligence model, which is then used to steer a simulated drone. For a better picture of this, see the Nature website. America's inauguration traditions began many centuries ago. Another significant event this week is the inauguration ceremony of the 47th US president, Donald Trump, and a topic worthy of a whole study in its own right is the history of inauguration traditions. The story can be found on the National Geographic website. As noted there, even though inauguration day is usually celebrated with various parades and balls, it is not a story only about splendour. US history has shown that even after contested elections, even during the Civil War, the presidential inauguration has been the main symbol of the peaceful transfer of power, and it is one of the foundations of America's constitutional democracy. More about the evolution of inauguration traditions can be read in the pages of National Geographic. On the weekend everyone is invited to spend an hour counting wintering birds at feeders: from 24 to 26 January, the Latvian Ornithological Society invites everyone to take part for one hour in the campaign "Ziņo par putniem dārzā 2025" ("Report on birds in the garden 2025").

Vlevo dole
Reaktor: Konečná sehrála s Maláčovou drsnou partii

Vlevo dole

Play Episode Listen Later Jan 16, 2025 27:37


The courtship between the two left-wing leaders, Communist party chief Kateřina Konečná and SOCDEM chairwoman Jana Maláčová, has collapsed. The dream of creating a broad left-wing coalition, Stačilo!, did not come true; the Social Democrats will be missing from it. On Wednesday, Kateřina Konečná announced the line-up in which the Stačilo! coalition will run in September's parliamentary elections: her KSČM plus two smaller, nationally conservative parties. That is, without Jana Maláčová, even though for months both had publicly declared how much they wanted to reach an agreement. Political reporters Václav Dolejší and Lucie Stuchlíková discuss the turbulent developments on the left in detail in the latest episode of their podcast Vlevo dole. "I have the impression that Konečná was just playing with Maláčová from the beginning, luring her into a trap. She gave her the false impression of how interested she was, but in reality she did not care about the Social Democrats," Dolejší believes. "The Communist leader thereby only took away SOCDEM's time for a restart before the elections. And on top of that she tainted them by flirting with pro-Russian communists." Kateřina Konečná can hardly be blamed, because after last year's successes in the European and regional elections she found out that she does not actually need the Social Democrats at all. The orphaned SOCDEM now faces the decision of which strategy to choose for the elections. "I no longer see many good options, and there is little time; after all, the elections are in eight months. The Social Democrats can run on their own, but then they have no chance. Or they could, say, join forces with liberal left-wing movements such as Budoucnost or Idealisté. But after the courtship with the communists, such a turnaround would not be very credible," says Lucie Stuchlíková. What does Kateřina Konečná have in common with Ivo Kaderka, the former head of the tennis association? Who hacked the young Social Democrats' Facebook? And do you prefer coal barons or agro barons? In any case, listen to the latest Reaktor. ---- Vlevo dole covers political scandals, power struggles and the whispers from the corridors of the Chamber of Deputies. It comes out every Wednesday at noon. The podcast is prepared for you by Lucie Stuchlíková (@StuchlikovLucie) and Václav Dolejší (@VacDol), reporters at Seznam Zprávy. You will find more podcasts, as well as articles, commentary and videos, on the news site Seznam Zprávy. Listen to us on the Seznam Zprávy website, on Podcasty.cz or in your favourite podcast app. You can send your opinions, suggestions, questions, complaints or praise to audio@sz.cz. Follow @SeznamZpravy on social media: Twitter // Facebook // Instagram. Seznam Zprávy is a source of original reporting, independent investigation and original journalism. 

Z pasją o mocnych stronach
#254 Rywalizacja, która pomaga prowadzić firmę – talenty w praktyce

Z pasją o mocnych stronach

Play Episode Listen Later Jan 9, 2025 52:45


I was preparing an episode about the Competition (Rywalizacja) talent when Nataniel Brożnowicz from maturalni.com got in touch and wrote that they use talents every day in their company. He also mentioned an interesting situation: several people on the company's management board have the Competition talent in their Top 5. What does Competition look like in a team? How do they use talents in practice to run the company? I immediately thought this would be an interesting conversation. We arranged a conversation with Nataniel and Sebastian Peret, which you can listen to in this episode. It is interesting, with a few surprises! It is also very practical, and you know I love that. I invite you to listen! Guests: Sebastian Peret – runs three companies and a foundation on a daily basis. He defines himself as a person of action and a community activist. Top 5: Rywalizacja, Strateg, Osiąganie, Bliskość, Organizator. Facebook, LinkedIn. Nataniel Brożnowicz – devotes his whole life to helping young people get into the universities of their dreams. Together with Sebastian he runs maturalni.com. He is an online creator covering topics related to education, academic olympiads and ways to make the most of your youth. Top 5: Rywalizacja, Aktywator, Ukierunkowanie, Wizjoner, Poważanie. Instagram, Maturalni on Instagram. Links. Intentional newsletter: every week I send out a letter in which I invite you to a conversation and to ask yourself important questions. Summary. Dominik: I posted on a Facebook group that I was looking for […] The post #254 Rywalizacja, która pomaga prowadzić firmę – talenty w praktyce appeared first on Near-Perfect Performance.

Hacker Public Radio
HPR4284: HPR Developer Information

Hacker Public Radio

Play Episode Listen Later Jan 2, 2025


This show has been flagged as Clean by the host. Taken from https://repo.anhonesthost.net/HPR/hpr_documentation/src/branch/main/developer_information.md Developer Information. Hacker Public Radio (HPR) is a long-term project run by volunteers. Project Principles: There are a few things you need to be aware of before you decide to contribute to HPR. Our prime directive is that "HPR is dedicated to sharing knowledge". Any software development is done with the goal of supporting the distribution of the podcast media locally, so the shows can be played on as many devices as possible. The priority is to keep the flow of shows coming in and going out, fix any accessibility issues that arise, then work on any other feature requests. Some things we can change without discussion, but for other things we need to get input from the HPR Community. Changes can take a long time: community approval can take several months, while other changes require a lot of work from volunteers who are focused on other priorities. We allow redistribution by releasing all our content under a Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license. In the same vein, all our code is released under aGPLv3, or other OSI approved licenses. We do not track statistics to the detriment of our prime directive. We make the entire delivery ecosystem redundant using native Internet standards and the cooperation of community members. All data is available by default. Community members, sponsors, and hosting platforms will change over time. We have a distrust of online platforms, libraries and niche tools (that we do not support ourselves), as they can and have disappeared overnight. We are very conservative in our choice of tech. As a rule of thumb, all software choices tend to be technology that was developed years ago and is likely to be around for years to come. We make our code as simple to understand as possible, as our replacements may not have the skill set we do. That said, we move with the times when there is a clear advantage to do so. We run up-to-date, patched, stable software. We have a long tradition of supporting and sharing hacker culture. Any identified vulnerabilities are fixed, with credit if requested. We use RSS as a delivery mechanism, which is by default fault tolerant. Our primary domains HackerPublicRadio.com and HackerPublicRadio.org are registered with different providers, and the DNS is served from different locations. All our code is on Gitea; please clone it locally. Our database is updated frequently; please copy it locally. Our media is served from our Community Content Delivery Network (CCDN). Bug reports and patches are welcome from anyone, without commitment. If you are contributing new code or new technology, we ask that you commit to supporting it for a minimum of two years. This allows the other Janitors time to learn the new tech and support it when you leave. Create an Account: If you're happy with all that, then... in order to contribute you need to create an account, but you also need to notify the admins@hpr either via email, Mastodon, or Matrix that you have created an account. Due to the level of spam accounts, we need to approve each one by one. Provide feedback on this episode.
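Because the shows are delivered over plain RSS, a consumer needs nothing beyond a standard library. The sketch below is only an illustration of that point, not an official HPR tool; the feed URL is a placeholder of ours, so substitute whichever feed address the HPR site actually lists.

import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://hackerpublicradio.org/rss-feed.php"  # placeholder; check the HPR site for the real feed address

def latest_titles(url: str, limit: int = 5):
    # Fetch an RSS feed and return the most recent episode titles.
    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)
    items = tree.findall("./channel/item")
    return [item.findtext("title", default="(untitled)") for item in items[:limit]]

if __name__ == "__main__":
    for title in latest_titles(FEED_URL):
        print(title)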

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Happy holidays! We'll be sharing snippets from Latent Space LIVE! through the break bringing you the best of 2024! We want to express our deepest appreciation to event sponsors AWS, Daylight Computer, Thoth.ai, StrongCompute, Notable Capital, and most of all our LS supporters who helped fund the venue and A/V production!For NeurIPS last year we did our standard conference podcast coverage interviewing selected papers (that we have now also done for ICLR and ICML), however we felt that we could be doing more to help AI Engineers 1) get more industry-relevant content, and 2) recap 2024 year in review from experts. As a result, we organized the first Latent Space LIVE!, our first in person miniconference, at NeurIPS 2024 in Vancouver.Since Nathan Lambert ( Interconnects ) joined us for the hit RLHF 201 episode at the start of this year, it is hard to overstate how much Open Models have exploded this past year. In 2023 only five names were playing in the top LLM ranks, Mistral, Mosaic's MPT, TII UAE's Falcon, Yi from Kai-Fu Lee's 01.ai, and of course Meta's Llama 1 and 2. This year a whole cast of new open models have burst on the scene, from Google's Gemma and Cohere's Command R, to Alibaba's Qwen and Deepseek models, to LLM 360 and DCLM and of course to the Allen Institute's OLMo, OL MOE, Pixmo, Molmo, and Olmo 2 models. We were honored to host Luca Soldaini, one of the research leads on the Olmo series of models at AI2.Pursuing Open Model research comes with a lot of challenges beyond just funding and access to GPUs and datasets, particularly the regulatory debates this year across Europe, California and the White House. We also were honored to hear from and Sophia Yang, head of devrel at Mistral, who also presented a great session at the AI Engineer World's Fair Open Models track!Full Talk on YouTubePlease like and subscribe!Timestamps* 00:00 Welcome to Latent Space Live * 00:12 Recap of 2024: Best Moments and Keynotes * 01:22 Explosive Growth of Open Models in 2024 * 02:04 Challenges in Open Model Research * 02:38 Keynote by Luca Soldani: State of Open Models * 07:23 Significance of Open Source AI Licenses * 11:31 Research Constraints and Compute Challenges * 13:46 Fully Open Models: A New Trend * 27:46 Mistral's Journey and Innovations * 32:57 Interactive Demo: Lachat Capabilities * 36:50 Closing Remarks and NetworkingTranscriptSession3Audio[00:00:00] AI Charlie: Welcome to Latent Space Live, our first mini conference held at NeurIPS 2024 in Vancouver. This is Charlie, your AI co host. As a special treat this week, we're recapping the best of 2024 going domain by domain. We sent out a survey to the over 900 of you who told us what you wanted, and then invited the best speakers in the latent space network to cover each field.[00:00:28] AI Charlie: 200 of you joined us in person throughout the day, with over 2, 200 watching live online. Our next keynote covers the state of open models in 2024, with Luca Soldani and Nathan Lambert of the Allen Institute for AI, with a special appearance from Dr. Sophia Yang of Mistral. Our first hit episode of 2024 was with Nathan Lambert on RLHF 201 back in January.[00:00:57] AI Charlie: Where he discussed both reinforcement learning for language [00:01:00] models and the growing post training and mid training stack with hot takes on everything from constitutional AI to DPO to rejection sampling and also previewed the sea change coming to the Allen Institute. 
And to Interconnects, his incredible substack on the technical aspects of state of the art AI training.[00:01:18] AI Charlie: We highly recommend subscribing to get access to his Discord as well. It is hard to overstate how much open models have exploded this past year. In 2023, only five names were playing in the top LLM ranks. Mistral, Mosaics MPT, and Gatsby. TII UAE's Falcon, Yi, from Kaifu Lee's 01. ai, And of course, Meta's Lama 1 and 2.[00:01:43] AI Charlie: This year, a whole cast of new open models have burst on the scene. From Google's Jemma and Cohere's Command R, To Alibaba's Quen and DeepSeq models, to LLM360 and DCLM, and of course, to the Allen Institute's OLMO, [00:02:00] OLMOE, PIXMO, MOLMO, and OLMO2 models. Pursuing open model research comes with a lot of challenges beyond just funding and access to GPUs and datasets, particularly the regulatory debates this year across Europe.[00:02:14] AI Charlie: California and the White House. We also were honored to hear from Mistral, who also presented a great session at the AI Engineer World's Fair Open Models track. As always, don't forget to check the show notes for the YouTube link to their talk, as well as their slides. Watch out and take care.[00:02:35] Luca Intro[00:02:35] Luca Soldaini: Cool. Yeah, thanks for having me over. I'm Luca. I'm a research scientist at the Allen Institute for AI. I threw together a few slides on sort of like a recap of like interesting themes in open models for, for 2024. Have about maybe 20, 25 minutes of slides, and then we can chat if there are any questions.[00:02:57] Luca Soldaini: If I can advance to the next slide. [00:03:00] Okay, cool. So I did the quick check of like, to sort of get a sense of like, how much 2024 was different from 2023. So I went on Hugging Face and sort of get, tried to get a picture of what kind of models were released in 2023 and like, what do we get in 2024?[00:03:16] Luca Soldaini: 2023 we get, we got things like both LLAMA 1 and 2, we got Mistral, we got MPT, Falcon models, I think the YI model came in at the end. Tail end of the year. It was a pretty good year. But then I did the same for 2024. And it's actually quite stark difference. You have models that are, you know, reveling frontier level.[00:03:38] Luca Soldaini: Performance of what you can get from closed models from like Quen, from DeepSeq. We got Llama3. We got all sorts of different models. I added our own Olmo at the bottom. There's this growing group of like, Fully open models that I'm going to touch on a little bit later. But you know, just looking at the slides, it feels like 2024 [00:04:00] was just smooth sailing, happy knees, much better than previous year.[00:04:04] Luca Soldaini: And you know, you can plot you can pick your favorite benchmark Or least favorite, I don't know, depending on what point you're trying to make. And plot, you know, your closed model, your open model and sort of spin it in ways that show that, oh, you know open models are much closer to where closed models are today versus to Versus last year where the gap was fairly significant.[00:04:29] Luca Soldaini: So one thing that I think I don't know if I have to convince people in this room, but usually when I give this talks about like open models, there is always like this background question in, in, in people's mind of like, why should we use open models? APIs argument, you know, it's, it's. 
Just an HTTP request to get output from a, from one of the best model out there.[00:04:53] Luca Soldaini: Why do I have to set up infra and use local models? And there are really like two answer. There is the more [00:05:00] researchy answer for this, which is where it might be. Background lays, which is just research. If you want to do research on language models, research thrives on, on open models, there is like large swath of research on modeling, on how these models behave on evaluation and inference on mechanistic interpretability that could not happen at all if you didn't have open models they're also for AI builders, they're also like.[00:05:30] Luca Soldaini: Good use cases for using local models. You know, you have some, this is like a very not comprehensive slides, but you have things like there are some application where local models just blow closed models out of the water. So like retrieval, it's a very clear example. We might have like constraints like Edge AI applications where it makes sense.[00:05:51] Luca Soldaini: But even just like in terms of like stability, being able to say this model is not changing under the hood. It's, there's plenty of good cases for, [00:06:00] for open models. And the community is just not models. Is I stole this slide from one of the Quent2 announcement blog posts. But it's super cool to see like how much tech exists around open models and serving them on making them efficient and hosting them.[00:06:18] Luca Soldaini: It's pretty cool. And so. It's if you think about like where the term opens come from, comes from like the open source really open models meet the core tenants of, of open, of open source specifically when it comes around collaboration, there is truly a spirit, like through these open models, you can build on top of other people.[00:06:41] Luca Soldaini: innovation. We see a lot of these even in our own work of like, you know, as we iterate in the various versions of Alma it's not just like every time we collect from scratch all the data. No, the first step is like, okay, what are the cool data sources and datasets people have put [00:07:00] together for language model for training?[00:07:01] Luca Soldaini: Or when it comes to like our post training pipeline We one of the steps is you want to do some DPO and you use a lot of outputs of other models to improve your, your preference model. So it's really having like an open sort of ecosystem benefits and accelerates the development of open models.[00:07:23] The Definition of Open Models[00:07:23] Luca Soldaini: One thing that we got in 2024, which is not a specific model, but I thought it was really significant, is we first got we got our first open source AI definition. So this is from the open source initiative they've been generally the steward of a lot of the open source licenses when it comes to software and so they embarked on this journey in trying to figure out, okay, How does a license, an open source license for a model look like?[00:07:52] Luca Soldaini: Majority of the work is very dry because licenses are dry. So I'm not going to walk through the license step by [00:08:00] step, but I'm just going to pick out one aspect that is very good and then one aspect that personally feels like it needs improvement on the good side. This this open source AI license actually.[00:08:13] Luca Soldaini: This is very intuitive. 
If you ever build open source software and you have some expectation around like what open source looks like for software for, for AI, sort of matches your intuition. So, the weights need to be fairly available the code must be released with an open source license and there shouldn't be like license clauses that block specific use cases.[00:08:39] Luca Soldaini: So. Under this definition, for example, LLAMA or some of the QUEN models are not open source because the license says you can't use this model for this or it says if you use this model you have to name the output this way or derivative needs to be named that way. Those clauses don't meet open source [00:09:00] definition and so they will not be covered.[00:09:02] Luca Soldaini: The LLAMA license will not be covered under the open source definition. It's not perfect. One of the thing that, um, internally, you know, in discussion with with OSI, we were sort of disappointed is around the language. For data. So you might imagine that an open source AI model means a model where the data is freely available.[00:09:26] Luca Soldaini: There were discussion around that, but at the end of the day, they decided to go with a softened stance where they say a model is open source if you provide sufficient detail information. On how to sort of replicate the data pipeline. So you have an equivalent system, sufficient, sufficiently detailed.[00:09:46] Luca Soldaini: It's very, it's very fuzzy. Don't like that. An equivalent system is also very fuzzy. And this doesn't take into account the accessibility of the process, right? It might be that you provide enough [00:10:00] information, but this process costs, I don't know, 10 million to do. Now the open source definition. Like, any open source license has never been about accessibility, so that's never a factor in open source software, how accessible software is.[00:10:14] Luca Soldaini: I can make a piece of open source, put it on my hard drive, and never access it. That software is still open source, the fact that it's not widely distributed doesn't change the license, but practically there are expectations of like, what we want good open sources to be. So, it's, It's kind of sad to see that the data component in this license is not as, as, Open as some of us would like would like it to be.[00:10:40] Challenges for Open Models[00:10:40] Luca Soldaini: and I linked a blog post that Nathan wrote on the topic that it's less rambly and easier to follow through. One thing that in general, I think it's fair to say about the state of open models in 2024 is that we know a lot more than what we knew in, [00:11:00] in 2023. Like both on the training data, like And the pre training data you curate on like how to do like all the post training, especially like on the RL side.[00:11:10] Luca Soldaini: You know, 2023 was a lot of like throwing random darts at the board. I think 2024, we have clear recipes that, okay, don't get the same results as a closed lab because there is a cost in, in actually matching what they do. But at least we have a good sense of like, okay, this is, this is the path to get state of the art language model.[00:11:31] Luca Soldaini: I think that one thing that it's a downside of 2024 is that I think we are more research constrained in 2023. It feels that, you know, the barrier for compute that you need to, to move innovation along as just being right rising and rising. 
So like, if you go back to this slide, there is now this, this cluster of models that are sort of released by the.[00:11:57] Luca Soldaini: Compute rich club. Membership is [00:12:00] hotly debated. You know, some people don't want to be. Called the rich because it comes to expectations. Some people want to be called rich, but I don't know, there's debate, but like, these are players that have, you know, 10, 000, 50, 000 GPUs at minimum. And so they can do a lot of work and a lot of exploration and improving models that it's not very accessible.[00:12:21] Luca Soldaini: To give you a sense of like how I personally think about. Research budget for each part of the, of the language model pipeline is like on the pre training side, you can maybe do something with a thousand GPUs, really you want 10, 000. And like, if you want real estate of the art, you know, your deep seek minimum is like 50, 000 and you can scale to infinity.[00:12:44] Luca Soldaini: The more you have, the better it gets. Everyone on that side still complains that they don't have enough GPUs. Post training is a super wide sort of spectrum. You can do as little with like eight GPUs as long as you're able to [00:13:00] run, you know, a good version of, say, a LLAMA model, you can do a lot of work there.[00:13:05] Luca Soldaini: You can scale a lot of the methodology, just like scales with compute, right? If you're interested in you know, your open replication of what OpenAI's O1 is you're going to be on the 10K spectrum of our GPUs. Inference, you can do a lot with very few resources. Evaluation, you can do a lot with, well, I should say at least one GPUs if you want to evaluate GPUs.[00:13:30] Luca Soldaini: Open models but in general, like if you are, if you care a lot about intervention to do on this model, which it's my prefer area of, of research, then, you know, the resources that you need are quite, quite significant. Yeah. One other trends that has emerged in 2024 is this cluster of fully open models.[00:13:54] Luca Soldaini: So Omo the model that we built at ai, two being one of them and you know, it's nice [00:14:00] that it's not just us. There's like a cluster of other mostly research efforts who are working on this. And so it's good to to give you a primer of what like fully open means. So fully open, the easy way to think about it is instead of just releasing a model checkpoint that you run, you release a full recipe so that other people working on it.[00:14:24] Luca Soldaini: Working on that space can pick and choose whatever they want from your recipe and create their own model or improve on top of your model. You're giving out the full pipeline and all the details there instead of just like the end output. So I pull up the screenshot from our recent MOE model.[00:14:43] Luca Soldaini: And like for this model, for example, we released the model itself. Data that was trained on, the code, both for training and inference all the logs that we got through the training run, as well as every intermediate checkpoint and like the fact that you release different part of the pipeline [00:15:00] allows others to do really cool things.[00:15:02] Luca Soldaini: So for example, this tweet from early this year from folks in news research they use our pre training data to do a replication of the BitNet paper in the open. So they took just a Really like the initial part of a pipeline and then the, the thing on top of it. 
It goes both ways.[00:15:21] Luca Soldaini: So for example, for the Olmo2 model a lot of our pre trained data for the first stage of pre training was from this DCLM initiative that was led by folks Ooh, a variety of ins a variety of institutions. It was a really nice group effort. But you know, for When it was nice to be able to say, okay, you know, the state of the art in terms of like what is done in the open has improved.[00:15:46] AI2 Models - Olmo, Molmo, Pixmo etc[00:15:46] Luca Soldaini: We don't have to like do all this work from scratch to catch up the state of the art. We can just take it directly and integrate it and do our own improvements on top of that. I'm going to spend a few minutes doing like a [00:16:00] shameless plug for some of our fully open recipes. So indulge me in this.[00:16:05] Luca Soldaini: So a few things that we released this year was, as I was mentioning, there's OMOE model which is, I think still is state of the art MOE model in its size class. And it's also. Fully open, so every component of this model is available. We released a multi modal model called Molmo. Molmo is not just a model, but it's a full recipe of how you go from a text only model to a multi modal model, and we apply this recipe on top of Quent checkpoints, on top of Olmo checkpoints, as well as on top of OlmoE.[00:16:37] Luca Soldaini: And I think there'd be a replication doing that on top of Mistral as well. The post training side we recently released 2. 0. 3. Same story. This is a recipe on how you go from a base model to A state of the art post training model. We use the Tulu recipe on top of Olmo, on top of Llama, and then there's been open replication effort [00:17:00] to do that on top of Quen as well.[00:17:02] Luca Soldaini: It's really nice to see like, you know, when your recipe sort of, it's kind of turnkey, you can apply it to different models and it kind of just works. And finally, the last thing we released this year was Olmo 2, which so far is the best state of the art. Fully open language model a Sera combines aspect from all three of these previous models.[00:17:22] Luca Soldaini: What we learn on the data side from MomoE and what we learn on like making models that are easy to adapt from the Momo project and the Tulu project. I will close with a little bit of reflection of like ways this, this ecosystem of open models like it's not all roses. It's not all happy. It feels like day to day, it's always in peril.[00:17:44] Luca Soldaini: And, you know, I talked a little bit about like the compute issues that come with it. But it's really not just compute. One thing that is on top of my mind is due to like the environment and how you know, growing feelings about like how AI is treated. [00:18:00] It's actually harder to get access to a lot of the data that was used to train a lot of the models up to last year.[00:18:06] Luca Soldaini: So this is a screenshot from really fabulous work from Shane Longpre who's, I think is in Europe about Just access of like diminishing access to data for language model pre training. So what they did is they went through every snapshot of common crawl. Common crawl is this publicly available scrape of the, of a subset of the internet.[00:18:29] Luca Soldaini: And they looked at how For any given website whether a website that was accessible in say 2017, what, whether it was accessible or not in 2024. 
And what they found is as a reaction to like the close like of the existence of closed models like OpenAI or Cloud GPT or Cloud a lot of content owners have blanket Blocked any type of crawling to your website.[00:18:57] Luca Soldaini: And this is something that we see also internally at [00:19:00] AI2. Like one project that we started this year is we wanted to, we wanted to understand, like, if you're a good citizen of the internet and you crawl following sort of norms and policy that have been established in the last 25 years, what can you crawl?[00:19:17] Luca Soldaini: And we found that there's a lot of website where. The norms of how you express preference of whether to crawl your data or not are broken. A lot of people would block a lot of crawling, but do not advertise that in RobustDXT. You can only tell that they're crawling, that they're blocking you in crawling when you try doing it.[00:19:37] Luca Soldaini: Sometimes you can't even crawl the robots. txt to, to check whether you're allowed or not. And then a lot of websites there's, there's like all these technologies that historically have been, have existed to make websites serving easier such as Cloudflare or DNS. They're now being repurposed for blocking AI or any type of crawling [00:20:00] in a way that is Very opaque to the content owners themselves.[00:20:04] Luca Soldaini: So, you know, you go to these websites, you try to access them and they're not available and you get a feeling it's like, Oh, someone changed, something changed on the, on the DNS side that it's blocking this and likely the content owner has no idea. They're just using a Cloudflare for better, you know, load balancing.[00:20:25] Luca Soldaini: And this is something that was sort of sprung on them with very little notice. And I think the problem is this, this blocking or ideas really, it impacts people in different ways. It disproportionately helps companies that have a headstart, which are usually the closed labs and it hurts incoming newcomer players where either have now to do things in a sketchy way or you're never going to get that content that the closed lab might have.[00:20:54] Luca Soldaini: So there's a lot, it was a lot of coverage. I'm going to plug Nathan's blog post again. That is, [00:21:00] that I think the title of this one is very succinct which is like, we're actually not, You know, before thinking about running out of training data, we're actually running out of open training data. And so if we want better open models they should be on top of our mind.[00:21:13] Regulation and Lobbying[00:21:13] Luca Soldaini: The other thing that has emerged is that there is strong lobbying efforts on trying to define any kind of, AI as like a new extremely risky and I want to be precise here. Like the problem is now, um, like the problem is not not considering the risk of this technology. Every technology has risks that, that should always be considered.[00:21:37] Luca Soldaini: The thing that it's like to me is sorry, is ingenious is like just putting this AI on a pedestal and calling it like, An unknown alien technology that has like new and undiscovered potentials to destroy humanity. When in reality, all the dangers I think are rooted in [00:22:00] dangers that we know from existing software industry or existing issues that come with when using software on on a lot of sensitive domains, like medical areas.[00:22:13] Luca Soldaini: And I also noticed a lot of efforts that have actually been going on and trying to make this open model safe. 
I pasted one here from AI2, but there's actually like a lot of work that has been going on on like, okay, how do you make, if you're distributing this model, Openly, how do you make it safe?[00:22:31] Luca Soldaini: How, what's the right balance between accessibility on open models and safety? And then also there's annoying brushing of sort of concerns that are then proved to be unfounded under the rug. You know, if you remember the beginning of this year, it was all about bio risk of these open models.[00:22:48] Luca Soldaini: The whole thing fizzled because as being Finally, there's been like rigorous research, not just this paper from Cohere folks, but it's been rigorous research showing [00:23:00] that this is really not a concern that we should be worried about. Again, there is a lot of dangerous use of AI applications, but this one was just like, A lobbying ploy to just make things sound scarier than they actually are.[00:23:15] Luca Soldaini: So I got to preface this part. It says, this is my personal opinion. It's not my employer, but I look at things like the SP 1047 from, from California. And I think we kind of dodged a bullet on, on this legislation. We, you know, the open source community, a lot of the community came together at the last, sort of the last minute and did a very good effort trying to explain all the negative impact of this bill.[00:23:43] Luca Soldaini: But There's like, I feel like there's a lot of excitement on building these open models or like researching on these open models. And lobbying is not sexy it's kind of boring but it's sort of necessary to make sure that this ecosystem can, can really [00:24:00] thrive. This end of presentation, I have Some links, emails, sort of standard thing in case anyone wants to reach out and if folks have questions or anything they wanted to discuss.[00:24:13] Luca Soldaini: Is there an open floor? I think we have Sophia[00:24:16] swyx: who wants to who one, one very important open model that we haven't covered is Mistral. Ask her on this slide. Yeah, yeah. Well, well, it's nice to have the Mistral person talk recap the year in Mistral. But while Sophia gets set up, does anyone have like, just thoughts or questions about the progress in this space?[00:24:32] Questions - Incentive Alignment[00:24:32] swyx: Do you always have questions?[00:24:34] Quesiton: I'm very curious how we should build incentives to build open models, things like Francois Chollet's ArcPrize, and other initiatives like that. What is your opinion on how we should better align incentives in the community so that open models stay open?[00:24:49] Luca Soldaini: The incentive bit is, like, really hard.[00:24:51] Luca Soldaini: Like, even It's something that I actually, even we think a lot about it internally because like building open models is risky. [00:25:00] It's very expensive. And so people don't want to take risky bets. I think the, definitely like the challenges like our challenge, I think those are like very valid approaches for it.[00:25:13] Luca Soldaini: And then I think in general, promoting, building, so, any kind of effort to participate in this challenge, in those challenges, if we can promote doing that on top of open models and sort of really lean into like this multiplier effect, I think that is a good way to go. If there were more money for that.[00:25:35] Luca Soldaini: For efforts like research efforts around open models. 
There's a lot of, I think there's a lot of investments in companies that at the moment are releasing their model in the open, which is really cool. But it's usually more because of commercial interest and not wanting to support this, like, open models in the long term. It's a really hard problem because I think everyone is operating sort of [00:26:00] in what.[00:26:01] Luca Soldaini: Everyone is at their local maximum, right? In ways that really optimize their position on the market. Global maximum is harder to achieve.[00:26:11] Question2: Can I ask one question? No.[00:26:12] Luca Soldaini: Yeah.[00:26:13] Question2: So I think one of the gaps between the closed and open source models is the multilinguality. So the closed source models like ChatGPT work pretty well on the low resource languages, which is not the same on the open, open source models, right?[00:26:27] Question2: So is it in your plan to improve on that?[00:26:32] Luca Soldaini: I think in general,[00:26:32] Luca Soldaini: yes, is I think it's. I think we'll see a lot of improvements there in, like, 2025. Like, there's groups, like, on the smaller side that are already working on, like, better crawl support, multilingual support. I think what I'm trying to say here is you really want experts[00:26:54] Luca Soldaini: who are actually in those countries, that speak those languages, to [00:27:00] participate in the international community. To give you, like, a very easy example, I'm originally from Italy. I think I'm terribly equipped to build a model that works well in Italian. Because one of the things you need to be able to do is having that knowledge of, like, okay, how do I access, you know, libraries or content that is from this region, that covers this language.[00:27:23] Luca Soldaini: I've been in the US long enough that I no longer know. So, I think that's the efforts that folks in Central Europe, for example, are doing. Around like, okay, let's tap into regional communities. To get access, you know, to bring in collaborators from those areas. I think it's going to be, like, very crucial for getting products there.[00:27:46] Mistral intro[00:27:46] Sophia Yang: Hi everyone. Yeah, I'm super excited to be here to talk to you guys about Mistral. A really short and quick recap of what we have done, what kind of models and products we have released in the [00:28:00] past year and a half. So most of you may have already known that we are a small startup founded about a year and a half ago in Paris, in May 2023. It was founded by three of our co-founders, and in September 2023, we released our first open source model, Mistral 7B. Yeah, how, how many of you have used or heard about Mistral 7B?[00:28:24] Sophia Yang: Hey, pretty much everyone. Thank you. Yeah, it's pretty popular, and our community really loved this model, and in December 2023, we, we released another popular model with the MoE architecture, Mixtral 8x7B. And, oh, going into this year, you can see we have released a lot of things this year.[00:28:46] Sophia Yang: First of all, in February 2024, we released Mistral Small, Mistral Large, and Le Chat, which is our chat interface, I will show you in a little bit. We released an embedding model for, you [00:29:00] know, converting your text into embedding vectors, and all of our models are available on the, the big cloud platforms. So you can use our model on Google Cloud, AWS, Azure, Snowflake, IBM.[00:29:16] Sophia Yang: So very useful for enterprises who want to use our model through the cloud.
And in April and May this year, we released another powerful open source MoE model, Mixtral 8x22B. And we also released our first code model, Codestral, which is amazing at 80 plus languages. And then we provided another fine-tuning service for customization.[00:29:41] Sophia Yang: So because we know the community loves to fine-tune our models, we provide you a very nice and easy option for you to fine-tune our model on our platform. And also we released our fine-tuning code base called mistral-finetune. It's open source, so feel free to take a look.[00:29:58] Sophia Yang: More models. [00:30:00] From July to November this year, we released many, many other models. First of all is the two new small, best small models. We have Ministral 3B, great for deploying on edge devices. We have Ministral 8B; if you used to use Mistral 7B, Ministral 8B is a great replacement with much stronger performance than Mistral 7B.[00:30:25] Sophia Yang: We also collaborated with NVIDIA and open sourced another model, Mistral NeMo 12B, another great model. And just a few weeks ago, we updated Mistral Large with version 2, with updated state of the art features and really great function calling capabilities. It's supporting function calling natively.[00:30:45] Sophia Yang: And we released two multimodal models: Pixtral 12B, which is open source, and Pixtral Large, just amazing models for not only understanding images, but also great at text understanding. So, yeah, a [00:31:00] lot of the image models are not so good at textual understanding, but Pixtral Large and Pixtral 12B are good at both image understanding and textual understanding.[00:31:09] Sophia Yang: And of course, we have models for research. Codestral Mamba is built on the Mamba architecture, and Mathstral is great at working with math problems. So yeah, that's another model.[00:31:29] Sophia Yang: Here's another view of our model offering. We have several premier models, which means these models are mostly available through our API. I mean, all of the models are available through our API, except for Ministral 3B. But for the premier models, they have a special license, the Mistral Research License: you can use it for free for exploration, but if you want to use it for enterprise, for production use, you will need to purchase a license [00:32:00] from us.[00:32:00] Sophia Yang: So on the top row here, we have Ministral 3B and 8B as our premier models. Mistral Small for best, best low latency use cases, Mistral Large is great for your most sophisticated use cases. Pixtral Large is the frontier-class multimodal model. And, and we have Codestral, great for coding, and then again, the Mistral Embed model.[00:32:22] Sophia Yang: And at the bottom, the bottom of the slides here, we have several Apache 2.0 licensed open-weight models. Free for the community to use, and also if you want to fine-tune it, use it for customization, production, feel free to do so. The latest, we have Pixtral 12B. We also have Mistral NeMo, Codestral Mamba and Mathstral, as I mentioned, and we have three legacy models that we don't update anymore.[00:32:49] Sophia Yang: So we recommend you to move to our newer models if you are still using them. And then, just a few weeks ago, [00:33:00] we did a lot of, uh, improvements to our chat interface, Le Chat. How many of you have used Le Chat? Oh, no. Only a few. Okay. I highly recommend Le Chat. It's chat.mistral.ai. It's free to use.[00:33:16] Sophia Yang: It has all the amazing capabilities I'm going to show you right now.
But before that, "le chat" in French means cat. So this is actually a cat logo. You can tell these are the cat eyes. Yeah. So first of all, I want to show you something. Maybe let's, let's take a look at image understanding.[00:33:36] Sophia Yang: So here I have a receipt and I want to ask, just going to get the prompts. Cool. So basically I have a receipt and I said I ordered, I don't know, coffee and the sausage. How much do I owe? Add an 18 percent tip. So hopefully it was able to get the cost of the coffee and the [00:34:00] sausage and ignore the other things.[00:34:03] Sophia Yang: And yeah, I don't really understand this, but I think this is coffee. It's, yeah, nine, eight. And then cost of the sausage, we have 22 here. And then it was able to add the cost, calculate the tip, and all that. Great. So, it's great at image understanding, it's great at OCR tasks. So, if you have OCR tasks, please use it.[00:34:28] Sophia Yang: It's free on Le Chat. It's also available through our API. And also I want to show you a Canvas example. A lot of you may have used Canvas with other tools before. But with Le Chat, it's completely free again. Here, I'm asking it to create a canvas that uses PyScript to execute Python in my browser.[00:34:51] Sophia Yang: Let's see if it works. Import this. Okay, so, yeah, so basically it's executing [00:35:00] Python here. Exactly what we wanted. And the other day, I was trying to ask Le Chat to create a game for me. Let's see if we can make it work. Yeah, the Tetris game. Yep. Let's just get one row. Maybe. Oh no. Okay. All right. You get the idea. I failed my mission. Okay. Here we go. Yay! Cool. Yeah. So as you can see, Le Chat can write, like, code for a simple game pretty easily. And you can ask Le Chat to explain the code. Make updates however you like. Another example. There is a bar here I want to move.[00:35:48] Sophia Yang: Okay, great, okay. And let's go back to another one. Yeah, we also have web search capabilities. Like, you can [00:36:00] ask what's the latest AI news. Image generation is pretty cool. Generate an image about researchers. Okay. In Vancouver? Yeah, it's Black Forest Labs' Flux Pro. Again, this is free, so... Oh, cool.[00:36:19] Sophia Yang: I guess researchers here are mostly from the University of British Columbia. That's smart. Yeah. So this is Le Chat. Please feel free to use it. And let me know if you have any feedback. We're always looking for improvement and we're gonna release a lot more powerful features in the coming years.[00:36:37] Sophia Yang: Thank you. Get full access to Latent Space at www.latent.space/subscribe
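
The crawling norms Luca describes earlier in the episode can be checked mechanically. Below is a minimal Python sketch, using only the standard library, of the robots.txt check a polite crawler can run before fetching a page. The user agent string and URL are illustrative placeholders, and, as Luca notes, CDN- or DNS-level blocking is invisible to a check like this.

# Minimal sketch: ask robots.txt whether a given page may be fetched.
# "ExampleResearchBot" and the URL below are illustrative placeholders.
import urllib.robotparser
from urllib.parse import urljoin, urlparse

def can_crawl(page_url: str, user_agent: str = "ExampleResearchBot") -> bool:
    """Return True if robots.txt permits fetching page_url for this agent.

    If robots.txt itself cannot be fetched (a situation mentioned in the
    episode), we conservatively report False rather than guessing.
    """
    root = "{0.scheme}://{0.netloc}".format(urlparse(page_url))
    parser = urllib.robotparser.RobotFileParser()
    parser.set_url(urljoin(root, "/robots.txt"))
    try:
        parser.read()  # fetches and parses the site's robots.txt
    except OSError:
        return False  # could not even reach robots.txt
    return parser.can_fetch(user_agent, page_url)

if __name__ == "__main__":
    print(can_crawl("https://example.com/some/article"))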

Radio Naukowe
#230 Diabetes – dangerous for the body, hard on the psyche | prof. Agnieszka Szadkowska

Radio Naukowe

Play Episode Listen Later Dec 19, 2024 45:49


Medicine's achievements in diabetes have a bittersweet aftertaste. "We cannot cure the patient, but we can treat them very well," says prof. Agnieszka Szadkowska of the Medical University of Łódź, head of the Department of Pediatrics, Diabetology, Endocrinology and Nephrology, on Radio Naukowe.

Packet Pushers - Full Podcast Feed
N4N005: The Sort-of-Useful OSI Model

Packet Pushers - Full Podcast Feed

Play Episode Listen Later Dec 5, 2024 31:59


Network engineers should be familiar with the Open Systems Interconnection (OSI) model, a framework for understanding network communications. On today’s show, Ethan and Holly delve into each of the model’s seven layers to discuss their functions, associated protocols, and practical implications for modern networking. They also talk about how the OSI model is, in fact,...
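
As a quick reference while listening, here is a small Python sketch enumerating the seven OSI layers with commonly cited example protocols. The examples are illustrative rather than definitive; the session and presentation layer mappings in particular are debated, and real TCP/IP stacks do not map onto the layers cleanly.

# Quick reference for the seven OSI layers discussed in the episode.
OSI_LAYERS = [
    (7, "Application",  ["HTTP", "DNS", "SMTP"]),
    (6, "Presentation", ["TLS (arguably)", "MIME", "character encodings"]),
    (5, "Session",      ["NetBIOS", "RPC session handling"]),
    (4, "Transport",    ["TCP", "UDP"]),
    (3, "Network",      ["IP", "ICMP", "OSPF"]),
    (2, "Data Link",    ["Ethernet", "802.11 Wi-Fi"]),
    (1, "Physical",     ["copper, fiber, and radio signaling"]),
]

for number, name, examples in OSI_LAYERS:
    # Print layers from the top (7) down to the bottom (1).
    print(f"Layer {number}: {name:<12} e.g. {', '.join(examples)}")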

Packet Pushers - Fat Pipe
N4N005: The Sort-of-Useful OSI Model

Packet Pushers - Fat Pipe

Play Episode Listen Later Dec 5, 2024 31:59


Network engineers should be familiar with the Open Systems Interconnection (OSI) model, a framework for understanding network communications. On today’s show, Ethan and Holly delve into each of the model’s seven layers to discuss their functions, associated protocols, and practical implications for modern networking. They also talk about how the OSI model is, in fact,...

NFL: Good Morning Football
Osi Umenyiora talks his attempt to keep Saquon Barkley in NY, his Nigeria to Alabama culture shock, and the Africa to NFL player pipeline

NFL: Good Morning Football

Play Episode Listen Later Dec 4, 2024 45:18 Transcription Available


On the latest Off the Edge with Cam Jordan podcast, Cam is joined by 2-time Super Bowl champ and NFL ambassador to Africa Osi Umenyiora. Osi has worked tirelessly to build the NFL game all across the continent of Africa, and he shares the success stories of players who have made the transition. He and Cam also discuss their roles in the heated New Orleans Saints vs. Atlanta Falcons rivalry. Osi reveals details of a conversation he had with Philadelphia Eagles running back Saquon Barkley to try and keep the MVP candidate in New York. He also shares his journey from Nigeria to Alabama that rivals the movie Coming to America. Osi also explains why his six-sack game vs. the Eagles is not the best game of his career. The Off the Edge with Cam Jordan podcast is a production of the NFL in partnership with iHeart Media. See omnystudio.com/listener for privacy information.

Off the Edge with Cam Jordan
Osi Umenyiora talks his attempt to keep Saquon Barkley in NY, his Nigeria to Alabama culture shock, and the Africa to NFL player pipeline

Off the Edge with Cam Jordan

Play Episode Listen Later Dec 4, 2024 45:18 Transcription Available


On the latest Off the Edge with Cam Jordan podcast, Cam is joined by 2-time Super Bowl champ and NFL ambassador to Africa Osi Umenyiora. Osi has worked tirelessly to build the NFL game all across the continent of Africa, and he shares the success stories of players who have made the transition. He and Cam also discuss their roles in the heated New Orleans Saints vs. Atlanta Falcons rivalry. Osi reveals details of a conversation he had with Philadelphia Eagles running back Saquon Barkley to try and keep the MVP candidate in New York. He also shares his journey from Nigeria to Alabama that rivals the movie Coming to America. Osi also explains why his six-sack game vs. the Eagles is not the best game of his career. The Off the Edge with Cam Jordan podcast is a production of the NFL in partnership with iHeart Media. See omnystudio.com/listener for privacy information.

Late Night Linux
Late Night Linux – Episode 306

Late Night Linux

Play Episode Listen Later Nov 5, 2024 31:20


Linux removes Russian maintainers and bungles the explanation, Flutter is forked due to Google's “labor shortage”, the OSI finally defines open source AI (and we don't take it very seriously), Hollywood uses loads of FOSS, an easy way to help out Home Assistant, and Thunderbird for Android arrives.   News Some Clarity On The Linux...

This Week in Tech (Audio)
TWiT 1004: Embrace Uncertainty - Political Texts, Daylight Saving Time, Digital Ad Market

This Week in Tech (Audio)

Play Episode Listen Later Nov 4, 2024 172:42


The panel discusses the implications of the 2024 US elections to the tech industry Alexa's New AI Brain Is Stuck in the Lab Why do we keep changing our clocks? ChatGPT Search Online political ads--we're almost done! Elon Musk Wants You to Think This Election's Being Stolen Open-source AI must reveal its training data, per new OSI definition Chinese sanctions hit US drone maker supplying Ukraine Digital ad market booms for Big Tech Host: Leo Laporte Guests: Shoshana Weissmann, Nicholas De Leon, and Amy Webb Download or subscribe to this show at https://twit.tv/shows/this-week-in-tech Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsors: bitwarden.com/twit lookout.com NetSuite.com/TWIT ZipRecruiter.com/Twit canary.tools/twit - use code: TWIT

This Week in Tech (Video HI)
TWiT 1004: Embrace Uncertainty - Political Texts, Daylight Saving Time, Digital Ad Market

This Week in Tech (Video HI)

Play Episode Listen Later Nov 4, 2024 172:42


The panel discusses the implications of the 2024 US elections to the tech industry Alexa's New AI Brain Is Stuck in the Lab Why do we keep changing our clocks? ChatGPT Search Online political ads--we're almost done! Elon Musk Wants You to Think This Election's Being Stolen Open-source AI must reveal its training data, per new OSI definition Chinese sanctions hit US drone maker supplying Ukraine Digital ad market booms for Big Tech Host: Leo Laporte Guests: Shoshana Weissmann, Nicholas De Leon, and Amy Webb Download or subscribe to this show at https://twit.tv/shows/this-week-in-tech Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsors: bitwarden.com/twit lookout.com NetSuite.com/TWIT ZipRecruiter.com/Twit canary.tools/twit - use code: TWIT

All TWiT.tv Shows (MP3)
This Week in Tech 1004: Embrace Uncertainty

All TWiT.tv Shows (MP3)

Play Episode Listen Later Nov 4, 2024 172:42


The panel discusses the implications of the 2024 US elections to the tech industry Alexa's New AI Brain Is Stuck in the Lab Why do we keep changing our clocks? ChatGPT Search Online political ads--we're almost done! Elon Musk Wants You to Think This Election's Being Stolen Open-source AI must reveal its training data, per new OSI definition Chinese sanctions hit US drone maker supplying Ukraine Digital ad market booms for Big Tech Host: Leo Laporte Guests: Shoshana Weissmann, Nicholas De Leon, and Amy Webb Download or subscribe to this show at https://twit.tv/shows/this-week-in-tech Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsors: bitwarden.com/twit lookout.com NetSuite.com/TWIT ZipRecruiter.com/Twit canary.tools/twit - use code: TWIT

Radio Leo (Audio)
This Week in Tech 1004: Embrace Uncertainty

Radio Leo (Audio)

Play Episode Listen Later Nov 4, 2024 172:42


The panel discusses the implications of the 2024 US elections to the tech industry Alexa's New AI Brain Is Stuck in the Lab Why do we keep changing our clocks? ChatGPT Search Online political ads--we're almost done! Elon Musk Wants You to Think This Election's Being Stolen Open-source AI must reveal its training data, per new OSI definition Chinese sanctions hit US drone maker supplying Ukraine Digital ad market booms for Big Tech Host: Leo Laporte Guests: Shoshana Weissmann, Nicholas De Leon, and Amy Webb Download or subscribe to this show at https://twit.tv/shows/this-week-in-tech Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsors: bitwarden.com/twit lookout.com NetSuite.com/TWIT ZipRecruiter.com/Twit canary.tools/twit - use code: TWIT

This Week in Google (MP3)
TWiG 792: 5 to 10 People at your Door - Alphabet Earnings, RIP Foursquare, McFlurries

This Week in Google (MP3)

Play Episode Listen Later Oct 31, 2024 168:30


Google Preps AI That Takes Over Computers Researchers say an AI-powered transcription tool used in hospitals invents things no one ever said Jeff: Why Are Liberals Infuriated with the Media? Apples Have Never Tasted So Delicious. Here's Why Instagram saves the best video quality for the most popular content Video game preservationists have lost a legal fight to study games remotely Internet Archive: Vanishing Culture: A Report on Our Fragile Cultural Record Alphabet posts big revenue and profit growth More than a quarter of new code at Google is generated by AI Open-source AI must reveal its training data, per new OSI definition McDonald's Finds an Unlikely Savior to Finally Fix Its McFlurry Machines RIP Foursquare Craig gives CR $5 million for cybersecurity WordPress co-founder Matt Mullenweg says a fork would be 'fantastic' LeCun blasts Musk as the biggest threat to democracy today Workers Say They Were Tricked and Threatened as Part of Elon Musk's Get-Out-the-Vote Effort Trump's Truth Social valued at more than Musk's X after extraordinary rally Masnick on Elon Musk Events TikTok founder becomes China's richest man The Age of Cage Russian court fines Google $20,000,000,000,000,000,000,000,000,000,000,000 McKinsey's 18 next big arenas of competition Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Download or subscribe to this show at https://twit.tv/shows/this-week-in-google. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsors: veeam.com uscloud.com INFO.ACILEARNING.COM/TWIT - code TWIT100 cachefly.com/twit

All TWiT.tv Shows (MP3)
This Week in Google 792: 5 to 10 People at your Door

All TWiT.tv Shows (MP3)

Play Episode Listen Later Oct 31, 2024 168:30


Google Preps AI That Takes Over Computers Researchers say an AI-powered transcription tool used in hospitals invents things no one ever said Jeff: Why Are Liberals Infuriated with the Media? Apples Have Never Tasted So Delicious. Here's Why Instagram saves the best video quality for the most popular content Video game preservationists have lost a legal fight to study games remotely Internet Archive: Vanishing Culture: A Report on Our Fragile Cultural Record Alphabet posts big revenue and profit growth More than a quarter of new code at Google is generated by AI Open-source AI must reveal its training data, per new OSI definition McDonald's Finds an Unlikely Savior to Finally Fix Its McFlurry Machines RIP Foursquare Craig gives CR $5 million for cybersecurity WordPress co-founder Matt Mullenweg says a fork would be 'fantastic' LeCun blasts Musk as the biggest threat to democracy today Workers Say They Were Tricked and Threatened as Part of Elon Musk's Get-Out-the-Vote Effort Trump's Truth Social valued at more than Musk's X after extraordinary rally Masnick on Elon Musk Events TikTok founder becomes China's richest man The Age of Cage Russian court fines Google $20,000,000,000,000,000,000,000,000,000,000,000 McKinsey's 18 next big arenas of competition Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Download or subscribe to this show at https://twit.tv/shows/this-week-in-google. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsors: veeam.com uscloud.com INFO.ACILEARNING.COM/TWIT - code TWIT100 cachefly.com/twit

Radio Leo (Audio)
This Week in Google 792: 5 to 10 People at your Door

Radio Leo (Audio)

Play Episode Listen Later Oct 31, 2024 168:30


Google Preps AI That Takes Over Computers Researchers say an AI-powered transcription tool used in hospitals invents things no one ever said Jeff: Why Are Liberals Infuriated with the Media? Apples Have Never Tasted So Delicious. Here's Why Instagram saves the best video quality for the most popular content Video game preservationists have lost a legal fight to study games remotely Internet Archive: Vanishing Culture: A Report on Our Fragile Cultural Record Alphabet posts big revenue and profit growth More than a quarter of new code at Google is generated by AI Open-source AI must reveal its training data, per new OSI definition McDonald's Finds an Unlikely Savior to Finally Fix Its McFlurry Machines RIP Foursquare Craig gives CR $5 million for cybersecurity WordPress co-founder Matt Mullenweg says a fork would be 'fantastic' LeCun blasts Musk as the biggest threat to democracy today Workers Say They Were Tricked and Threatened as Part of Elon Musk's Get-Out-the-Vote Effort Trump's Truth Social valued at more than Musk's X after extraordinary rally Masnick on Elon Musk Events TikTok founder becomes China's richest man The Age of Cage Russian court fines Google $20,000,000,000,000,000,000,000,000,000,000,000 McKinsey's 18 next big arenas of competition Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Download or subscribe to this show at https://twit.tv/shows/this-week-in-google. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsors: veeam.com uscloud.com INFO.ACILEARNING.COM/TWIT - code TWIT100 cachefly.com/twit

Business of Tech
AI Truthfulness, Open Source Standards, TeamViewer & GitHub Innovations, Cisco's Partner Program

Business of Tech

Play Episode Listen Later Oct 31, 2024 11:51


A new study that reveals how large language models (LLMs) encode truthfulness internally. The research focused on specific response tokens that determine correctness across various models, indicating that LLMs have a structured way of representing truthfulness. This finding could lead to improved reliability in AI outputs, particularly in critical applications like healthcare, where inaccuracies can have serious consequences.The episode also highlights the release of the Open Source AI definition 1.0 by the Open Source Initiative, which aims to clarify what constitutes open-source AI. This new standard requires AI models to disclose detailed information about their design and training data, addressing concerns about transparency in the AI development space. Sobel emphasizes the importance of this definition for IT leaders and developers, as it provides a framework to assess models for true openness, thereby reducing reputational risks and legal liabilities associated with unverified datasets.In addition to these developments, Sobel covers the launch of AI-powered features by TeamViewer, designed to enhance remote support efficiency for IT teams. The new tools, called Session Insights, automatically summarize sessions and provide analytics, which can significantly improve decision-making and handovers. GitHub also announced updates to its coding assistant, GitHub Copilot, which will soon support new large-language models, enhancing developer choice and functionality. Meanwhile, LinkedIn introduced its AI Hiring Assistant to streamline the recruiting process, allowing recruiters to connect with potential candidates more efficiently.Finally, Sobel discusses Cisco's new 360 Partner Program, which aims to modernize infrastructure and enhance the value partners deliver to customers. The program will focus on skill development and solution-based specialization, reflecting a shift in how partners will operate in the evolving tech landscape. The episode concludes with a call for caution regarding the full automation of processes that rely on AI-generated outputs, stressing the need for review and verification policies to mitigate risks associated with AI inaccuracies. Four things to know today00:00 New Study Finds LLMs Encode Truthfulness Internally, Offering Potential to Reduce Hallucinations in AI Responses02:54 OSI's Open Source AI Definition 1.0 Sets New Benchmark for Transparency, Targeting ‘Open in Name Only' Models 04:35 TeamViewer, GitHub, and LinkedIn Launch AI Innovations for IT07:22 Cisco Transitions Partners to Solution-Based Specializations with New Program  Supported by:  https://mspradio.com/engage/  All our Sponsors: https://businessof.tech/sponsors/ Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/Looking for a link from the stories? The entire script of the show, with links to articles, are posted in each story on https://www.businessof.tech/ Support the show on Patreon: https://patreon.com/mspradio/ Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights? Send Dave Sobel a message on PodMatch, here: https://www.podmatch.com/hostdetailpreview/businessoftech Want our stuff? Cool Merch? 
Wear “Why Do We Care?” - Visit https://mspradio.myspreadshop.com Follow us on:LinkedIn: https://www.linkedin.com/company/28908079/YouTube: https://youtube.com/mspradio/Facebook: https://www.facebook.com/mspradionews/Instagram: https://www.instagram.com/mspradio/TikTok: https://www.tiktok.com/@businessoftechBluesky: https://bsky.app/profile/businessoftech.bsky.social