In episode 40 of Informatik für die moderne Hausfrau, I talk with Eveline Prochaska about computer science in the context of healthcare. We look at health data on the one hand and the opportunities of digitalization on the other. We discuss the electronic patient record (ePA), which is set to be rolled out across Germany, and in particular the data it could make available to science. Among other things, Eveline explains which requirements research projects have to meet before they are actually allowed to work with this data. Because we often tend to share our health data very freely, we take a closer look at health apps. Eveline reports on differences in quality, not only with regard to data protection but also to measurements, analyses, and thus the apps' overall functionality. In this context, she explains how apps that are certified as medical devices can reliably support diagnostic processes. We also get insights into Eveline's research in healthcare engineering, in which she digitalized measurement procedures from the clinical field that are used, for example, in occupational and physical therapy. We learn why the result is a success, but not (yet) a market success, and what the certification process for medical devices has to do with it. Finally, Eveline talks about her roundabout path to a doctorate and into academia, about the hurdles and challenges she had to overcome as a first-generation academic, and about the precarious situation of single mothers in the (German and Austrian) academic system.
You can find more information about Eveline Prochaska, including contact options, here: https://www.linkedin.com/in/eveline-prochaska-3a5987209/ You can learn more about Eveline's research here: https://www.researchgate.net/profile/Eveline-Prochaska More on healthcare engineering is available on Eveline's website: http://www.healthcare-engineering.de A recording of a talk at the Chaos Communication Congress 2024 on the security flaws of the electronic patient record is available here: https://media.ccc.de/v/38c3-konnte-bisher-noch-nie-gehackt-werden-die-elektronische-patientenakte-kommt-jetzt-fr-alle All information about the podcast can be found on its website, https://www.informatik-hausfrau.de. To get in touch, send me an email at mail@informatik-hausfrau.de or reach out via social media. On Instagram and Bluesky, the podcast can be found under the handle @informatikfrau (or @informatikfrau.bsky.social). If you enjoy this podcast, please subscribe and leave a positive rating or a short review to help it gain visibility. You can write reviews on Apple Podcasts, for example, or on panoptikum.social. If you would like to support the production of the podcast financially, you can do so via the platform Steady. More information is available here: https://steadyhq.com/de/informatikfrau If you would like to 'throw something in the hat' another way, you can do so (without registration) via the platform Ko-fi: https://ko-fi.com/leaschoenberger This podcast is funded by the Kulturbüro of the City of Dortmund.
Social changes in 2024 and what awaits us in 2025. The main topics are changes in the area of freedom, the significance of artificial intelligence, and how technology gives us new possibilities. Links for the podcast: The cover photo is from the Chaos Communication Congress, used under a Creative Commons Share Alike license from Leah Oswald.
How Marianna made it to the world championship in sauna ceremonies, how the orange summer turned out, how hackers have fun, and what awaits us in 2025. This is the first of a two-part podcast series about 2024 with an outlook on 2025. Links: The cover photo is from the lounge of the Chaos Communication Congress, used under a Creative Commons Share Alike license […]
In this edition of the #heiseshow, Anna Kalinowsky, heise online editor-in-chief Dr. Volker Zota (@DocZet), and Malte Kirchner (@maltekir) discuss, among others, the following topics: - CES 2025: AI and gimmicks? AI dominates CES 2025 – the tech fair in Las Vegas presents itself as a showcase of AI integration, from smart household appliances to autonomous vehicles. Which innovations are actually relevant? And how sustainable is the current AI trend? - U-turn at Meta: Is X 2.0 coming? Meta loosens its moderation guidelines – Mark Zuckerberg announces less moderation and more freedom of expression on Facebook and Instagram. What is behind this strategic realignment? Could the platforms develop the way X (formerly Twitter) did? - 38C3: Assorted hacks – the Chaos Communication Congress addresses critical vulnerabilities, for example in the electronic patient record. How serious are the discovered security flaws? And what impressions did Volker bring back from attending in person? Also back again: a nerd birthday, the WTF of the week, and tricky quiz questions.
38C3 Chaos Communication Congress 2024 - more than 200 talks. We share the most gripping talks on technology, society, and the digital future - from security to utopia. Tune in now!
At 38C3 (the 38th Chaos Communication Congress), a study on the AI grading assistant made by the company Fobizz was presented. There was also discussion of election software. What all of this has to do with each other, and why we keep seeing these phenomena, is what I talk about here... and yes, it gets a bit ranty.
We got together somewhat spontaneously at the 38th Chaos Communication Congress and talked about Lower Decks. Inevitably, that includes Paramount's confusing distribution strategy and communication, but above all we talk about our highlights.
At this year's edition of the Chaos Communication Congress, a major data leak was revealed. Volkswagen Group had collected cars' GPS positions and stored them without end-to-end encryption. The ethical hackers of the Chaos Computer Club obtained the data after discovering that Volkswagen had accidentally published its login credentials for the Amazon AWS cloud service on the web. In total, more than 800,000 electric cars of the Volkswagen, Skoda, Audi, and Cupra (Seat) brands are affected. According to Germany's Spiegel, 68,000 of the cars belong to Swedish owners. The Volkswagen and Cupra cars are hit hardest because for those brands, in violation of its own policy, Volkswagen had stored the GPS positions with 10 cm accuracy. It was therefore no problem for the ethical hackers to see where cars had historically been parked, which in turn revealed everything from a secret Volkswagen lab to the spots where police and intelligence-service vehicles used to park. In the first podcast episode of the year, Peter and Nikka talk about the Volkswagen leak. Peter explains why only specific car models were affected. Nikka explains why the leaked GPS positions must be regarded as personal data and why Volkswagen should not have collected them in the first place. See the full show notes at https://go.nikkasystems.com/podd281.
In the first episode of the year, Christopher reports from the Hamburg remote studio. With two guests, Linus Neumann of the CCC and Prof. Florian Adamsky of Hof University of Applied Sciences, he discusses four current topics that were also the subject of 38C3 talks: the Rowhammer vulnerability in DRAM, the data leak at VW, insecure election software, and fake shops operated from China. - 38C3 talk on FlippyRAM: https://media.ccc.de/v/38c3-ten-years-of-rowhammer-a-retrospect-and-path-to-the-future - FlippyRAM: https://flippyr.am/ - 38C3 talk on the Volkswagen leak: https://media.ccc.de/v/38c3-wir-wissen-wo-dein-auto-steht-volksdaten-von-volkswagen - SRLabs on BogusBazaar: https://www.srlabs.de/blog-post/bogusbazaar - Fake-shop finder of the Verbraucherzentrale: https://www.verbraucherzentrale.de/fakeshopfinder-71560 - 38C3 talk on BogusBazaar: https://media.ccc.de/v/38c3-fake-shops-von-der-stange-bogusbazaar - 38C3 talk on the Thüring test: https://media.ccc.de/v/38c3-der-thring-test-fr-wahlsoftware Members of our security community on heise security PRO get every episode two days early. More info: https://pro.heise.de/passwort
freie-radios.net (Radio Freies Sender Kombinat, Hamburg (FSK))
Three quarters of an hour of talking while walking. https://events.ccc.de/congress/2024/infos/startpage.html
This time we present a very special episode: a live recording from 38C3, the 38th Chaos Communication Congress. This episode covers an update on GPS spoofing and jamming as well as the impact of current crises on flight routes.
Breitband - Medien und digitale Kultur (full episode) - Deutschlandfunk Kultur
The 38th congress of the Chaos Computer Club calls for resistance against security vulnerabilities and authoritarian measures. Topics include strategic litigation as a hack in the courtroom, risk assessment of synthetic biology, and testing chatbots for schools. Genzmer, Jenny; Linß, Vera; Dreier, Jochen www.deutschlandfunkkultur.de, Breitband
Welchering, Peter www.deutschlandfunk.de, Forschung aktuell
Straehler-Pohl, Jörn www.deutschlandfunk.de, Das war der Tag
Forty years ago today, the Chaos Computer Club organized the first hacker congress in Hamburg, known today as the Chaos Communication Congress.
Another round with Thomas. This time we look to the USA, where the intelligence agencies are undergoing a remarkable change of heart regarding encryption; to the EU, where chat control continues to fail (for now); to Hungary, where EU officials are being hacked; to Berlin, where the Federal Agency for Disruptive Innovation held its Funke event for the EUDI wallet prototype competition; to the UN, where support for the Cybercrime Convention is slowly crumbling; and to Hamburg, where the 38th Chaos Communication Congress will take place at the end of the year.
After our excursion into English, we return to our native language and work through the news. After going through the feedback, we follow up with a few status codes and then give current information on the upcoming 38th Chaos Communication Congress. We then discuss at length the looming security package, including our perennial topic of data retention, the electronic patient record, and the upcoming amendment of the 'hacker paragraph'. In addition, a first ruling has been issued in the case of Arne Semsrott, who is trying to establish through the courts that a law against the publication of court documents is unconstitutional. We discuss how both sides argue and how the case is likely to proceed. Finally, we point out the financial plight of epicenter.works and note that we have now been producing Logbuch:Netzpolitik for 13 years.
Recorded in December 2023 at the Sendezentrum podcast table at the 37th Chaos Communication Congress in Hamburg, MacSnider guides you through this episode; Oliver joins in via audio commentary!
Radiotipi is the popular media magazine of Radio-T from Chemnitz. 'From child chancellor Kurz to people's chancellor Kickl - politics in Austria' was the title of a talk at 37C3, the Chaos Communication Congress held in Hamburg in December. Fitting the large antifascist demonstrations in Germany at the beginning of 2024, today we take a look at the development of the Austrian neo-fascist party FPÖ. In their talk, Theresa and Alexander Muigg tell gripping stories from the country where Germany's 'Gröfaz' was born and which today is once again contributing a great deal to the resurgence of neo-Nazis across Europe. This look at current politics in Austria could also be a look into Germany's future, which is why the talk closes with a few tips on how to prevent that. # Radiotipi We produce monthly radio programs about media, hacking, DIY, antifa, climate, peace, and current dance music. Scripts and links for listening back are available at: * http://www.radiotipi.de
After a short break we resume our program and, as was to be expected, we look back in detail at the 37th Chaos Communication Congress, among other things, and give various tips for talks worth catching up on. Naturally, the current AfD debate also takes up space, and we look at Great Britain, where a software bug has caused endless suffering. On top of that, information about cell tower dragnet queries, noyb, umbrellas, and pneumatic tube systems. Everything included, then.
Today we welcome Caspar Clemens Mierau (aka Leitmedium) to our round. Caspar is deeply engaged with the history of computing, so we talk about the recently deceased Niklaus Wirth, but there are also other looks into history, such as Bitcoin and Tetris. Then, of course, we look back at the recent 37th Chaos Communication Congress, talk about a few useful macOS tools, bookkeeping, and quite a lot about the current state of social media. Ralf also reports on his great new job, and we share what we expect from the Apple Vision Pro.
In this episode I introduce a new segment, the Haltungsblock. If it agitates you too much while you are falling asleep, skip it and perhaps listen to it the next morning. I also share my impressions of the 37th Chaos Communication Congress, 37c3 for short.
Andi and Thomas attended 37C3, the 37th Chaos Communication Congress, and talked, among other things, about the 1995 film Hackers, though not exhaustively, because Chaos itself was also somewhat of a topic.
Final episode from Hamburg, from the 37th Chaos Communication Congress, with Maja Göpel, political economist and expert on sustainability policy and transformation research. We talk about Maja's premiere at 37c3, accumulation of capital, corrections in how value creation is measured, 'tenured' business models, compensation for corporations, record fossil-fuel profits, a ban on fossil investments, renewable energy in abundance, the outcome of the 2023 World Climate Conference, anger and courage, the power of the rich and the crisis of democracy, the 'Delta of Doom', as well as the energy hunger of digitalization. Please support our work financially: Account: Jung & Naiv IBAN: DE854 3060 967 104 779 2900 GLS Gemeinschaftsbank PayPal ► http://www.paypal.me/JungNaiv Recorded on December 29, 2023
I talk exclusively about my visit to the 37th Chaos Communication Congress. Being back at the Congress Center Hamburg after years, having an in-person congress at all again - it was all very pleasant and worked very well for me. Chapter marks aren't really worthwhile, since there is only this one topic.
This week in the Podcast, Elliot Williams is off at Chaos Communication Congress, hearing tales of incredible reverse engineering that got locomotives back up and running, while Al Williams is thinking over what happened in 2023. There's a lot of “how things work” in this show, from data buoys to sewing machines to the simulated aging of ICs. Whether you're into stacking bricks, stacking Pi Picos, or stacking your 3D prints to make better use of precious bed space, this episode is for you. Enjoy. And as always, check out the links over at Hackaday, and let us know what you think in the comments!
Kloiber, Manfred www.deutschlandfunk.de, Forschung aktuell
We are back in Hamburg at the 37th Chaos Communication Congress, talking with Constanze Kurz, spokeswoman of the Chaos Computer Club. We talk about the return of the congress, the hype around artificial intelligence, the EU chat control, the Palantir police software called "Gotham", the electronic health record, and much more. Please support our work financially: Account: Jung & Naiv IBAN: DE854 3060 967 104 779 2900 GLS Gemeinschaftsbank PayPal ► http://www.paypal.me/JungNaiv
From December 27 to 30, the Chaos Communication Congress, a discussion forum of the Chaos Computer Club, convenes in Hamburg. The former hacker club cultivates its image to this day - but ultimately, today's CCC members are experts in all matters of IT security. After two years held virtually, the scene is now gathering in person in Hamburg for the first time again.
Hello dear listeners, Frank is still at the Chaos Communication Congress 37c3 at the CCH in Hamburg. So Paula and Sarah call him, and Frank reports on what he has heard, personal encounters, and assorted other things. There are sound bites, too. As an episode outside the regular schedule, the show ends without a WTF. Do have a listen, Frank, Paula and Sarah. Recorded on December 28, 2023, published on December 28, 2023. Running time: 40:27
Hello dear listeners, Frank is at the Chaos Communication Congress 37c3; Paula, unfortunately, is not. Joining from Hamburg, Frank reports with a focus on the Sendezentrum. Do have a listen, Frank and Paula. Recorded on December 27, 2023, published on December 27, 2023. Running time: 39:21
After the recent special episode, we dive back into the news coverage today and also bring the feedback on the episode before last. We revisit how the OpenAI story continued, and above all there is some tutoring in legal matters plus various further decisions and appeals before German courts. We also settle the question of who in Germany is actually funny, and receive a visit from a renowned animal researcher of the seventies. Finally, we look at the talk program of the upcoming 37th Chaos Communication Congress in Hamburg.
This is a recap of the top 10 posts on Hacker News on October 6th, 2023. This podcast was generated by wondercraft.ai
(00:40): Making Hard Things Easy - Original post: https://news.ycombinator.com/item?id=37791002&utm_source=wondercraft_ai
(02:38): AMD may get across the CUDA moat - Original post: https://news.ycombinator.com/item?id=37793635&utm_source=wondercraft_ai
(04:27): We're opening up access to Gov.uk forms - Original post: https://news.ycombinator.com/item?id=37789107&utm_source=wondercraft_ai
(06:35): Thread-per-core - Original post: https://news.ycombinator.com/item?id=37790745&utm_source=wondercraft_ai
(08:20): CCC Invites to the 37th Chaos Communication Congress in Hamburg - Original post: https://news.ycombinator.com/item?id=37788153&utm_source=wondercraft_ai
(10:10): 23andMe says user data stolen in credential stuffing attack - Original post: https://news.ycombinator.com/item?id=37794379&utm_source=wondercraft_ai
(11:56): Narges Mohammadi wins 2023 Nobel Peace Prize - Original post: https://news.ycombinator.com/item?id=37788847&utm_source=wondercraft_ai
(13:37): Java 21 VirtualThreads vs. Clojure Lazy Seqs - Original post: https://news.ycombinator.com/item?id=37792294&utm_source=wondercraft_ai
(15:08): Thirty Years Ago: MS-DOS 6.00 - Original post: https://news.ycombinator.com/item?id=37790174&utm_source=wondercraft_ai
(16:57): Unreal Engine will no longer be free for all - Original post: https://news.ycombinator.com/item?id=37792030&utm_source=wondercraft_ai
This is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio-quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai
In episode 108 of the Keep Calm & Learn On podcast, I talk with Oli and Tobias about our impressions of the lernOS Convention 2023. The two of them had actually recorded their episode at a networking island at the event. But then I lost the SD card with the recording :-( So we took the chance and simply recorded the episode again together. Show notes: What is lernOS, actually? lernOS under a Creative Commons license (like open-source licenses, but for content rather than software code). How does learning with a lernOS guide work? Learning goals, learning tandems, learning groups, learning sprints, katas, dojos & co. (see also: learning in groups). Kaiserburg Nürnberg as the location, the 4K model (21st century skills, deeper learning). The ideal size for learning circles (LeanIn Circles, WOL Circles, learning groups in the Management 2.0 MOOC, learning circle experiences at Conti, SAP, DATEV; see also the ENC181 podcast episode on this). The hybrid concept of the lernOS Convention (from Slack huddles to Discord, audio-only barcamp, Mumble at the CCC, Remo, Workadventure, the AI network LAION on Discord, simultaneous interpretation, speakers/session owners on site and remote, loscon22 "perfectly hybrid"). Onboarding noobs into the community (losconCircles, the Engelsystem at the Chaos Communication Congress, Chaospat:innen, see also the spiegel.de article, networking islands with a podcast studio with Zoom PodTrak P4 and Rodecaster Pro, second-screen approach). loscon23 highlights ("Whenever someone says mindset, a little kitten dies somewhere", "Self-organization needs rules" in the session Mindset, Skillset, Toolset … and Frameset? How organisational framework conditions influence working and learning, Liberating Structures, a session on the new lernOS Zettelkasten guide, Luhmann's Zettelkasten, Logseq, Obsidian, OneNote, the origin of "Frameset" in the benchlearning project, the mood and energy at loscon23). Gamification with Minecraft, Minetest, and voxel games (Minetest, the killer argument "We're at work here, we don't play here", Lego Serious Play, the loscon21 crafting challenge "lernOS theme park", the Minetest education network). SAVE THE DATE: loscon24 on July 2-3, 2024, in Nuremberg and online.
Funding open-source projects is essential - but what options are there? Open-source projects are more important than ever in today's society. Projects like cURL, OpenSSL, SQLite, and co. are often maintained by just a few people, yet millions of people use them every day, often without even knowing it. Most open-source projects are maintained in people's free time. But how does that fit together, especially when rent has to be paid and there should be food on the table? That is where the (not quite so simple) topic of funding open-source projects comes in. In this episode we dig into exactly that and present a few ways to get money with, or for, your open-source project. It's not just about the big player GitHub Sponsors, but also about professional corporate sponsoring, the early-access model, public funding programs, and boring topics like taxes. Bonus: what radio receivers with batteries have to do with open source, and whether money is really motivating. Quick feedback on the episode:
About Richard
Richard "RichiH" Hartmann is the Director of Community at Grafana Labs, Prometheus team member, OpenMetrics founder, OpenTelemetry member, CNCF Technical Advisory Group Observability chair, CNCF Technical Oversight Committee member, CNCF Governing Board member, and more. He also leads, organizes, or helps run various conferences from hundreds to 18,000 attendees, including KubeCon, PromCon, FOSDEM, DENOG, DebConf, and Chaos Communication Congress. In the past, he made mainframe databases work, ISP backbones run, kept the largest IRC network on Earth running, and designed and built a datacenter from scratch. Go through his talks, podcasts, interviews, and articles at https://github.com/RichiH/talks or follow him on Twitter at https://twitter.com/TwitchiH for musings on the intersection of technology and society.
Links Referenced:
Grafana Labs: https://grafana.com/
Twitter: https://twitter.com/TwitchiH
Richard Hartmann list of talks: https://github.com/richih/talks
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: This episode is sponsored in part by our friends at AWS AppConfig. Engineers love to solve, and occasionally create, problems. But not when it's an on-call fire-drill at 4 in the morning. Software problems should drive innovation and collaboration, NOT stress, and sleeplessness, and threats of violence. That's why so many developers are realizing the value of AWS AppConfig Feature Flags. Feature Flags let developers push code to production, but hide that feature from customers so that the developers can release their feature when it's ready.
This practice allows for safe, fast, and convenient software development. You can seamlessly incorporate AppConfig Feature Flags into your AWS or cloud environment and ship your features with excitement, not trepidation and fear. To get started, go to snark.cloud/appconfig. That's snark.cloud/appconfig.
Corey: This episode is brought to us in part by our friends at Datadog. Datadog is a SaaS monitoring and security platform that enables full-stack observability for developers, IT operations, security, and business teams in the cloud age. Datadog's platform, along with 500-plus vendor integrations, allows you to correlate metrics, traces, logs, and security signals across your applications, infrastructure, and third-party services in a single pane of glass. Combine these with drag-and-drop dashboards and machine-learning-based alerts to help teams troubleshoot and collaborate more effectively, prevent downtime, and enhance performance and reliability. Try Datadog in your environment today with a free 14-day trial and get a complimentary T-shirt when you install the agent. To learn more, visit datadoghq.com/screaminginthecloud. That's www.datadoghq.com/screaminginthecloud.
Corey: Welcome to Screaming in the Cloud, I'm Corey Quinn. There are an awful lot of people who are incredibly good at understanding the ins and outs and the intricacies of the observability world. But they didn't have time to come on the show today. Instead, I am talking to my dear friend of two decades now, Richard Hartmann, better known on the internet as RichiH, who is the Director of Community at Grafana Labs, here to suffer—in a somewhat atypical departure for the theme of this show—personal attacks for once. Richie, thank you for joining me.
Richard: And thank you for agreeing on personal attacks.
Corey: Exactly. It was one of your riders. Like, there have to be the personal attacks back and forth or you refuse to appear on the show. You've been on before.
In fact, the last time we did a recording, I believe you were here in person, which was a long time ago. What have you been up to? You're still at Grafana Labs. And in many cases, I would point out that, wow, you've been there for many years; that seems to be an atypical thing, which is an American tech industry perspective because every time you and I talk about this, you look at folks who—wow, you were only at that company for five years. What's wrong with you—you tend to take the longer view and I tend to have the fast-twitch, time to go ahead and leave jobs because it's been more than 20 minutes approach. I see that you're continuing to live what you preach, though. How's it been?
Richard: Yeah, so there's a little bit of Covid brain, I think. When we talked in 2018, I was still working at SpaceNet, building a data center. But the last two-and-a-half years didn't really happen for many people, myself included. So, I guess [laugh] that includes you.
Corey: No, no, you're right. You've only been at Grafana Labs a couple of years. One would think I would check the notes before shooting my mouth off. But then, one wouldn't know me.
Richard: What notes? Anyway, I've been around Prometheus and Grafana since 2015. But like, real, full-time everything is 2020. There was something in between. Since 2018, I contracted to do vulnerability handling and everything for Grafana Labs because they had something and they didn't know how to deal with it. But no, full time is 2020. But as to the space itself, it's maybe a little bit German of me, but trying to understand the real world and trying to get an overview of systems and how they actually work, and if they are working correctly and as intended, and if not, how they're not working as intended, and how to fix this is something which has always been super important to me, in part because I just want to understand the world.
And this is a really, really good way to automate understanding of the world. So, it's basically a work-saving mechanism. And that's why I've been sticking to it for so long, I guess.
Corey: Back in the early days of monitoring systems—we called it monitoring back then because, you know, using simple words that lack nuance was sort of de rigueur—we wound up effectively having tools. Nagios is the one that springs to mind, and it was terrible in all the ways you would expect a tool written in janky Perl in the early 2000s to be. But it told you what was going on. It tried to do a thing, generally reach a server or query it about things, and when things fell out of certain specs, it screamed its head off, which meant that when you had things like the core switch melting down—thinking of one very particular incident—you didn't get a Nagios alert; you got 4000 Nagios alerts. But start to finish, you could wrap your head rather fully around what Nagios did and why it did the sometimes strange things that it did. These days, when you take a look at Prometheus, which we hear a lot about, particularly in the Kubernetes space, and Grafana, which is often mentioned in the same breath, it's never been quite clear to me exactly where those start and stop. It always feels like each is a component in a larger system that tells you what's going on, rather than a one-stop shop that's going to, you know, shriek its head off when something breaks in the middle of the night. Is that the right way to think about it? The wrong way to think about it?
Richard: It's a way to think about it. So personally, I use the terms monitoring and observability pretty much interchangeably. Observability is a relatively well-defined term, even though most people won't agree. But if you look back into the '70s, into control theory where the term comes from, it is the measure of how much you're able to determine the internal state of a system by looking at its inputs and its outputs.
Depending on the definition, some people don't include the inputs, but that is the OG definition as far as I'm aware. And from this, there flow a lot of things: this question of, or this interpretation of, the difference between telling that yes, something's broken versus why something's broken. Or: if you can't ask new questions on the fly, it's not observability. All of those things map fundamentally back to this definition of, I need enough data to determine the internal state of whatever system I have just by looking at what is coming in and what is going out. And that is, at the core, the thing. Now, obviously, it's become a buzzword, which is oftentimes the fate of successful things. So, it's become a buzzword, and you end up with cargo-culting.
Corey: I would argue, periodically, that observability is hipster monitoring. If you call it monitoring, you get yelled at by Charity Majors. Which is tongue-in-cheek, but she has opinions, made nonetheless, shall I say, frustrating by the fact that she is invariably correct in those opinions, which just somehow makes it so much worse. It would be easy to dismiss the things she says if she weren't always right. And the world is changing, especially as we get into the world of distributed systems. "Is the server that runs the app working or not working" loses meaning when we're talking about distributed systems, when we're talking about containers running on top of Kubernetes, which turns every outage into a murder mystery. We start having distributed applications composed of microservices, so you have no idea necessarily where an issue is. Okay, is this one microservice having an issue related to the request coming into a completely separate microservice?
And it seems that for those types of applications, the answer has been tracing for a long time now, where originally that was something that felt like it sprang, fully formed, from the forehead of some god known as one of the hyperscalers, but now is available to basically everyone, in theory. In practice, it seems that instrumenting applications is still one of the hardest parts of all of this. I tried hooking up one of my own applications to be observed via OTel, the OpenTelemetry project, and it turns out that right now, OTel and AWS Lambda have an intersection point that makes everything extremely difficult to work with. It's not there yet; it's not baked yet. And someday, I hope that changes, because I would love to interchangeably just throw metrics and traces and logs at all the different observability tools and see which ones work and which ones don't, but that still feels very far away from the current state of the art.

Richard: Before we go there, maybe one thing which I don't fully agree with. You said that previously, whether a service was up or down was the thing which you cared about, and I don't think that's what people actually cared about. At that time, too, what they fundamentally cared about was: is the user-facing service up, down, or impacted? Is it slow? Does it return errors for every X percent of requests, something like this?

Corey: Is the site up? And—you're right, I was hand-waving over a whole bunch of things. It was, "Okay. First, is the web server returning a page, yes or no? Great. Can I ping the server?" Okay, well, there are ways a server can crash and still leave enough of the TCP/IP stack up that it can respond to pings and do little else.

And then you start adding things to it. But the Nagios check that I always wanted to add—and had to—was: is the disk full? And that was annoying. And, on some level, like, why should I care in the modern era how much stuff is on the disk, because storage is cheap and free and plentiful?
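The core idea behind the distributed tracing discussed here can be caricatured in a few lines: every span a request touches, in whatever service, shares one trace ID carried in context. This is a toy illustration of the mechanism, not the OpenTelemetry API; the service and operation names are invented.

```python
# Toy trace-context propagation: the heart of distributed tracing,
# reduced to a context variable that ties spans from different
# "services" to one trace ID. Not the OTel API.
import contextvars
import uuid

current_trace = contextvars.ContextVar("trace_id", default=None)
spans = []  # collected (trace_id, service, operation) tuples


def span(service: str, operation: str) -> str:
    trace_id = current_trace.get()
    if trace_id is None:  # first hop starts the trace
        trace_id = uuid.uuid4().hex
        current_trace.set(trace_id)
    spans.append((trace_id, service, operation))
    return trace_id


# One request crossing two microservices:
span("frontend", "GET /cart")
span("cart-service", "read cart")

# Both spans carry the same trace ID, so the "murder mystery"
# can be reassembled after the fact:
print(len({s[0] for s in spans}))  # → 1
```

Real tracing systems additionally propagate the context across process boundaries (HTTP headers, message metadata), which is exactly the part that makes instrumentation hard in practice.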
The problem is, after the third outage in a month because the disk filled up, you start to not have a good answer to "well, why aren't you monitoring whether the disk is full?" And that was one of the contributors to taking down the server. When the website broke, there was what felt like a relatively small number of reasonably well-understood contributors to that at small to midsize applications, which is what I'm talking about—the only things that people would let me touch. I wasn't running hyperscale stuff where you have a fleet of 10,000 web servers and, "Is the server up?" Yeah, in that scenario, no one cares. But when we're talking about the database server and the two application servers and the four web servers talking to them, you think about it more in terms of pets than you do cattle.

Richard: Yes, absolutely. Yet, I think that was a mistake back then, and I tried to do it differently—the disk being a specific example. And I absolutely agree that previous-generation tools limit you in how you can actually work with your data. In particular, once you work with metrics, where you can do actual math on the data, it doesn't matter if the disk is almost full. It matters if that disk is going to be full within X amount of time.

If that disk is 98% full and it sits there at 98% for ten years and provides the service, no one cares. The thing is, will it actually run out in the next two hours, in the next five hours, what have you. Depending on this—is it currently or imminently customer-impacting or user-impacting?—then yes, alert on it, raise hell, wake people, make them fix it, as opposed to: this thing can be dealt with during business hours on the next workday, and you don't have to wake anyone up.

Corey: Yeah. The big filer with massive amounts of storage has crossed the 70% line. Okay, now it's time to start thinking about that. What do you want to do? Maybe it's time to order another shelf of disks for it, which is going to take some time.
That's a radically different scenario than the 20-gigabyte root volume on your server just started filling up dramatically; the rate of change is such that it'll be full in 20 minutes. Yeah, one of those is something you want to wake people up for. Generally speaking, you don't want to wake people up for what is fundamentally a longer-term strategic business problem that can be sorted out in the light of day, versus, "[laugh] we're not going to be making money in two hours if I don't wake up and fix this now." That's the kind of thing you generally want to be woken up for. Well, let's be honest, you don't want that to happen at all, but if it does happen, you kind of want to know in advance rather than after the fact.

Richard: You're literally describing predict_linear from Prometheus, which is precisely for this, where I can look back over X amount of time and make a linear prediction—because everything else breaks down at scale, blah, blah, blah, to detail. But the thing is, I can draw a line with my pencil by hand on my data and I can predict when this thing is going to hit its limit. Which is obviously precisely correct if I have a TLS certificate. It's a little bit more hand-wavy when it's a disk. But still, you can look into the future and you say, "What will be happening in Y amount of time if the current trends of the last X amount of time continue?" And that's precisely where you get this more powerful ability of doing math with your data.

Corey: See, when you say it like that, it sounds like it actually is a whole term of art, where you're focusing on an in-depth field, where salaries are astronomical. Whereas the tools that I had to talk about this stuff back in the day made me sound like, effectively, the sysadmin that I was, grunting and pointing: "This is gonna fill up." And that is how I thought about it.
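The "draw a line through the data" trick Richard describes is a least-squares extrapolation; in PromQL it would be an alert condition along the lines of `predict_linear(node_filesystem_avail_bytes[1h], 4*3600) < 0`. As a rough sketch of the underlying math (not Prometheus's actual implementation):

```python
# Hedged sketch: the least-squares extrapolation behind an alert like
# "disk will be full within N hours". predict_linear in Prometheus does
# essentially this over a range vector; this is illustration only.

def predict_linear(samples, seconds_ahead):
    """samples: list of (timestamp, value); returns the predicted value."""
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_v = sum(v for _, v in samples) / n
    cov = sum((t - mean_t) * (v - mean_v) for t, v in samples)
    var = sum((t - mean_t) ** 2 for t, _ in samples)
    slope = cov / var
    intercept = mean_v - slope * mean_t
    last_t = samples[-1][0]
    return slope * (last_t + seconds_ahead) + intercept


# Free disk space shrinking by 1 GiB every 10 minutes, one hour of samples:
gib = 2**30
samples = [(600 * i, 20 * gib - i * gib) for i in range(7)]

# Will the disk be full (free <= 0) within the next 4 hours?
predicted = predict_linear(samples, 4 * 3600)
print(predicted <= 0)  # → True: time to wake someone up
```

On perfectly linear data like a TLS certificate's remaining lifetime the fit is exact; on a bursty disk it is the hand-wavy-but-useful approximation Richard mentions.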
And this is the challenge, where it's easy to think about these things in narrow, defined contexts like that, but at scale, things break. Like the idea of anomaly detection. Well, okay, great: if normally the CPU and these things are super bored and suddenly it gets really busy, that's atypical. Maybe we should look into it, assuming that it signals a problem. The problem is, that is a lot harder than it sounds because there are so many factors that feed into it. And as soon as you have something, quote-unquote, "intelligent," making decisions on this, it doesn't take too many false positives before you start ignoring everything it has to say, and missing legitimate things. It's this weird and obnoxious conflation of both hard technical problems and human psychology.

Richard: And the breaking up of old service boundaries. Of course, when you say microservices and such—functionally a microservice, or nanoservice, or picoservice; though the pendulum is already swinging back to larger units of complexity—it fundamentally does not make any difference whether I have a monolith on some mainframe or a bunch of microservices. Yes, I can scale differently; I can scale horizontally a lot more easily; vertically, it's a little bit harder, blah, blah, blah. But fundamentally, the logic and the complexity which is being packaged is the same. It scales with users and everything, but it is fundamentally the same. What's happening, again and again, is that I'm breaking up those old boundaries, which means the old tools, which have assumptions built in about certain aspects of how I can actually get an overview of a system, just start breaking down. When my complexity unit, or my service, or what have you, is usually congruent with a physical piece of hardware—or several services are congruent with that piece of hardware—it absolutely makes sense to think about things in terms of this one physical server.
The fact that you have different considerations in cloud, and microservices, and blah, blah, blah, does not inherently mean it is more complex. On the contrary, it is fundamentally the same thing. It scales with users and everything, but it is fundamentally the same thing; I just have different boundaries of where I put interfaces onto my complexity, which basically allow me to hide all of this complexity from the downstream users.

Corey: That's part of the challenge that I think we're grappling with across this entire industry from start to finish. We originally looked at these things and could reason about them, because it's the computer and I know how those things work. Well, kind of, but okay, sure. But then we start layering levels of complexity on top of layers of complexity on top of layers of complexity, and suddenly, when things stop working the way that we expect, it can be very challenging to unpack and understand why. One of the ways I got into this whole space was understanding, to some degree, how system calls work, how the kernel wound up interacting with userspace, how Linux systems worked from start to finish. And these days, that isn't particularly necessary most of the time for the care and feeding of applications.

The challenge is, when things start breaking, suddenly having that in my back pocket to pull out could be extremely handy. But I don't think it's nearly as central as it once was, and I don't know that I would necessarily advise someone new to this space to spend a few years as a systems person, digging into a lot of those aspects. "And this is why you need to know what inodes are and how they work." Not really, not anymore. It's not front and center the way that it once was, in most environments, at least in the world that I live in. Agree? Disagree?

Richard: Agreed. But it's very much unsurprising.
You probably can't tell me how to precisely grow sugar cane or corn, you can't tell me how to refine the sugar out of it, but you can absolutely bake a cake. You will not be able to tell me even a third—and, for the record, I'm also not able to tell you even a third—about the supply chain which goes from "I have a field and some seeds" to "I have a package of refined sugar," yet you're absolutely enabled to do any of this. The thing is, you've been part of the previous generation of infrastructure, where you know how the underlying infrastructure works, so you have more ability to reason about this, but it's not needed for cloud services nearly as much.

You need different types of skill sets, but that doesn't mean the old skill set is completely useless, at least not as of right now. It's much more a case of: you need fewer of those people, and you need them in different places, because those things have become infrastructure. Which is basically the cloud play, where a lot of this is just becoming infrastructure more and more.

Corey: Oh, yeah. Back then I distinctly remember my elders looking down their noses at me because I didn't know assembly, and how could I possibly consider myself a competent sysadmin if I didn't at least have a working knowledge of assembly? Or at least C, which I, over time, learned enough about to know that I didn't want to be a C programmer. And you're right, this is the value of cloud. Going back to those days, getting a web server up and running just to compile Apache's httpd took a week and an in-depth knowledge of GCC flags.

And then, in time: oh, great, we're going to have rpms or debs. Great, okay; then, in time, you have apt, if you're in Debian land—because I know you are a Debian developer—but over in Red Hat land, we had yum and other tools. And then, in time, it became: oh, we can just use something like Puppet or Chef to ensure that thing is installed. And then: oh, just docker run.
And now it's a checkbox in a web console for S3. These things get easier with time, and step by step by step we're standing on the shoulders of giants. Even in the last ten years of my career, I used to have a great challenge question that I would interview people with: "Do you know what TinyURL is? It takes a short URL and then expands it to a longer one. Great, on the whiteboard, tell me how you would implement that." And you could go up one side and down the other, and then you could add constraints: multiple data centers; now one goes offline; how do you not lose data? Et cetera, et cetera.

But these days, there are so many ways to do that using cloud services that it almost becomes trivial. It's: okay, multiple data centers, API Gateway, a Lambda, and a global DynamoDB table. Now what? "Well, now it gets slow. Why is it getting slow?" "Well, in that scenario, probably because of something underlying the cloud provider." "And so now you lose an entire AWS region. How do you handle that?" "Seems to me, when that happens, the entire internet's kind of broken. Do people really need longer URLs?"

And that is a valid answer, in many cases. The question doesn't really work without a whole bunch of additional constraints that make it sound fake. And that's not a weakness. That is the fact that computers and cloud services have never been as accessible as they are now. And that's a win for everyone.

Richard: There are one or two aspects of accessibility which are actually decreasing. A, you need to pay for them on an ongoing basis. And B, you need an internet connection which is suitably fast, low latency, what have you. And those are things which actually do make things harder, for a variety of reasons. If I look at our back-end systems—as in Grafana's—all of them have single-binary modes where you literally compile everything into a single binary and you can run it on your laptop, because if you're stuck on a plane, you otherwise can't do any work on it.
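The whiteboard version of that TinyURL question can be sketched in a handful of lines; the "trivial" cloud answer just swaps the dict for a DynamoDB table and the function for a Lambda behind API Gateway. Everything here is illustrative, not any real service's design:

```python
# Minimal URL-shortener sketch: an in-memory stand-in for the
# "API Gateway + Lambda + DynamoDB" whiteboard answer.
import hashlib

_store = {}  # short code -> long URL (the DynamoDB table, in the cloud version)


def shorten(long_url: str) -> str:
    # Deterministic 8-char code from the URL's digest; collisions are the
    # interviewer's follow-up, handled here by sliding along the digest.
    digest = hashlib.sha256(long_url.encode()).hexdigest()
    for start in range(len(digest) - 8):
        code = digest[start:start + 8]
        if _store.get(code, long_url) == long_url:
            _store[code] = long_url
            return code
    raise RuntimeError("digest exhausted")  # practically unreachable


def expand(code: str) -> str:
    return _store[code]  # a 301 redirect (or 404) in the HTTP version


code = shorten("https://www.example.com/some/very/long/path")
print(expand(code))  # → https://www.example.com/some/very/long/path
```

The constraints from the interview map on naturally: multiple data centers become a global table, losing one region becomes a replication question, and "why is it slow" becomes someone else's pager.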
That kind of is not the best of situations. And if you have a huge CI/CD pipeline, everything in the cloud is fine and dandy, but then your internet breaks. Yeah, so I do agree that it is becoming generally more accessible. I disagree that it is becoming more accessible along all possible axes.

Corey: I would agree. There is a silver lining to that as well, where—yes, they are fraught and dangerous, and I would preface this with a whole bunch of warnings—but from a cost perspective, all of the cloud providers do have a free tier offering where you can kick the tires on a lot of these things in return for no money. Surprisingly, the best one of those is Oracle Cloud, where they have an unlimited free tier: use whatever you want in this subset of services, and you will never be charged a dime. As opposed to the AWS model of free tier, where, well, okay, it suddenly got very popular or you misconfigured something, and surprise, you now owe us enough money to buy Belize. That doesn't usually lead to a great customer experience.

But you're right, you can't get away from needing an internet connection of at least some level of stability and throughput in order for a lot of these things to work. The stuff you would do locally on a Raspberry Pi, for example, if you're budget-constrained and want to get something out there, or on your laptop—great, that's not going to work in the same way as a full-on cloud service will.

Richard: It's not free unless you have hard guarantees that you're not going to ever pay anything. It's fine to send warnings, it's fine to switch the thing off, it's fine to have you hit random hard and soft quotas. It is not a free service if you can't guarantee that it is free.

Corey: I agree with you. I think that there needs to be a free offering where, "Well, okay, you want us to suddenly stop serving traffic to the world?" "Yes.
When the alternative is you have to start charging me through the nose, yes, I want you to stop serving traffic." That is definitionally what it says on the tin. And as an independent learner, that is what I want. Conversely, if I'm an enterprise, yeah, I don't care about money; we're running our Super Bowl ad right now, so whatever you do, don't stop serving traffic. Charge us all the money. And there's been a lot of hand-wringing about, well, how do we figure out which direction to go in? And it's: have you considered asking the customer?

So, on a scale of one to bank, how serious is this account going to be [laugh]? Like, what are your big concerns: never charge me, or never go down? Because we can build for either of those. Just let's make sure that all of those expectations are aligned. Because if you guess, you're going to get it wrong, and then no one's going to like you.

Richard: I would argue this: all those services from all cloud providers are actually built to address both of those. It's a deliberate choice not to offer certain aspects.

Corey: Absolutely. When I talk to AWS, it's like, "Yeah, but there is an eventual consistency challenge in the billing system where it takes"—as anyone who's looked at the billing system can see—"multiple days, sometimes, for usage data to show up. So, how would we be able to stop things if the usage starts climbing?" To which my relatively direct response is: that sounds like a huge problem. I don't know how you'd fix that, but I do know that if suddenly you decide, as a matter of policy, that if you're in the free tier, we will not charge you—or even, we will not charge you more than $20 a month so you build yourself some headroom—great. Then anything that people are able to spin up, well, you're just going to have to eat the cost as a provider. I somehow suspect that would get fixed super quickly if that were the constraint.
The fact that it isn't is a conscious choice.

Richard: Absolutely.

Corey: And the reason I'm so passionate about this, about the free space, is not because I want to get a bunch of things for free. I assure you I do not. I mean, I spend my life fixing AWS bills and looking at AWS pricing, and my argument is very rarely, "It's too expensive." It's that the billing dimension is hard to predict, or doesn't align with a customer's experience, or prices a service out of a bunch of use cases where it'd be great. But very rarely do I just sit here shaking my fist and saying, "It costs too much."

The problem is, when you scare the living crap out of a student with a surprise bill that's more than their entire college tuition, even if you waive it a week or so later, do you think they're ever going to be as excited as they once were to go and use cloud services and build things for themselves and see what's possible? I mean, you and I met on IRC 20 years ago because, back in those days, the failure mode and the financial risk were extremely low. The biggest concern that I had back then, when I was doing some of my Linux experimentation, was that if I typed the wrong thing, I was going to break my laptop. And yeah, that happened once or twice, and I learned not to make those same kinds of mistakes, or put guardrails in so the blast radius was smaller, or use a remote system instead—yeah, someone else's computer that I can destroy. Wonderful. But that was all "we live and we learn" as we were coming up. There was never an opportunity for us, to my understanding, to accidentally run up an $8 million charge.

Richard: Absolutely. And psychological safety is one of the most important things in what most people do. We are social animals. Without this psychological safety, you're not going to have long-term, self-sustaining groups. You will not make someone really excited about it. There's two basic ways to sell: trust or force. Those are the two.
There's nothing else.

Corey: Managing shards. Maintenance windows. Overprovisioning. ElastiCache bills. I know, I know. It's spooky season and you're already shaking. It's time for caching to be simpler. Momento Serverless Cache lets you forget the backend to focus on good code and great user experiences. With true autoscaling and a pay-per-use pricing model, it makes caching easy. No matter your cloud provider, get going for free at gomomento.co/screaming. That's G-O-M-O-M-E-N-T-O dot co slash screaming.

Corey: Yeah. And it also looks ridiculous. I was talking to someone somewhat recently who was used to spending four bucks a month on their AWS bill for some S3 stuff. Great. Good for them. That's awesome. Their credentials got compromised. Yes, that is on them to some extent. Okay, great.

But now, after six days, they were told that they owed $360,000 to AWS. And I don't know how, as a cloud company, you can sit there and ask a student to pay that. That is not a realistic thing. They are what is known, in the United States at least, in the world of civil litigation, as quote-unquote "judgment-proof," which means: great, you could wind up finding that someone owes you $20 billion; most of the time, they don't have that, so you're not able to recoup it. Yeah, the judgment feels good, but you're never going to see it.

That's the problem with something like that. Yeah, I would declare bankruptcy long before, as a student, I wound up paying that kind of money. And I don't hear any stories about them releasing the collection-agency hounds against people in that scenario, but I couldn't guarantee that. I would never urge someone to ignore that bill and see what happens.

And it's such an off-putting thing that, from my perspective, is beneath the company. And let's be clear, I see this behavior at times on Google Cloud, and I see it on Azure as well. This is not something that is unique to AWS, but they are the 800-pound gorilla in the space, and that's important.
And as I—because I was about to give you crap for this, too—if I go to grafana.com right now, it says, and I quote, "Play around with the Grafana Stack. Experience Grafana for yourself, no registration or installation needed." Good. I was about to yell at you if it were, "Oh, just give us your credit card and go ahead and start spinning things up and we won't charge you. Honest." Even your free account does not require a credit card; you're doing it right. That tells me that I'm not going to get a giant surprise bill.

Richard: You have no idea how much thought and work went into our free offering. There was a lot of math involved.

Corey: None of this is easy, I want to be very clear on that. Pricing is one of the hardest things to get right, especially in cloud. And when you get it right, it doesn't look like it was that hard for you to do. But I [sigh] fix people's AWS bills for a living, and still, five or six years in, one of the hardest things I still wrestle with is pricing engagements. It's incredibly nuanced, incredibly challenging, and at least for services in the cloud space where you're doing usage-based billing, that becomes a problem.

But glancing at your pricing page, you do hit the two things that are incredibly important to me. The first one is: use something for free. As an added bonus, you can use it forever. And I can get started with it right now. Great. When I go to look at a pricing page, or I want to use a product, and it tells me to "click here to contact us," that tells me it's an enterprise sales cycle, it's got to be really expensive, and I'm not solving my problem tonight.

Whereas on the other side of it, the enterprise offering needs to be "contact us," and when you do that, it speaks to the enterprise procurement people, who don't know how to sign a check that doesn't have two commas in it, and they want to have custom terms and all the rest, and they're prepared to pay for that.
If you don't have that, you look too small-time. It doesn't matter what price you put on it; when you offer your enterprise tier at some large number, yeah, for some companies, that's a small number. You don't necessarily want to box yourself in, depending upon what the specific needs are. You've gotten that right.

Every common criticism that I have about pricing, you folks have gotten right. And I definitely can pick up on your fingerprints on a lot of this. Because it sounds like a weird thing to say, "Well, he's the Director of Community; why would he weigh in on pricing?" It's, "I don't think you understand what community is when you ask that question."

Richard: Yes, I fully agree. It's super important to get pricing right, or to get many things right. And usually the things which just feel naturally correct are the ones which took the most effort and the most time and everything. And—at least I was in those conversations, or part of them—the one thing which was always clear is: when we say it's free, it must be free. When we say it is forever free, it must be forever free. No games, no lies; do what you say and say what you do, basically.

We have things where initially you get certain pro features, and you can keep paying and keep using them, or after X amount of time they go away. Things like these are built in because that's what people want. They want to play around with the whole thing and see: hey, is this actually providing me value? Do I want to pay for this feature, which is nice, or this or that plugin, or what have you? And yeah, you're also absolutely right that once you leave these constraints of basically self-serve cloud, you are talking about bespoke deals, but you're also talking about: okay, let's sit down, let's actually understand what your business is. What are your business problems? What are you going to solve today?
What are you trying to solve tomorrow? Let us find a way of actually supporting you and investing in a mutual partnership, and not just grab the money and run. We have extremely low churn for, I would say, pretty good reasons. Because this thing about our users, our customers, being successful—we do take it extremely seriously.

Corey: It's one of those areas that I just can't shake the feeling is underappreciated industry-wide. And the reason I say that these are your fingerprints on it is because, if this had been wrong, you have a lot of… we'll call them idiosyncrasies, where there are certain things you absolutely will not stand for, and misleading people and tricking them into paying money is high on that list. One of the reasons we're friends. So yeah, when I say I see your fingerprints on this: if this hadn't been worked out the way that it is, you would not still be there. One other thing that I wanted to call out—well, I guess it's a confluence of pricing and logging and the rest—I look at your free tier, and it offers up to 50 gigabytes of ingest a month.

And it's easy for me to sit here and compare that to other services, other tools, and other logging stories, and then I have to stop and think for a minute that, yeah, disks have gotten way bigger, and internet connections have gotten way faster, and even the logs have gotten way wordier. I still am not sure that most people can really contextualize just how much logging fits into 50 gigs of data. Do you have any, I guess, ballpark examples of what that looks like? Because it's been long enough since I've been playing in these waters that I can't really contextualize it anymore.

Richard: Lord of the Rings is roughly five megabytes. It's actually less. So, we're talking literally 10,000 copies of Lord of the Rings, which you can just shove into us and we will just store it for you. Which also tells you that you're not going to be reading any of this. Or some of it, yes, but not all of it.
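To put Richard's ballpark in other units: assuming an average log line of about 200 bytes (an assumption; JSON-heavy logs run larger), 50 GB a month is a sustained log rate that most small applications never reach:

```python
# Back-of-the-envelope: what 50 GB of log ingest per month means,
# assuming ~200-byte log lines (a guess; line sizes vary wildly).
GB = 10**9
ingest = 50 * GB
line_bytes = 200

lines_per_month = ingest // line_bytes
seconds_per_month = 30 * 24 * 3600
lines_per_second = lines_per_month / seconds_per_month

print(lines_per_month)          # → 250000000 lines a month
print(round(lines_per_second))  # → 96 lines per second, sustained

# And Richard's comparison, at ~5 MB per copy of Lord of the Rings:
print(ingest // (5 * 10**6))    # → 10000 copies
```

A quarter of a billion lines a month is the point of his remark: nobody is going to read this, so you need tooling that can.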
You need better tooling and you need proper tooling. And some of this is more modern. Some of this is where we actually pushed the state of the art. But I'm also biased. But I, for myself, do claim that we did push the state of the art here. But at the same time, you come back to those absolute fundamentals of how humans deal with data.

If you look back basically as far as we have writing—the oldest writing is literally 6,000 years old—humans have always dealt with information, with the state of the world, in very specific ways. A: is it important enough to even write it down, to even persist it in whatever persistence mechanisms I have at my disposal? If yes, record a detailed account of whatever the thing is. But it turns out this is expensive, and it's not what you need. So, over time, you optimize towards only taking down key events, and only noting key events—maybe with their interconnections, but fundamentally the key events.

As your data grows, as you have more stuff, as this still is important to your business and keeps being more important—or it doesn't even need to be a business; it can be social, can be whatever—it becomes expensive, again, to retain all of those key events. So, you turn them into numbers, and you can do actual math on them. And that's the path which you've seen again, and again, and again, throughout humanity's history. Literally, as long as we have written records, this has played out again and again for every single field which humans actually cared about. At different times: power networks, for example, are way ahead of this. Fundamentally, power networks work on metrics; for transient load spikes and everything, they have logs built into their power measurement devices, but those are few and far between. The main thing is just metrics—time series.
And you see this again, and again. You also were a sysadmin in internet-related work; all switches have been metrics-based, or metrics-first, for basically forever—for 20, 30 years. But that stands to reason. Of course, the internet is running roughly 20 years ahead of the cloud scale-wise, because obviously you need the internet first; otherwise you wouldn't be having a cloud. So, all of those growing pains—why metrics are all of a sudden "the thing," or have been for a few years now—are basically because people who were writing software, providing their own software services, hit the scaling limitations which internet service providers hit two decades, three decades ago. But fundamentally, you have this complete system: basically profiles, or distributed tracing, depending on how you view distributed tracing.

You can also argue that distributed tracing is key events which are linked to each other. Logs sit firmly in the key-event thing, and then you turn this into numbers, and that is metrics. And that's basically it. You have extremes at the ends, where you can have valid engineering trade-offs, depending on your circumstances, of where you invest the most. But fundamentally, that is why those always appear again in humanity's dealing with data, and observability is no different.

Corey: I take a look at last month's AWS bill. Mine is pretty well optimized. It's a bit over 500 bucks. And right around 150 of that is various forms of logging and detecting change in the environment. And on the one hand, I sit here and I think, "Oh, I should optimize that," because the value of those logs to me is zero.

Except that whenever I have to go in and diagnose something, or respond to an incident, or do some forensic exploration, they then are worth an awful lot. And I am prepared to pay 150 bucks a month for that, because the potential value of having that when the time comes is going to be extraordinarily useful.
And it basically just feels like a tax on top of what it is that I'm doing. The same thing happens with application observability, where, yeah, you just want the big, substantial stuff—until you're trying to diagnose something. And in some cases, yeah, okay, then crank up the verbosity and look for it.

But if you're trying to figure it out after an event that isn't likely to, or hopefully won't, recur, you're going to wish that you'd spent a little bit more on collecting data. You're always going to be wrong; you're always going to be unhappy, on some level.

Richard: Ish. You could absolutely be optimizing this. I mean, for $500, it's probably not worth your time unless you take it as an exercise, but outside of due diligence, where you need specific logs—or specific events—tied to specific times, I would argue that a lot of the problems with logs come from just dealing with them wrong. You have this one extreme of full-text indexing everything, and you have this other extreme of a data lake—which is just a euphemism for never looking at the data again—to keep storage vendors happy. There is an in-between.

Again, I'm biased, but, for example, with Loki, you have those same label sets as you have on your metrics with Prometheus—literally the same—which means you only index that part, and you only extract at ingestion time. If you don't have structured logs yet, only extract the metadata about whatever you care about, put it into your label set, and store this; that's the only thing you index. But it goes further than just this. You can also turn those logs into metrics.

And to me, this is a path of optimization. Where previously I logged this-and-that error—okay, fine, but it's just a log line telling me it's an HTTP 500. No one cares that this happened at this precise time.
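The label-only indexing Richard describes (in LogQL, something like `{app="api"} |= "500"`: select the stream by labels, then scan the unindexed bodies) can be caricatured in a few lines. The labels and log lines here are invented for illustration; real Loki stores compressed chunks, not Python lists:

```python
# Caricature of label-only indexing: the index knows only small label
# sets; log bodies are stored unindexed and scanned at query time.
streams = {}  # frozenset of (label, value) pairs -> list of raw log lines


def ingest(labels: dict, line: str):
    streams.setdefault(frozenset(labels.items()), []).append(line)


def query(labels: dict, needle: str):
    """Roughly {labels...} |= "needle": select stream by label, then scan."""
    key = frozenset(labels.items())
    return [line for line in streams.get(key, []) if needle in line]


ingest({"app": "api", "env": "prod"}, "GET /cart 200 12ms")
ingest({"app": "api", "env": "prod"}, "GET /cart 500 3ms")
ingest({"app": "web", "env": "prod"}, "GET / 500 7ms")

print(query({"app": "api", "env": "prod"}, " 500 "))  # only the api error
```

The trade-off is the point: the index stays tiny because it covers only the label sets, and the expensive full-text work happens only over the streams a query actually selects.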
Log levels are also basically an anti-pattern because they're just trying to deal with the amount of data which I have, and to get a handle on it at that level, whereas it would be much easier if I just counted: every time I have an HTTP 500, I up my counter by one. And again, and again, and again. And all of a sudden, I have literally—and I did the math on this—over 99.8% of the data which I would have to store just go away. It's just magic—and we're only talking about the first time I'm hitting this log line. The second time I'm hitting this log line is functionally free if I turn this into metrics. It becomes cheap enough that one of the mantras which I have, if you need to onboard your developers on modern observability, the whole bells and whistles, is this: usually people have logs, like, that's what they have, unless they come from ISPs or power companies, or so; there they usually start with metrics. But most users which I see, both with my Grafana and with my Prometheus [unintelligible 00:38:46] tend to start with logs. They have issues with those logs because the logs are basically unstructured and useless, and you first need to make them useful to some extent. But then you can leverage this, and instead of having a debug statement, just put a counter. Every single time you think, “Hey, maybe I should put a debug statement,” just put a counter instead. In two months' time, see if it was worth it, or delete that line and just remove that counter. It's so much cheaper; you can just throw this on and have it run for a week or a month or whatever timeframe, and done. But it goes beyond this, because all of a sudden, if I can turn my logs into metrics properly, I can start rewriting my alerts on those metrics. I can actually persist those metrics and can more aggressively throw my logs away.
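Richard's "put a counter instead of a debug statement" point can be sketched in a few lines of plain Python. This is not Prometheus client code, just an illustration with made-up log lines: instead of storing every HTTP 500 line, we keep one integer per (path, status) pair.

```python
from collections import Counter

# Hypothetical raw log lines as they might look before any structuring.
log_lines = [
    '2023-01-01T12:00:00Z GET /api/orders 500',
    '2023-01-01T12:00:01Z GET /api/orders 500',
    '2023-01-01T12:00:02Z GET /api/users 200',
    '2023-01-01T12:00:03Z GET /api/orders 500',
]

# Turn the stream of log lines into counters: one integer per
# (path, status) pair instead of one stored line per event.
counts = Counter()
for line in log_lines:
    _, method, path, status = line.split()
    counts[(path, status)] += 1

print(counts[('/api/orders', '500')])  # 3
```

Every additional HTTP 500 on the same path now costs an increment rather than another stored log line, which is the core of the cost argument above: the second hit of a log line is functionally free.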
But also, this transition is made a lot easier, because I don't have this huge lift where this day in three months is the cutover, and we're going to release the new version of this and that software, and it's not going to have that, it's going to have 80% fewer logs and everything will be great, and then you miss the first maintenance window, or someone is ill, or what have you, and then the next Big Friday is coming so you can't actually deploy there. I mean Black Friday. But we can also talk about deploying on Fridays. The thing is, you have this huge thing, whereas if you treat it as a continuous improvement process, I can just look at the log which is coming out, turn this into a number, start emitting metrics directly, and see that those numbers match. And so, I build new stuff, I put it into a new data format, I actually emit the new data format directly from my code instrumentation, and only then do I start removing the instrumentation for the logs. And that allows me, with full confidence, with psychological safety, to just move a lot more quickly, deliver much more quickly, and also cut down on my costs more quickly, because I'm just using more efficient data types.

Corey: I really want to thank you for spending as much time as you have. If people want to learn more about how you view the world and figure out what other personal attacks they can throw your way, where's the best place for them to find you?

Richard: Personal attacks, probably Twitter. It's, like, the go-to place for this kind of thing. For actually tracking me, I stopped maintaining my own website. Maybe I'll do so again, but if you go to github.com/ritchieh/talks, you'll find a reasonably up-to-date list of all the talks, interviews, presentations, panels, what have you, which I did over the last whatever amount of time. [laugh].

Corey: And we will, of course, put links to that in the [show notes 00:41:23]. Thanks again for your time.
It's always appreciated.

Richard: And thank you.

Corey: Richard Hartmann, Director of Community at Grafana Labs. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an insulting comment. And then, when someone else comes along with an insulting comment they want to add, we'll just increment the counter by one.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.
Today there is only a short episode, in which we take on just two larger topics: the commotion around the head of the BSI, Arne Schönbohm, and the lobbying circle he once founded, called Cyber-Sicherheitsrat e.V. We also take a brief look at the attack on the railway's GSM-R infrastructure and probably offend all rail nerds in the process. Finally, a first look ahead to the 37th Chaos Communication Congress.
In today's episode, everything revolves around the topic of KRITIS (critical infrastructure). Since Kim and Robert don't even try to feign expertise, we invited Manuel Atug as a guest. Manuel is spokesperson of the working group on critical infrastructures, AG KRITIS. The AG KRITIS is a group of experts whose goal is to improve the IT security and resilience of critical infrastructures (KRITIS). It was founded in 2018 at a working meeting following the CCC's 34th Chaos Communication Congress, and sees itself as independent of companies and industry associations. In the podcast, we talk not only about Manuel's work in the AG KRITIS, but in particular devote ourselves to many naive questions on the topics of KRITIS and security legislation.
Manuel on Twitter: https://twitter.com/HonkHase
Manuel on LinkedIn: linkedin.com/in/manuel-atug-820b27241
AG KRITIS: https://ag.kritis.info/
Guidance on verification pursuant to § 8a(3) BSIG: https://www.bsi.bund.de/DE/Themen/KRITIS-und-regulierte-Unternehmen/Kritische-Infrastrukturen/KRITIS-Nachweise/OH_Nachweise/orientierungshilfe_node.html
In this episode, we record live for the first time. We are guests at RC3v2, the remote Chaos Communication Congress. We talk about the past year, 2021, in particular the energy and gas market, and answer questions from listeners.
After a longer break, there is finally a new episode of Thekenschnack. This one is still from the summer, though, when the guests met the barkeeper in person to eat together and perhaps have an alcoholic beverage or two.
Hackaday editors Tom Nardi and Elliot Williams bring you up to speed on the most interesting stories of the week. Hackaday's Remoticon and Germany's Chaos Communication Congress are virtual again this year, but the Vintage Computer Festival will be live. We'll also talk about ocean-going drones, the recreation of an old-school light bulb with a potato peeler, cheap smart watches with hidden potential, and sanding down shady modules to figure out just how you've been scammed. Stick around for some thoughts on turning real-estate signs into a handy prototyping material, and to find out why some very impressive Soviet tech is getting the boot from America's space program. Please peruse the show notes!
The Bundestag election is over and we report from the Berlin electoral front, where Tim and Linus worked as election workers and observers and can tell of plenty of good things and some chaos. We praise Flyerservice Hahn, which delivered exactly as promised, and take another look at the outgoing federal government's latest digital embarrassment, called "ID Wallet". Finally, some bad news: this year, once again, there will be no Chaos Communication Congress as an in-person event.
The CNCF has a rich suite for monitoring Kubernetes and cloud-native workloads. The first of these is Prometheus, which is widely adopted, with great out-of-the-box compatibility with Kubernetes. But under the CNCF you can also find OpenMetrics, which offers standardization of the metrics format, Thanos and Cortex, which offer long-term storage for Prometheus, and other complementary solutions and integrations. On this episode of OpenObservability Talks we'll host “RichiH” Hartmann and discuss the different OSS projects, the synergy between them, and the future roadmap in building the community and making CNCF a leading offering. Richard "RichiH" Hartmann is Director of Community at Grafana Labs, Prometheus team member, OpenMetrics founder, CNCF SIG Observability chair, and other things. He also organizes various conferences, including FOSDEM, DENOG, DebConf, and Chaos Communication Congress. In the past, he made mainframe databases work, ISP backbones run, and built a datacenter from scratch. The episode was live-streamed on 02 July 2021 and the video is available at https://youtube.com/live/j3nFFHSosnI Show Notes: OpenTelemetry accepted to CNCF incubation OpenTelemetry structure OpenTelemetry community adoption OpenMetrics and Open* confusion OpenMetrics and OpenTelemetry synergy OpenMetrics updates CNCF's Observability TAG (Technical Advisory Group) How to sync between projects on CNCF Prometheus state and roadmap Prometheus conformance program Thanos and Cortex projects How the tech stack benefits humans Grafana, Loki and Tempo projects Resources: OpenTelemetry.io OpenTelemetry status page Guide to OpenTelemetry CNCF TAG Observability Open* Explainer by RichiH OpenMetrics
2020 was different in many ways, because of corona and life with and in the pandemic. So this year there was no in-person Chaos Communication Congress in Leipzig, where we had always recorded our feminist year in review, and Becci was not at the digital alternative, the Remote Chaos Experience, either. But because many of you wished for our year in review anyway, the traditional round from last year got together remotely over the internet to podcast. We, that is, besides me: Miriam, Bianca and Daniela. Our conversation ranged from backlashes in the division of gender roles in the wake of the corona pandemic, to international women's movements and protests, to Black Lives Matter and the question of how we can make our feminism more intersectional. The thoughts of my three wonderful conversation partners on our feminist view of the year 2020 inspired me once again. I hope I can pass a bit of that on to you!
This year the Chaos Communication Congress was cancelled. Or was it? The substitute construct, rC3, was such a surprise success that it seems appropriate to reflect, in a well-read round, on what actually happened and what the consequences were when a community was about to be robbed of its favorite event and decided to find a creative answer. And because the parallels are obvious, we also jump back 28 years and discuss what may already have been apparent back then in the project "Die Villa", what makes online communities, and how to develop them successfully. Tala is back, and we welcome two guests to the round: Lars Immisch, who once worked on the Villa together with Tim, and Miriam "Noujoum" Seyffarth, who threatened to get completely lost in rC3 and reports on her wild journey.
Breitband - media and digital culture (full programme) - Deutschlandfunk Kultur
New laws are intended to regulate social networks, but the exact implementation is still being discussed. Also: the biggest hacking attack of recent years, and the digital version of the Chaos Communication Congress. Hosted by Vera Linß and Marcus Richter. www.deutschlandfunkkultur.de, Breitband. Available until: 19.01.2038 04:14. Direct link to the audio file
Author: Kloiber, Manfred. Programme: Computer und Kommunikation. Available until: 19.01.2038 04:14
As is tradition, we report from the 36th Chaos Communication Congress. About origami, the Sendezentrum, the podcast stage, Mate inflation, oat milk, Chaospost (with Pesthörnchen), DECT, the Congress on the Tagesschau, women, Leipzig city marketing, thoughts on hotel placement along tram line 16, [cryogenically preserved nerds](https://twitter.com/chrismarquardt/status/1210493419493171205), listener meetups, scooters with [render settings set too low](https://twitter.com/chrismarquardt/status/1210560107521728513) #36c3sz
Today on the show, we've got a look at running OpenBSD on an APU, some BSD in your Android, managing your own FreeBSD cloud service with ansible and much more. Keep it turned on your place to B...SD! This episode was brought to you by Headlines OpenBSD on PC Engines APU2 (https://github.com/elad/openbsd-apu2) A detailed walkthrough of building an OpenBSD firewall on a PC Engines APU2 It starts with a breakdown of the parts that were purchased, totaling around $200 Then the reader is walked through configuring the serial console, flashing the ROM, and updating the BIOS The next step is actually creating a custom OpenBSD install image, and pre-configuring its serial console. Starting with OpenBSD 6.0, this step is done automatically by the installer Installation: Power off the APU2 Insert the bootable OpenBSD installer USB flash drive to one of the USB slots on the APU2 Power on the APU2, press F10 to get to the boot menu, and choose to boot from USB (usually option number 1) At the boot> prompt, remember the serial console settings (see above) Also at the boot> prompt, press Enter to start the installer Follow the installation instructions The driver used for wireless networking is athn(4). It might not work properly out of the box. Once OpenBSD is installed, run fw_update with no arguments. It will figure out which firmware updates are required and will download and install them. When it finishes, reboot. Where the rubber meets the road… (part one) (https://functionallyparanoid.com/2016/11/29/where-the-rubber-meets-the-road-part-one/) A user describes their adventures installing OpenBSD and Arch Linux on a new Lenovo X1 Carbon (4th gen, Skylake) They also detail why they moved away from their beloved Macbook, which, while long, does describe a journey away from Apple that we've heard elsewhere.
The journey begins with getting a new Windows laptop, shrinking the partition and creating space for a triple-boot install, of Windows / Arch / OpenBSD Brian then details how he setup the partitioning and performed the initial Arch installation, getting it tuned to his specifications. Next up was OpenBSD though, and that went sideways initially due to a new NVMe drive that wasn't fully supported (yet) The article is split into two parts (we will bring you the next installment at a future date), but he leaves us with the plan of attack to build a custom OpenBSD kernel with corrected PCI device identifiers. We wish Brian luck, and look forward to the “rest of the story” soon. *** Howto setup a FreeBSD jail server using iocage and ansible. (https://github.com/JoergFiedler/freebsd-ansible-demo) Setting up a FreeBSD jail server can be a daunting task. However when a guide comes along which shows you how to do that, including not exposing a single (non-jailed) port to the outside world, you know we had to take a closer look. This guide comes to us from GitHub, courtesy of Joerg Fiedler. The project goals seem notable: Ansible playbook that creates a FreeBSD server which hosts multiple jails. Travis is used to run/test the playbook. No service on the host is exposed externally. All external connections terminate within a jail. Roles can be reused using Ansible Galaxy. Combine any of those roles to create FreeBSD server, which perfectly suits you. To get started, you'll need a machine with Ansible, Vagrant and VirtualBox, and your credentials to AWS if you want it to automatically create / destroy EC2 instances.
There's already an impressive list of Ansible roles created for you to start with: freebsd-build-server - Creates a FreeBSD poudriere build server freebsd-jail-host - FreeBSD Jail host freebsd-jailed - Provides a jail freebsd-jailed-nginx - Provides a jailed nginx server freebsd-jailed-php-fpm - Creates a php-fpm pool and a ZFS dataset which is used as web root by php-fpm freebsd-jailed-sftp - Installs a SFTP server freebsd-jailed-sshd - Provides a jailed sshd server. freebsd-jailed-syslogd - Provides a jailed syslogd freebsd-jailed-btsync - Provides a jailed btsync instance server freebsd-jailed-joomla - Installs Joomla freebsd-jailed-mariadb - Provides a jailed MariaDB server freebsd-jailed-wordpress - Provides a jailed Wordpress server. Since the machines have to be customized before starting, he mentions that cloud-init is used to do the following: activate pf firewall add a pass all keep state rule to pf to keep track of connection states, which in turn allows you to reload the pf service without losing the connection install the following packages: sudo bash python27 allow passwordless sudo for user ec2-user “ From there it is pretty straight-forward, just a couple commands to spin up the VMs either locally on your VirtualBox host, or in the cloud with AWS. Internally the VMs are auto-configured with iocage to create jails, where all your actual services run. A neat project, check it out today if you want a shake-n-bake type cloud + jail solution. Colin Percival's bsdiff helps reduce Android apk bandwidth usage by 6 petabytes per day (http://android-developers.blogspot.ca/2016/12/saving-data-reducing-the-size-of-app-updates-by-65-percent.html) A post on the official Android-Developers blog talks about how they used bsdiff (and bspatch) to reduce the size of Android application updates by 65% bsdiff was developed by FreeBSD's Colin Percival Earlier this year, we announced that we started using the bsdiff algorithm (by Colin Percival).
Using bsdiff, we were able to reduce the size of app updates on average by 47% compared to the full APK size. This post is actually about the second generation of the code. Today, we're excited to share a new approach that goes further — File-by-File patching. App Updates using File-by-File patching are, on average, 65% smaller than the full app, and in some cases more than 90% smaller. Android apps are packaged as APKs, which are ZIP files with special conventions. Most of the content within the ZIP files (and APKs) is compressed using a technology called Deflate. Deflate is really good at compressing data but it has a drawback: it makes identifying changes in the original (uncompressed) content really hard. Even a tiny change to the original content (like changing one word in a book) can make the compressed output of deflate look completely different. Describing the differences between the original content is easy, but describing the differences between the compressed content is so hard that it leads to inefficient patches. So in the second generation of the code, they use bsdiff on each individual file, then package that, rather than diffing the original and new archives bsdiff is used in a great many other places, including shrinking the updates for the Firefox and Chrome browsers You can find out more about bsdiff here: http://www.daemonology.net/bsdiff/ A far more sophisticated algorithm, which typically provides roughly 20% smaller patches, is described in my doctoral thesis (http://www.daemonology.net/papers/thesis.pdf). 
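The "tiny change, completely different deflate output" effect described above is easy to demonstrate. The following sketch is my own toy measurement (not Google's or Colin's code): it compares the size of a naive patch region on raw bytes versus on zlib-compressed bytes after changing a single word.

```python
import zlib

def naive_patch_span(a: bytes, b: bytes) -> int:
    """Length of the differing region, measured from both ends."""
    prefix = 0
    while prefix < min(len(a), len(b)) and a[prefix] == b[prefix]:
        prefix += 1
    suffix = 0
    while (suffix < min(len(a), len(b)) - prefix
           and a[-1 - suffix] == b[-1 - suffix]):
        suffix += 1
    return max(len(a), len(b)) - prefix - suffix

old = b'the quick brown fox jumps over the lazy dog. ' * 200
new = old.replace(b'jumps', b'leaps', 1)  # change one word near the start

# Raw content: only a few bytes differ, so the patch region is tiny.
print(naive_patch_span(old, new))  # 3

# Compressed content: the deflate streams diverge over a far larger region,
# which is why patching file-by-file on uncompressed data wins.
print(naive_patch_span(zlib.compress(old), zlib.compress(new)))
```

The second number varies with the zlib version and level, but it is reliably much larger than the raw-byte span, mirroring the blog post's motivation for File-by-File patching.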
Considering the gains, it is interesting that no one has implemented Colin's more sophisticated algorithm Colin had an interesting observation (https://twitter.com/cperciva/status/806426180379230208) last night: “I just realized that bandwidth savings due to bsdiff are now roughly equal to what the total internet traffic was when I wrote it in 2003.” *** News Roundup Distrowatch does an in-depth review of NAS4Free (https://distrowatch.com/weekly.php?issue=20161114#nas4free) Jesse Smith over at DistroWatch has done a pretty in-depth review of NAS4Free. The review starts with mentioning that NAS4Free works on 3 platforms, ARM/i386/AMD64, and for the purposes of this review he would be using AMD64 builds. After going through the initial install (doing typical disk management operations, such as GPT/MBR, etc) he was ready to begin using the product. One concern originally observed was that the initial boot seemed rather slow. Investigation revealed this was due to it loading the entire OS image into memory, and the first (long) disk read did take some time, but once loaded was super responsive. The next steps involved doing the initial configuration, which meant creating a new ZFS storage pool. After this process was done, he did find one puzzling UI option called “VM” which indicated it can be linked to VirtualBox in some way, but the docs didn't reveal how to use it. Additionally covered were some of the various “Access” methods, including traditional UNIX permissions, AD and LDAP, and then various Sharing services which are typical to a NAS, such as NFS / Samba and others. One neat feature was the built-in file-browser via the web-interface, which allows you another method of getting at your data when sometimes NFS / Samba or WebDav aren't enough. Jesse gives us a nice round-up conclusion as well: Most of the NAS operating systems I have used in the past were built around useful features.
Some focused on making storage easy to set up and manage, others focused on services, such as making files available over multiple protocols or managing torrents. Some strive to be very easy to set up. NAS4Free does pretty well in each of the above categories. It may not be the easiest platform to set up, but it's probably a close second. It may not have the prettiest interface for managing settings, but it is quite easy to navigate. NAS4Free may not have the most add-on services and access protocols, but I suspect there are more than enough of both for most people. Where NAS4Free does better than most other solutions I have looked at is security. I don't think the project's website or documentation particularly focuses on security as a feature, but there are plenty of little security features that I liked. NAS4Free makes it very easy to lock the text console, which is good because we do not all keep our NAS boxes behind locked doors. The system is fairly easy to upgrade and appears to publish regular security updates in the form of new firmware. NAS4Free makes it fairly easy to set up user accounts, handle permissions and manage home directories. It's also pretty straight forward to switch from HTTP to HTTPS and to block people not on the local network from accessing the NAS's web interface. All in all, I like NAS4Free. It's a good, general purpose NAS operating system. While I did not feel the project did anything really amazing in any one category, nor did I run into any serious issues. The NAS ran as expected, was fairly straight forward to set up and easy to manage. This strikes me as an especially good platform for home or small business users who want an easy set up, some basic security and a solid collection of features. Browsix: Unix in the browser tab (https://browsix.org/) Browsix is a research project from the PLASMA lab at the University of Massachusetts, Amherst. 
The goal: Run C, C++, Go and Node.js programs as processes in browsers, including LaTeX, GNU Make, Go HTTP servers, and POSIX shell scripts. “Processes are built on top of Web Workers, letting applications run in parallel and spawn subprocesses. System calls include fork, spawn, exec, and wait.” Pipes are supported with pipe(2) enabling developers to compose processes into pipelines. Sockets include support for TCP socket servers and clients, making it possible to run applications like databases and HTTP servers together with their clients in the browser. Browsix comprises two core parts: A kernel written in TypeScript that makes core Unix features (including pipes, concurrent processes, signals, sockets, and a shared file system) available to web applications. Extended JavaScript runtimes for C, C++, Go, and Node.js that support running programs written in these languages as processes in the browser. This seems like an interesting project, although I am not sure how it would be used as more than a toy *** Book Review: PAM Mastery (https://www.cyberciti.biz/reviews/book-review-pam-mastery/) nixCraft does a book review of Michael W. Lucas' “PAM Mastery” Linux, FreeBSD, and Unix-like systems are multi-user and need some way of authenticating individual users. Back in the old days, this was done in different ways. You needed to change each Unix application to use a different authentication scheme. Before PAM, if you wanted to use an SQL database to authenticate users, you had to write specific support for that into each of your applications. Same for LDAP, etc. So the Open Group led to the development of PAM for Unix-like systems. Today Linux, FreeBSD, MacOS X and many other Unix-like systems are configured to use a centralized authentication mechanism called Pluggable Authentication Modules (PAM). The book “PAM Mastery” deals with the black magic of PAM.
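As a taste of what that black magic looks like in practice, here is a hypothetical PAM stack for an SSH-style service. The module names are real, but the exact file layout, options, and defaults differ per OS, so treat this as a sketch rather than a drop-in configuration:

```
# /etc/pam.d/sshd (hypothetical example)
# The control flag decides how each module's verdict affects the stack.
auth     required    pam_env.so          # must succeed; evaluation continues
auth     requisite   pam_nologin.so      # failure aborts the stack immediately
auth     sufficient  pam_google_authenticator.so  # OTP success ends the stack
                                                  # (unless a required module failed)
auth     required    pam_unix.so try_first_pass   # classic password check
account  required    pam_unix.so
session  required    pam_unix.so
```

The ordering matters: because the one-time-password module is marked sufficient and sits above pam_unix, a valid OTP alone would authenticate in this sketch, which is exactly the kind of subtlety the book's chapters on control flags and stacking address.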
Of course, each OS chose to implement PAM a little bit differently The book starts with the basic concepts about PAM and authentication. You learn about Multi-Factor Authentication and why use PAM instead of changing each program to authenticate the user. The author went into great details about why PAM is useful for developers and sysadmin for several reasons. The examples cover CentOS Linux (RHEL and clones), Debian Linux, and FreeBSD Unix system. I like the way the author described PAM Configuration Files and Common Modules that covers everyday scenarios for the sysadmin. PAM configuration file format and PAM Module Interfaces are discussed in easy to understand language. Control flags in PAM can be very confusing for new sysadmins. Modules can be stacked in a particular order, and the control flags determine how the success or failure of a particular module affects the overall result. There is also a chapter about using one-time passwords (Google Authenticator) for your application. The final chapter is all about enforcing good password policies for users and apps using PAM. The sysadmin would find this book useful as it covers a common authentication scheme that can be used with a wide variety of applications on Unix. You will master PAM topics and take control over authentication for your organization's IT infrastructure. If you are Linux or Unix sysadmin, I would highly recommend this book. Once again Michael W Lucas nailed it. The only book you may need for PAM deployment. get “PAM Mastery” (https://www.michaelwlucas.com/tools/pam) *** Reflections on Trusting Trust - Ken Thompson, co-author of UNIX (http://www.win.tue.nl/~aeb/linux/hh/thompson/trust.html) Ken Thompson's "cc hack" - Presented in the journal, Communications of the ACM, Vol. 27, No.
8, August 1984, in a paper entitled "Reflections on Trusting Trust", Ken Thompson, co-author of UNIX, recounted a story of how he created a version of the C compiler that, when presented with the source code for the "login" program, would automatically compile in a backdoor to allow him entry to the system. This is only half the story, though. In order to hide this trojan horse, Ken also added to this version of "cc" the ability to recognize if it was recompiling itself to make sure that the newly compiled C compiler contained both the "login" backdoor, and the code to insert both trojans into a newly compiled C compiler. In this way, the source code for the C compiler would never show that these trojans existed. The article starts off by talking about a contest to write a program that produces its own source code as output. Or rather, a C program, that writes a C program, that produces its own source code as output. The C compiler is written in C. What I am about to describe is one of many "chicken and egg" problems that arise when compilers are written in their own language. In this case, I will use a specific example from the C compiler. Suppose we wish to alter the C compiler to include the sequence "\v" to represent the vertical tab character. The extension to Figure 2 is obvious and is presented in Figure 3. We then recompile the C compiler, but we get a diagnostic. Obviously, since the binary version of the compiler does not know about "\v," the source is not legal C. We must "train" the compiler. After it "knows" what "\v" means, then our new change will become legal C. We look up on an ASCII chart that a vertical tab is decimal 11. We alter our source to look like Figure 4. Now the old compiler accepts the new source. We install the resulting binary as the new official C compiler and now we can write the portable version the way we had it in Figure 3. The actual bug I planted in the compiler would match code in the UNIX "login" command.
The replacement code would miscompile the login command so that it would accept either the intended encrypted password or a particular known password. Thus if this code were installed in binary and the binary were used to compile the login command, I could log into that system as any user. Such blatant code would not go undetected for long. Even the most casual perusal of the source of the C compiler would raise suspicions. Next, “simply add a second Trojan horse to the one that already exists. The second pattern is aimed at the C compiler. The replacement code is a Stage I self-reproducing program that inserts both Trojan horses into the compiler. This requires a learning phase as in the Stage II example. First we compile the modified source with the normal C compiler to produce a bugged binary. We install this binary as the official C. We can now remove the bugs from the source of the compiler and the new binary will reinsert the bugs whenever it is compiled. Of course, the login command will remain bugged with no trace in source anywhere.” So now there is a trojan'd version of cc. If you compile a clean version of cc, using the bad cc, you will get a bad cc. If you use the bad cc to compile the login program, it will have a backdoor. The source code for both backdoors no longer exists on the system. You can audit the source code of cc and login all you want, they are trustworthy. The compiler you use to compile your new compiler, is the untrustworthy bit, but you have no way to know it is untrustworthy, and no way to make a new compiler, without using the bad compiler. The moral is obvious. You can't trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code. In demonstrating the possibility of this kind of attack, I picked on the C compiler.
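The contest mentioned at the start of the article, writing a program that produces its own source code as output (the "Stage I" self-reproducing building block), fits in two lines. Here is a minimal quine, sketched in Python rather than Thompson's C:

```python
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints exactly those two lines. This self-reproduction property is what the compiler trojan exploits: the malicious pattern can re-emit itself into every recompiled compiler while leaving no trace in the source.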
I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well-installed microcode bug will be almost impossible to detect. Acknowledgment: I first read of the possibility of such a Trojan horse in an Air Force critique of the security of an early implementation of Multics. I cannot find a more specific reference to this document. I would appreciate it if anyone who can supply this reference would let me know. Beastie Bits Custom made Beastie Stockings (https://www.etsy.com/listing/496638945/freebsd-beastie-christmas-stocking) Migrating ZFS from mirrored pool to raidz1 pool (http://ximalas.info/2016/12/06/migrating-zfs-from-mirrored-pool-to-raidz1-pool/) OpenBSD and you (https://home.nuug.no/~peter/blug2016/) Watson.org FreeBSD and Linux cross reference (http://fxr.watson.org/) OpenGrok (http://bxr.su/) FreeBSD SA-16:37: libc (https://www.freebsd.org/security/advisories/FreeBSD-SA-16:37.libc.asc) -- A 26+ year old bug found in BSD's libc, all BSDs likely affected -- A specially crafted argument can trigger a static buffer overflow in the library, with possibility to rewrite following static buffers that belong to other library functions. HardenedBSD issues correction for libc patch (https://github.com/HardenedBSD/hardenedBSD/commit/fb823297fbced336b6beeeb624e2dc65b67aa0eb) -- original patch improperly calculates how many bytes are remaining in the buffer. From December 27th until the 30th, the 33rd Chaos Communication Congress is going to take place in Hamburg, Germany. Think of it as the yearly gathering of the european hackerscene and their overseas friends. I am one of the persons organizing the "BSD assembly (https://events.ccc.de/congress/2016/wiki/Assembly:BSD)" as a gathering place for BSD enthusiasts and waving the flag amidst all the other projects / communities.
Feedback/Questions Chris - IPFW + Wifi (http://pastebin.com/WRiuW6nn) Jason - bhyve pci (http://pastebin.com/JgerqZZP) Al - pf errors (http://pastebin.com/3XY5MVca) Zach - Xorg settings (http://pastebin.com/Kty0qYXM) Bart - Wireless Support (http://pastebin.com/m3D81GBW) ***