Podcasts about OpenTracing

  • 30 podcasts
  • 47 episodes
  • 51m average episode duration
  • Infrequent episodes
  • Latest episode: Mar 19, 2025

POPULARITY

[Popularity trend chart, 2017-2024, omitted]


Best podcasts about OpenTracing

Latest podcast episodes about OpenTracing

Open Source Startup Podcast
E169: Building New Standards for Observability - Lightstep & OpenTelemetry

Open Source Startup Podcast

Mar 19, 2025 · 38:31


Ben Sigelman is the Co-Founder & CEO of observability platform Lightstep as well as co-creator of the open source observability frameworks OpenTracing and OpenTelemetry. Lightstep was acquired by ServiceNow in 2021; OpenTelemetry was released in 2019 and has since become the standard observability framework. In this episode, we dig into:

  • The founding story of Lightstep, including the initial pivot into the idea
  • The benefits Lightstep got from open sourcing OpenTracing
  • The OpenTracing and OpenCensus merger into OpenTelemetry
  • Why OpenTelemetry has been so widely adopted
  • Ben's perspective on the many companies building with OpenTelemetry today
  • How the team made the decision to take the ServiceNow acquisition
  • Company-building learnings around team building (& more!)

The New Stack Podcast
OpenTelemetry: What's New with the 2nd Biggest CNCF Project?

The New Stack Podcast

Feb 6, 2025 · 30:14


Morgan McLean, co-founder of OpenTelemetry and senior director of product management at Splunk, has long tackled the challenges of observability in large-scale systems. In a conversation with Alex Williams on The New Stack Makers, McLean reflected on his early frustrations debugging high-scale services and the need for better observability tools.

OpenTelemetry, formed in 2019 from OpenTracing and OpenCensus, has since become a key part of modern observability strategies. As a Cloud Native Computing Foundation (CNCF) incubating project, it's the second most active open source project after Kubernetes, with over 1,200 developers contributing monthly. McLean highlighted OpenTelemetry's role in solving scaling challenges, particularly in Kubernetes environments, by standardizing distributed tracing, application metrics, and data extraction.

Looking ahead, profiling is set to become the fourth major observability signal alongside logs, traces, and metrics, with general availability expected in 2025. McLean emphasized ongoing improvements, including automation and ease of adoption, predicting even faster OpenTelemetry adoption as friction points are resolved.

Learn more from The New Stack about the latest trends in OpenTelemetry:

  • What Is OpenTelemetry? The Ultimate Guide
  • Observability in 2025: OpenTelemetry and AI to Fill In Gaps
  • Honeycomb.io's Austin Parker: OpenTelemetry In-Depth

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

OpenObservability Talks
Jaeger V2 Unveiled: Powered by OpenTelemetry - OpenObservability Talks S5E05

OpenObservability Talks

Oct 28, 2024 · 63:01


In this episode of OpenObservability Talks, Dotan Horovits sits down with Yuri Shkuro, the creator of Jaeger, to unveil the highly anticipated Jaeger V2. This major release introduces a new architecture with deep OpenTelemetry integration, which promises more flexibility, performance, extensibility and ease of use. Join us as Yuri shares insider details on the challenges, innovations, and roadmap taking Jaeger V2 toward a more efficient and scalable distributed tracing solution.

Yuri is a software engineer who works on distributed tracing, observability, reliability, and performance problems, currently at Meta; author of the book "Mastering Distributed Tracing"; creator of Jaeger, an open source distributed tracing platform originally developed at Uber; co-founder of the OpenTracing and OpenTelemetry CNCF projects; and member of the W3C Distributed Tracing Working Group.

The episode was live-streamed on 14 October 2024, and the video is available at www.youtube.com/watch?v=lICivVwm-F8. Check out the recap blog at https://medium.com/p/be612dbee774/

OpenObservability Talks episodes are released monthly, on the last Thursday of each month, and are available for listening on your favorite podcast app and on YouTube. We live-stream the episodes on Twitch and YouTube Live - tune in to see us live, and chime in with your comments and questions on the live chat.
https://www.youtube.com/@openobservabilitytalks
https://www.twitch.tv/openobservability

Show Notes:
00:00 - Intro
00:45 - Open Source Observability Day
01:46 - Episode and guest intro
04:37 - Jaeger v1.x highlights
09:04 - Jaeger scope evolution from instrumentation to backend
13:36 - Jaeger v2 - why now?
20:26 - New architecture for V2 - learnings for SW engineering
26:53 - Jaeger persistence layer, and do we need a tracing-specialized database?
35:35 - Extending OpenTelemetry to manage storage for Jaeger
38:57 - RC1 is out - when is GA expected, and what's expected?
43:24 - Breaking changes and migration path from v1 to v2
48:31 - What's expected for Jaeger UI
51:24 - New contributors joining through mentorship programs
54:47 - Observability at Meta/Facebook: machine learning, correlation, OpenTelemetry
1:01:04 - Outro

Resources:
https://www.jaegertracing.io/docs/next-release-v2/getting-started/
https://medium.com/jaegertracing/towards-jaeger-v2-moar-opentelemetry-2f8239bee48e
https://horovits.medium.com/observability-into-your-finops-taking-distributed-tracing-beyond-monitoring-48a51e32e78a
https://www.youtube.com/watch?v=35aInRLbTQo&list=PLd57eY2edRXz4djMETYTm-2p8WGTdoX3D
https://www.youtube.com/watch?v=1l0HKUDoX4Q&list=PLd57eY2edRXz4djMETYTm-2p8WGTdoX3D
https://research.facebook.com/publications/positional-paper-schema-first-application-telemetry/
https://research.facebook.com/publications/scuba-diving-into-data-at-facebook/
https://osoday.com/

Socials:
Twitter: https://twitter.com/OpenObserv
YouTube: https://www.youtube.com/@openobservabilitytalks

Dotan Horovits
Twitter: @horovits
LinkedIn: www.linkedin.com/in/horovits
Mastodon: @horovits@fosstodon
BlueSky: @horovits.bsky.social

Yuri Shkuro
Twitter: https://twitter.com/YuriShkuro
LinkedIn: https://www.linkedin.com/in/yurishkuro/

Engineering Kiosk
#101 Observability und OpenTelemetry mit Severin Neumann

Engineering Kiosk

Dec 12, 2023 · 69:13


Effective observability with OpenTelemetry: In the past, many applications were a black box, especially for Ops, a.k.a. the operations department. Then logging arrived: apps wrote log lines, for example when the app had finished starting up or when something went wrong. In a way, logs were how the devs started communicating with the Ops folks. Some time later came metrics: how much RAM the app consumes, how often the garbage collector was triggered, but also business metrics, such as how often an order was placed or when a geo search was started instead of a text search. Was that all? No. The latest hype: traces - precise insight into which code path the app took and how long it took, including all the metadata we could wish for. And if you put all of this into one bag, shake it well, and end up with a system you can ask questions of based on this data, that is called observability. And that is exactly where the OpenTelemetry project comes in. In this episode we talk with the expert Severin Neumann about observability and OpenTelemetry. Bonus: what is a sales engineer?

**** This episode is sponsored by www.aboutyou.de - ABOUT YOU is one of the largest online fashion shops in Europe and is always on the lookout for tech talent, for example a (Lead) DevOps/DataOps Engineer Google Cloud Platform or a Lead Platform Engineer. You can find all openings at https://corporate.aboutyou.de/en/our-jobs ****

Quick feedback on the episode:
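The progression the episode describes - logs, then metrics, then traces, all feeding one queryable system - can be sketched in a few lines of plain Python. This is a toy illustration only; the names `metrics`, `spans` and `place_order` are invented here and have nothing to do with the OpenTelemetry API:

```python
import logging
import time
from collections import Counter

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("shop")

metrics = Counter()   # e.g. how often an order was placed (a business metric)
spans = []            # (name, duration_seconds) pairs - a crude stand-in for traces

def place_order(order_id):
    start = time.perf_counter()                # a trace times the code path taken
    log.info("processing order %s", order_id)  # the log line Ops traditionally relied on
    metrics["orders_total"] += 1               # the business metric
    spans.append(("place_order", time.perf_counter() - start))

place_order("A-1001")
place_order("A-1002")
print(metrics["orders_total"])  # 2
```

Observability, in the episode's framing, is what you get once all three signal types land in one system you can ask questions of.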

Streaming Audio: a Confluent podcast about Apache Kafka
How to use OpenTelemetry to Trace and Monitor Apache Kafka Systems

Streaming Audio: a Confluent podcast about Apache Kafka

Feb 1, 2023 · 50:01 (transcription available)


How can you use OpenTelemetry to gain insight into your Apache Kafka® event systems? Roman Kolesnev, Staff Customer Innovation Engineer at Confluent, is a member of the Customer Solutions & Innovation Division Labs team working to build business-critical OpenTelemetry applications so companies can see what's happening inside their data pipelines. In this episode, Roman joins Kris to discuss tracing and monitoring in distributed systems using OpenTelemetry. He talks about how monitoring each step of the process individually is critical to discovering potential delays or bottlenecks before they happen, including keeping track of timestamps, latency information, exceptions, and other data points that could help with troubleshooting.

Tracing each request and its journey to completion in Kafka gives companies access to invaluable data that provides insight into system performance and reliability. Furthermore, using this data allows engineers to quickly identify errors or anticipate potential issues before they become significant problems. With greater visibility comes better control over application health - all made possible by OpenTelemetry's unified APIs and services.

As described on the OpenTelemetry.io website, "OpenTelemetry is a Cloud Native Computing Foundation incubating project. Formed through a merger of the OpenTracing and OpenCensus projects." It provides a vendor-agnostic way for developers to instrument their applications across different platforms and programming languages while adhering to standard semantic conventions, so the traces and information can be streamed to compatible systems following similar specs.

By leveraging OpenTelemetry, organizations can ensure their applications and systems are secure and perform optimally. It will quickly become an essential tool for large-scale organizations that need to efficiently process massive amounts of real-time data. With its ability to scale independently, robust analytics capabilities, and powerful monitoring tools, OpenTelemetry is set to become the go-to platform for stream processing in the future.

Roman explains that the OpenTelemetry APIs for Kafka are still in development and not yet available as open source. The code is complete and tested but has never run in production. If you want to learn more about the nuts and bolts, he invites you to connect with him on the Confluent Community Slack channel. You can also check out "Monitoring Kafka without instrumentation with eBPF - Antón Rodríguez" to learn more about a similar approach for domain monitoring.

EPISODE LINKS
  • OpenTelemetry java instrumentation
  • OpenTelemetry collector
  • Distributed Tracing for Kafka with OpenTelemetry
  • Monitoring Kafka without instrumentation with eBPF
  • Kris Jenkins' Twitter
  • Watch the video
  • Join the Confluent Community
  • Learn more with Kafka tutorials, resources, and guides at Confluent Developer
  • Live demo: Intro to Event-Driven Microservices with Confluent
  • Use PODCAST100 to get $100 of free Confluent Cloud usage (details)
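The per-step bookkeeping Roman describes - timestamps and latency for each stage a message passes through - can be sketched with a toy message envelope. All names here are invented for illustration; real OpenTelemetry Kafka instrumentation propagates trace context via record headers, not a payload field:

```python
import time
import uuid

def new_message(payload):
    # Hypothetical envelope: one trace id follows the message end to end,
    # and each processing step appends a latency record ("hop").
    return {"trace_id": uuid.uuid4().hex, "payload": payload, "hops": []}

def process(msg, step, work):
    # Time each stage individually so delays can be attributed to a step.
    start = time.perf_counter()
    work()
    msg["hops"].append({"step": step, "seconds": time.perf_counter() - start})
    return msg

msg = new_message("order-created")
for step in ("validate", "enrich", "persist"):
    process(msg, step, lambda: time.sleep(0.005))  # stand-in for real work

# The slowest hop is the candidate bottleneck.
slowest = max(msg["hops"], key=lambda h: h["seconds"])
print(slowest["step"], [h["step"] for h in msg["hops"]])
```

With the hops recorded, questions like "which stage added the latency?" become a lookup instead of guesswork - which is the point of tracing each request's journey to completion.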

Engineering Kiosk
#38 Monitoring, Metriken, Tracing, Alerting, Observability

Engineering Kiosk

Sep 27, 2022 · 55:50


What would a modern logging, metrics, monitoring, alerting and tracing stack look like today? In the infrastructure space there are plenty of tools for every area, and cloud native is the buzzword of the hour. In this episode, Andy explains how he would set up a modern stack for a side project covering logging, metrics, monitoring, alerting and tracing. Among other things, it tackles questions like: What should you actually log? How can an alert call you on the phone? How do you visualize data in nice graphs? Do we need tracing? And what is observability? Bonus: engineering porn and buzzword bingo.

Feedback (voice messages welcome)
Email: stehtisch@engineeringkiosk.dev
Twitter: https://twitter.com/EngKiosk
WhatsApp: +49 15678 136776
We are also happy to cover your audio feedback in one of the next episodes - just send an audio file via email or WhatsApp voice message to +49 15678 136776.

Links
  • Episode #37 "Mit IT-Büchern Geld verdienen? Wer liest überhaupt noch Bücher?": https://engineeringkiosk.dev/podcast/episode/37-mit-it-b%C3%BCchern-geld-verdienen-wer-liest-%C3%BCberhaupt-noch-b%C3%BCcher/?pkn=shownotes
  • Episode #17 "Was können wir beim Incident Management von der Feuerwehr lernen?": https://engineeringkiosk.dev/podcast/episode/17-was-k%C3%B6nnen-wir-beim-incident-management-von-der-feuerwehr-lernen/?pkn=shownotes
  • Sentry: https://sentry.io/
  • Datadog: https://www.datadoghq.com/
  • Splunk: https://www.splunk.com/
  • Elasticsearch: https://www.elastic.co/de/enterprise-search/
  • Logstash: https://github.com/elastic/logstash
  • Kibana: https://github.com/elastic/kibana
  • OpenSearch: https://opensearch.org/
  • Elastic Cloud: https://www.elastic.co/de/cloud/
  • Aiven: https://aiven.io/
  • Fluentd: https://www.fluentd.org/
  • Amazon S3 and S3 Glacier: https://aws.amazon.com/de/s3/
  • Amazon Athena: https://aws.amazon.com/de/athena/
  • Prometheus: https://prometheus.io/
  • VictoriaMetrics: https://github.com/VictoriaMetrics/VictoriaMetrics
  • InfluxDB: https://www.influxdata.com/
  • M3 Metrics Engine: https://m3db.io/
  • Prometheus Node Exporter: https://github.com/prometheus/node_exporter
  • Grafana: https://github.com/grafana/grafana
  • PromQL: https://prometheus.io/docs/prometheus/latest/querying/basics/
  • OpsGenie: https://www.atlassian.com/de/software/opsgenie
  • Jaeger: https://www.jaegertracing.io/
  • Zipkin: https://zipkin.io/
  • OpenTracing: https://opentracing.io/
  • OpenTelemetry: https://opentelemetry.io/
  • Yak shaving: https://seths.blog/2005/03/dont_shave_that/
  • Cloud Native Computing Foundation: https://www.cncf.io/

Chapter marks
(00:00:00) Intro
(00:00:50) Wolfgang's MySQL book
(00:02:11) Today's topic: how would Andy approach monitoring, alerting, metrics and logging for a side project?
(00:04:49) Why do you need logging, monitoring, metrics and tracing?
(00:07:29) Logging exceptions, warnings and other errors; logging and the ELK stack
(00:16:06) What should you actually log?
(00:19:22) Log rotation and log retention on object storage
(00:27:30) Metrics with Prometheus
(00:31:46) Visualizing metrics with Grafana
(00:34:25) Intelligent alerting systems and finding the right thresholds
(00:38:47) Sending alerts and getting called
(00:43:22) Tracing: what is it and do we need it?
(00:48:49) What is observability?
(00:51:42) Building the platform iteratively, and alternatives
(00:54:49) No paid advertising
(00:55:14) Outro and feedback

Hosts
Wolfgang Gassler (https://twitter.com/schafele)
Andy Grunwald (https://twitter.com/andygrunwald)

OpenObservability Talks
Expensive Observability: The Cardinality Challenge - OpenObservability Talks S3E02

OpenObservability Talks

Jul 28, 2022 · 60:03


We all collect logs, metrics and perhaps traces and other data types in support of our observability. But this can get expensive pretty quickly, especially in microservices-based systems - what is commonly known as "the cardinality problem".

On this episode of OpenObservability Talks I'll host Ben Sigelman, co-founder and GM of Lightstep, to discuss this data problem and how to overcome it. Ben architected Google's own planet-scale metrics and distributed tracing systems (still in production today), and went on to co-create the open-source OpenTracing and OpenTelemetry projects, both part of the CNCF.

The episode was live-streamed on 12 July 2022 and the video is available at https://youtu.be/gJhzwP-mZ2k

OpenObservability Talks episodes are released monthly, on the last Thursday of each month, and are available for listening on your favorite podcast app and on YouTube. We live-stream the episodes on Twitch and YouTube Live - tune in to see us live, and pitch in with your comments and questions on the live chat.
https://www.twitch.tv/openobservability
https://www.youtube.com/channel/UCLKOtaBdQAJVRJqhJDuOlPg

Have you got an interesting topic you'd like to share in an episode? Reach out to us and submit your proposal at https://openobservability.io/

Show Notes:
  • The difference between monitoring, observability and APM
  • What comprises the cost of observability
  • How common is the knowledge of cardinality and how to add metrics
  • Controlling cost with sampling, verbosity and retention
  • Lessons from Google's metrics and tracing systems
  • Using metric rollups and aggregations intelligently
  • Semantic conventions for logs, metrics and traces
  • OpenCost project
  • New research paper by Meta on a schema-first approach to application telemetry metadata
  • OTel code contributions - published stats

Resources:
  • Monitoring vs. observability: https://twitter.com/el_bhs/status/1349406398388400128
  • The two drivers of cardinality: https://twitter.com/el_bhs/status/1360276734344450050
  • Sampling vs verbosity: https://twitter.com/el_bhs/status/1440750741384089608
  • Observing resources and transactions: https://twitter.com/el_bhs/status/1372636288021524482

Socials:
Twitter: https://twitter.com/OpenObserv
Twitch: https://www.twitch.tv/openobservability
YouTube: https://www.youtube.com/channel/UCLKOtaBdQAJVRJqhJDuOlPg

Software at Scale
Software at Scale 47 - OpenTelemetry with Ted Young

Software at Scale

May 26, 2022 · 93:41


Ted Young is the Director of Developer Education at Lightstep and a co-founder of the OpenTelemetry project.

Apple Podcasts | Spotify | Google Podcasts

This episode dives deep into the history of OpenTelemetry, why we need a new telemetry standard, all the work that goes into building generic telemetry processing infrastructure, and the vision for unified logging, metrics and traces.

Episode Reading List
Instead of highlights, I've attached links to some of our discussion points.
  • HTTP Trace Context - new headers to support a standard way to preserve state across HTTP requests
  • OpenTelemetry Data Collection
  • Zipkin
  • OpenCensus and OpenTracing - the precursor projects to OpenTelemetry

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.softwareatscale.dev
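The HTTP Trace Context headers from the reading list center on a `traceparent` value with a fixed shape defined by the W3C spec: version, trace-id, parent-id and trace-flags as hex fields joined by dashes. A minimal parser sketch (the helper name `parse_traceparent` is ours, not part of any library):

```python
import re

# Field widths per the W3C Trace Context spec:
#   2 hex chars version, 32 hex trace-id, 16 hex parent-id, 2 hex trace-flags
TRACEPARENT = re.compile(
    r"^(?P<version>[0-9a-f]{2})-"
    r"(?P<trace_id>[0-9a-f]{32})-"
    r"(?P<parent_id>[0-9a-f]{16})-"
    r"(?P<flags>[0-9a-f]{2})$"
)

def parse_traceparent(value):
    """Split a traceparent header into its four fields, or raise ValueError."""
    m = TRACEPARENT.match(value)
    if m is None:
        raise ValueError("malformed traceparent header")
    return m.groupdict()

# Example value taken from the W3C Trace Context specification.
ctx = parse_traceparent("00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
print(ctx["trace_id"])  # 4bf92f3577b34da6a3ce929d0e0e4736
```

Preserving and forwarding these fields across service hops is what lets a backend stitch individual spans into one distributed trace.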

PurePerformance
OpenTelemetry from a Contributors perspective with Daniel Dyla and Armin Ruech

PurePerformance

Apr 18, 2022 · 52:56


OpenTelemetry is, for some, the biggest romance story in open source, as it took off with the merger of OpenCensus and OpenTracing. But what is OpenTelemetry from the perspective of a contributor? Listen to this episode and hear it from Daniel Dyla, co-maintainer of OTel JS and member of the W3C Distributed Tracing WG, and Armin Ruech, who is on the Technical Committee focusing on cross-language specifications. They give us insights into what it takes to contribute to and drive an open source project, and give us an update on OpenTelemetry: the current status, what they are working on right now, as well as the near-future improvements they are excited about.

Show Links:
  • The OpenTelemetry Project: https://opentelemetry.io/
  • Daniel Dyla: https://engineering.dynatrace.com/persons/daniel-dyla/
  • Armin Ruech: https://engineering.dynatrace.com/persons/armin-ruech/
  • List of instrumented libraries: https://opentelemetry.io/registry/
  • Contribute to OTel: https://opentelemetry.io/docs/contribution-guidelines/
  • OpenTelemetry tutorials on IsItObservable: https://isitobservable.io/open-telemetry

OpenObservability Talks
Building web-scale observability at Slack, Pinterest & Twitter - OpenObservability Talks S2E09

OpenObservability Talks

Feb 27, 2022 · 58:39


What does it take to build observability at a web-scale company such as Slack, Pinterest or Twitter? On this episode of OpenObservability Talks I'll host Suman Karumuri to hear how he built these systems from the ground up at these Big Tech companies, about his recent research papers, and more.

Suman Karumuri is a Sr. Staff Software Engineer and the tech lead for Observability at Slack. He is an expert in distributed tracing, was a tech lead of Zipkin, and is a co-author of the OpenTracing standard, a Linux Foundation project via the CNCF. Previously, Suman spent several years building and operating petabyte-scale log search, distributed tracing and metrics systems at Pinterest, Twitter and Amazon. In his spare time, he enjoys board games, hiking and playing with his kids.

The episode was live-streamed on 16 February 2022 and the video is available at https://youtu.be/IvidkV3TfYg

OpenObservability Talks episodes are released monthly, on the last Thursday of each month, and are available for listening on your favorite podcast app and on YouTube. We live-stream the episodes on Twitch and YouTube Live - tune in to see us live, and pitch in with your comments and questions on the live chat.
https://www.twitch.tv/openobservability
https://www.youtube.com/channel/UCLKOtaBdQAJVRJqhJDuOlPg

Show Notes:
  • Who owns observability in large organizations?
  • The gaps in the current way of handling metrics
  • MACH research paper for a metrics storage engine
  • The gaps in the current way of handling logs
  • Slack KalDB
  • SlackTrace - Slack's in-house tracing system

Resources:
  • Research paper: building Observability Data Management Systems (CIDR paper), video
  • SlackTrace blog post, talk
  • Logging at Twitter
  • Pintrace: A Distributed Tracing Pipeline - talk by Suman at LISA
  • Observability Engineering book
  • Observability Trends for 2022
  • Yelp engineering with Elasticsearch and Lucene

Socials:
Twitter: https://twitter.com/OpenObserv
Twitch: https://www.twitch.tv/openobservability
YouTube: https://www.youtube.com/channel/UCLKOtaBdQAJVRJqhJDuOlPg

airhacks.fm podcast with adam bien

An airhacks.fm conversation with Emily Jiang (@emilyfhjiang) about: the Chinese JavaONE, the MicroProfile book, writing a book in a caravan, the MicroProfile 5 release, MicroProfile 5.0 ships with the Jakarta namespace, OpenLiberty supports MicroProfile 5.0, OpenTracing and OpenCensus merged into OpenTelemetry, MicroProfile OpenTelemetry will deprecate MicroProfile OpenTracing, the Traced annotation and Tracer interface comprise the OpenTracing spec, MicroProfile Metrics and Micrometer, a shim layer around Micrometer could become MicroProfile Metrics, Jakarta EE is a shim, the Quarkus with Micrometer screencast, the MicroProfile Metrics "application" registry is useful for business metrics and KPIs, MicroProfile standalone vs. platform releases, Jakarta EE 10 Core Profile will be consumed by MicroProfile, Jakarta Concurrency and Core Profile, MicroProfile Context Propagation integration with CDI, the importance of Jakarta EE Concurrency, a MicroProfile logging facade discussion, OpenTelemetry's logging branch, the AWS Lambda logging interface, injecting java.util.logging loggers and Java interface-based log facades, MicroProfile Metrics custom scopes, a service mesh does not have any application-level insights, a service mesh performs a fallback based on traffic patterns and not application logic, fault tolerance testing with service mesh vs. MicroProfile Fault Tolerance, MicroProfile and data access specification evaluation, the Quarkus with MicroProfile as AWS Lambda screencast, the Quarkus with MicroProfile as AWS Lambda github project, the AWS serverless containers Jersey implementation, the explaining AWS Lambdas with EJB talk, Message Driven Beans as email listeners with JCA, serverless and the ROI point of view, the self-explanatory serverless billing, OSGi is great for building runtimes, integrating MicroProfile Config with Jakarta EE, the Practical Cloud-Native Java Development with MicroProfile: Develop and deploy scalable, resilient, and reactive cloud-native applications using MicroProfile 4.1 book

Emily Jiang on twitter: @emilyfhjiang

airhacks.fm podcast with adam bien
Debezium, Server, Engine, UI and the Outbox

airhacks.fm podcast with adam bien

Nov 28, 2021 · 67:11


An airhacks.fm conversation with Gunnar Morling (@gunnarmorling) about: debezium as analytics enablement, enriching events with quarkus, ksqlDB and PrestoDB and trino, cloud migrations with Debezium, embedded Debezium Engine, debezium server vs. Kafka Connect, Debezium Server with sink connectors, Apache Pulsar, Redis Streams are supporting Debezium Server, Debezium Server follows the microservice architecture, pluggable offset stores, JDBC offset store is Apache Iceberg connector, DB2, MySQL, PostgreSQL, MongoDB change streams, Cassandra, Vitess, Oracle, Microsoft SQL Server scylladb is cassandra compatible and provides external debezium connector, debezium ui is written in React, incremental snapshots, netflix cdc system, DBLog: A Watermark Based Change-Data-Capture Framework, multi-threaded snapshots, internal data leakage and the Outbox pattern, debezium listens to the outbox pattern, OpenTracing integration and the outbox pattern, sending messages directly to transaction log with PostgreSQL, Quarkus outbox pattern extension, the transaction boundary topic Gunnar Morling on twitter: @gunnarmorling and debezium.io

Python Bytes
#256 And the best open source project prize goes to ...

Python Bytes

Oct 29, 2021 · 59:36


Watch the live stream: Watch on YouTube About the show Sponsored by Shortcut - Get started at shortcut.com/pythonbytes Special guest: The Anthony Shaw Michael #0: It's episode 2^8 (nearly 5 years of podcasting) Brian #1: Where does all the effort go?: Looking at Python core developer activity Łukasz Langa A look into CPython repository history and PR data Also, nice example of datasette in action and lots of SQL queries. The data, as well as the process, is open for anyone to look at. Cool that the process was listed in the article, including helper scripts used. Timeframe for data is since Feb 10, 2017, when source moved to GitHub, through Oct 9, 2021. However, some queries in the article are tighter than that. Queries Files involved in PRs since 1/1/20 top is ceval.c with 259 merged PRs Contributors by number of merged PRs lots of familiar names in the top 50, along with some bots it'd be fun to talk with someone about the bots used to help the Python project nice note: “Clearly, it pays to be a bot … or a release manager since this naturally causes you to make a lot of commits. But Victor Stinner and Serhiy Storchaka are neither of these things and still generate amazing amounts of activity. Kudos! In any case, this is no competition but it was still interesting to see who makes all these recent changes.” Who contributed where? Neat. There's a self reported Experts Index in the very nice Python Developer's Guide. But some libraries don't have anyone listed. The data does though. Łukasz generated a top-5 list for each file. Contributing to some file and have a question. These folks may be able to help. Averages for PR activity core developer authoring and merging their own PR takes on average ~7 days (std dev ±41.96 days); core developer authoring a PR which was merged by somebody else takes on average 20.12 days (std dev ±77.36 days); community member-authored PRs get merged on average after 19.51 days (std dev ±81.74 days). 
Interesting note on those std deviations: “Well, if we were a company selling code review services, this standard deviation value would be an alarmingly large result. But in our situation which is almost entirely volunteer-driven, the goal of my analysis is to just observe and record data. The large standard deviation reflects the large amount of variation but isn't necessarily something to worry about. We could do better with more funding but fundamentally our biggest priority is keeping CPython stable. Certain care with integrating changes is required. Erring on the side of caution seems like a wise thing to do.” More questions to be asked, especially from the issue tracker Which libraries require most maintenance? Michael #2: Why you shouldn't invoke setup.py directly By Paul Ganssle (from Talk Python #271: Unlock the mysteries of time, Python's datetime that is!) In response to conversation in Talk Python's cibuildwheel episode? For a long time, setuptools and distutils were the only game in town when it came to creating Python packages You write a setup.py file that invokes the setup() method, you get a Makefile-like interface exposed by invoking python setup.py [HTML_REMOVED] The last few years all direct invocations of setup.py are effectively deprecated in favor of invocations via purpose-built and/or standards-based CLI tools like pip, build and tox. In Python 2.0, the distutils module was introduced as a standard way to convert Python source code into *nix distro packages One major problem with this approach, though, is that every Python package must use distutils and only distutils — there was no standard way for a package author to make it clear that you need other packages in order to build or test your package. 
=> Setuptools Works, but sometimes you need requirements before the install (see cython example) A build backend is something like setuptools or flit, which is a library that knows how to take a source tree and turn it into a distributable artifact — a source distribution or a wheel. A build frontend is something like pip or build, which is a program (usually a CLI tool) that orchestrates the build environment and invokes the build backend In this taxonomy, setuptools has historically been both a backend and a frontend - that said, setuptools is a terrible frontend. It does not implement PEP 517 or PEP 518's requirements for build frontends Why am I not seeing deprecation warnings? Use build package. Also can be replaced by tox, nox or even a Makefile Probably should just check out the summary table. Anthony #3: OpenTelemetry is going stable soon Cloud Native Computing Foundation project for cross-language event tracing, performance tracing, logging and sampling for distributed applications. Engineers from Microsoft, Amazon, Splunk, Google, Elastic, New Relic and others working on standards and specification. Formed through a merger of the OpenTracing and OpenCensus projects. Python SDK supports instrumentation of lots of frameworks, like Flask, Django, FastAPI (ASGI), and ORMs like SQLalchemy, or templating engines. All data can then be exported onto various platforms : NewRelic, Prometheus, Jaeger, DataDog, Azure Monitor, Google Cloud Monitoring. If you want to get started and play around, checkout the rich console exporter I submitted recently. Brian #4: Understanding all of Python, through its builtins Tushar Sadhwani I really enjoyed the discussion before he actually got to the builtins. LEGB rule defines the order of scopes in which variables are looked up in Python. Local, Enclosing (nonlocal), Global, Builtin Understanding LEGB is a good thing to do for Python beginners or advanced beginners. Takes a lot of the mystery away. 
Also that all the builtins are in one The rest is a quick scan through the entire list. It's not detailed everywhere, but pulls over scenic viewpoints at regular intervals to discuss interesting parts of builtins. Grouped reasonably. Not alphabetical Constants: There's exactly 5 constants: True, False, None, Ellipsis, and NotImplemented. globals and locals: Where everything is stored bytearray and memoryview: Better byte interfaces bin, hex, oct, ord, chr and ascii: Basic conversions … Well, it's a really long article, so I suggest jumping around and reading a section or two, or three. Luckily there's a nice TOC at the top. Michael #5: FastAPI, Dask, and more Python goodies win best open source titles Things that stood out to me FastAPI Dask Windows Terminal minikube - Kubernetes cluster on your PC OBS Studio Anthony #6: Notes From the Meeting On Python GIL Removal Between Python Core and Sam Gross Following on from last week's share on the “nogil” branch by Sam Gross, the Core Dev sprint included an interview. Targeted to 3.9 (alpha 3!), needs to at least be updated to 3.9.7. Nogil: Replaces pymalloc with mimalloc for thread safety Ties objects to the thread that created them witha. non-atomic local reference count within the owner thread Allows for (slower) reference counting from other threads. Immortalized some objects so that references never get inc/dec'ed like True, False, None, etc. Deferred reference counting Adjusts the GC to wait for all threads to pause at a safe point, doesn't wait for I/O blocked threads and constructs a list of objects to deallocate using mimalloc Relocates the MRO to a thread local (instead of process-local) to avoid contention on ref counting Modifies the builtin collections to be thread-safe (lists, dictionaries, etc,) since they could be shared across threads. IMHO, biggest thing to happen to Python in 5 years. Encouragingly, Sam was invited to be a Core Dev and Lukasz will mentor him! 
Extras

Michael:
Python Developers Survey 2021 is open
More PyPI CLI updates
bump2version via Bahram Aghaei (YouTube comment)
Was there a bee stuck in Brian's mic last time?

Brian:
PyCon US 2022 CFP is open until Dec 20
Python Testing with pytest, 2nd edition, Beta 7.0
All chapters are now there. (The final chapter was "Advanced Parametrization".)
It's in the technical review phase now. If reading, please skip ahead to the chapter you really care about and submit errata if you find anything confusing.

Joke:

Elixir Mix
Event Sourcing and CQRS ft. Ben Moss - EMx 148

Elixir Mix

Play Episode Listen Later Oct 13, 2021 54:41


Ben Moss joins the Mix to discuss Event Sourcing and CQRS in Elixir. Event sourcing is the practice of recording every change as an entry in an append-only log of events and then reconstructing state by replaying those events. CQRS is focused on keeping read and write operations from conflicting.

Panel: Adi Iyengar, Allen Wyma, Sascha Wolf
Guest: Ben Moss
Sponsors: Dev Influencers Accelerator; Level Up | Devchat.tv; PodcastBootcamp.io
Links: GitHub | commanded/commanded; Event Store; Tackling software complexity with the CELP stack; Event sourcing in practice - Using Elixir to build event-driven applications; Bitfield; Twitter: Benjamin Moss (@benjamintmoss)
Picks: Adi - AngelList - Engineering Lead; Adi - theScore - Software Developer; Adi - Community - Senior Software Engineer, Backend; Allen - Book - Flutter in Action; Ben - Toronto Elixir; Ben - Event Modeling; Sascha - OpenTelemetry; Sascha - OpenTracing; Sascha - Headspace; Sascha - 7Mind
Contact Adi: Adi Iyengar - The Bug Catcher; GitHub: Adi Iyengar (thebugcatcher); Twitter: Adi Iyengar (@lebugcatcher)
Contact Allen: Plangora; Plangora Limited; Plangora - YouTube; Plangora | Facebook; Tech_Plangora Limited_Elixir | Instagram; Twitter: Plangora (@Plangora); LinkedIn: Plangora - Web and Mobile Development; Plangora - Reddit; Flying High With Flutter; Flying High with Flutter - YouTube; Flying High with Flutter | Facebook; Flying High With Flutter | Instagram; Twitter: Flying High with Flutter (@fhwflutter); Teach Me Code; Teach Me Code | Facebook; TeachMeCode | Instagram
Contact Sascha: Sascha Wolf
Special Guest: Benjamin Moss.
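The core idea here, rebuilding state by replaying an append-only log of events, can be sketched in a few lines. This is a toy illustration in Python with invented account/deposit names; commanded's actual Elixir API looks nothing like this:

```python
# Toy event sourcing: state is never stored directly, it is derived
# by replaying an append-only event log from the beginning.
events = []  # the event store (append-only)

def apply_event(state, event):
    """Pure reducer: fold one event into the current state."""
    kind, amount = event
    if kind == "deposited":
        return state + amount
    if kind == "withdrawn":
        return state - amount
    return state

def append(event):
    events.append(event)  # events are only ever appended, never mutated

def current_balance():
    # Reconstruct state from the full event history.
    balance = 0
    for event in events:
        balance = apply_event(balance, event)
    return balance

append(("deposited", 100))
append(("withdrawn", 30))
print(current_balance())  # 70
```

Because the log is the source of truth, a CQRS read model is just another consumer that folds the same events into a shape optimized for queries.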

OpenObservability Talks
Observability Into Your Business And FinOps - OpenObservability Talks S2E04

OpenObservability Talks

Play Episode Listen Later Sep 19, 2021 66:47


Observability is becoming a common practice for DevOps teams monitoring and troubleshooting IT systems. But observability can offer much more than that. More advanced use of telemetry, and in particular distributed tracing and its context propagation mechanism, can uncover insights into your business performance and can help solve business and FinOps problems. On this episode of OpenObservability Talks I hosted Yuri Shkuro, creator of the Jaeger project and a champion of distributed tracing, to discuss how tracing and observability can help beyond DevOps, whether on business cases, FinOps, or even software development. We also caught up on the latest updates from Jaeger, the CNCF's distributed tracing OSS project, its synergy with OpenTelemetry, and more topics. Yuri is a software engineer who works on distributed tracing, observability, reliability, and performance problems; author of the book "Mastering Distributed Tracing"; creator of Jaeger, an open source distributed tracing platform and a graduated CNCF project; co-founder of the OpenTracing and OpenTelemetry CNCF projects; and a member of the W3C Distributed Tracing Working Group. The episode was live-streamed on 14 September 2021 and the video is available at https://youtube.com/live/YSOyTagKGtM

Show Notes:
Why distributed tracing?
Tracing through async flows
Why is adoption of tracing so slow?
The instrumentation challenge for tracing adoption
Using context propagation for business use cases
Observability tooling maturity
Jaeger project updates
OpenTelemetry accepted to CNCF incubation
Cortex and Thanos accepted to CNCF incubation
Google contributing the SQLCommenter project to OTel
K8s v1.22 releases API Server tracing in alpha

Resources:
Great addition in the Kubernetes v1.22 release: API Server Tracing, based on OpenTelemetry
Tracing at Uber and the beginning of the Jaeger project: Distributed Tracing at Uber-Scale episode
OpenTelemetry becomes a CNCF incubating project
Cortex accepted to CNCF incubation in August
Thanos accepted to CNCF incubation in August
Google Donates Sqlcommenter to OpenTelemetry Project
From Distributed Tracing to APM: Taking OpenTelemetry and Jaeger Up a Level
Mastering Distributed Tracing by Yuri Shkuro
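The context propagation mechanism discussed in the episode can be modeled in a few lines: trace context plus business "baggage" ride along in request headers between services. The header names below are invented for illustration; real systems use the W3C Trace Context and baggage headers:

```python
# Toy model of context propagation: trace context and "baggage"
# (business attributes like tenant or plan) travel between services
# inside request headers, so downstream services can use them.
def inject(ctx, headers):
    """Write the context into outgoing request headers."""
    headers["x-trace-id"] = ctx["trace_id"]
    for key, value in ctx["baggage"].items():
        headers["x-baggage-" + key] = value
    return headers

def extract(headers):
    """Rebuild the context from incoming request headers."""
    baggage = {k[len("x-baggage-"):]: v
               for k, v in headers.items() if k.startswith("x-baggage-")}
    return {"trace_id": headers["x-trace-id"], "baggage": baggage}

upstream = {"trace_id": "abc123", "baggage": {"tenant": "acme"}}
headers = inject(upstream, {})
downstream = extract(headers)
print(downstream["baggage"]["tenant"])  # acme
```

This is what makes the business and FinOps use cases possible: an attribute attached once at the edge (tenant, feature flag, cost center) is visible at every hop of the request.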

airhacks.fm podcast with adam bien
MicroProfile Metrics, Micrometer and Quarkus

airhacks.fm podcast with adam bien

Play Episode Listen Later May 12, 2021 84:49


An airhacks.fm conversation with Erin Schnabel (@ebullientworks) about: switching from IBM to Red Hat, the great ThinkPad 31p, Gentoo Linux on a Dell laptop, Dell vs. Alienware, working on Quarkus, the Q Quarkus issue, Quarkus, Health, Metrics, OpenAPI, OpenLiberty, Quarkus and the non-application namespace, a ThinkPad with Windows Vista and an Apple sticker, Erin Schnabel - Metrics for the win!, the j4k.io conference, micrometer comes with support for Datadog metrics and other non-Prometheus backends, Prometheus as the integration point, the relation between MicroProfile Metrics and micrometer, OpenTracing, OpenCensus and OpenTelemetry, Quarkus and MicroProfile. Erin Schnabel on twitter: @ebullientworks
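The "Prometheus as integration point" idea comes down to one thing: whatever metrics facade you use (micrometer, MicroProfile Metrics), it ultimately emits Prometheus's plain-text exposition format. A minimal hand-rolled renderer of that format (my illustration, not the micrometer or MicroProfile API):

```python
# Render one metric family in the Prometheus text exposition format:
# a HELP line, a TYPE line, then one sample line per label set.
def render(name, help_text, metric_type, samples):
    lines = [f"# HELP {name} {help_text}", f"# TYPE {name} {metric_type}"]
    for labels, value in samples:
        label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        lines.append(f"{name}{{{label_str}}} {value}")
    return "\n".join(lines)

print(render("http_requests_total", "Total HTTP requests.", "counter",
             [({"method": "get"}, 42), ({"method": "post"}, 7)]))
```

Because every backend can scrape this format, a library that exposes it interoperates with Prometheus, Datadog and the rest without bespoke exporters for each.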

airhacks.fm podcast with adam bien
I don't want your Thorntail

airhacks.fm podcast with adam bien

Play Episode Listen Later Nov 8, 2020 70:37


An airhacks.fm conversation with Ken Finnigan (@kenfinnigan) about: Commodore 64 in 1984, Commodore 128D in 1986, creating a Star Wars game, approaching the dark star, a Gateway XT with 20 MB hard drive and 640kB RAM, playing with DBase IV, Lotus 1-2-3 and Delphi, implementing software for baseball statistics in 1989, surviving a Giants game in San Francisco, learning C++, Modula 2 and assembly programming at university, the JavaONE session marathon, learning Java in 1999, enjoying Java programming, starting at IBM Global Services Australian, introduction to the enterprise world with PL 1, Job Control Language (JCL), AIX, CICS and CTG, starting to work with Java 1.2 at an insurance company, building a quotation engine in Java, wrapping JNI layer to reuse legacy C++ code, creating the first web UIs with Java with JSPs and Servlets, PowerBuilder and Borland JBuilder, enjoying the look and feel of Visual Age for Java and JBuilder, Symantec Visual Cafe for Java, Sun Studio Java Workshop had the worst look and feel, writing backend integration logic with XSLT and XML in Dublin, Apache FOP and Apache Cocoon, XSLT transformations in browser, enjoying the marquee tag, using SeeBeyond eWay integration in London, switching to chordiant Java EE CRM solution, using XDoclet to generate EJBs, from XDoclet to annotations, wrapping, abstracting and Aspect Oriented Programming framework, it is hard to find business use cases for AOP, J2EE already ships with built-in aspects, enterprise architecture and UML, using IBM Rational Software Modeler for architectures, driving a truck with tapes as migration, the Amazon Snowmobile Truck, never underestimate the bandwidth of a truck full of hard disks, "Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway", Andrew S. 
Tanenbaum, building a stock trading platform in Sydney with J2EE, Complex Event Processing (CEP) with J2EE and JBoss, attending JBoss World in Florida and meeting Pete Muir, starting with Seam 2 to write a CRM solution for weddings, contributing to Seam 3, creating an annotation-based i18n solution, joining Red Hat consulting, migrating from Oracle Application Server to JBoss EAP 5, joining Red Hat engineering, leading the portlet bridge from the JBoss Portal project, starting project LiveOak, Apache Sling, starting project WildFly Swarm with Bob McWhirter, WildFly Swarm vs. WildFly, WildFly Swarm and WildFly - the size perspective, WildFly Swarm supported hollow jars, a hollow jar allows docker layering, WildFly Swarm was renamed to Thorntail, Thorntail 4 was a rewrite of the CDI container, the Thorntail 4 codebase was used in Quarkus, Quarkus is the evolutionary leap forward, Quarkus observability and micrometer, working with OpenTelemetry, OpenTelemetry and micrometer, OpenCensus, Eclipse MicroProfile and Metrics, micrometer vs. MicroProfile Metrics, the GitHub issue regarding custom registry types, airhacks.fm episode with Romain Manni-Bucau #79 Back to Shared Deployments, starting with counters and gauges in MicroProfile, metrics in a Java Message Service (JMS) application, MicroProfile Metrics could re-focus on business metrics, service meshes vs. MicroProfile Fault Tolerance, Istio is only able to see the external traffic, implementing business fallbacks with Istio is hard, OpenCensus and OpenTracing are merging in OpenTelemetry, MicroProfile OpenTracing comes with a single annotation and brings the most added value, Jakarta EE improvements are incremental, Java's Project Leyden, the MicroProfile online workshop, Jakarta EE and MicroProfile complement each other, GraalVM and JavaScript, pooling with CDI is challenging, MicroProfile as a layer on top of Jakarta EE, the SmallRye-first approach. Ken Finnigan on twitter: @kenfinnigan, Ken's blog: kenfinnigan.me

The Python Podcast.__init__
Adding Observability To Your Python Applications With OpenTelemetry

The Python Podcast.__init__

Play Episode Listen Later Jun 23, 2020 53:44


Once you release an application into production, it can be difficult to understand all of the ways it interacts with the systems it integrates with. The OpenTracing project and its accompanying ecosystem of technologies aim to make observability of your systems more accessible. In this episode Austin Parker and Alex Boten explain how the correlation of tracing and metrics collection improves visibility into how your software is behaving, how you can use the Python SDK to automatically instrument your applications, and their vision for the future of observability as the OpenTelemetry standard gains broader adoption.
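The instrumentation discussed here ultimately produces spans. The real SDK's entry point is the `tracer.start_as_current_span(...)` context manager; the stdlib-only toy below just mimics that shape to show what gets recorded (names and structure are mine, not the OpenTelemetry API):

```python
import contextlib
import time

spans = []  # collected spans, analogous to what an exporter receives

@contextlib.contextmanager
def start_span(name):
    """Toy stand-in for a tracer's span context manager: times a block."""
    start = time.monotonic()
    try:
        yield
    finally:
        # Inner spans finish (and are recorded) before their parents.
        spans.append({"name": name, "duration": time.monotonic() - start})

with start_span("handle-request"):
    with start_span("query-db"):
        pass  # the actual work would happen here

print([s["name"] for s in spans])  # ['query-db', 'handle-request']
```

Auto-instrumentation does essentially this for you: it wraps framework entry points (Flask routes, DB calls) in spans so you get the timing tree without touching application code.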

The New Stack Podcast
Episode 119: Observability in the Time of Covid

The New Stack Podcast

Play Episode Listen Later May 29, 2020 32:16


Welcome to The New Stack Context, a podcast where we discuss the latest news and perspectives in the world of cloud native computing. For this week's episode, we spoke with Christine Yen, CEO of Honeycomb.io, the observability platform vendor, about the company's pricing changes brought on by COVID-19 and, more broadly, how observability practices and tools are changing as more companies make the move to the cloud. TNS editorial and marketing director Libby Clark hosted this episode, alongside TNS senior editor Richard MacManus and TNS managing editor Joab Jackson. Honeycomb this week changed its pricing structure to reflect the cost realities for businesses and the long-term effect of COVID-19. The company also recently released the results of a survey that shows half of the developers surveyed aren't using observability currently, but 75% plan to do so in the next two years. And in April the company released an open source collector for OpenTracing that allows teams to import telemetry data from open source projects into any observability platform, including their own but also their competitors'. Yen said of the pricing changes: "Our old pricing was, you bought a certain amount of storage in gigabytes and paid for a certain amount of data ingest, also in gigabytes, over a period of time. We felt like that was a little bit harder for people to map to their existing workflows, harder for them to predict. So we shifted to an events-per-month ingest model, one axis, one way to scale your usage."

Kubernetes Podcast from Google
Jaeger, with Yuri Shkuro

Kubernetes Podcast from Google

Play Episode Listen Later Mar 31, 2020 48:06


Jaeger is a distributed tracing platform built at Uber, and open-sourced in 2016. It traces its evolution from a Google paper on distributed tracing, the OpenZipkin project, and the OpenTracing libraries. Yuri Shkuro, creator of Jaeger and author of Mastering Distributed Tracing, joins Craig and Adam to tell the story, and explain the hows and whys of distributed tracing. Do you have something cool to share? Some questions? Let us know: web: kubernetespodcast.com mail: kubernetespodcast@google.com twitter: @kubernetespod Chatter of the week Music from Home: Brian May Neil Finn You Don’t Know Jack Galaxy Trucker Free books from the Sesame Workshop Google Play Amazon Barnes and Noble Kobo The Monster At The End Of This Book News of the week Update on the update on the update on KubeCon EU: now 13 to 16 August, and possibly online. Virtual Rejekts on 1 April Datastax Cassandra Operator and Management API Announcement blog PromCat: Prometheus Catalog from Sysdig Evaluating Predictive Autoscaling in Kubernetes by Jamie Thompson Provision a certificate and key for an application without Istio sidecars by Lei Wang How to Secure Your Kubernetes Cluster on GKE by Lewis Marshall Upcoming changes to IP assignment for EKS Managed Node Groups and De-mystifying EKS networking by Nathan Taber Updated EKS SLA Ops tips by Ciro S. Costa: Quality of Service and OOM, and Kubernetes Secrets Google upgrades to Platinum membership of Cloud Foundry Foundation CNCF Case Study: Vodafone Links from the interview Yuri Shkuro Open Source at Uber Episode 84: Monitoring, Metrics and M3, with Martin Mao and Rob Skillington - another open source project from Uber Mastering Distributed Tracing - Yuri’s book Service-Oriented Architecture: Scaling the Uber Engineering Codebase As We Grow by Einas Haddad What is Distributed Tracing? 
Evolving Distributed Tracing at Uber Engineering - Yuri’s blog post OpenZipkin TChannel OpenTracing Towards Turnkey Distributed Tracing by Ben Sigelman Jaeger Get started in one container Deploying to Kubernetes gRPC OpenTracing library Jaeger agent and collectors Storage backends Jaeger in Istio and trace context propagation OpenTelemetry: merging OpenTracing and OpenCensus A Brief History of Tracing (So Far) by Ben Sigelman and Morgan McLean Jaeger and OpenTelemetry Now officially in Beta! Google Dapper paper OpenTracing joined CNCF in 2016 What is a jaeger? The logo Red Hat Hawkular Jaeger joins the CNCF in 2017 and graduates in 2019 Jaeger Analytics Yuri Shkuro on Twitter
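The data model behind what Jaeger visualizes is simple: a trace is a flat set of spans, each pointing at its parent, from which the call tree is rebuilt. A stdlib-only sketch with invented span names:

```python
# Toy reconstruction of a trace tree from flat spans, the core data
# model of distributed tracing: each span records its parent span id.
spans = [
    {"id": "a", "parent": None, "op": "GET /checkout"},
    {"id": "b", "parent": "a",  "op": "charge-card"},
    {"id": "c", "parent": "a",  "op": "reserve-stock"},
    {"id": "d", "parent": "b",  "op": "POST /payments"},
]

def children(parent_id):
    return [s for s in spans if s["parent"] == parent_id]

def render(span, depth=0):
    """Depth-first walk: indent each span under its parent."""
    lines = ["  " * depth + span["op"]]
    for child in children(span["id"]):
        lines.extend(render(child, depth + 1))
    return lines

root = next(s for s in spans if s["parent"] is None)
print("\n".join(render(root)))
```

In a real system the spans arrive from different services; as long as they share a trace id and carry parent ids, a backend like Jaeger can reassemble exactly this tree.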

Splunk [IT Operations Track] 2019 .conf Videos w/ Slides
Distributed Tracing in Splunk: Get end-to-end visibility into application performance with Splunk and OpenTracing [Splunk Enterprise, Splunk IT Service Intelligence]

Splunk [IT Operations Track] 2019 .conf Videos w/ Slides

Play Episode Listen Later Dec 23, 2019


Are you addressing the challenges of gaining visibility into a distributed microservices environment? Is your organization considering using distributed tracing to augment your APM capabilities? Have you heard of OpenTracing and want to learn what capabilities it gives you and how to get started? Come learn about the OpenTracing project and how you can use it with Splunk to get a complete picture of your application environment using logs, metrics, and traces. We'll go from the basics of what the project is to how to get started integrating with Splunk. We'll also review an example of a large telco customer to see how they got started with OpenTracing and how they rolled it out in their application environments.

Speaker(s): Gary Burgett, Staff Sales Engineer, Splunk; Dave Cornette, Enterprise Monitoring Architect, T-Mobile
Slides: https://conf.splunk.com/files/2019/slides/IT2095.pdf?podcast=1577146210
Product: Splunk Enterprise, Splunk IT Service Intelligence
Track: IT Operations
Level: Intermediate
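A common way to get the "complete picture using logs, metrics, and traces" the talk describes is to stamp every log line with the active trace id, so a backend such as Splunk can join logs to traces. A minimal stdlib sketch; the hard-coded trace id stands in for whatever the active span would provide:

```python
import logging

# Correlating logs with traces: every log record gets the current
# trace id, so a log search can pivot straight to the matching trace.
current_trace_id = "4bf92f3577b34da6"  # would come from the active span

class TraceIdFilter(logging.Filter):
    def filter(self, record):
        record.trace_id = current_trace_id  # enrich the record in place
        return True  # never drop records, only annotate them

logger = logging.getLogger("checkout")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(trace_id)s %(message)s"))
logger.addHandler(handler)
logger.addFilter(TraceIdFilter())
logger.warning("payment declined")  # printed with the trace id prefix
```

The same field name indexed on both the log events and the trace spans is what makes the cross-signal join possible.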

Smart Software with SmartLogic
Chris Keathley on Performance and Functional Programming

Smart Software with SmartLogic

Play Episode Listen Later Dec 19, 2019 35:26


Our guest on the show today is blogger, software cowboy, and podcast host Chris Keathley. Chris is a senior engineer at Bleacher Report, co-host of Elixir Outlaws, and writer of an assemblage of open-source software. He joins us today to speak about his new projects, his journey with functional programming, and what it is like to run Bleacher Report’s infrastructure on Elixir. Chris gives us the lowdown on Norm, a data validating interface he recently completed, weighing in on how it is different from Dialyzer and what it can offer as far as scalability. We hear more about how Chris got introduced to Elixir through Haskell, why he wishes he learned Clojure sooner, and why Ruby on Rails isn’t going anywhere soon. Chris also gets into the tradeoffs these languages make to correlate with Erlang. He argues that Elixir can only be more widely adopted if more people build cool things in it, and then lays out some of its power in supporting Bleacher Report’s user interface. We pick Chris’s brain about what his company is trying to optimize at the moment and hear about their preference for capacity over speed, and their techniques for failing gracefully during traffic spikes. Chris tells us how much he loves Elixir due to its use of ETS and other functionality which allows Bleacher Report to keep running even above capacity. Finally, we hear about some of the observability practices that Bleacher Report uses when deploying new systems and predicting future spikes. Plug in for a great conversation and hear why you should get building with Elixir now! Key Points From This Episode: Chris’s explanation of Norm, his new software that describes data moving through a system. Chris’s introduction to functional programming through learning Haskell, Clojure, and Elixir. What makes a great functional language: immutable data and first class functions. Things that make Clojure great, such as its thought out, holistic design. 
Characteristics of Cons lists versus RRB trees, and what makes the latter better. An acknowledgment of the necessity of the tradeoffs Elixir makes to interact with Erlang. A little bit about the language Chris wrote to do the admin of code challenges in. Why Ruby (on Rails) will not be replaced by Elixir due to commoditization that surrounds it. An argument that Elixir can only be more widely adopted if more people build with it. Why any language can build any program thus comparisons between them are arbitrary. Where Chris sets the bar as to when something is performant. Chris’s preference for high user capacity capability over speed of delivery at Bleacher Report. Optimization projects at Bleacher Report such as using few boxes and handling traffic spikes. Things Chris loves about Elixir such as its ability to deliver more from its boxes. Elixir’s use of ETS and how Chris coded a complex problem in half a day using it. How Chris detects spikes using time series, StatsD, and other observability tools. Links Mentioned in Today’s Episode: SmartLogic — https://smartlogic.io/ Chris Keathley on GitHub — https://github.com/keathley Chris Keathley Blog — https://keathley.io/ ElixirConf 2019, Contracts for Building Reliable Systems presented by Chris Keathley — https://www.youtube.com/watch?v=tpo3JUyVIjQ The Big Elixir 2019 - Keynote: Adoption - Brian Cardarella — https://www.youtube.com/watch?v=ghpIiQKRfQ4 Bleacher Report — https://bleacherreport.com/ Elixir Outlaws Podcast — https://elixiroutlaws.com/ Norm — https://github.com/keathley/norm Dialyzer — http://erlang.org/doc/man/dialyzer.html Haskell — https://www.haskell.org/ Clojure — https://clojure.org/ Erlang — https://www.erlang.org/ Chris Okasaki — https://github.com/chrisokasaki Discord — https://discordapp.com/company StatsD — https://www.datadoghq.com/blog/statsd/ Prometheus — https://prometheus.io/ Opentracing — https://opentracing.io/ Special Guest: Chris Keathley.

Elixir Mix
EMx 073: Application Monitoring Using Telemetry With Arkadiusz Gil

Elixir Mix

Play Episode Listen Later Oct 15, 2019 40:34


This episode of Elixir Mix features Arkadiusz Gil. Arkadiusz is a software engineer at Erlang Solutions. He is also a member of the observability working group of the Erlang Ecosystem Foundation. The purpose of this working group is to nurture different areas of the community to maintain libraries, improve tooling, and create documentation. He became a member of this group because of his work on Telemetry. The panelists discuss the background of Telemetry and Arkadiusz explains how it was originally written in Elixir and why they decided to switch over to Erlang. Arkadiusz explains how he became involved in Elixir and Erlang. When Mark asks why he prefers Elixir to Erlang he responds with explaining his affinity for the Elixir syntax and tooling that’s available.  The conversation then moves to how Telemetry came about. Telemetry started with the goal of creating a tool for monitoring Elixir applications but the creators had no idea what that application would be like. Arkadiusz then describes how he did an exercise with colleagues to identify the specific needs for such an application and how to implement it. The panelists discuss how Telemetry is integrated. They also discuss how to get started with Telemetry metrics and Arkadiusz shares some of the details of how the monitoring service works.  The next topic that the Elixir experts cover is how to monitor business data and activity. Arkadiusz explains the mechanism that can be used to attach to events in a custom way to retrieve the exact data that the user needs. He shares that Telemetry can really be used any time a user wants to expose a specific piece of data at runtime. Mark asks how this attaching works and this leads to a deeper technical discussion on how Telemetry attaches a mechanism to the application and returns that data, as well as how the listeners work when an event is fired and new data is sent to it.  The panelists then discuss how OpenCensus works with Telemetry. 
OpenCensus is a project created to consolidate APIs that can be used in different languages to create metrics and other data. Arkadiusz shares a hypothetical example of how this works and how Telemetry works with it. The observability working group has helped contribute to OpenCensus. OpenCensus has a smooth integration and is built to run as smoothly as possible. A user can use OpenCensus to build metrics based off of Telemetry events. The OpenCensus project is now called OpenTelemetry, a merger of OpenCensus and OpenTracing. Finally the Elixir experts cover real world examples of users implementing Telemetry as well as how to get involved with the observability working group and Telemetry. For the observability working group, it is best to reach out and tell them what kind of tooling would be useful across the ecosystem and what other help they need. One of their goals is to put together a set of best practices for monitoring services.

Panelists: Mark Ericksen, Eric Oestrich, Josh Adams
Guest: Arkadiusz Gil
Sponsors: Sentry.io; Adventures in DevOps; Adventures in Angular
Links: Erlang Solutions; Observability Working Group; Erlang Ecosystem Foundation; Erlang; Telemetry; Telemetry.Metrics; AWS CloudWatch Events; Programming Elixir; OpenCensus; OpenTelemetry; OpenTelemetry.io; OpenTracing; arkgil on GitHub; Exometer - Erlang Implementation Package; Prometheus.ex
Picks: Eric Oestrich - UCL parser in Elixir; Josh Adams - The Depths of Deep Space Nine - YouTube; Mark Ericksen - How to Create Desktop Application With Elixir; Terminal command "lscpu"; Arkadiusz - Alchemist's Code; Philosophy of Software Design; The Anatomy of Next
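The attach/execute pattern described in the episode can be mimicked in a few lines. Telemetry itself is an Erlang library with a richer handler signature; this Python toy just shows the shape of the mechanism, handlers keyed by event name and decoupled from the code that emits:

```python
# Toy version of Telemetry's attach/execute pattern: handlers attach
# to named events, and emitting an event invokes every handler with
# the measurements, keeping monitoring out of the business code.
handlers = {}

def attach(event, handler):
    handlers.setdefault(event, []).append(handler)

def execute(event, measurements):
    for handler in handlers.get(event, []):
        handler(event, measurements)

seen = []
attach(("web", "request", "stop"),
       lambda event, m: seen.append(m["duration_ms"]))

# Emitted from the request-handling code path:
execute(("web", "request", "stop"), {"duration_ms": 42})
print(seen)  # [42]
```

Because the emitting code only names the event and its measurements, any number of metrics backends can attach later without the application changing at all.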

Devchat.tv Master Feed
EMx 073: Application Monitoring Using Telemetry With Arkadiusz Gil

Devchat.tv Master Feed

Play Episode Listen Later Oct 15, 2019 40:34


This episode of Elixir Mix features Arkadiusz Gil. Arkadiusz is a software engineer at Erlang Solutions. He is also a member of the observability working group of the Erlang Ecosystem Foundation. The purpose of this working group is to nurture different areas of the community to maintain libraries, improve tooling, and create documentation. He became a member of this group because of his work on Telemetry. The panelists discuss the background of Telemetry and Arkadiusz explains how it was originally written in Elixir and why they decided to switch over to Erlang. Arkadiusz explains how he became involved in Elixir and Erlang. When Mark asks why he prefers Elixir to Erlang he responds with explaining his affinity for the Elixir syntax and tooling that’s available.  The conversation then moves to how Telemetry came about. Telemetry started with the goal of creating a tool for monitoring Elixir applications but the creators had no idea what that application would be like. Arkadiusz then describes how he did an exercise with colleagues to identify the specific needs for such an application and how to implement it. The panelists discuss how Telemetry is integrated. They also discuss how to get started with Telemetry metrics and Arkadiusz shares some of the details of how the monitoring service works.  The next topic that the Elixir experts cover is how to monitor business data and activity. Arkadiusz explains the mechanism that can be used to attach to events in a custom way to retrieve the exact data that the user needs. He shares that Telemetry can really be used any time a user wants to expose a specific piece of data at runtime. Mark asks how this attaching works and this leads to a deeper technical discussion on how Telemetry attaches a mechanism to the application and returns that data, as well as how the listeners work when an event is fired and new data is sent to it.  The panelists then discuss how OpenCensus works with Telemetry. 
OpenCensus is a project that provides a common set of APIs, usable from different languages, for creating metrics and other telemetry data. Arkadiusz shares a hypothetical example of how this works and how Telemetry works with it. The observability working group has contributed to OpenCensus. OpenCensus integrates smoothly and is built to run as smoothly as possible. A user can use OpenCensus to build metrics based on Telemetry events. The OpenCensus project has since merged with OpenTracing to form OpenTelemetry. Finally, the Elixir experts cover real-world examples of users implementing Telemetry, as well as how to get involved with the observability working group and Telemetry. For the working group, it is best to reach out and tell them what kind of tooling would be useful across the ecosystem and what other help they need. One of their goals is to put together a set of best practices for monitoring services.  Panelists Mark Ericksen Eric Oestrich Josh Adams   Guest Arkadiusz Gil Sponsors Sentry.io Adventures in DevOps Adventures in Angular   Links Erlang Solutions Observability Working Group Erlang Ecosystem Foundation Erlang  Telemetry Telemetry.Metrics AWS CloudWatch Events Programming Elixir OpenCensus OpenTelemetry OpenTelemetry.io OpenTracing arkgil on GitHub  Exometer - Erlang Implementation Package Prometheus.ex Picks Eric Oestrich UCL parser in Elixir Josh Adams The Depths of Deep Space Nine - YouTube Mark Ericksen How to Create Desktop Application With Elixir Terminal command “lscpu” Arkadiusz Alchemist’s Code Philosophy of Software Design The Anatomy of Next
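The attach/execute mechanism Arkadiusz describes (handlers registered by id for a named event, then invoked whenever that event is executed with fresh measurements) can be sketched as a toy. Telemetry itself is an Erlang library, so the names below and the event/measurement shapes illustrate the idea only, not the real API:

```python
# Toy sketch of Telemetry's attach/execute idea. The real library is
# Erlang/Elixir; these function names and shapes are illustrative only.
handlers = {}

def attach(handler_id, event_name, fun):
    """Register a handler function under an id for a named event."""
    handlers.setdefault(event_name, {})[handler_id] = fun

def execute(event_name, measurements, metadata=None):
    """Fire an event: every handler attached to it receives the data."""
    for fun in handlers.get(event_name, {}).values():
        fun(event_name, measurements, metadata or {})

seen = []
attach("log-queries", "repo.query",
       lambda event, meas, meta: seen.append(meas["duration_ms"]))
execute("repo.query", {"duration_ms": 42}, {"source": "users"})
# seen is now [42]: the handler ran at the moment the event fired.
```

The key property, which the episode dwells on, is that the library exposing the data (here, the caller of `execute`) needs no knowledge of who is listening.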

PurePerformance
Understanding Distributed Tracing, Trace Context, OpenCensus, OpenTracing & OpenTelemetry

PurePerformance

Play Episode Listen Later Sep 16, 2019 34:09


Did you know that Distributed Tracing has been around for much longer than the recent buzz? Do you know the history and future of OpenCensus, OpenTracing, OpenTelemetry and TraceContext? Listen to this podcast where we chat with Sonja Chevre, Technical Product Manager at Dynatrace, and Daniel Khan, Technical Evangelist at Dynatrace, about the past, present and future of distributed tracing as a standard. OpenTelemetry: https://opentelemetry.io/ OpenCensus: https://opencensus.io/ OpenTracing: https://opentracing.io/ TraceContext: https://www.dynatrace.com/news/blog/distributed-tracing-with-w3c-trace-context-for-improved-end-to-end-visibility-eap/
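The W3C Trace Context standard discussed in the episode boils down to a single `traceparent` header that every service forwards. A stdlib-only sketch of its version-traceid-parentid-flags layout (field sizes follow the spec; the helper names are our own):

```python
# Minimal illustration of the W3C Trace Context "traceparent" header:
# version (2 hex) - trace-id (32 hex) - parent-id (16 hex) - flags (2 hex).
import secrets

def make_traceparent(trace_id=None, parent_id=None, sampled=True):
    trace_id = trace_id or secrets.token_hex(16)   # 16 bytes -> 32 hex chars
    parent_id = parent_id or secrets.token_hex(8)  # 8 bytes -> 16 hex chars
    flags = "01" if sampled else "00"
    return f"00-{trace_id}-{parent_id}-{flags}"

def parse_traceparent(header):
    version, trace_id, parent_id, flags = header.split("-")
    return {"version": version, "trace_id": trace_id,
            "parent_id": parent_id, "sampled": flags == "01"}

hdr = make_traceparent()
parsed = parse_traceparent(hdr)
# parsed["trace_id"] is the 32-char hex id shared by every hop of the trace.
```

Because the header format is vendor-neutral, any two tracers that forward it can have their spans stitched into one end-to-end trace, which is the whole point of standardizing it.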


DevOps Chat
Splunk Moves Into Microservices/Service Mesh Tracing, Omnition

DevOps Chat

Play Episode Listen Later Sep 12, 2019 20:14


Swiftly after announcing their acquisition of SignalFx, Splunk vaults into microservices tracing by announcing its intention to acquire Omnition. Tech companies are rapidly making acquisitions to strengthen their position in the DevOps and contemporary cloud-native software stack. Rick Fitz, SVP and GM of Splunk's IT Markets Group, joins DevOps Chat to share with us the drivers behind Splunk's move into microservices and service mesh tracing. Information about Omnition is in short supply as Omnition's been in stealth, racing to develop a product to address the observability needs of DevOps teams. Omnition has received some notice through its contributions to the OpenCensus open source project, now being merged with OpenTracing into OpenTelemetry. In addition to this episode of DevOps Chat, check out Mike Vizard's article about the Omnition announcement at https://devops.com/splunk-adds-omnition-to-devops-portfolio/.

Matrix Live
Matrix Live S04E11 OpenTracing with Jaeger - future of Synapse debugging feat. Erik and Jorik (2019-09-06)

Matrix Live

Play Episode Listen Later Sep 6, 2019 7:25


On-Call Me Maybe
04: The OpenTelemetry Episode

On-Call Me Maybe

Play Episode Listen Later Jul 25, 2019 51:55


We're back with a special episode that's all about OpenTelemetry! Joining us today is Ted Young (@tedsuo), Director of Open Source at LightStep and a maintainer of the OpenTelemetry project, here to talk about what this new project is, how it helps solve some of the biggest problems in observability libraries, and how you can get involved. Find OpenTelemetry on the web at https://opentelemetry.io. Get involved with OpenTelemetry development at https://github.com/open-telemetry/community!
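For a feel of what a tracing API such as OpenTelemetry's surfaces, here is a stdlib-only toy of nested spans that record timing and parent/child structure. This is a conceptual sketch, not the real opentelemetry-python API; every name in it is invented for illustration:

```python
# Conceptual sketch of a tracing API: nested spans with durations and
# parent links. Not the opentelemetry-python API, just the shape of it.
import time
from contextlib import contextmanager

finished_spans = []
_stack = []  # currently open spans, innermost last

@contextmanager
def start_span(name):
    span = {"name": name,
            "parent": _stack[-1]["name"] if _stack else None,
            "start": time.monotonic()}
    _stack.append(span)
    try:
        yield span
    finally:
        _stack.pop()
        span["duration_s"] = time.monotonic() - span["start"]
        finished_spans.append(span)

with start_span("handle-request"):
    with start_span("db-query"):
        pass  # the traced work would go here

# finished_spans now holds "db-query" (child) then "handle-request" (root).
```

Real instrumentation libraries add context propagation, sampling, and exporters on top, but the span tree above is the core data model the episode talks about.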

Matrix Live
Matrix Live S04E03 Erik talks to Jorik and Oliver about Push and OpenTracing (2019-07-05)

Matrix Live

Play Episode Listen Later Jul 5, 2019 11:01


The New Stack Context
Context: Monitoring and Observability Trends, KubeCon + CloudNativeCon 2019

The New Stack Context

Play Episode Listen Later May 24, 2019 49:44


This week on The New Stack Context podcast, recorded live from KubeCon + CloudNativeCon 2019, we're talking all about monitoring and observability. Our guests are Kresten Krab Thorup, chief technology officer for Humio, and Colin Fernandes, director of product marketing at Sumo Logic. Sumo Logic is a machine data analytics company that has just announced an additional $110 million round of funding, making it worth over $1 billion. Humio is demonstrating the intake of 100 terabytes of data per day on only 25 nodes while delivering real-time observability of data. Both are on the cutting edge of understanding what intelligence we can gather from the operating conditions of our machines. We spoke with them about the trends they're seeing around data management and logging; both practices are seeing tremendous change as end users collect more and more data while wanting to see analysis in real time. We also talk about changes in cloud native monitoring and logging, including the recent consolidation of OpenTracing and OpenCensus into a single project, called OpenTelemetry. In the second half of the show, we offer our top podcast and stories picks, including the move to free some proprietary Kubernetes extensions with a new project called KubeMove. We also discuss our recent @Scale podcast, which confronts the challenges that the newly-launched CD Foundation has in normalizing the vast set of cloud native tools for continuous delivery. The New Stack editorial and marketing director Libby Clark hosted this episode, with the help of TNS founder and publisher Alex Williams and TNS managing editor Joab Jackson.

On-Call Me Maybe
03: That Dragon, DNS

On-Call Me Maybe

Play Episode Listen Later May 9, 2019 46:19


Discussion Items - Azure Outage (https://twitter.com/AzureSupport/status/1124086111050051584?s=20) Facebook Outage (https://www.theverge.com/2019/4/14/18310069/facebook-instagram-whatsapp-down-outage-issues) Firefox Breaks Addons (https://blog.mozilla.org/addons/2019/05/04/update-regarding-add-ons-in-firefox/) Distributed Tracing, From Zero To One (https://thmsmlr.com/distributed-tracing-zero-to-one/) Follow Thomas on Twitter at @thmsmlr

The InfoQ Podcast
Ben Sigelman, Co-Creator of Dapper & OpenTracing API, on Observability

The InfoQ Podcast

Play Episode Listen Later May 5, 2019 42:18


Ben Sigelman is the CEO of Lightstep and the author of the Dapper paper that spawned distributed tracing discussions in the software industry. On the podcast today, Ben discusses observability with Wes, along with his thoughts on logging, metrics, and tracing. The two discuss detection and refinement as the real problem when it comes to diagnosing and troubleshooting incidents with data. The podcast is full of useful tips on building and implementing an effective observability strategy. Why listen to this podcast: - If you're getting woken up by an alert, it should actually be an emergency. When that happens, things to think about include: when did this happen, how quickly is it changing, how did it change, and what things in my entire system are correlated with that change. - A reality that seems to be happening in our industry is that we're coupling the move to microservices with a move to allowing teams to fully self-determine technology stacks. This is dangerous because we're not at the stage where all languages/tools/frameworks are equivalent. - While a service mesh offers great potential for integrations at layer 7, many people have unrealistic expectations on how much observability will be enabled by a service mesh. The service mesh does a great job of showing you the communication between the services, but often the details get lost in the work that's being done inside the service. Service owners still need to do much more work to instrument applications. - Too many people focus on the 3 Pillars of Observability. While logs, metrics, and tracing are important, observability strategy ought to be more focused on the core workflows and needs around detection and refinement. - Logging about individual transactions is better done with tracing. It's unaffordable at scale to do otherwise. - Just like logging, metrics about individual transactions are less valuable. Application-level metrics, such as how long a queue is, are the metrics that are truly useful. 
- The problem with metrics is that the only tool you have in a metrics system to explain the variations you're seeing is grouping by tags. The tags you want to group by have high cardinality, so you can't group by them. You end up in a catch-22. - Tracing is about taking traces and doing something useful with them. If you look at hundreds or thousands of traces, you can answer really important questions, with evidence, about what's changing in a system's workloads and dependencies. - When it comes to serverless, tracing is more important than ever because everything is so ephemeral. Node is one of the most popular serverless languages/frameworks and, unfortunately, also one of the hardest of all to trace. - The most important thing is to make sure that you choose something portable for the actual instrumentation piece of a distributed tracing system. You don't want to go back and rip out the instrumentation because you want to switch vendors. This is becoming conventional wisdom. More on this: Quick scan our curated show notes on InfoQ https://bit.ly/2PPIdeE You can also subscribe to the InfoQ newsletter to receive weekly updates on the hottest topics from professional software development. bit.ly/24x3IVq Subscribe: www.youtube.com/infoq Like InfoQ on Facebook: bit.ly/2jmlyG8 Follow on Twitter: twitter.com/InfoQ Follow on LinkedIn: www.linkedin.com/company/infoq Check the landing page on InfoQ: https://bit.ly/2PPIdeE
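Sigelman's cardinality catch-22 can be made concrete with a tiny stdlib experiment: in a metrics system, each distinct tag combination becomes its own time series, so grouping by a high-cardinality tag explodes the series count. The request shapes below are invented for illustration:

```python
# Each distinct tag value you group by creates a separate time series.
# Compare a low-cardinality tag (endpoint) with a high-cardinality one
# (user_id) over the same synthetic traffic.
from collections import Counter

requests = [{"endpoint": f"/api/{i % 3}", "user_id": f"u{i}"}
            for i in range(1000)]

by_endpoint = Counter(r["endpoint"] for r in requests)  # low cardinality
by_user = Counter(r["user_id"] for r in requests)       # high cardinality

# 3 series if you group by endpoint, 1000 if you group by user_id:
# the tag you most want to drill into is the one you can't afford.
```

Tracing sidesteps the problem because each trace already carries its full tag set per transaction, rather than pre-aggregating one series per tag combination.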

Rebuild
219: Mass Murder of Apps (omo)

Rebuild

Play Episode Listen Later Oct 16, 2018 127:18


We welcomed Hajime Morita as a guest and talked about the Pixel 3, cameras, Android, Creative Selection, and more. Show Notes Made by Google 2018 Pixel 3 - Google Store Google Pixel 3 and Pixel 3 XL Review: Software Is Eating Your Phone Google Pixel Stand review: The best accessory Google has ever made Google AI details Pixel 3's Super Res Zoom feature Marc Levoy's Home Page The Frankencamera: An Experimental Platform for Computational Photography Halide What is Fresnel reflection? (フレネル反射とは?) Interview with the Google Pixel 3 Camera team Prometheus - Monitoring system & time series database The OpenTracing project Creative Selection: Inside Apple's Design Process During the Golden Age of Steve Jobs 60: Melton & Ganatra episode III: Shipping software 闘うプログラマー[新装版] レッド 1969~1972

Software Engineering Radio - The Podcast for Professional Software Developers
SE-Radio Episode 337: Ben Sigelman on Distributed Tracing

Software Engineering Radio - The Podcast for Professional Software Developers

Play Episode Listen Later Sep 11, 2018


Ben Sigelman, CEO of LightStep and co-author of the OpenTracing standard, discusses distributed tracing, a form of event-driven observability useful in debugging distributed systems, understanding latency outliers, and delivering “white box” analytics.  Host Robert Blumen spoke with Sigelman about the basics of tracing, why it is harder in a distributed system, the concept of tracing […]

Software Engineering Radio - The Podcast for Professional Software Developers
SE-Radio Episode 337: Ben Sigelman on Distributed Tracing

Software Engineering Radio - The Podcast for Professional Software Developers

Play Episode Listen Later Sep 11, 2018 62:04


Ben Sigelman, CEO of LightStep and co-author of the OpenTracing standard, discusses distributed tracing, a form of event-driven observability for debugging distributed systems, understanding latency outliers, and delivering "white box" analytics.
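The "harder in a distributed system" part the episode covers is largely a context-propagation problem: the trace id has to ride along on every call so spans from different processes can be stitched into one trace. A stdlib-only toy of one hop between two hypothetical services:

```python
# Toy trace-context propagation across a service boundary. The two
# "services" are plain functions and the "headers" dict stands in for
# an RPC or HTTP request; all names here are invented for illustration.
import uuid

def service_b(headers):
    # Downstream service reuses the incoming trace id for its own span.
    return {"service": "B", "trace_id": headers["x-trace-id"]}

def service_a():
    trace_id = uuid.uuid4().hex              # root of the trace
    span_a = {"service": "A", "trace_id": trace_id}
    span_b = service_b({"x-trace-id": trace_id})  # propagate via "headers"
    return span_a, span_b

a, b = service_a()
# Both spans share one trace_id, so a tracer backend can join them later.
```

Drop the header on any hop and the trace breaks in two, which is why instrumentation (and the portability Sigelman stresses) matters at every boundary.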

HashiCast
Episode 4 - Ben Sigelman, LightStep

HashiCast

Play Episode Listen Later Jun 1, 2018 50:49


This episode of HashiCast features Ben Sigelman from LightStep. Ben Sigelman is the CEO and co-founder of LightStep. LightStep is rethinking observability in the emerging world of microservices. He spent nine years at Google where he led the design and development of several global-scale monitoring systems. The most significant of these were Dapper, an always-on distributed tracing system, and Monarch, a high-availability time series collection, storage, and query system. Join us as we chat about monitoring approaches in the world of distributed systems, OpenTracing, and the growing world of observability with awesome projects like Zipkin. Recorded on April 6, 2018. Links -------- Dapper Paper: https://static.googleusercontent.com/media/research.google.com/en//archive/papers/dapper-2010-1.pdf Vizceral by Netflix: https://medium.com/netflix-techblog/vizceral-open-source-acc0c32113fe Flux by Netflix: https://medium.com/netflix-techblog/flux-a-new-approach-to-system-intuition-cf428b7316ec OpenTracing call: https://www.youtube.com/channel/UCAa4TcX4eLBenz9A1SjzL-Q/ Blog post "The difference between tracing, tracing, and tracing": https://medium.com/opentracing/the-difference-between-tracing-tracing-and-tracing-84b49b2d54ea Ben’s Monitorama talk: https://vimeo.com/221051832 Ben's Twitter handle: https://twitter.com/el_bhs LightStep: https://lightstep.com/

Elixir Outlaws
Episode 6: Blocking and Tackling

Elixir Outlaws

Play Episode Listen Later May 21, 2018 36:27


This week the outlaws are joined by Friend of The Show, Ben Marx. In between name dropping and sports idioms they discuss OpenTracing, OpenCensus, purely functional data structures, distributed systems, and Amos's experiences with living in the desert. Special Guest: Ben Marx.

The New Stack Context
This Week On The New Stack: KubeCon Highlights

The New Stack Context

Play Episode Listen Later May 4, 2018 23:32


This week on The New Stack Context, the editorial team recorded live from Copenhagen, Denmark, at KubeCon + CloudNativeCon -- the CNCF's premier event focused on Kubernetes and other cloud-native open source projects including Prometheus, OpenTracing, Fluentd and others. The New Stack managing editor Joab Jackson, editor-in-chief Alex Williams, and Ihor Dvoretskyi, developer advocate at the CNCF, discuss the latest with the Kubernetes community after the recent release of Kubernetes 1.10. Earlier this week, we hosted a pancake breakfast and podcast discussion on the CNCF's first security project called SPIFFE, which is aiming to build a secure identity framework for cloud-native production environments. And we hosted two days of podcasting and livestreaming from the show floor, sponsored by Red Hat and the CNCF. Watch on YouTube: https://youtu.be/_ld1PoOiv0s

The New Stack Context
This Week on The New Stack: A Kubecon + CloudNativeCon Europe Preview

The New Stack Context

Play Episode Listen Later Apr 27, 2018 29:33


Hello, welcome to The New Stack Context, a podcast where we review the week's hottest news in cloud-native technologies and at-scale application development, and look ahead to topics we expect will gain more attention in coming weeks. Next week, the New Stack editorial team will be traveling to Copenhagen for KubeCon + CloudNativeCon, which is the Cloud Native Computing Foundation's premier event focused on Kubernetes and other cloud native open source projects including Prometheus, OpenTracing, Fluentd and others. On Tuesday we'll be at the Red Hat Commons event on machine learning. Thursday we'll be hosting a pancake breakfast and podcast discussion on the new CNCF project called SPIFFE, which is aiming to build a secure identity framework for cloud-native production environments. And we'll host two days of podcasting and livestreaming from the show floor where we'll be talking to companies such as Red Hat (which is sponsoring us), Aqua Security, IBM, Google, Booking.com, VMware and many others. So for this week's podcast, we chatted with Red Hat's director of OpenShift product strategy, Brian Gracely, about what we might expect to see at KubeCon. We also spoke with TNS Managing Editor Joab Jackson about how SUSE's Cloud Application Platform uses Kubernetes to manage a containerized distribution of Cloud Foundry.

The Cloudcast
The Cloudcast #323 - OpenTracing for Distributed Microservices

The Cloudcast

Play Episode Listen Later Dec 9, 2017 24:08


Brian talks with Ben Sigelman (@el_bhs, Co-Founder/CEO of @LightStepHQ) at KubeCon about his experiences at Google, the evolution of OpenTracing, the public launch of Lightstep, how operations teams are adapting to microservices applications, and how Lightstep is interacting with new CNCF projects like Envoy and Istio. Show Links: Lightstep Homepage Open Tracing Homepage Ben Sigelman (Lightstep) on theCUBE at KubeCon 2017 [PODCAST] @PodCTL - Containers | Kubernetes - RSS Feed, iTunes, Google Play, Stitcher, TuneIn and all your favorite podcast players [A CLOUD GURU] Get The Cloudcast Alexa Skill [A CLOUD GURU] DISCOUNT: Serverless for Beginners (only $15 instead of $29) [A CLOUD GURU] FREE: Alexa Development for Absolute Beginners [FREE] eBook from O'Reilly Feedback? Email: show at thecloudcast dot net Twitter: @thecloudcastnet and @ServerlessCast

BSD Now
222: How Netflix works

BSD Now

Play Episode Listen Later Nov 29, 2017 127:25


We take a look at two-faced Oracle, cover a FAMP installation, how Netflix works the complex stuff, and show you who the patron of yak shaving is. This episode was brought to you by Headlines Why is Oracle so two-faced over open source? (https://www.theregister.co.uk/2017/10/12/oracle_must_grow_up_on_open_source/) Oracle loves open source. Except when the database giant hates open source. Which, according to its recent lobbying of the US federal government, seems to be "most of the time". Yes, Oracle has recently joined the Cloud Native Computing Foundation (CNCF) to up its support for open-source Kubernetes and, yes, it has long supported (and contributed to) Linux. And, yes, Oracle has even gone so far as to (finally) open up Java development by putting it under a foundation's stewardship. Yet this same, seemingly open Oracle has actively hammered the US government to consider that "there is no math that can justify open source from a cost perspective as the cost of support plus the opportunity cost of forgoing features, functions, automation and security overwhelm any presumed cost savings." That punch to the face was delivered in a letter to Christopher Liddell, a former Microsoft CFO and now director of Trump's American Technology Council, by Kenneth Glueck, Oracle senior vice president. The US government had courted input on its IT modernisation programme. Others writing back to Liddell included AT&T, Cisco, Microsoft and VMware. In other words, based on its letter, what Oracle wants us to believe is that open source leads to greater costs and poorly secured, limply featured software. Nor is Oracle content to leave it there, also arguing that open source is exactly how the private sector does not function, seemingly forgetting that most of the leading infrastructure, big data, and mobile software today is open source. Details! Rather than take this counterproductive detour into self-serving silliness, Oracle would do better to follow Microsoft's path. 
Microsoft, too, used to Janus-face its way through open source, simultaneously supporting and bashing it. Only under chief executive Satya Nadella's reign did Microsoft realise it's OK to fully embrace open source, and its financial results have loved the commitment. Oracle has much to learn, and emulate, in Microsoft's approach. I love you, you're perfect. Now change Oracle has never been particularly warm and fuzzy about open source. As founder Larry Ellison might put it, Oracle is a profit-seeking corporation, not a peace-loving charity. To the extent that Oracle embraces open source, therefore it does so for financial reward, just like every other corporation. Few, however, are as blunt as Oracle about this fact of corporate open-source life. As Ellison told the Financial Times back in 2006: "If an open-source product gets good enough, we'll simply take it. So the great thing about open source is nobody owns it – a company like Oracle is free to take it for nothing, include it in our products and charge for support, and that's what we'll do. "So it is not disruptive at all – you have to find places to add value. Once open source gets good enough, competing with it would be insane... We don't have to fight open source, we have to exploit open source." "Exploit" sounds about right. While Oracle doesn't crack the top-10 corporate contributors to the Linux kernel, it does register a respectable number 12, which helps it influence the platform enough to feel comfortable building its IaaS offering on Linux (and Xen for virtualisation). Oracle has also managed to continue growing MySQL's clout in the industry while improving it as a product and business. As for Kubernetes, Oracle's decision to join the CNCF also came with P&L strings attached. "CNCF technologies such as Kubernetes, Prometheus, gRPC and OpenTracing are critical parts of both our own and our customers' development toolchains," said Mark Cavage, vice president of software development at Oracle. 
One can argue that Oracle has figured out the exploitation angle reasonably well. This, however, refers to the right kind of exploitation, the kind that even free software activist Richard Stallman can love (or, at least, tolerate). But when it comes to government lobbying, Oracle looks a lot more like Mr Hyde than Dr Jekyll. Lies, damned lies, and Oracle lobbying The current US president has many problems (OK, many, many problems), but his decision to follow the Obama administration's support for IT modernisation is commendable. Most recently, the Trump White House asked for feedback on how best to continue improving government IT. Oracle's response is high comedy in many respects. As TechDirt's Mike Masnick summarises, Oracle's "latest crusade is against open-source technology being used by the federal government – and against the government hiring people out of Silicon Valley to help create more modern systems. Instead, Oracle would apparently prefer the government just give it lots of money." Oracle is very good at making lots of money. As such, its request for even more isn't too surprising. What is surprising is the brazenness of its position. As Masnick opines: "The sheer contempt found in Oracle's submission on IT modernization is pretty stunning." Why? Because Oracle contradicts much that it publicly states in other forums about open source and innovation. More than this, Oracle contradicts much of what we now know is essential to competitive differentiation in an increasingly software and data-driven world. Take, for example, Oracle's contention that "significant IT development expertise is not... central to successful modernization efforts". What? 
In our "software is eating the world" existence Oracle clearly believes that CIOs are buyers, not doers: "The most important skill set of CIOs today is to critically compete and evaluate commercial alternatives to capture the benefits of innovation conducted at scale, and then to manage the implementation of those technologies efficiently." While there is some truth to Oracle's claim – every project shouldn't be a custom one-off that must be supported forever – it's crazy to think that a CIO – government or otherwise – is doing their job effectively by simply shovelling cash into vendors' bank accounts. Indeed, as Masnick points out: "If it weren't for Oracle's failures, there might not even be a USDS [the US Digital Service created in 2014 to modernise federal IT]. USDS really grew out of the emergency hiring of some top-notch internet engineers in response to the Healthcare.gov rollout debacle. And if you don't recall, a big part of that debacle was blamed on Oracle's technology." In short, blindly giving money to Oracle and other big vendors is the opposite of IT modernisation. In its letter to Liddell, Oracle proceeded to make the fantastic (by which I mean "silly and false") claim that "the fact is that the use of open-source software has been declining rapidly in the private sector". What?!? This is so incredibly untrue that Oracle should score points for being willing to say it out loud. Take a stroll through the most prominent software in big data (Hadoop, Spark, Kafka, etc.), mobile (Android), application development (Kubernetes, Docker), machine learning/AI (TensorFlow, MxNet), and compare it to Oracle's statement. One conclusion must be that Oracle believes its CIO audience is incredibly stupid. Oracle then tells a half-truth by declaring: "There is no math that can justify open source from a cost perspective." How so? 
Because "the cost of support plus the opportunity cost of forgoing features, functions, automation and security overwhelm any presumed cost savings." Which I guess is why Oracle doesn't use any open source like Linux, Kubernetes, etc. in its services. Oops. The Vendor Formerly Known As Satan The thing is, Oracle doesn't need to do this and, for its own good, shouldn't do this. After all, we already know how this plays out. We need only look at what happened with Microsoft. Remember when Microsoft wanted us to "get the facts" about Linux? Now it's a big-time contributor to Linux. Remember when it told us open source was anti-American and a cancer? Now it aggressively contributes to a huge variety of open-source projects, some of them homegrown in Redmond, and tells the world that "Microsoft loves open source." Of course, Microsoft loves open source for the same reason any corporation does: it drives revenue as developers look to build applications filled with open-source components on Azure. There's nothing wrong with that. Would Microsoft prefer government IT to purchase SQL Server instead of open-source-licensed PostgreSQL? Sure. But look for a single line in its response to the Trump executive order that signals "open source is bad". You won't find it. Why? Because Microsoft understands that open source is a friend, not foe, and has learned how to monetise it. Microsoft, in short, is no longer conflicted about open source. It can compete at the product level while embracing open source at the project level, which helps fuel its overall product and business strategy. Oracle isn't there yet, and is still stuck where Microsoft was a decade ago. It's time to grow up, Oracle. For a company that builds great software and understands that it increasingly needs to depend on open source to build that software, it's disingenuous at best to lobby the US government to put the freeze on open source. 
Oracle needs to learn from Microsoft, stop worrying and love the open-source bomb. It was a key ingredient in Microsoft's resurgence. Maybe it could help Oracle get a cloud clue, too. Install FAMP on FreeBSD (https://www.linuxsecrets.com/home/3164-install-famp-on-freebsd) The acronym FAMP refers to a set of free open source applications commonly used in web server environments, Apache, MySQL and PHP on the FreeBSD operating system, a server stack that provides web services, a database and PHP. Prerequisites: sudo installed and working (please read the link above for installing sudo), Apache, PHP5 or PHP7, MySQL or MariaDB, and your favorite editor (ours is vi). Note: You don't need to upgrade FreeBSD, but make sure all patches have been installed and your ports tree is up to date if you plan to update by ports. Install Ports: portsnap fetch You must use sudo for each individual command during installations. Searching available Apache versions to install: pkg search apache Install Apache: to install Apache 2.4 using pkg (the Apache 2.4 user account managing Apache is www in FreeBSD): pkg install apache24 At the confirmation prompt, hit y for yes to install Apache 2.4. This installs Apache and its dependencies. Enable Apache: use sysrc to mark services to be started at boot time; the command below adds apache24_enable="YES" to the /etc/rc.conf file: sysrc apache24_enable=yes Start Apache: service apache24 start Then visit your server's public IP address in your web browser. How to find your server's public IP address: if you do not know what your server's public IP address is, there are a number of ways to find it. Usually, this is the address you use to connect to your server through SSH. ifconfig vtnet0 | grep "inet " | awk '{ print $2 }' Now that you have the public IP address, you may use it in your web browser's address bar to access your web server. 
Install MySQL Now that we have our web server up and running, it is time to install MySQL, the relational database management system. The MySQL server will organize and provide access to databases where our server can store information. Install MySQL 5.7 using pkg by typing pkg install mysql57-server Enter y at the confirmation prompt. This installs the MySQL server and client packages. To enable the MySQL server as a service, add mysql_enable="YES" to the /etc/rc.conf file. This sysrc command will do just that: sysrc mysql_enable=yes Now start the MySQL server: service mysql-server start Next, run the security script that will remove some dangerous defaults and slightly restrict access to your database system: mysql_secure_installation Answer all questions to secure your newly installed MySQL database. Enter current password for root (enter for none): [RETURN] Your database system is now set up and we can move on. Install PHP 5 or PHP 7.0 pkg search php70 To install PHP 7.0 you would type pkg install php70-mysqli mod_php70 Note: In these instructions we are using PHP 5.6, not PHP 7.0. We will be coming out with PHP 7.0 instructions with FPM. PHP is the component of our setup that will process code to display dynamic content. It can run scripts, connect to MySQL databases to get information, and hand the processed content over to the web server to display. We're going to install the mod_php, php-mysql, and php-mysqli packages. To install PHP 5.6 with pkg, run this command: pkg install mod_php56 php56-mysql php56-mysqli Copy the sample PHP configuration file into place: cp /usr/local/etc/php.ini-production /usr/local/etc/php.ini Then regenerate the system's cached information about your installed executable files: rehash Before using PHP, you must configure it to work with Apache. Install PHP Modules (Optional) To enhance the functionality of PHP, we can optionally install some additional modules. 
To see the available options for PHP 5.6 modules and libraries, you can type this into your system: pkg search php56

To get more information about a module, look at the long description of the package by typing: pkg search -f apache24

Optional install example: pkg install php56-calendar

Configure Apache to use the PHP module. Open the Apache configuration file:

```
vim /usr/local/etc/apache24/Includes/php.conf
```

DirectoryIndex index.php index.html

Next, we will configure Apache to process requested PHP files with the PHP processor. Add these lines to the end of the file:

SetHandler application/x-httpd-php
SetHandler application/x-httpd-php-source

Now restart Apache to put the changes into effect:

```
service apache24 restart
```

Test PHP processing: By default, the DocumentRoot is set to /usr/local/www/apache24/data. We can create the info.php file under that location by typing:

```
vim /usr/local/www/apache24/data/info.php
```

Add the following line to info.php and save it:

```
<?php phpinfo(); ?>
```

Details on info.php: the info.php file gives you information about your server from the perspective of PHP. It's useful for debugging and for ensuring that your settings are being applied correctly. If this was successful, then your PHP is working as expected.

You probably want to remove info.php after testing, because it could give information about your server to unauthorized users. Remove the file by typing:

```
rm /usr/local/www/apache24/data/info.php
```

Note: Make sure the root of Apache - the user that should have been created during the Apache install - is the owner of the /usr/local/www structure.

That explains FAMP on FreeBSD.
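The two SetHandler lines above normally sit inside FilesMatch blocks that the page extraction stripped. A sketch that writes a reconstructed stanza to a temp file - the FilesMatch patterns follow the common Apache 2.4 convention and are an assumption, not copied from the original tutorial:

```shell
# Reconstructed php.conf stanza, written to a temp file for illustration
# (the real file lives at /usr/local/etc/apache24/Includes/php.conf).
conf=$(mktemp)
cat > "$conf" <<'EOF'
DirectoryIndex index.php index.html
<FilesMatch "\.php$">
    SetHandler application/x-httpd-php
</FilesMatch>
<FilesMatch "\.phps$">
    SetHandler application/x-httpd-php-source
</FilesMatch>
EOF
grep -c 'SetHandler' "$conf"    # prints: 2
```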
IXsystems TrueNAS X10 Torture Test & Fail Over Systems In Action with the ZFS File System (https://www.youtube.com/watch?v=GG_NvKuh530)

How Netflix works: what happens every time you hit Play (https://medium.com/refraction-tech-everything/how-netflix-works-the-hugely-simplified-complex-stuff-that-happens-every-time-you-hit-play-3a40c9be254b)

Not long ago, House of Cards came back for its fifth season, finally ending a long wait for binge watchers across the world interested in an American politician's ruthless ascendance to the presidency. For them, kicking off a marathon is as simple as reaching for your device or remote, opening the Netflix app and hitting Play. Simple, fast and instantly gratifying.

What isn't as simple is what goes into running Netflix, a service that streams around 250 million hours of video per day to around 98 million paying subscribers in 190 countries. At this scale, providing quality entertainment in a matter of a few seconds to every user is no joke. And as much as it means building top-notch infrastructure at a scale no other Internet service has done before, it also means that a lot of participants in the experience have to be negotiated with and kept satiated - from production companies supplying the content, to internet providers dealing with the network traffic Netflix brings upon them.

This is, in short and in the most layman terms, how Netflix works. Let us try to understand how Netflix is structured on the technological side with a simple example.

Netflix literally ushered in a revolution around ten years ago by rewriting the applications that run the entire service to fit into a microservices architecture - which means that each application, or microservice's, code and resources are its very own. It will not share any of it with any other app by nature.
And when two applications do need to talk to each other, they use an application programming interface (API) - a tightly-controlled set of rules that both programs can handle. Developers can now make many changes, small or huge, to each application as long as they ensure that it plays well with the API. And since one program knows the other's API properly, no change will break the exchange of information.

Netflix estimates that it uses around 700 microservices to control the many parts of what makes up the entire Netflix service: one microservice stores what shows you watched, one deducts the monthly fee from your credit card, one provides your device with the correct video files it can play, one looks at your watching history and uses algorithms to guess a list of movies you will like, and one provides the names and images of those movies to be shown in a list on the main menu. And that's just the tip of the iceberg. Netflix engineers can make changes to any part of the application and can introduce new changes rapidly while ensuring that nothing else in the entire service breaks down.

They made a courageous decision to get rid of maintaining their own servers and move all of their stuff to the cloud - i.e., run everything on the servers of someone else who deals with maintaining the hardware while Netflix engineers write hundreds of programs and deploy them on the servers rapidly. The someone else they chose for their cloud-based infrastructure is Amazon Web Services (AWS).

Netflix works on thousands of devices, and each of them plays a different format of video and sound files. Another set of AWS servers takes the original film file and converts it into hundreds of files, each meant to play the entire show or film on a particular type of device and a particular screen size or video quality.
One file will work exclusively on the iPad, one on a full HD Android phone, one on a Sony TV that can play 4K video and Dolby sound, one on a Windows computer, and so on. Even more of these files can be made with varying video qualities so that they are easier to load on a poor network connection. This is a process known as transcoding. A special piece of code is also added to these files to lock them with what is called digital rights management, or DRM - a technological measure which prevents piracy of films.

The Netflix app or website determines what particular device you are using to watch, and fetches the exact file for that show meant to play on your particular device, with a particular video quality based on how fast your internet is at that moment.

Here, instead of relying on AWS servers, Netflix installs its very own around the world. They have only one purpose - to store content smartly and deliver it to users. Netflix strikes deals with internet service providers and provides them the red Open Connect box at no cost. ISPs install these along with their servers. These Open Connect boxes download the Netflix library for their region from the main servers in the US - if there are multiple boxes, each will preferentially store the content that is more popular with Netflix users in its region, to prioritise speed. So a rarely watched film might take longer to load than a Stranger Things episode.

Now, when you connect to Netflix, the closest Open Connect box to you delivers the content you need, so videos load faster than if your Netflix app tried to load them from the main servers in the US.

In a nutshell, this is what happens when you hit that Play button: Hundreds of microservices, or tiny independent programs, work together to make one large Netflix service. Content legally acquired or licensed is converted into a size that fits your screen, and protected from being copied.
Servers across the world make a copy of it and store it so that the closest one to you delivers it at max quality and speed. When you select a show, your Netflix app cherry-picks which of these servers it will load the video from. You are now gripped by Frank Underwood's chilling tactics, given depression by BoJack Horseman's rollercoaster life, tickled by Dev in Master of None and made phobic to the future of technology by the stories in Black Mirror. And your lifespan decreases as your binge watching turns you into a couch potato. It looked so simple before, right?

News Roundup

Moving FreshPorts (http://dan.langille.org/2017/11/15/moving-freshports/)

Today I moved the FreshPorts website from one server to another. My goal is for nobody to notice. In preparation for this move, I have: reduced the DNS TTL to 60s, posted to Twitter, updated the status page, and put the website in offline mode.

What was missed: I turned off commit processing on the new server, but I did not do this on the old server. I should have:

```
sudo svc -d /var/service/freshports
```

That stops processing of incoming commits. No data is lost, but it keeps the two databases at the same spot in history. Commit processing could continue during the database dumping, but that does not affect the dump, which will be consistent regardless.

The offline code: Here is the basic stuff I used to put the website into offline mode. The main points are:

```
header("HTTP/1.1 503 Service Unavailable");
ErrorDocument 404 /index.php
```

I move the DocumentRoot to a new directory, containing only index.php. Every error invokes index.php, which returns a 503 code.

The dump: The database dump just started (Sun Nov 5 17:07:22 UTC 2017).

```
root@pg96:~ # /usr/bin/time pg_dump -h 206.127.23.226 -Fc -U dan freshports.org > freshports.org.9.6.dump
```

That should take about 30 minutes. I have set a timer to remind me.
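The move verifies the dump by checksumming it on both sides and comparing. A toy sketch of that verification, with scratch files standing in for the 30-minute dump (paths, content, and the `md5_of` helper are invented for illustration):

```shell
# Stand-in for verifying a copied dump: checksum source and copy, compare.
src=$(mktemp); dst=$(mktemp)
printf 'pretend pg_dump output\n' > "$src"
cp "$src" "$dst"                      # stand-in for the rsync step
# Use BSD md5 if present, otherwise GNU md5sum.
md5_of() { md5 -q "$1" 2>/dev/null || md5sum "$1" | awk '{print $1}'; }
[ "$(md5_of "$src")" = "$(md5_of "$dst")" ] && echo "MD5 matches"
```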
Total time was: 1464.82 real 1324.96 user 37.22 sys

The MD5 is: MD5 (freshports.org.9.6.dump) = 5249b45a93332b8344c9ce01245a05d5

It is now: Sun Nov 5 17:34:07 UTC 2017

The rsync: The rsync should take about 10-20 minutes. I have already done an rsync of yesterday's dump file. The rsync today should copy over only the deltas (i.e. differences). The rsync started at about Sun Nov 5 17:36:05 UTC 2017. That took 2m9.091s. The MD5 matches.

The restore: The restore should take about 30 minutes. I ran this test yesterday. It is now Sun Nov 5 17:40:03 UTC 2017.

```
$ createdb -T template0 -E SQL_ASCII freshports.testing
$ time pg_restore -j 16 -d freshports.testing freshports.org.9.6.dump
```

Done. real 25m21.108s user 1m57.508s sys 0m15.172s

It is now Sun Nov 5 18:06:22 UTC 2017.

Insert break here: About here, I took a 30 minute break to run an errand. It was worth it.

Changing DNS: I'm ready to change DNS now. It is Sun Nov 5 19:49:20 EST 2017. Done. And nearly immediately, traffic started.

How many misses? During this process, XXXXX requests were declined:

```
$ grep -c '" 503 ' /usr/websites/log/freshports.org-access.log
XXXXX
```

That's it, we're done. Total elapsed time: 1 hour 48 minutes. There are still a number of things to follow up on, but that was the transfer.

The new FreshPorts Server (http://dan.langille.org/2017/11/17/x8dtu-3/) ***

Using bhyve on top of CEPH (https://lists.freebsd.org/pipermail/freebsd-virtualization/2017-November/005876.html)

Hi, Just an info point. I'm preparing for a lecture tomorrow, and thought why not do an actual demo....
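The miss count in the FreshPorts move above is a plain grep -c over the access log. The same count against a fabricated three-line log (the log lines are invented, not real traffic):

```shell
# Count refused (503) requests in a fake access log.
log=$(mktemp)
cat > "$log" <<'EOF'
203.0.113.1 - - [05/Nov/2017] "GET / HTTP/1.1" 503 180
203.0.113.1 - - [05/Nov/2017] "GET /ports HTTP/1.1" 503 180
203.0.113.9 - - [05/Nov/2017] "GET / HTTP/1.1" 200 4523
EOF
grep -c '" 503 ' "$log"    # prints: 2
```

The pattern anchors on the closing quote of the request line, so a 503 appearing elsewhere (e.g. as a byte count) is not miscounted.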
Like to be friends with Murphy :) So after I started the cluster: 5 jails with 7 OSDs. This is what I manually needed to do to boot a memory stick.

Start a bhyve instance:

```
rbd --dest-pool rbddata --no-progress import memstick.img memstick
rbd-ggate map rbddata/memstick
```

The ggate device is available on /dev/ggate1.

```
kldload vmm
kldload nmdm
kldload if_tap
kldload if_bridge
kldload cpuctl
sysctl net.link.tap.up_on_open=1
ifconfig bridge0 create
ifconfig bridge0 addm em0 up
ifconfig tap11 create
ifconfig bridge0 addm tap11
ifconfig tap11 up
```

Load the GGate disk in bhyve:

```
bhyveload -c /dev/nmdm11A -m 2G -d /dev/ggate1 FB11
```

and boot a single VM from it:

```
bhyve -H -P -A -c 1 -m 2G -l com1,/dev/nmdm11A -s 0:0,hostbridge -s 1:0,lpc -s 2:0,virtio-net,tap11 -s 4,ahci-hd,/dev/ggate1 FB11 &
bhyvectl --vm=FB11 --get-stats
```

Connect to the VM:

```
cu -l /dev/nmdm11B
```

And that'll give you a bhyve VM running on an RBD image over ggate. In the installer I tested reading from the boot disk:

```
root@:/ # dd if=/dev/ada0 of=/dev/null bs=32M
21+1 records in
21+1 records out
734077952 bytes transferred in 5.306260 secs (138341865 bytes/sec)
```

which is a nice 138 MB/sec. Hope the demonstration works out tomorrow. --WjW ***

Donald Knuth - The Patron Saint of Yak Shaves (http://yakshav.es/the-patron-saint-of-yakshaves/)

Excerpts: In 2015, I gave a talk in which I called Donald Knuth the Patron Saint of Yak Shaves. The reason is that Donald Knuth achieved the most perfect and long-running yak shave: TeX. I figured this is worth repeating.

How to achieve the ultimate Yak Shave: The ultimate yak shave is the combination of improbable circumstance, the privilege to be able to shave at your heart's will, and the will to follow things through to the end. Here's the way it was achieved with TeX. The recounting is purely mine, inaccurate and obviously there for fun. I'll avoid the most boring facts that everyone always tells, such as why Knuth's checks have their own Wikipedia page.
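The dd throughput quoted in the bhyve demo above can be sanity-checked from dd's own byte count and elapsed time:

```shell
# 734077952 bytes over 5.306260 seconds, per the dd output above.
awk 'BEGIN { printf "%.0f MB/sec\n", 734077952 / 5.306260 / 1000000 }'
```

which lands, within rounding, on the reported figure of roughly 138 MB/sec.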
Community Shaving is Best Shaving: Since the release of TeX, the community has been busy working on using it as a platform. If you ever downloaded the full TeX distribution, please bear in mind that you are downloading the amassed work of over 40 years, to make sure that each and every TeX document ever written builds. We're talking about documents here. But mostly, two big projects sprung out of that.

The first is LaTeX by Leslie Lamport. Lamport is a very productive researcher, famous for research in formal methods through TLA+ and also known for laying the groundwork for many distributed algorithms. LaTeX is based on the idea of separating presentation and content. It is based around the idea of document classes, which describe the way a certain document is laid out. Think Markdown, just much more complex.

The second is ConTeXt, which is far more focused on fine-grained layout control.

The Moral of the Story: Whenever you feel like "can't we just replace this whole thing, it can't be so hard" when handling TeX, don't forget how many years of work and especially knowledge were poured into that system. Typesetting isn't the most popular knowledge among programmers. Especially see it in the context of the space it is in: they can't remove legacy. Ever. That would break documents.

TeX is also not a programming language. It might resemble one, but mostly, it should be approached as a typesetting system first. A lot of its confusing lingo gets much better then. It's not programming lingo. By approaching TeX with an understanding of its history, a lot of things can be learned from it. And yes, a replacement would be great, but it would take ages.

In any case, I hope I thoroughly convinced you why Donald Knuth is the Patron Saint of Yak Shaves.

Extra Credits: This comes out of an enjoyable discussion with [Arne from Lambda Island](https://lambdaisland.com/), who listened and said "you should totally turn this into a talk".
Vincent's trip to EuroBSDCon 2017 (http://www.vincentdelft.be/post/post_20171016)

My EuroBSDCon 2017. Posted on 2017-10-16 09:43:00 from Vincent in OpenBSD.

Let me just share my feedback on those 2 days spent in Paris for the EuroBSDCon - my first BSDCon. I'm not a developer, contributor, ... Do not expect to improve your skills with OpenBSD with this text :-) I know, we are on October 16th, and the EuroBSDCon in Paris was 3 weeks ago :( I'm not quick!!! Sorry for that.

Arriving at 10h, I'm too late for the start of the keynote. The few persons behind a desk welcome me by talking in Dutch, mainly because of my name. Indeed, Delft is a city in the Netherlands, and also a well known university. I inform them that I'm from Belgium, and the discussion moves to the fact that Fosdem is located in Brussels. I receive my nice T-shirt, white and blue, a bit like a marine T-shirt, but with the nice EuroBSDCon logo.

I ask where the different rooms reserved for the BSD event are. We have one big room on the 1st floor, one medium room one level below, and two small rooms one level above. All are really easy to access. In the entrance we have 4 or 5 tables with persons representing their companies. Those are mainly the big sponsors of the event, providing details about their activity and business. I discuss a little bit with StormShield and Gandi. At other tables people are selling BSD t-shirts, which will quickly be sold out.

"Is it done yet?" The never ending story of pkg tools: At the last Fosdem, I had already heard Antoine and Baptiste presenting the OpenBSD and FreeBSD battle, so I decide to listen to Marc Espie in the medium room called Karnak. Marc explains that he has completely rewritten the pkg_add command. He explains that, in contrast with other elements of OpenBSD, the package tools must be backward compatible and stable over a longer period than 12 months (the support period for OpenBSD). On the funny side, he explains that he has his best ideas in the bath.
Hackathons are also used to validate some ideas with other OpenBSD developers. All in all, he explains that the most time-consuming part is imagining a good solution. Coding it is quite straightforward. He adds that the better an idea is, the shorter the implementation will be.

A Tale of six motherboards, three BSDs and coreboot: After lunch I decide to listen to the talk about coreboot. Indeed, 1 or 2 years ago I had listened to the Libreboot project at Fosdem. Since they made several references to coreboot, it's a perfect occasion to listen more carefully to this project. Piotr and Katazyba Kubaj explain how to boot a machine without the native BIOS. Indeed coreboot can replace the BIOS, and de facto avoid several binaries imposed by the vendor. They explain that some motherboards support their code. But they also show how difficult it is to flash a BIOS and replace it with coreboot. They even destroyed a motherboard during the installation, apparently because the power supply they were using was not stable enough at 3V. It's really amazing to see that open source developers can go, by themselves, to such a deep technical level.

State of the DragonFly's graphics stack: After this coreboot talk, I decide to stay in the room to follow the presentation of François Tigeot. François is now one of the core developers of DragonFlyBSD, an amazing BSD system with its own filesystem called HAMMER. HAMMER offers several amazing features like snapshots, checksummed data integrity, deduplication, ... François has spent the last years integrating the video drivers developed for Linux into DragonFlyBSD. He explains that instead of adapting the video card code to the kernel API of DragonFlyBSD, he has "simply" built an intermediate layer between the kernel of DragonFlyBSD and the video drivers. This is not said in the talk, but this effort is very impressive. Indeed, this is more or less a Linux emulator inside DragonFlyBSD.
François explains that he started with the Intel video driver (drm/i915), but now he is able to run drm/radeon quite well, and also drm/amdgpu and drm/nouveau.

Discovering OpenBSD on AWS: Then I move to the small room at the upper level to follow a presentation by Laurent Bernaille on OpenBSD and AWS. First Laurent explains that he is re-using the work done by Antoine Jacoutot concerning the integration of OpenBSD into AWS. But on top of that he has integrated several other open source solutions allowing him to build OpenBSD machines very quickly with one command. Moreover those machines will have the network config, the required packages, ... On top of the slides presented, he shows us, in a real demo, how this system works. An amazing presentation which shows that, by putting the correct tools together, a machine can build and configure other machines in one go.

OpenBSD Testing Infrastructure Behind bluhm.genua.de: Here Jan Klemkow explains that he has set up a lab where he is able to run different OpenBSD architectures. The system has been designed to install, on demand, a certain version of OpenBSD on the different available machines. On top of that a regression test script can be triggered. This provides reports showing what is working and what is no longer working on the different machines. If I've understood well, Jan is willing to provide such a lab to the core developers of OpenBSD in order to allow them to validate their code easily and quickly. Some more effort is needed to reach this goal, but with what exists today, Jan and his colleagues are quite close. Since his company uses OpenBSD in its business, in his eyes this system is a "tit for tat" to the OpenBSD community.

French story on cybercrime: Then comes the second keynote of the day in the big auditorium. This talk is given by a colonel of the French gendarmerie, Mr Freyssinet, who is head of the cyber crimes unit inside the Gendarmerie.
Mr Freyssinet explains that the "bad guys" are more and more volatile across countries, and more and more organized. The lone hacker in his room is no longer the reality. As a consequence the different national police investigators are collaborating more inside an organization called Interpol. What is amazing in his talk is that Mr Freyssinet talks about "Crime as a service". Indeed, more and more hackers are selling their services to "bad and temporary organizations".

Social event: It's now time for the famous social event on the river: la Seine. The organizers ask us to go, in small groups, to a station. There is a walk of 15 minutes through Paris. Fortunately the weather is perfect. To identify themselves clearly, several organizers carry a "beastie fork" in their hands as they walk on the sidewalk, generating some amazing reactions from citizens and tourists. Some of them recognize the FreeBSD logo and ask us for details. Amazing :-)

We walk on small and big sidewalks until a small stair going under the street. There, we find a train station a bit like a metro station. Three stations later they ask us to get out. We walk a few minutes and arrive in front of a boat with a double deck: one inside, with nice tables and chairs, and one on the roof. The crew asks us to go up to the second deck. There, we are welcomed with a glass of wine. The Tour Eiffel is just a few hundred meters from us. Every hour the Eiffel tower blinks for 5 minutes with thousands of small lights. Brilliant :-) We also see the Statue de la Liberté (the small one), which is on a small island in the middle of the river.

During the whole night the bar is open with drinks and some appetizers, snacks, ... Such a walking dinner is perfect for talking with many different persons. I've discussed with several persons just using BSD; like me, they are not deep and specialized developers. One was from Switzerland, another from Austria, and another from the Netherlands.
But I've also followed a discussion with Theo de Raadt and several persons of the FreeBSD Foundation. Some are very technical guys, others just users, like me. But all with the same passion for one of the BSD systems. An amazing evening.

OpenBSD's small steps towards DTrace (a tale about DDB and CTF): On the second day, I decide to sleep enough in order to have enough resources to drive back home (3 hours by car). So I miss the first presentations, and arrive at the event around 10h30. Lots of persons are already present. Some faces are less "fresh" than others. I decide to listen to DTrace in OpenBSD. After 10 minutes I am so lost in the too-technical explanations that I decide to open and look at my PC.

My OpenBSD laptop rarely leaves my home, so I've never needed a screen locking system. In a crowded environment, this is better. So I was looking for a simple solution. I've looked at how to use xlock. I've combined it with the /etc/apm/suspend script, ... Always very easy to use OpenBSD :-)

The OpenBSD web stack: Then I decide to follow the presentation of Michael W Lucas, a well known person for his different books like "Absolute OpenBSD", "Relayd", ... Michael talks about the httpd daemon inside OpenBSD. He also presents its integration with CARP, relayd, PF, FastCGI, and rules based on Lua regexps (as opposed to Perl regexps), ... For sure he emphasizes the security aspects of those tools: privilege separation, chroot, ...

OpenSMTPD, current state of affairs: Then I follow the presentation of Gilles Chehade about the OpenSMTPD project. An amazing presentation that, on top of the technical challenges, shows how to manage such a project across the years. Gilles has been working on OpenSMTPD since 2007 - thus 10 years!!! He explains the different decisions they took to make the software as simple as possible to use, but as secure as possible, too: privilege separation, chroot, pledge, randomized malloc, ...
Development started on BSD systems, but once the project was quite well known it received lots of contributions from Linux developers.

Hoisting: lessons learned integrating pledge into 500 programs: After a small break, I decide to listen to Theo de Raadt, the founder of OpenBSD - in his own style, with trekking boots, shorts, backpack. Theo starts by saying that pledge is the outcome of nightmares. Theo explains that the book called "Hacking Blind", presenting BROP, has worried him for a few years. That's why he developed pledge as a tool that kills a process as soon as possible when there is unforeseen behavior in the program. For example, with pledge a program which can only write to disk will be immediately killed if it tries to reach the network. By implementing pledge in the roughly 500 programs present in the base system, OpenBSD is becoming more secure and more robust.

Conclusion: My first EuroBSDCon was a great, interesting and cool event. I've discussed with several BSD enthusiasts. I've been using OpenBSD since 2010, but I'm not a developer, so I was worried about being "lost" in the middle of experts. In fact it was not the case. At EuroBSDCon you have many different types of BSD enthusiasts and users. What is nice about EuroBSDCon is that the organizers foresee everything for you. You just have to sit and listen. They even foresee how to spend, in a funny and very cool atmosphere, the evening of Saturday.

The small drawback is that all of this has a cost. In my case the whole weekend cost me a bit more than 500 euro. Based on what I've learned and what I've seen, this is a very acceptable price. Nearly all presentations I saw gave me valuable input for my daily job. For sure, the total price is also linked to my personal choices: hotel, parking. And I'm surely biased because I'm used to going to Fosdem in Brussels, which costs nothing (entrance) and is approximately 45 minutes from my home. But Fosdem does not have the same atmosphere and the presentations are less linked to my daily job.
I do not regret my trip to EuroBSDCon and will surely plan other ones. Beastie Bits Important munitions lawyering (https://www.jwz.org/blog/2017/10/important-munitions-lawyering/) AsiaBSDCon 2018 CFP is now open, until December 15th (https://2018.asiabsdcon.org/) ZSTD Compression for ZFS by Allan Jude (https://www.youtube.com/watch?v=hWnWEitDPlM&feature=share) NetBSD on Allwinner SoCs Update (https://blog.netbsd.org/tnf/entry/netbsd_on_allwinner_socs_update) *** Feedback/Questions Tim - Creating Multi Boot USB sticks (http://dpaste.com/0FKTJK3#wrap) Nomen - ZFS Questions (http://dpaste.com/1HY5MFB) JJ - Questions (http://dpaste.com/3ZGNSK9#wrap) Lars - Hardening Diffie-Hellman (http://dpaste.com/3TRXXN4) ***

Software Defined Talk
Episode 75: "AWS and VMware are having a LAN party” or “Matt Ray’s deep story” or “some five year old gibberish”

Software Defined Talk

Play Episode Listen Later Oct 14, 2016 51:31


Summary: Big shakes in cloud land this week with VMware and AWS partnering up. Is this the hybrid cloud enterprises have been dreaming of? We also cover systems of record, Oracle, and something about Google phones. It's a regular episode on all the hot topics! See full show notes: http://cote.io/sdt75 Listen above, subscribe to the feed (or iTunes), or download the MP3 directly. With Brandon Whichard, Matt Ray, and Coté.

Sponsors/Mid-roll: Check out cote.io/promos/ for more - free books, free cloud time, etc. Also: Lords of Computing is now Coté.show. Will put upcoming DrunkAndRetired.com special episode in there. And as always check out Pivotal Conversations. Nov 2nd - Pivotal Kansas City roadshow, Coté'll be there. For more DevOps awesomeness, join the Chef Community Summit, October 26th and 27th in Seattle, WA. This Open Space event provides a great opportunity to connect with the DevOps Community and Chef Engineers over two days of engaging sessions and hallway discussions. Bring your ideas, passion and excitement for Chef and DevOps to this highly interactive event. Go to summit.chef.io to register for this awesome event and use the code PODCAST to get 10% off your ticket! DevOps Days Australia 20% discount code - SDT2016. Matt at DevOps Sydney October 20. Matt at AWS North Sydney October 25.

Show notes: Follow-up: Buy-side commentary on Oracle storming the AWS castle. Those reviews are awesome, thanks so much! I'll be re-jiggering the podcast back end again, so expect some annoying weirdness (fireside.fm appears to be awesome, if expensive). So who's buying Twitter? Tyler Cowen's short term focus and going private escape hatch.
VMware and AWS “VMware Cloud on AWS” “The service will be operated, sold and supported by VMware (not AWS) but integrate with the rest of AWS’ cloud portfolio (think storage, database, analytics and more).” https://medium.com/@cloud_opinion/aws-blinked–20cddbb537ed#.9cuvcp75o “these customers will go to Cloud, but its really a glorified co-lo.” “AWS should be encouraging customers to develop their workloads to take advantage of Cloud ( microservices, serverless etc ) and not delay it further.” InfoWorld piece: They keep talking about hybrid cloud, but what does that mean here? Just “we use multiple cloud types/providers,” or one application running across different clouds? “As part of the deal, VMware will be AWS’s preferred private cloud partner and Amazon will be VMware’s preferred partner in the public cloud.” Some MSP action: “One of the key differences between this deal and the one VMware announced with IBM in February is that this service is being offered and managed by VMware.” “Interested customers can request access to the service’s private beta starting Thursday, but VMware doesn’t expect the service to be live until early next year. General availability of VMware cloud on AWS will have to wait until even later in 2017.” Brief 451 note No data in DevOps Google Devices Roundup, and AI interlude Revisiting the Apple or Google ecosystem question. I hate having to think about ecosystems when buying electronics. And AI. Walt Mossberg Thinks Siri is Dumb Wired Interview with Obama - dude knows AI. BONUS LINKS, not covered in podcast Container Madness! Nothing much new, just content to riff on Microsoft shipping Commercially Supported (CS) Docker Engine Red Hat and containers - relabel, transitioned from originally a PaaS to CaaS. 
DockerCon coming to Austin Opentracing joins the Cloud Native Computing Foundation Announcement Open tracing Luke transitions to new Puppet CEO Luke’s announcement in Twitter Luke is one of the main people who started all this stuff, based on annoyance of BladeLogic, cfengine, etc. Did I ever tell the one of how I did a terrible sales job getting Reductive Labs signed up with RedMonk? The new dude looks like the real deal of enterprise infrastructure. Recommendations Matt: Warren Ellis’ Normal - From his latest newsletter “What science fiction, as a field, is good for, is looking at ten thousand possibilities at once," Zapp Branigan reading Trump quotes Brandon: Slate Plus. Also, Harry’s Blades. Coté: iPhone 7 Plus. Live Photos, Rotate mode, Bokeh stuff actually in beta, Home button takes getting used to, Two speakers is better?