A set of programming tools used together to perform a complex software development task or to create a software product.
Developers spend countless hours of their lives in integrated development environments (IDEs), and we are no exception. Reason enough to invite a true professional to talk about IDEs, about working with and on them, and about the technology behind them. Together with Jan-Niklas Wortmann from JetBrains, we discuss the differences between IDEs and text editors, find out what a language server actually does, and consider the future role of AI in the toolchain and in developers' day-to-day work. We also talk about pricing and distribution of developer software and hear exclusive news about upcoming changes to JetBrains' licensing model.

Write to us! Send us your topic requests and feedback: podcast@programmier.bar

Follow us! Stay up to date on future episodes and virtual meetups and join the community discussions: Twitter, Instagram, Facebook, Meetup, YouTube.

Music: Hanimo
Look forward to our next meetup: on November 28, programmier.bar opens its doors again and invites everyone interested to enjoy two talks on the topic of "Data in Gaming". In this news episode, Dave can finally explain why a small but smart alarm clock has been keeping him busy for weeks. There is also a new, small iPad from Apple. Unfortunately, there is less pleasant news as well: both Firefox and the Arc Browser shipped security vulnerabilities with a CVSS score of 9.8. We cover in detail what they are and how they came about. We also report on the latest drama around Matt Mullenweg, Automattic, and the WordPress community, and discuss the damage currently being done to WordPress and the wider open-source community. But there is good news too: in the Vue/Vite ecosystem, Evan You has raised $4.6 million to found void(0), a company meant to put the Vite ecosystem and its toolchain on entirely new, modern foundations. How exactly that is supposed to work, you can of course hear in the podcast. Please excuse the somewhat unusual audio quality.
.io domains have been in vogue for over a decade, but now that the British government has decided to give up sovereignty over the small set of islands in the Indian Ocean to which that country code belongs, the domain may soon cease to exist. Evan You, of Vue.js and Vite fame, has started a new company, VoidZero Inc., to build the next-generation toolchain for JavaScript. While trying to make Vite even better, Evan realized he needed a full-time team and funding to build the best toolchain around, and the engineers and investors agreed. StackBlitz enters the AI arena as well with its bolt.new offering: AI-powered software development that lets users prompt, run, edit, and deploy full-stack web apps directly in the browser. WordPress drama reaches new levels of pettiness with a new checkbox that users must check before signing into their WP accounts, swearing they are not affiliated with WP Engine in any way. In happier news, Sentry doubles down on its support for open-source software (and its maintainers) by creating the Open Source Pledge, in which companies that profit from OSS are encouraged to commit to paying the maintainers of the software they use, so that burnout and related security issues can be better addressed.

News:
- Paige - void(0) JavaScript tooling
- Jack - StackBlitz's Bolt.new AI dev tool
- TJ - The end of .io domains

Bonus News:
- Waymo update
- WordPress update
- Sentry launches the Open Source Pledge (Sentry itself gave $500k to OS maintainers this year)
- Deno 2 is officially out!

Fire Starters:
- HTTP QUERY

What Makes Us Happy this Week:
- Paige - The Lord of the Rings: The Rings of Power season 2
- Jack - The Substance movie
- TJ - Cider mills

Thanks as always to our sponsor, the Blue Collar Coder channel on YouTube.
You can join us in our Discord channel, explore our website and reach us via email, or tweet at us on X @front_end_fire and Bluesky.
- Front-end Fire website
- Blue Collar Coder on YouTube
- Blue Collar Coder on Discord
- Reach out via email
- Tweet at us on X @front_end_fire
- Follow us on Bluesky @front-end-fire.com
#EMBEDDEDSYSTEMS #IoTDEVELOPMENT www.iotusecase.com

This podcast episode is all about the Rust programming language and its use in IoT. Madeleine Mickeleit talks with Matthias Goetz, Lead Engineer, and Felix Herrmann, FPGA Engineer at ITK Engineering GmbH, a subsidiary of Bosch. Together they discuss what sets Rust apart from other programming languages, along with specific use cases and best practices that can streamline development processes for IoT projects.

Episode 134 at a glance (and a click):
[12:00] Challenges, potential, and status quo - what the use case looks like in practice
[24:48] Solutions, offerings, and services - a look at the technologies in use

Episode summary: Episode 134 is a technical one, aimed above all at application developers out there, or at anyone who works with development teams. Rust is a modern programming language distinguished by innovative concepts such as its ownership model. These features provide memory safety and efficient resource control, making Rust particularly well suited to developing safe and reliable embedded software. Compared to traditional languages such as C and C++, Rust offers greater safety through strict memory management and the absence of null pointers. The Rust compiler helps catch many typical errors during development. Use cases in IoT: Rust is used for both embedded and application software. One concrete example is monitoring vibrations in industrial motors for early detection of anomalies. ITK Engineering explains how Rust can be introduced into IoT projects to speed up development and reduce error rates. They emphasize the importance of integrating Rust into existing systems and making use of Rust's extensive toolchain. Business case and challenges: Rust addresses common problems in software development, such as memory errors and complex testing processes. The language contributes to shorter development cycles and increases efficiency through faster feedback loops and fewer additional tools. Compliance with industry standards: Rust supports compliance with safety standards such as ISO 26262 and MISRA. Institutions such as CISA and the NSA increasingly recommend using memory-safe languages like Rust.

Relevant episode links:
Madeleine (https://www.linkedin.com/in/madeleine-mickeleit/)
Felix (https://www.linkedin.com/in/felix-herrmann-626493283/)
Matthias (https://www.linkedin.com/in/matthias-g%C3%B6tz-bba994294/)
Rust Wiki (https://rustwiki.org/en/)
Study of 1,000 Google developers (https://opensource.googleblog.com/2023/06/rust-fact-vs-fiction-5-insights-from-googles-rust-journey-2022.html)
Follow IoT Use Case on LinkedIn
Biome started as a fork of Rome but has grown into a robust open-source toolchain that provides lightning-fast Rust-driven formatting, linting, import sorting, and a built-in LSP. Join us as we interview Emanuele Stoppa about Biome's current abilities, how Ema and the open-source community resurrected it from the ashes of Rome, and what we can look forward to as the team implements its roadmap.

More about Ema and Biome:
https://github.com/biomejs/biome
Rust book, if people want to start to learn Rust: https://doc.rust-lang.org/stable/book/
How to create a new lint rule inside Biome: https://www.youtube.com/watch?v=zfzMO3nW_Wo&t=354s
X: @ematipico
LinkedIn: Emanuele Stoppa

Follow us on X: The Angular Plus Show

The Angular Plus Show is a part of ng-conf. ng-conf is a multi-day Angular conference focused on delivering the highest quality training in the Angular JavaScript framework. Developers from across the globe converge on Salt Lake City, UT every year to attend talks and workshops by the Angular team and community experts.
Join: http://www.ng-conf.org/
Attend: https://ti.to/ng-conf
Follow: https://twitter.com/ngconf | https://www.linkedin.com/company/ng-conf | https://bsky.app/profile/ng-conf.bsky.social | https://www.facebook.com/ngconfofficial
Read: https://medium.com/ngconf
Watch: https://www.youtube.com/@ngconfonline

Stock media provided by JUQBOXMUSIC / Pond5
From the team behind Million.js there is now not only an accelerator for the React runtime but also a new linter, "Million Lint", likewise built specifically for React. Fabi explains what it is all about. Performance is famously a priority for the web framework Astro as well, and the newest offering from Astro follows that promise: Astro DB. The database-as-a-service offering additionally promises easy integration with Astro and your existing toolchain. Tailwind has published the first alphas of version 4.0. Besides simpler usage, the biggest change in version 4 is a new CSS engine built on Rust and TypeScript, codenamed Oxide. Also built on Rust is Tauri. The Electron alternative uses Rust and OS-native WebViews to bring applications built from HTML, CSS, and JavaScript to the desktop; since the 2.0 beta, that now works for iOS and Android too. Anyone who doesn't feel like learning Rust themselves may in the future be able to rely on Devin, the first "AI software engineer". As a small teaser, at the end there is a pointer to "Node.js: The Documentary" by Honeypot.
Dive into the world of DevOps with our special episode of the DevOps Toolchain Show, where we unveil the reasons behind our rebranding and deep-dive into the significance of performance testing within the DevOps ecosystem. Discover how continuous testing, encompassing performance testing, becomes a vital cog in the machine, propelling the delivery of top-quality software at breakneck speeds. Buckle up as we navigate through the intricacies of the DevOps toolchain, spotlighting the essential stages and tools needed to streamline software development.
Pre-release announcement for Go 1.20.1 & 1.19.6 to fix private security issues
Pre-release announcement for golang.org/x/image/tiff & golang.org/x/image to fix private security issues
Transparent Telemetry
GitHub Discussion (now locked)
Blog post explaining the problem and proposed solution
GopherCon Israel
Apache Arrow 11.0 released

Matt Topol:
GitHub profile
Voltron Data
Book: In-Memory Analytics with Apache Arrow
Presentation at Subsurface: Understanding Apache Arrow
Presentation at ApacheCon 2022: Apache Arrow and Go: A Match Made in Data
Apache Arrow project web site
Apache Arrow Go library
Follow Matt on Twitter, LinkedIn, or Mastodon
Matt will be speaking at the free, virtual conference Subsurface on March 1
What advantages can a build system provide for a Python developer? What new skills are required when working with a team of developers? This week on the show, Benjy Weinberger from Toolchain is here to discuss the Pants build system and getting started with continuous integration (CI).
Benjy Weinberger is the co-founder of Toolchain, a build tool platform. He is one of the creators of the original Pants, an in-house Twitter build system focused on Scala, and was the VP of Infrastructure at Foursquare. Toolchain now focuses on Pants 2, a revamped build system.

Apple Podcasts | Spotify | Google Podcasts

In this episode, we go back to basics and discuss the technical details of scalable build systems like Pants, Bazel, and Buck. A common challenge with these build systems is that it is extremely hard to migrate to them and to have them interoperate with open-source tools that are built differently. Benjy's team redesigned Pants with an initial hyper-focus on Python to fix these shortcomings, in an attempt to create a third generation of build tools: one that easily interoperates with differently built packages but is still fast and scalable.

Machine-generated Transcript

[0:00] Hey, welcome to another episode of the Software at Scale podcast. Joining me here today is Benjy Weinberger, previously a software engineer at Google and Twitter, VP of Infrastructure at Foursquare, and now the founder and CEO of Toolchain. Thank you for joining us.

Thanks for having me. It's great to be here. Yes. Right from the beginning, I saw that you worked at Google in 2002, which is forever ago, like 20 years ago at this point. What was that experience like? What kind of change did you see as you worked there for a few years?

[0:37] As you can imagine, it was absolutely fascinating. And I should mention that I was at Google from 2002, but that was not my first job. I have been a software engineer for over 25 years, and so there were five years before that where I worked at a couple of companies. One was, and I was living in Israel at the time, so my first job out of college was at Check Point, which was a big, successful network security company. And then I worked for a small startup. And then I moved to California and started working at Google.
And so I had the experience that I think many people had in those days, and many still do: the work you're doing is fascinating, but the tools you're given to do it with as a software engineer are not great. I'd had five years of experience of struggling with builds being slow, builds being flaky, with everything requiring a lot of effort. There was almost a hazing-ritual quality to it: this is what makes you a great software engineer, struggling through the mud and through the quicksand with this awful, substandard tooling. We are not users, we are not people for whom products are meant, right? We make products for other people. Then I got to Google.

[2:03] And Google, when I joined, was actually struggling with a very massive, very slow makefile that took forever to parse, let alone run. But the difference, which I had not seen anywhere else, was that Google paid a lot of attention to this problem and devoted a lot of resources to solving it. Google was the first place I'd worked, and I think in many ways still the gold standard, where developers are first-class participants in the business and deserve the best products and the best tools, and if there's nothing out there for them to use, we will build it in house and put a lot of energy into that.

[2:53] For me, specifically as an engineer, a big part of watching that growth in the early-to-late 2000s was the growth of engineering process and best practices and the tools to enforce them. The thing I personally am passionate about is building CI, but I'm also talking about code review tools, all the tooling around source code management and revision control, and just everything to do with engineering process. It really was an object lesson, very fascinating, and it really inspired a big chunk of the rest of my career.

I've heard all sorts of things, like Python scripts that had to generate makefiles, and finally they moved from that to the first version of Blaze. It's a fascinating history.

[3:48] Maybe can you tell us one example of something paradigm-changing that you saw, something that created an order-of-magnitude difference in your experience there, and maybe your first aha moment of "this is how good developer tools can be"?

[4:09] Sure. I think I had been used to using make, basically, up till that point. And Google, again, as you mentioned, was using make and really squeezing everything it was possible to squeeze out of that lemon, and then some.

[4:25] But with the very early versions of what became Blaze, the big internal build system that inspired Bazel, its open-source variant today, one thing that really struck me was the integration with the revision control system, which was, and I think still is, Perforce. I imagine many listeners are very familiar with Git. Perforce is very different; I can only partly remember all of its intricacies, because it's been so long since I've used it. But one interesting aspect was that you could do partial checkouts. It really was designed for giant codebases. There was this concept of partial checkouts where you could check out just the bits of the code that you needed. But of course, then the question is: how do you know what those bits are? And the build system knows, because the build system knows about dependencies.
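The partial-checkout idea described here, fetching only the files you asked for plus everything they depend on, amounts to a transitive closure over the build system's dependency graph. A minimal sketch (the graph shape and file names are hypothetical, not from any real build system):

```python
# Sketch: compute the set of files needed for a partial checkout,
# i.e. the requested files plus all of their transitive dependencies.
# The dependency graph below is a made-up example.

def transitive_deps(graph: dict[str, set[str]], roots: set[str]) -> set[str]:
    """Return roots plus everything reachable from them in `graph`."""
    needed, stack = set(), list(roots)
    while stack:
        node = stack.pop()
        if node not in needed:
            needed.add(node)
            stack.extend(graph.get(node, ()))
    return needed

graph = {
    "app/main.py": {"lib/api.py", "lib/util.py"},
    "lib/api.py": {"lib/util.py", "lib/auth.py"},
    "lib/util.py": set(),
    "lib/auth.py": set(),
    "tools/deploy.py": {"lib/api.py"},
}

# A partial checkout of app/main.py needs 4 of the 5 files;
# tools/deploy.py is never pulled in.
print(sorted(transitive_deps(graph, {"app/main.py"})))
```

The same closure is what lets the VCS client ask the build system "which files does this target actually need?" instead of syncing the whole repository.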
[5:32] And so there was this integration, this back and forth between the Perforce client and the build system, that was very creative and very effective. It allowed you to have locally on your machine only the code that you actually needed to work on the piece of the codebase you were working on: basically the files you cared about and all of their transitive dependencies. And that to me was a very creative solution to a problem, one that involved some lateral thinking about how seemingly completely unrelated parts of the toolchain could interact. That made me realize: oh, there's a lot of creative thought at work here, and I love it.

[6:17] Yeah, no, I think that makes sense. I interned there way back in 2016, and I was just fascinated. I remember that by mistake I ran a grep across the codebase and it just took forever. And that's when I realized none of this stuff is local. First of all, half the source code is not even checked out to my machine, and my poor grep command is trying to check that out. But also how seamlessly it would work most of the time behind the scenes. Did you have any experience or did you start working on developer tools then? Or is that just what inspired you towards thinking about developer tools?

I did not work on the developer tools at Google. I worked on ads and search and Google products, but I was a big user of the developer tools. The exception was that I made some contributions to the...

[7:21] ...protocol buffer compiler, which I think many people may be familiar with, and which is a very deep part of the toolchain there, very integrated into everything. So that gave me some experience of what it's like to hack on a tool that every engineer is using, one that is a very deep part of their workflow.

[7:56] But it wasn't until after Google, when I went to Twitter, that I noticed that during my time at Google the rest of the industry had not caught up. Suddenly I was thrust ten years into the past and was back to using very slow, very clunky, flaky tools that were not designed for the tasks we were trying to use them for. And so that made me realize: wait a minute, I spent eight years using these great tools, and they don't exist outside of these giant companies. I mean, I sort of assumed that maybe Microsoft and Amazon and some other giants probably have similar internal tools, but there was nothing out there for everyone else. And so that's when I started hacking on that problem more directly, at Twitter, together with John, who is now my co-founder at Toolchain, and who was actually ahead of me and ahead of the game at Twitter and had already begun working on some solutions; I joined him in that.

Could you maybe describe some of the problems you ran into? Were the builds just taking forever, or was there something else?

[9:09] So there were...

[9:13] A big part of the problem was that the codebase at Twitter that John and I were interested in was using Scala. Scala is a fascinating, very rich language.

[9:30] Its compiler is very slow. And we were in a situation where you'd make some small change to a file and then builds would take 10 minutes, 20 minutes, 40 minutes. The iteration time on your desktop was incredibly slow. And then CI times, where there was CI in place, were also incredibly slow, because of this huge amount of repetitive or near-repetitive work. And this is because the build tools, etc.
were pretty naive about understanding what work actually needs to be done given a set of changes. There's been a ton of work specifically on sbt since then.

[10:22] It has incremental compilation and things like that, but nonetheless, that still doesn't really scale well to large corporate codebases, what people often refer to as monorepos. If you don't want to fragment your codebase, with all of the immense problems that brings, you end up needing tooling that can handle that situation. Some of the biggest challenges are: how do I do less than recompile the entire codebase every time? How can tooling help me be smart about what is the correct minimal amount of work to do,

[11:05] to make compiling and testing as fast as they can be?

[11:12] And I should mention that I dabbled in this problem at Twitter with John, but it was when I went to Foursquare that I really got into it, because Foursquare similarly had this big Scala codebase with a very similar problem of incredibly slow builds.

[11:29] The interim solution there was to just upgrade everybody's laptops with more RAM and try to brute-force the problem. Foursquare had, and still has, lots of very, very smart engineers, and it was very obvious to them that this was not a permanent solution, and we were casting around for...

[11:54] ...you know, what can be smart about Scala builds. And I remembered this thing that I had hacked on at Twitter, and I reached out to Twitter and asked them to open-source it, since it wasn't obviously some secret sauce; and that is how the very first version of the Pants open-source build system came to be. It was very much designed around Scala, though it did eventually support other languages. And we hacked on it a lot at Foursquare to get it to...

[12:32] ...to get the codebase into a state where we could build it sensibly.
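The "correct minimal amount of work" question is usually answered by walking the dependency graph in reverse: a change invalidates only the targets that transitively depend on the changed files, and everything else can be left alone. A minimal sketch with a hypothetical target graph:

```python
# Sketch: given a dependency graph and a set of changed targets, compute
# the minimal set of targets that must be rebuilt or retested.
# Target names and edges are made-up examples.
from collections import defaultdict

def invalidated(deps: dict[str, set[str]], changed: set[str]) -> set[str]:
    """Return `changed` plus every target that transitively depends on it."""
    # Build the reverse graph: target -> targets that depend on it.
    rdeps = defaultdict(set)
    for target, target_deps in deps.items():
        for d in target_deps:
            rdeps[d].add(target)
    dirty, stack = set(), list(changed)
    while stack:
        node = stack.pop()
        if node not in dirty:
            dirty.add(node)
            stack.extend(rdeps[node])
    return dirty

deps = {
    "service": {"core", "models"},
    "models": {"core"},
    "cli": {"core"},
    "docs": set(),
}

# Changing "models" dirties "models" and "service", but not "cli" or "docs".
print(sorted(invalidated(deps, {"models"})))
```

This is also why a tangled dependency structure hurts build performance: the denser the graph, the larger the dirty set for any given change.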
So the one big challenge is build speed, build performance. The other big one is managing dependencies: keeping your codebase sane as it scales. Everything to do with: how can I audit internal dependencies? It is very, very easy to accidentally create all sorts of dependency tangles and cycles, and to create a codebase whose dependency structure is unintelligible, really hard to work with, and actually impacts performance negatively, right? If you have a big tangle of dependencies, you're more likely to invalidate a large chunk of your codebase with a small change. And so tooling that allows you to reason about the dependencies in your codebase and

[13:24] make them more tractable was the other big problem that we were trying to solve.

Mm-hmm. No, I think that makes sense. I'm guessing you already have a good understanding of other build systems like Bazel and Buck. Maybe could you walk us through the differences for Pants V1? What are the major design differences? And even maybe before that, how was Pants designed? Is it something similar to creating a dependency graph, where you need to explicitly include your dependencies? Is there something else going on?

[14:07] Maybe just a primer. Yeah, absolutely. So I should mention, and I was careful about this, you mentioned Pants V1. The version of Pants that we use today, and that we base our entire technology stack around, is what we very unimaginatively call Pants V2, which we launched two years ago almost to the day. It is radically different from Pants V1, from Buck, and from Bazel; quite a departure, in ways we can talk about later. One thing that I would say Pants V1 and Buck and Bazel have in common is that they were designed around the use cases of a single organization. Bazel is an

[14:56] open-source variant of, or inspired by, Blaze; its design was very much inspired by "here's how Google does engineering". Buck, similarly, for Facebook, and Pants V1, frankly, very similarly for

[15:11] Twitter; and because Foursquare also contributed a lot to it, we sort of nudged it in that direction quite a bit. But it's still very much: if you did engineering in this one company's specific image, then this might be a good tool for you. You had to be very much in that lane. What these systems all look like, and the way they are different from much earlier systems, is

[15:46] that they're designed to work in large, scalable codebases that have many moving parts, share a lot of code, and build a lot of different deployables: different binaries or Docker images or AWS Lambdas or cloud functions or whatever it is you're deploying, Python distributions, JAR files, whatever it is you're building. Typically you have many of them in this codebase. It could be lots of microservices, or just lots of different things that you're deploying. And they live in the same repo because you want that unity: you want to be able to share code easily, and you don't want to introduce dependency-hell problems in your own code. It's bad enough that we have dependency-hell problems in third-party code.

[16:34] And so these systems, if you squint at them from thirty thousand feet, are today all very similar in that they make the problem of managing and building and testing and packaging in a codebase like that much more tractable, and the way they do this is by applying information about the dependencies in your codebase. So the important ingredient is that these systems understand and find the relatively fine-grained dependencies in your codebase, and they can use that information to reason about work that needs to happen.
So with a naive build system, you'd say "run all the tests in the repo", or "in this part of the repo", and a naive system would literally just do that, and first it would compile all the code.

[17:23] But a scalable build system like these would say: well, you've asked me to run these tests, but some of them have already been cached, and these others haven't. So I need to look at the ones I actually need to run, and see what needs to be done before I can run them. Oh, these source files need to be compiled; but some of those are already in cache, and these other ones I need to compile. But I can apply concurrency, because there are multiple cores on this machine, so I can know through dependency analysis which compile jobs can run concurrently and which cannot. And then when it actually comes time to run the tests, again, I can apply that sort of concurrency logic.

[18:03] So what these systems have in common is that they use dependency information to make your building, testing, and packaging more tractable in a large codebase. They allow you to not have to do the thing that, unfortunately, many organizations find themselves doing, which is fragmenting the codebase into lots of different bits and saying: every little team or sub-team works in its own codebase, and they consume each other's code as third-party dependencies, in which case you are introducing a dependency-versioning hell problem.

Yeah. And I think that's also what I've seen that makes the migration to a tool like this hard. Because if you have an existing codebase that doesn't lay out dependencies explicitly,

[18:56] that migration becomes challenging. If you already have an import cycle, for example,

[19:01] Bazel is not going to work with you. You need to clean that up, or you need to create one large target, at which point the benefits of using a tool like Bazel just go away. And I think that's a key bit, which is so fascinating, because it's the same thing over several years.
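The cached-and-concurrent execution just described can be sketched as: fingerprint each unit of work together with its transitive inputs, skip anything whose fingerprint is already cached, and schedule the rest as soon as their dependencies finish. A toy sketch (step names are hypothetical, and a real build system hashes file contents and runs ready steps in a thread pool rather than serially):

```python
# Sketch: run build steps in dependency order, skipping steps whose
# input fingerprint is already cached. graphlib gives us the scheduling.
import hashlib
from graphlib import TopologicalSorter

deps = {"compile_core": set(),
        "compile_app": {"compile_core"},
        "test_core": {"compile_core"},
        "test_app": {"compile_app"}}
sources = {"compile_core": "core v1", "compile_app": "app v1",
           "test_core": "core tests", "test_app": "app tests"}
cache: dict[str, str] = {}

def fingerprint(step: str) -> str:
    """Hash a step's own inputs plus its dependencies' fingerprints."""
    h = hashlib.sha256(sources[step].encode())
    for dep in sorted(deps[step]):
        h.update(fingerprint(dep).encode())
    return h.hexdigest()

def run(graph: dict[str, set[str]]) -> list[str]:
    executed = []
    ts = TopologicalSorter(graph)
    ts.prepare()
    while ts.is_active():
        # Steps whose dependencies are all done: these could run in
        # parallel on separate cores; we run them serially here.
        for step in ts.get_ready():
            if cache.get(step) != fingerprint(step):
                executed.append(step)        # cache miss: do the work
                cache[step] = fingerprint(step)
            ts.done(step)
    return executed

print(run(deps))   # first run: every step is a cache miss
print(run(deps))   # second run: all cache hits, nothing executes
```

Because a step's fingerprint folds in its dependencies' fingerprints, editing `sources["compile_core"]` would invalidate all four steps, while editing `sources["test_app"]` would invalidate only that one test step.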
And I'm hoping that,it sounds like newer tools like Go, at least, they force you to not have circular dependencies and they force you to keep your code base clean so that it's easy to migrate to like a scalable build system.[19:33] Yes exactly so it's funny that is the exact observation that let us to pans to see to so they said pans to be one like base like buck was very much inspired by and developed for the needs of a single company and other companies were using it a little bit.But it also suffered from any of the problems you just mentioned with pans to for the first time by this time i left for square and i started to chain with the exact mission of every company every team of any size should have this kind of tooling should have this ability this revolutionary ability to make the code base is fast and tractable at any scale.And that made me realize.We have to design for that we have to design for not for. What a single company's code base looks like but we have to design.To support thousands of code bases of all sorts of different challenges and sizes and shapes and languages and frameworks so.We actually had to sit down and figure out what does it mean to make a tool.Like this assistant like this adoptable over and over again thousands of times you mentioned.[20:48] Correctly, that it is very hard to adopt one of those earlier tools because you have to first make your codebase conform to whatever it is that tool expects, and then you have to write huge amounts of manual metadata to describe all of the dependencies in your,the structure and dependencies of your codebase in these so-called build files.If anyone ever sees this written down, it's usually build with all capital letters, like it's yelling at you and that those files typically are huge and contain a huge amount of information your.[21:27] I'm describing your code base to the tool with pans be to eat very different approaches first of all we said this needs to handle code bases as they are so if 
you have circular dependencies it should handle them if you have. I'm going to handle them gracefully and automatically and if you have multiple conflicting external dependencies in different parts of your code base this is pretty common right like you need this version of whatever.Hadoop or NumPy or whatever it is in this part of the code base, and you have a different conflicting version in this other part of the code base, it should be able to handle that.If you have all sorts of dependency tangles and criss-crossing and all sorts of things that are unpleasant, and better not to have, but you have them, the tool should handle that.It should help you remove them if you want to, but it should not let those get in the way of adopting it.It needs to handle real-world code bases. The second thing is it should not require you to write all this crazy amount of metadata.And so with Panzer V2, we leaned in very hard on dependency inference, which means you don't write these crazy build files.You write like very tiny ones that just sort of say, you know, here is some code in this language for the build tool to pay attention to.[22:44] But you don't have to edit the added dependencies to them and edit them every time you change dependencies.Instead, the system infers dependencies by static analysis. So it looks at your, and it does this at runtime.So you, you know, almost all your dependencies, 99% of the time, the dependencies are obvious from import statements.[23:05] And there are occasional and you can obviously customize this because sometimes there are runtime dependencies that have to be inferred from like a string. So from a json file or whatever is so there are various ways to customize this and of course you can always override it manually.If you have to be generally speaking ninety.Seven percent of the boilerplate that used to going to build files in those old systems including pans v1 no. 
and I'm not claiming we didn't make the same mistakes, goes away with Pants v2, for exactly the reason you mentioned. These tools, because they were designed to be adopted once by a captive audience that has no choice in the matter, were designed around how that adopting codebase already is. These tools are very hard to adopt. They are massive, sometimes multi-year projects outside of the originating organization. We wanted to build something that you could adopt in days to weeks, that would be very easy to customize to your codebase, and that would not require these massive wholesale changes or huge amounts of metadata. And I think we've achieved that.

Yeah, I've always wondered: why couldn't constructing the build file be a part of the build? I know it's expensive to do that every time, so, just like

[24:28] parts of the build that are expensive, you cache it and then you redo it when things change. And it sounds like you've done exactly that with Pants v2.

[24:37] We have done exactly that. The results are cached on a per-file basis. So the very first time you run something, dependency inference can take some time, and we are looking at ways to speed that up. I mean, no software system has ever been done, right? It's extremely rare to declare something finished. So we are obviously always looking at ways to speed things up. But yeah, we have done exactly what you mentioned. I should mention, we don't generate the dependencies into build files, we don't edit build files that then get checked in. We do that a little bit.
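The dependency-inference idea described here, deriving a file's dependencies from its import statements rather than from hand-maintained build metadata, can be sketched with Python's stdlib `ast` module. This is a toy illustration of the principle, not Pants' actual implementation; `infer_imports` is a hypothetical helper name.

```python
import ast

def infer_imports(source: str) -> set[str]:
    """Collect the top-level module names imported by a Python source file.

    A toy version of dependency inference: instead of hand-maintaining
    dependency lists in build files, derive them from the code itself.
    """
    tree = ast.parse(source)
    modules: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                modules.add(alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom) and node.module:
            # Skip bare relative imports ("from . import x"), which have
            # no module name to record at this level.
            modules.add(node.module.split(".")[0])
    return modules

snippet = "import numpy as np\nfrom myapp.util import helper\n"
print(sorted(infer_imports(snippet)))  # ['myapp', 'numpy']
```

A real system would then map each inferred module name back to a source file or third-party requirement, and cache the result keyed on the file's content.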
So, as I mentioned, with Pants v2 you do still need these little tiny build files that just say, here is some code. They can literally be one line sometimes, almost like a marker file, just to say, here is some code for you to pay attention to. We're even working on getting rid of those. We do have a little script that generates them one time, just to help you onboard. But...

[25:41] The dependencies really are just computed at runtime, on demand, as needed, and used at runtime. So we don't have this problem of trying to automatically add to or edit an otherwise human-authored file that is then checked in. Generating and checking in files is problematic in many ways, especially when those files also have to take human-written edits. So we just do away with all of that. Dependency inference happens at runtime, on demand, as needed, sort of lazily, and the information is cached: cached in memory, since Pants v2 has a daemon that runs and caches a huge amount of state in memory, and the results of running dependency inference are also cached on disk, so they survive a daemon restart, and so on.

I think that makes sense to me. My next question is going to be around why I would want to use Pants v2 for a smaller codebase, right? Usually with a smaller codebase, I'm not running into a ton of problems around the build.

[26:55] I guess, do you notice inflection points that people run into? Like, okay, my current build setup is not enough. What's the smallest codebase that you've seen that you think could benefit? Or is it any codebase in the world, and I should start with a better build system rather than just Python setup.py or whatever?

I think the dividing line is: will this codebase ever be used for more than one thing?

[27:24] So let's take the Python example. If literally all this codebase will ever do is build this one distribution, and a top-level setup.py is all I need,
and, you know, sometimes you see this with open-source projects, and the codebase is going to remain relatively small, say it's only ever going to be a few thousand lines, and the tests, even if I run them from scratch every single time, take under five minutes, then you're probably fine. But I think two things I would look at are: am I going to be building multiple things in this codebase in the future, or am I certainly doing so now? That is much more common with corporate codebases. You have to ask yourself, okay, my team is growing, more and more people are cooperating on this codebase. I want to be able to deploy multiple microservices. I want to be able to deploy multiple cloud functions. I want to be able to deploy multiple distributions or third-party artifacts.

[28:41] You know, multiple data science jobs, whatever it is that you're building. If you ever think you might have more than one, now's the time to think about how to structure the codebase and what tooling allows you to do this effectively. And then the other thing to look at is build times.
If you're using compiled languages, then obviously compilation; in all cases, testing. If you can already see that tests are taking five minutes, 10 minutes, 15 minutes, 20 minutes, then surely you want some technology that allows you to speed that up through caching, through concurrency, through fine-grained invalidation, namely: don't even attempt work that isn't necessary for the result that was asked for. Then it's probably time to start thinking about tools like this, because the earlier you adopt one, the easier it is to adopt. Don't wait until you've got a tangle of multiple setup.py files in the repo and it's unclear how you manage them and how you keep their dependencies synchronized so there aren't version conflicts across these different projects. Specifically with Python, this is an interesting problem. I would say with other languages, because of the compilation step, in JVM languages or Go you

[30:10] encounter the need for a build system, a build system of some kind, much, much earlier, and then you ask yourself what kind. With Python, you can get by for a while just running, what, your linter and pytest directly, with everything all together in a single virtualenv.

But the Python tooling, as mighty as it is, mostly is not designed for larger codebases that deploy multiple things and have multiple different sets of

[30:52] internal and external dependencies. The tooling generally implicitly assumes sort of one top-level setup.py, one top-level pyproject.toml, however you are configuring things. So especially if you're using Python, let's say for Django or Flask apps, or for data science, and your codebase is growing and you've hired a bunch of data scientists and there's more and more code going in there: with Python, you need to start thinking about what tooling allows you to scale this codebase.

No, I think I mostly resonate with that.
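The caching and fine-grained invalidation described above boil down to one idea: key each piece of work by a fingerprint of everything that could affect its result, and skip the work on a fingerprint hit. A minimal sketch, with hypothetical names (`fingerprint`, `run_cached`) standing in for what a real build tool does internally:

```python
import hashlib

# A toy build cache: work is keyed by a fingerprint of everything that
# could affect its result (the command plus all of its input files).
_cache: dict[str, str] = {}

def fingerprint(command: str, inputs: dict[str, bytes]) -> str:
    """Stable content hash over the command and its inputs."""
    h = hashlib.sha256(command.encode())
    for name in sorted(inputs):  # sorted for a deterministic key
        h.update(name.encode())
        h.update(inputs[name])
    return h.hexdigest()

def run_cached(command: str, inputs: dict[str, bytes], run) -> str:
    """Run `run()` only if no result exists for these exact inputs."""
    key = fingerprint(command, inputs)
    if key in _cache:        # nothing relevant changed: skip the work
        return _cache[key]
    result = run()
    _cache[key] = result
    return result
```

Re-running the same command on unchanged inputs returns the cached result without executing anything; changing a single input byte produces a new key and forces a re-run.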
The first question that comes to my mind is, let's talk specifically about the deployment problem. If you're deploying to multiple AWS Lambdas or cloud functions or whatever, the first thought that comes to my mind is that I can use separate Docker images, which let me easily produce a container image that I can ship independently. Would you say that's not enough? I totally get that for the build-time problem a Docker image is not going to solve anything. But how about the deployment step?

[32:02] So again, with deployments, I think there are two ways a tool like this can really speed things up. One is: only build the things that actually need to be redeployed. Because the tool understands dependencies and can do change analysis, it can figure that out. So one of the things that Pants v2 does is integrate with Git, and it natively understands how to work with Git diffs. You can say something like, show me all the, whatever, Lambdas, let's say, that are affected by changes between these two branches.

[32:46] And it understands; it can say, well, these files changed, and I understand the transitive dependencies of those files, so I can see what actually needs to be deployed. In many cases, many things will not need to be redeployed because they haven't changed. The other thing is there are a lot of performance improvements and process improvements around building those images.
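The change analysis described above is, at its core, a walk over the reverse dependency graph: start from the changed files and collect everything that transitively depends on them. A sketch under the assumption that the changed set comes from something like `git diff --name-only main..feature`; `affected_targets` and the graph shape are illustrative, not Pants' internals:

```python
from collections import deque

def affected_targets(changed: set[str], dependents: dict[str, set[str]]) -> set[str]:
    """Walk the reverse dependency graph: everything that transitively
    depends on a changed file needs rebuilding or redeploying."""
    seen = set(changed)
    queue = deque(changed)
    while queue:
        node = queue.popleft()
        for dep in dependents.get(node, ()):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

# dependents maps a file to the things that depend on it.
deps = {
    "util.py": {"svc_a.py", "svc_b.py"},
    "svc_a.py": {"lambda_a"},
}
print(sorted(affected_targets({"util.py"}, deps)))
```

A change to `util.py` flags both services and the lambda built on one of them, while a change confined to `svc_b.py` flags only that service, which is exactly why most deployables usually need no redeploy.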
So, for example, for Python specifically, we have an executable format called PEX, which stands for Python EXecutable. It's a single file that embeds all of the Python code needed for your deployable and all of its external requirements, transitive external requirements, bundled up into a single, sort of self-executing file. This allows you to do things like: if you have to deploy 50 of these, you can basically have a single base Docker image,

[33:52] and then on top of that you add one layer for each of the 50, and the only difference in that layer is the presence of the PEX file. Whereas without all this, typically what you would do is have 50 Docker images, in each of which you have to build a virtualenv, which means running

[34:15] pip as part of building the image, and that gets slow and repetitive, and you have to do it 50 times. So even if you are deploying 50 different Docker images, we have ways of speeding that up quite dramatically, because, again, of things like dependency analysis, the PEX format, and the ability to build incrementally.

Yeah, I think I remember that at Dropbox we came up with our own "par" format to basically bundle up a Python binary. I think par stood for Python Archive; I'm not entirely sure. But it did something remarkably similar to solve exactly this problem. It just takes so long, especially if you have a large Python codebase. I think that makes sense to me. The other thing one might ask is: with Python, you don't really have too long a build time, is what you would guess, because there's nothing to compile. Maybe mypy takes some time to do static analysis, and of course your tests can take forever and you don't want to rerun them. But there isn't that much of a build time that you have to think about.
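The single-file-executable idea behind PEX (and Dropbox's par) can be demonstrated with the much simpler stdlib `zipapp` module. This is an analogy, not PEX itself: PEX additionally resolves and bundles third-party requirements, whereas `zipapp` just zips a source tree with an entry point. `build_single_file_app` is a hypothetical helper for the sketch.

```python
import pathlib
import subprocess
import sys
import tempfile
import zipapp

def build_single_file_app(code: str, workdir: str) -> str:
    """Bundle `code` (the body of a __main__.py) into one runnable .pyz file."""
    src = pathlib.Path(workdir, "app")
    src.mkdir()
    (src / "__main__.py").write_text(code)
    target = pathlib.Path(workdir, "app.pyz")
    zipapp.create_archive(src, target)  # one self-contained archive
    return str(target)

with tempfile.TemporaryDirectory() as tmp:
    pyz = build_single_file_app("print('hello from a single file')\n", tmp)
    # The archive runs as a single self-contained file.
    out = subprocess.run([sys.executable, pyz], capture_output=True, text=True)
    print(out.stdout.strip())  # hello from a single file
```

Because the deployable is one file, a Docker layer for it is just "copy this file", which is what makes the shared-base-image-plus-thin-layers scheme described above cheap.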
Would you say that you agree with this, or are there issues that end up happening on real-world codebases?

[35:37] Well, that's a good question. The word "build" means different things to different people, and we've recently taken to saying "CI" more, because I think it's clearer to people what that means. But when I say build, or CI, I mean it in the extended sense: everything you do to go from human-written source code to a verified, tested, deployable artifact. So it's true that for Python there's no compilation step, although arguably running mypy is really important, and now that I'm in the habit of using mypy, I will probably never not use it on Python code ever again. So there are

[36:28] sort of build-ish steps for Python, such as type checking, such as running code generators like Thrift or Protobuf. And obviously a big, big one is resolving third-party dependencies, such as running pip or Poetry or whatever it is you're using. Those are all build steps. But with Python, really the big, big, big thing is testing and packaging, and primarily testing. With Python, you have to be even more rigorous about unit testing than with other languages, because you don't have a compiler catching whole classes of bugs; and again, mypy and type checking really help with that. So "build", to me, build in the large sense, includes running tests, includes packaging, includes all the quality control that you run, typically in CI or on your desktop, in order to say: I've made some edits, and here's the proof that these edits are good and I can merge or deploy them.

[37:35] I think that makes sense to me. And I certainly saw it: with the limited amount of type checking you can do with Python, and mypy is definitely improving on this,
you just need to unit test a lot to get the same amount of confidence in your own code, and unit tests are not cheap. The biggest question that comes to my mind is: is Pants v2 focused on Python? Because I have a TypeScript codebase at my workplace, and I would love to replace the TypeScript compiler with something slightly smarter that could tell me, you know what, you don't need to run every unit test on every change.

[38:16] Great question. When we launched Pants v2, which was two years ago, we focused on Python. That was the initial language we launched with, because you have to start somewhere, and in the, say, ten years between the very Scala-centric work we were doing on Pants v1 and the launch of Pants v2, something really major happened in the industry: Python skyrocketed in popularity. Python went from being mostly a little scripting language around the edges of your quote-unquote real code, using Python like fancy Bash, to people building massive, multi-billion-dollar businesses entirely on Python codebases. A few things drove this. The biggest one, probably, was that Python became the language of choice for data science, and we have strong support for those use cases. Another was that Django and Flask became very popular for writing web apps, so more and more people were using Python there. And there were more intricate DevOps use cases, and Python is very popular for DevOps, for various good reasons. So

[39:28] Python became super popular. That was the first thing we supported in Pants v2, but we've since added support for Go, Java, Scala, Kotlin, and Shell. What we definitely don't have yet is JavaScript and TypeScript.
We are looking at that very closely right now, because it's the obvious next thing we want to add. Actually, if any listeners have strong opinions about what that should look like, we would love to hear from them, or from you, on our Slack channels or in our GitHub discussions, where we are having some lively discussions about exactly this. Because the JavaScript

[40:09] and TypeScript ecosystem is already very rich with tools, and we want to provide only value-add, right? We don't want to say, oh, here's another paradigm you have to adopt, and you've just finished replacing, whatever, npm with Yarn, and now you have to do this other thing. We don't want to be another flavor of the month. We only want to do work that uses those tools and leverages the existing ecosystem while adding value. This is what we do with Python, and it's one of the reasons our Python support is very, very strong, much stronger than any comparable tool out there:

[40:49] a lot of leaning in on the existing Python tool ecosystem, but orchestrating those tools in a way that brings rigor and speed to your builds.

I've used the word "we" a lot, and I want to clarify who "we" is here. There is Toolchain, the company, where we're working on SaaS and commercial solutions around Pants, which we can talk about in a bit. But there is also a very robust open-source community around Pants that is not tightly held by Toolchain, the company, in the way that some other companies' open-source projects are. We have a lot of contributors and maintainers on Pants v2 who don't work at Toolchain but use Pants in their own companies and organizations. And so we have a very wide range of use cases and opinions that are brought to bear.
And this is very important because, as I mentioned earlier, we are not trying to design a system for one use case, for one company's or team's use case. We are working on a system we want

[42:05] adopted over and over and over again at a wide variety of companies. So it's very important for us to have contributions and input from a wide variety of teams and companies and people, and it's very fortunate that we now do.

On that note, the thing that comes to my mind is another benefit of a scalable build system like Pants or Bazel or Buck: you don't have to learn various different commands when you are spelunking through a codebase, whether it's a Go codebase or a Java codebase or a TypeScript codebase. You just run pants build X, Y, Z, and it constructs the appropriate artifacts for you. At least that was my experience with Bazel. Is that something you're interested in? Does Pants v2 kind of act as this meta-layer for various other build systems, or is it much more specific and knowledgeable about the languages itself?

[43:09] I think your intuition is correct. The idea is we want you to be able to do something like pants test, give it a path to a directory, and have it understand what that means. Oh, this directory contains Python code; therefore I should run pytest in this way. And oh, it also contains some JavaScript code, so I should run the JavaScript tests in this way. It basically provides a conceptual layer above all the individual tools that gives you uniformity across frameworks, across languages. One way to think about this is:

[43:52] the tools are all very imperative. You have to run them with a whole set of flags and inputs, and you have to know how to use each one separately. It's like having just the blades of a Swiss Army knife with no actual Swiss Army knife.
A tool like Pants says: okay, we will encapsulate all of that complexity behind a much simpler command-line interface. So you can run, like I said, pants test or pants lint or pants fmt, and it understands. Oh, you asked me to format your code; I see that you have Black and isort configured as formatters, so I will run them. And I happen to know that, because formatters can change the source files, I have to run them sequentially. But when you ask for lint, nothing is changing the source files, so I know I can run multiple linters concurrently. That sort of logic. Different tools have different ways of being configured, or of telling you what they want to do, but

[44:58] Pants v2 sort of encapsulates all of that away from you, and so you get this uniform, simple command-line interface that abstracts away a lot of the specifics of these tools and lets you run simple commands. The reason this is important is that this extra layer of indirection is partly what allows Pants to apply things like caching

[45:25] and invalidation and concurrency. Because what you're saying is:

hey, the way to think about it is not "I am telling Pants to run tests"; it is "I am telling Pants that I want the results of the tests", which is a subtle difference. Pants then has the ability to say, well, I don't actually need to run pytest on all these tests, because I already have cached results for some of them, so I will return those from cache. That layer of indirection not only simplifies the UI but provides the point where you can apply things like caching and concurrency.

Yeah, I think every programmer wants to work with declarative tools. I think SQL is one of those things where you don't have to know how the database works. If SQL were somewhat easier, that dream would be fulfilled.
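The scheduling logic described above, formatters run sequentially because each mutates the source, linters run concurrently because they only read it, can be sketched in a few lines. A toy illustration with hypothetical helper names, not Pants' actual orchestration code:

```python
from concurrent.futures import ThreadPoolExecutor

def run_formatters(formatters, source: str) -> str:
    # Formatters rewrite the source, so they must run one after another,
    # each seeing the previous formatter's output.
    for fmt in formatters:
        source = fmt(source)
    return source

def run_linters(linters, source: str) -> list[str]:
    # Linters only read the source, so they can all run concurrently.
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda lint: lint(source), linters)
    # Flatten each linter's list of issues into one report.
    return [issue for issues in results for issue in issues]
```

With toy formatters (strip trailing whitespace, expand tabs) and toy linters (flag tabs, flag long lines), the same source flows through the formatters in order, while the linters all see the identical input in parallel.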
But I think we're all getting there. I guess the next question I have is: what benefit do I get from using the Toolchain SaaS product versus Pants v2 on its own? When I think about build systems, I think about local development, I think about CI.

[46:29] Why would I want to use the SaaS product?

That's a great question. Pants does a huge amount of heavy lifting, but in the end it is restricted to the resources of the machine on which it's running. So when I talk about cache, I'm talking about the local cache on that machine. When I talk about concurrency, I'm talking about using the cores on your machine. Maybe your CI machine has four cores and your laptop has eight cores, so that's the amount of concurrency you get, which is not nothing at all, which is great.

[47:04] But, as I mentioned, I worked at Google for many years, and then at other companies where distributed systems were the thing; I come from a distributed systems background. And when you see the problem of a piece of work taking a long time because of single-machine resource constraints, the obvious answer is: distribute the work, use a distributed system. That's what Toolchain offers, essentially.

[47:30] You configure Pants to point to the Toolchain system, which is currently SaaS, and we will have some news soon about some on-prem solutions. Now the cache I mentioned is not just "did this test run with these exact inputs before, on my machine, by me, while I was iterating", but "has anyone in my organization, or any CI run, run this test before with these exact inputs?" Imagine a very common situation: you come in in the morning and pull all the changes that have happened since you last pulled. Those changes presumably passed CI, right? And CI populated the cache. So now when I run tests, I can get cache hits from the CI machine.

[48:29] Now, pretty much, yeah.
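The shared-cache idea above is a two-tier lookup: check the local cache first, then fall back to a remote cache populated by CI (or teammates), backfilling locally on a remote hit. A sketch of the lookup order only; a real remote cache is a network service, not a dict, and `TwoTierCache` is a hypothetical name:

```python
class TwoTierCache:
    """Local cache first, then a shared remote cache (e.g. populated by CI)."""

    def __init__(self, remote: dict):
        self.local: dict = {}
        self.remote = remote

    def get(self, key):
        if key in self.local:        # fastest: this machine already did it
            return self.local[key]
        if key in self.remote:       # CI or a teammate already did the work
            value = self.remote[key]
            self.local[key] = value  # backfill locally for next time
            return value
        return None                  # miss: the work actually has to run

    def put(self, key, value):
        self.local[key] = value
        self.remote[key] = value     # share the result with the team
```

In the morning-pull scenario from the conversation: CI ran the tests for the changes you just pulled and wrote the results to the remote tier, so your first local run gets remote hits instead of re-running everything.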
And then with concurrency, again: let's say, post-cache, there are still 200 tests that need to be run. I could run them eight at a time on my machine, or the CI machine could run them, say, four at a time on four cores. Or I could run 50 or 100 at a time on a cluster of machines. That's where, as your codebase gets bigger and bigger, some massive, massive speedups come in. I should mention that the remote execution I just described is something we're about to launch; it is not available today. The remote caching is.

The other aspects are things like observability. When you run builds on your laptop or in CI, they're ephemeral. The output gets lost in the scrollback; it's just a wall of text that disappears.

[49:39] With Toolchain, all of that information is captured and stored in structured form, so you have the ability to see past builds and build behavior over time, to search builds and drill down into individual builds, and to ask: how often does this test fail? When did this get slow? All this kind of information. So you get this more enterprise-level observability into a very core piece of developer productivity, which is iteration time. The time it takes to run tests, build deployables, and pass all the quality-control checks so that you can merge and deploy code directly relates to time to release. It directly relates to some of the core metrics of developer productivity: how long is it going to take to get this thing out the door? Having the ability both to speed that up dramatically by distributing the work and to get observability into what work is going on, that is what Toolchain provides on top of the already, if I may say, pretty robust open-source offering.

[51:01] So yeah, that's kind of it.

[51:07] Pants on its own gives you a lot of advantages, but it runs standalone.
Plugging it into a larger distributed system really unleashes the full power of Pants as a client to that system.

[51:21] No, I think what I'm seeing is this interesting convergence. There are several companies trying to do this for Bazel, like BuildBuddy and EngFlow. It really sounds like the build system of the future.

[51:36] Ten years from now, no one will really be developing on their local machines anymore. There's GitHub Codespaces on one side, where you're doing all your development remotely.

[51:46] I've always found it somewhat odd that development happens locally, and whatever scripts you need to run to provision your CI machine to run the same set of tests are so different that you sometimes can never tell why something's passing locally and failing in CI, or vice versa. There really should just be this one execution layer that can say, you know what, I'm going to build at a certain commit, or run at a certain commit, and that's shared between the local user and the CI user. And your CI script becomes something as simple as pants build //..., and it builds the whole codebase for you. So yeah, I certainly feel like the industry is moving in that direction. I'm curious whether you think the same. Do you have an even stronger vision of how folks will be developing ten years from now? What do you think it's going to look like?

Oh no, I think you're absolutely right. If anything, you're underselling it. I think this is how all development should be, and will be, in the future, for multiple reasons. One is performance.

[52:51] Two is the problem of different platforms. A big thorny problem today is: I'm developing on my MacBook, so when I run tests locally, when I run anything locally, it's running on my MacBook. But that's not our deployment target, right? Typically your deployment platform is some flavor of Linux.
So...

[53:17] With the distributed-system approach, you can run the work in containers that exactly match your production environment. You don't even have to care about whether this will run, whether my tests will pass, on macOS, or need CI that runs on macOS just to make sure developers can pass tests on macOS, as if that were somehow correlated with success in the production environment. You can cut away a whole suite of those problems. Today, frankly, as I mentioned earlier, you can get cache hits on your desktop from CI populating the cache, but that is hampered by differences in platform, and by other differences in local setup that we are working to mitigate. But imagine a world in which build logic is not actually running on your MacBook, or, if it is, it's running in a container that exactly matches the container you're targeting. It cuts away a whole suite of problems around platform differences and allows you to focus on just the platform you're actually going to deploy to.

[54:42] And then there's just the speed and performance of being able to work and deploy, and the visibility it gives you into the productivity and operational work of your development team. I really think this absolutely is the future. There is something very strange about how, in the last 15 years or so, so many business functions have had the distributed-systems treatment applied to them. There are these massive, valuable companies providing systems that support sales, systems that support marketing, systems that support HR, systems that support operations, systems that support product management, systems that support every business function, and there need to be more of these that support engineering as a business function.

[55:48] And so I absolutely think the idea that I need a really powerful laptop, so that running my tests can take thirty minutes instead of forty minutes, when in reality it should take three minutes,
that's not the future, right? The future is, as it has been for so many other systems, the web. The laptop that I can take anywhere, particularly in these work-from-home times, these work-from-anywhere times, is just a portal into the system that is doing the actual work.

[56:27] Yeah. And there are all these improvements across the stack, right? I see companies like Vercel saying: if you use Next.js, we provide the best developer platform for that, and we want to provide caching. Then there are the lower-level systems, the build systems, of course, like Pants and Bazel and all that. At each layer, we're kind of trying to abstract the problem out. So to me, it still feels like there's a lot of innovation to be done. And I'm also really curious to see whether there are going to be a few winners of this space, or whether it's going to be pretty broken up, with everyone using different tools. It's going to be fascinating either way.

Yeah, that's really hard to know. One thing you mentioned that I think is really important is that you said your CI should be as simple as just pants build ::, or, in our syntax, it would be sort of pants test lint or whatever. I think that's really important.

[57:30] Today, one of the big problems with CI, which is still growing, right, the market is still growing as more and more teams realize the value and importance of aggressive automated quality control, is that configuring CI is really, really complicated.
Every CI provider has its own configuration language, and you have to reason about caching, and you have to manually construct cache keys, to the extent that caching is even possible or useful. There's just a lot of figuring out how to configure and set up CI, and even then it's just doing the naive thing.

[58:18] There are a couple of interesting companies, Dagger and Earthly, and interesting technologies around simplifying that. But they are providing, I would say, a better and more uniform config language that allows you to, for example, run build steps in containers. And that's not nothing at all.

[58:43] But you are still manually creating a lot of configuration to run these very coarse-grained, large-scale, long-running build steps. I think the future is something like: my entire CI config, post cloning the repo, is basically pants build ::, because the system does the configuration for you.

[59:09] It figures out what that means in a very fast, very fine-grained way, and does not require you to manually decide on workflows and steps and jobs and how they all fit together, where, if I want to speed this thing up, I have to manually partition the work somehow and write extra config to implement that partitioning. That is the future, I think. Rather than having the CI layer, say, the CI provider's proprietary config, or Dagger, and underneath that the build tool, which would be Bazel or Pants v2 or whatever it is you're using (for many companies today it could still be Make, or Maven, or Gradle), I really think the future is the integration of those two layers. In the same way that, as I referenced much earlier in our conversation, one thing that stood out to me at Google was that they had the insight to integrate the version-control layer and the build tool to provide really effective functionality, I think the build tool, being the thing that knows
about your dependencies,

[1:00:29] can take over many of the jobs of the CI configuration layer in a really smart, really fast way. The future is one where essentially more and more of "how do I set up and configure and run CI" is delegated to the thing that knows about your dependencies, knows about caching, knows about concurrency, and is able to make smarter decisions than you can in a YAML config file.

[1:01:02] Yeah, I'm excited for the time when, as a platform engineer, I have to spend less than 5% of my time thinking about CI and CD, and I can focus on other things, like improving our data models, rather than mucking with YAML and Terraform configs.

Well, yeah. Today we're still a little bit in that state, because we are engineers, and because the tools we use are themselves made out of software, there's a strong impulse to tinker, and a strong impulse to say, well, I want to solve this problem myself, or I want to hack on it, or I should be able to hack on it. And you should be able to hack on it, for sure. But we do deserve more tooling that requires less hacking, and more things and paradigms that have been tested and have survived a lot of tire-kicking.

[1:02:00] Will we always need to hack on them a little bit? Yes, absolutely, because of the nature of what we do. I think there are a lot of interesting things still to happen in this space.

Yeah, I think we should end on that happy note as we go back to our day jobs mucking with YAML. Well, thanks so much for being a guest. I think this was a great conversation, and I hope to have you on the show again sometime.

Would love that. Thanks for having me. It was fascinating.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.softwareatscale.dev
EuroBSDcon 2022 as first BSD conference, Red Hat's OpenShift vs FreeBSD Jails, Running a Docker Host under OpenBSD using vmd(8), history of sending signals to Unix process groups, Toolchains adventures - Q3 2022, and more NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) and the BSDNow Patreon (https://www.patreon.com/bsdnow) Headlines EuroBSDCon 2022, my first BSD conference (and how they are different) (https://eerielinux.wordpress.com/2022/09/25/eurobsdcon-2022-my-first-bsd-conference-and-how-they-are-different/) Red Hat's OpenShift vs FreeBSD Jails (https://klarasystems.com/articles/red-hats-openshift-vs-freebsd-jails/) News Roundup The history of sending signals to Unix process groups (https://utcc.utoronto.ca/~cks/space/blog/unix/ProcessGroupsAndSignals) Running a Docker Host under OpenBSD using vmd(8) (https://www.tumfatig.net/2022/running-docker-host-openbsd-vmd/) Toolchains adventures - Q3 2022 (https://www.cambus.net/toolchains-adventures-q3-2022/) Beastie Bits -current has moved to 7.2 (https://undeadly.org/cgi?action=article;sid=20220912055003) Several /sbin daemons are now dynamically-linked (http://undeadly.org/cgi?action=article;sid=20220830052924) Announcing the pkgsrc 2022Q3 branch (https://mail-index.netbsd.org/netbsd-announce/2022/09/29/msg000341.html) Tarsnap This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. 
Feedback/Questions Hans - datacenters and dust (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/476/feedback/Hans%20-%20datacenters%20and%20dust.md) Tim - Boot issue (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/476/feedback/Tim%20-%20Boot%20issue.md) aaron- dwm tiling (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/476/feedback/aaron-%20dwm%20tiling%20.md) *** Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv) ***
Talk Python To Me - Python conversations for passionate developers
Do you have a large or growing Python code base? If you struggle to run builds, tests, linting, and other quality checks regularly or quickly, you'll want to hear what Benjy Weinberger has to say. He's here to introduce Pants Build to us. Pants is a fast, scalable, user-friendly build system for codebases of all sizes. It's currently focused on Python, Go, Java, Scala, Kotlin, Shell, and Docker. Links from the show Benjy on Twitter: @benjy Pants Build: pantsbuild.org Pants Source: github.com Getting help in the Pants community: pantsbuild.org/docs/getting-help An example repo to demonstrate Python support in Pants: github.com Toolchain: toolchain.com Watch this episode on YouTube: youtube.com Episode transcripts: talkpython.fm --- Stay in touch with us --- Subscribe to us on YouTube: youtube.com Follow Talk Python on Twitter: @talkpython Follow Michael on Twitter: @mkennedy Sponsors Local Maximum Podcast Microsoft AssemblyAI Talk Python Training
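As a taste of the workflow discussed in the episode, here is a hedged sketch of what adopting Pants can look like (target names are illustrative; consult pantsbuild.org for the exact target types and goals your Pants version supports). Pants reads small per-directory BUILD metadata files and infers most dependencies itself:

```python
# BUILD -- Pants metadata for one directory (illustrative sketch; check
# the Pants docs for the target types your version provides).
python_sources(name="lib")          # first-party sources in this directory

python_tests(
    name="tests",
    sources=["*_test.py"],          # test files living alongside the code
)
```

With files like this in place, a command such as `./pants test ::` runs every test in the repository, with results cached at fine granularity.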
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: POWERplay: An open-source toolchain to study AI power-seeking, published by Edouard Harris on October 24, 2022 on The AI Alignment Forum. We're open-sourcing POWERplay, a research toolchain you can use to study power-seeking behavior in reinforcement learning agents. POWERplay was developed by Gladstone AI for internal research. POWERplay's main use is to estimate the instrumental value that a reinforcement learning agent can get from a state in an MDP. Its implementation is based on a definition of instrumental value (or "POWER") first proposed by Alex Turner et al. We've extended this definition to cover certain tractable multi-agent RL settings, and built an implementation behind a simple Python API. We've used POWERplay previously to obtain some suggestive early results in single-agent and multi-agent power-seeking. But we think there may be more low-hanging fruit to be found in this area. Beyond our own ideas about what to do next, we've also received some interesting conceptual questions in connection with this work. A major reason we're open-sourcing POWERplay is to lower the cost of converting these conceptual questions into real experiments with concrete outcomes, that can support or falsify our intuitions about instrumental convergence. Ramp-up We've designed POWERplay to make it as easy as possible for you to get started with it. Follow the installation and quickstart instructions to get moving quickly. Use the replication API to trivially reproduce any figure from any post in our instrumental convergence sequence. Design single-agent and multi-agent MDPs and policies, launch experiments on your local machine, and visualize results with clear figures and animations. 
POWERplay comes with "batteries included", meaning all the code samples in the documentation should just work out of the box once it's been installed successfully. It also comes with pre-run examples of experimental results, so you can understand what "normal" output is supposed to look like. While this does make the repo weigh in at about 500 MB, we think that's worth it for the benefit of letting you immediately start playing around with visualizations on preexisting data. If we've done our job right, a smart and curious grad student (with a bit of Python experience) should be able to start reproducing our previous experiments within an hour, and to have some new — and hopefully interesting! — results within a week. We're looking forward to seeing what people do with this. If you have any questions or comments about POWERplay, feel free to reach out to Edouard at edouard@gladstone.ai. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
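The core quantity POWERplay estimates can be sketched independently of its API (a toy illustration of the Turner et al. definition referenced above, not POWERplay code; all names here are our own, and the real toolchain handles far more, including multi-agent settings): a state's POWER is approximated as its average optimal value across many randomly sampled reward functions.

```python
# Toy sketch: POWER of a state ~ mean optimal value over random rewards.
import random

def value_iteration(transitions, rewards, gamma=0.9, iters=200):
    """Optimal state values for a deterministic MDP where
    transitions[s] is the list of states reachable from s."""
    v = {s: 0.0 for s in transitions}
    for _ in range(iters):
        v = {s: max(rewards[s2] + gamma * v[s2] for s2 in succs)
             for s, succs in transitions.items()}
    return v

def estimate_power(transitions, state, samples=500, gamma=0.9, seed=0):
    """Monte Carlo estimate: average optimal value under rewards ~ U[0, 1]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        rewards = {s: rng.random() for s in transitions}
        total += value_iteration(transitions, rewards, gamma)[state]
    return total / samples

# A tiny MDP: "hub" can still reach two futures; "dead" is absorbing.
mdp = {"hub": ["a", "dead"], "a": ["a"], "dead": ["dead"]}
print(estimate_power(mdp, "hub") > estimate_power(mdp, "dead"))  # True
```

In this toy chain, the hub state that keeps its options open scores higher than the dead end, which is the instrumental-convergence intuition the sequence explores.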
We look back at how tools, processes, and developer trends have changed over nearly ten years of the show.
This hardcore livestream on infrastructure software was packed with substance. If you are interested in infrastructure software, developer tools, or open source, don't miss it. PingCAP, China's open-source unicorn, the head of Coinbase's data platform, and a Google TensorFlow veteran who just started a company come together to discuss the lessons and pitfalls of building infrastructure-software companies from a global perspective, how users actually choose these products, and where the technology is heading. A note: this episode assumes some technical background in databases and related fields. Hello World, who is OnBoard?! Over the past three years, infrastructure software, open source, and developer tools have seen an unprecedented boom both in China and abroad. As an investor in this space, Monica has seen that as these companies step onto the international stage, they feel firsthand the differences between the Chinese and US markets in user needs, technical ecosystems, talent and organization, and the startup environment. Navigating the challenges, reflection, and adjustments along the way calls for experience from both the US and China to collide, debate, and improve together. Our three guests represent exactly those perspectives: founders of startups based in China and in the US, plus an infrastructure lead at a US tech company, all seasoned experts. Different viewpoints, equally sharp; grounded in market and ecosystem realities, with a look to the future as well. The discussion runs long and is hard to split, but if you care about building a world-class infrastructure-software company, it won't disappoint. Enjoy! The guests' companies (for full introductions of the three guests, see the linked article): Dongxu/Ed Huang: PingCAP co-founder & CTO, creator of TiDB and TiKV; Leo Liang: head of Coinbase's data platform, previously ML platform lead at Cruise; Mingsheng Hong: Bluesky co-founder & CEO, previously machine learning lead for Google TensorFlow Runtime. What we talked about: 02:03 Opening, guest introductions, and a fun fact: interesting open-source projects seen recently (Vercel, AnyScale) 11:46 PingCAP's experience going international 14:16 Dongxu's observations on the US market: developers will be king, and developer experience matters more and more 16:20 Dongxu's observations on the US market: cloud native is now the de facto standard 18:28 Dongxu's observations on the US market: storytelling matters enormously, with Supabase as an example 24:36 Leo: what kind of story developers want to hear 27:23 Mingsheng: what makes a product's developer experience great 30:12 Leo: how Silicon Valley tech companies choose their stack: open source, composable, componentized 42:11 Dongxu: the pace and importance of Chinese infrastructure-software companies going overseas 47:01 Mingsheng: how startups should choose their early users 52:10 Discussion: PLG (bottom-up, product-led growth) vs. traditional enterprise sales 55:59 Discussion: choosing different users at different stages, and how purchasing decisions differ between Chinese and US users 63:10 Dongxu: the challenges of running a global community, and the relationship between community operations and product 68:14 Leo and Mingsheng recommend open-source projects worth following: Anyscale/Ray, Tensorflow 73:58 Dongxu: how an open-source project should think about the timing and form of commercialization 79:26 Deep dive: how US Digital Native Business (DNB) companies make purchasing decisions for open-source products, how China differs, and why 91:16 Key point: why customers care about ROI > ease of use > performance > features, while companies often market in the reverse order 92:07 Mingsheng: what next-generation cloud cost optimization looks like 96:45 Discussion: looking ahead, which innovation opportunities excite you? Separating file and transfer formats, serverless, ML in infra 101:27 Q&A: challenges and new opportunities in storage? 
113:36 Q&A: How do enterprise users think about the ROI of infrastructure software? References / companies mentioned: Neon: serverless Postgres Vercel: serverless frontend stack for web developers, which started from hosting Node.js Upstash: serverless data for Redis and Kafka Supabase: open-source Firebase alternative Toolchain: ergonomic open-source developer workflow system FaunaDB: serverless database Anyscale: the company behind Ray, an open-source Python framework for distributed computing Tensorflow Dbt: open-source data transformation tool for ELT Articles mentioned: Dongxu's article that topped Hacker News: Some notes on DynamoDB 2022 paper Leo's article on serverless Follow Ms. M's WeChat official account for more US–China conversations: M小姐研习录 (ID: MissMStudy) Your likes, comments, and shares are the best encouragement; please share this with friends interested in the topic! If there are topics you'd like us to cover or guests you'd like us to invite, let us know in the comments; we read every one!
This ain't your granddaddy's browser. In this episode, Charles and Subrat sit down with Eric Simons, a developer at the forefront of expanding what's possible for your toolchain in browsers. They lay out a BIG trend you oughta know, how these programs can help you level up your security, and how the “Google Docs” approach gives a hint of some remarkable developments coming this year and beyond. “Browsers have gotten a lot more pliable and robust over the past half decade.” - Eric Simons In This Episode 1) A BIG trend you need to know about in Angular and beyond (and how it'll affect you) 2) Why these programs will help you step up your security while maintaining continuity 3) How these toolchains are applying the “Google Docs” approach to all kinds of use cases 4) The COOLEST upcoming developments that are “barely scratching the surface” of what's possible in 2022 and beyond (including desktop AND mobile!) Sponsors Top End Devs (https://topenddevs.com/) Raygun | Click here to get started on your free 14-day trial (https://raygun.com/?utm_medium=podcast&utm_source=adventuresangular&utm_campaign=devchat&utm_content=homepage) Coaching | Top End Devs (https://topenddevs.com/coaching) Picks Charles- Wavelength | Board Game | BoardGameGeek (https://boardgamegeek.com/boardgame/262543/wavelength) Charles- topenddevs.com (https://topenddevs.com/) for authoring, coaching, and more Charles- JS Remote Con is coming Eric- Vite: Home (https://vitejs.dev/) Subrat- You Don't Know JS Yet: Get Started (https://amzn.to/3GYoCl5) Special Guest: Eric Simons.
Hello and welcome to CHAOSScast Community podcast, where we share use cases and experiences with measuring open source community health, elevating conversations about metrics, analytics, and software from the Community Health Analytics Open Source Software project, or CHAOSS for short, to wherever you like to listen. Today, we have joining us again, Carina Zona, who is the Head of Developer Relations for Toolchain, which is the lead sponsor of the Pantsbuild open source project. If you listened to our previous episode, in Part 1 we talked about the Pants community and how it's been evolving over the last ten years, and there were conversations about some qualitative means of measuring and some culture around growing community. Today's episode is Part 2, where we get more hands-on with what you can do with data to understand the community. Carina also details the tools they use to satisfy their data needs, how they organize all the data, and more about Savannah CRM and tagging. Download this episode now to find out much more, and don't forget to subscribe for free to this podcast on your favorite podcast app and share this podcast with your friends and colleagues! [00:02:37] As the Dev Rel person in the community, Carina talks and reports to stakeholders who need different data points, so she explains the data points she looks at and the tools she's using to satisfy her data needs. [00:06:00] Carina explains how she organizes all the data that comes in from the surveys. [00:10:22] We find out some other ways Carina is using the data, as well as who she reports to and what she reports. [00:12:21] Venia wonders if there are different dashboards and reports that Carina provides to the individuals with completely different key performance indicators. [00:14:43] The topic of tagging in Savannah CRM is brought up and Carina explains what's in Savannah. 
[00:20:41] Carina tells us more about the tagging in Savannah and Venia wonders if she's using the tags in order to bring up and study the comments on a customer sentiment. [00:27:50] Carina shares some advice to other Dev Rels who want to understand the health of their communities and work towards making them more healthy. [00:30:49] Find out where you can follow Carina online. Value Adds (Picks) of the week: [00:32:19] Georg's pick is designing and building a custom home. [00:33:17] Venia's pick is getting back into doing book clubs this week. [00:33:49] Carina's pick is having conversations with people that she's fallen out of touch with during the pandemic. [00:34:55] Armstrong's pick is the beauty of family and seeing a very good perspective of life. Panelists: Georg Link Venia Logan Armstrong Foundjem Guest: Carina Zona Sponsor: SustainOSS (https://sustainoss.org/) Links: CHAOSS (https://chaoss.community/) CHAOSS Project Twitter (https://twitter.com/chaossproj?lang=en) CHAOSScast Podcast (https://podcast.chaoss.community/) podcast@chaoss.community (mailto:podcast@chaoss.community) Carina C. Zona Website (http://cczona.com/) Carina C. Zona Twitter (https://twitter.com/cczona?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor) Pantsbuild (https://www.pantsbuild.org/) Pantsbuild Twitter (https://twitter.com/pantsbuild?lang=en) Pantsbuild-GitHub (https://github.com/pantsbuild) Pantsbuild Slack (https://www.pantsbuild.org/docs/getting-help) Pantsbuild Blog (https://blog.pantsbuild.org/) Savannah CRM (https://www.savannahhq.com/) Airtable (https://www.airtable.com/) SurveyMonkey (https://www.surveymonkey.com/) Special Guest: Carina C. Zona.
Hello and welcome to CHAOSScast Community podcast, where we share use cases and experiences with measuring open source community health, elevating conversations about metrics, analytics, and software from the Community Health Analytics Open Source Software project, or CHAOSS for short, to wherever you like to listen. Today, we are super excited to have as our guest, Carina Zona, who is the Head of Developer Relations for Toolchain, which is the lead sponsor of the Pantsbuild open source project, as well as the Founder of CallbackWomen. Our discussions take us into Carina sharing her knowledge about some qualitative means of measuring and some culture around growing communities. Her passion has been trying to increase gender diversity in this industry as a side project on top of developer relations, and we learn what she's been doing to help advocate for this. We learn more about the Pants community, what this project is, and Carina tells us about adding the welcome channel on Slack and the quantitative work she's doing on it using Savannah CRM. Download this episode now to find out much more, and don't forget to subscribe for free to this podcast on your favorite podcast app and share this podcast with your friends and colleagues! [00:02:27] Carina tells us her background and more about her project, CallbackWomen. [00:05:52] The topic of data being self-reinforcing is discussed. Venia wonders how Carina approaches conversations with people who are very metrics focused. [00:12:35] We learn all about the Pants community and what this project is all about. [00:17:28] Carina fills us in on who makes up the Pants community. [00:21:29] Carina clarifies that Pants is an open source project written in Python, with its core engine written in Rust, and she speaks more about supporting languages and the effect that has on who exists in your community. 
[00:26:09] As the Pants community grows, Venia wonders what Carina has been doing to decide which aspects of the culture are working for the lurkers and silent majority, so those aspects can be kept when stakeholders make decisions, and how she chooses between what to keep in the culture and what to let go. [00:30:00] Venia wonders if Carina has considered using the welcome channel for purposes of direct measurement, and Carina goes in depth about how she's doing quantitative work on it using Savannah CRM. [00:34:19] Armstrong wonders if Carina thinks qualitative findings or evidence will challenge or support the quantitative numbers she has. Carina also explains why the number itself is not important; what matters is the experience. [00:39:01] Find out where you can follow Carina online. Value Adds (Picks) of the week: [00:40:40] Georg's pick is a new tea pot he bought. [00:41:26] Venia's pick is finding a therapist that is okay with being online. [00:42:00] Armstrong's pick is getting selected to be AI chair at OpenInfra Summit Berlin 2022. [00:42:26] Carina's pick is her new puppy that brings her so much joy. Panelists: Georg Link Venia Logan Armstrong Foundjem Guest: Carina Zona Sponsor: SustainOSS (https://sustainoss.org/) Links: CHAOSS (https://chaoss.community/) CHAOSS Project Twitter (https://twitter.com/chaossproj?lang=en) CHAOSScast Podcast (https://podcast.chaoss.community/) podcast@chaoss.community (mailto:podcast@chaoss.community) Carina C. Zona Website (http://cczona.com/) Carina C. Zona Twitter (https://twitter.com/cczona?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor) Pantsbuild (https://www.pantsbuild.org/) Pantsbuild Twitter (https://twitter.com/pantsbuild?lang=en) Toolchain (https://toolchain.com/) CallbackWomen (https://www.callbackwomen.com/) CallbackWomen Twitter (https://twitter.com/callbackwomen) Savannah CRM (https://www.savannahhq.com/) OpenInfra Summit Berlin 2022 (https://openinfra.dev/summit/) Special Guest: Carina C. Zona.
Adam McNair, Kevin Long, and Emilie Scantlebury sit down with Rise8 COO Matt Nelson and WeWork Federal Sales Account Director Jay Sampson to discuss ToolChain as a Service and Software Factories within the Government Sector.
FreeBSD Foundation reviews 2021 activities, DragonflyBSD 6.2.1 is here, Lumina Desktop 1.6.2 available, toolchain adventures, The OpenBSD BASED Challenge Day 7, Bastille Template: AdGuard Home, setting up ZSH on FreeBSD and more. NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) and the BSDNow Patreon (https://www.patreon.com/bsdnow) Headlines FreeBSD Foundation 2021 in Review Software Development (https://freebsdfoundation.org/blog/2021-in-review-software-development/) Year End Fundraising Report (https://freebsdfoundation.org/blog/2021-year-end-fundraising-report/) Infrastructure Support (https://freebsdfoundation.org/blog/2021-in-review-infrastructure-support/) Advocacy (https://freebsdfoundation.org/blog/2021-in-review-advocacy/) FreeBSD 2022 CfP (https://freebsdfoundation.org/blog/freebsd-foundation-2022-call-for-proposals/) DragonFlyBSD 6.2.1 is out (https://www.dragonflybsd.org/release62/) News Roundup Lumina Desktop 1.6.2 is out (https://lumina-desktop.org/post/2021-12-25/) Toolchain Adventures (https://www.cambus.net/toolchains-adventures-q4-2021/) The OpenBSD BASED Challenge Day 7 (https://write.as/adventures-in-bsd/) Bastille Template: AdGuard Home (https://bastillebsd.org/blog/2022/01/03/bastille-template-examples-adguardhome/) Setting up ZSH on FreeBSD (https://www.danschmid.me/article/setting-up-zsh-on-freebsd) Tarsnap This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. Feedback/Questions • Producer's Note: We did get some Christmas AMA questions in after we recorded that episode (since we recorded it early) but don't worry, I've made a note of them and we'll save them for our next AMA episode. 
Patrick - Volume (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/438/feedback/Patrick%20-%20Volume.md) Reptilicus Rex - FreeBSD Docs Team (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/438/feedback/Reptilicus%20Rex%20-%20FreeBSD%20Docs%20Team.md) michael - question (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/438/feedback/michael%20-%20question.md) Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv) ***
The fifth installment in Bitrock Tech Radio's DevOps series is dedicated to the toolchain, the chain of tools that turns the methodological principles of DevOps into practice, from the initial idea all the way to the end customer. As CTO Franco Geraci reminds us, the golden rule is that there are no silver bullets: the choice of tools and platform must be weighed against business objectives, KPIs, project characteristics, and much more. To learn about our DevOps consulting services in detail, visit the Bitrock website and follow our LinkedIn profile!
This episode is also available as a blog post: Arduino: Arduino IDE Tool chain N AVR Studio - Karate Coder
Echo Innovate IT - Web & Mobile App Development Technologies Podcast
Before we discuss the DevOps toolchain, let us first understand DevOps. DevOps is considered the next step in the evolution of software development methods. It is a comprehensive way of thinking that organizations have to adopt to get the best possible results. Ultimately, the aim is to break down the barriers between teams like development and IT operations, and prompt them to communicate and collaborate better. With their strengths combined, they can develop and release a better product faster, and deal with problems more effectively and with less overall complexity. "DevOps toolchain" refers to the set of tools a team uses to ease the development, management, and delivery of a product. Because it is aligned with the organization's DevOps culture, it is more effective. A well-planned DevOps toolchain that fits your requirements can help companies reach their goals and maintain an efficient software development process. Develop an App Using DevOps Toolchain In 2020, 62 percent of developer teams were at the higher stages of DevOps evolution in their application development cycle. DevOps is all about making the development life cycle quicker, more automated, and more collaborative. You can concentrate on developing and then deploying the project by leaving the tedious work, such as installing, upgrading, configuring, and setting up the infrastructure, to the tools in the DevOps toolchain. To deliver the best results in this competitive IT industry, you need to stay up to date with the latest technology. Since DevOps is now implemented almost everywhere, it has become essential for you to understand its toolchain. --- Send in a voice message: https://anchor.fm/echo-innovate-it/message
DTrace network probes, next 50 years of shell programming, NetBSD on the Vortex86DX CPU, system CPU time in top, your filesystem as a dungeon, diving into toolchains, and more NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) Headlines DTrace Network Probes (https://klarasystems.com/articles/dtrace-network-probes/) Unix Shell Programming: The Next 50 Years (https://sigops.org/s/conferences/hotos/2021/papers/hotos21-s06-greenberg.pdf) News Roundup NetBSD on the Vortex86DX CPU (https://www.cambus.net/netbsd-on-the-vortex86dx-cpu/) System CPU time – ‘sys' time in top (https://blog.ycrash.io/2020/11/28/system-cpu-time-sys-time-in-top/) rpg-cli —your filesystem as a dungeon! (https://github.com/facundoolano/rpg-cli) Diving into toolchains (https://www.cambus.net/diving-into-toolchains/) Tarsnap This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. Feedback/Questions • [Alfred - Advice](https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/409/feedback/Alfred%20-%20Advice) • [CY - Portable Patch Util](https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/409/feedback/CY%20-%20Portable%20Patch%20Util) • [Denis - State of ZFS Ecosystem](https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/409/feedback/Denis%20-%20State%20of%20ZFS%20Ecosystem) Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv) ***
Dog's Garage Runs OpenBSD, EuroBSDcon 2021 Call for Papers, FreeBSD’s iostat, The state of toolchains in NetBSD, Bandwidth limiting on OpenBSD 6.8, FreeBSD's ports migration to git and its impact on HardenedBSD, TrueNAS 12.0-U3 has been released, and more. NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) Headlines My Dog's Garage Runs OpenBSD (https://undeadly.org/cgi?action=article;sid=20210415055717) I was inspired by the April 2017 article in undeadly.org about getting OpenBSD running on a Raspberry Pi 3B+. My goal was to use a Raspberry Pi running OpenBSD to monitor the temperature in my garage from my home. My dog has his own little "apartment" inside the garage, so I want to keep an eye on the temperature. (I don't rely on this device. He sleeps inside the house whenever he wants.) EuroBSDcon 2021 Call for Papers (https://2021.eurobsdcon.org/about/cfp/) FreeBSD iostat (https://klarasystems.com/articles/freebsd-iostat-a-quick-glance/) The state of toolchains in NetBSD (https://www.cambus.net/the-state-of-toolchains-in-netbsd/) While FreeBSD and OpenBSD both switched to using LLVM/Clang as their base system compiler, NetBSD picked a different path and remained with GCC and binutils regardless of the license change to GPLv3. However, it doesn't mean that the NetBSD project endorses this license, and the NetBSD Foundation's has issued a statement about its position on the subject. NetBSD’s statement (http://cvsweb.netbsd.org/bsdweb.cgi/src/external/gpl3/README?rev=1.1) *** News Roundup Bandwidth limiting on OpenBSD 6.8 (https://dataswamp.org/~solene/2021-02-07-limit.html) I will explain how to limit bandwidth on OpenBSD using its firewall PF (Packet Filter) queuing capability. It is a very powerful feature but it may be hard to understand at first. 
What is very important to understand is that it's technically not possible to limit the bandwidth of the whole system: once data reaches your network interface, it has already been delivered by your router. What you can do is limit the upload rate in order to cap the download rate. FreeBSD's ports migration to git and its impact on HardenedBSD (https://hardenedbsd.org/article/shawn-webb/2021-04-06/freebsds-ports-migration-git-and-its-impact-hardenedbsd) FreeBSD completed their ports migration from subversion to git. Prior to the official switch, we used the read-only mirror FreeBSD had at GitHub[1]. The new repo is at [2]. A cursory glance at the new repo will show that the commit hashes changed. This presents an issue with HardenedBSD's ports tree in our merge-based workflow. TrueNAS 12.0-U3 has been released (https://www.truenas.com/docs/releasenotes/core/12.0u3/) iXsystems is excited to announce TrueNAS 12.0-U3 was released today and marks an important milestone in the transition from FreeNAS to TrueNAS. TrueNAS 12.0 is now considered by iXsystems to be a higher quality release than FreeNAS 11.3-U5, our previous benchmark. The new TrueNAS documentation site has also reached a point where it has more content and capabilities than FreeNAS. TrueNAS 12.0 is ready for mission-critical enterprise deployments. Beastie Bits Joyent provides pkgsrc for MacOS X (https://pkgsrc.joyent.com/install-on-osx/) Archives of old Irix documentation (https://techpubs.jurassic.nl) FreeBSD Developer/Vendor Summit 2021 (https://wiki.freebsd.org/DevSummit/202106) *** Tarsnap This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. 
Feedback/Questions Andre - splitting zfs array (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/401/feedback/Andre - splitting zfs array) Bruce - Command Change (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/401/feedback/Bruce - Command Change) Dan - Annoyances with ZFS (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/401/feedback/Dan - Annoyances with ZFS) Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv) ***
In this episode of Semaphore Uncut, I talk to Benjy Weinberger, co-founder of Toolchain. We discuss the open-source build tool, Pants, and hear Benjy's views on the monorepo strategy for managing your codebase. Key takeaways: Pants: a fast, scalable build system; explicit modelling of dependencies is key to Pants performance; monorepo gives visibility and ownership of the effects of your changes; monorepo helps avoid dependency hell; how Pants works: a concrete example; tools to make adopting Pants easy; how to contribute to Pants V2. About Semaphore Uncut: In each episode of Semaphore Uncut, we invite software industry professionals to discuss the impact they are making and what excites them about the emerging technologies.
Benjy Weinberger is a veteran software engineer with over 20 years of experience at Google, Twitter, Foursquare, and Checkpoint. He is a co-founder and CEO of Toolchain Labs and a long-time contributor to the Pants OSS build system. In this episode, we are discussing with Benjy how to address slow, broken, and inconsistent builds with the Toolchain cloud-centric build system.
Summary of the series: Blockchain in the New Norm. This series was conceived in the context of the Covid pandemic, with lots of talk about adapting to the new norm, both on an individual level and on a business level. From lockdowns, restricted movement and travel, and Zoom meetings to reviving demand, rebooting operations and supply chains, and reassuring your employees and customers, a lot has changed this year, and tech has emerged as an anchor point in this new norm. This series delved into how blockchain specifically is helping businesses in this new norm. I covered two topics: first, Safe Back to Work, and second, repurposing of supply chains. Here's a summary of what we explored in this series. Welcome to the Blockchain Hustle, where I take a look at some interesting examples of how blockchain technology is opening up new business vistas across multiple industries. TIME STAMPED SHOW NOTES: 0:00 Welcome to Blockchain Hustle 0:17 Introduction to this episode 1:36 Part 1: Introduction to the series (https://tinyurl.com/y5vvmwq5) 1:40 Part 2: Parts 2 to 8 are on Safe Back to Work. In this episode I set the context, along with some criteria for feasible solutions, including how blockchain is becoming an important cog in this. I covered three facets: Safe Back to Work, Safe Back to Business, and Being Safe. (https://tinyurl.com/y6ejoahm) 3:54 Part 3: Here I take a look at some of the #SafeBacktoWork solutions out there. In this episode, I share the Covid-19 immunity passport solution from the SICPA-OpenHealth-Guardtime consortium. (https://tinyurl.com/y2ygw4h8) 4:11 Part 4: In this episode I share a couple more examples of Covid-19 immunity passport solutions. These include the ICC AOKpass from the International Chamber of Commerce (ICC), Perlin, AOKpass, and International SOS, and QDX Health ID from Quantum Material Corp. 
(https://tinyurl.com/y46of5zx)4:39 Part 5: This had share from UK’s Open University #blockchain based solution which stores your #ImmunityCertificates on #Solid #Pod. It’s about My data my control. (https://tinyurl.com/yx9nsg3p )5:00 Part 6: Part 6 is on Safe back to Business. In this episode, I shared on DNV GL and ToolChain’s blockchain solution which was used by the shipping company Viking Line. (https://tinyurl.com/y5zq3ceq)5:17 Part 7: Parts 7 and 8 are on the 3rd category – Being Safe? I share my thoughts on a key problem faced by clinicians, researchers and scientists studying the Covid related data from multi, disparate and solo sources. - the lack of integration of verified data sources that can be used with confidence, why is this a challenging task and also do a brief share on a data platform called MiPasa. (https://tinyurl.com/yywnpm7q)6:00 Part 8: Continuing on the data platforms, this is on another such called Shivom.( https://tinyurl.com/yyyvr7x5)6:09 Part 9: Parts 9 to 11 are on the second theme of this series and is on Repurposing of supply chains in the new norm. In this episode, I introduce this theme and share some thoughts on how Supply chains in the new norm are getting repurposed – prioritization is shifting from efficiency and productivity to resilience and flexibility. And organizations are leveraging blockchain to help achieve this. (https://tinyurl.com/y3fr2axv)6:40 Part 10: How do you build critical supply chain resilience? How do you de-risk your supply chain? In this episode, I look at a couple of companies Tymlez and Rapid Medical Parts - working on this using blockchain. 7:05 Part 11: A recent WEF report highlighted a couple of key principles for Agile blockchains: Ensure Data privacy for suppliers and Incentivizing suppliers to share data. Part 11 is a delve into this and the toolkit from WEF to help supply chains.7:34 Part 12: This summary and wrap-up. 
So, this brings us to the end of this podcast series on blockchain in the new norm. I hope you have found it helpful and enjoyed listening to the series. I will be back shortly with a new topic. Should you have any suggestions, give me a shout out. Do stay tuned, and till then keep safe and healthy. Cheers!
Leave some feedback:
I hope this content will be valuable to you. If you enjoyed this podcast, please like it, share it, download it, subscribe to it and leave a short review. What should I talk about next? Please let me know by writing to me.
Connect with me:
LinkedIn http://sg.linkedin.com/in/meenusarin | Twitter @meenusarin | Email meenu@vlsiconsultancy.com | Website http://www.vlsiconsultancy.com | Blog http://www.vlsiconsultancy.com/newblog
Blockchain Hustle Podcast channels:
Apple Podcast https://podcasts.apple.com/sg/podcast/blockchain-hustle-podcast/id1493384933
YouTube https://www.youtube.com/channel/UCy-49hvbkgkCaSHyLo8QOTA/featured
Instagram/IGTV https://www.instagram.com/accounts/login/?next=/blockchainhustle1/
Google Podcast https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkLnBvZGJlYW4uY29tL2Jsb2NrY2hhaW5odXN0bGUvZmVlZC54bWw=
Stitcher https://www.stitcher.com/podcast/blockchain-hustle-podcast
Podbean https://blockchainhustle.podbean.com
Navigating the #newnorm, how can #enterprises communicate #trust to the market? How do you assure your customers, partners and other stakeholders of the safety of your products and services? That you are operating a safe and secure business? I explore this in the 6th episode of Blockchain in the New Norm.
Welcome to the Blockchain Hustle, where I take a look at some interesting plays of how blockchain technology is opening up new business vistas across multiple industries.
TIME STAMPED SHOW NOTES:
[00:17] Episode introduction
[01:52] Example – Viking Line with DNV GL's MyCare on ToolChain
[04:32] Why blockchain
[05:59] What I'll share in the next episode
Leave some feedback:
I hope this content will be valuable to you. If you enjoyed this podcast, please like it, share it, download it, subscribe to it and leave a short review. What should I talk about next? Please let me know by writing to me.
Connect with me:
LinkedIn http://sg.linkedin.com/in/meenusarin | Twitter @meenusarin | Email meenu@vlsiconsultancy.com | Website www.vlsiconsultancy.com | Blog http://www.vlsiconsultancy.com/newblog
Hosts: Tom Bridge - @tbridge777 Marcus Ransom - @marcusransom Charles Edge - @cedge318 Links: https://support.google.com/chrome/a/answer/7497916?hl=en http://Myitindy.com Sponsors: Halp Mac Business Solutions Patreon Watchman Monitoring If you're interested in sponsoring the Mac Admins Podcast, please email podcast@macadmins.org for more information. Get the latest about the Mac Admins Podcast, follow us on Twitter! We're @MacAdmPodcast!
In this episode of The IoT Unicorn Podcast, Rene Haas, President of the Intellectual Property Group at Arm, discusses the development of edge devices and the 5G wave. Download Transcript Here 00:00 PETE BERNARD: Rene, thanks again for joining us here on the IoT Unicorn. I was trying to remember the last time actually we saw each other face-to-face. That's something that we do these days. I think it was Barcelona 2019 or something. It was a while ago. But again, thanks for joining us today. 00:23 RENE HAAS: You are welcome. I wasn't sure if it was CES of 2020, but... 00:28 PETE BERNARD: It could be. 00:28 RENE HAAS: Gosh, you might be right. Barcelona, 2019. My gosh, over 18 months ago. 00:32 PETE BERNARD: Yeah, that was a long time ago. Well, CES 2020 was our last... It was kind of the last hurrah for events, although going to Vegas always has its potential infection rates of all sorts of things going on there, but... Not in that case, but... Cool, yeah, no, it's good to see you again, and we've known each other for a little while and worked on some interesting projects, so it was great to have you on the show, and obviously very timely with the DevSummit coming up and some recent news that we'll talk about as well. But maybe you can give us and the listeners a little background on your journey to where you're at as President of Arm IP. 02:07 RENE HAAS: So my role at Arm is I run the IP products group. Our acronym is IPG, Intellectual Property Products Group, and that's the sales, marketing and development of all of our products, GPUs, CPUs, NPUs for the markets that we serve, the client market, infrastructure market, automotive autonomous and IoT. I am in the Bay Area now, but I've had a fun journey at Arm. I have spent seven years at Arm, but only a few years in the Bay Area. I was in Shanghai, China for two years, and I was in the UK for three, living in London, commuting to Cambridge. And I just came back to the Bay Area at the beginning of 2020, and...
02:50 PETE BERNARD: Are you an original California person or what's your... Where is your home base? 02:54 RENE HAAS: I'm originally from Upstate New York. Yeah, I'm originally from Upstate New York. 02:58 PETE BERNARD: Wow, cool. 02:58 RENE HAAS: My dad was a Xerox guy, so I was a son of a Xerox guy working in... He was working in Rochester, New York, which is where I grew up. And then I came out to California in the mid-1990s, and I've been here ever since. 03:12 PETE BERNARD: I'm a New Jersey person myself, so that's something we have in common, the Tri-state area. Although Rochester is pretty far upstate there. 03:21 RENE HAAS: Serious snow country. 03:22 PETE BERNARD: Serious, yes. Good, good. Excellent. So you've been at Arm for a while then, and you also spent a little bit of time at Nvidia. 03:31 RENE HAAS: I did, I did. I'm gonna pre-fetch probably your next set of questions, but before I spent... 03:37 PETE BERNARD: No pun intended. 03:38 RENE HAAS: Seven years at Arm I was with Nvidia for seven years doing a number of different roles there, but primarily in the notebook graphics space, GPUs, as well as Arm-based CPUs that went into all different types of laptops including the very first Surface that was running Windows 8 on Arm. 04:00 PETE BERNARD: Yes, those were the days. I had one of those. A lot of us up in Redmond had one of those. [chuckle] Unfortunately, not a lot of the other people had them. That was the problem. [chuckle] But, so cool. So now sort of full circle, just to touch upon that topic, Nvidia and Arm. For you, it's kind of break out the old badge, I guess... 04:20 RENE HAAS: Yeah. It's something that came live last Monday. Obviously, the rumors had been out for a number of weeks, so some people were surprised, but some people were not so surprised when it finally was announced to everyone actually last Sunday. It was supposed to be on Monday, and then we pulled it forward to Sunday.
We're actually very excited about it at Arm, we think it's a really, really amazing opportunity. Nvidia is an amazing company, has done some fantastic things over the years obviously. And Arm efforts around client and data center, autonomous and such. When we think about what's going on in the next wave of computing where everything is gonna be touching something that is around artificial intelligence, I think the opportunities for the two companies to be a combined entity in this new area of computing, the opportunities are somewhat limitless. 05:17 RENE HAAS: So we're quite excited. Me, on a personal level, sometimes when these M&A things [05:21] ____ talking to the company on either side, there's a lot of questions of, "Do I know these folks? And can we really understand what their language is?" But for me, having spent equal amount of time in both places, I feel very fortunate to be in a position to be where we are on this, and it should be very exciting. And someone over there even pinged me not long after the announcement and said, "Hey, your email address is still available." So it's interesting how things circle back. 05:55 PETE BERNARD: Yeah, yeah, I wonder if you get credited those seven years at Nvidia as part of your Arm tenure. So how that works I'm not sure. 06:00 RENE HAAS: You know what, that's a really good question. I haven't... 06:03 PETE BERNARD: You might get a double hit on that one. 06:06 RENE HAAS: Yeah. In fact [06:08] ____ Pete, that was not on the FAQ. That's a good one. I'm gonna go check on that. 06:14 PETE BERNARD: Well, one of the things that's happened over the past number of years, what's been super exciting working with Arm is kind of the proliferation of where Arm is, the Arm silicon showing up. And you mentioned the early experiments, early efforts I should say, on Windows on Arm, but we had kind of a relaunch or a re-emergence of that tech a couple years ago, and I know I had the pleasure of working with you guys on that. 
So Windows on Arm, Windows on Snapdragon and all that stuff, it seems to be kind of a resurgence now on that as well. So what are your thoughts there? 06:50 RENE HAAS: Oh gosh. And as I mentioned, the history with working with Nvidia and Arm and Microsoft for me goes way back. And having worked on the original Surface product, that was basically what we called [07:06] ____ back in the day. And if I just think back to the value proposition we were hoping to get from those systems, it was really around extended battery life, always on, always connected, things like that. But you go back those years, there was no connectivity story, so those were just obviously purely WiFi devices. And the app story was really, really incomplete. I remember meeting with analysts early on and one of the biggest questions that I got asked when we were going to press reviews was, "Will it run iTunes?" And the answer to that question at the time was, "No." And that was a bit of a killer, if you just think about how people were getting access to music back when these products came out. Fast-forward to now, the landscape is so different when you just think about, A, how many of our applications exist in the Cloud? B, the devices that have been introduced by third-party OEMs as well as Microsoft. You have these amazing connectivity type of solutions that are brought forward by Snapdragon, so there's a great story in terms of connectivity. There's a great story in terms of app compatibilities on Windows 10 with everything running across. So we... 08:19 PETE BERNARD: Including iTunes, by the way. So iTunes now runs on that. 08:23 RENE HAAS: iTunes runs. And I bet you if I went through and asked that analyst and told them that iTunes ran successfully on these Windows devices, he would not care. But yeah, the experience is great. We use a lot of them inside of Arm.
In fact, when I was living in the UK, I used to use it all the time on the train because the WiFi was actually spotty on the train and the cellular worked pretty good, and it was a great device to use. And not the least of which, I would literally leave my power supply back in the flat during the day. I wouldn't bring it with me, wouldn't need it. And so the devices have really, really advanced, and then there's just more great things to come. 09:04 PETE BERNARD: Yeah, fantastic. I use the Galaxy Book as my main PC and yeah, it's a game changer. When you don't have to worry about power and connectivity, all of a sudden, it's like a behavioral change in how you use a PC, so it's pretty cool stuff. And then I guess the other big thing where you're making a lot of headway with partners is in the Cloud and sort of bringing a lot of low-power. A lot of times, people think of low-power as battery life, but it's not just battery life, it's just low-power, a greener, more smarter consumption of power, overall, especially in a big data center. 09:42 RENE HAAS: Yeah, no, that's exactly right. Arm has been working on products for the data center for actually a long time. Even from back in the time when I was at Nvidia, Arm was working with early partners around SSEs for the data center and such. Like everything else, over 10 years a lot of things have changed. Confluence of a lot of work being done on the engineering side to get great products. We've gone from 32-bit to 64-bit. The performance has increased. Geometries have also gone in such a way that you've gone from 10 to seven to five nanometre type of technologies now, so you can get some really, really powerful type of processing. And then just again, like any technology trend, you need a confluence of a number of things to take place. 10:32 RENE HAAS: 10 years ago, we were thinking largely about the enterprise; we weren't thinking as much about the Cloud. 
And what has happened with everything moving towards the Cloud, to your point, it's put such a premium on data efficiency, on power. These Cloud data centers typically have a very, very fixed power budget and a very fixed area where they put the compute capacity. So efficiency really, really matters, it's really, really important. And we continue to innovate in this area. We've introduced some new products. Our Neoverse V1, which has scalar vector processing for HPC and high-end computing. Our N2 platform, which is 40% more efficient than our N1 platforms. And we've seen some of the large hyperscalers including AWS who have announced products based upon our N1 with their Graviton2 processor. And they've talked very publicly about a 40% power advantage at the same performance level versus the competition. So yeah, it's very real and people might think, "Oh, my gosh, it's happened overnight." And you've been in this industry a long time, you know it doesn't. 11:46 PETE BERNARD: That's right. 11:47 RENE HAAS: It's a long, long effort by a lot of partners and a lot of people inside of Arm. But yeah, now I think confluence of a lot of things in the marketplace, it's really starting to take off. 11:56 PETE BERNARD: Yeah, it's true. For a lot of things, it's a matter of the right time and the right tech and the right need for it to all come together. Actually, interesting anecdote, just to circle back to the PC discussion. We were first working on the Windows on Snapdragon PCs, we had a big beta test inside of Microsoft and we handed them out to all of our engineering managers and stuff. And we started to get bug reports that the battery meter was not working right because it was just always full. And it turned out the battery meter was working fine, it's just people weren't used to the fact that this thing would last for whatever, 20 hours. And so it was an interesting discussion with folks that that's actually how it's supposed to work. 
12:38 RENE HAAS: Which is a game changer, like you said. 12:41 PETE BERNARD: Yeah, yeah. So let's get to IoT. This is called the IoT Unicorn, so we might as well dig into that. Probably the real fascinating things happening on the edge, the far edge, the near edge. The definition of the edge depends on where you're standing, I guess. But Arm at the edge and things that are happening out there, what do you see as disruptions that we should be expecting beyond the incremental things getting faster and less power, but what's the view there? One of the interesting things for our listeners that aren't aware is an IP license is like pretty far up the food chain. So you get probably one of the best long-term views of what's happening in the business over the next, whatever, five years. But be curious on the IoT and edge side, where do you see things heading? 13:30 RENE HAAS: Yeah, no, it's a great question. And that area is evolving fast. Even over the last number of years, we've seen a real acceleration of activity, innovation in that space. And particularly around the area of that these edge devices are increasingly becoming small computers in and of themselves. When IoT kicked off with vigor inside of Arm, we were talking to companies about this. It included a small microcontroller with potentially a sensor and a Bluetooth connector that could send the data back somewhere. Now you're talking about a heavy degree of compute power, you're talking about machine learning at the edge. Increasingly, we have partners who are looking to not only use our microcontrollers that have extensions for machine learning, but even tiny NPUs, tinyML doing some level of inference at the edge. 14:24 RENE HAAS: And with that, you have a much different requirement for security because now these devices are small computers, they're dealing with a tremendous amount of data, the data needs to be protected, you need to ensure that you have an architecture that will keep the data secure.
So we've done a lot of work with our partners around an innovation that we call the platform security architecture, which does a number of things. We've done a lot of work over the years around Root of Trust and things of that nature. With this platform security architecture, we actually allow for third parties to certify the devices, which will essentially assure a level of data encryption and security going up the line. And with that, I think it just all feeds onto itself relative to... These are small computers, these small computers are doing more and more compute-intensive tasks, they're sending more and more data through the Cloud, you then have 5G that is also adding more bandwidth and more compute capability. So what that basically means is you just start pushing from the data center to the edge, the amount of compute capacity is going up exponentially. 15:41 RENE HAAS: And I think over the next number of years, these edge devices are gonna become even more powerful and more sophisticated in terms of their capability. And you'll have a very interesting trade-off between the applications that run with that edge device at the node next to it, things that are cloud-native where the app can be running in a number of different spots. I think also you're gonna see huge innovation. And that's gonna mean certain things like autonomous entities. Not necessarily cars. Obviously cars are the most popular areas that get a lot of attention, but drones and robotics and things that can run in a much more sophisticated way, factory floor robotics, all kinds of things around managing warehousing, things of that nature. All of this is gonna become much more intelligent and much more sophisticated. 16:27 RENE HAAS: And then, back to the Nvidia/Arm potential around AI at the edge, these devices will learn, they'll get smarter. And as they get smarter, that again builds on having the compute capabilities.
I know it sounds a bit of a cliche, and I've been around the industry probably to see at least a number of these waves of computing, but we're definitely into another very large one. And 5G, because of the additional bandwidth, is gonna be able to enable a lot of that. 16:55 PETE BERNARD: Yeah. I think I had this discussion with Rob Tiffany from Ericsson on the last episode or two episodes ago, but we were talking about the confluence of 5G, AI and IoT, sort of three, these... It's like peanut butter, chocolate and whatever the third thing is. But I haven't... The metaphor breaks down after that. It's like you get these ultra-low-latency, high-performance networks combined with AI, which you could either do at the edge or the cloud or somewhere in between, with the concept of Internet of Things, which is just things connected to the Cloud and sending intelligent data back and forth and actuating in real-time. And then all of a sudden, you've got some really potential transformative scenarios there, right? 17:34 RENE HAAS: Yeah. 17:36 PETE BERNARD: And so I think... So it's sort of like... And I've had Qualcomm on the show before and other folks, and we talk about IoT being a team sport, that that statement of 5G, AI and IoT is an interesting example 'cause you need lots of different companies to come to the table to work together on behalf of a customer problem, 'cause it all starts with a customer having a problem that they need solved. And, yeah, I agree with you. You mentioned also about the fact that we're bringing AI horsepower into MCU devices or really tiny edge devices that previously were controlling a light switch are now going to be smart, and be able to learn and execute AI models. And I think that's fascinating. 18:22 RENE HAAS: Yeah. And you still have to get into... And by the way, I like that peanut butter and chocolate analogy, which are two of my favorite ingredients on [18:28] ____. You just need a third, but...
18:29 PETE BERNARD: [chuckle] Peanut butter, chocolate and more chocolate, I don't know if that's fair or not. 18:32 RENE HAAS: But similar to... One of the stories I like to talk about is a bit of what these new waves of technology enable. When we went from 3G to 4G, and I know you and I both were around for that, people were not talking about the fact that 3G to 4G was going to enable a brand new ride sharing capability, and it was gonna be able to enable people to rent their homes for vacations and such. Yet Airbnb, and Uber, and location-bearing apps and things you can do on a smartphone all came through with that. I think the same thing is true for 5G and IoT. It's a little hard to completely imagine all of the possibilities that can happen. There's a lot of smart people and, as you said it, it takes a village of a combination of chip people and OEMs and software and makers to come up with a lot of ideas to advance this. But it will be there because there's such a profound shift of compute power that's gonna exist in these edge devices that is going to allow for a lot of really, really interesting potential. So it's gonna be really exciting to see. 19:37 PETE BERNARD: Let me kind of cut into one blurb here around AI Toolchain, because I believe one of the things we've done with Arm and I think should be announced for DevSummit, if not, we'll edit it out, but we've come to some agreement with you, I believe, to integrate your AI Toolchain into Azure. 19:56 RENE HAAS: Yeah. 19:56 PETE BERNARD: One of the things is around... MLOps is kind of a hot term, but how do you leverage a hyperscaler cloud to develop and train models and then manage those models across the edge to the cloud securely on updating these edge devices with new AI capabilities or models or trainings and tunings?
And so your Toolchain's kind of at the core of a lot of that for a lot of silicon partners, so the ability to sort of integrate that Toolchain into Azure for our customers should be a big deal, right? 20:26 RENE HAAS: Oh, it's a really, really huge opportunity. We're actually quite excited about it. We do a lot of work on the Toolchain with Compute Libraries and frameworks and different things to allow folks to develop solutions for ML at the edge, and I think we probably have as many people in our ML group doing hardware NPUs as are doing the software libraries and frameworks. So it's really, really large. And you're reaching a brand new set of developers, if you will, and think about a Raspberry Pi or an Arduino-like platform for people who are developing things for the edge. If you can now allow those to integrate, upscale into the Azure cloud framework, because all of this tiny data becomes big data in the cloud, then ultimately it can get serviced in such a way that end users can benefit. It's actually a really exciting thing and we've been partners with Microsoft for such a long time in a broad set of areas. I'm very excited to be involved here as well. 21:27 PETE BERNARD: Yeah, that'd be great. Hey, so DevSummit. We're on the eve of DevSummit or the day one of DevSummit. I'm not sure what the publication timeline is here, but it's a big deal. It's very exciting. Obviously this year kind of highly virtualized, but still exciting. Do you have any kind of words of wisdom if you're an attendee for DevSummit? What are some of the things you wanna look for or try to get out of? And maybe first time visitors or whoever, how do people really grok the scene? 21:57 RENE HAAS: It's a big change on a couple fronts. Obviously, first off, it's virtual. It's not live. So that's for starters. So go to your favorite search engine and search for DevSummit and you get all the details about registering and such, but we have moved it to a virtual event.
For those of you who are saying, "Okay, it's virtual, I get it, but I've never heard of DevSummit. Tell me what DevSummit is," DevSummit is the re-branded name of a show we used to call TechCon. And so, TechCon was the show we had every fall. And it used to be in Santa Clara for many years and we moved it to San Jose the last couple of years. So, what's new is old, what's old is new. It's the TechCon show that we're now targeting really more towards... Broadly towards developers, although I would say we think 60% of the folks who have registered are self-proclaimed or self-identified software types, versus about 40% hardware types. 22:54 RENE HAAS: We've got about 4000, 5000 people already pre-registered. We think we'll have a bit more when the time comes. It will be very broad, as Arm typically is in nature. We'll be talking about things like cloud native, chip design, autonomous vehicles. It will run the gamut of all the areas that we're involved in, relative to what it takes to integrate Arm IP into an SoC and what do you need to know about hardware libraries and partners in that space, versus everything around open source software and popular development tools and operating environments that we just talked about on the software space. There will be a lot of emphasis around autonomous, which is a pretty hot area. A lot of areas also around cloud native. You'll see the typical keynotes from Simon, myself and some of the other leaders inside of Arm. I would also encourage folks to tune in because there will be some special surprise guests. I won't... 23:56 PETE BERNARD: I can imagine. 23:57 RENE HAAS: Give that away at this point of time, but it should be a very, very interesting and fun event. We have our annual Arm partner meeting every August. I think you've been to it. It's not a public event, it's an NDA event. But I bring that up just in the context of... We've had one rodeo with doing this thing virtually.
So I'd like to think we've got some good practice in terms of things that... The dos and don'ts in terms of doing something from a virtual standpoint. But yeah, it should be very, very good. We're looking forward to it. 24:26 PETE BERNARD: Cool. Yeah, it's interesting, Microsoft's done a number of events now virtual and I don't think we published the data but my understanding is the engagement we get because it's virtualized, we get so much broader engagement, we get so many more people quote, unquote, "attending" and engaged in the content than you would if it was only a... You had to get on a plane and go somewhere. So I think one of the nice by-products, if there is a nice by-product out of all this craziness, is we are all building more muscle about how to enable people to be more engaged regardless of where they are. And especially when you talk about developers, developers are everywhere in the world and there should be. And now to be able to enable them to plug in and get educated and learn some new things, that's a fantastic by-product. 25:13 RENE HAAS: Yeah, yeah. No, you're completely right. We'd love to do these events live versus virtual, but when I think about the size of the developer community that exists... Arm is a fairly broad platform, as you know, and it would be really hard to figure out events that could bring all the potential developers who work on Arm... And it's all over the place. There are apps developers, there are kernel developers, there are people who do open source software, it's a broad, broad community. So we're actually kind of excited to do this thing virtually. It'll be a bit of a lab test to see how that works in terms of reaching the development community in a virtual way, but we're looking forward to it. 25:53 PETE BERNARD: Cool, awesome. Well, lots of stuff going on at Arm these days. And so it was great again to connect, Rene. I think hopefully we'll keep in touch here as things transform into the Nvidia landscape.
Maybe you'll get those extra years on your seniority. [chuckle] But that would be great. 26:14 RENE HAAS: I should get some credit somehow for that. I am going to talk to Jensen about that next time I have our consultation with him. 26:21 PETE BERNARD: Yeah. Cool. Well, good. Any last closing thoughts? It sounds like we've really covered [26:27] ____ here today. 26:29 RENE HAAS: [26:29] ____ I appreciate it and [26:29] ____ as I mentioned, I was listening to some of the podcasts you had done prior and I really enjoyed them and I'm very, very honored on behalf of Arm to join you and be part of what you're building here. It's really cool. 26:43 PETE BERNARD: Sounds good. Alright, Rene. Well, take care and I'm sure our paths will cross again. 26:48 RENE HAAS: Alright, great. Thanks. 26:50 PETE BERNARD: Alright, take care. Thanks.
FreeBSD Q2 Quarterly Status report of 2020, Traditional Unix Toolchains, BastilleBSD 0.7 released, finding Meltdown on DragonFlyBSD, and more NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/) Headlines FreeBSD Quarterly Report (https://www.freebsd.org/news/status/report-2020-04-2020-06.html) This report covers FreeBSD-related projects between April and June, spanning a diverse set of topics ranging from kernel updates over userland and ports to third-party work. Some highlights, picked with the roll of a d100, include, but are not limited to, the ability to forcibly unmount UFS when the underlying media becomes inaccessible, preliminary support for Bluetooth Low Energy, an introduction to the FreeBSD Office Hours, and a repository of software collections called potluck that can be installed with the pot utility, as well as many, many more things. As a little treat, readers also get a rare report from the quarterly team. Finally, on behalf of the quarterly team, I would like to extend my deepest appreciation and thanks to salvadore@, who decided to take down his shingle. His contributions, not just the quarterly reports themselves but also the surrounding tooling that eases the work many-fold, are immeasurable. Traditional Unix Toolchains (https://bsdimp.blogspot.com/2020/07/traditional-unix-toolchains.html?m=1) Older Unix systems tend to be fairly uniform in how they handle the so-called 'toolchain' for creating binaries. This blog post gives a quick overview of the toolchain pipeline for Unix systems that follow the V7 tradition (which evolved along with Unix, a topic for a separate blog post maybe). Unix is a pipeline-based system, either physically or logically: one program takes input, processes the data and produces output. The input and output obey some interface, usually text-based. The Unix toolchain is no different.
News Roundup Bastille Day 2020: v0.7 released (https://github.com/BastilleBSD/bastille/releases/tag/0.7.20200714) This release matures the project from 0.6.x -> 0.7.x. Continued testing and bug fixes are proving Bastille capable for a range of use-cases. New (experimental) features are examples of innovation from community contribution and feedback. Thank you. Beastie Bits Finding meltdown on DragonFly (https://www.dragonflydigest.com/2020/07/28/24787.html) NetBSD Server Outage (https://mobile.twitter.com/netbsd/status/1286898183923277829) *** Tarsnap This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. Feedback/Questions Vincent - Gnome 3 question (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/363/feedback/vincent%20-%20gnome3.md) Malcolm - ZFS question (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/363/feedback/malcolm%20-%20zfs.md) Hassan - Video question (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/363/feedback/hassan%20-%20video.md) For those who watch on YouTube, don't forget to subscribe to our new YouTube Channel if you want updates when we post them on YT (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/363/feedback/new-bsdnow-youtube-channel.md) Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv) ***
The state of the art in natural language processing is a constantly moving target. With the rise of deep learning, previously cutting-edge techniques have given way to robust language models. Through it all, the team at Explosion AI has built a strong presence with the trifecta of spaCy, Thinc, and Prodigy to support fast and flexible data labeling for feeding deep learning models, along with performant and scalable text processing. In this episode, founder and open source author Matthew Honnibal shares his experience growing a business around cutting-edge open source libraries for the machine learning development process.
Banco Central, Brazil's central bank, has quietly created a profile on GitHub. What can we expect? The Coronavirus is affecting every area, technology included, with several major events around the world cancelled. This week there is a new toolchain release, Rome, created by Sebastian, the creator of Yarn and Babel. — Curation, editing, and review: Jaydson Gomes Get full access to BrazilJS at www.braziljs.org/subscribe
More Than Just Code podcast - iOS and Swift development, news and advice
This time we have Toolchain, Halloween AirPods, and RIP QuickTime Player 7. Tim Cook says we may have bundles, Google buys FitBit, FCC passes new Mac Pro, online etymology, we follow up on Apple TV+ trial, GameClub, and Photoshop on iPad, Apple privacy updates, and Uber car kills jaywalker. You cannot submit Electron 6 or 7 apps to the App Store. Microsoft’s Edge Chromium browser will launch on January 15th. Apple Push Notification Service Update. Apple TV+ first impressions and a new look for Apple privacy. Picks: Researchers hack Siri, Alexa, and Google Home by shining lasers at them, SwiftUI Cheat Sheet, Photoshop on iPad
Hosts Ben Yorke and Kenneth Ashely discuss the recent release of the Low Carbon Ecosystem (VeChain, DNV GL, BYD, PICC) and their WeChat Mini Program. Then the hosts address some of the most common concerns showing up on social media, including the latest on Walmart China and ToolChain.
Fabian rejoins us to give us his thoughts on recent events in the ecosystem. As June officially arrives, we look closer at the events unfolding in China.
In our latest community podcast, Sarah Nabaa (VeChain SE Asia and Australia General Manager) and Perkins Chen (VeChain Project Manager) break down ToolChain with host Ben Yorke. This is a great introduction to their turnkey tool for making blockchain accessible to hobbyists, small businesses, and medium-sized enterprises.
Panel: Charles Max Wood Joe Eames Alyssa Nicoll John Papa Ward Bell Special Guests: Alex Eagle In this episode of Adventures in Angular, the panel discusses Angular’s BuildTools with Alex Eagle. Alex has been working on the Angular core team at Google for the past three years and works on developer tooling there. He discusses the advantages of using a new build system, Bazel, and how using this system could improve your coding across the board. They also compare Bazel to other Angular tools and talk about when you would want to integrate Bazel into your tool belt. In particular, we dive pretty deep on: Angular plumbing Google Monorepo Bazel software Micro-services Not all tools need to be written in JavaScript Pros of Bazel build system Compilation in Angular CLI Two second rule How do you know when Bazel is good for you? Production mode vs development mode Feeling nervous about using Bazel Want your CI to have caching What does Bazel look like today? What will Bazel look like when you're done with it? Take rules and compose them however you want Bazel syntax is like Python Rules Bazel Ecosystem vs Angular Ecosystem Tools in your Toolchain And much, much more! Links: Linode FreshBooks Angular Bootcamp G.co/ng/abc Picks: Charles Developer Week ngATL Joe The Greatest Showman Kids on Bikes Alyssa The Impossible Project Ward Fly Like an Eagle by Steve Miller Band Alex Pocket Operators
After things had gone quiet around "DocPatch", a few entities got together for a hack weekend to bring the project up to date. DocPatch is a toolchain that makes changes to documents easier to trace. DocPatch is primarily intended for versioning legal texts, which are usually published as a "diff" against the preceding version. In this way we have documented the changes to the German Grundgesetz up to the present day and published them on a website.
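The diff-based versioning DocPatch describes can be sketched with standard Unix tools. This is a hypothetical illustration, not DocPatch's actual tooling: the file names and article text are invented.

```shell
# Hypothetical sketch: store an amendment to a legal text as a unified diff
# against the preceding version, the same idea DocPatch applies to laws.
printf 'Article 1: All humans are equal.\n' > law-v1.txt
printf 'Article 1: All people are equal.\n' > law-v2.txt
# diff exits non-zero when the files differ, so mask the status for the demo
diff -u law-v1.txt law-v2.txt > amendment-1.patch || true
cat amendment-1.patch
```

Publishing only `amendment-1.patch` matches how amendments to the Grundgesetz are announced; replaying the stored diffs in order reconstructs any historical version of the text.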
We recap vBSDcon, give you the story behind a PF EN, reminisce about Solaris memories, and show you how to configure different DEs on FreeBSD. This episode was brought to you by Headlines [vBSDCon] vBSDCon was held September 7-9th. We recorded this only a few days after getting home from this great event. Things started on Wednesday night, as attendees of the Thursday developer summit arrived and broke into smallish groups for disorganized dinner and drinks. We then held an unofficial hacker lounge in a medium-sized seating area, working and talking until we all decided that the developer summit started awfully early tomorrow. The developer summit started with a light breakfast, and then we dove right in. Ed Maste started us off, and then Glen Barber gave a presentation about lessons learned from the 11.1-RELEASE cycle, comparing it to previous releases. 11.1 was released on time, and was one of the best releases so far. The slides are linked on the DevSummit wiki page (https://wiki.freebsd.org/DevSummit/20170907). The group then jumped into hackmd.io, a collaborative note-taking application, and listed various works in progress and upstreaming efforts. Then we listed wants and needs for the 12.0 release. After lunch we broke into pairs of working groups, with additional space for smaller meetings. The first pair were ZFS and Toolchain, followed by a break and then a discussion of IFLIB and network drivers in general. After another break, the last groups of the day met: pkgbase and secure boot. Then it was time for the vBSDCon reception dinner. This standing dinner was a great way to meet new people, and for attendees to mingle and socialize. The official hacking lounge Thursday night was busy, and included some great storytelling, along with a bunch of work getting done. It was very encouraging to watch a struggling new developer getting help from a seasoned veteran. 
Watching the new developer's eyes light up as the new information filled in gaps and they now understood so much more than just a few minutes before, and they raced off to continue working, was inspirational, and reminded me why these conferences are so important. The hacker lounge shut down relatively early by BSD conference standards, but, the conference proper started at 8:45 sharp the next morning, so it made sense. Friday saw a string of good presentations, I think my favourite was Jonathan Anderson's talk on Oblivious sandboxing. Jonathan is a very energetic speaker, and was able to keep everyone focused even during relatively complicated explanations. Friday night I went for dinner at ‘Big Bowl', a stir-fry bar, with a largish group of developers and users of both FreeBSD and OpenBSD. The discussions were interesting and varied, and the food was excellent. Benedict had dinner with JT and some other folks from iXsystems. Friday night the hacker lounge was so large we took over a bigger room (it had better WiFi too). Saturday featured more great talks. The talk I was most interested in was from Eric McCorkle, who did the EFI version of my GELIBoot work. I had reviewed some of the work, but it was interesting to hear the story of how it happened, and to see the parallels with my own story. My favourite speaker was Paul Vixie, who gave a very interesting talk about the gets() function in libc. gets() was declared unsafe before the FreeBSD project even started. The original import of the CSRG code into FreeBSD includes the compile-time and run-time warnings against using gets(). OpenBSD removed gets() in version 5.6, in 2014. Following Paul's presentation, various patches were raised, to either cause use of gets() to crash the program, or to remove gets() entirely, causing such programs to fail to link. The last talk before the closing was Benedict's BSD Systems Management with Ansible (https://people.freebsd.org/~bcr/talks/vBSDcon2017_Ansible.pdf). 
Shortly after, Allan won a MacBook Pro by correctly guessing the number of components in a jar that was standing next to the registration desk (Benedict was way off, but had a good laugh about the unlikely future Apple user). Saturday night ended with the Conference Social, and an excellent dinner with more great conversations On Sunday morning, a number of us went to the Smithsonian Air and Space Museum site near the airport, and saw a Concorde, an SR-71, and the space shuttle Discovery, among many other exhibits. Check out the full photo album by JT (https://t.co/KRmSNzUSus), our producer. Thanks to all the sponsors for vBSDcon and all the organizers from Verisign, who made it such a great event. *** The story behind FreeBSD-EN-17.08.pf (https://www.sigsegv.be//blog/freebsd/FreeBSD-EN-17.08.pf) After our previous deep dive on a bug in episode 209, Kristof Provost, the maintainer of pf on FreeBSD (he is going to hate me for saying that) has written the story behind a recent ERRATA notice for FreeBSD First things first, so I have to point out that I think Allan misremembered things. The heroic debugging story is PR 219251, which I'll try to write about later. FreeBSD-EN-17:08.pf is an issue that affected some FreeBSD 11.x systems, where FreeBSD would panic at startup. There were no reports for CURRENT. There's very little to go on here, but we do know the cause of the panic ("integer divide fault"), and that the current process was "pf purge". The pf purge thread is part of the pf housekeeping infrastructure: a kernel thread which cleans up things like old states and expired fragments. The lack of mention of pf functions in the backtrace is a hint unto itself. It suggests that the error is probably directly in pf_purge_thread(). It might also be in one of the static functions it calls, because compilers often just inline those so they don't generate stack frames. Remember that the problem is an "integer divide fault". 
How can integer divisions be a problem? Well, you can try to divide by zero. The most obvious suspect for this is this code: idx = pf_purge_expired_states(idx, pf_hashmask / (V_pf_default_rule.timeout[PFTM_INTERVAL] * 10)); However, this variable is both correctly initialised (in pf_attach_vnet()) and can only be modified through the DIOCSETTIMEOUT ioctl() call and that one checks for zero. At that point I had no idea how this could happen, but because the problem did not affect CURRENT I looked at the commit history and found this commit from Luiz Otavio O Souza: Do not run the pf purge thread while the VNET variables are not initialized, this can cause a divide by zero (if the VNET initialization takes too long to complete). Obtained from: pfSense Sponsored by: Rubicon Communications, LLC (Netgate) That sounds very familiar, and indeed, applying the patch fixed the problem. Luiz explained it well: it's possible to use V_pf_default_rule.timeout before it's initialised, which caused this panic. To me, this reaffirms the importance of writing good commit messages: because Luiz mentioned both the pf purge thread and the division by zero I was easily able to find the relevant commit. If I hadn't found it this fix would have taken a lot longer. Next week we'll look at the more interesting story, which I managed to nag Kristof into writing *** The sudden death and eternal life of Solaris (http://dtrace.org/blogs/bmc/2017/09/04/the-sudden-death-and-eternal-life-of-solaris/) A blog post from Bryan Cantrill about the death of Solaris As had been rumored for a while, Oracle effectively killed Solaris. When I first saw this, I had assumed that this was merely a deep cut, but in talking to Solaris engineers still at Oracle, it is clearly much more than that. It is a cut so deep as to be fatal: the core Solaris engineering organization lost on the order of 90% of its people, including essentially all management. 
Of note, among the engineers I have spoken with, I heard two things repeatedly: “this is the end” and (from those who managed to survive Friday) “I wish I had been laid off.” Gone is any of the optimism (however tepid) that I have heard over the years — and embarrassed apologies for Oracle's behavior have been replaced with dismay about the clumsiness, ineptitude and callousness with which this final cut was handled. In particular, that employees who had given their careers to the company were told of their termination via a pre-recorded call — “robo-RIF'd” in the words of one employee — is both despicable and cowardly. To their credit, the engineers affected saw themselves as Sun to the end: they stayed to solve hard, interesting problems and out of allegiance to one another — not out of any loyalty to the broader Oracle. Oracle didn't deserve them and now it doesn't have them — they have been liberated, if in a depraved act of corporate violence. Assuming that this is indeed the end of Solaris (and it certainly looks that way), it offers a time for reflection. Certainly, the demise of Solaris is at one level not surprising, but on the other hand, its very suddenness highlights the degree to which proprietary software can suffer by the vicissitudes of corporate capriciousness. Vulnerable to executive whims, shareholder demands, and a fickle public, organizations can simply change direction by fiat. And because — in the words of the late, great Roger Faulkner — “it is easier to destroy than to create,” these changes in direction can have lasting effect when they mean stopping (or even suspending!) work on a project. Indeed, any engineer in any domain with sufficient longevity will have one (or many!) stories of exciting projects being cancelled by foolhardy and myopic management. 
For software, though, these cancellations can be particularly gutting because (in the proprietary world, anyway) so many of the details of software are carefully hidden from the users of the product — and much of the innovation of a cancelled software project will likely die with the project, living only in the oral tradition of the engineers who knew it. Worse, in the long run — to paraphrase Keynes — proprietary software projects are all dead. However ubiquitous at their height, this lonely fate awaits all proprietary software. There is, of course, another way — and befitting its idiosyncratic life and death, Solaris shows us this path too: software can be open source. In stark contrast to proprietary software, open source does not — cannot, even — die. Yes, it can be disused or rusty or fusty, but as long as anyone is interested in it at all, it lives and breathes. Even should the interest wane to nothing, open source software survives still: its life as machine may be suspended, but it becomes as literature, waiting to be discovered by a future generation. That is, while proprietary software can die in an instant, open source software perpetually endures by its nature — and thrives by the strength of its communities. Just as the existence of proprietary software can be surprisingly brittle, open source communities can be crazily robust: they can survive neglect, derision, dissent — even sabotage. In this regard, I speak from experience: from when Solaris was open sourced in 2005, the OpenSolaris community survived all of these things. By the time Oracle bought Sun five years later in 2010, the community had decided that it needed true independence — illumos was born. And, it turns out, illumos was born at exactly the right moment: shortly after illumos was announced, Oracle — in what remains to me a singularly loathsome and cowardly act — silently re-proprietarized Solaris on August 13, 2010. 
We in illumos were indisputably on our own, and while many outsiders gave us no chance of survival, we ourselves had reason for confidence: after all, open source communities are robust because they are often united not only by circumstance, but by values, and in our case, we as a community never lost our belief in ZFS, Zones, DTrace and myriad other technologies like MDB, FMA and Crossbow. Indeed, since 2010, illumos has thrived; illumos is not only the repository of record for technologies that have become cross-platform like OpenZFS, but we have also advanced our core technologies considerably, while still maintaining highest standards of quality. Learning some of the mistakes of OpenSolaris, we have a model that allows for downstream innovation, experimentation and differentiation. For example, Joyent's SmartOS has always been focused on our need for a cloud hypervisor (causing us to develop big features like hardware virtualization and Linux binary compatibility), and it is now at the heart of a massive buildout for Samsung (who acquired Joyent a little over a year ago). For us at Joyent, the Solaris/illumos/SmartOS saga has been formative in that we have seen both the ill effects of proprietary software and the amazing resilience of open source software — and it very much informed our decision to open source our entire stack in 2014. Judging merely by its tombstone, the life of Solaris can be viewed as tragic: born out of wedlock between Sun and AT&T and dying at the hands of a remorseless corporate sociopath a quarter century later. And even that may be overstating its longevity: Solaris may not have been truly born until it was made open source, and — certainly to me, anyway — it died the moment it was again made proprietary. But in that shorter life, Solaris achieved the singular: immortality for its revolutionary technologies. 
So while we can mourn the loss of the proprietary embodiment of Solaris (and we can certainly lament the coarse way in which its technologists were treated!), we can rejoice in the eternal life of its technologies — in illumos and beyond! News Roundup OpenBSD on the Lenovo Thinkpad X1 Carbon (5th Gen) (https://jcs.org/2017/09/01/thinkpad_x1c) Joshua Stein writes about his experiences running OpenBSD on the 5th generation Lenovo Thinkpad X1 Carbon: ThinkPads have sort of a cult following among OpenBSD developers and users because the hardware is basic and well supported, and the keyboards are great to type on. While no stranger to ThinkPads myself, most of my OpenBSD laptops in recent years have been from various vendors with brand new hardware components that OpenBSD does not yet support. As satisfying as it is to write new kernel drivers or extend existing ones to make that hardware work, it usually leaves me with a laptop that doesn't work very well for a period of months. After exhausting efforts trying to debug the I2C touchpad interrupts on the Huawei MateBook X (and other 100-Series Intel chipset laptops), I decided to take a break and use something with better OpenBSD support out of the box: the fifth generation Lenovo ThinkPad X1 Carbon. Hardware Like most ThinkPads, the X1 Carbon is available in a myriad of different internal configurations. I went with the non-vPro Core i7-7500U (it was the same price as the Core i5 that I normally opt for), 16Gb of RAM, a 256Gb NVMe SSD, and a WQHD display. This generation of X1 Carbon finally brings a thinner screen bezel, allowing the entire footprint of the laptop to be smaller which is welcome on something with a 14" screen. The X1 now measures 12.7" wide, 8.5" deep, and 0.6" thick, and weighs just 2.6 pounds. While not available at initial launch, Lenovo is now offering a WQHD IPS screen option giving a resolution of 2560x1440. 
Perhaps more importantly, this display also has much better brightness than the FHD version, something ThinkPads have always struggled with. On the left side of the laptop are two USB-C ports, a USB-A port, a full-size HDMI port, and a port for the ethernet dongle which, despite some reviews stating otherwise, is not included with the laptop. On the right side is another USB-A port and a headphone jack, along with a fan exhaust grille. On the back is a tray for the micro-SIM card for the optional WWAN device, which also covers the Realtek microSD card reader. The tray requires a paperclip to eject which makes it inconvenient to remove, so I think this microSD card slot is designed to house a card semi-permanently as a backup disk or something. On the bottom are the two speakers towards the front and an exhaust grille near the center. The four rubber feet are rather plastic feeling, which allows the laptop to slide around on a desk a bit too much for my liking. I wish they were a bit softer to be stickier. Charging can be done via either of the two USB-C ports on the left, though I wish more vendors would do as Google did on the Chromebook Pixel and provide a port on both sides. This makes it much more convenient to charge when not at one's desk, rather than having to route a cable around to one specific side. The X1 Carbon includes a 65W USB-C PD with a fixed USB-C cable and removable country-specific power cable, which is not very convenient due to its large footprint. I am using an Apple 61W USB-C charger and an Anker cable which charge the X1 fine (unlike HP laptops which only work with HP USB-C chargers). Wireless connectivity is provided by a removable Intel 8265 802.11a/b/g/n/ac WiFi and Bluetooth 4.1 card. An Intel I219-V chip provides ethernet connectivity and requires an external dongle for the physical cable connection. The screen hinge is rather tight, making it difficult to open with one hand. 
The tradeoff is that the screen does not wobble in the least bit when typing. The fan is silent at idle, and there is no coil whine even under heavy load. During a make -j4 build, the fan noise is reasonable and medium-pitched, rather than a high-pitched whine like on some laptops. The palm rest and keyboard area remain cool during high CPU utilization. The full-sized keyboard is backlit and offers two levels of adjustment. The keys have a soft surface and a somewhat clicky feel, providing very quiet typing except for certain keys like Enter, Backspace, and Escape. The keyboard has a reported key travel of 1.5mm and there are dedicated Page Up and Page Down keys above the Left and Right arrow keys. Dedicated Home, End, Insert, and Delete keys are along the top row. The Fn key is placed to the left of Control, which some people hate (although Lenovo does provide a BIOS option to swap it), but it's in the same position on Apple keyboards so I'm used to it. However, since there are dedicated Page Up, Page Down, Home, and End keys, I don't really have a use for the Fn key anyway. Firmware The X1 Carbon has a very detailed BIOS/firmware menu which can be entered with the F1 key at boot. F12 can be used to temporarily select a different boot device. A neat feature of the Lenovo BIOS is that it supports showing a custom boot logo instead of the big red Lenovo logo. From Windows, download the latest BIOS Update Utility for the X1 Carbon (my model was 20HR). Run it and it'll extract everything to C:\drivers\flash\(some random string). Drop a logo.gif file in that directory and run winuptp.exe. If a logo file is present, it'll ask whether to use it and then write the new BIOS to its staging area, then reboot to actually flash it. OpenBSD support Secure Boot has to be disabled in the BIOS menu, and the "CSM Support" option must be enabled, even when "UEFI/Legacy Boot" is left on "UEFI Only". Otherwise the screen will just go black after the OpenBSD kernel loads into memory. 
Based on this component list, it seems like everything but the fingerprint sensor works fine on OpenBSD. *** Configuring 5 different desktop environments on FreeBSD (https://www.linuxsecrets.com/en/entry/51-freebsd/2017/09/04/2942-configure-5-freebsd-x-environments) This fairly quick tutorial over at LinuxSecrets.com is a great start if you are new to FreeBSD, especially if you are coming from Linux and miss your favourite desktop environment It just goes to show how easy it is to build the desktop you want on modern FreeBSD The tutorial covers: GNOME, KDE, Xfce, Mate, and Cinnamon The instructions for each boil down to some variation of: Install the desktop environment and a login manager if it is not included: > sudo pkg install gnome3 Enable the login manager, and usually dbus and hald: > sudo sysrc dbus_enable="YES" hald_enable="YES" gdm_enable="YES" gnome_enable="YES" If using a generic login manager, add the DE startup command to your .xinitrc: > echo "exec cinnamon" > ~/.xinitrc And that is about it. The tutorial goes into more detail on other configuration you can do to get your desktop just the way you like it. To install Lumina: > sudo pkg install lumina pcbsd-utils-qt5 This will install Lumina and the pcbsd utilities package which includes pcdm, the login manager. In the near future we hear the login manager and some of the other utilities will be split into separate packages, making it easier to use them on vanilla FreeBSD. > sudo sysrc pcdm_enable="YES" dbus_enable="YES" hald_enable="YES" Reboot, and you should be greeted with the graphical login screen *** A return-oriented programming defense from OpenBSD (https://lwn.net/Articles/732201/) We talked a bit about RETGUARD last week, presenting Theo's email announcing the new feature Linux Weekly News has a nice breakdown on just how it works Stack-smashing attacks have a long history; they featured, for example, as a core part of the Morris worm back in 1988. 
Restrictions on executing code on the stack have, to a great extent, put an end to such simple attacks, but that does not mean that stack-smashing attacks are no longer a threat. Return-oriented programming (ROP) has become a common technique for compromising systems via a stack-smashing vulnerability. There are various schemes out there for defeating ROP attacks, but a mechanism called "RETGUARD" that is being implemented in OpenBSD is notable for its relative simplicity. In a classic stack-smashing attack, the attack code would be written directly to the stack and executed there. Most modern systems do not allow execution of on-stack code, though, so this kind of attack will be ineffective. The stack does affect code execution, though, in that the call chain is stored there; when a function executes a "return" instruction, the address to return to is taken from the stack. An attacker who can overwrite the stack can, thus, force a function to "return" to an arbitrary location. That alone can be enough to carry out some types of attacks, but ROP adds another level of sophistication. A search through a body of binary code will turn up a great many short sequences of instructions ending in a return instruction. These sequences are termed "gadgets"; a large program contains enough gadgets to carry out almost any desired task — if they can be strung together into a chain. ROP works by locating these gadgets, then building a series of stack frames so that each gadget "returns" to the next. There is, of course, a significant limitation here: a ROP chain made up of exclusively polymorphic gadgets will still work, since those gadgets were not (intentionally) created by the compiler and do not contain the return-address-mangling code. De Raadt acknowledged this limitation, but said: "we believe once standard-RET is solved those concerns become easier to address separately in the future. In any case a substantial reduction of gadgets is powerful". 
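The "gadget" idea is easy to see for yourself. The snippet below is an illustration of the concept, not part of RETGUARD: it assumes `objdump` is installed and uses `/bin/ls` as an arbitrary example binary, counting the return instructions that each terminate a candidate gadget.

```shell
# Every `ret` in a disassembly ends a short instruction sequence an attacker
# could "return" into; a large binary contains a great many of them.
objdump -d /bin/ls | grep -cEw 'retq?'
```

The count alone shows why De Raadt calls a substantial reduction of gadgets powerful: every one of these endpoints is raw material for a ROP chain.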
Using the compiler to insert the hardening code greatly eases the task of applying RETGUARD to both the OpenBSD kernel and its user-space code. At least, that is true for code written in a high-level language. Any code written in assembly must be changed by hand, though, which is a fair amount of work. De Raadt and company have done that work; he reports that: "We are at the point where userland and base are fully working without regressions, and the remaining impacts are in a few larger ports which directly access the return address (for a variety of reasons)". It can be expected that, once these final issues are dealt with, OpenBSD will ship with this hardening enabled. The article wonders about applying the same technique to Linux, but notes it would be difficult because the Linux kernel cannot currently be compiled using LLVM. If any benchmarks have been run to determine the cost of using RETGUARD, they have not been publicly posted. The extra code will make the kernel a little bigger, and the extra overhead on every function is likely to add up in the end. But if this technique can make the kernel that much harder to exploit, it may well justify the extra execution overhead that it brings with it. All that's needed is somebody to actually do the work and try it out. Videos from BSDCan have started to appear! (https://www.youtube.com/playlist?list=PLeF8ZihVdpFfVEsCxNWGDmcATJfRZacHv) Henning Brauer: tcp synfloods - BSDCan 2017 (https://www.youtube.com/watch?v=KuHepyI0_KY) Benno Rice: The Trouble with FreeBSD - BSDCan 2017 (https://www.youtube.com/watch?v=1DM5SwoXWSU) Li-Wen Hsu: Continuous Integration of The FreeBSD Project - BSDCan 2017 (https://www.youtube.com/watch?v=SCLfKWaUGa8) Andrew Turner: GENERIC ARM - BSDCan 2017 (https://www.youtube.com/watch?v=gkYjvrFvPJ0) Bjoern A. Zeeb: From the outside - BSDCan 2017 (https://www.youtube.com/watch?v=sYmW_H6FrWo) Rodney W.
Grimes: FreeBSD as a Service - BSDCan 2017 (https://www.youtube.com/watch?v=Zf9tDJhoVbA) Reyk Floeter: The OpenBSD virtual machine daemon - BSDCan 2017 (https://www.youtube.com/watch?v=Os9L_sOiTH0) Brian Kidney: The Realities of DTrace on FreeBSD - BSDCan 2017 (https://www.youtube.com/watch?v=NMUf6VGK2fI) The rest will continue to trickle out, likely not until after EuroBSDCon. *** Beastie Bits Oracle has killed Sun (https://meshedinsights.com/2017/09/03/oracle-finally-killed-sun/) Configure Thunderbird to send patch friendly (http://nanxiao.me/en/configure-thunderbird-to-send-patch-friendly/) FreeBSD 10.4-BETA4 Available (https://www.freebsd.org/news/newsflash.html#event20170909:01) iXsystems looking to hire kernel and ZFS developers (especially Sun/Oracle refugees) (https://www.facebook.com/ixsystems/posts/10155403417921508) Speaking of job postings, UnitedBSD.com has a few job postings related to BSD (https://unitedbsd.com/) Call for papers USENIX FAST ‘18 - February 12-15, 2018, Due: September 28, 2017 (https://www.freebsdfoundation.org/news-and-events/call-for-papers/usenix-fast-18-call-for-papers/) Scale 16x - March 8-11, 2018, Due: October 31, 2017 (https://www.freebsdfoundation.org/news-and-events/call-for-papers/scale-16x-call-for-participation/) FOSDEM ‘18 - February 3-4, 2018, Due: November 3, 2017 (https://www.freebsdfoundation.org/news-and-events/call-for-papers/fosdem-18-call-for-participation/) Feedback/Questions Jason asks about cheap router hardware (http://dpaste.com/340KRHG) Prashant asks about latest kernels with freebsd-update (http://dpaste.com/2J7DQQ6) Matt wants to know about VM Performance & CPU Steal Time (http://dpaste.com/1H5SZ81) John has config questions regarding the Dell Precision 7720, FreeBSD, NVMe, and ZFS (http://dpaste.com/0X770SY) ***
Discussion: In this lesson, we're going to do an overview of the Arduino toolchain. The toolchain includes all of the software tools that sit between us pressing the Verify/Upload buttons and that code actually getting loaded onto the integrated circuit on the Arduino board. I definitely want to stress the keyword overview because there is a lot going on behind the scenes that actually makes it happen. In practice for us, though, the entire process is just a matter of pressing a button. As we start to dig in, it's tempting to want to know every detail. However, remember that our goal in this course is to learn programming and electronics. If we're not careful, we can get pulled down a huge rabbit hole. That being said, I do think it's important to be familiar with the process. We don’t need to understand it in depth. I just want you to be able to understand and recognize some of the terms and basic concepts associated with this toolchain. Specifically, we’ll discuss: what a toolchain is, a toolchain analogy, and the basics of the Arduino toolchain. What in the World Is a Toolchain? So, what is a toolchain in the first place? In programming, a toolchain is just a series of programming tools that work together in order to complete a task. If we were cooking something, like carrot soup, the toolchain might include a carrot peeler, a produce knife, a pot, some water, and a stove. Together, these tools help you create the desired output: carrot soup. When we're developing programs for Arduino, we'll also be using a toolchain. This all happens behind the scenes. On our end, we literally only have to press a button for the code to get to the board, but it wouldn’t happen without the toolchain. A Helpful Analogy So, I want to start peeling back the curtain on this Arduino toolchain. Imagine for a moment that you're an author. In fact, you're a New York Times bestselling author... just like me. What type of toolchain might you use?
Maybe you start off with text editing software, like Microsoft Word, to type your awesome story. Once you're done with that manuscript, you send it off to a professional editor at a publishing company. That professional editor is part of this toolchain. He will look over your manuscript and point out any errors. He might do some rearranging and other things, as well. After he sends the suggested edits back to you, you make any necessary corrections. Then, you give the manuscript back to that professional editor to do one last check for errors. He then hands it off to the office next door at that publishing company. The publishing company can't just take that Microsoft Word file and send it off to a printer. They need a special file type to format the book for how it should look on a physical page when it actually gets printed. Therefore, they take the manuscript in the Microsoft Word file format and turn it into a new file format. Once this is done, the publisher can send it off to be printed. Luckily, to make the process of printing a whole lot easier, this publisher has an in-house printer. So, he simply goes right down the hall, so to speak. The printer takes that file and prints it onto an actual physical page. So, let's review this author's toolchain. The author writes with a text editor program. Then, he sends the manuscript to an editor at a publishing company. That editor reviews the manuscript until it is perfect. He then sends it next door to be formatted. The manuscript is converted into the proper file format, and it is sent off to the printer to create the physical book. Finally, voilà, somebody buys your book from Amazon to read all about zombies over a warm latte. Arduino Toolchain Basics Why did I go through that long scenario? Well, the Arduino has a similar toolchain. When we start writing our code, we become the author. We do this in the Arduino IDE, which is akin to a text editor.
We also write the code in a programming language called C++, with a file type extension of .ino. The code that we write is called human-readable code since it's meant for us to read and, hopefully, understand. However, the Arduino hardware doesn't understand C++ code. It needs what's called machine code. In order to generate that machine code, we use a software tool called a compiler. Remember that Verify button in the Arduino IDE that looks like a checkmark? When you press this button, it compiles the code using compiler software called AVR-GCC. This compiler software does a bunch of stuff. The main things it does are rearrange some code and check for errors. This is like the professional editor at the publishing company. The compiler also translates the human-readable code into machine code. There are some in-between file types and other fuzzy things that go on, but the final output of the compiler is machine code saved as a .hex file. In order to get this .hex file loaded onto our Arduino's integrated circuit, we need to press the Upload button. This launches another piece of software called AVRDUDE, which sends the file to the integrated circuit. Normally, we would have to use some external hardware to load that circuit. However, in Arduino’s infinite wisdom, they pre-load a small piece of software onto the integrated circuit of every Arduino board. That piece of software is called a bootloader. It works with AVRDUDE to take the outputted .hex file and put it onto the flash memory of that Arduino's integrated circuit using only the USB cable. Again, all we had to do was press the Upload button. This whole process happens behind the scenes. Now the process is complete. So, what was our toolchain? We have the Arduino IDE editor. Then, we have the compiler, which is AVR-GCC. The result is a .hex file. Next, we have AVRDUDE and the bootloader on the integrated circuit of the Arduino board.
These work together to upload the .hex file to the board. REVIEW That was a lot of information to absorb. Let’s summarize what we learned in this lesson. First, we learned that in programming, a toolchain is simply a group of software tools used to complete a task. Then we discussed the book publishing analogy to help illustrate that point. Lastly, we walked through the Arduino toolchain. As an encore, we’ll go over that toolchain one more time. We start by writing human-readable C++ code in the Arduino IDE editor. Then, we click Verify. The compiler program, called AVR-GCC, checks the code for errors and adjusts some of the code for us. The result is machine code in a .hex file. When we press the Upload button, AVRDUDE takes that .hex file and works with the bootloader. The bootloader is pre-installed on the Arduino's integrated circuit, and it helps get the machine code loaded onto the flash memory of that circuit. Wow, I'm so glad that all happens behind the scenes! Again, this was meant only to be a cursory overview. I wanted to make sure that if you see AVR-GCC in an error message, or hear the word compiler, or any of the other jargon we discussed here, at least you now have an idea of their significance. You now know where they are in the Arduino toolchain.
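For the curious, those button presses map roughly onto command-line invocations of the same tools. Here is a hedged sketch of what Verify and Upload do under the hood, assuming an Uno-class board (ATmega328P) connected at /dev/ttyACM0 and a hypothetical sketch file named blink.cpp; the IDE's actual invocation uses more flags and also compiles and links the Arduino core libraries, which are omitted here.

```shell
# Roughly what Verify does: compile the C++ code for the AVR chip,
# then convert the result into an Intel HEX (.hex) file.
# (Sketch name, MCU, port and baud rate are illustrative assumptions.)
avr-gcc -mmcu=atmega328p -Os -o blink.elf blink.cpp
avr-objcopy -O ihex -R .eeprom blink.elf blink.hex

# Roughly what Upload does: hand the .hex file to AVRDUDE, which
# talks to the bootloader over the USB serial port.
avrdude -p atmega328p -c arduino -P /dev/ttyACM0 -b 115200 \
    -U flash:w:blink.hex:i
```

In practice the IDE drives all of this for you; the sketch above is only meant to connect the names AVR-GCC, .hex, and AVRDUDE to concrete commands.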
Talk Python To Me - Python conversations for passionate developers
See the full show notes for this episode on the website at talkpython.fm/78.
Mike’s making some big changes to his workflow, and sharing the tools in his box. We’ll look at the transition to Ubuntu Linux for Mike and his dev team, and the productivity advantages they see. Plus planning for scale, a fresh look at Vala, your emails, and more!
Many Bothans died before Patrick Lauke finally found his way onto our show. Once there, he enlightened us about touch and pointer events and compared his views on toolchains and ARIA courses with Schepp and Peter. Show notes [00:02:10] Toolchain cult? Everything used to be simpler: all you needed was Notepad and an FTP client. Today, however […]
This Week on developerWorks has a new home page at: http://ibm.com/developerworks/thisweek Links to articles mentioned on this episode are at: https://ibm.biz/BdxCft
Today's hosts are Jason Kridner, Gerald Coley and Jeffery Osier-Mixon. Below are the show note links.
Links to the recordings: BeagleCast-20110314.mp3, BeagleCast-20110314.ogg
To provide questions or suggestions: call +1-713-234-0535 or visit the BeagleCast suggestions form
From the RSS feed:
- Running a BeagleBoard off of Batteries
- BeagleBoard cases with a MakerBot on Thingiverse
- New SGX Graphics Driver Release 4.03.00.02 for Linux now available!
- DVI-D to VGA converter for BeagleBoard-xM, and an issue to be fixed with the current BeagleBoardToys VGA adapter when using a BeagleBoard-xM
- Kinect + BeagleBoard-xM (now need GLES)
- Leverett and Wasson Win Texas Instruments Beagle Board Design Challenge
- Toolchain, Check! Kernel, Check! - Cross Linux From Scratch
- Twitter badge on the blog page
- Lots of interesting #BeagleBoard tweets
- Follow the #BeagleBoard RSS feed news items on Twitter
Upcoming events:
- Tweet @Jadon for free BeagleBoard hands-on training on March 26th at Indiana Linuxfest, running March 25-27
- Linux Collaboration Summit on April 6-8
- Embedded Linux Conference on April 11-13
- Maker Faire Bay Area on May 21-22
BeagleBoard-xM Rev C HW and SW Update:
- New release candidate from Angstrom
- FAT vs. ext2
- boot.scr vs uEnv.txt change is not welcomed by all
- Why won't old MLO and u-boot work with xM rev C?
Hot Topics on the BeagleBoard Google Group:
- Mark Yoder's ECE497 class with some students using the Kinect
- Collecting Google Summer of Code project ideas such as the car PC project
Future topics and guests
The theme music for BeagleCast was created and provided by Alasdair Drake.