Podcasts about Open source

A broad concept page for open source.

  • 4,516 podcasts
  • 20,722 episodes
  • 47m avg duration
  • 5 daily new episodes
  • Latest episode: Feb 13, 2026

Popularity (chart: 2019-2026)


    Latest podcast episodes about Open source

    Zebras & Unicorns
    "OpenClaw darf nicht ClosedClaw werden" ("OpenClaw must not become ClosedClaw")

    Feb 13, 2026 · 30:53


    OpenClaw hits the zeitgeist - and confronts its developer with a big decision. Peter Steinberger, whose open-source AI agent is causing a stir worldwide, has offers from Meta and OpenAI on the table. Where does the hype come from, where do the technical pitfalls lie, and why is the open-source AI agent a major threat to Big Tech? Jakob Steinschaden, co-founder of Trending Topics and newsrooms, and Matteo Rosoli, CEO of newsrooms, discuss in today's podcast:

    The top AI news from the past week, every ThursdAI

    Hey dear subscriber, Alex here from W&B, let me catch you up! This week started with Anthropic releasing /fast mode for Opus 4.6, continued with ByteDance's reality-shattering video model called SeeDance 2.0, and then the open-weights folks pulled up! Z.ai released GLM-5, a 744B top-ranking coder beast, and then today MiniMax dropped a heavily RL'd MiniMax M2.5, showing 80.2% on SWE-bench, nearly beating Opus 4.6! I interviewed Lou from Z.AI and Olive from MiniMax on the show today, back to back, btw - very interesting conversations, starting after the TL;DR!

So while the open-source models were catching up to the frontier, OpenAI and Google both dropped breaking news (again, during the show), with Gemini 3 Deep Think shattering ARC-AGI 2 (84.6%) and Humanity's Last Exam (48% w/o tools)... Just an absolute beast of a model update. And OpenAI launched their Cerebras collaboration, with GPT 5.3 Codex Spark, supposedly running at over 1,000 tokens per second (but not as smart). Also, crazy week for us at W&B as we scrambled to host GLM-5 on the day of release, and we're working on dropping Kimi K2.5 and MiniMax both on our inference service! As always, all show notes are at the end - let's DIVE IN!

ThursdAI - AI is speeding up, don't get left behind! Sub and I'll keep you up to date with a weekly catch-up.

Open Source LLMs

Z.ai launches GLM-5 - #1 open-weights coder with 744B parameters (X, HF, W&B inference)

The breakaway open-source model of the week is undeniably GLM-5 from Z.ai (formerly known to many of us as Zhipu AI). We were honored to have Lou, the Head of DevRel at Z.ai, join us live on the show at 1:00 AM Shanghai time to break down this monster of a release. GLM-5 is massive - not something you run at home (hey, that's what W&B inference is for!) - but it's absolutely a model worth thinking about if your company has on-prem requirements and can't share code with OpenAI or Anthropic.

They jumped from 355B parameters in GLM-4.5 and expanded their pre-training data to a whopping 28.5T tokens to get these results. But Lou explained that it's not only about data: they adopted DeepSeek's sparse attention (DSA) to help preserve deep reasoning over long contexts (this one has 200K). Lou summed up the generational leap from version 4.5 to 5 perfectly in four words: "Bigger, faster, better, and cheaper." I dunno about faster - this may be one of those models you hand the more difficult tasks to - but definitely cheaper, at $1 input / $3.20 output per 1M tokens on W&B! While the evaluations are ongoing, one interesting tidbit from Artificial Analysis was that this model scores the lowest on their hallucination-rate bench! Think about this for a second: this model is neck-and-neck with Opus 4.5, and if Anthropic hadn't released Opus 4.6 just last week, this would be an open-weights model that rivals Opus - one of the best models the Western foundational labs, with all their investments, have out there. Absolutely insane times.

MiniMax drops M2.5 - 80.2% on SWE-bench Verified with just 10B active parameters (X, Blog)

Just as we wrapped up our conversation with Lou, MiniMax dropped their release (though not the weights yet - we're waiting ⏰), and then Olive Song, a senior RL researcher on the team, joined the pod, and she was an absolute wealth of knowledge! Olive shared that they achieved an unbelievable 80.2% on SWE-bench Verified. Digest this for a second: a 10B-active-parameter open-source model is directly trading blows with Claude Opus 4.6 (80.8%) on one of the hardest real-world software engineering benchmarks we currently have - while being (Alex checks notes...) 20x cheaper and much faster to run? Apparently their fast version gets up to 100 tokens/s.

Olive shared the "not so secret" sauce behind this punch-above-its-weight performance. The massive leap in intelligence comes entirely from their highly decoupled reinforcement-learning framework called "Forge." They heavily optimized not just for correct answers, but for the end-to-end time it takes to perform a task. In the era of bloated reasoning models that spit out ten thousand "thinking" tokens before writing a line of code, MiniMax trained their model across thousands of diverse environments to use fewer tools, think more efficiently, and execute plans faster. As Olive noted, less time waiting and fewer tools called means less money spent by the user. (As confirmed by @swyx at the Windsurf leaderboard, developers often prefer fast-but-good-enough models.) I really enjoyed the interview with Olive - I really recommend you listen to the whole conversation starting at 00:26:15. Kudos to MiniMax on the release (and I'll keep you updated when we add this model to our inference service).

Big Labs and breaking news

There's a reason the show is called ThursdAI, and today that reason is clearer than ever: AI's biggest updates happen on a Thursday, often live during the show. This happened 2 times last week and 3 times today - first with MiniMax and then with both Google and OpenAI!

Google previews Gemini 3 Deep Think, top reasoning intelligence - SOTA ARC-AGI 2 at 84% & SOTA HLE 48.4% (X, Blog)

I literally went
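The per-token pricing quoted for GLM-5 makes the "definitely cheaper" claim easy to sanity-check. A minimal sketch - the $1 / $3.20 per-1M-token rates come from the episode notes above, but the token counts are invented for illustration:

```python
# Back-of-envelope cost for a GLM-5 call at the quoted W&B rates.
# Rates come from the episode notes; the token counts below are made up.
PRICE_IN_PER_M = 1.00   # USD per 1M input tokens
PRICE_OUT_PER_M = 3.20  # USD per 1M output tokens

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Total cost in USD for one request at the quoted rates."""
    return (input_tokens / 1_000_000) * PRICE_IN_PER_M \
         + (output_tokens / 1_000_000) * PRICE_OUT_PER_M

# A hypothetical context-heavy agent session: 400k tokens read, 60k generated.
print(round(cost_usd(400_000, 60_000), 3))  # 0.592
```

At these rates even a long agentic coding session stays well under a dollar, which is the cost argument the episode is making.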

    BSD Now
    650: Korn Chips

    Feb 12, 2026 · 57:21


    AT&T's $2,000 shell, ZFS scrubs and data integrity, FFS backups, a FreeBSD home NAS, and more.

NOTES: This episode of BSDNow is brought to you by Tarsnap and the BSDNow Patreon.

Headlines
One too many words on AT&T's $2,000 Korn shell and other Usenet topics
Understanding ZFS Scrubs and Data Integrity

News Roundup
FFS Backup
FreeBSD: Home NAS, part 1 - configuring ZFS mirror (RAID1) (8 more parts!)

Beastie Bits
The BSD Proposal
UNIX Magic Poster
Haiku OS Pulls In Updated Drivers From FreeBSD 15
FreeBSD 15.0 VNET Jails
Call for NetBSD testing

Tarsnap
This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups.

Feedback/Questions
Gary

Links
Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv. Join us and other BSD fans in our BSD Now Telegram channel.

    Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

    This podcast features Gabriele Corso and Jeremy Wohlwend, co-founders of Boltz and authors of the Boltz Manifesto, discussing the rapid evolution of structural biology models from AlphaFold to their own open-source suite, Boltz-1 and Boltz-2. The central thesis is that while single-chain protein structure prediction is largely "solved" through evolutionary hints, the next frontier lies in modeling complex interactions (protein-ligand, protein-protein) and generative protein design, which Boltz aims to democratize via open-source foundations and scalable infrastructure.

Full Video Pod: on YouTube!

Timestamps
* 00:00 Introduction to Benchmarking and the "Solved" Protein Problem
* 06:48 Evolutionary Hints and Co-evolution in Structure Prediction
* 10:00 The Importance of Protein Function and Disease States
* 15:31 Transitioning from AlphaFold 2 to AlphaFold 3 Capabilities
* 19:48 Generative Modeling vs. Regression in Structural Biology
* 25:00 The "Bitter Lesson" and Specialized AI Architectures
* 29:14 Development Anecdotes: Training Boltz-1 on a Budget
* 32:00 Validation Strategies and the Protein Data Bank (PDB)
* 37:26 The Mission of Boltz: Democratizing Access and Open Source
* 41:43 Building a Self-Sustaining Research Community
* 44:40 Boltz-2 Advancements: Affinity Prediction and Design
* 51:03 BoltzGen: Merging Structure and Sequence Prediction
* 55:18 Large-Scale Wet Lab Validation Results
* 01:02:44 Boltz Lab Product Launch: Agents and Infrastructure
* 01:13:06 Future Directions: Developability and the "Virtual Cell"
* 01:17:35 Interacting with Skeptical Medicinal Chemists

Key Summary

Evolution of Structure Prediction & Evolutionary Hints
* Co-evolutionary Landscapes: The speakers explain that breakthrough progress in single-chain protein prediction relied on decoding evolutionary correlations where mutations in one position necessitate mutations in another to conserve 3D structure.
* Structure vs. Folding: They differentiate between structure prediction (getting the final answer) and folding (the kinetic process of reaching that state), noting that the field is still quite poor at modeling the latter.
* Physics vs. Statistics: RJ posits that while models use evolutionary statistics to find the right "valley" in the energy landscape, they likely possess a "light understanding" of physics to refine the local minimum.

The Shift to Generative Architectures
* Generative Modeling: A key leap in AlphaFold 3 and Boltz-1 was moving from regression (predicting one static coordinate) to a generative diffusion approach that samples from a posterior distribution.
* Handling Uncertainty: This shift allows models to represent multiple conformational states and avoid the "averaging" effect seen in regression models when the ground truth is ambiguous.
* Specialized Architectures: Despite the "bitter lesson" of general-purpose transformers, the speakers argue that equivariant architectures remain vastly superior for biological data due to the inherent 3D geometric constraints of molecules.

Boltz-2 and Generative Protein Design
* Unified Encoding: Boltz-2 (and BoltzGen) treats structure and sequence prediction as a single task by encoding amino acid identities into the atomic composition of the predicted structure.
* Design Specifics: Instead of a sequence, users feed the model blank tokens and a high-level "spec" (e.g., an antibody framework), and the model decodes both the 3D structure and the corresponding amino acids.
* Affinity Prediction: While model confidence is a common metric, Boltz-2 focuses on affinity prediction - quantifying exactly how tightly a designed binder will stick to its target.

Real-World Validation and Productization
* Generalized Validation: To prove the model isn't just "regurgitating" known data, Boltz tested its designs on 9 targets with zero known interactions in the PDB, achieving nanomolar binders for two-thirds of them.
* Boltz Lab Infrastructure: The newly launched Boltz Lab platform provides "agents" for protein and small molecule design, optimized to run 10x faster than open-source versions through proprietary GPU kernels.
* Human-in-the-Loop: The platform is designed to convert skeptical medicinal chemists by allowing them to run parallel screens and use their intuition to filter model outputs.

Transcript

RJ [00:05:35]: But the goal remains to, like, you know, really challenge the models - like, how well do these models generalize? And, you know, we've seen in some of the latest CASP competitions, like, while we've become really, really good at proteins, especially monomeric proteins, you know, other modalities still remain pretty difficult. So it's really essential, you know, in the field that there are, like, these efforts to gather, you know, benchmarks that are challenging. So it keeps us in line, you know, about what the models can do or not.

Gabriel [00:06:26]: Yeah, it's interesting you say that. Like, in some sense, at CASP 14, a problem was solved and, like, pretty comprehensively, right? But at the same time, it was really only the beginning. So you can say, like, what was the specific problem you would argue was solved? And then, like, you know, what is remaining, which is probably quite open.

RJ [00:06:48]: I think we'll steer away from the term solved, because we have many friends in the community who get pretty upset at that word. And I think, you know, fairly so. But the problem that a lot of progress was made on was the ability to predict the structure of single-chain proteins. So proteins can, like, be composed of many chains. And single-chain proteins are, you know, just a single sequence of amino acids. And one of the reasons that we've been able to make such progress is also because we take a lot of hints from evolution. So the way the models work is that, you know, they sort of decode a lot of hints that come from evolutionary landscapes.
So if you have, like, you know, some protein in an animal, and you go find the similar protein across, like, you know, different organisms, you might find different mutations in them. And as it turns out, if you take a lot of the sequences together and you analyze them, you see that some positions in the sequence tend to evolve at the same time as other positions in the sequence - sort of this, like, correlation between different positions. And it turns out that that is typically a hint that these two positions are close in three dimensions. So part of the breakthrough has been, like, our ability to also decode that very, very effectively. But what it implies also is that in the absence of that co-evolutionary landscape, the models don't quite perform as well. And so, you know, I think when that information is available, maybe one could say, you know, the problem is, like, somewhat solved from the perspective of structure prediction; when it isn't, it's much more challenging. And I think it's also worth differentiating - sometimes we conflate them a little bit - structure prediction and folding. Folding is the more complex process of actually understanding, like, how it goes from, like, this disordered state into, like, a structured state. And I don't think we've made that much progress on that. But the idea of, like, yeah, going straight to the answer - we've become pretty good at that.

Brandon [00:08:49]: So there's this protein that is, like, just a long chain and it folds up. Yeah. And so we're good at getting from that long chain, in whatever form it was originally, to the thing. But we don't know how it necessarily gets to that state. And there might be intermediate states that it's in sometimes that we're not aware of.

RJ [00:09:10]: That's right. And that relates also to, like, you know, our general ability to model the different - you know, proteins are not static. They move, they take different shapes based on their energy states.
And I think we are also not that good at understanding the different states that the protein can be in, and at what frequency, what probability. So I think the two problems are quite related in some ways. Still a lot to solve. But I think it was very surprising at the time, you know, that even with these evolutionary hints we were able to, you know, make such dramatic progress.

Brandon [00:09:45]: So I want to ask, why do the intermediate states matter? But first, I kind of want to understand, why do we care what proteins are shaped like?

Gabriel [00:09:54]: Yeah, I mean, proteins are kind of the machines of our body. You know, the way that all the processes that we have in our cells work is typically through proteins - sometimes other molecules - sort of intermediate interactions. And through those interactions, we have all sorts of cell functions. And so when we try to understand, you know, a lot of biology - how our body works, how diseases work - we often try to boil it down to: okay, what is going right in the case of, you know, our normal biological function, and what is going wrong in the case of the disease state. And we boil it down to kind of, you know, proteins and other molecules and their interactions. And so when we try predicting the structure of proteins, it's critical to, you know, have an understanding of those interactions. It's a bit like the difference between having a list of parts that you would put in a car and seeing the car in its final form - you know, seeing the car really helps you understand what it does. On the other hand, going to your question of, you know, why do we care about how the protein folds - or, you know, how the car is made - to some extent it's that, you know, sometimes something goes wrong: there are, you know, cases of proteins misfolding.
In some diseases and so on, if we don't understand this folding process, we don't really know how to intervene.

RJ [00:11:30]: There's this nice line in, I think it's the AlphaFold2 manuscript, where they sort of discuss also, like, why we were even hopeful that we can target the problem in the first place. And there's this notion that, like, well, for proteins that fold, the folding process is almost instantaneous, which is a strong, like, you know, signal that, yeah, we might be able to predict this very, like, constrained thing that the protein does so quickly. And of course that's not the case for, you know, all proteins, and there's a lot of, like, really interesting mechanisms in the cells, but yeah, I remember reading that and thought, yeah, that's somewhat of an insightful point.

Gabriel [00:12:10]: I think one of the interesting things about the protein folding problem - and part of the reason why people thought it was impossible - is that it used to be studied as kind of like a classical example of an NP problem. Like, there are so many different, you know, types of shapes that these amino acids could take, and this grows combinatorially with the size of the sequence. And so there used to be a lot of, actually, kind of more theoretical computer science thinking about and studying protein folding as an NP problem. And so it was very surprising, also from that perspective, kind of seeing machine learning solve it. Clearly, there is some, you know, signal in those sequences - through evolution, but also through other things that, you know, us as humans are probably not really able to, uh, to understand, but that these models have learned.

Brandon [00:13:07]: And so Andrew White - we were talking to him a few weeks ago, and he said that he was following the development of this, and that there were actually ASICs that were developed just to solve this problem. So, again, there were many, many, many millions of computational hours spent trying to solve this problem before AlphaFold. And just to be clear, one thing that you mentioned was that there's this kind of co-evolution of mutations, and that you see this again and again in different species. So explain: why does that give us a good hint that they're close by to each other? Yeah.

RJ [00:13:41]: Um, like, think of it this way: that, you know, if I have, you know, some amino acid that mutates, it's going to impact everything around it, right? In three dimensions. And so it's almost like the protein, through several probably random mutations and evolution, like, you know, ends up sort of figuring out that this other amino acid needs to change as well for the structure to be conserved. Uh, so the whole principle is that the structure is probably largely conserved, you know, because there's this function associated with it. And so it's really sort of like different positions compensating for each other.

Brandon [00:14:17]: I see. Those hints in aggregate give us a lot. So you can start to look at what kinds of information about what is close to each other, and then you can start to look at what kinds of folds are possible given the structure, and then what is the end state. And therefore you can make a lot of inferences about what the actual total shape is.

RJ [00:14:30]: Yeah, that's right.
It's almost like, you know, you have this big, like, three-dimensional valley, you know, where you're sort of trying to find, like, these low-energy states, and there's so much to search through that it's almost overwhelming. But these hints, they sort of maybe put you in an area of the space that's already, like, kind of close to the solution - maybe not quite there yet. And there's always this question of, like, how much physics are these models learning, you know, versus, like, just pure, like, statistics. And, like, I think one of the things, at least I believe, is that once you're in that sort of approximate area of the solution space, then the models have, like, some understanding, you know, of how to get you to, like, you know, the lower-energy state. And so maybe you have some light understanding of physics, but maybe not quite enough, you know, to know how to, like, navigate the whole space. Right. Okay.

Brandon [00:15:25]: So we need to give it these hints to kind of get into the right valley, and then it finds the minimum or something. Yeah.

Gabriel [00:15:31]: One interesting explanation about how AlphaFold works, that I think is quite insightful - of course it doesn't cover the entirety of what AlphaFold does - is one I'm going to borrow from Sergey Ovchinnikov from MIT. The interesting thing about AlphaFold is it's got this very peculiar architecture that we have seen, you know, used, and this architecture operates on this, you know, pairwise context between amino acids. And so the idea is that probably the MSA gives you this first hint about what potential amino acids are close to each other.

Brandon: MSA is multiple sequence alignment?

Gabriel: Exactly. Yeah. Exactly. This evolutionary information. Yeah. And, you know, from this evolutionary information about potential contacts, then it's almost as if the model is sort of running some kind of, you know, Dijkstra-like algorithm, where it's sort of decoding: okay, these have to be close. Okay, then if these are close and this is connected to this, then this has to be somewhat close. And so you decode this, and that becomes basically a pairwise distance matrix. And then from this rough pairwise distance matrix, you decode kind of the actual potential structure.

Brandon [00:16:42]: Interesting. So there's kind of two different things going on - the kind of coarse-grain and then the fine-grain optimizations. Interesting. Yeah. Very cool. You mentioned AlphaFold3, so maybe now is a good time to move on to that. So yeah, AlphaFold2 came out, and it was, like, I think fairly groundbreaking for this field. Everyone got very excited. A few years later, AlphaFold3 came out - and maybe for some more history, like, what were the advancements in AlphaFold3? And then I think after that we'll talk a bit about how it connects to Boltz. But anyway. Yeah.

Gabriel [00:16:53]: So after AlphaFold2 came out, you know, Jeremy and I got into the field, and with many others, you know, the clear problem that was, you know, obvious after that was: okay, now we can do individual chains. Can we do interactions - interactions between different proteins, proteins with small molecules, proteins with other molecules? And so, why are interactions important? Interactions are important because to some extent that's kind of the way that, you know, these machines, you know, these proteins have a function - you know, the function comes from the way that they interact with other proteins and other molecules. Actually, in the first place, you know, the individual machines are often, as Jeremy was mentioning, not made of a single chain, but made of multiple chains. And then these multiple chains interact with other molecules to give the function to those.
And on the other hand, you know, when we try to intervene on these interactions - think about, like, a disease, think about, like, a biosensor, or many other ways - we are trying to design molecules or proteins that interact in a particular way with what we would call a target protein, or target. You know, this problem, after AlphaFold2, became clear as kind of one of the biggest problems in the field to solve. Many groups, including ours and others, you know, started making contributions to this problem of trying to model these interactions. And AlphaFold3 was, you know, a significant advancement on the problem of modeling interactions. And one of the interesting things they were able to do: while, you know, much of the rest of the field tried to model different interactions separately - you know, how a protein interacts with small molecules, how a protein interacts with other proteins, how RNA or DNA have their structure - they put everything together and, you know, trained very large models, with a lot of advances, including changing some of the key architectural choices, and managed to get a single model that was able to set new state-of-the-art performance across all of these different modalities - whether that was protein-small molecule, which is critical to developing new drugs, protein-protein, or understanding, you know, interactions of proteins with RNA and DNA and so on.

Brandon [00:19:39]: Just to satisfy the AI engineers in the audience, what were some of the key architectural and data changes that made that possible?

Gabriel [00:19:48]: Yeah, so one critical one - that was not necessarily unique to AlphaFold3; there were actually a few other teams, including ours, in the field that proposed this - was moving from, you know, modeling structure prediction as a regression problem.
So where there is a single answer and you're trying to shoot for that answer, to a generative modeling problem where you have a posterior distribution of possible structures and you're trying to sample this distribution. And this achieves two things. One is it starts to allow us to try to model more dynamic systems. As we said, you know, some of these proteins can actually take multiple structures. And so, you know, you can now model that, you know, through modeling the entire distribution. But on the second hand, from more kind of core modeling questions, when you move from a regression problem to a generative modeling problem, you are really tackling the way that you think about uncertainty in the model in a different way. So if you think about "I'm undecided between different answers," what's going to happen in a regression model is that, you know, I'm going to try to make an average of those different answers that I had in mind. When you have a generative model, what you're going to do is, you know, sample all these different answers and then maybe use separate models to analyze those different answers and pick out the best. So that was kind of one of the critical improvements. The other improvement is that they significantly simplified, to some extent, the architecture - especially of the final model that takes those pairwise representations and turns them into an actual structure. And that now looks a lot more like a more traditional transformer than, you know, the very specialized equivariant architecture that there was in AlphaFold2.

Brandon [00:21:41]: So this is the bitter lesson, a little bit.

Gabriel [00:21:45]: There is some aspect of the bitter lesson, but the interesting thing is that it's very far from, you know, being, like, a simple transformer. This field is one of the, I argue, very few fields in applied machine learning where we still have architectures that are very specialized.
And, you know, there are many people that have tried to replace these architectures with, you know, simple transformers. And, you know, there is a lot of debate in the field, but I think most of the consensus is that, you know, the performance that we get from the specialized architectures is vastly superior to what we get through a simple transformer. Another interesting thing - staying on the modeling and machine learning side - which I think is somewhat counterintuitive coming from some of the other fields and applications, is that scaling hasn't really worked the same in this field. Now, you know, models like AlphaFold2 and AlphaFold3 are, you know, still very large models.

RJ [00:29:14]: ...in a place, I think, where we had, you know, some experience working with the data and working with this type of model. And I think that put us already in, like, a good place to, you know, produce it quickly. And, you know, I would even say, like, I think we could have done it quicker. The problem was, like, for a while we didn't really have the compute, and so we couldn't really train the model. And actually, we only trained the big model once. That's how much compute we had - we could only train it once. And so, like, while the model was training, we were, like, finding bugs left and right - a lot of them that I wrote. And, like, I remember I was, like, sort of, you know, doing, like, surgery in the middle: stopping the run, making the fix, like, relaunching. And yeah, we never actually went back to the start. We just, like, kept training it with, like, the bug fixes along the way, which would be impossible to reproduce now. Yeah, yeah - that model has gone through such a curriculum that, you know, it learned some weird stuff.
But yeah, somehow, by miracle, it worked out.

Gabriel [00:30:13]: The other funny thing is that we were training most of that model on a cluster from the Department of Energy. But that's sort of, like, a shared cluster that many groups use. And so we were basically training the model for two days, and then it would go back to the queue and stay a week in the queue. Oh, yeah. And so it was pretty painful. And so towards the end, with Evan, the CEO of Genesis - basically, you know, I was telling him a bit about the project and, you know, telling him about this frustration with the compute. And so luckily, you know, he offered to help. And so we got the help from Genesis to, you know, finish up the model. Otherwise, it probably would have taken a couple of extra weeks.

Brandon [00:30:57]: Yeah, yeah.

Brandon [00:31:02]: And then there's some progression from there.

Gabriel [00:31:06]: Yeah, so I would say that Boltz-1, but also these other sets of models that came out around the same time, were a big leap from, you know, the previous open-source models, and, you know, really approaching the level of AlphaFold3. But I would still say that, you know, even to this day, there are, you know, some specific instances where AlphaFold3 works better. I think one common example is antibody-antigen prediction, where, you know, AlphaFold3 still seems to have an edge in many situations. Obviously, these are somewhat different models - you run them, you obtain different results - so it's not always the case that one model is better than the other, but in aggregate we still saw, especially at the time, AlphaFold3, you know, still having a bit of an edge.

Brandon [00:32:00]:
We should talk about this more when we talk about BoltzGen, but, like, how do you know one model is better than the other? Like, I make a prediction, you make a prediction - how do you know?

Gabriel [00:32:11]: Yeah, so the great thing about structure prediction - and, you know, once we go into the design space of designing new small molecules, new proteins, this becomes a lot more complex - but the great thing about structure prediction is that, a bit like, you know, CASP was doing, basically the way that you can evaluate the models is that you train a model on the structures that were, you know, released across the field up until a certain time. And, you know, one of the things that we didn't talk about that was really critical in all this development is the PDB, which is the Protein Data Bank. It's this common resource, basically a common database where every biologist publishes their structures. And so we can, you know, train on all the structures that were put in the PDB until a certain date. And then we basically look for recent structures: okay, which structures look pretty different from anything that was published before? Because we really want to try to understand generalization.

Brandon [00:33:13]: And then on these new structures, we evaluate all these different models. And so you just need to know when AlphaFold3 was trained - you know, you intentionally train to the same date or something like that. Exactly. Right. Yeah.

Gabriel [00:33:24]: And so this is kind of the way that you can somewhat easily compare these models - obviously, that assumes that, you know, the training...

Brandon: You've always been very passionate about validation. I remember, like, DiffDock, and then there was, like, DiffDock-L and DockGen. You've thought very carefully about this in the past.
Actually, I think DockGen is a really funny story that, I don't know if you want to talk about that. It's interesting. Yeah, I think one of the amazing things about putting things open source is that we get a ton of feedback from the field. Sometimes we get great feedback from people who really like it, but honestly, most of the time, and maybe this is the most useful feedback, it's people sharing where it doesn't work. At the end of the day, and this is true across other fields of machine learning, to make progress it's critical to set clear benchmarks, and as you start making progress on certain benchmarks, you need to improve the benchmarks and make them harder and harder. This is how the field operates. The example of DockGen: we published this initial model called DiffDock in my first year of PhD, which was one of the early models to predict interactions between proteins and small molecules, which we brought out a year after AlphaFold 2 was published. On the one hand, on the benchmarks we were using at the time, DiffDock was doing really well, outperforming some of the traditional physics-based methods. But on the other hand, when we started giving these tools to many biologists, one example being the group of Nick Polizzi at Harvard that we collaborated with, we started noticing a clear pattern where, for proteins that were very different from the ones the model was trained on, the model was struggling. And so it seemed clear that this is probably where we should put our focus.
And so we first developed, with Nick and his group, a new benchmark, and then went after it and asked, okay, what can we change about the current architecture to improve this kind of generalization? And that's the same thing we're still doing today: where does the model not work, and then, once we have that benchmark, let's throw at it any ideas we have about the problem.

RJ [00:36:15]: And there's a lot of healthy skepticism in the field, which I think is great. It's very clear that there are a ton of things the models don't really work well on, but one thing that's probably undeniable is the pace of progress, how much better we're getting every year. And so if you assume any constant rate of progress moving forward, I think things are going to look pretty cool at some point in the future.

Gabriel [00:36:42]: ChatGPT was only three years ago. Yeah, I mean, it's wild, right?

RJ [00:36:45]: Yeah, it's one of those things. Even being in the field, you don't see it coming, you know? And hopefully we'll continue to have as much progress as we've had the past few years.

Brandon [00:36:55]: So this is maybe an aside, but I'm really curious. You get this great feedback from the community by being open source. My question is partly, okay, if you open source, everyone can copy what you did, but it's also maybe about balancing priorities, right? The community is saying, I want this, there are all these problems with the model. Yeah, yeah. But my customers don't care, right? So how do you think about that?
Yeah.

Gabriel [00:37:26]: So I would say a couple of things. One is, part of our goal with Boltz, and this is also established as the mission of the public benefit company that we started, is to democratize access to these tools. But one of the reasons we realized Boltz needed to be a company, that it couldn't just be an academic project, is that putting a model on GitHub is definitely not enough to get chemists and biologists across academia, biotech, and pharma to use your model in their therapeutic programs. And so a lot of what we think about at Boltz, beyond just the models, is all the layers that come on top of the models to get from those models to something that can really enable scientists in the industry. That goes into building the right workflows that take in the data and directly answer the questions the chemists and the biologists are asking, and then also building the infrastructure. All this to say that even with models fully open, we see a ton of potential for products in this space. And the critical part about a product is that, even with an open-source model, running the model is not free. As we were saying, these are pretty expensive models, and, maybe we'll get into this, these days we're seeing pretty dramatic inference-time scaling of these models, where the more you run them, the better the results are. But then you get to a point where compute, and compute cost, becomes a critical factor.
And so putting a lot of work into building the right infrastructure and the right optimizations really allows us to provide a much better service than the open-source models alone. That said, even though with a product we can provide a much better service, I do still think, and we will continue to put a lot of our models out as open source, because the critical role of open-source models is helping the community make progress on the research, from which we all benefit. So on the one hand we'll continue to release some of our base models as open source so the field can build on top of them, and, as we discussed earlier, we learn a ton from the way the field uses and builds on our models. But on the other hand, we'll build a product that gives the best experience possible to scientists, so that a chemist or a biologist doesn't need to spin up a GPU and set up our open-source model in a particular way. A bit like, even though I am a machine learning scientist, I don't necessarily take an open-source LLM and spin it up myself; I just open the ChatGPT app or Claude Code and use it as an amazing product. We want to give the same experience on this front.

Brandon [00:40:40]: I heard a good analogy yesterday that a surgeon doesn't want the hospital to design a scalpel, right?

Brandon [00:40:48]: So just buy the scalpel.

RJ [00:40:50]: You wouldn't believe the number of people, even in my short time between AlphaFold 3 coming out and the end of the PhD, who would reach out just for us to run AlphaFold 3 for them, or things like that.
Or Boltz, in our case. Just because it's not that easy to do that if you're not a computational person. And part of the goal here is also that we continue to build the interface with computational folks, obviously, but that the models are also accessible to a larger, broader audience. And that comes from good interfaces and things like that.

Gabriel [00:41:27]: I think one really interesting thing about Boltz is that with the release of it, you didn't just release a model, you created a community. Yeah. That community grew very quickly. Did that surprise you? And what has the evolution of that community been, and how has it fed into Boltz?

RJ [00:41:43]: If you look at its growth, it's very much that when we release a new model, there's a big jump. But yeah, it's been great. We have a Slack community that has thousands of people in it. And it's actually self-sustaining now, which is the really nice part, because it's almost overwhelming to try to answer everyone's questions and help; it's really difficult for the few people that we were. But it ended up that people would answer each other's questions and help one another. And so the Slack has been kind of self-sustaining, and that's been really cool to see.

RJ [00:42:21]: And that's the Slack part, but then also obviously on GitHub as well we've had a nice community. I think we also aspire to be even more active on it than we've been in the past six months, which has been a bit challenging for us. But.
Yeah, the community has been really great, and there are a lot of papers that have come out with new evolutions on top of Boltz. It surprised us to some degree, because there are a lot of models out there, and people converging on this one was really cool. And I think it speaks to the importance, when you put code out, of putting a lot of emphasis on making it as easy to use as possible, something we thought a lot about when we released the code base. It's far from perfect, but, you know.

Brandon [00:43:07]: Do you think that was one of the factors that caused your community to grow, just the focus on being easy to use and accessible? I think so.

RJ [00:43:14]: Yeah. And we've heard it from a few people over the years now. And some people still think it should be a lot nicer, and they're right. But yeah, I think it was, at the time, maybe a little bit easier than other things.

Gabriel [00:43:29]: The other part that I think led to the community, and to some extent the trust in what we put out, is the fact that it's not really been just one model. And maybe we'll talk about it: after Boltz 1, there were maybe another couple of models released or open-sourced soon after. We continued that open-source journey with Boltz 2, where we were not only improving structure prediction but also starting to do affinity prediction, understanding the strength of the interactions between these different molecules, which is this critical property that you often want to optimize in discovery programs.
And then, more recently, also a protein design model. And so we've been building this suite of models that come together and interact with one another, where there is almost an expectation, which we take very much to heart, that across the entire suite of different tasks we have the best, or among the best, models out there, so that our open-source tools can be the go-to models for everybody in the industry. I really want to talk about Boltz 2, but before that, one last question in this direction: was there anything about the community that surprised you? Was someone doing something where you thought, why would you do that, that's crazy? Or, that's actually genius, I never would have thought of that?

RJ [00:45:01]: I mean, we've had many contributions. I think some of the interesting ones... we had this one individual who wrote a complex GPU kernel for part of the architecture. The funny thing is that piece of the architecture had been there since AlphaFold 2, and I don't know why it took Boltz for this person to decide to do it, but that was a really great contribution. We've had a bunch of others, people figuring out ways to hack the model to do something, like cyclic peptides. I don't know if any other interesting ones come to mind.

Gabriel [00:45:41]: One cool one, and this was something that was initially proposed as a message in the Slack channel by Tim O'Donnell, was that there are some cases, for example the antibody-antigen interactions we discussed, where the models don't necessarily get the right answer.
What he noticed is that the models were somewhat stuck on where the antibody should bind. And so he basically ran this experiment: in this model, you can condition, you can give hints. So he gave hints to the model systematically: you should bind to the first residue, or you should bind to the 11th residue, or you should bind to the 21st residue, basically every 10 residues, scanning the entire antigen.

Brandon [00:46:33]: Residues are the...

Gabriel [00:46:34]: The amino acids. The amino acids, yeah. So the first amino acid, the 11th amino acid, and so on. So it's like doing a scan, conditioning the model on each of them, then looking at the confidence of the model in each of those cases and taking the top one. It's a very crude way of doing inference-time search. But surprisingly, for antibody-antigen prediction, it actually helped quite a bit. And so there are some interesting ideas where, as the developer of the model, you say, wow, why would the model be so dumb? But it's very interesting, and it leads you to start thinking, okay, how can I do this not with brute force but in a smarter way?

RJ [00:47:22]: And so we've also done a lot of work in that direction. And that speaks to the power of scoring. We're seeing that a lot; I'm sure we'll talk about it more when we talk about BoltzGen. But our ability to take a structure and determine that that structure is good, somewhat accurate, whether that's a single chain or an interaction, is a really powerful way of improving the models.
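The residue-hint scan just described amounts to a crude inference-time search. A minimal sketch, where `predict_with_hint` is a hypothetical stand-in for a conditioned structure predictor (the fake confidence curve only exists to make the example runnable):

```python
def predict_with_hint(antigen, hint_residue):
    """Placeholder for a real conditioned predictor returning a confidence.

    A real model would return (structure, confidence); here we fabricate a
    confidence that peaks near residue 30 so the sketch runs end to end.
    """
    confidence = 1.0 - abs(hint_residue - 30) / 100.0
    return {"hint": hint_residue, "confidence": confidence}

def epitope_scan(antigen_length, stride=10):
    """Condition on a 'bind here' hint every `stride` residues, keep the best."""
    candidates = [
        predict_with_hint("antigen", r)
        for r in range(0, antigen_length, stride)
    ]
    return max(candidates, key=lambda c: c["confidence"])

best = epitope_scan(antigen_length=60)
print(best["hint"])  # 30
```

The real work is done by the model's own confidence head: the scan simply reuses it as a ranking signal across conditioned runs.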
If you can sample a ton, and you assume that if you sample enough you're likely to have the good structure in there, then it really just becomes a ranking problem. And part of the inference-time scaling that Gabri was talking about is very much that: the more we sample, the more the ranking model ends up finding something it really likes. And so our ability to get better at ranking, I think, is also what's going to enable the next big breakthroughs. Interesting.

Brandon [00:48:17]: But I guess, my understanding is there's a diffusion model, you generate some stuff, and then, I guess it's just what you said, you rank it using a score, and then you finally... So can you talk about those different parts? Yeah.

Gabriel [00:48:34]: So, first of all, one of the critical beliefs we had when we started working on Boltz 1 was that structure prediction models are somewhat our field's version of foundation models, learning how proteins and other molecules interact, and then we can leverage that learning to do all sorts of other things. With Boltz 2, we leveraged that learning to do affinity prediction: understanding, if I give you this protein and this molecule, how tight that interaction is. For BoltzGen, what we did was take that foundation model and fine-tune it to design entirely new proteins. The way that works is that, for the protein you're designing, instead of feeding in an actual sequence, you feed in a set of blank tokens, and you train the model to predict both the structure of that protein.
The structure, and also what the different amino acids of that protein are. So basically the way BoltzGen operates is that you feed in a target protein that you may want to bind to, or a DNA, an RNA, and then you feed in a high-level design specification of what you want your new protein to be. For example, it could be an antibody with a particular framework, it could be a peptide, it could be many other things. And that's with natural language, or? It's basically prompting; we have this spec that you specify. You feed that spec to the model, and the model translates it into a set of tokens, a set of conditioning for the model, a set of blank tokens. And then, as part of the diffusion model, it decodes a new structure and a new sequence for your protein. And then we take that and, as Jeremy was saying, we try to score it: how good a binder is it to the original target?

Brandon [00:50:51]: You're using basically Boltz to predict the folding and the affinity to that molecule. And then that kind of gives you a score? Exactly.

Gabriel [00:51:03]: So you use this model to predict the folding, and then you do two things. One is that you predict the structure with something like Boltz 2, and then you compare that structure with what the design model predicted. In the field this is called consistency: you want to make sure that the structure you're predicting is actually what you were trying to design. And that gives you much better confidence that it's a good design. So that's the first filtering.
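The consistency filtering just described can be sketched as refold-and-compare: fold the designed sequence with an independent predictor and keep designs whose refolded structure matches the design model's output. Here `refold`, the toy one-dimensional RMSD, and the 2.0 threshold are all hypothetical stand-ins, not the actual Boltz pipeline.

```python
import math

def rmsd(a, b):
    """Toy coordinate-wise root-mean-square deviation between two point lists."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def consistency_filter(designs, refold, threshold=2.0):
    """Keep (sequence, designed_coords) pairs whose refold matches the design."""
    kept = []
    for seq, designed_coords in designs:
        refolded = refold(seq)
        if rmsd(designed_coords, refolded) < threshold:
            kept.append(seq)
    return kept

designs = [("SEQA", [0.0, 1.0, 2.0]), ("SEQB", [0.0, 1.0, 2.0])]
# Hypothetical refolder: SEQA refolds close to its design, SEQB does not.
refold = lambda s: [0.1, 1.1, 2.1] if s == "SEQA" else [5.0, 6.0, 7.0]
print(consistency_filter(designs, refold))  # ['SEQA']
```

In practice the comparison is over full atomic structures and is combined with the confidence-based filtering described next, but the shape of the computation is the same.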
And the second filtering we did as part of the released BoltzGen pipeline is that we look at the confidence the model has in the structure. Now, unfortunately, going to your question of predicting affinity, confidence is not a very good predictor of affinity. And that's one of the things where we've actually made a ton of progress since we released Boltz 2.

Brandon [00:52:03]: And we have some new results that we're going to announce soon on the ability to get much better hit rates when, instead of relying on the confidence of the model, we directly predict the affinity of the interaction. Okay. Just backing up a minute. So your diffusion model actually predicts not only the protein sequence, but also the folding of it. Exactly.

Gabriel [00:52:32]: One of the big things we did differently compared to other models in the space, and there were some papers that had done this before, but we really scaled it up, was basically merging structure prediction and sequence prediction into almost the same task. The way BoltzGen works is that the only thing you're doing is predicting the structure, so the only supervision we give is supervision on the structure. But because the structure is atomic, and the different amino acids have different atomic compositions, from the way the model places the atoms we recover not only the structure but also the identity of the amino acid the model believed was there. So instead of having these two supervision signals, one discrete, one continuous, that somewhat don't interact well together...
We built an encoding of sequences into structures that allows us to use exactly the same supervision signal we were using for Boltz 2, which is largely similar to what AlphaFold 3 proposed, and which is very scalable. And we can use that to design new proteins. Oh, interesting.

RJ [00:53:58]: Maybe a quick shout-out to Hannes Stärk on our team, who did all this work. Yeah.

Gabriel [00:54:04]: Yeah, that was a really cool idea. Looking at the paper, there's this encoding where you add a bunch of atoms, which can be anything, and then they get rearranged and basically plopped on top of each other, and that encodes what the amino acid is. There's a unique way of doing this. It was such a cool, fun idea.

RJ [00:54:29]: I think that idea had existed before. Yeah, there were a couple of papers.

Gabriel [00:54:33]: Yeah, that had proposed this, and Hannes really took it to large scale.

Brandon [00:54:39]: A lot of the BoltzGen paper is dedicated to the validation of the model. In my opinion, all the people we talk to feel that validation in the wet lab, or whatever the appropriate real-world validation is, is the whole problem, or not the whole problem, but a big giant part of it. So can you talk a little bit about the highlights? Because to me the results are impressive, both from the perspective of the model and also just the effort that went into the validation by a large team.

Gabriel [00:55:18]: First of all, I should start by saying that both when we were at MIT, in Tommi Jaakkola's and Regina Barzilay's labs, and at Boltz, we are not a bio lab, and we are not a therapeutics company.
And so to some extent we were forced to look outside of our group, our team, to do the experimental validation. One of the things that Hannes on the team pioneered was the idea: can we go not just to one specific group with one specific system, maybe overfitting a bit to that system in trying to validate, but instead test this model across a very wide variety of different settings? Protein design is such a wide task, with all sorts of different applications, from therapeutics to biosensors and many others. So can we get validation that spans many different tasks? He basically put together something like 25 different academic and industry labs that committed to testing some of the designs from the model, some of this testing is still ongoing, and giving results back to us, in exchange for hopefully getting some great new sequences for their task. He was able to coordinate this very wide set of scientists, and already in the paper, I think, we shared results from eight to ten different labs: results from designing peptides targeting ordered proteins, peptides targeting disordered proteins, results of designing proteins that bind to small molecules, and results of designing nanobodies, across a wide variety of different targets. So that gave the paper a lot of validation of the model, validation that was broad.

Brandon [00:57:39]: And so would those be therapeutics for those animals, or are they relevant to humans as well?
They're relevant to humans as well.

Gabriel [00:57:45]: Obviously, you need to do some work to, quote unquote, humanize them, making sure they have the right characteristics so they're not toxic to humans, and so on.

RJ [00:57:57]: There are some approved medicines on the market that are nanobodies. There's a general pattern of trying to design things that are smaller: they're easier to manufacture, but at the same time that comes with other challenges, like potentially a little less selectivity than something that has more hands. But yeah, there's this big desire to design mini proteins, nanobodies, small peptides, things that are just great drug modalities.

Brandon [00:58:27]: Okay. I think we left off talking about validation in the lab. And I was very excited about seeing all the diverse validations that you've done. Can you go into some more detail about specific ones? Yeah.

RJ [00:58:43]: The nanobody one, I think we did, what was it, 15 targets? Is that correct? 14. 14 targets. So typically the way this works is that we make a lot of designs, on the order of tens of thousands, and then we rank them and pick the top; in this case N was 15 for each target. And then we measure the success rates: both how many targets we were able to get a binder for, and, more generally, out of all the binders that we designed, how many actually proved to be good binders. Some of the other ones involved, yeah, we had a cool one where there was a small molecule and we designed a protein that binds to it. That has a lot of interesting applications, for example, like Gabri mentioned, biosensing and things like that, which is pretty cool.
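The two success metrics RJ mentions, targets with at least one binder and the overall hit rate among tested designs, are a simple tally. The outcomes below are made up for illustration, not the paper's actual numbers.

```python
# Binding outcomes (True = binder) for the top-ranked designs tested per target.
results = {
    "targetA": [True, False, True],
    "targetB": [False, False, False],
    "targetC": [True, True, False],
}

# Per-target success: did at least one tested design bind?
targets_hit = sum(any(outcomes) for outcomes in results.values())

# Overall hit rate: fraction of all tested designs that bound.
tested = sum(len(outcomes) for outcomes in results.values())
binders = sum(sum(outcomes) for outcomes in results.values())

print(targets_hit, round(binders / tested, 2))  # 2 0.44
```

The two numbers answer different questions: per-target success measures whether a campaign can be rescued by its best candidate, while the overall rate measures how much wet-lab budget each success costs.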
We had a disordered protein, I think you mentioned also. And yeah, I think some of those were the highlights. Yeah.

Gabriel [00:59:44]: So the way we structured some of those validations was, on the one hand, validations across a whole set of different problems that the biologists we were working with came to us with. For example, in some of the experiments we were designing peptides that would target the RACC, which is a target involved in metabolism. And we had a number of other applications where we were trying to design peptides or other modalities against other therapeutically relevant targets, and we designed some proteins to bind small molecules. And then some of the other testing was really trying to get a broader sense of how the model performs, especially when tested on generalization. One of the things we found in the field was that a lot of the validation, especially outside of validation on specific problems, was done on targets that have a lot of known interactions in the training data. And so it's always a bit hard to understand how much these models are really just regurgitating or imitating what they've seen in the training data, versus really being able to design new proteins. And so one of the experiments we did was to take nine targets from the PDB, filtering for targets with no known interaction in the PDB. So the model has never seen this particular protein, or a similar protein, bound to another protein. There is no way the model can, from its training set, just tweak something and imitate that particular interaction. And so we took those nine proteins.
We worked with Adaptyv, a CRO, and basically tested 15 mini proteins and 15 nanobodies against each one of them. And the very cool thing we saw was that on two-thirds of those targets, we were able, from those 15 designs, to get nanomolar binders. Nanomolar is, roughly speaking, a measure of how strong the interaction is; roughly speaking, a nanomolar binder has approximately the binding strength you need for a therapeutic. Yeah. So maybe switching directions a bit: Boltz Lab was just announced this week, or was it last week? Yeah. This is your first, I guess, product, if you want to call it that. Can you talk about what Boltz Lab is, and what you hope people take away from it? Yeah.

RJ [01:02:44]: You know, as we mentioned at the very beginning, the goal with the product has been to address what the models don't do on their own. And there are largely two categories there; actually, I'll split it into three. The first one: it's one thing to predict a single interaction, for example a single structure. It's another to very effectively search a design space to produce something of value. What we found building this product is that there are a lot of steps involved, and you certainly need to accompany the user through them. One of those steps, for example, is the creation of the target itself: how do we make sure the model has a good enough understanding of the target, so we can design something against it? And there are all sorts of tricks you can do to improve a particular structure prediction. So that's the first stage. And then there's the stage of designing and searching the space efficiently.
For something like BoltzGen, for example, you design many things and then you rank them. For small molecules the process is a little more complicated: we also need to make sure the molecules are synthesizable. The way we do that is that we have a generative model that learns to use appropriate building blocks, such that it designs within a space we know is synthesizable. And so there's this whole pipeline, really, of different models involved in being able to design a molecule. That's been the first thing; we call them agents. We have a protein design agent and a small molecule design agent, and that's really at the core of what powers the Boltz Lab platform.

Brandon [01:04:22]: So these agents, are they a language model wrapper, or are they just your models and you're just calling them agents? Because they sort of perform a function on your behalf?

RJ [01:04:33]: They're more of a recipe, if you wish. And I think we use that term because of the complex pipelining and automation that goes into all this plumbing. So that's the first part of the product. The second part is the infrastructure. We need to be able to do this at very large scale for any one group that's doing a design campaign. Let's say you're designing, say, a hundred thousand possible candidates to find the good one: that is a very large amount of compute. For small molecules it's on the order of a few seconds per design; for proteins it can be a bit longer. And so ideally you want to do that in parallel, otherwise it's going to take you weeks.
So we've put a lot of effort into our ability to run a GPU fleet that lets any one user do this kind of large parallel search.
Brandon [01:05:23]: So you're amortizing the cost over your users.
RJ [01:05:27]: Exactly. And to some degree, using 10,000 GPUs for a minute is the same cost as using one GPU for God knows how long, so you might as well try to parallelize if you can. A lot of work has gone into that, making it very robust so we can have a lot of people on the platform doing that at the same time. And the third part is the interface, which comes in two shapes. One is an API, and that's really suited for companies that want to integrate these pipelines, these agents.
RJ [01:06:01]: We're already partnering with a few distributors that are going to integrate our API. The second shape is the user interface, and we've put a lot of thought into that too. This is where the idea I mentioned earlier of broadening the audience comes in; that's what the user interface is about. We've built a lot of interesting features into it, for example for collaboration: when you have multiple medicinal chemists going through the results and trying to pick out which molecules to go and test in the lab, it's powerful for each of them to provide their own ranking and then do consensus building. So there are a lot of features around launching these large jobs, but also around collaborating on the analysis of the results, that we try to solve with that part of the platform.
So Boltz Lab is a combination of these three objectives in one cohesive platform. Who is it accessible to? Everyone. You do need to request access today; we're still ramping up usage, but anyone can request access. If you are an academic in particular, we provide a fair amount of free credit so you can play with the platform. If you are a startup or biotech, you may also reach out, and we'll typically hop on a call just to understand what you're trying to do, and also provide a lot of free credit to get started. And of course, with larger companies we can deploy the platform in a more secure environment; those are more customized deals that we make with partners. That's the ethos of Boltz: this idea of serving everyone, not just the really large enterprises. That starts with the open source, but it's also a key design principle of the product itself.
Gabriel [01:07:48]: One thing I was thinking about with regard to infrastructure: in the LLM space, the cost of a token has gone down by a factor of a thousand or so over the last three years, right? Is it possible to exploit economies of scale in infrastructure, so that you can make it cheaper to run these things yourself than for any person to roll their own system?
RJ [01:08:08]: A hundred percent. We're already there. Running Boltz on our platform, especially for a large screen, is considerably cheaper than it would take anyone to stand up the open-source model and run it. And on top of the infrastructure, one of the things we've been working on is accelerating the models.
Our small-molecule screening pipeline is 10x faster on Boltz Lab than it is in the open source, and that's also part of building a product, something that scales really well. We really wanted to get to a point where we could keep prices low enough that it would be a no-brainer to use Boltz through our platform.
Gabriel [01:08:52]: How do you think about validation of your agentic systems? Because, as you were saying earlier, AlphaFold-style models are really good at, let's say, monomeric proteins where you have co-evolution data. But now the whole point of this is to design something which doesn't have co-evolution data, something which is really novel. So you're basically leaving the domain that you know you are good at. How do you validate that?
RJ [01:09:22]: It's never complete, but there are obviously a ton of computational metrics that we rely on. Those only take you so far, though. You really have to go to the lab and test: with method A and method B, how much better are we? How much better is my hit rate? How much stronger are my binders? It's not just about hit rate; it's also about how good the binders are. And there's really no way around that. We've really ramped up the amount of experimental validation that we do, so that we track progress as scientifically soundly as possible.
Gabriel [01:10:00]: Yeah, one thing that is unique about us, and maybe companies like us, is that we're not working on just a couple of therapeutic pipelines where our validation would be focused on those.
When we do an experimental validation, we try to test it across tens of targets, so that on the one hand we can get a much more statistically significant result, which really allows us to make progress from the methodological side without being steered by overfitting on any one particular system. And of course we choose, you know, w

    Connected Social Media
    Business Benefits of Running DBaaS in an Open-Source Virtualized Environment

    Connected Social Media

    Play Episode Listen Later Feb 12, 2026


    Intel IT now runs part of our database as a service (DBaaS) in an open-source containerized environment, in response to...

    AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
    AI Business and Development Daily News Rundown February 12 2026: Musk's Moon Factory, China's New Open-Source King, & Claude's "Sabotage Risk"

    AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

    Play Episode Listen Later Feb 12, 2026 32:51


    AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
    Teaser for AI Daily News Rundown February 12 2026: Musk's Moon Factory, China's New Open-Source King, & The "Sabotage Risk" (Ep. Brought to you by AIRIA)

    AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

    Play Episode Listen Later Feb 12, 2026 1:55


    Bitcoin Park
    NEMS26: User-Centered Design: Hard Lessons in Building Hardware

    Bitcoin Park

    Play Episode Listen Later Feb 11, 2026 18:31


Description: In this conversation, Thomas Templeton discusses the importance of user-centered design in the Bitcoin mining space, emphasizing the need for builders to listen to customer pain points. He shares insights from his experience at Apple and Square, highlighting the significance of redefining miners as infrastructure and the role of open source in fostering community engagement. The discussion culminates in a call for collaboration and innovation within the Bitcoin mining community.
Takeaways:
User-centered design is crucial in the Bitcoin mining space.
Listening to customer pain points leads to better product development.
Redefining miners as infrastructure can unlock new opportunities.
Open source initiatives can help decentralize Bitcoin mining.
Community engagement is essential for innovation.
Asking 'why' can challenge industry norms and assumptions.
Diverse perspectives enhance understanding of mining challenges.
Building tools for the community fosters collaboration.
Success in Bitcoin mining benefits all stakeholders.
The Bitcoin community is welcoming and supportive for newcomers.
Chapters:
00:00 Introduction to User-Centered Design in Bitcoin Mining
03:46 Thomas Templeton's Journey: From Apple to Square
09:50 Listening to Customers: The Key to Innovation
14:47 Redefining Miners as Infrastructure
17:40 Community Engagement and Open Source in Bitcoin Mining
Keywords: Bitcoin mining, user-centered design, customer feedback, infrastructure, open source, community engagement, product development, innovation, pain points, decentralization

    POD256 | Bitcoin Mining News & Analysis
    104. AI, Open-Source Bitcoin Mining, and Battling Surveillance

    POD256 | Bitcoin Mining News & Analysis

    Play Episode Listen Later Feb 11, 2026 57:13 Transcription Available


    In this live episode of POD256 (Ep. 104), eco is joined by Scott and Tyler—freshly minted 256 Foundation board members—for a fast-paced tour through open-source Bitcoin mining, DIY heat reuse, and the growing role of AI in hardware and firmware. We showcase D++'s new livestream overlay and the public monitoring dashboard at dash.256f.org/monitor.html, experiment with zap-based chat, and talk through the recent major difficulty drop and what it means for home miners. We revisit the 2021 China mining ban, S9 nostalgia, power and noise hacks, and the rise of an open mining stack—LibreBoard, HydraPool, and Mujina—aimed at dismantling proprietary control. From hot-tub immersion builds to sous vide steak with miner heat, we explore practical heat reuse, the need for reusable open components, and how AI agents can automate dashboards, tuning, and reverse-engineering—while warning about SaaS surveillance, Ring cameras, in-car spyware, and AI skill-store malware. If you want to support or learn, point hash to the 256 Foundation when we're live, or spin up your own pool with HydraPool. Privacy, sovereignty, and open hardware are the path forward—bring your hash and your curiosity.

    The Business of Open Source
    Changing Your Price Anchor with Anais Concepcion

    The Business of Open Source

    Play Episode Listen Later Feb 11, 2026 32:33


There's a new episode of The Business of Open Source today! It's been a while. I talked with Anais Concepcion about a program she's been testing at Grist to give free activation codes for the enterprise version of Grist to individuals and small businesses with revenue under $1 million. The program has been in place for 5 months, and Anais came on the show to talk about both the strategy behind the program and some preliminary results. The strategy comes down to shifting the perception of Grist Open Source and Grist Enterprise. The goal, Anais says, is to make the ecosystem consider Grist Enterprise the ‘default' version of Grist, rather than the other way around. In fact, she's considering renaming Grist Enterprise to just “Grist” to reinforce the idea that it is not the ‘special' version of Grist, but the default version. There were other strategic goals, too. One is to get more feedback on the ‘enterprise' features; another is to avoid nickel-and-diming individual users while making sure that big companies are paying. The results so far have been interesting. The biggest concrete result has been in partnership relationships; it's easier for small consulting / development shops to get access to the full Grist and to then resell it to their clients. There haven't been any signed deals yet as a result of this dynamic, but there are companies in discussions with the Grist sales team that probably wouldn't have happened without the program… it will be interesting to see what happens as the program matures. After we turned off the recording, we had an interesting discussion about pricing as well; at Open Source Founders Summit Anais is going to do a workshop on pricing strategy. Not how much to charge, but what to charge for (consumption, seats, etc), how to set pricing anchors, and more. Join us in May if that's interesting to you! 

    Choses à Savoir TECH VERTE
TrackCarbon: an open-source app for the carbon footprint of AI?

    Choses à Savoir TECH VERTE

    Play Episode Listen Later Feb 11, 2026 2:21


We talk a lot about artificial intelligence as a digital miracle: faster, more creative, more efficient. But we often forget a very simple question: how much does a conversation with an AI actually consume? Behind Gemini, Mistral, ChatGPT or Claude sit giant data centers, servers running day and night, and an energy bill that is far from virtual.
To put numbers on this reality, the Sahar Foundation and the Trackarbon association are launching TrackCarbon, an application available today on macOS. The idea is simple: measure, in real time, the energy and carbon footprint of our exchanges with generative AIs. Once installed, the application looks at the length of your requests and of the responses, then estimates the electricity consumed and the CO₂ emitted, based on scientific data. No spying involved: everything is stored locally, on the computer. Nothing leaves the machine.
The interface displays three very telling indicators: the number of requests sent, the energy used, and the corresponding carbon footprint. To make these numbers concrete, TrackCarbon translates them into everyday equivalents: smartphone recharges, kilometers driven by car. Enough to realize that every prompt has a cost, too.
The calculations draw on academic work and evolve continuously, with contributions from researchers and the community. The tool is fully open source: the code is published on GitHub, where anyone can inspect and modify it. The point is not to shame anyone, insists Gauthier Schweitzer, president of the foundation; the goal is to inform, not to judge. Down the road, the application should also offer per-model analyses, tools for businesses, and even recommend the least energy-hungry AI for a given task. Windows and Linux versions are already planned.
Whether this carbon compass will really change our digital habits remains to be seen. Hosted by Acast. Visit acast.com/privacy for more information.
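A back-of-the-envelope version of the estimate described above can be sketched as follows. All constants here are invented for illustration; TrackCarbon's actual coefficients come from the academic work referenced in its GitHub repository:

```python
# Hypothetical coefficients, for illustration only.
WH_PER_1K_TOKENS = 0.3       # assumed energy per 1,000 tokens processed
G_CO2_PER_WH = 0.475         # assumed grid carbon intensity (g CO2 per Wh)
WH_PER_PHONE_CHARGE = 15.0   # rough energy for one full smartphone charge
G_CO2_PER_CAR_KM = 120.0     # rough petrol-car emissions per km

def estimate_footprint(prompt_tokens, response_tokens):
    """Estimate energy, CO2, and everyday equivalents for one AI exchange."""
    tokens = prompt_tokens + response_tokens
    wh = tokens / 1000 * WH_PER_1K_TOKENS
    g_co2 = wh * G_CO2_PER_WH
    return {
        "energy_wh": wh,
        "co2_g": g_co2,
        "phone_charges": wh / WH_PER_PHONE_CHARGE,
        "car_km": g_co2 / G_CO2_PER_CAR_KM,
    }

est = estimate_footprint(prompt_tokens=200, response_tokens=800)
print(f"{est['energy_wh']:.2f} Wh, {est['co2_g']:.2f} g CO2")  # → 0.30 Wh, 0.14 g CO2
```

The app's per-request bookkeeping (count requests, sum the estimates) would then be a simple accumulation over calls like this one.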

    Category Visionaries
    How Collate turned 12,000 open source users into an inbound sales engine | Suresh Srinivas

    Category Visionaries

    Play Episode Listen Later Feb 10, 2026 24:43


    Collate is building a semantic intelligence platform that unifies fragmented metadata tooling across the modern data stack. With 12,000+ community members, 3,000+ open source deployments, and 400+ code contributors, the company has proven that open source can be a systematic GTM engine, not just a distribution tactic. In this episode of BUILDERS, I sat down with Suresh Srinivas, Co-Founder & CEO of Collate, to explore his journey from the Hadoop core team at Yahoo, through founding Hortonworks, to architecting data systems processing 4 trillion events daily at Uber—and why that experience led him to rebuild metadata infrastructure from scratch. Topics Discussed: Why platform builders at Yahoo and Hortonworks struggled to drive business value despite powerful technology The metadata fragmentation problem: how siloed tools lack unified vocabularies and end-to-end context Collate's contrarian decision to build Open Metadata from zero rather than spinning out Uber's internal tooling Engineering an open core GTM model that generates nearly 100% inbound sales from technical practitioners Scaling community contribution: moving from feedback loops to 400+ code contributors Hiring a CMO to translate technical value into business-leader messaging without losing practitioner trust The convergence thesis: structured data, knowledge graphs, and semantic layers as the foundation for reliable AI GTM Lessons For B2B Founders: Architect your open source for GTM leverage, not just distribution: Suresh built Open Metadata as a unified platform consolidating data discovery, observability, and governance—previously fragmented across multiple tools. This architectural decision created natural upgrade paths to Collate's managed offering. The lesson: open source architecture should solve a complete job-to-be-done that reveals commercial value through usage, not just demonstrate technical capability. 
100+ daily practitioner conversations beats any user research: Collate maintains ongoing dialogue with their community across Snowflake, Databricks, and other integrations. Suresh called this "a product manager's dream"—immediate feedback on what breaks, what's missing, and what workflow improvements matter. For infrastructure startups, this beat rate of validated learning is nearly impossible to replicate through traditional customer development. High-velocity releases build credibility faster than pedigree: Starting from scratch without Yahoo or Uber's brand meant proving commitment through shipping cadence. Collate's strategy: demonstrate you'll be around and responsive before asking for production deployments. This matters more in open source than closed-source where sales cycles force commitment conversations earlier. Separate technical-buyer and business-buyer GTM motions explicitly: Collate's founding team spoke fluently to data engineers and architects who lived the metadata problem daily. Their CMO hire (after establishing product-market fit) brought expertise in articulating business impact—ROI on data initiatives, compliance risk reduction, AI readiness—without the founders faking business-speak. The timing matters: hire for the motion you're entering, not the one you're in. Play the long game with builder-culture companies: At Uber, internal tools were 2-3 years ahead of vendor solutions but became technical debt as teams moved to new problems. Suresh's advice: "Keep in touch with these larger companies. Your technology will improve and you will have better conversation with larger technical companies." The wedge is timing—catch them when maintenance burden outweighs building pride, typically 24-36 months post-launch. Design for all company scales from day one: Unlike Uber's internal metadata platform built for massive scale with corresponding complexity, Open Metadata works for small teams through enterprises. 
This wasn't just good design—it was GTM expansion strategy. Building only for scale locks you into enterprise-only sales. Building only for simplicity caps your ACV. The middle path requires architectural discipline upfront. // Sponsors: Front Lines — We help B2B tech companies launch, manage, and grow podcasts that drive demand, awareness, and thought leadership. www.FrontLines.io The Global Talent Co. — We help tech startups find, vet, hire, pay, and retain amazing marketing talent that costs 50-70% less than the US & Europe. www.GlobalTalent.co // Don't Miss: New Podcast Series — How I Hire Senior GTM leaders share the tactical hiring frameworks they use to build winning revenue teams. Hosted by Andy Mowat, who scaled 4 unicorns from $10M to $100M+ ARR and launched Whispered to help executives find their next role. Subscribe here: https://open.spotify.com/show/53yCHlPfLSMFimtv0riPyM

    Das Beste vom Morgen von MDR AKTUELL
Digital reboot: why Thuringia wants to do without Microsoft

    Das Beste vom Morgen von MDR AKTUELL

    Play Episode Listen Later Feb 10, 2026 3:53


Thuringia wants to banish Microsoft software from its government offices and switch to open source, for security and cost reasons. Saxony and Saxony-Anhalt also see risks in the US software, but are hesitant nonetheless.

    The Changelog
    Vouch for an open source web of trust (News)

    The Changelog

    Play Episode Listen Later Feb 9, 2026 7:35


Mitchell Hashimoto's trust management system for open source, Nicholas Carlini has a team of Claudes build a C compiler, Stephan Schwab recounts the history of attempted developer replacement, NanoClaw is an alternative to OpenClaw, and Sophie Koonin can't wrap her head around so many people going so hard on LLM-generated code.

    Python Bytes
    #469 Commands, out of the terminal

    Python Bytes

    Play Episode Listen Later Feb 9, 2026 33:56 Transcription Available


Topics covered in this episode: Command Book App; uvx.sh: Install Python tools without uv or Python; Ending 15 years of subprocess polling; monty: a minimal, secure Python interpreter written in Rust for use by AI; Extras; Joke. Watch on YouTube. About the show: Sponsored by us! Support our work through our courses at Talk Python Training, The Complete pytest Course, and Patreon supporters. Connect with the hosts: Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky); Brian: @brianokken@fosstodon.org / @brianokken.bsky.social; Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky). Join us on YouTube at pythonbytes.fm/live to be part of the audience, usually Monday at 10am PT; older video versions are available there too. Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list; we'll never share it.
Michael #1: Command Book App. New app from Michael. Command Book App is a native macOS app for developers, data scientists, AI enthusiasts and more. This is a tool I've been using lately to help build Talk Python, Python Bytes, Talk Python Training, and many more applications. It's a bit like advanced terminal commands or complex shell aliases, but hosted outside of your terminal. This leaves the terminal free for interactive commands, exploration, and short actions. Command Book manages commands like "tail this log while I'm developing the app", "run the dev web server with true auto-reload", and even "run MongoDB in Docker with exactly the settings I need". I'd love it if you gave it a look, shared it with your team, and sent me feedback. It has a free version and a paid version. Built with Swift and SwiftUI. Check it out at https://commandbookapp.com
Brian #2: uvx.sh: Install Python tools without uv or Python. By Tim Hopper.
Michael #3: Ending 15 years of subprocess polling, by Giampaolo Rodola. The standard library's subprocess module has relied on a busy-loop polling approach since the timeout parameter was added to Popen.wait() in Python 3.3, around 15 years ago. The problems with busy-polling: CPU wake-ups (even with exponential backoff, starting at 0.1 ms and capping at 40 ms, the system constantly wakes up to check process status, wasting CPU cycles and draining batteries); latency (there's always a gap between when a process actually terminates and when you detect it); scalability (monitoring many processes simultaneously magnifies all of the above), plus L1/L2 CPU cache invalidations. It's interesting to note that waiting via poll() (or kqueue()) puts the process into the exact same sleeping state as a plain time.sleep() call: from the kernel's perspective, both are interruptible sleeps. Here is the merged PR for this change.
Brian #4: monty: A minimal, secure Python interpreter written in Rust for use by AI. By Samuel Colvin and others at Pydantic. Still experimental. "Monty avoids the cost, latency, complexity and general faff of using a full container based sandbox for running LLM generated code. Instead, it lets you safely run Python code written by an LLM embedded in your agent, with startup times measured in single digit microseconds not hundreds of milliseconds."
Extras. Brian: Expertise is the art of ignoring, by Kevin Renskers: you don't need to master the language, you need to master your slice; learning everything up front is wasted effort; experience changes what you pay attention to. I hate fish, by Rands (Michael Lopp): really about productivity systems, with a nice process for dealing with email. Michael: Talk Python now has a CLI. New essay: It's not vibe coding - Agentic engineering. GitHub is having a day. Python 3.14.3 and 3.13.12 are available. Wall Street just lost $285 billion because of 13 markdown files.
Joke: Silence, current side project!
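The retired busy-poll approach looks roughly like this. This is a simplified reconstruction using the 0.1 ms starting delay and 40 ms cap mentioned in the episode, not CPython's actual implementation; the merged change instead lets the kernel wake the waiter when the child exits:

```python
import subprocess
import sys
import time

def wait_with_backoff(proc, timeout):
    # Sketch of the old strategy: poll the child, sleep, double the delay.
    deadline = time.monotonic() + timeout
    delay = 0.0001  # start at 0.1 ms
    while proc.poll() is None:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            raise subprocess.TimeoutExpired(proc.args, timeout)
        # Every one of these sleeps ends in a wake-up just to check status.
        time.sleep(min(delay, remaining))
        delay = min(delay * 2, 0.04)  # exponential backoff, capped at 40 ms
    return proc.returncode

child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(0.2)"])
print(wait_with_backoff(child, timeout=5))  # → 0
```

Each iteration is a needless wake-up; a kernel-level wait sleeps once and is woken exactly when the child terminates, which is the point of the change discussed above.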

    LINUX Unplugged
    653: The Kernel Always Wins

    LINUX Unplugged

    Play Episode Listen Later Feb 9, 2026 65:50 Transcription Available


The news this week highlights shifts in Linux from multiple angles. What's evolving, why it matters, and that moment where the future actually works.
Sponsored By: Jupiter Party Annual Membership: Put your support on automatic with our annual plan, and get one month of membership for free! Managed Nebula: Meet Managed Nebula from Defined Networking. A decentralized VPN built on the open-source Nebula platform that we love.
Support LINUX Unplugged
Links:

    Changelog News
    Vouch for an open source web of trust

    Changelog News

    Play Episode Listen Later Feb 9, 2026 7:35


Mitchell Hashimoto's trust management system for open source, Nicholas Carlini has a team of Claudes build a C compiler, Stephan Schwab recounts the history of attempted developer replacement, NanoClaw is an alternative to OpenClaw, and Sophie Koonin can't wrap her head around so many people going so hard on LLM-generated code.

    כל תכני עושים היסטוריה
Artificial intelligence in 2026: what's ahead for us? [עושים תוכנה]

    כל תכני עושים היסטוריה

    Play Episode Listen Later Feb 9, 2026 44:04 Transcription Available


Just one month into 2026, and it's already clear this year is going to change the rules. I hosted Uri Eliabayev in the studio, founder of the MDLI community and one of the most interesting people in the AI field in Israel. Together we analyzed the past year and talked about what's ahead: the future of Agents, small language models (SLMs), the changing job market, the Chinese race in Open Source, and the direction the giants are taking in the GenAI world. Happy listening, Amit Ben Dor.

    All TWiT.tv Shows (MP3)
    Untitled Linux Show 241: A Very Hot Sandwich

    All TWiT.tv Shows (MP3)

    Play Episode Listen Later Feb 8, 2026 81:44 Transcription Available


    This week, we start by talking about the Raspberry Pi memory price increases and bemoan that it's a tough time to be an enthusiast. Then we help ourselves feel better by covering all the new Betas and releases of our favorite software. There's a new LibreOffice, a look ahead at GIMP 3.2, and the Krita 6 Beta. Toyota has announced Flourite, a new game engine written in Flutter and Dart. And Ardour 9 and Shotcut 26.1 are out. We talk Debian, and spend some time looking at how AI has changed the Open Source landscape. For tips, there's another look at systemd-analyze and then a quick intro to gpioget for reading gpio lines. You can find the show notes at https://bit.ly/4r3PmZn and have a great week! Host: Jonathan Bennett Co-Host: Ken McDonald Download or subscribe to Untitled Linux Show at https://twit.tv/shows/untitled-linux-show Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.

    Hacker News Recap
    February 7th, 2026 | France's homegrown open source online office suite

    Hacker News Recap

    Play Episode Listen Later Feb 8, 2026 15:23


This is a recap of the top 10 posts on Hacker News on February 07, 2026. This podcast was generated by wondercraft.ai
(00:30): France's homegrown open source online office suite
Original post: https://news.ycombinator.com/item?id=46923736&utm_source=wondercraft_ai
(01:57): We mourn our craft
Original post: https://news.ycombinator.com/item?id=46926245&utm_source=wondercraft_ai
(03:25): Coding agents have replaced every framework I used
Original post: https://news.ycombinator.com/item?id=46923543&utm_source=wondercraft_ai
(04:53): Vocal Guide – belt sing without killing yourself
Original post: https://news.ycombinator.com/item?id=46922049&utm_source=wondercraft_ai
(06:21): U.S. jobs disappear at fastest January pace since great recession
Original post: https://news.ycombinator.com/item?id=46925669&utm_source=wondercraft_ai
(07:49): The AI boom is causing shortages everywhere else
Original post: https://news.ycombinator.com/item?id=46922969&utm_source=wondercraft_ai
(09:16): SectorC: A C Compiler in 512 bytes (2023)
Original post: https://news.ycombinator.com/item?id=46925741&utm_source=wondercraft_ai
(10:44): Why I Joined OpenAI
Original post: https://news.ycombinator.com/item?id=46920487&utm_source=wondercraft_ai
(12:12): British drivers over 70 to face eye tests every three years
Original post: https://news.ycombinator.com/item?id=46924813&utm_source=wondercraft_ai
(13:40): Software factories and the agentic moment
Original post: https://news.ycombinator.com/item?id=46924426&utm_source=wondercraft_ai
This is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai

    The Linux Cast
    Episode 221: Wayland Window Managers and More with TheBlackDon ​

    The Linux Cast

    Play Episode Listen Later Feb 8, 2026 67:41


    The boys return, this time to talk about Wayland Window Managers and other things with TheBlackDon ==== Special Thanks to Our Patrons! ==== https://thelinuxcast.org/patrons/ ===== Follow us

    All TWiT.tv Shows (Video LO)
    Untitled Linux Show 241: A Very Hot Sandwich

    All TWiT.tv Shows (Video LO)

    Play Episode Listen Later Feb 8, 2026 81:44 Transcription Available


    This week, we start by talking about the Raspberry Pi memory price increases and bemoan that it's a tough time to be an enthusiast. Then we help ourselves feel better by covering all the new Betas and releases of our favorite software. There's a new LibreOffice, a look ahead at GIMP 3.2, and the Krita 6 Beta. Toyota has announced Flourite, a new game engine written in Flutter and Dart. And Ardour 9 and Shotcut 26.1 are out. We talk Debian, and spend some time looking at how AI has changed the Open Source landscape. For tips, there's another look at systemd-analyze and then a quick intro to gpioget for reading gpio lines. You can find the show notes at https://bit.ly/4r3PmZn and have a great week! Host: Jonathan Bennett Co-Host: Ken McDonald Download or subscribe to Untitled Linux Show at https://twit.tv/shows/untitled-linux-show Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.

    bitcoinheiros
It's time to LEVERAGE Bitcoin!

    bitcoinheiros

    Play Episode Listen Later Feb 8, 2026 46:00


Oh, yeah! The time has come to muster your courage and use every tool available to leverage Bitcoin's OPEN SOURCE ecosystem and push forward the project you've always dreamed of! Everything you need is out there. It's far better to spend your free time creating and focusing on productive things than to drown in the fear, uncertainty and doubt planted by bankers and shitcoiners on your social feeds.
On leveraging the open-source Bitcoin ecosystem in your favor:
https://x.com/bitdov/status/2017222739430457521
https://x.com/bitdov/status/2017245084329087120
https://x.com/Printer_Gobrrr/status/2017880731884613955
https://pod21.com/
Open source #Bitcoin projects: https://github.com/topics/bitcoin
On depression: https://x.com/bradmillscan/status/2018772105538502864
Recorded at block 934967
________________
SUPPORT THE CHANNEL: https://bitcoinheiros.com/apoie/
⚡ln@pay.bitcoinheiros.com
To schedule a PRIVATE CONSULTATION with Dov: https://consultorio.bitcoinheiros.com/
Public questions: https://ask.arata.se/bitdov
00:00 Introduction
01:54 Bitcoin price cycles and the bear market
08:49 Prepare for Bitcoin's price volatility
15:48 Focus on creating and forget the Bitcoin price
19:56 Time to leverage the Bitcoin ecosystem
26:45 The Pod21 project: The Decentralized 3D Printing Network
39:27 Was Epstein involved with Bitcoin?
Listen on Fountain Podcasts (https://fountain.fm/join-fountain) to send and receive sats in the Value4Value model.
FOLLOW THE BITCOINHEIROS:
Site: https://www.bitcoinheiros.com
Twitter: https://www.x.com/bitcoinheiros
Allan - https://www.x.com/allanraicher
Dov - https://x.com/bitdov
Becas - https://x.com/bksbk6
Instagram: https://www.instagram.com/bitcoinheiros
Facebook: https://www.fb.com/bitcoinheiros
Podcast: https://anchor.fm/bitcoinheiros
Medium: https://medium.com/@bitcoinheiros
HOW TO STORE YOUR BITCOIN?
The Bitcoinheiros recommend multisig wallets using hardware wallets from different manufacturers, or ones you build yourself. To see the hardware wallets we recommend, visit https://www.bitcoinheiros.com/carteiras
Check the discounts and use the affiliate links to support the channel, for example for the COLDCARD - https://store.coinkite.com/promo/bitcoinheiros
With the code "bitcoinheiros" you get 5% off the ColdCard.
"Canivete Suíço Bitcoinheiro" (Bitcoinheiro Swiss Army Knife) playlist: https://www.youtube.com/playlist?list=PLgcVYwONyxmg-KH5bwzMU4sdyMbVMPqwb
"Carteiras Multisig de Bitcoin" (Bitcoin Multisig Wallets) playlist: https://www.youtube.com/playlist?list=PLgcVYwONyxmi74PiIUSnGieNIPqmtmdjW
DISCLAIMER: This content was prepared for informational purposes only. It is NOT financial or investment advice. The opinions presented are just opinions. Do your own research. We are not responsible for any investment decision you make or action you take inspired by our videos.
P.S. for the search engines: We are bitcoinheiros, not bitconheiros, bitconheros, bitcoinheros, biticonheiros, biticonheros or biticoinheros. Dov is a bitcoinheiro, not a bitconheiro, bitconhero, bitcoinhero, biticonheiro, biticonhero or biticoinhero. It's Bitcoin, not Bitcon and not Biticoin :)

    Mission Matters Podcast with Adam Torres
    Papermark Founder Marc Seitz on Making Open Source Sustainable

    Mission Matters Podcast with Adam Torres

    Play Episode Listen Later Feb 7, 2026 13:08


    In this episode, Adam Torres interviews Marc Seitz, Founder of Papermark. Marc shares his mission to help make open source a sustainable path for maintainers and explains how Papermark evolved from an open-source project into a secure document sharing and data room platform. The conversation covers building in public, earning trust through transparency, and how teams use Papermark to securely share pitch decks, proposals, and track viewer engagement. Follow Adam on Instagram at https://www.instagram.com/askadamtorres/ for up to date information on book releases and tour schedule. Apply to be a guest on our podcast: https://missionmatters.lpages.co/podcastguest/ Visit our website: https://missionmatters.com/ More FREE content from Mission Matters here: https://linktr.ee/missionmattersmedia Learn more about your ad choices. Visit podcastchoices.com/adchoices

    DLN Xtend
    218: Home Lab Spawn Point and High‑Ping Gaming | Linux Out Loud 120

    DLN Xtend

    Play Episode Listen Later Feb 7, 2026 56:12


    In this episode of Linux Out Loud, Matt takes the squad leader role while Wendy and Nate rejoin the party for a high-FPS catch-up on life, Linux, and loud gaming sessions. They swap updates on Wendy's robotics teams heading deeper into competition season, Nate's battle with basement water and building a proper home lab spawn point, and Matt's quest to keep a local-only media server running on modest hardware. From organizing racks and labeling gear to wrestling with Starlink latency and debating cloud gaming versus real ownership, the crew dives into how their real-world chaos shapes the way they run Linux, host services, and play games. If you like robots, home labs, and arguing about whether you really own your digital library, this one's for you.

Show Links:
Discord Invite: https://discord.gg/73bDDATDAK
Bookbinder JS (booklet maker): https://momijizukamori.github.io/bookbinder-js/
Bookbinder JS on GitHub: https://github.com/momijizukamori/bookbinder-js
PS4 controller USB-C upgrade guide: https://www.youtube.com/watch?v=nGKyBJVDXDQ
BattleTech on GOG: https://www.gog.com/en/game/battletech_game

    WCCO Tech Talk
    Opening Our Tech-Driven Minds

    WCCO Tech Talk

    Play Episode Listen Later Feb 7, 2026 35:49


    Doug Swinhart and Steve Thomson are in to answer your tech questions. This week, those include the latest push for Open Source programming, why you should use Privazer to help your computer's efficiency, and making the most out of your printer.

    Let's Talk AI
    #233 - Moltbot, Genie 3, Qwen3-Max-Thinking

    Let's Talk AI

    Play Episode Listen Later Feb 6, 2026 80:33


    Our 233rd episode with a summary and discussion of last week's big AI news!
Recorded on 01/30/2026
Hosted by Andrey Kurenkov and Jeremie Harris
Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai
Read our text newsletter and comment on the podcast at https://lastweekin.ai/

In this episode:
Google introduces Gemini AI agent in Chrome for advanced browser functionality, including auto-browsing for Pro and Ultra subscribers.
OpenAI releases ChatGPT Translator and Prism, expanding its applications beyond core business to language translation and scientific research assistance.
Significant funding rounds and valuations achieved by startups Recursive and Neurophos, focusing on specialized AI chips and optical processors respectively.
Political and social issues, including violence in Minnesota, prompt AI leaders like Amodei from Anthropic and Jeff Dean from Google to express concerns about the current administration's actions.

Timestamps:
(00:00:10) Intro / Banter

Tools & Apps
(00:04:09) Google adds Gemini AI-powered 'auto browse' to Chrome | The Verge
(00:07:11) Users flock to open source Moltbot for always-on AI, despite major risks - Ars Technica
(00:13:25) Google Brings Genie 3 'World Building' Experiment to AI Ultra Subscribers - CNET
(00:16:17) OpenAI's ChatGPT translator challenges Google Translate | The Verge
(00:18:27) OpenAI launches Prism, a new AI workspace for scientists | TechCrunch

Applications & Business
(00:19:49) Exclusive: China gives nod to ByteDance, Alibaba and Tencent to buy Nvidia's H200 chips - sources | Reuters
(00:22:55) AI chip startup Ricursive hits $4B valuation 2 months after launch
(00:24:38) AI Startup Recursive in Funding Talks at $4 Billion Valuation - Bloomberg
(00:27:30) Flapping Airplanes and the promise of research-driven AI | TechCrunch
(00:31:54) From invisibility cloaks to AI chips: Neurophos raises $110M to build tiny optical processors for inferencing | TechCrunch

Projects & Open Source
(00:35:34) Qwen3-Max-Thinking debuts with focus on hard math, code
(00:38:26) China's Moonshot releases a new open-source model Kimi K2.5 and a coding agent | TechCrunch
(00:46:00) Ai2 launches family of open-source AI developer agents that adapt to any codebase - SiliconANGLE
(00:47:46) Tiny startup Arcee AI built a 400B-parameter open source LLM from scratch to best Meta's Llama

Research & Advancements
(00:52:53) Post-LayerNorm Is Back: Stable, Expressive, and Deep
(00:58:00) [2601.19897] Self-Distillation Enables Continual Learning
(01:03:04) [2601.20802] Reinforcement Learning via Self-Distillation
(01:05:58) Teaching Models to Teach Themselves: Reasoning at the Edge of Learnability

Policy & Safety
(01:09:13) Amodei, Hoffman Join Tech Workers Decrying Minnesota Violence - Bloomberg

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    Tank Talks
    The Rundown 2/6/26: Canada's AI Strategy Goes LLM-Powered, YC's Canada U-Turn, SpaceX–xAI Shock Deal

    Tank Talks

    Play Episode Listen Later Feb 6, 2026 21:23


    In this episode of Tank Talks, Matt Cohen and John Ruffolo rip through a stacked rundown of tech, venture capital, and geopolitical “sovereignty” theater. They open with Europe's accelerating shift away from Microsoft Office and big U.S. platforms toward open-source alternatives, then jump straight into a breaking change from Y Combinator CEO Garry Tan: Canada is back on the list of accepted incorporations, reversing a move that sparked serious backlash about Canadian startup brain drain and U.S.-domicile pressure.

From there, they dissect Elon Musk's headline-grabbing SpaceX–xAI all-stock merger and why it looks way better for xAI holders than SpaceX shareholders ahead of a rumored SpaceX IPO window. The episode also digs into Canada's national AI consultation (and the government openly using multiple LLM providers like Cohere and OpenAI to process submissions), the EU's push for digital sovereignty (and the risks of swapping to “free” tools), and the brutal reality of AI-driven search gutting legacy media traffic, with the Washington Post laying off a third of its newsroom. The big throughline: information is cheap now, execution and trust are expensive, and countries (and companies) that don't adapt are about to get cooked.

Y Combinator Reverses Course: Canada Back on the List (00:43)
YC CEO Garry Tan adds Canada back to YC's list of accepted incorporation jurisdictions after removing it, triggering a wave of criticism. Matt and John break down what changed, why the original rationale (Canadian winners re-domiciling to the U.S.) was a flawed signal, and why the real issue is still Canadian capital formation and follow-on funding strength.

SpaceX Buys xAI: A $1.25T Story Swap Before an IPO? (02:34)
Matt tees up the shocker: SpaceX acquires xAI in an all-stock deal valuing xAI at $250B and SpaceX at $1T, creating a combined $1.25T entity. They discuss xAI's massive burn versus SpaceX's improving cash profile (driven by Starlink) and why this kind of move raises eyebrows heading into an IPO narrative.

Second-Order Effects: When a Cash-Burning AI Company Merges Into Space Infrastructure (07:35)
They debate whether this becomes a template for other pre-IPO restructures or stays a one-off “Elon special.” John says a Starlink-style consolidation would make strategic sense; folding in xAI doesn't feel like a choke-point win.

Canada's AI Strategy Consultation: Government Using LLMs in the Workflow (09:10)
Canada's ISED publishes a high-level summary of its AI consultation and explicitly notes using multiple LLMs and pipelines (including Cohere and OpenAI) to process massive public input. Matt frames this as a meaningful “government actually doing something” moment, even if the public is still anxious about jobs and privacy.

Europe's Digital Sovereignty Push: Dropping Teams/Zoom for Open Source? (12:40)
They react to reports of governments moving away from Teams/Zoom and Microsoft tooling in the name of sovereignty. Matt calls the open-source swap risky from a security and operational standpoint; John says the bigger signal is global: sovereignty is now a first-order priority, and Canada can't pretend this wave isn't coming.

Washington Post Layoffs: AI Search Is Eating the Referral Economy (16:48)
Matt highlights the Washington Post's reported search traffic collapse and layoffs impacting a third of the newsroom. John calls journalism an obvious early disruption target: LLMs compress content production costs, and the old newsroom pyramid doesn't match the new economics.

The Survival Play: Media Becomes a Live Events Business (19:26)
They land on the counter-move: stop fighting the trend and monetize what still works: brand, access, community, and in-person experiences. If content becomes commoditized, relationships and trust become the product.

Connect with John Ruffolo on LinkedIn: https://ca.linkedin.com/in/joruffolo
Connect with Matt Cohen on LinkedIn: https://ca.linkedin.com/in/matt-cohen1
Visit the Ripple Ventures website: https://www.rippleventures.com/

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit tanktalks.substack.com

    Alexa's Input (AI)
    Inside the Future of AI Infrastructure with Marc Austin

    Alexa's Input (AI)

    Play Episode Listen Later Feb 6, 2026 45:52


    Most AI infrastructure today is hitting a breaking point. Marc Austin, CEO of Hedgehog, reveals how open source networking and cloud-native solutions are revolutionizing how enterprises build and operate AI at scale. This episode addresses issues many building AI infrastructure today are facing — expensive proprietary systems, overwhelming complex network configurations, and ways to make on-prem AI infrastructure feel just like the public cloud.

We discuss how networking is the hidden bottleneck in scaling GPU clusters and the surprising physics and hardware innovations enabling higher throughput. Marc shares the journey of building Hedgehog, an open source, cloud-native platform designed for AI workloads that bridges the gap between complex hardware and seamless, user-friendly cloud experiences. Marc explains how Hedgehog's software abstracts and automates the networking complexity, making AI infrastructure accessible to enterprises without dedicated networking teams.

We break down the future of AI networks, from multi-cloud and hybrid environments to the rise of Neo Clouds and the open source movement transforming enterprise AI infrastructure. If you're a CTO, data scientist, or AI innovator, understanding these network innovations can be your moat. Listen to this episode to see how open source, cloud-native networking, and physical innovation are shaping the AI infrastructure of tomorrow.

Podcast Links
Watch: https://www.youtube.com/@alexa_griffith
Read: https://alexasinput.substack.com/
Listen: https://creators.spotify.com/pod/profile/alexagriffith/
More: https://linktr.ee/alexagriffith
Website: https://alexagriffith.com/
LinkedIn: https://www.linkedin.com/in/alexa-griffith/

Find out more about the guest:
LinkedIn: https://www.linkedin.com/in/austinmarc/
Website: https://hedgehog.cloud/
Github: https://github.com/githedgehog

Chapters
00:00 Rethinking AI Infrastructure
02:49 The Role of Networking in AI
05:54 Marc's Journey to Hedgehog
08:46 Lessons from Big Companies
11:38 Requirements for AI Networks
14:48 Advancements in AI Networking
17:33 Future Challenges in AI Infrastructure
20:46 Creating a Cloud Experience On-Prem
23:32 The Shift to Hybrid Multi-Cloud
28:10 Evolving AI Infrastructure and Efficiency
30:57 AI Workloads and Network Configurations
32:41 Zero Touch Lifecycle Management
35:12 Support for Hardware Devices
35:45 Networking Paradigms and Vendor Lock-in
38:42 The Rise of Neo Clouds
41:31 Demand for AI Infrastructure
43:57 Open Source and Cloud-Native Networking
47:27 Challenges of Building a Networking Startup
50:46 Proud Accomplishments at Hedgehog
52:41 Future Excitement in AI Inference

    Open Source with Christopher Lydon
    George Saunders on Life and the Afterlife

    Open Source with Christopher Lydon

    Play Episode Listen Later Feb 5, 2026 31:54


    We’re going off script out here in the afterlife, in the imagination of the triple-threat novelist George Saunders. He’s eminent as a writer of stories and novels, as a critical reader, and as a teacher ... The post George Saunders on Life and the Afterlife appeared first on Open Source with Christopher Lydon.

    Software Engineering Daily
    Airbnb's Open-Source GraphQL Framework with Adam Miskiewicz

    Software Engineering Daily

    Play Episode Listen Later Feb 5, 2026 55:45


    Engineering teams often build microservices as their systems grow, but over time this can lead to a fragmented ecosystem with scattered data access patterns, duplicated business logic, and an uneven developer experience. A unified data graph with a consistent execution layer helps address these challenges by centralizing schema, simplifying how teams compose functionality, and reducing The post Airbnb's Open-Source GraphQL Framework with Adam Miskiewicz appeared first on Software Engineering Daily.

    BSD Now
    649: The Desk Review

    BSD Now

    Play Episode Listen Later Feb 5, 2026 71:37


    ZFS Scrubs and Data Integrity, ProPolice, FreeBSD vs Slackware, and more.

NOTES
This episode of BSDNow is brought to you by Tarsnap and the BSDNow Patreon

Headlines
Understanding ZFS Scrubs and Data Integrity
The story of ProPolice
Desk reviews: describe, comment, ask questions. No responses, no justifications.
Tj's Desk (media/bsdnow649-tjs-desk.jpg)
Ruben's Desk (media/bsdnow649-rubens-desk.jpg)

News Roundup
FreeBSD vs. Slackware: Which super stable OS is right for you?
Prometheus, Let's Encrypt, and making sure all our TLS certificates are monitored
Wait, a repairable ThinkPad!?

Tarsnap
This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups.

Feedback/Questions
Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
Join us and other BSD Fans in our BSD Now Telegram channel

    CHAOSScast
    Episode 127: Community Health metrics for Commercial Open Source

    CHAOSScast

    Play Episode Listen Later Feb 5, 2026 34:52


    Thank you to the folks at Sustain for providing the hosting account for CHAOSScast!

CHAOSScast – Episode 127

In this episode of CHAOSScast, host Alice is joined by Matt Trifiro from the Commercial Open Source Startup Alliance (COSSA) and Daniel Izquierdo, CEO of Bitergia and co-founder of the CHAOSS Community. The discussion delves into the importance of open source community health metrics in shaping successful commercial strategies for startups. Matt shares COSSA's mission to support the growth of venture-funded open source projects by fostering collaboration among founders, investors, and customers. Daniel discusses how community health can influence the sustainability and innovation of projects. They also explore the future goals of COSSA, including establishing a working group to develop standardized metrics for evaluating community contributions and business value. Press download now to hear more!

[00:00:29] Matt and Daniel introduce themselves and their backgrounds.
[00:01:56] Matt explains COSSA's mission.
[00:02:58] Matt cites evidence that community health can correlate with business outcomes and that investment can improve community indicators, and there's a discussion on moving beyond vanity metrics like GitHub stars.
[00:05:13] Daniel shares his perspective from the Open Compliance Summit (Tokyo) and the supply chain/corporate lens: organizations want confidence the software will be safe and still maintained years from now, and he talks about measuring health via collaboration networks.
[00:08:34] Matt breaks value into two buckets, distribution and IP/innovation, to explain how open source communities create startup value. Daniel adds that open source can reduce procurement friction.
[00:12:23] They touch on open source as a path to standards.
[00:14:50] Matt describes how COSSA supports the startups: education, best practices, and measurement; his goal is to “convert community metrics into dollars.” Daniel notes the need for a baseline framework, then customization by industry.
[00:19:38] What's next for COSSA? Matt shares COSSA is being bootstrapped, received initial Linux Foundation support, and is pursuing seed-style funding. His planned membership structure is investors, founders, and customers.
[00:20:36] Daniel and Matt discuss making the metric framework transparent, likely anchored via CHAOSS, and the goal of building a “Rosetta Stone” between investors and community.
[00:25:49] There's a conversation on rug pulls, incentives, and the lack of a shared framework.
[00:28:21] Matt describes the “covenant” concept.
[00:30:34] Alice wraps, mentioning COSSA's direction is clear and a working group could be on the ramp for broader community participation.

Value Adds (Picks) of the week:
[00:31:20] Alice's pick is visiting outdoor Christmas light displays after dark.
[00:32:27] Matt's pick is his oldest son finishing his first semester in college.
[00:32:58] Daniel's pick is his son finishing his first quarter at primary school, and going to the Open Compliance Summit and thanking Shane Coughlan for all his work running this event for many years.

Panelist: Alice Sowerby
Guests: Matt Trifiro, Daniel Izquierdo

Links:
CHAOSS
CHAOSS Project X
CHAOSScast Podcast
CHAOSS YouTube
podcast@chaoss.community
Alice Sowerby LinkedIn
Matt Trifiro LinkedIn
COSSA
Daniel Izquierdo LinkedIn
Bitergia
Christmas Lights at Stourhead
Rapturous Delight: after-dark Worcester, Worcestershire
The State of Commercial Open Source 2025 (The Linux Foundation)

Special Guest: Matt Trifiro.

    Rework
    Product walkthroughs, the next open source product & other listener questions

    Rework

    Play Episode Listen Later Feb 4, 2026 26:38 Transcription Available


    A fresh batch of listener questions leads this week's conversation. Jason Fried and David Heinemeier Hansson talk through how they approach product walkthroughs, what's ahead for open code at 37signals, and why a little fun still belongs in serious software.

Key Takeaways
00:22 – Recording product walkthroughs without scripts or polish
11:45 – Writebook as an open source product
15:04 – How the 37signals team uses Basecamp and Fizzy together
22:52 – The quiet joy of Easter eggs and playful details in software

Links and Resources
Fizzy is a modern spin on kanban. Try it for free at fizzy.do
Record a video question for the podcast
Sign up for a 30-day free trial at Basecamp.com
Books by 37signals
HEY World | HEY
The REWORK podcast
The Rework Podcast on YouTube
The 37signals Dev Blog
37signals on YouTube
@37signals on X

    The Cloudcast
    The Future of Enterprise Software?

    The Cloudcast

    Play Episode Listen Later Feb 4, 2026 27:11


    Are we ready to move into an era of wild predictions about where the future of Enterprise software is headed in 2026 and beyond?

SHOW: 999
SHOW TRANSCRIPT: The Cloudcast #999 Transcript
SHOW VIDEO: https://youtube.com/@TheCloudcastNET
CLOUD NEWS OF THE WEEK: http://bit.ly/cloudcast-cnotw
CHECK OUT OUR NEW PODCAST: "CLOUDCAST BASICS"

SHOW NOTES
The SPAC-king is going to fix legacy software
All Enterprise software is dead
Microsoft and Software Survival (Stratechery)

WHAT HAPPENS TO ENTERPRISE SOFTWARE NEXT?
How much do enterprises want to write their own software?
How much do enterprises wish they could write more software?
How much do enterprises not understand the economics of owning their own software?
How much does “big SaaS” or just “big Enterprise software” actually help because people already know it?
Is it possible that this new Agentic-driven software could create a type of new software community?
Are “open” software communities prepared for the emerging economics of AI-created software?

FEEDBACK?
Email: show at the cloudcast dot net
Twitter/X: @cloudcastpod
BlueSky: @cloudcastpod.bsky.social
Instagram: @cloudcastpod
TikTok: @cloudcastpod

    Python Bytes
    #468 A bolt of Django

    Python Bytes

    Play Episode Listen Later Feb 3, 2026 31:00 Transcription Available


    Topics covered in this episode:
django-bolt: Faster than FastAPI, but with Django ORM, Django Admin, and Django packages
pyleak
More Django (three articles)
Datastar
Extras
Joke

Watch on YouTube

About the show
Sponsored by us! Support our work through:
Our courses at Talk Python Training
The Complete pytest Course
Patreon Supporters

Connect with the hosts
Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 11am PT. Older video versions available there too.
Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to our friends of the show list, we'll never share it.

Brian #1: django-bolt: Faster than FastAPI, but with Django ORM, Django Admin, and Django packages
Farhan Ali Raza
High-performance, fully typed API framework for Django
Inspired by DRF, FastAPI, Litestar, and Robyn
Django-Bolt docs
Interview with Farhan on Django Chat Podcast
And a walkthrough video

Michael #2: pyleak
Detect leaked asyncio tasks, threads, and event loop blocking, with stack traces, in Python. Inspired by goleak.
Has patterns for context managers and decorators
Checks for: unawaited asyncio tasks, threads, blocking of an asyncio loop
Includes a pytest plugin so you can do @pytest.mark.no_leaks

Brian #3: More Django (three articles)
Migrating From Celery to Django Tasks - Paul Taylor
Nice intro of how easy it is to get started with Django Tasks
Some notes on starting to use Django - Julia Evans
A handful of reasons why Django is a great choice for a web framework: less magic than Rails, a built-in admin, nice ORM, automatic migrations, nice docs, you can use sqlite in production, built-in email
The definitive guide to using Django with SQLite in production
I'm gonna have to study this a bit. The conclusion states one of the benefits is “reduced complexity”, but it still seems like quite a bit to me.

Michael #4: Datastar
Sent to us by Forrest Lanier
Lots of work by Chris May
Out on Talk Python soon.
Official Datastar Python SDK
Datastar is a little like HTMX, but:
The single source of truth is your server
Events can be sent from the server automatically (using SSE), e.g. yield SSE.patch_elements( f"""{(#HTML#)}{datetime.now().isoformat()}""" )
Why I switched from HTMX to Datastar article

Extras
Brian: Django Chat: Inverting the Testing Pyramid - Brian Okken (quite a fun interview)
PEP 686 – Make UTF-8 mode default. Now with status “Final” and slated for Python 3.15
Michael: Prayson Daniel's Paper tracker
Ice Cubes (open source Mastodon client for macOS)
Rumdl for PyCharm, et al.
cURL Gets Rid of Its Bug Bounty Program Over AI Slop Overrun
Python Developers Survey 2026

Joke: Pushed to prod
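The idea behind a leak check like pyleak's can be sketched with nothing but the standard library: snapshot the set of running tasks when a block starts, and flag any task created inside it that is still pending when it ends. The `no_task_leaks` context manager below is a hypothetical name used for illustration of the technique, not pyleak's actual API.

```python
import asyncio
from contextlib import asynccontextmanager


@asynccontextmanager
async def no_task_leaks():
    """Fail if tasks spawned inside the block are still pending on exit."""
    before = asyncio.all_tasks()  # snapshot of tasks already running
    try:
        yield
    finally:
        leaked = [
            t for t in asyncio.all_tasks() - before
            if not t.done() and t is not asyncio.current_task()
        ]
        for t in leaked:
            t.cancel()  # cancel so the event loop can shut down cleanly
        if leaked:
            raise RuntimeError(f"{len(leaked)} task(s) leaked")


async def main():
    try:
        async with no_task_leaks():
            # A fire-and-forget task that nobody awaits: a "leak".
            asyncio.create_task(asyncio.sleep(60))
    except RuntimeError as exc:
        return str(exc)
    return "no leaks"


print(asyncio.run(main()))  # → 1 task(s) leaked
```

The real library additionally captures the stack trace of where each leaked task was created, which is the part that makes such a tool genuinely useful for debugging.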

    Conscious Chatter with Kestrel Jenkins
    Beth Jensen of Textile Exchange on fashion's complex history with data, how the organization is addressing it through their open-source reporting and the need to ensure the search for *perfect data* doesn't hinder real action

    Conscious Chatter with Kestrel Jenkins

    Play Episode Listen Later Feb 3, 2026 50:19


    In Episode 339, Kestrel welcomes Beth Jensen, the Chief Impact Officer at Textile Exchange, to the show. Leading the organization's efforts to achieve beneficial climate and nature impacts, Beth oversees key functions at Textile Exchange including impact data and Life Cycle Assessment studies; impact tools and reporting mechanisms; reports and research; fundraising; and public affairs/policy.

"A big part of vulnerability is really admitting that you don't have all the answers. So in sustainability, in fashion, apparel, and textile space, this is just the way we have to operate. If you said you had all the answers, you wouldn't be taken seriously in this space … What you present as data might change the next time you present it because you have new and better information. You just have to be able to work in the gray and really take the best available information and make informed decisions based on that information." -Beth

THEME — DATA & FASHION: METHODS & ACCESS

Before we dive in, I want to take a moment to remind us all that FASHION IS POLITICAL. Whenever a big politically-charged moment arises in the U.S., there is this narrative I see creeping around that expects fashion (brands, designers, creators, etc) to stay silent on quote unquote political issues – that fashion should stay in its so-called lane, detached from the world around it. Here's the thing – FASHION IS POLITICAL. It always has been and it always will be. It doesn't exist in its own little vacuum. If you care about the fashion industry, and its impact on people and the planet, it's imperative to pay attention and engage in so-called politics, because it's entirely interconnected.

Just to mention a few of these significant overlaps:

The origins of the fashion industry in the United States – cotton grown by enslaved Black people who were forcibly brought to the country – is political.

The way clothing supply chains operate – predominantly spread across the Global South where our clothes are made by mostly women of color, who are often paid less than a living wage – is political.

How certain materials permeate the fashion industry – fossil fuel-derived fibers AKA plastic, while other natural fibers were historically made illegal to grow AKA hemp – is political.

The largest garment manufacturing city in the U.S. is Los Angeles, employing over 46,000 garment workers, most of whom are immigrant women from Mexico and Central America. L.A. is the wage theft capital of the U.S., with the average hourly wage being $5.85 (Labor Violations In The LA Garment Industry, Garment Worker Center, 2020).

The institutionalized violent origins of ICE, as well as the continued horrific acts they have committed toward immigrants and nonimmigrants, fellow members of our communities – is political.

As Faherty called it in their recent IG post – systemic inhumanity affects us all – our families, friends, colleagues, neighbors and communities, and that is political.

If you try to separate fashion from politics, clothing from humans, it's impossible. Clothing is made by people who are integral members of our communities and valued creatives along the supply chain. We must advocate for our fellow community members and the safety of our neighbors.

This is the second episode in a 2-part series dedicated to DATA IN FASHION. While many of you may already have an understanding of these elements, I think they are important to reframe and contextualize the following conversation. The fashion industry and the so-called sustainable fashion space have a concerning history with data.

The so-called stat – fashion is the 2nd largest polluter globally, second only to oil – unfortunately spread like wildfire before it was found to be unsubstantiated. In 2017, journalist Alden Wicker brought this to light in an article on Racked, and the NY Times did a deep dive into it the following year, calling it the "biggest fake news in fashion". It's clear that the fashion industry has a massive impact on the earth and its inhabitants – it's an industry that not only thrives on models of overproduction and waste, it also prioritizes synthetic fossil fuel-derived materials like polyester. But considering how long this inaccurate claim was utilized by the sustainability and fashion realm (to note, I still see it used today and often have to send articles to folks to remind them that it was never substantiated) – I guess it becomes challenging for fashion to be taken seriously in the greater climate conversation.

Fashion is also one of the most underregulated industries – I know this is shifting with more policy coming into play, but it's slow. This has further reduced the amount of data collected from brands, because it hasn't been required.

As you can tell, data, fashion and sustainability have a complex history. This week's guest understands this reality, and is pushing to shift the narrative through her work with Textile Exchange. But it's a tricky task, when for her, a lack of data shouldn't prevent us from taking action.

"Without having data to underpin statements about something working toward reducing impact or creating beneficial impact, there's really nothing for those statements to stand on. Now the challenge there is making sure that we're striking the right balance of not letting perfect data get in the way of doing the work that we need to do to improve practices and create beneficial outcomes for the industry." -Beth

Materials Market Report 2025 (Press Release)
Paper on Ensuring Integrity in the Use of Life Cycle Assessment Data (Press Release)
Industry Reports Library
Life Cycle Inventory (LCI) Library
Follow Textile Exchange on Instagram

    Hintergrund - Deutschlandfunk
    Open Source - Die Suche nach mehr digitaler Unabhängigkeit

    Hintergrund - Deutschlandfunk

    Play Episode Listen Later Feb 3, 2026 18:59


    Tech giants like Microsoft are powerful, and a large share of Germany's ministries and government agencies use their software. Many believe this makes them vulnerable. The federal government and the first federal states are therefore increasingly turning to open source code. Loll, Anna www.deutschlandfunk.de, Hintergrund

    Coffee and Open Source

    Tim is a program manager at Microsoft, working on .NET and developer tools (formerly UI frameworks including WPF, Silverlight, UWP, and WinUI). In the past Tim worked as a software developer for various healthcare and consulting companies building client and web applications. Personally, Tim is an avid cyclist.

You can find Tim on the following sites:
Blog
LinkedIn
X

PLEASE SUBSCRIBE TO THE PODCAST
Spotify
Apple Podcasts
YouTube Music
Amazon Music
RSS Feed

You can check out more episodes of Coffee and Open Source on https://www.coffeeandopensource.com
Coffee and Open Source is hosted by Isaac Levin

    Oracle Groundbreakers
    Paul Bakker: Go Build a Lot of Stuff!

    Oracle Groundbreakers

    Play Episode Listen Later Feb 3, 2026 26:08


    This is the third in a short series of speaker profiles for JavaOne 2026 in Redwood Shores, California, March 17-19. Get early bird pricing until February 9, and for a limited time, take advantage of a $100 discount by using this code at checkout: J12026IJN100. Register. Sessions. In this conversation, Jim Grisanzio from Java Developer Relations talks with Paul Bakker, an engineer and Java architect in California. Paul is a staff software engineer in the Java Platform team at Netflix. He works on improving the Java stack and tooling used by all Netflix microservices and was one of the original authors of the DGS (GraphQL) Framework. He is also a Java Champion, he has published two books about Java modularity, and he is a speaker at conferences and Java User Groups.

    Java Is Everywhere at Netflix

    Paul will present "How Netflix Uses Java: 2026 Edition" at JavaOne in March. The session updates the previous year's talk because Java keeps evolving at Netflix. "Netflix is really staying on the latest and greatest with a lot of things," Paul says. "We're trying new things. And that means there's always new stuff to learn every year." Java powers both Netflix streaming and enterprise applications used internally and supporting studio teams. "Java is everywhere at Netflix," Paul says. "All the backends, they are all Java powered." Why Java? It comes down to history and practicality. The original team members were Java experts, but more importantly, "Java is also just the best choice for us," he says. The language balances developer productivity and runtime performance. At Netflix's scale, with thousands of AWS instances running production services, runtime performance is critical. Netflix engineers stay closely connected with development at OpenJDK. They test new features early and work with preview releases or builds before official releases. When virtual threads appeared, Netflix engineers tested immediately to measure performance gains. 
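    The virtual-thread testing described here takes only a few lines of modern Java. As a rough illustration (a sketch assuming Java 21+, not Netflix code; the sleeping task is a hypothetical stand-in for blocking I/O), a virtual-thread-per-task executor lets thousands of cheap blocking tasks run concurrently:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.IntStream;

// Minimal sketch of a virtual-thread throughput experiment (Java 21+).
public class VirtualThreadSketch {

    // Submit `tasks` cheap blocking jobs to a virtual-thread-per-task
    // executor and count how many complete.
    static int countCompleted(int tasks) throws Exception {
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            List<Future<Integer>> futures = IntStream.range(0, tasks)
                    .mapToObj(i -> executor.submit(() -> {
                        Thread.sleep(5); // each task blocks briefly, parking its virtual thread
                        return 1;
                    }))
                    .toList();
            int done = 0;
            for (Future<Integer> f : futures) {
                done += f.get();
            }
            return done;
        } // closing the executor waits for all submitted tasks
    }

    public static void main(String[] args) throws Exception {
        System.out.println(countCompleted(1000));
    }
}
```

    Because a sleeping virtual thread parks instead of pinning an OS thread, the 1,000 tasks above finish in roughly the time of one, which is the kind of gain teams look for when trying the feature against a thread-pool baseline.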
Paul says they give feedback on what works, what doesn't work, and what they would like to see done differently. This demonstrates the value of being involved with OpenJDK, and Paul says they have a really nice back-and-forth with the Oracle engineering teams. The microservices architecture Netflix adopted years ago enabled the company to scale. This approach has become common now, but Netflix pioneered talking about it publicly. Breaking functionality into smaller pieces lets teams scale and develop services independently. Most workloads are stateless, which enables horizontal scaling. Production services for streaming often run several thousand AWS instances at a time.

Early on with Java Applets

Paul's coding journey started at 15 when he got his first computer and wanted to learn everything about it. Working at a computer shop repairing machines, the owner asked if he knew how to build websites. Paul said no but wanted to learn. He was curious about everything that involved computers. Java applets were hot back then. With nothing available online, he bought a book and started hacking away. "It was so much fun that I also decided right at that point basically like, oh, I'm going to be an engineer for the rest of my life," he says. That's clarity for a 15-year-old. And it's remarkable. But Paul says it felt natural. He just started doing it, had such a good time, and knew that was what he wanted to do. When he started university around 2000, right during the dot-com bubble and crash, professors warned students not to expect to make money in engineering because the bubble had burst. Paul still remembers how funny that seems now. You can never predict the future. Initially, he learned Java and PHP simultaneously. Java powered client-side applications through applets while PHP ran server-side code. The roles have completely reversed now.

Engaging the Community

Paul attended his first JavaOne in 2006. "Those were really good times," he says about the early conferences when everything felt big and JavaOne was the only place to learn about Java. Back then, around 20,000 people would travel to San Francisco every year. All the major news would be released at JavaOne each year. The world has changed. Now information spreads instantly and continually online, but Paul misses something about those early days. The more recent JavaOne conferences offer something different but equally valuable. Paul points to last year's event in Redwood City as a great example. While the conference is still big, it's small enough that attendees can actually talk with the Oracle JDK engineers and have deeper conversations. The folks who work on the JDK and the Java language are all there giving presentations, but they're also totally accessible for hallway chats. "That makes it really interesting," Paul says. This direct access to the people building the platform distinguishes JavaOne from other conferences. Java User Groups also played an important role in Paul's development. He lived in the Netherlands before moving to the Bay Area nine years ago. In the Netherlands, the NLJUG (Dutch Java User Group) organized two conferences a year, J-Spring and J-Fall. Paul would go to both every year. That was his place to learn in Europe. He has been continuing that pattern right up until now, which is why he is speaking at JavaOne again. Open source software has been another major aspect of community for Paul. He has always been active in open source because he says it's a fun place to work with people from all over the world solving interesting problems. Besides being a critical part of his professional career, it was also his hobby. Paul says the open source aspect, with the community behind it, may be the thing he has enjoyed most over the years.

AI Throughout Development

AI now occupies much of Paul's professional focus. At Netflix, engineers use AI tools throughout the development lifecycle. Paul uses Claude Code daily, though other developers prefer Cursor, especially for Python and Node work. Most Java developers at Netflix work with Claude Code. The tools integrate with GitHub for pull request reviews, help find bugs, and assist with analyzing production problems by examining log files. Paul describes using AI as having a thinking partner to talk to and code with. Sometimes he needs to bounce ideas around, and the AI gives insights he might have missed or suggests additional issues to consider. For repetitive tasks like copying fields between objects, AI handles the grunt work efficiently. "That's the nice thing about an AI," Paul says. "While a person would probably get really annoyed with all this feedback all the time and like having to repeat the work over and over again, but an AI is like, fine, I'll do it again."

Go Build a Lot of Stuff!

When asked about advice for students, Paul's answer comes quickly and has not changed much over the years. "I think what I really recommend is just go and build a lot of stuff," he says. "The way to get to become a better developer is by doing a whole lot of development." That's timeless advice students can easily adopt no matter how the modern tools for learning have changed. Paul had to go to a bookstore and buy a book to learn programming. Students today have AI tools to help them and advanced IDEs. But the fundamental principle remains the same, which is to build interesting applications. Paul recommends that students come up with a fun problem and just build it. You learn by making mistakes. You build a system, reach the end, and realize the new codebase already struggles with maintainability. Then you ask what you could have done differently. Those real-life coding experiences teach you how to design code, architect code, and write better code. Paul also suggests that students use AI tools but not blindly. Do not just accept whatever an AI generates. Instead, try to understand what came out, how it could have been done differently, and experiment with different approaches. Use the tools available but really understand what is going on and what options you have. Some students and even practicing developers worry that advanced tools might eliminate their future role as developers. Paul says that nobody knows exactly how things will look in the future because tools get better almost every day now. But AI tools are just tools. Someone needs to drive them and come up with the ideas they should build. Plus, the tools at present are far from a state where you can hand them a task, never look at it again, and have everything work perfectly. Substantial hand-holding is involved. "Is our daily work going to change? Very likely," Paul says. "That's already happening." But he tries to see this change as a positive thing. "It's a new tool that we can use. It makes certain parts of our job more fun, more interesting. You can get more things done in some ways and be open to it."

Why Java Works

At the end of the conversation, Paul answered a simple question — Why Java? What makes it great? — with a simple and direct answer: "Java is the perfect balance of developer productivity and runtime performance." That balance matters where Paul works at Netflix. But it also matters for students learning their first language, for teams building enterprise applications, and for developers choosing tools that will sustain long careers. Paul's career started with Java applets 20 years ago when he bought a book and started hacking away. The language and platform have evolved dramatically since then, moving from client-side applets to powering massive backend services that stream entertainment to millions globally via Netflix. Through all that change, the core appeal remains — you can build things efficiently for many platforms, and those things run fast.

Paul Bakker: X, LinkedIn
Duke's Corner Java Podcast: Libsyn
Jim Grisanzio: X, LinkedIn, Website  

    LINUX Unplugged
    652: Have Your Bot Call My Bot

    LINUX Unplugged

    Play Episode Listen Later Feb 2, 2026 69:57 Transcription Available


    We stress tested open source AI agents this week. What actually held up, and where it fell apart. Plus Brent's $20 Wi-Fi upgrade.

    Sponsored By:
    Jupiter Party Annual Membership: Put your support on automatic with our annual plan, and get one month of membership for free!
    Managed Nebula: Meet Managed Nebula from Defined Networking. A decentralized VPN built on the open-source Nebula platform that we love.

    Support LINUX Unplugged
    Links:

    Hacker News Recap
    February 1st, 2026 | Netbird – Open Source Zero Trust Networking

    Hacker News Recap

    Play Episode Listen Later Feb 2, 2026 15:19


    This is a recap of the top 10 posts on Hacker News on February 01, 2026. This podcast was generated by wondercraft.ai

    (00:30): Netbird – Open Source Zero Trust Networking
    Original post: https://news.ycombinator.com/item?id=46844870&utm_source=wondercraft_ai
    (01:57): Teaching my neighbor to keep the volume down
    Original post: https://news.ycombinator.com/item?id=46848415&utm_source=wondercraft_ai
    (03:24): Notepad++ hijacked by state-sponsored actors
    Original post: https://news.ycombinator.com/item?id=46851548&utm_source=wondercraft_ai
    (04:52): What I learned building an opinionated and minimal coding agent
    Original post: https://news.ycombinator.com/item?id=46844822&utm_source=wondercraft_ai
    (06:19): Defeating a 40-year-old copy protection dongle
    Original post: https://news.ycombinator.com/item?id=46849567&utm_source=wondercraft_ai
    (07:47): List animals until failure
    Original post: https://news.ycombinator.com/item?id=46842603&utm_source=wondercraft_ai
    (09:14): Adventure Game Studio: OSS software for creating adventure games
    Original post: https://news.ycombinator.com/item?id=46846252&utm_source=wondercraft_ai
    (10:41): Show HN: NanoClaw – “Clawdbot” in 500 lines of TS with Apple container isolation
    Original post: https://news.ycombinator.com/item?id=46850205&utm_source=wondercraft_ai
    (12:09): My thousand dollar iPhone can't do math
    Original post: https://news.ycombinator.com/item?id=46849258&utm_source=wondercraft_ai
    (13:36): The Book of PF, 4th edition
    Original post: https://news.ycombinator.com/item?id=46844350&utm_source=wondercraft_ai

    This is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai

    Lex Fridman Podcast
    #490 – State of AI in 2026: LLMs, Coding, Scaling Laws, China, Agents, GPUs, AGI

    Lex Fridman Podcast

    Play Episode Listen Later Feb 1, 2026


    Nathan Lambert and Sebastian Raschka are machine learning researchers, engineers, and educators. Nathan is the post-training lead at the Allen Institute for AI (Ai2) and the author of The RLHF Book. Sebastian Raschka is the author of Build a Large Language Model (From Scratch) and Build a Reasoning Model (From Scratch). Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep490-sc See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc. Transcript: https://lexfridman.com/ai-sota-2026-transcript

    CONTACT LEX:
    Feedback – give feedback to Lex: https://lexfridman.com/survey
    AMA – submit questions, videos or call-in: https://lexfridman.com/ama
    Hiring – join our team: https://lexfridman.com/hiring
    Other – other ways to get in touch: https://lexfridman.com/contact

    SPONSORS:
    To support this podcast, check out our sponsors & get discounts:
    Box: Intelligent content management platform. Go to https://box.com/ai
    Quo: Phone system (calls, texts, contacts) for businesses. Go to https://quo.com/lex
    UPLIFT Desk: Standing desks and office ergonomics. Go to https://upliftdesk.com/lex
    Fin: AI agent for customer service. Go to https://fin.ai/lex
    Shopify: Sell stuff online. Go to https://shopify.com/lex
    CodeRabbit: AI-powered code reviews. Go to https://coderabbit.ai/lex
    LMNT: Zero-sugar electrolyte drink mix. Go to https://drinkLMNT.com/lex
    Perplexity: AI-powered answer engine. Go to https://perplexity.ai/

    OUTLINE:
    (00:00) – Introduction
    (01:39) – Sponsors, Comments, and Reflections
    (16:29) – China vs US: Who wins the AI race?
    (25:11) – ChatGPT vs Claude vs Gemini vs Grok: Who is winning?
    (36:11) – Best AI for coding
    (43:02) – Open Source vs Closed Source LLMs
    (54:41) – Transformers: Evolution of LLMs since 2019
    (1:02:38) – AI Scaling Laws: Are they dead or still holding?
    (1:18:45) – How AI is trained: Pre-training, Mid-training, and Post-training
    (1:51:51) – Post-training explained: Exciting new research directions in LLMs
    (2:12:43) – Advice for beginners on how to get into AI development & research
    (2:35:36) – Work culture in AI (72+ hour weeks)
    (2:39:22) – Silicon Valley bubble
    (2:43:19) – Text diffusion models and other new research directions
    (2:49:01) – Tool use
    (2:53:17) – Continual learning
    (2:58:39) – Long context
    (3:04:54) – Robotics
    (3:14:04) – Timeline to AGI
    (3:21:20) – Will AI replace programmers?
    (3:39:51) – Is the dream of AGI dying?
    (3:46:40) – How AI will make money?
    (3:51:02) – Big acquisitions in 2026
    (3:55:34) – Future of OpenAI, Anthropic, Google DeepMind, xAI, Meta
    (4:08:08) – Manhattan Project for AI
    (4:14:42) – Future of NVIDIA, GPUs, and AI compute clusters
    (4:22:48) – Future of human civilization

    The Vonu Podcast
    Cloak & Dagger Correspondences #3: Hardware Hacking, Open Source Surveillance, & Psychological/Mental Self-Liberation w/ Jamin Biconik

    The Vonu Podcast

    Play Episode Listen Later Feb 1, 2026 70:49


    Join Thane Riddle, host of Cloak & Dagger, in another enthralling conversation with hardware hacker & permaculture farmer, Jamin Biconik. Herein, we get some updates on what Jamin's been up to hardware hacking-wise, lots of mini-conversations, and even some deep psychological, spiritual self-liberation stuff towards the end. Please enjoy, and… The post Cloak & Dagger Correspondences #3: Hardware Hacking, Open Source Surveillance, & Psychological/Mental Self-Liberation w/ Jamin Biconik appeared first on The Vonu Podcast.

    The Linux Cast
    Episode 220: What Should be Default on Linux?

    The Linux Cast

    Play Episode Listen Later Jan 31, 2026 73:07


    The boys return, this time to talk about what Linux distros should use as defaults. KDE or GNOME? systemd or runit? Things like that. ==== Special Thanks to Our Patrons! ==== https://thelinuxcast.org/patrons/ ===== Follow us

    Paul VanderKlay's Podcast
    Is Secularity an Open Source Religious Commons?

    Paul VanderKlay's Podcast

    Play Episode Listen Later Jan 30, 2026 33:07


    Revolutionary Iran https://www.amazon.com/Revolutionary-Iran-audiobook/dp/B07TKCGR4S
    DW History and Culture: The Iranian Revolution 1979 explained: From the Pahlavis to mass protests and the Islamic Republic https://youtu.be/1uFGWkd_65k?si=nIvge7HLo_s0cyVz
    https://www.livingstonescrc.com/give
    Register for the Estuary/Cleanup Weekend https://lscrc.elvanto.net/form/94f5e542-facc-4764-9883-442f982df447
    Paul Vander Klay clips channel https://www.youtube.com/channel/UCX0jIcadtoxELSwehCh5QTg
    https://www.meetup.com/sacramento-estuary/
    My Substack https://paulvanderklay.substack.com/
    Bridges of meaning https://discord.gg/WA2RmWx2
    Estuary Hub Link https://www.estuaryhub.com/
    There is a video version of this podcast on YouTube at http://www.youtube.com/paulvanderklay
    To listen to this on iTunes https://itunes.apple.com/us/podcast/paul-vanderklays-podcast/id1394314333
    If you need the RSS feed for your podcast player https://paulvanderklay.podbean.com/feed/
    All Amazon links here are part of the Amazon Affiliate Program. Amazon pays me a small commission at no additional cost to you if you buy through one of the product links here. This is one (free to you) way to support my videos. https://paypal.me/paulvanderklay
    Blockchain backup on Lbry https://odysee.com/@paulvanderklay
    https://www.patreon.com/paulvanderklay
    Paul's Church Content at Living Stones Channel https://www.youtube.com/channel/UCh7bdktIALZ9Nq41oVCvW-A
    To support Paul's work by supporting his church, give here. https://tithe.ly/give?c=2160640
    https://www.livingstonescrc.com/give

    Dev Interrupted
    A constitution for AI, breaking dark flow, and open source as a moat?

    Dev Interrupted

    Play Episode Listen Later Jan 30, 2026 23:03


    In this Friday Deploy, Andrew and Ben dive into the viral Moltbot (now OpenClaw) phenomenon and Steve Yegge's Software Survival 3.0 essay, debating how SaaS companies can build moats in an era of token-constrained engineering. They also explore the concept of "Dark Flow" - a deceptive state where vibe coding feels productive but hides accumulated tech debt - and break down Anthropic's newly released constitution for Claude. Finally, the team discusses a Reddit user's claim to have ported CUDA to AMD in 30 minutes and shares a fascinating breakdown of podcast listening data.

    LinearB: The AI productivity platform for engineering leaders

    Follow the show:
    Subscribe to our Substack
    Follow us on LinkedIn
    Subscribe to our YouTube Channel
    Leave us a Review

    Follow the hosts:
    Follow Andrew
    Follow Ben
    Follow Dan

    Follow today's stories:
    OpenClaw
    Software Survival 3.0
    Breaking the Spell of Vibe Coding
    Claude's new constitution
    Claude Code Has Managed to Port NVIDIA's CUDA Backend to ROCm
    My Top 25 Podcast Episodes & Interviews from 2025 by IPM (Insights Per Minute)

    OFFERS
    Start Free Trial: Get started with LinearB's AI productivity platform for free.
    Book a Demo: Learn how you can ship faster, improve DevEx, and lead with confidence in the AI era.

    LEARN ABOUT LINEARB
    AI Code Reviews: Automate reviews to catch bugs, security risks, and performance issues before they hit production.
    AI & Productivity Insights: Go beyond DORA with AI-powered recommendations and dashboards to measure and improve performance.
    AI-Powered Workflow Automations: Use AI-generated PR descriptions, smart routing, and other automations to reduce developer toil.
    MCP Server: Interact with your engineering data using natural language to build custom reports and get answers on the fly.

    BSD Now
    648: Greytrapping for years

    BSD Now

    Play Episode Listen Later Jan 29, 2026 64:38


    FreeBSD's Future, 18 years of greytrapping, PF vs Linux firewalls, and more.

    NOTES
    This episode of BSDNow is brought to you by Tarsnap and the BSDNow Patreon

    Headlines
    Powering the Future of FreeBSD
    Eighteen Years of Greytrapping - Is the Weirdness Finally Paying Off?
    BSDCan Organising Committee Interview

    News Roundup
    How I, a non-developer, read the tutorial you, a developer, wrote for me, a beginner
    BSD PF versus Linux nftables for firewalls for us

    Tarsnap
    This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups.

    Feedback/Questions
    Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
    Join us and other BSD Fans in our BSD Now Telegram channel

    Burning Man LIVE
    Open Source Innovation

    Burning Man LIVE

    Play Episode Listen Later Jan 28, 2026 42:23


    There's a world of civic hacking where "making cool stuff" meets "making useful stuff." Hear tinkerers, gearheads and other makers share about the inventions that won them Burners Without Borders Civic Ignition Grants. These grants are little sparks that fire up the next level of open-source technology for all of our community, and for all the world.

    Colin Jemmott and MJ Brovold of YOUtopia, the San Diego Regional event, share about their low-maintenance light source that's sturdy, solar-powered, and buildable by anyone. They're also building a huge steel pop-up book! Sam Smith and Squirtle of SOAK, the Portland Regional event, share about their deployable solar shade pavilion made of star shapes and scissor linkages. Trash-eating robots are involved, and 3D-printed ‘precious plastic' art.

    This is not about the party. This is about practicing for a future where we won't need to poison the planet to self-express. These stories are a recipe:
    One part ‘for the love of it' spirit
    One part skills we already have
    Blend until smooth.
    Enjoy what new ideas can happen when we all put our heads together.

    https://burnerswithoutborders.org/uncategorized/2025-regional-event-grant
    https://sdyoutopia.com
    https://sdcolab.org
    www.luxcapacitor.art
    https://www.precipitationnw.org/burnon
    https://soakpdx.com

    LIVE.BURNINGMAN.ORG