Benchmarking: the process of comparing one's business processes and performance metrics to others in the industry.
Gen AI success starts with a growth mindset and continuous upskilling. Organizations can meet AI demands by embedding learning into daily work and encouraging experimentation. That's the key take-away message of this episode of the Wise Decision Maker Show, which talks about why Gen AI demands we keep learning or become obsolete.

This article forms the basis for this episode: https://disasteravoidanceexperts.com/gen-ai-demands-we-keep-learning-or-become-obsolete/
At SocialPacific 2025 in North Vancouver, Charlie Grinnell, Co-CEO of RightMetric, joins guest host Rachel Thexton to break down the uncomfortable truth about modern marketing.

Charlie explains why most brands operate on assumptions, not evidence, and why “looking before you leap” is no longer optional. From ego and institutional bias to blind faith in performance marketing, he challenges marketers to stop guessing and start triangulating the truth using real external data.

The conversation explores attention economics, content engineering, and why, in a saturated digital world, creativity without context is just expensive guesswork.

Thanks to TAKT, the editors and producers of the SocialPacific 2025 series.
Better management doesn’t just improve margins — it could reshape how Canada designs farm policy. In this episode of the Ag Policy Connection podcast, Tyler McCann and Elise Bigley of the Canadian Agri-Food Policy Institute sit down with Terry Betker, founder of Backswath Management, to explore a deceptively simple but far-reaching idea: build a national...
Welcome back to Our Agile Tales as we continue our conversation with Bjarte Bogsnes, exploring case studies from his latest book, This Is Beyond Budgeting. The book distills nearly three decades of experience challenging traditional budgeting, targets, and control-based management.

In this episode, we examine Beyond Budgeting through two case studies: Miles and David Lloyd Clubs.

Miles is a Norwegian IT consulting company founded in reaction to command-and-control micromanagement. It operates without budgets and with minimal KPIs, guided by an evergreen financial ambition of maintaining a profit margin above 10% without cascaded targets or bonus links. Employees enjoy wide autonomy, with transparency as the primary control mechanism: purchases and training costs are posted on the intranet for shared learning.

Miles places strong emphasis on recruitment and cultural fit, taking at least ten references and interviewing for beliefs, values, and attitudes. Employees assess technical skills and can veto candidates. The company invests heavily in social cohesion, including spouse-only events, and practices servant leadership, with the CEO retitling himself “Chief Servant Leader.” Bjarte notes that Miles was essentially “born beyond budgeting” and has sustained its principles through growth by consciously resisting bureaucracy, including internal leadership succession.

The second case study, David Lloyd Clubs, a high-end UK gym chain with around 300 clubs, represents one of the fastest Beyond Budgeting implementations Bjarte has seen: launched in October 2019 and fully budget-free by January 2021. The model helped the company not only survive COVID-19 but emerge stronger. Key practices included increased club autonomy, strong internal benchmarking, transparency, and local involvement in KPI selection.
Central target setting was reduced, with emphasis on relative performance rather than detailed annual targets tied to bonuses. Ownership by private equity firm TDR Capital supported the shift, focusing on leadership and management improvement rather than cost-cutting. Bjarte attributes the speed to strong owner backing, a capable controller leading the effort, and a supportive CEO, while noting that mindset change takes longer than process change. HR played a key role in shifting performance evaluation toward relative measures and maintaining shared club-level bonuses instead of individual incentives.

Key topics and timestamps
00:00 Welcome
01:07 Miles Overview
02:47 Transparency Over Budgets
04:15 Recruiting and Culture
06:05 Servant Leadership
06:46 Born Beyond Budgeting
10:37 Sustaining Beliefs at Scale
12:23 David Lloyd Clubs
13:09 Rapid Rollout
13:56 Benchmarking and Rhythm
17:41 Why It Worked
20:53 Relative Performance
24:45 Transparency and Learning
26:47 HR and Rewards
28:15 Results and Conclusion

About Bjarte Bogsnes
Bjarte Bogsnes is Chairman of the Beyond Budgeting Round Table, a former global finance executive, and a leading thinker in management innovation. He is the author of Implementing Beyond Budgeting and This Is Beyond Budgeting, showing how organizations can replace rigid, calendar-driven systems with models built on trust, transparency, and adaptability — creating companies that are both more responsive and more human.

Follow Bjarte at: https://www.linkedin.com/in/bjarte-bogsnes-41557910/
Music: https://www.purple-planet.com
Visit us at https://www.ouragiletales.com/about
Be sure to connect with Dr. Darin Brawley on LinkedIn here.

Dr. Brawley transformed Compton Unified from a 58% graduation rate to 94%.
A-G completion rate from 12% to 76%.
Collective will is essential for achieving student success.
Benchmarking against other districts drives improvement.
Continuous improvement models help in adjusting strategies.
Staying connected to schools is crucial for effective leadership.
Succession planning ensures organizational growth and stability.
Conflict is necessary for change and should be embraced.
Work-life balance is important to prevent burnout.
Mentorship plays a key role in professional development.
Assessments should guide instruction and interventions.

00:00 Introduction to Dr. Darin Brawley
02:43 Transforming Compton Unified School District
05:13 The Importance of Collective Will in Education
07:55 Benchmarking for Success
10:49 Staying Connected to Schools
13:29 Succession Planning and Mentorship
16:06 Work-Life Balance and Avoiding Burnout
18:57 The Role of Conflict in Leadership
21:19 Shout Outs and Closing Thoughts

Book Adam for your next event! mradamwelcome.com/speaking
Brand new speaking video HERE!

Adam's Books:
Kids Deserve It - amzn.to/3JzaoZv
Run Like a Pirate - amzn.to/3KH9fjT
Teachers Deserve It - amzn.to/3jzATDg
Empower Our Girls - amzn.to/3JyR4vm
On the Uplevel Dairy Podcast, Peggy Coffeen talks with Curtis Gerrits and Jim Moriarty of Compeer Financial about why benchmarking is essential for dairy farms, especially as year-end financials become available, milk prices soften, and recent beef-on-dairy income may have masked underlying costs. They explain benchmarking as first comparing a farm to itself over time, then comparing to a larger peer dataset of similar farms to identify strengths and small opportunities across income and expenses that can add up. Key areas discussed include feed cost and productivity (including homegrown forages like corn silage and increased use of alfalfa), feed efficiency factors such as refusals and mixing time, and the importance of working with nutritionists and local crop partners. They highlight core benchmarks such as capital cost per hundredweight and labor cost per hundredweight, how capital and labor relate when making investments, and improvements in net herd replacement costs driven by lower herd turnover, fewer heifers raised, and more beef calf sales. They conclude with takeaways to embrace financial management and benchmarking, keep moving forward during down cycles, and note that top-performing dairies succeed through attention to detail, execution, regular decision-making, and involving family, key employees, and advisors by sharing financial results.

This episode is sponsored by Compeer Financial. Compeer Financial is a member-owned Farm Credit cooperative serving and supporting agriculture and rural America.
Their dairy team brings world-class expertise and tailored solutions to support dairy producers' financial goals and lending needs. Visit https://www.compeer.com/specialists/dairy

00:00 Why Benchmarking Matters Right Now (Year-End Numbers + Softer Milk Prices)
04:05 Benchmarking Basics: Compare to Yourself, Then to Peer Groups
07:22 Big Levers: Feed Costs, Efficiency, and Milk Components
08:59 Homegrown Forages & Feed Management: What to Optimize
11:38 Core Benchmarks to Watch: Capital Cost, Labor, and Replacement Rates
16:18 Turning Data Into Action: Consistency, Clean Categories, and Advisory Teams
20:45 Key Takeaways for Dairy Strong: Embrace the Process & Keep Moving Forward
22:56 What Top-Performing Dairies Do Differently (Attention to Detail + Team Buy-In)
27:31 Wrap-Up & Resources
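The two-step approach Gerrits and Moriarty describe, comparing a farm to itself over time and then against a peer dataset, boils down to simple per-hundredweight arithmetic. A minimal sketch; all figures and names below are invented for illustration, not Compeer data:

```python
# Toy sketch of the two-step benchmarking idea: first trend a farm's own
# cost per hundredweight (cwt) of milk sold over time, then place the
# latest year against a peer group. All numbers are made up.

def cost_per_cwt(total_cost, cwt_sold):
    """Cost per hundredweight of milk sold."""
    return total_cost / cwt_sold

# Step 1: compare the farm to itself over time.
farm_years = {
    2022: cost_per_cwt(1_900_000, 110_000),
    2023: cost_per_cwt(2_050_000, 112_000),
    2024: cost_per_cwt(2_000_000, 115_000),
}
trend = farm_years[2024] - farm_years[2022]

# Step 2: compare the latest year to a peer dataset of similar farms.
peer_costs = [15.8, 16.4, 17.1, 17.5, 18.2, 18.9, 19.6]
latest = farm_years[2024]
percentile = sum(c < latest for c in peer_costs) / len(peer_costs)

print(f"2024 cost/cwt: {latest:.2f}, change since 2022: {trend:+.2f}")
print(f"Share of peers with lower cost: {percentile:.0%}")
```

The per-hundredweight benchmarks mentioned in the episode (capital cost, labor cost, net herd replacement cost) slot in the same way: divide the cost category by hundredweights sold, trend it against your own history, then rank it against peers.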
Voice used to be AI's forgotten modality — awkward, slow, and fragile. Now it's everywhere. In this reference episode on all things Voice AI, Matt Turck sits down with Neil Zeghidour, a top AI researcher and CEO of Gradium AI (ex-DeepMind/Google, Meta, Kyutai), to cover voice agents, speech-to-speech models, full-duplex conversation, on-device voice, and voice cloning.

We unpack what actually changed under the hood — why voice is finally starting to feel natural, and why it may become the default interface for a new generation of AI assistants and devices.

Neil breaks down today's dominant “cascaded” voice stack — speech recognition into a text model, then text-to-speech back out — and why it's popular: it's modular and easy to customize. But he argues it has two key downsides: chaining models adds latency, and forcing everything through text strips out paralinguistic signals like tone, stress, and emotion. The next wave, he suggests, is combining cascade-like flexibility with the more natural feel of speech-to-speech and full-duplex conversation.

We go deep on full-duplex interaction (ending awkward turn-taking), the hardest unsolved problems (noisy real-world environments and multi-speaker chaos), and the realities of deploying voice at scale — including why models must be compact and when on-device voice is the right approach.

Finally, we tackle voice cloning: where it's genuinely useful, what it means for deepfakes and privacy, and why watermarking isn't a silver bullet.

If you care about voice agents, real-time AI, and the next generation of human-computer interaction, this is the episode to bookmark.

Neil Zeghidour
LinkedIn - https://www.linkedin.com/in/neil-zeghidour-a838aaa7/
X/Twitter - https://x.com/neilzegh

Gradium
Website - https://gradium.ai
X/Twitter - https://x.com/GradiumAI

Matt Turck (Managing Director)
Blog - https://mattturck.com
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck

FirstMark
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap

(00:00) Intro
(01:21) Voice AI's big moment — and why we're still early
(03:34) Why voice lagged behind text/image/video
(06:06) The convergence era: transformers for every modality
(07:40) Beyond Her: always-on assistants, wake words, voice-first devices
(11:01) Voice vs text: where voice fits (even for coding)
(12:56) Neil's origin story: from finance to machine learning
(18:35) Neural codecs (SoundStream): compression as the unlock
(22:30) Kyutai: open research, small elite teams, moving fast
(31:32) Why big labs haven't “won” voice AI
(34:01) On-device voice: where it works, why compact models matter
(41:35) Benchmarking voice: why metrics fail, how they actually test
(46:37) The last mile: real-world robustness, pronunciation, uptime
(47:03) Cascades vs speech-to-speech: trade-offs + what's next
(54:05) Hardest frontier: noisy rooms, factories, multi-speaker chaos
(1:00:50) New languages + dialects: what transfers, what doesn't
(1:02:54) Hardware & compute: why voice isn't a 10,000-GPU game
(1:07:27) What data do you need to train voice models?
(1:09:02) Deepfakes + privacy: why watermarking isn't a solution
(1:12:30) Voice + vision: multimodality, screen awareness, video+audio
(1:14:43) Voice cloning vs voice design: where the market goes
(1:16:32) Paris/Europe AI: talent density, underdog energy, what's next
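The “cascaded” stack described above (speech recognition into a text model, then text-to-speech back out) can be sketched as three swappable stages. The stage functions below are stand-ins, not any real vendor API; they just make the two trade-offs concrete: per-stage latencies add up, and only text crosses stage boundaries, so tone, stress, and emotion never reach the language model.

```python
import time

# Stand-in stages for a cascaded voice agent. Each is a placeholder for a
# real model (an ASR engine, a text LLM, a TTS engine); the names and the
# canned outputs are illustrative only.

def asr(audio: bytes) -> str:
    time.sleep(0.01)  # stands in for recognition latency
    return "what's the weather"

def llm(text: str) -> str:
    time.sleep(0.01)  # stands in for generation latency
    return f"Reply to: {text}"

def tts(text: str) -> bytes:
    time.sleep(0.01)  # stands in for synthesis latency
    return text.encode()

def cascaded_turn(audio: bytes) -> bytes:
    # One conversational turn. Total latency is at least the *sum* of the
    # three stage latencies, and because only text flows between stages,
    # paralinguistic signal in the input audio is discarded at the first hop.
    start = time.perf_counter()
    reply_audio = tts(llm(asr(audio)))
    print(f"turn latency: {time.perf_counter() - start:.3f}s")
    return reply_audio
```

Swapping any one stage (a different ASR engine, a different LLM) leaves the rest untouched, which is why the cascade stays popular despite those downsides.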
Open, well-facilitated forums turn AI anxiety into trust and engagement, helping employees feel heard, informed, and invested while enabling organizations to adopt Gen AI with clarity, confidence, and collaboration. That's the key take-away message of this episode of the Wise Decision Maker Show, which talks about the silent crisis of Gen AI anxiety in the workplace.

This article forms the basis for this episode: https://disasteravoidanceexperts.com/the-silent-crisis-of-gen-ai-anxiety-in-the-workplace/
Peer mentoring accelerates skill-building, boosts collaboration, and fosters innovation, helping organizations embrace generative AI effectively while creating a culture of learning, confidence, and shared expertise. That's the key take-away message of this episode of the Wise Decision Maker Show, which talks about an approach to learning that makes sure generative AI is not intimidating.

This article forms the basis for this episode: https://disasteravoidanceexperts.com/generative-ai-isnt-intimidating-when-you-learn-it-this-way/
This podcast features Gabriele Corso and Jeremy Wohlwend, co-founders of Boltz and authors of the Boltz Manifesto, discussing the rapid evolution of structural biology models from AlphaFold to their own open-source suite, Boltz-1 and Boltz-2. The central thesis is that while single-chain protein structure prediction is largely “solved” through evolutionary hints, the next frontier lies in modeling complex interactions (protein-ligand, protein-protein) and generative protein design, which Boltz aims to democratize via open-source foundations and scalable infrastructure.

Full Video Pod
On YouTube!

Timestamps
* 00:00 Introduction to Benchmarking and the “Solved” Protein Problem
* 06:48 Evolutionary Hints and Co-evolution in Structure Prediction
* 10:00 The Importance of Protein Function and Disease States
* 15:31 Transitioning from AlphaFold 2 to AlphaFold 3 Capabilities
* 19:48 Generative Modeling vs. Regression in Structural Biology
* 25:00 The “Bitter Lesson” and Specialized AI Architectures
* 29:14 Development Anecdotes: Training Boltz-1 on a Budget
* 32:00 Validation Strategies and the Protein Data Bank (PDB)
* 37:26 The Mission of Boltz: Democratizing Access and Open Source
* 41:43 Building a Self-Sustaining Research Community
* 44:40 Boltz-2 Advancements: Affinity Prediction and Design
* 51:03 BoltzGen: Merging Structure and Sequence Prediction
* 55:18 Large-Scale Wet Lab Validation Results
* 01:02:44 Boltz Lab Product Launch: Agents and Infrastructure
* 01:13:06 Future Directions: Developability and the “Virtual Cell”
* 01:17:35 Interacting with Skeptical Medicinal Chemists

Key Summary

Evolution of Structure Prediction & Evolutionary Hints
* Co-evolutionary Landscapes: The speakers explain that breakthrough progress in single-chain protein prediction relied on decoding evolutionary correlations where mutations in one position necessitate mutations in another to conserve 3D structure.
* Structure vs. 
Folding: They differentiate between structure prediction (getting the final answer) and folding (the kinetic process of reaching that state), noting that the field is still quite poor at modeling the latter.
* Physics vs. Statistics: RJ posits that while models use evolutionary statistics to find the right “valley” in the energy landscape, they likely possess a “light understanding” of physics to refine the local minimum.

The Shift to Generative Architectures
* Generative Modeling: A key leap in AlphaFold 3 and Boltz-1 was moving from regression (predicting one static coordinate) to a generative diffusion approach that samples from a posterior distribution.
* Handling Uncertainty: This shift allows models to represent multiple conformational states and avoid the “averaging” effect seen in regression models when the ground truth is ambiguous.
* Specialized Architectures: Despite the “bitter lesson” of general-purpose transformers, the speakers argue that equivariant architectures remain vastly superior for biological data due to the inherent 3D geometric constraints of molecules.

Boltz-2 and Generative Protein Design
* Unified Encoding: Boltz-2 (and BoltzGen) treats structure and sequence prediction as a single task by encoding amino acid identities into the atomic composition of the predicted structure.
* Design Specifics: Instead of a sequence, users feed the model blank tokens and a high-level “spec” (e.g., an antibody framework), and the model decodes both the 3D structure and the corresponding amino acids.
* Affinity Prediction: While model confidence is a common metric, Boltz-2 focuses on affinity prediction—quantifying exactly how tightly a designed binder will stick to its target.

Real-World Validation and Productization
* Generalized Validation: To prove the model isn't just “regurgitating” known data, Boltz tested its designs on 9 targets with zero known interactions in the PDB, achieving nanomolar binders for two-thirds of them.
* Boltz Lab Infrastructure: The newly 
launched Boltz Lab platform provides “agents” for protein and small molecule design, optimized to run 10x faster than open-source versions through proprietary GPU kernels.
* Human-in-the-Loop: The platform is designed to convert skeptical medicinal chemists by allowing them to run parallel screens and use their intuition to filter model outputs.

Transcript

RJ [00:05:35]: But the goal remains to, like, you know, really challenge the models, like, how well do these models generalize? And, you know, we've seen in some of the latest CASP competitions, like, while we've become really, really good at proteins, especially monomeric proteins, you know, other modalities still remain pretty difficult. So it's really essential, you know, in the field that there are, like, these efforts to gather, you know, benchmarks that are challenging. So it keeps us in line, you know, about what the models can do or not.

Gabriel [00:06:26]: Yeah, it's interesting you say that, like, in some sense, CASP, you know, at CASP 14, a problem was solved and, like, pretty comprehensively, right? But at the same time, it was really only the beginning. So you can say, like, what was the specific problem you would argue was solved? And then, like, you know, what is remaining, which is probably quite open.

RJ [00:06:48]: I think we'll steer away from the term solved, because we have many friends in the community who get pretty upset at that word. And I think, you know, fairly so. But the problem that was, you know, that a lot of progress was made on was the ability to predict the structure of single chain proteins. So proteins can, like, be composed of many chains. And single chain proteins are, you know, just a single sequence of amino acids. And one of the reasons that we've been able to make such progress is also because we take a lot of hints from evolution. So the way the models work is that, you know, they sort of decode a lot of hints. That comes from evolutionary landscapes. 
So if you have, like, you know, some protein in an animal, and you go find the similar protein across, like, you know, different organisms, you might find different mutations in them. And as it turns out, if you take a lot of the sequences together, and you analyze them, you see that some positions in the sequence tend to evolve at the same time as other positions in the sequence, sort of this, like, correlation between different positions. And it turns out that that is typically a hint that these two positions are close in three dimension. So part of the, you know, part of the breakthrough has been, like, our ability to also decode that very, very effectively. But what it implies also is that in absence of that co-evolutionary landscape, the models don't quite perform as well. And so, you know, I think when that information is available, maybe one could say, you know, the problem is, like, somewhat solved. From the perspective of structure prediction, when it isn't, it's much more challenging. And I think it's also worth also differentiating the, sometimes we confound a little bit, structure prediction and folding. Folding is the more complex process of actually understanding, like, how it goes from, like, this disordered state into, like, a structured, like, state. And that I don't think we've made that much progress on. But the idea of, like, yeah, going straight to the answer, we've become pretty good at.

Brandon [00:08:49]: So there's this protein that is, like, just a long chain and it folds up. Yeah. And so we're good at getting from that long chain in whatever form it was originally to the thing. But we don't know how it necessarily gets to that state. And there might be intermediate states that it's in sometimes that we're not aware of.

RJ [00:09:10]: That's right. And that relates also to, like, you know, our general ability to model, like, the different, you know, proteins are not static. They move, they take different shapes based on their energy states. 
And I think we are, also not that good at understanding the different states that the protein can be in and at what frequency, what probability. So I think the two problems are quite related in some ways. Still a lot to solve. But I think it was very surprising at the time, you know, that even with these evolutionary hints that we were able to, you know, to make such dramatic progress.

Brandon [00:09:45]: So I want to ask, why does the intermediate states matter? But first, I kind of want to understand, why do we care? What proteins are shaped like?

Gabriel [00:09:54]: Yeah, I mean, the proteins are kind of the machines of our body. You know, the way that all the processes that we have in our cells, you know, work is typically through proteins, sometimes other molecules, sort of intermediate interactions. And through that interactions, we have all sorts of cell functions. And so when we try to understand, you know, a lot of biology, how our body works, how disease work. So we often try to boil it down to, okay, what is going right in case of, you know, our normal biological function and what is going wrong in case of the disease state. And we boil it down to kind of, you know, proteins and kind of other molecules and their interaction. And so when we try predicting the structure of proteins, it's critical to, you know, have an understanding of kind of those interactions. It's a bit like seeing the difference between... Having kind of a list of parts that you would put it in a car and seeing kind of the car in its final form, you know, seeing the car really helps you understand what it does. On the other hand, kind of going to your question of, you know, why do we care about, you know, how the protein folds or, you know, how the car is made to some extent is that, you know, sometimes when something goes wrong, you know, there are, you know, cases of, you know, proteins misfolding. 
In some diseases and so on, if we don't understand this folding process, we don't really know how to intervene.

RJ [00:11:30]: There's this nice line in the, I think it's in the AlphaFold 2 manuscript, where they sort of discuss also like why we're even hopeful that we can target the problem in the first place. And then there's this notion that like, well, for proteins that fold, the folding process is almost instantaneous, which is a strong, like, you know, signal that like, yeah, like we should, we might be... able to predict that this very like constrained thing that, that the protein does so quickly. And of course that's not the case for, you know, for, for all proteins. And there's a lot of like really interesting mechanisms in the cells, but yeah, I remember reading that and thought, yeah, that's somewhat of an insightful point.

Gabriel [00:12:10]: I think one of the interesting things about the protein folding problem is that it used to be actually studied. And part of the reason why people thought it was impossible, it used to be studied as kind of like a classical example. Of like an NP problem. Uh, like there are so many different, you know, type of, you know, shapes that, you know, this amino acid could take. And so, this grows combinatorially with the size of the sequence. And so there used to be kind of a lot of actually kind of more theoretical computer science thinking about and studying protein folding as an NP problem. And so it was very surprising also from that perspective, kind of seeing. 
machine learning show so clearly that there is some, you know, signal in those sequences, through evolution, but also through kind of other things that, you know, us as humans, we're probably not really able to, uh, to understand, but that these models have learned.

Brandon [00:13:07]: And so Andrew White, we were talking to him a few weeks ago and he said that he was following the development of this and that there were actually ASICs that were developed just to solve this problem. So, again, that there were. There were many, many, many millions of computational hours spent trying to solve this problem before AlphaFold. And just to be clear, one thing that you mentioned was that there's this kind of co-evolution of mutations and that you see this again and again in different species. So explain why does that give us a good hint that they're close by to each other? Yeah.

RJ [00:13:41]: Um, like think of it this way that, you know, if I have, you know, some amino acid that mutates, it's going to impact everything around it. Right. In three dimensions. And so it's almost like the protein through several, probably random mutations and evolution, like, you know, ends up sort of figuring out that this other amino acid needs to change as well for the structure to be conserved. Uh, so this whole principle is that the structure is probably largely conserved, you know, because there's this function associated with it. And so it's really sort of like different positions compensating for, for each other. I see.

Brandon [00:14:17]: Those hints in aggregate give us a lot. Yeah. So you can start to look at what kinds of information about what is close to each other, and then you can start to look at what kinds of folds are possible given the structure and then what is the end state.

RJ [00:14:30]: And therefore you can make a lot of inferences about what the actual total shape is. Yeah, that's right. 
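The compensating-mutation idea RJ describes can be made concrete with a toy alignment: when two columns always mutate in lockstep across organisms, their joint statistics exceed what independent columns would show, and that excess is the contact hint. This is a deliberately tiny sketch of the statistical signal, not how AlphaFold-style models actually consume the MSA:

```python
from collections import Counter
from itertools import combinations

# Toy multiple sequence alignment (MSA): each string is the "same" protein
# in a different organism. Columns 1 and 2 (0-indexed) mutate together
# (A<->G paired with C<->T), mimicking compensating mutations at a contact.
msa = [
    "MACKC",
    "MGTKC",
    "MACKT",
    "MGTKT",
    "MACKC",
    "MGTKT",
]

def pair_score(col_i, col_j):
    """Frequency of the most common residue *pair* minus what independent
    columns would predict - a crude stand-in for mutual information."""
    pairs = Counter((s[col_i], s[col_j]) for s in msa)
    top_pair = pairs.most_common(1)[0][1] / len(msa)
    top_i = Counter(s[col_i] for s in msa).most_common(1)[0][1] / len(msa)
    top_j = Counter(s[col_j] for s in msa).most_common(1)[0][1] / len(msa)
    return top_pair - top_i * top_j

scores = {(i, j): pair_score(i, j) for i, j in combinations(range(5), 2)}
best = max(scores, key=scores.get)
print(best, round(scores[best], 2))  # the co-evolving column pair stands out
```

Real pipelines use corrected mutual-information statistics or learned attention over thousands of aligned sequences, but the underlying signal is this same column-pair correlation.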
It's almost like, you know, you have this big, like three-dimensional valley, you know, where you're sort of trying to find like these like low energy states and there's so much to search through. That's almost overwhelming. But these hints, they sort of maybe put you in. An area of the space that's already like, kind of close to the solution, maybe not quite there yet. And, and there's always this question of like, how much physics are these models learning, you know, versus like, just pure like statistics. And like, I think one of the thing, at least I believe is that once you're in that sort of approximate area of the solution space, then the models have like some understanding, you know, of how to get you to like, you know, the lower energy, uh, low energy state. And so maybe you have some, some light understanding. Of physics, but maybe not quite enough, you know, to know how to like navigate the whole space. Right. Okay.

Brandon [00:15:25]: So we need to give it these hints to kind of get into the right valley and then it finds the, the minimum or something. Yeah.

Gabriel [00:15:31]: One interesting explanation about how AlphaFold works that I think it's quite insightful, of course, doesn't cover kind of the entirety of, of what AlphaFold does, that is, um, I'm going to borrow from, uh, Sergey Ovchinnikov from MIT. So the interesting thing about AlphaFold is it's got this very peculiar architecture that we have seen, you know, used, and this architecture operates on this, you know, pairwise context between amino acids. And so the idea is that probably the MSA gives you this first hint about what potential amino acids are close to each other. MSA is multiple sequence alignment. Exactly. Yeah. Exactly. This evolutionary information. Yeah. And, you know, from this evolutionary information about potential contacts, then it is almost as if the model is sort 
of running some kind of, you know, Dijkstra algorithm where it's sort of decoding, okay, these have to be close. Okay. Then if these are close and this is connected to this, then this has to be somewhat close. And so you decode this, that becomes basically a pairwise kind of distance matrix. And then from this rough pairwise distance matrix, you decode kind of the

Brandon [00:16:42]: actual potential structure. Interesting. So there's kind of two different things going on in the kind of coarse grain and then the fine grain optimizations. Interesting. Yeah. Very cool.

Gabriel [00:16:53]: Yeah. You mentioned AlphaFold3. So maybe we have a good time to move on to that. So yeah, AlphaFold2 came out and it was like, I think fairly groundbreaking for this field. Everyone got very excited. A few years later, AlphaFold3 came out and maybe for some more history, like what were the advancements in AlphaFold3? And then I think maybe we'll, after that, we'll talk a bit about the sort of how it connects to Boltz. But anyway. Yeah. So after AlphaFold2 came out, you know, Jeremy and I got into the field and with many others, you know, the clear problem that, you know, was, you know, obvious after that was, okay, now we can do individual chains. Can we do interactions, interaction, different proteins, proteins with small molecules, proteins with other molecules. And so. So why are interactions important? Interactions are important because to some extent that's kind of the way that, you know, these machines, you know, these proteins have a function, you know, the function comes by the way that they interact with other proteins and other molecules. Actually, in the first place, you know, the individual machines are often, as Jeremy was mentioning, not made of a single chain, but they're made of the multiple chains. And then these multiple chains interact with other molecules to give the function to those. 
And on the other hand, you know, when we try to intervene on these interactions, think about like a disease, think about like a, a biosensor or many other ways we are trying to design the molecules or proteins that interact in a particular way with what we would call a target protein or target. You know, this problem after AlphaFold 2, you know, became clear, kind of one of the biggest problems in the field to, to solve. Many groups, including kind of ours and others, you know, started making some kind of contributions to this problem of trying to model these interactions. And AlphaFold 3 was, you know, was a significant advancement on the problem of modeling interactions. And one of the interesting things that they were able to do, while, you know, some of the rest of the field really tried to model different interactions separately, you know, how protein interacts with small molecules, how protein interacts with other proteins, how RNA or DNA have their structure, they put everything together and, you know, trained very large models with a lot of advances, including kind of changing some of the key architectural choices, and managed to get a single model that was able to set this new state-of-the-art performance across all of these different kind of modalities, whether that was protein-small molecule, which is critical to developing kind of new drugs, protein-protein, or understanding, you know, interactions of, you know, proteins with RNA and DNA and so on.

Brandon [00:19:39]: Just to satisfy the AI engineers in the audience, what were some of the key architectural and data, data changes that made that possible?

Gabriel [00:19:48]: Yeah, so one critical one that was not necessarily just unique to AlphaFold3, but there were actually a few other teams, including ours in the field that proposed this, was moving from, you know, modeling structure prediction as a regression problem. 
where there is a single answer and you're trying to shoot for that answer, to a generative modeling problem, where you have a posterior distribution of possible structures and you're trying to sample from that distribution. This achieves two things. One is that it starts to allow us to model more dynamic systems. As we said, some of these proteins can actually take multiple structures, and you can now model that by modeling the entire distribution. But second, from a more core modeling perspective, when you move from a regression problem to a generative modeling problem, you change the way the model handles uncertainty. If the model is undecided between different answers, a regression model will try to make an average of those different answers. A generative model will instead sample all the different answers, and then maybe use separate models to analyze them and pick out the best. So that was one of the critical improvements. The other improvement is that they significantly simplified the architecture, especially of the final model that takes those pairwise representations and turns them into an actual structure. That now looks a lot more like a traditional transformer than the very specialized equivariant architecture in AlphaFold2.

Brandon [00:21:41]: So this is the bitter lesson, a little bit.

Gabriel [00:21:45]: There is some aspect of the bitter lesson, but the interesting thing is that it's very far from being a simple transformer. This field is one of the, I'd argue, very few fields in applied machine learning where we still have architectures that are very specialized.
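The regression-versus-generative distinction Gabriel describes can be illustrated with a toy example: with an ambiguous target (a coordinate that is equally often -1 or +1), the MSE-optimal regression answer is the mean, which is neither real answer, while a generative model can sample both modes and let a separate scorer pick. Purely illustrative; the scoring function is a stand-in for a confidence model.

```python
import random

random.seed(0)

# Ambiguous ground truth: a coordinate that is either -1.0 or +1.0.
observations = [random.choice([-1.0, 1.0]) for _ in range(10_000)]

# A regression model trained with MSE converges to the conditional mean
# of its targets: roughly 0.0 here, which is neither real answer.
regression_answer = sum(observations) / len(observations)

# A generative model instead samples candidate answers...
candidates = [random.choice([-1.0, 1.0]) for _ in range(8)]

# ...and a separate scorer (stand-in for a confidence model) ranks them,
# rewarding candidates close to either true mode.
def score(x):
    return -min(abs(x - 1.0), abs(x + 1.0))

best = max(candidates, key=score)  # lands on a real mode, not the mean
```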
And many people have tried to replace these architectures with simple transformers. There's a lot of debate in the field, but I think the consensus is that the performance we get from the specialized architectures is vastly superior to what we get from a single transformer. Another interesting thing, staying on the modeling and machine learning side, which I think is somewhat counterintuitive given other fields and applications, is that scaling hasn't really worked the same way in this field. Models like AlphaFold2 and AlphaFold3 are still very large models.

RJ [00:29:14]: We were in a place, I think, where we had some experience working with the data and with this type of model, and that put us in a good place to produce it quickly. I'd even say I think we could have done it quicker. The problem was that for a while we didn't really have the compute, so we couldn't train the model. And actually, we only trained the big model once. That's how much compute we had: we could only train it once. So while the model was training, we were finding bugs left and right, a lot of them that I wrote. I remember doing surgery in the middle: stopping the run, making the fix, relaunching. And we never actually went back to the start; we just kept training with the bug fixes along the way, which would be impossible to reproduce now. Yeah, that model has gone through such a curriculum that it learned some weird stuff.
But yeah, somehow, by miracle, it worked out.

Gabriel [00:30:13]: The other funny thing is that we trained most of that model on a cluster from the Department of Energy. But that's a shared cluster that many groups use, so we'd basically train the model for two days, and then it would go back to the queue and sit there for a week. Oh, yeah. So it was pretty painful. Towards the end, with Evan, the CEO of Genesis, I was telling him a bit about the project and about this frustration with the compute, and luckily he offered to help. So we got help from Genesis to finish up the model. Otherwise, it probably would have taken a couple of extra weeks.

Brandon [00:30:57]: Yeah, yeah.

Brandon [00:31:02]: And then there's some progression from there.

Gabriel [00:31:06]: Yeah, so I'd say that Boltz-1, but also these other sets of models that came around the same time, were a big leap from the previous open-source models, really approaching the level of AlphaFold3. But I'd still say that even to this day there are some specific instances where AlphaFold3 works better. One common example is antibody-antigen prediction, where AlphaFold3 still seems to have an edge in many situations. Obviously, these are somewhat different models: you run them, you obtain different results. So it's not always the case that one model is better than the other, but in aggregate we still saw a gap, especially at the time.

Brandon [00:32:00]: So AlphaFold3 still has a bit of an edge.
We should talk about this more when we get to BoltzGen, but how do you know one model is better than the other? I make a prediction, you make a prediction; how do you know?

Gabriel [00:32:11]: Yeah. The great thing about structure prediction (once we get into the design space of designing new small molecules and new proteins, this becomes a lot more complex), the great thing about structure prediction is that, a bit like CASP was doing, the way you can evaluate models is to train a model on the structures that were released across the field up until a certain time. One of the things we haven't talked about that was really critical in all this development is the PDB, the Protein Data Bank. It's a common resource, basically a common database where every biologist publishes their structures. So we can train on all the structures that were put in the PDB until a certain date. And then we look for recent structures: which structures look pretty different from anything that was published before? Because we really want to understand generalization.

Brandon [00:33:13]: And then on these new structures, you evaluate all the different models. So you just need to know when AlphaFold3 was trained, and you intentionally train to the same date or something like that. Exactly. Right. Yeah.

Gabriel [00:33:24]: And so this is the way you can somewhat easily compare these models; obviously, that assumes the training cutoffs line up. You've always been very passionate about validation. I remember DiffDock, and then there was DiffDock-L and DockGen. You've thought very carefully about this in the past.
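The temporal evaluation protocol just described (train on everything released before a cutoff date, test only on newer structures unlike anything in training) can be sketched as below. `similarity` is a hypothetical stand-in for a sequence- or structure-similarity measure; all names are illustrative.

```python
from datetime import date

def time_split(entries, cutoff, similarity, max_sim=0.3):
    """CASP-style temporal benchmark split.
    `entries` is a list of (pdb_id, release_date, representation).
    Train on everything released before `cutoff`; keep as test only the
    newer structures dissimilar to everything in the training set."""
    train = [e for e in entries if e[1] < cutoff]
    test = [e for e in entries
            if e[1] >= cutoff
            and all(similarity(e[2], t[2]) <= max_sim for t in train)]
    return train, test
```

A model trained only on the `train` half can then be compared fairly against any other model with the same cutoff.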
Actually, I think DockGen is a really funny story. I don't know if you want to talk about that. It's interesting... Yeah, I think one of the amazing things about putting things open source is that we get a ton of feedback from the field. Sometimes we get great feedback from people who really like it. But honestly, most of the time, and maybe this is the most useful feedback, it's people sharing where it doesn't work. At the end of the day, that's critical. And this is true across other fields of machine learning: to make progress, it's always critical to set clear benchmarks. And as you start making progress on certain benchmarks, you need to improve the benchmarks and make them harder and harder. That's the progression of how the field operates. So the example of DockGen: we published this initial model called DiffDock in my first year of PhD, which was one of the early models to predict interactions between proteins and small molecules; we put it out about a year after AlphaFold2 was published. On the one hand, on the benchmarks we were using at the time, DiffDock was doing really well, outperforming some of the traditional physics-based methods. But on the other hand, when we started giving these tools to many biologists, one example being the group of Nick Polizzi at Harvard that we collaborated with, we started noticing a clear pattern: for proteins that were very different from the ones we trained on, the model was struggling. And so it seemed clear that this was probably where we should put our focus.
So we first developed, with Nick and his group, a new benchmark, and then went after it: okay, what can we change about the current architecture to improve this pattern of generalization? And that's the same thing we're still doing today: where does the model not work? Then, once we have that benchmark, we try everything, any idea we have about the problem.

RJ [00:36:15]: And there's a lot of healthy skepticism in the field, which I think is great. It's very clear there are a ton of things the models don't really work well on. But one thing that's probably undeniable is the pace of progress, how much better we're getting every year. So if you assume any constant rate of progress moving forward, things are going to look pretty cool at some point in the future.

Gabriel [00:36:42]: ChatGPT was only three years ago. Yeah, I mean, it's wild, right?

RJ [00:36:45]: Yeah, it's one of those things. Being in the field, you don't see it coming, you know? Hopefully we'll continue to have as much progress as we've had the past few years.

Brandon [00:36:55]: So this is maybe an aside, but I'm really curious. You get this great feedback from the community by being open source, right? My question is partly: okay, if you open source, everyone can copy what you did. But it's also maybe about balancing priorities, right? The community is saying, I want this, there are all these problems with the model. Yeah, yeah. But my customers don't care, right? So how do you think about that?
Yeah.

Gabriel [00:37:26]: So I'd say a couple of things. One is that part of our goal with Boltz, and this is also established as the mission of the public benefit company we started, is to democratize access to these tools. But one of the reasons we realized Boltz needed to be a company, not just an academic project, is that putting a model on GitHub is definitely not enough to get chemists and biologists, across academia, biotech, and pharma, to use your model in their therapeutic programs. So a lot of what we think about at Boltz, beyond just the models, is all the layers that come on top of the models to get from those models to something that can really enable scientists in the industry. That goes into building the right workflows that take in, for example, the data and try to directly answer the questions the chemists and biologists are asking, and also into building the infrastructure. All this to say that even with fully open models, we see a ton of potential for products in this space. And the critical part about a product is that even with an open-source model, running the model is not free. As we were saying, these are pretty expensive models. Especially, and maybe we'll get into this, these days we're seeing pretty dramatic inference-time scaling of these models, where the more you run them, the better the results. But then you get to a point where compute and compute costs become a critical factor.
So putting a lot of work into building the right infrastructure and the right optimizations really allows us to provide a much better service on top of the open-source models. That said, even though we can provide a much better service with a product, I do still think, and we will continue to put a lot of our models out as open source, because the critical role of open-source models is helping the community progress on the research, from which we all benefit. So we'll continue, on the one hand, to open-source some of our base models so the field can build on top of them; as we discussed earlier, we learn a ton from the way the field uses and builds on our models. But then we try to build a product that gives the best possible experience to scientists, so that a chemist or a biologist doesn't need to spin up a GPU and set up our open-source model in a particular way. A bit like how, even though I'm a computer scientist, a machine learning scientist, I don't necessarily take an open-source LLM and spin it up myself; I just open the GPT app or Claude Code and use it as a product. We want to give the same experience.

Brandon [00:40:40]: I heard a good analogy yesterday: a surgeon doesn't want the hospital to design a scalpel, right?

Brandon [00:40:48]: So just buy the scalpel.

RJ [00:40:50]: You wouldn't believe the number of people, even in my short time between AlphaFold3 coming out and the end of the PhD, who would reach out just for us to run AlphaFold3 for them, or things like that.
Just because, in our case with Boltz, it's not that easy to do if you're not a computational person. And I think part of the goal here is that, while we continue to build the interface for computational folks, the models are also accessible to a larger, broader audience. And that comes from good interfaces and things like that.

Gabriel [00:41:27]: I think one really interesting thing about Boltz is that with the release, you didn't just release a model, you created a community. Yeah. And that community grew very quickly. Did that surprise you? What has the evolution of that community been, and how has it fed into Boltz?

RJ [00:41:43]: If you look at its growth, it's very much that when we release a new model, there's a big jump. But yeah, it's been great. We have a Slack community with thousands of people on it. And it's actually self-sustaining now, which is the really nice part, because it's almost overwhelming to try to answer everyone's questions and help; it's really difficult for the few people we were. But it ended up that people would answer each other's questions and help one another. So the Slack has been kind of self-sustaining, and that's been really cool to see.

RJ [00:42:21]: And that's the Slack part, but also on GitHub we've had a nice community. I think we aspire to be even more active on it than we've been in the past six months, which has been a bit challenging for us. But.
Yeah, the community has been really great, and a lot of papers have come out with new evolutions on top of Boltz. It surprised us to some degree, because there are a lot of models out there, and people converging on ours was really cool. I think it also speaks to the importance, when you put code out, of putting a lot of emphasis on making it as easy to use as possible, which is something we thought a lot about when we released the codebase. It's far from perfect, but, you know.

Brandon [00:43:07]: Do you think that was one of the factors that caused your community to grow, just the focus on ease of use, making it accessible?

RJ [00:43:14]: I think so. Yeah. We've heard it from a few people over the years now. And some people still think it should be a lot nicer, and they're right. But yeah, I think at the time it was maybe a little bit easier to use than other things.

Gabriel [00:43:29]: The other part that I think led to the community, and to some extent to the trust in what we put out, is that it hasn't really been just one model. Maybe we'll talk about it: after Boltz-1, there were maybe another couple of models released or open-sourced soon after. We continued that open-source journey with Boltz-2, where we're not only improving structure prediction but also starting to do affinity prediction: understanding the strength of the interactions between these different molecules, which is this critical property that you often want to optimize in discovery programs.
And then, more recently, also a protein design model. So we've been building this suite of models that come together and interact with one another, where there's almost an expectation, which we take very much to heart, of always having, across the entire suite of different tasks, the best or among the best models out there, so that our open-source tools can be the go-to models for everybody in the industry. I really want to talk about Boltz-2, but before that, one last question in this direction: was there anything about the community that surprised you? Was someone doing something where you thought, why would you do that, that's crazy? Or, that's actually genius, I never would have thought of that?

RJ [00:45:01]: I mean, we've had many contributions. One of the interesting ones: we had this one individual who wrote a complex GPU kernel for a piece of the architecture. The funny thing is that that piece of the architecture had been there since AlphaFold2, and I don't know why it took Boltz for this person to decide to do it, but that was a really great contribution. We've had a bunch of others, like people figuring out ways to hack the model to do something, like cyclic peptides. I don't know if any other interesting ones come to mind.

Gabriel [00:45:41]: One cool one, and this was initially proposed as a message in the Slack channel by Tim O'Donnell: there are some cases, for example the antibody-antigen interactions we discussed, where the models don't necessarily get the right answer.
What he noticed is that the models were somewhat stuck in their prediction of where the antibodies bind. In this model you can condition, basically give hints. So he ran the experiment of giving hints to the model: okay, you should bind to this residue. You should bind to the first residue, or the 11th residue, or the 21st residue, basically every 10 residues, scanning the entire antigen.

Brandon [00:46:33]: Residues are the...

Gabriel [00:46:34]: The amino acids. The amino acids, yeah. So the first amino acid, the 11th amino acid, and so on. It's doing a scan, conditioning the model on each position, then looking at the confidence of the model in each of those cases and taking the top. So it's a somewhat crude way of doing inference-time search. But surprisingly, for antibody-antigen prediction, it actually helped quite a bit. There are some interesting ideas where, as the developer of the model, you say, wow, why would the model be so dumb? But it's very interesting, and it leads you to start thinking: okay, can I do this not with brute force, but in a smarter way?

RJ [00:47:22]: And so we've also done a lot of work in that direction. And that speaks to the power of scoring. We're seeing that a lot; I'm sure we'll talk about it more when we get to BoltzGen. But our ability to take a structure and determine that that structure is good, somewhat accurate, whether that's a single chain or an interaction, is a really powerful way of improving the models.
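The residue-hint scan just described amounts to a crude inference-time search: condition the model on a binding hint at every 10th residue of the antigen and keep the most confident prediction. The sketch below assumes a hypothetical `predict_with_hint(residue_index) -> (structure, confidence)` wrapper around a conditioned predictor; it is not an API from any actual library.

```python
def epitope_scan(predict_with_hint, antigen_length, stride=10):
    """Run the predictor once per hint position (0-based residue index
    0, 10, 20, ..., i.e. the 1st, 11th, 21st residue) and return the
    (structure, confidence, hint_index) with the highest confidence."""
    best = None
    for idx in range(0, antigen_length, stride):
        structure, confidence = predict_with_hint(idx)
        if best is None or confidence > best[1]:
            best = (structure, confidence, idx)
    return best
```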
If you can sample a ton, and you assume that if you sample enough you're likely to get a good structure in there, then it really just becomes a ranking problem. And part of the inference-time scaling Gabri was talking about is very much that: the more we sample, the more the ranking model ends up finding something it really likes. So I think our ability to get better at ranking is also what's going to enable the next big breakthroughs. Interesting.

Brandon [00:48:17]: But I guess, my understanding is, there's a diffusion model, you generate some stuff, and then, I guess it's just what you said, you rank it using a score, and then finally... So can you talk about those different parts? Yeah.

Gabriel [00:48:34]: So, first of all, one of the critical beliefs we had when we started working on Boltz-1 was that structure prediction models are somewhat our field's version of foundation models: they learn how proteins and other molecules interact, and we can leverage that learning to do all sorts of other things. With Boltz-2, we leveraged that learning to do affinity prediction: understanding, if I give you this protein and this molecule, how tight is that interaction? For BoltzGen, what we did was take that foundation model and fine-tune it to predict entirely new proteins. The way that works is that, for the protein you're designing, instead of feeding in an actual sequence, you feed in a set of blank tokens, and you train the model to predict both the structure of that protein
and also what the different amino acids of that protein are. So the way BoltzGen operates is that you feed in a target protein that you may want to bind to, or DNA, or RNA. And then you feed in a high-level design specification of what you want your new protein to be. For example, it could be an antibody with a particular framework, it could be a peptide, it could be many other things. And that's with natural language, or? That's basically prompting: we have a spec that you specify, and you feed that spec to the model. The model translates it into a set of tokens, a set of conditioning, a set of blank tokens. And then, as part of the diffusion model, it decodes a new structure and a new sequence for your protein. Then we take that and, as Jeremy was saying, we try to score it: how good a binder is it to the original target?

Brandon [00:50:51]: You're using basically Boltz to predict the folding and the affinity to that molecule. And then that kind of gives you a score? Exactly.

Gabriel [00:51:03]: You use this model to predict the folding, and then you do two things. One is that you predict the structure of the designed sequence with something like Boltz-2, and then you compare that structure with what the design model predicted. In the field this is called consistency: you want to make sure the structure you're predicting is actually what you were trying to design. That gives you much better confidence that it's a good design. So that's the first filtering.
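This first, consistency-based filter, keep only designs whose generated structure agrees with an independent refolding of the designed sequence, might look roughly like the sketch below. `refold` is a hypothetical stand-in for a structure predictor, and the RMSD here skips superposition for brevity; names and the 2.0 Å threshold are illustrative assumptions.

```python
import math

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between equal-length lists of
    (x, y, z) coordinates (no superposition, for brevity)."""
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))

def consistency_filter(designs, refold, threshold=2.0):
    """Keep only designs whose generated structure agrees with an
    independent refolding of the designed sequence. `designs` is a
    list of (sequence, designed_coords) pairs."""
    return [(seq, coords) for seq, coords in designs
            if rmsd(coords, refold(seq)) <= threshold]
```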
And the second filtering we did as part of the released BoltzGen pipeline is that we look at the confidence the model has in the structure. Now, unfortunately, going to your question about predicting affinity, confidence is not a very good predictor of affinity. One of the things we've actually made a ton of progress on since we released Boltz-2, and we have some new results we're going to announce soon, is the ability to get much better hit rates when, instead of relying on the confidence of the model, we directly try to predict the affinity of the interaction.

Brandon [00:52:03]: Okay. Just backing up a minute. So your diffusion model actually predicts not only the protein sequence, but also the folding of it. Exactly.

Gabriel [00:52:32]: And actually, one of the big things we did differently compared to other models in the space, and there were some papers that had already done this before, but we really scaled it up, was basically merging structure prediction and sequence prediction into almost the same task. So the way BoltzGen works is that the only thing you're doing is predicting the structure. The only supervision we give is supervision on the structure. But because the structure is atomic, and the different amino acids have different atomic compositions, from the way the model places the atoms we recover not only the structure it wanted but also the identity of the amino acid the model believed was there. So instead of having two supervision signals, one discrete and one continuous, that somewhat don't interact well together,
we built an encoding of sequences in structures that allows us to use exactly the same supervision signal we were using for Boltz-2, largely similar to what AlphaFold3 proposed, which is very scalable. And we can use that to design new proteins. Oh, interesting.

RJ [00:53:58]: Maybe a quick shout-out to Hannes Stark on our team, who did all this work. Yeah.

Gabriel [00:54:04]: Yeah, that was a really cool idea. I mean, looking at the paper, there's this encoding where you just add a bunch of atoms, which can be anything, and then they get rearranged and basically plopped on top of each other, and that encodes what the amino acid is. It's sort of a unique way of doing this. It was such a cool, fun idea.

RJ [00:54:29]: I think that idea had existed before. Yeah, there were a couple of papers.

Gabriel [00:54:33]: Yeah, a couple of papers had proposed this, and Hannes really took it to large scale.

Brandon [00:54:39]: In the BoltzGen paper, a lot of the paper is dedicated to the validation of the model. In my opinion, all the people we talk to basically feel that wet-lab, or whatever the appropriate real-world validation is, is the whole problem, or not the whole problem, but a big giant part of it. So can you talk a little bit about the highlights there? Because to me, the results are impressive, both from the perspective of the model and of the sheer effort that went into the validation by a large team.

Gabriel [00:55:18]: First of all, I should start by saying that both when we were at MIT, in Tommi Jaakkola's and Regina Barzilay's labs, and at Boltz, we are not a bio lab, and we are not a therapeutics company.
So, to some extent, we were forced from the start to look outside our group and our team for the experimental validation. One of the things Hannes really pioneered on the team was the idea: can we go not just to one specific group, find one specific system, maybe overfit a bit to that system, and validate there, but instead test this model across a very wide variety of settings? Protein design is such a wide task, with all sorts of applications from therapeutics to biosensors and many others, so can we get a validation that cuts across many different tasks? He basically put together, I think, something like 25 different academic and industry labs that committed to testing some of the designs from the model (some of this testing is still ongoing) and giving the results back to us, in exchange for hopefully getting some great new sequences for their task. He was able to coordinate this very wide set of scientists, and already in the paper I think we shared results from eight to ten different labs: designing peptides targeting ordered proteins, peptides targeting disordered proteins, results on designing proteins that bind small molecules, results on designing nanobodies, across a wide variety of targets. That gave the paper a lot of validation of the model, validation that was broad.

Brandon [00:57:39]: And would those be therapeutics for those animals, or are they relevant to humans as well?
They're relevant to humans as well.

Gabriel [00:57:45]: Obviously you need to do some work to, quote unquote, humanize them, to make sure they have the right characteristics so they're not toxic to humans, and so on.

RJ [00:57:57]: There are some approved medicines on the market that are nanobodies. There's a general pattern in trying to design things that are smaller: they're easier to manufacture, but that comes with other challenges, potentially a bit less selectivity than something that has more "hands." But yeah, there's a big desire to design miniproteins, nanobodies, small peptides: modalities that just make great drugs.

Brandon [00:58:27]: Okay. I think we left off talking about validation in the lab, and I was very excited about seeing all the diverse validations you've done. Can you go into more detail about some specific ones?

RJ [00:58:43]: The nanobody one. I think we did, what was it, 15 targets? 14. 14 targets. The way this typically works is that we make a lot of designs, on the order of tens of thousands, then rank them and pick the top, in this case 15 per target. Then we measure success rates: how many targets we were able to get a binder for, and also, more generally, out of all the binders we designed, how many actually proved to be good binders. Some of the other experiments involved things like taking a small molecule and designing a protein that binds to it. That has a lot of interesting applications, for example, as Gabri mentioned, biosensing, which is pretty cool.
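The generate-rank-test loop RJ describes (tens of thousands of designs, keep the top 15 per target, then measure wet-lab success rates) can be sketched in a few lines. This is a hypothetical illustration only, not Boltz's actual pipeline; the function names, scoring, and toy data are invented placeholders.

```python
# Hypothetical sketch of the generate-rank-test loop described above.
# Scores and lab results here are toy placeholders, not real data.

def top_n_designs(candidates, score, n=15):
    """Rank candidate designs by a model-derived score; keep the best n."""
    return sorted(candidates, key=score, reverse=True)[:n]

def hit_rates(lab_results):
    """lab_results maps target -> list of booleans (did the design bind?)."""
    per_target = {t: sum(r) / len(r) for t, r in lab_results.items()}
    targets_with_binder = sum(1 for r in lab_results.values() if any(r))
    total_tested = sum(len(r) for r in lab_results.values())
    overall = sum(sum(r) for r in lab_results.values()) / total_tested
    return per_target, targets_with_binder, overall

# Rank three hypothetical designs by score and keep the top two.
scores = {"d1": 0.9, "d2": 0.2, "d3": 0.7}
best = top_n_designs(list(scores), score=scores.get, n=2)  # ['d1', 'd3']

# Toy lab readout: two targets, three tested designs each.
results = {"targetA": [True, False, True], "targetB": [False, False, False]}
per_target, n_hit, overall = hit_rates(results)
print(n_hit, round(overall, 2))  # prints: 1 0.33
```

The two reported numbers mirror the two metrics RJ distinguishes: targets with at least one binder, and the overall fraction of tested designs that bound.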
We had a disordered protein too, I think you mentioned. Those were some of the highlights.

Gabriel [00:59:44]: The way we structured those validations was, on the one hand, a whole set of different problems that the biologists we were working with came to us with. For example, in some of the experiments we designed peptides targeting RACC, a target involved in metabolism, and we had a number of other applications where we designed peptides or other modalities against other therapeutically relevant targets, as well as proteins that bind small molecules. The other testing we did was aimed at getting a broader sense of how the model performs, especially under generalization. One of the things we found in the field was that a lot of validation, outside of validation on specific problems, was done on targets that have many known interactions in the training data. So it's always a bit hard to understand how much these models are just regurgitating or imitating what they've seen in the training data versus really being able to design new proteins. So one of our experiments was to take nine targets from the PDB, filtered so that there is no known interaction in the PDB: the model has never seen this particular protein, or a similar protein, bound to another protein. There is no way the model can just tweak something from its training set and imitate a known interaction. And so we took those nine proteins.
We worked with Adaptive, a CRO, and tested 15 miniproteins and 15 nanobodies against each one of them. The very cool thing we saw was that on two-thirds of those targets, from those 15 designs, we were able to get nanomolar binders. Nanomolar is, roughly speaking, a measure of how strong the interaction is; a nanomolar binder has approximately the binding strength you need for a therapeutic.

So maybe switching directions a bit. Boltz Lab was just announced this week, or was it last week? This is your first product, if you want to call it that. Can you talk about what Boltz Lab is and what you hope people take away from it?

RJ [01:02:44]: As we mentioned at the very beginning, the goal with the product has been to address what the models don't do on their own. There are largely two categories there; actually, I'll split it into three. First, it's one thing to predict a single interaction, a single structure. It's another to very effectively search a design space to produce something of value. What we found building this product is that there are a lot of steps involved, and a real need to accompany the user through them. One of those steps, for example, is the creation of the target itself: how do we make sure the model has a good enough understanding of the target so that we can design something against it? There are all sorts of tricks you can do to improve a particular structure prediction. So that's the first stage. Then there's the stage of designing and searching the space efficiently.
For something like BoltzGen, you design many things and then rank them. For small molecules the process is a bit more complicated: we also need to make sure the molecules are synthesizable. The way we do that is with a generative model that learns to use appropriate building blocks, so that it designs within a space we know is synthesizable. So there's really a whole pipeline of different models involved in designing a molecule. That's the first thing; we call them agents. We have a protein agent and a small molecule design agent, and that's really at the core of what powers the Boltz Lab platform.

Brandon [01:04:22]: So these agents, are they a language model wrapper, or are they just your models and you're calling them agents? Because they perform a function on your behalf.

RJ [01:04:33]: They're more of a recipe, if you wish. I think we use that term because of the complex pipelining and automation that goes into all this plumbing. So that's the first part of the product. The second part is the infrastructure. We need to be able to do this at very large scale for any one group running a design campaign. Say you're screening a hundred thousand possible candidates to find the good one: that is a very large amount of compute. For small molecules it's on the order of a few seconds per design; for proteins it can be a bit longer. Ideally you want to do that in parallel; otherwise it's going to take you weeks.
So we've put a lot of effort into our ability to run a GPU fleet that allows any one user to do this kind of large parallel search.

Brandon [01:05:23]: So you're amortizing the cost over your users.

RJ [01:05:27]: Exactly. And to some degree, whether you use 10,000 GPUs for a minute or one GPU for God knows how long, it's the same cost, so you might as well parallelize if you can. A lot of work has gone into that, making it robust enough that a lot of people can be on the platform doing this at the same time. The third part is the interface, which comes in two shapes. One is an API, and that's really suited for companies that want to integrate these pipelines, these agents.

RJ [01:06:01]: We're already partnering with a few distributors that are going to integrate our API. The second part is the user interface, and we've put a lot of thought into that too. This is what I meant earlier by broadening the audience: that's what the user interface is about. We've built a lot of interesting features into it, for example for collaboration. When you have multiple medicinal chemists going through the results and trying to pick out which molecules to go test in the lab, it's powerful for each of them to provide their own ranking and then do consensus building. So there are a lot of features, around launching these large jobs but also around collaborating on analyzing the results, that we try to solve with that part of the platform.
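RJ's cost-parity point, that 10,000 GPUs for a minute costs about the same as one GPU running for a very long time, follows from billing by GPU-hours. A back-of-the-envelope sketch, where the $2/GPU-hour rate and 5 s/design timing are illustrative assumptions rather than Boltz's actual pricing:

```python
# Total cost scales with GPU-hours regardless of how they are split, so
# parallelizing a screen trades wall-clock time for fleet size at equal
# cost. The $2/GPU-hour rate and 5 s/design are assumed for illustration.

def screen_cost_and_time(n_designs, secs_per_design, n_gpus, usd_per_gpu_hour=2.0):
    gpu_hours = n_designs * secs_per_design / 3600
    wall_clock_hours = gpu_hours / n_gpus
    return gpu_hours * usd_per_gpu_hour, wall_clock_hours

# Screening 100,000 candidates at ~5 s each:
serial = screen_cost_and_time(100_000, 5, n_gpus=1)
parallel = screen_cost_and_time(100_000, 5, n_gpus=1000)
print(serial)    # same dollar cost either way...
print(parallel)  # ...but wall clock drops from ~139 hours to ~8 minutes
```

The dollar cost is identical in both calls; only the wall-clock time changes, which is why a shared fleet amortized across users makes large parallel searches practical.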
So Boltz Lab is a combination of these three objectives in one cohesive platform.

Who is this accessible to? Everyone. You do need to request access today; we're still ramping up usage, but anyone can request access. If you're an academic in particular, we provide a fair amount of free credit so you can play with the platform. If you're a startup or a biotech, you can also reach out, and we'll typically hop on a call just to understand what you're trying to do, and also provide a lot of free credit to get started. And with larger companies, we can deploy the platform in a more secure environment; those are more customized deals that we make with partners. That's the ethos of Boltz: this idea of serving everyone, not necessarily going after just the really large enterprises. That starts with the open source, but it's also a key design principle of the product itself.

Gabriel [01:07:48]: One thing I was thinking about with regard to infrastructure: in the LLM space, the cost of a token has gone down by a factor of a thousand or so over the last three years, right? Is it possible that you can exploit economies of scale in infrastructure, so that it's cheaper to run these things on your platform than for anyone to roll their own system?

RJ [01:08:08]: A hundred percent. We're already there. Running Boltz on our platform, especially for a large screen, is considerably cheaper than it would likely cost anyone to take the open-source model and run it themselves. And on top of the infrastructure, one of the things we've been working on is accelerating the models.
Our small molecule screening pipeline is 10x faster on Boltz Lab than it is in the open source, and that's also part of building a product, something that scales really well. We really wanted to get to a point where we could keep prices low enough that using Boltz through our platform is a no-brainer.

Gabriel [01:08:52]: How do you think about validation of your agentic systems? As you were saying earlier, AlphaFold-style models are really good at, let's say, monomeric proteins where you have co-evolution data. But now the whole point of this is to design something that doesn't have co-evolution data, something really novel. So you're leaving the domain that you know you're good at. How do you validate that?

RJ [01:09:22]: There are obviously a ton of computational metrics that we rely on, but those only take you so far. You really have to go to the lab and test: with method A versus method B, how much better is my hit rate? How much stronger are my binders? And it's not just about hit rate; it's also about how good the binders are. There's really no way around that. We've really ramped up the amount of experimental validation we do, so that we can track progress as scientifically soundly as possible.

Gabriel [01:10:00]: Yeah, one thing that is unique about us, and maybe companies like us, is that we're not working on just a couple of therapeutic pipelines where our validation would be focused on those targets.
When we do an experimental validation, we try to test across tens of targets, so that on the one hand we get a much more statistically significant result, and it really allows us to make progress on the methodological side without being steered by overfitting on any one particular system. And of course we choose, you know, w
We unpack how AI finally lets marketers count what matters, why attribution is broken across platforms, and how to use behavioral insight and benchmarking to choose better bets. We share a simple playbook: listen in niche communities, test in organic, scale what pops, and measure outcomes, not vanity.

• AI compressing the cost of data integration and analysis
• Platform bias, privacy limits, and pixel gaps in attribution
• Traffic down, leads up as signal quality improves
• Behavioral personas outperforming demographic personas
• Niche communities seeding mainstream trends with delay
• Organic as a testbed to inform paid investments
• Benchmarking growth against the category, not just yourself
• Operational discipline for clean data and controlled spend
• Directional signals over false precision to act faster

Guest Contact Information:
LinkedIn: linkedin.com/in/charliegrinnell
Website: rightmetric.co
Instagram: instagram.com/charliegrinnell
Twitter/X: x.com/CharlieGrinnell

More from EWR and Matthew:
Leave us a review wherever you listen: Spotify, Apple Podcasts, or Amazon Podcast
Free SEO Consultation: www.ewrdigital.com/discovery-call

With over 5 million downloads, The Best SEO Podcast has been the go-to show for digital marketers, business owners, and entrepreneurs wanting real-world strategies to grow online. Now, host Matthew Bertram, creator of LLM Visibility™ and the LLM Visibility Stack™ and Lead Strategist at EWR Digital, takes the conversation beyond traditional SEO into the AI era of discoverability. Each week, Matthew dives into the tactics, frameworks, and insights that matter most in a world where search engines, large language models, and answer engines are reshaping how people find, trust, and choose businesses. From SEO and AI-driven marketing to executive-level growth strategy, you'll hear expert interviews, deep-dive discussions, and actionable strategies to help you stay ahead of the curve.
Find more episodes here:
youtube.com/@BestSEOPodcast
bestseopodcast.com
bestseopodcast.buzzsprout.com

Follow us on:
Facebook: @bestseopodcast
Instagram: @thebestseopodcast
TikTok: @bestseopodcast
LinkedIn: @bestseopodcast

Connect With Matthew Bertram:
Website: www.matthewbertram.com
Instagram: @matt_bertram_live
LinkedIn: @mattbertramlive

Powered by: ewrdigital.com
Support the show
Resetting the culture code is essential to unlock Gen AI's value — aligning people, ethics, and collaboration so AI becomes a trusted partner for innovation, not a source of fear or disruption. That's the key take-away message of this episode of the Wise Decision Maker Show, which discusses resetting the culture code for the generative AI era. This article forms the basis for this episode: https://disasteravoidanceexperts.com/resetting-the-culture-code-for-the-generative-ai-era/
In this episode of The First Day from The Fund Raising School, host Bill Stanczykiewicz, Ed.D., sits down with Carly Berna, Vice President of Marketing (and the impressively titled “Fundraiser in Residence”) at Virtuous. Carly shares findings from the latest Virtuous Benchmark Report, a treasure trove of data gleaned from over 570 nonprofits using the platform for at least three years. The result? A layered look at donor trends across sectors and revenue sizes, from faith-based orgs to human services, all the way from scrappy sub-million-dollar shops to the $10M+ fundraising heavyweights. “Flat doesn't mean bad,” Carly notes, sometimes staying steady means you've weathered the storm. Bill and Carly dig into the meaty data highlights, starting with online giving. The average online gift increased by $5 in the last year and is up a whopping $22 since 2020, showing just how powerful digital channels are becoming, no surprise given Boomers are now a driving force online (61% of them give that way!). Meanwhile, Carly waves the mid-level donor flag with pride, celebrating growth in this oft-ignored group. Nonprofits are learning not to put all their donor eggs in one major gift basket. The conversation turns to recurring giving, a favorite of sustainability-minded fundraisers everywhere. While the average nonprofit sees 13% of their revenue coming from recurring donors, Virtuous' top quartile of performers boasts a hefty 33%. Donor retention is also slowly rebounding post-pandemic, reaching a six-year high of 50%. But Carly urges listeners not to settle, “Top performers hit 67%, so shoot for the stars!” Finally, the duo dives into donor acquisition and lifetime value. New donor acquisition is slipping, now around 30%, but those who do give are investing more over time, with average donor lifetime value rising to $784. 
Carly's message is clear: nonprofits need to be smart, not just generous: track your data, find your gaps, and don't just pat yourself on the back for being average. With the right balance of stewardship, segmentation, and sustainability, nonprofits can build donor relationships that last longer than most gym memberships.
The Gen AI adoption battle is won by engaging employees through hands-on learning, transparency, and involvement, turning fear into ownership and proving AI's value with real results that drive adoption, trust, and performance. That's the key take-away message of this episode of the Wise Decision Maker Show, which talks about how one financial firm won the Gen AI adoption battle. This article forms the basis for this episode: https://disasteravoidanceexperts.com/how-one-financial-firm-won-the-gen-ai-adoption-battle/
In this episode of Excess Returns, Rupert Mitchell returns to break down a rapidly shifting global macro landscape and explain how he is positioning across regions, assets, and market regimes. The conversation spans emerging markets, commodities, China, Latin America, US market leadership, and the risks building beneath familiar narratives. Rupert walks through the charts, frameworks, and portfolio construction decisions that underpin his current outlook, with a focus on duration, cash flows, and real assets in a changing cycle.

Topics covered include:
- Why US equity leadership is showing signs of fatigue after a decade-plus run
- The case for emerging markets as a multi-year relative trade
- Latin America as a commodity-driven opportunity rather than a political bet
- Brazil, Mexico, and Peru through the lens of fiscal policy and real assets
- Why India stands out as expensive within emerging markets
- China's equity market inflection and the role of domestic savings and fiscal support
- The difference between onshore A-shares and offshore Chinese equities
- Why Rupert prefers lower-beta, dividend-oriented exposure in China
- How AI is being deployed differently in China versus the US
- The risks facing enterprise software and long-duration growth assets
- Portfolio construction, benchmarking, and managing drawdowns across cycles
- How Rupert thinks about hedging, trend following, and capital preservation

Timestamps:
00:00 Macro market backdrop and early warning signals
01:00 Venezuela, oil, and why context matters more than headlines
04:40 The chart of truth and US versus international equities
07:00 Emerging markets relative performance and historical parallels
10:00 Duration risk, valuation, and the shift toward real assets
14:30 Mag 7 leadership, software weakness, and AI disruption
18:00 India valuations and the role of flows and derivatives
20:40 Latin America beyond politics: commodities and fiscal drivers
26:00 Brazil, Mexico, and country-level positioning
29:50 Benchmarking and why Latin America is a major overweight
32:10 China's equity inflection and the ABC framework
36:00 Fiscal policy, buybacks, and domestic savings in China
41:00 Tencent versus Alibaba and managing drawdowns
44:30 AI capex discipline in China versus the US
46:00 Stock selection in China and second-derivative opportunities
51:00 Portfolio construction, benchmarks, and risk management
58:00 Blind Squirrel Macro, live shows, and ongoing research
Losing some skills to Gen AI isn't decline — it's evolution. As AI takes over routine tasks, humans gain space for creativity, empathy, judgment, and strategy — the abilities that truly define our value. That's the key take-away message of this episode of the Wise Decision Maker Show, which talks about why losing skills to Gen AI is a winning strategy. This article forms the basis for this episode: https://disasteravoidanceexperts.com/why-losing-skills-to-gen-ai-is-a-winning-strategy/
This is the AI-generated discussion of my post, Why Benchmarking Fails. Enjoy! Here is the link to the original post: https://partnersinexcellenceblog.com/why-benchmarking-fails/
To accelerate adoption and impact, teach AI skills through hands-on, coached, time-boxed builds that produce real demos, connect to workflows, and make learning visible and actionable. That's the key take-away message of this episode of the Wise Decision Maker Show, which talks about how you actually teach AI skills. This article forms the basis for this episode: https://disasteravoidanceexperts.com/how-do-you-actually-teach-ai-skills/
A clear Gen AI adoption strategy aligns AI with business goals, engages people, uses the right tools, measures impact, and improves continuously — ensuring Gen AI delivers real ROI, not just new technology. That's the key take-away message of this episode of the Wise Decision Maker Show, which describes the 7 steps to a comprehensive Gen AI adoption strategy. This article forms the basis for this episode: https://disasteravoidanceexperts.com/7-steps-to-a-comprehensive-gen-ai-adoption-strategy/
Exploring the transformative potential of minor adjustments, McKay introduces the "Lever Principle" - the idea that a single, structural change can produce exponential results. He argues that massive life overhauls are often unnecessary; instead, true progress begins with the realization that "nothing will change in your life until you change something about your life." Beginning with architect Bjarke Ingels, whose Saturday creative sessions sparked a global firm, McKay explores case studies - like Chris Gardner's late-night studying and Chef Clare Smyth's questioning techniques - showing how habits rewire futures. Our host goes on to share strategies for "structural changes," such as James Dyson's altered commute or the art of "savoring." Join McKay for this important conversation, challenge yourself to maintain one non-negotiable change for thirty days, and learn how small, consistent steps can lead to monumental success.

Main Themes:
- Big success often starts with one small, structural change rather than a massive life reboot.
- Time is the primary resource needed to make whatever change is required.
- Changing the questions you ask can fundamentally alter your career trajectory and relationships.
- "Savoring" - the deliberate act of appreciating an activity after it happens - can spill over into all areas of life.
- Benchmarking and studying the success of others provides a roadmap for your own improvement.
- Recognizing when a phase of life is "over" is as critical as starting something new.
- Small changes are easier to implement because the emotional and mental resistance to them is low.

Top 10 Quotes:
1. "Nothing will change in your life until you change something about your life."
2. "You do not need a massive overhaul. You do not need a perfect plan. You do not need a life reboot. You need a lever."
3. "Life does not move until you do."
4. "If I don't change something today, the next twenty years will look exactly like the last twenty years."
5. "A billion-dollar idea began with a new way of getting to work."
6. "The questions you ask, both out loud and silently in your mind, shape your thinking and your decisions."
7. "We don't need to learn how to let things go; we just need to learn to recognize when they've already gone."
8. "Man only likes to count his troubles, but he does not count his joys."
9. "The emotional and mental resistance to small changes is very low."
10. "What you believe is more important than what has happened in the past."

Show Links:
Open Your Eyes with McKay Christensen
DOWNLOAD A FREE COPY OF JEFF'S Book Discernment

Is AI going to replace service companies… or make the best ones unstoppable? In this episode of Unemployable, Jeff Dudan sits down with Matt Tait, CEO of Decimal, to talk about the real future of accounting, bookkeeping, and tax—and why "AI-only" businesses struggle without trust, accountability, and credibility.

We get into:
- Why the winners won't be pure tech OR old-school firms (the hybrid model wins)
- How AI is compressing software cycles from years… to months… to days
- What small business owners actually need (hint: not more spreadsheets)
- Outsourcing + global teams (Philippines/India/South America) and where it's heading
- Why most owners don't even open their P&L—and what to do about it
- The advisory that matters: KPIs, margin leaks, pricing, payroll, and exit readiness
- Benchmarking traps: "average" doesn't mean "good"
- What Decimal is building: an AI-enabled bookkeeping + tax franchise model

CONNECT WITH MATT TAIT:
LinkedIn
Decimal.com
Matt's podcast (After the First Million)

SUBSCRIBE for more episodes on franchising, entrepreneurship, AI, leadership, and building companies that last.

#Unemployable #JeffDudan #Entrepreneurship #SmallBusiness #BusinessGrowth #Accounting #Bookkeeping #TaxPlanning #FinancialOperations #CashFlow #Profitability #KPIs #BusinessAdvisory #AIinBusiness #ArtificialIntelligence #Automation #Outsourcing #Philippines #ProfessionalServices #Franchising #FranchiseBusiness #BusinessSystems #Leadership #Scaling #ExitStrategy

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
As we close out 2025, we're wrapping up more than just a year. This episode marks the conclusion of the Machine Shop MBA series, a collaboration with CLA and Modern Machine Shop built around insights from the Top Shops benchmarking program. What started as a practical exploration of shop metrics ends with a much bigger question: what truly separates shops that survive from shops that endure? For this final chapter, we're joined again by Brent Donaldson of Modern Machine Shop, who helped kick off the series earlier in the year. Drawing from hundreds of shop visits and years of benchmarking data, Brent helps us connect the dots across operations, finance, leadership, and strategy. Together, we reflect on a clear shift happening across manufacturing: moving away from pure "rise and grind" thinking and toward intentionally designed systems. Throughout the episode, we revisit five deceptively simple questions pulled directly from the Top Shops survey. These questions challenge assumptions and expose where real opportunity lives. From RFQ response time and revenue per employee to reinvestment discipline, standardized scheduling, and succession planning, each one reinforces a central theme we've explored all year. Rather than chasing the next machine or relying on one big customer, the most resilient shops we see are building repeatable processes, measuring what matters, and reducing dependence on tribal knowledge. This conversation serves as both a reflection on what we've learned through the Machine Shop MBA series and a call to action as we head into 2026. If there's one takeaway we hope sticks, it's this: the shops that last aren't just collections of people and equipment. They are systems. Designed on purpose. Improved on purpose. And built to outlast any one individual. 
Segments
(0:00) Wrapping up 2025 and closing out the Machine Shop MBA series
(0:36) Why we created the series and partnered with CLA and Modern Machine Shop
(2:25) Why you need to head to the 2026 IMTS Exhibitor Workshop
(4:34) The shift from viewing shops as machines and people to viewing them as systems
(7:52) Moving from survival mode to disciplined, systems-based thinking
(12:33) Top Shops Question #1: RFQ response time as a competitive advantage
(15:55) Top Shops Question #2: Revenue per employee as a true efficiency metric
(17:15) What's Your Method? The unique financing process with Methods Machine Tools
(26:47) Grow your top and bottom line with CliftonLarsonAllen (CLA)
(27:37) How automation, workholding, and systems increase output per person
(32:16) Top Shops Question #3: Reinvesting in equipment, software, and training
(36:50) Why consistent reinvestment beats sporadic big spending
(37:51) Top Shops Question #4: Standardized scheduling versus tribal knowledge
(40:22) How poor systems create stress and constant firefighting
(43:05) Top Shops Question #5: Leadership and ownership transition planning
(46:01) The Top Shops 2026 Benchmarking survey opens February 1st, 2026
(47:27) How benchmarking accelerates maturity and reveals real gaps
(48:19) How we use the Top Shops survey as part of annual strategic planning
(49:19) Looking ahead to 2026 and continued collaboration
(50:00) Why we love the SMW Autoblok catalog and quality
(51:11) Final call to action and why benchmarking matters

Resources mentioned on this episode
- Why you need to head to the 2026 IMTS Exhibitor Workshop
- What's Your Method? The financing process with Methods Machine Tools
- The Top Shops 2026 Benchmarking survey opens February 1st, 2026
- Check out the SMW Autoblok catalog and quality

Connect With MakingChips
www.MakingChips.com
On Facebook | On LinkedIn | On Instagram | On Twitter | On YouTube
Effective Gen AI adoption relies on tracking AI skills development. Data-driven learning, personalized training, and real-world metrics ensure employees confidently apply Gen AI to drive measurable business impact. That's the key take-away message of this episode of the Wise Decision Maker Show, which discusses what tracking Gen AI skills can teach us about the future of work.This article forms the basis for this episode: https://disasteravoidanceexperts.com/what-tracking-gen-ai-skills-can-teach-us-about-the-future-of-work/
Regular, collaborative check-ins help leaders navigate AI disruption by aligning teams, fostering psychological safety, and driving continuous improvement—turning Gen AI experimentation into real business results. That's the key take-away message of this episode of the Wise Decision Maker Show, which describes how to tame the Gen AI disruption with regular check-ins. This article forms the basis for this episode: https://disasteravoidanceexperts.com/taming-the-gen-ai-disruption-with-regular-check-ins Dr. Gleb Tsipursky bio https://disasteravoidanceexperts.com/glebtsipursky Dr. Gleb Tsipursky LinkedIn (send message when connecting) https://www.linkedin.com/in/dr-gleb-tsipursky/ Dr. Gleb Tsipursky's latest books: "ChatGPT for Thought Leaders and Content Creators: Unlocking the Potential of Generative AI for Innovative and Effective Content Creation" is available at https://amzn.to/3YI2vuc "Returning to the Office and Leading Hybrid and Remote Teams: A Manual on Benchmarking to Best Practices for Competitive Advantage" is available at https://disasteravoidanceexperts.com/hybrid/ "Never Go With Your Gut: How Pioneering Leaders Make the Best Decisions and Avoid Business Disasters" is available at https://disasteravoidanceexperts.com/nevergut "The Blindspots Between Us: How to Overcome Unconscious Cognitive Bias and Build Better Relationships" is available at https://disasteravoidanceexperts.com/blindspots
In episode #338 of SaaS Metrics School, Ben explains how to quickly sanity-check your sales and marketing forecast for the upcoming year using one high-signal SaaS metric: the Cost of ARR. As founders and CFOs finalize budgets, Ben shows how mismatches between projected bookings and planned go-to-market spend can reveal unrealistic assumptions before they turn into missed targets. Using simple examples, Ben walks through how the Cost of ARR connects sales and marketing spend, net new ARR bookings, and historical performance—making it one of the most effective tools for validating SaaS and AI company forecasts during budget season. What You'll Learn How to use the Cost of ARR to validate your sales and marketing budget The relationship between sales and marketing spend and net new ARR bookings How to identify unrealistic growth assumptions in your forecast The difference between the blended Cost of ARR, Cost of New ARR, and Cost of Expansion ARR Why historical performance should anchor forward-looking forecasts How benchmarking by ACV and sales motion improves forecast accuracy Why It Matters Sales and marketing forecasts often fail because spend and bookings assumptions are disconnected Cost of ARR provides a mechanical reality check before committing to a budget Overly aggressive ARR targets can be identified early and corrected Underspending on go-to-market becomes visible when bookings expectations are too conservative Benchmarking against peers helps validate whether forecast assumptions are realistic Strong financial modeling and forecasting discipline improves board and investor confidence Resources Mentioned Cost of ARR metric framework: https://www.thesaascfo.com/saas-cac-ratio/ Benchmarking data from Ray Rike at Benchmarkit.ai Concepts from SaaS FP&A forecasting and go-to-market efficiency analysis: https://www.thesaascfo.com/saas-cac-ratio/ and https://www.thesaasacademy.com/the-saas-metrics-foundation
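The sanity check Ben describes can be sketched in a few lines of Python. This is an illustrative sketch only: the dollar figures and the 0.8 tolerance threshold below are hypothetical, and it assumes the common definition of Cost of ARR as sales and marketing spend divided by net new ARR booked.

```python
# Hypothetical sanity check of a go-to-market budget using Cost of ARR.
# Cost of ARR here = sales & marketing spend / net new ARR booked.

def cost_of_arr(sm_spend: float, net_new_arr: float) -> float:
    """Dollars of S&M spend per dollar of net new ARR."""
    if net_new_arr <= 0:
        raise ValueError("net new ARR must be positive")
    return sm_spend / net_new_arr

# Historical performance: $2.4M of S&M spend produced $1.6M of net new ARR.
historical = cost_of_arr(2_400_000, 1_600_000)  # 1.5

# Proposed budget: $3.0M of S&M spend forecast to yield $3.0M of net new ARR.
forecast = cost_of_arr(3_000_000, 3_000_000)    # 1.0

# A forecast Cost of ARR far below the historical ratio implies the plan
# assumes a large jump in go-to-market efficiency -- a flag to revisit
# either the bookings target or the planned spend.
if forecast < 0.8 * historical:
    print("Forecast assumes unusually efficient go-to-market; revisit assumptions.")
```

The same comparison works in the other direction: a forecast ratio well above the historical one suggests the bookings target is too conservative for the planned spend.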
In one of the most popular episodes of the year, Legalbenchmarks.ai Founder Anna Guo discusses her organization's research that tests whether artificial intelligence custom-made for legal tasks performs better than general AI tools. Anna is a former BigLaw lawyer who left the practice to become an entrepreneur and now focuses her energies on quantifying the utility of AI in the legal industry. Anna's initial anecdotal research for colleagues quickly revealed a strong community interest in a systematic approach to evaluating legal AI tools. This led to the creation of Legalbenchmarks.AI, dedicated to finding out where the promise of humans plus AI is truly better than humans alone or AI alone. The core of the research involves measuring the "delta," or the extent to which AI can elevate human performance. To date, Legalbenchmarks.ai has conducted two major studies: one on information extraction from legal sources and a second on contract review and redlining. Key Findings from the Studies: Accuracy vs. Qualitative Usefulness: The highest-performing general-purpose AI tools (like Gemini) were often found to be more accurate and consistent. However, the legal-specific AI tools often received higher marks in qualitative usefulness and helpfulness, as they align more closely with existing legal workflows. Methodology: The testing goes beyond simple accuracy. It includes a three-part assessment: Reliability (objective accuracy and legal adequacy), Usability (qualitative metrics like helpfulness and coherence for tasks such as brainstorming), and Platform Workflow Support (integration, citation checks, and other features). Human-AI Performance: In the contract analysis study, AI tools matched or exceeded the human baseline for reliability in producing first drafts. Crucially, the data demonstrated that the common belief that "human plus AI will always outperform AI alone" was false; the top-performing AI tool alone still had a higher accuracy rate than the human-plus-AI combo. 
Risk Analysis: A significant finding was that legal AI tools were better at flagging material risks, such as compliance or unenforceability issues in high-risk scenarios, that human lawyers missed entirely. This suggests AI can act as a crucial safety net. Strengths Comparison: AI excels at brainstorming, challenging human bias, and performing mass-scale routine tasks (e.g., mass contract review for simple terms). Humans retain a significant edge in ingesting nuanced context and making commercially reasonable decisions, a kind of judgment that AI's literal instruction-following can lack.
Ron Lanton III, Esq, joined Over the Counter to discuss the Most Favored Nation drug pricing policy and what exactly it may mean for stakeholders heading into 2026.
The final episode of The Food Professor Podcast for 2025 delivers a timely, wide-ranging examination of Canada's food system, blending macroeconomic analysis with a compelling, real-world industry case study. Co-hosts Michael LeBlanc and Dr. Sylvain Charlebois open the episode by reviewing their Top 10 Food Stories of 2025, a list that reflects a year defined less by short-term volatility and more by deep, structural challenges.Among the key themes is the growing consensus that food inflation in Canada is structural rather than cyclical, driven by long-standing issues such as interprovincial trade barriers, fragmented labour policy, logistics inefficiencies, regulatory complexity, and limited scale in food processing. The hosts revisit major developments including tariffs and counter-tariffs, the Grocery Code of Conduct, meat counter economics, the Ozempic and GLP-1 drug effect on food consumption, and the controversy surrounding cloned meat approvals. Together, these stories underscore why Canada's food system struggles to absorb shocks compared to larger, more flexible global peers.The second half of the episode features an in-depth interview with Ryan Koeslag, Executive Vice President & CEO of Mushrooms Canada, joined by Janet Krayden, Workforce Specialist at Mushrooms Canada. Together, they provide a rare inside look at one of Canada's most technologically advanced yet frequently misunderstood agricultural sectors. Listeners learn that Canadian mushrooms are grown 365 days a year, supply nearly 100% of domestic grocery demand, and export approximately 40% of production to the United States—all while operating with largely organic practices and world-class automation.A central focus of the discussion is labour. Koeslag and Krayden explain that mushroom farming is non-seasonal, capital-intensive, and highly technical, yet still dependent on skilled human labour for harvesting. 
Recent changes to the Temporary Foreign Worker Program, combined with the cancellation of the Agri-Food Immigration Pilot, have created significant unintended consequences for growers, threatening productivity, workforce stability, and long-term investment. The conversation also explores sustainability and innovation, highlighting Canada's leadership in mushroom automation, organic growing methods, and environmental stewardship. Krayden emphasizes that farmers are strong advocates for worker well-being and housing—an aspect often overlooked in public debate. The episode closes with forward-looking commentary on 2026, including front-of-package labelling, AI-driven pricing ethics, and the ongoing challenge of scaling Canada's “unscalable middle” in food processing—making this episode both a reflective year-end review and a practical roadmap for the year ahead. Mushrooms Canada Jobs webpage https://mushrooms.ca/mushroom-jobs/ Mushrooms Canada Recipes https://mushrooms.ca/recipes/ Nutrition Page: https://mushrooms.ca/nutritional-benefits/ Quality farm worker housing Highline campus in Leamington: https://www.facebook.com/share/p/1CNj4H8dGz/ MORE high quality mushroom farm worker housing offered in Ontario for our farm workers https://youtu.be/ocrXL9DX7ys?si=Okdfpk2kx9lVHOoo The Food Professor #podcast is presented by Caddle. About Us Dr. Sylvain Charlebois is a Professor in food distribution and policy in the Faculties of Management and Agriculture at Dalhousie University in Halifax. He is also the Senior Director of the Agri-food Analytics Lab, also located at Dalhousie University. Before joining Dalhousie, he was affiliated with the University of Guelph's Arrell Food Institute, which he co-founded. Known as “The Food Professor”, his current research interest lies in the broad area of food distribution, security and safety. 
Google Scholar ranks him as one of the world's most cited scholars in food supply chain management, food value chains and traceability.He has authored five books on global food systems, his most recent one published in 2017 by Wiley-Blackwell entitled “Food Safety, Risk Intelligence and Benchmarking”. He has also published over 500 peer-reviewed journal articles in several academic publications. Furthermore, his research has been featured in several newspapers and media groups, including The Lancet, The Economist, the New York Times, the Boston Globe, the Wall Street Journal, Washington Post, BBC, NBC, ABC, Fox News, Foreign Affairs, the Globe & Mail, the National Post and the Toronto Star.Dr. Charlebois sits on a few company boards, and supports many organizations as a special advisor, including some publicly traded companies. Charlebois is also a member of the Scientific Council of the Business Scientific Institute, based in Luxemburg. Dr. Charlebois is a member of the Global Food Traceability Centre's Advisory Board based in Washington DC, and a member of the National Scientific Committee of the Canadian Food Inspection Agency (CFIA) in Ottawa. Michael LeBlanc is the president and founder of M.E. LeBlanc & Company Inc, a senior retail advisor, keynote speaker and now, media entrepreneur. He has been on the front lines of retail industry change for his entire career. Michael has delivered keynotes, hosted fire-side discussions and participated worldwide in thought leadership panels, most recently on the main stage in Toronto at Retail Council of Canada's Retail Marketing conference with leaders from Walmart & Google. 
He brings 25+ years of brand/retail/marketing & eCommerce leadership experience with Levi's, Black & Decker, Hudson's Bay, CanWest Media, Pandora Jewellery, The Shopping Channel and Retail Council of Canada to his advisory, speaking and media practice. Michael produces and hosts a network of leading retail trade podcasts, including the award-winning No.1 independent retail industry podcast in America, Remarkable Retail with his partner, Dallas-based best-selling author Steve Dennis; Canada's top retail industry podcast The Voice of Retail and Canada's top food industry and one of the top Canadian-produced management independent podcasts in the country, The Food Professor with Dr. Sylvain Charlebois from Dalhousie University in Halifax. Rethink Retail has recognized Michael as one of the top global retail experts for the fourth year in a row, Thinkers 360 has named him one of the Top 50 global thought leaders in retail, RTIH has named him a top 100 global thought leader in retail technology and Coresight Research has named Michael a Retail AI Influencer. If you are a BBQ fan, you can tune into Michael's cooking show, Last Request BBQ, on YouTube, Instagram, X and yes, TikTok. Michael is available for keynote presentations helping retailers, brands and retail industry insiders understand the current state and future of the retail industry in North America and around the world.
"Money is one of those things that people just consider as an afterthought. The older you get and the more responsibilities you have, you start to realize money is a tool, and it's required for health and happiness." In this episode, Heather sits down with Douglas and Heather Boneparth to dig into the often uncomfortable but wholly necessary work of talking about finances with your partner. Together, they unpack what really happens beneath the surface when couples avoid money conversations, and how bringing these truths forward can transform the intimacy, teamwork, and emotional safety in your relationship. Explore the stories we inherit around money, why it feels so vulnerable to speak up, and the simple shifts that help you feel like you're on the same team again. What to listen for: ✨ Money is not an afterthought; it's a tool for cultivating health and happiness ✨ How Douglas and Heather first understood that they needed to talk about money ✨ Overcoming the fear of speaking up with your partner about finances "You're communicating even when you're not communicating. Your body language, your actions, and the way that you're passive-aggressive about it. You're just failing to say the thing that needs to be said." ✨ Opening the door to teamwork and shared responsibility in your relationship ✨ Letting go of the "Prince Charming is going to save me" and taking ownership ✨ The importance of acknowledging the impact partners have on one another "The collective ambition needs to come to life here because you share a life together. You still get to be your own person, but you're playing a team game. Championships are won through team efforts, and this is what collaboration is all about. Your marriage is essentially a collaboration of life, and money is this game that you don't get to opt out of." 
✨ Why freedom is emotionally uncomfortable and how to navigate that ✨ The question Douglas and Heather ask every couple to bring them together ✨ The power of energetic time management and chasing the feeling you're after "It's not the thing you want, it's the feeling. You say you want more money, more accolades, another book, this, this, this. Amazing. Take it. But what's the feeling you're after?" ✨ The importance of valuing the journey as much as you value the end goal ✨ Benchmarking your money goals and navigating the resistance that comes up ✨ Understanding that intimacy is directly connected to your financial health About Douglas and Heather Boneparth: Heather and Douglas Boneparth are the co-authors of Money Together: How to find fairness in your relationship and become an unstoppable financial team. By day, Douglas Boneparth is a CERTIFIED FINANCIAL PLANNER™ and the founder of Bone Fide Wealth in New York City. Heather spent more than a decade as a corporate lawyer before joining the firm as the director of business and legal affairs. They also co-write a weekly newsletter, The Joint Account, which helps couples talk about money. Connect with Douglas and Heather: Money Together, available in all formats and at http://www.domoneytogether.com The Joint Account, available on Substack at https://www.readthejointaccount.com Bone Fide Wealth, visit https://bonefidewealth.com Everywhere on social: @averagejoelle + @dougboneparth ******* For those of you who are ready to stop feeling drained, overextended, and out of alignment… join me for a one-on-one Time & Energy Audit, a focused session designed to help high-achieving women uncover what's draining them, clarify what truly matters, and create a simple plan that fits their life. We'll pinpoint your biggest time + energy leaks, identify the top areas to focus on for quick momentum, and map out exactly what to let go of so you can reclaim your energy, your time, and your joy. 
Ready to make your time work for you without adding more to your plate? Book a Time & Energy Audit: heatherchauvin.com/audit Apply for the next Coaching Cohort: heatherchauvin.com/apply Not ready for 1:1? Join the membership (cancel anytime): heatherchauvin.com/membership
Remote work expands opportunity and economic stability for older workers with disabilities, removing barriers and helping them stay employed longer in an inclusive, flexible labor market. That's the key take-away message of this episode of the Wise Decision Maker Show, which describes how remote work offers a lifeline for older workers with disabilities. This article forms the basis for this episode: https://disasteravoidanceexperts.com/remote-work-offers-a-lifeline-for-older-workers-with-disabilities-research-shows/
This week on The Geek in Review, we sit down with Jennifer McIver, Legal Ops and Industry Insights at Wolters Kluwer ELM Solutions. We open with Jennifer's career detour from aspiring forensic pathologist to practicing attorney to legal tech and legal ops leader, sparked by a classic moment of lawyer frustration, a slammed office door, and a Google search for “what else can I do with my law degree.” From implementing Legal Tracker at scale, to customer success with major clients, to product and strategy work, her path lands in a role built for pattern spotting, benchmarking, and translating what legal teams are dealing with into actionable insights.Marlene pulls the thread on what the sharpest legal ops teams are doing with their data right now. Jennifer's answer is refreshingly practical. Visibility wins. Dashboards tied to business strategy and KPIs beat “everything everywhere all at once” reporting. She talks through why the shift to tools like Power BI matters, and why comfort with seeing the numbers is as important as the numbers themselves. You cannot become a strategic partner if the data stays trapped inside the tool, or inside the legal ops team, or inside someone's head.Then we get into the messy part, which is data quality and data discipline. Jennifer points out the trap legal teams fall into when they demand 87 fields on intake forms and then wonder why nobody enters anything, or why every category becomes “Other,” also known as the graveyard of analytics. Her suggestion is simple. Pick the handful of fields that tell a strong story, clean them up, and get serious about where the data lives. She also stresses the role of external benchmarks, since internal trends mean little without context from market data.Greg asks the question on everyone's bingo card, what is real in AI today versus what still smells like conference-stage smoke. Jennifer lands on something concrete, agentic workflows for the kind of repeatable work legal ops teams do every week. 
She shares how she uses an agent to turn event notes into usable internal takeaways, with human review still in the loop, and frames the near-term benefit as time back and faster cycles. She also calls out what slows adoption down inside many companies: internal security and privacy reviews, plus AI committees that sometimes lag behind the teams trying to move work forward. Marlene shifts to pricing, panels, AFAs, and what frustrates GCs and legal ops leaders about panel performance. Jennifer describes two extremes, rigid rate programs with little conversation, and “RFP everything” process overload. Her best advice sits in the middle: talk early, staff smart, and match complexity to the right team, so cost and risk make sense. She also challenges the assumption that consolidation always produces value. Benchmarking data often shows you where you are overpaying for certain work types, even when volume discounts look good on paper. We close with what makes a real partnership between corporate legal teams and firms, and Jennifer keeps returning to two themes, communication and transparency, with examples. Jennifer's crystal ball for 2026 is blunt and useful: data first, start the hard conversations now, and take a serious look at roles and skills inside legal ops, because the job is changing fast. Links: Jennifer McIver's LinkedIn page Wolters Kluwer ELM Solutions homepage LegalVIEW Insights reports homepage LegalVIEW DynamicInsights page TyMetrix 360° page Listen on mobile platforms: Apple Podcasts | Spotify | YouTube [Special Thanks to Legal Technology Hub for sponsoring this episode.] Email: geekinreviewpodcast@gmail.com Music: Jerry David DeCicca
This episode of The Food Professor Podcast opens with Michael and Sylvain analyzing the most pressing developments shaping Canada's food and retail landscape. Sylvain reflects on the extraordinary national and global reach of Canada's Food Price Report, which this year generated unprecedented media attention and continues to influence retailers, manufacturers, governments, and consumers planning for 2026. They dig into the structural issues behind Canada's complex food-tax regime, discuss why the GST holiday changed how Canadians think about food pricing, and explore the broader economic forces influencing consumer behaviour.The hosts then turn to one of the most surprising developments of the season: mounting instability in the chicken sector. With nine consecutive missed production cycles, increased reliance on imports, and confusion around border testing, the system designed to provide stability is under strain. Sylvain breaks down why this matters for households, grocers, foodservice operators, and the broader supply chain—especially as chicken remains Canada's most-purchased protein. The conversation then expands southward to U.S. agricultural subsidies, tariff battles, Costco's legal challenge over tariff refunds, and the potential fallout of proposed U.S. tariffs on Canadian fertilizer.The second half of the episode shifts to a live interview recorded at the Coffee Association of Canada conference, where Michael and Sylvain sit down with Carman Allison, Vice President, NIQ Canada, one of the country's most respected consumer data voices. Carman previews his conference keynote, “Navigating Disruption,” and explains why coffee inflation is reshaping buying behaviour even among loyal consumers who consider coffee essential. 
He outlines NIQ's segmentation showing that 29% of Canadian households are now financially vulnerable—and how this is affecting deal-seeking, product substitution, and consumption patterns.Drawing on NIQ's expanded Omni Shopper Panel, Carman describes how rapid multicultural population growth is shifting beverage preferences, why Generation X now holds the greatest spending power, and how value-seeking is reshaping entire store categories. He also reveals early evidence of the GLP-1 effect, where households using weight-loss or diabetes medications show measurable declines in food consumption.Carman closes by highlighting growth opportunities in instant coffee, protein-and-coffee hybrids, Maple-forward flavour innovation, and the continued rise of home-meal-replacement programs. His insights give retailers and suppliers a grounded, data-rich roadmap for growth in a highly price-sensitive marketplace. The Food Professor #podcast is presented by Caddle.
AI agent building turns passive learning into practical skill-building, helping associations boost member capability, deepen engagement, and unlock new revenue opportunities through hands-on, coached innovation. This is the key take-away from this episode, which talks about how AI agent building can take you from education to revenue.You can find the article that forms the basis for this episode at https://disasteravoidanceexperts.com/from-education-to-revenue-with-ai-agent-building/
Constant remote work requests signal a disconnect. Listening to employee needs through surveys and conversations is key to building trust, boosting morale, and shaping a policy that truly works for everyone. That's the key take-away message of this episode of the Wise Decision Maker Show, which talks about the cure for constant remote work requests.This article forms the basis for this episode: https://disasteravoidanceexperts.com/the-cure-for-constant-remote-work-requests/
In this episode of the Wise Decision Maker Show, Dr. Gleb Tsipursky speaks to David Lewallen, CEO of Verbatim Digital, about winning the AI chatbot marketing competition. You can learn about Verbatim Digital at https://verbatimdigital.com/
Peter Atwater, one of the leading voices on confidence-driven behavior in markets and society, joins Lance Roberts to share how certainty, control, and herd mentality shape every major trend investors face today. Lance and Peter discuss The Confidence Map, why people behave differently when they're in the "comfort zone" versus the "stress center," and how these shifts explain the rise of speculative investing, the bifurcated K-shaped economy, and the growing disconnect between Wall Street and Main Street. Atwater explores how consumer confidence is driving AI enthusiasm, why the workplace has split between white-collar and blue-collar realities since Covid, and what it will take to move the U.S. back to a non-K-shaped economy. We also dive into Maslow's hierarchy, the collapse of social trust, and what happens when possibility starts to feel like threat. For investors, Atwater lays out what assets are "ice cold or lukewarm," why scrutiny and confidence move in opposite directions, and what to own when today's hot spots finally cool off. We also examine ETFs, gamified trading, the tragedy of benchmarking, and how declining confidence reshapes moral behavior in the markets. 0:00 - INTRO 0:18 - Who is Peter Atwater? 1:10 - Understanding the Behavior of the Herd 2:32 - More Certainty & Control 3:48 - The Confidence Map - The Box Chart - Comfort Zone vs Stress Center 7:03 - Consumer Confidence Metrics & the AI Space 9:24 - The Bifurcated Economy & The K-Shape; White Collar vs Blue Collar Workers During Covid 12:12 - Getting back to a non-K Economy - Looking at the cumulative impact on the economy Maslow's Hierarchy of Need - we need to start at the bottom, make those at the top more aware of what's happening around them 16:48 - What policy should be implemented to accomplish this? How to create income caps on provider side? The K-shaped economy creates slaves to two masters. 
19:40 - The Mamdani Effect & the risk to our system - the Bottom is really purple, not red or blue 22:35 - The Problem with the Federal Reserve Re-thinking Trickle-down Economics 24:05 - What Happens when Social Trust Collapses - Concern About AI - the furthest thing away from Main Street; when possibility begins to look like threat. 27:24 - The Economy is like a top-heavy Jenga tower, with a circle of flows instead of columns of support Reconsideration of multi-colored pie charts as measures of mood; 31:11 - What should investors own today that are ice-cold or lukewarm; Plan for what you'd like to have happen, but prepare for what you cannot imagine. Where will the money go when the hot spots cool? Scrutiny & Confidence are inversely related What do you own that's tangible? (How to test for confidence) The tragedy of Benchmarking 34:17 - Looking at ETF creation - how to look at sentiment The creation of a gambling environment in investing - the gamification of the markets 37:00 - As confidence falls, the moral compass changes The more you trade, the less money you make. 39:19 - The industry has moved to preying on investors instead of helping them 40:18 - Plan for what you can imagine, be prepared for what you cannot Panic is a reason to be optimistic 41:38 - How Certainty and Control apply at the individual level - Closing thoughts #PeterAtwater #BehavioralFinance #KShapedEconomy #InvestorPsychology #AIandMarkets
Phill Robinson of Boardwave joins Miguel Alava and Massimo Ghislandi of AWS to share research and actionable strategies for European software companies using cloud infrastructure, AI features, and marketplace leverage to drive unprecedented growth.

Topics Include:
Boardwave and AWS reveal research on European software companies becoming global innovators.
Cloud-first businesses exceed customer expectations at 60% versus 46% for laggards.
Boardwave's 2,500 CEO members validate the findings: AI companies are growing 45% annually.
Leaders excel at gathering customer feedback for innovation and at implementing AI.
Top performers leverage marketplaces and consistently deliver continuous customer experience updates.
Cloud adoption is foundational for generative AI and agentic AI to scale.
Companies face different challenges depending on their current cloud maturity stage.
Cloud serves as table stakes before companies can capture AI growth opportunities.
A benchmarking tool helps identify a company's current position and plan strategic next steps.
Startups should solve universal problems globally, building painkillers, not vitamins.
Intercom scales customer service; Wix transforms efficiency through a cultural and engineering mindset.
The future requires a cloud foundation with AI features; AWS offers comprehensive support programs.

Participants:
Phill Robinson – Chair & Co-Founder, Boardwave
Miguel Alava – EMEA ISV General Manager, Amazon Web Services
Massimo Ghislandi – Head of EMEA Marketing for Software Companies, Amazon Web Services

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon.com/isv/
Jeff Huber is the CEO of Chroma, working on context engineering and building reliable retrieval infrastructure for AI systems. Context Engineering, Context Rot, & Agentic Search with the CEO of Chroma, Jeff Huber // MLOps Podcast #348.

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter

// Abstract
Jeff Huber drops some hard truths about "context rot" — the slow decay of AI memory that's quietly breaking your favorite models. From retrieval chaos to the hidden limits of context windows, he and Demetrios Brinkmann unpack why most AI systems forget what matters and how Chroma is rethinking the entire retrieval stack. It's a bold look at whether smarter AI means cleaner context — or just better ways to hide the mess.

// Bio
Jeff Huber is the CEO and cofounder of Chroma. Chroma has raised $20M from top investors in Silicon Valley and builds modern search infrastructure for AI.

// Related Links
Website: https://www.trychroma.com/

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Jeff on LinkedIn: /jeffchuber/

Timestamps:
[00:00] AI intelligence context clarity
[00:37] Context rot explanation
[03:02] Benchmarking context windows
[05:09] Breaking down search eras
[10:50] Agent task memory issues
[17:21] Semantic search limitations
[22:54] Context hygiene in AI
[30:15] Chroma on-device functionality
[38:23] Vision for precision systems
[43:07] ML model deployment challenges
[44:17] Wrap up
Earmark Media Presents a bonus episode of Earmark Podcast:
Live from Boston on the final stop of the Advisory Amplified tour, Blake sits down with James Erving from Fathom and Chris Macksey from Prix Fixe Accounting to explore what advisory services really mean beyond bookkeeping and compliance. Chris shares how his firm requires advisory for all restaurant clients, using industry expertise and operational metrics to guide decisions on everything from menu pricing to expansion timing. The conversation covers the difference between delivering information versus being integral to decision-making, with insights on forecasting, benchmarking, and why visual KPIs help clients with low financial literacy understand their business performance.

Meet Our Guests
James Erving
LinkedIn: https://www.linkedin.com/in/jameserving/
Learn more about Fathom
Official website: http://fathomhq.com
Chris Macksey
LinkedIn: https://www.linkedin.com/in/cmacksey/
Learn more about Prix Fixe Accounting
Official website: https://prixfixe.accountants/

Need CPE?
Get CPE for this episode: https://earmark.app/c/2912
Get CPE for listening to podcasts with Earmark: https://earmarkcpe.com
Subscribe to the Earmark Podcast: https://podcast.earmarkcpe.com

Get in Touch
Thanks for listening and the great reviews! We appreciate you! Follow and tweet @BlakeTOliver and @DavidLeary. Find us on Facebook and Instagram. If you like what you hear, please do us a favor and write a review on Apple Podcasts or Podchaser. Call us and leave a voicemail; maybe we'll play it on the show. DIAL (202) 695-1040.

Sponsorships
Are you interested in sponsoring The Accounting Podcast? For details, read the prospectus.

Need Accounting Conference Info?
Check out our new website - accountingconferences.com

Limited edition shirts, stickers, and other necessities
TeePublic Store: http://cloudacctpod.link/merch

Subscribe
Apple Podcasts: http://cloudacctpod.link/ApplePodcasts
YouTube: https://www.youtube.com/@TheAccountingPodcast
Spotify: http://cloudacctpod.link/Spotify
Podchaser: http://cloudacctpod.link/podchaser
Stitcher: http://cloudacctpod.link/Stitcher
Overcast: http://cloudacctpod.link/Overcast

Classifieds
Want to get the word out about your newsletter, webinar, party, Facebook group, podcast, e-book, job posting, or that fancy Excel macro you just created? Let the listeners of The Accounting Podcast know by running a classified ad. Go here to create your classified ad: https://cloudacctpod.link/RunClassifiedAd

Transcripts
The full transcript for this episode is available by clicking on the Transcript tab at the top of this page.
Workslop exposes the dark side of rushed AI adoption—polished but empty output that drains productivity and trust. The cure isn't better tech, but empowering people to co-create AI tools with purpose, ownership, and real-world impact. That's the key take-away message of this episode of the Wise Decision Maker Show, which talks about how "AI workslop" is draining modern enterprises.

This article forms the basis for this episode: https://disasteravoidanceexperts.com/how-ai-workslop-is-draining-modern-enterprises/
As the holiday shopping season gets into full swing, thoughts this year are turning to agents and the changing role of AI in commerce. Sheryl Kingstone returns to discuss the impacts and offer insights into strategies for putting agents to work, and for working in a world of agents, with host Eric Hanselman. AI is spanning generations in technology adoption and engagement in ways that previous technologies struggled to do. Search and digital engagement split sharply across generations, but the natural language capabilities of chat interfaces are stepping across technology hesitancy. That is creating challenges for businesses in reaching their customers. Search engine optimization is well understood, but how can a business ensure it's found by AI entities? Making more information available, while being more selective about which interactions get what data, is a critical balance to achieve. Bot management has become a lot more complicated. Building trust in autonomous experiences is the next big hurdle that AI technologies have to clear. Gen Z users are more comfortable with automated actions, but trust is still key. Building connections with brand advocates is just as important as it has always been, and it now has to be delivered through AI. Internal chat can be a good start, and it needs to be extended into a more complete assistant-style interaction. That requires a significant improvement over legacy chatbots, but the business it creates can make it worthwhile.

More S&P Global Content:
451 IT Insider: A roundup for IT decision-makers
Next in Tech | Ep. 205: Agentic AI Impacts
National Retail Federation looks to revitalize the modern commerce experience

For S&P Global subscribers:
Benchmarking digital maturity: Are businesses ready for agentic AI? – Highlights from Vot…
Pace of AI agent advancement could spur M&A in the sales automation market
Big Picture Report: 2026 AI Outlook – Unleashing agentic potential

Credits:
Host/Author: Eric Hanselman
Guest: Sheryl Kingstone
Producer/Editor: Feranmi Adeoshun
Published With Assistance From: Sophie Carr, Kyra Smith
True success in Gen AI initiatives comes not from competition but from collaboration: breaking down silos, sharing insights, and working together to unlock innovation, agility, and lasting organizational value. That's the key take-away message of this episode of the Wise Decision Maker Show, which talks about why collaboration beats competition in Gen AI initiatives.

This article forms the basis for this episode: https://disasteravoidanceexperts.com/why-collaboration-beats-competition-in-gen-ai-initiatives/
Embracing failure is essential to a successful AI strategy. It's not a setback but a catalyst for learning, innovation, and resilience that drives continuous improvement and real business impact in the evolving world of generative AI. That's the key take-away message of this episode of the Wise Decision Maker Show, which talks about why failure is the secret sauce in Gen AI strategy.

This article forms the basis for this episode: https://disasteravoidanceexperts.com/failure-the-secret-sauce-in-gen-ai-strategy/
Professional services leaders are constantly balancing growth, delivery, and profitability, but what if the biggest opportunities are hiding in plain sight?

In this episode, Charles Gustine, Director of Customer and Market Insights at Kantata, and Connor Budden, Global Director at Service Performance Insight (SPI), explore how benchmarking helps firms uncover hidden inefficiencies, strengthen leadership, and transform performance across every area of the business.

Recorded live at Kantata Converge, this conversation draws on decades of SPI's benchmarking insights and unveils SPI Insight, a breakthrough integration that brings real-time benchmarking directly into the Kantata platform.

In this episode, you'll learn:
Why benchmarking matters, and how top-performing services firms use data to turn blind spots into breakthroughs.
The five pillars of performance maturity: leadership, client relationships, talent, service execution, and operations.
Real-world stories of firms uncovering costly inefficiencies and driving measurable improvements.
How AI is reshaping benchmarking, delivering faster, smarter, and more contextual decision-making.
A first look at SPI Insight, the new embedded benchmarking capability within Kantata that turns data into real-time guidance.

Hosted on Acast. See acast.com/privacy for more information.
In this episode, we discuss the current top issues for employers – AI, DEI, and pay equity – that were highlighted at multiple employment conferences that David Fortney and FortneyScott attorneys Liz Bradley and Nita Beecher recently attended. For each, we explore the issues and share the key strategies that employers are following to address these developments.

Contact Fortney & Scott:
Tweet us at @fortneyscott
Follow us on LinkedIn
Email us at info@fortneyscott.com

Thank you for listening!
https://www.fortneyscott.com/
Sustained AI innovation thrives when businesses invest time, tools, and support to empower experimentation—transforming curiosity into scalable impact and positioning organizations to lead in an evolving, Gen AI–driven future. That's the key take-away message of this episode of the Wise Decision Maker Show, which describes how resources ignite Gen AI innovation.

This article forms the basis for this episode: https://disasteravoidanceexperts.com/resources-ignite-gen-ai-innovation/
How does Python 3.14 perform under a few hand-crafted benchmarks? Does the performance of asyncio scale on the free-threaded build? Christopher Trudeau is back on the show this week, bringing another batch of PyCoder's Weekly articles and projects.
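The episode's hand-crafted benchmarks aren't reproduced here, but a minimal sketch of the kind of experiment it describes is easy to write with only the standard library: time a CPU-bound function run on one thread versus several, using `time.perf_counter`. Every name below (`busy`, `bench`, the workload sizes) is illustrative, not from the episode. On a GIL build the threaded speedup stays near 1x for CPU-bound work; on the free-threaded build it can approach the thread count.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def busy(n: int) -> int:
    """CPU-bound work: sum of squares, no I/O, so the GIL dominates."""
    return sum(i * i for i in range(n))

def bench(workers: int, n: int = 200_000, tasks: int = 4) -> float:
    """Wall-clock seconds to run `tasks` copies of busy() on `workers` threads."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(busy, [n] * tasks))
    return time.perf_counter() - start

serial = bench(workers=1)
threaded = bench(workers=4)
# Speedup near 1x suggests the GIL serialized the work; free-threaded
# builds (python3.14t) can scale toward 4x on four or more cores.
print(f"serial={serial:.3f}s threaded={threaded:.3f}s speedup={serial / threaded:.2f}x")
```

Run the same script under both the default and the free-threaded interpreter to compare; `sys._is_gil_enabled()` (available in 3.13+) reports which mode is active.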
We're back from Texas just in time to chat with Jon Seager, Canonical's VP of Engineering, about their new era with Ubuntu 25.10. On the way, we visit System76 in Denver, where the COSMIC team has surprises waiting for us.

Sponsored By:
Managed Nebula: Meet Managed Nebula from Defined Networking. A decentralized VPN built on the open-source Nebula platform that we love.
1Password Extended Access Management: 1Password Extended Access Management is a device trust solution for companies with Okta, and they ensure that if a device isn't trusted and secure, it can't log into your cloud apps.

Support LINUX Unplugged
Links: