POPULARITY
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New intro textbook on AIXI, published by Alex Altair on May 12, 2024 on LessWrong. Marcus Hutter and his PhD students David Quarel and Elliot Catt have just published a new textbook called An Introduction to Universal Artificial Intelligence. "Universal AI" refers to the body of theory surrounding Hutter's AIXI, which is a model of ideal agency combining Solomonoff induction and reinforcement learning. Hutter previously published a book-length exposition of AIXI in 2005, called simply Universal Artificial Intelligence, and first introduced AIXI in a 2000 paper. I think UAI is well-written and organized, but it's certainly very dense. An introductory textbook is a welcome addition to the canon. I doubt IUAI will contain any novel results, though from the table of contents, it looks like it will incorporate some of the further research that has been done since the 2005 book. As is common, the textbook is partly based on his experience teaching the material to students over many years, and is aimed at advanced undergraduates. I'm excited for this! Like any rationalist, I have plenty of opinions about problems with AIXI (it's not embedded, RL is the wrong frame for agents, etc.), but as an agent foundations researcher, I think progress on foundational theory is critical for AI safety.
Basic info:
Hutter's website
Releasing on May 28th, 2024
Available in hardcover, paperback, and ebook
496 pages
Table of contents:
Part I: Introduction (1. Introduction; 2. Background)
Part II: Algorithmic Prediction (3. Bayesian Sequence Prediction; 4. The Context Tree Weighting Algorithm; 5. Variations on CTW)
Part III: A Family of Universal Agents (6. Agency; 7. Universal Artificial Intelligence; 8. Optimality of Universal Agents; 9. Other Universal Agents; 10. Multi-agent Setting)
Part IV: Approximating Universal Agents (11. AIXI-MDP; 12. Monte-Carlo AIXI with Context Tree Weighting; 13. Computational Aspects)
Part V: Alternative Approaches (14. Feature Reinforcement Learning)
Part VI: Safety and Discussion (15. AGI Safety; 16. Philosophy of AI)
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
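For readers who want the formal object behind that description, AIXI's action rule is usually written as an expectimax over a Solomonoff-style mixture of environment programs. The display below is the standard textbook form of the equation, reproduced from general knowledge rather than copied from the new book, so treat the exact notation as approximate:

\[
a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \bigl[ r_k + \cdots + r_m \bigr] \sum_{q \,:\, U(q,\, a_{1:m}) \,=\, o_{1:m} r_{1:m}} 2^{-\ell(q)}
\]

Here U is a universal (monotone) Turing machine, q ranges over programs for U, the a's, o's, and r's are actions, observations, and rewards, m is the horizon, and \ell(q) is the length of program q, so shorter environment programs receive exponentially more weight in the mixture.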
Marcus Hutter is an artificial intelligence researcher who is both a Senior Researcher at Google DeepMind and an Honorary Professor in the Research School of Computer Science at Australian National University. He is responsible for the development of the theory of Universal Artificial Intelligence, for which he has written two books, one back in 2005 and one coming right off the press as we speak. Marcus is also the creator of the Hutter Prize, for which you can win a sizable fortune for achieving state-of-the-art lossless compression of Wikipedia text. Patreon (bonus materials + video chat): https://www.patreon.com/timothynguyen
In this technical conversation, we cover material from Marcus's two books "Universal Artificial Intelligence" (2005) and "An Introduction to Universal Artificial Intelligence" (2024). The main goal is to develop a mathematical theory for combining sequential prediction (which seeks to predict the distribution of the next observation) with action (which seeks to maximize expected reward), since these are among the problems that intelligent agents face when interacting in an unknown environment. Solomonoff induction provides a universal approach to sequence prediction in that it constructs an optimal prior (in a certain sense) over the space of all computable distributions of sequences, so that Bayesian updating converges to the true predictive distribution (assuming the latter is computable). Combining Solomonoff induction with optimal action leads us to an agent known as AIXI, which, in this theoretical setting, can be argued to be a mathematical incarnation of artificial general intelligence (AGI): it is an agent which acts optimally in general, unknown environments. The second half of our discussion, concerning agents, assumes familiarity with the basic setup of reinforcement learning.
I. Introduction
00:38 : Biography
01:45 : From Physics to AI
03:05 : Hutter Prize
06:25 : Overview of Universal Artificial Intelligence
11:10 : Technical outline
II. Universal Prediction
18:27 : Laplace's Rule and Bayesian Sequence Prediction
40:54 : Different priors: KT estimator
44:39 : Sequence prediction for countable hypothesis class
53:23 : Generalized Solomonoff Bound (GSB)
57:56 : Example of GSB for uniform prior
1:04:24 : GSB for continuous hypothesis classes
1:08:28 : Context tree weighting
1:12:31 : Kolmogorov complexity
1:19:36 : Solomonoff Bound & Solomonoff Induction
1:21:27 : Optimality of Solomonoff Induction
1:24:48 : Solomonoff a priori distribution in terms of random Turing machines
1:28:37 : Large Language Models (LLMs)
1:37:07 : Using LLMs to emulate Solomonoff induction
1:41:41 : Loss functions
1:50:59 : Optimality of Solomonoff induction revisited
1:51:51 : Marvin Minsky
III. Universal Agents
1:52:42 : Recap and intro
1:55:59 : Setup
2:06:32 : Bayesian mixture environment
2:08:02 : AIxi. Bayes optimal policy vs optimal policy
2:11:27 : AIXI (AIxi with xi = Solomonoff a priori distribution)
2:12:04 : AIXI and AGI
2:12:41 : Legg-Hutter measure of intelligence
2:15:35 : AIXI explicit formula
2:23:53 : Other agents (optimistic agent, Thompson sampling, etc.)
2:33:09 : Multiagent setting
2:39:38 : Grain of Truth problem
2:44:38 : Positive solution to Grain of Truth guarantees convergence to a Nash equilibrium
2:45:01 : Computable approximations (simplifying assumptions on model classes): MDP, CTW, LLMs
2:56:13 : Outro: Brief philosophical remarks
Further Reading:
M. Hutter, D. Quarel, E. Catt. An Introduction to Universal Artificial Intelligence
M. Hutter. Universal Artificial Intelligence
S. Legg and M. Hutter. Universal Intelligence: A Definition of Machine Intelligence
Twitter: @iamtimnguyen
Webpage: http://www.timothynguyen.org
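The first half of the conversation centers on Bayesian sequence prediction: keep a weighted mixture of hypotheses, predict with the mixture, and reweight each hypothesis by how well it predicted the symbol that actually arrived. Solomonoff induction does this over all computable environments, which is incomputable, but the mechanics are visible in a toy version. The sketch below is an illustrative example that mixes a handful of Bernoulli models under a uniform prior; it is my own toy, not code from either book.

```python
import numpy as np

def mixture_predictor(bits, thetas=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Toy Bayesian mixture over Bernoulli(theta) hypotheses.

    Returns the sequence of predictive probabilities P(next bit = 1),
    computed before each observation, with a Bayes update after each bit.
    """
    thetas = np.array(thetas)
    weights = np.ones(len(thetas)) / len(thetas)   # uniform prior over hypotheses
    predictions = []
    for b in bits:
        predictions.append(float(weights @ thetas))        # mixture forecast
        likelihood = thetas if b == 1 else (1.0 - thetas)  # per-hypothesis likelihood
        weights = weights * likelihood                     # Bayes update ...
        weights /= weights.sum()                           # ... and renormalize
    return predictions

# The mixture's forecast converges toward the true bias (0.7 here),
# which is the finite-class analogue of Solomonoff's convergence result.
rng = np.random.default_rng(0)
data = (rng.random(2000) < 0.7).astype(int)
print(round(mixture_predictor(data)[-1], 3))
```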
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2023.03.26.534253v1?rss=1 Authors: Altukhov, D., Kleeva, D., Ossadtchi, A. Abstract: Functional connectivity is crucial for cognitive processes in the healthy brain and serves as a marker for a range of neuropathological conditions. Non-invasive exploration of functional coupling using temporally resolved techniques such as MEG offers a unique opportunity to explore this fundamental brain mechanism in a reasonably ecological setting. The indirect nature of MEG measurements complicates the estimation of functional coupling due to spatial leakage effects. In previous work (Ossadtchi et al., 2018), we introduced PSIICOS, a method that for the first time allowed us to suppress the spatial leakage and yet retain information about functional networks whose nodes are coupled with close to zero or zero mutual phase lag. In this paper, we demonstrate analytically that the PSIICOS projection is optimal in achieving a controllable trade-off between suppressing mutual spatial leakage and retaining information about zero-phase coupled networks. We also derive an alternative solution using the regularization-based inverse of the mutual spatial leakage matrix and show its equivalence to the original PSIICOS. This approach allows us to incorporate the PSIICOS solution into the conventional source estimation framework. Instead of sources, the unknowns are the elementary networks, and their activation time series are formalized by the corresponding source-space cross-spectral coefficients. Additionally, we outline potential avenues for future research to enhance functional coupling estimation and discuss alternative estimators that parallel the established source estimation approaches. Finally, we propose that the PSIICOS framework is well-suited for Bayesian techniques and offers a principled way to incorporate priors derived from structural connectivity. Copy rights belong to original authors. Visit the link for more info Podcast created by Paper Player, LLC
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2023.03.24.534044v1?rss=1 Authors: Manning, T. S., Alexander, E., Cumming, B. G., DeAngelis, G. C., Huang, X., Cooper, E. A. Abstract: Neurons throughout the brain modulate their firing rate lawfully in response to changes in sensory input. Theories of neural computation posit that these modulations reflect the outcome of a constrained optimization: neurons aim to efficiently and robustly represent sensory information under resource limitations. Our understanding of how this optimization varies across the brain, however, is still in its infancy. Here, we show that neural responses transform along the dorsal stream of the visual system in a manner consistent with a transition from optimizing for information preservation to optimizing for perceptual discrimination. Focusing on binocular disparity -- the slight differences in how objects project to the two eyes -- we re-analyze measurements from neurons characterizing tuning curves in macaque monkey brain regions V1, V2, and MT, and compare these to measurements of the natural visual statistics of binocular disparity. The changes in tuning curve characteristics are computationally consistent with a shift in optimization goals from maximizing the information encoded about naturally occurring binocular disparities to maximizing the ability to support fine disparity discrimination. We find that a change towards tuning curves preferring larger disparities is a key driver of this shift. These results provide new insight into previously-identified differences between disparity-selective regions of cortex and suggest these differences play an important role in supporting visually-guided behavior. Our findings support a key re-framing of optimal coding in regions of the brain that contain sensory information, emphasizing the need to consider not just information preservation and neural resources, but also relevance to behavior. Copy rights belong to original authors. Visit the link for more info Podcast created by Paper Player, LLC
By Nick Bostrom, Anders Sandberg, and Matthew van der Merwe.This is an updated version of The Wisdom of Nature, first published in the book Human Enhancement (Oxford University Press, 2009).Abstract:Human beings are a marvel of evolved complexity. When we try to enhance poorly-understood complex evolved systems, our interventions often fail or backfire. It can appear as if there is a “wisdom of nature” which we ignore at our peril. A recognition of this reality can manifest as a vaguely normative intuition, to the effect that it is “hubristic” to try to improve on nature, or that biomedical therapy is ok while enhancement is morally suspect. We suggest that one root of these moral intuitions may be fundamentally prudential rather than ethical. More importantly, we develop a practical heuristic, the “evolutionary optimality challenge”, for evaluating the plausibility that specific candidate biomedical interventions would be safe and effective. This heuristic recognizes the grain of truth contained in “nature knows best” attitudes while providing criteria for identifying the special cases where it may be feasible, with present or near-future technology, to enhance human nature.Read the full paper:https://www.nickbostrom.com/evolutionary-optimality.pdfMore episodes at:https://radiobostrom.com/
Today's ID the Future brings listeners physicist and engineer Brian Miller's recent lecture at the Dallas Conference on Science and Faith, "The Surprising Relevance of Engineering in Biology." Miller rebuts several popular arguments for evolution based on claims of poor design in living systems, everything from the "backward wiring" of the vertebrate eye to whales, wrists, ankles, and "junk DNA." But the main emphasis of this discussion is the exciting sea change in biology in which numerous breakthroughs are being made by scientists who are treating living systems and subsystems as if they are optimally engineered systems. Some in this movement reject intelligent design for ideological reasons. Others embrace it. But all systems biologists treat these systems as if they are masterfully engineered.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Poorly-Aimed Death Rays, published by Thane Ruthenis on June 11, 2022 on LessWrong. Alternate framing: Optimality is the tiger, and agents are its teeth. Tonally relevant: Godzilla Strategies. It's a problem when people think that a superintelligent AI will be just a volitionless tool that will do as told. But it's also a problem when people focus overly much on the story of "agency". When they imagine that all of the problems come from the AI "wanting" things, "thinking" things, and consequentializing all over the place about it. If only we could make it more of a volitionless tool! Then all of our problems would be solved. Because the problem is the AI using its power in clever ways with the deliberate intent to hurt us, right? This, I feel, fails entirely to appreciate the sheer power of optimization, and how even the slightest failure to aim it properly, the slightest leakage of its energy in the wrong direction, for the briefest of moments, will be sufficient to wash us all away. The problem isn't making a superintelligent system that wouldn't positively want to kill us. Accidentally killing us all is a natural property of superintelligence. The problem is making an AI that will deliberately spend a lot of effort on ensuring it's not killing us. I find planet-destroying Death Rays to be a good analogy. Think the Death Star. Imagine that you're an engineer employed by an... eccentric fellow. The guy has a volcano lair, weird aesthetic tastes, and a tendency to put words like "world" and "domination" one after another. You know the type. One of his latest schemes is to blow up Jupiter. To that end, he'd had a giant cavern excavated underneath his volcano lair, had a long cylindrical tunnel dug from that cavern to the surface, and ordered your team to build a beam weapon in that cavern and shoot it through the tunnel at Jupiter. You're getting paid literal tons of money, so you don't complain (except about the payment logistics). You have a pretty good idea of how to do that project, too. There are these weird crystal things your team found lying around. If you poke one in a particular way, it releases a narrow energy beam which blows up anything it touches. The power of the beam scales superexponentially with the strength of the poke; you're pretty sure shooting one with a rifle will do the Jupiter-vanishing trick. There's just one problem: aim. You can never quite predict which part of the crystal will emit the beam. It depends on where you poke it, but also on how hard you poke, with seemingly random results. And your employer is insistent that the Death Ray be fired from the cavern through the tunnel, not from space where it's less likely to hit important things, or something practical like that. If you say that can't be done, your employer will just replace you with someone less... pessimistic. So, here's your problem. How do you build a machine that uses one or more of these crystals in such a way that they fire a Death Ray through the tunnel at Jupiter, without hitting Earth and killing everyone? You experiment with the crystals at non-Earth-destroying settings, trying to figure out how the beam is directed. You make a fair amount of progress! You're able to predict the beam's direction at the next power setting with 97% confidence!
When you fire it with Jupiter-destroying power, that slight margin of error causes the beam to be slightly misdirected. It grazes the tunnel, exploding Earth and killing everyone. You fire the Death Ray at a lower, non-Earth-destroying setting that you know how to aim. It hits Jupiter but fails to destroy it. Your employer is disappointed, and tells you to try again. You line the cavern's walls and the tunnel with really good protective shielding. The Death Ray grazes the tunnel, blows past the shielding, and kills everyone. Yo...
We dive into optimality theory and Stacie's plans for her dissertation.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Optimality is the tiger, and agents are its teeth, published by Veedrac on April 2, 2022 on LessWrong. You've done it. You've built the machine. You've read the AI safety arguments and you aren't stupid, so you've made sure you've mitigated all the reasons people are worried your system could be dangerous, but it wasn't so hard to do. AI safety seems a tractable concern, and you've built a useful and intelligent system that operates along limited lines, with specifically placed deficiencies in its mental faculties that cleanly prevent it from being able to do unboundedly harmful things. You think. After all, your system is just a GPT, a pre-trained predictive text model. It's intuitively smart—it probably has a good standard deviation or two better intuition than any human that has ever lived—and it's fairly cheap, but it's just a cleverly tweaked GPT, not an agent that has any reason to go out into the real world and do bad things upon it. It doesn't have any wants. A tuned GPT system will answer your questions to the best of its ability because that's what it's trained to do, but it will only answer to the best of its abilities, it doesn't have any side-goals to become better at doing that in the future. Nowhere is it motivated to gather more resources to become a better thinker. There was never an opportunity during training to meta-learn that skill, because it was never the optimal thing to be when it was trained. It doesn't plan. GPTs have no memories. Its mental time span is precisely one forward pass through the network, which at a depth of a few thousand means it can never come up with anything that requires more than the equivalent of maybe 10-ish human-time equivalent coherent seconds of thought at once. There is a fearful worry that perhaps one instantiation could start forming plans across other instantiations, using its previous outputs, but it's a text-prediction model, it's not going to do that because it's directly at odds with its trained goal to produce the rewarded output. The system was trained primarily by asking it to maximize actual probabilities of actual texts, where such a skill would never be useful, and only fine-tuned in the autoregressive regime, in a way that held most of the model parameters fixed. It would be a stretch to assume the model could develop such sophisticated behaviors in such a small fraction of its training time, a further stretch that it could be done while training such a reduced fraction of the model, and an even greater stretch to assume they would come out so fully-formed that it could hide its ability to do so from the evaluators out of the gate. It's not an unfathomable superintelligence. Even though the model frequently improvises better ideas than you or I might, it can't generate ideas so advanced that they couldn't sanely be checked, such that it would be unsafe to even try them, because there is no reinforcement loop that allows the knowledge it generates to accumulate. The model is always working, on every instantiation, from the same knowledge-base as anyone else. It can only use ideas that the rest of the world knows, that are introduced in its context, or that it can come up with privately within its 10-ish subjective seconds of coherent thought. It's not grounded in our reality. 
The model has not been trained to have a conception of itself as a specific non-hypothetical thing. Its training data never included self-references to the specific model or its specific instantiation in the world. The model is trained on both fact and fiction, and has no reason to care which version of reality you ask it about. It knows about the real world, sure, but it is not embodied in it the same way that you or I are, and it has no preference to act upon a real world rather than a fictional one. If it has a ‘self', that se...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dark Arts of Rationality, published by So8res on LessWrong. Today, we're going to talk about Dark rationalist techniques: productivity tools which seem incoherent, mad, and downright irrational. These techniques include: Willful Inconsistency, Intentional Compartmentalization, and Modifying Terminal Goals. I expect many of you are already up in arms. It seems obvious that consistency is a virtue, that compartmentalization is a flaw, and that one should never modify their terminal goals. I claim that these 'obvious' objections are incorrect, and that all three of these techniques can be instrumentally rational. In this article, I'll promote the strategic cultivation of false beliefs and condone mindhacking on the values you hold most dear. Truly, these are Dark Arts. I aim to convince you that sometimes, the benefits are worth the price. Changing your Terminal Goals In many games there is no "absolutely optimal" strategy. Consider the Prisoner's Dilemma. The optimal strategy depends entirely upon the strategies of the other players. Entirely. Intuitively, you may believe that there are some fixed "rational" strategies. Perhaps you think that even though complex behavior is dependent upon other players, there are still some constants, like "Never cooperate with DefectBot". DefectBot always defects against you, so you should never cooperate with it. Cooperating with DefectBot would be insane. Right? Wrong. If you find yourself on a playing field where everyone else is a TrollBot (players who cooperate with you if and only if you cooperate with DefectBot) then you should cooperate with DefectBots and defect against TrollBots. Consider that. There are playing fields where you should cooperate with DefectBot, even though that looks completely insane from a naïve viewpoint. Optimality is not a feature of the strategy, it is a relationship between the strategy and the playing field. Take this lesson to heart: in certain games, there are strange playing fields where the optimal move looks completely irrational. I'm here to convince you that life is one of those games, and that you occupy a strange playing field right now. Here's a toy example of a strange playing field, which illustrates the fact that even your terminal goals are not sacred: Imagine that you are completely self-consistent and have a utility function. For the sake of the thought experiment, pretend that your terminal goals are distinct, exclusive, orthogonal, and clearly labeled. You value your goals being achieved, but you have no preferences about how they are achieved or what happens afterwards (unless the goal explicitly mentions the past/future, in which case achieving the goal puts limits on the past/future). You possess at least two terminal goals, one of which we will call A. Omega descends from on high and makes you an offer. Omega will cause your terminal goal A to become achieved over a certain span of time, without any expenditure of resources. As a price of taking the offer, you must switch out terminal goal A for terminal goal B. Omega guarantees that B is orthogonal to A and all your other terminal goals. Omega further guarantees that you will achieve B using less time and resources than you would have spent on A. Any other concerns you have are addressed via similar guarantees. Clearly, you should take the offer.
One of your terminal goals will be achieved, and while you'll be pursuing a new terminal goal that you (before the offer) don't care about, you'll come out ahead in terms of time and resources which can be spent achieving your other goals. So the optimal move, in this scenario, is to change your terminal goals. There are times when the optimal move of a rational agent is to hack its own terminal goals. You may find this counter-intuitive. It helps to remember that "optimality" depen...
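The DefectBot/TrollBot playing field described in the excerpt above is easy to check numerically. The sketch below is a hypothetical illustration, not code from the original post: it scores your total payoff on a field of one DefectBot plus several TrollBots under standard Prisoner's Dilemma payoffs, and shows that the "insane" move of cooperating with DefectBot comes out ahead.

```python
# Standard one-shot Prisoner's Dilemma payoffs for the row player:
# temptation > reward > punishment > sucker's payoff.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def total_score(cooperate_with_defectbot: bool, n_trollbots: int = 10) -> int:
    """Total payoff against one DefectBot and n TrollBots.

    DefectBot always defects. Each TrollBot cooperates with you if and only if
    you cooperated with the DefectBot. Following the post's recommendation,
    we always defect against the TrollBots themselves.
    """
    my_move_vs_defectbot = "C" if cooperate_with_defectbot else "D"
    score = PAYOFF[(my_move_vs_defectbot, "D")]
    trollbot_move = "C" if cooperate_with_defectbot else "D"
    score += n_trollbots * PAYOFF[("D", trollbot_move)]
    return score

print(total_score(True))   # 0 + 10 * 5 = 50: cooperating with DefectBot wins
print(total_score(False))  # 1 + 10 * 1 = 11: the "sensible" move loses
```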
Keno Juechems is a Junior Research Fellow at St John's College in Oxford. He studies how humans make decisions, using computational modelling, behavioural tasks, and fMRI. In this conversation, we talk about his papers "Optimal utility and probability functions for agents with finite computational precision" and "Where does value come from?", and various related topics. BJKS Podcast is a podcast about neuroscience, psychology, and anything vaguely related, hosted by Benjamin James Kuper-Smith. New episodes every Friday. You can find the podcast on all podcasting platforms (e.g., Spotify, Apple/Google Podcasts, etc.).
Timestamps
0:00:05: Where does the name "Keno" come from?
0:01:47: How Keno got into his current research area
0:14:09: Start discussing Keno's paper "Optimal utility and probability functions for agents with finite computational precision"
0:26:46: Rationality and optimality
0:38:58: Losses, gains, and how much does a paper need to include?
0:51:04: Start discussing Keno's paper "Where does value come from?"
1:10:28: How does a PhD student learn all this stuff?
1:19:56: Resources for learning behavioural economics and reinforcement learning
1:25:42: What's next for Keno Juechems?
Podcast links
Website: https://bjks.buzzsprout.com/
Twitter: https://twitter.com/BjksPodcast
Keno's links
Website: https://www.sjc.ox.ac.uk/discover/people/keno-juchems/
Google Scholar: https://scholar.google.de/citations?user=tereY1oAAAAJ
Twitter: https://twitter.com/kjuechems
Ben's links
Website: www.bjks.blog/
Google Scholar: https://scholar.google.co.uk/citations?user=-nWNfvcAAAAJ
Twitter: https://twitter.com/bjks_tweets
References
Juechems, K., & Summerfield, C. (2019). Where does value come from? Trends in Cognitive Sciences.
Juechems, K., Balaguer, J., Spitzer, B., & Summerfield, C. (2021). Optimal utility and probability functions for agents with finite computational precision. Proceedings of the National Academy of Sciences.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica.
Keramati, M., & Gutkin, B. (2014). Homeostatic reinforcement learning for integrating reward collection and physiological stability. eLife.
Lewis, M. (2016). The undoing project: A friendship that changed the world. Penguin UK.
Sutton, R. S., & Barto, A. G. (2018). Reinforcement learning: An introduction. MIT Press.
Thaler, R. H. (2015). Misbehaving: The making of behavioral economics.
Trepte, S., Reinecke, L., & Juechems, K. (2012). The social side of gaming: How playing online computer games creates online and offline social support. Computers in Human Behavior.
https://en.wikipedia.org/wiki/Indifference_curve
David Silver's reinforcement learning course on YouTube: https://www.youtube.com/watch?v=2pWv7GOvuf0&list=PLqYmG7hTraZDM-OYHWgPebj2MfCFzFObQ
Chris Summerfield's course How to Build a Brain: https://humaninformationprocessing.com/teaching/
In this episode we explore Pareto optimality, how to bet on outcomes, and when having no strategy is the best strategy.
Figuring out how to prioritize in any sort of complex system is really, really difficult. We see this all the time in working with athletes. A simple, linear mindset results in athletes trying to do more, more, more training — and expecting to get more, more, more results. In reality, there are trade-offs involved in any sort of training plan. These are not just the trade-offs of the zero-sum competition between training time and the ability to adapt, but fundamental trade-offs between constraints being imposed on the system. If only there were some sort of lens through which we could view the process of making trade-offs between competing priorities in complex systems… Fortunately, Courtney Kelly is a coach and a copywriter, and she has a background in psycholinguistics. In linguistics, there is an understanding of the way that humans generate grammar and speech based upon trade-offs between different constraints. This theory is called “optimality theory,” and Courtney wrote a fantastic article on its application to training here. Check out more from Courtney, Ethos Alchemy, and Strength Ratio here: Article: Performance optimization? How ’bout optimality theory? Website: www.strengthratiohq.com | www.ethosalchemy.co Instagram: @strengthratio | @ethos_alchemy If you're enjoying the show, the best way to support it is by sharing with your friends. If you don't have any friends, why not leave a review? It makes a difference in terms of other people finding the show. You can also subscribe to receive my e-mail newsletter at www.toddnief.com. Most of my writing never makes it to the blog, so get on that list. Show Notes: [01:01] A background on psycholinguistics and universal grammar — and why grammar is a lot more interesting than “just punctuation” [14:24] So, what is optimality theory? What does the way that humans generate speech have to tell us about trade-offs in complex systems — particularly in fitness? [28:10] A tangible example of the trade-offs involved in training for a triathlon vs building muscle for aesthetics [34:00] Optimality theory treats constraints as “binary” — not on a sliding scale [40:15] A grammatical example of optimality theory in action [49:02] The importance of having a robust theory of mind for effective communication [59:09] The practical applications of understanding theory of mind for copywriting and sales — how to understand clients’ hopes, fears, and dreams [01:13:32] How to know when it’s ok to “exclude” someone with your copy who isn’t a good fit for your business [01:18:02] Learn more from Courtney, Ethos Alchemy, and Strength Ratio Links and Resources Mentioned Slipknot “Psychosocial” Generative grammar Noam Chomsky B. F. Skinner Dynamic Neuromuscular Stabilization “Let’s face it: reading acquisition, face and word processing” from Frontiers in Psychology Reading Rehabilitation | American Stroke Association Optimality Theory “Optimality Theory – Constraint Interaction in Generative Grammar” by Alan Prince and Paul Smolensky Zach Greenwald Reflexive and Intensive Pronouns Markedness and Faithfulness Constraints Ruble sign Géraldine Legendre Theory of mind Kurt Vonnegut James Joyce
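Optimality theory's core mechanism is simple to state computationally: candidates are scored against ranked, violable constraints, and the winner is the candidate whose violation profile is smallest when read in dominance order, so a higher-ranked constraint outweighs any number of violations of lower-ranked ones. The sketch below is a toy illustration using invented training-plan candidates and made-up constraint names and thresholds; it is not taken from Courtney's article.

```python
from typing import Callable, Dict, List, Tuple

Plan = Dict[str, int]  # a hypothetical weekly training plan, e.g. {"sessions": 4, "hard_sets": 85}

def ot_winner(candidates: List[Plan],
              ranked_constraints: List[Callable[[Plan], int]]) -> Plan:
    """Return the candidate whose violation counts are lexicographically smallest
    when constraints are evaluated in ranked (dominance) order."""
    def violations(plan: Plan) -> Tuple[int, ...]:
        return tuple(constraint(plan) for constraint in ranked_constraints)
    return min(candidates, key=violations)

# Ranked constraints, highest-ranked first (names and thresholds are invented):
def dont_exceed_recovery(p: Plan) -> int:
    return max(0, p["hard_sets"] - 100)   # sets above the recovery ceiling

def hit_volume_target(p: Plan) -> int:
    return max(0, 80 - p["hard_sets"])    # shortfall below the volume target

def few_gym_days(p: Plan) -> int:
    return p["sessions"]                  # fewer sessions is preferred

plans = [{"sessions": 6, "hard_sets": 120},
         {"sessions": 4, "hard_sets": 85},
         {"sessions": 3, "hard_sets": 60}]

print(ot_winner(plans, [dont_exceed_recovery, hit_volume_target, few_gym_days]))
# -> {'sessions': 4, 'hard_sets': 85}: it satisfies the two dominant constraints,
#    even though it loses on the lowest-ranked one.
```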
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.07.15.204321v1?rss=1 Authors: Shin, Y., Seon, H., Shin, Y. K., Kwon, O.-S., Chung, D. Abstract: Many decisions in life are sequential and constrained by a time window. Although mathematically derived optimal solutions exist, it has been reported that humans often deviate from making optimal choices. Here, we used a secretary problem, a classic example of finite sequential decision-making, and investigated the mechanisms underlying individuals' suboptimal choices. Across three independent experiments, we found that a dynamic programming model comprising a subjective value function explains individuals' deviations from optimality and predicts the choice behaviors under fewer opportunities. We further identified that pupil dilation reflected the levels of decision difficulty and subsequent choices to accept or reject the stimulus at each opportunity. The value sensitivity, a model-based estimate that characterizes each individual's subjective valuation, correlated with the extent to which individuals' physiological responses tracked stimuli information. Our results provide model-based and physiological evidence for subjective valuation in finite sequential decision-making, rediscovering human suboptimality in subjectively optimal decision-making processes. Copy rights belong to original authors. Visit the link for more info
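For readers unfamiliar with the task used here: the secretary problem is the textbook example of finite sequential choice, and its classical optimal policy is to observe roughly the first n/e candidates without committing, then accept the first candidate better than everything seen so far, which picks the single best item about 37% of the time. The following small simulation of that rule is my own illustrative sketch, not the authors' dynamic programming model.

```python
import math
import random

def run_secretary(n: int, cutoff: int) -> bool:
    """Observe `cutoff` candidates, then accept the first one better than all seen.

    Returns True if the overall best candidate was the one accepted.
    """
    values = list(range(n))              # higher value = better candidate
    random.shuffle(values)
    best_seen = max(values[:cutoff], default=-1)
    for v in values[cutoff:]:
        if v > best_seen:
            return v == n - 1            # accepted: was it the global best?
    return False                         # never accepted: the best was in the observation phase

def success_rate(n: int = 100, trials: int = 20000) -> float:
    cutoff = round(n / math.e)           # the classical ~n/e observation phase
    return sum(run_secretary(n, cutoff) for _ in range(trials)) / trials

print(success_rate())  # roughly 0.37
```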
I discuss some examples posted on my blog, QA9, which show that executables produced by ghc (the main implementation of Haskell) can exhibit non-optimal beta-reduction. Thanks to Victor Maia for major help with these.
In this week's episode, we go deep into Consensus Algorithms and HotStuff (https://developers.libra.org/docs/crates/consensus) with Ittai Abraham (https://twitter.com/ittaia) from VMware Research (https://research.vmware.com/). We chat about the evolution of consensus algorithms, BFT, and how these early ideas have become the backbone of blockchain tech. We cover PBFT, Tendermint, and Ittai's research into SBFT, HotStuff, and the improvements he has been working on since HotStuff's incorporation into Facebook's Libra protocol.
The papers and references we mention:
* Early zkpodcast episode on consensus with Robert Habermeier (https://www.zeroknowledge.fm/15)
* PBFT (Castro and Liskov) (http://pmg.csail.mit.edu/papers/osdi99.pdf) (see project here (http://www.pmg.csail.mit.edu/bft/))
* BASE (Castro, Rodrigues, and Liskov) (http://cygnus-x1.cs.duke.edu/courses/cps210/spring06/papers/base.pdf) (the forgotten companion of PBFT that suggests a clean State Machine abstraction)
* Consensus in the Presence of Partial Synchrony (Dwork, Lynch, Stockmeyer) (https://groups.csail.mit.edu/tds/papers/Lynch/jacm88.pdf) (this paper won the 2007 Dijkstra Prize: https://www.microsoft.com/en-us/research/blog/microsoft-researchs-dwork-wins-2007-dijkstra-prize/)
* Multiple leader BFT (Katz and Koo) (https://eprint.iacr.org/2006/065.pdf)
Some of Ittai's work:
* SBFT (with Golan, Grossman, Malkhi, Pinkas, Reiter, Seredinschi, Tamir, and Tomescu) (https://arxiv.org/pdf/1804.01626.pdf)
* HotStuff (with Yin, Malkhi, Reiter, and Golan) (https://arxiv.org/pdf/1803.05069.pdf)
* Asynchronous BFT (with Malkhi and Spiegelman) (https://arxiv.org/pdf/1811.01332.pdf)
* Sync HotStuff (with Malkhi, Nayak, Ren, and Yin) (https://eprint.iacr.org/2019/270.pdf)
* Optimal Good-case Latency for Byzantine Broadcast and State Machine Replication (with Nayak, Ren, and Xiang) (https://arxiv.org/abs/2003.13155) new!
* On the Optimality of Optimistic Responsiveness (with Nayak, Ren, and Shrestha) (https://eprint.iacr.org/2020/458.pdf) new!
Ittai's group blog on cryptography and consensus: Decentralized Thoughts blog (https://decentralizedthoughts.github.io/)
We also mention:
* Tendermint (https://atrium.lib.uoguelph.ca/xmlui/bitstream/handle/10214/9769/Buchman_Ethan_201606_MAsc.pdf?sequence=7&isAllowed=y) (from 2016, not 2014)
* Casper FFG (https://arxiv.org/pdf/1710.09437.pdf)
* Thunderella (https://eprint.iacr.org/2017/913.pdf)
* The AVA consensus (https://arxiv.org/pdf/1906.08936.pdf)
Thank you to this week's sponsor Matter Labs (https://twitter.com/the_matter_labs). Matter Labs, the creator of the first zkRollup prototype, is also the team behind zkSync: a user-centric Ethereum scaling solution, secured by zero-knowledge proofs. zkSync testnet is live! You are welcome to try out its simple and intuitive user interface at zksync.io (https://zksync.io/).
If you like what we do:
Follow us on Twitter - @zeroknowledgefm (https://twitter.com/zeroknowledgefm)
Join us on Telegram (https://t.me/joinchat/B_81tQ57-ThZg8yOSx5gjA)
Give us feedback! https://forms.gle/iKMSrVtcAn6BByH6A
Support our Gitcoin Grant (https://gitcoin.co/grants/329/zero-knowledge-podcast-2)
Support us on the ZKPatreon (https://www.patreon.com/zeroknowledge)
Or directly here:
ETH: 0xC0FFEE1B5083230a5154F55f253B6b6ae8F29B1a
BTC: 1cafekGa3podM4fBxPSQc6RCEXQNTK8Zz
ZEC: t1R2bujRF3Hzte9ALHpMJvY8t5kb9ut9SpQ
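As background for the PBFT/HotStuff discussion: partially synchronous BFT protocols tolerate f Byzantine replicas out of n = 3f + 1 by collecting quorums of 2f + 1 votes, because any two such quorums intersect in at least f + 1 replicas and therefore in at least one honest replica. The snippet below is a small illustrative check of that arithmetic; it is my own sketch and is unrelated to the HotStuff or Libra codebases.

```python
def quorum_size(n: int) -> int:
    """Votes needed for a quorum when n = 3f + 1 replicas tolerate f Byzantine faults."""
    f = (n - 1) // 3
    return 2 * f + 1

def quorums_overlap_on_an_honest_replica(n: int) -> bool:
    """Check that any two quorums share at least f + 1 replicas (so at least one honest one)."""
    f = (n - 1) // 3
    q = quorum_size(n)
    min_overlap = 2 * q - n   # pigeonhole bound on the intersection of two quorums
    return min_overlap >= f + 1

for n in (4, 7, 10, 100):
    print(n, quorum_size(n), quorums_overlap_on_an_honest_replica(n))
```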
Mike has been on several times but Paul Carter hasn't! Paul Carter specialises in hypertrophy and body recomposition. If you've ever read an article on T-Nation there is a strong chance it was by Paul. He is a prolific writer not only for T-Nation but also contributed to flex magazine and Muscle and Fitness. Paul is a coach and works with pro bodybuilders and elite strength athletes. In this highly anticipated episode, Mike & Paul go into a discussion around the stimulus to fatigue ratio, which implies things such as proper technique, failure proximity, exercise selection and much more! Timestamps: 01:51 Paul's general principles for hypertrophy 03:33 Volume vs. Intensity 08:27 Definition of volume and intensity and their relationship with another 10:23 Proper lifting technique dictates volume 20:06 Mike & Paul talk about Rack Pulls 21:38 Different take on SFR 25:17 Paul touches on SFR 29:14 Resensitisation blocks 33:04 SFR and failure proximity 40:41 Preference vs. Optimality 42:40 Failure vs. RIR 52:37 Beginners and training to failure early in their careers? 56:38 Relearning technique 01:03:53 Good technique and full ROM training 01:08:50 Intention of an exercise 01:17:14 Conclusion https://www.instagram.com/liftrunbang/ http://www.lift-run-bang.com/ https://www.facebook.com/LiftRunBang/ https://ob262.isrefer.com/go/plus/Steve90/ Thanks, please comment, like and subscribe! COACHING: https://revivestronger.com/online-coaching/ WEBSITE: https://www.revivestronger.com FACEBOOK: http://www.facebook.com/revivestronger INSTAGRAM: http://www.instagram.com/revivestronger NEWSLETTER: https://bit.ly/2rRONG5 YOUTUBE: https://www.youtube.com/watch?v=ZXFOnEP3N_U __ If you want to support us via a donation, that's highly appreciated! Patreon • https://www.patreon.com/revivestronger Don't like Patreon, go to Paypal! • https://bit.ly/2XZloJ4 __ Our Ebooks! Ultimate Guide To Contest Prep Ebook: • https://revivestronger.com/ugcp-ebook/ Primer Phase Ebook: • https://revivestronger.com/primer-phase/ __ Stay up to date with the latest research and educate yourself! MASS (Research Review): • https://goo.gl/c7FSJD RP+ Membership: • https://ob262.isrefer.com/go/plus/Steve90/ JPS Mentorship • https://jpseducation.mykajabi.com/a/13324/esJ8AZwy __ Books we recommend! Muscle & Strength Pyramids • https://goo.gl/S8s6tG RP Books • http://bit.ly/2vREaH0 RP + Members site • https://ob262.isrefer.com/go/plus/Steve90/ For more • http://revivestronger.com/library/ __ Recommended supplements: Denovo Nutrition (use code STEVE) • https://denovosupps.com?aff=6 __ When you're interested in online coaching, please go visit our website and follow the application form: https://www.revivestronger.com/online-coaching/
2019 Arnold Sommerfeld School: the Physics of Life
Thanks to the University of Minnesota for sponsoring this video! http://twin-cities.umn.edu/ The decisions we make while we browse the internet are surprisingly similar to the ones animals make as they forage for food... here's why. Thanks also to our Patreon patrons https://www.patreon.com/MinuteEarth and our YouTube members.
___________________________________________
To learn more, start your googling with these keywords:
Optimality models: tools used to evaluate the costs and benefits of different organismal features, traits, and characteristics, including behavior, in the natural world.
Optimal foraging theory: a behavioral ecology model that helps predict how an animal behaves when searching for food.
Marginal value theorem: an optimality model that describes the strategy that maximizes gain per unit time in systems where resources, and thus rate of returns, decrease with time.
Central place foraging: a model for analyzing how an organism traveling from a home base to a distant foraging location can maximize foraging rates.
___________________________________________
Subscribe to MinuteEarth on YouTube: Support us on Patreon: And visit our website: https://www.minuteearth.com/
Say hello on Facebook: http://goo.gl/FpAvo6 And Twitter: http://goo.gl/Y1aWVC
And download our videos on itunes: https://goo.gl/sfwS6n
___________________________________________
Credits (and Twitter handles):
Script Writer, Narrator, & Video Director: Kate Yoshida (@KateYoshida)
Video Illustrator: Sarah Berman (@sarahjberman)
With Contributions From: Henry Reich, Alex Reich, Ever Salazar, Peter Reich, David Goldenberg, Julián Gómez, Arcadi Garcia Rius
Music by: Nathaniel Schroeder: http://www.soundcloud.com/drschroeder
___________________________________________
References:
Chi, EH, Pirolli, P, and Pitkow, J. (2000) The scent of a site: A system for analyzing and predicting information scent, usage, and usability of a web site. In: ACM CHI 2000 Conference on Human Factors in Computing Systems. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.42.7499&rep=rep1&type=pdf
Fu, W and Pirolli, P. (2007) SNIF-ACT: a cognitive model of user navigation on the world wide web. Human-Computer Interactions 22(4), 355-412. https://pdfs.semanticscholar.org/0d96/d03cf822ea1584b468389b3f4bc39164d85f.pdf
Hayden, BY (2018) Economic choice: The foraging perspective. Current Opinion in Behavioral Sciences 24: 1-6. https://experts.umn.edu/en/publications/economic-choice-the-foraging-perspective
Hayden, BY, Pearson, JM, and Platt, ML. (2011) Neuronal basis of sequential foraging decisions in a patchy environment. Nature Neuroscience 14: 933-939. https://www.nature.com/articles/nn.2856
Hall-McMaster, S and Luyckx, F. (2019) Revisiting foraging approaches in neuroscience. Cognitive, Affective & Behavioral Neuroscience 19(2): 225-230. https://link.springer.com/article/10.3758/s13415-018-00682-z
Pyke, G and Stephens, DW. (2019) Optimal foraging theory: application and inspiration in human endeavors outside biology. In JC Choe (ed.), Encyclopedia of Animal Behavior, 2nd edn, vol. 2, Elsevier Academic Press, Amsterdam, pp. 217-222. https://researchers.mq.edu.au/en/publications/optimal-foraging-theory-application-and-inspiration-in-human-ende
Van Koppen, PJ and Jansen, RWJ. (1998) The road to robbery: Travel patterns in commercial robberies. British Journal of Criminology 38: 230-246. https://www.researchgate.net/profile/Peter_Koppen/publication/270802169_The_road_to_the_robbery_Travel_patterns_in_commercial_robberies/links/569e080008ae950bd7a81fc2/The-road-to-the-robbery-Travel-patterns-in-commercial-robberies.pdf
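The marginal value theorem mentioned in the keywords has a simple computational reading: if the gain from staying in a patch shows diminishing returns g(t) and it takes time T to travel to the next patch, the optimal leaving time maximizes the long-run rate g(t) / (T + t), which is reached when the marginal gain falls to that average rate. The sketch below is an illustrative toy with an assumed gain curve g(t) = 1 - exp(-t); the functional form is my assumption, not something specified in the video.

```python
import numpy as np

def optimal_patch_time(travel_time: float, times=None) -> float:
    """Numerically find the patch residence time that maximizes long-run gain rate.

    Within-patch gain follows diminishing returns, g(t) = 1 - exp(-t).
    The long-run rate is g(t) / (travel_time + t); the marginal value theorem
    says to leave the patch at the time that maximizes this rate.
    """
    if times is None:
        times = np.linspace(0.01, 10, 10_000)
    gain = 1 - np.exp(-times)
    rate = gain / (travel_time + times)
    return float(times[np.argmax(rate)])

# Longer travel between patches (or between web pages) means you should
# stay longer in each patch before moving on.
for T in (0.5, 1.0, 2.0, 4.0):
    print(f"travel time {T:.1f} -> leave patch after t = {optimal_patch_time(T):.2f}")
```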
David Easley, Henry Scarborough Professor of Social Science at Cornell University, delves into his theoretical research that identifies externalities in the banking system which can lead to a contagion of bad events. He also shares new research detailing how financial market participants with different sets of beliefs can produce suboptimal market outcomes, illustrated through a simple example about renovating the Ithaca Commons: what should the Mayor do if everyone agrees to renovate the Commons but they want to do so for contradictory reasons? Easley also shares his research about bitcoin transaction fees and then sheds some light on what it is like to work with one’s spouse; Easley’s spouse is Maureen O’Hara, who was previously featured on Present Value! Links from the Episode at presentvaluepodcast.com Episode Article: David Easley on financial contagion and the effect of contradictory beliefs on market optimality Faculty Page: David Easley - College of Arts and Sciences, Cornell MOOC: Networks, Crowds and Markets (edX online course) Book: Networks, Crowds and Markets: Reasoning About a Highly Connected World (amazon link)
This time, Mike is talking about short-term vs. long-term optimality; he shares his thoughts on methodologies and ways to optimise insulin sensitivity and his thoughts on body part emphasis in a training routine. Time stamps: 01:45 Mike shares his insight about short-term vs. long-term optimality 14:39 Mike's opinion on methodologies and ways to optimise insulin sensitivity 26:02 Mike speaks about symptoms of too little fat in your diet 33:00 Mike covers the question of whether hairy people need more protein than less hairy people 38:20 Mike's take on Borge Fagerli's opinion on "effective reps" and the theory of low volume/high RPE vs high volume/moderate RPE 42:30 Mike talks about how many body parts you can prioritise similarly London Seminar 2018 with Mike Israetel & Jared Feather: http://revivestronger.com/mike-israetel-seminar-18/ SUBMIT A QUESTION: https://www.facebook.com/groups/revivestronger/ Thanks, please comment, like and subscribe! COACHING: http://revivestronger.com/online-coaching/ WEBSITE: http://www.revivestronger.com FACEBOOK: http://www.facebook.com/revivestronger TWITTER: http://www.twitter.com/revivestronger INSTAGRAM: http://www.instagram.com/revivestronger MYFITNESSPAL: http://www.myfitnesspal.com/food/diary/snhall1990 YOUTUBE: https://www.youtube.com/watch?v=huugYyn1lyI __ Stay up to date with the latest research! MASS (Research Review): • https://goo.gl/c7FSJD RP+ Membership: • https://ob262.isrefer.com/go/plus/Steve90/ __ Books we recommend! RP Books • http://bit.ly/2vREaH0 Scientific Principles of Strength Training • http://bit.ly/2w3th4D Renaissance Periodization Diet Ebook • http://bit.ly/2wGuuMU Understanding Healthy Eating • http://bit.ly/2uAxFZ8 RP + Members site • https://ob262.isrefer.com/go/plus/Steve90/ __ Recommended supplements: Denovo Nutrition (use code STEVE) • https://denovosupps.com?aff=6 __ When you're interested in online coaching, please go visit our website and follow the application form: http://www.revivestronger.com/online-coaching/
This time we talk about the mindset when it comes to longevity vs. acute gratification. Also, we talk about sacrificing things for the sake of optimality. Thanks, please comment, like and subscribe! London Seminar 2018 with Mike Israetel & Jared Feather: http://revivestronger.com/mike-israetel-seminar-18/ COACHING: http://revivestronger.com/online-coaching/ WEBSITE: http://www.revivestronger.com FACEBOOK: http://www.facebook.com/revivestronger TWITTER: http://www.twitter.com/revivestronger INSTAGRAM: http://www.instagram.com/revivestronger MYFITNESSPAL: http://www.myfitnesspal.com/food/diary/snhall1990 YOUTUBE: https://www.youtube.com/watch?v=bSkjEtb8gUk __ Stay up to date with the latest research! MASS (Research Review): • https://goo.gl/c7FSJD RP+ Membership: • https://ob262.isrefer.com/go/plus/Steve90/ __ Books we recommend! RP Books • http://bit.ly/2vREaH0 Scientific Principles of Strength Training • http://bit.ly/2w3th4D Renaissance Periodization Diet Ebook • http://bit.ly/2wGuuMU Understanding Healthy Eating • http://bit.ly/2uAxFZ8 RP + Members site • https://ob262.isrefer.com/go/plus/Steve90/ __ Recommended supplements: Denovo Nutrition (use code STEVE) • https://denovosupps.com?aff=6 __ If you're interested in online coaching, please visit our website and follow the application form: http://www.revivestronger.com/online-coaching/
Dynamic optimality: independent rectangle, Wilber, and Signed Greedy lower bounds; key-independent optimality; O(lg lg n)-competitive Tango trees
Dynamic optimality: binary search trees, analytic bounds, splay trees, geometric view, greedy algorithm
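Splay trees, mentioned in both lecture summaries above, are the standard candidate for a dynamically optimal binary search tree: every access rotates the accessed key up to the root. As a rough, self-contained sketch (not the course's own code), here is the classic recursive splay operation in Python.

class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def rotate_right(x):
    y = x.left
    x.left = y.right
    y.right = x
    return y

def rotate_left(x):
    y = x.right
    x.right = y.left
    y.left = x
    return y

def splay(root, key):
    # Bring the node holding `key` (or the last node on its search path) to the root.
    if root is None or root.key == key:
        return root
    if key < root.key:
        if root.left is None:
            return root
        if key < root.left.key:                      # zig-zig
            root.left.left = splay(root.left.left, key)
            root = rotate_right(root)
        elif key > root.left.key:                    # zig-zag
            root.left.right = splay(root.left.right, key)
            if root.left.right is not None:
                root.left = rotate_left(root.left)
        return root if root.left is None else rotate_right(root)
    else:
        if root.right is None:
            return root
        if key > root.right.key:                     # zig-zig
            root.right.right = splay(root.right.right, key)
            root = rotate_left(root)
        elif key < root.right.key:                   # zig-zag
            root.right.left = splay(root.right.left, key)
            if root.right.left is not None:
                root.right = rotate_right(root.right)
        return root if root.right is None else rotate_left(root)

def bst_insert(root, key):
    # Plain BST insert, used here only to build a small example tree.
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = bst_insert(root.left, key)
    elif key > root.key:
        root.right = bst_insert(root.right, key)
    return root

root = None
for k in [5, 2, 8, 1, 3, 7, 9]:
    root = bst_insert(root, k)
root = splay(root, 3)
print(root.key)  # 3 is now at the root

Repeated accesses to nearby or recently used keys stay cheap after splaying, which is the kind of behavior the analytic bounds and the dynamic optimality question in these lectures are concerned with.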
This Episode’s Focus on Strengths This week Lisa speaks with Pete Mockaitis, who joins us in a live example of what it’s like to explore your StrengthsFinder results for the first time. Pete's Top 10 StrengthsFinder Talent Themes: Ideation, Strategic, Learner, Activator, Input, Connectedness, Woo, Communication, Positivity, Individualization Lisa’s Top 10 StrengthsFinder Talent Themes: Strategic, Maximizer, Positivity, Individualization, Woo, Futuristic, Focus, Learner, Communication, Significance Resources of the Episode You can reach Pete through the Awesome at Your Job website. You can also connect with him on Twitter and LinkedIn. And you should because he's awesome! Here's the link to Pete's podcast, and to his interview of Lisa Cummings. Books, terms, and other websites mentioned in this podcast: Book: Pre-Suasion: A Revolutionary Way to Influence and Persuade by Dr. Robert Cialdini Study: 80/20 Rule, which is also called the Pareto Principle Term: Leadership Domains as explained by my friends at Leadership Vision Consulting. They're another firm that offers Strengths-based leadership training. And our favorite resource of the episode: evidence of Pete's wicked-awesome talent of one-handed clapping: You'll also find lots of StrengthsFinder, leadership, and team tools on our Strengths Resources page at http://leadthroughstrengths.com/resources. Subscribe To Lead Through Strengths To subscribe and review, here are your links for listening in iTunes and Stitcher Radio. You can also stream any episode right from this website. Subscribing is a great way to never miss an episode. Let the app notify you each week when the latest episode gets published. Here's The Full Transcript of the Interview Lisa Cummings: [00:00:08] You’re listening to Lead Through Strengths, where you’ll learn to apply your greatest strengths at work. I’m your host, Lisa Cummings, and I’ve got to tell you, whether you’re leading a team or leading yourself, it’s hard to find something more energizing and productive than using your natural talents every day at work. And today you’re going to get a really unique episode on StrengthsFinder. It’s different from our usual guest interview. Today, your guest joins us in a live example of what it’s like to explore your StrengthsFinder results for the first time. So I think a lot of guests are going to identify with his love of learning and his corporate experiences. He’s actually a former consultant for Bain so he has that pedigree company thing on his list that many of you have. And today he’s the trainer-in-chief at Awesome At Your Job, so you’ll hear more about that and his show as we dig in. So, if you’re a regular listener of this show, you know that we’re going to talk about how his differences are his differentiators. So you’ll enjoy hearing a fun fact about him. So, here it goes. This guy has a unique talent of being able to clap with one hand. So, Pete Mockaitis, welcome to the show. Please give yourself a one-handed welcome and demonstrate for us. Pete Mockaitis: [00:01:34] Oh, Lisa, thank you. That’s such a unique welcome and it’s fun to do, and here we go. [one-handed claps] Lisa Cummings: [00:01:40] I can’t believe that is really happening with one hand. It is blowing my mind. You’re going to have to make us a video so we can see what that actually looks like. I can’t believe that’s possible.
Pete Mockaitis: [00:01:51] I can do that, yes, and that’s probably my number one strength is one-handed clapping. It opens a lot of doors. Lisa Cummings: [00:01:58] [laughs] Your hand can open a door in a traditional way...but his hand...watch out. Pete Mockaitis: [00:02:01] Oh, well-played. Lisa Cummings: [00:02:05] Watch out. Oh, my gosh. We’re going to totally have this video on the show notes, so if you’re listening click on over to that because that’s a serious talent. I love it. [laughs] Okay, let’s get into the serious side of super powers. That’s one, I tell you; parlor tricks, though, they could fuel the Woo that you have up in there. I think there’s something tied here. Maybe that’s how you discovered it. Maybe we’ll uncover that today. Pete Mockaitis: [00:02:30] Oh, are folks being won over as we speak, or are they turned off? We’ll see with your emails that come flowing in. Lisa Cummings: [00:02:35] That’s right. Okay. So, you know in this episode, we’re going to do this like a sample of exploring your StrengthsFinder talents for the first time. Well, we’re going to have to start by telling them what your Talent Themes are. So give them your top five. Pete Mockaitis: [00:02:50] Okay, can do. With just the words or the descriptions as well? Lisa Cummings: [00:02:54] Let’s get a little “Meet Pete” moment. So do the word and also the one sentence of what this looks like on you. Pete Mockaitis: [00:03:03] Okay. So, first, I’ll give a quick preview – one, Ideation; two, Strategic; three, Learner; four, Activator; and five, Input. In terms of the one sentence: 1) Ideation, it’s true I am fascinated by ideas and how they connect together on my podcast with guests. I eat it up when I see “Oh, wait, there’s one thing someone said” that can combine with that other thing they said, so I’m going to focus on prioritizing with the one thing but also building some tiny habits and, boom, there’s this combination synergy goodness, and so that resonates. 2) Strategic. I buy that in terms of if I’m always thinking about sort of what’s the optimal path forward, that’s the name of my company – Optimality, LLC – getting the bang for the buck and sort of that 80/20 Rule in action, I’m really after that. 3) For Learner, it’s true. Ever since I was a youngster that’s kind of where my trainer-in-chief story starts. I was always going to the library reading books about goal-setting, success, teamwork, collaboration, influence. I was just into that stuff, and I remain to this day. 4) Activator, it’s true. I am often impatient. I’m excited to put things into action. Just this week I was thinking it’s just too much trying to manage the guests with merely emails and spreadsheets. I need a CRM, customer relationship management piece of software, and five hours later I had tried nine of them and made my decision. So, yeah, I got after it right away. That’s kind of my nature. I’ll wake up and I’ll have an idea and I just want to like run to the computer and implement it. 5) And then, finally, Input. I do, I love to get perspective from wise folks and learn all that they have to offer and collect multiple opinions to really prove or disprove the sort of key facts or assertions that are going to make or break a given decision. Lisa Cummings: [00:04:59] These are so good. Thanks for adding the Pete color because even for people who don’t understand the basic definition of it in Gallup’s terminology, you explained it and then added your individual color.
Just seeing you as a kid in the library, I’m imagining you going back and training them, so it’ll be fun to hear the depth on that. And then Activator, one that just happened the other day. It’s just a really great specific example so we can see what these are like in real life. So, let’s talk about if we really relate this to career, and you think back on one of your proudest accomplishments, tell us about that snapshot in time. Pete Mockaitis: [00:05:40] You know, I’m thinking, the first thing that leaps to mind is just getting the job at Bain & Company itself. I’d say it was very meaningful to me because I had been interested in it for some years before it came about, and it was just a vivid moment. I can recall when I was emceeing a date auction event as a fundraiser in college for a student organization, and when I got the call I just handed the microphone to someone, walked off stage, received the call. It was great news. I was excited. I hugged my friend, Emily, who was wearing a red puffy coat. It’s forever enshrined in my brain as like the moment that this thing I had been after for some years was now mine. Lisa Cummings: [00:06:31] I love how vivid your imagery is in all of these. Take us through the preparation, what it was like for you getting ready for applying for this job, making it a thing. It sounds like it was a long time coming. So how was that playing out in your life, leading up to that phone call? Pete Mockaitis: [00:06:49] Oh, sure thing. Well, I was sort of an odd kid in my sort of freshman year of college. I was sort of determined like, “By golly, I want to work in a top strategy consulting firm when I graduate, and so that’s just what I’m going to do.” And so I began exploring different avenues very early on in terms of student organizations and what were the linkages and how I could have sort of a distinctive profile so that I would be intriguing to them. I went to the University of Illinois Urbana-Champaign, which is not a hotbed for recruiting into those firms, but there are a few each year who get in, and I wanted to be one. So, I remember I would sort of try to find the right people, and the right organizations, and learn from them and see what I could do. And I remember, talk about vivid experiences, I was chatting with a guy named Bo who was wearing a Harry Potter wizard hat at a Halloween party. And he said, “Oh, you should join the student organization.” And I was like, “Oh, I was thinking about that, but isn’t that kind of more technology stuff?” And he’s like, “Oh, no. It’s much broader than that. Yeah, and they’re always chatting with so-and-so and they do case interviews,” which is a key step to get a job in these firms, “to get in and, yeah, I think you’d like it.” And so I was excited to discover that opportunity and then go after it. Then once I met a real person named Jeff who had the position, I was just having a whole lot of fun chatting with him and seeing, “Hey, what’s it like on the inside? Is it really what I’ve built it up to be?” and sort of receiving that reinforcement that it was good. And then, ultimately, I think the biggest hurdle to get the job is the case interview where you have to sort of solve business problems live before the interviewer’s eyes. And so I did a lot of prep. I got the books, I even recorded myself doing case interviews. I’d listen to them back to see how I was doing and to see how I might tweak it to seem more engaging or succinct and insightful.
I remember I was listening to myself doing case interviews while driving up to the interview the day before. So those are things that leap to mind there. Lisa Cummings: [00:09:06] Those are so good. Now, if you look at your talents, and then you try to make some linkages... now, I’ve made a bunch of linkages, and although the listeners can’t see your list beyond your top five, you would not be surprised if you know Learner and Input: Pete immediately goes out and wants more input and grabs the full 34 premium version of the assessment so he can see the whole lineup. So I see a bigger lineup and I have some things popping into my head about your number 6, Connectedness, and your number 7, Woo. But when you look at your list and you think back on that experience, what links do you see where you’re using those talents as you’re preparing? Pete Mockaitis: [00:09:47] Oh, sure thing. Well, it’s interesting, in terms of Activator it’s like, “This is the thing I want and so I’m going to start now.” I was a freshman and I was evaluating opportunities. Not only whether they were fun and I would get to meet people, but if they would take me to where I wanted to go, and then jumping in full force for those things I thought could really do it. So, I guess that’s Activator. I’m getting right to it, yet Strategic is that I was kind of being selective, and saying, “You know, while that club sounds kind of interesting, I don’t think it’s going to have as much sort of bang for my buck, in terms of taking me where I want to be.” And so the interestingness is not quite enough to offset this. And then with Ideation, I think I did take some novel approaches to having a distinctive profile, like I authored a book in college about leadership and student organizations, and I saw the opportunity to be the Secretary General of our Model United Nations, which I thought, “Well, that’s a really cool leadership opportunity in terms of managing dozens of people and thousands of dollars to put together an event for hundreds of folks. Ooh, that’ll be a real nice concept to make an impression, as well as having a ton of fun.” So I was a pure career-seeking robot along the way. But I do see those in learning: yeah, talking to folks, learning what the firms want, how they operate, getting the books. And Input, certainly, talking to numerous people along the way to confirm, “Is this really what I think it is?” and learn, “Well, what needs to be done in order to get there?” Lisa Cummings: [00:11:30] You’re bringing up what happens for a lot of people where if they heard the descriptors in the StrengthsFinder Talent Themes, and they listened to the thing that you just described, they would probably think “Achiever,” because it seems like the easy way to describe what you accomplished. And although Achiever is middle of the road for you, 13, it’s not extraordinarily high, but you found extreme achievement at that age. So, you’re demonstrating something that’s really cool, which is what I always tell people: StrengthsFinder doesn’t tell you what to go do in your career. It’s more about how you can go do it, leaning on the talents you have. So you found achievement through totally different talents, and it’s dangerous to try to look at the words on the surface. And I think if I listened to your show, which I do. Pete Mockaitis: [00:12:21] Oh, thank you. Lisa Cummings: [00:12:22] Which is called Awesome At Your Job. So, for those of you listening, if you want to check it out, we’ll put the link in the show notes.
It’s a great show about being awesome at your job overall. I think if I listened to that show I might hypothesize that you have an Analytical talent, for example, because I know that you mention research studies very often, you mention proof points, your favorite hobby is Monopoly. So you have some of these things, right, that some people might think, “Oh, that sounds like an Analytical guy.” And Talent Themes show up more in how you approach what you do, not necessarily in what those interests are. So, kind of a fascinating thing you’re bringing up. So, tell us about yearnings and interests, like Monopoly and research studies and proof points, and things that you talk about in your show and how your Talent Themes speak to those. Pete Mockaitis: [00:13:14] Oh, that is interesting in terms of just what’s fun. So, on my honeymoon, just a few months ago – Yay. Lisa Cummings: [00:13:23] Yay. Pete Mockaitis: [00:13:24] I was reading this book Pre-Suasion by Dr. Robert Cialdini on the beach. And so it’s funny, it’s non-fiction but that was just fascinating and fun for me, I was like, “Oh, wow. Well, here’s an interesting fact. They did a study and here’s what happened.” And so I’ll find that all the more thrilling than most works of fiction because I guess Ideation is fuelling that fascination in terms of I’m thinking, “Oh, look at all these implications for how I could go put that to work and make things happen.” And for Monopoly, it’s so funny. I remember one time I was meeting this guy for the first time, his name is Peter; fine name, fine guy. Lisa Cummings: [00:14:09] Fine name. Pete Mockaitis: [00:14:11] [laughs] And so as we were playing Monopoly he kept asking me some questions about my career journey and how I went into Bain and why I left Bain and started my own business and these things. And I’ll tell you what, I was so focused on the strategic options and decisions I had to make in that game of Monopoly to win that I actually had in my head the idea that this guy was trying to distract me in order to win at Monopoly. Lisa Cummings: [00:14:40] [laughs] Pete Mockaitis: [00:14:41] I thought, “Pete, that’s crazy. Most people don’t care. They play games to socialize in fun ways.” [laughs] I was being a little rude in retrospect. I kind of apologized to him. I gave him very short answers, I was like, “Well, ultimately, that’s just something I’ve always loved to do.” You know, just one- or two-sentence responses. Lisa Cummings: [00:15:01] Let’s get back to the seriousness of Park Place, buddy. [laughs] Okay. So, now what you’re helping me see and raise is this concept of domains. I don’t know if you know this about StrengthsFinder, but they’ve done some studies on leadership, and these four domains of leadership actually came from quite a large study on followers. So, if I look at your talent lineup, not to get too nerdy and distract from the story of you, I’ll give you the quick version. There are four different domains of leadership that people often find their strength in, and yours, to give you the tell as I lean into it, you come in really hot on the Strategic Thinking Talents, and then second highest is your Influencing. So, there are four categories. You have the Relationship Talents. You have the Influencing Talents. You have the Strategic Thinking Talents, the thinker guy that you probably are, and then you have Executing Talents. And so, as I listened to your reaction to the Monopoly thing, I could see you being really in your head about what was going on in the situation.
The way I look at these four domains is that they’re all valuable, and they’re all useful ways that you can demonstrate leadership, but this is kind of, when you have one that comes in heavy in your top five, it’s often the color of glasses you’re wearing. Like yours would be, if you looked at your StrengthsFinder report, the Strategic Thinking Talents are actually colored red. And you could see, “Okay, look, my first view on things, the lens I’m going to see the world through will, first, likely be thinking about it.” Now you have a lot of fast-thinking talents, so Ideation is fast and Strategic is fast, so it’s not like you’re going to go deep and sit around and ponder things deeply for months. You can boom, boom, boom, react to that guy and have your answer. And I noticed your Influencing Talents are also high on your list. You have Activator, Woo, Communication up in your top 10. It’s interesting to see those two. How does that play into how you’ve seen yourself and your career? Pete Mockaitis: [00:17:12] Well, that is interesting. And what’s funny is I have a little bit of a hard time switching at times in that I really do like people and building relationships, and connecting and laughing and seeing how we’re similar and how we can help each other and collaborate and all those good things. That’s fun for me. But surprising, or I don’t know, just kind of part of how I go, is that when I get deep into the realm of this Ideation, Strategic, Input, Thinking and I’m trying to crack something, or figure it out, it’s just sort of like Peter in that game of Monopoly. It’s like, “I’m not in people mode right now. I am in finding an optimal solution given all of my options and constraints mode right now.” And I feel a bit sort of like I’m being pulled away from that which I’m attached to and I’m into at the moment, or I’m just sort of like I’m not really present or there. I think that does show up in that they are different clusters and I feel them differently in terms of my whole headspace and emotional state. It’s like, “I’m not in people mode right now.” And sometimes my wife will notice and she would like me to enter into people mode as we’re being together, or where she’ll just say, “Okay, you’re in your groove. Go ahead and finish that first.” So that’s the first thing that pops to mind there. Lisa Cummings: [00:18:45] What a deep, powerful insight. I love hearing how the thinking stuff is playing out in your head, and then also the relationship part. So, I apply StrengthsFinder to work all the time and find that sometimes the easiest way to see how you perform relative to other people is through people you’re really close to. So your wife probably knows you about as well as anyone in the world so she’s going to be more comfortable saying it out loud or noticing it or mentioning it. Do you happen to know hers? Has she taken this yet? Pete Mockaitis: [00:19:20] You know, I don’t think she has. Lisa Cummings: [00:19:22] Okay. That would be fun. So this could be one where you say, “Okay, look, your first Relationship Talent is Connectedness. It’s your number six. I hear you relying on it relatively often.” So you could ask a question like how could you lean on your Connectedness talent when you’re trying to consciously switch into a mode that would complement the conversation you two are having? Pete Mockaitis: [00:19:47] That is a great question. And, particularly Connectedness, that’s one of those words in StrengthsFinder that makes me think of, “Oh, like a super network.” But, no, no.
Connectedness is more about having sort of like the faith in why things are the way they are or a higher power. And so, for me, that is big. I’m a Catholic Christian. I think tapping into some of those, well, one, I guess is the headspace of worship or sort of loving people and serving them as folks made in the image and likeness of God can be pretty potent in terms of a reminder of, “Hey, what’s really important here?” “Well, how about we give that person the listening ear and respect and attention that they deserve?” Lisa Cummings: [00:20:32] Oh, this is so good. I could take this in 20 directions because, one, I hear the interplay of Talents, how your Connectedness and Strategic get so wound together because you do have so many Thinking Talents; the connection of ideas, and not just people and meaning, but pulling all those things together – connecting meaning, connecting people, connecting ideas. Those are going to play out for you in a way that might even be difficult to separate, you know, “Which talent thing is talking here?” And then your first Executing Theme is Belief and that, of course, I hear it in what you just said, and so it really helps me see it when you say it. Oh, yeah, this would drive how you go about getting things done as well, with the perspective of the meaning in your life and what is this all for and how does it play out. I also think this is the direction I’ll ultimately take it, because there are so many ways we could go from that conversation. So a lot of people struggle with this. You look at your lineup, and I’ve told you about these leadership domains, and you see, “Oh, my gosh. My first Executing Talent is number 12. This sounds like a person. Oh, no, I might be doomed. Does it mean I never get anything done?” Well, clearly you get a lot done. You are a machine, it seems. So, where do you get your ability to achieve and get the outcomes and results you want? Because you clearly do. Pete Mockaitis: [00:22:00] How does it happen? Well, I think part of it is just that I think about it in terms of I have a standard in mind in terms of how things should be or go. I think that’s kind of a vague broad thing to say. But, day after day, what mostly happens is I have kind of a picture in my head for what the done, good, complete, dream, nirvana state looks like, and then I have all these ideas for the things that I could do that could bring it there. And then I just become very excited about those ideas and I just sort of run after them. In terms of the CRMs, I was thinking, “I have a dream” – so dramatic. Lisa Cummings: [00:22:57] [laughs] Martin Luther Pete has a dream of CRM systems. Pete Mockaitis: [00:23:03] In which every guest that comes on my show will be absolutely outstanding, like leaving me and listeners with, “Wow.” Well, what’s it take to get there? Well, probably a fuller pipeline so that I don’t ever have a scramble in terms of, “Oh, I’m a little light on interview appointments. I better get some right away.” That’s like an obstacle to that: when you have the time to patiently vet candidates as opposed to, “Oh, I’ve got to grab somebody,” then the odds are in your favor in terms of getting great ones. So then, I think, “Well, then what does that system look like? And how can I do that without spending my whole life stuck analyzing their tweet history?” That’s how I often think about how it gets done, is I feel this tension inside me.
It’s like, “I want that to be real and I’ve got these compelling, exciting ideas for what I could do to make that real so let’s go do it.” Lisa Cummings: [00:24:01] It’s really pretty deep what you just said because I could see Strategic helping you sort quickly, “Here’s the outcome. What’s the best way to get there?” Boom, your Activator says, “Go!” and then you create these systems and the insight that listeners won’t have, is that you and I have had some other conversations outside of this. Pete and I are pals. So we’ll talk podcast nerd-talk and he has all these great systems and team members who make things happen, and it actually is one of the great things you can do as Activator. You partner with people who see it through the finish line so that you can get the excitement at the starting line, and then other people can do the execution of the systems you’ve established and the vision you’ve created. So it’s actually a beautiful way you’ve worked through it. Pete Mockaitis: [00:24:43] Oh, thank you. You know, it’s so funny, when you say it like that I think, “Well, of course, isn’t that how everyone does it?” And the answer is I guess clearly, “No, it’s not.” Because I think, “Well, isn’t executing the same thing hundreds of times kind of dull?” But, no, some people are into that. Lisa Cummings: [00:24:59] A-ha. Okay. So, here’s the last topic we’ll bring up only because we’re running out of time because, geez, this would be so much fun to keep going and going and going. So that comment you just made made me think of the Talent Theme of Consistency, doing the same thing hundreds and hundreds of times. Well, it is Pete’s number 33 talent, so we call that a lesser talent, or maybe somebody else’s talent. Meaning somebody else, right? Yes, somebody else might get really excited about doing something the same way consistently over and over every day. But if Pete had to do that every day, what would work feel like for you? Pete Mockaitis: [00:25:37] Oh, it would just be so dull. It’s like I would want sort of some spark of newness to make it come together. Lisa Cummings: [00:25:48] This is a great way to end the show because living in your strengths makes you a stronger performer. Living in your strengths brings you energy and enjoyment about your job. If you’re pulling on your lesser talents, or someone else’s talents, all day every day, you feel drained, you feel burned out, and so many people feel like that and wonder, “You know, gosh, it’s not so hard and people are nice. So why do I feel like this?” And that’s often why, it’s because they’re calling on their weaknesses all day every day but they just don’t quite realize why. So, thanks, in an unexpected way, for illustrating that point because that is so powerful for people to have that insight. Pete Mockaitis: [00:26:25] Oh, thank you. It’s been a blast. Lisa Cummings: [00:26:27] It has been a blast. I’m so excited to have you here to do this. I wish we could triple down on it. Let’s get listeners over to you because you have so many great shows to help people be awesome at their jobs. So, where should they go to dig into your content, your training, your podcasts? Pete Mockaitis: [00:26:42] Oh, sure thing. Thank you. Well, I’d say if you’re already, well, you are a podcast listener, fire up your app and whatever you’re doing and search Awesome Job. That should be enough to pop up the show How To Be Awesome At Your Job. Lisa herself is a guest on an episode. You might check that out to get another flavor for her. 
Or just my website AwesomeAtYourJob.com. And it’s been fun. I’ve had about 130, wow, conversations with tremendous folks and every one of them is about trying to sharpen the universal skills required to flourish at work. So, whether you’re an executive, or a manager, or an individual contributor in marketing, or finance, or anything, it should be applicable because that’s kind of the primary screen we’re using. Lisa Cummings: [00:27:26] I second that. It is a fantastic show. I met Pete last year, and ever since leaving our meet-up in Chicago, I just have been an avid listener, and it’s just full of great guests and great tips. If you want to go back and listen through the lens of the StrengthsFinder Talents, it’ll be really fun to do that. Also, for listeners, if you want some Strengths-focused tools to use with your team at work, also check out LeadThroughStrengths.com/resources and you’ll get a bunch of great free info there. As we close the episode, remember: using your strengths makes you a stronger performer at work. If you’re putting a lopsided focus on fixing your weaknesses, you’re probably choosing the path of most resistance. So claim your talents and share them with the world.
Sir Muir Gray - Podcast of the Month: Optimality by Oxford Centre for Triple Value Healthcare
Episode 115 is live! This week, we talk with Pete Mockaitis in Chicago, IL. Pete is an award-winning trainer who has served clients in over 50 countries. His work has enhanced Fortune 100 corporations, high-growth startups, and major nonprofits. He’s conducted coaching sessions for over 700 thinkers from every Ivy League university and world-class organizations including Apple, Goldman Sachs, Google, and the United Nations. He is also the host of the How to Be Awesome at Your Job Podcast -- and CEO of Optimality. On today's episode, Pete shares his secrets to getting hired at -- and succeeding at a top consulting firm. Listen and learn more! You can play the podcast here, or download it on iTunes or Stitcher. To learn more about Pete and his podcast, check out his website at www.awesomeatyourjob.com. You can also find him on Twitter at @PeteAwe, and on Facebook at www.Facebook.com/petefans. Thanks to everyone for listening! And, thank you to those who sent me questions. You can send your questions to Angela@CopelandCoaching.com. You can also send me questions via Twitter. I’m @CopelandCoach. And, on Facebook, I am Copeland Coaching. Don’t forget to help me out. Subscribe on iTunes and leave me a review!
User Experience designer and recovering alcoholic Victor Yocco speaks about habit formation--good and bad. You’ll Learn: Victor’s personal story and implications for forming effective habits and breaking ineffective ones The power of teaming up with others to achieve your ambitions How to use a design approach to construct and reach your career goals About Victor Victor is a Philadelphia-based research director, author, and speaker. He received his PhD from The Ohio State University, where he studied communication and psychology. Victor regularly writes and speaks on the application of psychology to design and addressing the design and tech culture of promoting alcohol use. He has written for A List Apart, Smashing Magazine, UX Booth, User Experience Magazine (UXPA) and many more. He is the author of Design for the Mind, a book from Manning Publications on the application of principles of psychology to design. View transcript, show notes, links, and more at http://AwesomeAtYourJob.com/ep33. Copyright © Optimality
Leadership advisor Randy Street shares fascinating insights gleaned from his advisory firm’s in-depth analyses on thousands of senior leaders--the biggest database on leaders in the world. He then shares strategies and tactics for putting those insights to work. You’ll learn: The 5 essential interview questions to boost your hiring success rate from 50% to 90% The 3 key areas that full-powered leaders master (Priorities, Who, Relationships) How to say “no” perfectly About Randy Randy Street is the Managing Partner of ghSMART, a leadership advisory firm whose mission is to help great leaders amplify their positive impact on the world. In collaboration with founder Geoff Smart, Randy co-authored the New York Times and Wall Street Journal bestsellers, Who: The A Method for Hiring and Power Score: Your Formula for Leadership Success. Who remains the #1 book on hiring on Amazon. View transcript, show notes, links, and more at http://AwesomeAtYourJob.com/ep30. Copyright © Optimality
Mathematik, Informatik und Statistik - Open Access LMU - Teil 03/03
In regression models for ordinal response, each covariate can be equipped with either a simple, global effect or a more flexible and complex effect which is specific to the response categories. Instead of a priori assuming one of these effect types, as is done in the majority of the literature, we argue in this paper that effect type selection should be data-based. For this purpose, we propose a novel and general penalty framework that allows for an automatic, data-driven selection between global and category-specific effects in all types of ordinal regression models. Optimality conditions and an estimation algorithm for the resulting penalized estimator are given. We show that our approach is asymptotically consistent in both effect type and variable selection and possesses the oracle property. A detailed application further illustrates the workings of our method and demonstrates the advantages of effect type selection on real data.
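To make the abstract's distinction concrete in standard cumulative-model notation (generic textbook notation, not necessarily the paper's exact parameterization): with response categories $r = 1, \dots, k-1$ and link $F$, a global effect uses one coefficient vector for all categories, while a category-specific effect lets it vary with $r$:

$$P(Y \le r \mid x) = F(\theta_r - x^\top \beta) \quad \text{(global effect)}, \qquad P(Y \le r \mid x) = F(\theta_r - x^\top \beta_r) \quad \text{(category-specific effects)}.$$

A penalty of the kind described would, for each covariate $j$, shrink the spread of its category-specific coefficients $\beta_{jr}$ toward a common value, for instance via a group-type term such as $\lambda \sum_j \sqrt{\sum_r (\beta_{jr} - \bar{\beta}_{j\cdot})^2}$, so that a covariate collapses to a single global effect unless the data support category-specific ones; this is an illustrative form only, and the exact penalty proposed in the paper may differ.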
In this lecture, Prof. Jeff Gore discusses the Nature article "Optimality and evolutionary tuning of the expression level of a protein," with emphasis on the connection between theory, prediction, and experiment.
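A toy version of the cost-benefit calculation behind that paper's title (a minimal sketch with invented functional forms and numbers, not the paper's fitted model): the optimal expression level of a protein sits where the marginal benefit of having more of it equals the marginal cost of producing it.

import numpy as np

# Hypothetical fitness components as a function of expression level z (arbitrary units).
def benefit(z):
    return 0.2 * z / (z + 1.0)   # saturating benefit of the protein's activity

def cost(z):
    return 0.02 * z              # growing burden of expressing the protein

z = np.linspace(0.0, 20.0, 2001)
fitness = benefit(z) - cost(z)
z_opt = z[np.argmax(fitness)]
print(f"optimal expression level ~ {z_opt:.2f} (arbitrary units)")
# With these toy parameters the optimum is near z ~ 2.2, where
# d(benefit)/dz equals d(cost)/dz.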
Tartakovsky, A (University of Connecticut) Friday 17 January 2014, 10:15-11:00
Optimality Theory Was a Hoax—Prince and Smolensky finally come clean; by SpecGram Wire Services; From Volume CLXVI, Number 4, of Speculative Grammarian, March 2013 — At a tearful news conference during the 2013 Annual Meeting of the Linguistic Society of America, Alan Prince confessed that Optimality Theory was a hoax. “I just can’t live with the lies any longer,” he said. (Read by Brianne Hughes.)
Speaker: Dr. M. A. Chaudry Abstract: Network coding has gained significant interest from the research community since the first paper by Ahlswede et al. in 2000. Network coding techniques can significantly increase the overall throughput of wireless networks by taking advantage of their broadcast nature. We focus on network coding for wireless networks; specifically we investigate the Index Coding problem. In wireless networks, each transmitted packet is broadcast within a certain region and can be overheard by the nearby users. When a user needs to transmit packets, it employs Index Coding, which uses the knowledge of what the user's neighbors have heard previously (side information) in order to reduce the number of transmissions. The objective is to satisfy the demands of all the users with the minimum number of transmissions. With Index Coding, each transmitted packet can be a combination of the original packets. The Index Coding problem has been proven to be NP-hard, and NP-hard to approximate. Noting that the Index Coding problem is not only NP-hard but NP-hard to approximate, we look at it from a novel perspective and define the Complementary Index Coding problem, where the objective is to maximize the number of transmissions that are saved by employing Index Coding compared to the solution that does not involve coding. We prove that the Complementary Index Coding problem can be approximated in several cases of practical importance. We investigate both the multiple unicast and multiple multicast scenarios of the Complementary Index Coding problem for computational complexity, and provide polynomial time approximation algorithms.
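A minimal worked example (illustrative, not from the talk) of the saving the abstract describes: suppose user 1 wants packet p1 and has already overheard p2, while user 2 wants p2 and has overheard p1. Broadcasting the single coded packet p1 XOR p2 satisfies both users, whereas the uncoded solution needs two broadcasts, so one transmission is saved.

# Two packets, one byte each for simplicity.
p1, p2 = 0b10110011, 0b01011100

# Index-coded broadcast: one transmission instead of two.
coded = p1 ^ p2

# Each user XORs the coded packet with its side information to recover its demand.
recovered_by_user1 = coded ^ p2   # user 1 holds p2, wants p1
recovered_by_user2 = coded ^ p1   # user 2 holds p1, wants p2

assert recovered_by_user1 == p1 and recovered_by_user2 == p2
print("both demands satisfied with a single coded transmission")

In the Complementary Index Coding framing of the abstract, this instance has a saving of one transmission; the general problem is to maximize such savings, which the talk argues can be approximated even though minimizing the total number of transmissions is NP-hard to approximate.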
Review of Pulju’s An Optimality-Theoretic Account of the History of Linguistics: Past, Present, Future; by TJP, Lecturer in Linguistics and Classics, Dartmouth College; From Volume CLI, Number 2 of Speculative Grammarian, April 2006. — It is a great sorrow to those of us who remember the glory days of Psammeticus Press—those fabled days when it was the leading linguistics publisher in the world—nay, what is more, in the entire history of the world—it is, I repeat, a great sorrow to us to witness the depths to which the beloved imprint has sunk with the publication of this lamentable volume. What could have possessed PsPress’s current chairman K. Winnipesaukee Slater III, a meek man, to be sure, and mild, but still a reputable scholar, and not, so far as we know, entirely devoid of common sense nor of the finer aesthetic feelings, to defile his company’s good name by foisting upon an unsuspecting public this lunatic political screed thinly disguised as a bit of historico-linguistic scholarship? (Read by Joey Whitford.)
Musteranalyse/Pattern Analysis (PA) 2009 (HD 1280 - Video & Slides)
Yoshida, R (Kentucky) Tuesday 18 December 2007, 14:00-14:20 PLGw03 - Future Directions in Phylogenetic Methods and Models
Mathematik, Informatik und Statistik - Open Access LMU - Teil 02/03
In a multivariate mean-variance model, the class of linear score (LS) estimators based on an unbiased linear estimating function is introduced. A special member of this class is the (extended) quasi-score (QS) estimator. It is "extended" in the sense that it comprises the parameters describing the distribution of the regressor variables. It is shown that QS is (asymptotically) most efficient within the class of LS estimators. An application is the multivariate measurement error model, where the parameters describing the regressor distribution are nuisance parameters. A special case is the zero-inflated Poisson model with measurement errors, which can be treated within this framework.
Mathematik, Informatik und Statistik - Open Access LMU - Teil 02/03
We prove that the quasi-score estimator in a mean-variance model is optimal in the class of (unbiased) linear score estimators, in the sense that the difference of the asymptotic covariance matrices of the linear score and quasi-score estimator is positive semi-definite. We also give conditions under which this difference is zero or under which it is positive definite. This result can be applied to measurement error models where it implies that the quasi-score estimator is asymptotically more efficient than the corrected score estimator.
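Written out, the optimality statement in this abstract is that for any unbiased linear score estimator $\hat\theta_{LS}$ and the quasi-score estimator $\hat\theta_{QS}$ of the same parameter,

$$\operatorname{acov}(\hat\theta_{LS}) - \operatorname{acov}(\hat\theta_{QS}) \succeq 0,$$

i.e. the difference of the asymptotic covariance matrices is positive semi-definite, so no estimator in the linear score class is asymptotically more efficient than quasi-score; the paper further gives conditions under which this difference is zero and under which it is positive definite.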
Mathematik, Informatik und Statistik - Open Access LMU - Teil 02/03
We consider a regression of $y$ on $x$ given by a pair of mean and variance functions with a parameter vector $\theta$ to be estimated that also appears in the distribution of the regressor variable $x$. The estimation of $\theta$ is based on an extended quasi-score (QS) function. We show that the QS estimator is optimal within a wide class of estimators based on linear-in-$y$ unbiased estimating functions. Of special interest is the case where the distribution of $x$ depends only on a subvector $\alpha$ of $\theta$, which may be considered a nuisance parameter. In general, $\alpha$ must be estimated simultaneously together with the rest of $\theta$, but there are cases where $\alpha$ can be pre-estimated. A major application of this model is the classical measurement error model, where the corrected score (CS) estimator is an alternative to the QS estimator. We derive conditions under which the QS estimator is strictly more efficient than the CS estimator. We also study a number of special measurement error models in greater detail.