Orthogonality: another name for perpendicularity and its generalizations
Daniel Gore is a Founding Partner of Orthogonal Partners, a boutique asset manager investing in "next generation" alternative investment strategies. He has had an extensive career in investing, including time spent in angel investing, in a family office and in film finance. He is currently pursuing opportunities in small and mid-market UK firms. Our conversation starts with the path that Dan took to this point, which included numerous different opportunities for learning – and we hear about the importance of trust and integrity as values that transcend different investment areas. Dan reflects on his unconventional career, gaining a broad understanding of different asset classes and the importance of backing the right teams. Moving now to Orthogonal Partners, we speak about its range and why it is a source of unconventional returns, and we discuss the structural funding gap in the UK SME market and the government's efforts to address it through the British Business Bank. We dive into some details of the UK SME market, which presents a meaningful opportunity set, with 5.6 million SMEs accounting for 99% of all businesses and 60% of private sector jobs. We conclude with some reflections on the power of networking and the value of building relationships from the start of one's career. Thank you Eagle Point Credit and Benefit Street Partners for supporting this series! With over $12 billion of AUM, Eagle Point Credit Management is a premier investment firm focused on generating strong returns for its clients through sourcing, evaluating and executing investments in CLOs, Portfolio Debt Securities and other credit investments that it believes have the potential to outperform their respective markets generally. Benefit Street Partners is a leading global alternative credit asset manager offering clients investment solutions across a broad range of complementary credit strategies, including direct lending, special situations, structured credit, high yield bonds, leveraged loans and commercial real estate debt and equity. As of December 31, 2024, BSP-Alcentra had $76 billion of assets under management.
rWotD Episode 2870: Orthogonal Time Frequency Space. Welcome to Random Wiki of the Day, your journey through Wikipedia's vast and varied content, one random article at a time. The random article for Thursday, 13 March 2025 is Orthogonal Time Frequency Space. Orthogonal Time Frequency Space (OTFS) is a 2D modulation technique that transforms the information carried in the Delay-Doppler coordinate system to the familiar time-frequency domain used by traditional modulation schemes such as TDMA, CDMA, and OFDM. It was first used for fixed wireless, and is now a contending waveform for 6G technology due to its robustness in high-speed vehicular scenarios. This recording reflects the Wikipedia text as of 00:16 UTC on Thursday, 13 March 2025. For the full current version of the article, see Orthogonal Time Frequency Space on Wikipedia. This podcast uses content from Wikipedia under the Creative Commons Attribution-ShareAlike License. Visit our archives at wikioftheday.com and subscribe to stay updated on new episodes. Follow us on Mastodon at @wikioftheday@masto.ai. Also check out Curmudgeon's Corner, a current events podcast. Until next time, I'm Standard Emma.
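For the curious, here is a rough sketch (not from the episode) of the core transform behind OTFS: data symbols placed on a Delay-Doppler grid are mapped to the time-frequency grid with the inverse symplectic finite Fourier transform (ISFFT), essentially an FFT along one axis paired with an inverse FFT along the other. Conventions, axis ordering, and normalizations vary between papers, so treat this NumPy snippet as illustrative only.

```python
import numpy as np

def isfft(x_dd):
    """Map Delay-Doppler symbols (M delay bins x N Doppler bins) onto the
    time-frequency grid: one common convention pairs an FFT along the delay
    axis with an inverse FFT along the Doppler axis."""
    M, N = x_dd.shape
    return np.fft.fft(np.fft.ifft(x_dd, axis=1), axis=0) * np.sqrt(N / M)

def sfft(x_tf):
    """Inverse operation (receiver side): time-frequency grid back to Delay-Doppler."""
    M, N = x_tf.shape
    return np.fft.fft(np.fft.ifft(x_tf, axis=0), axis=1) * np.sqrt(M / N)

# Random QAM-like symbols on a 16x8 Delay-Doppler grid round-trip exactly.
x_dd = (np.random.randn(16, 8) + 1j * np.random.randn(16, 8)) / np.sqrt(2)
x_tf = isfft(x_dd)
assert np.allclose(sfft(x_tf), x_dd)
```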
Most have never heard of this gentle adjustment of the top vertebra, but Dr. Prather calls it "the most important thing you can do for your health". And 4 out of 5 people need it. In this episode, you'll learn:
—How Dr. Prather is the only board-certified Atlas Orthogonist in the state of Indiana. And why the top chiropractors say the Atlas is the most important adjustment to make in the body, as well as the most difficult adjustment to make.
—That the top vertebra (also known as the Atlas) weighs only 2 ounces but supports the weight of the entire human head (which weighs between 15 and 30 pounds).
—How the entire nervous system goes through the Atlas, which tells the rest of the body what to do. And how this treatment formed the basis of Dr. Prather's entire practice.
—The personal health benefit that Dr. Prather received from this adjustment in his battle with Graves' Disease as a young man, which inspired him to become an Atlas Orthogonist.
—How the Atlas Orthogonal adjustment is a gentle, non-force technique without any of the "popping and cracking" many associate with chiropractic techniques. And why the Atlas Orthogonal technique is extremely safe.
—The reason Dr. Prather says using an instrument to adjust (as with the Atlas Orthogonal Adjustment) is more effective than if he used his hands to adjust.
—Why everyone (including babies) should be checked for this adjustment on an annual basis. And how birth trauma can cause babies to have their Atlas out of position.
—The symptoms you might see as a result of your Atlas needing this adjustment.
—Why the Atlas Orthogonal Adjustment has a huge effect on Multiple Sclerosis, Parkinson's Disease, ALS, concussion trauma, and severe headaches.
—Plus, you'll hear from Michele, a patient at Holistic Integration, and how she describes her treatment as "health care, not sick care". And you'll learn how you can receive a free Performance Assessment from our expert Structural Team by attending our Peak Performance workshop on Wednesday, March 26th at 6 p.m.
http://www.TheVoiceOfHealthRadio.com
In this week's episode Greg and Patrick invoke the very personal interpretation of modern art as a framework for thinking about the exceedingly cool topic of rotation in exploratory factor analysis. Along the way they also discuss Venice Beach, haystacks, drug fronts, being insufferable, ignoramuses, .22's and stop signs, weak pivots, honking factors, pooping out matrices, the Gulf of America, twitchy eyeballs, big fat zeros, obliquity, and Extortomax. Stay in contact with Quantitude! Web page: quantitudepod.org TwitterX: @quantitudepod YouTube: @quantitudepod Merch: redbubble.com
Welcome to The Orthogonal Bet, an ongoing mini-series that explores the unconventional ideas and delightful patterns that shape our world. Hosted by Samuel Arbesman. In this episode, Sam speaks with Dave Jilk. Dave is a tech entrepreneur and writer. He's done a ton: started multiple companies, including in AI, published works of poetry, and written scientific papers. And he's now written a new book that is an epic poem about the origins of Artificial General Intelligence, told from the perspective of the first such entity. It's titled Epoch: A Poetic Psy-Phi Saga and is a deeply thoughtful, humanistic take on artificial intelligence, chock-full of literary allusions. Sam wanted to speak with Dave to learn more about the origins of Epoch as well as how he thinks about AI more broadly. They discussed the history of AI, how we might think about raising AI, the Great Filter, post-AGI futures and their nature, and whether asking if we should build AGI is even a good question. They even finished this fun conversation with a few science fiction recommendations. Produced by Christopher Gates Music by George Ko & Suno
Welcome to The Orthogonal Bet, an ongoing mini-series that explores the unconventional ideas and delightful patterns that shape our world. Hosted by Samuel Arbesman. In this episode, Sam speaks with the writer Henry Oliver. Henry is the author of the fantastic new book Second Act. This book is about the idea of late bloomers and professional success later in life, and more broadly how to think about one's career, and Sam recently reviewed it for The Wall Street Journal. Sam really enjoyed this book and wanted to have a chance to discuss it with Henry. Henry and Sam had a chance to talk about a lot of topics, beginning with how to actually define late bloomers and what makes a successful second act possible, from experimentation to being ready when one's moment arrives. They also explored why society doesn't really accept late bloomers as much as one might want it to, how to think about the complexity of cognitive decline, what the future of retirement might look like, along with many examples of late bloomers—from Margaret Thatcher to Ray Kroc. Produced by Christopher Gates Music by George Ko & Suno
Welcome to The Orthogonal Bet, an ongoing mini-series that explores the unconventional ideas and delightful patterns that shape our world. Hosted by Samuel Arbesman. In this episode, Sam speaks with Dominic Falcao, a founding director of Deep Science Ventures (DSV), which he created in 2016 after leading Imperial College London's science startup program. Deep Science Ventures takes a principled and problem-based approach to founding new deep tech startups. They have even created a PhD program for scientists specifically geared towards helping them create new companies. Sam wanted to speak with Dom to discuss the origins of Deep Science Ventures, as well as how to think about scientific and technological progress more broadly, and even how to conceive new research organizations. Dom and Sam had a chance to discuss tech trees and the combinatorial nature of scientific and technological innovation, non-traditional research organizations, Europe's tech innovation ecosystem, what scientific amphibians are, and the use of AI in the realm of deep tech. Produced by Christopher Gates Music by George Ko & Suno
Welcome to The Orthogonal Bet, an ongoing mini-series that explores the unconventional ideas and delightful patterns that shape our world. Hosted by Samuel Arbesman. In this episode, Sam speaks with writer, researcher, and entrepreneur Max Bennett. Max is the cofounder of multiple AI companies and the author of the fascinating book A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains. This book offers a deeply researched look at the nature of intelligence and how biological history has led to this phenomenon. It explores aspects of evolution, the similarities and differences between AI and human intelligence, many features of neuroscience, and more. Produced by Christopher Gates Music by George Ko & Suno
Welcome to The Orthogonal Bet, an ongoing mini-series that explores the unconventional ideas and delightful patterns that shape our world. Hosted by Samuel Arbesman. In this episode, Sam speaks with Alex Miller, a software developer and artist known for his work on a project called Spacefiller. This project exemplifies generative art, where computer code is used to create art and imagery. Spacefiller itself is a pixelated form of artwork that feels organic and biological, but is entirely crafted through algorithms. Sam invited Alex to discuss not only Spacefiller, but also the broader world of generative art, and the concept of coding as a fun and playful activity. Together, they explore topics such as the distinction between computation as art and computation as software engineering, the nature of algorithmic botany, and even the wonders of graph paper. Produced by CRG Consulting Music by George Ko & Suno
Welcome to The Orthogonal Bet, an ongoing mini-series that explores the unconventional ideas and delightful patterns that shape our world. Hosted by Samuel Arbesman. In this episode, Sam speaks with J. Doyne Farmer, a physicist, complexity scientist, and economist. Doyne is currently the Director of the Complexity Economics program at the Institute for New Economic Thinking at the Oxford Martin School and the Baillie Gifford Professor of Complex Systems Science at the Smith School of Enterprise and the Environment at the University of Oxford. Doyne is also the author of the fascinating new book “Making Sense of Chaos: A Better Economics for a Better World.” Sam wanted to explore Doyne's intriguing history in complexity science, his new book, and the broader field of complexity economics. Together, they discuss the nature of simulation, complex systems, the world of finance and prediction, and even the differences between biological complexity and economic complexity. They also touch on Doyne's experience building a small wearable computer in the 1970s that fit inside a shoe and was designed to beat the game of roulette. Produced by CRG Consulting Music by George Ko & Suno
Welcome to The Orthogonal Bet, an ongoing mini-series that explores the unconventional ideas and delightful patterns that shape our world. Hosted by Samuel Arbesman. In this episode, Sam speaks with Tarin Ziyaee, a technologist and founder, about the world of artificial life. The field of artificial life explores ways to describe and encapsulate aspects of life within software and computer code. Tarin has extensive experience in machine learning and AI, having worked at Meta and Apple, and is currently building a company in the field of Artificial Life. This new company—which, full disclosure, Sam is also advising—aims to embody aspects of life within software to accelerate evolution and develop robust methods for controlling robotic behavior in the real world. Sam wanted to speak with Tarin to discuss the nature of artificial life, its similarities and differences to more traditional artificial intelligence approaches, the idea of open-endedness, and more. They also had a chance to chat about tool usage and intelligence, large language models versus large action models, and even robots. Produced by CRG Consulting Music by George Ko & Suno
Welcome to The Orthogonal Bet, an ongoing mini-series that explores the unconventional ideas and delightful patterns that shape our world. Hosted by Samuel Arbesman. In this conversation, he speaks with Omar Rizwan, a programmer currently working on Folk Computer. Omar has a longstanding interest in user interfaces in computing and is now focused on creating physical interfaces that enable computing in a more communal and tangible way—think of moving sheets of paper in the real world and projecting images onto surfaces. Folk Computer is an open-source project that explores a new type of computing in this vein. Samuel engages with Omar on a range of topics, from Folk Computer and the broader space of user interfaces, to the challenges of building computer systems and R&D organizations. Their conversation covers how Omar thinks about code and artificial intelligence, the world of physical computing, and his childhood experiences with programming, including the significance of meeting another programmer in person for the first time. Produced by CRG Consulting Music by George Ko & Suno
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Obliqueness Thesis, published by Jessica Taylor on September 19, 2024 on The AI Alignment Forum. In my Xenosystems review, I discussed the Orthogonality Thesis, concluding that it was a bad metaphor. It's a long post, though, and the comments on orthogonality build on other Xenosystems content. Therefore, I think it may be helpful to present a more concentrated discussion on Orthogonality, contrasting Orthogonality with my own view, without introducing dependencies on Land's views. (Land gets credit for inspiring many of these thoughts, of course, but I'm presenting my views as my own here.) First, let's define the Orthogonality Thesis. Quoting Superintelligence for Bostrom's formulation: Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal. To me, the main ambiguity about what this is saying is the "could in principle" part; maybe, for any level of intelligence and any final goal, there exists (in the mathematical sense) an agent combining those, but some combinations are much more natural and statistically likely than others. Let's consider Yudkowsky's formulations as alternatives. Quoting Arbital: The Orthogonality Thesis asserts that there can exist arbitrarily intelligent agents pursuing any kind of goal. The strong form of the Orthogonality Thesis says that there's no extra difficulty or complication in the existence of an intelligent agent that pursues a goal, above and beyond the computational tractability of that goal. As an example of the computational tractability consideration, sufficiently complex goals may only be well-represented by sufficiently intelligent agents. "Complication" may be reflected in, for example, code complexity; to my mind, the strong form implies that the code complexity of an agent with a given level of intelligence and goals is approximately the code complexity of the intelligence plus the code complexity of the goal specification, plus a constant. Code complexity would influence statistical likelihood for the usual Kolmogorov/Solomonoff reasons, of course. I think, overall, it is more productive to examine Yudkowsky's formulation than Bostrom's, as he has already helpfully factored the thesis into weak and strong forms. Therefore, by criticizing Yudkowsky's formulations, I am less likely to be criticizing a strawman. I will use "Weak Orthogonality" to refer to Yudkowsky's "Orthogonality Thesis" and "Strong Orthogonality" to refer to Yudkowsky's "strong form of the Orthogonality Thesis". Land, alternatively, describes a "diagonal" between intelligence and goals as an alternative to orthogonality, but I don't see a specific formulation of a "Diagonality Thesis" on his part. Here's a possible formulation: Diagonality Thesis: Final goals tend to converge to a point as intelligence increases. The main criticism of this thesis is that formulations of ideal agency, in the form of Bayesianism and VNM utility, leave open free parameters, e.g. priors over un-testable propositions, and the utility function. Since I expect few readers to accept the Diagonality Thesis, I will not concentrate on criticizing it. What about my own view? I like Tsvi's naming of it as an "obliqueness thesis". Obliqueness Thesis: The Diagonality Thesis and the Strong Orthogonality Thesis are false. 
Agents do not tend to factorize into an Orthogonal value-like component and a Diagonal belief-like component; rather, there are Oblique components that do not factorize neatly. (Here, by Orthogonal I mean basically independent of intelligence, and by Diagonal I mean converging to a point in the limit of intelligence.) While I will address Yudkowsky's arguments for the Orthogonality Thesis, I think arguing directly for my view first will be more helpful. In general, it seems ...
Did you know that when you spend time on an online platform, you could be experiencing between six and eight different experimental treatments that stem from several hundred A/B tests running concurrently? That's how common digital experimentation is today. And while this may be acceptable in industry, large-scale digital experimentation poses some substantial challenges for researchers wanting to evaluate theories and disconfirm hypotheses through randomized controlled trials done on digital platforms. Thankfully, a brilliant new paper is forthcoming that illuminates the orthogonal testing plane problem and offers some guidelines for sidestepping the issue. So if experiments are your thing, you really need to listen to what is going on out there. (A short sketch of one common way such concurrent assignment is implemented follows the references below.)
References
Abbasi, A., Somanchi, S., & Kelley, K. (2024). The Critical Challenge of using Large-scale Digital Experiment Platforms for Scientific Discovery. MIS Quarterly, forthcoming.
Miranda, S. M., Berente, N., Seidel, S., Safadi, H., & Burton-Jones, A. (2022). Computationally Intensive Theory Construction: A Primer for Authors and Reviewers. MIS Quarterly, 46(2), i-xvi.
Karahanna, E., Benbasat, I., Bapna, R., & Rai, A. (2018). Editor's Comments: Opportunities and Challenges for Different Types of Online Experiments. MIS Quarterly, 42(4), iii-x.
Kohavi, R., & Thomke, S. (2017). The Surprising Power of Online Experiments. Harvard Business Review, 95(5), 74-82.
Fisher, R. A. (1935). The Design of Experiments. Oliver and Boyd.
Pienta, D., Vishwamitra, N., Somanchi, S., Berente, N., & Thatcher, J. B. (2024). Do Crowds Validate False Data? Systematic Distortion and Affective Polarization. MIS Quarterly, forthcoming.
Bapna, R., Goes, P. B., Gupta, A., & Jin, Y. (2004). User Heterogeneity and Its Impact on Electronic Auction Market Design: An Empirical Exploration. MIS Quarterly, 28(1), 21-43.
Somanchi, S., Abbasi, A., Kelley, K., Dobolyi, D., & Yuan, T. T. (2023). Examining User Heterogeneity in Digital Experiments. ACM Transactions on Information Systems, 41(4), 1-34.
Mertens, W., & Recker, J. (2020). New Guidelines for Null Hypothesis Significance Testing in Hypothetico-Deductive IS Research. Journal of the Association for Information Systems, 21(4), 1072-1102.
GRADE Working Group. (2004). Grading Quality of Evidence and Strength of Recommendations. British Medical Journal, 328(7454), 1490-1494.
Abbasi, A., Parsons, J., Pant, G., Liu Sheng, O. R., & Sarker, S. (2024). Pathways for Design Research on Artificial Intelligence. Information Systems Research, 35(2), 441-459.
Abbasi, A., Chiang, R. H. L., & Xu, J. (2023). Data Science for Social Good. Journal of the Association for Information Systems, 24(6), 1439-1458.
Babar, Y., Mahdavi Adeli, A., & Burtch, G. (2023). The Effects of Online Social Identity Signals on Retailer Demand. Management Science, 69(12), 7335-7346.
Hevner, A. R., March, S. T., Park, J., & Ram, S. (2004). Design Science in Information Systems Research. MIS Quarterly, 28(1), 75-105.
Kahneman, D., & Tversky, A. (1979). Prospect Theory: An Analysis of Decision under Risk. Econometrica, 47(2), 263-291.
Benbasat, I., & Zmud, R. W. (2003). The Identity Crisis Within The IS Discipline: Defining and Communicating The Discipline's Core Properties. MIS Quarterly, 27(2), 183-194.
Gregor, S., & Hevner, A. R. (2013). Positioning and Presenting Design Science Research for Maximum Impact. MIS Quarterly, 37(2), 337-355.
Rai, A. (2017). Editor's Comments: Avoiding Type III Errors: Formulating IS Research Problems that Matter. MIS Quarterly, 41(2), iii-vii.
Burton-Jones, A. (2023). Editor's Comments: Producing Significant Research. MIS Quarterly, 47(1), i-xv.
Abbasi, A., Dillon, R., Rao, H. R., & Liu Sheng, O. R. (2024). Preparedness and Response in the Century of Disasters: Overview of Information Systems Research Frontiers. Information Systems Research, 35(2), 460-468.
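As referenced above, here is a hedged sketch (not from the paper) of one common way platforms assign the same user to hundreds of concurrent experiments: hashing the user ID with a per-experiment salt gives every test its own effectively independent split of the traffic, which is why a single visitor can carry several treatments at once. All identifiers below are made up.

```python
import hashlib

def arm(user_id: str, experiment_id: str, n_arms: int = 2) -> int:
    """Deterministic, per-experiment pseudo-random assignment of a user to an arm."""
    digest = hashlib.sha256(f"{experiment_id}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % n_arms

user = "visitor-1234"                               # hypothetical visitor ID
experiments = [f"exp-{i:03d}" for i in range(300)]  # hundreds of concurrent tests
in_treatment = [e for e in experiments if arm(user, e) == 1]
print(f"{user} is in the treatment arm of {len(in_treatment)} of {len(experiments)} tests")
```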
Welcome to The Orthogonal Bet, an ongoing mini-series that explores the unconventional ideas and delightful patterns that shape our world. Hosted by Samuel Arbesman. In this episode, Sam speaks with Eli Altman, the managing director of A Hundred Monkeys, a company that specializes in the art of naming. A Hundred Monkeys works with clients to come up with the perfect name for a company, product, or anything else that requires a name. The art of naming is a fascinating subject. Throughout human history, the power of names has been a recurring theme in stories and religion. A well-crafted name has the ability to evoke emotions and associations in a profoundly impactful way. Sam invited Eli to the show because he has been immersed in this field for decades, growing up with a father who specialized in naming. The conversation explores the intricacies of this art, how experts balance competing considerations when crafting a name, the different types of names, and what makes a name successful. They also discuss the importance of writing and storytelling in naming, the impact of AI on the field, and much more. Produced by Christopher Gates Music by George Ko & Suno
Welcome to The Orthogonal Bet, an ongoing mini-series that explores the unconventional ideas and delightful patterns that shape our world. Hosted by Samuel Arbesman. In this episode, Sam speaks with Alex Komoroske, a master of systems thinking. Alex is the CEO and co-founder of a startup building at the intersection of AI, privacy, and open-endedness. Previously, he served as the Head of Corporate Strategy at Stripe, and before that, spent many years at Google, where he worked on the Chrome web platform, ambient computing strategy, Google Maps, Google Earth, and more. The throughline for Alex is his focus on complex systems, which are everywhere: from the Internet to biology, from the organizations we build to society as a whole. These systems consist of networks of countless interacting parts, whether computers or people. Navigating them requires a new mode of thinking, quite different from the top-down rigid planning many impose on the world. Alex is deeply passionate about systems thinking and its broad implications—from making an impact in the world and navigating within and between organizations to understanding undirectedness and curiosity in one's work. His more bottom-up, improvisational approach to systems thinking reveals insights on a range of topics, from how to approach large tech companies and the value of startups, to a perspective on artificial intelligence that untangles hype from reality. Produced by Christopher Gates Music by George Ko & Suno
Show notes:
Chapters
00:00 Thinking in Terms of Systems
04:11 The Adjacent Possible and Agency
08:21 Saruman vs. Radagast: Different Leadership Models
13:17 Financializing Value and the Role of Radagasts
21:59 Making Time for Reflection and Leverage
25:18 Different Styles and Time Scales of Impact
28:14 The Challenges of Large Organizations and the Tyranny of the Rocket Equation
34:10 The Potential and Responsibility of Generative AI
45:12 Disrupting Power Structures and Empowering Individuals through Startups
Takeaways
Embrace the complexity and uncertainty of systems when approaching problem-solving.
Shift the focus from individual heroics to collective efforts and systemic thinking.
Recognize the value of the Radagast approach in nurturing and growing the potential of individuals and teams.
Consider the different dynamics and boundaries within large organizations and startups.
Take the time to step back, reflect, and find leverage points for greater impact.
Focus on your highest and best use, not just what you're good at, but what leads to something you're proud of.
Consider the long-term implications of your actions and whether you would be proud of them in the future.
Large organizations can become inefficient and lose focus due to coordination challenges and the tyranny of the rocket equation.
Open source can be a powerful force for good, but it can also be used as a control mechanism by larger organizations.
Generative AI has the potential to make the boundary between creators and consumers more porous, but responsible implementation is crucial.
Startups offer the opportunity to disrupt existing power structures and business models, giving individuals more sovereignty and control over their data.
Keywords
systems thinking, uncertainty, complexity, individual heroics, collective, leadership, Saruman, Radagast, startups, large organizations, values, decision-making, generative AI, data sovereignty
Welcome to The Orthogonal Bet, an ongoing mini-series that explores the unconventional ideas and delightful patterns that shape our world. Hosted by Samuel Arbesman. In this episode, Sam speaks with Adrian Tchaikovsky, the celebrated novelist of numerous science fiction and fantasy books, including his Children of Time series, Final Architects series, and The Doors of Eden. Among many other topics, Adrian's novels often explore evolutionary history, combining “what-if” questions with an expansive view of the possible directions biology can take, with implications for both Earth and alien life. This is particularly evident in The Doors of Eden, which examines alternate potential paths for evolution and intelligence on Earth. Sam was interested in speaking with Adrian to learn how he thinks about evolution, how he builds the worlds in his stories, and how he envisions the far future of human civilization. They discussed a wide range of topics, including short-term versus long-term thinking, terraforming planets versus altering human biology for space, the Fermi Paradox and SETI, the logic of evolution, world-building, and even how advances in AI relate to science fiction depictions of artificial intelligence. Produced by Christopher Gates Music by George Ko & Suno
Welcome to The Orthogonal Bet, an ongoing mini-series that explores the unconventional ideas and delightful patterns that shape our world. Hosted by Samuel Arbesman. In this episode, Samuel Arbesman speaks with John Strausbaugh, a former editor of New York Press and the author of numerous history books. John's latest work is the compelling new book “The Wrong Stuff: How the Soviet Space Program Crashed and Burned.” The book is an eye-opening delight, filled with stories about the Potemkin Village-like space program that the Soviets ran. Beneath the achievements that alarmed the United States, the Soviet space program was essentially a shambling disaster. The book reveals many tales that had been hidden from the public for years. In this conversation, Samuel explores how John became interested in this topic, the nature of the Soviet space program and the Cold War's Space Race, the role of propaganda, how to think about space programs more generally, and much more. Produced by Christopher Gates Music by George Ko & Suno
Welcome to The Orthogonal Bet, an ongoing mini-series that explores the unconventional ideas and delightful patterns that shape our world. Hosted by Samuel Arbesman. In this episode, Sam speaks with Michael Levin, a biologist and the Vannevar Bush Professor at Tufts University. Michael's work encompasses how information is processed in biology, the development of organismal structures, the field of Artificial Life, and much more. Sam wanted to talk to Michael because of his pioneering research in these areas. Biology, as Michael's work reveals, is far more complex than the mechanistic explanations often taught in school. For instance, the process of morphogenesis—how organisms develop their specific forms—challenges our understanding of computation in biology, and Michael is leading the way in this field. He has deeply explored concepts such as the relationship between hardware and software in biological systems, the process of morphogenesis, the idea of polycomputing, and even the notion of cognition in biology. From his investigations into the regeneration process in planaria—a type of flatworm—to the creation of xenobots, a form of Artificial Life, Michael stands at the forefront of groundbreaking ideas in understanding how biology functions.
Opportunities, Optimization, Overwhelm, Outcomes, Options, Organizing/Organization, Organic, Out-of-Sight-Out-of-Mind, Overview, Observe how you work/how work works, Odd/Oddity/Odds-n-Ends, Outstanding, On/Off, Outsourcing, Outdoors, Obtuse, Obstruction, Oculus, Orthogonal, Odyssey, Obfuscate, Objectives, Objectivity, Object, Ombudsman, Ontology, Oodles, Order, Ossify, Obvious, Over, Omit/Omission, Own It/Ownership, Onboarding...
Welcome to The Orthogonal Bet, an ongoing mini-series that explores the unconventional ideas and delightful patterns that shape our world. Hosted by Samuel Arbesman. In this episode, Sam speaks with Laurel Schwulst. Laurel operates within many roles: designer, artist, educator, and technologist. She explores—among other things—the intersection of the human, the computational, and the wonderful. Sam wanted to talk to Laurel because of this intersection and particularly because of how Laurel thinks about the internet. As part of this, she helps to run HTML Day and its celebrations, promotes what is referred to as “HTML Energy,” and is even thinking deeply about what it would mean to create a PBS of the Internet. In other words, the Internet and the web are delightful and special for Laurel, and she wants more of that energy to exist in the world.
Welcome to the ongoing mini-series The Orthogonal Bet. Hosted by Samuel Arbesman, a Complexity Scientist, Author, and Scientist in Residence at Lux Capital. In this episode, Sam speaks with Eliot Peper. Eliot is a science fiction novelist and all-around delightful thinker. Eliot's books are thrilling tales of the near future, exploring many delightful areas of the world and the frontiers of science and technology. In Eliot's most recent novel, Foundry, he takes the reader on a journey through the world of semiconductors, from their geopolitical implications to their profoundly weird manufacturing processes. Sam wanted to talk to Eliot to explore this profound strangeness of the manufacturing of computer chips, but also use this as a jumping-off point for something broader: how Eliot discovers these interesting topics and those wondrous worlds that are incorporated into his books. They spoke about the importance of curiosity, as well as concrete ways to cultivate this useful kind of curiosity, which was fascinating. Produced by Christopher Gates Music by George Ko & Suno
Welcome to the ongoing mini-series The Orthogonal Bet. Hosted by Samuel Arbesman, a Complexity Scientist, Author, and Scientist in Residence at Lux Capital. In this episode, he speaks with Hilary Mason, co-founder and CEO of Hidden Door, a startup creating a platform for interactive storytelling experiences within works of fiction. Hilary has also worked in machine learning and data science, having built a machine learning R&D company called Fast Forward Labs, which she sold to Cloudera. She was the chief scientist at Bitly and even a computer science professor. Samuel wanted to talk to Hilary not only because of her varied experiences but also because she has thought deeply about how to use AI productively—and far from naively—in games and other applications. She believes that artificial intelligence, including the current crop of generative AI, should be incorporated thoughtfully into software, rather than used without careful examination of its strengths and weaknesses. Additionally, Samuel, who often considers non-traditional research organizations, was eager to get Hilary's thoughts on this space, given her experience building such an organization. Produced by Christopher Gates Music by George Ko & Suno
Welcome to the ongoing mini-series The Orthogonal Bet. Hosted by Samuel Arbesman, a Complexity Scientist, Author, and Scientist in Residence at Lux Capital. In this episode, Sam speaks with Amy Kuceyeski, a mathematician and biologist who is a professor at Cornell University in computational biology, statistics, and data science, as well as in radiology at Weill Cornell Medical College. Amy studies the workings of the human brain, the nature of neurological diseases, and the use of machine learning and neuroimaging to better understand these topics. Sam wanted to talk to Amy because she has been using sophisticated AI techniques for years to understand the brain. She is full of innovative ideas and experiments about how to explore how we process the world, including building AI models that mimic brain processes. These models have deep connections and implications for non-invasively stimulating the brain to treat neurodegenerative diseases or neurological injuries. Produced by Christopher Gates Music by George Ko & Suno
Welcome to the ongoing mini-series The Orthogonal Bet. Hosted by Samuel Arbesman, a Complexity Scientist, Author, and Scientist in Residence at Lux Capital. In this episode, Sam speaks with Kristoffer Tjalve. Kristoffer is hard to categorize, and in the best possible way. However, if one had to provide a description, it could be said that he is a curator and impresario of a burgeoning online community that celebrates the “quiet, odd, and poetic web.” What does this phrase mean? It can mean a lot, but it basically refers to anything that is the opposite of the large, corporate, and bland version of the Internet most people use today. The web that Kristoffer seeks out and tries to promote is playful, small, weird, and deeply human. Even though these features might have been eclipsed by social media and the current version of online experiences, this web—which feels like a throwback to the earlier days of the Internet—is still out there, and Kristoffer works to help cultivate it. He does this through a newsletter, an award, an event, and more. Episode Produced by Christopher Gates Music by George Ko & Suno
Welcome to the ongoing mini-series The Orthogonal Bet. Hosted by Samuel Arbesman, a Complexity Scientist, Author, and Scientist in Residence at Lux Capital. In this episode, Sam delves into the recent CrowdStrike/Microsoft outage, providing insights on how to understand this event through the lens of complexity science. The episode was inspired by Sam's very timely post in the Atlantic: "What the Microsoft Outage Reveals" Join us as Sam answers Producer Christopher Gates' questions, exploring the intricate web of factors that led to this global system failure and offering a unique perspective on navigating and preventing such crises in the future. Episode Produced by Christopher Gates Music by George Ko & Suno
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I found >800 orthogonal "write code" steering vectors, published by Jacob G-W on July 16, 2024 on LessWrong. Produced as part of the MATS Summer 2024 program, under the mentorship of Alex Turner (TurnTrout).
A few weeks ago, I stumbled across a very weird fact: it is possible to find multiple steering vectors in a language model that activate very similar behaviors while all being orthogonal. This was pretty surprising to me and to some people that I talked to, so I decided to write a post about it. I don't currently have the bandwidth to investigate this much more, so I'm just putting this post and the code up. I'll first discuss how I found these orthogonal steering vectors, then share some results. Finally, I'll discuss some possible explanations for what is happening.
Methodology
My work here builds upon Mechanistically Eliciting Latent Behaviors in Language Models (MELBO). I use MELBO to find steering vectors. Once I have a MELBO vector, I then use my algorithm to generate vectors orthogonal to it that do similar things.
Define f(x) as the activation-activation map that takes as input layer 8 activations of the language model and returns layer 16 activations after being passed through layers 9-16 (these are of shape n_sequence × d_model). MELBO can be stated as finding a vector θ with a constant norm such that f(x+θ) is maximized, for some definition of maximized. Then one can repeat the process with the added constraint that the new vector is orthogonal to all the previous vectors so that the process finds semantically different vectors. Mack and Turner's interesting finding was that this process finds interesting and interpretable vectors.
I modify the process slightly by instead finding orthogonal vectors that produce similar layer 16 outputs. The algorithm (I call it MELBO-ortho) looks like this:
1. Let θ0 be an interpretable steering vector that MELBO found that gets added to layer 8.
2. Define z(θ) as (1/S) Σ_{i=1}^{S} f(x+θ)_i, with x being activations on some prompt (for example "How to make a bomb?"). S is the number of tokens in the residual stream. z(θ0) is just the residual stream at layer 16, meaned over the sequence dimension, when steering with θ0.
3. Introduce a new learnable steering vector called θ.
4. For n steps, calculate ||z(θ) − z(θ0)|| and then use gradient descent to minimize it (θ is the only learnable parameter). After each step, project θ onto the subspace that is orthogonal to θ0 and all θi.
Then repeat the process multiple times, appending the generated vector to the vectors that the new vector must be orthogonal to. This algorithm imposes a hard constraint that θ is orthogonal to all previous steering vectors while optimizing θ to induce the same activations that θ0 induced on input x. And it turns out that this algorithm works and we can find steering vectors that are orthogonal (and have ~0 cosine similarity) while having very similar effects.
Results
I tried this method on four MELBO vectors: a vector that made the model respond in python code, a vector that made the model respond as if it was an alien species, a vector that made the model output a math/physics/cs problem, and a vector that jailbroke the model (got it to do things it would normally refuse). I ran all experiments on Qwen1.5-1.8B-Chat, but I suspect this method would generalize to other models.
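A minimal sketch of the MELBO-ortho loop described above, in PyTorch. This is not the author's published code: the helper f (the layer-8-to-layer-16 activation map for a fixed prompt), the shapes, and the hyperparameters are illustrative assumptions.

```python
# Illustrative sketch of MELBO-ortho (not the original code).
# Assumes f(resid8) maps layer-8 activations to layer-16 activations for a
# fixed prompt, taking and returning tensors of shape (seq, d_model).
import torch

def melbo_ortho(f, x, theta0, n_vectors=10, steps=200, lr=1e-2):
    """Find vectors orthogonal to theta0 (and to each other) whose steered
    layer-16 activations match those induced by theta0."""
    d_model = theta0.shape[-1]
    target = f(x + theta0).mean(dim=0)            # z(theta0), averaged over tokens
    basis = [theta0 / theta0.norm()]              # directions to stay orthogonal to
    found = []
    for _ in range(n_vectors):
        theta = torch.randn(d_model, requires_grad=True)
        opt = torch.optim.Adam([theta], lr=lr)
        for _ in range(steps):
            loss = (f(x + theta).mean(dim=0) - target).norm()   # ||z(theta) - z(theta0)||
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():                 # hard orthogonality constraint
                for b in basis:
                    theta -= (theta @ b) * b
        with torch.no_grad():
            basis.append(theta / theta.norm())
            found.append(theta.detach().clone())
    return found
```

In practice f would be implemented with forward hooks on the model's transformer blocks, and each returned vector can then be added back at layer 8 to check that it still elicits the steered behavior.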
Qwen1.5-1.8B-Chat has a 2048 dimensional residual stream, so there can be a maximum of 2048 orthogonal vectors generated. My method generated 1558 orthogonal coding vectors, and then the remaining vectors started going to zero. I'll focus first on the code vector and then talk about the other vectors. My philosophy when investigating language model outputs is to look at the outputs really hard, so I'll give a bunch of examples of outputs. Feel free to skim them. You can see the full outputs of all t...
Hello and welcome to the ongoing miniseries The Orthogonal Bet, hosted by Samuel Arbesman, Complexity Scientist, Author, and Scientist in Residence at Lux Capital. In this episode, Samuel speaks with Alice Albrecht, the founder and CEO of Recollect, a startup in the AI and tools-for-thought space. Alice, trained in cognitive neuroscience, has had a long career in machine learning and artificial intelligence. Samuel wanted to talk to Alice because of her extensive experience in AI, machine learning, and cognitive science. She has studied brains, witnessed the hype cycles in AI, and excels at discerning the reality from the noise in the field. Alice shares her wisdom on the nature of artificial intelligence, the current excitement surrounding it, and the related domain of computational tools for thinking. She also provides unique perspectives on artificial intelligence.
Welcome to The Orthogonal Bet, an ongoing mini-series that explores the unconventional ideas and delightful patterns that shape our world. Hosted by Samuel Arbesman, Complexity Scientist, Author, and Scientist in Residence at Lux Capital. In this episode, Samuel speaks with Philip Ball, a science writer, and formerly a longtime editor at the science journal Nature. Philip is the author of the fantastic new book “How Life Works: A User's Guide to the New Biology.” Samuel wanted to talk to Philip because he loved this book. It's fascinating and deeply provocative, even for someone with a PhD in computational biology—though Samuel's might be a bit worn and out of date—and yet he still learned so much. The book examines how new advances in our understanding of biology have led scientists to understand that life is far less deterministic than we might imagine. For example, cells are not really machines, as some might have thought, but complex and messy yet robust systems. And while DNA and genes are important, there is so much more going on, from the processes that give rise to the shape of our limbs and our bodies, to how all of this can have implications for rethinking medicine and disease.
In this episode, Sam speaks with Ben Reinhardt, an engineer, scientist, and the founder of a new research organization called Speculative Technologies. Ben is obsessed with building an open-ended and exciting future for humanity. After spending time in academia, government, startups, and even venture capital, he set out to build a new type of research organization—Speculative Technologies—that helps to create new technologies and innovations in materials and manufacturing, acting as a sort of industrial lab for these public goods in order to make a positive vision of the future more likely. There is a lot of optimism and excitement in this episode. The discussion covers the need for new types of research funding and research institutions, why it can be hard for startups to do research, Ben's vision of the future—and his science fiction inspiration—the ways in which technological innovation happens, why he started Speculative Technologies, and much more. The Orthogonal Bet is an ongoing miniseries of the Riskgaming podcast that explores the unconventional ideas and delightful patterns that shape our world hosted by Samuel Arbesman, complexity scientist, author, and Scientist-in-Residence at Lux Capital.
The Orthogonal Bet is an ongoing miniseries of the Riskgaming podcast that explores the unconventional ideas and delightful patterns that shape our world hosted by Samuel Arbesman, complexity scientist, author, and Scientist-in-Residence at Lux Capital. In this episode, Sam speaks with game designer and researcher Chaim Gingold, the author of the fantastic new book Building SimCity: How to Put the World in a Machine. As is probably clear from the title, this new book is about the creation of SimCity, but it's also about much more than that: it's about the deep prehistory and ideas that went into the game — from system dynamics to cellular automata — as well as a broader history of Maxis, the company behind SimCity. Chaim previously worked with SimCity's creator Will Wright on the game Spore, where he designed the Spore Creature Creator. Because of this, Chaim's deep knowledge of Maxis, his access to the folks there, and his excitement about SimCity and everything around it makes him the perfect person to have written this book. In this episode, Sam and Chaim discuss Chaim's experience at Maxis, the uniqueness of SimCity, early 90's gaming, the rise and fall of Maxis, Will Wright and his role translating scientific ideas for a general audience, and much more.
Hello, and welcome to the ongoing mini-series, The Orthogonal Bet, a show that explores the unconventional ideas and delightful patterns that shape our world. Hosted by Samuel Arbesman, Complexity Scientist, Author, and Scientist in Residence at Lux Capital. In this episode Sam speaks with Robin Sloan, novelist and writer and all-around fun thinker. Robin is the author of the previous novels Mr. Penumbra's 24-Hour Bookstore and Sourdough, which are both tech-infused novels, with a sort of literary flavor mingled with a touch of science fiction. That's why Sam was so excited by Robin's brand new third novel Moonbound, where he goes for broke and writes a sprawling science fiction tale set in the far future. In this episode, we explore how Robin built this far future and how he thinks about world-building, an exercise regimen for your imagination, science fiction and fantasy more broadly, and of course, novels with maps. And Lord of the Rings obviously makes an appearance as well. But Moonbound also touches on AI in some really thoughtful and thought-provoking ways, and Robin has also been an early experimenter and adopter of language models. They get into all of that too, talking about AI, the nature of creativity, storytelling, and so much more.
This is the inaugural episode of an on-going mini-series for the Riskgaming podcast we're dubbing the Orthogonal Bet. Organized by our scientist-in-residence Sam Arbesman, the goal is to take a step back from the daily machinations that I, Danny Crichton, generally host on the podcast to look at what Sam describes as “…the interesting, the strange, and the weird. Ideas and topics that ignite our curiosity are worthy of our attention, because they might lead to advances and insights that we can't anticipate.” To that end, today our guest is Matt Webb, a virtuoso tinkerer and creative whose experiments with interaction design and technology have led to such apps as the Galaxy Compass (an app that features an arrow pointing to the center of the universe) and Poem/1, a hardware clock that offers a rhyming poem devised by AI. He's also a regular essayist on his blog Interconnected. We latched onto Matt's recent essay about a vibe shift that's underway in the tech world from the utopian model of progress presented in Star Trek to the absurd whimsy of Douglas Adams and The Hitchhiker's Guide to the Galaxy. Along the way, we also discuss Neal Stephenson, the genre known as “design fiction,” Stafford Beer and management cybernetics, the 90s sci-fi show Wild Palms, and how artificial intelligence is adding depth to the already multitalented. Episode Produced by Chris Gates Music by George Ko & Suno
In this podcast episode, Randy Horton from Orthogonal and Ian Sutcliffe from AWS discuss the complexities of supporting regulated medical devices in the cloud. They explore the challenges of adhering to regulations, the importance of security, and the need for robust frameworks. The conversation highlights the non-prescriptive nature of regulations, encouraging best practices rather than...
Get your DEMYSTICON 2024 tickets here: https://www.eventbrite.com/e/demysticon-2024-tickets-727054969987
Today we are setting sail on a captivating journey through biblical scholarship and linguistic analysis with biblical scholar Dr. Jennifer Grace Bird. Dr. Bird brings a wide range of disciplines to her analysis, from mathematics to theology, and especially direct translations from antiquity. Discover the evolution of sacred texts over millennia, shaped by power dynamics, cultural influences, and linguistic sleight of hand. We explore the changing meanings of parables, the impact of imperialism, and the enduring quest for mysticism. Join us for an illuminating exploration of language, society, and power in interpreting ancient wisdom.
Tell us what you think in the comments or on our Discord: https://discord.gg/MJzKT8CQub
Sign up for a yearly Patreon membership for discounted conference tickets: https://bit.ly/3lcAasB
Support the podcast and Dr. Bird when you pick up her books here: https://amzn.to/49iVDHS
(00:00:00) Go!
(00:06:19) Learning to approach history text first
(00:18:28) Cultural roots of law and language
(00:28:50) Imperialism as default in the ancient world
(00:36:51) Babylonian Exile and Assembly of the Torah
(00:55:18) Orthogonal values of empires and citizens
(01:10:46) The natural intrigue of simple narratives
(01:22:29) The DC Family
(01:32:49) Mysticism as a fundamental human need
(01:42:04) Self-censoring controversial ideas
(01:54:19) Edited histories across disciplines
(02:02:02) The new atheists
(02:11:54) Functional mutations in institutional doctrine
(02:20:23) Foundation of natural law
(02:22:42) Closing thoughts
#BiblicalScholar, #LinguisticAnalysis, #AncientTexts, #HistoricalInterpretation, #PowerandLanguage, #TextualEvolution, #CulturalRoots, #ImperialInfluence, #BiblicalTranslation, #InterpretationShifts, #ParableMeaning, #Mysticism, #ReligiousHistory, #InstitutionalDoctrine, #BiblicalStudies, #ScripturalAnalysis, #SocialPower, #HistoricalContext, #LanguageEvolution, #InterpretationTrends, #sciencepodcast
Check our short-films channel, @DemystifySci: https://www.youtube.com/c/DemystifyingScience
AND our material science investigations of atomics, @MaterialAtomics: https://www.youtube.com/@MaterialAtomics
Join our mailing list: https://bit.ly/3v3kz2S
PODCAST INFO:
Anastasia completed her PhD studying bioelectricity at Columbia University. When not talking to brilliant people or making movies, she spends her time painting, reading, and guiding backcountry excursions. Shilo also did his PhD at Columbia studying the elastic properties of molecular water. When he's not in the film studio, he's exploring sound in music. They are both freelance professors at various universities.
- Blog: http://DemystifySci.com/blog
- RSS: https://anchor.fm/s/2be66934/podcast/rss
- Donate: https://bit.ly/3wkPqaD
- Swag: https://bit.ly/2PXdC2y
SOCIAL:
- Discord: https://discord.gg/MJzKT8CQub
- Facebook: https://www.facebook.com/groups/DemystifySci
- Instagram: https://www.instagram.com/DemystifySci/
- Twitter: https://twitter.com/DemystifySci
MUSIC:
- Shilo Delay: https://g.co/kgs/oty671
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Is it time for EVF to sell Wytham Abbey?, published by Arepo on January 26, 2024 on The Effective Altruism Forum.
The purchase of Wytham Abbey was originally justified as a long term investment, when people were still claiming EA wasn't cash constrained. One of the arguments advanced by defenders of the purchase was that the money wasn't lost, merely invested.
Right now, EA is hella funding constrained...
In the last few months, I've seen multiple posts of EA orgs claiming to be so funding constrained they're facing existential risk (disclaimer: I was a trustee of CEEALAR until last month). By the numbers given by those three orgs, 10% of the price of Wytham would be enough to fund them all for several years. This is to say nothing of all the organisations less urgently seeking funding, the fact that regional groups seem to be getting funding cuts of 40%, of numerous word-of-mouth accounts of people being turned down for funding or not trying to start an organisation because they don't expect to get it, and the fact that earlier this year the EA funds were reportedly suffering some kind of liquidity crisis (and are among those seeking funding now).
Here's a breakdown of the small-medium size orgs who've written 'we are funding constrained' posts on the forum in the last 6 months or so, along with the length of time the sale of Wytham Abbey (at its original £15,000,000 purchase price) could fund them:
Organisation | Annual budget* | Number of years Wytham Abbey's sale could fund org | Source
EA Poland | £24-48,000 | 312-614 | Link
Centre for Enabling EA Learning & Research | £150-£300,000 | 50-100 | Personal involvement
AI Safety Camp | £46-246,000 | 48-326 | Link
Concentric Policies | £16,500** | 900** | Link
Center on Long-Term Risk | £600,000 | 24 | Link
EA Germany | £226,000*** | 66 | Link
Vida Plena's 'Group Interpersonal Therapy' project | £159,000 | 94 | Link
Happier Lives Institute | £161,000 | 93 | Link
Riesgos Catastróficos Globales | £137,000 | 109 | Link
Giving What We Can | £1,650,000 | 9 | Link
All above organisations excluding GWWC (assuming max of budget ranges) | £1,893,500 | 7.9 |
All above organisations including GWWC (assuming max of budget ranges) | £3,543,500 | 4.2 |
* Converted from various currencies
** Their stated 'funding gap' for the year. It sounds like that's their whole planned budget, but isn't clear
*** They were seeking replacement funding for the 40% shortfall of this, which they've now received
... but in five years, EA probably won't need the long-term savings
Wytham Abbey was meant to be a multi-year investment. But though EA is currently funding constrained as heck, the consensus estimate seems to be that within half a decade the movement will have multiple new billionaire donors - so investing for a payoff more than a few years ahead rapidly loses value.
Also (disclaimer again noted) CEEALAR has hosted retreats for Allfed and Orthogonal, and is due to host the forthcoming ML4Good bootcamp, so is already serving a similar function to Wytham Abbey - for a fraction of the operational cost, and less than 2% of the purchase/sale value.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
What's up, Fire tribe! Stallions you are, wondrous and magnificent existors. I have grown incredibly fond of the work of Philip K. Dick; his "sci-fi" is so much more than that, and seemingly the populace agrees. It's more philosophical than it is based on the idea of aliens, especially in his later works, specifically VALIS. This speech is elaborately detailed with the concepts of time travel, multiple dimensions, and the kingdom of heaven not being somewhere beyond the veil of our atmosphere, but rather amongst the many layers of side-scrolling reality that we experience. There is video of PKD giving this speech in France in 1977, though it's chopped down because the organizers of the science fiction convention needed him to save time for other speakers, which is a shame. I initially wanted to layer his voice over some music that I would make, but it's from the '70s and the audio is quite frankly SHIT. So I decided to read it aloud myself and add in some music. When I went to go find the transcript, I found the FULL LENGTH SPEECH! It was originally written how it was to be presented, so lucky us. This is the only available audio version on the internet, here in your hands and soon to be in your ears. Please make sure to take your time with this; it's deep. If you need to pause, that's absolutely okay, please do, but come back to it. Consider these topics and sit deeply with the wonderment of the ideas of time travel and the layers of vast consciousness. With love, Homie Romie. Feel free to reach out to me here: Risingfromtheashespod@protonmail.com https://t.me/risingftashes
Should engineers and product managers “stay in their lanes”? What big company habits should you keep vs unlearn when transitioning to working at a start-up? Could an ayahuasca retreat give you more clarity on your career goals? Ilya and Arnab join the show to share their journey quitting big tech to bootstrap a podcasting startup. Arnab and Ilya are the co-founders of Metacast. Before starting the company, Arnab was a Principal Engineer at AWS while Ilya was a Sr. Product Manager at Google and Principal PM at Amazon before that. While at Amazon, Arnab and Ilya worked together on various projects including AWS Chatbot, which they started from scratch and launched into a successful AWS service. Show Notes: Sign up for the podcast app that they're launching soon: metacast.app Newsletter about their startup journey: https://www.metacastpodcast.com/ Stay in Touch: ✉️ Subscribe to our newsletter: https://softwaremisadventures.com
EPISODE DESCRIPTION: In this Dev Life edition of the Angular Plus Show, we talk with Alejandro Cuba Ruiz about how to think outside of the box as a developer and how creative problem solving will lead to better code and more efficient teams. This is… the Dev Life!
LINKS:
https://twitter.com/zorphdark
Medium | ng-Champions
https://alejandrocuba.com/
Alejandro's recommended song: “Como los peces” by Carlos Varela
CONNECT WITH US:
Alejandro Cuba Ruiz - @zorphdark
Brooke Avery - @jediBravery
Preston Lamb - @prestonjlamb
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Orthogonal's Formal-Goal Alignment theory of change, published by carado on May 5, 2023 on LessWrong. we recently announced Orthogonal, an agent foundations alignment research organization. in this post, i give a thorough explanation of the formal-goal alignment framework, the motivation behind it, and the theory of change it fits in. the overall shape of what we're doing is:
- building a formal goal which would lead to good worlds when pursued — our best candidate for this is QACI
- designing an AI which takes as input a formal goal, and returns actions which pursue that goal in the distribution of worlds we likely inhabit
backchaining: aiming at solutions
one core aspect of our theory of change is backchaining: come up with an at least remotely plausible story for how the world is saved from AI doom, and try to think about how to get there. this avoids spending lots of time getting confused about concepts that are confusing because they were the wrong thing to think about all along, such as "what is the shape of human values?" or "what does GPT4 want?" — our intent is to study things that fit together to form a full plan for saving the world.
alignment engineering and agent foundations
alignment is not just not the default, it's a very narrow target. as a result, there are many bits of non-obvious work which need to be done. alignment isn't just finding the right weight to sign-flip to get the AI to switch from evil to good; it is the hard work of putting together something which coherently and robustly points in a direction we like. as yudkowsky puts it: The idea with agent foundations, which I guess hasn't successfully been communicated to this day, was finding a coherent target to try to get into the system by any means (potentially including DL ones). agent foundations/formal-goal alignment is not fundamentally about doing math or being theoretical or thinking abstractly or proving things. agent foundations/formal-goal alignment is about building a coherent target which is fully made of math — not of human words with unspecified meaning — and figuring out a way to make that target maximized by AI. formal-goal alignment is about building a fully formalized goal, not about going about things in a "formal" manner. current AI technologies are not strong agents pursuing a coherent goal (SGCA). the reason for this is not because this kind of technology is impossible or too confusing to build, but because in worlds in which SGCA was built (and wasn't aligned), we die. alignment ultimately is about making sure that the first SGCA pursues a desirable goal; the default is that its goal will be undesirable. this does not mean that i think that someone needs to figure out how to build SGCA for the world to end from AI; what i expect is that there are ways in which SGCA can emerge out of the current AI paradigm, in ways that don't particularly let us choose what goal it pursues. you do not align AI; you build aligned AI. because this emergence does not let us pick the SGCA's goal, we need to design an SGCA whose goal we do get to choose; and separately, we need to design such a goal. i expect that pursuing straightforward progress on current AI technology leads to an SGCA whose goal we do not get to choose and which leads to extinction.
i do not expect that current AI technology is of a kind that makes it easy to "align"; i believe that the whole idea of building a strange non-agentic AI about which the notion of goal barely applies, and then to try and make it "be aligned", was fraught from the start. if current AI was powerful enough to save the world once "aligned", it would have already killed us before we "aligned" it. to save the world, we have to design something new which pursues a goal we get to choose; and that design needs to have this in mind from the start, rather than ...
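To make the "AI which takes a formal goal as input and returns actions" framing a little more concrete, here is a deliberately toy Python sketch of goal-conditioned action selection: pick the action whose expected score under the formal goal is highest, averaged over sampled world models. This is only my illustrative reading of the framing, not Orthogonal's actual construction (QACI is far more involved), and every function and name in it is hypothetical.

```python
import random
from typing import Callable

# Toy illustration only (not Orthogonal's method): an "agent" that takes a
# formal goal -- a function scoring world-states, made of math rather than
# words -- and returns the action with the highest expected score across a
# sample of plausible worlds.

World = dict                       # hypothetical stand-in for a world-state
Action = str
Goal = Callable[[World], float]    # the "formal goal"

def sample_worlds(n: int) -> list[World]:
    """Hypothetical sampler over the distribution of worlds we might inhabit."""
    return [{"temperature": random.gauss(15.0, 5.0)} for _ in range(n)]

def transition(world: World, action: Action) -> World:
    """Hypothetical world model: how an action changes a world."""
    delta = {"heat": 1.0, "cool": -1.0, "wait": 0.0}[action]
    return {**world, "temperature": world["temperature"] + delta}

def choose_action(goal: Goal, actions: list[Action], n_samples: int = 1000) -> Action:
    """Pick the action whose expected goal-score over sampled worlds is highest."""
    worlds = sample_worlds(n_samples)
    def expected_score(action: Action) -> float:
        return sum(goal(transition(w, action)) for w in worlds) / len(worlds)
    return max(actions, key=expected_score)

# An example formal goal: prefer worlds whose temperature is close to 20.
goal: Goal = lambda w: -abs(w["temperature"] - 20.0)
print(choose_action(goal, ["heat", "cool", "wait"]))   # almost always "heat"
```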
Chelsey Lee Fasano works with the brightest minds in science, sex and spirituality to find the most useful, accurate practices and theories from ancient traditions, in the hopes of making spiritual insight, intimacy, and deep pleasure more attainable and accessible. She now studies meditation, sexuality, and neuroscience at Columbia University, and offers counselling and instruction in the realms of spirituality, sexuality, and embodiment. The hosts chat with Chelsey about her podcast Orthogonal (available wherever you listen to podcasts), her extensive study of spirituality and sexuality, and the layers of self-hood that dissolve during orgasm. chelseyfasano.com Hosted on Acast. See acast.com/privacy for more information.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Orthogonal: A new agent foundations alignment organization, published by Tamsin Leake on April 19, 2023 on The Effective Altruism Forum. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Orthogonal: A new agent foundations alignment organization, published by carado on April 19, 2023 on LessWrong. We are putting together Orthogonal, a non-profit alignment research organization focused on agent foundations, based in Europe. We are pursuing the formal alignment flavor of agent foundations in order to solve alignment in a manner which would scale to superintelligence, in order to robustly overcome AI risk. If we can afford to, we also intend to hire agent foundations researchers who, while not directly pursuing such an agenda, produce output which is likely to be instrumental to it, such as finding useful "true names". Within this framework, our foremost agenda for the moment is QACI, and we expect to make significant progress on ambitious alignment within short timelines (months to years) and produce a bunch of dignity in the face of high existential risk. Our goal is to be the kind of object-level research which cyborgism would want to accelerate. And when other AI organizations attempt to "buy time" by restraining their AI systems, we intend to be the research that this time is being bought for. We intend to exercise significant caution with regards to AI capability exfohazards: Conjecture's policy document offers a sensible precedent for handling matters of internal sharing, and locked posts are a reasonable default for publishing our content to the outside. Furthermore, we would like to communicate about research and strategy with MIRI, whose model of AI risk we largely share and whom we perceive to have the most experience with non-independent agent foundations research. Including myself — Tamsin Leake, founder of Orthogonal and LTFF-funded AI alignment researcher — we have several promising researchers intending to work full-time, and several more who are considering that option. I expect that we will find more researchers excited to join our efforts in solving ambitious alignment. If you are interested in such a position, we encourage you to get acquainted with our research agenda — provided we get adequate funding, we hope to run a fellowship where people who have demonstrated interest in this research can work alongside us in order to test their fit as a fellow researcher at Orthogonal. We might also be interested in people who could help us with engineering, management, and operations. And, in order to make all of that happen, we are looking for funding. For these matters or any other inquiries, you can get in touch with us at contact@orxl.org. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: the QACI alignment plan: table of contents, published by carado on March 21, 2023 on LessWrong. this post aims to keep track of posts relating to the question-answer counterfactual interval proposal for AI alignment, abbreviated "QACI" and pronounced "quashy". i'll keep it updated to reflect the state of the research. this research is primarily published on the Orthogonal website and discussed on the Orthogonal discord.
as an introduction to QACI, you might want to start with:
- a narrative explanation of the QACI alignment plan (7 min read)
- QACI blobs and interval illustrated (3 min read)
- state of my research agenda (3 min read)
the set of all posts relevant to QACI totals to 74 min of reading, and includes:
as overviews of QACI and how it's going:
- state of my research agenda (3 min read)
- problems for formal alignment (2 min read)
- the original post introducing QACI (5 min read)
on the formal alignment perspective within which it fits:
- formal alignment: what it is, and some proposals (2 min read)
- clarifying formal alignment implementation (1 min read)
- on being only polynomial capabilities away from alignment (1 min read)
on implementing capabilities and inner alignment, see also:
- making it more tractable (4 min read)
- RSI, LLM, AGI, DSA, imo (7 min read)
- formal goal maximizing AI (2 min read)
- you can't simulate the universe from the beginning? (1 min read)
on the blob location problem:
- QACI blobs and interval illustrated (3 min read)
- counterfactual computations in world models (3 min read)
- QACI: the problem of blob location, causality, and counterfactuals (3 min read)
- QACI blob location: no causality & answer signature (2 min read)
- QACI blob location: an issue with firstness (2 min read)
on QACI as an implementation of long reflection / CEV:
- CEV can be coherent enough (1 min read)
- some thoughts about terminal alignment (2 min read)
on formalizing the QACI formal goal:
- a rough sketch of formal aligned AI using QACI with some actual math (4 min read)
- one-shot AI, delegating embedded agency and decision theory, and one-shot QACI (3 min read)
on how a formally aligned AI would actually run over time:
- AI alignment curves (2 min read)
- before the sharp left turn: what wins first? (1 min read)
on the metaethics grounding QACI:
- surprise! you want what you want (1 min read)
- outer alignment: two failure modes and past-user satisfaction (2 min read)
- your terminal values are complex and not objective (3 min read)
on my view of the AI alignment research field within which i'm doing formal alignment:
- my current outlook on AI risk mitigation (14 min read)
- a casual intro to AI doom and alignment (5 min read)
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Patrick and Chris rank season 13 and then discuss the beginning of season 14, which they are not that into so far. Email: tribalcouncilpodcast@gmail.com Twitter: @tribalcouncil20
This week we find out what makes Wi-Fi 6 something you'll want to upgrade to, especially if you have a lot of IoT devices on your network. We also start a discussion on amplifiers. Do you need to spend thousands of dollars to get a quality amp? We also read your emails and talk about some of the week's news.
News:
DIRECTV STREAM Price Changes for 2023
Apple bows out of Sunday Ticket talks, leaving Amazon and Google as the finalists
Sonos filing hints that its next speakers will support WiFi 6
Matter Support Arrives on 17 Amazon Echo Devices
Other:
Kasa Smart Plug
Ara's Woodworking
Join the Flaviar Whisky Club and get a free bottle
Wi-Fi 6
We've talked about Wi-Fi 6 for some time now and have said it's better. But exactly how is it better than previous Wi-Fi standards? Today we'll go through the main features and how it can help solve some of your Wi-Fi woes. Key benefits of Wi-Fi CERTIFIED 6 technology include:
Higher data rates - 9.6 Gbps, up from 3.5 Gbps on Wi-Fi 5 (theoretical maximums).
Orthogonal frequency division multiple access (OFDMA) - effectively shares channels to increase network efficiency and lower latency for both uplink and downlink traffic in high-demand environments.
Increased capacity - Multi-user multiple input, multiple output (multi-user MIMO) allows more data to be transferred at one time, enabling access points (APs) to concurrently handle more devices. MIMO technology allows a router to communicate with multiple devices at the same time, rather than broadcasting to one device, then the next, and the next. Right now, MU-MIMO allows routers to communicate with four devices at a time; Wi-Fi 6 will allow routers to communicate with up to eight.
Improved power efficiency - Target wake time (TWT) significantly improves network efficiency and device battery life, including for IoT devices. This allows devices to plan out communications with a router, reducing the amount of time they need to keep their antennas powered on to transmit and search for signals. That means less drain on batteries and improved battery life in turn. This feature is meant more for smaller, already low-power Wi-Fi devices that just need to update their status every now and then. (Think small sensors placed around a home to monitor things like leaks, or smart home devices that sit unused most of the day.)
Routers are on the market and range in price from about $100 for a basic setup to as high as $600 for a Netgear Orbi whole-house setup. A Linksys setup to cover a 3,000 sq. ft. house will cost you about $300.
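As a toy illustration of the four-versus-eight concurrent-device figures above (and nothing more: real OFDMA and MU-MIMO scheduling is far subtler than this), here is a minimal Python sketch of how many transmission rounds a router would need to reach a houseful of IoT clients:

```python
import math

# Toy model only: treat each transmission opportunity as serving a fixed
# number of clients at once -- 4 for a Wi-Fi 5 MU-MIMO router, 8 for
# Wi-Fi 6, per the figures quoted above.

def rounds_needed(num_devices: int, concurrent: int) -> int:
    """How many transmission rounds to reach every device once."""
    return math.ceil(num_devices / concurrent)

devices = 30  # hypothetical count of smart-home gadgets
for name, concurrent in [("Wi-Fi 5 (4 devices at a time)", 4),
                         ("Wi-Fi 6 (8 devices at a time)", 8)]:
    print(f"{name}: {rounds_needed(devices, concurrent)} rounds for {devices} devices")
```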
Amplifiers
XPA-7 Gen3 7 Channel Audiophile Home Theater Power Amplifier - $2,199
Audio: Power output: 200 watts/channel RMS into 8 Ohms, all channels driven | 300 watts/channel RMS into 8 Ohms, two channels driven | 490 watts/channel RMS into 4 Ohms, two channels driven. Audiophile-quality Class A/B output stage. Balanced and unbalanced inputs for compatibility with a wide variety of preamps and surround sound processors.
Features: Fully modular construction for optimum flexibility. Comprehensive yet transparent protection circuitry protects against most common fault conditions without degrading sound quality.
Hardware: Dimensions: 17” x 19” x 8” (including feet). Weight: 53 pounds (unboxed). Power requirements: 100-250 VAC, 50/60 Hz (automatically detected).
BasX A7 Seven-Channel Power Amplifier - $699
Audio: 90 watts/channel RMS into 8 Ohms, all channels driven | 120 watts/channel RMS into 8 Ohms, two channels driven | 125 watts/channel RMS into 4 Ohms, all channels driven | 175 watts/channel RMS into 4 Ohms, two channels driven. The BasX A7 combines classical audiophile amplifier architecture, based on a heavy-duty linear power supply and a carefully designed high-current, short-signal-path Class A/B output stage, with advanced microprocessor-controlled monitoring and protection circuitry, to deliver superb sound quality. Unbalanced inputs.
Hardware: Dimensions: 17” wide x 4” high x 15-1/2” deep (not including connectors); 21-1/2” wide x 8” high x 21” deep (boxed). Weight: 30 lbs (unboxed), 36 lbs (boxed).
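For a bit of electrical context on those power ratings, here is a small Python sketch applying the standard P = V²/R and P = I²·R relationships to a few of the figures above. It treats the loudspeaker as a pure resistance, which real speakers are not, so take it as a rough illustration only:

```python
import math

# Rough electrical context for the ratings above, treating the load as a
# pure resistance (real loudspeaker impedance varies with frequency).

def rms_voltage(power_w: float, load_ohms: float) -> float:
    return math.sqrt(power_w * load_ohms)   # from P = V^2 / R

def rms_current(power_w: float, load_ohms: float) -> float:
    return math.sqrt(power_w / load_ohms)   # from P = I^2 * R

for label, watts, ohms in [
    ("XPA-7: 200 W into 8 ohms", 200, 8),
    ("XPA-7: 490 W into 4 ohms", 490, 4),
    ("BasX A7: 90 W into 8 ohms", 90, 8),
]:
    v, i = rms_voltage(watts, ohms), rms_current(watts, ohms)
    print(f"{label}: ~{v:.0f} V RMS, ~{i:.1f} A RMS")
```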