POPULARITY
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open Problems in AIXI Agent Foundations, published by Cole Wyeth on September 13, 2024 on LessWrong. I believe that the theoretical foundations of the AIXI agent and its variations are a surprisingly neglected and high-leverage approach to agent foundations research. Though discussion of AIXI is pretty ubiquitous in AI safety spaces, underscoring AIXI's usefulness as a model of superintelligence, it is usually limited to poorly justified verbal claims about its behavior, which are sometimes questionable or wrong. This includes, in my opinion, a serious exaggeration of AIXI's flaws. For instance, in a recent post I proposed a simple extension of AIXI off-policy that seems to solve the anvil problem in practice - in fact, in my opinion it has never been convincingly argued that the anvil problem would occur for an AIXI approximation. The perception that AIXI fails as an embedded agent seems to be one of the reasons it is often dismissed with a cursory link to some informal discussion. However, I think AIXI research provides a more concrete and justified model of superintelligence than most subfields of agent foundations [1]. In particular, a Bayesian superintelligence must optimize some utility function using a rich prior, requiring at least structural similarity to AIXI. I think a precise understanding of how to represent this utility function may be a necessary part of any alignment scheme, on pain of wireheading. And this will likely come down to understanding some variant of AIXI, at least if my central load-bearing claim is true: the most direct route to understanding real superintelligent systems is by analyzing agents similar to AIXI. Though AIXI itself is not a perfect model of embedded superintelligence, it is perhaps the simplest member of a family of models rich enough to elucidate the necessary problems and exhibit the important structure. Just as the Riemann integral is an important precursor of Lebesgue integration, despite qualitative differences, it would make no sense to throw AIXI out and start anew without rigorously understanding the limits of the model. And there are already variants of AIXI that surpass some of those limits, such as the reflective version that can represent other agents as powerful as itself. This matters because the theoretical underpinnings of AIXI are still very spotty and contain many tractable open problems. In this document, I will collect several of them that I find most important - and in many cases am actively pursuing as part of my PhD research advised by Ming Li and Marcus Hutter. The AIXI (~= "universal artificial intelligence") research community is small enough that I am willing to post many of the directions I think are important publicly; in exchange, I would appreciate a heads-up from anyone who reads a problem on this list and decides to work on it, so that we don't duplicate efforts (I am also open to collaboration). The list is particularly tilted towards those problems with clear, tractable relevance to alignment OR philosophical relevance to human rationality. Naturally, most problems are mathematical. Particularly where they intersect recursion theory, these problems may have solutions in the mathematical literature that I am not aware of (keep in mind that I am a lowly second-year PhD student). Expect a scattering of experimental problems to be interspersed as well.
To save time, I will assume that the reader has a copy of Jan Leike's PhD thesis on hand. In my opinion, he has made much of the existing foundational progress since Marcus Hutter invented the model. Also, I will sometimes refer to the two foundational books on AIXI as UAI = Universal Artificial Intelligence and Intro to UAI = An Introduction to Universal Artificial Intelligence, and to the canonical textbook on algorithmic information theory as Intro to K = An...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Aspiration-based designs] 3. Performance and safety criteria, and aspiration intervals, published by Jobst Heitzig on April 28, 2024 on The AI Alignment Forum. Summary. In this post, we extend the basic algorithm by adding criteria for choosing the two candidate actions the algorithm mixes, and by generalizing the goal from making the expected Total equal a particular value to making it fall into a particular interval. We only use simple illustrative examples of performance and safety criteria and reserve the discussion of more useful criteria for later posts. Introduction: using the gained freedom to increase safety. After having introduced the basic structure of our decision algorithms in the last post, in this post we will focus on the core question: How shall we make use of the freedom gained from having aspiration-type goals rather than maximization goals? After all, while there is typically only a single policy that maximizes some objective function (or very few, more or less equivalent policies), there is typically a much larger set of policies that fulfill some constraint (such as the aspiration to make the expected Total equal some desired value). More formally: Let us think of the space of all (probabilistic) policies, Π, as a compact convex subset of a high-dimensional vector space with dimension d ≫ 1 and Lebesgue measure μ. Let us call a policy π ∈ Π successful iff it fulfills the specified goal, G, and let Π_G ⊆ Π be the set of successful policies. Then this set typically has zero measure, μ(Π_G) = 0, and low dimension, dim(Π_G) ≪ d, if the goal is a maximization goal, but it has large dimension, dim(Π_G) ≈ d, for most aspiration-type goals. E.g., if the goal is to make the expected Total equal an aspiration value, E[τ] = ℰ, we typically have dim(Π_G) = d − 1 but still μ(Π_G) = 0. At the end of this post, we discuss how the set of successful policies can be further enlarged by switching from aspiration values to aspiration intervals to encode goals, which makes the set have full dimension, dim(Π_G) = d, and positive measure, μ(Π_G) > 0. What does that mean? It means we have a lot of freedom to choose the actual policy π ∈ Π_G that the agent should use to fulfill an aspiration-type goal. We can try to use this freedom to choose policies that are rather safe than unsafe according to some generic safety metric, similar to the impact metrics used in reward function regularization for maximizers. Depending on the type of goal, we might also want to use this freedom to choose policies that fulfill the goal in a rather desirable than undesirable way according to some goal-related performance metric. In this post, we will illustrate this with only a very few "toy" safety metrics, and one rather simple goal-related performance metric, to exemplify how such metrics might be used in our framework. In a later post, we will then discuss more sophisticated and hopefully more useful safety metrics. Let us begin with a simple goal-related performance metric, since that is the most straightforward. Simple example of a goal-related performance metric. Recall that in step 2 of the basic algorithm, we could make the agent pick any action a− whose action-aspiration is at most as large as the current state-aspiration, ℰ(s, a−) ≤ ℰ(s), and it can also pick any other action, a+, whose action-aspiration is at least as large as the current state-aspiration, ℰ(s, a+) ≥ ℰ(s).
This flexibility is because in steps 3 and 4 of the algorithm, the agent is still able to randomize between these two actions a− and a+ in a way that makes the expected Total, E[τ], become exactly ℰ(s). If one had an optimization mindset, one might immediately get the idea to not only match the desired expectation for the Total, but also to minimize the variability of the Total, as measured by some suitable statistic such as its variance. In a sequential decision makin...
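To make the mixing step concrete, here is a minimal Python sketch of steps 2-4 as described above (my own illustration, not code from the post; the discrete candidate set and the helper name aspiration_step are assumptions). Choosing the mixing probability p = (ℰ(s) − ℰ(s, a−)) / (ℰ(s, a+) − ℰ(s, a−)) makes the mixture's expected Total equal ℰ(s) exactly:

import random

def aspiration_step(E_s, candidates, rng):
    """Pick one action with action-aspiration <= E(s) and one with
    action-aspiration >= E(s), then mix them so that the expected
    Total equals E(s) exactly.
    `candidates` maps actions to their action-aspirations E(s, a)."""
    lo = [a for a, e in candidates.items() if e <= E_s]  # candidates for a-
    hi = [a for a, e in candidates.items() if e >= E_s]  # candidates for a+
    a_minus, a_plus = rng.choice(lo), rng.choice(hi)
    e_lo, e_hi = candidates[a_minus], candidates[a_plus]
    # p * e_hi + (1 - p) * e_lo == E_s determines the mixing probability.
    p = 0.5 if e_hi == e_lo else (E_s - e_lo) / (e_hi - e_lo)
    return a_plus if rng.random() < p else a_minus

rng = random.Random(1)
print(aspiration_step(5.0, {"a1": 2.0, "a2": 6.0, "a3": 9.0}, rng))

Whichever admissible pair is drawn here (a1 with a2, or a1 with a3), the mixture's expected action-aspiration is exactly 5.0, which is the point of the construction.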
In this episode of the Ecommerce Coffee Break Podcast, I talk with Josip Begic, founder of Lebesgue.io. We discuss how to use AI to combine your business data, your competitors' data, and overall marketing trends to figure out how to optimize your marketing.
On the Show Today You'll Learn:
Mistakes people make when spending their marketing budget
The best ways to optimize your ad spend
Strategies for competing in the market
The most important reporting tool for Shopify merchants
Facebook, Google, Shopify. What is their relation?
And more
Josip Begic is a mathematician who spent $20M on Facebook and Google ads for Shopify stores. He worked as head of growth for 3 YC companies (all on Shopify). Now he is the founder of Lebesgue: Smarter Marketing, an app that works as your Artificial Intelligence CMO.
Links & Resources
Website: https://lebesgue.io/
Shopify App store: https://apps.shopify.com/advertising-insights
LinkedIn: https://www.linkedin.com/company/lebesgue/
Continue the Conversation
In our ECOM MERCHANT PRO community, you can connect with our podcast guests and continue the conversation. Our community is also a great place to get advice from other Shopify merchants who have achieved what you are aiming for. This is your safe place to actively grow your online retail business with the support of the most amazing and helpful group of ecommerce entrepreneurs behind you.
Join the Ecom Merchant Pro Community! Podcast listeners get 50% off for life with the coupon: EMPC
Join here: https://www.clauslauter.com/ecom-merchant-pro/
Subscribe & Listen Everywhere:
Listen On: clauslauter.com | Apple Podcasts/iTunes | Spotify | Amazon Music/Audible | Stitcher | Deezer | Google Podcast
By rating and reviewing the show in the app that you are listening to, you will enable us to invite bigger and more impactful guests. Please also remember to subscribe to our podcast and switch on the notifications to never miss an episode. Tag the podcast on Instagram @clauslauter and let me know what you like about it. If you like the content and would like to support the podcast, you can buy me a coffee here.
Support the show
Become a better Shopify merchant. Subscribe and get tips on how to run a profitable Shopify business. Join our free newsletter at https://www.clauslauter.com/ecommerce-podcast-newsletter/
Joseph Bennish, Prof. Emeritus of CSULB, describes the field of Diophantine approximation, which started in the 19th century with questions about how well irrational numbers can be approximated by rationals. It took Cantor and Lebesgue to develop new ways to talk about the sizes of infinite sets, giving the 20th century fresh tools for thinking about the problem. This led up to the Duffin-Schaeffer conjecture and this year's Fields Medal for James Maynard. --- Send in a voice message: https://anchor.fm/the-art-of-mathematics/message
Duration: 00:01:43 - La minute Picarde, FB Picardie
In this episode we talk about the concept of infinity and about what real numbers are. We discuss Dedekind, Cantor, Heine, and Hilbert; among the many possibilities, we chose to present their views on what a real number actually is. The episode was based on the books História da matemática by Tatiana Roque and A história da análise matemática de Cauchy a Lebesgue by Rosa Lúcia. With an internet search, you should be able to find both works. Our contact is jogosematematica@gmail.com. Follow us on Instagram @jogosematematica
In the second installment of the History of Science series, focused on Set Theory and featuring Assoc. Prof. Serhan Yarkan and Halil Said Cankurtaran, the discussion covers the repercussions of Georg Cantor's work on the theory, the lives of the scientists who devoted themselves to it, and the influence of Set Theory on topology, algebraic structures, probability theory, measure theory, analysis, computation, and semiconductors. Along the way, many important scientists who contributed to the development of the theory and to its impact on other fields are mentioned, including Georg Cantor, David Hilbert, Leopold Kronecker, Richard Dedekind, Jean Baptiste Joseph Fourier, Henri Léon Lebesgue, Félix Édouard Justin Émile Borel, Gottlob Frege, Bertrand Arthur William Russell, Ebû Ca'fer Muhammed bin Mûsâ el-Hârizmî, Udny Yule, Andrey Nikolayevich Kolmogorov, and Giuseppe Vitali. Enjoy listening. Set Theory, Part I: https://youtu.be/pSksJkWK6wU David Hilbert's 1926 paper in Mathematische Annalen (German): https://link.springer.com/article/10.1007/BF01206605 English translation of the paper (On the Infinite, David Hilbert): https://math.dartmouth.edu/~matc/Readers/HowManyAngels/Philosophy/Philosophy.html Tapir Lab. GitHub: @TapirLab, https://www.github.com/tapirlab Tapir Lab. Instagram: @tapirlab, https://www.instagram.com/tapirlab/ Tapir Lab. Twitter: @tapirlab, https://www.twitter.com/tapirlab Tapir Lab.: http://www.tapirlab.com
Calculus Reordered: A History of the Big Ideas (Princeton UP, 2019) takes readers on a remarkable journey through hundreds of years to tell the story of how calculus evolved into the subject we know today. David Bressoud explains why calculus is credited to seventeenth-century figures Isaac Newton and Gottfried Leibniz, and how its current structure is based on developments that arose in the nineteenth century. Bressoud argues that a pedagogy informed by the historical development of calculus represents a sounder way for students to learn this fascinating area of mathematics. Delving into calculus's birth in the Hellenistic Eastern Mediterranean—particularly in Syracuse, Sicily and Alexandria, Egypt—as well as India and the Islamic Middle East, Bressoud considers how calculus developed in response to essential questions emerging from engineering and astronomy. He looks at how Newton and Leibniz built their work on a flurry of activity that occurred throughout Europe, and how Italian philosophers such as Galileo Galilei played a particularly important role. In describing calculus's evolution, Bressoud reveals problems with the standard ordering of its curriculum: limits, differentiation, integration, and series. He contends that the historical order—integration as accumulation, then differentiation as ratios of change, series as sequences of partial sums, and finally limits as they arise from the algebra of inequalities—makes more sense in the classroom environment. Exploring the motivations behind calculus's discovery, Calculus Reordered highlights how this essential tool of mathematics came to be. David M. Bressoud is DeWitt Wallace Professor of Mathematics at Macalester College and Director of the Conference Board of the Mathematical Sciences. His many books include Second Year Calculus and A Radical Approach to Lebesgue's Theory of Integration. He lives in St. Paul, Minnesota. Mark Molloy is the reviews editor at MAKE: A Literary Magazine. Learn more about your ad choices. Visit megaphone.fm/adchoices
Stephan Ajuvo (@ajuvo) from the damals(tm) podcast, Damon Lee from the Hochschule für Musik, and Sebastian Ritterbusch met at the Gulasch-Programmiernacht 2019 of the CCC Erfa-circle Entropia e.V., held once again at the ZKM and the HfG Karlsruhe. The topic is music, mathematics, and how it all came to be the way it is. Damon Lee has been teaching at the Hochschule für Musik for a year and works on music for film, theater, media, and video games. In the current semester he uses Unity 3D to realize spatial music and sound in virtual spaces in a gaming context. The research project Terrain also investigates to what extent spatial sound can support better orientation in urban environments. The idea for this episode arose in the wake of the joint recording by Stephan and Sebastian on the slide rule, since music as we know it also carries a computational problem, one that can be found in every piano. Music has also played an important role in the history of technology, for example with the theremin and the trautonium. The keyboard of an ordinary piano, with its white and black keys, appears to cover all the tones that our usual tone system can express in musical notation. This tone system originates from quite simple physical and mathematical properties: if a string is halved and set vibrating, the frequency doubles compared to before, and we hear a similar, higher tone that receives the same name in the tone system, just one octave higher. A concert pitch a' at 440 Hz becomes a'' at 880 Hz. Besides doubling, tripling the frequency also produces a sound pleasant to the human ear. Since that tone lies more than an octave higher, one instead considers the tone one octave below it, i.e. the tone at 1.5 times the original frequency. This interval, for example from a' at 440 Hz to e'' at 660 Hz, is a (pure) fifth. Following the circle of fifths, all 12 distinct semitones of the notation system within one octave are reached. But here there is a fundamental mathematical problem: by the fundamental theorem of arithmetic, every number has a unique prime factorization. It is therefore impossible to reach, by repeated multiplication by 2, the same number that is reached by repeated multiplication by 3. Thus the circle of fifths cannot close; it is really a never-ending spiral of fifths, and we would need infinitely many distinct tones instead of just twelve per octave. In numbers, (3/2)^12 ≈ 129.746, whereas 7 octaves give 2^7 = 128. After 12 pure fifths we therefore do not exactly reach the original tone 7 octaves higher, but the gap is not very large. It is fundamentally impossible to build a finite tone system from pure octaves and pure fifths, and different strategies have been developed to cope with this problem. If the problem is ignored and only the last fifth is shrunk so that it lands on the original tone seven octaves higher, a horrid-sounding wolf fifth results. In cello building, too, the choice of the ratios of the strings and the resonance frequencies of the body can produce nearly unplayable tones; these are called wolf tones.
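Spelled out (a standard computation, added here for reference), the mismatch between twelve pure fifths and seven octaves is the Pythagorean comma:

\[
\frac{(3/2)^{12}}{2^{7}} = \frac{3^{12}}{2^{19}} = \frac{531441}{524288} \approx 1.01364,
\]

and since 3^12 is odd while 2^19 is even, the ratio can never be exactly 1, which is precisely the prime-factorization argument above.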
In music, the required correction of intervals is also called comma adjustment; on string instruments it happens automatically, since the tones there are not fixed to set frequencies but are played individually via the finger position on the fingerboard. On keyboard instruments, however, the tones must be fully fixed in frequency in advance, and historically various temperaments emerged: after many variants, which always contained keys made unplayable by the wolf fifth, the well-tempered tuning was introduced by Andreas Werckmeister in the Baroque era from 1681 on, in which every key is playable but each conveys an individual tuning and character. These differences are said to have inspired Johann Sebastian Bach, by 1742, to the work Das wohltemperierte Klavier, in which he set the particular characteristics of all keys to music. The equal temperament most commonly used today shrinks all fifths from 1.5 to the common factor 2^(7/12) ≈ 1.4983, so that all tones are fixed to the frequencies f = 440 Hz · 2^(n/12) for integer n. All keys are then absolutely equally playable, but they also all sound the same, and all carry the same small error. Since string instruments naturally choose better-fitting frequencies, synthetically produced strings in particular sound unrealistic when they follow exact equal temperament. While in piano tuning the tones can be adjusted via the tension of the strings, metal organ pipes are mechanically adjusted in frequency with a tuning iron. The porcelain organ is an unusual form, manufactured among other places in Meissen, whose pipes of course also sound by air and not by vibration as in the percussion instrument the vibraphone. György Ligeti, popularly known for film music in 2001: A Space Odyssey and Eyes Wide Shut, also engaged in his later work with more exotic tone systems based on pure intervals with strings. For example, Continuum, for harpsichord, was to be played in meantone temperament. To be able to notate finer tone steps in the conventional notation based on 12 semitones, the half-sharp and half-flat signs were introduced, which led to quarter-tone music. Here the interesting question arises whether increasing to 24 tones per octave reduces the error relative to pure intervals. This question is answered by computing, for a given number of fifths, the nearest number of octaves and the relative error that must be corrected. Up to 53 fifths, the following combinations have an error of less than 7%:

Fifths n:   5     7     12    17    24    29    36    41    46    48    53
Octaves m:  3     4     7     10    14    17    21    24    27    28    31
Error:      5.1%  6.8%  1.4%  3.8%  2.8%  2.5%  4.2%  1.1%  6.6%  5.6%  0.2%

A very primitive tone system can thus be set up with 5 tones, but evidently 12 tones fit considerably better. 24 tones allow more tonal variety but do not improve the error. Only a tone system with 29 tones would, in equal-step tuning, allow a more exact sound than 12 tones. Even better would then be only a tone system with 41 tones per octave, and an extreme improvement comes with 53 tones per octave, with corresponding problems in building such a keyboard.
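The computation behind this table can be sketched in a few lines of Python (my addition, not from the episode notes); because of rounding, the printed percentages may differ in the last digit from the table as given:

import math

fifth = math.log2(3 / 2)                 # one pure fifth, measured in octaves
for n in (5, 7, 12, 17, 24, 29, 36, 41, 46, 48, 53):
    m = round(n * fifth)                 # nearest whole number of octaves
    err = abs(1.5 ** n / 2 ** m - 1)     # relative mismatch to be tempered away
    print(f"fifths n={n:2d}  octaves m={m:2d}  error={err:.1%}")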
Moreover, tone-system extensions in multiples of 12 are more compatible with the conventional system, and the closeness of the better tone systems of 29 to 24 and of 53 to 48 shows that the multiples can be regarded in performance as approximations to the better representations. Gérard Grisey (e.g. Les espaces acoustiques) and Tristan Murail are representatives of the spectralists, who use extended tone systems in their scores. Here the pitch indications are meant harmonically rather than melodically, and are to be interpreted accordingly in performance. YouTube: Gérard Grisey - Vortex Temporum - Ensemble Recherche. Naturally, the tones of instruments must not be considered only at their fundamental frequency; it is the interplay of all harmonics and overtones, at multiples of the fundamental, that makes up the characteristic sound of an instrument. A Fourier analysis can mathematically compute such a frequency spectrum of a noise or a tone. Often a surprising number of overtones appear, which humans do not hear independently of the fundamental. Ottoman music often contains harmonies unfamiliar to Western European ears, having developed, out of its long oriental history, other forms of composition and tone systems. In audio electronics, tubes were used from about 1912 on for amplifiers, particularly in music, and the exact construction of the plates and electrodes had marked effects on the transmission and generation of spectra and audio waves through distortion. The Hammond organ was a very popular electromechanical organ in which, instead of pipes, rotating tonewheels in front of electric pickups produced the tones. In the GDR, attempts were made, with the help of tubes, to recreate Silbermann organs as electronic organs based on the principle of the Hammond organ. The timbres of the Silbermann organs were imitated by electronically reconstructing the overtones. What counts as a pleasant sound is a personal matter. It is striking, however, that the harmonious basic sound of a major chord has a very mathematical background: the fifth brings in the factor 3, or 3/2 = 1.5; the major third the factor 5, or 5/4 = 1.25; and the fourth up to the next octave, with factor 2, is the factor 4/3. An interplay of such small factors becomes periodic again after a small least common multiple and yields an even sound. Personal perception may be physiologically connected to the structure of the cochlea, but it is also strongly shaped by experience. Music, though, does not consist of a single sound but of a temporal sequence of consonance and dissonance, and that holds not only for new editions of old masters by Wolfgang Rehm. Ornette Coleman, for instance, plays with listeners' expectations all the way into chaos. YouTube: Ornette Coleman Solo - Rare! The Google Doodle in honor of Johann Sebastian Bach, by contrast, tries to use a neural network to compose, from a given input, precisely the expected completion in the style of Bach. Regularity or surprise in music can also be interpreted in terms of information content: very regular forms are predictable and contain little information, while an unexpected turn carries a lot of information. Tools described as algorithmic composition are offered in many programs and devices, for example as automatic accompaniment.
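Stepping back to the Fourier analysis mentioned above, here is a minimal Python sketch (my addition, not from the episode; the tone with overtone amplitudes 1, 0.5, 0.25 is a made-up example) that synthesizes a tone and reads its overtone spectrum back off the transform:

import numpy as np

fs = 8000                                    # sampling rate in Hz
t = np.arange(fs) / fs                       # one second of samples
# A 440 Hz fundamental plus two overtones at 2x and 3x the frequency.
tone = sum(a * np.sin(2 * np.pi * f * t)
           for a, f in [(1.0, 440), (0.5, 880), (0.25, 1320)])

spectrum = np.abs(np.fft.rfft(tone)) / len(tone)
freqs = np.fft.rfftfreq(len(tone), d=1 / fs)
peaks = freqs[spectrum > 0.01]               # bins with visible energy
print(peaks)                                 # -> [ 440.  880. 1320.]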
The results of such algorithmic composition tools, however, do not seem very creative. When artificial neural networks are used for composition, it is unfortunately not possible to analyze afterwards why and how particular passages were produced: even when trained on existing examples with backpropagation, they then operate as a black box from which abstract decision criteria cannot be directly reproduced. All learning presupposes a measure of quality; what, then, is the quality of a composition, what distinguishes creativity from chance, and where do different people agree on this? As prehistoric instruments show, sound production and music, together with the formation of the voice, are closely tied to human evolution. Techniques for codifying tone sequences arose rather late, for example in Gregorian chant. It may be assumed that the influence of society on the compositions of its time was very large, and that there were also particular effects such as the blue notes. Today, composition is taught in many steps: starting with music theory, the learning of instruments, and music history, students are then introduced to the composition techniques of different musical eras, from the techniques of Josquin Desprez in the 15th century, to the use of counterpoint in the 16th century, to how Johann Sebastian Bach used counterpoint in the 18th century. In Ludwig van Beethoven's sheet music one can see how he learned composing on the basis of counterpoint from Joseph Haydn, and his work, by now extensively digitized by the Beethoven-Haus, continues to delight music research. A teaching canon can change over time, just as composition techniques do: as the Riemann integral used to be the standard in mathematics, we now see the transition to the more powerful notion of the integral after Lebesgue, which is closer to reality. Just as the newer notion is used more often today, it is sensible and good to also know earlier techniques, including earlier composition techniques, and to be able to learn from them. In the profession of a composer today it is mostly not the case that creativity is given free rein; rather, the work is done in interdisciplinary collaboration in a team. Especially for video game music or film music, the composition is developed and worked out for particular situations. How creativity, teamwork, artificial intelligence, and programming can combine into new solutions could also be seen at the Gulaschprogrammiernacht in the projection of the Schlangenprogrammiernacht, where various programs lived together as snakes in a virtual world. The playful handling of algorithms, as in rock-paper-scissors, quickly leads to game theory and to the challenges of high-frequency trading. Literature and further information: C.-Z. A. Huang, C. Hawthorne, A. Roberts, M. Dinculescu, J. Wexler, L. Hong, J. Howcroft: The Bach Doodle: Approachable music composition with machine learning at scale, ISMIR 2019. U. Peil: Die chromatische Tonleiter - Mathematik und Physik, Jahrbuch der Braunschweigischen Wissenschaftlichen Gesellschaft, 2012. M. Schönewolf: Der Wolf in der Musik. Podcasts: U. Häse, S. Ajuvo: Theremin, episode 56 of the damals(tm) podcast, 2018. N. Ranosch, G.
Thäter: Klavierstimmung, conversation in the Modellansatz Podcast, episode 67, Department of Mathematics, Karlsruhe Institute of Technology (KIT), 2015. P. Modler, S. Ritterbusch: Raumklang, episode 8 of the podcast Neues Terrain, 2019. R. Pollandt, S. Ajuvo, S. Ritterbusch: Rechenschieber, conversation in the damals(tm) and Modellansatz podcasts, episode 184, Department of Mathematics, Karlsruhe Institute of Technology (KIT), 2018. S. Ajuvo, S. Ritterbusch: Finanzen damalsTM, conversation in the Modellansatz Podcast, episode 97, Department of Mathematics, Karlsruhe Institute of Technology (KIT), 2016. S. Brill, T. Pritlove: Das Ohr, CRE: Technik, Kultur, Gesellschaft, episode 206, 2014. C. Conradi: Der erste letzte Ton, Systemfehler Podcast, episode 26, 12.4.2018. C. Conradi: Elektronische Orgel made in DDR, Zeitfragen, Deutschlandfunk Kultur, 12.6.2019. G. Follmer, H. Klein: WR051 Ortsgespräch, WRINT: Wer redet ist nicht tot, episode 51, 2012. Audio tracks: sound examples by D. Lee and S. Ritterbusch. MuWi: C-g pythagoräischer Wolf, CC-BY-SA, 2007. Mdd4696: WolfTone, Public Domain, 2005. GPN19 Special: P. Packmohr, S. Ritterbusch: Neural Networks, Data Science Phil, episode 16, 2019. P. Packmohr, S. Ritterbusch: Propensity Score Matching, conversation in the Modellansatz Podcast, episode 207, Department of Mathematics, Karlsruhe Institute of Technology (KIT), 2019. http://modellansatz.de/propensity-score-matching C. Haupt, S. Ritterbusch: Research Software Engineering, conversation in the Modellansatz Podcast, episode 208, Department of Mathematics, Karlsruhe Institute of Technology (KIT), 2019. http://modellansatz.de/research-software-engineering D. Lee, S. Ajuvo, S. Ritterbusch: Tonsysteme, conversation in the Modellansatz Podcast, episode 216, Department of Mathematics, Karlsruhe Institute of Technology (KIT), 2019. http://modellansatz.de/tonsysteme GPN18 Special: D. Gnad, S. Ritterbusch: FPGA Seitenkanäle, conversation in the Modellansatz Podcast, episode 177, Department of Mathematics, Karlsruhe Institute of Technology (KIT), 2018. http://modellansatz.de/fpga-seitenkanaele B. Sieker, S. Ritterbusch: Flugunfälle, conversation in the Modellansatz Podcast, episode 175, Department of Mathematics, Karlsruhe Institute of Technology (KIT), 2018. http://modellansatz.de/flugunfaelle A. Rick, S. Ritterbusch: Erdbebensicheres Bauen, conversation in the Modellansatz Podcast, episode 168, Department of Mathematics, Karlsruhe Institute of Technology (KIT), 2018. http://modellansatz.de/erdbebensicheres-bauen GPN17 Special: Sibyllinische Neuigkeiten: GPN17, episode 4 of the CCC Essen podcast, 2017. A. Rick, S. Ritterbusch: Bézier Stabwerke, conversation in the Modellansatz Podcast, episode 141, Department of Mathematics, Karlsruhe Institute of Technology (KIT), 2017. http://modellansatz.de/bezier-stabwerke F. Magin, S. Ritterbusch: Automated Binary Analysis, conversation in the Modellansatz Podcast, episode 137, Department of Mathematics, Karlsruhe Institute of Technology (KIT), 2017. http://modellansatz.de/binary-analyis M. Lösch, S. Ritterbusch: Smart Meter Gateway, conversation in the Modellansatz Podcast, episode 135, Department of Mathematics, Karlsruhe Institute of Technology (KIT), 2017. http://modellansatz.de/smart-meter GPN16 Special: A. Krause, S. Ritterbusch: Adiabatische Quantencomputer, conversation in the Modellansatz Podcast, episode 105, Department of Mathematics, Karlsruhe Institute of Technology (KIT), 2016. http://modellansatz.de/adiabatische-quantencomputer S. Ajuvo, S. Ritterbusch: Finanzen damalsTM, conversation in the Modellansatz Podcast, episode 97, Department of Mathematics, Karlsruhe Institute of Technology (KIT), 2016.
http://modellansatz.de/finanzen-damalstm M. Fürst, S. Ritterbusch: Probabilistische Robotik, conversation in the Modellansatz Podcast, episode 95, Department of Mathematics, Karlsruhe Institute of Technology (KIT), 2016. http://modellansatz.de/probabilistische-robotik J. Breitner, S. Ritterbusch: Incredible Proof Machine, conversation in the Modellansatz Podcast, episode 78, Department of Mathematics, Karlsruhe Institute of Technology (KIT), 2016. http://modellansatz.de/incredible-proof-machine
When the probability space is not countable, constructing a probability measure requires the general framework of measure theory, introduced in the context of the Lebesgue integral (notions of pi-systems and monotone classes).
Unlike the case of the Lebesgue integral, the L^p spaces defined for integration with respect to a probability measure are nested. This peculiarity comes from the fact that the measure of the whole space is finite (whereas the Lebesgue measure of R is infinite).
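The nesting can be justified in one line (a standard fact, added here for reference): for a probability measure P and exponents 1 ≤ p ≤ q, Jensen's inequality applied to the convex function x ↦ x^{q/p} gives

\[
\|f\|_{L^p(P)} = \Big(\int |f|^p \, dP\Big)^{1/p} \le \Big(\int |f|^q \, dP\Big)^{1/q} = \|f\|_{L^q(P)},
\]

hence L^q(P) ⊆ L^p(P). On R with Lebesgue measure the argument fails precisely because the total mass is infinite.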
A probability measure is designed to integrate random variables. The definition of such an integral is similar to that of the Lebesgue integral.
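For reference, the standard construction (not spelled out in the original note): for a nonnegative random variable X on (Ω, F, P), the integral is built exactly as in Lebesgue's theory, by approximation with simple functions,

\[
\mathbb{E}[X] = \int_\Omega X \, dP = \sup \Big\{ \sum_{i=1}^n a_i \, P(A_i) \;:\; a_i \ge 0,\ A_i \in \mathcal{F},\ \sum_{i=1}^n a_i \mathbf{1}_{A_i} \le X \Big\},
\]

with a general X handled via the decomposition X = X⁺ − X⁻.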
A lover of the nature that surrounds him: farms, rivers and water mills, fields of rich earth. The poet transforms it into universal rhythm.
…in the Bray country (Picardy) with Philéas Lebesgue (1869-1958) and his poems about sowing, landscapes, and harvests / in the Villers-Cotterêts country with its streets, its forest and its game, its marshes evoked by Alexandre Dumas (1802-1870) in his Mémoires, Ange Pitou, etc.
Speaker: Prof. L. Leskelä Abstract: Juggler's exclusion process describes a system of particles on the positive integers where particles drift down to zero at unit speed. After a particle hits zero, it jumps into a randomly chosen unoccupied site. I will model the system as a set-valued Markov process and show that the process is ergodic if the family of jump height distributions is uniformly integrable. In a special case where the particles perform jumps in an entropy-maximizing fashion, the process reaches its equilibrium in finite nonrandom time, and the equilibrium distribution can be represented as a Gibbs measure conforming to a linear gravitational potential. Time permitting, I will also discuss a recent result which sharply characterizes uniform integrability using the theory of stochastic orders, and allows one to interpret the dominating function in Lebesgue's dominated convergence theorem in a natural probabilistic way. This talk is based on joint work with Harri Varpanen (Aalto University, Finland) and Matti Vihola (University of Jyväskylä, Finland).
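For intuition, here is a minimal Python caricature of the dynamics described in the abstract (my own sketch, not the speaker's model: the talk treats a set-valued Markov process with general jump height distributions, whereas here jump heights are simply uniform on {1, ..., max_jump} and resampled until an unoccupied site is found):

import random

def jugglers_step(occupied, rng, max_jump=20):
    """One unit-time step of a finite toy version of the juggler's
    exclusion process: every particle drifts down by one; a particle
    reaching zero jumps to a random unoccupied site in {1, ..., max_jump}."""
    moved = {h - 1 for h in occupied if h > 1}    # plain downward drift
    jumpers = sum(1 for h in occupied if h == 1)  # these hit zero now
    for _ in range(jumpers):
        site = rng.randint(1, max_jump)
        while site in moved:                      # exclusion: resample
            site = rng.randint(1, max_jump)       # until unoccupied
        moved.add(site)
    return moved

rng = random.Random(0)
state = {1, 4, 7}
for _ in range(5):
    state = jugglers_step(state, rng)
    print(sorted(state))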