Every semester, Arizona State University's School of Mathematical & Statistical Sciences hosts a seminar series featuring a wide range of topics related to mathematics.
Algebraic statistics advocates polynomial algebra as a tool for addressing problems in statistics and its applications. This connection is based on the fact that most statistical models are defined either parametrically or implicitly via polynomial equations. The idea is summarized by the phrase "Statistical models are semialgebraic sets". I will try to illustrate this idea with two examples, the first coming from the analysis of contingency tables, and the second arising in computational biology. I will try to keep the algebraic and statistical prerequisites to an absolute minimum and keep the talk accessible to a broad audience.
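As a minimal illustration of the slogan (my own example, not necessarily the one used in the talk): for a 2×2 contingency table, the independence model is exactly the set of probability tables cut out by one polynomial equation together with the usual inequalities:

```latex
p_{11}\,p_{22} - p_{12}\,p_{21} = 0, \qquad p_{ij} \ge 0, \qquad \sum_{i,j} p_{ij} = 1 ,
```

i.e., the rank-one (determinant-zero) condition on the table of joint probabilities, which makes the model a semialgebraic set.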
Mathematical concepts are often difficult for students to acquire. This difficulty is evidenced by failure of knowledge to transfer from the learning situation to a novel isomorphic situation. What choice of instantiation most effectively facilitates successful transfer? One possibility is that grounding the concept through a concrete, contextualized instantiation may facilitate learning and in turn facilitate transfer. On the other hand, several cognitive factors influence the process of analogical transfer, suggesting that facilitating transfer may depend on more than promoting initial learning. A series of experiments examined learning and transfer when students learned instantiations of a simple mathematical concept that were either concrete/contextualized or abstract/generic. I discuss these findings in detail, specifically the underlying cognitive mechanisms involved in analogical transfer.
This presentation does not require previous knowledge of C*-algebras, labeled graphs, or group actions. A labeled graph over an alphabet consists of a directed graph together with a labeling map. One can associate a C*-algebra to a labeled graph in such a way that if the labeling is trivial then the resulting C*-algebra is the C*-algebra of the graph. In this presentation, I will discuss joint work with Teresa Bates and David Pask concerning (discrete) group actions on labeled graphs and the resulting crossed product C*-algebras. In particular, I will discuss our main theorem, which shows that the crossed product that arises when a group acts freely on a labeled graph is strongly Morita equivalent to the C*-algebra of the quotient graph of the action. I will focus on the two major ideas needed to prove this Morita equivalence. The first is a generalization of the so-called Gross-Tucker theorem, which shows that a free labeled graph action is naturally equivariantly isomorphic to a skew product action obtained from the quotient labeled graph. The second is a generalization of a theorem of Kaliszewski, Quigg, and Raeburn to the effect that the C*-algebra of a skew product labeled graph is naturally isomorphic to a co-crossed product of a coaction of the group on the C*-algebra of the labeled graph.
Current community models in the geosciences employ a variety of numerical methods, from finite-difference, finite-volume, and finite- or spectral-element methods to pseudospectral methods. All have specialized strengths but also serious weaknesses. The first three methods are generally considered low-order and can involve high algorithmic complexity (as in triangular elements or unstructured meshes). Global spectral methods do not practically allow for local mesh refinement and often involve cumbersome algebra. Radial basis functions have the advantage of being spectrally accurate for arbitrary node layouts in multiple dimensions with extreme algorithmic simplicity, and they naturally permit local node refinement. We will show test examples ranging from vortex roll-ups and modeling idealized cyclogenesis, to the unsteady nonlinear flows posed by the shallow water equations, to 3-D mantle convection in the Earth's interior.
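A minimal one-dimensional sketch of the radial basis function idea described above (my own toy, not the speaker's code; the Gaussian kernel and shape parameter `eps` are illustrative assumptions):

```python
import numpy as np

# Minimal 1-D sketch of radial basis function (RBF) interpolation on an
# arbitrary (scattered) node layout.  The Gaussian kernel and the shape
# parameter eps are illustrative choices.
def rbf_interpolate(nodes, values, eval_pts, eps=3.0):
    phi = lambda r: np.exp(-(eps * r) ** 2)            # Gaussian RBF
    A = phi(np.abs(nodes[:, None] - nodes[None, :]))   # interpolation matrix
    coeffs = np.linalg.solve(A, values)                # expansion weights
    B = phi(np.abs(eval_pts[:, None] - nodes[None, :]))
    return B @ coeffs

# Non-uniform node layouts pose no special difficulty:
nodes = np.array([-1.0, -0.83, -0.6, -0.42, -0.17, 0.0,
                  0.2, 0.38, 0.61, 0.79, 1.0])
vals = np.sin(np.pi * nodes)
x = np.linspace(-1, 1, 101)
approx = rbf_interpolate(nodes, vals, x)               # smooth interpolant
```

The same formula works verbatim in higher dimensions once `|x - y|` is replaced by a Euclidean distance, which is the algorithmic simplicity the abstract refers to.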
I am an active mathematical physicist who has also engaged long-term with mathematics education, particularly with research on mathematical learning and problem solving. This led, perhaps inevitably, to a focus not only on cognition, but also on the psychology of what is called “the affective domain” – i.e., emotional feelings, attitudes, beliefs, and values – in relation to mathematics. In this talk, I shall discuss some important affective constructs which relate directly to mathematical teaching and learning, with a particular focus on the nature of student engagement. Among the ideas considered are the importance of emotional feelings during mathematical problem solving, the idea that information important to learning is encoded affectively (interactions with cognition), the role of beliefs about mathematics, students’ (longer-term) motivational orientations, and various “in the moment” motivating desires that can foster (or inhibit) students’ mathematical engagement, as well as difficulties that arise in efforts to study mathematical affect. Some broader implications are suggested for how we prepare mathematics teachers, how we connect with our own students, and how we represent mathematics to the wider community.
Approximating functions or data by polynomials is an everyday tool, starting with Taylor series. Approximating by rational functions can be much more powerful, but also much more troublesome. In different contexts rational approximations may fail to exist, fail to be unique, or depend discontinuously on the data. Some approximations show forests of seemingly meaningless pole-zero pairs or "Froissart doublets", and when these artifacts should not be there in theory, they often appear in practice because of rounding errors on the computer. Yet for some applications, like extrapolation of sequences and series, rational approximations are indispensable. In joint work with Pedro Gonnet and Ricardo Pachon we have developed a method to get around most of these problems in rational interpolation and least-squares fitting, based on the singular value decomposition. The talk will show many examples of the performance of our "ratdisk" code, including an application to radial basis function interpolation.
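A toy version of the linearized least-squares idea (a sketch in the spirit of, but far simpler than, the actual "ratdisk" algorithm; the function name and degrees are my own assumptions):

```python
import numpy as np

# Linearized rational least-squares fitting via the SVD: seek p/q with
# deg p = m, deg q = n such that f(x) q(x) - p(x) ≈ 0 at the samples.
# The coefficient vector is the right singular vector of smallest
# singular value of the linearized system.
def ratfit(x, f, m, n):
    P = np.vander(x, m + 1, increasing=True)     # columns 1, x, ..., x^m
    Q = np.vander(x, n + 1, increasing=True)     # columns 1, x, ..., x^n
    A = np.hstack([-P, f[:, None] * Q])
    _, _, Vt = np.linalg.svd(A)
    c = Vt[-1]                                   # minimizer of ||A c||, ||c|| = 1
    a, b = c[: m + 1], c[m + 1 :]
    # polyval wants highest-degree coefficient first, hence the reversal
    return lambda t: np.polyval(a[::-1], t) / np.polyval(b[::-1], t)

x = np.linspace(-1, 1, 40)
f = np.exp(x)
r = ratfit(x, f, 3, 3)        # type (3,3) rational approximation to exp
```

This naive version still suffers from the artifacts the abstract mentions (spurious pole-zero pairs on harder problems); the point of the actual method is to detect and remove them.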
We propose a new design for phase II oncology clinical trials based on two considerations. (1) Currently most phase II oncology trials use complete remission (CR) as the primary endpoint. The drugs having higher CR rates enter into subsequent phase III trials, which are usually required to demonstrate benefit on survival. Although achieving CR is necessary for prolonging survival, it is not sufficient, because patients may relapse shortly after achieving CR. This discrepancy was one of the major reasons for the high failure rates of phase III trials, so it is desirable to evaluate survival outcomes in phase II trials. (2) Assigning more patients to the better treatment arms is more ethical than equal randomization. However, due to the long waiting time to observe the survival outcome, response-adaptive randomization for survival clinical trials can be inefficient in skewing the randomization probabilities to favor better-performing treatment arms. A natural idea is to use the short-term response information to speed up the response-adaptiveness of the randomization procedure of survival clinical trials. Based on these considerations, we propose a new phase II design that uses information on both CR and survival. Their relationship is specified by a Bayesian model, which is first constructed using prior clinical information and then updated continuously with the information accumulated in the ongoing trial. Compared with a trial using only information on survival, the new design uses fewer patients, takes less time, and can more effectively assign patients to the better treatment arms. Compared with a trial using only the information on CR, the new design is more reliable in the sense that it picks drug candidates that are more likely to succeed in subsequent phase III trials. Published in Statistics in Medicine 28(12): 1680-1689, 2009. Free software available at https://biostatistics.mdanderson.org/SoftwareDownload/Default.aspx
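A toy illustration of response-adaptive randomization driven by short-term CR data (not the paper's model; the Beta(1,1) priors, the skewing rule, and the rates below are all my own assumptions):

```python
import random

# Two-arm trial; each arm's CR rate gets a Beta(1,1) prior, updated as
# short-term responses accrue.  Randomization probability is skewed
# toward the arm with the higher posterior mean CR rate.
def posterior_mean(successes, failures):
    # Mean of Beta(1 + successes, 1 + failures)
    return (1 + successes) / (2 + successes + failures)

def assign_arm(stats, rng):
    m0 = posterior_mean(*stats[0])
    m1 = posterior_mean(*stats[1])
    p1 = m1 / (m0 + m1)          # simple skewing rule (illustrative)
    return 1 if rng.random() < p1 else 0

rng = random.Random(0)
true_cr = [0.2, 0.5]             # hypothetical true CR rates
stats = [[0, 0], [0, 0]]         # [successes, failures] per arm
counts = [0, 0]
for _ in range(500):
    arm = assign_arm(stats, rng)
    counts[arm] += 1
    if rng.random() < true_cr[arm]:
        stats[arm][0] += 1
    else:
        stats[arm][1] += 1
```

Because CR is observed quickly, the posterior (and hence the randomization probabilities) adapts long before survival data mature, which is the "speed-up" idea in the abstract.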
Hosted by Professor Kyeong Hah Roh

Abstract: Much of what we say and write in our mathematics classes assumes that our students understand linguistic and logical conventions that have never been made explicit to them. What problems result from this assumption, and how can we address them?

Biography: Susanna S. Epp (Ph.D., University of Chicago, 1968) is Vincent de Paul Professor of Mathematical Sciences at DePaul University. After initial research in commutative algebra, she became interested in cognitive issues associated with teaching analytical thinking and proof and has published a number of articles and given many talks related to this topic. She is the author of Discrete Mathematics with Applications, now in its fourth edition, and of the newly published Discrete Mathematics: An Introduction to Mathematical Reasoning. She also co-authored the first edition of Precalculus and Discrete Mathematics, which was developed as part of the University of Chicago School Mathematics Project. Long active in the Mathematical Association of America, she is a co-author of CUPM Curriculum Guide 2004. In January 2005 she received the Louise Hay Award for contributions to mathematics education, and in April 2010 she received the Award for Distinguished Teaching of Mathematics from the Illinois Section of the Mathematical Association of America.
The School of Mathematical and Statistical Sciences presents Dr. Erica Flapan, Lingurn H. Burkhead Professor, Department of Mathematics, Pomona College presenting on the topic, Topological Symmetries of Molecules.
We will describe what a genus one curve is and then specialize to the case of elliptic curves. After discussing a simple looking problem through which elliptic curves become objects that we want to understand, we will summarize some known results about elliptic curves. To conclude, we will go back to the general genus one curve over the rationals and see how close this object is to being an elliptic curve. Mirela Çiperiani is an Assistant Professor in the Department of Mathematics at the University of Texas at Austin.
In algebraic topology, we learn to associate groups H^n(T) to locally compact spaces which “count the n-dimensional holes in T". In this talk, I want to describe how to realize H^3(T) as a set Br(T) of equivalence classes of certain well-behaved C*-algebras. The group structure imposed on Br(T) via its identification with H^3(T) is very natural in its C*-setting. With this group structure, Br(T) is called the Brauer group of T. Depending on your point of view, this result can be viewed either as a concrete realization of H^3(T) or as a classification result for a class of C*-algebras. In the last part of the talk, I want to describe an equivariant version of Br(T) developed jointly with David Crocker, Alex Kumjian and Iain Raeburn. No prior knowledge of C*-algebras or operator algebras will be assumed.
The increase of entropy was regarded as perhaps the most perfect and unassailable law in physics, and it was even supposed to have philosophical import. Einstein, like most physicists of his time, regarded the second law of thermodynamics as one of the major achievements of the field, and it entered his work in several ways. The essence of the second law is the statement that all processes can be quantified by an entropy function whose increase is a necessary and sufficient condition for a process to occur. As a fundamental physical law no deviation, however tiny, is permitted, and its consequences are far-reaching. Current wisdom regards the second law as a consequence of statistical mechanics, but the entropy principle, which was discovered before statistical mechanics was invented, ought to be derivable from a few logical principles without recourse to Carnot cycles, ideal gases and other assumptions about such things as 'heat', 'hot' and 'cold', 'temperature', 'reversible processes', etc. Like conservation of energy (the "first" law), the existence of a law so precise and so model-independent must have a logical foundation that is independent of the details of the constitution of matter. In this lecture the foundations of the subject and the construction (with J. Yngvason) of entropy from a few simple principles will be presented. (No previous familiarity with the subject is required.) A summary can be found in: "A Guide to Entropy and the Second Law of Thermodynamics", Notices of the Amer. Math. Soc. 45, 571-581 (1998). http://www.ams.org/notices/199805/lieb.pdf. arXiv math-ph/9805005. This paper received the American Mathematical Society 2002 Levi Conant prize for "the best expository paper published in either the Notices of the AMS or the Bulletin of the AMS in the preceding five years". See also "A Fresh Look at Entropy and the Second Law of Thermodynamics", Physics Today 53, 32-37 (April 2000). arXiv math-ph/0003028.
I will focus on two issues of spatial population dynamics. The first is the dynamics and spread of populations in space, joint work with Brett Melbourne that begins with experimental work on the flour beetle Tribolium and then couples it with analyses of stochastic population models incorporating different sources of variability to understand highly variable spread rates. The second part of the talk will cover questions related to controlling the spread of invasive species.
We propose a general framework to design asymptotic-preserving schemes for the Boltzmann kinetic equation and related equations. Numerically solving these equations is challenging due to the nonlinear stiff collision (source) terms induced by a small mean free path or relaxation time. We propose to penalize the nonlinear collision term by a BGK-type relaxation term, which can be solved explicitly even if discretized implicitly in time. Moreover, the BGK-type relaxation operator helps to drive the density distribution toward the local Maxwellian, and thus naturally yields an asymptotic-preserving scheme in the Euler limit. The scheme so designed does not need any nonlinear iterative solver or the use of Wild sums. It is uniformly stable in terms of the (possibly small) Knudsen number, and can capture the macroscopic fluid dynamic (Euler) limit even if the small scale determined by the Knudsen number is not numerically resolved. It is also consistent with the compressible Navier-Stokes equations if the viscosity and heat conductivity are numerically resolved. The method is applicable to many other related problems, such as hyperbolic systems with stiff relaxation and high-order parabolic equations.
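The key mechanism (implicit treatment of a stiff relaxation term that nonetheless solves in closed form) can be seen on a scalar toy problem; this sketch is my own illustration of the general idea, not the scheme from the talk:

```python
# Toy model: du/dt = (u_eq - u)/eps with eps << dt.  Backward Euler in the
# stiff term gives  u_new = u + (dt/eps)*(u_eq - u_new),  which is implicit
# but solvable explicitly -- and stable uniformly in eps.
def ap_step(u, u_eq, dt, eps):
    return (u + (dt / eps) * u_eq) / (1 + dt / eps)

u, u_eq = 5.0, 1.0
eps, dt = 1e-8, 0.1          # time step vastly larger than the stiff scale
for _ in range(10):
    u = ap_step(u, u_eq, dt, eps)
# u is driven to the equilibrium u_eq without resolving the eps scale
```

An explicit (forward Euler) step would require dt on the order of eps to remain stable; the closed-form implicit step is what makes the scheme asymptotic-preserving in this toy setting.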
The starting point of my talk is approximation algorithms for NP-hard problems in combinatorial optimization which are based on semidefinite programming (SDP), a recent and powerful method in convex optimization. One example is the theta number of Lovász, which provides an upper bound for the largest size of an independent set of a finite graph based on the solution of a semidefinite program. Many problems in extremal discrete geometry can be formulated in this way, but for infinite geometric graphs. A famous example is the kissing number problem, which goes back to a discussion of Newton and Gregory in 1694: What is the maximum number of non-overlapping unit balls that can simultaneously touch a central unit ball? To tackle this problem we generalize the theta number (and strengthenings based on SDP hierarchies) to infinite graphs, which yields an infinite-dimensional semidefinite program. By using symmetries and tools from harmonic analysis, it turns out that one can solve these semidefinite programs by computer, giving the best known upper bounds in dimensions up to 24.
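For readers unfamiliar with the theta number, one standard formulation (a reminder in common notation, not taken from the abstract) is the semidefinite program

```latex
\vartheta(G) \;=\; \max\Bigl\{\, \langle J, X\rangle \;:\; X \succeq 0,\;
\operatorname{Tr}(X) = 1,\; X_{ij} = 0 \ \text{for all } \{i,j\} \in E \,\Bigr\},
\qquad \alpha(G) \le \vartheta(G),
```

where J is the all-ones matrix and α(G) is the independence number of G; the generalization in the talk replaces the finite matrix variable by a positive-definite kernel on an infinite vertex set.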
The hypothesis that the structure of high Reynolds number turbulence consists of thin shear layers, with thickness of the order of the Taylor microscale, has been further confirmed by numerical studies by Ishihara and Kaneda of conditional statistics and local dynamics; by PIV measurements of lab experiments by Wirth and Nickels; and by further developments of the theory, especially the transport of energy into the layers leading to the generation of intense structures on the scale of the Kolmogorov microscale. This analysis provides (for the first time?) a physically justifiable explanation for the higher moments and why these are generally less isotropic than lower-order moments, e.g. in thermal convection. The theoretical and practical implications of the flow near and within these layers for fluctuations of temperature and other scalars are explained. Ref: J. C. R. Hunt, I. Eames, P. Davidson, J. Westerweel, J. Fernando, S. Voropayev, M. Braza, J. Hydro-Env. Res., 2010.
“It is impossible to trisect an arbitrary angle.” We (the mathematical community) have been certain of this for the past 170 years. Missing in that statement is the qualifying phrase “... using a straightedge and compass.” But if we are clumsy enough to scratch our straightedge in two places, we can in fact trisect an arbitrary angle, a result that was known to Archimedes. We are equally confident in the much more modern assertion: “There is no algorithm to solve an arbitrary quintic,” where the oft-omitted qualifying phrase is “... using the extraction of roots.” In this talk, we will give an example of a quintic whose roots are not expressible using the extraction of roots, but whose real roots are constructible using a compass and twice-notched straightedge. We will also analyze the power and limitations of these tools, and present some open questions.
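Archimedes' trisection can be checked numerically. In the classical neusis construction (the sketch below is my own sanity check, not a proof): on a unit circle about O with A = (1, 0) and angle AOB = θ, draw a line through B meeting the circle again at C and the extended diameter at a point D with |CD| = 1; then angle BDA = θ/3.

```python
import math

# Numeric check of Archimedes' neusis trisection: find D = (-t, 0) such
# that the segment DB cuts the unit circle at C with |CD| = 1, then
# measure the angle the line DB makes with the diameter at D.
def trisect_check(theta):
    B = (math.cos(theta), math.sin(theta))

    def cd_minus_radius(t):
        D = (-t, 0.0)
        dx, dy = B[0] - D[0], B[1] - D[1]
        a = dx * dx + dy * dy                      # |DB|^2
        b = 2 * (D[0] * dx + D[1] * dy)
        c = D[0] * D[0] + D[1] * D[1] - 1.0
        s = (-b - math.sqrt(b * b - 4 * a * c)) / (2 * a)   # nearer hit C
        return s * math.sqrt(a) - 1.0              # |CD| - radius

    lo, hi = 1.0 + 1e-9, 10.0                      # bisection for |CD| = 1
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if cd_minus_radius(mid) < 0:
            lo = mid
        else:
            hi = mid
    t = 0.5 * (lo + hi)
    return math.atan2(B[1], B[0] + t)              # angle BDA

theta = 1.2
alpha = trisect_check(theta)                       # should be theta / 3
```

The proof behind the check is the exterior-angle argument: triangles DCO and OCB are isosceles (|DC| = |CO| = |OB| = 1), so the central angle is three times the angle at D.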
Fluid flows in the presence of free surfaces occur in a great many situations in nature; examples include waves on the ocean and the flow of groundwater. In this talk, I will discuss my contributions to the understanding of the systems of nonlinear partial differential equations which model such phenomena. The most important step in these results is making a suitable formulation of the problem. Influenced by the computational work of Hou, Lowengrub, and Shelley, we formulate the problems in natural, geometric variables. I will discuss my proofs (most of which are joint with Nader Masmoudi) of existence of solutions to the initial value problems for vortex sheets and water waves. I will also discuss computational results, including work with Jon Wilkening on the computation of special solutions, especially time-periodic interfacial flows.
This lecture is part of the Fall 2010 Seminar series and was recorded on September 3, 2010, in Physical Science Center A Wing, Room 107.