Michael Freedman is a mathematician who was awarded the Fields Medal in 1986 for his solution of the 4-dimensional Poincare conjecture. Mike has also received numerous other awards for his scientific contributions, including a MacArthur Fellowship and the National Medal of Science. In 1997, Mike joined Microsoft Research, and in 2005 he became the director of Station Q, Microsoft's quantum computing research lab. As of 2023, Mike is a Senior Research Scientist at the Center of Mathematical Sciences and Applications at Harvard University.
Patreon (bonus materials + video chat): https://www.patreon.com/timothynguyen
In this wide-ranging conversation, we give a panoramic view of Mike's extensive body of work over the span of his career. It is divided into three parts: early, middle, and present day, which respectively cover his work on the 4-dimensional Poincare conjecture, his transition to topological physics, and finally his recent work applying ideas from mathematics and philosophy to social economics. Our conversation is a blend of both the nitty-gritty details and the anecdotal storytelling that can only be obtained from a living legend.
I. Introduction
00:00 : Preview
01:34 : Fields Medalist working in industry
03:24 : Academia vs industry
04:59 : Mathematics and art
06:33 : Technical overview
II. Early Mike: The Poincare Conjecture (PC)
08:14 : Introduction, statement, and history
14:30 : Three categories for PC (topological, smooth, PL)
17:09 : Smale and PC for d at least 5
17:59 : Homotopy equivalence vs homeomorphism
22:08 : Joke
23:24 : Morse flow
33:21 : Whitney Disk
41:47 : Casson handles
50:24 : Manifold factors and the Whitehead continuum
1:00:39 : Donaldson's results in the smooth category
1:04:54 : (Not) writing up full details of the proof then and now
1:08:56 : Why Perelman succeeded
III. Mid Mike: Topological Quantum Field Theory (TQFT) and Quantum Computing (QC)
1:10:54 : Introduction
1:11:42 : Cliff Taubes, Raoul Bott, Ed Witten
1:12:40 : Computational complexity, Church-Turing, and Mike's motivations
1:24:01 : Why Mike left academia, Microsoft's offer, and Station Q
1:29:23 : Topological quantum field theory (according to Atiyah)
1:34:29 : Anyons and a theorem on Chern-Simons theories
1:38:57 : Relation to QC
1:46:08 : Universal TQFT
1:55:57 : Witten: Donaldson theory cannot be a unitary TQFT
2:01:22 : Unitarity is possible in dimension 3
2:05:12 : Relations to a theory of everything?
2:07:21 : Where topological QC is now
IV. Present Mike: Social Economics
2:11:08 : Introduction
2:14:02 : Lionel Penrose and voting schemes
2:21:01 : Radical markets (pun intended)
2:25:45 : Quadratic finance/funding
2:30:51 : Kant's categorical imperative and a paper of Vitalik Buterin, Zoe Hitzig, Glen Weyl
2:36:54 : Gauge equivariance
2:38:32 : Bertrand Russell: philosophers and differential equations
V. Outro
2:46:20 : Final thoughts on math, science, philosophy
2:51:22 : Career advice
Some Further Reading:
Mike's Harvard lecture on PC4: https://www.youtube.com/watch?v=TSF0i6BO1Ig
Behrens et al. The Disc Embedding Theorem.
M. Freedman. Spinoza, Leibniz, Kant, and Weyl. arXiv:2206.14711
Twitter: @iamtimnguyen
Webpage: http://www.timothynguyen.org
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How I learned to stop worrying and love skill trees, published by junk heap homotopy on May 23, 2023 on LessWrong.
There seems to be a stupid, embarrassingly simple solution to the following seemingly unrelated problems:
Upskilling is hard: the available paths are often lonely and uncertain, workshops aren't mass-producing Paul Christianos, and it's hard for people to stay motivated over long periods of time unless they uproot their entire lives and move to London/Berkeley[1].
It takes up to five years for entrants in alignment research to build up their portfolio and do good work–too slow for short timelines.
Alignment researchers don't seem to stack.
LessWrong–and by extension greenfield alignment–is currently teetering on the edge of an Eternal September: most new people are several hundred thousand words of reading away from automatically avoiding bad ideas, let alone being able to discuss them with good truth-seeking norms.
We don't have a reliable way to gauge the potential of someone we've never met to do great work[2].
This is not a new idea. It's a side project of mine that could be built by your average first-year CS undergrad and that I have shelved multiple times. It's just that, for some reason, like moths to a flame or a dog to its vomit I just keep coming back to it. So I figured, third time's the charm, right?
The proposal (which I call 'Blackbelt' for obscure reasons) is really simple: a dependency graph of tests of skill. Note that last bit: 'tests of skill'. If my intention was merely to add to the growing pile of Intro to AI Safety (Please Don't Betray Us and Research Capabilities Afterward)[3] courses out there then we can all just pack up and go home and forget this poorly-worded post ever existed. But alas, my internal model says we will not go from doomed to saved with the nth attempt at prettifying the proof of the rank-nullity theorem. The real problem is not finding better presentations or a better Chatty McTextbook explanation, but can be found by observing what does not change. That is, let's invert the question of how to produce experts and instead ask: "What things should I be able to do, to be considered a minimum viable expert in X?"
So for instance, since we're all trying to get more dignity points in before 2028, let's consider the case of the empirical alignment researcher. The minimum viable empirical researcher (and by 'minimum', I mean it) should probably know:
How to multiply two matrices together
How to train a handwriting classifier on the MNIST dataset
How to implement backprop from scratch
How to specify a reward function as Python code
etc.
Sure, there's nothing groundbreaking here, but that's precisely the point. What happens in the wild, in contrast, looks something like grocery shopping: "Oh, you need vector calculus, and set theory, and–textbooks? Read Axler, then Jaynes for probability 'cause you don't want to learn from those dirty, dirty frequentists...yeah sprinkle in some category theory as well from Lawvere, maybe basic game theory, then go through MLAB's course..." Maybe it's just me, but I get dizzy when every other word of someone's sentence packs months' worth of implied thankless work.
Never mind how much it sounds like a wide-eyed Victorian-era gentleman rattling off classics one supposedly has read: reading a whole textbook is not an atomic action, let alone going through entire courses and assuming infinite motivation on the part of the victim[4].
There's no accounting for tests
What is a test, really? Related: the most accurate map of the territory is the territory itself, but what happens when the territory is slippery[5]? An apocryphal story goes that, when Pope Benedict XI was in search of a fresco artist, he sent a messenger to a man named Giotto. The messenger asked him to provide a demonstration of ...
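The "dependency graph of tests of skill" proposed in this excerpt is, at bottom, a small data structure. Here is a minimal Python sketch of such a graph (not from the post; the node names and prerequisite edges are hypothetical, chosen to mirror the "minimum viable empirical researcher" examples above):

# A minimal sketch using only the Python standard library.
# Nodes are tests of skill; edges say which tests must be passed before a test unlocks.
from dataclasses import dataclass, field

@dataclass
class SkillTest:
    name: str
    prerequisites: list = field(default_factory=list)  # names of tests to pass first

    def unlocked(self, passed):
        # A test becomes attemptable once every prerequisite has been passed.
        return all(p in passed for p in self.prerequisites)

tests = [
    SkillTest("multiply_two_matrices"),
    SkillTest("train_mnist_classifier", ["multiply_two_matrices"]),
    SkillTest("implement_backprop_from_scratch", ["multiply_two_matrices"]),
    SkillTest("specify_reward_function_in_python", ["train_mnist_classifier"]),
]

passed = {"multiply_two_matrices"}  # tests this person has already passed
available = [t.name for t in tests if t.name not in passed and t.unlocked(passed)]
print(available)  # ['train_mnist_classifier', 'implement_backprop_from_scratch']

The point the post stresses is that each node is a test you either pass or don't, not a reading you "go through"; the graph only records which tests become available once earlier ones are passed.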
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: You are probably not a good alignment researcher, and other blatant lies, published by junk heap homotopy on February 2, 2023 on LessWrong.
When people talk about research ability, a common meme I keep hearing goes something like this:
Someone who would become a great alignment researcher will probably not be stopped by Confusions about a Thorny Technical Problem X that's Only Obvious to Someone Who Did the Right PhD.
Someone who would become a great alignment researcher will probably not have Extremely Hairy But Also Extremely Common Productivity Issue Y.
Someone who would become...great...would probably not have Insecurity Z that Everyone in the Audience Secretly Has.
What is the point of telling anyone any of these? If I were being particularly uncharitable, I'd guess the most obvious explanation is that it's some kind of barely-acceptable status play, kind of like the budget version of saying "Are you smarter than Paul Christiano? I didn't think so." Or maybe I'm feeling a bit more generous today, so I'll think that it's Wittgenstein's Ruler, a convoluted call for help pointing out the insecurities that the said person cannot admit to themselves. But this is LessWrong and it's not customary to be so suspicious of people's motivations, so let's assume that it's just an honest and pithy way of communicating the boundaries of hard-to-articulate internal models.
First of all, what model? Most people here believe some form of biodeterminism. That we are not born tabula rasa, that our genes influence the way we are, that the conditions in our mother's womb can and do often snowball into observable differences when we grow up. But the thing is, these facts do not constitute a useful causal model of reality. IQ, aka (a proxy for) the most important psychometric construct ever discovered and most often the single biggest predictor of outcomes in a vast number of human endeavours, is not a gears-level model.
Huh? Suppose it were, and it were the sole determinant of performance in any mentally taxing field. Take two mathematicians with the exact same IQ. Can you tell me who would go on to become a number theorist vs an algebraic topologist? Can you tell me who would over-rely on forcing when disentangling certain logic problems? Can you tell me why Terence Tao hasn't solved the Riemann hypothesis yet?
There is so much more that goes into becoming a successful scientist than can be distilled into a single number that it's not even funny. Even if said number means a 180 IQ scientist is more likely to win a Nobel than a 140 IQ nobody. Even if said number means it's a safer bet to just skim the top 10 of whatever the hell the modern equivalent of the SMPY is than to take a chance on some rando on the other side of the world who is losing sleep on the problem.
But okay, sure. Maybe approximately no one says that it's just IQ. Nobody on LessWrong is so naïve as to have a simple model with no caveats, so: let's say it's not just IQ but some other combo of secret sauces. Maybe there's like eight variables that together form a lognormal distribution. Then the question becomes: how the hell is your human behaviour predicting machine so precise that you're able to say with abject confidence what can exclude someone from doing important work?
Do you actually have a set of values in mind for each of these sliders, for your internal model of which kinds of people alone can do useful research? Did you actually go out there and measure which of the people on this page have which combinations of factors?
I think the biggest harm that comes from making this kind of claim is that, like small penis discourse (WARNING: CW), there's a ton of collateral damage done when you say it out loud that I think far outweighs whatever clarity the listeners gain. I mean, what's the chain of thought gonna be for the oth...
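One way to read the "eight variables that together form a lognormal distribution" remark above: if research output were the product of several independent positive factors, the resulting distribution is heavy-tailed and approximately lognormal (exactly lognormal in the toy below, where each factor is itself lognormal), which is part of why confident point predictions about individuals are shaky. A purely illustrative simulation, not from the post, with made-up factor counts and parameters:

# Toy model: output is the product of NUM_FACTORS independent positive multipliers.
import random
import statistics

random.seed(0)
NUM_FACTORS = 8        # the hypothetical "eight variables" mentioned above
NUM_PEOPLE = 100_000

outputs = []
for _ in range(NUM_PEOPLE):
    output = 1.0
    for _ in range(NUM_FACTORS):
        # each factor is a positive random multiplier; their product is lognormal
        output *= random.lognormvariate(0.0, 0.5)
    outputs.append(output)

outputs.sort()
top_1_percent = outputs[-NUM_PEOPLE // 100:]
print(f"median output: {statistics.median(outputs):.2f}")
print(f"share of total output produced by the top 1%: {sum(top_1_percent) / sum(outputs):.1%}")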
Moobarkfluff! Taebyn is very excited about a new branch of math: Homotopy Type Theory. Give our weekly challenge a try this week and tell us about it on the BFFT chat. Lots of discussion on Last Week Today. Who remembers Cop Rock? Taebyn issues so many challenges this week. Can you do them all? We visit the Trasfurmation Station. How is time reckoned in Star Trek? Frog is starting to sound like a certain mouse. Will Tater accept the commission to draw a wallaby wearing a sock? Only time will tell, and time is what is in rare supply during this episode, so spend some time with us. You will laugh, you will cry (let's face it, you will probably cry more out of pity), you will sigh and high five! Moobarkfluff! https://www.bonfire.com/store/bearly-furcasting/
Support the show (https://ko-fi.com/bearlyfurcasting)
Differential geometer Radek Suchánek likes to connect mathematics with physics. He shares with us:
- how many triangles he drew at university
- which geometry is suited to the road and which to GPS
- what worms do with lassos in different dimensions
- how to enjoy your studies and expand your mental capacity
Startovač: https://www.startovac.cz/patron/misto-problemu/
FB page: https://www.facebook.com/mistoproblemu
Web: https://www.mistoproblemu.cz/
Links:
- differential geometry: https://en.wikipedia.org/wiki/Differential_geometry
- contracting loops: https://en.wikipedia.org/wiki/Homotopy
- studying holes: https://www.quantamagazine.org/topology-101-how-mathematicians-study-holes-20210126/
Timestamps:
(00:00) introduction
(1:06) different geometries and their motivations
(27:58) spaces and holes
(40:22) relations and differences between mathematicians and physicists
(54:38) the path to mathematics and a personal approach
Stephen Wolfram answers questions from his viewers about the history of science and technology as part of an unscripted livestream series, also available on YouTube here: https://wolfr.am/youtube-sw-qa Questions include: Did you ever meet any of the Manhattan Project spies? (Theodore Hall, Klaus Fuchs, Alan Nunn May) - Did you have any interactions with Aaron Swartz? - Is it possible that while moving from the 20 original equations used by Maxwell to the 4 we use today we treated something as negligible by mistake because quantum theory was not around? - Did you meet Elon Musk or Steve Jobs? - What did you do and who did you meet at the Institute for Advanced Study? (did not realise you went there until reading your article about Tini Veltman) - If you make fundamental breakthroughs in Homotopy type theory I bet IAS would be very interested - Did you meet any person related to the "Human Genome Project"? Eric Lander, Craig Venter...? - Did you interact with Claude Shannon? - The French composer Erik Satie would only eat white food too - Any anecdotes about Ed Witten or Leonard Susskind? - Are you familiar with the work of Roy Frieden about Physics from Fisher Information? What do you think about it? - What's a good place to get one's genome sequenced? - Do you know how Joseph Fourier developed the math that led to the Fourier transform? - IAS was the perfect place for Kurt Gödel. Did you read Gödel's citizenship hearing? He pointed out logical inconsistencies in the American Constitution - Could you give us a couple of Feynman and Steve Jobs anecdotes? - The book "Faster than Thought" (1953) has the following Leibniz quote: "It is unworthy of excellent men to lose hours like slaves in the labour of calculation which could safely be relegated to anyone else if machines were used". It seems that you have the same opinion as Leibniz.
Carlo is a postdoc in the Computer Science Department at Carnegie Mellon University, where he received a Ph.D. under Robert Harper. He previously studied at Indiana University Bloomington, where he received a B.S. in Mathematics and in Computer Science. Today Carlo joined us to discuss Homotopy Type Theory, a new foundation for mathematics based on a recently-discovered connection between Homotopy Theory and Type Theory. Carlo explains intuitively what Homotopy Type Theory is and how it is used, and then goes over various possible implementations of Homotopy Type Theory in a theorem-proving environment such as Coq. Finally, he fields questions on Homotopy Type Theory, theorem-proving, and other topics from the Boston Computation Club audience.
The Boston Computation Club can be found at https://bstn.cc/
Carlo Angiuli can be found at https://www.cs.cmu.edu/~cangiuli/
A video recording of this talk is available at https://youtu.be/VMqF06fDljU
For more on Homotopy Type Theory refer to https://homotopytypetheory.org/book/
Welcome to the Christmas special community edition of MLST! We discuss some recent and interesting papers from Pedro Domingos (are NNs kernel machines?), DeepMind (can NNs out-reason symbolic machines?), Anna Rogers (When BERT Plays The Lottery, All Tickets Are Winning), and Prof. Mark Bishop (even causal methods won't deliver understanding). We also cover our favourite bits from the recent Montreal AI event run by Prof. Gary Marcus (including Rich Sutton, Danny Kahneman and Christof Koch). We respond to a reader mail on Capsule networks. Then we do a deep dive into Type Theory and Lambda Calculus with community member Alex Mattick. In the final hour we discuss inductive priors and label information density with another one of our discord community members.
Panel: Dr. Tim Scarfe, Yannic Kilcher, Alex Stenlake, Dr. Keith Duggar
Enjoy the show and don't forget to subscribe!
00:00:00 Welcome to Christmas Special!
00:00:44 SoTa meme
00:01:30 Happy Christmas!
00:03:11 Paper -- DeepMind - Outperforming neuro-symbolic models with NNs (Ding et al)
00:08:57 What does it mean to understand?
00:17:37 Paper -- Prof. Mark Bishop - Artificial Intelligence is stupid and causal reasoning won't fix it
00:25:39 Paper -- Pedro Domingos - Every Model Learned by Gradient Descent Is Approximately a Kernel Machine
00:31:07 Paper -- Bengio - Inductive Biases for Deep Learning of Higher-Level Cognition
00:32:54 Anna Rogers - When BERT Plays The Lottery, All Tickets Are Winning
00:37:16 Montreal AI event -- Gary Marcus on reasoning
00:40:37 Montreal AI event -- Rich Sutton on universal theory of AI
00:49:45 Montreal AI event -- Danny Kahneman, System 1 vs 2 and Generative Models ala free energy principle
01:02:57 Montreal AI event -- Christof Koch - Neuroscience is hard
01:10:55 Markus Carr -- reader letter on capsule networks
01:13:21 Alex response to Marcus Carr
01:22:06 Type theory segment -- with Alex Mattick from Discord
01:24:45 Type theory segment -- What is Type Theory
01:28:12 Type theory segment -- Difference between functional and OOP languages
01:29:03 Type theory segment -- Lambda calculus
01:30:46 Type theory segment -- Closures
01:35:05 Type theory segment -- Term rewriting (confluency and termination)
01:42:02 Type theory segment -- eta term rewriting system - Lambda Calculus
01:54:44 Type theory segment -- Types / semantics
02:06:26 Type theory segment -- Calculus of constructions
02:09:27 Type theory segment -- Homotopy type theory
02:11:02 Type theory segment -- Deep learning link
02:17:27 Jan from Discord segment -- Chrome MRU skit
02:18:56 Jan from Discord segment -- Inductive priors (with XMaster96/Jan from Discord)
02:37:59 Jan from Discord segment -- Label information density (with XMaster96/Jan from Discord)
02:55:13 Outro
Pavel Mnev is an Assistant Professor of Mathematics at the University of Notre Dame. He was awarded his Ph.D. in Mathematical Physics in 2008 from the St. Petersburg Department of the Steklov Mathematical Institute at the Russian Academy of Sciences under the supervision of Acad. L. D. Faddeev. He was awarded the Andre Lichnerowicz Prize in Poisson Geometry in 2016, and has previously worked at the Max Planck Institute for Mathematics in Bonn, Germany. His research is in mathematical physics; more precisely, he is interested in the interactions of quantum field theory with topology, homological/homotopical algebra and supergeometry. His website is here: https://www3.nd.edu/~pmnev/
We would like to thank Pavel for being on our show "Meet a Mathematician" and for sharing his stories and perspective with us!
www.sensemakesmath.com
PODCAST: http://sensemakesmath.buzzsprout.com/
TWITTER: @SenseMakesMath
PATREON: https://www.patreon.com/sensemakesmath
FACEBOOK: https://www.facebook.com/SenseMakesMath
STORE: https://sensemakesmath.storenvy.com
Support the show (https://www.patreon.com/sensemakesmath)
Phillip Jedlovec (A.K.A. PJ) is a Mathematics Lecturer at Santa Clara University. He received his Ph.D. in 2018 from the University of Notre Dame under the supervision of Mark Behrens. His research is in Algebraic Topology, specifically in unstable homotopy theory. He is also interested in philosophy of math and areas of intersection between mathematics and economics. He is deeply passionate about teaching mathematics in innovative and effective ways. We talk about his journey through mathematics, as well as teaching mathematics (flipped classrooms) and philosophy of math. His website can be found here: https://pjedlovec.github.io/
We'd like to thank PJ for being on our show "Meet a Mathematician" and for sharing his stories and perspective with us!
www.sensemakesmath.com
PODCAST: http://sensemakesmath.buzzsprout.com/
TWITTER: @SenseMakesMath
PATREON: https://www.patreon.com/sensemakesmath
FACEBOOK: https://www.facebook.com/SenseMakesMath
STORE: https://sensemakesmath.storenvy.com
Support the show (https://www.patreon.com/sensemakesmath)
Fredrik talks to Bartosz Milewski - programmer, writer and creator of mind-expanding presentations - about a wide range of things in the lands between mathematics and programming. Bartosz explains his increasing interest in mathematics, type and category theory, and why he thinks mathematics and programming can come, and are coming, closer together. We eventually get to the topic of Bartosz’ talk last year, and perhaps the only way humans can understand things and how that affects what we discover. Perhaps even what we are able to discover. Recorded on stage at Øredev 2018. Thank you Cloudnet for sponsoring our VPS! Comments, questions or tips? We are @kodsnack, @tobiashieta, @iskrig and @bjoreman on Twitter, have a page on Facebook and can be emailed at info@kodsnack.se if you want to write longer. We read everything we receive. If you enjoy Kodsnack we would love a review in iTunes!
Links:
Øredev 2018
Bartosz Milewski
Bartosz’ presentation the day before - Programming with math
Bartosz’s second presentation of the year is unfortunately not online yet
Type theory
Category theory
Template metaprogramming
Category theory for the working mathematician
Functor
Monad
Richard Feynman
Category theory for programmers
Bartosz’ videos on Youtube
Quadratic equations
Fermat’s last theorem and the proof
Homotopy type theory
The Curry-Howard isomorphism
Bartosz’ talk from last year - The earth is flat
Titles:
I skipped a lot of slides
Something related to math
Pushed by external forces
What is fascinating to me at the moment
Tone down the category theory
I’m really comfortable with math
I discovered a whole new franchise
I read a few first sentences
The idea of category theory is not that difficult
Multiply and divide things for months
This gap between programming and math
(There is) A lot of commonality
How to split things and how to compose them
The science of composition
We humans have to structure things
The different ways of splitting things
Mathematics is the future
Who wants to program in assembly language
Test-driven proof development
A lot of hand-waving in math as well
Mechanizing proofs
An outgrowth of type theory
The only way we humans can understand nature
Life can only exist in a decomposable environment
Our brains work by decomposing things
Why would there be a simple solution?
Jack du Rose is a former jewelry and diamond artist, current blockchain nerd and co-founder of Colony.io & Ownage.io. He is phenomenally excited by the power shift our decentralised future brings. When not making $100m diamond skulls, or paradigm changing social platforms, Jack enjoys a nice cup of tea and a sit-down. Dr. Aron Fischer received his Ph.D. in mathematics from the City University of New York in 2015. His specialization was in Algebraic Topology and Homotopy theory. Since his graduation, he has been studying homotopy type theory in his free time and is interested in how (higher) type theory can help us write safer smart contracts. He is working for Colony in R&D, developing the governance protocols, and for the Ethereum Foundation's Swarm team, where he is working on state and payment channels for the Swarm incentive structure.
Jack's Challenge: Get involved in the Ethereum community and subreddit.
Aron's Challenge: Conduct your own Ethereum transaction.
Resources Mentioned: Ethereum Subreddit, Ethereum Github, Gitter.im, Kanban - Toyota's Revolutionary Production System
Like my crypto episodes? Support the work!
Ethereum: 0x1c5f5da1efad45078c41bceb18eb777099138e6b
Bitcoin: 13RcQZnZM4Lx6bw37YVdiv6Uc2X5b7anF3
If you liked this interview, check out Episode 221 with Ryan Snowden or 158 with Roger Ver for a discussion of blockchains and crypto.
Subscribe on iTunes | Stitcher | Overcast | PodBay
Follow The SwiftCoder Journey at WWDC 2017 via Instagram. I'll be posting a lot of Stories: https://www.instagram.com/swiftcoders/ Come to The SwiftCoders Meet & Greet at AltConf 2017. Find all the details here: https://swiftcoders.eventfarm.com In this episode, I interview Robert Widmann. Robert is a rising Junior at Carnegie Mellon University where he studies Mathematics. He was an intern at Apple on the Swift Compiler Team in 2016. He will be interning at Apple on the Swift Static Analysis Team this summer, and he's also a frequent contributor to Swift Open Source. I really wanted to interview Robert because he helped me get my first Swift Open Source pull request merged in. The Swift Community is really lucky to have Robert as a member. Anyone interested in talking about Swift or getting started with Swift Open Source should definitely reach out to him on Twitter. Enjoy! Links: https://twitter.com/CodaFi_ http://xn--wxak1a.com https://github.com/CodaFi https://github.com/typelift https://en.wikipedia.org/wiki/No_true_Scotsman https://en.wikipedia.org/wiki/Lego_Mindstorms http://www.vanschneider.com https://en.wikipedia.org/wiki/APL_(programming_language) https://en.wikipedia.org/wiki/Haskell_(programming_language) https://en.wikipedia.org/wiki/Depth-first_search https://en.wikipedia.org/wiki/Tree_traversal https://en.wikipedia.org/wiki/Homotopy_type_theory https://github.com/HoTT/HoTT https://en.wikipedia.org/wiki/Agda_(programming_language) https://bugs.swift.org/ https://twitter.com/UINT_MIN https://github.com/apple/swift/blob/master/docs/Lexicon.rst https://github.com/trill-lang/trill https://twitter.com/aciidb0mb3r https://gist.github.com/CodaFi/fe42d673f1a37395ffd1 https://github.com/apple/swift/pull/8908 Listen on iTunes. Support this podcast via Patreon. Questions, comments, or you just wanna say Hi? Contact your host @garricn on Twitter. This episode was recorded using the Cast platform by @JulianLepinski. Wanna start your own podcast? Try Cast!
Episode 3: Dan Licata on Homotopy Type Theory
Majumdar, A (University of Bath) Tuesday 09 April 2013, 10:00-11:00
Pridham, JP (University of Cambridge) Thursday 04 April 2013, 15:00-16:00
Dotsenko, V (Trinity College Dublin) Thursday 04 April 2013, 11:00-12:00
Berglund, A (Stockholm University) Tuesday 02 April 2013, 13:30-14:30
Fresse, B (Université Lille 1) Tuesday 12 March 2013, 14:00-15:15
Fresse, B (Université Lille 1) Tuesday 12 March 2013, 15:45-17:00
Fresse, B (Université Lille 1) Tuesday 05 March 2013, 15:45-17:00
Vallette, B (Université de Nice Sophia Antipolis) Wednesday 30 January 2013, 10:30-12:00
Moerdijk, I (Radboud Universiteit Nijmegen) Wednesday 09 January 2013, 15:30-16:30
Steve Awodey (CMU/MCMP) gives a talk at the MCMP Colloquium (13 June, 2012) titled "Homotopy Type Theory and Univalent Foundations of Mathematics". Abstract: Recent advances in foundations of mathematics have led to some developments that are significant for the philosophy of mathematics, particularly structuralism. The discovery of an interpretation of constructive type theory into homotopy theory suggests a new approach to the foundations of mathematics with both intrinsic geometric content and a computational implementation. In this setting, leading homotopy theorist Vladimir Voevodsky has proposed a new axiom for foundations with both geometric and logical significance: the Univalence Axiom. It captures the familiar aspect of informal mathematical practice, according to which one can identify isomorphic objects. While it is incompatible with conventional foundations, it is a powerful addition to homotopy type theory, and forms the basis of the new Univalent Foundations Program.
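For reference, the Univalence Axiom mentioned in the abstract is commonly stated as follows (this is the standard formulation from the Homotopy Type Theory book, not a quote from the talk): for any types $A, B$ in a univalent universe $\mathcal{U}$, the canonical map sending an identification of types to an equivalence,
\[
\mathsf{idtoeqv} : (A =_{\mathcal{U}} B) \longrightarrow (A \simeq B),
\]
is itself an equivalence, so that informally $(A =_{\mathcal{U}} B) \simeq (A \simeq B)$: identifications between types are the same as equivalences between them, which is the precise sense in which isomorphic objects can be identified.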