Alexander Repenning created AgentSheets, an environment to help kids develop computational thinking skills. It wrapped an unusual computational model with an even more unusual user interface. The result was divisive. It inspired so many other projects, whilst being rejected at every turn and failing to catch on the way Scratch later did. So in 2017, Repenning published this obit of a paper, Moving Beyond Syntax: Lessons from 20 Years of Blocks Programming in AgentSheets, which covers his findings over the years as AgentSheets evolved and transformed, and gives perspective on block-based programming, programming-by-example, agents / rule / rewrite systems, automata, and more. This is probably the most "normal" episode we've done in a while — we stay close to the text and un-clam many a thought-tickling pearl. I'm saying that sincerely now to throw you off our scent the next time we get totally lost in the weeds. I hear a clock ticking. Links $ Do you want to move beyond syntax? Frustrated by a lack of syntactic, semantic, or pragmatic support? Join our Patreon! Choose the tier that best reflects your personal vision of the future of coding. Get (frequently unhinged) monthly bonus content. Most of all: let us know that you enjoy this thing we do, and help us keep doing it for years to come. Argos, for our non-UK listeners. They were acquired by future TodePond sponsor, Sainsbury's. Once again, I am asking for your Marcel Goethals makes a lot of cool weird stuff and is a choice follow. Scratch isn't baby programming. Also, you should try this bizarre game Ivan programmed in 3 blocks of Scratch. Sandspiel Studio is a delightful block-based sand programming simulator automata environment. Here's a video of Lu and Max introducing it. Simple Made Easy, a seminal talk by Rich Hickey. Still hits, all these years later. Someday we'll do an episode on speech acts. Rewrite rules are one example of rewriting in computing. Lu's talk —and I quote— "at Cellpond", was actually at SPLASH, about Cellpond, and it's a good talk, about —and I quote— "actually, what if they didn't give up on rewrite rules at this point in history and what if they went further?" Oh yeah — Cellpond is cool. Here's a video showing you how it works. And here's a video studying how that video works. And here's a secret third thing that bends into a half-dimension. Here's Repenning's "rule-bending" paper: Bending the Rules: Steps Toward Semantically Enriched Graphical Rewrite Rules I don't need to link to SimCity, right? You all know SimCity? Will Wright is, arguably, the #1 name in simulation games. Well, you might not have caught the fantastic article Model Metropolis that unpacks the (inadvertently?) libertarian ideology embodied within the design of its systems. I'd also be remiss not to link to Polygon's video (and the corresponding write-up), which lend a little more colour to the history. Couldn't find a good link to Blox Pascal, which appears in the paper Towards "Second Generation" Interactive, Graphical Programming Environments by Ephraim P. Glinert, which I also couldn't find a good link to. Projectional / Structural Editor. Here's a good one. Baba is You Vernacular Programmers Filling Typed Holes with Live GUIs is, AFAIK, the most current canonical reference for livelits. I'm not linking to Minecraft. But I will link to the Repeater 32 Checkboxes Wiremod is a… you know what, just watch this. Chomsky Hierarchy The Witness Ivan wrote a colorful Mastodon thread surveying the history of the Connection Machine.
Harder Drive is a must-watch video by the inimitable Tom7. Also couldn't find a good link for TORTIS. :/ Programming by Example (PbE) Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo Alex Warth, one of the most lovely humans Ivan knows, is a real champion of "this, because that". Ivan's magnetic field simulations — Magnets! How do they work? Amit Patel's Red Blob Games, fantastic (fantastic!) explorable explanations that help you study various algorithms and techniques used in game development. Collaborative diffusion — "This article has multiple issues." Shaun Lebron, who you might know as the creator of Parinfer, made a game that interactively teaches you how the ghost AI works in Pac-Man. It's fun! Maxwell's Equations — specifically Gauss's law for magnetism, which states that magnetic fields are solenoidal, meaning they have zero divergence at all points. University of Colorado Boulder has a collection of simulations called PhET. They're… mid, at least when compared to building your own simulation. For instance. Music featured in this episode: snot bubbles ! Send us email, share your ideas in the Slack, and catch us at these normal places: Ivan: Mastodon • Website Jimmy: Mastodon • Website Lu: Mastodon • Website See you in the future! https://futureofcoding.org/episodes/073 Support us on Patreon: https://www.patreon.com/futureofcoding See omnystudio.com/listener for privacy information.
Inventing on Principle, Stop Drawing Dead Fish, The Future of Programming. Yes, all three of them in one episode. Phew! Links $ patreon.com/futureofcoding — Lu and Jimmy recorded an episode about Hest without telling me, and by total coincidence released it on my birthday. Those jerks… make me so happy. Lu's talk at SPLASH 2023: Cellpond: Spatial Programming Without Escape Gary Bernhardt's talk Wat Inventing on Principle by Bret Victor ("""Clean""" Audio) Braid, the good video game from the creator of The Witness David Hellman is the visual artist behind Braid, A Lesson Is Learned but the Damage Is Irreversible, Dynamicland, and… the Braid section of Inventing on Principle. Light Table by Chris Granger Learnable Programming by Bret Victor When Lu says "It's The Line", they're referring to this thing they're working on called Seet (or "see it"), and you can sneak a peek at seet right heet. Paris Fashion Week absolutely struts, and so can you! The Canadian Tuxedo. As the representative of Canada, I can confirm that I own both a denim jacket and denim pants. If you see me at a conference wearing this combo, I will give you a hug. Jimmy runs a personal Lichess data lake. Hot Module Replacement is a good thing. Pygmalion has a lot of juicy silly bits, 'parently. Cuttle is awesome! It's a worthy successor to Apparatus. Toby Schachman, Forrest Oliphant, I think maybe a few other folks too? Crushing it. Oh, and don't miss Toby's episode of this very podcast! Recursive Drawing, another Toby Schachman joint. Screens in Screens in Screens, another Lu Wilson joint. Larry Tesler. Not a fan of modes. Lu writes about No Ideas on their blog, which is actually just a wiki, but it's actually a blog, but it's actually just a garden. When we mention Rich Hickey, we're referring to the talk Simple Made Easy Jacob Collier, ugh. Suffragettes, women advocating for their right to vote, absolutely had a principle. Not sure that we should be directly likening their struggle to what we do in tech. On the other hand, it's good to foster positive movements, to resist incel and other hateful ones. Instead of linking to e/ anything, I'm just gonna link to BLTC for reasons that only make sense to longtime listeners. Stop Writing Dead Programs by Jack Rusher. Jack Rusher? Jack Rusher! It's the fish one, the one with the fish. …Sorry, these aren't actually fish, or something, because they're just drawings. René Magritte is the creator behind La Trahison des Images, origin of "Ceci n'est pas une pipe". Or maybe it was Margit the Fell Omen? Magritte's Words and Images are lovely. Here's an English translation, though it's worth taking a look at the original in context. Acousmatic Music Lu has made art with behaviour — various sands, and CellPond, say. Barnaby Dixon? Barnaby Dixon. Barnaby Dixon! Barnaby Dixon!! You can listen to part of Ivan's """Metronome""", if you want. Or you can listen to an early version of the song he's using this metronome to write. Or you can hear snippets of it in the Torn Leaf Zero video (especially the ending). But, like, you could also go make yourself lunch. I recommend mixing up a spicy peanut sauce for your roasted carrots. Shred a bit of cheese, tomato. Toast the bread. Pull the sausages right when the oil starts to spit. Put them straight into the compost. Look at the bottom of the compost bucket. What's down there? It's shiny. Why are you reading this? Why am I writing this? Why do we make this podcast? Wintergatan — Marble Machine exists Oh, I forgot to add a link to Arroost earlier.
You can also watch a pretty good video that is basically an Arroost tutorial, not much to it. There are also some nice examples of things people have made with Arroost. The Rain Room looks pretty cool. It's the exact inverse of how rain works in many video games. YOU MUST PLAY RAIN WORLD. Here's a beautiful demo of a microtonal guitar, and speaking of using complex machines to make music that would be "easier" to make with a computer, here's a microtonal guitar with mechanized frets that can change the tuning dynamically. This entire YT channel is gold. Shane Crowley wrote a lovely blog post about creating music with Arroost. blank.page is a fun experiment in writing with various frictions. Super Meat Boy (the successor to Meat Boy, a Flash game) and Celeste are great examples of communicating tacit knowledge through the design of a simulation. Newgrounds and eBaum's World and Homestar Runner were early examples of (arguably) computer-native media. Hey, here's this episode's requisite link to the T2 Tile Project and Robust-First Computing. I should probably just create a hard-coded section of the episode page template linking to T2, The Witness, and Jack Rusher. The pun-proof Ivan Sutherland made Sketchpad. Planner exists. The PlayStation 3 Cell processor was this weirdly parallel CPU that was a pain in the butt to program. The SpaceMouse Put all metal back into the ground. Music featured in this episode: Fingers from This Score is Butt Ugly The Sailor's Chorus from Wagner's The Flying Dutchman. ! Send us email, share your ideas in the Slack, and catch us at these normal places: Ivan: Mastodon • Website Jimmy: Mastodon • Website Lu: Mastodon • Website See you in the future! https://futureofcoding.org/episodes/71 See omnystudio.com/listener for privacy information.
For the final episode of Elixir Wizards' Season 11 “Branching Out from Elixir,” we're featuring a recent discussion from the Software Unscripted podcast. In this conversation, José Valim, creator of Elixir, interviews Richard Feldman, creator of Roc. They compare notes on the process and considerations for creating a language. This episode covers the origins of creating a language, its influences, and how goals shape the tradeoffs in programming language design. José and Richard share anecdotes from their experiences guiding the evolution of Elixir and Roc. The discussion provides an insightful look at the experimentation and learning involved in crafting new languages. Topics discussed in this episode What inspires the creation of a new programming language Goals and use cases for a programming language Influences from Elm, Rust, Haskell, Go, OCaml, and more Tradeoffs involved in expressiveness of type systems Opportunistic mutation for performance gains in a functional language Minimum version selection for dependency resolution Build time considerations with type checking and monomorphization Design experiments and rolling back features that don't work out History from the first simple interpreter to today's real programming language Design considerations around package management and versioning Participation in Advent of Code to gain new users and feedback Providing performance optimization tools to users in the future Tradeoffs involved in picking integer types and arithmetic Comparing floats and equality checks on dictionaries Using abilities to customize equality for custom types Ensuring availability of multiple package versions for incremental upgrades Treating major version bumps as separate artifacts Roc's focus on single-threaded performance Links mentioned in this episode Software Unscripted Podcast https://feeds.resonaterecordings.com/software-unscripted Roc Programming Language https://www.roc-lang.org/ Roc Lang on Github https://github.com/roc-lang/roc Elm Programming Language https://elm-lang.org/ Elm in Action by Richard Feldman https://www.manning.com/books/elm-in-action Richard Feldman on Github https://github.com/rtfeldman Lua Programming Language https://www.lua.org/ Vimscript Guide https://google.github.io/styleguide/vimscriptfull.xml OCaml Programming Language https://ocaml.org/ Advent of Code https://adventofcode.com/ Roc Language on Twitter https://twitter.com/roclang Richard Feldman on Twitter https://twitter.com/rtfeldman Roc Zulip Chat https://roc.zulipchat.com Clojure Programming Language https://clojure.org/ Talk: Persistent Data Structures and Managed References by Rich Hickey https://www.youtube.com/watch?v=toD45DtVCFM Koka Programming Language https://koka-lang.github.io/koka/doc/index.html Flix Programming Language https://flix.dev/ Clojure Transients https://clojure.org/reference/transients Haskell Software Transactional Memory https://wiki.haskell.org/Softwaretransactional_memory Rust Traits https://doc.rust-lang.org/book/ch10-02-traits.html CoffeeScript https://coffeescript.org/ Cargo Package Management https://doc.rust-lang.org/book/ch01-03-hello-cargo.html Versioning in Golang https://research.swtch.com/vgo-principles Special Guests: José Valim and Richard Feldman.
Episode Notes Clojure Conj: https://2023.clojure-conj.org/ “Design in Practice” by Rich Hickey: https://www.youtube.com/watch?v=c5QF2HjHLSE&list=PLZdCLR02grLpIQQkyGLgIyt0eHE56aJqd&index=1 “Vector Symbolic Architectures in Clojure” by Carin Meier: https://www.youtube.com/watch?v=j7ygjfbBJD0&list=PLZdCLR02grLpIQQkyGLgIyt0eHE56aJqd&index=2 “Clojure LSP: One tool to lint them all” by Eric Dallo: https://www.youtube.com/watch?v=nxcNrjKL2WA&list=PLZdCLR02grLpIQQkyGLgIyt0eHE56aJqd&index=18 “A Relational Model of Data for Large Shared Data Banks” by Ted Codd: https://www.seas.upenn.edu/~zives/03f/cis550/codd.pdf
Out of the Tar Pit is in the grand pantheon of great papers, beloved the world over, with just so much influence. The resurgence of Functional Programming over the past decade owes its very existence to the Tar Pit's snarling takedown of mutable state, championed by Hickey & The Cloj-Co. Many a budding computational philosophizer — both of yours truly counted among them — have been led onward to the late great Bro86 by this paper's borrow of his essence and accident. But is the paper actually good? Like, really — is it that good? Does it hold up to the blinding light of hindsight that 2023 offers? Is this episode actually an April Fools joke, or is it a serious episode that Ivan just delayed by a few weeks because of life circumstances and his own incoherent sense of humour? I can't tell. Apologies in advance. Next time, we're going back to our usual format to discuss Intercal. Links Before anything else, we need to link to Simple Made Easy. If you don't know, now you know! It's a talk by Rich Hickey (creator of Clojure) that, as best as I can tell, widely popularized discussion of simplicity and complexity in programming, using Hickey's own definitions that built upon the Tar Pit paper. Ignited by this talk, with flames fanned by a few others, as functional programming flared in popularity through the 2010s, the words “simple”, “easy”, “complex”, and “reason about” became absolutely raging memes. We also frequently reference Fred Brooks and his No Silver Bullet. Our previous episode has you covered. The two great languages of the early internet era: Perl & Tcl For more on Ivan's “BLTC paradise-engineering wombat chocolate”, see our episode on Augmenting Human Intellect, if you dare. For more on Jimmy's “Satoshi”, see Satoshi Nakamoto, of course. And for Anonymous, go on. Enemy of the State — This film slaps. “Some people prefer not to commingle the functional, lambda-calculus part of a language with the parts that do side effects. It seems they believe in the separation of Church and state.” — Guy Steele “my tempo” FoC Challenge: Brooks claimed 4 evils lay at the heart of programming — Complexity, Conformity, Changeability, and Invisibility. Could you design a programming that had a different set of four evils at the heart of it? (Bonus: one of which could encompass the others and become the ur-evil) The paper introduces something called Functional Relational Programming, abbreviated FRP. Note well, and do not be confused, that there is a much more important and common term that also abbreviates to FRP: Family Resource Program. Slightly less common, but yet more important and relevant to our interests as computer scientists, is the Fluorescence Recovery Protein in cyanobacteria. Less abundant, but again more relevant, is Fantasy Role-Playing, a technology with which we've all surely developed a high degree of expertise. For fans of international standards, see ISO 639-3 — the Franco-Provençal language, represented by language code frp. As we approach the finality of this paragraph, I'll crucially point out that “FRP”, when spoken aloud all at once as though it were a word, sounds quite like the word frp, which isn't actually a word — you've fallen right into my trap. Least importantly of all, and also most obscurely, and with only minor interest or relevance to listeners of the podcast and readers of this paragraph, we have the Functional Reactive Programming paradigm originally coined by Conor Oberst and then coopted by rapscallions who waste time down by the pier playing marbles.
FoC Challenge: Can you come up with a programming where informal reasoning doesn't help? Where you are lost, you are without hope, and you need to get some kind of help other than reasoning to get through it? Linear B LinearB Intercal Esolangs FoC Challenge: Can you come up with a kind of testing where using a particular set of inputs does tell you something about the system/component when it is given a different set of inputs? It was not Epimenides who said “You can't dip your little toesies into the same stream” two times — presumably because he only said it once. Zig has a nicely explicit approach to memory allocation. FoC Challenge: A programming where more things are explicit — building on the example of Zig's explicit allocators. Non-ergonomic, Non-von Neumann, Nonagon Infinity One of Ivan's favourite musical acts of the 00s is the ever-shapeshifting Animal Collective — of course
[00:00:10] Jason tells the story of getting Derek on to be on this podcast, that started with Chris telling Jason about a book that he thought he should read. [00:03:32] Derek shares his story of how he got into programming. [00:06:56] Derek explains when he learned Ruby and his Rails history, which he used to rewrite CD Baby. [00:13:24] Derek gives the best analogy for what it's like to do two years of work and then decide this isn't working and push the work aside. [00:13:57] We find out where Ruby fits into Derek's life, post CD Baby, what kind of things he builds with Ruby these days, and his experience in the Ruby community. [00:17:10] When Derek was first learning programming, he explains his only interest in it was to help musicians. [00:21:12] Derek has some blog posts about ways to use the database and he tells us about a RailsConf talk from 2012 with Rich Hickey on YouTube that is the single most influential talk of his life and how it completely changed the way he approaches programming. [00:28:18] Whoa! Derek tells us he doesn't use bundle, and only uses two gems, pg, the PostgreSQL connector and Sinatra. [00:30:58] Jason wonders if code is still fun for Derek when he has to make updates or changes. [00:32:05] In one of Derek's books, he mentions he has a database of people he interacts with so he can remember, and he tells us more about that. [00:36:11] We hear Derek's philosophy on how he sees himself, and he explains that you give a different answer based on who you're with. [00:42:17] Find out how Derek hosts all his stuff since he stopped using Git, where he hosts it, and how he gets the code there. Also, he tells us he wrote on his blog why he loves the OpenBSD. [00:44:37] Does Derek ever feel like the simplicity comes with, I need to do something, but now I have to build up things in order to do this complex thing? [00:49:10] Derek shares what it means to be philosophical and why he gets philosophical about programming. [00:55:17] Much of Derek's history is as a musician, and Jason just wonders if he's ever had the opportunity to intersect programming in music. Also, he tells us how he uses Stripe as his payment processor. [00:58:30] We end with Derek emphasizing for everyone to check out Rich Hickey's RailsConf 2012 talk on YouTube, and if you're a programmer, please email Derek since he LOVES talking tech. Panelists: Jason Charnes, Chris Oliver. Guest: Derek Sivers. Sponsor: Honeybadger. Links: Jason Charnes Twitter, Chris Oliver Twitter, Andrew Mason Twitter, Derek Sivers Twitter, Derek Sivers Website, Derek Sivers Store, Derek Sivers Tech Blog, Derek Sivers Books (Amazon), RailsConf 2012 Keynote: Simplicity Matters by Rich Hickey, VULTR, rsync, Ruby Radar Twitter, Ruby for All Podcast
This episode is about the Clojure programming language, one of the languages that mattered for the emergence of Elixir. Clojure is a functional programming language created by Rich Hickey in 2007 (five years before Elixir). Our interviewees: Camilo Cunha de Azevedo https://www.linkedin.com/in/2cazevedo/ Marcio Lopes de Faria https://www.linkedin.com/in/marciodefaria/ Links: "Aprenda Clojure", a list of material for learning Clojure https://github.com/Camilotk/aprenda-clojure Article about Clojerl: "Clojerl: the expressive power of Clojure on the BEAM", by Juan Facorro and Natalia Chechina https://doi.org/10.1145/3406085.3409012 or https://eprints.bournemouth.ac.uk/34248/. Clojure/Conj 2023 https://2023.clojure-conj.org/ Clojure job openings (Vagas Clojure) https://github.com/clj-br/vagas Clojure Brasil on Telegram https://t.me/clojurebrasil The HTDP book https://htdp.org/ Joy of Elixir https://joyofelixir.com/ Universidade Livre https://github.com/Universidade-Livre More links https://gist.github.com/adolfont/a60cf3d26187bb74db15ca6367961b56 Notes for future episodes that may follow up on this one: an episode on the relationship between Elixir and Clojure. Watch this interview on YouTube at https://youtu.be/VOhNsfg5JSk Our channel is https://www.youtube.com/@ElixirEmFoco Join the Erlang Ecosystem Foundation at https://bit.ly/3Sl8XTO. The foundation's site is https://bit.ly/3Jma95g. Our site is https://elixiremfoco.com. We are on Twitter at @elixiremfoco https://twitter.com/elixiremfoco. Our email is elixiremfoco@gmail.com. --- Send in a voice message: https://podcasters.spotify.com/pod/show/elixiremfoco/message
Following our previous episode on Richard P. Gabriel's Incommensurability paper, we're back for round two with an analysis of what we've dubbed the Worse is Better family of thought products: The Rise of Worse Is Better by Richard P. Gabriel Worse is Better is Worse by Nickieben Bourbaki Is Worse Really Better? by Richard P. Gabriel Next episode, we've got a recent work by a real up-and-comer in the field. While you may not have heard of him yet, he's a promising young lad who's sure to become a household name. Magic Ink by Bret Victor Links The JIT entitlement on iOS is a thing that exists now. Please, call me Nickieben — Mr. Bourbaki is my father. A pony is a small horse. Also, horses have one toe. Electron lets you build cross-platform apps using web technologies. The apps you build in it are, arguably, doing a bit of "worse is better" when compared to equivalent native apps. Bun is a new JS runner that competes somewhat with NodeJS and Deno, and is arguably an example of "worse is better". esbuild and swc are JS build tools, and are compared to the earlier Babel. The graphs showing the relative lack of churn in Clojure's source code came from Rich Hickey's A History of Clojure talk. To see those graphs, head over to the FoC website for the expanded version of these show notes. Some thoughts about wormholes. futureofcoding.org/episodes/059 See omnystudio.com/listener for privacy information.
Citations: Crafting Science: A Sociohistory of the Quest for the Genetics of Cancer, Joan Fujimura, 1997. Contingency, Irony, and Solidarity, Richard Rorty, 1989. Smalltalk Best Practice Patterns, Kent Beck, 1996. Ward Cunningham on "working the program", 2004. The Mathematical Experience, Phillip J. Davis and Reuben Hersh, 1980. "Elephant Talk", King Crimson, 1981 (audio). "Hammock-Driven Development", Rich Hickey, 2010 (video). "What is Hammock-Driven Development?", Keagan Stokoe, 2021. Credits: Image of contrasting words from Flickr user andeecollard, Creative Commons License CC BY-SA 2.0
Datomic is an immutable database that borrows ideas from functional programming. We discuss how an immutable database changes the architectural possibilities of web apps. Links/Resources: - [Datomic with Rich Hickey](https://www.youtube.com/watch?v=9TYfcyvSpEQ) - [Database as Values with Rich Hickey](https://www.youtube.com/watch?v=V6DKjEbdYos) - [Intro To Datomic with Rich Hickey](https://www.youtube.com/watch?v=RKcqYZZ9RDY) - [KotlinConf 2018 - Datomic: The Most Innovative DB You've Never Heard Of by August Lilleaas](https://www.youtube.com/watch?v=hicQvxdKvnc) - [Love Letter To Clojure: And A Datomic Experience Report - Gene Kim](https://www.youtube.com/watch?v=5mbp3SEha38) - [Turning the database inside out](https://www.youtube.com/watch?v=fU9hR3kiOK0) - [Datomic - a scalable, immutable database system by Marek Lipert](https://www.youtube.com/watch?v=xGrCsIiiTUs) - ["Real-World Datomic: An Experience Report" by Craig Andera (2013)](https://www.youtube.com/watch?v=2WeFdAXZz30) - [https://tonsky.me/blog/unofficial-guide-to-datomic-internals/](https://tonsky.me/blog/unofficial-guide-to-datomic-internals/) - [https://www.infoq.com/articles/Architecture-Datomic/](https://www.infoq.com/articles/Architecture-Datomic/) - [https://www.infoq.com/presentations/The-Design-of-Datomic/](https://www.infoq.com/presentations/The-Design-of-Datomic/) - [Talking about Datomic, Datalog, GraphQL, APIs](https://www.notion.so/Datomic-52000e1e65d345509cbcde4681d5522f) - [What Datomic does to REST](https://web.archive.org/web/20210421110723/http://dustingetzcom.hyperfiddle.com/:what-datomic-does-to-rest/) - [Unofficial guide to Datomic Internals](https://tonsky.me/blog/unofficial-guide-to-datomic-internals/) - [Datomic Documentation Overview](https://docs.datomic.com/on-prem/overview/overview.html) - [The web after tomorrow](https://tonsky.me/blog/the-web-after-tomorrow/) - [APIs are about policy](https://acko.net/blog/apis-are-about-policy/) Chapters: 0:00 Intros [00:02:21] What is Datomic? [00:05:01] The Immutable Database [00:14:59] The N+1 Problem [00:20:59] Inference and Logical Programming [00:26:45] Database in the browser [00:39:24] Reducing the Impedence Mismatch [00:42:09] The Change in Perspective [00:51:14] Data as Social Artifact ===== About “The Technium” =====The Technium is a weekly podcast discussing the edge of technology and what we can build with it. Each week, Sri and Wil introduce a big idea in the future of computing and extrapolate the effect it will have on the world.Follow us for new videos every week on web3, cryptocurrency, programming languages, machine learning, artificial intelligence, and more!===== Socials =====WEBSITE: https://technium.transistor.fm/ SPOTIFY: https://open.spotify.com/show/1ljTFMgTeRQJ69KRWAkBy7 APPLE PODCASTS: https://podcasts.apple.com/us/podcast/the-technium/id1608747545
This episode is all about the Lisp family of programming languages! Ever looked at Lisp and wondered why so many programmers gush about such a weird looking programming language style? What's with all those parentheses? Surely there must be something you get out of them for so many programming nerds to gush about the language! We do a light dive into Lisp's history, talk about what makes Lisp so powerful, and nerd out about the many, many kinds of Lisps out there! Announcement: Christine is gonna give an intro-to-Scheme tutorial at our next Hack & Craft! Saturday July 2nd, 2022 at 20:00-22:00 ET! Come and learn some Scheme with us! Links: Various histories of Lisp: History of Lisp by John McCarthy. The Evolution of Lisp by Guy L. Steele and Richard P. Gabriel. History of LISP by Paul McJones. William Byrd's The Most Beautiful Program Ever Written demonstrates just how easy it is to write lisp in lisp, showing off the kernel of evaluation living at the heart of every modern programming language! (A tiny sketch in that spirit follows these notes.) M-expressions (the original math-notation-vision for users to operate on) vs S-expressions (the structure Lisp evaluators actually operate at, in direct representational mirror of the typically, but not necessarily, parenthesized representation of the same). Lisp-1 vs Lisp-2... well, rather than give a simple link and analysis, have a thorough one. Lisp machines: MIT's CADR was the second iteration of the lisp machine, and the most influential on everything to come. Then everything split when two separate companies implemented it... Lisp Machines, Incorporated (LMI), founded by famous hacker Richard Greenblatt, who aimed to keep the MIT AI Lab hacker culture alive by only hiring programmers part-time. Symbolics was the other rival company. Took venture capital money, was a commercial success for quite a while. These systems were very interesting, there's more to them than just the rivalry. But regarding that, the book Hackers (despite its issues) captures quite a bit about the AI lab before this and then its split, including a ton of Lisp history. Some interesting things happening over at lisp-machine.org. The GNU manifesto mentions Lisp quite a bit, including that the plan was for the system to be mostly C and Lisp. Worse is Better, including the original (but the first of those two links provides a lot of context). The AI winter. Bundle up, lispers! Symbolics' Mac Ivory. RISC-V tagged architecture, plus this lowRISC tagged memory tutorial. (We haven't read these yet, but they're on the reading queue!) Scheme: There's a lot of these... we recommend Guile if you're interested in using Emacs (along with Geiser), and Racket if you're looking for a more gentle introduction (DrRacket, which ships with Racket, is a friendly introduction). The R5RS and R7RS-small specs are very short and easy to read especially. See this section of the Guile manual for a bit of... history. Common Lisp... which, yeah there are multiple implementations, but these days really means SBCL with Sly or SLIME. Clojure introduced functional datastructures to the masses (okay, maybe not the masses). Neat stuff, though not a great license choice (even if technically FOSS) in our opinion and Rich Hickey kinda blew up his community so maybe use something else these days. Hy, always hy-larious. Fennel, cutest lil' Lua Lisp you've ever seen. Webassembly's text syntax isn't technically a Lisp, but let's be honest... is it technically not a Lisp either?! Typed Racket and Hackett. Emacs... Lisp?... well let's just give you the tutorial!
The dreams of the 60s-80s are alive in Emacs. Actually, we just did an episode about Emacs, didn't we? Digital Humanities Workshops episode. We guess if you wanted to use Racket and VS Code, you could use Magic Racket?! We dunno, we've never used VS Code! (Are we out of touch?!) What about for Guile?! Someone put some energy into Guile Studio! Hack & Craft!
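Since the lisp-in-lisp link above is the heart of the episode, here is a loose sketch of the idea, written in Clojure rather than the Scheme William Byrd uses in the talk. The handful of special forms it supports (quote, if, fn) is our own choice of a minimal subset for illustration, not a transcription of Byrd's program.

```clojure
;; A miniature evaluator for a lisp subset, hosted in Clojure.
(defn lookup [env sym]
  (if (contains? env sym)
    (get env sym)
    (throw (ex-info "unbound symbol" {:sym sym}))))

(defn evaluate [expr env]
  (cond
    (symbol? expr) (lookup env expr)          ; variables
    (not (seq? expr)) expr                    ; numbers, strings, booleans evaluate to themselves
    :else
    (let [[op & args] expr]
      (case op
        quote (first args)
        if    (let [[c t e] args]
                (if (evaluate c env) (evaluate t env) (evaluate e env)))
        fn    (let [[params body] args]       ; build a closure over the current environment
                (fn [& vals]
                  (evaluate body (merge env (zipmap params vals)))))
        ;; anything else is a function application
        (apply (evaluate op env)
               (map #(evaluate % env) args))))))

;; ((fn [x] (+ x 1)) 41) => 42, given a starting env that knows about +
(evaluate '((fn [x] (+ x 1)) 41) {'+ +})
```

The point, as in the talk, is that an evaluator for the language is only a screenful of the language itself.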
We didn't really have one clear-cut topic this time: Dominik and Jochen talk about all sorts of things :). It started with a bit about the new exception groups for Python 3.11, then how best to initialize Django projects, then CSS, software architecture and microservices, and then a little about machine learning. Well then. Show notes Our email for questions, suggestions & comments: hallo@python-podcast.de News from the scene Ultraschall 5 / Reaper / Auphonic PEP 654 -- Exception Groups and except* / Twitter Thread / trio Notes on structured concurrency, or: Go statement considered harmful Closure (wikipedia) PEP 3134 -- Exception Chaining and Embedded Tracebacks asyncpg -- A fast PostgreSQL Database Client Library for Python/asyncio IPython 8 Release Advertising Exclusive deal + a gift
Tommi has been using Svelte for quite a while now, including in real production work. In this episode we get to know this "blazing fast" UI library. Tommi talks about Svelte's history and why Rich Harris (not Rich Hickey) originally started building Svelte. We also chat about, among other things, component development in Svelte, 2-way data binding, Svelte's reactivity, animations and transitions, and the component ecosystem around Svelte. We didn't even try to fit everything into one episode, so a second episode about Svelte is on its way. Links Svelte homepage - https://svelte.dev Svelte integrations - https://github.com/sveltejs/integrations Responsive Svelte (exploring Svelte's reactivity) - https://youtu.be/fvY1TAKNPgY Svelte Society - https://sveltesociety.dev Svelte Discord - https://discord.com/invite/yy75DKs Good vibes of the week Tommi: Building a gingerbread house Antti: The Acapulco series on Apple TV
Idiomatic Elm package guide dillonkearns/elm-package-starter Lessons: Avoid unexpected or silent behavior. Give context/feedback when things go wrong so the user knows their change was registered, to enhance trust. Good errors aren't just for beginners - Curb Cut Effect. Sandi Metz - code has a tendency to be duplicated - be a good role model - we're influenced by precedent. Matt Griffith - API design is holistic. It's a problem domain. Rethink from the ground up. Learn from the domain and terms, but don't limit yourself to it when you can create better abstractions. Linus Torvalds' definition of elegance/good taste - recognize two code paths as one. Reduce the number of concepts in your API when you can treat two things as one, then things compose more easily. How Elm Code Tends Towards Simplicity. You don't need a direct mapping of your domain, but start with the spec and terms. Leverage existing concepts, and have an economy of concepts. Tereza's talk: elm-plot The Big Picture: API design is a tool to help you solve problems. There's a qualitative difference when you wire up feedback before you up front. Avoid toy examples, use meaningful use cases to direct your design. Design for concrete use cases, and drive changes through feedback from concrete use cases. Legal standing. Better to do it right than to do it right now - Evan's concept from the Elm philosophy. If you don't have a motivating use case, then wait. Extract APIs from real world code. It's okay for there to be duplication. Premature abstraction is the root of all evil. Simplicity is the best thing you can do to anticipate future API design needs. Come up with an API with the most benefits and the least pain points. If there's something that you want to make really good, invest in giving it a good feedback mechanism. Rich Hickey's talk Hammock Driven Development. We don't design APIs, our extremely creative subconscious designs APIs - let your conscious brain do the hard work to put all the information in front of your subconscious so it can do what it does best. elm-pages 2.0 screencast with Jeroen and Dillon. Pay no attention to the man behind the curtain. Parse, Don't validate at the high level, but under the hood you may need a low level implementation. Have a clear message/purpose - whether it's an API, or an internal module. Take responsibility for user experiences.
In this episode, I read from and discuss a comment thread between Rich Hickey and Alan Kay.
This episode is also available as a blog post: http://quiteaquote.in/2021/04/17/rich-hickey-programming-is-thinking/ --- Send in a voice message: https://anchor.fm/quiteaquote/message
Alex Miller is the creator of the annual Strange Loop Conference, an interdisciplinary software conference in St. Louis, MO that brings together developers doing leading-edge applied computer science in areas such as emerging languages, nosql data storage, mobile, web, concurrency, and distributed systems. He's also the creator of the Clojure/West and Lambda Jam conferences, Lambda Lounge user group (for the study of functional and dynamic programming languages) and the Clojure Lunch Club. Alex has a strong theoretical computer science background with focuses in computational complexity and artificial intelligence, and has worked across the software and product lifecycle during his two-decade career, spending most of his time in Java and now in Clojure. SHOW NOTES: Strange Loop Conference • Clojure Language • Cognitect • Nubank • Douglas Hofstadter books (Gödel, Escher, Bach: an Eternal Golden Braid and I Am a Strange Loop) • Train to Busan • “Simple Made Easy” talk, Rich Hickey, Strange Loop 2011 • Project Alloy • City Museum • ITEN • The World's Fastest Indian
Category theory may strike you as intimidating, but trust us, you can (and after this episode, are probably itching to) talk applicative functors and parser combinators over afterwork drinks. Listen in to learn why Esko and Antti – both of whom started programming with dynamically typed languages – are so into category theory right now that they see applications of it everywhere.GuestAntti Holvikari is endlessly fascinated by pure functional programming languages such as Haskell and PureScript. Software quality and personal productivity are two things he’s constantly improving.HostEsko Lahti is an engineer who always wanted to learn about category theory in practice – but never knew where to start. Then he met Antti Holvikari.Episode linksPureScript: https://www.purescript.org/Parser Combinators, a Walkthrough: https://hasura.io/blog/parser-combinators-walkthrough/fp-ts: https://github.com/gcanti/fp-tsio-ts: https://github.com/gcanti/io-tsAlgebraic Data Types: https://dev.to/gcanti/functional-design-algebraic-data-types-36kfDiscriminated Unions in TypeScript: https://basarat.gitbook.io/typescript/type-system/discriminated-unionsMaybe Not, a talk by Rich Hickey: https://youtu.be/YR5WdGrpoug
This is the first episode in a special series of the podcast devoted to discovering functional programming. Andros Fenollosa talks with different guests about the functional programming paradigm, the most notable languages, tooling, and the change in mindset needed to approach this kind of programming, especially if you come from object-oriented programming. For this first episode Andros has invited Vachi, a programmer who specializes in Clojure and currently works at a fintech. With Vachi he talks about the Clojure language, its most notable features, programming principles, and what makes Clojure such an attractive language for many companies. They also talk about the Clojure community and useful resources for anyone who wants to get started with Clojure. Clojure is a general-purpose, multi-paradigm language created by Rich Hickey as a dialect of Lisp and oriented toward working with data through functions. Unlike other languages such as Python or C, Clojure runs on the JVM (the Java virtual machine).
This is the first episode in a special series of the podcast devoted to discovering functional programming. Andros Fenollosa talks with different guests about the functional programming paradigm, the most notable languages, tooling, and the change in mindset needed to approach this kind of programming, especially if you come from object-oriented programming. For this first episode Andros has invited Vachi, a programmer who specializes in Clojure and currently works at a fintech. With Vachi he talks about the Clojure language, its most notable features, programming principles, and what makes Clojure such an attractive language for many companies. They also talk about the Clojure community and useful resources for anyone who wants to get started. Clojure is a general-purpose, multi-paradigm language created by Rich Hickey as a dialect of Lisp and oriented toward working with data through functions. Episode notes at: https://republicaweb.es/podcast/descubriendo-la-programacion-funcional-clojure-con-vachi/
Reactive programming has become the de facto solution to many problems, especially architectural ones, across many areas of programming. In this episode we briefly discuss how it got started, as well as how many of its core components are supposed to work. As our main example we picked RxJava, the most popular library in the Android world for doing reactive programming, and in that context we also talked about how relevant this library still is today. 00:48 - Functional Reactive Programming, plus a bit of history and Rich Hickey's biography. 08:47 - Pub/Sub (Publisher-Subscriber relationships). 10:29 - What a Stream is and how it can be transformed with operators. 13:42 - The start of the RxJava discussion and the problems it solves (and is made to solve), plus what Observable, Single, Completable and Flowable are. 31:18 - Subjects and Processors. 33:05 - The ReactiveX API and its advantages. 34:45 - Solving threading problems with RxJava: Schedulers and how they work. 41:53 - How to test RxJava. 48:04 - Does RxJava still make sense in 2020, and should you switch to coroutines? 49:20 - A small off-topic about what 15 interviews a day does to your psyche. Comments and suggestions are welcome in our Telegram chat.
On this continuation of Gene Kim’s interview with Michael Nygard, Senior Vice President, Travel Solutions Platform Development Enterprise Architecture, for Sabre, they discuss his reflections on Admiral Rickover's work with the US Naval Reactor Core and how it may or may not resonate with the principles we hold so near and dear in the DevOps community. They also tease apart the learnings from the architecture of the Toyota Production System and their ability to drive down the cost of change. They also discuss how we can tell when there are genuinely too many “musical notes” or when those extra notes allow for better and simpler systems that are easier to build and maintain and can even make other systems around them simpler too? And how so many of the lessons and sensibilities came from working with Rich Hickey, the creator of the Clojure programming language. Bio: Michael Nygard strives to raise the bar and ease the pain for developers around the world. He shares his passion and energy for improvement with everyone he meets, sometimes even with their permission. Living with systems in production taught Michael about the importance of operations and writing production-ready software. Highly-available, highly-scalable commerce systems are his forte. Michael has written and co-authored several books, including 97 Things Every Software Architect Should Know and the bestseller Release It!, a book about building software that survives the real world. He is a highly sought speaker who addresses developers, architects, and technology leaders around the world. Michael is currently Senior Vice President, Travel Solutions Platform Development Enterprise Architecture, for Sabre, the company reimagining the business of travel. Twitter: @mtnygard LinkedIn: https://www.linkedin.com/in/mtnygard/ Website: https://www.michaelnygard.com/ You’ll Learn About: Admiral Rickover’s work with the Naval Nuclear Reactor Core Building great architecture for generality. Architecture as an organizing logic and means of software construction. Toyota Production System’s ability to drive down the cost of change through architecture Clojure programming language Cynefin framework How to know if a code is simpler or more complex RESOURCES Cynefin framework Failure Is Not an Option: Mission Control from Mercury to Apollo 13 and Beyond by Gene Kranz "Why software development is an engineering discipline," presentation by Glenn Vanderburg at O'Reilly Software Architecture Conference "10+ Deploys Per Day: Dev and Ops Cooperation," presentation by John Allspaw "Architecture Without an End State," presentation by Michael T. Nygard at YOW! 2012 "Spec-ulation Keynote," presentation by Rich Hickey re-frame (re-frame is the magnificent UI framework which both Mike and I love using and hold in the highest regard — by no means should the "too many notes" comment be construed that re-frame has too many notes!) "Fabulous Fortunes, Fewer Failures, and Faster Fixes from Functional Fundamentals," presentation by Scott Havens at DevOps Enterprise Summit Las Vegas, 2019 "Clojure for Java Programmers Part 1," presentation by Rich Hickey at NYC Java Study Group Simple Made Easy presentation by Rich Hickey at Strange Loop 2011 Love Letter To Clojure (Part 1) by Gene Kim The Idealcast, Episode 5: The Pursuit of Perfection: Dominant Architectures, Structure, and Dynamics: A Conversation With Dr. 
Steve Spear LambdaCast podcast hosted by David Koontz TIMESTAMPS [00:09] Intro [02:19] Mike’s reflections on Steve Spear, Admiral Rickover and the US Naval reactor core [04:33] Admiral Rickover’s 1962 memo [08:13] Cynefin framework [12:40] Applying to software engineering [16:06] Gene tells Mike a Steve Spear’s story [18:58] 10+ deploys a day everyday at Flickr [19:43] Back to the story [24:34] Why the story is important [27:35] When notes are useful [35:05] Too many notes vs. too few notes [40:00] DevOps Enterprise Summit Vegas Virtual [41:35] How to know if a code is simpler or more complex [47:23] A lively exchange of ideas [51:31] The opposing argument [54:20] Implementing items of interests [55:21] Back to the payment processing example [56:07] Case 3 [1:03:03] The challenge with Option 2 [1:08:19] Pure function [1:10:19] Rich Hickey and Clojure [1:15:01] Rich Hickey’s “Simple Made Easy” presentation [1:16:37] Exploring those ideas work at the macro scale [1:22:31] Immutability concept [1:23:58] The importance of senior leaders’ understanding of these issues [1:26:53] Outro
The JavaScript world changes everything yet again: a capable replacement for Node.js has arrived, Deno. Does it have a future? Does it work well? What's good, and what needs to improve? We discuss it all in this episode. Podcast feed: www.lambda3.com.br/feed/podcast Podcast feed with technical episodes only: www.lambda3.com.br/feed/podcast-tecnico Podcast feed with non-technical episodes only: www.lambda3.com.br/feed/podcast-nao-tecnico Lambda3 · #200 - Deno Agenda: What Deno is How do you pronounce it? Will it replace Node.js? History The 7 regrets Dependencies and module compatibility Will TypeScript be Deno's main focus? The state of TypeScript support in Deno What the development experience with Deno is like Text editor/IDE support Automated testing Build tools Cloud services Deno's standard library How to migrate Node.js projects How much Node.js knowledge carries over to Deno What the future holds for Node.js and Deno Links mentioned: Deno website Ryan Dahl's video about his 10 regrets (which are really only 7) Rich Hickey's talk "Simple Made Easy" on InfoQ Participants: Andre Valenti – @awvalenti Giovanni Bassi - @giovannibassi Lucas Teles – @lucasteles42 William Grasel – @willgmbr Editing: Compasso Coolab Credits for the music used in this program: Music by Kevin MacLeod (incompetech.com) licensed under Creative Commons: By Attribution 3.0 - creativecommons.org/licenses/by/3.0
HCI (Human-Computer Interaction) studies how people relate to their digital tools. Mark and Adam discuss their journey into HCI, how others can get into the field, and its influence on Muse. @MuseAppHQ hello@museapp.com Show notes You and Your Research The Art and Science of Doing Engineering Stripe Press Human-Computer Interaction Ink & Switch Xerox PARC Microsoft Research MeetAlive: Room-Scale Omni-Directional Display System CHI 2019 proceedings Peripheral Notifications in Large Displays Sensing Posture-Aware Pen+Touch Interactions on Tablets A Small Matter of Programming Strategies in Creative Professionals’ Use of Digital Tools The Science of Managing Our Digital Stuff Associative memory Ben Reinhardt and innovation orgs Bret Victor and Dynamicland Andy Matuschak and a new mnemonic medium Jonathan Blow and Braid, Jai programming language Rich Hickey’s Hammock Driven Development Dan Luu and Computer latency Martin Kleppmann and Local-First Software
Datomic is a database system based on an append-only record keeping system. Datomic users can query the complete history of the database, and Datomic has ACID transactional support. The data within Datomic is stored in an underlying database system such as Cassandra or Postgres. The database is written in Clojure, and was co-authored by the creator of Clojure, Rich Hickey.Datomic has a unique architecture, with a component called a Peer, which gets embedded in an application backend. A Peer stores a subset of the database data in memory in this application backend, improving the latency of database queries that hit this caching layer.Marshall Thompson works at Cognitect, the company that supports and sells the Datomic database. Marshall joins the show to talk about the architecture of Datomic, its applications, and the life of a query against the database.We're looking for new show ideas, so if you have any interesting topics, please feel free to reach out via twitter or email us at jeff@softwareengineeringdaily.com
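To make the Peer idea a bit more concrete, here is a minimal sketch of querying a Peer's locally cached, immutable database value with Datomic's Clojure Peer API. The connection URI and the :artist/name attribute are illustrative assumptions for this sketch, not details taken from the episode.

```clojure
(require '[datomic.api :as d])

;; Hypothetical storage URI; a real system would point at its own transactor/storage.
(def conn (d/connect "datomic:dev://localhost:4334/example"))

;; d/db returns an immutable database value. The Peer answers queries against it
;; in-process, using whatever index segments it has cached in application memory.
(def db (d/db conn))

;; Datalog query against the cached value (assumes an :artist/name attribute exists).
(d/q '[:find ?name
       :where [?e :artist/name ?name]]
     db)

;; Because the record of facts is append-only, past states remain queryable.
(d/q '[:find ?name
       :where [?e :artist/name ?name]]
     (d/as-of db #inst "2020-01-01"))
```

That append-only model is what lets the episode talk about querying the complete history of the database without maintaining a separate audit log.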
Phil Hagelberg (technomancy) joins Serge for a conversation about Clojure, a new lisp called Fennel, making video games, and making free hardware design keyboards.Links:Fennel Language WebsiteAtreus KeyboardPhil Hagelberg's WebsitePhil Talking about the creation of the Atreus KeyboardLeiningenEnergize! GameEXO_encounter 667 GameThe Lua LanguageOpen Source is Not About You, Rich Hickey's essay about community
Chris is joined by Devon Zuegel who recently joined GitHub in the new Open Source Product Manager role. Devon and Chris discuss the complexities inherent to open source including funding models, managing motivation and burnout, different open source models, and end with a discussion around how we can be better open source citizens, both as consumers and maintainers. Devon on Twitter Devon's Blog Nadia Eghbal - Roads and Bridges: The Unseen Labor Behind Our Digital Infrastructure Patreon Sindre Sorhus on Patreon Open Collective ESLint on Open Collective Webpack on Open Collective Babel on Open Collective Sidekiq Pro GraphQL Pro GitHub related issues Clojure Rich Hickey Elm Evan Czaplicki Matz replies to post around Ruby moving slowly Open Source Maintainers Group on GitHub Thank you to CircleCI for sponsoring this episode.
Guest: Philip Poots: GitHub | ClubCollect Previous Episode: 056: Ember vs. Elm: The Showdown with Philip Poots In this episode, Philip Poots joins the show again to talk about the beauty of simplicity, the simplicity and similarities between Elm and Ruby programming languages, whether Elixir is a distant cousin of the two, the complexity of Ember and JavaScript ecosystems (Ember helps, but is fighting a losing battle), static vs. dynamic, the ease of Rails (productivity), and the promise of Ember (productivity, convention). The panel also talks about the definition of "quality", making code long-term maintainable, and determining what is good vs. what is bad for your codebase. Resources: Michel Martens mote Learn the Elm Programming Language and Build Error-Free Apps with Richard Feldman Worse is Better: Richard P. Gabriel Gary Bernhardt's Destroy All Software Screencasts Zen and the Art of Motorcycle Maintenance: An Inquiry into Values The Calm Company It Doesn't Have to Be Crazy at Work This show was produced by Mandy Moore, aka @therubyrep of DevReps, LLC. Transcript: CHARLES:: Hello, everybody and welcome to The Frontside Podcast, Episode 113. My name is Charles Lowell. I'm a developer here at the Frontside and with me today are Taras Mankovski and David Keathley. Hello? DAVID:: Hey, guys. TARAS: Hello, hello. CHARLES:: And we're going to be talking with a serial guest on our serial podcast, Mr Philip Poots, who is the VP of engineering at ClubCollect. Welcome, Philip. PHILIP: Hey, guys. Thanks for having me on. CHARLES:: Yeah. I'm actually excited to have you on. We've had you on a couple of times before. We've been trying to get you on the podcast, I think for about a year, to talk about I think what has kind of a unique story in programming these days. The prevailing narrative is that folks start off with some language that's dynamically typed and object oriented and then at some point, they discover functional programming and then at some point, they discover static programming and they march off into a promised land of Nirvana and no bugs ever, ever happening again. It seems like it's pretty much a straight line from that point to the next point and passing through those way stations. When I talk to you, I guess... Gosh, I think you were the first person that really introduced me to Elm back at Wicked Good Ember in 2016 and it seemed like you were kind of following that arc but actually, that was a bit deceptive because then the next time I talked to you, you were saying, "No, man. I'm really into Ruby and kind of diving in and trying to get into Ruby again," and I was kind of like, "Record scratch." You're kind of jumping around the points. You're not following the preordained story arc. What is going on here? I just kind of wanted to have a conversation about that and find out what the deal was and then, what's kind have guided your journey. PHILIP: There was one event and that was ElmConf Europe, which was a fantastic conference. Really, one of the best conferences I've been to, just because I guess with the nature of early language, small conference environment. There's just a lot of things happening. There's a lot of people. Evan was there, Richard Feldman was there, the leading lights of the Elm community were there and it was fantastic. But I guess, one thing that people have always said to me is the whole way track is the best track of the conference and it's not something I really appreciated before and during the breaks, I ended up talking to a guy called Michel Martens. 
He is the finder of a Redis sourcing company and I guess, this was just a revelation to me. He was interested in Elm. He was friends with the guys that organized the conference and we got talking and he was like, "I do this in Ruby. I do this in Ruby. I did this in Ruby," and I was like, "What?" and he was like, "Yeah, yeah, yeah." He's a really, really humble guy but as soon as I got home, I checked him out. His GitHub is 'soveran' and it turns out he's written... I don't know, how many gems in Ruby, all with really well-chosen names, very short, very clear, very detailed. The best thing about his libraries is you can print them out on paper. What I mean by that is they were tiny. They were so small and I guess, I just never seen that before. I go into Ruby on Rails -- that was my first exposure to programming, that was my first exposure to everything -- unlike with Rails, often when you hit problems, you'd start to dive a bit deeper and ultimately, you dive so deep that you sunk essentially and you just accepted, "Okay, I'm not going to bend the framework that way this time. Let's figure out how everyone else goes with the framework and do that." Then with Ember when I moved into frontend, that was a similar thing. There were so many layers of complexity that I never felt like had a real handle on it. I kind of just thought this was the way things were. I thought it's always going to be complex. That's just the nature of the problem. That's just the problem they're trying to solve. It's a complex problem and therefore, that complexity is necessary. But it was Elm that taught me, I think that choosing the right primitives and thinking very carefully about the problem can actually give you something that's quite simple but incredibly powerful. Not only something quite simple but something so simple that it can fit inside your head, like this concept of a program fitting inside your head and Rails, I don't know how many heads I need to fit Rails in or Ember for that matter and believe me, I tried it but with Elm, there was that simplicity. When I came across this Ruby, a language I was very familiar with but this Ruby that I had never seen before, a clear example was a templating library and he calls it 'mote' and it's including comments. It's under a hundred lines of code and it does everything you would need to. Sure, there were one or two edge cases that it doesn't cover but it's like, "Let's use the trade off." It almost feels like [inaudible] because he was always a big believer in "You ain't going to need it. Let's go for that 80% win with 20% effort," and this was like that taken to the extreme. CHARLES:: I'm just curious, just to kind of put a fine point on it, it sounds like there might be more in common, like a deeper camaraderie between this style of Ruby and the style encouraged by Elm, even though that on the surface, one is a dynamically typed object oriented language and the other is a statically typed functional language and yet, there's a deeper philosophical alignment that seems like it's invisible to 99% of the discussion that happens around these languages. PHILIP: Yeah, I think so. I think the categories we and this is something Richard Feldman talks. He's a member of the Elm community. He does a lot of talks and has a course also in Frontend Masters, which I highly recommend. But he often talks about the frame of the conversation is wrong because you have good statically typed languages and you have bad statically typed languages. 
You have good dynamic languages and you have bad dynamic languages. For all interpretations of good and bad, right? I don't want to start any wars here. I think one of the things that Elm and Ruby have in common is the creator. Matz designed Ruby because he wanted programming to be a joy, you know? And Evan created Elm because he wanted programming to be a delight. I think if you experience both of those, like developing in both of those languages, you gain a certain appreciation for what that means. It is almost undefinable, indistinguishable, although you can see the effects of it everywhere. In Ruby, everything is an object, including nil. In Elm, it's almost like he's taken everything away. Evan's taken everything away that could potentially cause you to stumble. There's a lot to learn with Elm in terms of getting your head around the functional mindset and also, working with types but as far as that goes, people often call it Haskell Lite, which I think does a disservice to Elm because it's got different goals. CHARLES: Yeah, you can tell that. You know, in my explorations with Elm, the personality of Elm is 100% different than the personality of Haskell, if that is even a programming term that you can apply. For example, the compiler has an identity. It always talks to you in the first person, "I saw that you did this, perhaps you meant this. You should look here or I couldn't understand what you were trying to tell me." Literally that's how the Elm compiler talks to you. It actually talks to you like a person and so, it's very... Sorry, go ahead. PHILIP: No, no, I think the corollary to that is the principle of least surprise in Ruby. You know, is there going to be a method that does this? You type it out and you're like, "Oh, yes there is," which is why things like inject and reduce are both methods in Enumerable. You didn't choose one over the other. It was just like, "Let's make it easy for the person who's programming to use what they know best." I think as well, maybe people don't think about this as deeply but the level of thought that Evan has put into designing Elm is crazy, like he's thought this through. I'm not sure if I said this the last time but I went to a workshop in the early days in London, which was kind of my first real exposure to Elm, and Evan was giving the workshop. Someone asked him, "Why didn't you do this?" and he was like, "Well, that might be okay for now but I'm not sure that would make so much sense in 10 years," and I was kind of like, "What?" Because JavaScript and that ecosystem is something which is changing like practically hourly and this is a guy that's thinking 10 years into the future. TARAS: You might have answered it already but I'm curious what you think is the difference, maybe it just comes down to that long term thinking but we see this in the JavaScript world a lot, which is this kind of almost indifference to APIs. It almost doesn't really matter what the API is for whatever reason, there seems to be a big correlation between the API that's exposed and the popularity of the tool. I think there are some patterns, like something that's really simple, like jQuery and React have become popular because of the simplicity of their APIs. What's the flip side to that? What other ways can APIs be created that we see in the JavaScript world?
Because we're talking about these beautiful APIs and I can relate to some of the work that Charles has been doing and I've been doing on microstates, but I wonder what the alternative would be to that kind of beautiful API. PHILIP: I don't know if anyone is familiar with the series of essays 'Worse is Better', like East Coast versus West Coast, from Richard Gabriel. The problem is, I guess -- and maybe this is just my understanding or my paraphrase of it, I'm not too familiar with it -- but I think that good APIs take time and people don't have time. If someone launches a V1 first and it kind of does the job, people will use that over nothing and then whenever they're happy with that, they'll continue to use it and develop it and ultimately, it achieves market share and then that's just the thing everyone uses and the other guy's kind of left behind like, "This is so much better." I guess this is a question, I think it was after Wicked Good Ember, I happened to be on the same train as Tom Dale on the way back to New York and we started talking about this. I think that's his big question. I think it's also a question that still has to be answered, which is, "Will Elm ever be mainstream? Will it be the most popular thing?" aside from the question of whether it has to be or not. For me, good APIs, good design, come from understanding the problem fully -- CHARLES: And you can't understand the problem fully without time. PHILIP: Exactly and often, what happens -- at least this is what happens in my experience with the production software that I've written -- is that you don't actually understand the problem until you've developed a solution for it. Then when you've developed a solution for it, often the pressures, the commercial pressures or in open source [inaudible] the pressures of backwards compatibility, mean that you can never refactor your way to what you think the best solution is and often, you start from scratch and the reality is people are too far along with the stuff you wrote in the past to move to the thing you're writing now. Those are always kind of at odds. I think there are a lot of people that are annoyed with Elm because the updates are too slow, it relies on Evan and they want to have a pull request accepted. All of the things that they don't necessarily recognize, the absence of which makes Elm Elm, if you know what I mean. The very fact that Evan does set such a high standard and does want everything to go through his personal filters because otherwise, you wouldn't gain the benefits that Elm gives you. The tension is very real in terms of "I want to ship my software now" and it becomes easier then, I think, to go to a language like JavaScript, which has all of the escape hatches that you need, to be able to chop and change, to edit, to do what you need to do to get the job done and let's be quite honest, I think, also with Elm, that's the challenge for someone who's not at an expert level like me. Once you hit a roadblock, you'll say, "Where do I go from here?" I know if I was using JavaScript, I could just hack it and then clients are happy and everything's fine and you know there's a bit of stuff in your code that you would rather wasn't there but at the end of the day, you go home and the job's done. DAVID: Have you had to teach Elm to other people? You and I did some work like I've seen you pair with someone and guide them through the work that they need to get done.
If you had a chance to do something like that with Elm and see how that actually happens, like how does a developer's mind develop as they're working through using the tool? PHILIP: Unfortunately not. I would actually love to go through that experience. I hope none of my developers are listening to this podcast but secretly, I want to push them in the direction of Elm on the frontend. But no, I can at least speak from my own perspective. I found it very challenging at first because for me, being a Ruby developer and also, I would never say that I understood JavaScript as much as I would have liked, coming from a dynamic language with no functional experience to a functional language with types, it's almost like learning a couple of different things at the same time and that was challenging. I think if I were to take someone through it, I would maybe start with the functional aspects and then move on to the type aspects or vice versa, like try and clearly break it down and it's difficult because those two are so intertwined at some level. Gary Bernhardt of the Destroy All Software screencasts, I watch quite a bit of his stuff and I had sent him an email to ask him some questions about one of the episodes that he did and he told me that he'd done the programming languages course, I think it's on Coursera from Daniel Grossman, so [inaudible] ML which is kind of the father of all MLs like Haskell and also Elm. I found that really helpful because he broke it down on a very basic level and his goal wasn't to teach you ML. It was to teach you functional programming. It would be a very interesting exercise, I think. I think the benefit that Elm gives you is you get to experience that delight very quickly with, "Oh, it's broken. Here's a nice message. I fix it. It compiles. Wow, it works," and then there's a very big jump whenever you start talking about the effects. Whenever you want to actually do something like HTTP calls or dealing with time or, I guess, the impure stuff, as you would call it in Haskell-land, and that was also kind of a bit weird. CHARLES: Also, there's been some churn around that, right? PHILIP: That's right. When I started learning, they had signals, then they kind of pushed that all behind the scenes and made it a lot more straightforward. I had just mastered it and I was like, "Yes, I know it," and then I was like, "All right. I don't need to know it anymore." This is the interesting thing for me because at work, most of our work now is in Elixir and Phoenix. I'm kind of picking a little bit up as I work with them. I think Elm's architecture behind the scenes is kind of based, I believe, on Erlang's process model, so the idea of a mailbox and sending messages and dealing with immutable state. CHARLES: Which, kind of ironically, is very object oriented in a way, right? It's functional but also the concept of mailboxes and sending messages and essentially, if you substitute object for process, you have these thousands and thousands of processes that are sending messages back and forth to each other. PHILIP: Yeah, that's right. It's like on a grand scale, on a distributed scale. Although I wouldn't say that I'm far enough along with Erlang and Elixir to appreciate the reality of that yet but that's what they say, absolutely. CHARLES: Now, Phoenix and Elixir -- a dynamically typed functional language. Does it share the simplicity? One of the criticisms you had of Rails was that you couldn't fit it in your head. It was very difficult.
Is there anything different about Elixir that kind of makes it a spiritual cousin of Elm and the simple Ruby? PHILIP: I think so, yes. Absolutely. I don't think it gets to the same level but I think it's in the right direction and specifically on the framework front, it was designed specifically... I mean, in a sense it's like the antithesis to Rails because it was born out of people's frustrations with Rails. José Valim was pretty much one of Rails' top core committers. Basically, every Rails application I wrote at one period had 80% of the code written by José Valim, if you included all the gems, Devise and the resourceful and all the rest of it. Elixir in many ways was born out of the kind of limitations of Ruby with Rails and Phoenix was also born out of frustrations with the complexity of Rails. While it's not as simple as, say, Michel Martens' Syro, which is like his web framework, which is a successor to Cuba if people have heard of that, it is a step in the right direction. I don't understand it but I certainly feel like I could. They have Plug, which is kind of analogous but not identical to Rack, but then the whole thing is built out of plugs. I remember Yehuda Katz gave a presentation, 'The Next Five Years', essentially about Rails 3.0. This is going way back and Phoenix is in some ways the manifestation of his desire to have, like, the Russian doll pattern, where you could nest applications inside applications and you could have them side by side and put them inside each other and things like that. Phoenix has this concept called umbrella applications which gets at that. Also, Ecto is a really, really nice abstraction for working with the database. CHARLES: I see. It feels like, as opposed to being functional or static versus dynamic, the question is how do you generate your complexity? How do you cope with complexity? Because I think you touched on it at the beginning of the conversation where you thought that my problems are complex so the systems that I work with to solve those problems must necessarily also be complex. I think one of the things that I've certainly realized, kind of in the later stages of my career, is that the first part is true. The problems that we encounter are extremely complex but you're much better served if you actually achieve that complexity by composing radically simple systems and recombining them. So the composability of your system is going to determine how easy it's going to be to work with and how well it can cope with complexity. What really drives a good system is the quality of its primitives. PHILIP: Absolutely. After ElmConf, I actually invited Michel to come to my place in the Netherlands. He lives in Paris but I think he grew up in Buenos Aires in Argentina. To my amazement, he said, "Yes, okay," and we spent a couple of days together and there he talked to me about Christopher Alexander and the patterns book that patterns and design patterns actually grew out of. One of his biggest things was the code is the easiest part, like you've got to spend 80% of your time thinking deeply about the problem, like literally go outside, take long walks. I'm not sure if this is what Rich Hickey means with Hammock Driven Development. I've never actually got around to watching the talk. CHARLES: I think it's exactly what he means. PHILIP: And he said, like, once you get that, the code just comes. I think Michel's work, you should really check it out.
I'll send you a link to put in the show notes but everything is built out of really small libraries that do one thing and do it really well. For example, he has a library like a Redis client but the Redis client also has something called Nest, which is a way to generate the keys for nested hashes. Because that's well-designed, the Redis client is literally just a layer on top. If you understand the primitive, then you can use the library on top really well. You can embed Syro applications within Syro applications. I guess, there you also need the luxury of time and I think this is where maybe my role as VP of engineering, which is kind of my first role of that kind, comes in here, which is when you're working under commercial pressure, try to turn around to a business guy and say, "Yes, we'll solve this problem but can we take three weeks to think about it?" It's never going to happen -- CHARLES: No. PHILIP: Absolutely, it's never going to happen. Although the small things that I try to do day to day now are: get away from the computer, write on paper, write out the problem as you understand it, attack it from different angles, think about different viewpoints, etcetera. CHARLES: I think if you are able to quantify the cost of not thinking about it for three weeks, then the business person that you're going to talk to, their ears are going to perk up, right? But that's so hard to do. You know, I try and make that case when we're saying, "What technologies are you going to choose? What are the long term ramifications in terms of dollars or euros or whatever currency you happen to be in for making this decision?" I wish we had more support in thinking about that but it is kind of like a one-off every time. Anyway, I'm getting a little bit off track. PHILIP: No, not at all. This is a subject I love to talk about because we kind of had a bit of turbulence because we thought, maybe we should get product people in, maybe we should get a product team going, and what I found was -- and this is maybe unique to the size of the company -- that actually made things a lot more difficult because you got too many heads in many ways. Sometimes, it's better to give the developer all of the context so that he can think about it and come up with the best solution because ultimately, he's the only one who can understand. I wouldn't say understands the dollars and cents but he understands the cost implications of doing it in inefficient ways, which often happens when you're working in larger teams. TARAS: One thing I find really interesting about this conversation is the definition of good is really complicated here. I've observed Charles work on microstates and I worked with him, like I wrote a lot of the code and we got through like five or six iterations and at every point, it got better but it is so difficult to define that. Then when you take that conversation outside of that code context and you start to introduce business into the mix, the definition of good becomes extremely complicated. What do you think about that? How do we define it in a way? Are there cultures or engineering cultures or societal cultures that have a better definition for good that is relevant to doing quality work like this? CHARLES: That's a deep question. PHILIP: Wow. Yeah, a really, really deep question. I think often for business, like purely commercially-driven, money-oriented, good is the cheapest thing that gets the job done and often that's very short term, I think.
As you alluded to, Charles, people don't think about the cost of not doing the right things, so to speak, in our eyes and also, there's a huge philosophical discussion whether our definition of good as programmers and people who care about our craft is even analogous to or equal to good in a commercial context. CHARLES: Yes, because ultimately and this is, if you have read Zen and the Art of Motorcycle Maintenance, one of the things that Pirsig talks about is what is the definition of quality. How do we define something that's good or something that's bad? One of the definitions that gets put forward is how well something is fit to purpose. Unless you understand the purpose, then you can't determine quality because the purpose defines a very rich texture, a very rich surface and so, quality is going to be the object that maps very evenly and cleanly over that surface. When it comes to what people want in a program, they're going to want very different things. A developer might need stimulation: this is something that's very new, this is something that's going to keep my interest or it's going to keep my CPU maxed and I'm going to be learning a whole lot. A solution that actually solves for that purpose is going to be a high quality solution. Also, "this is going to be fast, we're going to be able to get to market very quickly" might be one of the purposes and so, a solution that is fast fits the purpose, so it's going to be good. Also, I think developers are just self-indulgent and looking for the next best thing, something that's going to keep their interest, although we're all guilty of that. But at the same time, we're going to be the ones maintaining software, both in our current projects and collectively when we move to a new job and we're going to be responsible for someone else's code, then we're going to be paying the cost of those decisions. We both want to minimize the pain for ourselves and minimize the pain for others who are going to be coming and working in our code to make things long term maintainable. That's one axis of purpose and therefore, an axis of quality. I think in order to measure good and bad, you really have to have a good definition of what the purpose is. That surface is so rich but the more you can map it and find out where the contours lie, the more you're going to be able to determine what's good and what's bad. TARAS: It makes me think of like what is a good hammer. A sledgehammer is a really good hammer but it's not the right hammer for every job. CHARLES: Right. TARAS: I think what you're saying is understanding what is it that you're actually doing and then matching your solution to what you're actually trying to accomplish. PHILIP: Yeah, absolutely and in my experience, we have a Ruby team building a Rails application. That's our monolith and then, we have a couple of Elixir teams with services that have been spun out of that. This isn't proven. This is just kind of gut feel right now and it is that Elixir is sometimes slower to develop the same feature or ship it but in the long term it's more maintainable. I haven't actually dived into React and all of the amazing frameworks that it has in terms of getting things up and running quickly but in terms of the full scale application, I still think 10, 11 years on, Rails has no equal in terms of proving a business case in the shortest time possible. CHARLES: Yeah.
I feel very similarly too but the question is does your development team approach the problem as proving a business case or do they approach the problem as, "I want to solve this set of features?" PHILIP: Yes. Where I'm working at the moment, I started out just as a software developer. I guess, we would qualify for 37signals' or sorry... Basecamp's definition of a calm company -- CHARLES: Of a what company? PHILIP: A calm company. Sorry. They just released a new book called 'The Calm Company' and 'It Doesn't Have to Be Crazy at Work.' I was given, in my first couple of months, a problem. It was business oriented, it had to be solved but it had to be solved well from a technical perspective because we didn't want to have to return to it every time. It was standardizing the way that we exported data from the database to Excel. You know, I was amazed because it was literally the first time that I'd been given the space to actually dive in on a technical level to do that kind of stuff. But I think even per feature, that varies and that's sometimes challenging when handing the work on because you've got to say, "For this bit, literally, we're just trying to prove whether, if we have this feature, people will use it," versus, "This is a feature that's going to be used every day and therefore, needs to be of good technical quality." Those are the tradeoffs that, I guess, keep you in a job. Because if it was easy, then you wouldn't need anyone to figure it out but it's always a challenge. What I like is that our tools are actually getting better and I think, with Elm for example, its kind of major selling point is maintainability and yet, with Elm, there haven't been that many companies using Elm over a period of years that can live to tell the tale. Whereas, we certainly know Rails applications that have done well, like Basecamp and GitHub. For sure, they can be super maintainable but the fact that GitHub only just moved to Rails 5.0, I believe, the fact that it took them years and years and they were running off a fork of Rails 2.3, I think it shows the scale of the problem in that way. You know, Phoenix also went through a few issues, kind of moving architectures from the classic Rails to a more domain driven design model. I think we're getting there slowly, zig-zagging towards a place where we better understand how to write software to solve business problems. I guess, I was really interested in microstates when you shared it at Wicked Good Ember because that to me was attacking the problem from the right perspective. It's like: given the fact that the ecosystem is always changing, how can we extract the business logic such that these changes don't affect the logic of our application? CHARLES: Man, we got a lot to show you. It has changed quite a bit in the last two years. Hopefully, for the better. TARAS: It's been reduced to almost a quarter of its size while maintaining the same feature set and it's faster, it's lazier, it's better in every respect. It's just the ideas have actually been fairly consistent. It's just the implementation that's evolved. CHARLES: Yeah, it's been quite a journey. It parallels kind of the story that we're talking about here in the sense that it really has been a search for primitives and a search for simplification.
One of the things that we've been talking about is having these Ruby gems that do one thing and do it very, very, very well, or the way that Elixir is architected around some very, very good primitives, or Elm being the same kind of thing, spiritually aligned, even though on the surface, it might share more in common with Haskell. There's actually a deep alignment with a thing like Ruby and that's a very surprising result. I think one of the things that appeals to me about the type of functional programming that is, ironically I guess, not present in Elm, is where you have the concept of these type classes. I actually think I love them for their simplicity. I've kind of become disenchanted with things like Lodash, even though they're nominally functional. The fact that you don't have things like monoids and functors and stuff as kind of first class participants in the ecosystem means you have to have a bunch of throwaway functions. The API surface area is very large, whereas if you do account for those things, these kinds of ways of combining data, that's how you achieve your complexity: not by a bunch of one-off methods like in Lodash, where they're all provided for you so you don't have to write them yourself. That is one level of convenience but having access to five primitives, I think that's the power of the kind of deeper functional programming types. PHILIP: And Charles, do you think that that gives you the ability to think at a higher level about the problems that you're solving? Would you make that link? CHARLES: Absolutely. PHILIP: So, if we're not doing that, then we're actually doing ourselves a disservice? CHARLES: I would say so. PHILIP: Because we're actually creating complexity where it shouldn't exist? CHARLES: Yeah, I think if you have a more powerful primitive, you can think of things like async functions and generator functions, there's a common thread between async functions, generator functions, promises, arrays and they're all functors. For me, that's a very profound realization and there might be a deeper spiritual link between, say, an async function and an array in the same way that there's a deep spiritual link between Ruby and Elm, and if you don't see that, then you're doing yourself a disservice and you're not able to think at a higher level. Also, you have a smaller tool set where each tool is more powerful. PHILIP: You did a gist, I think it was, a repository with a ReadMe, where you boiled down what people would term, what I would term, the scary functional language down to very simple JavaScript. Did you ever finish that? Did you get to the monads? CHARLES: I did get to the monads, yeah. PHILIP: Okay. I need to check that out again. I find that really, really helpful because I think one of Evan's big things with Elm is he doesn't use those terms ever and he avoids them like the plague because I think he believes they come tinged with the negative experiences of people trying Haskell and essentially getting laughed at, right? CHARLES: Yes. I think there's something to that. TARAS: But we're doing that in microstates as well, right? In the microstates documentation, even though microstates is written completely with these functional primitives, on the outside, there's almost no mention of it.
It's just that when you actually go to use it, if you have an idea... one of the things that's really powerful with microstates is this idea that you can return another microstate from a transition and what that will do is kind of like what a flat map would do, which is replace that particular node with the thing that you returned. For a lot of people, they might not know that that's what a flat map would do, but a microstate will do exactly what they wanted it to do, even when they didn't realize that it actually should just work like that. I think a lot of the work that we've done recently is to package all of these things in a way that makes it powerful and makes the concepts accessible and very familiar, something you don't need to learn. You just use it and it just works for you. CHARLES: Right, but it is something that I feel like there's unharvested value for every programmer out there in these type classes: monads and monoids and functors and co-functors or covariant functors, contravariant functors, blah-blah-blah, that entire canon. I wish there was some way to reconcile the negative connotations and baggage that that has because we feel kind of the same way and I think that Evan's absolutely right. You do want to hide that or make it so that the technology is accessible without having to know those things. But at the same time, these concepts are so powerful, both in terms of just having to think less and having to write less code but also, as a tool to say, "I've got this process. Is there any way that it could be a functor? If I can find a way that this thing is a functor, I can just save myself so much time and take so many shortcuts with it." PHILIP: And in order to be able to communicate that, or at least communicate about that, you need to have terms to call these things, right? Because you can't always just refer to the code or the pattern. It's always good to have a name. I'm with you. I see value in both, like making it approachable, so the people who don't know the terms are not frightened away. But I also see value in using the terms that have always existed to refer to those things, so that things are clear and we can communicate about them. CHARLES: Right. Definitely, there's a tradeoff there. I don't know exactly where the line is but it would be nice to be able to have our cake and eat it too. We didn't really get to talk about typed versus dynamic in the greater context of this whole conversation. We can explore that topic a little bit. PHILIP: Well, I can finish with: I think the future is typed Erlang. Maybe that's Elm running on BEAM. CHARLES: Whoa. What a take! Right there, folks. I love it. I love it but what makes you say that? Typed Erlang doesn't exist right now, right? PHILIP: Exactly. CHARLES: And Elm definitely doesn't run on BEAM. PHILIP: I don't know if I'm allowed to say this. When I was at this workshop with Evan, he mentioned that and I'm not sure whether he mentioned it just as a throwaway comment or whether this is part of his 20-year plan but the very fact that Elm is designed around, like, Erlang -- the signal stuff was designed around the way Erlang does communication and processes -- means I know at least he appreciates that model. From my point of view, with my experience with Elixir and Erlang in production usage, it's not huge scale but it's scale enough to need to start doing performance work on Rails and just to see how effortless things are with Elixir and with Erlang.
I think Elm on the backend would be amazing but it would have to be a slightly different language, I think, because the problems are different. We began this by saying that my story was a little different to the norm because I went back to the dynamic, to the dark side, but for example in Elixir, I do miss types hugely. They kind of have a little bit of a hack with Erlang because they return a lot of tuples with OK and then the object. You know, it's almost like wrapping it up in a [inaudible]. There are little things and there's Dialyzer to kind of type check and I think there are a few projects which do add types to Erlang, etcetera. But I think something that works would need to be designed from the ground up to be typed and also run on the BEAM, rather than be like a squashed version of something else to fit somewhere else, if that makes sense. CHARLES: It makes total sense. PHILIP: I think so. I recently read a book, just to finish -- 'FSharpForFunAndProfit' is his website -- by Scott Wlaschin, I think. It's written with F# but it's about designing your program in a typed functional language. Using the book, you could probably then just design your programs on paper and only commit to code at the end because you're thinking right down to the level of the types and the process and the pipelines, which to me sounds amazing because I could work outside. CHARLES: Right. All right-y. I will go ahead and wrap it up. I would just like to say thank you so much, Philip, for coming on and talking about your story, as unorthodox as it might be. PHILIP: Thank you. CHARLES: Thank you, Taras. Thank you, David. TARAS: Thank you for having us. CHARLES: That's it for Episode 113. We are the Frontside. This is The Frontside Podcast. We build applications that you can stake your future on. If that's something that you're interested in, please get in touch with us. If you have any ideas for a future podcast, things that you'd like to hear us discuss or any feedback on the things that you did hear, please just let us know. Once again, thank you Mandy for putting together this wonderful podcast and now we will see you all next time.
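A quick aside on the functor thread in that conversation. Arrays and promises look nothing alike, but both support the same map-shaped operation, and that shared shape is all "functor" really names. Here is a minimal sketch in TypeScript; the helper names are made up for illustration and none of this is microstates' actual API:

```typescript
// Two different "containers", one shared operation: apply a plain function
// to the value(s) inside without changing the container's shape.
const mapArray = <A, B>(xs: A[], f: (a: A) => B): B[] => xs.map(f);

const mapPromise = <A, B>(p: Promise<A>, f: (a: A) => B): Promise<B> => p.then(f);

const double = (n: number): number => n * 2;

mapArray([1, 2, 3], double);             // [2, 4, 6]
mapPromise(Promise.resolve(21), double); // a Promise that resolves to 42
```

Spotting that a structure is a functor is what buys the "smaller tool set where each tool is more powerful": map comes for free, instead of needing a one-off helper for every container the way a large utility-belt API does.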
I wrote my interpretation of Rich Hickey’s keynote. I called it “Clojure vs the Static Typing World”. However, I missed something very big. I missed that he talked about how common it was to have lots of sub-solutions and partial data. I expand on that idea here.
In the fourth episode of Run Loop we talked with Nikita Prokopov and said hello to Ilya Tsarev. Ilya is our third host, remember? Nikita Prokopov is notable for creating Fira Code, making a significant contribution to the Clojure community, and releasing open source projects such as Datascript and Rum. He also writes Objective-C for macOS: the AnyBar app notifies you when some event happens, right in the status bar — sorry, menu bar — of your computer. Nikita also talks about his upcoming talk at AppsConf '18, "Acquiring Skills" («Обретение навыков»), and how he arrived at that topic. Links to Nikita's projects: — FiraCode — https://github.com/tonsky/FiraCode — Datascript — https://github.com/tonsky/datascript — Rum — https://github.com/tonsky/rum — AnyBar — https://github.com/tonsky/AnyBar — https://grumpy.website — https://github.com/tonsky/grumpy — Video blog about building grumpy.website — https://www.youtube.com/watch?v=YZzkQW9Unvo. Links and descriptions for each episode are collected in a separate Gist: https://gist.github.com/zelark/a18965fbc255ea21dc9d3d1311ceda37 Rich Hickey's lectures on programming in general and how a programmer's brain works: — Hammock Driven Development — http://www.youtube.com/watch?v=f84n5oFoZBc — Simple Made Easy — http://www.infoq.com/presentations/Simple-Made-Easy The full collection of Rich Hickey's hits: https://changelog.com/posts/rich-hickeys-greatest-hits
Panel: Charles Max Wood Eric Berry Josh Adams Special Guests: Osayame David Gaius-Obaseki In this episode of Elixir Mix, the panel talks to Osayame David Gaius-Obaseki. Osa is a software engineer at a company called MailChimp, is originally from Nigeria, and has been writing Elixir for a couple years now. They talk about his talk, Why Elixir Matters, how he came about writing this talk, and lambda calculus. They also touch on how Elixir compares to other functional programming languages, the idea of the genealogy of a language, and more! In particular, we dive pretty deep on: Osa intro Software engineer at MailChimp Elixir His talk – Why Elixir Matters His talk goes into the history of functional programming The heritage that Elixir has Clojure Curious about how Elixir came to exist Functional languages become popular for a year and then decline Lambda calculus His approach to functional programming At some level, you don’t have to understand lambda calculus The basis of lambda calculus Jim Weirich Y-Not talk How do we get to the high level stuff we are doing with Elixir? Lisp, Scheme, and Erlang Making ideas practical for use Approachable languages In your research, did you get a sense of organic growth? Genealogies of languages ML languages - Reason Resiliency of programs applied to the front-end And much, much more! Links: MailChimp Elixir His talk – Why Elixir Matters Clojure Jim Weirich Y-Not talk Erlang Reason @osagaius Osa’s Medium Osa’s GitHub Sponsors: Digital Ocean Picks: Charles Golf Chuck@devchat.tv - For podcast planning program Podcast Movement Anti-Pick – Amazon Prime Day Josh Building the Google Photos Web UI Eric Golf Clash app Osa Rich Hickey and Brian Beckman - Inside Clojure
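The Jim Weirich "Y-Not" talk listed above derives the Y combinator from nothing but anonymous functions. As a rough sketch of that idea, assuming nothing beyond plain functions, here is the call-by-value variant (the Z combinator) in TypeScript; the names are mine, not the talk's:

```typescript
// Recursion built from nothing but functions: the Z combinator, the
// strict-evaluation cousin of the Y combinator.
type Rec<A, B> = (x: Rec<A, B>) => (a: A) => B;

const Z = <A, B>(f: (self: (a: A) => B) => (a: A) => B): ((a: A) => B) => {
  const g: Rec<A, B> = (x) => f((a) => x(x)(a));
  return g(g);
};

// Factorial with no named self-reference anywhere in its body.
const factorial = Z<number, number>((self) => (n) => (n <= 1 ? 1 : n * self(n - 1)));
factorial(5); // 120
```

That is roughly the sense in which "at some level, you don't have to understand lambda calculus" to use Elixir, even though the high-level stuff is ultimately built on this kind of foundation.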
Panel: Charles Max Wood Cory House Special Guests: Lucas Reis In this episode of React Round Up, the panel discusses simple React patterns with Lucas Reis. Lucas works as a senior front-end developer at Zocdoc and previously worked in Brazil for an ecommerce company called B2W. He recently wrote a blog post about simple React patterns that really took off and became popular on the web. They talk about this blog post, what defines a successful pattern, and then they discuss the different patterns that he has discovered in his years of React programming. In particular, we dive pretty deep on: Lucas intro Tries to write blog posts as much as possible Simple React Patterns blog post React What does he mean by “successful” patterns? Three things that define good patterns Define successful? The mix component The Container/Branch/View pattern First successful pattern he has found Separation of concerns Common concern: are we worried about mixing concerns? If/else Can you encapsulate in the view? Pattern matching React loadable You need to think of 3 states at least Higher-order component Render props And much, much more! Links: Zocdoc B2W Simple React Patterns blog post React Simple Made Easy by Rich Hickey Lucas’s GitHub Lucas’s Blog @iamlucasreis Picks: Charles FullContact Udemy Cory Fluent conf Immer Lucas Percy Be studying the languages and be inspired!
Joy Clark talks with Rich Hickey about Clojure and Datomic and the reasons that Rich decided to design them the way that he did. They discuss the dependency problem and how we should change our method of developing libraries so that we do not introduce breaking changes. Rich also introduces Clojure spec and describes what it can be used for and how it differs from a static type system. To wrap up the episode, they talk about the best way to solve a problem (and know it is the right problem!) and Rich gives some advice on how to develop software and what technologies are worth looking into.
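On the spec-versus-static-types distinction Rich draws in that conversation: this is not Clojure spec itself, but a rough TypeScript contrast of the idea. A static type is checked (and then erased) at compile time, while a spec-style check is an ordinary predicate you run against real data at a boundary. The Spec and userSpec names below are illustrative, not from any library:

```typescript
// Compile-time description of the shape we expect.
interface User {
  name: string;
  age: number;
}

// Runtime, spec-style description: just a composable predicate.
type Spec<T> = (value: unknown) => value is T;

const isString: Spec<string> = (v): v is string => typeof v === "string";
const isNumber: Spec<number> = (v): v is number => typeof v === "number";

const userSpec: Spec<User> = (v): v is User =>
  typeof v === "object" &&
  v !== null &&
  isString((v as Record<string, unknown>).name) &&
  isNumber((v as Record<string, unknown>).age);

// Unlike the static type, this check runs against live data at the boundary.
const payload: unknown = JSON.parse('{"name":"Joy","age":30}');
if (userSpec(payload)) {
  console.log(payload.name.toUpperCase()); // narrowed to User here
}
```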
Frontside alum and original podcast host, Brandon Hays, makes a special guest appearance to talk with Charles about the evolution of The Frontside as a company: where it's been, where it's going, and more hopes, dreams, and goals for the future! Transcript CHARLES: Hello everybody. Welcome to The Frontside Podcast Episode 100. Here we are. Episode 100. My name is Charles Lowell. I'm a developer here at The Frontside and I think it's safe to say, your official podcast host. With me to celebrate the 100th episode, he was also here a few episodes ago but also was here on our first episode I believe, is the [inaudible] Hays. Hello Brandon. BRANDON: Hi. CHARLES: Welcome back to the podcast. BRANDON: Actually, are you going to light your trainee badge on fire now in a bucket, in a ceremonial pyre? CHARLES: I live in New Mexico, so I think I'm going to just after this, grab my shotgun and give myself a 21 gun salute. Just in my front yard. BRANDON: There goes old man Lowell again, with the shotgun. CHARLES: I'm just going to [gun shot sounds] in my own honor. BRANDON: I was at the Alamo this weekend, actually. And I don't know if it was just because it was fiesta in San Antonio but they had a demonstration, like a musket firing demonstration where those things are basically little cannons. They're just small cannons. It's very interesting. They're very loud. CHARLES: Yeah. They're small, handheld cannons, yeah. So wait, were you – what is fiesta? Now, as someone who grew up in Central [inaudible], I feel like I ought to know this. BRANDON: I don't know. We found out by accident because we were planning a weekend to go hang out and get drunk on the riverwalk and we took our families down with some friends and then they're like, "Oh, it's fiesta," which is like a 10-day celebration of the history and establishment of San Antonio – which I did not know is a 300-year-old institution. So, it's like one of the oldest things in this entire western United States. So, it's pretty neat. It's different. It's weird. It's like 90 minutes from Austin. There's nothing in Austin that's older than six months. Every six months we must demolish something and then build a condo skyscraper in its place. So, it's kind of neat to be in a city where it has – walking around the Alamo, I'm realizing, "Wow. Setting aside any of the historical significance of Texas independence or whatever, this is just like a really interesting very old building. This is hundreds of years old in an area where there's nothing that's hundreds of years old." So yeah, it was pretty cool. It was a good weekend and we got to see muskets being fired. And we saw a doctor gross my kids out by talking about the medicine of the day, in full costume and showing all of the procedures and threatening my kids with amputation. And it was a good time. We all had a good time. My nine-year-old thought it was the coolest damn thing he'd ever seen. CHARLES: Really? Did they have bloody saws and everything? BRANDON: Oh, yeah. CHARLES: Was it like a reenactment of 300-year-old surgery? BRANDON: It wasn't a full reenactment. But it was a graphic description using the tools of the time. CHARLES: Wow. BRANDON: Highly recommend, check out the Alamo. Super fun. CHARLES: That does sound really cool. BRANDON: I did not expect to have a good time and it was a good time. CHARLES: Yeah. Yeah, I know the whole reenactment with the musket firing is fun. And it is, it's actually an incredible building.
Although there's been a big kerfuffle about something about how they're going to preserve the lawn. But I haven't really followed that too much. BRANDON: Yeah. Yeah, I don't care about the lawn. I care about – no offense, lawn, if lawn is listening. This is not weird, how Stanley broke our brains with the word ‘lawn'. CHARLES: That's true. BRANDON: Yeah. He broke us real good. CHARLES: Yeah. I can't see a lawn without a beard. BRANDON: So yeah. So, life has been pretty good, man. Let's see. I left Frontside September, October. CHARLES: 2016. BRANDON: 2016. CHARLES: So, it's been months. BRANDON: 18. Yeah, thereabouts, right? So, I assume that nothing happened since then and if I came back to The Frontside now, everything would be exactly as I left it. My posters are still up in my room. My Bon Jovi poster. You left my bed just as I made it, like kind of unmade. Everything is just preserved as a shrine to me. CHARLES: Pretty much. I mean, we did give away the mics to Goodwill. BRANDON: No. CHARLES: We actually did not give away those mics. BRANDON: I never even got to use them. CHARLES: I know. Well, you know part of the problem is we don't even get to use them that much either. It looks really cool and it plays really well, like our podcast studio. But you know, I'm now spending 75% of my time in Corrales, New Mexico. And at any given time, people are either working from home, or working remotely. So, a lot of times the podcast room tragically does not get used. But it looks so cool. People come in there and they're like, “Wow, you guys must be really smart and technical people.” BRANDON: I realize this is probably a rote stereotype at this point, but I am assuming the only reason that you moved is that you are dabbling in the production of meth. CHARLES: Pretty much. BRANDON: It's like, I want to learn a new trade. Programming, it's just – programming, how interesting does it stay honestly for 25 years? CHARLES: Right. Yeah, and you know, we've got some good techniques. Continuous integration, deployment, things like that. Test-first. These are things that can be applied to different verticals. And I was looking… BRANDON: [Laughs] We ship meth to production on the first day. CHARLES: Right. [Laughter] Exactly. So, I figured it was a market ripe for disruption. BRANDON: [Laughs] It's probably true. So yeah, I wanted to ask you about that. You all kind of scattered to the four winds in some ways. You have Elrich in Boston and you're in New Mexico most of the time. CHARLES: Joe is in [inaudible]. BRANDON: Oh yeah, Joe moved to New York. CHARLES: Yup. And honestly, the traffic is so bad in Austin that I'd say 50% of the time, people stay home rather than drive into our centrally-located office. So, that's actually something that we're struggling with right now because the bulk of the team is still in Austin. But the office space is underutilized. Our team size now, we have eight engineers. And five of them are in Austin. Our other staff is also in Austin. So, what do we do with the office? It's a big question. BRANDON: And that's quite a cultural change, too. Because when I was there, we would tell people, “We want to be able to do remote someday. But we just don't know how to get into that culture to change the way that we do our meetings and change the way that we do standups and coordination and communication.” I didn't feel like we had the tooling at the time. 
So, something – I knew that at some point there would be probably a forcing function to basically catalyze something to allow that to work. And I'm curious to know what that process was like there. CHARLES: I wish I could say that there was a process other than experiencing the force of the forcing function and then being forced into it and then just kind of dealing with it. I have not taken a poll of the other remote employees of which now I am one, at least for the time being. So, I don't want to speak for them. But it was less painful than you might imagine. And the reason is because – and it's one of those things you actually gave me this analogy back, probably three or four years ago and I love it – is sometimes you're hanging off of a precipice and you don't realize that your toes are two inches off the ground. And then all you can perceive is the precipice and you feel the weight of your own body concentrated on your fingers gripped to the ledge. And you don't focus on the fact that, actually, the fall is only two inches long. And that's kind of what we experienced with the remote culture. Now, I don't want to say we were Pollyanna about it and didn't realize that this was the step that we were taking and making sure to check in with the remote employees. But one of the things is our communication styles were already very asynchronous both for our client work, for our internal work, using mostly Slack and GitHub pull requests and issues – certainly for the development portion, very little changed. What we didn't realize is that because of our involvement in open source, we were already acclimated to a distributed work style. We just didn't really realize it. We didn't have to change much. I think where we have a lot more work to do is kind of integrating people socially and making sure that conversations don't happen that aren't available for other people to consume asynchronously. So, if you're having some architecture problem and you're sitting next to somebody, you'll take that avenue rather than let it play out in chat or over email. And there is definitely a certain portion of that, but I think we still do a lot of pair programming. That's still our major mode. I'd say 75% of our code gets written as people collaborating. And so, while those in-office discussions do happen, the ramifications circulate rather quickly. And most of those are in the context of people pairing inside the office. Does that make sense? BRANDON: Mmhmm. CHARLES: So, I don't think the office and the physical space were as much of a bottleneck as we thought they might be. And so, because of the – a lot of people did work from home already because of the traffic. And we were involved in open source. And our communication with our clients is usually – we don't currently have any clients in Austin. So, that's all to say that the transition was actually quite natural. And I think there's some strong analogies between collaborating in open source and having a remote culture in your office. I think what we need to get better about is making sure that we get the team together at least twice a year, everybody together. Making sure that people are able to understand their priorities and get to circulate around and get introduced to a bunch of different people. And yeah, I don't know. There's definitely a lot of work to be done on the non-development front. BRANDON: It's interesting. The agile approach to things is to try something.
I'm starting to think the agile and the scientific method are related where it's like, “Here's a hypothesis. Here's the experiment. Here's what we think we want to learn,” and then you learn it and you take the next step based on that information. And that failure is an option. I think that's the point of agile, is to make failure safe because it's small and you're guaranteed to learn from it. Like, the point is to learn. And so, I really, I'm starting to think that those are just basically the same thing. That agile is like the application of the scientific method to product development. And it sounds like you're being agile or experimental about your work. And the trick is, like any scientific discovery, the trick is in coming back around to it and analyzing it and deciding whether this was successful or a failure based on feedback and finding what the measurement was that you were trying to improve. So, the lesson there was, “Oh, people become disconnected from each other. We need to gather everybody for an all-hands periodically.” We didn't use to have to do that because all-hands was every week, at least. CHARLES: Right. Yeah, everybody was constantly – there was a constant chatter and you could just kind of, the context was just all sitting at that one table, in that one room on 38th Street. And all you needed to do was dip your ear into that pool of context and you're set. Whereas that's just not an option right now. So yeah. I think the danger with agile is not being concentrated in your experimentation. I think what gave us our fear about saying we're going to do remote work – because I remember we always talked about it. We danced around the issue – was are we going to lose who we are? We have a set of way that we do things. And there is power in kind of sticking to the framework of the way that you do things. Because you understand it and you know it. So, when you're pushing and you're experimenting, being able to say, “We're going to – push and we're going to focus on this one area and we're going to iterate on it and we're going to keep everything else static,” it's going to be the wall that we can walk along. But we are going to push in this area. And so, I think the dangers of you doing that in all the areas of your business or all the areas of your project, you're iterating and refining, nothing ever gets done. And so, it's kind of like once you get to some ground that's solid, when you do start iterating it, you start introducing instability. So, when you go remote you have to start thinking about remote work, whereas we didn't have to think about that before. We were essentially, the feature of saying that we were a one-office company and an on-site company is we didn't have to think about that problem. BRANDON: One thing that you were just taking about is this idea of concentrating so that your experiments are happening one or two or maybe three at a time instead of trying to run five experiments at a time. And yeah, there's another danger I think in agile of seeking local optimization where you're basically like – it's like taking a bacteria and running it through many, many, many iterations that's targeting one thing and it mutates into this weird thing that only does this one thing. Or a dog breed that the whole – did you see that, I don't know where this came from but there was some scientific findings that there was a dog that was bred in ancient prehistoric times that was bred to turn a spit to roast meat over. 
So, they bred a dog that the whole point of this dog was to turn a spit so that people could roast meat and go to sleep and let their dog serve it, cook for them I guess? CHARLES: Wow. BRANDON: That's pretty impressive. CHARLES: I would say like their dystopia is in the past. Or certainly canine dystopias. I guess we live in a canine dystopia. BRANDON: Not in my house. CHARLES: Not at your house. BRANDON: This place is known as a canine paradise. So yeah, I think that's a really interesting point though, that limiting the number of concurrent experiments so that you can actually respond to them in a meaningful way instead of just being like, "Wow, we learned a bunch of stuff we're doing wrong. Anyway, back to the grind." CHARLES: [Laughs] Yeah. BRANDON: Back to sucking at everything. CHARLES: Right, right. BRANDON: That kind of feeds into a lesson that I have learned very, very, very recently in the interview process for looking for my first real job in over a decade. And that process is very humbling. And one of the humbling experiences was being rejected for a job from a very notable larger former startup here in Austin. And their interview process is really buttoned up. I got really deep into the interview process and at the end of it they're like, "Oh, you're not technical enough." And it was really, it was like, I don't know. It was hard for me to process at the time but it's super easy now to look back and go, "Oh, I was definitely not a fit for that type of job if being able to write JavaScript on a whiteboard without the aid of Google to solve problems and refactor code is like a fundamental part of what is valued in a manager there." That's just not going to be me. But one thing I – and it wasn't a colossal waste of time. There was a ton of time and energy I invested into that specific process, but I actually derived a ton of value out of it. Because every person I met there was focused on the same thing: their culture of making experimentation inexpensive so that everything there is framed in terms of an experiment. What's the experiment here? What's the hypothesis? What's the expected outcome? How soon can we get to a place where we can validate that outcome? So, it's kind of like everything is really lean. And yes, it does – like I asked, "What's the dark side of that?" and it can lead to optimizing for a local maximum. So, you have to pause every once in a while and reflect at a larger scale. But it changed my attitude about a lot of stuff. I tend to walk around fearing failure. That's more my speed. I'm afraid of failing because failure can be catastrophic. But that's because I take big swings at stuff. When I go give a conference talk, it better be the best conference talk of my life. When somebody's like, "Oh, that was the best conference talk I have ever seen," I'm like, "Ah. I'm so glad you said that because if you'd said literally anything else I would have collapsed internally." You know? The stakes are so high for everything. And making it safe for yourself to fail by treating things like an experiment and working with my teammates. And so, two or three scenarios over the phone in a week when I was managing the team at my last company, somebody would bring something to me and I'm like, I instantly went to all the reasons this probably won't work. "Here's the problem with this." And I thought, and I immediately turned around and went, "Wait a minute.
Bring me a hypothesis and the experiment and how we can experiment with this thing.” And he's like, “Well, we could try this next week and we'll know whether or not this is a good idea.” And we tried it the next week. It was like organizing an architecture team because we were waiting to hire an architect. And the results were mixed for reasons I won't get too deep into. But the fact was, it gave us the freedom to try things. And I'm trying to carry that spirit around with me now. It's been really eye-opening. So, completely like, just a 4% alteration in the way that I think about problems, but it has the ability to dramatically alter the trajectory of how I solve things in the future. CHARLES: So, do you include now inside the planning process experiments? Like, a certain number. BRANDON: Absolutely. CHARLES: So, the typical “enterprise” development is we have our features, we're going to do them in this order because they're this priority. And then agile comes along and it's like, “You need to take these things and you need to break them up into small chunks so that they can be accomplished in small time slices,” so that you don't basically bark up wrong trees. Or explore [inaudible]. BRANDON: Yeah, but that's almost like a stupider version of waterfall. CHARLES: But exactly. That's exactly my point. Whereas the problem is, there's no avenue for experimentation in there. Rather than saying the entire team is marching in this one direction that meanders around and focuses in on the local maximum, which hopefully is relative to the market landscape is the absolute maximum, saying, “We're actually going to be marching in one major direction but we're going to be sending out scouts at all points.” If you were actually – I've actually been reading a lot of ancient military history. And It's just insane that an army, or even a detachment, would go all in one clump. They're constantly sending out people. Information is really, really, really important. BRANDON: That's an extremely, extremely good point. I've actually – it's so funny, because I've used a very similar description where we are trying to chart a course to this ocean of opportunity somewhere. And we can't just send the whole team in a direction hoping that the ocean is in that direction. We have to have our Lewis and Clark. Somebody has to be the cartographer. Somebody has to be the explorer. And that means that there has to be a little bit more freedom for those explorers. I don't yet know how to translate that into software terms. I just know that that's a collaboration usually at most companies between product and development. That product is doing some of the exploring of the space and then development is doing some of the exploring of the technical capabilities and possibilities there. CHARLES: So, you see it. What's interesting is you see it in product planning, kind of in the large, with the waterfall. You see it in huge organizations. They have a research and development department. And I wonder if agile kind of saw the Balkanization of your feature set into very small component parts. Can you take the exact same principle and Balkanize your research and development and integrate it into micro-iterations? We have this R&D but we're going to integrate it into our day-to-day and week-to-week process. BRANDON: I think that is a really noble goal and I think I see some people making progress toward that. 
The company I interviewed with does it almost to a pathological degree where there is a point of diminishing returns where you're sort of bound to this process of experimentation. And at a certain point you can only achieve incremental results. CHARLES: Some of these problems, you just need to be able to think about them for a long, long time. I actually didn't read, I actually didn't see the talk. But everything from the title, Rich Hickey's ‘Hammock Driven Development', just that title resonated with me so much. I was like, “Yes,” because sometimes you just need to be in the hammock for six hours at a time. Or in the shower. Or hiking. Or doing whatever it is that you need to do to put yourself in a zen state where you're just, your brain is slowly turning its wheels. And it can follow every lead to its conclusion without any interruption. And sometimes that process can take hours. Sometimes it needs to take weeks. BRANDON: Right. I want to kind of pivot on that. Because that's actually one of the biggest things that I've learned in the intervening time since leaving Frontside, which is creating space instead of trying to maximize – one thing that I did when I was at Frontside and then did again at my next place and I'm realizing is really has long-term negative implications is cram as much into a work day, as much output out as possible. I'm very output-oriented. I want to jam as much into my day as possible. I want to jam as much software out the door as possible. And people describe working at Frontside while I was there as one of the most intense work experiences they'd ever had. Literally, I can project that, literally just from my own intensity of trying to cram all that stuff. And providing that space for developers to ruminate on hard problems, on some of the harder problems they encounter, providing space for managers, I've learned that a big chunk of what it is to be a manager is to be available. And so, I actually want to write a sign – I was on the fence about doing this but I think I'm actually doing to do this – I have an office and I'm going to write a sign and put it up on the door that says, “If I look busy, interrupt me and remind me I'm not doing this right.” So, creating the space to ruminate or to be available for discussion, people that protect their breathing room sometimes are made fun of, especially in American corporate culture. I walked in and they were just reading a newspaper. What the heck are you doing at work if you're just going to read a newspaper? Like no, this is actually really important time. CHARLES: I think it's, yeah, it's something that I think about a lot. And I know I've shared this analogy with you before. I don't know if I've done it on the podcast. But I saw and I can't take credit for it. I actually saw it at DevOpsDays I think in 2013. There was a woman giving a talk and she was just talking about managing developers. But one of the things that she was saying was that if you looked at a microservices architecture or you looked at just even your operating system, and if your CPU was constantly pegged, you were squeezing out 100% of every time slice, instructions were just flowing through that, you're going to have a very unhealthy, very brittle, very prone to failure software system. If our microservices were not available to actually service requests, and service excess requests, and service spikes of requests, then something is fundamentally wrong. 
BRANDON: I want to add to that a little bit, because the thing that I noticed in managing a team where I received a ton of pressure to peg everybody out at 100% – and it jived with my philosophy at the time of, “Hey, I'm 100% guy. Everybody I work with is 100% type people. And then, let's peg everybody at 100%. This is a startup. Let's get everything going,” and I realized very, very quickly that if you don't preserve a little buffer, 20% buffer in that level of intensity, there is no ability to share resources. Everything is now a silo. So, if you're going to peg all your CPUs out, part of that thrashing is that there's no time for people to share things with each other. And people become very protective over their little silo all of a sudden. And it causes us – it's actually like the first stage of a catastrophic cultural collapse if everybody's pegged out at 100%. And literally, just dialing down the intensity is often the only thing that's necessary to get people to feel comfortable sharing some of their time with each other. You do a really good job of that with the lunch and learns. You mentioned that y'all are doing better thoughtful lunch and learns and stuff like that. It's one of those forcing ways that you can force that and say, “Hold on. Stop the development and do some stuff where you're actually sharing things with your teammates.” CHARLES: Yeah. And we do that. My biggest concern is that that actually increases the intensity. So, one of the things we've done is we used to actually be very formal about our lunch and learns. It's like, “We've got to generate content and put it out on the web so that people can see us.” We backed away from saying – we're not going to do them as often and make sure that people can actually do them. Yeah, making sure that people don't feel overwhelmed by, “I've got a lunch and learn coming up.” The point is to share something that you're passionate about and maybe introduce some really cool ideas to ferment in people's head. Rather, that's kind of the goal. There are certain things that we do very much feel interested in generating content. But I think, we've kind of been dancing around the ideas of distributed computing and IoT and what are some of the others? BRANDON: If you say blockchain, I'm going to just virtually punch you in the face. CHARLES: [Laughs] I actually didn't. Did I say blockchain? BRANDON: No. I just was waiting for you to say it. CHARLES: Okay, no. I haven't. Well, because that's – but it is distributed computing in Web 3.0, right? These problems – and we're actually going to be podcasting about this next, so in two weeks you can tune in to listen to us talk about blockchain but in the context of distributed computing – and one of the things that we're seeing is now we're starting to pay the price of outsourcing all of our lives to these central services like Facebook and Google and Amazon. And I think now they're starting to build a credible and more mainstream movement to wrestle back that control and say, “What would it mean to have software as a service that wasn't actually dependent on some central thing?” What would it look like to have Slack where it's Slack that looked like email? Where everybody had their own email server, maybe not a bad example. But you've got an email at Gmail or Microsoft or Yahoo or your company-run that's big enough its running its own Outlook client or something like that. Email is actually a really great example. 
Now probably people are going to crucify me for saying this, but I think it's actually a good example of a distributed system that's worked well. I own all of my email. All the messages that you send to me, I own, and all the messages I send to you, I also own. But you also own the messages that I send to you. Information is duplicated. And it's fine. If I send you an image, yes it's on your hard drive or it's on your Google Drive. You send a message to me, it's got an attachment, I also have that attachment. But the point is that we can each own our email and we each own our email service. And we can change it up. That's not possible with Slack. That's not possible with Facebook. That's not possible with all these other sharing platforms. All of them are controlled by this one thing. And so, I think that that's something that we've been exposed to through the lunch and learns and I'm actually certainly very excited about it. It's not something that we're going to be investing in immediately. We're kind of dancing around that idea. But that's something that's come out of that. So yeah, we've kind of refocused it on, what is something that you feel good about? But back to the original point, I think that this is something that applies on all fronts. If you have a business where you can't actually take opportunities because you don't actually have people – so there's maxing out at the individual level, filling up people's workspace with client work or filling it up with what have you or having them work nights and weekends. There's individual maxing out but then there's like maxing out of your business. So, if you have – we're a consultancy – if you have 100% utilization or you're shooting for 100% utilization, that everybody is placed on a project, that is a brittle and unsustainable system. BRANDON: I wish you would have told me that 18 months before I left there. There were like two years where we were at 100% for two solid years. CHARLES: Yeah, yeah. We're still at 100%. BRANDON: Yeah. I wonder what would have happened if we'd had a little, if we had figured out how to build in space. CHARLES: Part of the problem – so, here's the thing though. Space, nice space costs nice money. BRANDON: Yeah. CHARLES: And so, that's the thing, is you have to charge more. And you have to say, “We are going to be more expensive than other people.” You have to be dedicated to be at the forefront of a cultural battle, essentially. In the same way that people were with testing, where it was very [controversial]. BRANDON: Yeah. You were with CI. CI is a given now, right? CI is… CHARLES: Yeah, like [inaudible]. BRANDON: This idea was semi-revolutionary when you and I were talking about this in 2012, 2013, that we ship to production on the first day. We don't even start building software until the CI system is set up. The first thing we do is set up Jenkins and tests and get everything, the pipeline working. And now, that's just what people do. By and large, that's how software is expected to be built. And the tooling has really come up around that. But that was an expensive way to sell software five years ago, that, “Hey, this is going to cost more than bringing in Cowboy Bob and having them come jam in your console for 40 days and ship a bunch of stuff that then will most likely collapse and you won't know about it and Cowboy Bob has ridden off into Juarez, Mexico.” CHARLES: Right, with his saddlebag stuffed with your cash. BRANDON: Yup. CHARLES: Yeah, no. 
So, you have to – the problem is, you know when you pick these battles, you need to be prepared to fight the war of attrition of they're not going to be able to perceive the value for six months, a year, right? You're going to have to ask your clients to bet on this strategy. And it's a bet. And you're going to have to say, “It's going to pay off in six months. It's going to pay off in a year.” And you're really going to start raking in like five years. That's when… BRANDON: Yeah. Try making that pitch to a startup founder that is borderline, that is on the verge of an anxiety attack, and you can kind of just figure out what my last year was like. And the… CHARLES: So, that's one of the reasons we don't really work with startups anymore. They have a five-year plan, but not really. BRANDON: Yeah. CHARLES: They're fighting for their survival. And they're fighting for the opportunity to have a legitimate five-year plan. And so, in that sense, it's maybe not a good fit for the way that we develop software, because you either need an extraordinarily prescient founder who has been through this before, knows the true costs of software development, and is pretty well-funded so that they can actually – because we're more expensive upfront, like a lot more expensive upfront and so sometimes they flat out don't even have the cash. And that's something that you can make a quick, “It's not a good fit,” but then there also needs to be this understanding and an acknowledgment that what you're really shooting for is your five-year dividend. BRANDON: Yeah. It is really interesting, the turn that occurs when a company finds product/market fit. By then it's too late to fix the problems. So, it's really tricky to find the balance of: how much energy do you put into the success case for a company before they have product/market fit? How much time and energy do you invest in betting that this is going to be successful versus betting that if it is successful, hopefully we'll have the time, money, and resources to redo a bunch of the things that we are going to have to apologize for later? And I think that's what makes… CHARLES: Right. Like, where do those two lines cross on that graph? BRANDON: Yeah. Because you and I have both seen startups completely sunk by somebody who was overly focused on building a scaleable architecture in a company pre-product/market fit. That is a common story where an engineer that doesn't understand the business value of what they're doing and only focused on “quality” will absolutely torpedo, they'll chew up your first million and a half of funding and leave the place in just a smoldering pile of ashes at the end. So, it is tricky. It's totally a difficult thing. But I think coming back to your point of being sort of a vanguard of cultural, the tip of the spear on somebody's cultural changes – DevOps would be one. People that were really investing in DevOps culture in 2010, 2012 saying, “Hey, this, automation, is the future of how software gets shipped, maintained, observed, supported.” And so, now it sounds like, so what is your big bet for the future? CHARLES: Boy. That's a great question. There are two bets. One you're going to like, one you're going to vomit. BRANDON: [Laughs] CHARLES: But that's okay. BRANDON: Yeah. I don't work for you. CHARLES: You need to serve, what is it? You need to serve the spiny urchin with the yellow tail. BRANDON: Is that a Sonic the Hedgehog reference? CHARLES: It's just a sushi reference. BRANDON: Oh, okay. CHARLES: Some people don't like urchin. 
Or maybe they don't like eggs. What it like, the roe that come with sushi. But they're on the same plate. So, I would say the first one that I've been thinking about a lot is optimizing for capacity and being able to handle spikes and not being at 100% both for people and for utilization. I think that's something that is – I don't see how you could have a healthy software development process if people are completely spiked on delivering, heads down delivering features for product. That is something that I'm betting on. Essentially, you could call it the 25% time but it's really about having excess capacity to exploit opportunities as they arise. And then being protective of that excess capacity. Because you can exploit an opportunity. Your CPU has a spike load up to 100%. But then make sure you [inaudible] down to 50% at some point, or 75%. And so, I would very much like to see Frontside have a bench where people can rotate out and they're working on different stuff that are not even client-related. They can recharge their creative tanks. They're not going to be idle. BRANDON: Yeah, I've really come around on – and I really hated this at the time – but I've actually come around to the thoughtbot style of working on a product where – because owning and managing a product and developing it as a side quest, the goal is not necessarily for that product to catch fire and become the world's next big thing and to replace your consulting revenue. The goal is to give people a sense of – think about all the stuff that you've learned in your side projects that you went back and brought to your work. And some of my biggest gains as a developer have come from having a side gig of some kind, some side project that that's how I learned Ember. That changed my life. And I would never have gotten to try it if I was waiting for somebody at work to tell me it was okay to do it. So, it's about taking that permission back for yourself and giving yourself permission to try stuff. So, it could be something like that, or it could be the content stuff that y'all do. Or it could be conference talks. It could be whatever. But the goal isn't necessarily to produce things that have a direct return. It is to create the space to allow people to flex some muscles of creativity that you may not get in your day-to-day work. And that's very difficult to offer to people in any company. Now having explored startups and larger companies, but I would say especially in a consultancy where the exchange rate is dollars for days. It's sort of like when I was freelancing. I could feel every vacation I took draining both real money and opportunity money out of my bank account. That's such a hard, difficult thing to do. And so, you actually have to create the budget ahead of time and say, “This budget is allocated to these things and it's already spent.” Anyway, that's really tough to do. CHARLES: It is hard. BRANDON: If you can exercise the discipline necessary to do that and create the environment for that, I would say you're ahead of 90% of companies in the industry. CHARLES: Yeah. Yeah, so that's something I definitely want to bet on, because I think that's where the best things come from. BRANDON: Okay. So, what's the thing I'm going to hate? CHARLES: Functional programming. BRANDON: Oh, Charles. Okay, I have to stop you. Do you know what I'm doing? Did I tell you this yet? That I am participating. When I told them this, I was like, “Charles is going to have a field day with this,” but I am participating in a Haskell study group. 
CHARLES: No way. BRANDON: And I'm like four exercises into this thing. I have to do four more for next week. And I'm like, “This is bizarrely easy, actually,” after as much JavaScript as you and I did in sort of a functional style and then learning Elixir. And I was like, “Wait a minute. The case statement is, Elixir just stole Haskell's case statements.” So like, so far I'm not finding functional programming to be onerous. Or anyway, but we'll see when we get to the static typing. But so far, I'm not getting any of that in the earlier lessons of the book. CHARLES: Yeah, the static typing. But the thing is, you can do – it's not 100% necessary. It isn't in Haskell, for sure. But I'm surprised. What inspired you? BRANDON: We have an architect at the office that was like, “Hey, I want to do sort of a functional programming book club.” So, we have a Slack group for FP study group. CHARLES: Are you doing ‘Haskell: From First Principles'? BRANDON: No. That one was a little actually intimidating. CHARLES: Really? BRANDON: Yeah. It gets into the lingo a little early. And we're doing one called ‘Get Programming with Haskell' that is a little more – ‘Haskell: From First Principles' is kind of math-oriented. So, for somebody with a math background but not necessarily a programming background, it's perfect. But for somebody with a programming background that is just trying to understand functional programming principles using Haskell, ‘Get Programming with Haskell' is actually a really great option. CHARLES: Okay. Actually, I have not heard of that one. BRANDON: The stuff that I'm looking at looks just like Elixir. So, it's early. But it's very comfortable so far. CHARLES: Yeah. So, this is the thing. It's all a matter of messaging and marketing. Because I really feel – so, it is like there are a lot of behaviors that you see sometimes in currently entrenched functional programming communities that I think are, well I think they're objectively repulsive. But I think they're also pragmatically repulsive and that they repulse potential community members. But I think a lot of it too is people talk about these things that are, they use abstruse terminology. And they're kind of chattering back and it's very jargon-oriented. And there's just – people operate with a different set of concrete things. So, when you and I are talking, for example we might talk about a Rails controller and that's a very concrete thing. You know exactly what I'm talking about. It's something that you have held in your hand, literally. Remember when we got that Rails codebase that came as a thumb drive? BRANDON: Yes I do. CHARLES: But the point is you knew that this had a Rails codebase on it. There were any number of controllers. And when I say controller to you, a controller is an abstraction, but not really. Once you work with an abstraction long enough, it becomes concrete. And so, part of the problem is just a mismatch in language where people are talking in their world about concrete things, things that you can touch and you can feel and you can exchange and they're very relatable. But from another person's perspective, they're talking about something that's totally abstract and totally opaque and totally what have you. And so, I feel like yeah there's a huge mismatch there. And that's been one of the big bets. The other big bet that I'm making is on this trying to make what is currently abstract to JavaScript and Ruby developers be concrete. 
And I think that we're going to see type classes like functor and monoid and semigroup and all these things, they're abstract to you now, become concrete over the next five years. And so, that's something that I'm betting on. BRANDON: Check out this – and I know that you have a good relationship with the people that did the other book, but it really does tend to come from more of a mathematical background. And this one actually does speak to people with JavaScript, Ruby, Python experience. Like, “Hey, here is how you will perceive these things.” And so, it's much more approachable. I'm still in the first unit of the book. But having sort of tasted it a little bit, it's like, “Wait a minute. This is actually extremely familiar and not super intimidating.” CHARLES: Exactly. And that was kind of – so, I read the other book. And I think I was also aided by the fact that I tried to learn Haskell probably for five times in the past. And so, I also had the benefit of jumping against the wall with the velcro suit and bouncing off four times. And fifth time, it stuck. So, I had just temerity on my side and a general feeling. But that's definitely – the lesson that I actually came away from reading that book was like, “Oh, there's a mismatch in concrete concepts.” It's using concrete concepts that are concrete to people with a CS background or mathematics background, or people who are brand new. Honestly, people who are brand new to programming who don't actually have JavaScript or Elixir or Ruby or any other thing to lean on, I think that the First Principles book is actually pretty decent for them, too. Because they don't have anything to compare to. BRANDON: They don't have anything to unlearn. CHARLES: Yeah, they don't have anything to unlearn whereas one of the things I took away was I was like, “Oh, man. I'm using semigroups all the time. This is something that I do constantly.” When I'm coding, I might do it eight times in a day. I just didn't have a name for it. BRANDON: Right. They're like design patterns, just at a micro level. CHARLES: Yes, micro-design patterns. Yeah, it's like a RESTful architecture for your code. In REST you only get five verbs. There's five methods, man. That's all you got. BRANDON: Okay, so those are two bets. And I want to cover one more thing because I know we're super overtime. But the last thing I want to be able to say about talking about what we've learned since I left Frontside but I want to put a bow on that. So, the two things that you're betting heavily on are functional programming as a basis for solid architectures in the future, like the work that you all are doing. And… CHARLES: I would also like to say, and this is something – let me just add one more thought. What I don't understand, and this is in no way like, I don't understand people who do the, “Saying goodbye to framework X.” That's not me with object-oriented programming. BRANDON: Often abstractions are like oversimplifications but they're really useful, sort of like Rich Hickey's Simple versus Easy. Like, “Hey, there's a lot of promise with that metaphor. It's a leaky abstraction but it's a useful abstraction.” And Gary Bernhardt's ‘Functional Core, Imperative Shell' is a leaky abstraction but it's a useful abstraction. If people haven't seen or experienced that, it's pretty good. The subtlety is that these are tools that are suited to certain situations a little better. And those same situations can exist in the same codebase, can exist in the same program. CHARLES: Yeah. I still, I love Ruby. 
I adore it. And in some ways, I've been researching functional programming and it's been going on for the last four years. So many times, people are like, “Oh, I just can't stand this tool anymore.” And I'm like, “Man, I still love Java.” I don't understand how learning to love something decreases your love for something else. BRANDON: That happens the first two times that you fall in love, is that you feel like you have the old thing less in order to love the new thing. And then you start realizing, “No, you are allowed to fall in love with new things without falling out of love with the old things.” I would almost use that as an interview question. Is there some way to use that as a way to gauge somebody's actual real concrete maturity as a developer? Because that is a mark of maturity. CHARLES: Yeah. I mean, you could say, “What's some tool that you no longer use that still informs your day-to-day routine?” BRANDON: Yeah. I guarantee you, people that were doing Smalltalk in the 80s think about it all the time. CHARLES: [Laughs] Yup. Yeah, exactly. Exactly. BRANDON: Alright. So, I want to cover one last thing. CHARLES: It's part of growing, right? If you're going to grow as a developer, you can't be shrinking at the same time as you're growing. Otherwise, you're like the same size, just in a different place. BRANDON: However, you don't get any Medium think piece points. Nobody does the one clap, two clap, forty, for blog posts that are like, “Why I'm still using some programming language but using one a little more than I used to use and this one a little less.” CHARLES: [Laughs] Zero claps. BRANDON: Yeah, zero claps on that think piece. I just want to cover one last thing before we wrap this up, and it is the fact that Frontside, the biggest gift that Frontside gave me was the mission for the next 20 years of my career. I think it could change, but I'm pretty confident about this, at this point. Being approximately 20 years into my career, I feel like I kind of have a feeling for what the next 20 years is about. And the Frontside really drilled that into me and helped me focus it and helped me dial it in. And it is this idea that there is an incoming generation of programmer that thinks about things differently than the previous generation in a pretty radical way. Because the previous generation all came out of the same schools. They all look the same. They all have a similar shared set of values in general. They created the Sil- – you know, I'm not actually going to be overly critical of the Silicon Valley culture that exists now. It is a result of the type of people that came out at the time that value innovation over almost anything else. People talk about ripe for disruption. The fact is, that has been an engine of economic growth and progress for society in a lot of ways that has a lot of costs that weren't factored in by a bunch of people who all thought the same way. And now, with people coming through code schools and people coming from different backgrounds and people coming from different environments, they're looking at programming and software as either an economic opportunity or something they didn't see that they could possibly do. Those doors were not open to that group of people before. There is a natural influx of people but many of them are bouncing out because they're not finding that group of people, they don't have a shared enough set of values that the people that are new are coming in and finding job opportunities, finding promotions, finding leadership positions. 
And so, I know now that my mission over the next 20 years of my career is to create those opportunities for people that have different backgrounds from me and different experiences. The career tracks, the promotions, endorsing and supporting and kind of sponsoring this incoming group of freshmen into our industry that come from different places, different backgrounds, different problems that they care about solving. They want to figure out how to solve the Flint, Michigan water crisis instead of delivering socks to people in Silicon Valley, you know? So, I feel like we're at the beginning of a sea change in the value system potentially of our entire industry. But that's going to require training up the next generation of technical leadership. And I felt like the best thing I could do right now is learn to be a better manager, because I really like that job. And it provides the opportunity to find, hire, sponsor, promote and encourage those people to move into their own leadership positions. There are lots of other things that a person could do, you could be a VC and care about that stuff. You could have lots of different positions and put yourself in a position to do that. You could be a consultancy owner. You know what I mean? There are jobs that you can do in which you can accomplish that goal. But it gives me such a sense of direction that when I'm looking for a job, I was looking for a home for that mission rather than just the thing that I felt like doing. Like okay, this job is important to me because I need it to house me and this mission so that I can support my family but have enough emotional overhead to participate in community stuff, but enough ability to lead within an organization, enough influence to actually push that agenda. So that the next generation of people are making better companies. So anyway, all of that came out of my time at Frontside where you and I sat around talking about: how do we build a place that is like a monastery? These were your words. You remember this? We want a monastery for code where people can just focus on becoming better developers. And underneath that though was the sense that this was a place of opportunity for people that might go somewhere else and stagnate as a developer. This will be a place to accelerate them. And so, that kind of spun me out and accelerated me into my mission. So anyway, I just wanted to point out that that was, with a bullet, the most important thing that came out for me in my time at Frontside, was that it clarified for me what I was trying to accomplish with the next couple of decades of my career. CHARLES: Wow. Well, that's fantastic. You definitely did a lot of that both here at Frontside and I mean you're continuing to do that. I definitely want to see more public speaking from you. Maybe some [inaudible] perfect. [Inaudible] at EmberConf was actually fantastic. But I mean, you're also able to help people find their mission, too. Like the talks you gave at Keep Ruby Weird and even really, the first talk you gave at LoneStarRuby about moving Ember. It's always, how do I adapt what I'm feeling to my overall mission and then relate that back to technology? Man, I just can't wait. I can't wait. When are you going to hit the road again? BRANDON: I think this is the year. I'm going to start thinking about this stuff. I'm looking at the stuff that I wanted to talk about on this podcast and I was like, “Oh no, wait. That's like a dozen podcasts.” Like, no. Absolutely not. Not possible.
I will say, I miss so much, this time that I spend with you. I don't want to let it go. I really miss working with you. I really miss having these conversations whenever I want. This has been a very, very special privilege for me to be able to do this with you today. And congratulations on Frontside continuing to thrive and grow and become more of its own entity and more of its own special flavor. And it makes me really happy to see the people coming out of there, that it's still doing its mission of making great software by making great developers. It makes me real happy. CHARLES: Yeah, yeah. Hopefully we can keep on keeping on. I do miss working with you. I miss the conversations that we would have in the kitchen which are basically an extension of this podcast. But I also, man, I really, really, really, really like working with the group of people that are here today. I've just seen them producing just some absolutely amazing things. And honestly, there's a selfish aspect to it, too. I get stimulated. My own thinking and learning is stimulated by the people that I work with. And like I said, the whole side note we had about distributed systems and IoT and just a constant ferment of things. So, I still really, really, really enjoy it. BRANDON: That makes me happy. CHARLES: And I'm really glad that we got to kick it today. BRANDON: Yeah, me too. CHARLES: I thought you were going to say that your 20-year mission was to have your perfect Emacs initialization setup. BRANDON: Oh my gosh. Some of these days, I'm going to figure out RuboCop. CHARLES: Actually, do you want to pair on that? BRANDON: Yeah, let's do that. CHARLES: Alright, everybody. I'm going to sign off. If anyone wants to continue the conversation, obviously you can get in touch with Brandon. He is misspelled @tehviking on Twitter. T-E-H-V-I-K-I-N-G. Always come at him. BRANDON: Don't @ me. CHARLES: [Laughs] BRANDON: I work for a really cool company and if you ask me about it on Twitter, I'll tell you all about it. CHARLES: Awesome. And we of course are Frontside. You can get us on Twitter at @TheFrontside or just drop us a line to contact@frontside.io. And we would love to talk to you more about this podcast and all the wonderful things that we do here, which includes building custom software that you can stake your future on, that's going to be good for the five-year outlook. So with that, goodbye Brandon. Goodbye everybody. And we will see you… BRANDON: Bye Charles. I love you. CHARLES: Me too.
In this week's episode we are lucky to have Scott Wlaschin back on the show. We start off the discussion by highlighting his most recent talk on composition and some useful analogies to Lego, Brio and Unix. From here we move on to investigate function and type composition, the difference between a paradigm shift and a merely syntactic one, and the advantages of an opinionated language. This leads us on to discuss how, in application design, pushing the side effects to the edge and keeping the core domain pure is beneficial. Finally, we touch upon testing in functional languages, experiences whilst consulting and Rich Hickey's ‘Effective Programs' talk.
Jamison and Dave answer these questions: It’s been a year and I still haven’t touched the codebase. What should I do? All my hobbies revolve around computers. How do I develop other interests? Jamison mentioned Dan Luu’s article on how bad teams are always hiring. Here it is. Rich Hickey’s Hammock-Driven Development talk was also mentioned.
Toran Billups @toranb | GitHub | Blog Show Notes: 01:44 - New Developments in ember-redux 04:23 - New Developments in the Wider Redux Community 06:26 - Using Redux in Ember 09:40 - Omit 10:45 - Reducers 25:42 - Fulfilling the Role of Middleware in Ember 28:12 - Ember Data in Redux-land 31:24 - What does Toran do with this stuff?? Links: The Frontside Podcast Episode 55: Redux and Ember with Toran Billups The Frontside Podcast Episode 18: Back-End Devs and Bridging the Stack with Toran Billups redux-offline ember-redux-yelp create-react-app "Mega Simple redux” Twiddle ember-concurrency Thomas Chen: ember-redux The Frontside Podcast Episode 067: ember-concurrency with Alex Matchneer normalizr Rich Hickey: Simple Made Easy Other Notable Resources: ember redux: The talk Toran prepared for EmberJS DC in April 2017 github.com/foxnewsnetwork/ember-with-redux Transcript CHARLES: Hello everybody and welcome to The Frontside Podcast, Episode 69. My name is Charles Lowell. I'm a developer here at The Frontside and your podcast host-in-training. With me is Wil Wilsman, also a developer here at The Frontside. Hello, Wil. WIL: Hello. CHARLES: Today, we have a special guest, an actual elite member of the three-timers club, counting this appearance. We have with us Toran Billups. Thank you for coming on to the show today. TORAN: Absolutely. I'm not sure how the third time happened but I'll take it. CHARLES: Well, this is going to be the second one we're going to be talking about Redux, and then I believe you were on the podcast back in 2014 or 2015. TORAN: That's right. CHARLES: That's one of our first episodes. Make sure to get in touch with our producer afterwards to pick up your commemorative mug and sunglasses to celebrate your third time on the show. Awesome. I'm glad to have you. We actually tend to have people back who are good podcast guests. TORAN: Thank you. CHARLES: Yeah, I'm looking forward to this one. This is actually a continuation of a podcast that we did back in January that was actually one of our more popular episodes. There was a big demand to do a second part of it. On that podcast we talked about the ember-redux library, which you're a maintainer of, and just kind of working with Redux in Ember in general. We're going to continue where we left off with that but obviously, that was what? Almost six months ago? I was wondering maybe you can start there and have there been any kind of new developments, exciting things, what's kind of the state of the state or the state of the reducer or the state of the store in ember-redux. TORAN: For ember-redux, in particular, we're working on three initiatives right now. The first is making the store creation more customizable. A lot of people that come from the React background in particular are very used to hand crafting how the store is put together with the right middleware and enhancers and reducers and that's been fine. I wanted to drop people into the pit of success and everybody's cool with that but now we're getting to a point where there are people who want to do different things and it's great to open the door for those people if we can, while keeping it very simple so we're working on that. We have one here that's just undergoing some discussion. We're also, just as the wider Ember community -- you guys may be involved in this as well -- trying to get the entire stack over to Babel 6, the ember-cli Babel 6.10 plus stack.
There is a breaking change between Babel 5 and 6 so we're also having some discussions about the ember-redux 3.0 version bump at some point later this year, just because we really can't adopt this without introducing basically a breaking change for older ember-cli users. CHARLES: Just in general, this is a little bit off topic, what does it mean to go from Babel 5 to Babel 6, if I'm an add-on author? TORAN: To be sure, I would probably need to speak more with Robert Jackson about this. We just kind of went back and forth because I thought I had a Babel compile error. He's like, "No, you're missing this dependency which is the object spread." Unfortunately, the object spread is rampant in React projects and this is totally cool. I had to actually add that and that's just a breaking difference between these two. If we adopt the new version of this in the shims underneath of it as an Ember 2.43 user, if you're on node four which is still supported, you will break without this. I'm trying to get some discussion going about what we should do here and if we even should push ahead and just say only node six is supported. There's some discussion and then back to your original question, the third piece is we've introduced the ability to replace the reducer but we need to get some examples for hot reloading the reducer. That's a separate project but it needs to be enabled by ember-redux. Those are the three main initiatives. CHARLES: Being able to hot load your reducers, just to make changes to your reducer and you just thunk them into the application without having to lose any of the application state, and one of the reasons that's possible then is because your reducers have no state themselves. They're just pure functions, right? TORAN: Exactly. CHARLES: Okay. Awesome. That sounds like there's a lot of cool stuff going on. Beyond ember-redux, are there any developments to look for on the horizon in the wider Redux community that might be coming to Ember soon? TORAN: Actually, one of them is fairly new and it's already in Ember, just because I have already got a shim up for it: redux-offline. I remember you had Alex on two episodes ago about breaking your brain around Rx. I feel like that happened for me trying to build apps offline first. This is, of course, just another library that can drop in and play nicely with Redux, but I feel like the community, or at least it's got me thinking now, that an app built like that could really disrupt someone who's a big player today. If you've built a great offline experience with true and well done data syncing, you could really step in and wreck someone who's in the space today. CHARLES: Right, so this is a shim around... what was the name of the library again? TORAN: Redux-offline. CHARLES: Okay, so it's just tooling for taking your store and making sure that you can work with your store if you're not actually connected to the network and like persisting your store across sessions? TORAN: Yeah. It uses a library called redux-persist that takes care of kind of hydrating the store if you have no network connectivity. But it's also beginning to apply some conventional patterns around how to retry and how to roll back. It's just an interesting look at the offline problem through the lens of an action-based immutable data flow story. It's interesting. I don't have a ton of experience with that kind of thing. I went and rewrote my Yelp clone with it and that was tough. That's what I mean by this.
It's like I thought this is very trivial but you have to do a lot of optimistic rendering and then sort of optimistically gen primary keys that get swapped out later and it's tricky. If you've never done offline first, which I have not, I just think offline is pretty cool and along those lines, there's been a lot of discussion around convention. There's of course Create React App, which is like a little library or CLI tool to kick off your Redux in React projects. It's kind of like ember-cli, a very trimmed down version of that right now and that's just getting incrementally better. Of course, you guys are in the React space so you may touch upon that story if you haven't already. CHARLES: All right. We talked at a very high level, I think, the last time we had you on the show but now that the idea is gaining traction, let's kind of delve into more specifics about how you use Redux in Ember. I asked at the end of the last podcast, let's step through a use case like what would deleting an item look like in ember-redux land. Maybe we could pick that up right now and just understand how it all connects together. TORAN: Yeah, absolutely. Without understanding how this entire flow or this event bubbling happens, it's hard to get your head around it. The process we're going to walk through is exactly that use case you laid out, Charles. We're going to have a button in our component and that button, on click, the idea is to remove an item in a list that we happen to be rendering, let's say. If this is a child component, like literally the very primitive button that you have and you just have your on-click equal to probably a closure action in Ember, the parent component or the outer context is going to be responsible for providing the implementation details for this closure action and what it does. This is kind of the meat of what you're getting at. The high level here is there is a single method on the Redux store that you have access to and it is called dispatch. The nice thing about Redux again, the API surface area is very small. It's just a very small handful of methods you need to get your head around. This one in particular, this dispatch method, takes one parameter. That parameter is a JavaScript object. Now, if you're just playing around, you just want to see the event flow up, there's only one requirement asked of you and that is a type property. These JavaScript objects have a type that is often a string so it's very human friendly; you just put in there the string 'remove item', let's say. Now of course, if you want to remove a particular item in this remove example, you of course want to pass some information as well beyond the type. The type is mostly just a Redux thing to help us identify it but in this case, you'll definitely know the primary key or the ID value, let's say of the item you want to remove. In addition to type, this JavaScript object, let's just say, has an ID property and that can come up from the closure action somehow if you want. Once you click this, what's going to happen is you're going to fire this closure action, the closure action is going to invoke dispatch with the JavaScript object and dispatch is going to run through the reducer which is the very next step and what we do is we take the existing state. Let's say we have a list of three items, that's going to be the first argument now in the reducer. The second argument is this action which is just, as I describe it, a JavaScript object with two properties: type and ID.
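To make that flow concrete, here is a minimal sketch of the closure action and the dispatch call being described; the 'REMOVE_ITEM' type string and the wiring are illustrative assumptions, not code from the episode or from ember-redux:

    // Somewhere the store is available (ember-redux injects it as a service;
    // here it is simply passed in to keep the sketch framework-agnostic).
    function makeRemoveAction(store) {
      // This is the closure action the parent context hands down to the button's on-click.
      return function remove(id) {
        // `type` is the only property Redux requires; `id` is our own payload.
        store.dispatch({ type: 'REMOVE_ITEM', id: id });
      };
    }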
You can imagine an ‘if' statement or conditional switch statement that says, "Is this the remove item action? If it is, okay." We have the ID of the item we want to remove and since we have a dictionary where the primary key of the item is the ID, we can just use lodash-omit and we basically use omit to filter out the ID and then use Object.assign to transform or produce a new state, and then a callback occurs after this that tells your list component, somewhere in your Ember tree, to re-render, now only showing two items. CHARLES: One of the things I want to point out there, you just touched on it but I think it's an interesting and subtle point, is that the lodash method that you used was omit and that's how this is kind of tangential, or I'll say parallel, to programming in this way: you don't actually use methods that mutate any state. You calculate new states based on the old states. I think that's a great example of that -- that omit function -- omission is the way that you delete from something in an immutable fashion. You're actually filtering or you're returning a new copy of that dictionary that just doesn't have that entry. You're just omitting it. You're not destroying the old one. You're not deleting anything. You're not changing it. You're just kind of Xeroxing it but without that one particular entry, which is ironic because the effect on the UI is that you have done a delete but really what you're doing is omitting there. I just think that's cool. I think it's one of the ways that using these systems teaches you to think about identity differently. Then the question I want to ask you was, this all happens in the reducer, what does that mean? What does that word mean -- reducer? I kind of like danced around that idea and I've tried to understand where that term came from and how it helps give insight into what it's doing and I come up short a lot. Maybe we can try, and if we can't explain the name 'reducer', maybe there's some alternate term that we can help come up with to aid people's understanding of what is the responsibility of this thing? TORAN: Yeah, I think we can just break down reduce first then we'll talk about how it ends up looking. But I think it's 'reduce' almost like it's defined in the array context. If I have an array of one, two and three and I invoke the reduce method on that, we actually just produce a single value, sort of flatten it out and produce a single value as the result of that, so three plus two plus one, six is the end result. What we've glossed over this entire time, probably since last episode, is that this reduce word, I believe, is used because in Redux, we don't really just have one massive monolithic reducer for the entire state tree. We instead have many small reducers that are truly combined to do all the work across the tree. In my mind, I think reducer fits well here because we're actually going to combine all these reducers and they're all going to work on some small part of the state tree. But at the end of the day, we still have just one global atom and that's the output. We want one global atom with a new transformed state and that is the reduced state.
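As a sketch of the reducer step just walked through (the state shape, action type, and initial state are assumptions for illustration; the lodash omit and Object.assign calls are the ones mentioned in the conversation):

    import omit from 'lodash/omit';

    // The array analogy from above: [1, 2, 3].reduce((sum, n) => sum + n, 0) === 6

    // state.items is assumed to be a dictionary keyed by id, e.g. { 1: {...}, 2: {...}, 3: {...} }
    const initialState = { items: {} };

    function reducer(state = initialState, action) {
      switch (action.type) {
        case 'REMOVE_ITEM':
          // omit returns a new dictionary without the given key; nothing is mutated,
          // so the old state is still around for undo/redo.
          return Object.assign({}, state, { items: omit(state.items, action.id) });
        default:
          return state;
      }
    }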
They talk about reduction so that you have an Erlang server where the way that the server is modeled is as a recursive function call where you pass in the prior state of the server plus any arguments if you're handling a request or something like that, bundled in there is going to be the arguments of the request, and then what that function returns is a new state, which is then passed in to the next call. They call this process reduction. We've got two data points. Maybe there, we can go search for the mathematical foundation of that later -- TORAN: I like it. CHARLES: -- if you want to geek out. I think that helps a lot. Essentially, to sum up, the responsibility of the action is: you take a set of arguments, it's going to take the existing state of the store, run it through your reducers and then it's going to set the next state of the store or yield the next state of the store. Is that a fair summary of what you would say the responsibility of the action is? TORAN: Yeah. I think you're right. In fact, in preparation for this talk, I just threw together a really small Ember Twiddle, which we'll link in the show notes, what I call the mega trimmed down version of ember-redux. It's basically a really naive look but for conceptualizing this flow, it's about 24 lines of code that show exactly what you're saying, which is I have this reducer, it's passed into this create store method; in the syntax, how does this actually look? It better illustrates how the reducer is used when dispatch is invoked. So a dispatch, if I was actually to walk line by line through this, which would probably be pretty terrible. But the very first line of dispatch is to just call the reducer. From that new state transformation, we just push in, so the store gets a new entry into its array of "here's the next state" and because we never tampered with or side-effected the old state, we could easily go back in time just by flipping the pointer in the array, or indexing the array back a point. CHARLES: I guess my next question is we've talked a little bit about immutability and we know this reducer that we call at the very first point of the dispatch is a pure function. We're dealing with pure functions, immutable data but at least in perception for our users, our system is going to have a side effect. There's going to be calls to the network. We are, at least in theory, deleting something from that list. How do you go about modeling those side effects inside Redux? TORAN: This is a great intro because in fact a friend of mine is actually a teacher at a boot camp and he was telling me the other day that he was asked to do a brief look at Redux and his very first feeling when he was watching some of the Egghead.io videos is like, "Oh, so the reducer is pure but I have to side-effect something so where do I do this?" It's not very clear, I think, for the very beginner, which is why we left it out of that part one podcast. Today, we're kind of hitting that head on, but before we get into that list, we can talk about what actions in their simplest form look like today in your Ember app. As I mentioned earlier in this remove example, you've got the button, it just takes the closure action on the on-click. No big deal. The bigger work was on the parent context or the parent component to provide this action, which sounded very simple but imagine instead of just dispatching synchronously, which is what we talked through.
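The Twiddle itself isn't reproduced here, but a store trimmed down to roughly what is being described might look like this sketch; the details (keeping every state in an array, the '@@INIT' action) are illustrative assumptions rather than the actual Twiddle code:

    // A naive, illustrative createStore. dispatch runs the reducer first, then pushes
    // the new state; old states are never mutated, so "undo" is just indexing backwards.
    function createStore(reducer) {
      const states = [reducer(undefined, { type: '@@INIT' })];
      const listeners = [];
      const getState = () => states[states.length - 1];
      const dispatch = (action) => {
        states.push(reducer(getState(), action));
        listeners.forEach((fn) => fn());   // tell subscribed components to re-render
      };
      const subscribe = (fn) => listeners.push(fn);
      return { getState, dispatch, subscribe };
    }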
Imagine we only want to dispatch that official change if we have gotten a 204 back and the fetch request has deleted it on the server -- normal Ajax or fetch-type flow. In this case, you start to add a little more code and imagine for the moment this is all inline in your component JS file. The component now has started to take on an additional responsibility. In addition to just providing a simple dispatch, as I say, "Go and remove this, Mr Reducer, later," you're now starting to put in some asynchronous logic and as you imagine a real application, this grows: try/catch stuff, some error handling, some loading, some modals. This gets out of hand. One of the things that I want to touch on briefly is, at least in the ember-redux case we ship this Promise-based middleware and I want to stop right there for just a second because I use that word 'middleware' and immediately we've got to at least highlight what this is doing. In that pipeline -- we talked about earlier -- in the component I dispatch and it just goes right to the reducer. Well, technically there's actually a step or an extension point right before the reducer is involved and that is where middleware comes in. Technically, you could dispatch from this action and then you could handle and do the network IO type request in middleware instead, then fire off another dispatch of a different type with the final arguments to be transformed. That's really the role. CHARLES: Can middleware actually dispatch its own actions? TORAN: Correct, yep. In fact, one of the first big differences between this and the example where I'm kind of hacking around in the component is I've got access to dispatch, but there's two things I'm really actually lacking if I don't leverage middleware full on. The first is I do have some state in the component but often, it's actually just a very small slice of state that this component renders. If there's actually a little bit more information I need or I actually need to tap into the full store, I don't have it and that might be considered a good constraint for most people. But there are times, you imagine more complex apps, where you need the store. You might even need a little bit more state; middleware provides that, whereas you're trapped with just that slice of state and the dispatch keyword. That's about it in the component. But the other side effect, or the benefit, as you break this out, is you get another seam in the application where the component now is not involved with error handling and Promises and async flows or generators. It just does the basic closure action set up and fires dispatch almost synchronously like you did in our very simple example, allowing the middleware to actually step in and play the role of, "Okay, I'm going to do a fetch request and I'm going to use a generator." It's almost like the buffer for IO or asynchronous work that was missing in our original equation. Imagine you want to debounce something or you want to log something or you want to cancel a Promise, which you can't do, any of that stuff that's going to happen in this middleware component. That's one of the things I like about middleware as I learn more about it and the moment you get to a very complex async task where you're actually doing the typical type ahead, where you literally want to cancel and not do the JavaScript work or you'd like to cancel the Promise as quickly as you can, you can very quickly dive into something kind of like what you and Alex talk about with ember-concurrency in Redux-land. It's called redux-saga.
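Before the redux-saga discussion that follows, here is a sketch of the Promise-based flow just described, written thunk-style; the endpoint, the action names, and the assumption that a thunk-capable middleware is installed are all illustrative (ember-redux's own Promise middleware is not reproduced here):

    // A thunk-style action creator: the middleware sees a function instead of a plain
    // object and calls it with dispatch, so the network IO lives here, not in the component.
    function removeItem(id) {
      return function (dispatch) {
        dispatch({ type: 'REMOVE_ITEM_STARTED', id: id });          // e.g. show a spinner
        return fetch('/api/items/' + id, { method: 'DELETE' }).then(function (response) {
          if (response.status === 204) {
            dispatch({ type: 'REMOVE_ITEM', id: id });              // the pure, undo-able change
          } else {
            dispatch({ type: 'REMOVE_ITEM_FAILED', id: id });
          }
        });
      };
    }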
It's just generator-based async work.

CHARLES: Has saga kind of emerged as the async library that everybody uses? I know it's very popular.

TORAN: Yeah, and for good reason. It solves a lot of the problems you'd run into if you tried the cancellation-token Promise stuff that came out a while back, when we were trying to figure out how to cancel Promises. There's a lot more ceremony and a lot more state tracking on your own than with generators. Even when I played with this last week, there's also redux-observable, which is an Rx-based middleware. It's built by, I think, Ben Lesh and Jay Phelps from Netflix or... sorry, Jay is still at Netflix, but anyway... You could use Rx, you could use generators. This really is just the escape valve for async and complex side-effect programming that can't or shouldn't take place in the reducer because it's pure. It shouldn't necessarily take place in a component either, because one of the best pieces of advice I got when I was younger was, "Toran, make sure you do or delegate," and that's really about levels of depth in your methods at the simplest. But it applies here as well: I would love it if I had a very declarative component and I didn't have to get into the weeds about, is this a Promise? Is this redux-thunk, as they call it? Is this a generator, or is it Rx? I don't even care in the component, for the most part. I just need to know the name of the action and the arguments. If I'm having a problem with the Rx side or the generator, I'll go into the middleware and work from that particular abstraction, but you can see the benefits of the seam there.

CHARLES: Do the middlewares match on the action payload in the same way that the reducers do? Is that fair to say?

TORAN: That is fair, and I will warn you: if that seems very strange, you're probably not alone. In fact, the first time I did this with redux-saga, I was dispatching, only to turn around and dispatch again. It feels very strange the first time, but again, keep in mind that you're really trying to have a separation between the work that is side-effecting and the work that is pure. The first action in that scenario -- we'll call it remove-saga because it's actually going to fire something to a middleware -- that work is all going to be network-heavy and it's not really as easily undone and redone because it's not pure. But the second event, invoked from the actual middleware itself, says, "Remove item. Here's the ID. We're good." That work can be undone and redone all day because it hits the reducer, which you can step in and out of.

CHARLES: It sounds like basically the middleware is allowing you to have a branching flow structure, because they all do involve more actions getting dispatched back to the store to record any bookkeeping that needs to happen as part of that. If you want to set some spinner state, that will be an action that gets dispatched. But in terms of sequencing, they allow you to set up sequences of actions, or you basically have one action that will actually get resolved as ten actions or something like that. If you think about an asynchronous process, you have the action that starts it, but that might end up being composed of five different actions, right? Like, I want to set the application into some state that knows that I've started my delete, and that means I want to show the spinner.
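For readers who haven't seen redux-saga, the two-dispatch pattern Toran just described might look roughly like this. takeEvery, call, and put are real redux-saga effects; the action types and the removeItem helper are hypothetical.

```javascript
import { takeEvery, call, put } from 'redux-saga/effects';

// Hypothetical network helper: the side-effecting work lives outside the reducer.
const removeItem = id => fetch(`/items/${id}`, { method: 'DELETE' });

function* removeItemSaga(action) {
  // Network-heavy, not undoable work happens here.
  yield call(removeItem, action.id);
  // Then dispatch the pure action that the reducer can undo/redo all day.
  yield put({ type: 'REMOVE_ITEM', id: action.id });
}

export default function* rootSaga() {
  // Match on the action type, the same way a reducer would.
  yield takeEvery('REMOVE_ITEM_REQUESTED', removeItemSaga);
}
```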
Then at some later point, I might want to show progress like this deletes really taking a long time and I might want to dispatch five different actions indicating each one of those little bits of progress. Then finally, I might want to say it's done or it fails so really those got decomposed into 10 actions or five actions or whatever so the middleware is really where you do that, where you decompose high level actions into smaller actions? Or it's one of the places? Is that a correct understanding? TORAN: Yeah, I think if you're an old school developer for a minute, it will cater to the audience that maybe came from early 2000s backend dev. Now, they're still pretty current in web dev. I see it talked about as business logic. I feel like this is really the bulk of the complex work, especially if you're using Rx. You're actually creating these declarative pipelines for the events to flow through. My components are much thinner by comparison. They're truly just fire off this action with the information to kick the async pipeline but in the async piece of it, there's a lot more work happening and that's I think because there's a lot of complexity in async programming. CHARLES: Right, and it's almost like with the reducers then, there's not so much business logic because you're just resolving the implications of the new state. Is that fair to say? It's like now we've got this information, what does this imply directly? TORAN: Yeah, I think there is this old [inaudible] thing where they're talking about what's should be thick. You know, thick controllers or thick models, what should it be? Of course, we never want 'thick' anything, is the right answer, I think. But the apps on building today, I feel like if any was thick-er -- a measure of degree bigger in effort or work -- it is these middleware components right now. I think the nature of what you describe, which is the reducer is not supposed to be doing anything complex. It's literally taking a piece of data in, producing a new piece of data out. Logically thinking about that takes much less effort, I think than the human brain applies to async programming in JavaScript. CHARLES: Right. I think it makes sense and some of these things are just going to be necessarily gnarly and hairy because that's where the system is coupled. I can't say anything about whether the delete succeeded or failed until I've actually fired off the request. Those are implicitly sequenced. There has to be some glue or some code declaring that those things are sequenced. That has to be specified somewhere, whereas theoretically with your reducers, you could just run them all in parallel, even. If JavaScript supported multithreading, there's absolutely no dependency between those bits of code. TORAN: I think so, yeah. I think there are still some challenges because in the reducer sometimes. We can talk about this in a few minutes but you may actually be changing several top level pieces of the tree. If you're de-normalizing, which is what we probably should touch on next, there are some cases where you want to be a little careful but like you said, generically immutable programming enables multithreading. We're not touching the same piece of state at same time. CHARLES: Right as long as that piece of state that you're touching, like you need to resolve the leaf nodes of the tree first but at any siblings, I guess is what I'm saying on there, ought to be able to be resolved in parallel. 
It's more an exercise in theory, or just a way of thinking about it, because the reason you're able to write those reducers as these pure functions is that there's no dependency between them. I guess I'm just trying to point out, to wrap my head around it, that there are places where there are just clear sequences and dependencies, and those are the things that would be in the middleware.

TORAN: Gotcha. I came in a little scared of service workers. [Laughter]

CHARLES: Actually, a great question is: what is the analogous thing -- if there is anything analogous -- in Ember today that's fulfilling the role of middleware? What's the migration path? What's the alternative? Just trying to explore where you might be able to use these techniques that we've been describing inside your app.

TORAN: At least my look at it has been a service injected into a service, which sounds completely bad or broken the first time you see it, because you're like, "We're injecting a service into an existing service." I say that because, for me at least, there is a top-level service that owns the data and provides read-only attributes, but there should be some other piece of code -- not the component -- that is doing this asynchronous, complex processing we just talked about as middleware, and that is often a different service than the service that owns the state. That's what I meant by service-injected: there's some Ember service whose job is to manage the complexity that would probably have ended up in middleware from the Redux perspective -- or ember-concurrency is literally solving that, in my opinion. They do a lot for you in solving the async problem generally, and I haven't dug into ember-concurrency enough to know. The pipeline stuff, which I think you guys talked about, which is an RFC, may eventually be what I'd consider the Rx or redux-saga of that middleware role today.

CHARLES: Right. I think ember-concurrency is just absolutely fantastic -- it is a hairy problem -- but there's some overlap in terms of what it is solving. I think that is interesting. I guess a case where you would use middleware would be anywhere that you would use ember-concurrency. The interesting thing to compare and contrast there -- one of the maybe advantages or disadvantages, let's just call it a tradeoff -- is that with ember-concurrency, you have this middleware that is associated with an object. It's usually associated with a component or a route. When things happen to that component, you're able to affect your ember-concurrency process, but it does mean these things are sprinkled throughout your application, and the rules governing them can be really different depending on which part you're operating in: sometimes you're using them on a route, sometimes on a component, sometimes on a service. Whereas with putting it in the middleware, it sounds like they're all going to be in one particular place. All right. Let's move on from the simple to the more complex because that's where it's at. We've talked about modeling async processes, we've talked about handling state transitions and all that; nothing typifies that more profoundly in the Ember community than Ember Data, as a foundation for state and syncing it over to the network. Love it or hate it, it's very popular. What do you do in ember-redux land? One, is it fundamentally incompatible with Ember Data, or is it just more easily served by other alternatives?
How do you handle those foundational interactions -- those fun, foundational async loading and network interactions that Ember Data handles -- when you're just using ember-redux?

TORAN: Yeah, for myself, I don't have any experience actually using the two together on purpose. There is a gentleman who did a talk sometime last year -- I'll dig up the YouTube clip for you guys -- where he talked about his approach, where you would actually produce new states so it's still Redux-friendly. Ember Data itself would just produce a new Ember Data model every time you transformed it. But one of the tricky points is the philosophy of both: in Ember Data, you're just invoking set on everything, and that's just how it works. That's how the events bubble through the system as you re-render. You never actually create a new state of the system that's a copy, minus or plus some attributes. You're just always touching a single source of truth. I felt like that was always a sticking point. Anyway, Thomas, who did this talk -- and I'll point you guys to it -- did a great job of saying, "Look, if you're stuck in this world with a lot of Ember Data, you're having some pain points with it and you want to try Redux to see if it alleviates those by not changing the state, here is a middle ground," and I think he did a fabulous job fleshing it out. Although I must admit, there have got to be some challenges in there just because of the philosophical difference between the two.

CHARLES: Yeah. It definitely sounds like there are some challenges, but I'm actually pretty eager to go and watch that, to see what they came up with. If you're using these snapshot states of your Ember objects, would it be possible then to take all of your saves, delete-records, even queries, and have them inside of middleware -- like have a redux-saga for every single operation you want to take on the Ember Data store?

TORAN: The example he showed is basically the best of both worlds. You don't want Ember Data to mutate, so he has a special bit of code to handle that. But because the rest of it is vanilla Ember, you could drop in ember-concurrency if you want to do the saga-type generator stuff. But you could also just make your changes as you would otherwise. You use the adapters, you fetch, you save, you delete, whatever you want to do, for the most part. It saves a lot on the de-normalizing side, which you would otherwise have to do manually. You don't write any Ajax code, which you have to do manually on the Redux side. I think there are benefits if someone could get it to work where you're just not changing the state of Ember Data -- which may actually just be the future of Ember Data at some point as well.

CHARLES: It sounds like there is a pathway forward if that's the way you want to go and the road you want to walk, so we'll look for that in the show notes. But my question then is: you're here on the podcast, what do you do?

TORAN: I do want to have one disclaimer here, just so that I'm not a complete poser -- but I am. If you guys don't know this, I'm not trying to hide it from the community, but I don't work on ember-redux at work. I don't have a side gig making money with it. I don't use it, ever. I literally just build examples to try stuff out. There's both a blessing and a curse to that. The curse is that you're asking, "Hey, Toran, you're the author. How does this work, man?" I can give you my run at it, which I will, but the other side, which is very clear, is that I have not built and shipped Facebook -- or the app at the current company I'm with -- with X million people hitting it every month.
We're not using it. This isn't exactly a 'Toran stamp of approval' here, but I do mess around with -- this week in particular -- Rx, which I like. I think Rx is just something that changes the way you think about how programming, especially async programming, works. I can't comment much on Rx other than that I like Alex's challenge to the community on your podcast: go use Rx, whether you use ember-concurrency or not, and break your brain. It will be for the better. Actually, Jay did a mini code review with me, because my first pass at Rx was just using fetch and Promises. I was like, "I want Rx for side-effect modeling but I want it to still work with Ember acceptance testing," because I still feel like Ember is leading in that way, as you guys talked about on the podcast recently. What was really cool is that there's a shim -- Rx has its own little Ajax thing -- that is not actually Promise-based. The advantage, as Jay called out, is that in the Promise-based approach, where I'm using ember-fetch, let's say, and just wrapping it with Rx, those Promises are still not really cancelable. What Jay was warning me about was, "If you're going to use this quick and dirty, great, but in a real app these will still queue up in Chrome or IE and block the number of network requests you can actually make." So don't use Promises, even though they're very familiar; use the operator, or helper, inside Rx which is the non-Promise-based Ajax operator. Long story short, there's a good amount of work involved -- the grass is always greener. In Ember Data, if you've ever used 'belongs-to' or 'has-many', you have done the most magical thing right there. In all the right ways, it is amazing, because once you're in Redux you're like, "I have this very nested object graph," but Redux isn't meant to operate on a nested object graph. It's meant to operate on a single tree structure with as many top-level entities as it can. There's a project that's pretty popular in React called normalizr, and you will likely use it eventually. Maybe not in your first 'Hello, world', but you'll use it to actually break apart, or truly de-normalize, the structure. A lot of times, if you have a blog with comments nested all inside of it from your JSON API call or your GraphQL call, that's fine coming from the network. But since you're going to have different components listening for just the comments, maybe, or different components that just listen and re-render when the blog name changes and don't care about the comments, you want to actually de-structure that so you have a separate blog top-level item and a separate comments top-level item. They're still related, so the blog can get to its comments and vice versa, the way belongs-to and has-many work in Ember Data, but you've got to do that work now. There are, of course, magical projects like redux-orm -- I just can't speak to how well they work or don't work -- that try to solve it with the more Ember Data-like look at the problem, which is: define this and let the ORM take care of the magic for you. I actually don't mind normalizr. It's just something people should be aware of because it's more work. You've got to break that apart yourself, just as much as writing your own network requests. You're hopefully not going to duplicate Ajax all over the place, but you will have to do work that you otherwise would not do in Ember Data, for sure.

CHARLES: It's very interesting.
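A small sketch of the normalizr step Toran describes: the nested blog-with-comments payload gets broken into top-level entity tables that point at each other by id, roughly the relationship shape belongs-to and has-many give you in Ember Data. The payload here is invented for illustration.

```javascript
import { normalize, schema } from 'normalizr';

// Declare how the nested payload relates: a blog has many comments.
const comment = new schema.Entity('comments');
const blog = new schema.Entity('blogs', { comments: [comment] });

const payload = {
  id: 1,
  name: 'My Blog',
  comments: [{ id: 10, body: 'First!' }, { id: 11, body: 'Nice post' }]
};

const { entities } = normalize(payload, blog);
// entities.blogs    => { 1: { id: 1, name: 'My Blog', comments: [10, 11] } }
// entities.comments => { 10: { id: 10, body: 'First!' }, 11: { id: 11, body: 'Nice post' } }
```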
If you look at the Ember Data internals, it sounds like the Ember Data store is actually structured very similarly to the way you had structure a Redux store using something like normalizer where you have these top level collections and then some mechanism to both declare and then compute the relationships between these collections of top level objects. But I want to go back to your other point too. I just wanted to say this. Toran, you know, don't sell yourself short because you give an incredible amount of time, an incredible amount of support to the community. You're very active in the Ember Redux channel. When problems come up, you think about them, you fix them. Even if you're not actually watering the trees, you're planting the seeds. I think that's actually great. I think that is a very valuable and necessary role in any community is to have people who are essentially the Johnny Appleseed of a particular technology. I think you go around and you throw these seeds around and see where they take root, even if you're not there. You're on to the next shady lane to plant seeds, rather than stay and enjoy the shade and the fruit of the apple trees. TORAN: Yeah. I appreciate your kind words there because a couple of years ago, I got into open source because it provides good personal branding. It's like, "This project, it's Toran's. We should hire Toran." It just makes you look more from that perspective. It also gives you almost a way to skip out on tough interviews at times if people are like, "This guy have a decent program. Let's take a look at his PR. He communicates with other humans online." It gets rid of some of that. But there is a dark side. We don't talk about it because there is an upside to it, especially for consulting but the dark side can be time commitment: how much bandwidth do you have outside of your family life, your hobby, if you have one and any other open source or work-related stuff that you already have to do. For me, this is really an exercise in thought leader-y stuff. I saw the benefits of this. It made my Ember better. Even if I wasn't using Redux, it just made the Ember code I wrote at work better. It inspired me to look at different things like ember-concurrency and Rx, things that are just way out of my comfort zone two years ago. I think those are all the benefits that come with it but the easiest part is got to be some value from it. The juice is it worth the squeeze. I think the community we've built and the people using it and the problems we're solving are all definitely worth the squeeze. CHARLES: It definitely is and you can tell from the vibrancy of that community that a lot of people are experiencing that value from it. To your point, I think something that is often lost on people is that you can actually use a project without actually using it. I think that there might be many people, for example in the Ember community that have never use React but are actually in fact using it because of the wonderful patterns that have come out and it's brought to the fore. I had thought about immutability, certainly on the server side but I had thought about it really deeply on the client side until a library like React came along and people started talking about it. I would say before we actually started using React the library, we were using it in thought. You touched on that when you were saying, "It's changed the way I think, it's changed the way I code." 
Even it's changed the way that I do things at work, in fact you're using it in spirit, if not the actual structure but I almost feel like that's more important. It's longer-lasting and has a greater impact on you, 10 years down the road or even five years down the road when neither of the technologies that we're actually talking about today are even going to be in wide use. TORAN: Yeah, that's true. In fact the one thing I would call out that people check out, I think Alex mentioned this or at least you guys have to talked about it in passing but definitely, sometime this weekend, watch the Simple Made Easy talk by Rich Hickey. It will definitely make you think differently, regardless of the simple side or the easy side that you follow on, projects of course make tradeoffs both sides of that but it is a great talk. Especially, if someone who's been programming six months or a year or two years, they're going to get huge benefit from it, just as much as someone older like myself who has got 10 plus years in the biz. CHARLES: Yeah, I know. That is a fantastic talk. We need to link to it at every single show. TORAN: Exactly. CHARLES: Well, I think that is a fantastic note to end on so we will wrap it up. That's it from Frontside for this week. We're going to have you back, obviously Toran there's so much that we could cover. Six months down the road, we'll do part three but for now, that's it. Thank you, Wil. Thank you, Toran for podcasting with us this morning. TORAN: Thanks for having me on guys. I really appreciate it. CHARLES: Then everybody else, take it easy and we'll see you all next week.
Alex Matchneer: @machty | FutureProof Retail Show Notes: 01:07 - The Introduction of ember-concurrency 02:15 - What is ember-concurrency? What are the problems it solves? 05:37 - Why not use observables or other alternatives? 09:49 - Could observables be used in conjunction with ember-concurrency? 12:16 - Simple Made Easy 14:23 - Coming Soon to ember-concurrency 16:04 - Communicating Changes in State; Glimmer Reference Primitives 23:09 - Using References 29:31 - Submitting RFCs; Adding Pipelines 32:10 - Pipeline Use Cases Resources: ember-concurrency The Frontside Podcast Episode 007: The Ember Router with Alex Matchneer The Frontside Podcast Episode 019: Origin Stories with Tom Dale and Alex Matchneer Introduction to ember-concurrency by Alex Matchneer from Global Ember Meetup RxJS Rich Hickey: Simple Made Easy Glimmer.js redux-saga Lauren Tan's RFC: Cancellable task pipelines Railway Oriented Programming Apache Kafka Transcript: CHARLES: Hello everybody and welcome to The Frontside Podcast, Episode 67. My name is Charles Lowell, a developer here at The Frontside and podcast host-in-training. With me today also is Elrick Ryan, a developer here at The Frontside. Hello, Elrick. ELRICK: Hey, what's going on, Charles. CHARLES: Now, we have with us today someone who we love to have on the show. Everybody probably already know him. I know the first time I actually heard about him was when we had him on the podcast the first time, I was like, "Who the hell is this guy?" But since then, he's become one of my favorite developers, just with all of the things that he's done, from Router.js to more recently ember-concurrency. We have Alex Matchneer on the program. ALEX: Hey, everybody. Thanks for having me. CHARLES: Hey, Alex and you know what? I pronounced your name right this time. First time out of the gate. Boom! ALEX: Nice. Which one did you go with? Matchnear? Matchner? [Laughter] ALEX: I really actually don't even know which ones correct anymore. CHARLES: Was it about a year ago that you first introduced ember-concurrency? ALEX: Yeah, I had a really embarrassing introduction of it at an Ember Meetup in January before it was really done and I just kind of botched it and didn't really introduce why it was even solving problems. Then a month later, I had some time to refine it, driven by the feel of that embarrassment. I guess around February of last year, it's been pretty much in its present state. CHARLES: I remember when it came out. I must've seen the non-botched version because I remember hitting the ground running with it and being able to refactor all of this code. I definitely know that I got the honed version because you provided in that initial blog post a whole host of examples like what are the symptoms, what are the cases where it solves and then before presenting the solution. I think that was great because I didn't even realize that I had a lot of pain. I didn't realize that at all. I didn't realize I had a problem but then you were very, very elegantly packaged the problem with the solution which is always great because otherwise, it's just complaining. Maybe we should talk a little bit about -- I don't think we've officially talked about -- ember-concurrency. Even though it's been out for quite a while, the way that you model these concurrent processes using the stack is just pretty incredible. Do you want to just very briefly touch on what the problem is and what have lead you to this solution? ALEX: Sure. 
It's a little bit difficult to sort of succinctly say what ember-concurrency is because it kind of hits them like five different separate but kind of related but not really pain points. At its core, it's just like a task primitive and it's definitely not the first library to ever introduce that the JavaScript, I think particularly when the generator function syntax was introduced into the spec, I think a few years back. Dave Herman who's also known as, I think a Little Calculist. I think he works on the TC39. I always get those groups a little confused in my head but he introduced a task.js library that let you use the generator function syntax and then lets you yield Promises to sort of pause where you were in that task and then continue when it resolved. It had some support for cancellation. It played well with Promises and I brought that to Ember in a way that fit really nicely with Ember more than it probably does or will with other frameworks like React or Vue. By bringing it to Ember, basically if you're implementing any feature that involves async, if it's a button that needs to show that it's been clicked while you're waiting for some response to come back from the server, instead of using Promises, instead of using actions, here's an ember-concurrency task. It makes it easier to express that operation you're trying to do and it makes it really easy to drive your UI with state that comes from the state of that operation -- Is the test still running? Is your form still submitting? -- Rather than having to manage a bunch of mutable flags and properties on a component or state yourself and likely get it wrong. CHARLES: Right. Essentially asynchronous processes is like a state machine and before, we were kind of managing that state machine by hand but I think what's so brilliant about this task-oriented programming, I guess is maybe a way to put it because I really think that some of these ideas are universal and not specific to ember-concurrency. But it almost like it uses the stack, just your normal programming stack to track where you are inside of a process, rather than what it felt like what we were doing before, which was managing this state machine by hand, if that makes any sense. ALEX: It does make sense a lot of sense. A lot of people ask me, if you're going to go into sort of async territory, why aren't you using something like RxJS? Rx is observables and kind of popularized by the Netflix crowd who did a bunch of presentations on them. It's super popular these days. But one of the things I really like about RxJS or at least one of the realizations I had is that I think you're still building a state machine. You're just expressing it using different primitives. In Rx, you're still building a state machine but in Rx, they make you think about it in terms of streams and events firing over time. In ember-concurrency, also you're still building a state machine but you're using the generator function syntax and the call stack like you mentioned as another way of expressing that state machine but with a lot less code. CHARLES: Right. I was actually talking to someone about ember-concurrency just a few days ago and they were saying the same thing, "Why not use observables," and at least from my perspective, maybe I didn't quite understand the question because I feel like observables are kind of only one of the concerns that ember-concurrency addresses. 
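As a rough illustration of the task primitive Alex describes, here is approximately what a save flow looks like with ember-concurrency: the async work lives in a task, and the template reads derived state such as isRunning instead of hand-managed flags. The component shape and model are assumptions for the sketch.

```javascript
import Component from '@ember/component';
import { task } from 'ember-concurrency';

export default Component.extend({
  save: task(function* (model) {
    // Yielding a promise pauses here; if the component is destroyed,
    // the task is cancelled automatically.
    yield model.save();
  }).drop(), // ignore extra clicks while a save is already in flight

  actions: {
    submit(model) {
      this.get('save').perform(model);
    }
  }
});

// In the template, {{if save.isRunning "Saving..." "Save"}} can drive the
// button label with no extra mutable flags on the component.
```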
I'm curious when people talk about alternatives to ember-concurrency and put observables forward, maybe I don't understand it because I usually think you might be able to use observables to register the currently executing task state and every time it changes, emit a new state and is then observed by your observable subscribers. But modeling the actual process using observables does seem weird to me because with observables, they seem like very purely functional and not heavily stateful. I don't really have that much experience with it. What's meant by using observables as an alternative? Maybe we can get more into those like how you would construct a stream or something like that? ALEX: I think the canonical Rx example of something that's elegantly expressed in Rx that would be really hard to do in just normal JavaScript, if you weren't able to use observables, is that typeahead search where as you type characters into a text field, it's already beginning to hit the server and see what you might be searching for so it can drive the state of some drop down menu. That's probably the most popular example out there because one of the things it demonstrates is that if you want to debounce, you want to allow for the user to stop typing for like 200 milliseconds before it actually hits the servers so you don't overwhelm your server, then just add a debounce operator. You've basically transformed a stream of keyboard events into that text field into something that only kicks off after it hasn't gotten an event for 200 milliseconds. If you already had a working prototype in vanilla JS and you had to debounce it, you've got to move a bunch of stuff around, you've got extract something into a function, you've got to deal with cancellation. But all those things are kind of pretty elegantly built into Rx and if you can train yourself to think in terms of streams of events, that inspires you to think about where else you could apply that in your app. I think a lot of people have felt that it's like winning, most powerful abstraction that you could think about. That's why things like cycle.js are a thing or redux-observable or just anybody working with observables in the Netflix territory. I personally find [inaudible]. If you're going to express certain processes, Rx is the way to go but it has drawbacks which is it is really hard to learn. It took me a very long time and I'm pretty good at it but if you're going to adopt Rx in your code base, then a new developer comes on, it's going to be a pretty long time. In my experience, sharing some of the Rx code I've written with fellow very talented developer, it takes a really long time to explain how to invert your thinking and think of things in terms of events. If you can get to that point, more power to you but what I found with ember-concurrency stuff is you don't have to completely invert your thinking and think of everything in terms of events and streams of events. You can use this task primitive which feels really pretty close to the code you're already writing but gives you a lot of the safety guarantees and just makes it really easy to use this derived state to drive templates. Rx is a powerful paradigm and sometimes you need that sort of event-driven push based model but I think when people wonder why aren't you just using observables, they haven't really grasped how easy and familiar it is to use task and get it right on the first try and with a lot less code. CHARLES: Right. 
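Here is the canonical typeahead Alex mentions, sketched both ways as a rough side-by-side: first with RxJS pipeable operators, then with an ember-concurrency restartable task. The /search endpoint and the renderDropdown handler are made up.

```javascript
// Rx version: a stream of input events, debounced, with stale requests cancelled.
import { fromEvent } from 'rxjs';
import { debounceTime, map, distinctUntilChanged, switchMap } from 'rxjs/operators';
import { ajax } from 'rxjs/ajax';

const input = document.querySelector('#search');

const results$ = fromEvent(input, 'input').pipe(
  map(event => event.target.value),
  debounceTime(200),        // wait for the user to stop typing
  distinctUntilChanged(),   // skip if the text didn't actually change
  switchMap(query => ajax.getJSON(`/search?q=${encodeURIComponent(query)}`))
  // switchMap drops the in-flight request when a newer query arrives
);

results$.subscribe(results => renderDropdown(results)); // renderDropdown is hypothetical
```

```javascript
// Task version: the same flow, close to the canonical ember-concurrency example.
import Component from '@ember/component';
import { task, timeout } from 'ember-concurrency';

export default Component.extend({
  search: task(function* (query) {
    yield timeout(200); // debounce: restartable() cancels this wait on each new keypress
    const response = yield fetch(`/search?q=${encodeURIComponent(query)}`);
    return yield response.json();
  }).restartable(),

  actions: {
    onInput(query) {
      this.get('search').perform(query);
    }
  }
});

// The template can read search.isRunning for a spinner and
// search.lastSuccessful.value for the results, with no extra bookkeeping.
```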
You're able to leverage the fact that I understand what a JavaScript function looks like and the sequencing is implicit by just the order in which you were numerate the steps, right? ALEX: Right. Because I think that Rich Hickey of Clojure popularized the divide between simple versus easy and Simple Made Easy is one of his popular talks that everyone should probably -- CHARLES: It's a great talk. Yeah. ELRICK: Do you see an area where observables could be used in conjunction with ember-concurrency? ALEX: It's kind of. It's been hard for me to find that use case. Probably, there's a handful of use cases where maybe it's a little awkward to think about to have something that would be elegantly handled in Rx would be model using tasks but it really hasn't struck me enough in some of the apps that I'm building, to really try and flesh that out. CHARLES: I would be curious to see a side by side comparisons. I build a lot of auto completes using ember-concurrency. I built a lot of asynchronous processes using ember-concurrency. What would that look like using nothing but Rx and just be able to have it on the left-hand side of the paper, then Rx on the right hand side of the paper are easy. ALEX: I'd be very surprised if you could find an Rx example that is less code than the task equivalent because as much as I think the autocomplete example is the best canonical example of Rx, once you actually start making something that's production-ready, you want to start driving the button state while it's running or to show a loading indicator. When you start deriving other observables off of the source observable which is the user typing into the text field, you start having to worry about, "I'm dealing with a cold observable. If I create another stream based on it, it might double subscribe and I might kick off two things. I actually want to use a published.ref version of the stuff." To actually get away from a toy example into something that's actually production-ready, requires a lot of code. From my own conversations with the people working on Rx, there's a lot of people that are working on it and they're pragmatic about it and don't think that you have to be just purist functional all the way. But when they actually ship production code, they usually resort to using like the do operator. With Rx observables, which is basically an escape hatch to let you do mutations and side effects in what is supposed to be this monadic functional thing. If the paradigm is breaking that quickly to do production code, I'd wonder if maybe there is something better out there. I just kind of keep that in mind but I'd definitely think there should be a bake-off or comparison of how you do things in both the task paradigm and observable paradigm but I think you'd find that in most cases, just do a lot more with a task, with a lot less code. CHARLES: I want to go back to the point you were about to make about Simple Made Easy. ALEX: On the divide, ember-concurrency is very easy. I still choose easy. In the case of reservable, I'm constantly choosing easy over simple and then it always helps me out because I've made that decision. I think most people inspired by Rich Hickey from the Clojure community, would look at ember-concurrency and be like, "At a task that combines derive state and does five different things seems kind of gross. 
Why don't you just use observables," and the result of that if you follow it through is that you end up writing a bunch of observable code that is a mess in streams and going in different directions and you've written something that's really hard to understand, even if it's seasons Rx developers looking at the code. It's just very easy to write things that are tangled. CHARLES: It's always good to have simplicity but also a system that simple without ease, I think is far less useful because like I said, it's always going to be a tradeoff between simple and easy but the problem is if your system is too simple, then it means that you're shouldering your day-to-day programming task or shouldering the complexity and you have this emergent complexity that you just can't shake because your primitives are just too simple. You could be programming in assembly language or something like that. That's really simple. You need to be able to construct simple primitives on top of simple primitives so for your immediate need, you have something that is both simple and easy, if that's ever possible. Certainly, ember-concurrency is easy and I think it just means there's maybe work to do in trying to tease apart the different concerns because like you said, there are five. But in real complex systems, there are five bajillions, maybe teasing apart those individual concerns that is composed out of simple primitives. I'm sure you've thought about that a little bit of how do I separate this and make these tasks compose a little bit better and things like that. ALEX: This is a nice segue because it might tie into some of the work that's going to be going into ember-concurrency in the next few months. A big theme of EmberConf is actually, a lot of people are joking that it should just be called GlimmerConf because a lot of it was talking about how Glimmer is going to be this composable subset of Ember, like Glimmer is going to be the rendering layer and then there might be a Glimmer router and a bunch of these Glimmer components that once you npm install them, you get Ember. Glimmers is a chance to think about Ember as a bunch of components working together under a really nice rendering layer. There's definitely some interest in bringing ember-concurrency in thinking what is so-called Glimmer-concurrency going to look like. Part of thinking about that is going to mean teasing apart some of these details as you were just saying. I don't have a lot of specifics to give right now, just that there is a lot of interest in making sure at the very early on, there is some sort of Glimmer-concurrency equivalent. Generally speaking, as part of that process is the question of how do we bring these magical ember-concurrency parameters to just Node or just vanilla JavaScript in general. Perhaps you could use these kinds of tasks on a server and in other environments. I think there's some questions of the way the ember-concurrency bundles together derived state with the actual tasks runner, are you actually going to use that derived state in the server setting? I think some of these pieces are going to have to come apart a little bit. I don't have very concrete ideas for how that's all going to look in the end. Just that I have faith that it will happen pretty easily and the result of it is going to be something that fits pretty nicely into Glimmer as well. CHARLES: Yeah, I hope so. It certainly seems like one of the core issues right because Glimmer-concurrency really should be universal. 
It should be some -- I don't want to prescribe your work for you -- ALEX: I don't mind. CHARLES: That wouldn't be cool. I mean, Glimmer is very stripped down. You have a very little bit on top of a raw JavaScript environment so if you're going to go there, it'll makes sense. What is this concurrent process builder look like using nothing but JavaScript? It seems like one of the hardest problems is to disentangle it from Ember object because the way that it currently computes that derive state is very intertwined with Ember object. You know the details of this more than I do but it seems like that's one of the biggest challenges is how do you communicate those changes in state without using that? That what I was thinking, it would be a good case for using observable for ember-concurrency, although not for probably the reason that people are thinking, which is for task composition and stuff but I'm very curious. ALEX: Likely the first stab at that direction would probably be using something similar, if not exactly these Glimmer reference primitives. Maybe it is worth talking about that. References are one of the core primitives that's used by Glimmer and it represents a value that might change over time and it's a value that can be lazily gotten, whereas observables, you have something that's firing events every time something changes and it makes the whole pipeline process it right then and there. With references, when something changes, you just tell the world like something's dirty. Then at a later time, maybe when in a request animation frame or some point where it actually makes sense to get the latest values, then it goes through and finds out everything that changes, does a single rerender. What it means is that you don't have the observe recode that's firing every time some value has changed. It's one of the guiding abstractions in Glimmer that makes it possible for it to be so fast and performant. It is very likely that a vanilla version of what ember-concurrency does uses references because those are already separate from the Ember object model and actually are used today in conjunction with Ember object model with the Glimmer that works with Ember today. I think that's probably, to me a first step. Clearly the reference attraction has worked wonders for Glimmer. I prefer to probably use that than observables and the push-based. CHARLES: Observables or something else. That is really, really interesting because there's nothing like vanilla JavaScript programming these days, like the equivalent of a Haskell thunk where you're just passing these things around but you're not actually using them until you actually want to pull a value. At that point, you kick off the whole chain of computation required to get that one value that you need. But it immediately brings to mind and I don't know if this is of concern to you but I was very, very enamored of Ember objects back in the day, in 2012. I was like, "Wow, this is amazing. This solves every problem that I've had." It has been a great companion and I've built some really great stuff on top of it. But now it's definitely turned into baggage. I think it's baggage for libraries that I've written and we're talking about it in the context of it being baggage here and being making it more available to the JavaScript community so I worry a little bit about Glimmer references. 
Would they possibly turn into something like that and could you counteract that by maybe trying to evangelize them to the wider JavaScript community like, "Here's a new reactive primitive," so that we don't end up in an eddy of the JavaScript community, do you think there's value in trying to say, "There should be some standard in the same way of observable, which is an emerging standard is for eager reactivity, have some lazy reactivity standard," or maybe it's too early for that. That might be a way of future-proofing or getting insurance for the future so you can say, "We can confidently build systems on top of this primitive." ALEX: If the worry is something based on Glimmer reference as it's going to turn into the same baggage or [inaudible] or whatever, that maybe Ember object has turned into some apps, some applications, some libraries. I'm not sure. I guess I don't really see that happening and I know that it's already gotten some validation from some of the people that have worked on Rx. In fact, a very useful primitives for certain kinds of workloads. As much as evangelism certainly helps. It's already off to a much better start than this all-powerful, god object that you can only interact with if you're using .get and .set functions. It's very lightweight. What I'm trying to say here is that there's UI workloads and then there's server-driven workloads and using Rx for both cases means that Rx suffers as a library because in the UI workload, you want something like references where you want to let a bunch of things change and then update stuff in one pass just a tick later or later in the micro task queue. But in Rx, they make you think about things in event-driven way, which might make sense for servers and stream processing but it's ugly when you want to actually build UIs with it. I think if we pay due respect to the fact these workloads are pretty different, I think the reference is way better of an abstraction for things that are UI-centric. They're simple and their performant and I think it's often much better foot than Ember object which is kind of bloated and huge and very hard to optimize. CHARLES: Right. I like that because you have to be precise with the server side things but ironically, with the references, you only care about the state at the point at which you observe it -- when the user observes it, not when the code observes it. The user observes stuff with every animation frame and there can be any number of intermediate states that you can just throw away and you don't care about. You don't need to compute them. I think what you're saying is Rx forces you to compute them. ALEX: Right and you wouldn't use a Glimmer reference for something if you're trying to batch. But in the end, keep all of the events that were fired on all the change events. You wouldn't use references because you're losing all that information until you do that poll and you get the latest value. But 99% of the time, when you're building UIs, that's what you want to use. CHARLES: Are Glimmer reference is their own standalone library or do they currently bundled with Glimmer? ALEX: I'm not sure. If they are not now, I believe the intention is for them to be at their separate repo. I was talking to Kris Selden at EmberConf and I got the impression that the intent, it might not be there now and if I want to start extracting ember-concurrency stuff into vanilla JS, I'd probably want to use a reference-ish thing, if not the official one from Glimmer. 
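This is not the actual Glimmer reference API -- just a toy sketch of the "mark it dirty now, pull the value later" model Alex describes, as opposed to pushing an event through subscribers on every change.

```javascript
// Toy lazy reference: invalidation is cheap, computation happens on pull.
class SimpleReference {
  constructor(compute) {
    this.compute = compute; // how to produce the current value
    this.dirty = true;
    this.cached = undefined;
  }
  invalidate() {
    // Just flag it; no work happens until someone asks for the value.
    this.dirty = true;
  }
  value() {
    if (this.dirty) {
      this.cached = this.compute();
      this.dirty = false;
    }
    return this.cached;
  }
}

// Usage sketch: a task runner could invalidate() an isRunning reference whenever
// its state changes, and the renderer pulls value() once per frame.
let running = false;
const isRunning = new SimpleReference(() => running);
running = true;
isRunning.invalidate();
isRunning.value(); // => true, computed once, lazily
```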
CHARLES: I know we talked about this, so then, how would you be able to use these lazy references to compute task state? How might that work or play out?

ALEX: The fundamental problem right now is that everything in ember-concurrency is glued to the Ember object model. All ember-concurrency has to do is broadcast that a change has happened to the state of a certain task, so that you can maybe put a loading spinner up in your template. All it has to do is use object.set, and the built-in computed property and observer change detection in the Ember object model propagates the changes -- but that's a bunch of heavy Ember stuff that isn't going to exist in a lighter-weight Glimmer or vanilla JS context. Instead of using .set and expecting that the thing you're setting it on is a big, heavy Ember object, you could just use references. Then whoever wants to know whether a task is running or not gets a reference to that -- an isRunning reference -- just using the standard Glimmer abstractions. The Glimmer-concurrency task runner would basically just kick those references, and anyone who has one of those references can flush and get the latest value at some later point in time and then update the UI based on that. Already, as the maintainer of ember-concurrency, I can see how all the pieces would work with that, and I could probably just start working on it today. But there's a handful of other things that I want to align with the vision of Glimmer and Glimmer-concurrency before I start working on that.

ELRICK: What would be the reference-y equivalent in just plain JavaScript, outside of Glimmer, that you would use to build this on top of --?

CHARLES: Like, what would the API look like? If you're like, "I don't have Glimmer. I don't have anything. I'm just --?"

ELRICK: Yeah, you just have plain JavaScript. What would be the primitive that you would build this on top of?

ALEX: Whether we use a standalone Glimmer references library or a separate reference thing, I would use the same terms, based on something Kris Selden said. In the end, the APIs are going to be pretty similar between those two. But the one thing it requires, as far as I understand it, is that you've got to set up somewhere in your event loop where you respond to something that's changed and then schedule a later requestAnimationFrame to actually do the rendering based on that. Using something like references implies that you've got to flush at a later tick, or flush in a later callback. If you've got that in whatever app you're working on, it should be pretty easy to figure out where references fit in.

CHARLES: I see. So you would basically say, like, new task, give it your task class or whatever -- I'm just making stuff up -- then you would just schedule a requestAnimationFrame and then pull the task state, or something like that? Or a new task reference or something like that?

ALEX: You might have some function that schedules a render pass, if one isn't yet scheduled. If it hasn't been scheduled, it uses requestAnimationFrame. If you call that function again, it's going to no-op until that requestAnimationFrame has happened.

CHARLES: Could you explain that again?

ALEX: Sure.
If you're actually thinking of a really low-level vanilla JavaScript app in the browser or something, and you were just using references, then you'd probably have something where the thing that kicks off the reference, or dirties the reference in some way, also runs some function called 'schedule rerender, if one hasn't already been scheduled' -- or something less verbose. That would just make sure that a requestAnimationFrame has been scheduled. When it flushes, it will get all the changes, but if more references are dirtied in the meantime, it won't schedule additional requestAnimationFrames. I don't know, that's kind of blue-skying, but that's when --

CHARLES: Right. Here's the other thing: could you see being able to integrate with a third-party state management solution like Redux or something? Basically, I've got my ember-concurrency tasks and their state is then reflected inside a Redux store. How might that work, if at all? Or is that a crazy idea?

ALEX: I don't know. I've played around with Redux toy examples, and the Redux community in Ember is only getting stronger by the day. I'm not entirely sure how all those pieces fit together, because in Redux they really want you to propagate all of your state changes using the reducer in that single global atom. A lot of people ask me about redux-saga, which is another generator-function-driven way of firing these state-mutating actions around some async operation, and it's hugely popular, but I don't think they have any concept of the derived state that I've been trying to do with ember-concurrency -- just being able to ask a task if it's running. You can't just do that. You've got to reflect that into the global atom -- the global store -- somehow, and to be honest, I don't really know if it's fundamentally at odds with the Redux model to take something like Ember- or Glimmer-concurrency and make it work that way. But ideally, you wouldn't have to forward all that state into the global atom. You'd just be able to reference that task object.

CHARLES: If the task object itself is immutable, it would seem fairly trivial -- you could programmatically generate the reducer required to do that. If you had the state encapsulated, you could come along and say, "Now there's a new state here." It seems like you would be able to adapt that, but you would need to be able to react any time that state changed, to fire an action in the Redux store -- fire the Redux action.

ALEX: Actually, this will be an easier question to answer because in the Ember community Slack there's a Redux channel, and I know everyone there is already starting to talk about how references and Glimmer, how can we tie these things to Redux. I think when they have some solutions lined up, a lot of the stuff that will be in so-called Glimmer-concurrency will just fit in nicely. If they've got nice models for tying references to the state atom, if you will, then it's going to work with the new way of doing things.

CHARLES: Okay, cool. One of the things that I wanted to talk about was a proposal that Lauren Tan put on the ember-concurrency issues list, although it's written as an RFC. Are you accepting RFCs now for ember-concurrency?

ALEX: I'm not pompous enough to have a separate RFCs repo. The issues approach is perfectly humble for me.

CHARLES: But is this the first RFC, or have there been a bunch of ember-concurrency RFCs?

ALEX: There have been a few.
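To make the "schedule a render pass if one isn't already scheduled" idea Alex describes above concrete, a sketch like the following is one way it could look; flush here stands in for whatever pulls the latest values out of dirty references.

```javascript
// Dirtying many references in the same tick still results in a single flush.
let renderScheduled = false;

function scheduleRerender(flush) {
  if (renderScheduled) {
    return; // no-op: a frame is already queued
  }
  renderScheduled = true;
  requestAnimationFrame(() => {
    renderScheduled = false;
    flush(); // pull the latest values out of any dirty references and re-render
  });
}

// Every time a reference is invalidated, call scheduleRerender(flushAllReferences);
// intermediate states between frames are simply never computed.
```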
It's definitely great that Ember has standardized on this boilerplate RFC model that everyone can fit their proposal into, because it means that all the add-ons that people really like and really want to invest in get these high-quality RFCs versus, "Hey man, it'd be kind of nice if you could just, like, have a pipeline." [Laughter]

ALEX: Just because Ember invested in that process, the whole add-on community benefits from it, and it's great. There have been a few RFCs like that. I'm not sure how many of them have made it, but I've seen a few in that format, and this one's definitely one of the nicer ones -- a lot of effort was put into it and it looks really nice.

ELRICK: I'm not familiar with the RFC. What's a brief overview of what was proposed?

ALEX: Lauren was basically proposing that we add the concept of pipelines: if you have a series of tasks that are stepping through a pipeline of operations, we should standardize that and define all the steps in the pipeline, so that rather than having each step in the pipeline call the next step, each step just returns its portion of the work and the pipeline infrastructure automatically runs the next step.

CHARLES: It seems like then you can also derive state about the entire pipeline, rather than just the individual task. Otherwise, you have to manage that a little bit by hand. The other thing I guess I would add is, if you're going to go with pipelines, rather than being a simple list, you might want to think about it as being a tree -- can you have pipelines that are composed of sub-pipelines, so to speak?

ALEX: Yeah, I believe the answer is yes. I'm not sure if it's spelled out in this RFC, but really a pipeline just fits the task interface, so if there's a task-ish thing or taskable object that you declare as a step of a pipeline or sub-pipeline, it should just work. I'm not sure if there's more work that needs to be done in spelling that out, but that seems baked into it. There's a lot of due consideration for making sure these things compose really well, and it's already in a really good state.

CHARLES: Yeah. What are some of the use cases where you might want to use a pipeline? I'm sure everyone who's been writing concurrent tasks has probably been maintaining their own pipeline, so what is it that you're doing, and how is this going to save you time and money?

ALEX: Let's use the example that I actually used at EmberConf, which is based on my own app. In my app, you have to geolocate and find nearby stores that you could walk into, and that process is four async steps in a row. One is getting your geolocation coordinates, the next step is passing those along and getting values back, and the third step is maybe some additional validation, or just setting a timer so your animations finish -- any of these little async things that you have to do. But it's really a sequential operation: fetch your geolocation, or get it out of a cache, and then step to the next thing, then the next thing. It looks okay as I have it in my production app code, but it still feels a little gross that you can't just look at this thing and ask, "What is the sequence of steps here?" You have to actually go to the implementation of each task to see what happens next and where it will go after that. Basically, with ember-concurrency in general, there are a lot of opportunities for finding more conventions for building apps.
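The RFC's actual API may well differ, but the general shape of a pipeline -- each step returns its portion of the work and the infrastructure runs the next step -- could be sketched like this, with Alex's geolocation example as the hypothetical use case; all step names below are made up.

```javascript
import { task } from 'ember-concurrency';

// Build one task that runs each step in order, feeding each step the
// previous step's result, instead of each task calling the next one.
function pipelineTask(steps) {
  return task(function* (input) {
    let value = input;
    for (const step of steps) {
      value = yield step(value); // each step is an async function or task-like thing
    }
    return value;
  });
}

// Hypothetical usage on a component:
// findNearbyStores: pipelineTask([
//   getCoordinates,      // 1. geolocate (or read from a cache)
//   fetchStoresNearby,   // 2. hit the server with those coordinates
//   validateResults,     // 3. extra validation
//   waitForAnimation     // 4. small timer so the UI transition finishes
// ])
```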
I don't even know if we've really talked about this so far, but derived state is part of it. But generally speaking, before ember-concurrency there wasn't as much opportunity, I guess, for some of these conventions for building these pretty standard UI flows without feeling that you're just building your own thing every single time, with chains of Promises and your own improvised cancellation scheme and all these things. I see pipelines as a next step. We're already building lots of pipelines in our apps. We have these processes that go through multiple steps, and right now the best we can do is set a bunch of Boolean variables, and the derived state that comes with ember-concurrency helps, but with pipelines there's even more, and it also structures your thinking so that if something like pipelines catches on, hopefully as an Ember developer you'll see them in a few different places and already have that tool in how to visualize a problem, visualize a component, visualize the async flow. CHARLES: I spent my entire morning reading the talk that Lauren referenced in the RFC, which was the Railway Oriented Programming one, which I think is, maybe not quite, but basically a visual explanation of the Maybe monad or the Either monad or whatever it's called. One of the greatest explanations of why monads are helpful, and through explanation using like the Maybe, where you can have a computation that could have more than one result, either success or failure, and how do I take these functions and compose them with functions that might not always succeed or might not have a return value or whatever, and show the tracks that move through a computation and be able to normalize every function to have the same number of tracks. I realized I'm getting into the description of it without actually having the visuals in front of it, so I'm just going to stop myself and say everybody go read it. It'll take you 35 minutes but it will eliminate so much of the chatter that you've been hearing in the background for a couple of years. ALEX: I used to tell people that they should learn Rx because, regardless of me liking the task primitive a little bit better, it's great. It just scrambles your brain and reorganizes your thought processes and it's such an interesting library to learn. CHARLES: All right. I like it. I'm going to go learn Rx. ALEX: I've been getting into, on the server side, the sort of Kafka-based architecture, Apache Kafka. Particularly, they've released some libraries in maybe the last year or so. It's a very Rx-familiar-feeling library for composing new data and new aggregates and joins between different topics and streams of events. It just seems like they're at the forefront of solving these really hard problems in a very conventional way. You get into some of that stuff and you'll find that you're doing a lot of server-side processing with steps that just feel a lot like Rx, and I find it very interesting. I haven't actually built anything with it yet, but it is likely in my future, and anybody that's into the event-driven model should definitely know what people are working on in this Kafka Streams world. CHARLES: That is cool. It's so interesting to see how all the problems that you encounter working on the server side, you will encounter on the client, and vice versa. You can build up a huge corpus of knowledge on one side of the API divide and you'd be surprised that if you were to go work on the other side for a time, you'll be able to leverage 99% of that knowledge. That's fantastic.
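For readers who want the railway-oriented idea described above in code rather than visuals, here is a small hedged sketch in TypeScript: a Result type with a success track and a failure track, an andThen combinator to chain steps that can fail, and a map to lift functions that always succeed onto the two-track shape. The names are illustrative, not from the talk or from ember-concurrency.

```typescript
// A minimal sketch of railway-oriented programming / Either-style composition.
// A Result is either on the success track or the failure track.
type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };

const ok = <T, E = never>(value: T): Result<T, E> => ({ ok: true, value });
const err = <E, T = never>(error: E): Result<T, E> => ({ ok: false, error });

// Chain a step that can itself fail: failures short-circuit past later steps.
function andThen<A, B, E>(r: Result<A, E>, f: (a: A) => Result<B, E>): Result<B, E> {
  return r.ok ? f(r.value) : r;
}

// Lift a function that always succeeds onto the two-track shape.
function map<A, B, E>(r: Result<A, E>, f: (a: A) => B): Result<B, E> {
  return r.ok ? ok(f(r.value)) : r;
}

// Illustrative steps: validate, then parse, then transform.
const nonEmpty = (s: string): Result<string, string> =>
  s.trim() === "" ? err("input is empty") : ok(s);
const parseNumber = (s: string): Result<number, string> =>
  Number.isNaN(Number(s)) ? err(`not a number: ${s}`) : ok(Number(s));

const run = (input: string) =>
  map(andThen(nonEmpty(input), parseNumber), n => n * 2);

console.log(run("21"));  // { ok: true, value: 42 }
console.log(run("abc")); // { ok: false, error: "not a number: abc" }
```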
I would love to get into Kafka but unfortunately, I think we're going to have to save that for another time. That's another one of those words like... I don't know. Is Kafka descended from Storm or something like that? Is it a similar concept? I remember everybody was big on Storm. ALEX: I think Storm processes the events and decides what to do with them. Kafka is really just a giant storage that plugs into something, I think like Storm or [inaudible] or any of these things that actually process the events. CHARLES: Yeah, it's all Kubernetes to me. ALEX: Yeah. CHARLES: All righty. Well, with that, I think we'll wrap it up. Thank you so much, Alex, for coming to talk to us. It's always enlightening. I love your approach to programming. I love how deeply you think about problems and how humble you are in approaching them, because they are big. ALEX: Well, thank you. It's great to be on here. It's fun. CHARLES: All right, everybody. Take care. Bye Elrick, bye Alex.
Simple Made Easy by Rich Hickey “Rich Hickey emphasizes simplicity's virtues over easiness', showing that while many choose easiness they may end up with complexity, and the better way is to choose easiness along the simplicity path.” Talk Transcript, with slides Fatal Error episode 7: The Single Responsibility Principle Fatal Error episode 15: Not Invented Here Some examples we mention: Object-Relational Mapping vs Key-value Database CocoaPods vs. Carthage Slides mentioned in this post: Development Speed Simplicity Made Easy
How to design software? What are the techniques we can use? How can we become better at it? We've interviewed 3 engineers with completely different backgrounds to find out. Host: Andrey Salomatin https://twitter.com/flpvsk Guests: Craig Andera https://twitter.com/craigandera Eric Elliott https://twitter.com/_ericelliott Mario Zechner https://twitter.com/badlogicgames Mentions by Craig: Cognitect http://cognitect.com Cognicast http://blog.cognitect.com/cognicast/ You Are Not So Smart Podcast https://youarenotsosmart.com/podcast/ Rich Hickey, creator of Clojure PL https://twitter.com/richhickey Mentions by Eric: Blog https://medium.com/@_ericelliott Online Course https://ericelliottjs.com/ Mentions by Mario: LibGDX https://libgdx.badlogicgames.com/ http://www.gamefromscratch.com/ Books and talks that shaped you as an engineer, Craig: A book by Martin Fowler "Patterns of Enterprise Application Architecture" http://www.goodreads.com/book/show/70156.Patterns_of_Enterprise_Application_Architecture Rich Hickey talks: https://changelog.com/rich-hickeys-greatest-hits/ Books and talks that shaped you as an engineer, Eric: A book by Kent Beck "Test Driven Development: By Example" http://www.goodreads.com/book/show/387190.Test_Driven_Development Collection of links "Required JavaScript Reading" https://github.com/ericelliott/essential-javascript-links/blob/master/README.md Books and talks that shaped you as an engineer, Mario: A book by Andre LaMothe "Tricks of the 3D Game Programming Gurus" http://www.goodreads.com/book/show/2042298.Tricks_of_the_3D_Game_Programming_Gurus "The dragon book" by Alfred V. Aho, Monica S. Lam, Ravi Sethi, and Jeffrey D. Ullman, official name "Compilers: Principles, Techniques, and Tools" http://www.goodreads.com/book/show/703102.Compilers
What happens when your database is part of your application? Kenneth & Len are joined once again by Robert Stuttaford from Cognician to talk about Datomic. According to the Datomic website, Datomic is a distributed database designed to enable scalable, flexible and intelligent applications, running on next-generation cloud architectures. Robert shares with us how Datomic became a natural choice for them after switching to Clojure. Before Clojure, ClojureScript and Datomic, their site was written in PHP and backed by MySQL. Choosing Datomic was very natural since they'd already subscribed to Rich Hickey's "simple vs easy" mindset. Its immutable nature is a great fit for Clojure, and by following an "append-only" storage model they got loads of benefits. We discuss a wide variety of concepts, including how Datomic models data, the different ways it can be stored, how transactions work, the ability to travel back in time to see what your data looked like, and so much more. We were happy to learn that Datomic is accessible to everyone on the JVM, so learning Clojure isn't an initial requirement, but learning some Clojure will go a long way in informing your usage of Datomic. We would encourage everyone to experiment with Datomic and enjoy this different, flexible approach to modeling data. Follow Robert online: - https://twitter.com/RobStuttaford - http://www.stuttaford.me/ - http://www.cognician.com/ Here are some resources mentioned during the show: * Datomic - http://www.datomic.com/ * Datalog - https://en.wikipedia.org/wiki/Datalog * Logic programming - https://en.wikipedia.org/wiki/Logic_programming * Simple Made Easy by Rich Hickey - http://www.infoq.com/presentations/Simple-Made-Easy * Exploring four Datomic superpowers - http://www.slideshare.net/lucascavalcantisantos/exploring-four-datomic-superpowers * Learn Datalog Today - http://www.learndatalogtoday.org/ * Datomic Training Material - http://www.datomic.com/training.html * Clojure Cookbook - https://github.com/clojure-cookbook/clojure-cookbook * The Joy of Clojure, Second Edition - https://www.manning.com/books/the-joy-of-clojure-second-edition * Clojure Remote Keynote: Designing with Data - https://www.youtube.com/watch?v=kP8wImz-x4w Also listen to https://zadevchat.io/27/ for our previous discussion with Robert on Clojure. And finally our picks Robert: * "Learning Mindset" (Mindset by Carol Dweck) - http://mindsetonline.com/ * Lego - http://www.lego.com/ Thanks for listening! Stay in touch: * Homepage - https://zadevchat.io/ * Socialize - https://twitter.com/zadevchat & http://facebook.com/ZADevChat/ * Suggestions and feedback - https://github.com/zadevchat/ping * Subscribe and rate in iTunes - http://bit.ly/zadevchat-itunes
Join us as we explore Clojure, the robust, practical and fast programming language. Kenneth, Kevin & Len talk to Robert Stuttaford (@RobStuttaford), co-founder and CTO of Cognician, about the Clojure programming language and his experience using it for the last few years. We discuss the language itself as well as some tools. We sing the praises of Rich Hickey, even if it just for his great talks, and stroll around the ecosystem including the obligatory stop at Datomic. Robert really did a great job of guiding us through the landscape and we're very excited about Clojure after this call. We'll definitely have Robert back in the future to cover Datomic and other parts of Clojure we didn't cover. Quick aside, the conversation was very organic and we skipped the formal introductions, and we had a few small technical snags with the recording, but the content is still great and we hope you enjoy listening as much as we did recording. Follow Robert and Cognician on the web: - https://twitter.com/RobStuttaford - http://www.stuttaford.me - http://www.cognician.com Here are some resources mentioned in the show: * emacs - https://www.gnu.org/software/emacs/ * Spacemacs - https://github.com/syl20bnr/spacemacs * Clojure Programming (O'Reilly) - http://www.clojurebook.com * Robert's emacs.d - https://github.com/robert-stuttaford/.emacs.d * Simple Made Easy by Rich Hickey - http://www.infoq.com/presentations/Simple-Made-Easy * Rich Hickey's Greatest Hits - https://changelog.com/rich-hickeys-greatest-hits/ * Lisp - https://en.wikipedia.org/wiki/Lisp_(programming_language) * DotLisp - http://dotlisp.sourceforge.net/dotlisp.htm * Clojurescript - https://github.com/clojure/clojurescript * edn (extensible data notation) - https://github.com/edn-format/edn * schema - https://github.com/plumatic/schema * Isomorphic JavaScript - http://isomorphic.net * Homoiconicity - https://en.wikipedia.org/wiki/Homoiconicity * algo.monads - https://github.com/clojure/algo.monads * Logic programming with core.logic - https://github.com/clojure/core.logic * Excel-REPL - https://github.com/whamtet/Excel-REPL * Arcadia, Clojure integration with Unity 3D - https://github.com/arcadia-unity/Arcadia * ClojureScript + React Native - http://cljsrn.org * Planck ClojureScript REPL - http://planck-repl.org * Clojure for the Brave and True - http://www.braveclojure.com * clojurians on Slack - http://clojurians.net * #clojure on Freenode * Clojure Google Group - http://groups.google.com/group/clojure * ClojureBridge - http://www.clojurebridge.org * Clojure Cup - http://www.clojurecup.com * Nikita Prokopov - https://github.com/tonsky * Datomic - http://www.datomic.com And finally our picks: Kenneth: * Simple Made Easy by Rich Hickey - http://www.infoq.com/presentations/Simple-Made-Easy Len: * Structure and Interpretation of Computer Programs - http://www.sicpdistilled.com/ * SICP Lecture Videos - http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-001-structure-and-interpretation-of-computer-programs-spring-2005/video-lectures/ Robert: * emacs - https://www.gnu.org/software/emacs/ * Mindfulness meditation - https://en.wikipedia.org/wiki/Mindfulness * Tim Ewald - Clojure: Programming with Hand Tools - https://www.youtube.com/watch?v=ShEez0JkOFw Kevin: * Spacemacs - https://github.com/syl20bnr/spacemacs * Coggle - https://coggle.it Stay in touch: * Socialize - https://twitter.com/zadevchat & http://facebook.com/ZADevChat/ * Suggestions and feedback - https://github.com/zadevchat/ping * Subscribe and rate in iTunes - 
https://itunes.apple.com/za/podcast/zadevchat-podcast/id1057372777 PS: We'll be at RubyFuza in Cape Town on Feb 4th & 5th, and at Devconf in Fourways on March 8th. Please come say hi!
This episode is sponsored by 思客教学, which focuses on remote, apprenticeship-style education in the IT field. Terry hosts this episode and interviews 过纯中 about his love-hate relationship with Microsoft, and about how he does "open source" development with Windows as his desktop. Visual Basic Silverlight WPF RIA Jon on Software EJB J2EE Development without EJB ADO.NET Ubuntu Django ASP.NET MVC UNIX is very simple, it just needs a genius to understand its simplicity. Rich Hickey Simplicity Matters Simple Made Easy Agile Web Development With Rails Sublime Mosh Quora Ruby社区应该去Rails化了 Cuba Express Aaron Patterson Journey active_model_serializers windows PR AppVeyor Lotus Trailblazer Rails Engine Concern SPA react-rails ECMAScript 6 Ember React Angular Vue Yehuda Bower webpack React Hot Loader Flux redux alt TypeScript Anders Hejlsberg CoffeeScript Haml Slim Been Hero EventMachine Basecamp 3 wechat gem state_machine state_machines-graphviz aasm Edsger React Native 轮子哥 Special Guest: 过纯中.
Ben talks to Craig Andera, developer at Cognitect, on his latest 20% time project, how Clojure gets made, and driving immutability into a database. Cognicast Kinesis Keyboards khordr Datomic "Simplicity Matters"- Rich Hickey Mile...Mile & a Half (Documentary) "Programming with Hand Tools"- Tim Ewald Falcon 4.0 Craig on Twitter
It is two days before Christmas but that will not stop us from recording a new episode. This week we discuss Fraser's gambling hot-streak at Ascot, Mick's move and how horrible Martinis are. We then move on to how Fraser is getting on in his new job up in London and Mick's university presentation on Fuzzy Logic. This leads us on to how ‘unrandom’ humans are, and professional Rock-Paper-Scissors tournaments. Finally, Edd brings up some interesting talks by Greg Young (on EventStore) and Rich Hickey (on software design). Have a great Christmas everyone and thanks for your loyal listenership!
Ben talks with Pete Hunt, formerly of Facebook & Instagram, on React and what makes this unique JavaScript library tick, as well as shifting from a technical to a business focused mindset. React Fielding Dissertation Simple Made Easy- Rich Hickey Pete on Twitter
Our guest is Nikolay Ryzhikov. Nikolay on social media: Google+ Github Twitter Facebook Communities: SPRUG Clojure Russia Piter United SPB Frontend Books: SICP SICP in clojure SICP video course in Russian CTM Ravil's book list Video: Rich Hickey Health Samurai: * Website * Github profile Open Source: foodtaster + a bonus video formstamp fhirbase Conference materials: * Slides * Video of the "Pair Programming" talk from AgileDays-2014 We also thank Stas Spiridonov for his help with mastering this episode.
This episode is hosted by Terry Tai, who invites 田春 (冰河), a well-known Common Lisp programmer in China and the translator of Practical Common Lisp, to chat with us about all aspects of Lisp and lift the veil on its mystery. 实用Common Lisp编程 KnewOne Solaris Studio On Lisp ANSI Common Lisp Common Lisp the Language, 2nd Edition SICP S-表达式 Simple Network Management Protocol SourceForge cl-net-snmp Racket Scheme Common Lisp HyperSpec LispWorks SLIME Hunchentoot Quicklisp Clojure Paul Graham Arc Hacker News LALR parser Rich Hickey ABCL yacc FileMaker Pro FrameMaker InDesign DocBook LaTeX The Little Schemer Paul Graham Special Guests: 李路 and 田春(冰河).
Can’t change your variables once you assign a value? WTF? Surely, this is something up with which we cannot put! Or: a round-about introduction to some of the concerns addressed by the functional programming paradigm. In This Episode: “I’ve been thinking about Object Oriented Programming.” “And you survived.” Concurrency and shared state Changing the value of 2 What if we can’t change a variable’s value? Side effects: minus the sides. And the effects. A functional primer and the unit test No side effects? No dependency injection! Does dependency injection break the “encapsulation” contract? Lock the doors so we can invent windows. Mock object makes a mockery of … well …. Separating values from behavior The inversion of the inversion Apparently, we’re really interested in address books. What about side-effects, though? I mean, right? Separating side-effects from calculations A walled-off garden of perfection Don’t mock me Recursion When copying is not copying What’s the difference between antidote and poison? Mocking and Dependency Injection, a heretical view Rich Hickey, Simon Peyton Jones, Martin Odersky Clojure: What if I only use blue? Scala: What if we add blue, and every other color, too? After all this, why OO ever? Simple Made Easy Download MP3
This episode is hosted by Daniel, with guests SaitoWu and Dingding Ye. 武鑫 (Saito) is one of the core developers of GitLab, the well-known self-hosted Git repository open-source project, and was a speaker at RubyConfChina 2012, where he gave an excellent introduction to how GitLab is implemented. In this episode we are honoured to have SaitoWu chat with us about his career and the topics he's interested in, including Git, GitHub, and GitLab. Why Git is better than X Git 为什么这么好? Unlocking the Secrets of Git Git scaling at GitHub Chatops at Github Sinatra How gitlab works SVN VPN git-svn 蓝光党 Ruby Tuesday http://clojure.org/ Haskell Rich Hickey 七周七语言 JDK8 Github Enterprise authorized_keys Gitosis Gitolite Twisted GitHub is Moving to Rackspace! Engine Yard And GitHub Transition Zach Holman Github Boxen libgit2 Hubot Play - Company's DJ Redcarpet Unicorn html-pipeline Resque Gitcafe Redmine Use pull request Scrum要素 Component Hexo Special Guest: saitowu.
Unsupported Operation 67

Java
* Oracle pulls support for JavaFX Script
* Java 7u4 b15 developer preview available

Misc
* DataStax has quietly made their Cassandra documentation available in PDF
* ExcelTestNG - interesting - webdriver/selenium testing driven by Excel spreadsheets, and TestNG.
* Practical Unit Testing with Mockito and TestNG is nearing publication - now has an ISBN!
* LogBack 1.0.1
* Hibernate 4.1
* Hazelcast 2.0 released - release notes.
* AIDE - IDE for Android, ON Android
* Terminal-IDE is similar, but gives you a vi based environment.
* SmartGIT 3.0.1 available - changes - GUI Git client in Java. Now supports mercurial and svn since I last checked it out.
* Gerrit 2.3rc0 available
* Atlassian buys an IRC/IM client/server company - closes a 7 year ticket “won’t fix”

Web Stuff
* NettoSphere - A WebSocket and HTTP server based on Atmosphere and Netty.
* vert.x - node.js like asynchronous web server/platform - lets you write applications in js, ruby, and java. Comes with distributed event bus, websocket support, tcp/ssl, pre made modules for mailer, authentication, work queues
* Thymeleaf 2.0 - XML/HTML specific template engine.
* GateIN 3.2.0 Final - people still use portal servers?
* JRebel 4.6 released, JRebel for Vaadin announced

Apache / Maven / Related
* Shavenmaven - super-lightweight dependency management - NO XML - just URLs
* Grails 2.0.1 now uses RichardStyle composites, and hopefully will make its way to “Apache Maven Central” soon.
* Apache Jena 0.9.0 - Java framework for building Semantic Web
* Commons Math 3.0
* Apache Camel 2.9.1
* Apache Hama 0.4 - incubating - metrics on Hadoop
* Apache Rave 0.8 - incubating - social mashup
* Apache Tomcat Native 1.1.23
* Apache Ant 1.8.3
* Directory studio 2.0.M3
* ApacheDS 2.0.0-M6
* Apache Directory LDAP 1.0.0-M11
* Apache Commons Daemon 1.0.10
* Apache ACE has become a top level project
* Apache OFBiz 09.04.02 (2nd TLD in a month - DeltaCloud was the other)
* Apache MyFaces extension for CDI 1.0.4

Jetbrains
* IntelliJ IDEA 11.1 to support JavaScript.next with Traceur compiler.
* AppCode 1.5 RC

Groovy
* First official GroovyFX release

Scala
* Akka moved to a new Akka Organisation on Github
* Akka 2.0 also released!
* New Scala proposal for value types

Clojure
* Clojure 1.4 beta 4
* First Github got hacked, then node.js’s NPM, Clojars takes precautions:

“Hello folks! In light of the recent break-in to the Node.js package hosting site (https://gist.github.com/2001456), I’ve decided to bump the priority of increasing the security on Clojars. I’ve deployed a fix that uses bcrypt (http://codahale.com/how-to-safely-store-a-password/) for password hashing. The first time you log in, it will re-hash your password using bcrypt and wipe the old weak hash. Note that Clojars has NOT had a security breach at this time. This is a preventative measure to protect your password in the event of a future breach. We are also looking into allowing signed jars (and possibly requiring them for releases). If you’re interested in helping out with this effort (design or code), please join the clojars-maintainers mailing list: http://groups.google.com/group/clojars-maintainers Because we can’t ensure that everyone will log in to re-hash their password, at some point in the future (probably 2–3 weeks out) we will WIPE all the old password hashes. Otherwise users who have stopped using Clojars or missed the announcement could have their passwords exposed in the event of a future break-in. I will be sure to send out a few more warnings before this happens, but even if your password has been wiped it’s easy to reset it via the “forgot password” functionality. If you have any applications storing passwords hashed with SHA1 (even if you use a salt) I highly recommend you take the same steps; refer to http://codahale.com/how-to-safely-store-a-password/ for details. Please log into Clojars to re-hash your password. Thanks for your attention. -Phil”

* Related news - Bouncy Castle 1.46 released
* Static code analyzer for Clojure - kibit 0.0.2 now released
* Marginalia v0.7.0 - documentation generator for clojure
* lein 2.0 preview releases are out, and now preview2 is supported by Travis-CI
* lein-navem is a lein plugin that converts a maven pom.xml into lein project.clj
* Datomic is a new database service from Rich Hickey. And dayam it looks nice. Some really nice ideas in here.
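The Clojars announcement above describes a common migration pattern: verify against the old weak hash on login, then transparently re-hash with bcrypt and wipe the legacy hash. Here is a hedged TypeScript sketch of that idea using the bcrypt npm package and Node's crypto module; the user-record shape and the salted-SHA1 format are assumptions for illustration, not Clojars' actual implementation.

```typescript
import { createHash } from "crypto";
import * as bcrypt from "bcrypt";

// Hypothetical user record: either a legacy salted-SHA1 hash or a bcrypt hash.
interface User {
  name: string;
  legacySha1?: { salt: string; hash: string };
  bcryptHash?: string;
}

const sha1 = (salt: string, password: string): string =>
  createHash("sha1").update(salt + password).digest("hex");

// On successful login, transparently upgrade legacy hashes to bcrypt
// and wipe the old weak hash, as described in the announcement above.
async function login(user: User, password: string): Promise<boolean> {
  if (user.bcryptHash) {
    return bcrypt.compare(password, user.bcryptHash);
  }
  if (user.legacySha1) {
    const matches = sha1(user.legacySha1.salt, password) === user.legacySha1.hash;
    if (matches) {
      user.bcryptHash = await bcrypt.hash(password, 12); // 12 salt rounds
      delete user.legacySha1; // wipe the old weak hash
    }
    return matches;
  }
  return false;
}
```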
Software Engineering Radio - The Podcast for Professional Software Developers
This episode is a conversation with Rich Hickey about his programming language Clojure. Clojure is a Lisp dialect that runs on top of the JVM and comes with - among other things - persistent data structures and transactional memory, both very useful for writing concurrent applications.
Discuss this episode in the Muse community Follow @MuseAppHQ on Twitter Show notes 00:00:00 - Speaker 1: Cause there’s more to tapping into other people’s minds than sending something and asking for feedback. But listening to feedback through allowing other people to create in the same space that you create, with the right people, can definitely feel magical. 00:00:18 - Speaker 2: Hello and welcome to Meta Muse. Muse is a tool for thought on iPad. This podcast isn’t about the product, it’s about the Muse company and the small team behind it. I’m Adam Wiggins, joined by Mark McGranaghan. Hey, Adam, and Nicholas Klein of Figma. Hey there. And Nico, I know you have been working from Europe with a US-centric, maybe even a San Francisco-centric team for a few years. How do you find that experience of having the evening be your team time? 00:00:48 - Speaker 1: I think that, looking at the upside of it, I haven’t set an alarm in the last 2 years to get up for work. I think that’s definitely on the plus side of this, but I like to kind of like keep my Friday evenings free, that kind of like gives me a little bit of like time of just spending a normal week evening, I would say. 00:01:07 - Speaker 2: Yeah, that’s right. I think there was a nice thread recently about some Europe to US times and I think on the Europe side, the trick is, of course, you are giving up a lot of your evenings, but you gotta make some room in there for a social event, be, you know, be able to have dinner with friends or whatever here and there, and yeah, I agree, no alarms slash the morning being more free-form is a huge benefit. So for me, very well worth the extra cost of maybe needing to be a little more on my game in the evening than I would normally need to be. And let’s hear about your career journey a little bit, so I think you have quite a few interesting milestones along the way, including Sketch Runner and artifacts, which we talked about a little bit with Jason Wa recently here on the podcast, and I think it’s how I first discovered your work. Love to hear the steps that brought you along the way here. 00:02:01 - Speaker 1: Yeah, I studied interaction design in Schwäbisch Gmünd, and it’s a tiny, tiny school in a tiny city in the middle of nowhere in Germany. So I studied interaction design and I think what was very interesting kind of like studying interaction design was that you get taught these like behemoths of tools. So you get taught Flash, you get taught Illustrator, you get taught Photoshop in like classes, and you never really think about kind of like manipulating those tools themselves. And interaction design in general was really interesting because it was just about the relationship of humans to technology, and application design, kind of a concrete UI design, was one part of this. And I’ve never really thought about kind of like, hey, I’m learning how to design software. And tools are just software that is also being designed somewhere far, far away, but on a hack day in Hamburg where we were working on Sketch plug-ins, it kind of like started, and I continued working with the team in kind of like designing and building Sketch Runner, and there was a plug-in with which you kind of like can still like insert components and apply styles from like a command-line, Spotlight-like UI. 00:03:09 - Speaker 2: I remember using this a little bit back in my Sketch days, and it was quite remarkable to me at the time to bring a command line interface to a design tool.
I feel like nowadays command palettes are fairly common in power tools, maybe Superhuman, and some others. There’s an article from Repole where they describe a little bit the rise of the command palette, and the command line, traditionally uh kind of engineering-centric, I don’t know, Unixy, a particular kind of power-user thing, making its way into much more of these tools, but I feel like Sketch Runner was a little ahead of its time insofar as bringing that to a design tool. 00:03:44 - Speaker 1: Yeah, it was fascinating. We’ve seen that like this aspect of I know the name of the command, and originally it started with finding a way to make plug-ins more easily kind of like executable. That was the start during the hack day, like, hey, there are so many plug-ins being built for Sketch. How can we make them more accessible and faster to kind of like execute? And then kind of like we realized there are so many features that we can add on to this. And the moment that was like really exciting for me was that I was still studying in Schwäbisch Gmünd. And I saw someone from the Airbnb design systems team talk about Sketch Runner kind of like at a meetup and then kind of like also tweeting about this. And I was just like, holy shit, this is really happening right now. And so at that moment I realized that like, hey, there is a potential for changing design tools. They’re also just software that has to be designed, basically, and that kind of like got me hooked into design tools. After graduation, I was an intern at Shopify. And continued working on Sketch plugins there. I was building Polaris Telescope. It’s kind of like a tool with which, from within Sketch, you could kind of like see the documentation for the design system components. 00:04:56 - Speaker 2: These were internal kind of plug-ins or tools at Shopify, or something for release to the outside world? 00:05:02 - Speaker 1: It started as an internal tool, but then since like the Shopify design system is public and is being used by third party people to design applications for the Shopify platform, we also kind of like made it available publicly. And during that time, I applied at Figma. And one nice story was that at the end of my internship at Shopify, I had this option of going to Figma and starting an internship there or staying at Shopify full time. And I remember my mentor telling me to kind of like take the job at Figma, because it was like, yeah, this is more interesting to you, you should just kind of like go there, and that was a nice kind of like end for this work. Shopify was very kind of like welcoming to let me go, if that sounds right. 00:05:43 - Speaker 2: That’s great, and this was early days for Figma, right? Pretty small team. I mean, I think nowadays it’s a giant in the design space slash startup space, but maybe this was a little riskier of a jump to go to this smaller, less proven team at the time. 00:05:59 - Speaker 1: Oh yeah, I think Figma definitely hadn’t caught on as kind of like a major tool in the space at that time. Um, when I joined, we were, I think around 35, maybe 40 people in San Francisco, and that was it, like that was the whole company. And I think we’re now at above 250, but I’m not exactly sure where that is. I’m coming up on 3 years now, and it’s been fascinating to see the change in the company itself, or kind of like seeing it grow, but also just in the product and in the acceptance of the product in the market.
Kind of like seeing how many people and how many companies have switched entirely to using Figma, it’s still kind of like mind blowing that this actually has happened over the last years and yeah, it’s great to be a part of that. 00:06:42 - Speaker 2: Also seems fun to maybe grow in your career along with the company and see that, yeah, that rapid evolution, that hypergrowth over time can be nerve-wracking at times, at least in my experience, but also potentially a really rewarding experience. It’s certainly a great learning experience. 00:06:59 - Speaker 1: Definitely, definitely, especially this aspect of getting things kind of like onto a roadmap and actually making that happen. When you’re studying, you’re kind of like doing greenfield projects and you can like imagine the most beautiful things, but then when you’re building a product, you have to kind of like find a way for this to actually happen. It’s been interesting. I’ve been working on, mostly focused on, prototyping things, and it’s been interesting that kind of like slowly we’re getting into this position where it’s like fewer features are immediately clear of what should happen, kind of like coming next. But the things we were talking about 3 years ago are slowly coming to the space where now they are actually being shipped, and we can now stand on top of them and look even further. And that’s pretty exciting to see, that like these wild thoughts are now becoming reality, and now you’re thinking newer wild thoughts, and I like that. 00:07:53 - Speaker 2: How do you find designing for designers? On one hand, maybe that sounds great cause you can maybe introspect your own needs a little bit, but on the other hand, it sounds miserable because they’re incredibly fussy. 00:08:05 - Speaker 1: I actually love it cause imagine the case where kind of like I would now just be a designer, basically, and I would like have all these ideas of how this design tool could be better. I kind of like love working for designers because seeing what they do with the features that you imagine is so much cooler than the feature itself. So kind of like building things where other people can build things, it’s just really rewarding, that on one hand, and then the other hand is that having designers in user tests, but also kind of like having designers design features for you because they really want this feature, is amazing. Just today, I’ve seen a tweet thread about how comments in Figma could work and it’s just amazing how much detail and how much love people put into these ideas of helping us improve our product essentially.
00:09:38 - Speaker 1: Oh yeah, it’s amazing. Like, I’ve been using it a lot more recently, especially since the alpha of like the 2D canvas. That has really changed the game for me, but I think especially kind of like seeing new too of like from a more, I would say maybe more research experiment to actually kind of like, hey, this is a day to day tool for me. And what I love a lot is how the relationship to the device changes based on the input. Just through using a pencil, it’s just a significantly different experience, a far more intimate experience really with the device, because it really feels like just I’m writing on paper. Paper with superpowers, right? Like I can drag things around and I can really easily switch my tools, and so I love using it. It’s really great. 00:10:22 - Speaker 2: Awesome, thank you, and thanks for the new marketing slogan. We might need to swap that out on the website. People with superpowers. So our topic today is collaborative creativity. And this is something, you know, Mark and I have been talking a lot about, we’ve been talking about a lot of the team because as we think about sort of multi-user features and when or if those make sense for you, and in general, I think the incredible collaboration features that are in a lot of the current, let’s say, suite of tools that a lot of folks in the tech world use, that’s Figma, of course, but it’s also something like Notion, Google Docs going back a little bit further, maybe something like Air Table, and so then you have this question about like how does solo work work or how do we sort of interleave together the solo time and then the working with others, you know, pairing or whatever you wanna call that, there’s feedback cycles and all that sort of thing. So to me it’s a very vast and interesting topic and I know you have a pretty developed, it seems to me from our conversations in the past on it, you have a pretty developed or rapidly developing, let’s say thesis on this, so why don’t you tell us a little bit about how you think about collaborative creativity. 00:11:31 - Speaker 1: I think it’s interesting also kind of like tying back to how you introduced me in the beginning, that this is a topic I’ve ultimately been working on for a couple of years now, on and off really. But my bachelor’s thesis was on this aspect of personal creativity and knowledge management, and I think at the core it’s kind of like, where do ideas come from and how could computers be set up to support these. But then recently kind of like flipping a lot more around this value of iteration, as kind of like working on Figma as a design tool, but also the value of collaboration and the combination of those two. And I think that the concept of collaborative creativity includes all of those aspects and kind of like brings it together. And I think it’s interesting that really fruitful moments where working together with other people, those memories just always kind of like relate to being together in the same physical space. And being able to work on top of each other’s ideas really fluently, and because we trust each other, we can like figure out a problem that we have in our heads really, really quickly. And this kind of rapid iteration, this rapid building on top of each other’s ideas is, I think, at the core of collaborative creativity or is collaborative creativity itself. 00:12:45 - Speaker 2: So, give us some examples of collaborative creativity. 
There’s obviously like, I guess what you described there is sort of being with your colleagues, you know, in a meeting room, brainstorming on a whiteboard, but how do you see this, especially in the modern distributed world. 00:13:10 - Speaker 2: What I’ve recently seen on Twitter a lot, it’s also funny but like I’ve seen these things on Twitter, but like these TikTok remixes, and I think just recently there’s this like sea shank, the sea shanty TikToks, those are great to describe what those are in case you haven’t seen them is basically people singing these songs in harmony, but they do it by one person records singing. And then the next person essentially layers their singing on top of that video, and you see all the faces and hear all the voices together, but of course it’s a very much an asynchronous process in many cases I think these people didn’t even necessarily know each other. 00:13:37 - Speaker 1: And I think that’s just so fascinating because it’s really good and I think it’s a different example. So while this collaborative creativity in the whiteboarding space feels more like an immediate way of collaborative creativity, this is definitely, it’s still the same core idea. It’s just kind of like happening asynchronously. And I think those tools like TikTok allow for this to happen because I’m able to build on top of your idea. I’m able to take your idea and not necessarily manipulate. directly, but adds to it, which creates this fascinating effect. 00:14:10 - Speaker 2: I feel like that takes us to the whole realm of sort of maybe like remix culture, certainly open source is very much built on that as well. And of course a lot of discussion, maybe not so currently, but maybe in the last decade about kind of copyright law and how that in many ways interferes with this potentially great remix culture. You had DJs and that sort of thing. You see that in the spectrum of collaborative creativity. 00:14:36 - Speaker 1: Oh yeah, definitely. I think it’s an important aspect, and we’ll get later in more detail to this that like the ultimate or kind of like original owner of ideas should be in full control over what others can do with this, essentially. I think that’s a key part of establishing trust in such a kind of like network of people who could work on the same thing. And I think that that’s one aspect of how to kind of like establish this way of working. 00:15:04 - Speaker 2: I mean, idea ownership is so fuzzy, even if you leave the realm of, I don’t know, public copyright, intellectual property, whatever. I think even on a team making a shared document, in most cases the teams I’ve been on, I and others on the team feel sort of uncomfortable doing heavy edits to someone else’s documents unless they were very specifically invited. You know, you can leave comments, maybe you can make a little fix, good suggestion changes, you can add something to the bottom, but you have this sense of like, OK, they own this and you don’t kind of want to mess it up. You feel like you’re a guest there, even if it’s in a team workspace, just sort of an interesting, I don’t know, we have this innate sense of ownership, I think, over ideas or a creative output, which may or may not be logical, but nevertheless seems to be part of the human experience. 
00:15:52 - Speaker 1: I wonder how much of this ultimately comes back to the tools themselves too, in the sense that what I’ve seen happening in teams using FIMA a lot, that kind of like allowed this very immediate way of collaboratively iterating on the same space that person A creates an idea, creates a couple of marks for this. Person B comes in and takes kind of like the second. and explores the second mark further. Person C kind of like uses something else and kind of like just draws out their their direction of this. And at some point, maybe some person zooms out and sees the connecting dots between of those and kind of like puts these things together. And I think at that point. What has happened is that people inspired each other, but it’s very, very fuzzy of kind of like who had the key spark of it. And so I think at that point what we’ve seen happening, that’s actually really fascinating is that the culture of teams changed towards a culture where it feels more like our ideas over my ideas. Where just because the tools are not just because of those tools, but also because of the tools, it enabled people to take that ownership less seriously, because they realized if we take that ownership less seriously, we can actually arrive at better solutions down the road. 00:17:08 - Speaker 2: Yeah, that makes sense. And even speaking in terms of just coming back to the more just brainstorming in a group verbally or whatever, one of the ways I know the best collaboration, some of the people that I’ve worked with over many years, including Mark here, is that often it’s just not really clear exactly as you said, where the idea came from, and every so often I feel like I catch it in the moment happening. There’s one case I remember of, we’re trying to, I think it was actually just a debugging kind of scenario pair programming kind of thing. And the way we found the idea that ultimately was the breakthrough was actually one person said something and I misheard them. I was like, oh, that’s brilliant, that’s totally it. And, you know, they respond with, oh no, that wasn’t what I was saying, but now that you mentioned it, and so, wait, whose idea was that exactly? Clearly it was the product of our back and forth to claim that was one person’s idea would be, I guess, like a pointless endeavor to try to assign it to a single name. 00:18:05 - Speaker 3: Yeah, I think it’s absolutely the case that creativity, whether it’s among multiple people or with yourself over time, is a very iterative process that involves taking a lot of ideas, remixing them, borrowing stuff, eliminating stuff, adding variants, exploring, playing. I know there’s something you’ve thought a lot about because I’m curious if you have more theories on how this works. 00:18:27 - Speaker 1: One thing that during our bassists thesis and also kind of like now getting back to this a lot, is this concept of bisociation from Arthur Koestler, and it’s essentially this idea that Any form of kind of like creativity, be it like humor or science or art or conflict just I would also just include problem solving, is this aspect where you have a spark that ultimately originates from two orthogonal kind of like planes of thought or two orthogonal kind of like spaces of ideas, and because they meet. They create a new thing or when they meet, they create a new thing. 
It’s slightly different than association, which just means the connection between those two things, but that the connection itself is a new thing, existing from two independent frames of thoughts. That’s like at the core of where ideas come from. 00:19:16 - Speaker 2: Yeah, I even go so far as to say, or maybe I’ve heard creativity defined as connecting unrelated ideas, but maybe where this fellow Arthur Koestler, I guess his last name, where his work maybe it’s this idea of two different frames or two different domains where it’s an unexpected connection, and in fact one of the things that I think I see written in kind of like how to have good ideas type. Books like Steven Johnson’s works or whatever, is often about people who are in different domains. They work in one field, for example, and then they go to solve a problem in another field and they’re able to apply ideas that are commonplace in one field in this new place, and that’s that weird intersection that produces something truly new. 00:19:59 - Speaker 3: Yeah, and I think part of the challenge here is the ideas need to be primed in a sense to be joined or synthesized. So that’s why things like chewing over ideas, discussing, debating, remixing, these are all different ways to basically ruminate on the content, and by doing so you sort of prepare it for synthesis with another idea. 00:20:18 - Speaker 1: Exactly, that was one of the things that was also really fascinating to read through, is basically kind of like debunking this myth of this eureka moment. Whereas like, you expect this eureka moment to be this like singular entity where everything kind of like goes from 0 to 100 and it’s like all kind of like falls in place, but then you look closely at these stories around Newton and around Darwin, and you kind of like see that they have had their theories around for years before this, and they were really close. And so it’s not that in this eureka moment everything fell into place. It’s just maybe this last thing connected. But 95% of this idea was likely existing already or of this theory or of this concept. 00:20:59 - Speaker 3: Yeah, and a sort of corollary of this is that you can’t stare at something too hard. Like if you just sit down and think really hard about a particular idea or even a particular problem, you’re likely to be too constrained in your thinking, you’re get a sort of tunnel vision that obscures these other ideas that you need to connect in. So you really have to step back, chew on some other domains, chew on some other topics, and then hope that eventually it will sort of pop out as a synthesis with your other problem domain. 00:21:24 - Speaker 1: There was some interesting research we’ve read into and if there’s any kind of like neuroscientists there and I’m like representing this inaccurately, let me know, but that basically you have a set of stacks of possible kind of like positions for thoughts or snippets of thoughts, and between that stack you can create connections. And if this is a new connection, that would be considered an idea, and you do that in your subconscious all the time. But basically, when you’re staring at something for too long, all of your stack will be kind of full with all the things you’ve read and worked on. And there is a point where you just don’t see any new angles on this content, cause like the stack is the same things since 3 hours, but then you go outside, you summarize these stacks. 
They become kind of like less defined and more blurry, and then you see a dog walking around and some other things kind of like are popping up, and suddenly they’re like, oh, I could connect those two together, because suddenly you are free of these distractions. That’s the perfect shower moment actually fits perfectly into this. Because in the shower, there’s just not a lot of things you can do in the shower. You’re kind of like just naked there and alone with your thoughts, quite literally. 00:22:37 - Speaker 3: Rich Hickey makes a similar point in his talk, hammock Driven Development, which I very highly recommend. 00:22:44 - Speaker 2: I’ve probably recommended it on this podcast before, Mark, it’s always tricky because I think you’ve mentioned that enough times now. I’m probably gonna stop putting it in the show notes. OK. But clearly I can see it’s a high impact piece, so everyone should go and read it. 00:22:56 - Speaker 3: He makes the point, there’s also a sort of priority que element to this, which is you have end domains that you’ve ever thought about, but to pick a number, the top 7 that you’ve thought about most recently are sort of candidates for this background mind synthesis to happen. That’s not exactly true, but there’s a sense of the things that you’ve chewed on more recently. are more likely to be part of a synthesis of an idea. And so part of the work is actually to constantly shuffle your priority cue around by changing the ideas that you read about or think about together in time, and eventually you kind of find the right combination of 7 things in your head in the shower and out pops the shower idea. 00:23:31 - Speaker 1: I think this is great. Yeah, there’s a ton of approaches on how computers, but also just processes and behaviors can support this concept of by association, kind of like make the right content available at the right time is something where I think all played with of recommended content, right? But also. As a way to structure your research in a different, more natural way, ultimately follows the same goal. It’s about kind of like making the content, the knowledge that you have available at the right time, so it can be in your head, so you can connect it to other things, to new ideas. And I think that’s also where I would place muse into the space, that kind of like it’s a space primarily for kind of like maybe marinating on your ideas and exploring it maybe in different ways. Here’s a PDF, here’s a video of someone explaining this. How do you see the role of muse in this personal creative process? 00:24:25 - Speaker 2: Yeah, for sure, that’s certainly exactly how I use it. I feel like one of the cornerstone maybe features we introduced was the excerpting, which the idea of pulling out pieces. This isn’t quite a remix, it’s almost the reverse of that. It’s almost like a deconstruction, and for me I often have successive stages of that, which is, OK, I’ve read a few books on a particular topic. Now I wanna go and kind of apply that knowledge to a domain. And I’ve got my Kindle highlights and I’m pulling those, and there’s a pretty easy way to pull that in this PDF to muse and then I’ve sort of got those there and I can go through it and then I can pull out of my highlights, sort of like highlight my highlights or something like that, but I exert out the ones I think are most relevant. 
And then importantly order them, so they’re sort of near each other in different combinations, or do a little bit of the affinity mapping thing or something like that, push it around, but yeah, part of what I’m trying to do there is boil down to some components that hopefully for me will add up into a call it a new idea or a strategy for whatever problem I’m specifically trying to solve in the moment. 00:25:35 - Speaker 1: I think this fits into what we learned during our special the well. We interviewed an historian and she had a word document, which was, I think, up to 300 pages long, and it was just a glossary of words and references to other places where she’s read about these words in other books and other sections. And just that document alone, it was just 300 pages of references to other content. And just seeing that and how people use even a very simple tool like Word basically for something like this knowledge management task, like this humongous knowledge management task was pretty inspiring too. 00:26:13 - Speaker 3: Yeah, I think there’s an interesting spectrum here with tools for thought in terms of how explicit they try to make these connections and how much the tool is actually designed to output those. So Muse is, I would say on the end of the spectrum, it’s more like you’re meant to marinate with your content, then it’s swimming around in your head and out are gonna pop new ideas from your head. And that’s good for like intuitive domains and coming up with new ideas and brainstorming and things like that. But then when you’re writing a history paper, for example, you need extremely specific documented references, and so there it’s more important to have a very explicit trace of every connection that you might have made in the past so you can substantiate all your claims and have all your sites. And I think both of those things have their place, but I think it’s important not to confuse their purposes. I think you can’t force having new ideas by kind of structuring all your stuff in a graph or something. And conversely, if you try to intuit your way to a history paper, you’re gonna have a bad time. So I think that both of those extremes have their uses. 00:27:10 - Speaker 1: Definitely, I think that another thing that fits into this is how can you frames of thought come into your mind, kind of like diving more more deeply into iteration itself. I love this model, this, I think it’s a mind sketch model from Bill Buxton that is kind of like outlined in sketching User Experiences. It’s an amazing book. My roommate recommended it to me because he did his bachelor’s thesis on how to prototyping tool, and he basically gave this to me, I think 1 year ago or something, after I was already working for nearly 2 years on prototyping at Figma, I hadn’t seen that book before. And then when I read this, like a lot of what is today originates from this book. And the core process of federation is this aspect that you create something, you externalize something. Because you externalize this knowledge, you can now take a step back and evaluate what you’ve created and learn from it. 00:28:05 - Speaker 2: Yeah, and in Buxton’s model, that’s the sketch. And when he talks about making a sketch that has this very, it’s not just a pencil on paper or that has a particular line width or something like that. 
It’s specifically that it is a very rough and purposely Not complete, leaves a lot to the imagination, maybe raises more questions than answers, but it is this externalization that then you can step back from. You can both share it with others, but even just yourself, you can step back from, you can look at it, kind of look at it from different angles, squint at it a little bit, and it will reveal new things that that same idea just purely in your mind might not. 00:28:45 - Speaker 1: Exactly, exactly, and I think that’s just amazing that that’s possible, that we as humans are capable of doing this, of externalizing our own ideas and then gaining new knowledge because we’ve done that. Like, where does this information come from? 00:28:59 - Speaker 3: I think there’s actually a lot going on there, right? Because some of the knowledge you get from the process of actually having to externalize it, cause you’re changing the format basically, and that involves processing of everything. You’re also learning by looking at it and seeing, for example, the empty space, which wouldn’t have been present in your associative mind. And you’re also learning at it by being able to show people. You’re also learning by being able to refer to it later in time, and you’re also freeing up space in your mental priority queue because you no longer are subconsciously thinking, I have to remember this, I have to remember this. So it seems like a simple thing, but there’s so many different ways in which you’re learning just by doing the simple process. 00:29:34 - Speaker 1: What I love is, or also where the core of my thesis is placed around is essentially, what are the models now with collaboration that fit into this? Cause you mentioned it that collaboration can help with this process as well. And of course I can show it to someone and they can kind of like communicate things back to me, and they can talk about this and directly give me some kind of advice on how to change things. But I think it’s interesting to look at it more closely on collaboration through creation, or communication through creation or manipulation, essentially, that if I create something and let’s say I create a file, I create a design file, and I sent this design file to you, and now you have a copy of this design file, and you make changes in this design file and send it back to me. Or I just kind of like take a screenshot and send it to you and you scribble on top of that screenshot and send it back. That’s the first step, kind of like the first model of collaborative iteration, and I would call it kind of redundant collaborative federation, cause we duplicate these objects, and because we’ve duplicated these objects, we can collaborate on those, and I think that has been in a lot of times the way we just collaborated on nearly anything in the digital space. Like duplicating things in the digital world is slightly harder. But in the digital world, it has been like this since email basically existed. 00:30:55 - Speaker 3: And I’m curious if you see that as a strictly inferior form of collaboration or if it’s more like a different mode. So to my mind, to my hand here. I feel like that’s one of a few possible modes of multi-user collaboration and it has its uses. So for example, when Adam and I are writing, we’ll often have a draft and we’ll send a bunch of other individuals their own unique copy of the draft so they can be able to write whatever they want and they’re not getting groupthink by seeing everyone else’s comments. 
And then we take all those comments and we synthesize them in another draft, and then you might go into another type of collaboration, which is everyone's looking at the same document and making real-time edits because you're kind of converging. It's a different use case. 00:31:31 - Speaker 1: Oh yeah, definitely, and I think that was one of the big steps, basically, for me at least internally, kind of like wrapping my head around this: not looking at these different modes of collaborative iteration as good or bad, but as just solving different types of problems, solving different kind of like steps in the process, essentially, cause what you're saying is totally right, like what this redundancy also helps with is comparison. And when we talk in kind of like more detail about these open canvas tools like Figma, what happens a lot of times just inside of those is redundant iteration as well, right? Like, I'm duplicating this frame and just not changing this frame because I need the ability to compare. What you've kind of like mentioned is the need for different audiences of people ultimately, and different audience levels have to correspond to the relative content level inside of there. If there are a lot of work-in-progress comments that you don't want leadership to see, you might want to bring this into a different document where there's an empty collaborative space. So that definitely makes sense. I think it just solves for different purposes. 00:32:36 - Speaker 2: That potentially could take us to a whole other space or a whole other discussion topic, which is feedback: what is feedback, how to give good feedback, how to solicit good feedback. Probably we don't wanna get too diverted on that, but it comes to mind because, talking about the different audiences, if you're presenting something to your boss, to a client, or to anyone where you know their time and attention bandwidth is limited, you want to get their, like, big-picture view on things, or just kind of a thumbs up, thumbs down, or keep them in the loop. And that's different from, here's my teammate, we're both collaborating on this thing and we want to really go into all the fine details together. You're just seeking something different from the feedback, and being aware of what it is that you're seeking in that feedback loop can help you have the right format or the right level of detail. 00:33:25 - Speaker 1: Exactly, and I think that for a tool, or for a creative tool essentially, it is important that people are in control. Like, this is kind of like looping back to what we discussed at the start, that people can be fluently moving between the different ways of collaborating, and that they can invite the stakeholder with certain permissions, and the client with certain permissions, and the teammate. And I think the question is kind of like, can this still happen in the same space, even though those people have different permissions. 00:33:58 - Speaker 3: Yeah, this is something that I feel like we're still organically discovering as tool makers. So if you go back to the before times where everyone was emailing attachments to each other, that worked very well for what you call the redundant collaboration use case. You just send someone a copy and they can do whatever they want, and then when they're done they can send it back. But then if you want to have a shared, unified state somewhere, that's really hard in that world.
And then we got this whole world of new tools including Figma and Google Docs, and that makes the real-time synchronized shared collaborative space first class, but I feel like sometimes it actually makes it hard to do the individual private collaboration, often just because it's really hard to make a copy of stuff. I feel like in Google Docs, for example, just to make a copy of a document is a bit of a heavyweight operation, it takes a few seconds and makes weird names and so on. One of the reasons I think it happens more often in Figma is that it's very easy to make a copy, especially if you're doing a very lightweight copy on the same canvas: you just highlight, command C, command V, I think, and that just pops out a new version, then you can kind of scribble on that and then go back and do your merge later. Another tool example here would be Git, which I feel like has its UX challenges, but it does get this right. Well, plus GitHub. It didn't have this before GitHub. You know, the local Git gives you the privacy to do whatever you want and mess with stuff, and then GitHub provides the unified central state. 00:35:13 - Speaker 1: Exactly, and I think that I would categorize all of those into kind of like restricted collaboration, or restricted collaborative iteration, because they somehow constrain how the different people can manipulate these shared objects. Either they're kind of like restricted through having a private copy first that you need to update manually, or through enabling people to limit someone's access in there. One thing that I've seen quite often now, in Google Docs and in Paper and the like, is that people create their kind of like appendix: trash, don't look below here. These kind of spatially close areas, because it maps to Figma too. I was like, here's my trash area, don't look at these things in here, like, please don't, these are bad ideas. There's an interesting aspect there that I would love to dive deeper into at some point, around why can't we let those things go. Oftentimes you don't look at these things, but you kind of still want them to be there. You want to keep them around in case you need them. You feel really bad if they're gone. 00:36:17 - Speaker 2: Yeah, old notebooks are the same way. Even older Muse boards for me in a lot of cases are things that are mostly just historically interesting. Every once in a while it's kind of cool to be able to reference it, but the reality is, you want that end thing. You usually don't need any of the steps that led up to it. Git history, the same thing. Like you could probably, for almost any project, go in and chop off all the Git history from, you know, prior to a week ago, and it wouldn't really make any difference for any day-to-day work, but yet there's that feeling of something lost, something important that every once in a while it's nice to be able to reference. 00:36:55 - Speaker 3: Yeah, so I feel like there's that temporal angle of eventually you might want to archive something, but I also feel like there's sometimes a tooling limitation where, especially in these modern apps, they're very oriented around enterprise work groups, and so if you want to have a personal space, it's a little bit unnatural: you either need to go out into your My Drive or something, which is a whole ordeal, or you need to effectively carve off your own little personal space within a document by hitting enter 10 times, writing "Mark's notes", and typing below that.
And one of the things we've explored in the lab and with Muse is, can you make that more fluid by making the transition between the personal and the collaborative space much more seamless. The analogy that I always come back to is the university department, where you have a private office and you have your faculty lounge, and you can take a few steps over and back and you can bring your papers over and back and you can check out the whiteboard across the hall. And that's sort of very seamless collaboration, where it's all the same office building, it's just that different zones are demarcated slightly differently, and it's very lightweight to move in between them. That's the kind of vibe I'm hoping for with digital tools. 00:37:57 - Speaker 1: Yeah, I think that would be amazing. Like, the current solution basically in Figma is that drafts, or new files, always open in drafts, and drafts are private by default. So that creativity as an intimate process can start in private, because oftentimes there's a ton of internal barriers in your head of like, is this really a right idea? Do I want to share this? There might be kind of like external barriers of a culture in which, quote unquote, bad ideas are shut down from the beginning, or you're fearing being judged for those ideas or for just sharing those ideas in general. And I think there's a ton there in how this flow can just feel a lot more fluent, as you described. I could imagine, like, Muse boards, basically: this is my private Muse board, and we can be together in the same Muse board, but down here, like inside, I'm zooming into this space, that's my office space, right? Yeah, exactly. This is my office space, so you're just technically not allowed to go in there. I think there's a ton of fun stuff in how the interface paradigms will change the relationship of how we look at these digital collaborative spaces, and how we also kind of find ourselves leveraging the cultural habits that we have with shared physical spaces and bringing them into these digital spaces. If you're in an office building, and it seems like decades ago that you were last in an office building, right, you have this cultural understanding that you don't go into someone else's office, especially when there's other people sitting in there. You just wouldn't do this, right? And in digital spaces, it feels different, but I'm interested to see kind of like how this will evolve over the next 5 to 10 years. 00:39:28 - Speaker 2: I think learning from the physical spaces and the social cues and all that that we've built up over a very long time, and trying to bring some of that to digital, is certainly a rich well to tap. I also feel like sort of video chat and screen sharing and things around the live synchronous video and audio might also have some clues for us. One to me that's pretty telling is the screen share stuff, which of course is just huge for a distributed team, and I've gotten pretty handy with setting up my screen in a particular way so that I've got a window to share that's kind of the right size and orientation, so it'll look reasonable on most people's desktops. But then if you actually have a multi-window flow you wanna show, now you kind of need to share your whole desktop, and for some reason that seems way more intimate. I don't even have, like, I don't know, text messages going to my Mac, so it's not like someone's gonna see a personal message come in on my notification center, I don't think, but still there's this sense
that that's much more really letting someone into your private space, which is kind of interesting. And then, of course, there's all the stuff around, if you have other devices that you need to show, like an iPad, or you've got an external camera, which we often need for showing a person actually using the iPad with their hands. So I feel like there's a lot there that affords opportunities, but also that we need to adapt to in how we think about collaboration and privacy and synchronous and asynchronous work in, let's say, the modern virtual office. 00:40:57 - Speaker 3: Yeah, I've mentioned this theory before that a lot of collaborative and social technology first appears in games. And according to that theory, within a few years, professionals will need to use OBS to do exactly that. I don't know if you're familiar with OBS, but it's a program for streamers to basically render their stream from a bunch of different windows and graphics and stuff, and kind of composite it all together into whatever they want to present. And I actually know some professionals who do use this for things like teaching classes where you need to composite a bunch of stuff together. Well, the best program in the world for that is what streamers use. So just use that. And I wouldn't be surprised if that, or a technology like that, becomes standard in the same way that microphones and ring lights and all that stuff did become standard for office workers. 00:41:37 - Speaker 1: Zoom, definitely. I think there's a Studio Beta, which is, I think, basically like Snapchat-style filters for Zoom, and I think there's some feature in there that will integrate kind of like a PowerPoint slide presentation, right, into your background, and maybe key things out or something. And I think that's a start in this. I think you're totally right that these things will just become a lot more accessible for day-to-day work of kind of like creating these mixed media streaming environments. One thing I'm really interested in, though, is this aspect of what makes this work ultimately in the end. Like, what is the oil that makes this collaborative iteration process, of us improving each other's ideas, really work? And I think that there's a bunch of things to dive into in this aspect around the culture for collaborative creativity. Cause we've touched on it a little bit, but this aspect that people can feel comfortable sharing bad ideas, essentially, is what matters at the beginning of an iterative process, right? Like, the ideas you're going to share are not ideal. And if we look at collaborative iteration and we see that there's value in bringing people together that trust each other, what cultures would we have, or kind of like what cultural shifts would need to happen, for this to become more fluent? 00:42:55 - Speaker 2: Well, trust certainly seems like a huge part of it, and then how do you actually build trust on a team? You know, it's one thing if you're longtime friends or longtime collaborators, but particularly when you have, for example, a fast growing company, as we were talking about earlier, and you have essentially relative strangers, maybe from different backgrounds, that come together, it's probably even harder when you have less or no in-person time, in the world we live in now.
And so, is that something software can solve at all, or is this purely a classic human management problem, and we need to do exercises where we fall backwards into each other's arms in order to be able to make a shared document together successfully? 00:43:34 - Speaker 1: I think it's actually kind of like interpersonal maturity and interpersonal relationships that we have to learn through the tools. Tools can give us guardrails. Like, if I know that this is a production thing, this is the thing that is used in production, I'm definitely going to use GitHub and will restrict the access to this and maybe only allow me to merge things into the main branch, and have these guardrails and structures in place so that collaboration can also grow in this environment. But then separately, being together in the same file at the same time: at any point in time, you could hit command A, select everything, and hit the delete key and just get rid of everything that's there. Yet we still don't do it. So the tools still allow this. They still allow fucking up each other's work. So the fallback has to be a cultural way of working together. But one thing that we've seen with Figma is that Figma grows rapidly inside of a company once you invite other people, and they kind of, they invite other people, they create content, they invite other people, so it's beautiful to see that. But then separately, one thing that at the beginning seemed kind of like independent of all of this was that, like, Halloween 2019, I've seen a lot of people dressed up as Figma cursors for Halloween. And I was like, why is this happening, right? Why are you dressing up as Figma cursors? Why do people have kind of like group costumes where everyone is a Figma cursor and they're just roaming around this space? And it's been fascinating looking back at this, because I think, looking at the culture and looking at the tools, what Figma had enabled for these teams was that they trusted each other, and now they were able to build on top of each other's ideas in a far more efficient way than they've ever done before. And it might have even helped them to establish these cultures in the first place. To be like, now that we are in the same space, this maturity of how we work together becomes more important. We see how beautiful it is when it works, and now we actively want to work towards this, so that it's not kind of like, oh yeah, this is randomly happening, that I'm able to have another idea because you've had an idea and put this down and shared it. It's not serendipity, it's actually something that we can actively work for. And so I believe that the tools that open up these collaborative processes actually can incite a change of making cultures more inclusive and more open and more respectful to work with, and especially getting rid of the Steve Jobs myth of like, hey, good feedback is direct feedback, right? Like, this is dog shit. It's not gonna help you in the long run build better ideas or come up with better ideas.
00:46:17 - Speaker 2: On the feedback side, I feel like it certainly is culture, it certainly is trust, but when I'm working with a new person, whether it's on a writing project, something product design related, or even things externally in my personal life, you know, collaborating with a cohabitation partner on decor, for example, I feel like when you're first doing a project together, you're first exploring that part of a relationship with someone, a new colleague, whatever it is, and I've sort of learned to prime people a little bit though, like, if you share something with me, I'm gonna give you tons of feedback, usually. Like, often I've gotten the feedback on my feedback that it's sort of a fire hose and can be overwhelming, and I've actually learned to even try to trim it down a little bit to, like, the key points. But that's also because it's kind of a golden rule thing, that's what I like to receive. And in particular I like really stream-of-consciousness feedback. I don't want you to do my thinking for me. What I want you to do is react. I want your hot take, I want your snap reaction of this made me feel like this, and this made me feel like this, and this made me angry, and this made me happy, and this made me confused. And, you know, it's not to say that every single point of feedback is something I'm gonna do something about, but that, overlaid with feedback from others, is how I get a picture of how something I've created is perceived or could potentially impact an audience. But that's not necessarily how others work, and maybe they're surprised by that in both directions. So I really try to establish that up front. You share the thing with me, I'm gonna give you this style of feedback, and likewise, if I'm sharing a thing with you, this is what I want, is this kind of heavy feedback. 00:47:52 - Speaker 1: Yeah, and I think getting everyone to share these thoughts in the first place is going to be a big change inside of companies, where, with tools like Figma, people now have the ability to communicate visually. Anyone in the organization now basically has the ability to communicate visually. But that they are actually actively doing this and using this requires them ultimately to put down ideas that they might not be sure about at that point. And that might be common for designers, right, to kind of like share early thoughts. But if we talk about kind of like PMs or engineers who may have a design idea, or an architecture idea of how something could work maybe slightly differently, or of how the user flow kind of like breaks off here and goes down a different path, those things can be amazing ideas even if they're just shared in the form of a diagram, or a little scribble, or a little kind of like, I don't know, just jotting on something, yeah, but those people have to also feel comfortable in sharing this in the first place. And if you're an engineer in a company, or if you're a PM in a company, you might not be sure: this design tool space is owned by the designers, right? Can I use this? Does that make me a designer? If that makes me a designer, are other people annoyed that I call myself a designer? Like, it shouldn't be about any of this. It's just kind of like a core skill of being able to communicate visually, and it can help discussions, especially if that happens in spaces where other people can take those visual objects and immediately iterate on them.
Like, we're still in this concept, we're still in a space where people can work on top of these ideas again. But I think the key barrier that we've often seen is that people are a little bit shy of sharing the idea in the first place, cause they might feel that, oh, this will reflect badly back on me. And I think that's a call for designers, essentially, to share the bad work more openly. We have a design work-in-progress channel, and it's fascinating to see how much of the work that's just happening is visible there. Although it's not always polished, although it's not always kind of like perfect, you see that these things are happening. And it has become kind of like one of the most active channels, because it established a culture of a different kind of critique, not this culture of kind of like, hey, we shouldn't ship this, right? Like, if you share something in the official design critique channel, you might get feedback of like, hey, maybe we shouldn't ship this, this is not up to our quality standards. But then in this work-in-progress channel, where the quality bar is just set very differently, the feedback is a lot more of, like, a "yes, and" style, of like, oh yeah, we could do this too, or like, hey, this could fit into this project that I'm working on, and it feels very different culturally. 00:50:25 - Speaker 2: I have the sense, maybe it's a stereotype or just reflects some of the designers I've worked with over the years, but the designer archetype for me is someone who is much more likely to want to stay in their ivory tower longer and kind of really polish something until everything is completely perfect and without any conceivable critique, and maybe, to straw man a little bit, like a delicate snowflake, where when someone says, you know, I don't completely 100% like this, they're very upset. And maybe engineering types, again, this is perhaps just a stereotype, but are more likely to be a little more willing to take feedback on work in progress. I don't know, do you think that's accurate? Is that an outdated point of view, or is it accurate, but something you think you and your team are working to change with your product? 00:51:11 - Speaker 1: I'm lucky that I can say that it's outdated for me, that the people that I work with at least don't show this kind of behavior that significantly, at least. I think it definitely exists. It definitely exists. I remember reading the first comments when Figma was published on Designer News: if this is the future of design, I'm changing careers. And I even remember the video, I think it was from Sandwich Video, this first initial teaser ad for Figma when it first launched, and I remember it showing a use case where someone just moved something like 10 pixels. Some senior designer moved something 10 pixels and it's like, oh yeah, I just tightened it up a bit. And I'm like, if this is the future of collaboration, I wouldn't be sure if that would have worked. But I think this aspect of, once you feel the value of other people adding freely to your ideas, and at the same time also being respected for the things that you've done, and you realize that you can now take from all of these ideas and you can combine them into new ideas, and those are maybe your ideas again, that feeling of being able to tap into everyone else's mind, I think it is amazing.
And one thing that comes to mind is something that started very early on at Figma that ultimately kind of like kicked off this value of collaboration, or this thinking about the value of collaboration, a lot more for me, because I initially joined Figma because I liked the component overriding behavior. I was like, hey, this is cool, I can override more stuff than in Sketch. So I got intrigued by that, but then I joined Figma and I was working on the comment pins, and I just outlined a couple of the states that we need for comment pins. And we went into the design crit. There were just a couple of people. Dylan was also joining design crits at the time, that was kind of like how small the company was. And then we just, for 15 minutes, riffed on top of each other's ideas. And then I went back from this design crit room with this file on my computer that literally everyone around who had something to do with design at Figma at the time had worked on. And it was an amazing feeling, because I sat there and I was like, there are so many good ideas in here. And the beautiful thing was that they were not named. I wasn't even sure who created which parts in this document, and my role as a designer was then to look at these things and see kind of like, how can I combine them into something that is most promising. And so, coming back to your question, I hope that this experience pushes people towards working more in the open. Because they see the value of this open iteration, they see the innovative value in being able to tap into other people's minds, cause there's more to tapping into other people's minds than sending something and asking for feedback. Listening to feedback by allowing other people to create in the same space that you create, with the right people, can definitely feel magical. 00:53:57 - Speaker 2: It's really powerful, and yeah, it's kind of vulnerability, but then if you open yourself to that, and simultaneously open yourself to it with a team of other people who are doing the same thing, and then have that experience of the shared mind and how much more powerful that is, then maybe that charges you up to see the value of it and be more open in the future. Whereas maybe if you get the reverse experience, if you try to open yourself that way, and you don't have the right team or the right culture or the right setting, and you get shut down or you feel rejected or something like that, then that's maybe a negative feedback cycle of the same kind. 00:54:32 - Speaker 1: Yeah, exactly, and this is also one of the underlying motivations of why I'm trying to build this model on top of the core aspect of what thought or creativity is for a single mind. That, you know, creativity is pushed through having a diverse set of thoughts in your head, and the question is, how can this diverse set of thoughts come into your head. And at that point, you realize that if other people share their bold ideas, and if they're comfortable sharing their wildest dreams, even though those might be kind of like going against company policy or something, those things can be the missing spark that someone else needs. And because this is tied to kind of like the core aspect of creativity in the mind, you can't really argue with this.
And so I hope that through this, and through experiencing this in the tools and the products that we build, companies see the value in an open and inclusive design process where people can feel safe sharing ideas and do not have these experiences that you describe. And I hope that in the next 50, 100 years or something, it's just seen as an old way of working if you don't allow people to work like this together. 00:55:40 - Speaker 2: I feel like I can see a parallel there with open source, and in fact the style of working in public with strangers, or relative strangers, on a code base over time, and that in turn fed back into even private collaboration on code. There's just a different perspective or a different way to be creative, maybe, but you have to bootstrap into it. So maybe you're helping do that for design and maybe even the larger world of technology. 00:56:10 - Speaker 1: I think the beauty in this too is that I think it could help design elevate from being seen as this thing that people do to make things pretty, to being a lot more focused on an aspect of problem solving, essentially, problem solving in an open solution space where you just don't have any idea of where to go next or how to evaluate your idea in the beginning. That design can kind of like feel bigger than that, and because it feels bigger than UI design as we know it today, through that it becomes more inclusive too, and people might identify more with, hey, I also work creatively, I also iterate on my ideas. These
00:00:00 - Speaker 1: But this totally changes how the data is persisted, and I think that's important because the only way you get good results on sync systems, especially when you're talking about offline versus online and partially online, it has to be the one system that you use all the time. You can't have some second path that's like the offline cache or offline mode that never works. It needs to be the one true data synchronization persistence layer. 00:00:29 - Speaker 2: Hello and welcome to Meta Muse. Muse is a tool for thought on iPad and Mac, but this podcast isn't about Muse the product, it's about Muse the company and the small team behind it. I'm here today with two of my colleagues, Mark McGranaghan. 00:00:43 - Speaker 3: Hey, Adam. 00:00:44 - Speaker 2: And Adam Wulf. 00:00:46 - Speaker 3: Yeah, happy to be here. 00:00:48 - Speaker 2: Now Wulf, you are not at all new to the Muse team, I think you've been with us for coming up on 2 years now, but it is your first appearance here on this podcast, a long overdue one I would say. So we'd love to hear a little bit about your background and how you came to the team. 00:01:03 - Speaker 3: Yeah, thanks, it's exciting. Before Muse, I worked for a number of years with Flexibits on their calendar app, Fantastical, both on the Mac and the iPhone and iPad. Really enjoyed that. At the same time, I was also working on an iPad app called Loose Leaf, which was an open source, just-paper inking app, kind of a note taking app of sorts, really enjoyed that as well. 00:01:28 - Speaker 2: And I know when we came across your profile, let's say, I was astonished to see Loose Leaf. It felt to me like sort of the same core vision or a lot of the same ideas as Muse, this kind of open-ended scratch pad, multimedia inking, fluid environment, but I think you started in what, 2013 or something like that, the Apple Pencil didn't even exist, and you were doing it all yourself and, you know, in a way maybe too early and too much for one person to do, but astonishing to me when I saw the similarity, the vision there. 00:02:03 - Speaker 3: Yeah, thanks. I think the vision really is extremely similar. I really wanted something that felt physical, where you could just quickly and easily get to a new page of paper and just ink, and the app itself got out of your way, and it could just be you and your content, very similar to you sitting at your desk with some pad of paper in front of you. But yeah, it was, I think I started when the iPad 2 was almost released. And so the hardware capabilities at the time were dramatically less, and the engineering problems were exponentially harder as a result of that, and it was definitely too early, but it was a lot of fun at the time. 00:02:42 - Speaker 2: And I think one of the things that came out of that, if I remember correctly, is this open source work you did on ink engines, which is how we came across you. Tell us what you did there. 00:02:52 - Speaker 3: Yeah, there's a few different libraries I ended up open sourcing from that work. One was the ink canvas itself, which was the most difficult piece for me. The only way to get high performance ink on the iPad at the time was through OpenGL, which is a very low-level, usually 3D, rendering pipeline. I had no background in that, and so it was extremely difficult to get something up and running with that low level of an architecture.
And so, once I had it, I was excited to open source it and hopefully let other people use it without having to go through the same pain and horror that I did to make it work. But then one of the other things that was very useful that came out of Loose Leaf was a clipping algorithm for Bezier curves, which are just fancy ways to define ink strokes, basically, or fancy ways to describe long, curvy, self-intersecting lines. And that work has also been extremely important for Muse as well. We use that same library and that same algorithm to implement our eraser and our selection algorithms. 00:04:05 - Speaker 2: And when you're not deep in the bowels of inking engines, or as we'll talk about soon, syncing engines, what do you do with your time? 00:04:13 - Speaker 3: Oh, I live up in northwest Houston in Texas with my wife Christie and my daughter Kaylin. And she is in high school now, which is a little terrifying, and learning to drive, and we're starting that whole adventure, so that's been fun for us. I try and get outside as much as I can. I'll go backpacking or hiking a little bit. That can be fun, and the Houston summer, it's rather painful, but the springs and the falls, we have nice weather for outdoors and so on. 00:04:42 - Speaker 2: What's the terrain like in the day trip kind of range for you? Is it deserty? Are there mountainous or at least hilly areas, or is it pretty flat? 00:04:52 - Speaker 3: It is extremely flat and lots and lots of pine trees, and that's pretty much it. Just pine trees and flat land. Sometimes I'll drive a few hours north. We have some state parks that are nice and have a bit of variety compared to what's immediately around Houston, so that's a good backup plan when I have the time. 00:05:14 - Speaker 2: Flat with a lot of trees sounds surprisingly similar to the immediate vicinity of Berlin. I would not have expected Texas and northern Germany to have that commonality. It gave me a lot of appreciation for the San Francisco Bay Area; while that city didn't quite suit me, as we've discussed in the past, one thing that was quite amazing was the nature nearby, and a lot of that ends up being less the foliage or whatever, but more just elevation change. Elevation change makes hikes interesting and views interesting, and I think itself leads to, yeah, just landscape elements that engage you in a way that flatness does not. 00:05:55 - Speaker 3: Yeah, absolutely. I lived in the Pacific Northwest for a while, and the trees there are enormous, and the amount of green and elevation change there is also enormous. And so when we moved back to Houston, it was a bit of a shock almost to see that what I used to think were tall trees in Houston are really not very tall compared to what I lived around up in Portland, Oregon. 00:06:21 - Speaker 2: So our topic today is sync. Now Muse 2.0 is coming out very soon. We've got a launch date May 24th. Feels like tomorrow for our team scrambling to get all the pieces together here, but the biggest investment by far, even though we have the Mac app and we have text blocks as a part of it, the biggest kind of time, resource, energy, life force investment by far has been the local-first syncing engine. And we've spoken before about local-first sync as a philosophy generally in our episode with Martin Kleppmann, but I thought it would be good to get really into the details here now that we have not only built out this whole system, both the client side piece and the server piece.
But also that we've been running it in, I won't quite call it production, but we've been running it for our beta for a few months now, and we have quite a number of people using that, some for pretty serious data sizes, and so we've gotten a little glimpse of what it's like to run a system like this in production. So first, maybe Mark, can you describe a little bit how the responsibilities breakdown works between the two of you on the implementation? 00:07:32 - Speaker 1: Yeah, so I've been developing the back end or the server component of our sync system, and Wulf has been developing our iOS client that is the core of the actual app. 00:07:45 - Speaker 2: Yeah, on that side, I kind of think of the client persistence or storage layer as being the back end of the front end. So that is to say it's in the client, which obviously is a user interface heavy and oriented thing, but then it persists the user data to this persistence layer, which in the past was Core Data, is that right, Wulf? The kind of standard iOS storage library thing. 00:08:08 - Speaker 3: Yeah, that's exactly right. Yeah, we used Core Data, which is Apple's fancy wrapper on top of a SQLite database. And that just stores everything locally on the iPad, like you were saying, so that way the actual interface that people see, that's what it talks to. 00:08:25 - Speaker 2: And then that persistence layer within the client can talk to this back end that Mark has created. And much more to say about that, I think, but I thought it would be nice to start with a little bit of history here, a little bit of motivation. I'll be curious to hear both of your stories, but mine actually goes back to using my smartphone on the U-Bahn, so that's the subway system here in Berlin, when I was first working with some startups in the city back in, I guess it would have been 2014, so, 8 years ago. I had this experience of using different apps and seeing how they handled both the offline state but actually the kind of unstable state, because you have this thing where the train car goes in and out of stations, and when you're in the station, you usually have reception, weak reception, and when you leave the station that fades off to essentially fully offline, and so you're in this kind of unreliable network state all the time. And two that I remember really well, because they were really dramatic: one was Pocket, which is the read-later tool I was using at the time, and it handled that state really well. If it couldn't load an article, it would just say you're offline, you need to come back later, but the things it had saved, you could just read. The other one I was using was the Facebook mobile app, and there I was amazed how many errors and weird spinners, and you go to load a thing and it would get half of it, but not the rest of it, and the app just seemed to lose its mind because the network was unreliable, and I found myself thinking, what would make it possible for more apps to work the way that Pocket does and less the way that Facebook works? And I also had the opportunity to work with some startups here, including Clue and Wunderlist and some others, that had their own versions of this. Essentially everyone needs this. Everyone needs syncing, because they want either, one, the user to be able to access their stuff from different devices, or, two, they want some kind of sharing, and I think Wunderlist was an interesting case because they built out this crack engineering team to develop really good real-time syncing for a very simple case.
It's just a to-do list, and the common case that people use it for, I think, was, you know, a couple that's grocery shopping and they want to, like, make sure they don't overlap and pick the same things in the cart. It worked really well, but they built this huge, I think it was like a 15-person engineering team, that spent years of effort to make really good real-time sync, and it seemed strange to me that you need this big engineering team to do what seems like a simple thing that every app needs. We went down this road of trying CouchDB and Firebase and a bunch of others, and all were pretty unsatisfying. And then that further led in, you know, that kind of idea, the sync problem, lodged in my mind, and then when we got started at Ink & Switch, some of our early user studies there were on sync and how people thought about it. And one thing that stuck with me from those was we looked into just kind of syncing in notes and note-taking apps and talked to a whole bunch of people about this, and we didn't have a product at the time, so it was just kind of a user research study, but we went and talked to a bunch of folks, most of whom were using Evernote, which was kind of the gold standard at the time. And almost everyone we talked to, when I asked what's your number one most important feature from your notes app, they said sync, and I said, OK, so that's why you chose Evernote, and they said yeah, and I said, how well does it work? And they said terribly, it fails all the time. You know, I write a note on my computer, I close the lid, I go to lunch. Half an hour later, I go to pull it up on my phone. It's not there. I have no idea why. And so some combination of those experiences sort of lodged this thing in my mind of, the technology industry can just do so much better, and this is important and everyone needs it. What's the missing piece? And I wasn't really sure, but that led into, once I met up with folks in the research world who indeed had been working on this problem for a while, I got excited about the technologies they had to offer. 00:12:15 - Speaker 1: Yeah, and then I guess I was downstream of that, because I got introduced to this space by Peter van Hartenberg, who at the time was a principal at the Ink & Switch research lab and is now the director of the lab. And he showed me a demo of the Pixelpusher project, and we can link to the article on this, but essentially this is a pixel art editing tool that was peer-to-peer collaborative, and the app itself is very standard, but what was amazing to me was he had implemented this app and he had 2 devices, or 2 windows on the same device, and they were doing real-time collaboration, but there was no server. And I had come from this world of, whenever you add a feature to an app, you gotta write the front end and then you gotta write the back end, you gotta make sure they line up whenever anything changes, it's a whole mess, and it was just magical to me that you could just type up this JavaScript app and have it collaborating with another client in real time. So I went down that rabbit hole, and there were the obvious attractions of austere locations and, you know, minimal network connectivity and things like that. And also at the time the research was very oriented around P2P, so there was this notion of the user having more control of their data and perhaps not even requiring a central server, but a couple of things became even more appealing to me as I researched it more. One was the potential of higher performance.
And I ended up writing a whole article about software performance that we can link to. But one of the key insights was that it's not physically possible to have acceptably fast software if you have to go anywhere beyond the local SSD. Certainly if you're going to a data center in Virginia or whatever, you're totally hosed. So it was very important to incorporate this performance capability into Muse. 00:13:49 - Speaker 2: Yeah, that article was eye-opening for me, in that you connected the research around human factors, things that looked at what level of latency you need for something to feel snappy and responsive, and then separately the speed of light, which is sort of the maximum possible speed that information can travel, and if you put those together and do very simple arithmetic, you can instantly see it's not about having a faster network connection. You literally cannot make something that will feel fast in the way that we're talking about if you have to make a network round trip. 00:14:21 - Speaker 1: Yeah, and the one other thing that was really interesting to me about this space was the developer experience. I alluded to this earlier with the Pixelpusher demo, but in the before times there were two ways to develop apps. You had the local model, where you were typically programming against a SQL database, and everything was right there and it sort of made perfect sense. You would query for what you need and you write when you have new information and so on. And then there was the remote model, where you would make REST calls, for example, out to some service, like submit this edit or add a new post or whatever. But then these two worlds were colliding, where we always wanted to be adding sync and collaborative capabilities to our apps, and we would try to kind of jam one into the other: you would try to patch some REST onto the database, or you'd try to patch some database onto your REST, and it just wasn't working, and I realized we needed to do a pretty fundamental rethink of this whole architecture, which is what we ended up doing in the research lab and then now with Muse. The last thing I'll mention about my journey here was that my background was in back-end engineering and distributed systems engineering, and so I had encountered variants of the sync problem several times. For example, at Heroku, Adam, we had this challenge where we had these routers that were directing HTTP requests to a back end that was constantly changing based on these dynos coming up and down, and the routers needed to maintain in-memory routing tables based on the control plane that was being adjusted by the API. And so we had a similar problem of needing to propagate state consistently and in real time to the in-memory databases of all these router nodes, and sure enough that work kind of came full circle and we were applying some of the same lessons here with Muse. So it's a problem I've had the opportunity, for better or worse, to make a few passes at in my career. 00:15:57 - Speaker 3: Yeah, I think it's an extremely hard problem that comes up so often across so many projects: eventually you need data over here in box A to look exactly the same as data over here in box B. And it's one of those problems that's just surprisingly hard to get right, and there just aren't that many libraries and existing solutions for it to drop in and implement.
A lot of other libraries you can just go out and find, and there's code out there, or you can license it, or it's open source, whatever, but for whatever reason, sync is one of those things that, for every project, needs to be custom baked to that project, just about every time. 00:16:38 - Speaker 2: And that's part of what blew my mind back 8 years ago when I was looking for a syncing layer for Clue and realizing that, yeah, I just had this feeling like surely everyone has this problem, everyone needs it, everyone needs the same thing. It's really hard, you know, an individual company shouldn't be distracting from their core competency of building their app to create the syncing layer, and yet to my surprise, there really wasn't much, and that continues to basically be true today. 00:17:06 - Speaker 1: Yeah, and this gets into our collaboration with Martin Kleppmann on CRDTs. So briefly, you can think of there being two pieces to this problem. One is conveying the physical data around, and the other is, OK, you have all this data that's synchronized, what do you do with it, because it's all a bunch of conflicting edits and so on. And that's where the CRDT technology came in. I think one of the reasons why we haven't seen widespread standard libraries for this stuff is that the syncing problem is hard. We'll talk more about that. But another is that we haven't had the computer science technology to make sense of all of these edits. Well, we sort of did. There were operational transforms, but you literally needed to hire a team of PhD computer scientists to have any shot at doing stuff like that. And so Google Docs basically had it, and maybe a few others, but normal humans couldn't do anything with it. But the CRDT technology, and Automerge, which we'll talk more about, made it much more accessible and possible to make sense of all these conflicting edits and merge them into some useful application state. So that's the why-now of why now is a good time, I think, to be pursuing this. 00:18:06 - Speaker 3: Yeah, and I think, almost surprisingly to me, the solution we came up with at Muse is actually really generic, and I think we solved it in a really elegant way that's even more foundational to the technology than solving just for Muse. I think the solution we have can certainly serve Muse in the future and is future-proof in that regard, but is broad enough to be applicable to a whole number of different uses and applications, which I think is really exciting too. 00:18:37 - Speaker 2: Maybe it's worth taking a moment to also mention why we think local-first and this style of sync is important for Muse specifically. I think certainly Mark and I have had a long-time interest in it, and Wulf, you have an interest in it as well, so it's just something where we'd like to see more software working in this way, where the user has a lot more sort of control and literal ownership over the data, because it's on their device, in addition to being mirrored in the cloud. Certainly the performance element is huge for me personally, and I think for all of us on the team. But I think for Muse, as we go to this multi-device world, on one hand, we think that every device has its own kind of unique mood. The iPad is this relaxed space for reading and annotating, whereas the Mac or a desktop computer is for focused productivity, you know, the phone is for quick capture, the web is good for sharing. OK, so really you need your work to be seamlessly across all of them.
But at the same time, you know, we want that sense of intimacy, and certainly the performance, and the feeling that it's in your control and you own it, and it belongs to you. I think that maybe matters less for some consumer products, or maybe it matters less for more kind of B2B, you know, enterprisey products, but for this tool, which is for thinking, which is very personal, which very much needs to be at your fingertips and friction free, I think the local-first approach would be a good fit for a lot of software, but I think Muse needs it even more than most. So that's why I'm really excited to see how this works out in practice as people try it out, and we really don't know yet, right? It may be that we've made this huge engineering investment and in the end customers just say, I'd be happy with the cloud, yeah, it's fine, I have some spinners, I can't access my work offline. I hope not. But that could happen. We could be, like, falsifying the business hypothesis, but I really believe that for our specific type of customer, you'll go to use this product with the syncing layer, you know, once we shake out all the bugs and so on, and say, you know, this feels really fundamentally different from the more cloud-based software that I'm used to, like a Notion, and also fundamentally different from the non-syncing pure local apps that I might use. 00:20:51 - Speaker 3: Yeah, I really think that with as connected as this world is and is becoming, there's always going to be places of low connectivity, there's always going to be places of just dodgy internet, and having an application that you know just always works, no matter what's going on, and figures itself out later once it has good internet, is just so freeing compared to those times when, you know, your device is switching Wi-Fi networks or the LTE is just not quite what it needs to be to make things happen. I think it really does make a really huge difference, especially when you're deep in thought, working on your content in Muse; the last thing you want is to be interrupted for even half of a second with a small spinner that says please connect to the internet. And so just being able to free the application and free the user from even worrying about the internet at all, even if it works 99% of the time, it's that 1% of the time that breaks your train of thought that is just really frustrating. And I think that's what's exciting about being able to be purely offline: it fixes that really huge problem of that really small percentage of time that it happens. 00:22:10 - Speaker 2: Very well said. Now with that, I'll do a little content warning. I think we're about to get a lot more technical than we ever have on this podcast before, but I think this is a topic that deserves it. So I'd love to, and me especially as someone who's not deep in the technology and just observing from the sidelines, I'd love to hear what's the high-level architecture, what are all the pieces that fit together here that make this sync when you're on the internet and let you keep working even when you're not? What is it that makes all that work? 00:22:41 - Speaker 1: Yeah, I'll give a really quick overview and then we can dive into some of the specific pieces.
So to start with the logical architecture, the basic model is a user has a bag of edits, so you might have 1000 edits or a million edits, where each edit is something like I put this card here or I edited this picture, and over time the user is accumulating all these edits, and the job of the sync system is to ensure that eventually all of the user's devices have the same bag of edits. And it passes those edits around as opaque blobs, and there are different flavors of blobs we'll talk about. Basically there's a bunch of bits of binary data that all devices need to have the same, and then it's the device's responsibility to make sense of those edits in a consistent way. So given the same bag, each device needs to come up with the same view of the Muse corpus of that individual user: what boards are alive and what cards are on them and so forth. And then briefly, in terms of the physical architecture, there's a small server that's running on Heroku, data is stored in Postgres and S3, and it's implemented in Go, and again the server is just shuffling binary blobs around basically. And then there are different front ends, different clients, that implement this synchronization protocol and present a Muse corpus model up to the application developers. So the most important of these is the Swift client. We also have a JavaScript client, and both of these are backed by SQLite databases locally. 00:24:09 - Speaker 3: Yeah, and I think what's really interesting about this architecture is that we actually maintain the entire bag of edits. Edits only get added into the bag, but they never really get removed. And so the current state of the application is whatever the most recent edit is. So if I make a card bigger on my Mac, and then I go to my iPad and I make that same card smaller, and then we synchronize those two things, well, at the end of the day, either the card is going to be smaller on both devices, or the card is gonna be bigger on both devices, and we just pick the most recent one. And that strategy of just picking the most recent edit actually makes conflicts essentially invisible, or so small and so easy to fix that the user can just go, oh, I want that big, let me make it big again. It's really easy to solve for the user without showing one of those really annoying, hello, there's been an edit, there's a conflict here, would you like to choose copy A or copy B? Just being able to automatically resolve those is more than half of the magic, I think, of this architecture. 00:25:13 - Speaker 2: I also note this is a place where I think the Muse domain, if you want to call it that, of the cards-on-a-canvas model works pretty well with this sort of automated resolution, which is: if you moved a card in one direction on one device and you moved it somewhere else on the other device, it's not really a huge deal which one it picks, as long as it all kind of flows pretty logically. By comparison, text editing, so what you have in Google Docs, and certainly I know the Automerge team and the Ink & Switch team have done a huge amount of work on this, is a much harder space where you can get into very illogical states if you merge your edits together strangely. But I think a card move, a card resize, add, remove, even some amount of reparenting within the boards, those things are just pretty natural to merge together, I think.
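To make the "bag of edits" and pick-the-most-recent-edit idea a bit more concrete, here is a minimal sketch in Swift, since that is the main client language mentioned above. The type names, fields, and the plain integer timestamp are illustrative assumptions for this example, not Muse's actual schema.

```swift
import Foundation

// One edit: "set this attribute of this entity to this value at this time".
struct Edit {
    let entityID: UUID        // e.g. a card
    let attribute: String     // e.g. "x", "width", "boardID"
    let value: Double
    let timestamp: UInt64     // ordering key; in practice a hybrid logical clock, not raw wall time
}

// Each device holds the same ever-growing bag of edits; the current state is
// derived by keeping, per (entity, attribute), the edit with the largest timestamp.
func materialize(_ bag: [Edit]) -> [UUID: [String: Double]] {
    var latest: [UUID: [String: (UInt64, Double)]] = [:]
    for edit in bag {
        let current = latest[edit.entityID]?[edit.attribute]
        if current == nil || edit.timestamp > current!.0 {
            latest[edit.entityID, default: [:]][edit.attribute] = (edit.timestamp, edit.value)
        }
    }
    // Drop the timestamps, keep the winning values.
    return latest.mapValues { $0.mapValues { $0.1 } }
}

// Because the result depends only on the contents of the bag, two devices that
// end up with the same set of edits derive the same boards and cards, no matter
// what order the edits arrived in.
```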
00:26:02 - Speaker 3: Yeah, I think so, and I think even with the new text block feature in Muse, we end up slicing what would be a really long-form text document into much smaller sentences or paragraphs. And so then with text edits, even though we're only picking kind of the most recent one to win, we're picking that most recent at the granularity of the sentence or of the paragraph, and so conflicts between documents for us are larger than they would be for Automerge or for Google Docs, but are small enough that it's still ignorable for the user and easily solvable by the user. 00:26:42 - Speaker 2: Which incidentally I think is a trick we sort of borrowed from Figma, at least on the tech side, which is: in Figma, and also in Muse, if one person edits, you know, the red car and someone else edits the blue car, you don't get the red-blue car, you just get one or the other, and it turns out for this specific domain, that's just fine. 00:27:03 - Speaker 3: Yeah, I think we kind of lucked out having such a visual model, and we don't need to worry about intricacies of multi-user live document editing. 00:27:13 - Speaker 1: Yeah, I would point to both Figma and Actual Budget as two very important inspirations for our work. I would say those are two of the products that were most at the forefront of this space, and thought about it most similarly to how we did. And notably they, as well as us, sort of independently arrived at this notion of basically having a bunch of last-write-wins registers as the quote unquote CRDTs. So these are very, very small, simple, almost degenerate CRDTs, where the CRDT itself is just representing one attribute, for example the X coordinate of a given card. But this is an important insight of the industrial application of this technology, if you will: that's a good trade-off to make. It basically covers all the practical cases, but it's still very simple to implement, relatively speaking. 00:28:03 - Speaker 2: I'll also mention briefly Actual Budget, a great and basically made-by-one-person app that was recently open sourced, so you can actually go and read the custom CRDT work there and maybe learn a thing or two that you might want to borrow. 00:28:17 - Speaker 3: I think one of the really interesting problems for me about the CRDT was deciding which edit is the most recent, because it just makes logical sense to say, oh well, it's 3 o'clock, and when I make this edit at 3 o'clock and I make a different edit at 3:02, obviously the one at 3:02 wins. But computer clocks aren't necessarily trustworthy. Sometimes I have games on my iPad that reset every day, and so I'll set my clock forward or set my clock backward. Or if I'm on an airplane and there's time zones, and there's all kinds of reasons the clock might jump forward or jump backward or be set to different times. And so using a fancier clock, that incorporates a wall clock but also includes a counter and some other kind of bits of information, lets us still order edits one after the other, even if one of those clocks on the wall is a year ahead of schedule compared to the other clocks that are being synchronized. I don't know how in depth we want to get on that, but it's called a hybrid logical clock.
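A minimal sketch of a hybrid logical clock along the lines Wulf describes: a wall-clock component plus a counter, so edits stay ordered even when a device's clock stalls or jumps backwards. This is a generic, textbook-style HLC written in Swift for illustration, not Muse's actual implementation; the node ID tiebreaker is an added assumption.

```swift
import Foundation

// A hybrid logical clock: mostly wall-clock time, plus a counter so that
// ordering still works when clocks are skewed or jump backwards.
struct HLC: Hashable, Comparable {
    var wallMillis: UInt64
    var counter: UInt32
    var nodeID: UInt16   // tiebreaker so two devices never produce equal stamps

    static func < (a: HLC, b: HLC) -> Bool {
        if a.wallMillis != b.wallMillis { return a.wallMillis < b.wallMillis }
        if a.counter != b.counter { return a.counter < b.counter }
        return a.nodeID < b.nodeID
    }

    // Called when this device makes a local edit.
    mutating func tick(now: UInt64) {
        if now > wallMillis {
            wallMillis = now        // clock moved forward normally
            counter = 0
        } else {
            counter += 1            // clock stalled or went backwards; keep ordering anyway
        }
    }

    // Called when an edit stamped by another device arrives.
    mutating func receive(_ remote: HLC, now: UInt64) {
        let maxWall = max(wallMillis, remote.wallMillis, now)
        if maxWall == wallMillis && maxWall == remote.wallMillis {
            counter = max(counter, remote.counter) + 1
        } else if maxWall == wallMillis {
            counter += 1
        } else if maxWall == remote.wallMillis {
            counter = remote.counter + 1
        } else {
            counter = 0
        }
        wallMillis = maxWall
    }
}
```

The stamp is a small, fixed-size value, which is the trade-off discussed next: not the perfect causal ordering of a vector clock, but cheap to store on every edit and good enough to pick a consistent winner.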
00:29:23 - Speaker 1: Yeah, I think this is another great example, along with choosing very simple CRDT structures, of industrial-style architecture, where you could go for a full-blown vector clock, and that gives you perfect logical ordering and a bunch of other nice properties, but it's quite large and it's expensive to compute and so on. Whereas if you choose a simpler fixed-size clock, that can give you all the benefits that you need in practice, it can be easier to implement, it can be faster to run, and so on.

00:29:52 - Speaker 3: Like everything in life, it's all about trade-offs, and you can get accuracy, but it costs more, or you can get a little bit less accuracy, and it costs a lot less, and for us that was the better trade-off: to have a fixed-size clock that gives us enough of the ordering to make sense, but might not be exactly perfect ordering.

00:30:13 - Speaker 1: And we've been alluding to trade-offs and different options, so maybe it's time to address it head on in terms of the other options that we considered and why they weren't necessarily as good of a fit for us. So I would include in this list both iCloud and what you might call file storage.

00:30:27 - Speaker 2: It might be like CloudKit or something, but yeah, they have one that's more of a blob, kind of, you know, save files — what people will think of with their sort of iCloud Drive, almost kind of a Dropbox thing — and then they also have CloudKit, which I feel is more like a key-value store. But in theory, those two things together would give you the things you need for an application like ours.

00:30:47 - Speaker 1: Yeah, so there's iCloud as an option, Firebase, Automerge, CouchDB maybe, and then there's the roll-your-own route, which we ended up doing.

00:30:57 - Speaker 2: Yeah, the general wisdom is, you know, you don't write your own if there's a good off-the-shelf solution. You named some there that are commercial, some are built into the operating system we're using, some are indeed research projects that we've been a part of. What ultimately caused us to follow our own path on that?

00:31:15 - Speaker 1: Yeah, so there was a set of issues that tended to come up with all of these, and it was more or less in different cases, but I think it'd be useful to go through the challenges that we ran into and talk about how they emerged in different ones of these other solutions. So one simple one, it would seem, is just correctness slash it works. And the simple truth is, a lot of the syncing systems out there just do not work reliably. Hate to pick on Apple and iCloud, but honestly, they were the toughest in this respect, where sometimes you would, you know, submit data to be synchronized and it just wouldn't show up, and especially with opaque closed-source solutions and third-party solutions, stuff would not show up and you couldn't do anything about it — like you couldn't see what went wrong or when it might show up or if there was some error. And then bizarrely, sometimes the stuff would pop up like 5 or 10 minutes later. It's like, oh, it actually sort of worked, but it's off by, you know, several zeros in terms of performance. So that was a really basic one: the syncing system has to be absolutely rock solid, and it kind of goes back to the discussion Wulf had around being offline sometimes. If there's any chance that the sync system is not reliable, then that becomes a loop in the user's mind: am I gonna lose this data? Is something not showing up because the sync system is broken?
Our experience has been that if there's any lack of reliability or lack of visibility into the synchronization layer, it really bubbles up into the user's mind in a destructive way, so we want it to be absolutely rock solid. Another important thing for us was supporting the right programming model. So we've been working on Muse for several years now; we have a pretty good idea of what capabilities the system needed to have, and I think there were 4 key pillars. One is the obvious transactional data — it's things like what are the cards and where are they on the board. This is data that you would traditionally put in a SQL database. Another thing that's important to have is blob support, due to a lot of these binary assets in Muse, and we wanted those to be in the same system and not have to have another separate thing that's out of band, and they need to be able to relate to each other correctly.

00:33:09 - Speaker 2: This is something where a 10 megabyte PDF or a 50 megabyte video just has very different data storage needs than the tiny little record that says this card is at this X and Y position and belongs to this board.

00:33:23 - Speaker 1: Right, very different, and in fact you're gonna want to manage the networking differently. Basically you want to prioritize the transactional data and then load later, or even lazily, the binary data, which is much larger. Yeah, so there was transactional data, blob data, and then real-time data slash ephemeral data. So this is things like you're in the middle of an ink stroke or you're in the middle of moving a card around, and this is very important to convey if you're gonna have real time and especially multi-user collaboration. But again, you can't treat this the same as certainly blob data, but even transactional data, because if you store every position a card ever was under your finger for all time, you're gonna blow up the database. So you need those 3 different types of data, and they all need to be integrated very closely. So for example, when you're moving a card around, that's real time, but basically the last frame becomes a bit of transactional data, and those two systems need to be so aligned with each other that it's as simple as changing a flag. If you're going on a second or a third band for real-time data and need to totally change course for saving the transactional data, it's not gonna be good. It was quite rare — I don't know if we found any systems that support all three of these coherently.

00:34:33 - Speaker 2: The ephemeral data element I found especially interesting, because you do really want that real-timey feeling of someone wiggles a card with their finger and you can see the wiggling on the other side. That just makes the thing feel live and just responsive in a way that it doesn't otherwise. But yeah, at the same time, you also don't want hundreds of thousands of records of the card moved 3 pixels right, and then 3 pixels left. And one thing I thought was fascinating — correct me if I misunderstood this — is that because the client even knows how many other devices are actively connected to the session, it can choose to not even send that ephemeral data at all. It doesn't even need to tap the network. If no one else is listening, why bother sending ephemeral data? All you need is the transactions over time.

00:35:21 - Speaker 1: Right. This is actually a good example of how there's a lot of cases where different parts of the system need to know, or at least benefit from knowing, about other parts.
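A small sketch, with invented names, of the three kinds of data the hosts describe — durable transactional edits, large binary blobs, and ephemeral real-time updates — and the "don't even send it if nobody is listening" behavior mentioned for ephemeral data. This is an illustration of the idea, not Muse's actual types.

```swift
import Foundation

// Hypothetical modelling of the three kinds of data discussed above.
enum SyncPayload {
    // Small, durable edits: "this card is at x,y on this board".
    case transactional(edit: Data)
    // Large binary assets (PDFs, videos); fetched lazily, lower priority.
    case blob(id: UUID, bytes: Data)
    // In-progress ink strokes or card drags; never persisted.
    case ephemeral(update: Data)
}

func shouldSend(_ payload: SyncPayload, connectedPeers: Int) -> Bool {
    switch payload {
    case .transactional, .blob:
        return true
    case .ephemeral:
        // Skip the network entirely if nobody else is in the session.
        return connectedPeers > 0
    }
}
```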
So it becomes costly, or maybe just an outright bad idea, to separate them, especially as we're still figuring out as an industry how they should work. I think there's actually quite a bit of benefit to them being integrated. Another one that we could talk about eventually is prioritizing which data you download and upload: you might choose to first download blobs that are closer to you in board space, like it's in your current room or it's in adjacent rooms, and then later you can download other blobs. So that's not something you could do if the networking layer had no notion of the application. It actually brings us to another big challenge we saw with existing systems, which is multiplexing. So I'll use the example of Automerge here, and this is something we've seen with a lot of research-oriented CRDT work: it's very focused on a single document. So you have a document that represents, you know, say a board or whatever, and a lot of the work is around how do you synchronize that document, how do you maintain correctness, even how do you maintain performance when you're synchronizing that document across devices. Well, the challenge with Muse, with our model, is you might have, you know, easily 1000, but, you know, potentially tens of thousands up to millions of documents in the system, corresponding to all your individual cards and so on. And so if you do anything that's order N in the number of documents, it's already game over. It needs to be the case that — here's a specific challenge that I had in mind for the system: you have a corpus, let's say it's a million edits across 10,000 documents or something like that, and it's 100 megabytes. I wanted the time to synchronize a new device — that is, to download and persist that entire corpus — to be roughly proportional to the time it would take to just physically download that data. So if you're on a 10 megabyte per second connection and it's 100 megabytes, maybe that's 10 seconds. But the only way to do that is to do a massive amount of multiplexing, coalescing, batching, compression, so that you're taking all these edits and you're squeezing them into a small number of network messages and compressing them and so on. So you're sort of pivoting the data so it's better suited to the network transfer and the persistence layer. And again, you need to be considering all these things at once — how does the application model relate to the logical model, relate to the networking protocol, relate to the compression strategy — and we weren't able to find systems that correctly handle that, especially when you're talking about thousands or millions of documents being synchronized in parallel. And the last thing I'll mention is what I call industrial design trade-offs. We've been alluding to it in the podcast so far, but things like simplicity, understandability, control — these are incredibly important when you're developing an industrial application, and you tend not to get these with early-stage open source projects and third-party solutions and third-party services. You just don't have a lot of control, and it was too likely to my mind that we would just be stuck in the cold at some point where the system didn't work or it didn't have some capability that we wanted, and then you're up a dead-end road, and so what do you do? Whereas this is a very small, simple system.
You could print out the entirety of the whole system and it would probably be a few pages — well, it's a few thousand lines of code, it's not a lot of code, and it's across a couple of code bases — and so we can load the whole thing into our heads and therefore understand it and make changes as needed to advance the business.

00:38:38 - Speaker 3: Yeah, I think that last point might honestly be the most important, at least for me. I think having a very simple mental model of what is happening in sync makes things so much easier to reason about. It makes fixing bugs so much easier. It makes preventing bugs so much easier. We've been talking about how sync is hard and how almost nobody gets it right, and that's because it's complicated. There's a bajillion little bitty edge cases of if this happens, but then this happens after this happens, and then this happens — what do we do? And so making something really, really simple conceptually, I think, was really important for Muse's sync stability and performance at the end of the day.

00:39:21 - Speaker 2: I'm an old-school web developer, so when I think of clients and servers, I think of REST APIs, and you maybe make kind of a versioned API spec, and then the back-end developer writes the endpoint to be called and the front-end developer figures out how to call that with the right parameters and what to do with the response. What's the diff between a world that looks like that and how the Muse sync service is implemented?

00:39:50 - Speaker 1: Yeah, well, a couple things. At the network layer, it's not wildly different. We do use protocol buffers and binary encoding, which by the way I think would actually be the better thing for a lot of services to do, and I think services are increasingly moving in that direction. But that core model of — you have, we call them endpoints, you construct messages that you send to the endpoint and the server responds with a response message — that basic model is pretty similar, even if it's implemented in a way that's designed to be more efficient, maintainable, and so on than a traditional REST server. But a big difference between a traditional REST application and the Muse sync layer is that there are two completely separate layers, what we call the network layer and the app layer. So the network layer is responsible for shuffling these binary blobs around — the transactional data, the ephemeral data, and the big binary assets — and the server knows absolutely nothing about what's inside of them, by design. Both because we don't want to have to reimplement all of the Muse logic about boards and cards or whatever in the server, and also because we anticipate eventually end-to-end encrypting this, and at that point, of course, the server can't know anything about it, it's not gonna be possible. So that's the networking layer, and then if you sort of unwrap that you get the application layer, and that's the layer that knows about boards and cards and edits and so on. And so it is different, and I would say it's a challenge to think about these two different layers — there's actually some additional pivots that go on in between them — versus the traditional model where you would, like, POST to v1/boards directly and you'd put in the parameters of the board and then the server would write that to the boards table in the database. There's a few different layers that happen with this system.
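A sketch of the two-layer split just described: the network layer only ever sees opaque, versioned blobs, while the application layer on the client knows how to decode them into boards and cards. All names are hypothetical, and JSON is used here purely for brevity — the transcript says the real system uses protocol buffers.

```swift
import Foundation

// Network layer: the server only ever sees an opaque, versioned blob.
struct NetworkEnvelope: Codable {
    var protocolVersion: Int
    var payload: Data   // meaning unknown to the server, by design
}

// Application layer: only the clients know how to turn those bytes
// into boards, cards, and edits.
struct BoardEdit: Codable {
    var boardID: UUID
    var title: String?
    var width: Double?
    var height: Double?
}

// A client wraps an app-level edit for transport...
func wrap(_ edit: BoardEdit) throws -> NetworkEnvelope {
    NetworkEnvelope(protocolVersion: 1,
                    payload: try JSONEncoder().encode(edit))
}

// ...and unwraps it on the receiving side. The server never runs this code.
func unwrap(_ envelope: NetworkEnvelope) throws -> BoardEdit {
    try JSONDecoder().decode(BoardEdit.self, from: envelope.payload)
}
```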
00:41:30 - Speaker 2: So if we want to add a new card type, for example, or add a new piece of data to an existing card, that's purely in the application layer — the back end doesn't know anything about that, or no changes are needed on the back end?

00:41:44 - Speaker 1: Yeah, no changes are needed. In fact, one of the things I'm most proud about with this project is we basically haven't changed the server since last year, December, and we've been, you know, rigorously iterating on the app — adding features, changing features, improving a bunch of stuff — and the server is basically the same thing that was up 4 months ago, just chugging along. And that's a benefit, it's a huge benefit, I think, of this model of separating out the application model and the network model, because the network model is eventually gonna move very slowly — you basically figure that out once and it can run forever — and the application model has more churn. But then when you need to make those changes, you only need to make them in the client or the clients: maybe you update the application schema so that current and future clients can understand that, and then you just start including those data in the bag of edits.

00:42:26 - Speaker 3: Yeah, I think one thing that's really nice is that those protocol buffers that you were talking about are type safe and kind of statically defined, so that way when we're sending that message over the wire, we know 100% we're sending exactly the correct messages no matter what, and that guarantee is made at compile time. Which I think is really nice, because it means that a lot of bugs that could otherwise easily sneak in if we were using kind of a generic JSON framework, we're gonna find out about when we hit the "please build Muse" button, instead of the "I'm running Muse and I randomly hit a bug" button. And that kind of confidence early on in the build process has been really important for us as well, to find and fix issues before they even arise.

00:43:11 - Speaker 1: Yeah, to my mind this is the correct way to build network clients. You have a schema and it generates type-safe code in whatever language you want to use. There are just enormous benefits to that approach. I think we're seeing it with this on Muse, and again, I think more systems, even more traditional B2B-type systems, are moving in this direction. By the way, everyone always made fun of Amazon's API back in the day — it had this crazy XML thing where there's a zillion endpoints. I actually think they were closer to the truth than the traditional, you know, nice REST CRUD stuff, because their clients are all auto-generated, and sure enough they have like literally a zillion endpoints, but everything gets generated for free into a bunch of different languages. Anyways, one challenge that we do have with this approach is, you know, one does not simply write a schema when you have these multiple layers. So again, if you look at a traditional application, you'd have a protocol buffer definition of, say, a board — a Board message in protobufs — and that would have fields like title and width and height or whatever. And when you want to update the board, you would populate a memory object and you would encode this to a protocol buffer and you would send this off to the server. Well, it's not quite that simple for us, because we have this model of the small registers that we call atoms. So an atom is the entity, say a board; the attribute, say the title; the value, say "Muse podcast"; and the timestamp.
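A sketch of the atom shape just described — one tiny last-write-wins register per attribute, as an entity / attribute / value / timestamp tuple — reusing the hypothetical HLCTimestamp type from the clock sketch above. Field names are illustrative, not Muse's schema.

```swift
import Foundation

// One atom: a single timestamped attribute of a single entity.
struct Atom {
    var entity: UUID            // e.g. a particular board
    var attribute: String       // e.g. "title"
    var value: Data             // e.g. the bytes of "Muse podcast"
    var timestamp: HLCTimestamp // hybrid logical clock, as sketched earlier
}

// The whole corpus is just a bag of these; the current title of a board
// is whichever "title" atom for that entity has the latest timestamp.
func currentValue(of attribute: String, for entity: UUID, in bag: [Atom]) -> Data? {
    bag.filter { $0.entity == entity && $0.attribute == attribute }
       .max { $0.timestamp < $1.timestamp }?
       .value
}
```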
And your bag of edits is comprised of all these different atoms, but the problem is, how do you encode both how you're gonna send an atom — which is as those tuples — as well as what a logical board is, you know, what the collection of atoms is meant to look like: it's gonna have a title and a width and height and so on. So that's been honestly a pretty big challenge for us, where it doesn't fit into any of the standard schema definition approaches — certainly not the regular protocol buffer schema, which again we use for the network and for encoding the messages that are wrapped up in the network — but you need a separate layer that encodes the application model, as we call it, you know, what is a board, what is a card, what attributes they have and so on.

00:45:06 - Speaker 2: And Wulf, if I recall you have a blog post about these atomic attributes. I'll link that in the show notes for folks.

00:45:14 - Speaker 3: Yeah, so unfortunately no relation between my name and atom. It's A-T-O-M.

00:45:18 - Speaker 2: Yes, we have two Adams on this podcast. The A-D-A-M is different from the A-T-O-M.

00:45:25 - Speaker 1: Yeah. A big inspiration on this, by the way, is Datomic — I don't know if we've mentioned that yet on this podcast, but Datomic is a database system developed by Rich Hickey and team, who is also the creator of Clojure. And it uses this model where, in contrast to the traditional relational model where you have tables and columns and rows, the Datomic model is more like a bag of time-stamped attributes, where you have an entity, an attribute, a value and a timestamp. And it can be more challenging to work with that model, but it's infinitely flexible — you can sort of put whatever model you want on top of that — and it works well for creating a generic database system. You know, you couldn't have a generic Postgres, for example, that could run any application; you need to first create tables that correspond to the actual application you're trying to build, whereas with an atom-oriented database, you basically have one huge table, which is atoms. So it's useful, again, for having this slower-moving, more stable synchronization layer that handles moving data around, and then the application that you build on top of that can move quickly.

00:46:27 - Speaker 3: Yeah, and like we talked about earlier, it's so much simpler to reason about. All of the problems of my iPad is on version 1, my Mac is on version 2, and my iPad Mini is on version 3 — they're sending data back and forth, and at the end of the day, every single database on all three of those clients is gonna look the same, even though they have completely different logic, maybe different features. But all the simplicity of that data store makes it much, much easier to reason about as the application gets upgraded or as two different versions of the client are synchronizing back and forth.

00:47:03 - Speaker 2: How does that work in practice? So I can certainly imagine something where all of the data is sent to every client, but a V1 client just doesn't know what to do with this new field, so it just quietly stores it and doesn't worry about it. But in practice, what happens if I do have pretty divergent versions between several different clients?

00:47:23 - Speaker 1: Recall some podcasts ago, we suggested that everything you emit should have a UUID and a version.
Well, sure enough, that's advice that we take to heart with this design, where all the entities, all the messages, everything has a UUID, and also everything is versioned — so there are several layers of versioning. The network protocol is versioned and the application schema is versioned. So by being sure to thread those versions around everywhere, the application can then make decisions about what it's gonna do, and Wulf can speak to what the application actually chooses to do here.

00:47:54 - Speaker 3: Yeah, exactly. If we're sending maybe a new version of a piece of data on the network layer that device A just doesn't physically know how to decode off the network, then it'll just save it off to the side until it eventually upgrades, and then it'll actually read it once it knows what that version is.

00:48:11 - Speaker 2: So is that somewhat like — can I make a crude metaphor here — someone emails me a Word doc from a later version that I don't have yet, I can save that on my hard drive, and later on when I get the new version, I'll be able to open the file?

00:48:25 - Speaker 3: Yeah, exactly right. It's very similar to that. And then I think there's a different kind of upgrade where we're actually talking the same language, but I don't know what one of the words is that you said. So let's say we add a completely new content type to Muse called the coffee cup, and everyone can put coffee cups on their boards, right? That coffee cup is gonna have a new type ID attribute that kind of labels it as such. New clients are gonna know that type 75 means coffee cup, and old clients are gonna look at type 75 and say, oh, I don't know about type 75, so I'll just ignore it. And so the data itself is transferred over the network schema, and the client understands kind of the app schema and those versions, but it might not understand the physical data that arrives in the value of that atom. And in that case, it can happily ignore it, and it will eventually understand what it means once the client upgrades. And so there's a number of different kind of safety layers when we version something. If we're unable to even understand kind of the language that's being spoken, it'll be saved off to the side. If we do understand the language that's spoken, but we don't understand the word, we can just kind of safely ignore it, and then once we are upgraded, we can safely understand both the language and the word.

00:49:47 - Speaker 1: Yeah, so maybe to recap our discussion of the low-level synchronization protocol before we go on to the developer experience and user experience, it might be useful to walk through a sort of example. So suppose you are doing a thinking session in your nice comfy chair on your iPad, you're offline, and you're making a few dozen edits to a series of different boards and cards in your corpus. Those are going to write new atoms in your database, and those are essentially gonna be flagged as not yet synchronized, and then when you go online, those atoms — assuming it's some plausible number, you know, maybe less than 1000 or so — those are all gonna be combined into a single network message. So this is that multiplexing efficiency, where you certainly don't need to check every document in your corpus, and you don't even need to do one network write per edit or even one network write per document. You can just bundle up all of your recent changes into a single protocol buffer message and potentially compress it all with gzip, and then you send that out to the server.
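A minimal sketch of the client-side bundling step just described: gather everything flagged as not yet synchronized, encode it as a single message, compress it, and send one network write that covers hundreds of edits. JSON and zlib are stand-ins here so the example stays self-contained — the transcript says the real system uses protocol buffers and gzip — and all names are hypothetical.

```swift
import Foundation

// One edit's worth of data, flattened for transport (hypothetical shape).
struct EncodedAtom: Codable {
    var entity: UUID
    var attribute: String
    var value: Data
    var wallMillis: Int64
    var counter: Int32
}

// A "pack": many edits bundled into a single opaque blob for the server.
struct Pack: Codable {
    var atoms: [EncodedAtom]
}

// Gather pending atoms and turn them into one compressed blob,
// so one network write covers the whole offline session.
func makePack(pending: [EncodedAtom]) throws -> Data {
    let encoded = try JSONEncoder().encode(Pack(atoms: pending))
    return try (encoded as NSData).compressed(using: .zlib) as Data
}
```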
The server doesn't know anything about these edits — you know, it's just one big binary packet. The server persists that, and then it sends a response message back to the client and says, OK, I've successfully received this, you can now treat this as synchronized, and the server will take responsibility for broadcasting it out to all the clients. And then clients, as they come online — if they're not already online — will immediately receive this new packet of data, called a pack. And then they can decompress and unpack that into its constituent atoms, and once they've processed those, tell the server, I have successfully saved this on my device. And in the background the server is essentially maintaining a high watermark of, for each device that's registered for this user, what's the latest pack or block it successfully persisted, and that way, as devices come on and offline, the server knows what new data it needs to send to each individual device. And that works both for essentially conveying these updates in near real time as they happen, as well as for doing big bulk downloads if a device has been offline for a long time. And I know we've mentioned it a few times, but to my mind this multiplexing and batching and compression is so important — it's the only thing that makes this even remotely feasible with the Muse data model of having a huge number of objects. And then I think this leads pretty naturally to a discussion of the developer experience. So we've talked about this sort of sync framework, and that essentially is gonna present a developer interface up to the application developer. So Wulf, maybe you can speak a little bit to that.

00:52:19 - Speaker 3: Yeah, we've talked some about the simplicity that we're aiming for, just conceptually, in how synchronization works. I think it's equally important for this to be extremely simple for the end developer to use as we're building new features in Muse or as we're, you know, changing the user interface around. That developer, whether it's me or Julia or anybody else working on Muse, doesn't need to be able to think in terms of sync at all. We just need to be able to write the application — that's the ideal world — it needs to be very, very simple. And so keeping that developer experience simple was a really big piece of designing what sync looks like inside of Swift for iOS. Since we had been built on Core Data beforehand, a lot of that developer interaction ends up looking extremely similar to Core Data. And so we build our models in Swift — it's a Swift class, we have all of the different attributes, where there's position, and size, and related document, and things like that — and we just stick what's called in Swift a property wrapper, it's just a small little attribute in front of that property, that says: oh by the way, this thing belongs in the sync database, this property is gonna be merged. And that one little piece of code, that one little kind of word in the program, is what fires up the sync database and the sync engine behind it to make all of this stuff work. And that has been really important both for conceptually building new features, but also for migrating from Core Data to sync, because the code that Core Data looks like beforehand and the code that sync looks like now are actually extremely similar. Early on in the development process — our very first kind of internal beta, internal alpha, pre-alpha, whatever version you want to call it.
Very early on in the process, we actually ran both Core Data and the sync engine side by side. So some of the data in Muse would load from Core Data and other bits and pieces would load from sync, but because both of those looked very, very similar from the developer's perspective, from kind of how we use both of those frameworks, it allowed us to actually slowly migrate Muse over from one to the other, by replacing bits of Core Data with bits of sync, and then this little bit of Core Data with this little bit of sync. I mean, there's, you know, thousands and tens of thousands of lines of custom logic to make Muse Muse. And so it was really important to keep all of that logic running, and to keep all of that logic separate from the physical data that logic was manipulating. And so making those appear similar to the developer let us do that. It let us keep all of that logic essentially unchanged in Muse while we kind of swapped out the foundation from underneath it.

00:55:15 - Speaker 2: And I remember when you implemented the first pass at this persistence library — and I forget if Julia was maybe away on holiday or maybe she was just working on another project — but then she came in to use your kind of first working version and start working on sort of porting it across, and she had a very positive reaction on the developer experience. You know, you are sort of developing the persistence layer, so you naturally like it, because it's built the way you like it and you're thinking about the internals; she's coming at it more from the perspective of a consumer of it, or a client, or a user, and the developer experience. And I found that to be promising, because, I mean, she, like most iOS developers I think, has spent many, many years of her career using Core Data, which is a long-developed and well-thought-through and very production-ready persistence layer that has a lot of edge cases covered and is well documented and all that sort of thing. So in a way it's a pretty high bar to come in and replace something like that and have someone just have a positive reaction to using it as a developer.

00:56:21 - Speaker 3: Yeah, I was so happy when she said that she kind of enjoyed using it and kind of understood how it worked, because of course every developer likes their own code, but when a developer can use and is comfortable with another developer's code, that's really, really important. And that was absolutely one of my goals: to make sure that it was simple for Julia to use, and simple for any other developer that comes onto the Muse team that doesn't have background in Muse and in our code base — for them to be able to jump in quickly and easily and understand what's going on. That was a really important piece of how this framework was built.

00:56:57 - Speaker 1: Yeah, I think this is a really important accomplishment, and Wulf is maybe even underselling himself a little bit, so I want to comment on a few things. One is, while there's just this simple annotation of — I think it's @merged, is that it, Wulf? Yeah, that's right. — sometimes when you see that, that kind of annotation instructs the framework to do something additional on the side, on top of the existing standard relational database, you know, like basically do your best, try to synchronize this data out of band with some third-party service or whatever. But this in fact totally changes how the data is persisted and managed in the system, so it's sort of like a whole new persistence stack for the application.
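A very rough sketch of the developer experience being described. The @Merged wrapper below is hypothetical — only the annotation's name comes from the conversation — and the body only hints at what the real framework would do: register the attribute with the sync layer so writes produce atoms and get flagged for synchronization, while the model stays an ordinary Swift class.

```swift
import Foundation

// Hypothetical property wrapper marking an attribute as synchronized.
@propertyWrapper
struct Merged<Value: Codable> {
    private let key: String
    private var storage: Value

    init(wrappedValue: Value, _ key: String) {
        self.key = key
        self.storage = wrappedValue
    }

    var wrappedValue: Value {
        get { storage }
        set {
            storage = newValue
            // In the real framework this would write an atom for `key`
            // with a fresh clock value and flag it as not yet synchronized.
        }
    }
}

// The model code stays an ordinary class; the annotation is the only hint
// that these attributes live in the sync database.
final class CardModel {
    @Merged("position") var position: CGPoint = .zero
    @Merged("size")     var size: CGSize = CGSize(width: 300, height: 200)
}
```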
And I think that's important, because we constantly see that the only way you get good results with a sync system, especially when you're talking about offline versus online and partially online, is that it has to be the one system that you use all the time. You can't have some second path, that's like the offline cache or offline mode, that never works. It needs to be the one, you know, true data synchronization and persistence layer. So I think that's really important. There's another subtle piece here, which is the constraints that you have with the industrial setup. So a lot of the research on CRDTs and synchronization basically assumes that you have one or a small number of documents in memory, and that if you're going to be querying these documents or managing synchronization of these documents, you have access to the full data set in memory. But that's not practical for our use case, both because of the total size of memory and the number of documents we'd be talking about. So a lot of critical operations need to take place directly against the database, or the data layer has to smartly manage bringing stuff in and out of memory. It's not like we have the whole Muse corpus up in memory at any one time — the system has to smartly manage what gets pulled up from disk into memory and what gets flushed back, and then that…
00:00:00 - Speaker 1: On the academic side, you're very limited by — your work has to fit in the box of, like, a peer-reviewed quantifiable research paper, and in the commercial world, it needs to be commercializable in the next, you know, probably a year or two, maybe, maybe 3, but all the good ideas don't fit in one of those two boxes.

00:00:27 - Speaker 2: Hello and welcome to Metamuse. Muse is the software for your iPad that helps you with ideation and problem solving. But this podcast isn't about Muse the product, it's about Muse the company, the small team behind it. I'm Adam Wiggins. I'm here today with my colleague, Mark McGranaghan. Mark, you reading anything good lately?

00:00:43 - Speaker 1: Yeah, just last night, I actually reread an ultra classic, You and Your Research by Hamming, who's a famous scientist, and it's about how you build a really impactful research program over the course of your career. And I was inspired to reread it because it's one of the chapters in the classic book The Art of Doing Science and Engineering, which is about to be republished by Stripe Press.

00:01:05 - Speaker 2: Stripe Press is really on a tear these days.

00:01:09 - Speaker 1: Yeah, for sure, highly recommended.

00:01:10 - Speaker 2: And also perhaps relevant to our topic today — and I'm happy to say that our topic today was requested by a listener. So Fetta Sanchez wrote in to ask us, how do you get into the HCI slash interaction slash new gestures research field? So probably we need to start at the top there. Maybe you want to tell us what HCI is.

00:01:32 - Speaker 1: Sure, so HCI stands for human-computer interaction, and this is things like the way humans interface with computers, and also the way they use computers as a tool in their lives — how they get things done, how they learn to use them, how they accomplish their goals, things like that.

00:01:48 - Speaker 2: And I did a couple of years of a computer science undergraduate degree that I did not finish. And during that time, I really remember everything in the curriculum was algorithms, databases, compilers, maybe some network type of things. And I only learned about HCI as a field a couple of years ago. And to me it was a bit of a revelation, because this concept of how the user interacts with the computer, and that being a whole field of study — well, I was very excited about it, but it stood for me in very stark contrast to the systems- and algorithms-oriented computer science that I sort of knew from my brief time in academia.

00:02:29 - Speaker 1: Yeah, likewise, it was pretty new to me, and it's a whole huge world, you know — there's conferences and papers and many professors who've dedicated their entire careers to it.

00:02:37 - Speaker 2: It was fun for me to dive in and learn about that world a little bit, and you and I were both part of this independent research lab called Ink & Switch. Uh, and through that process, we began publishing and then made some connections with folks in this field, and then you and I went to a conference called CHI last year that I think really kind of opened the door for us there. Maybe one thing that would be worth doing is, um, categorizing here a little bit. There's human-computer interaction as a branch of computer science in the academic tradition, that is to say mostly done in universities, sort of the pure sciences.
Then there's corporate R&D, which is more associated with for-profit businesses, but is actually where a lot of the HCI innovations that are maybe the most famous came from — we think of places like Bell Labs or Xerox PARC, or maybe today, Microsoft Research. And then there's a small but growing space of, call them, independent computer science labs, independent HCI researchers, of which I think we had some contact with. How would you define the difference between those three categories?

00:03:39 - Speaker 1: Yeah, well, like you said, the academic side is grounded in these research universities, and this is often directed by a professor or graduate students, and there the values are really around evidence, rigor, review, publication and communication, and creating knowledge over time, which is a whole thing we should talk about. And then on the industrial side, it's often more integrative, because you need to consider not only the pure HCI elements, but the business elements and the hardware constraints and how easy the thing is for the user to learn in practice, and things like that. And then on the indie side, this is a smaller domain, but it tends to be more experimental, free-form. People can bring their own wild ideas to it and just try stuff. So it's a nice injector of new ideas.

00:04:22 - Speaker 2: One way we can maybe make this concrete is to describe the path from, let's say, the lab to commercial product. And I've struggled to find full stories on this — in many cases, I think this is something that happens behind closed doors a little bit. Even though science does have open publishing, the exact story of how something went from basic research or early, um, HCI research to a product that's in the hands of end users is not well understood or written down anywhere. Um, I think the Xerox PARC case is one that has a lot of, um, fame, certainly in the tech circles that we run in — there's some books about it. There they invented things like the modern GUI, uh, as well as what-you-see-is-what-you-get word processing, and it was really a pretty special place. And notably it was a branch of Xerox, the copier company, and they were looking for innovations — I think their theme was the Office of the Future — and they were looking for innovations around that, and clearly, you know, this is the 1970s, they knew that would have to do with computers; personal computing didn't really exist yet or was, you know, still just an emerging idea. So that's one famous example. Uh, maybe more recently, you have something like Microsoft Research, and I think, you know, I don't 100% know what the path is for, you know, for example, interesting innovations that emerged from Microsoft — to what degree were those laboratory projects versus some other path. Uh, one that I find quite interesting is what we now call, on the Apple platform, Face ID. And that uses stereoscopic cameras and infrared — an infrared camera — which gives you depth sensing, right? So this is why you can't fool your iPad into unlocking by holding up a picture of your face, because it can actually sense the shape of it. And that idea was first in Windows Hello, which sort of was the Microsoft implementation of facial recognition. And that in turn — the technology there, I think, came from the Microsoft Kinect, which is actually a gaming device. Um, and I've tried to, like, dig into the history on this. I don't know if it came out of a Microsoft lab.
I think it may have come out of some other independent place. So you often have these very winding paths where a promising technology like stereoscopic cameras emerges, but you're still trying to figure out the application of it. And it's actually quite a long distance between when these early researchers are doing the work and when it's in the hands of consumers as a usable product.

00:07:00 - Speaker 1: Yeah, and I think honestly, that's the best case — that you have this long winding path, but it does eventually find its way into commercialization. I think one of the ideas we had originally behind the lab was these two domains are kind of spinning in circles. So there's a lot of good ideas from the academic world that are getting stuck or don't have the appropriate context from the commercial world, so they're not transferring over. And on the flip side, the commercial world isn't tapping into the academic tradition the way that it should be. So you have a lot of, like, the Microsoft Researches and the Googles and so on — they do a lot of internal research.

00:07:36 - Speaker 1: Google X maybe is their internal lab, or they have a bunch of computer scientists just doing research on, you know, search and stuff like that, uh, some of which gets put out as papers and some of which doesn't. But the kind of classic path from, uh, academic labs through commercialization, I hypothesize, is actually weaker than it should be or could be, and perhaps was in the past. And one of our ideas with the lab was to help bridge that gap with something that was kind of in between, with the so-called industrial research lab.

00:08:01 - Speaker 2: Actually, Google search is another case. It's not an HCI thing, it's more of an algorithms thing, but the founders of Google — they were doing academic research work at Stanford, if I'm not mistaken, came up with this PageRank algorithm, which was a science paper published like any other. At some point — I'm not super knowledgeable about the story — but at some point they decided to turn that into a working prototype. They set up this search engine, they found it worked way better than anything else out there, and they realized they could spin that out into a commercial entity. And so those two individuals took it from that early lab work all the way through to a commercially viable product, but it takes pretty extraordinary individuals and probably extraordinary circumstances, or at least serendipitous circumstances, for that to happen. And so what you're alluding to there with the gap between the academic researchers who are exploring wild new ways we can interact with computers, and commercial companies that can bring these to people in their everyday lives — um, that's, you know, in the Google case, these extraordinary individuals took it across that threshold, but what can we do to create more movement there?

00:09:12 - Speaker 1: Yeah, exactly. I think we'll see, as we get more into HCI specifically here, that the HCI domain isn't as obviously susceptible to the academic tactics as other domains. So things like algorithms are very quantifiable, they're very repeatable, they're very discrete, and those are things that work well in the traditional academic model of measurement and confidence intervals and so on, whereas HCI is often much more multi-dimensional, maybe case-based, maybe hard to quantify.
00:09:39 - Speaker 2: Yeah, for sure. I think how it feels is a huge dimension of making interfaces, but that is something that is very hard for science to evaluate. Uh, it's something that is more of a taste or judgment call, but then science is and should be about rigor and the academic tradition and fitting into that, and sometimes, I think, that does show, from what I've seen of the HCI field. Sometimes I read these papers where — I don't know, one example was, um, I think it was also a Microsoft Research project — they did an interesting thing where they rigged up some projectors where you could essentially put windows from your computer, uh, individual windows, whether it's like a document app or something else, up on the wall, and they had projectors, so basically all the walls were 100% turned into these screens. But it was collaborative. So I could put up one window, and it's not like while I'm, you know, screen sharing no one else can — someone else could put up their window, and you had this shared space that was very spatial, and that sort of thing. This sort of stuff was, you know, part of what was inspiring us when we were thinking about the Muse opportunity. But notably there — it's a really interesting prototype, you can look at their video and look at what they've done and read the paper and think about how this might be applied in the real world, but it's not enough to just build the thing and say, hey, we liked it or we didn't like it; then you need to go and do some kind of quantifiable test. And they did a usability test or user test, which, as near as I could tell, was just grabbing 7 random people that happened to be walking by in the office and having them use it for 2 minutes and then, you know, giving them a little survey and writing it down. And it seems like, OK, well, I guess that makes it science because you're measuring a thing. But that's not how we make great breakthrough new interfaces. But it's very difficult, because otherwise you just leave it to, well, did you like the thing you built? People are always attached to the things they built. They always like the thing they built. How do we, how do we measure that? That's probably an unsolved problem a little bit for the academic side.

00:11:34 - Speaker 1: Yeah, I think so. Thinking about things that do work well in this space, reflecting on my own journey: I started not so much with the HCI, as like proposing a certain windowing system or a specific gesture model — I started more on the fundamental side. So when we think about human-computer interaction, you need to understand the human body, like biomechanics and things like that. You need to understand the human mind, like cognition, and then you need to understand the computer science fundamentals, things like the graphics pipeline. So I found it very useful to go and study those fundamentals, both within and outside the HCI literature, and there again, that area is much more susceptible to traditional scientific methods, so it's very good information, um, and then you really understand the fundamentals, the ground truth.

00:12:21 - Speaker 2: You know, the point about humans and computers being equal participants in this — and I think there is a tendency for computer people to focus on the computer.
Maybe one thing that HCI tries to do, or at least, um, some of the HCI teams that I've had a chance to interact with, including this team out of UCSD that we met at this conference we went to — they try to have maybe a cognitive science person or behavioral sciences person on the team, and they are concerned more with that: how does the human mind work, how does our attention work, how do our bodies work? But then you also have to connect that together with what's possible with the technology, both in the moment and of course also in the future, where we think technology might go. And I think, you know, for example, the VR/AR stuff is maybe in some ways a hot or buzzy space — or maybe was, maybe that's died down a little bit. But if you go read a lot of research about that, you see that, for example, one of the biggest problems with it is just a simple case of: OK, if you've got these controllers you're waving around in the air as the main way you interact with it, your arms just get tired. And they've measured this, right? They put people in situations where they're using these kinds of controllers for long-lasting tasks, and they see that after an hour you've got to take a rest, and they've tried lots of different things to try to make it so you could do a full work day the way you would at a standard desktop computer or whatever, and they haven't found a solution. And so if you're a commercial company that's coming in and wants to do something with this space, you probably want to read that literature and keep that challenge, that unsolved problem, in mind. Yeah, one place to fill in more of the picture on the academic side — for me, the big eye-opener was going to, uh, the biggest conference in the space, which is CHI, last year. You and I kind of spontaneously both decided to go. This is when we were still within the lab, but thinking about the Muse idea, and that was a really great experience, because we both got to meet a lot of the professors and researchers that were working in this space, got to see how many people were there — I don't know, it was 2000, 3000 people, there's hundreds of papers submitted, many, many tracks of talks — and then we saw all of these people who are working really hard at thinking big and thinking future-facing about what computers can do for us and how we can interact with them. Some examples, just for fun — I pulled up my old notes. I, uh, had a very early version of Muse back then, a prototype that I was working with, and I was able to dig that out of my archives, or dig the Muse board exports out of my archives. Um, we had, for example — there was a talk on peripheral notifications, and this is where they're basically testing: OK, so if you have a Slack notification or an email notification or something pop up, and it's on screen somewhere, what can we do to put it in your peripheral vision so that it won't break your state of flow? Or a better way to put it is just trying to understand what kinds of sizes and colors and motions and shapes for a particular notification, in a particular place in your field of view, how likely that is to get your attention. And then as a person who's implementing something that wants to give a notification, you can go read this literature and they have this very extensive data set. And if you say, hey, I want something that's absolutely certain to grab your attention, you should do it like this.
If I want something that's more a little bit of a note to the side, but I don't want to distract you if you're in the middle of something, maybe you should use this shape and this color and be in this place in your field of view. And there's things there about keyboards and different ways to improve typing on mobile, there was lots of things about wall-mounted displays. Uh, there was, um, Ken Hinckley's group, uh, which has been a source of inspiration for us at Muse. They do a lot of stuff with tablets, particularly around the Surface platform. They had one that was — I don't know, they attached a bunch of extra sensors, they basically strapped a bunch of extra sensors onto a standard consumer tablet, and they used that to detect, I think what they called postures, so they could tell better the grip, like how you were holding the tablet at the time, and then they can make the software behave differently. And clearly this is not something you can use in production — this is the equivalent of a Raspberry Pi taped onto the back and a bunch of sensors, you know, kind of hot-glued on around the edges. This would never work in a commercial environment, but it suggested some things you could do if such a capability existed. And I think that is a good example of what, um, this field at its best does: it gives you possibilities to draw from, and then it's the applied people — what we would normally call just people building products — that can potentially go and draw from that pool of ideas and that pool of things that have been learned, and use them to make potentially new products that solve, uh, new problems or old problems in new ways.

00:17:07 - Speaker 1: Yeah, this experimental slash prototype approach is probably the thing that we, um, most think of when we think of HCI. Another type of work that I found very helpful is the ethnography, where you go and you understand how people actually work day to day, and what's worked for them and what hasn't. A couple of examples there. One is a book called — I think it's A Small Matter of Programming, or The Simple Matter of Programming. This is a study of, uh, end-user programming in the wild, things like Excel spreadsheets, CAD, and what actually works there, and because they talked to these people who are actually doing work every day and having success or not in these environments, they're able to pretty deeply understand what is useful, in a way that you probably couldn't get with either theorizing or experiments.

00:17:50 - Speaker 2: And I'll just interject to say that one was a big inspiration for, uh, Heroku. And it's also a good indicator of how much the academic world is ahead, in a strange way. We think, maybe in the startup world or the tech world or whatever, oh, we're so on the cutting edge of things, but A Small Matter of Programming was written in 1993, if I'm not mistaken. And this was 2006 or 2007 when I was reading it and applying some of the ideas that were in it, which went into Heroku. And so at that point, the book was already 15 years old, but a lot of the research and understanding in it, and the ideas it suggested, were still really bold, innovative, or just thought-provoking, in a way that current technology and software products and certainly programming tools, um, had not taken advantage of or, um, learned from.
00:18:45 - Speaker 1: Yeah, a lot of the ideas that one tends to think of in HCI as a supposedly novel interaction or approach have actually been tried before. I think it's very important to understand that prior art, especially if it basically didn't make it into the commercial world — and like, why is that? Or else you're liable to make the same mistakes again. Um, another example that I'm thinking of was the study of so-called folk practices with computer programs. This is like little habits or techniques that people have picked up to make themselves more productive with programs, and they found two examples. One is lightweight version control by making copies. So if you're editing a photo and you want to, you know, have some quick version control, uh, you might, uh, duplicate the item in your canvas, like in Figma — you know, make another copy of it, and then fiddle with the new version, and then you can kind of compare it to the old version, even if you don't have, like, a, you know, Git for Figma or whatever. Um, another one was this idea that everyone likes to have a little scratch space where you can, like, put, you know, your little clippings and bits and things you're working on, and that was one of the inspirations for the shelf in the original Muse prototype.

00:19:47 - Speaker 2: Another book we both read around that time was The Science of Managing Our Digital Stuff, and they had a lot of insights — again, things that I think we borrowed a little bit from for Muse — but because they come into it from this ethnographic or academic perspective, they just want to learn, they want to collect the data, they want to understand users. They're not coming in with the point of view of, like, we have a product we want to sell you, or just a, uh, product we believe in and we've already bought into the mindset of. They just want to learn. And so one insight there was: people who have been designing file systems — that is, the way we store documents on our computers — have for decades talked about how the hierarchical file system, that is to say folders that nest inside each other, uh, no one thinks that way, and hard drives get messy and no one wants that; maybe we want a tagging system, I think BeOS had a version of that, um, maybe we want fast search or whatever. And these folks just did a bunch of studies of people, including how they use Dropbox or Google Drive or their own hard drives, or just the way they manage their files, and pretty reliably, people like putting files in folders. And they like pretty shallow hierarchies, and they can remember where it is, and it's best for them if it's only in one place. And you can sit there and talk about how that's not the best solution or whatever, but they did a pretty broad survey and just saw this is what people want to do, despite the existence of other ways of doing it and other kinds of solutions, including search and tagging and so forth. At some point you have to acknowledge the reality of this is how humans behave, and even if we don't like that behavior, we need to think about that when we build tools for them.

00:21:27 - Speaker 1: Yes, if you're contemplating doing a search-based or tag-based information management system, please read this book. It's super critical.

00:21:35 - Speaker 2: There's an interesting tension there, between — I think the academic world is not only good at, but science is essentially built on, prior art, and you're building on what came before, right?
Any paper starts with a survey of other research that it's built on or related to, or other people who have tried similar things, and you're extending the tip of human knowledge, hopefully, by building on everything we already know. Um, and so for that reason, the academic world is very good at the prior art thing. And maybe the startup world is all about, hey, I'm a 24-year-old that doesn't know anything and I'm totally naive, but I have this wild idea for a thing I want to build, and 99% of the time that turns out to be an idea that a bunch of other people tried — it doesn't work and fails for all the same reasons it did for everyone else — but 1% of the time it turns out that some assumptions about the world have changed, and it is that naivety, it is that not looking at why people failed before, that allows you maybe to find an opportunity. So there is a bit of a tension there, but, um, I'm very appreciative of what people have thought about this: they've studied it in depth, there's a lot of prior art here — like, look that up before you start building things — um, and I think that would be advice I would give to my younger self, at a minimum. Alright, so that gives us a little bit of the landscape of HCI. Now the next part of the question was, how do you actually get into this field? I think that's kind of a tough one, so I'm gonna actually save that for the end. Uh, but in the meantime, there was a follow-on question here, and Fetta says, how do you forget or ignore current patterns and come up with new ones? You have some thoughts on that, Mark?

00:23:14 - Speaker 1: Yeah, I come back to this first-principles idea of really understanding the basis for all of this — the biomechanics, the cognitive science, the computer science — and then understanding the, um, assumptions or lemmas, uh, of the current design paradigms. So, you know, for example, uh, one thing we see with phones is most apps are designed for only one finger to be used at a time, and it would be a mistake to translate that design constraint or design decision over to a tablet, we think, but a lot of apps just kind of blindly do that, because they're both iOS and they're both touch apps. Um, another example even more relevant to Muse is the pencil. A lot of the gesture space of tablet apps can't assume that the user has a pencil, because Apple and the various app developers just aren't willing to make that assumption. Uh, with Muse, we realized that was an assumption people were making, and one that you could take the other side of. So we've basically said you really need a pencil to use Muse, and therefore we're gonna put some of the functionality behind that, you know, that physical gesture.

00:24:15 - Speaker 2: Yeah, the status quo is a powerful force for all of us, and we tend to act on, not quite habit, but this stack of assumptions about the world and what the right way to do something is. And here's where I like to think in terms of maybe a spectrum, where on one far extreme is the research thinking — the out-of-the-box, wild ideas, weird ideas; when you go to one of these HCI conferences, this is what you see a lot of, just sometimes frankly pretty wacky mad-scientist kind of stuff. Now, um, there are actually only certain times where that is appropriate, and in fact, doing research is a place where that is appropriate.
Typically, if you're making a product that you expect people to use in the real world, it's actually a bad thing to have weird, out-of-the-box ideas, particularly about basic interactions. You want the status quo, you want the known path, usually called best practice. I've certainly run into this on teams where you're building, say, a basic e-commerce site, and there's someone who wants to do something fun and exciting, so they say, why not try this wild idea: instead of checking out like this, you do this crazy thing. 99% of the time that's just a bad idea; please do it the way other people do it. This is one of the things that I think tends to make software so high quality in the Apple ecosystems, both Mac and even more so iOS: you have this pretty stringent set of, well, they call them guidelines, but in many cases they're just outright rules to get your app approved. There's a very extensive culture and set of principles in the Human Interface Guidelines and in all the precedent set by Apple's apps and the wider ecosystem. It all hangs together, it works well, and people know how to use it. So most of the time you actually should do the boring, expected, common, known-path thing. It's a shift in mindset, then, a fun one, but one that takes some stretching of the brain, to challenge yourself to go into the research-thinking mindset, as both of us did when we went to Ink & Switch.

00:26:29 - Speaker 1: Yeah, I think that's an important point and a balance to strike. Another big source of inspiration for me has been the world of analog tools. We've been thinking about how to build good digital tools for maybe 50 years or so; we have a couple of thousand years of explicit and implicit study of how to create analog work environments, things like personal libraries, studies, workshops, artist studios. In some cases there are explicit treatises about how to organize one's library, but there's also a huge amount of implicit, embedded knowledge in the patterns people habitually use to organize, say, a library. So I like to look at the physical world and ask, as a baseline, how can we make it as good as that? A simple example: if you use ink from a pen, it has essentially zero latency. If you use ink in a really good tablet app, it might have 15 to 20 milliseconds of latency, which is a lot, and in a bad tablet app it might be 50 milliseconds. That's a really basic example of a simple bar to set. Another one I think about a lot is multitasking. If you have a desk with your main piece of work in front of you and some notes off to the side or at the top of the table, it's super fast and easy to shift your attention: you move your eyes, you move your neck, your eyes refocus, maybe you lean to one side. It's fast and lightweight. Whereas in a typical iOS app it's press, next page, transition animation, spinner, loading, fonts come in, right? It's very discouraging for actually doing this kind of multitasking work.
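A rough way to put those ink-latency numbers in context: at a typical 60 Hz display refresh, one frame lasts about 16.7 ms, so 15 to 20 ms of latency means the stroke trails the pen tip by roughly one frame, and 50 ms by about three. A minimal sketch of that arithmetic (the 60 Hz refresh rate is our own assumption; the latency figures are the ones quoted above):

```python
# Frame-budget arithmetic for the ink latency figures quoted above.
# Assumes a 60 Hz display refresh; the latency numbers come from the conversation.
refresh_hz = 60
frame_ms = 1000 / refresh_hz  # ~16.7 ms per frame

for label, latency_ms in [("pen on paper", 0), ("good tablet app", 20), ("bad tablet app", 50)]:
    print(f"{label}: {latency_ms} ms ≈ {latency_ms / frame_ms:.1f} frames behind the pen tip")
```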
00:28:06 - Speaker 2: And maybe the flip side of taking physical-world information practices, things from artist studios and offices, file folders, scissors, rulers, pencils, desks, is what you tend to get the first time an analog process is digitized. Think of something like desktop publishing going onto computers in the 1980s, or word processors taking what was a typewriter or a typesetter and moving it onto the screen; spreadsheets were that way, and maybe PowerPoint, taking overhead transparencies and bringing them onto the computer in the late 80s and early 90s. In all of these cases the first versions tend to be very literal. The first version of PowerPoint was a way to print out overhead transparencies, and it wasn't until much later that the idea of a slide deck that would stay all digital, never printed out and put on a projector, showed up. When you look back at these first transliterations from the analog world to the screen, you often see this thing where it's, oh, isn't this funny: there's the little picture of the trash can and so on, often very literal and kind of heavy-handed, and not necessarily taking advantage of what can be done in the new medium. Do you have a sense for how we take the best parts, the things that work about the physical-world knowledge tools we've been working with for so long and that are so adapted to human needs, without getting stuck in a rut of translating them so directly that we don't get the benefits of the computer?

00:29:39 - Speaker 1: Yeah, I don't think there's a simple rule for that, but again, I come back to the fundamentals. A lot of this stuff is driven by the biomechanics or the cognitive structures of our minds, which aren't going to change. For example, we have a very realistic, deeply embedded expectation that when we touch something and move our hands, it moves, and I think that's basically not going away; it would be a mistake to assume it will. Likewise, I think we have quite embedded cognitive architecture around both spatial memory and associative memory. Those are basically baked in and they're not going anywhere.

00:30:10 - Speaker 2: I guess that comes to mind because I feel like that tension, or maybe tension isn't even the right word, that interleaving of drawing on the best parts of physical-world workspaces while also really embracing the digital, is part of the pitch, the value hypothesis, for Muse as a product. We're taking something you previously did with Post-it notes and your whiteboard and your notebook and some printouts of screenshots you scribble on, spread across your desk, and moving it into this expensive and fragile computing device, on the promise that it will have new capabilities and powers you couldn't get before. So it's about bringing across those best parts, for example that you touch something and it moves right away, that instantaneousness, not staring at spinners and loading screens, while also taking advantage of the incredible capabilities and the great depth of possibility that open up once you move to a digital, virtual workspace.
00:31:18 - Speaker 1: Yeah, one idea for an exercise here, and this kind of gets into our next question, would be to try to understand and catalog the properties of these physical workspaces that are interesting. For example, I have a desk here that I think is 6 feet by 3 feet.

00:31:32 - Speaker 2: For our non-American listeners, that's roughly 2 metres by 1.

00:31:38 - Speaker 1: Yes, thanks, Adam. So you have this desk, and imagine it's covered with textbooks and notes and photo printouts at, say, 200 DPI. What's the resolution of that? If you do that exercise, you'll see it's massively bigger than even our most advanced displays; it's not even close. Just being aware of those basic, fundamental properties of the physical world and how they might or might not be reflected in your app is, I think, a good baseline.

00:32:02 - Speaker 2: So we mentioned academic HCI work, which tends to happen in universities, funded by grant money, where the output is published papers. Then there's corporate R&D, which is separate divisions, but still departments within some large company with a lot of cash to throw at potential new innovations, like a Bell, a Xerox, or a Google. But there's a third category, or at least I hope it's a category now, which is much more rare, and Ink & Switch falls into it: the independent research lab. The hypothesis behind Ink & Switch was, what if we take the corporate R&D lab but cut off the corporation? That quickly leads you into how this stuff gets funded. Our mutual friend Ben Reinhardt has a whole series of excellent articles about how innovation happens, and in particular the different funding models; how something gets funded in turn shapes the incentives of the people doing it, and there's quite a rabbit hole there for those who are interested. But the concept behind Ink & Switch was that we could get some grant money to do independent research, with the idea that it would generate so-called intellectual property. I don't love that term, but basically ideas that could potentially be commercialized, ideas with enough depth and research behind them, where we falsified the ones that were a no-go and kept some really compelling ones. One of those turned out to be Muse, which we spun out to begin the commercialization process. There are a few other independent labs that I know of. One is Dynamicland, which is Bret Victor's effort to bring computing, and programming in particular, into a more physical, spatial environment, not just on a screen. Another, maybe more in its nascent stages, is Andy Matuschak's, who has done amazing work on mnemonic devices. I think his funding started with Patreon and maybe led up to institutional funding, more of a, what's the word for it, nonprofit or philanthropic approach. I don't think there's a great answer yet for how independent research gets done, but I hope Ink & Switch is at least an interesting example, if not a role model, for others who want to push the frontiers forward in a particular space.
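To make the desk exercise above concrete, here is a rough back-of-the-envelope version of it, using the 6 ft by 3 ft desk and 200 DPI figures mentioned; the comparison display, a 6016 by 3384 pixel 6K monitor, is our own choice for scale, not one named in the conversation:

```python
# Back-of-the-envelope: effective "resolution" of a 6 ft x 3 ft desk covered
# in 200 DPI printouts, versus a large 6K display (6016 x 3384 pixels).
dpi = 200
desk_w_px = 6 * 12 * dpi   # 14,400 px across
desk_h_px = 3 * 12 * dpi   #  7,200 px deep
display_w_px, display_h_px = 6016, 3384

desk_mp = desk_w_px * desk_h_px / 1e6           # ~103.7 megapixels
display_mp = display_w_px * display_h_px / 1e6  # ~20.4 megapixels

print(f"desk:    {desk_w_px} x {desk_h_px} ≈ {desk_mp:.0f} MP")
print(f"display: {display_w_px} x {display_h_px} ≈ {display_mp:.0f} MP")
print(f"ratio:   roughly {desk_mp / display_mp:.0f}x the pixels on the desk")
```

Even against a top-end display, the desk comes out at roughly five times the pixel count, which is the gap being pointed at here.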
00:34:24 - Speaker 1: Yeah, that's both the challenge and the promise of this third type of institution. On the academic side you're very limited in that your work has to fit in the box of a peer-reviewed, quantifiable research paper, and in the commercial world it needs to be commercializable in the next year or two, maybe three. But a lot of the good ideas don't fit in either of those two boxes, and as hard as it is to collect them with this third organizational type, I think it's worth trying.

00:34:47 - Speaker 2: It's a great point. I think the time horizon is one of the key variables that defines what I would call research, for anything, but certainly for human-computer interaction. I believe Xerox PARC actually had an explicit time horizon of ten years, which is definitely way beyond what a commercial entity would normally do, and basic science sometimes has an even longer horizon than that. University labs are thinking really, really far forward; corporate R&D labs are thinking further out than their commercial counterparts; and then if you talk about a startup, particularly something coming out of Y Combinator, you've got to build that MVP, get it to market, validate it, get customers. You can't be building on some shaky technology where, one, you don't know if it'll work, and two, it might take many more years of development to reach enough maturity that you can base a product on it that people will depend on.

00:35:44 - Speaker 1: Yeah, and I also think you get a bit more wildcard energy in these independent orgs. The academic institutions and the big commercial labs are necessarily more constrained and structured, and you can have more eccentric people doing stuff on the independent side, which sometimes leads you down weird dead ends, but sometimes you get really interesting results and it injects a new idea into the mix. We've talked mostly about independent research labs or research efforts; I'd also put indie creators, artists, and tinkerers in this bucket. One example that comes to mind is the video game Braid, this amazing time-travel-based game where the time travel is very smooth and can be scrubbed frame by frame. That's actually been something of an inspiration for me when thinking about version control and time travel for productivity tools.

00:36:33 - Speaker 2: Yeah, I think that's Jonathan Blow, and he went on to make other category-breaking games. I'm trying to remember the name of it; there was a puzzle game that was actually really nice on the iPad that I played with my girlfriend at the time. And if I'm not mistaken he's now working on inventing a new programming language. So maybe it just takes a certain mindset, a desire, perhaps even a drive, to think outside the box and do weird stuff. And I certainly agree that labs depend on weird, wild people. I think I saw the word "maverick" used quite a bit in Dealers of Lightning, a book that covers Xerox PARC and those glory days pretty well. It talks about these, I don't know, long-hair types who don't wear shoes in the office, and of course those aren't the qualities that make them good researchers, but it's connected to this.
Maybe it's a desire to do a weird thing, to not conform, to try stuff at the fringes, to be genuinely fascinated by things at the fringes, as opposed to saying, this is weird, who cares, I want to work on something more mainstream. That's not to say one is a better or worse approach to bring to your work, just that it fits in a different place in the innovation cycle. Well, maybe that brings us around to the core of the original question: how do you get into this field?

00:38:04 - Speaker 1: Yeah, I feel like there might be two different questions embedded there. One is how do you participate or contribute, or even just find out what's going on, and the other is how do you make a living doing it. I think making a living doing it is harder, but it's maybe simpler to answer. There are two main paths right now: the academic path and the corporate path. On the academic path you basically go to graduate school and get a PhD, and even after that it's quite challenging just because it's so competitive. On the corporate path you become a practitioner, you do good engineering or product work, and eventually you can move onto the more research-oriented ladder. But I'm not sure we have that much to contribute on that front, because neither you nor I have gone down those paths, so maybe the how-do-you-engage-with-the-community side is where we should focus.

00:38:43 - Speaker 2: Yeah, absolutely. Well then, you teed that up really nicely: how should we engage with the community?

00:38:49 - Speaker 1: Well, step one, I would say, is to start digging into the literature. It sounds obvious, but I think a lot of people haven't done this; either they don't realize it's there or they're intimidated by it. This reminds me of Rich Hickey's classic talk, Hammock Driven Development. He's like, if you're working on something, say you think you need a hash function that does X, go to Google Scholar, type "hash function that does X", hit enter, and see what comes up. There's almost certainly going to be something there.

00:39:12 - Speaker 2: Well, maybe this is a great chance to talk about that. I come purely from what academics would call the industrial side: working in companies that build products they sell to people. That's what I did my whole career. And so there are things like the fact that all this academic work tends to be published as PDFs in a particular format, a lot of LaTeX-formatted two-column PDFs; that there's a particular style of writing and a particular style of citations; that they're not always open access, but when they are, it's a PDF on a web page; and that the search engine for them is something like Google Scholar. I actually didn't know any of that. I didn't know how to go find these things. As a product developer, designer, or engineer, I knew how to Google for stuff, I knew how to find Stack Overflow, I read Medium pieces, I read people's blogs, I followed other folks in my field on Twitter, but the academic world was dark to me, except that on occasion I would stumble across a book like the one you mentioned earlier, A Small Matter of Programming, and feel like I'd discovered this incredible trove of knowledge from someone who came at the problem space from a very different perspective.
And I think it also goes the other way, though not as much: academics are less likely to read the Medium think piece posted by the product designer or the engineer. Basically the two communities, if that's the right way to put it, have different communication conventions, different ways of sharing knowledge with each other, and different systems for evaluating importance. So if you're steeped in one, it's hard to cross over into the other. Which maybe comes back to: find some hooks into this world. You can follow people, whether on Twitter or through their personal blogs, you can start to find papers on Google Scholar on the topic, you can find Slack communities that talk about this stuff, and try to get hooked in. If you come from the practitioner side, engineering, product, design, and you haven't been exposed to the academic side, going and exposing yourself to it is a very good idea, and maybe vice versa.

00:41:30 - Speaker 1: Yeah, and one other thing I would emphasize is that you can do this citation-crawling practice: you find a paper you're interested in, you look at its references, which point to a bunch of other papers and sometimes books (in HCI it's mostly papers, with a few books), and then you type those titles into Google Scholar and follow them that way. A good way to know whether you're getting your arms around the literature is that when you read a new paper, you basically recognize most of the citations, or they're clearly off the edge of your map in terms of your area of interest. At that point you've identified the full graph of relevant papers and you have a good handle on the literature.

00:42:05 - Speaker 2: And I think this is something you learn in the academic tradition: if you want to advance the state of the art in a field, first you need to know everything humans already know. You do that by consuming the literature, and you know when you've consumed it exactly the way you described, through a crawling process. You start with a few seminal papers, or a few that are your starting point, and you follow the citations until you get to the edges, and you feel like, OK, I've filled in this space; I now know, in some general sense, what humanity knows about this subject. And now, if I have novel ideas, or I want to do new research, or I see open questions that stand on top of this, I can go do that and potentially contribute.

00:42:52 - Speaker 1: Yeah, and speaking of taking that next step, it can be intimidating, certainly if you want to jump all the way to publishing in a peer-reviewed journal, but you can take more incremental steps. One example that comes to mind is Dan Luu's work on latency in computer systems. He did a series of measurements and experiments to assess the different latencies, from your keyboard to
your monitor, or from when you move your mouse to something happening on screen. He was able to publish this on his personal website. It's not an academic peer-reviewed paper, but that work has been quite influential, and it shows you can reach the caliber of academic work even without participating in that full pipeline.

00:43:29 - Speaker 2: I'll note that that work, if I recall correctly, is published on a really basic HTML page with very limited formatting, and it feels very homegrown and authentic. One of the things he does that's so compelling is that he starts with a hunch: computers seem slower than I remember from when I was younger. If you don't come at it from a position of scientific rigor, you might go, computers seem slower, I'm going to make some snap judgments and then write a blog post complaining about it. What he did instead was ask, well, are they actually slower? He got some kind of high-speed camera set up, pointed it at the keyboard and the screen, recorded himself pushing a key, and from the footage you can see exactly when the character appears on screen, so he could write it down to the millisecond. He did that with a whole bunch of different devices, including some computers dating back to the 80s, and then put them all in a table sorted in order. That's a simple application of the scientific method to, in this case, a very literal piece of human-computer interaction: how long does it take from when I press a key to when it appears on the screen? It doesn't say how long it should take or what would feel right, but you can now put real numbers to the intuition that maybe computers are more sluggish than they were at some other time.

00:44:55 - Speaker 1: Yep, exactly. And then if you are looking to take that step towards participating in peer-reviewed journals, a possibility we've had some success with is collaborating with an established academic in the space. Adam, you kind of spearheaded our collaboration with Martin, maybe you want to describe that?

00:45:12 - Speaker 2: Right, well, we were lucky enough to get to work with Martin Kleppmann, who's one of the world's experts on, let's say, data and data synchronization, particularly around another track of research we had in the lab, the one we eventually called local-first. He's someone who was in the industry world, doing startups, and at some point felt he could contribute more to the industry, or the world, by jumping over to the academic side to do more basic research on algorithms for data synchronization. We were lucky enough to get the chance to work with him within the context of the Ink & Switch lab on a light, part-time basis. That led pretty naturally to, OK, we want to write a piece and publish it. He wanted to publish some of his findings, and he said, hey, I think this could go into the academic format. And I said, well, how does that work?
He's like, well, basically we take this web page we wrote, we put it into LaTeX, we change some of the wording to make it less emotional, we turn the links into citations where that makes sense, and we go through a whole process to fit it into the format that's expected by the academic world. Then we submitted it to a conference, where it was accepted, and I actually ended up going to present it, for various travel-logistics reasons. It was a very interesting experience, because of the four authors on the paper, I think you and Peter both have a good bit of academic experience, although I don't know if you'd published that way before, Martin is extremely good at that stuff, and I knew very little about that world. Working with someone who knows all the ins and outs of it was a very rewarding way to learn about it.

00:46:58 - Speaker 1: Yeah, exactly. And to be clear, we didn't just jump right to a collaboration with one of the world's leaders in synchronization technologies. There's a little bit of a...

00:47:09 - Speaker 2: Yeah, don't email Martin and ask him whether he'll write a paper with you; he doesn't know who you are. That's not what I'm advocating for.

00:47:15 - Speaker 1: There's a bit of a proof-of-work function here: if you do some of your own independent research in the space, and especially if you publish something coherent and compelling, it becomes much more reasonable to establish a collaboration. Actually, when we did some of our publications around Muse and our latency measurement work, we had a few academics reach out and say, that's interesting, maybe we should do some work together. I don't think we've brought any of those to the point of writing a paper together yet, but it shows that once you have some work out in the world demonstrating that you're serious, that you're engaging with the academic tradition, that you're aware of the literature, that you have contributions to make, it becomes more feasible to have those collaborations.

00:47:54 - Speaker 2: Yeah, perhaps like any other intellectual or maker tradition, this is a world, a community, a society that thrives on seeing what else you've done. If someone sees you've done great work that overlaps with work they're interested in, that creates an opportunity to connect, to learn from each other, and maybe to collaborate. And maybe it's not such a huge leap from doing a weekend hack project and writing up your learnings to eventually doing something a little deeper and more serious that brings you in the direction of the recognized academic world. It's interesting to note, then, that in doing the research lab, we came at it not from the perspective of "how do we become a part of HCI," but rather we just wanted to see computers and computing interfaces get better in some particular ways. That led us to doing some interesting experiments, which led to some novel research that we published, and that in many ways opened the door for us to be more connected to this larger academic field. Is that a path you would recommend for others?
00:49:05 - Speaker 1: Yeah, there are certainly interesting paths there: there's this independent research lab path, and of course the academic and commercial paths, and I think those are all interesting. I would also say, though, that being a scientist or an innovator isn't a hat you're granted by some external institution. It's a way of thinking, a way of navigating the world. The scientific method is something anyone can use. Publishing is something anyone can do. Anyone can read the literature. So if you're interested in this, don't feel like you're stuck because you don't have a credential like a PhD. Anyone can step into this world, go on Google Scholar and read the literature, and then maybe you have something to contribute on top of that.

00:49:40 - Speaker 2: It's hard to think of a better place to leave it. If any of our listeners out there have feedback, feel free to reach out to us at @museapphq on Twitter or hello@museapp.com by email. We'd love to hear your comments and ideas for future episodes. And a big thank you to Fetta for giving us this very intriguing and deep topic to explore. I'll catch you next time, Mark.

00:50:04 - Speaker 1: Great, thanks, Adam.