Webcomic
Welcome to Episode 262
In this episode we talk about Skazz's excitement for the new Monster Hunter game, though he does have some concerns about whether this game will be for him. Ram then talks about A Game About Digging a Hole, a cheap little game where you, unsurprisingly, dig a hole. We discuss the recent news of Gene Hackman's death, we prove how well the podcast is recorded, Ram's usefulness in a survival situation, stubborn dogs, and people complaining about cheap games.
Notes
The sad death of one of the great actors, Gene Hackman
A series that Ram does not like, Pride and Prejudice
And the almost certainly superior version, made better by adding zombies
Simon Pegg recreating an iconic scene from Shaun of the Dead
No prizes for guessing what you do in A Game About Digging a Hole
A classic British personality, Fred Dibnah
And the mine he decided to build
When it comes to what's easy vs impossible, XKCD says it best
We look forward to seeing you all on the next podcast on March 13th 2025, at 18:30 GMT
As a post-election palate cleanser, Jess talks with webcomic artist, author, and cultural phenomenon creator Randall Munroe. They discuss stick figure science cartoons and endless curiosity, delve into science mysteries, and even workshop lava moats.
Burnie and Ashley discuss Joker Part 2, 419 scams, making a sequel for a new audience, R-rated box office successes, spam in Scotland vs US, strong passwords, XKCD, AI art theft, and the world's most digital violin. Support the show
This week we're talking about a backdoor inserted into a popular Linux file compression tool, which had the potential to massively undermine the security of vast swathes of the internet. What happened? How did it happen? And how was it thwarted? Links - Andres Freund's Mastodon - where he revealed the backdoor: https://mastodon.social/@AndresFreundTec - Read more in Ars Technica's article about it: https://arstechnica.com/security/2024/03/backdoor-found-in-widely-used-linux-utility-breaks-encrypted-ssh-connections/ - Read more in The Verge's article about it: https://www.theverge.com/2024/4/2/24119342/xz-utils-linux-backdoor-attempt - Read more in Wired's article about it: https://www.wired.com/story/jia-tan-xz-backdoor/ - Check out this excellent and very helpful diagram: https://twitter.com/fr0gger_/status/1775759514249445565 - The XKCD comic we mention: https://xkcd.com/538/
Coming up in this episode * Does it do Passkeys tho? * So What Happened to Xz anyway? * How do we fix the internet? The Video Version: https://www.youtube.com/watch?v=I3bN3PRmHJY Timestamps 0:00 Cold Open 1:36 Amazingly Self-Hosted 34:13 The History of Xz and the Hack*! 49:58 How to Fix Open Source 1:15:56 Next Time 1:20:42 Stinger
Ham Means One 232 (http://relay.fm/rd/232)
Merlin Mann and John Siracusa
John guides Merlin through an unusual form of communication.
This episode of Reconcilable Differences is sponsored by: Squarespace: Save 10% off your first purchase of a website or domain using code DIFFS.
Links and Show Notes:
So unusual is this form of communication that its interrogation continues through the members-only bonus episode. Remember, you can sign up today to hear all the member episodes, get more bonus stuff, and, yes, support this program. (Recorded on Tuesday, April 2, 2024)
Credits: Audio Editor: Jim Metzendorf. Admin Assistance: Kerry Provenzano. Music: Merlin Mann. The Suits: Stephen Hackett, Myke Hurley.
Get an ad-free version of the show, plus a monthly extended episode.
You Ruined Everything, by Jonathan Coulton
Kottke's post about Waffle House's Magic Marker System
Waffle House's Pull Drop Mark Order Calling Method - YouTube
Waffle House's Food Safety Training - YouTube
Waffle House Marker System Poster
"…wheat" - YouTube
Multimodal Literacy and the Myth of Low-Skilled Labor at Waffle House: The learning curve for a Waffle House server can be steep, and even steeper for a cook. The process by which an order cycles from the customer-menu interaction to the final presentation of food is complex, multimodal, and reliant on code-switching.
Hamming code: John mistakenly used this name instead of the correct one. (See next link.)
Huffman coding: This is what John should have said.
The XKCD comic about generalized systems
Merlin's pegboard
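A note on the Hamming/Huffman correction above: Huffman coding (what John meant) builds a prefix code in which the most frequent symbols get the shortest bit strings, whereas a Hamming code adds parity bits for error correction rather than compressing anything. A minimal, illustrative Python sketch of Huffman coding, not taken from the episode:

```python
# Minimal Huffman coding sketch: frequent symbols get short bit strings,
# rare symbols get long ones, and no code is a prefix of another.
import heapq
from collections import Counter


def huffman_codes(text: str) -> dict[str, str]:
    """Return a symbol -> bit-string mapping for the given text."""
    freq = Counter(text)
    # Heap entries are (frequency, tie-breaker, {symbol: code-so-far});
    # the unique tie-breaker keeps tuple comparison away from the dicts.
    heap = [(n, i, {sym: ""}) for i, (sym, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    if not heap:
        return {}
    if len(heap) == 1:  # degenerate case: only one distinct symbol
        return {sym: "0" for sym in heap[0][2]}
    tie = len(heap)
    while len(heap) > 1:
        n1, _, left = heapq.heappop(heap)   # two least frequent subtrees
        n2, _, right = heapq.heappop(heap)
        # Merging prepends one more bit to every code in each subtree.
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (n1 + n2, tie, merged))
        tie += 1
    return heap[0][2]


if __name__ == "__main__":
    codes = huffman_codes("pull drop mark")
    for sym, code in sorted(codes.items(), key=lambda kv: len(kv[1])):
        print(repr(sym), code)
```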
Sportsball weekend. Jon can't pause TV and he's annoyed. Eric still has negative opinions of JavaScript and adds cell towers to the list. Followup on political ad clones. Eric uses ChatGPT to find missing US States. Almost. LassPass is not a Dating App. Phishception is the word of the day. FTC says Fraud Losses top $10 Billion in 2023. For fun, Eric reminds you about XKCD and some random math facts about the number 323. Jon reads up on Lake Kivu's Potential Energy. 0:00 - Introduction 8:17 - AI Political Ads 10:42 - FCC Ruling 12:12 - Robocall Investigation 16:10 - LassPass in the AppStore 18:54 - Phishception 21:52 - Scammy Snapshot 26:19 - XKCD 29:09 - Lake Kivu's Explosive Energy
Recorded on-stage at Øredev 2023 just after her keynote, Fredrik chats to Na'Tosha Bard about picking good building blocks, getting products done, and code outliving you. Software outlives you. How early is it meaningful to consider that fact? Will we get better at handling long-lived software? Make tradeoffs with open eyes. Na'Tosha has worked on many different levels of hardware and software, as well as many different levels in organizations - what can be picked up from the various levels? Thank you Cloudnet for sponsoring our VPS! Comments, questions or tips? We are @kodsnack, @tobiashieta, @oferlund and @bjoreman on Twitter, have a page on Facebook and can be emailed at info@kodsnack.se if you want to write longer. We read everything we receive. If you enjoy Kodsnack we would love a review in iTunes! You can also support the podcast by buying us a coffee (or two!) through Ko-fi. Links Øredev The Øredev 2023 video playlist on Youtube Na'Tosha Na'Tosha's keynote is not out yet XKCD about standards Sandy Mamoli talked about lessons from handball applied to software Premature optimization Cloud-agnosticism Unity KMD - where Na'Tosha works now Titles A lot of nodding Perfect is maybe also a delusion Microservice theater Solving a problem for humans Software outlives you Sitting on a mainframe somewhere
Brandon Hendrickson (creator of scienceisweird.com) says no one's ever asked him about the sabertooth tiger skull in his Zoom background - until now! Brandon's a teacher steeped in the ideas of Kieran Egan - a prolific educational theorist who believes the world is FASCINATING and that IMAGINATION is key to how we humans learn. We explore how Egan's approach could work for autodidact software engineers, offer untold book suggestions, and, of course, propose some ways that ChatGPT might be able to help us along the way.
Shownotes:
Science is WEIRD
Brandon's 2023 Astral Codex Ten book review contest winning review of Kieran Egan's THE EDUCATED MIND
Kieran Egan (wikipedia)
A New History of Greek Mathematics - Reviel Netz
Die Hard water jug challenge
XKCD someone is wrong on the internet
The kerfuffle in the writing world this week, where an author created sockpuppet accounts to downvote other books and upvote her own; why writers aren't in competition; and the key importance of friendship.
Sign up for FaRoAdvent here! https://farofeb.com/faroadvent/
The XKCD comic is here https://xkcd.com/386/
Nyad is here https://www.imdb.com/title/tt5302918/
Join my Patreon and Discord for mentoring, coaching, and conversation with me! Find it at https://www.patreon.com/JeffesCloset
You can always buy print copies of my books from my local indie, Beastly Books! https://www.beastlybooks.com/
If you want to support me and the podcast, click on the little heart or follow this link (https://www.paypal.com/paypalme/jeffekennedy).
Sign up for my newsletter here! (https://landing.mailerlite.com/webforms/landing/r2y4b9)
You can watch this podcast on YouTube here https://youtu.be/oFKI9i6uWPY
Support the show
Contact Jeffe!
Tweet me at @JeffeKennedy
Visit my website https://jeffekennedy.com
Follow me on Amazon or BookBub
Sign up for my Newsletter!
Find me on Instagram and TikTok!
Thanks for listening!
Guests Amanda Casari | Julie Ferraioli | Juniper Lovato Panelist Richard Littauer Show Notes In today's episode of Sustain, Richard is joined by guests, Amanda Casari, devrel engineer and open source researcher at Google Open Source Programs Office, Julie Ferraioli, an independent open source strategist, researcher, practitioner, and Partner at Open Chapters, and Juniper Lovato, Director of partnerships and programs at the Vermont Complex Systems Center at UVM and Data Ethics researcher. Amanda, Julia, and Juniper join the discussion, bringing a wealth of expertise in the open source domain. The conversation gravitates towards an article co-authored by the guests, striking a balance between open source software and open source ecosystems research. The episode dives deep into the “10 simple things” format of the article, the crucial importance of collective conversations, and a keen exploration of open source researchers. Hit download now to hear more cool stuff! [00:01:29] Richard tells us why he invited our three guests today and he talks about their previous accomplishments and backgrounds. [00:02:17] Our discussion moves to the title of a new article co-authored by the guests. We hear about the intended audience of the article and the distinction made between open source software and open source ecosystems research. [00:03:31] Richard brings up where the article fits in the academic landscape, and it's revealed to be more editorial than research. [00:04:17] There's a conversation about the “10 simple things” format, its origin, and the motivation behind it. They put an emphasis on the need for collective conversation and the value of sharing experiences and knowledge. [00:07:28] Richard brings up the idea of open source researchers and mentions various figures and institutions involved in open source research. Juniper clarifies the target audience for the article and its intentions, Julie shares her perspective from the industry side and the importance of a critical framework, and Amanda expresses her emotional response to some researchers' approach towards the open source community. [00:12:03] Julie discusses the emotional challenges that inspired the paper's best practices emphasizing not repeating negative behaviors, and Juniper notes tension in research between benefits for the community and for the researchers emphasizing understanding norms and values for studying open source communities. [00:13:52] Richard mentions there are nine principles in the paper and asks about the principle regarding treating open source ecosystems as systems “in production.” Amanda highlights the importance of considering the real-world impact of research in open source and mentions an incident where a university was banned from the Linux kernel due to disruptive changes. [00:16:33] Julie emphasizes the potential broader impact on industry systems when modifying open source systems and she raises the point that tampering with open source systems might inadvertently affect critical infrastructure. Amanda comments on the increasing cybersecurity concerns around open source. [00:19:18] Richard brings up a real-world example of a university introducing bugs to the Linux kernel and points out the need for considering ethical implications beyond just production systems. [00:20:59] Richard draws parallels between addressing these issues and addressing racism, and Juniper adds that the scientific process is ongoing and should evolve with technology and societal values. 
[00:21:53] Julie describes the complexity of open source funding and compensation and points out the challenge in understanding motivations and expectations of open source participants. [00:24:07] Amanda emphasizes the difficulty of summarizing each section, noting that each one could be a chapter or book and she expresses her concerns about not just individual equity but organizational equity. [00:25:59] Juniper raises the issue of invisible labor in open source. [00:26:39] Julie highlights the importance of recognizing that open source repository data might not capture all the activity and contributions made by community members. [00:27:37] Amanda discusses the challenges and importance of capturing data, especially when it may put individuals at risk. Juniper stresses the importance of involving communities in the research process and gaining their consent, ensuring their dignity, security, and privacy. [00:29:49] Julie discusses the complexities of identity within the open source community, highlighting that individuals can hold multiple identities in this space. [00:31:10] Richard adds that the insight shared are not just for open source researchers but also for anyone involved in the open source ecosystem. He emphasizes the need to be aware of biases and the importance of understanding the data one works with. [00:32:22] Richard prompts a summary of the main points in the paper, which are read by our guests. [00:34:48] Find out where you can learn more about our guests and their work online. Quotes [00:20:08] “Production as the end line for ethical values leads to a lot of really thorny edge cases that are going to ultimately hurt the communities of people who aren't working on production ready systems.” [00:21:20] “Just as open source is always in production, so is the scientific process.” [00:23:24] “Even having the privilege of time to dedicate to open source is not available to all.” [00:24:26] “It's just not individual equity but organizational equity.” [00:25:47] “We can't ignore the very large industry that is open source that has all that money moving around and where it's going is a question we should all be asking.” [00:26:00] “There's a lot of invisible labor in open source.” [00:28:32] “Leaving out communities from the scientific process of the research process leaves open these vulnerabilities without giving them a voice to what kind of research is being done about them without their consent.” [00:29:17] “What we are starting to consider acceptable surveillance in public is really being challenged.” [00:29:33] “It's really important for us to make sure that we're maintaining people's dignity, security, and privacy while we're doing this kind of research.” Spotlight [00:35:45] Richard's spotlight is The Long Trail that he's going to hike. [00:36:17] Amanda's spotlight is contributor-experience.org and the PyPI subpoena transparency report. [00:37:20] Julie's spotlight is the book, Data Feminism. [00:38:09] Juniper's spotlight is a new tool called, XGI. 
Links SustainOSS (https://sustainoss.org/) SustainOSS Twitter (https://twitter.com/SustainOSS?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor) SustainOSS Discourse (https://discourse.sustainoss.org/) podcast@sustainoss.org (mailto:podcast@sustainoss.org) SustainOSS Mastodon (https://mastodon.social/tags/sustainoss) Open Collective-SustainOSS (Contribute) (https://opencollective.com/sustainoss) Richard Littauer Twitter (https://twitter.com/richlitt?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor) Amanda Casari Twitter (https://twitter.com/amcasari) Amanda Casari Mastodon (https://hachyderm.io/@amcasari) Google Open Source (https://opensource.google/) Open Source Stories (http://opensourcestories.org/) Julia Ferraioli Twitter (https://twitter.com/juliaferraioli) Julia Ferraioli Website (https://www.juliaferraioli.com/) Open Chapters (https://openchapters.tech/) Juniper Lovato Website (https://juniperlovato.com/) Juniper Lovato Twitter (https://twitter.com/juniperlov) Vermont Complex Systems Center-UVM (https://www.complexityexplorer.org/explore/resources/75-vermont-complex-systems-center) Sustain Podcast-Episode 111: Amanda Casari on ACROSS and Measuring Contributions in OSS (https://podcast.sustainoss.org/111) XKCD (https://xkcd.com/) Beyond the Repository: Best practices for open source ecosystems researchers by Amanda Casari, Julia Ferraioli, and Juniper Lovato (https://dl.acm.org/doi/pdf/10.1145/3595879) Operationalizing the CARE and FAIR Principles for Indigenous data futures (scientific data) (https://www.nature.com/articles/s41597-021-00892-0) The Long Trail (https://www.greenmountainclub.org/the-long-trail/) Welcome to the Contributor Experience Handbook (https://contributor-experience.org/) Contributor experience-Why it matters (SciPy 2023) (https://blog.pypi.org/posts/2023-05-24-pypi-was-subpoenaed/) PyPI was subpoenaed by Ee Durbin (https://blog.pypi.org/posts/2023-05-24-pypi-was-subpoenaed/) Data Feminism by Catherine D'Ignazio and Lauren F. Klein (https://mitpress.mit.edu/9780262547185/data-feminism/) The CompleX Group Interactions (XGI) (https://xgi.readthedocs.io/en/stable/index.html) Credits Produced by Richard Littauer (https://www.burntfen.com/) Edited by Paul M. Bahr at Peachtree Sound (https://www.peachtreesound.com/) Show notes by DeAnn Bahr Peachtree Sound (https://www.peachtreesound.com/) Special Guests: Amanda Casari, Julia Ferraioli, and Juniper Lovato.
Laurent Doguin, Director of Developer Relations & Strategy at Couchbase, joins Corey on Screaming in the Cloud to talk about the work that Couchbase is doing in the world of databases and developer relations, as well as the role of AI in their industry and beyond. Together, Corey and Laurent discuss Laurent's many different roles throughout his career including what made him want to come back to a role at Couchbase after stepping away for 5 years. Corey and Laurent dig deep on how Couchbase has grown in recent years and how it's using artificial intelligence to offer an even better experience to the end user.
About Laurent
Laurent Doguin is Director of Developer Relations & Strategy at Couchbase (NASDAQ: BASE), a cloud database platform company that 30% of the Fortune 100 depend on.
Links Referenced: Couchbase: https://couchbase.com XKCD #927: https://xkcd.com/927/ dbdb.io: https://dbdb.io DB-Engines: https://db-engines.com/en/ Twitter: https://twitter.com/ldoguin LinkedIn: https://www.linkedin.com/in/ldoguin/
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Are you navigating the complex web of API management, microservices, and Kubernetes in your organization? Solo.io is here to be your guide to connectivity in the cloud-native universe!Solo.io, the powerhouse behind Istio, is revolutionizing cloud-native application networking. They brought you Gloo Gateway, the lightweight and ultra-fast gateway built for modern API management, and Gloo Mesh Core, a necessary step to secure, support, and operate your Istio environment.Why struggle with the nuts and bolts of infrastructure when you can focus on what truly matters - your application. Solo.io's got your back with networking for applications, not infrastructure. Embrace zero trust security, GitOps automation, and seamless multi-cloud networking, all with Solo.io.And here's the real game-changer: a common interface for every connection, in every direction, all with one API. It's the future of connectivity, and it's called Gloo by Solo.io.DevOps and Platform Engineers, your journey to a seamless cloud-native experience starts here. Visit solo.io/screaminginthecloud today and level up your networking game.Corey: Welcome to Screaming in the Cloud, I'm Corey Quinn. This promoted guest episode is brought to us by our friends at Couchbase. And before we start talking about Couchbase, I would rather talk about not being at Couchbase. Laurent Doguin is the Director of Developer Relations and Strategy at Couchbase. First, Laurent, thank you for joining me.Laurent: Thanks for having me. It's a pleasure to be here.Corey: So, what I find interesting is that this is your second time at Couchbase, where you were a developer advocate there for a couple of years, then you had five years of, we'll call it wilderness I suppose, and then you return to be the Director of Developer Relations. Which also ties into my personal working thesis of, the best way to get promoted at a lot of companies is to leave and then come back. But what caused you to decide, all right, I'm going to go work somewhere else? And what made you come back?Laurent: So, I've joined Couchbase in 2014. Spent about two or three years as a DA.
And during those three years as a developer advocate, I've been advocating SQL database and I—at the time, it was mostly DBAs and ops I was talking to. And DBA and ops are, well, recent, modern ops are writing code, but they were not the people I wanted to talk to you when I was a developer advocate. I came from a background of developer, I've been a platform engineer for an enterprise content management company. I was writing code all day.And when I came to Couchbase, I realized I was mostly talking about Docker and Kubernetes, which is still cool, but not what I wanted to do. I wanted to talk about developers, how they use database to be better app, how they use key-value, and those weird thing like MapReduce. At the time, MapReduce was still, like, a weird thing for a lot of people, and probably still is because now everybody's doing SQL. So, that's what I wanted to talk about. I wanted to… engage with people identify with, really. And so, didn't happen. Left. Built a Platform as a Service company called Clever Cloud. They started about four or five years before I joined. We went from seven people to thirty-one LFs, fully bootstrapped, no VC. That's an interesting way to build a company in this age.Corey: Very hard to do because it takes a lot of upfront investment to build software, but you can sort of subsidize that via services, which is what we've done here in some respects. But yeah, that's a hard road to walk.Laurent: That's the model we had—and especially when your competition is AWS or Azure or GCP, so that was interesting. So entrepreneurship, it's not for everyone. I did my four years there and then I realized, maybe I'm going to do something else. I met my former colleagues of Couchbase at a software conference called Devoxx, in France, and they told me, “Well, there's a new sheriff in town. You should come back and talk to us. It's all about developers, we are repositioning, rehandling the way we do marketing at Couchbase. Why not have a conversation with our new CMO, John Kreisa?”And I said, “Well, I mean, I don't have anything to do. I actually built a brewery during that past year with some friends. That was great, but that's not going to feed me or anything. So yeah, let's have a conversation about work.” And so, I talked to John, I talked to a bunch of other people, and I realized [unintelligible 00:03:51], he actually changed, like, there was a—they were purposely going [against 00:03:55] developer, talking to developer. And that was not the case, necessarily, five, six years before that.So, that's why I came back. The product is still amazing, the people are still amazing. It was interesting to find a lot of people that still work there after, what, five years. And it's a company based in… California, headquartered in California, so you would expect people to, you know, jump around a bit. And I was pleasantly surprised to find the same folks there. So, that was also one of the reasons why I came back.Corey: It's always a strong endorsement when former employees rejoin a company. Because, I don't know about you, but I've always been aware of those companies you work for, you leave. Like, “Aw, I'm never doing that again for love or money,” just because it was such an unpleasant experience. So, it speaks well when you see companies that do have a culture of boomerangs, for lack of a better term.Laurent: That's the one we use internally, and there's a couple. 
More than a couple.Corey: So, one thing that seems to have been a thread through most of your career has been an emphasis on developer experience. And I don't know if we come at it from the same perspective, but to me, what drives nuts is honestly, with my work in cloud, bad developer experience manifests as the developer in question feeling like they're somehow not very good at their job. Like, they're somehow not understanding how all this stuff is supposed to work, and honestly, it leads to feeling like a giant fraud. And I find that it's pernicious because even when I intellectually know for a fact that I'm not the dumbest person ever to use this tool when I don't understand how something works, the bad developer experience manifests to me as, “You're not good enough.” At least, that's where I come at it from.Laurent: And also, I [unintelligible 00:05:34] to people that build these products because if we build the products, the user might be in the same position that we are right now. And so, we might be responsible for that experience [unintelligible 00:05:43] a developer, and that's not a great feeling. So, I completely agree with you. I've tried to… always on software-focused companies, whether it was Nuxeo, Couchbase, Clever Cloud, and then Couchbase. And I guess one of the good thing about coming back to a developer-focused era is all the product alignments.Like, a lot of people talk about product that [grows 00:06:08] and what it means. To me what it means was, what it meant—what it still means—building a product that developer wants to use, and not just want to, sometimes it's imposed to you, but actually are happy to use, and as you said, don't feel completely stupid about it in front of the product. It goes through different things. We've recently revamped our Couchbase UI, Couchbase Capella UI—Couchbase Capella is a managed cloud product—and so we've added a lot of in-product getting started guidelines, snippets of code, to help developers getting started better and not have that feeling of, “What am I doing? Why is it not working and what's going on?”Corey: That's an interesting decision to make, just because historically, working with a bunch of tools, the folks who are building the documentation working with that tool, tend to generally be experts at it, so they tend to optimize for improving things for the experience of someone has been using it for five years as opposed to the newcomer. So, I find that the longer a product is in existence, in many cases, the worse the new user experience becomes because companies tend to grow and sprawl in different ways, the product does likewise. And if you don't know the history behind it, “Oh, your company, what does it do?” And you look at the website and there's 50 different offerings that you have—like, the AWS landing page—it becomes overwhelming very quickly. So, it's neat to see that emphasis throughout the user interface on the new developer experience.On the other side of it, though, how are the folks who've been using it for a while respond to those changes? Because it's frustrating for me at least, when I log into a new account, which happens periodically within AWS land, and I have this giant series of onboarding pop-ups that I have to click to make go away every single time. How are they responding to it?Laurent: Yeah, it's interesting. One of the first things that struck me when I joined Couchbase the first time was the size of the technical documentation team. 
Because the whole… well, not the whole point, but part of the reason why they exist is to do that, to make sure that you understand all the differences and that it doesn't feel like the [unintelligible 00:08:18] what the documentation or the product pitch or everything. Like, they really, really, really emphasize on this from the very beginning. So, that was interesting.So, when you get that culture built into the products, well, the good thing is… when people try Couchbase, they usually stick with Couchbase. My main issue as a Director of the Developer Relations is not to make people stick with Couchbase because that works fairly well with the product that we have; it's to make them aware that we exist. That's the biggest issue I have. So, my goal as DevRel is to make sure that people get the trial, get through the trial, get all that in-app context, all that helps, get that first sample going, get that first… I'm not going to say product built because that's even a bit further down the line, but you know, get that sample going. We have a code playground, so when you're in the application, you get to actually execute different pieces of code, different languages. And so, we get those numbers and we're happy to see that people actually try that. And that's a, well, that's a good feeling.Corey: I think that there's a definite lack of awareness almost industry-wide around the fact that as the diversity of your customers increases, you have to have different approaches that meet them at various points along the journey. Because things that I've seen are okay, it's easy to ass—even just assuming a binary of, “Okay, I've done this before a thousand times; this is the thousand and first, I don't need the Hello World tutorial,” versus, “Oh, I have no idea what I'm doing. Give me the Hello World tutorial,” there are other points along that continuum, such as, “Oh, I used to do something like this, but it's been three years. Can you give me a refresher,” and so on. I think that there's a desire to try and fit every new user into a predefined persona and that just doesn't work very well as products become more sophisticated.Laurent: It's interesting, we actually have—we went through that work of defining those personas because there are many. And that was the origin of my departure. I had one person, ops slash DBA slash the person that maintain this thing, and I wanted to talk to all the other people that built the application space in Couchbase. So, we broadly segment things into back-end, full-stack, and mobile because Couchbase is also a mobile database. Well, we haven't talked too much about this, so I can explain you quickly what Couchbase is.It's basically a distributed JSON database with an integrated caching layer, so it's reasonably fast. So it does cache, and when the key-value is JSON, then you can create with SQL, you can do full-text search, you can do analytics, you can run user-defined function, you get triggers, you get all that actual SQL going on, it's transactional, you get joins, ANSI joins, you get all those… windowing function. It's modern SQL on the JSON database. So, it's a general-purpose database, and it's a general-purpose database that syncs.I think that's the important part of Couchbase. We are very good at syncing cluster of databases together. So, great for multi-cloud, hybrid cloud, on-prem, whatever suits you. 
And we also sync on the device, there's a thing called Couchbase Mobile, which is a local database that runs in your phone, and it will sync automatically to the server. So, a general-purpose database that syncs and that's quite modern.We try to fit as much way of growing data as possible in our database. It's kind of a several-in-one database. We call that a data platform. It took me a while to warm up to the word platform because I used to work for an enterprise content management platform and then I've been working for a Platform as a Service and then a data platform. So, it took me a bit of time to warm up to that term, but it explained fairly well, the fact that it's a several-in-one product and we empower people to do the trade-offs that they want.Not everybody needs… SQL. Some people just need key-value, some people need search, some people need to do SQL and search in the same query, which we also want people to do. So, it's about choices, it's about empowering people. And that's why the word platform—which can feel intimidating because it can seem complex, you know, [for 00:12:34] a lot of choices. And choices is maybe the enemy of a good developer experience.And, you know, we can try to talk—we can talk for hours about this. The more services you offer, the more complicated it becomes. What's the sweet spots? We did—our own trade-off was to have good documentation and good in-app help to fix that complexity problem. That's the trade-off that we did.Corey: Well, we should probably divert here just to make sure that we cover the basic groundwork for those who might not be aware: what exactly is Couchbase? I know that it's a database, which honestly, anything is a database if you hold it incorrectly enough; that's my entire shtick. But what is it exactly? Where does it start? Where does it stop?Laurent: Oh, where does it start? That's an interesting question. It's a… a merge—some people would say a fork—of Apache CouchDB, and membase. Membase was a distributed key-value store and CouchDB was this weird Erlang and C JSON REST API database that was built by Damian Katz from Lotus Notes, and that was in 2006 or seven. That was before Node.js.Let's not care about the exact date. The point is, a JSON and REST API-enabled database before Node.js was, like, a strong [laugh] power move. And so, those two merged and created the first version of Couchbase. And then we've added all those things that people want to do, so SQL, full-text search, analytics, user-defined function, mobile sync, you know, all those things. So basically, a general-purpose database.Corey: For what things is it not a great fit? This is always my favorite question to ask database folks because the zealot is going to say, “It's good for every use case under the sun. Use it for everything, start to finish”—Laurent: Yes.Corey: —and very few databases can actually check that box.Laurent: It's a very interesting question because when I pitch like, “We do all the things,” because we are a platform, people say, “Well, you must be doing lots of trade-offs. Where is the trade-off?” The trade-off is basically the way you store something is going to determine the efficiency of your [growing 00:14:45]—or the way you [grow 00:14:47] it. And that's one of the first thing you learn in computer science. You learn about data structure and you know that it's easier to get something in a hashmap when you have the key than passing your whole list of elements and checking your data, is it right one? 
It's the same for databases.So, our different services are different ways to store the data and to query it. So, where is it not good, it's where we don't have an index or a service that answer to the way you want to query data. We don't have a graph service right now. You can still do recursive common table expression for the SQL nerds out there, that will allow you to do somewhat of a graph way of querying your data, but that's not, like, actual—that's not a great experience for people were expecting a graph, like a Neo4j or whatever was a graph database experience.So, that's the trade-off that we made. We have a lot of things at the same place and it can be a little hard, intimidating to operate, and the developer experience can be a little, “Oh, my God, what is this thing that can do all of those features?” At the same time, that's just, like, one SDK to learn for all of the features we've just talked about. So, that's what we did. That's a trade-off that we did.It sucks to operate—well, [unintelligible 00:16:05] Couchbase Capella, which is a lot like a vendor-ish thing to say, but that's the value props of our managed cloud. It's hard to operate, we'll operate this for you. We have a Kubernetes operator. If you are one of the few people that wants to do Kubernetes at home, that's also something you can do. So yeah, I guess what we cannot do is the thing that Route 53 and [Unbound 00:16:26] and [unintelligible 00:16:27] DNS do, which is this weird DNS database thing that you like so much.Corey: One thing that's, I guess, is a sign of the times, but I have to confess that I'm relatively skeptical around, when I pull up couchbase.com—as one does; you're publicly traded; I don't feel that your company has much of a choice in this—but the first thing it greets me with is Couchbase Capella—which, yes, that is your hosted flagship product; that should be the first thing I see on the website—then it says, “Announcing Capella iQ, AI-powered coding assistance for developers.” Which oh, great, not another one of these.So, all right, give me the pitch. What is the story around, “Ooh, everything that has been a problem before, AI is going to make it way better.” Because I've already talked to you about developer experience. I know where you stand on these things. I have a suspicion you would not be here to endorse something you don't believe in. How does the AI magic work in this context?Laurent: So, that's the thing, like, who's going to be the one that get their products out before the other? And so, we're announcing it on the website. It's available on the private preview only right now. I've tried it. It works.How does it works? The way most chatbot AI code generation work is there's a big model, large language model that people use and that people fine-tune into in order to specialize it to the tasks that they want to do. The way we've built Couchbase iQ is we picked a very famous large language model, and when you ask a question to a bot, there's a context, there's a… the size of the window basically, that allows you to fit as much contextual information as possible. 
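Laurent's data-structure point above, that fetching by key from a hash map beats scanning a whole list, is the same reason a database index or key-value service makes some access patterns cheap and leaves others expensive. A minimal Python sketch of the contrast, with made-up data and not taken from the episode:

```python
# Contrast a linear scan over a list with a keyed lookup in a dict (hash map).
import timeit

users = [{"id": i, "name": f"user{i}"} for i in range(100_000)]
users_by_id = {u["id"]: u for u in users}  # the "hashmap": one key, one jump


def find_by_scan(uid: int):
    # O(n): walk the whole list until the matching element turns up
    for u in users:
        if u["id"] == uid:
            return u
    return None


def find_by_key(uid: int):
    # O(1) on average: hash the key and jump straight to the value
    return users_by_id.get(uid)


if __name__ == "__main__":
    print("scan :", timeit.timeit(lambda: find_by_scan(99_999), number=100))
    print("keyed:", timeit.timeit(lambda: find_by_key(99_999), number=100))
```

That is essentially the trade-off Laurent describes: each service is another way of laying out the same data so that a particular kind of query becomes the cheap, keyed case.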
The way it works and the reason why it's integrated into Couchbase Capella is we make sure that we preload that context as much as possible and fine-tune that model, that [foundation 00:18:19] model, as much as possible to do whatever you want to do with Couchbase, which usually falls into several—a couple of categories, really—well maybe three—you want to write SQL, you want to generate data—actually, that's four—you want to generate data, you want to generate code, and if you paste some SQL code or some application code, you want to ask that model, what does do? It's especially true for SQL queries.And one of the questions that many people ask and are scared of with chatbot is how does it work in terms of learning? If you give a chatbot to someone that's very new to something, and they're just going to basically use a chatbot like Stack Overflow and not really think about what they're doing, well it's not [great 00:19:03] right, but because that's the example that people think most developer will do is generate code. Writing code is, like, a small part of our job. Like, a substantial part of our job is understanding what the code does.Corey: We spend a lot more time reading code than writing it, if we're, you know—Laurent: Yes.Corey: Not completely foolish.Laurent: Absolutely. And sometimes reading big SQL query can be a bit daunting, especially if you're new to that. And one of the good things that you get—Corey: Oh, even if you're not, it can still be quite daunting, let me assure you.Laurent: [laugh]. I think it's an acquired taste, let's be honest. Some people like to write assembly code and some people like to write SQL. I'm sort of in the middle right now. You pass your SQL query, and it's going to tell you more or less what it does, and that's a very nice superpower of AI. I think that's [unintelligible 00:19:48] that's the one that interests me the most right now is using AI to understand and to work better with existing pieces of code.Because a lot of people think that the cost of software is writing the software. It's maintaining the codebase you've written. That's the cost of the software. That's our job as developers should be to write legacy code because it means you've provided value long enough. And so, if in a company that works pretty well and there's a lot of legacy code and there's a lot of new people coming in and they'll have to learn all those things, and to be honest, sometimes we don't document stuff as much as we should—Corey: “The code is self-documenting,” is one of the biggest lies I hear in tech.Laurent: Yes, of course, which is why people are asking retired people to go back to COBOL again because nobody can read it and it's not documented. Actually, if someone's looking for a company to build, I guess, explaining COBOL code with AI would be a pretty good fit to do in many places.Corey: Yeah, it feels like that's one of those things that would be of benefit to the larger world. The counterpoint to that is you got that many business processes wrapped around something running COBOL—and I assure you, if you don't, you would have migrated off of COBOL long before now—it's making sure that okay well, computers, when they're in the form of AI, are very, very good at being confident-sounding when they talk about things, but they can also do that when they're completely wrong. It's basically a BS generator. And that is a scary thing when you're taking a look at something that broad. 
I mean, I'll use the AI coding assistance for things all the time, but those things look a lot more like, “Okay, I haven't written CloudFormation from scratch in a while. Build out the template, just because I forget the exact sequence.” And it's mostly right on things like that. But then you start getting into some of the real nuanced areas like race conditions and the rest, and often it can make things worse instead of better. That's the scary part, for me, at least.Laurent: Most coding assistants are… and actually, each time you ask its opinion to an AI, they say, “Well, you should take this with a grain of salt and we are not a hundred percent sure that this is the case.” And this is, make sure you proofread that, which again, from a learning perspective, can be a bit hard to give to new students. Like, you're giving something to someone and might—that assumes is probably as right as Wikipedia but actually, it's not. And it's part of why it works so well. Like, the anthropomorphism that you get with chatbots, like, this, it feels so human. That's why it get people so excited about it because if you think about it, it's not that new. It's just the moment it took off was the moment it looked like an assertive human being.Corey: As you take a look through, I guess, the larger ecosystem now, as well as the database space, given that is where you specialize, what do you think people are getting right and what do you think people are getting wrong?Laurent: There's a couple of ways of seeing this. Right now, when I look at from the outside, every databases is going back to SQL, I think there's a good reason for that. And it's interesting to put into perspective with AI because when you generate something, there's probably less chance to generate something wrong with SQL than generating something with code directly. And I think five generation—was it four or five generation language—there some language generation, so basically, the first innovation is assembly [into 00:23:03] in one and then you get more evolved languages, and at some point you get SQL. And SQL is a way to very shortly express a whole lot of business logic.And I think what people are doing right now is going back to SQL. And it's been impressive to me how even new developers that were all about [ORMs 00:23:25] and [no-DMs 00:23:26], and you know, avoiding writing SQL as much as possible, are actually back to it. And that's, for an old guy like me—well I mean, not that old—it feels good. I think SQL is coming back with a vengeance and that makes me very happy. I think what people don't realize is that it also involves doing data modeling, right, and stuff because database like Couchbase that are schemaless exist. You should store your data without thinking about it, you should still do data modeling. It's important. So, I think that's the interesting bits. What are people doing wrong in that space? I'm… I don't want to say bad thing about other databases, so I cannot even process that thought right now.Corey: That's okay. I'm thrilled to say negative things about any database under the sun. They all haunt me. I mean, someone wants to describe SQL to me is the chess of the programming world and I feel like that's very accurate. I have found that it is far easier in working with databases to make mistakes that don't wash off after a new deployment than it is in most other realms of technology. 
And when you're lucky and have a particular aura, you tend to avoid that stuff, at least that was always my approach.Laurent: I think if I had something to say, so just like the XKCD about standards: like, “there's 14 standards. I'm going to do one that's going to unify them all.” And it's the same with database. There's a lot… a [laugh] lot of databases. Have you ever been on a website called dbdb.io?Corey: Which one is it? I'm sorry.Laurent: Dbdb.io is the database of databases, and it's very [laugh] interesting website for database nerds. And so, if you're into database, dbdb.io. And you will find Couchbase and you will find a whole bunch of other databases, and you'll get to know which database is derived from which other database, you get the history, you get all those things. It's actually pretty interesting.Corey: I'm familiar with DB-Engines, which is sort of like the ranking databases by popularity, and companies will bend over backwards to wind up hitting all of the various things that they want in that space. The counterpoint with all of it is that it's… it feels historically like there haven't exactly been an awful lot of, shall we say, huge innovations in databases for the past few years. I mean, sure, we hear about vectors all the time now because of the joy that's AI, but smarter people than I are talking about how, well that's more of a feature than it is a core database. And the continual battle that we all hear about constantly is—and deal with ourselves—of should we use a general-purpose database, or a task-specific database for this thing that I'm doing remains largely unsolved.Laurent: Yeah, what's new? And when you look at it, it's like, we are going back to our roots and bringing SQL again. So, is there anything new? I guess most of the new stuff, all the interesting stuff in the 2010s—well, basically with the cloud—were all about the distribution side of things and were all about distributed consensus, Zookeeper, etcd, all that stuff. Couchbase is using an RAFT-like algorithm to keep every node happy and under the same cluster.I think that's one of the most interesting things we've had for the past… well, not for the past ten years, but between, basically, 20 or… between the start of AWS and well, let's say seven years ago. I think the end of the distribution game was brought to us by the people that have atomic clock in every data center because that's what you use to synchronize things. So, that was interesting things. And then suddenly, there wasn't that much innovation in the distributed world, maybe because Aphyr disappeared from Twitter. That might be one of the reason. He's not here to scare people enough to be better at that.Aphyr was the person behind the test called the Jepsen Test [shoot 00:27:12]. I think his blog engine was called Call Me Maybe, and he was going through every distributed system and trying to break them. And that was super interesting. And it feels like we're not talking that much about this anymore. It really feels like database have gone back to the status of infrastructure.In 2010, it was not about infrastructure. It was about developer empowerment. It was about serving JSON and developer experience and making sure that you can code faster without some constraint in a distributed world. And like, we fixed this for the most part. And the way we fixed this—and as you said, lack of innovation, maybe—has brought databases back to an infrastructure layer.Again, it wasn't the case 15 years a—well, 2023—13 years ago. And that's interesting. 
When you look at the new generation of databases, sometimes it's just a gateway on top of a well-known database and they call that a database, but it provides higher-level services, provides higher-level bricks, better developer experience to developer to build stuff faster. We've been trying to do this with Couchbase App Service and our sync gateway, which is basically a gateway on top of a Couchbase cluster that allow you to manage authentication, authorization, that allows you to manage synchronization with your mobile device or with websites. And yeah, I think that's the most interesting thing to me in this industry is how it's been relegated back to infrastructure, and all the cool stuff, new stuff happens on the layer above that.Corey: I really want to thank you for taking the time to speak with me. If people want to learn more, where's the best place for them to find you?Laurent: Thanks for having me and for entertaining this conversation. I can be found anywhere on the internet with these six letters: L-D-O-G-U-I-N. That's actually 7 letters. Ldoguin. That's my handle on pretty much any social network. Ldoguin. So X, [BlueSky 00:29:21], LinkedIn. I don't know where to be anymore.Corey: I hear you. We'll put links to all of it in the [show notes 00:29:27] and let people figure out where they want to go on that. Thank you so much for taking the time to speak with me today. I really do appreciate it.Laurent: Thanks for having me.Corey: Laurent Doguin, Director of Developer Relations and Strategy at Couchbase. I'm Cloud Economist Corey Quinn and this episode has been brought to us by our friends at Couchbase. If you enjoyed this episode, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry comment that you're not going to be able to submit properly because that platform of choice did not pay enough attention to the experience of typing in a comment.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
Doc Searls and Simon Phipps talk with Luis Villa of Tidelift about how it helps code maintainers get paid, plus what's happening in AI, ML, regulation and more. Hosts: Doc Searls and Simon Phipps Guest: Luis Villa Download or subscribe to this show at https://twit.tv/shows/floss-weekly Think your open source project should be on FLOSS Weekly? Email floss@twit.tv. Thanks to Lullabot's Jeff Robbins, web designer and musician, for our theme music. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsor: kolide.com/floss
Joël was selected to speak at RubyConf in San Diego! After spending a month testing out living in Upstate New York, Stephanie is back in Chicago. Stephanie reflects on a recent experience where she had to provide an estimate for a project, even though she didn't have enough information to do so accurately. In this episode, Stephanie and Joël explore the challenges of providing estimates, the importance of acknowledging uncertainty, and the need for clear communication and transparency when dealing with project timelines and scope. RubyConf 2023 (https://rubyconf.org/) How to estimate well (https://thoughtbot.com/blog/how-to-estimate-feature-development-time-maybe-don-t) XKCD hard problems (https://xkcd.com/1425/) Transcript: STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn. JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way. STEPHANIE: So, Joël, what's new in your world? JOËL: Big piece of news in my world: I recently got accepted to speak at RubyConf in San Diego next month in November. I'm really excited. I'm going to be talking about the concept of time and how that's actually multiple different things and the types of interactions that do and do not make sense when working with time. STEPHANIE: Yay. That's so exciting. Congratulations. I am very excited about this topic. I'm wondering, is this something that you've been thinking about doing for a while now, or was it just an idea that was sparked recently? JOËL: It's definitely a topic I've been thinking about for a long time. STEPHANIE: Time? [laughs] JOËL: Haha. STEPHANIE: Sorry, that was an easy one [laughs]. JOËL: The idea that we often use the English word time to refer to a bunch of, like, fundamentally different quantities and that, oftentimes, that can sort of blur into one another. So, the idea that a particular point in time might be different from a duration, might be different from a time of day, might be different to various other quantities that we refer to generically as time is something that's been in the back of my mind for quite a while. But I think turning that into a conference talk was a more recent idea. STEPHANIE: Yeah, I'm curious, I guess, like, what was it that made you feel like, oh, like, this would be beneficial for other people? Did everything just come together, and you're like, oh, I finally have figured out time [laughs]; now I have this very clear mental model of it that I want to share with the world? JOËL: I think it was sparked by a conversation I had with another member of the thoughtbot team. And it was just one of those where it's like, hey, I just had this really interesting conversation pulling on this idea that's been in the back of my mind for years. You know, it's conference season, and why not make that into a talk proposal? As often, you know, the best talk proposals are, at least for me, I don't always think ahead of time, oh, this would be a great topic. But then, all of a sudden, it comes up in a conversation with a colleague or a client, or it becomes really relevant in some work that I'm doing. It happens to be conference season, and like, oh, that's something I want to talk about now. STEPHANIE: Yeah, I like that a lot. I was just thinking about something I read recently. It was about creativity and art and how long a piece of work takes. 
And someone basically said it really just takes your whole life sometimes, right? It's like all of your experiences accumulated together that becomes whatever the body of work is. Like, all of that time spent maybe turning the idea in your head or just kind of, like, sitting with it or having those conversations, all the bugs you've probably encountered [laughs] involving date times, and all of that coalescing into something you want to create. JOËL: And you build this sort of big web of ideas, not all that makes sense to talk about in a conference talk. So, one of the classic sources of bugs when dealing with time are time zone and daylight savings. I've chosen not to include those as part of this talk. I think other people have talked about them. I think it's less interesting or less connected to the core idea that I have that, like, there are different types of time. Let's dig into what that means for us. So, I purposefully left that out. But there's definitely a lot that could be said for those. STEPHANIE: Awesome. Well, I really look forward to watching your talk when it is released to the public. JOËL: So, our listeners won't be able to tell, but we're on a video call right now. And I can see from your background that you are back at home in Chicago. It's been a few weeks since we've recorded together. And, in the last episode we did, you were trying out living somewhere in Upstate New York. How was that experience? And what has the transition back to Chicago been for you? STEPHANIE: Yeah, thanks for asking. I was in Upstate New York for the whole month of September. And then I took the last two weeks off of work to, you know, just really enjoy being there and make sure I got to do everything that I wanted to do out there before I came home to, you know, figure out, like, is this a place where I want to move? And yeah, this is my first, like, real full week back at work, back at home. And I have to say it's kind of bittersweet. I think we really enjoyed our time out there, my partner and I. And coming back home, especially, you know, when you're in a stage of life where you're wanting to make a change, it can be a little tough to be like, uh, okay, like, now I have to go back [laughs] to what my life was like before. But we've been very intentional about trying to bring back some of the things that we enjoyed being out there, like, back into just our regular day-to-day lives. So, over the weekend, we were making sure that we wanted to spend some time in nature because that's something that we were able to do a lot of during our time in New York. And, yeah, I think just bringing a bit of that, like, vacation energy into day-to-day life so the grind of kind of work doesn't become too much. JOËL: Anything in particular that you've tried to bring back from that experience to your daily life in Chicago? STEPHANIE: Yeah. I think, you know, when you're in a new place, everything is very exciting and, like, novel. Before work or, like, during my breaks, I would go out into the world and take a little walk and, like, look at the houses on the street that I was staying at. Or there's just a sense of wonder, I suppose, where everywhere you look, you're like, oh, like, this is all new. And I felt very, like, present when I was doing that. 
And over time, when you've been somewhere for a long time, you lose a little bit of that sense of, like, willingness to be open to new things, or just, like, yeah, that sense of like, oh, like, curiosity, because you feel like you know somewhere and, like, you kind of start to expect oh, like, this street will be exactly like how I've walked a million times [laughs]. But trying to look around a little more, right? Like, be a little aware and be like, oh, like, Halloween is coming around the corner. And so, enjoying that as, like, the thing that I notice around me, even if I am still on the same block, you know, in my same neighborhood, and, yeah, wanting to really appreciate, like, my time here before we leave. Like, I don't want to just spend it kind of waiting for the next thing to happen. Because I'm sure there will also be a time where I miss [laughs] Chicago here once we do decide to move. JOËL: I don't know about you, but I feel like a sense of change, even if it is cyclical, is really helpful for me to kind of maintain a little bit of that wonder, even though I've lived in one place for a decade. So, I live in New England in the Northeast U.S. We have pretty marked seasons that change. And so, seeing that happen, you know, kind of a warm summer, and now we're transitioning into fall, and the weather is getting colder. The trees are turning all these colors. So, there's always kind of within, like, a few weeks or a few months something to look forward to, something that's changing. Life never feels stagnant, even though it is cyclical. And I don't know if that's been a similar experience for you. STEPHANIE: Yeah, I like that a lot because I think one of the issues around feeling kind of stuck here in Chicago was that things were starting to feel stagnant, right? Like, we're wanting to make a big change in our life. That's still on the table, and that's still our plan. But noticing change, even when you think like, ugh, like, this again? [laughs] I think that could really shift your perspective a little bit or at least change how you feel about being somewhere. And that's definitely what I'm trying to do, kind of even when I am in a place of, like, waiting to figure out what the next step is. Speaking of change, I had a recent lesson learned or, I suppose, a story to share with you about a new insight or perspective I had about how I show up at work that I'd like to share. JOËL: What is this new perspective? STEPHANIE: Well, I guess, [chuckles], first of all, I'm curious to get your reaction on this. Have you ever heard anyone tell you estimates are lies? JOËL: Yes, a lot. It's maybe cynical, but there are a lot of cynics in our industry. STEPHANIE: That's true. Part of this story is me giving an estimate that was a lie. So, in some ways, there is a grain of truth to it [laughs]. But I wanted to share with you this experience I had a few weeks ago where I was in kind of a like, project status update meeting. And I was coming to this meeting for the first time actually. And so, it was with a group of people who I hadn't really met before. It was kind of a large meeting. So, there were a handful of people that I wasn't super familiar with. And I was coming in to share with this bigger group, like, how the work I had been doing was going. And during that time, we had gotten some new information that was kind of making us reassess a few things about the work, trying to figure out, like, where to go next and how to meet our ultimate goal for delivering this feature. 
With that new information in mind, one of the project managers was asking me how long I thought it might take. And I did not have enough information to feel particularly confident about an answer, you know, I just didn't know. And I mentioned that this was kind of my first time in this meeting. There were a lot of people I didn't know, including the person who was asking the question. And they were saying, "Oh, well, you can just guess or, like, you know, it doesn't have to be perfect. But could you give us a date?" And I felt really hard-pressed to give them an answer in that moment because, you know, I kind of was stalling a little bit. And there was still this, like, air of expectancy. I eventually, I have to say, I made something up [laughs]. I was like, "Well, I don't know, like, three weeks," you know, just really pulling it out of thin air. And, you know, that's what they put down on the spreadsheet, and then they moved on [laughs] to the next item. And then, I sat there in the rest of the meeting. And afterwards, I felt really bad. I, like, really regretted it, I think, because I knew that the answer I gave was mostly BS, right? Like, I can't even say how I came up with that. Just that I, like, wanted to maybe give us some extra time, in case the task ends up being complicated, or, you know, there are all these unknowns. But yeah, it really didn't feel good. JOËL: I'm curious why that felt bad. Was it the uncertainty around that number or the fact that the number maybe you felt like you'd given, like, a ridiculously large number? Typically, I feel like when estimates are for a story, it's, like, in the order of a few days, not a few weeks. Or is it something else, the fact that you felt like you made it up? STEPHANIE: I think both, where it was such a big task. The larger and higher level the task is, the harder it is to come up with an answer, let alone an accurate one. But it was knowing that, like, I didn't have the information. And even though I was doing as they asked of me [chuckles], it was almost like I lost a little bit of my own integrity, right? In terms of kind of based on my experience doing software development, like, I know when I don't know, and I wasn't able to say it. At least in that moment, didn't feel comfortable saying it. JOËL: Because they're not taking no for an answer. STEPHANIE: Yeah, yeah, or at least that was my interpretation of the conversation. But the insight or the learning that I took away from it was that I actually don't want to feel that way again [laughs], that I don't want to give a lie as an estimate because it didn't feel good for me. And the experience that I have knowing that I don't have an answer now, but there are, like, ways to get the answer, right? What I wish I had said in that meeting was that I didn't know, but I could find out, or, like, I would let them know as soon as I did have more information. Or, like, here is the information that I do need to come up with something that is more useful to them, honestly, and could make it, like, a win for all of us. But yeah, I've been reflecting on [chuckles] that a lot. Because, in a sense, like, I really needed to trust myself and, like, trust my gut to have been able to do my best work. JOËL: I wonder if there's maybe also a sense in which, you know, generally, you're a very kind of earnest person. And maybe by giving a ridiculous number there just to, like, check a box, maybe felt like you gave way to a certain level of cynicism that wasn't, like, true to who you are as a person. 
STEPHANIE: Yeah, yeah, that feels real [laughs]. JOËL: Have you ever done estimation sessions where you put error bars on your number? So, you say, hey, this is my estimate, but plus or minus. And, like, the more uncertainty there is around a number, the larger those plus or minus values are to the point where I could imagine something as ridiculous as like, oh, this is going to take three weeks, plus or minus three weeks. STEPHANIE: I like that. That's funny. No, I have not ever done that before or even heard of that. That is a really interesting technique because that seems just more real to me, where, again, people have different opinions about estimation and how effective or useful it is. But for organizations where, like, it is somewhat valuable, or it is just part of the process, I like the idea of at least acknowledging the uncertainty or the ambiguity or, like, the level of confidence, right? That seems like an important piece of context to that information. JOËL: And that can probably lead to some really interesting conversations as well because just getting a big number by itself, you might have a pretty high certainty. I mean, three weeks is big enough that you might say, okay, there's definitely going to be some fuzziness around that. But getting a sense of the certainty can, in certain contexts, I find, drive really interesting conversations about why things are uncertain. And then, that can lead to some really good conversations around scoping about, okay, so we have this larger story. What are the elements of it that are uncertain that you might even talk in terms of risk? What are the risky elements of this story or maybe even a project? And how do we de-risk those? Is there a way that we could remove maybe a small part of the story and then, all of a sudden, those error bars of plus or minus three weeks drop down to plus or minus three days? Because that might be possible by having that conversation. STEPHANIE: Yeah, I like what you said about scope because the way that it was presented as this really big chunk of work that was very critical to this deadline, there was no room to do scope, right? Because we weren't even talking about what makes up this feature task. We hadn't really broken it down. In some ways, I think it was very, like, wishful, right? To be like, oh yeah, we want this feature. We're not going to talk too much about, like, the specific details [laughs], as opposed to the idea of it, right? And that, I think, is, you know, was part of what led to that ambiguity of, like, I can't even begin to estimate this because, like, it could mean so many different things. JOËL: Right. And software problems, often, a slight change in the scope can make a massive change in complexity. I always think of a classic xkcd comic where two people are talking about a task, and somebody starts by describing something that kind of sounds complex. But the person implementing it is like, "Oh yeah, no, that's, you know, it's super easy. I can do that in half a day." And then, like, the person making the ask is like, "Oh, and, by the way, one small detail," and they add, like, one small thing that seems inconsequential, and the person is just like, "Okay, sorry, I'm going to need a research team and a couple of PhDs. And it's going to take us five years." STEPHANIE: That's really funny. I haven't seen this comic before, but I need to [laughs] because I feel that so much where it's like, you just have different expectations about how long things will take. 
And I think maybe that is where, like, I felt really disappointed afterwards. Because in my inability to, like, just really speak up and say, like, "In my experience, like, this is kind of what happens when we don't have this information or when we aren't sure," yeah, I just wasn't able to bring that to the table in that, you know, meeting. And I really am glad we're having this conversation now because I've been thinking about, like, okay, when I find myself in this position again, how would I like to respond differently? And even just that comic feels really validating [laughs] in terms of like, oh yeah, like, other people have experienced this before, where when we don't have that shared understanding or, like, if we're not being super transparent about how long does a thing really take, and why does it make it complex, or, like, what is challenging about it, it can be, like, speaking in [chuckles] two different languages sometimes. JOËL: I think what I'm hearing almost is that in a situation like what you found yourself in, you're almost sort of wishing that you'd picked one extreme or the other, either sort of, like, standing up to—I assume this is a project manager or someone...to say, "Look, there's no number I can give you that's going to make sense. I'm not going to play this game. I have no number I can give you," and kind of ending it there. Or, on the other hand, leaning into, say, "Okay, let's have a nuanced conversation, and we'll try to understand this. And we'll try to maybe scope it and maybe put some error bars on this or something and try to come up with a number that's a little bit more realistic." But by kind of, like, trying to maybe do a middle path where you just kind of give a ridiculously large number that's meaningless, maybe everybody feels unfulfilled, and that feels, like, maybe the worst of the paths you could have taken. STEPHANIE: Yeah, I agree. I like that everyone [laughs] feels slightly unfulfilled point. Because, you know, my estimate is likely wrong. And, like, what impact will that have on other folks and, you know, their work? While you were saying, like, oh yeah, here were the kind of two different options I could have chosen, I was thinking about the idea of, like, yeah, like, there are different strategies depending on the audience and depending who you're working with. And that is something I want to keep in mind, too, of, like, is this the right group to even have the, like, okay, let's figure this out conversation? Because it's not always the case, right? And sometimes you do need to just really stand firm and say, like, "I can't give you an answer. And I will go and find the people [laughs] who I can work this out with so that I can come back with what you need." JOËL: And sometimes there may be a place for some sort of, like, placeholder data that is obviously wrong, but you need to put a value there, as long as everybody's clear on that's more or less what's happening. I had to do something kind of like that today. I'm connecting with a third-party SAML for authentication using the service Auth0. And this third party I'm talking to...so there's data that they need from me, and there's data that I need from them. They're not going to give me data until I give them our data first, so this is, like, you know, callback URLs, and entity IDs, and things like that you need to pass. In order to have those, I need to stand up a SAML connection on the Auth0 side of things. 
In order to create that, Auth0 has a bunch of required fields, including some of the data that the third party would have given me. So, we've got a weird thing where, like, I need to give them data so they can stand up their end. But I can't really stand up my end until they give me some data. STEPHANIE: Sounds like a circular dependency, if I've ever heard one [laughs]. JOËL: It kind of is, right? And so, I wanted to get this rolling. I put in a bunch of fake values for these callback URLs and things like that in the places where it would not affect the data that I'm giving to the third party. And so, it will generate as a metadata file that gets generated and stuff like that. And so, I was able to get that data and send it over. But I did have to put a callback URL whose domain may or may not be example.com. STEPHANIE: [laughs] Right. JOËL: So, it is a placeholder. I have to remember to go and change it later on. But that was a situation where I felt better about doing that than about asking the third party, "Hey, can I get your information first?" STEPHANIE: Yeah, I like that as sometimes, like, you recognize that in order to move forward, you need to put something or fill in that gap. And I think that, you know, there was always an opportunity afterwards to fix it or, like, to reassess and revisit it. JOËL: With the caveat that, in software, there's nothing quite so permanent as a temporary fix. STEPHANIE: Oof, yeah [laughs]. That's real. JOËL: So, you know, caution advised, but yes. Don't always feel bad about placeholders if it allows you to unblock yourself. STEPHANIE: So, I'm really glad I got to bring up this topic and tell you this story because it really got me thinking about what estimates mean to me. I'm curious if any of our listeners if you all have any input. Do you love estimates? Do you hate them? Did our conversation make you think about them differently? Feel free to write to us at hosts@bikeshed.fm. JOËL: On that note, shall we wrap up? STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeeeeeeee!!!!! AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at tbot.io/referral. Or you can email us at referrals@thoughtbot.com with any questions.
We're continuing spooky month with The Ruins (2008), which we reasonably assumed was about archaeology. Turns out the actual Maya ruins on which the movie takes place are really incidental to the plot, which is centred on the least scary thing we can imagine. Here's a list of things scarier than the monster in this movie: caterpillars; the X-Men franchise; poison ivy; AI-written books. Anyway, enjoy the episode. Get in touch with us! Twitter: @SotSA_Podcast Facebook: @SotSAPodcast Letterboxd: https://letterboxd.com/sotsa/ Email: screensofthestoneage@gmail.com In this episode: Maya architecture: https://www.worldhistory.org/Maya_Architecture/ “Nesting doll” structure of Maya pyramids: https://www.bbc.com/news/world-latin-america-38008546 Maya cities discovered with lidar: https://www.livescience.com/lidar-maya-civilization-guatemala Septicemia is blood infection, not bone infection: https://www.hopkinsmedicine.org/health/conditions-and-diseases/septicemia “Is there a doctor?” meme: https://imgflip.com/meme/165098350/Is-there-a-doctor-around Beware the Higad caterpillar!: http://avrotor.blogspot.com/2016/01/beware-of-spiny-caterpillar-higad.html The Russian sleep Experiment: Creepypasta: https://www.creepypasta.com/the-russian-sleep-experiment/ AI is writing books about foraging: https://civileats.com/2023/10/10/ai-is-writing-books-about-foraging-what-could-go-wrong/ Water Hemlock – the deadliest plant in North America: https://en.wikipedia.org/wiki/Cicuta Edibility test for wild plants: https://www.backpacker.com/skills/universal-edibility-test/ Safe plants for survival situations: https://www.sunnysports.com/blog/common-wild-plants-can-eat-survival/ Nettle beer recipe: https://www.greatbritishchefs.com/recipes/nettle-beer-recipe XKCD on Brassica: https://www.explainxkcd.com/wiki/index.php/2827:_Brassica
PropTech Deep Dive with Erik Stokkeland from Futurehome!
TWiV notes the passing of virologist Michael BA Oldstone, a study to assess the performance of rapid antigen tests to detect symptomatic and asymptomatic SARS-CoV-2 infection, and the presence of antibodies to type I interferons in ~40% of patients with West Nile virus encephalitis. Hosts: Vincent Racaniello, Rich Condit, and Alan Dove Subscribe (free): Apple Podcasts, Google Podcasts, RSS, email Become a patron of TWiV! Links for this episode MicrobeTV Discord Server MicrobeTV store at Cafepress Position in Rosenfeld Laboratory (pdf) XKCD on antivaxxers RFK Jr. CDC or FDA head? (Politico) Performance of rapid antigen tests (Ann Int Med) Guidance on rapid antigen tests (FDA) Autoantibodies to IFN in West Nile virus encephalitis (JEM) Letters read on TWiV 1031 Timestamps by Jolene. Thanks! Weekly Picks Rich – The Final Covid-19 Grand Rounds: What Have We Learned? Alan – Teaching biology to Tibetan Buddhist monks Vincent – I got it from Agnes by Tom Lehrer and Why Oppenheimer has important lessons for scientists today Listener Picks Ryan – Paul Offit on PBS Newshour to explain RFK Jr.'s Congressional Hearing Intro music is by Ronald Jenkees Send your virology questions and comments to twiv@microbe.tv
What does the body of evidence say on fevers and whether or not we should treat them? Plus: more and more children are getting invasive streptococcal disease, and Chris fulfills two dreams of his: to become Dr. House and to write a book! Block 1: (1:47) Fever: what a fever is; the role the hypothalamus plays; why we shiver Block 2: (7:36) Fever: why fevers start; whether high fevers mean a worse condition; whether fevers help you fight an infection; febrile seizures; whether you should treat a fever; what a normal body temperature is; where you should stick that thermometer Block 3: (22:32) A rise in cases of invasive streptococcal disease in children Block 4: (32:21) Chris becomes House, MD. Play along! * Jingle by Jillian Correia of Roctavio Canada * Theme music: “Fall of the Ocean Queen” by Joseph Hackl * Assistant researcher: Nicholas Koziris To contribute to The Body of Evidence, go to our Patreon page at: http://www.patreon.com/thebodyofevidence/. To make a one-time donation to our show, you can now use PayPal! https://www.paypal.com/donate?hosted_button_id=9QZET78JZWCZE Patrons get a bonus show on Patreon called “Digressions”! Check it out! References: 1) XKCD cartoon on killing cancer cells: https://xkcd.com/1217/ 2) Meta-analysis on the risks and benefits of fever reduction in adults: https://pubmed.ncbi.nlm.nih.gov/35820685/ 3) Prognosis of febrile seizures in children: https://pubmed.ncbi.nlm.nih.gov/18692714/ 4) The use of fever medication to prevent recurrent febrile seizures in children: a) https://pubmed.ncbi.nlm.nih.gov/23702315/ b) https://publications.aap.org/pediatrics/article/142/5/e20181009/38533/Acetaminophen-and-Febrile-Seizure-Recurrences?autologincheck=redirected# c) https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD003031.pub3/full 5) Normal variation in body temperature: https://www.bmj.com/content/359/bmj.j5468 6) Diagnostic accuracy of ear thermometers: https://pubmed.ncbi.nlm.nih.gov/32398036/ 7) CBC's report on streptococcal disease in Waterloo, Ontario: https://www.cbc.ca/amp/1.6847676 8) The Medscape Emergency Medicine Case Challenge: https://reference.medscape.com/viewarticle/855744 It's Not Twitter, But It'll Do: 1) Jonathan's article on the Big Pharma gambit: https://www.mcgill.ca/oss/article/medical-critical-thinking-health-and-nutrition/what-big-pharma-accusation-gets-right-and-wrong-about-drug-industry 2) Conspirituality: How New Age Conspiracy Theories Became a Public Health Threat: https://www.penguinrandomhouse.ca/books/713402/conspirituality-by-derek-beres-matthew-remski-and-julian-walker/9781039005532 3) Chris' Gazette article on coronary artery calcium: https://montrealgazette.com/opinion/columnists/christopher-labos-calcium-deposits-in-the-arteries-not-always-a-worry 4) Chris wrote a book called Does Coffee Cause Cancer? Pre-order it now! https://ecwpress.com/products/does-coffee-cause-cancer?_pos=1&_sid=e672b52c1&_ss=r
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Dictatorship Problem, published by alyssavance on June 11, 2023 on LessWrong. (Disclaimer: This is my personal opinion, not that of any movement or organization.) This post aims to show that, over the next decade, it is quite likely that most democratic Western countries will become fascist dictatorships - this is not a tail risk, but the most likely overall outcome. Politics is not a typical LessWrong topic, and for good reason: it tends to impair clear thinking; most well-known political issues are not neglected; most political "debates" are simply people yelling at each other online; neither saying anything new, nor even really trying to persuade the opposition. However, like the COVID pandemic, it seems like this particular trend will be so impactful and so disruptive to ordinary Western life that it will be important to be aware of it, factor it into plans, and try our best to mitigate or work around the effects. Introduction First, what is fascism? It's common for every side in a debate to call the other side "fascists" or "Nazis", as we saw during (eg.) the Ukraine War. Lots of things that get called "fascist" online are in fact fairly ordinary, or even harmless. So, to be clear, Wikipedia defines "fascism" as: a far-right, authoritarian, ultranationalist political ideology and movement, characterized by a dictatorial leader, centralized autocracy, militarism, forcible suppression of opposition, belief in a natural social hierarchy, subordination of individual interests for the perceived good of the nation and race, and strong regimentation of society and the economy Informally, I might characterize "fascism" as: a system of government where there are no meaningful elections; the state does not respect civil liberties or property rights; dissidents, political opposition, minorities, and intellectuals are persecuted; and where government has a strong ideology that is nationalist, populist, socially conservative, and hostile to minority groups. (The last point is what separates fascism from, say, Stalinism. Stalinism is also very bad, but is not a major political force in 2023.) So by "fascism", I specifically mean a radical change in the basic form of government, not simply a state doing dumb things like making immigration hard or banning Bitcoin. Not all fascists are the same - eg. Mussolini's Italy was initially opposed to Nazi-style racism - but their movements, ideology, rhetoric, and leaders tend to share many common characteristics (see also eg. here). Fascism is very bad, and therefore, it would be really great if it were unlikely to happen in well-established democracies like the US. Unfortunately, as with AI risk, most arguments for that scenario being unlikely tend to resemble this comic from XKCD: or this comic about AI risk: The first argument goes, essentially, that things are basically fine now, and are unlikely to become bad immediately (next week or next month), so therefore we have nothing to worry about. The counterpoint, of course, is that if existing trends continue progressing - and there's no convincing reason why they must stop - the future a few decades from now will become very different from the present. The second argument is that the relevant scenario would be pretty weird by the standards of our current lives (as rich, educated Westerners living in 2023), so we should assume it's unlikely. 
However, our contemporary lives and civilization are themselves very weird historically (vs. the more typical peasant farming), and there's no fundamental reason why they have to keep on going forever. Indeed, them continuing forever is in some ways a contradiction, since current society relies on economic growth and innovation; we can't coherently forecast "everything stays the same, including the derivatives". Present Trends What do the deri...
Andy and Michael chat travel and play the XKCD “game” Escape Speed. 00:00 Intro 02:38 Escape Speed 10:21 Michael's Travel Adventure 34:48 Escape Speed Finale 37:59 Travel Gaming Nostalgia: DS, 3DS, Game Gear, Etc. 46:00 Outro
Should AI research be paused? How far ahead have deepfakes come? Join Patrick and Jason as they tackle their answers to these timely questions – plus an in-depth discussion on Perl in practice – with today's episode of Programming Throwdown. Resources mentioned in this episode: Join the Programming Throwdown Patreon community today: https://www.patreon.com/programmingthrowdown?ty=h News/Links: GPT4All & Stanford Alpaca: https://github.com/nomic-ai/gpt4all Giant AI Experiments 6 month pause open letter: https://futureoflife.org/open-letter/pause-giant-ai-experiments/ Will Smith Eating Spaghetti generated video: https://www.vice.com/en/article/xgw8ek/ai-will-smith-eating-spaghetti-hill-haunt-you-for-the-rest-of-your-life Robust image compression implementation from a NASA paper: https://github.com/TheRealOrange/icer_compression Dig This Vegas: https://digthisvegas.com/ XKCD: https://xkcd.com/208/ AI Open Letter: https://futureoflife.org/open-letter/pause-giant-ai-experiments/ Godbolt: https://godbolt.org/ Book of the Show: Jason: It Doesn't Have To Be Crazy At Work: https://amzn.to/40PFgxH Patrick: Prince of Fools by Mark Lawrence: https://amzn.to/3lWVEO9 Tool of the Show: Jason: ReMarkable 2: https://remarkable.com/store/remarkable-2 Patrick: Slay the Spire: https://store.steampowered.com/app/646570/Slay_the_Spire/ If you've enjoyed this episode, you can listen to more on Programming Throwdown's website: https://www.programmingthrowdown.com/ Reach out to us via email: programmingthrowdown@gmail.com You can also follow Programming Throwdown on Facebook | Apple Podcasts | Spotify | Player.FM Join the discussion on our Discord. Help support Programming Throwdown through our Patreon ★ Support this podcast on Patreon ★
Oz and Charlie catch up with Felix Tripier - now a Senior Staff Software Engineer at quantum computing company IonQ - for the first time in three years! Felix was our first guest on Escaping Web - a double high school and college dropout who became a self-taught web developer and is now a quantum computing engineer - so it only made sense for him to be our first guest on The CS Primer Show. We discuss Felix's path to staff engineer, the engineering manager vs. staff engineer career choices, staff engineering archetypes, building software for and working with scientists, the movie Grave of the Fireflies, the book When We Cease To Understand The World, and, of course, reference an apt XKCD comic.
Where we discuss people’s tendencies to resist automation of tasks, for whatever reason. Comments for the episode are welcome - at the bottom of the show notes for the episode there is a Disqus setup, or you can email us at feedback@operations.fm. Links for Episode 135: XKCD 1205: Is it worth the time
Luca Casonato is the tech lead for Deno Deploy and a TC39 delegate. Deno is a JavaScript runtime from the original creator of NodeJS, Ryan Dahl. Topics covered: What's a JavaScript runtime How V8 is used Why Deno was created The W3C WinterCG for server-side JavaScript Why it's difficult to ship new features in Node The benefits of web standards Creating an all-inclusive toolset like Rust and Go Deno's node compatibility layer Use cases for WebAssembly Benefits and implementation of Deno Deploy Reasons to deploy on the edge What's coming next Luca Luca Casonato @lcasdev Deno Homepage Deploy Showcase Subhosting Fresh web framework The anatomy of an Isolate Cloud Deno Users Netlify Edge Functions Deno at Slack GitHub Flat Data Shopify Oxygen Other related links Cache Web API V8 (JavaScript and WebAssembly engine) TC39 (JavaScript specification group) Web-interoperable Runtimes Community Group (WinterCG) Cloudflare Workers (Deno Deploy competitor) How Cloudflare KV works CockroachDB (Distributed database) XKCD Standards Comic Transcript You can help edit this transcript on GitHub. [00:00:07] Jeremy: Today I'm talking to Luca Casonato. He's a member of the Deno Core team and a TC 39 Delegate. [00:00:06] Luca: Hey, thanks for having me. What's a runtime? [00:00:07] Jeremy: So today we're gonna talk about Deno, and on the website it says, Deno is a runtime for JavaScript and TypeScript. So I thought we could start with defining what a runtime is. [00:00:21] Luca: Yeah, that's a great question. I think this question actually comes up a lot. It's, it's like sometimes we also define Deno as a headless browser, or I don't know, a, a JavaScript script execution tool. what actually defines runtime? I, I think what makes a runtime a runtime is that it is a, it's implemented in native code. It cannot be self-hosted. Like you cannot self-host a JavaScript runtime. and it executes JavaScript or TypeScript or some other scripting language, without relying on, well, yeah, I guess it's the self-hosting thing. Like it's, it's essentially a, a JavaScript execution engine, which is not self-hosted. So yeah, it, it maybe has IO bindings, but it doesn't necessarily need to like, it. Maybe it allows you to read the, from the file system or, or make network calls. Um, but it doesn't necessarily have to. It's, I think the, the primary definition is something which can execute JavaScript without already being written in JavaScript. How V8 and JavaScript runtimes are related [00:01:20] Jeremy: And when we hear about JavaScript run times, whether it's Deno or Node or Bun, or anything else, we also hear about it in the context of v8. Could you explain the relationship between V8 and a JavaScript run time? [00:01:36] Luca: Yeah. So V8 and, and JavaScript core and Spider Monkey, these are all JavaScript engines. So these are the low level virtual machines that can execute or that can parse your JavaScript code. turn it into byte code, maybe turn it into, compiled machine code, and then execute that code. But these engines, Do not implement any IO functions. They do not. They implement the JavaScript spec as is written. and then they provide extension hooks for, they call these host environments, um, like environments that embed these engines to provide custom functionalities to essentially poke out of the sandbox, out of the, out of the virtual machine. Um, and this is used in browsers. Like browsers have, have these engines built in. This is where they originated from. 
Um, and then they poke holes into this, um, sandbox virtual machine to do things like, I don't know, writing to the dom or, or console logging or making fetch calls and all these kinds of things. And what a runtime essentially does, a JavaScript runtime, is it takes one of these engines and it then provides its own set of host APIs, like essentially its own set of holes it pokes into the sandbox. and depending on what the runtime is trying to do, um, the way it will do this is gonna be different and, and the sort of API that is ultimately exposed to the end user is going to be different. For example, if you compare Deno and node, like node is very loosey goosey, about how it pokes holes into the sandbox, it sort of just pokes them everywhere. And this makes it difficult to enforce things like, runtime permissions for example. Whereas Deno is much more strict about how it, um, pokes holes into its sandbox. Like everything is either a web API or it's behind in this Deno name space, which means that it's, it's really easy to find, um, places where, where you're poking out of the sandbox. and really you can also compare these to browsers. Like browsers are also JavaScript run times. Um, they're just not headless JavaScript run times, but JavaScript run times that also have a UI. and, yeah, like there, there's, there's a whole bunch of different kinds of JavaScript run times, and I think we're also seeing a lot more like embedded JavaScript run times. Like for example, if you've used React Native before, you, you may be using Hermes as a, um, JavaScript engine in your Android app, which is like a custom JavaScript engine written just for, for, for React Native. Um, and this also is embedded within a, like React Native run time, which is specific to React Native. so it's also possible to have run times, for example, that are, that can be, where the, where the backing engine can be exchanged, which is kind of cool. [00:04:08] Jeremy: So it sounds like V8's role, one way to look at it is it can execute JavaScript code, but only pure functions. I suppose you [00:04:19] Luca: Pretty much. Yep. [00:04:21] Jeremy: Do anything that doesn't interact with IO so you think about browsers, you were mentioning you need to interact with a DOM or if you're writing a server side application, you probably need to receive or make HTTP requests, that sort of thing. And all of that is not handled by v8. That has to be handled by an external runtime. [00:04:43] Luca: Exactly. Like, like one, one... there's, there's like some exceptions to this. For example, JavaScript technically has some IO built in with, within its standard library, like Math.random. It's like random number generation is technically an IO operation, so, technically V8 has some IO built in, right? And like getting the current date from the user, that's also technically IO. So, like there, there's some very limited edge cases. It's, it's not that it's purely pure, but V8 for example, has a flag to turn it completely deterministic. which means that it really is completely pure. And this is not something which run times usually have. This is something like the feature of an engine because the engine is like so low level that it can essentially, there's so little IO that it's very easy to make deterministic, where a runtime, higher level, um, has, has IO, um, much more difficult to make deterministic. [00:05:39] Jeremy: And, and for things like when you're working with JavaScript, there's, uh, asynchronous programming [00:05:46] Luca: mm-hmm.
Concurrent JavaScript execution [00:05:47] Jeremy: So you have concurrency and things like that. Is that a part of V8 or is that the responsibility of the run time? [00:05:54] Luca: That's a great question. So there's multiple parts to this. There's the part, um, there, there's JavaScript promises, um, and sort of concurrent Java or well, yes, concurrent JavaScript execution, which is sort of handled by v8, like v8. You can in, in pure v8, you can create a promise, and you can execute some code within that promise. But without IO there's actually no way to defer time, uh, which means that in with pure v8, you can either, you can create a promise. Which executes right now. Or you can create a promise that never executes, but you can't create a promise that executes in 10 seconds because there's no way to measure 10 seconds asynchronously. What run times do is they add something called an event loop on top of this, um, on top of the base engine and that event loop, for example, like a very simple event loop, for example, might have a timer in it, which every second looks at if there's a timer schedule to run within that second. And if it does, if, if that timer exists, it'll go call out to V8 and say, you can now execute that promise. but V8 is still the one that's keeping track of, of like which promises exist, and the code that is meant to be invoked when they resolve all that kind of thing. Um, but the underlying infrastructure that actually invokes which promises get resolved at what point in time, like the asynchronous, asynchronous IO is what this is called. This is driven by the event loop, um, which is implemented by around time. So Deno, for example, it uses, Tokio for its event loop. This is a, um, an event loop written in Rust. it's very popular in the Rust ecosystem. Um, node uses libuv. This is a relatively popular runtime or, or event loop, um, implementation for c uh, plus plus. And, uh, libuv was written for Node. Tokio was not written for Deno. But um, yeah, Chrome has its own event loop implementation. Bun has its own event loop implementation. [00:07:50] Jeremy: So we, we might go a little bit more into that later, but I think what we should probably go into now is why make Deno, because you have Node that's, uh, currently very popular. The co-creator of Deno, to my understanding, actually created Node. So maybe you could explain to our audience what was missing or what was wrong with Node, where they decided I need to create, a new runtime. Why create a new runtime? (standards compliance) [00:08:20] Luca: Yeah. So the, the primary point of concern here was that node was slowly diverging from browser standards with no real path to, to, to, re converging. Um, like there was nothing that was pushing node in the direction of standards compliance and there was nothing, that was like sort of forcing node to innovate. and we really saw this because in the time between, I don't know, 2015, 2018, like Node was slowly working on esm while browsers had already shipped ESM for like three years. , um, node did not have fetch. Node hasn't had, or node only at, got fetch last year. Right? six, seven years after browsers got fetch. Node's stream implementation is still very divergent from, from standard web streams. Node was very reliant on callbacks. It still is, um, like promises in many places of the Node API are, are an afterthought, which makes sense because Node was created in a time before promises existed. Um, but there was really nothing that was pushing Node forward, right? 
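To make the event-loop idea from the exchange above a bit more concrete, here is a deliberately tiny, purely illustrative sketch. Every name in it is invented for illustration; real hosts use Tokio, libuv, and far more machinery. The point is only that the engine's promise has no notion of elapsed time on its own, and something outside it, the host's loop, decides when that promise gets resolved.

```typescript
// Toy "event loop": the host keeps a list of pending timers and, on each tick,
// resolves the promises whose deadline has passed. Purely illustrative.
type Timer = { deadline: number; resolve: () => void };

const timers: Timer[] = [];

// What a runtime-provided, setTimeout-like API could look like: it registers
// a timer with the host and hands a plain promise back to JavaScript.
function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => {
    timers.push({ deadline: Date.now() + ms, resolve });
  });
}

// The host's loop: fire any timers that are due, then idle briefly.
// (The idle step leans on the real runtime's setTimeout just to pace the loop.)
async function runEventLoop(): Promise<void> {
  while (timers.length > 0) {
    const now = Date.now();
    for (let i = timers.length - 1; i >= 0; i--) {
      if (timers[i].deadline <= now) {
        const [t] = timers.splice(i, 1);
        t.resolve(); // hand control back to the engine's promise machinery
      }
    }
    await new Promise((r) => setTimeout(r, 10));
  }
}

// "User code" only ever sees the promise API, never the loop itself.
sleep(100).then(() => console.log("about 100ms later"));
runEventLoop();
```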
Like nobody was actively investing in, in, in improving the API of Node to be more standards compliant. And so what we really needed was a new like Greenfield project, which could demonstrate that actually writing a new server side runtime is (a) viable, and (b) totally doable with an API that is more standards compliant. Like essentially you can write a browser, like a headless browser and have that be an excellent-to-use JavaScript runtime, right? And then there were some things that were on top of that, like TypeScript support because TypeScript was incredibly, or is still incredibly popular. even more so than it was four years ago when, when Deno was created or envisioned, um, this permission system like Node really poked holes into the V8 sandbox very early on with, with like, it's gonna be very difficult for Node to ever, ever, uh, reconcile this, this. Especially cuz the, some, some of the APIs that it, that it exposes are just so incredibly low level that like, I don't know, you can mutate random memory within your process. Um, which like if you want to have a, a secure sandbox like that just doesn't work. Um, it's not compatible. So there really needed to be a place where you could explore this, um, direction and, and see if it worked. And Deno was that. Deno still is that, and I think Deno has outgrown that now into something which is much more usable as, as like a production ready runtime. And many people do use it, in production. And now Deno is on the path of slowly converging back with Node, um, from both directions. Like Node is slowly becoming more standards compliant. and depending on who you ask this was, this was done because of Deno and some people said it had already been going on and Deno just accelerated it. but that's not really relevant because the point is that like Node is becoming more standards compliant and, and the other direction is Deno is becoming more node compliant. Like Deno is implementing node compatibility layers that allow you to run code that was originally written for the node ecosystem in the standards compliant run time. so through those two directions, the, the run times are sort of, um, going back towards each other. I don't think they'll ever merge. but we're, we're, we're getting to a point here pretty soon, I think, where it doesn't really matter what runtime you write for, um, because you'll be able to write code written for one runtime in the other runtime relatively easily. [00:12:03] Jeremy: If you're saying the two are becoming closer to one another, becoming closer to the web standard that runs in the browser, if you're talking to someone who's currently developing in node, what's the incentive for them to switch to Deno versus using Node and then hope that eventually they'll kind of meet in the middle. [00:12:26] Luca: Yeah, so I think, like Deno is a lot more than just a runtime, right? Like a runtime executes JavaScript, Deno executes JavaScript, it executes TypeScript. But Deno is so much more than that. Like Deno has a built-in formatter, it has a built-in linter. It has a built-in testing framework, a built-in benching framework. It has a built-in bundler, it, it like can create self-hosted, um, executables. yeah, like bundle your code and the Deno executable into a single executable that you can ship off to someone. Um, it has a dependency analyzer. It has editor integrations. it has, Yeah.
Like I could go on for hours, (laughs) about all of the auxiliary tooling that's inside of Deno, that's not a JavaScript runtime. And also Deno as a JavaScript runtime is just more standards compliant than any of the other server-side runtimes right now. So if, if you're really looking for something which is standards compliant, which is gonna like live on forever, then it's, you know, like you cannot kill off the Fetch API ever. The Fetch API is going to live forever because Chrome supports it. Um, and the same goes for local storage and, and like, I don't know, the Blob API and all these other web APIs like they, they have shipped in browsers, which means that they will be supported until the end of time. and yeah, maybe Node has also reached that with its API probably to some extent. but yeah, don't underestimate the power of like 3 billion Chrome users. that would scream immediately if the Fetch API stopped working, right? [00:13:50] Jeremy: Yeah, I, I think maybe what it sounds like also is that because you're using the API that's used in the browser, places where you deploy JavaScript applications in the future, you would hope that those would all settle on using that same API so that if you were using Deno, you could host it at different places and not worry about, do I need to use a special API maybe that you would in node? WinterCG (W3C group for server side JavaScript) [00:14:21] Luca: Yeah, exactly. And this is actually something which we're specifically working towards. So, I don't know if you've, you've heard of WinterCG? It's a, it's a community group at the W3C that, um, CloudFlare and, and Deno and some others including Shopify, have started last year. Um, we're essentially, we're trying to standardize the concept of what a server side JavaScript runtime is and what APIs it needs to have available to be standards compliant. Um, and essentially making this portability sort of written down somewhere and like write down exactly what code you can write and expect to be portable. And we can see like that all of the big, all of the big players that are involved in, in, um, building JavaScript run times right now are, are actively, engaged with us at WinterCG and are actively building towards this future. So I would expect that any code that you write today, which runs in Deno, runs in CloudFlare Workers, runs on Netlify Edge Functions, runs on Vercel's Edge runtime, runs on Shopify Oxygen, is going to run on the other four. Um, of, of those within the next couple years here, like I think the APIs of these are gonna converge to be essentially the same. there's obviously gonna always be some, some nuances. Um, like, I don't know, Chrome and Firefox and Safari don't perfectly have the same API everywhere, right? Like Chrome has some web Bluetooth capabilities that Safari doesn't, or Firefox has some, I don't know, non-standard extensions to the error object, which none of the other runtimes do. But overall you can expect these run times to mostly be aligned. yeah, and I, I think that's, that's really, really, really excellent and that, that's I think really one of the reasons why one should really consider, like building for, for this standard runtime because it, it just guarantees that you'll be able to host this somewhere in five years time and 10 years time, with, with very little effort. Like even if Deno goes under or CloudFlare goes under, or, I don't know, nobody decides to maintain node anymore. It'll be easy to, to run somewhere else.
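As a rough illustration of the portability being described here, the sketch below is a request handler written only against web-standard APIs (Request, Response, URL, fetch, Web Crypto). The handler itself is the part you would hope is portable across WinterCG-style runtimes; how you register it is platform-specific, and the Deno.serve call at the bottom is an assumption about recent Deno versions, with other runtimes using their own entry points.

```typescript
// A handler using only web-standard APIs, so the same function could in
// principle run on any standards-compliant server-side runtime. Only the
// final line that wires it up is platform-specific.
async function handler(req: Request): Promise<Response> {
  const url = new URL(req.url);

  if (url.pathname === "/hash") {
    // Standard Web Crypto instead of a runtime-specific crypto module.
    const body = new TextEncoder().encode(url.searchParams.get("q") ?? "");
    const digest = await crypto.subtle.digest("SHA-256", body);
    const hex = [...new Uint8Array(digest)]
      .map((b) => b.toString(16).padStart(2, "0"))
      .join("");
    return Response.json({ sha256: hex });
  }

  if (url.pathname === "/proxy") {
    // Standard fetch instead of http.get or a request library.
    return await fetch("https://example.com/");
  }

  return new Response("not found", { status: 404 });
}

// Platform-specific wiring; this particular line assumes a recent Deno.
Deno.serve(handler);
```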
And also I expect that the big cloud vendors will ultimately, um, provide, managed offerings for, for the standards compliant JavaScript runtime as well. Is Node part of WinterCG? [00:16:36] Jeremy: And this WinterCG group is Node a part of that as well? [00:16:41] Luca: Um, yes, we've invited Node, um, to join, um, due to the complexities of how node's, internal decision making system works. Node is not officially a member of WinterCG. Um, there are some individual members of the node, um, technical steering committee, which are participating. for example, um, James M. Snell is, is the co-chair, is my co-chair on, on WinterCG. He also works at CloudFlare. He's also a node, um, TSC member, Matteo Collina, who has been, um, instrumental to getting fetch landed in Node, um, is also actively involved. So Node is involved, but because Node is node and and node's decision making process works the way it does, node is not officially listed anywhere as as a member. but yeah, they're involved and maybe they'll be a member at some point. But, yeah, let's, see (laughs) [00:17:34] Jeremy: Yeah. And, and it, so it, it sounds like you're thinking that's more of a, a governance or a organizational aspect of Node than it is a, a technical limitation. Is that right? [00:17:47] Luca: Yeah. I obviously can't speak for the node technical steering committee, but I know that there's a significant chunk of the node technical steering committee that is, very favorable towards, uh, standards compliance. but parts of the Node technical steering committee are also not, they are either indifferent or are actively, I dunno if they're still actively working against this, but have actively worked against standards compliance in the past. And because the node governance structure is very, yeah, is, is so, so open and lets, um, lets all these voices be heard, um, that just means that decision making processes within Node can take so long, like... This is also why the fetch API took eight years to ship. Like this was not a technical problem. and it is also not a technical problem that Node does not have URL pattern support or, the File global or, um, that the web crypto API was not on this, on the global object until like late last year, right? Like, these are not technical problems, these are decision making problems. Um, and yeah, that was also part of the reason why we started Deno as, as like a separate thing, because like you can try to innovate node, from the inside, but innovating node from the inside is very slow, very tedious, and requires a lot of fighting. And sometimes just showing somebody, from the outside like, look, this is the bright future you could have, makes them more inclined to do something. Why it takes so long to ship new features in Node [00:19:17] Jeremy: Do, do you have a sense for, you gave the example of fetch taking eight years to, to get into node. Do you, do you have a sense of what the typical objection is to, to something like that? Like I, I understand there's a lot of people involved, but why would somebody say, I, I don't want this [00:19:35] Luca: Yeah. So for, for fetch specifically, there was a, there was many different kinds of concerns. Um, one of the, I, I can maybe list two of them. One of them was for example, that the fetch API is not a good API and as such, node should not have it. which is sort of.
missing the point of, because it's a standard API, how good or bad the API is is much less relevant because if you can share the API, you can also share a wrapper that's written around the api. Right? and then the other concern was, node does not need fetch because Node already has an HTTP API. Um, so, so these are both kind of examples of, of concerns that people had for a long time, which it took a long time to either convince these people or, or to, push the change through anyway. and this is also the case for, for other things like, for example, web crypto, um, like why do we need web crypto? We already have node crypto, or why do we need yet another streams implementation? Node already has four different streams implementations. Like, why do we need web streams? and the, the... like, I don't know if you know this XKCD of, there's 14 competing standards, so let's write a 15th standard to unify them all. And then at the end we just have 15 competing standards. Um, so I think this is also the kind of concern that people were concerned about, but I, I think what we've seen here is that this is really not a concern that one needs to have because it ends up that, or it turns out in the end that if you implement web APIs, people will use web APIs and will use web APIs only for their new code. it takes a while, but we're seeing this with ESM versus require like new code written with require is much less common than it was two years ago. And, new code now using like XHR, whatever it's called, form request or... you know, the one, I mean, compared to using Fetch, like nobody uses that name. Everybody uses Fetch. Um, and like in Node, if you write a little script, like you're gonna use Fetch, you're not gonna use like Node's http.get API or whatever. and we're gonna see the same thing with Readable Stream. We're gonna see the same thing with Web Crypto. We're gonna see, see the same thing with Blob. I think one of the big ones where, where Node is still, I, I, I don't think this is one that's ever gonna get solved, is the, the Buffer global in Node. Like we have the Uint8Array, this Uint8Array global, um, in like all the run times including browsers, um, and Buffer is like a superset of that, but it's in global scope. So it, it's sort of this non-standard extension of Uint8Array that people in node like to use and it's not compatible with anything else. Um, but because it's so easy to get at, people use it anyway. So those are, those are also kind of problems that, that we'll have to deal with eventually. And maybe that means that at some point the buffer global gets deprecated and I don't know, probably can never get removed. But, um, yeah, these are kinds of conversations that the Node TSC is going to have to have internally in, I don't know, maybe five years. Write once, have it run on any hosting platform [00:22:37] Jeremy: Yeah, so at a high level, what's shipped in the browser, it went through the ECMAScript approval process. People got it into the browser. Once it's in the browser, probably never going away. And because of that, it's safe to build on top of that for these, these server run times because it's never going away from the browser. And so everybody can kind of use it into the future and not worry about it. Yeah. [00:23:05] Luca: Exactly. Yeah. And that's, and that's excluding the benefit that also if you have code that you can write once and use in both the browser and the server side run time, like that's really nice. Um, like that, that's the other benefit. [00:23:18] Jeremy: Yeah.
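To illustrate the Buffer point above: in current Node, Buffer is a subclass of Uint8Array with extra Node-only conveniences, which is why code that reaches for it stops being portable, while the standard typed-array and text-encoding APIs work in browsers, Deno, workers, and Node alike. A rough sketch follows; the node:buffer lines are Node-specific (or rely on a compatibility layer) and are left commented out so the snippet itself stays standards-only.

```typescript
// Standards-only version: runs anywhere a web-standard runtime exists.
const bytes: Uint8Array = new TextEncoder().encode("hello");
const text: string = new TextDecoder().decode(bytes);

// Node-flavoured version: Buffer is a Uint8Array subclass with extra methods,
// but it only exists as a global in Node (or via node:buffer / a compat layer).
// import { Buffer } from "node:buffer";
// const buf = Buffer.from("hello", "utf8");
// console.log(buf instanceof Uint8Array); // true, a superset of the standard type
// console.log(buf.toString("base64"));    // Node-only convenience, not a web API

console.log(text, bytes.length);
```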
I think that's really powerful. And that right now, when someone's looking at running something in Cloudflare Workers versus running something in the browser versus running something in..., I think a lot of people make the assumption it's just JavaScript, so I can use it as is. But there are, at least currently, differences in what APIs are available to you.
[00:23:43] Luca: Yep. Yep.
Why bundle so many things into Deno?
[00:23:46] Jeremy: Earlier you were talking about how Deno is more than just the runtime. It has a linter, formatter, file watcher, there's all sorts of stuff in there. And I wonder if you could talk a little bit to the, the reasoning behind that.
[00:24:00] Luca: Mm-hmm.
[00:24:01] Jeremy: Rather than having them all be separate things.
[00:24:04] Luca: Yeah, so the, the reasoning here is essentially, if you look at other modern runtimes or other modern languages, like Rust is a great example, Go is a great example. Even though Go was designed around the same time as Node, it has a lot of these same tools built in. And what it really shows is that if the ecosystem converges, like is essentially forced to converge, on a single set of built-in tooling, that built-in tooling becomes really, really excellent because everybody's using it. And also, it means that if you open any project written by any Go developer, any, any Rust developer, and you look at the tests, you immediately understand how the test framework works and you immediately understand how the assertions work. Um, and you immediately understand how the build system works and you immediately understand how the dependency imports work. And you immediately understand like, I wanna run this project and I wanna restart it when my file changes, like, you immediately know how to do that because it's the same everywhere. Um, and this kind of feeling of having to learn one tool and then being able to use all of the projects, like being able to contribute to open source when you're moving jobs, whatever, like between personal projects that you haven't touched in two years, you know, like being able to learn this once and then use it everywhere is such an incredibly powerful tool. Like, people don't appreciate this until they've used a runtime or, or, or language which provides this to them. Like, you can go to any Go developer and ask them if they would like... There, there's this, there's this saying in the Go ecosystem, um, that gofmt is nobody's favorite, but, or, uh, wait, no, I don't remember how the saying goes, but the saying essentially implies that the way that gofmt formats code, maybe not everybody likes, but everybody loves gofmt anyway, because it just makes everything look the same. And like, you can read your friend's code, your, your colleague's code, your new job's code, the same way that you did your code from two years ago. And that's such an incredibly powerful feeling. Especially if it's like well integrated into your IDE: you clone a repository, open that repository, and like your testing panel on the left hand side just populates with all the tests, and you can click on them and run them. And if an assertion fails, it's like the standard output format that you're already familiar with. And it's, it's, it's a really great feeling. And if you don't believe me, just go try it out and, and then you will believe me. (laughs)
[00:26:25] Jeremy: Yeah. No, I, I'm totally with you.
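As a rough illustration of the built-in test runner being described (not from the episode; the file name, function, and std import path are just examples, and the exact std path may differ between Deno versions):

```ts
// math_test.ts -- run with `deno test`; no third-party framework or config.
import { assertEquals } from "https://deno.land/std/testing/asserts.ts";

function add(a: number, b: number): number {
  return a + b;
}

Deno.test("add sums two numbers", () => {
  assertEquals(add(2, 3), 5);
});
```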
I, I think it's interesting because with JavaScript in particular, it feels like the default in the community is the opposite, right? There are so many different ways. Uh, there are so many different build tools and testing frameworks and formatters, and it's very different than, like you were mentioning, a Go or a Rust that are more recent languages where they just include that, all bundled in. Yeah.
[00:26:57] Luca: Yeah, and I, I think you can see this as well in, in the time that the average JavaScript developer spends configuring their tooling compared to a Rust developer. Like, if I write Rust, I write Rust, like, all day, every day, and I spend maybe two, 3% of my time configuring Rust tooling, like doing dependency imports, opening a new project, creating a formatter config file, I don't know, deleting the build directory, stuff like that. Like, that's, that's essentially what it means for me to configure my Rust tooling. Whereas if you compare this to like a front-end JavaScript project, like you have to deal with making sure that your React version is compatible with your react-dom version, is compatible with your Next version, is compatible with your Vite version, is compatible with your whatever version, right? This, this is all not automatic. Making sure that you use the right, like, as, as a front end developer, you don't have just npm installed, no. You have npm installed, you have Yarn installed, you have pnpm installed. You probably have, like, Bun installed. And, and, and I don't know, to use any of these, you need to have corepack enabled in Node, and like you need to have all of their global bin directories symlinked into your, or, or, uh, included in your path. And then if you install something and you wanna update it, you don't know, did I install it with Yarn? Did I install it with pnpm? Like this is, uh, significant complexity, and you, you tend to spend a lot of time dealing with dependencies and dealing with package management and dealing with like tooling configuration, setting up ESLint, setting up Prettier. And I, I think that like, especially Prettier, for example, really showed, was, was one of the first things in the JavaScript ecosystem which was like, no, we're not gonna give you a config that you can spend like six hours configuring, it's gonna be like seven options and here you go. And everybody used it because nobody likes configuring things, it turns out. Um, and even though there's always the people that say, oh, well, I won't use your tool unless, like, we, we get this all the time. Like, I'm not gonna use deno fmt because I can't, I don't know, remove the semicolons or, or use single quotes or change my tab width to 16. Right? Like, wait until all of your coworkers are gonna scream at you because you set the tab width to 16, and then see what they change it to. And then you'll see that it's actually the exact default that everybody uses. So it'll, it'll take a couple more years, but I think we're also gonna get there. Uh, like, Node is starting to implement a, a test runner, and I, I think over time we're also gonna converge on, on, on like some standard build tools. Like I think Vite, for example, is a great example of this. Like, doing a front end project nowadays, um, like building new front end tooling that's not built on Vite? Yeah, don't. Like, Vite's become the standard, and I think we're gonna see that in a lot more places.
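As a sketch of how little configuration that workflow implies in Deno (not from the episode; the file and function names are placeholders, and the commands are shown as comments):

```ts
// main.ts -- with a single file like this, the built-in toolchain covers the
// whole loop; no package.json, bundler config, or formatter config needed:
//
//   deno fmt main.ts     # format
//   deno lint main.ts    # lint
//   deno test            # run any *_test.ts files in the project
//   deno run main.ts     # execute (TypeScript works out of the box)
export function greet(name: string): string {
  return `Hello, ${name}!`;
}

// `import.meta.main` is true when the module is run directly.
if (import.meta.main) {
  console.log(greet("Deno"));
}
```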
We should settle on what tools to use
[00:29:52] Jeremy: Yeah, though I, I think it's, it's tricky, right? Because you have so many people with their existing projects. You have people who are starting new projects and they're just searching the internet for what they should use. So you're, you're gonna have people on webpack, you're gonna have people on Vite, I guess now there's gonna be Turbopack, I think is another one that's
[00:30:15] Luca: Mm-hmm.
[00:30:16] Jeremy: There's, there's, there's all these different choices, right? And I, I think it's, it's hard to, to really settle on one, I guess.
[00:30:26] Luca: Yeah,
[00:30:27] Jeremy: uh, yeah.
[00:30:27] Luca: like I, I, I think this is, this is in my personal opinion also a failure of the Node technical steering committee, for the longest time, to not decide that yes, we're going to bless this as the standard format for Node, and this is the standard package manager for Node. And they did, they sort of did, like, they, for example, Node blessed npm as the standard, package manager for, for Node. But it didn't innovate on npm. Like, the Node technical steering committee did not force npm to innovate. npm is a private company, ultimately bought by GitHub, and they had full control over how the npm CLI, um, evolved, and nobody forced npm to, to make sure that package install times are six times faster than they were three years ago. Like, nobody did that. So it didn't happen. And I think this is, this is really a failure of, of the, yeah, the Node technical steering committee and also the wider JavaScript ecosystem, of not being persistent enough with, with like focus on performance, focus on user experience, and, and focus on simplicity. Like, things got so out of hand, and I'm happy we're going in the right direction now, but, yeah, it was terrible for some time. (laughs)
Node compatibility layer
[00:31:41] Jeremy: I wanna talk a little bit about how we've been talking about Deno in the context of you just using Deno with its own standard library, but just recently last year you added a compatibility shim where people are able to use Node libraries in Deno.
[00:32:01] Luca: Mm-hmm.
[00:32:01] Jeremy: And I wonder if you could talk to, like earlier you had mentioned that Deno has a different permissions model. On the website it mentions that Deno's HTTP server is two times faster than Node in a Hello World example. And I'm wondering what kind of benefits people will still get from Deno if they choose to use packages from Node.
[00:32:27] Luca: Yeah, it's a great question. Um, so I think, again, this is sort of a, like, so just to clarify what we actually implemented: what we have is we have support for you to import npm packages. Um, so you can import any npm package from npm, from your TypeScript or JavaScript ECMAScript module, um, that you, you already have for your Deno code. Um, and we will, under the hood, make sure that it is installed somewhere in some directory globally, like pnpm does. There's no local node_modules folder you have to deal with. There's no package.json you have to deal with. Um, and there's no, uh, package.json-like versioning things you need to deal with. Like, what you do is you do import cowsay from "npm:cowsay@1", and that will import cowsay with like the semver tag one. Um, and it'll like do the semver resolution the same way Node does, or the same way npm does rather.
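Spelled out as code, the npm specifier flow he describes looks roughly like this (the package and version are the example from the conversation; the exact cowsay call is just an illustration):

```ts
// No package.json and no node_modules folder in the project; Deno fetches
// and caches the package globally and resolves the "@1" semver range.
import cowsay from "npm:cowsay@1";

console.log(cowsay.say({ text: "an npm package, imported from Deno" }));
```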
And what you get from that is that essentially it gives you like this backdoor to call out to all of the existing Node code that has already been written, right? Like, you cannot expect that Deno developers write, like, I don't know... There was this time when Deno did not really have that many third party modules yet. It was very early on, and, I don't know, if you wanted to connect to Postgres and there was no Postgres driver available, then the solution was to write your own Postgres driver. And that is obviously not great. Um, (laughs). So the better solution here is to let users, for these packages where there's no Deno-native or, or, or web-native or standards-native, um, package for this yet that is importable with URL specifiers, you can import this from npm. Uh, so it's sort of this like backdoor into the existing npm ecosystem. And we explicitly, for example, don't allow you to, create a package.json file or, import bare Node specifiers, because we don't, we, we want to stay standards compliant here. Um, but to make this work effectively, we need to give you this little backdoor. Um, and inside of this backdoor, all hell is like, or like everything is terrible inside there, right? Like, inside there you can do bare specifiers, and inside there you can like, uh, there's package.json and there's crazy Node resolution and __dirname and CommonJS. And like all of that stuff is supported inside of this backdoor to make all the npm packages work. But on the outside it's exposed as these nice, ESM-only, npm specifiers. And the, the reason you would want to use this over, like, just using Node directly is because, again, like you wanna use TypeScript, no config, like, necessary. You wanna use, you wanna have a formatter, you wanna have a linter, you wanna have tooling that like does testing and benchmarking and compiling or whatever. All of that's built in. You wanna run this on the edge, like close to your users, in like 30 different, 35 different, uh, points of presence. Um, it's like, okay, push it to your Git repository, go to this website, click a button two times, and it's running in 35 data centers. Like, this is, this is the kind of, like, developer experience that you, I will argue that you cannot get with Node right now. Like, even if you're using something like ts-node, it is not possible to get the same level of developer experience that you do with Deno. And the, the, the same like speed at which you can iterate, iterate on your projects, like create new projects, iterate on them, is like incredibly fast in Deno. Like, I can open a, a, a folder on my computer, create a single file, main.ts, put some code in there and then call deno run main.ts. And that's it. Like, I did not need to do npm install, I did not need to do npm init -y and remove the license and version fields from, from the generated package.json and like set private to true and whatever else, right? It just all works out of the box. And I think that's, that's what a lot of people come to Deno for and, and then ultimately stay for. And also, yeah, standards compliance. So, um, things you build in Deno now are gonna work in five, 10 years, with no hassle.
Node shims and testing
[00:36:39] Jeremy: And so with this compatibility layer, or this, this shim, is it where the Node code is calling out to Node APIs and you're replacing those with Deno-compatible equivalents?
[00:36:54] Luca: Yeah, exactly.
Like for example, we have a shim in place that shims out the node crypto API on top of the Web Crypto API. Like, sort of, some, some people may be familiar with this in the form of, um, Browserify shims, if anybody still remembers those. It's essentially, in your front end tooling, you were able to import from like node crypto in your front end projects, and then behind the scenes your webpacks or your Browserifys or whatever would take that import from node crypto and would replace it with like the shim that essentially exposed the same APIs as node crypto, but under the hood wasn't implemented with native calls, but was implemented on top of Web Crypto, or implemented in userland even. And Deno does something similar. There's a couple edge cases of APIs where, where we do not expose the underlying thing that we shim to, to end users, outside of the Node shim. So like there's some, some APIs that, I don't know if I have a good example, like process.nextTick for example. Um, like to properly be able to shim process.nextTick, you need to like implement this within the event loop in the runtime. And you don't need this in Deno, because in Deno you use the web standard queueMicrotask to, to do this kind of thing. But to be able to shim it correctly and run Node applications correctly, we need to have this sort of like backdoor into some ugly APIs, um, which, which natively integrate in the runtime, but, yeah, like allow, allow this Node code to run.
[00:38:21] Jeremy: A, anytime you're replacing a component with a, a shim, I think there's concerns about additional bugs or changes in behavior that can be introduced. Is that something that you're seeing, and, and how are you accounting for that?
[00:38:38] Luca: Yeah, that's, that's an excellent question. So this is actually a, a great concern that we have all the time. And it's not just even introducing bugs, sometimes it's removing bugs. Like, sometimes there's bugs in the Node standard library which are there, and people are relying on these bugs to be there for their applications to function correctly. And we've seen this a lot, and then we implement this, and we implement it from scratch, and we don't make that same bug, and then the test fails or then the application fails. So what we do is, um, we actually run Node's test suite against Deno's shim layer. So Node has a very extensive test suite for its own standard library, and we can run this suite against, against our shims to find things like this. And there's still edge cases, obviously, which, like, maybe there's a bug which Node was not even aware of existing, um, where maybe this, like, it's, it's now standard, it's now like intended behavior, because somebody relies on it, right? Like, the second somebody relies on, on some non-standard or some buggy behavior, it becomes intended. Um, but maybe there was no test that explicitly tests for this behavior. Um, so in that case we'll add our own tests to, to ensure that. But overall we can already catch a lot of these by just testing against, against Node's tests. And then the other thing is we run a lot of real code, like we'll try running Prisma and we'll try running Vite and we'll try running NextJS and we'll try running, like, I don't know, a bunch of other things that people throw at us, and check that they work. If they work and there's no bugs, then we did our job well and our shims are implemented correctly.
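As a rough sketch of the shimming idea (this is not Deno's actual shim code; it just shows the shape of exposing Node-style APIs on top of web-standard ones):

```ts
// Node's crypto.randomBytes, approximated with Web Crypto's getRandomValues.
// (The real randomBytes returns a Buffer and supports a callback form; this
// simplified version returns a plain Uint8Array.)
function randomBytes(size: number): Uint8Array {
  const bytes = new Uint8Array(size);
  crypto.getRandomValues(bytes);
  return bytes;
}

// Node's process.nextTick, approximated with the web-standard queueMicrotask.
// (The real nextTick has its own queue with slightly different ordering.)
function nextTick(callback: (...args: unknown[]) => void, ...args: unknown[]): void {
  queueMicrotask(() => callback(...args));
}

console.log(randomBytes(16)); // 16 random bytes as a Uint8Array
nextTick(() => console.log("runs after the current synchronous work"));
```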
Um, and then there's obviously always the edge cases where somebody did something absolutely crazy that nobody thought possible, and then they'll open an issue on the Deno repo and we scratch our heads for three days and then we'll fix it. And then in the next release there'll be a new bug that we added to make the compatibility with Node better. So yeah, but, yeah, running tests is the, is the main thing, running Node's tests.
Performance should be equal or better
[00:40:32] Jeremy: Are there performance implications? If someone is running an Express app or a NextJS app in Deno, will they get any benefits from the Deno runtime and performance?
[00:40:45] Luca: Yeah. It's actually, there are performance implications, and they're usually the opposite of what people think they are. Like, usually when you think of performance implications, it's always a negative thing, right? It's always, okay, like, it's like a compromise: like, the shim layer must be slower than the real Node, right? It's not like we can run Express faster than Node can run Express. And obviously not everything is faster in Deno than it is in Node, and not everything is faster in Node than it is in Deno. It's dependent on the API, dependent on, on what each team decided to optimize. Um, and this also extends to other runtimes. Like, you can always cherry-pick results, like, I don't know, um, to, to make your runtime look faster in certain benchmarks. But overall, what really matters is that, like, the first important step for, for good Node compatibility is to make sure that if somebody runs your code or runs their Node code in Deno or your other runtime or whatever, it performs at least the same. And then anything on top of that, great, cherry on top. Perfect. But make sure the baseline is at least the same. And I think, yeah, we have very few APIs where, where, where like there's a significant performance degradation in Deno compared to Node. Um, and like we're actively working on these things. Like, Deno is not a, a, a project that's done, right? Like we have, I think at this point, like 15 or 16 or 17 engineers working on Deno, spanning across all of our different projects. And like, we have a whole team that's dedicated to performance, um, and a whole team that's dedicated to Node compatibility. So like these things get addressed, and, and we make patch releases every week and a minor release every four weeks. So yeah, it's, it's not a standstill. It's, uh, constantly improving.
What should go into the standard library?
[00:42:27] Jeremy: Uh, something that kind of makes Deno stand out is its standard library. There's a lot more in there than there is in, in the Node one.
[00:42:38] Luca: Mm-hmm.
[00:42:39] Jeremy: Uh, I wonder if you could speak to how you make decisions on what should go into it.
[00:42:46] Luca: Yeah, so early on it was easier. Early on, the, the decision making process was essentially, is this something that a top 100 or top 1000 npm library implements? And if it is, let's include it. And the decision making is still sort of based on that. But right now we've already implemented most of the low hanging fruit, so things that we implement now have, have discussion around them whether we should implement them. And we have a process where, well, we have a whole team of engineers on our side, and we also have community members that, that will review PRs and, and, and make comments.
Open issues and, and review those issues, to sort of discuss the pros and cons of adding any certain new API. And sometimes it's also that somebody opens an issue that's like, I want, for example, I want an API to, to concatenate two Uint8Arrays together, which is something you can really easily do in Node with Buffer.concat, like the scary Buffer thing. And there's no standards way of doing that right now, so we have to have a little utility function that does that. But in parallel, we're thinking about, okay, how do we propose an addition to the web standards now that makes it easy to concatenate Uint8Arrays in the web standards, right? Yeah, there's a lot to it. Um, but it's, it's really, um, it's all open, like all of our, all of our discussions for, for additions to the standard library and things like that. It's all, uh, public on GitHub, in the GitHub issues and GitHub discussions and GitHub PRs. Um, so yeah, that's, that's where we do that.
[00:44:18] Jeremy: Yeah, cuz to give an example, I was a little surprised to see that there is support for markdown front matter built into the standard library. But when you describe it as, we look at the top hundred or thousand packages, are people looking at markdown? Are they looking at front matter? I, I'm sure there's a fair amount that are, so that, that makes sense.
[00:44:41] Luca: Yeah, like, sometimes, like that one specifically was driven by, like, our team was just building a lot of like little blog pages and things like that. And every time, it was either you roll your own front matter parser or you look for one, which has like a subtle bug here and the other one has a subtle bug there, and you're really not satisfied with any of them. So, we, we roll that into the standard library. We add good test coverage for it, add good documentation for it, and then it's like just a resource that people can rely on. Um, and you don't, you then don't have to make the choice of like, do I use this library to do my front matter parsing or the other library? No, you just use the one that's in the standard library. It's, it's also part of this like user experience thing, right? Like, it's just a much nicer user experience, not having to make a choice about stuff like that. Like completely inconsequential stuff, like which library do we use to do front matter parsing? (laughs)
[00:45:32] Jeremy: Yeah. I mean, I think when, when that stuff is not there, then I think the temptation is to go, okay, let me see what Node modules there are that will let me parse the front matter. Right. And then it, it sounds like probably ideally you want people to lean more on what's either in the standard library or what's native to the Deno ecosystem. Yeah.
[00:46:00] Luca: Yeah. Like the, the, one of the big benefits is that the Deno standard library is implemented on top of web standards, right? Like, it's, it's implemented on top of these standard APIs. So for example, there's Node front matter libraries which do not run in the browser because the browser does not have the Buffer global. Maybe it's a nice library to do front matter parsing with, but, you choose it and then three days later you decide that actually this code also needs to run in the browser, and then you need to go switch your front matter library. Um, so, so those are also kind of reasons why we may include something in the standard library. Like, maybe there's even a really good module already to do something.
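A minimal sketch of the kind of utility he mentions (not the standard library's actual implementation): concatenating Uint8Arrays with only web-standard APIs, instead of Node's Buffer.concat.

```ts
// Concatenate chunks without Buffer; only Uint8Array is used, so this works
// the same in browsers, Deno, and Node.
function concatUint8Arrays(chunks: Uint8Array[]): Uint8Array {
  const total = chunks.reduce((sum, chunk) => sum + chunk.length, 0);
  const result = new Uint8Array(total);
  let offset = 0;
  for (const chunk of chunks) {
    result.set(chunk, offset);
    offset += chunk.length;
  }
  return result;
}

const joined = concatUint8Arrays([
  new TextEncoder().encode("hello, "),
  new TextEncoder().encode("world"),
]);
console.log(new TextDecoder().decode(joined)); // "hello, world"
```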
Um, but if there's a certain reliance on specific Node features and, um, we would like that library to also be compatible with, with web standards, we, uh, we might include it in the standard library. Like for example, the YAML parser, um, the YAML parser in the standard library is, is a fork of, uh, of the Node YAML module, and it's, it's essentially that, but cleaned up and, and made to use more standard APIs rather than, um, Node built-ins.
[00:47:00] Jeremy: Yeah, it kind of reminds me a little bit of when you're writing a front end application, sometimes you'll use Node packages to do certain things and they won't work unless you have a compatibility shim where the browser can make use of certain Node APIs. And if you use the APIs that are built into the browser already, then you won't, you won't need to deal with that sort of thing.
[00:47:26] Luca: Yeah. Also like less bundle size, right? Like, if you don't have to shim that, that's less, less code you have to ship to the client.
WebAssembly use cases
[00:47:33] Jeremy: Another thing I've seen with Deno is it supports running web assembly.
[00:47:40] Luca: Mm-hmm.
[00:47:40] Jeremy: So you can export functions and call them from TypeScript. I was curious if you've seen practical uses of this in production within the context of Deno.
[00:47:53] Luca: Yeah, there's actually a bunch of, of really practical use cases. So probably the most executed bit of web assembly inside of Deno right now is actually esbuild. Like, esbuild has a web assembly build. Like, esbuild is something that's written in Go. You have the choice of either running it, um, natively in machine code as, as like an ELF process on, on Linux or on, on Windows or whatever, or you can use the web assembly build, and then it runs in web assembly. And the web assembly build is maybe 50% slower than the, uh, native build, but that is still significantly faster than Rollup or, or, or, I don't know, whatever else people use nowadays to do JavaScript bundling, I don't know. I, I just use esbuild always. Um, so, um, for example, the Deno website is running on Deno Deploy. And Deno Deploy does not allow you to run subprocesses, because it's, it's like this edge runtime which, uh, has certain security permissions that are not granted, one of them being subprocesses. So it needs to execute esbuild, and the way it executes esbuild is by running it inside web assembly. Um, because web assembly is secure. Web assembly is, is something which is part of the JavaScript sandbox. It's inside the JavaScript sandbox, it doesn't poke any holes out. Um, so it's, it's able to run within, within like a very strict security context. Um, and then other examples are, I don't know, you want to have an HTML sanitizer which is actually built on the real HTML parser in a browser. We, we have an HTML sanitizer called, uh, ammonia, I don't remember. There's, there's an HTML sanitizer library on deno.land/x which is built on the HTML parser from Firefox. Uh, which like ensures essentially that your HTML, like, if you do HTML sanitization, you need to make sure your HTML parser is correct, because if it's not, your browser might parse some HTML one way and your sanitizer parses it another way and then it doesn't sanitize everything correctly. Um, so there's this, like, Firefox HTML parser compiled to web assembly. Um, you can use that to do HTML sanitization. Or the Deno documentation generation tool, for example.
Uh, Deno Doc, there's a web assembly build for it that allows you to programmatically, like, generate documentation for, for your TypeScript modules. Um, yeah, and, and also, like, you know, deno fmt is available as a WebAssembly module for programmatic access, and a bunch of other internal Deno programs as well. Like, or, uh, like, components, not programs.
[00:50:20] Jeremy: What are some of the current limitations of web assembly and Deno? For, for example, from web assembly, can I make HTTP requests? Can I read files? That sort of thing.
[00:50:34] Luca: Mm-hmm. Yeah. So web assembly, like when you spawn a web assembly, um, they're called instances, WebAssembly instances. It runs inside of the same VM, like the same V8 isolate is what they're called, but it's like a completely fresh sandbox, sort of, in the sense that, like I told you, an engine essentially implements no IO calls, right? And a runtime does; like, a runtime pokes holes into the, the, the engine. Web assembly by default works the same way, in that there are no holes poked into its sandbox. So you have to explicitly poke some holes. Uh, if you want to do HTTP calls, for example, when, when you create a web assembly instance, you can give it something called imports, uh, which are essentially JavaScript function bindings which you can call from within the web assembly. And you can use those function bindings to do anything you can from JavaScript. You just have to pass them through explicitly. And, yeah, depending on how you write your web assembly, like, if you write it in Rust, for example, the tooling is very nice and you can just call some JavaScript code from your Rust, and then the build system will automatically make sure that the right function bindings are passed through with the right names, and like, you don't have to deal with anything. And if you're writing Go, it's slightly more complicated. And if you're writing like raw web assembly, like, like the WebAssembly text format, and compiling that to a binary, then like you have to do everything yourself. Right? It's, it's sort of the difference between writing C and writing JavaScript. Like, yeah, what level of abstraction do you want? It's definitely possible though. And as for limitations, the same limitations as, as existing browsers apply. Like, the web assembly support in Deno is equivalent to the web assembly support in Chrome. So you can do, uh, many things like multi-threading and, and stuff like that already. But especially around shared mutable memory, um, and having access to that memory from JavaScript, that's something which is a real difficulty with web assembly right now. Yeah, growing web assembly memory is also rather difficult right now. There's, there's a, there's a couple inherent limitations right now with web assembly itself. Um, but those, those will be worked out over time. And, and Deno is like very up to date with the version of, of the standard it, it implements, um, through V8. Like we're, we're, we're up to date with Chrome Beta essentially all the time. So, um, yeah, anything you see in, in, in Chrome Beta is gonna be in Deno already.
Deno Deploy
[00:52:58] Jeremy: So you talked a little bit about this before. The Deno team, they have their own hosting platform called Deno Deploy. So I wonder if you could explain what that is.
[00:53:12] Luca: Yeah, so Deno has this really nice, this really nice concept of permissions which allow you to, sorry, I'm gonna start somewhere slightly, slightly unrelated. Maybe it sounds like it's unrelated, but you'll see in a second it's not unrelated. Um, Deno has this really nice permission system which allows you to sandbox Deno programs to only allow them to do certain operations. For example, in Deno, by default, if you try to open a file, it'll error out and say you don't have read permissions to read this file. And then what you do is you specify --allow-read. Um, you can either specify allow-read on its own, and then it'll grant read access to the entire file system, or you can explicitly specify files or folders or, any number of things. Same goes for write permissions, same goes for network permissions. Um, same goes for running subprocesses, all these kind of things. And by limiting your permissions just a little bit, like, for example, by just disabling subprocesses and foreign function interface, but allowing everything else, allowing reads and allowing network access and all that kind of stuff, we can run Deno programs in a way that is significantly more cost effective to you as the end user, and, and like we can cold start them much faster than, like, you may be able to with a, with a more conventional container based, uh, system. So what, what Deno Deploy is, is a way to run JavaScript or Deno code on our data centers all across the world with very little latency. Like, you can write some JavaScript code which serves HTTP requests, deploy that to our platform, and then we'll make sure to spin that code up all across the world and have your users be able to access it through some URL or, or, or some, um, custom domain or something like that. And this is, this is very similar to Cloudflare Workers, for example. Um, and it's like, Netlify Edge Functions is built on top of Deno Deploy. Like, Netlify Edge Functions is implemented on top of Deno Deploy, um, through our subhosting product. Yeah, essentially Deno Deploy is, is, um, yeah, a cloud hosting service for JavaScript, um, which allows you to execute arbitrary JavaScript. And there, there's a couple, like, different directions we're going there. One is like more end user focused, where like you link your GitHub repository and, like, we'll, we'll have a nice experience like you do with Netlify and Vercel, where like your commits automatically get deployed and you get preview deployments and all that kind of thing, for your backend code though, rather than for your front end websites. Although you could also write front-end websites, and you know, obviously. And the other direction is more like business focused. Like, you're writing a SaaS application and you want to allow the user to customize, the checkout, like, you're writing a SaaS application that provides users with the ability to write their own online store, um, and you want to give them some ability to customize the checkout experience in some way. So you give them a little, like, text editor that they can type some JavaScript into. And then when, when your SaaS application needs to hit this code path, it sends a request to us with the code, we'll execute that code for you in a secure way, in a secure sandbox. You can, like, tell us, this code only has access to like my API server and no other networks, to like prevent data exfiltration, for example.
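To make the two pieces just described concrete, here is a minimal sketch (not from the episode; the hostnames, file path, and handler are placeholders): a web-standard HTTP handler of the kind Deno Deploy runs, plus the permission flags used when running it locally with recent Deno versions.

```ts
// server.ts -- run locally with an explicit allow-list, for example:
//   deno run --allow-net=api.example.com --allow-read=./data server.ts
// Anything outside those grants (other hosts, writes, subprocesses) fails.
Deno.serve(async (_req: Request): Promise<Response> => {
  // Allowed only because of --allow-net=api.example.com.
  const upstream = await fetch("https://api.example.com/status");

  // Allowed only because of --allow-read=./data.
  const config = await Deno.readTextFile("./data/config.json");

  return new Response(
    JSON.stringify({ upstream: upstream.status, config: JSON.parse(config) }),
    { headers: { "content-type": "application/json" } },
  );
});
```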
And then you, you can have all this, like, super customizable code inside of your, your SaaS application without having to deal with any of the operational complexities of scaling arbitrary code execution, or even just doing arbitrary code execution, right? Like, this is a very difficult problem, and you give it to someone else, and we deal with it, and you just get the benefits. Yeah, that's Deno Deploy, and it's built by the same team that builds the Deno CLI. So, um, all the, all of your favorite, like, Deno CLI or, or Deno APIs are available in there. It's just as web standard as Deno: like, you have fetch available, you have Blob available, you have Web Crypto available, that kind of thing. Yeah.
Running code in V8 isolates
[00:56:58] Jeremy: So when someone ships you their, their code and you run it, you mentioned that the, the cold start time is very low. Um, how, how is the code being run? Are people getting their own process? It sounds like it's not, uh, using containers. I wonder if you could explain a little bit about how that works.
[00:57:20] Luca: Yeah, yeah, I can, I can give a high level overview of how it works. So, the way it works is that we essentially have a pool of, of Deno processes ready. Well, it's not quite Deno processes; it's not the same Deno CLI that you download. It's like a modified version of the Deno CLI, based on the same infrastructure, that we have spun up across all of our different regions across the world, uh, across all of our different data centers. And then when we get a request, we'll route that request. Um, the first time we get a request for that, we call them deployments, that, like, code, right? We'll take one of these idle Deno processes and will assign that code to run in that process, and then that process can go serve the requests. And these processes, they're, they're, they're isolated, and it's essentially a V8 isolate. Um, and it's a very, very slim, it's like, it's a much, much, much slimmer version of the Deno CLI essentially, uh, where the only thing it can do is JavaScript execution. And like, it can't even execute TypeScript, for example; like, TypeScript we pre-process up front to make the, the cold start faster. And then what we do is, if you don't get a request for some amount of time, we'll, uh, spin down that, um, that isolate and, uh, we'll spin up a new idle one in its place. And then, um, if you get another request, I don't know, an hour later for that same deployment, we'll assign it to a new isolate. And yeah, that's a cold start, right? Uh, if you have an isolate which receives, or a, a deployment rather, which receives a bunch of traffic, like, let's say you receive a hundred requests per second, we can send a bunch of that traffic to the same isolate. Um, and we'll make sure that if that one isolate isn't able to handle that load, we'll spin it out over multiple isolates and we'll, we'll sort of load balance for you. Um, and we'll make sure to always send to the, to the point of presence that's closest to, to the user making the request, so they get very minimal latency. And we've got these, like, layers of load balancing in place, and, and, and I'm glossing over a bunch of like security related things here about how these, these processes are actually isolated and how we monitor to ensure that you don't break out of these processes. And for example, in Deno Deploy it looks like you have a file system, cuz you can read files from the file system.
But in reality, Deno Deploy does not have a file system. Like, the file system is a global virtual file system, which is, is, uh, yeah, implemented completely differently than it is in the Deno CLI. But as an end user you don't have to care about that, because the only thing you care about is that it has the exact same API as the Deno CLI, and you can run your code locally, and if it works there, it's also gonna work in Deploy. Yeah, so that's, that's, that's kind of the high level of Deno Deploy. If, if any of this sounds interesting to anyone, by the way, uh, we're like very actively hiring on, on Deno Deploy. I happen to be the, the tech lead for, for the Deno Deploy product. So I'm, I'm always looking for engineers to, to join our ranks and, and build cool distributed systems. Deno.com/jobs.
[01:00:15] Jeremy: For people who aren't familiar with the isolates, are these each run in their own processes, or do you have a single process and that has a whole bunch of isolates inside it?
[01:00:28] Luca: In, in the general case, you can say that we run, uh, one isolate per process. But there's many asterisks on that. Um, because, it's, it's very complicated. I'll just say it's very complicated. Uh, in, in the general case though, it's, it's one isolate per process. Yeah.
Configuring permissions
[01:00:45] Jeremy: And then you touched a little bit on the permissions system. Like, you gave the example of somebody could have a website where they let their users give them code to execute. How does it look in terms of specifying what permissions people have? Like, is that a configuration file? Are those flags you pass in? What, what does that look like?
[01:01:08] Luca: Yeah. So, so that product is called subhosting. It's, um, slightly different from our end user platform. Um, it's essentially a service that allows you to, like, you email us, we'll, um, onboard you, and then what you can do is you can send HTTP requests to a certain endpoint with an authentication token and a reference to some code to execute. And then what we'll do is, um, when we receive that HTTP request, we'll fetch the code, spin up an isolate, execute the code, serve the request, return you the response, um, and then we'll pipe logs to you and, and stuff like that. And, and part of that is also, when we, when we pull the, um, the, the code to spin up the isolate, that code doesn't just include the code that we're executing, but also includes things like permissions and, and various other, we call this isolate configuration. Um, you can inspect, this is all public. We have public docs for this at Deno.com/subhosting, I think. Yes, Deno.com/subhosting.
[01:02:08] Jeremy: And is that built on top of something that's a part of the public Deno project, the open source part? Or is this specific to this sub hosting
By Walt Hickey
Welcome to the Numlock Sunday edition.
This week, I spoke to Megan Garber, who wrote the new essay collection On Misdirection: Magic, Mayhem, American Politics from The Atlantic. Megan is a writer at The Atlantic, and the magazine has compiled a number of her essays into the new book. It's a great read, an exploration into the ways that American political actors have parlayed the techniques of entertainment to their own ends. Today, we talked about amusing ourselves to death, what happens to a country when politics becomes entertainment, and Dwight Schrute. Megan can be found at The Atlantic, and the book, as well as several other new compilations of essays from the magazine, is available wherever books are sold. This interview has been condensed and edited.
Megan Garber, thank you so much for joining us.
Thanks for having me.
You have a new book, it's a collection of a lot of your essays at The Atlantic, it's called On Misdirection. What prompted you to figure out this beat and tease out that you were covering misdirection over the past couple years?
A lot of the things that have really interested me about politics and political discourse, let's say, over the past few years are the ways that we are trained to see each other and then also to not see each other. It seems like so many things, so many of the big political stories, particularly at the beginning of the presidency of Donald Trump, and then up till now, so much has come down to are we seeing what we should be seeing, or are we in fact looking away from what we should be seeing?
Ideas about vision are actually one of the main drivers of all of these essays, which are very different other than that. I'm a political junkie, I love to follow politics and all of that, but I kept feeling for myself just as a news consumer, "Is this really the most important thing right now?" All these shiny distractions, daily outrages that come and go, and I know I myself, as a news consumer, often feel very addled, almost, and just in a constant state of distraction.
So these essays really do try to figure out what happens to that form of distraction on a mass scale. If I'm not the only one feeling this, but if a lot of people are feeling this, what are the consequences of that?
I loved how also you kept it in some of the more conventional forms of media as well, too. I know that a lot of our conversation about distraction has been related to social media and algorithms and kind of blamed on Silicon Valley ghosts that are destroying our brains.
But a lot of what you talk about is just super day to day. It's the way people talk about other people, whether it's on television or radio or things like that. Do you want to expand on how it's not just necessarily what we're doing online?
The first essay, actually, is a look-back at the scholar Neil Postman, who's one of my favorite thinkers, critics, et cetera. He wrote a book called Amusing Ourselves to Death in 1985 that was looking at the impact of television, essentially, on American culture. And as you might guess from the title, making an argument that the entertainment has slipped the bonds of mere fun and mere escapism and distraction and has actually come into our lives and come to infiltrate lives in a lot of ways.
Looking at him in retrospect is the first essay in the collection.
We chose that specifically because I think one of the other arguments underlying a lot that's in the book is that entertainment, as much as I love it, and I am an inveterate lover of entertainment of all kinds, but it can, I think, also become fairly pernicious when it becomes our standard of judging things in the political realm.
One example that's in another essay in there is the first impeachment trial of Donald Trump. The talking points, it seemed to me, among Trump's allies had nothing to do with the facts at hand. This was a legal proceeding, conducted by lawyers, by Meta lawyers, in fact, in Congress. Yet the arguments were nothing to do with the facts, but "this is boring." That was essentially what it came down to. "Ugh, snooze, ugh, no one's watching this." All that kind of stuff.
When again, this was an impeachment trial of a president, there were facts at play, and yet the talking points completely elided that. What struck me as well, though, was it was not just partisan talking points. One news organization had an entire op-ed about the impeachment trial, sort of complaining that it lacked pizazz. Pizazz was literally the word that was used.
I think there's this way that if we're not careful, the sort of logic of entertainment itself, this idea that everything has to be fun, that boring is its own kind of factual argument, that's what can happen. That was what Neil Postman was talking about.
That, I think, is what's happening right now, too, where just entertainment becomes the only thing that matters at the end of the day. That can become, I think, pretty quickly dangerous and bad for us as a culture.
That was a really remarkable argument in the book. Again, I'm a huge fan of pop culture. I like being entertained, but it just felt weird how so much of the language and the desire of pop culture was being adapted and weaved into politics. You mentioned obviously Trump, and rallies, and the impeachment, but you had an example in there about Pete Buttigieg after the Iowa Caucus that I thought was really potent where it's just, the question isn't like, "Did we win?" It's like, "Aren't we having so much fun?"
Exactly right, and the Iowa Caucus is as famous and infamous for not having an immediate result. Very quickly, things went awry in a quite extreme manner there. Exactly what you said, Pete Buttigieg put out a talk saying, "We have shocked the nation," claiming victory even though no such victory had been claimed. Just like you said, this idea that shock is even part of the conversation, that shock is a value on its own, I think just speaks to the way that fun and high emotional stakes of everything are infiltrating, I think, our rhetoric and logic as a culture.
I think also just we talk a lot about overheated rhetoric. Just everything is heated, and everything is ratcheting up at all times, and I think one of the extensions of the ratcheting is that we as news consumers and as citizens just become accustomed to evermore levels of drama, of outrage, of everything. We're sort of losing the ability, I think, to have a moderate anything in our conversations. Everything is just bigger, dramatic jazz hands.
So, we may as well get to some of the heart of this.
There's obviously a guy who comes up a couple times in your book who is very good at this, bit of a controversial figure, but you just keep on coming back to him, I think, for reasons that are clear.
What draws you to Dwight Schrute?
I will say, during the early days of the pandemic, I've always been a fan of the show The Office, and I went back to it as a comfort watch, a soothing watch in these really awful days. I was newly familiar with The Office.
For anyone who might not be familiar, The Office is a U.S. sitcom, but it focuses on a very small office in Scranton, Pennsylvania. There is a boss, Michael Scott, who is kind of an oaf in a lot of ways. And then one of the other characters in the show is, yes, Dwight Schrute, who I've always been fascinated by, because he's this amazing contradiction, this walking category error.
He is a beet farmer, but he has these authoritarian tendencies. I'm trying to think of how to describe Dwight. He's just a lot of things at once. I think one of the things that's so interesting about him is that he is this person who very much thinks that he knows better than everyone else what the rules are, that he can decide the rules for himself and then, importantly, inflict them on other people.
So, Dwight thinks he is basically the ultimate agent of law enforcement, literally and otherwise, in the office. In fact, again and again he is a physical danger to his colleagues. Just that tension in Dwight felt very resonant to me, as you say, for other political figures and power players as well. I wanted to look at Dwight as almost a character and a trope who conveys so much about the people in political power, often, who make up their own rules and then enforce them and inflict them on everyone else.
This idea of, "We're doing it because I said so," and that's the only explanation you're going to get, and these lies that just, everyone just lies without any real sense of backlash or anything. And a lot of that, to me, seemed to be conveyed in Dwight.
There's an appeal to him. You can understand, in a democracy where appeal is a key component of accessing power, that despite the obvious flaws in his leadership capabilities for a large duration, you can see how a guy like that just might appeal to a large group of people. I guess we can now broaden it out a bit, how do you think that applies to American society as a whole?
A lot of the supporters of the fellow we've been talking about, poll after poll suggests that they feel a sense of encroachment. They feel like they used to be de facto at the top of American society and feel like now they are being pushed down a bit. I think there's a lot of indignation there and a lot of wanting to feel a little bit reassured that, "No, you still do have power. You still do. You can still say for everyone else, as you have throughout history."
I think there's something about Dwight definitely that sort of conveys that idea. Donald Trump, very famously and infamously, promised, "I alone can fix it," with 'it' being fill in the blank. There's something in that message, there's something very reassuring to people who feel very caught in a tumult and who feel very unsettled and everything. So much is in flux right now and I think to just have that sort of authoritarian presence who can just say, "Trust me, I've got this. I can make the world make sense again," I think there's something very appealing just about that message.
Then, of course, there's a question of how true that is, how politically problematic that is, et cetera.
But in terms of rhetoric, I think that's very powerful. There's the adjunct to that message, which is if Donald Trump can say what's what, if he can look at an orange and say it's an apple, and just by force of will have the orange in some sense become an apple, I think there's also a silent message to people that they might have that same agency. They can still be the ones who decide. There's a very powerful message in that.
You had a line toward the end of that essay, I think, that was just resolving Dwight's arc. You wrote this I think in October 2020, which was a fascinating time for a lot of people. You basically wrote that "his arc as an agent of chaos is simply not sustainable." Toward the end of it, he domesticates a little bit just because that's what folks want. I guess, how do you see that potentially applying beyond strictly the American television program The Office?
One of the things that's so interesting to me about The Office itself is that you could see, or at least when I was rewatching it, what really struck me as a writer — not a writer of sitcoms, but a writer in general — I could see the type of arc that they were trying to give different characters. Just like you said, Dwight, after a while, a character like that can't simply stay an agent of chaos. There has to be some kind of evolution and some kind of arc to the character, or else it just gets too repetitive.
Something about the arcs, I think, is very revealing because I think to the Neil Postman point, in the very broad sense, Americans are being conditioned to understand the world in roughly the same way as a sitcom understands the world, which is a character like Dwight needs, the arc needs, the evolution needs a bit of catharsis at the end.
A lot of us are now coming to see the world itself in those terms, where we expect our political stories, we expect our real stories of everyday life to also have some tidy conclusions, to also mimic the flow of a TV show and a sitcom.
That's one thing I would say, there is this logic of sitcom built into things, and I think that's what can make so many of the problems we have, which are so big and intractable — climate change would be one I would point to — that really resists a Schrutean narrative arc.
It makes it sometimes hard for us to talk about. I would also say that The Office's writers recognized how deeply viewers — and I would also then say citizens and people and news consumers — how desperately we crave a catharsis at the end, in whatever form that might look like. Catharsis is a very important idea, both in sitcom writing and in the broader world.
I like that idea. I do want to talk to you a little bit about the arc of your book, which was really, really great. It's a collection of essays, and I imagine that the order in which you present them, there was a lot of thought that went into that. You kick it off very much talking about irony and satire and how they're having a good moment, you talk a little bit about the Science March. I'll let you take it from there a little bit.
But in the end, you also finish on the idea that "if you brand yourself an entertainer and not a journalist, you can spread falsehoods in the name of fun." You start off in a place where people are having fun for, one might think, deliberate and somewhat positive-facing means. And then in the end, that can get co-opted in a manner.
Do you want to maybe talk about some of that?
Sure, and thank you, that's such a good observation, totally.
The book begins in this essay about Neil Postman looking at the March for Science, which was put on in the same general time that the Women's March was happening, that people were trying to find ways to protest against the new presidency. This was a march that was very self-consciously designed to support science, facts, et cetera. I did not attend myself, but I was looking through Instagram afterward and looking at all the photos, and that's the way of the modern march, is to have your march, which happens in person, translate to Instagram, translate to memes.
One of the things that you're supposed to do, really, as a good attender of these marches is to come up with a costume that will go viral, perhaps. I mean, there were some really good jokes, they were great, they were great costumes, great signs, all that stuff. But I just kept thinking, what now? Speaking of catharsis, is this enough catharsis for people? Is this going to feel like, okay, well we did this, so what else can we do? That's enough. We've had our catharsis, we've made our point?
I don't mean to suggest that everyone involved just stopped at the march, but I do think that sometimes when this becomes our mode of political expression, there is a little bit of a, "Okay, but how are we going to actually defend science in real life? How are we going to defend women's rights in real life?" I worry sometimes that just the fun itself and the act of togetherness and all of that can be its own catharsis, and then not actually translate to additional action in the real world.
That's a real good point. I do want to stay here, because I know that we're on a roll, but it is interesting because the Science March, it seemed a very fun vibe. Everybody picked their favorite XKCD, it was a good time.
Then if you were to compare that, as you just did, to the Women's March, which was distinctly not as much of a good time, one of those movements had a little bit more staying power, one might say.
That's totally right. I want to also be clear that I think the fun elements of things can be great. Throughout American history, fun has been an important means of political expression. People sometimes forget the book Common Sense, the Thomas Paine tract that at some level really did help to foment the revolution, not only was it passionately argued and this very compelling piece of rhetoric, it was also just really funny. It was a work of entertainment. People would read it aloud to each other around the fire, and it had that level of making politics fun.
That is a really important element of politics, to make people feel engaged. But then I think for me the question is: To what extent does the fun encourage us? To what extent does it activate us? To what extent does it bring us together in community, or to what extent does it sort of alienate us from the reality of politics and condition us to see, again, everything as entertainment? In which case the fun isn't the means, but the fun is the end, essentially.
Do you want to talk a little bit about how you close the book?
I know you talk a little bit about Tucker, but you also talk a little bit about basically how everybody's having fun now.It's not just a technique used by those out of power to somewhat mock and undermine those in power, it's also used to enforce it a little bit, too.Speaking of Tucker Carlson, he famously in a legal case, his lawyer argued on his behalf that he is not a journalist, he is an entertainer and therefore can say anything he wants to say, and that argument won, that argument held sway.I think again and again, rhetoric that I would see as propaganda that really is designed to make certain Americans think that other Americans are less American and in some sense less human, that's a big part of the rhetoric going on in that show. It comes across, it is presented as entertainment. It's presented as, "Ah, we're just asking questions." Like, "Oh, it's not that big of a deal."There's a real minimization of rhetoric that I find to be very dangerous and frankly scary. We see that idea again and again. One of the subsidiary ideas that I tried to consider in these essays is, "What does propaganda actually look like?" Because at least for me, when I hear that word, I think of Soviet billboards and I think of the mid-century and very sort of overt, direct, "You should believe this."And now propaganda has taken on this much more insidious form where it's the same types of messages, it's the same attempt to win hearts and minds over to a cause, whatever the cause may be. But the propaganda itself is not overt; instead it is very buried in just messages that look like fun, that look like just entertainment. That is a really scary development because it means the propaganda can have even more power than it might otherwise to affect the way people see the world.Again, I really enjoy your work. I'm so happy that it's been compiled into this, On Misdirection.I have recently, and then for a little bit of a while, I have had increasingly complicated feelings toward The Daily Show with Jon Stewart. I was a teenager during the Bush administration, and that was very much, I think, something that was formative for me.But I think it's impossible to look at what came after and what that flowed into, even Tucker directly, somewhat, through that somewhat fateful Crossfire interview. I think it's impossible to look back at the past 15 years and not see the fingerprints of that on a lot of different political movements that are not necessarily what it was originally going for.Yeah, that's such a good point. I would say, too, I mean, I think The Daily Show in my mind is a little bit of a piece of a broader collapse, almost. The Daily Show is very much a response to the rise of just reality TV in general. I would argue that the whole point of that genre is to collapse the real and the fake into one thing and be entirely unclear about where the reality ends and the fiction begins.The Daily Show is very much an extension of that. Around the same time, you've had just so many other cultural works in that space where the whole point is just to poke fun at the idea that you can even distinguish between fact and fiction. That, to be clear, that is not propaganda on its own, but I would also say that this idea that fact and fiction on some level can't be extricated from each other, that is a very foundational argument of any propaganda.I think we're starting in the '90s with reality TV, to some extent with social media as well. 
Where are the people on social media, are they people at all or are they characters in a show? It can be very hard to tell. We've been on this path since at least the '90s, possibly before, where just everything blurs together, and the fact looks like fiction, the serious stuff looks like entertainment, the entertainment looks like serious stuff, and everything is just in this blurry, chaotic mess.Again, you mentioned the Science March, but I went to the Rally to Restore Sanity when I was 20. The fun vibes of that, "We're all in this together." But, like, that was also the thing in D.C. from January 5th to 7th, 2021. I love in your book just how you went through all the different ways that this is manifesting.Thank you so much. Speaking of the order, we were going for that arc, so I appreciate that that was really clear, because it really does feel like one of those sort of paths that you can see in retrospect. And at the time, it's hard to know what's exactly happening, but now even just 10 years later, five years later, things become much more clear. And then, too, at the end of the book, the final essay is about how endings themselves, the sense of things will come to a satisfying conclusion, that that alone, that logic — which is so much a product, I think, of movies and TV shows and all of that — how that logic alone can be really pernicious for people, because most things will not have an ending.Most things are fluid, news stories are fluid. Yes, there are some beginnings and some endings, but usually they're going to defy that in some way. I think as Americans, we are so conditioned to expect the catharsis, expect either the happy ending or the dramatic one. I think the arc of the past few decades really shows how connected everything is and how hard it is to distinguish the beginning of one thing and the end of the other.I've got to say, I almost wonder if it's systemic in the States. The thing that I envy the most about parliamentary systems is that inevitably, the country's leader, "the protagonist," will leave in shame. They will lose eventually. And you will have a conclusion to the end of the Winston Churchill arc of the United Kingdom. We don't have that. Barack Obama's still around, Donald Trump's still around. I wonder how much that's systemic.No, that's such a good point. I will admit this is a little bit extreme of me, but I actually do think it's true; you look in pop culture right now and what do we have but sequel, after sequel, after sequel? The highest grossing movies of 2022 were all sequels. We have this idea of the end of endings, essentially. And it's not just in politics, it's sort of everywhere.On the one hand, we crave the endings and expect the endings, but on the other hand, we live in a culture where nothing necessarily ends. The sitcom, however many years later, will get its almost inevitable reboot. Thanos will clap his hands, and that will all be undone. I won't say anything else for anyone who hasn't seen, but there is this sense, I think, that even the ending is not necessarily an ending. There can be resurrections and all of that stuff. Like you said, the presidency never ends, it just sort of takes its final form.Do you think that maybe that's going to get people a little bit more comfortable living in that ambiguity of things never necessarily ending?It might. 
It very much might, but then I also think that that desire for the ending is just so baked into our culture that I think it will be more of a tension becoming more comfortable with the flux.Well, you have teed this up perfectly because I would like to end this podcast. Megan, thank you so much for coming on.Thank you.This was such a great conversation. Why don't you tell folks where they can find the book, a little bit about it and where folks can find you?The book is called On Misdirection: Magic, Mayhem, American Politics. It's really just a look at ways of seeing in politics, and the ways that we have of not seeing in politics; how we look at each other, and then fail to look at each other; how our vision is often misdirected by the magicians in power in politics. You can buy the book, as far as I know, wherever books are sold. I know I have a big preference for IndieBound. I love that site, but everywhere books are sold.Great. I know some of your colleagues are coming out with other ones of these aggregations of essays.If I could share those, please, that would be great, too. We have Lenika Cruz, my colleague, writing on BTS. She is, I would say, one of the foremost experts on BTS and fandom and it's a lovely book, really. It actually made me very emotional reading it; it's wonderful.Past and future guest of this particular newsletter, Lenika Cruz.Oh, you're going to have so much fun. That's great. Then the other one is my friend and colleague, Sophie Gilbert, writing on womanhood and her experiences with womanhood, a feminist examination of pop culture and so much else, and that, too, is beautifully written. It's wonderful. So both of those books are excellent, excellent.Excellent. All right. Well, hey Megan, thanks so much for coming on. I really appreciate it.Oh, thank you. This is so nice to talk.Well, we'll see if we can reboot it next year.Yeah, inevitably, yes.If you have anything you'd like to see in this Sunday special, shoot me an email. Comment below! Thanks for reading, and thanks so much for supporting Numlock.Thank you so much for becoming a paid subscriber! Send links to me on Twitter at @WaltHickey or email me with numbers, tips or feedback at walt@numlock.news. Get full access to Numlock News at www.numlock.com/subscribe
Follow-up A nice explanation of the odd trajectory Orion had to fly Topics CES 2023: First 'affordable' solar car, the Lightyear 2, announced A talking BMW with changing colors (and a cringe video with Arnie and The Hoff) Sony and Honda announce a new EV: Afeela The (Qualcomm) Digital Chassis is the future of automotive. The ZeroLabs Scalable EV Platform lets you turn your 'classic car' into an EV MicroLED: the holy grail for TVs? The first wireless 4K OLED TV from LG? Hisense claims the best affordable LED TV with ULED. Competition is finally on the way for the Apple displays from Samsung and Dell. OLED is finally breaking through in gaming monitors. Smart urine sensor from Withings | 'Pee analysis' from Withings Yogabook laptop with 2 screens Tips Steven: Triage 2 Maarten: Kwis.app | Avatar in IMAX 3D HFR Ruurd: Personal Body Plan | AirPods Pro 2 Matty: AirParrot | Synology Photos | Session Karel: 'Samen door 2022', the end-of-year comedy show by Jade Mintjens. Still playing 9 more times on Flemish stages until early February, but also available in full on Streamz. Funnier than Geert Hoste has ever been, subtler and sharper than Kamal Kharmach. 'Wat Als' and 'Wat Als 2.0' by Randall Munroe, author of XKCD. Serious scientific answers to absurd hypothetical questions. He has even given a TED talk about it!
Building With People For People: The Unfiltered Build Podcast
How long does it take to get your code into the hands of your customers? Do you manually copy your files to a production server? If you answered 'longer than a day' and 'yes', then the code deployment product Semaphore is what you need. Today, we chat with Marko Anastasov, the co-founder of Semaphore, a code integration and delivery platform, about the inception, creation, and his team's learning journey building Semaphore. His story is riddled with encounters of monoliths and microservices, tales of building a learning culture, and reflections around the human factors in building tech products, like why do we make the technical decisions we make? Marko is a product guy and a programmer guy and has been a maker since he was a kid. He earned a Master of Science in computer science from the University of Novi Sad in Serbia. Currently, he is a founding partner of Rendered Text, a remote Rails consulting shop, and the co-founder of Semaphore. When Marko is not helping companies ship code faster, he is exercising and spending time with his wife and 4-year-old daughter. Connect with Marko: Twitter LinkedIn Website Show notes and helpful resources: Semaphore CI Marko's article on What is Proper Continuous Integration? XKCD comic about compiling Marko's article picked up by Hacker News, The Cracking Monolith: The Forces that Call for Microservices You should be able to describe your microservice in one sentence without saying the word “and” Marko's article on 7 ways continuous delivery helps build a culture of learning Building something cool or solving interesting problems? Want to be on this show? Send me an email at jointhepodcast@unfilteredbuild.com Podcast produced by Unfiltered Build - dream.design.develop.
Overview I have recently been wondering about the use of abbreviations which are built from the first letter of a word followed by a number and the last letter. The number represents the count of letters between the start and end letter. Thus accessibility becomes a11y. This came to light (to me anyway) during an email exchange with Mike Ray regarding the accessibility issues on the tag index page on the HPR site. The website issues were resolved, but I was left wondering how useful the term a11y is, or whether it just jars with me! According to the Wikipedia article this type of word is known as a numeronym, but they may also be referred to as alphanumeric acronyms, alphanumeric abbreviations, or numerical contractions. As the Wikipedia article notes these types of abbreviations are almost always used to refer to their computing sense — such as g11n for globalisation — in the context of computing, not the general context. Looking at a11y as an example While I sympathise with the motivation behind using 'a11y' to mean accessibility, I do find it odd and counter-intuitive. I often find myself pondering the acceptability of this type of abbreviation. How many other words in common English fit patterns like this I wonder? Quite a few I would expect. How does this affect the admissibility of such abbreviations? Not only are they adventurously strange to my simple brain, but I find them to be aesthetically displeasing. My experiments with the standard Linux dictionary looking for words that fit this pattern I find affirmatively supportive of this view. I describe this experiment later. Algebraically, it is to be expected that there are many dictionary words of 13 characters which start with 'a' and end with 'y'. Looking at them allegorically, such numeronyms convey little meaning except in very limited contexts since the motivation seems to be to reduce the need to type long words. Alternatively, if they were accepted by data entry software and expanded automatically a better case could be made for applicability, but only one word could be assigned to a numeronym. In my mind there is a certain artificiality in the use of these abbreviations. You might wonder at the weird rambling nature of the above section - this was my (small) joke to try and use many of the words that match the a11y pattern. Here's the result of transforming them: While I sympathise with the motivation behind 'a11y' to mean accessibility, I do find it odd and counter-intuitive. I often find myself pondering the a11y of this type of abbreviation. How many other words in common English fit these patterns I wonder? Quite a few I would expect. How does this affect the a11y of such abbreviations? Not only are they a11y strange to my simple brain, but I find them to be a11y displeasing. My experiments with the standard Linux dictionary looking for words that fit this pattern I find a11y supportive of this view. I describe this experiment later. A11y, it is to be expected that there are many dictionary words of 13 characters which start with 'a' and end with 'y'. Looking at them a11y, such numeronyms convey little meaning except in very limited contexts since the motivation seems to be to reduce the need to type long words. A11y, if they were accepted by data entry software and expanded a11y a better case could be made for a11y, but only one word could be assigned to a numeronym. In my mind there is a certain a11y in the use of these abbreviations. 
Make your own numeronyms The following piece of Bash scripting scans the file /usr/share/dict/words and picks out words which match the a11y pattern (after removing those ending in 's). It writes each word and the numeronym it computes from it; the computation is unnecessary in this case, since they all generate the same numeronym, but I did it this way because I wanted to apply the algorithm to other words:

while read -r word; do
    printf '%-20s %s\n' "$word" "${word:0:1}$((${#word}-2))${word: -1}"
done < <(grep -v "'s$" /usr/share/dict/words | grep -E '^a.{11}y$')
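Run against a typical Linux wordlist, every matching word collapses to the same abbreviation, so the output looks something like this (the exact words depend on your dictionary):

accessibility        a11y
acceptability        a11y
admissibility        a11y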
James C. Scott, Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed, 1998. XKCD, Always try to get data good enough that you don't need to do statistics on it. Mark Twain, Life on the Mississippi, 1883. Jane Jacobs, The Death and Life of Great American Cities, 1961. Rosa Luxemburg, Organizational Questions of Russian Social Democracy; The Mass Strike, the Political Party and the Trade Unions; The Russian Revolution. Credits: Image of a cow being given a physical exam ("bright or dull") courtesy Dawn Marick.
About RamDr. Ram Sriharsha held engineering, product management, and VP roles at the likes of Yahoo, Databricks, and Splunk. At Yahoo, he was both a principal software engineer and then research scientist; at Databricks, he was the product and engineering lead for the unified analytics platform for genomics; and, in his three years at Splunk, he played multiple roles including Sr Principal Scientist, VP Engineering and Distinguished Engineer.Links Referenced: Pinecone: https://www.pinecone.io/ XKCD comic: https://www.explainxkcd.com/wiki/index.php/1425:_Tasks TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by our friends at Chronosphere. Tired of observability costs going up every year without getting additional value? Or being locked into a vendor due to proprietary data collection, querying, and visualization? Modern-day, containerized environments require a new kind of observability technology that accounts for the massive increase in scale and attendant cost of data. With Chronosphere, choose where and how your data is routed and stored, query it easily, and get better context and control. 100% open-source compatibility means that no matter what your setup is, they can help. Learn how Chronosphere provides complete and real-time insight into ECS, EKS, and your microservices, wherever they may be at snark.cloud/chronosphere that's snark.cloud/chronosphere.Corey: This episode is brought to you in part by our friends at Veeam. Do you care about backups? Of course you don't. Nobody cares about backups. Stop lying to yourselves! You care about restores, usually right after you didn't care enough about backups. If you're tired of the vulnerabilities, costs, and slow recoveries when using snapshots to restore your data, assuming you even have them at all living in AWS-land, there is an alternative for you. Check out Veeam, that's V-E-E-A-M for secure, zero-fuss AWS backup that won't leave you high and dry when it's time to restore. Stop taking chances with your data. Talk to Veeam. My thanks to them for sponsoring this ridiculous podcast.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Today's promoted guest episode is brought to us by our friends at Pinecone and they have given their VP of Engineering and R&D over to suffer my various sling and arrows, Ram Sriharsha. Ram, thank you for joining me.Ram: Corey, great to be here. Thanks for having me.Corey: So, I was immediately intrigued when I wound up seeing your website, pinecone.io because it says right at the top—at least as of this recording—in bold text, “The Vector Database.” And if there's one thing that I love, it is using things that are not designed to be databases as databases, or inappropriately referring to things—be they JSON files or senior engineers—as databases as well. What is a vector database?Ram: That's a great question. And we do use this term correctly, I think. You can think of customers of Pinecone as having all the data management problems that they have with traditional databases; the main difference is twofold. One is there is a new data type, which is vectors. 
Vectors, you can think of them as arrays of floats, floating point numbers, and there is a new pattern of use cases, which is search.And what you're trying to do in vector search is you're looking for the nearest, the closest vectors to a given query. So, these two things fundamentally put a lot of stress on traditional databases. So, it's not like you can take a traditional database and make it into a vector database. That is why we coined this term vector database and we are building a new type of vector database. But fundamentally, it has all the database challenges on a new type of data and a new query pattern.Corey: Can you give me an example of what, I guess, an idealized use case would be of what the data set might look like and what sort of problem you would have in a vector database would solve?Ram: A very great question. So, one interesting thing is there's many, many use cases. I'll just pick the most natural one which is text search. So, if you're familiar with the Elastic or any other traditional text search engines, you have pieces of text, you index them, and the indexing that you do is traditionally an inverted index, and then you search over this text. And what this sort of search engine does is it matches for keywords.So, if it finds a keyword match between your query and your corpus, it's going to retrieve the relevant documents. And this is what we call text search, right, or keyword search. You can do something similar with technologies like Pinecone, but what you do here is instead of searching our text, you're searching our vectors. Now, where do these vectors come from? They come from taking deep-learning models, running your text through them, and these generate these things called vector embeddings.And now, you're taking a query as well, running them to deep-learning models, generating these query embeddings, and looking for the closest record embeddings in your corpus that are similar to the query embeddings. This notion of proximity in this space of vectors tells you something about semantic similarity between the query and the text. So suddenly, you're going beyond keyword search into semantic similarity. An example is if you had a whole lot of text data, and maybe you were looking for ‘soda,' and you were doing keyword search. Keyword search will only match on variations of soda. It will never match ‘Coca-Cola' because Coca-Cola and soda have nothing to do with each other.Corey: Or Pepsi, or pop, as they say in the American Midwest.Ram: Exactly.Corey: Yeah.Ram: Exactly. However, semantic search engines can actually match the two because they're matching for intent, right? If they find in this piece of text, enough intent to suggest that soda and Coca-Cola or Pepsi or pop are related to each other, they will actually match those and score them higher. And you're very likely to retrieve those sort of candidates that traditional search engines simply cannot. So, this is a canonical example, what's called semantic search, and it's known to be done better by these other vector search engines. There are also other examples in say, image search. Just if you're looking for near duplicate images, you can't even do this today without a technology like vector search.Corey: What is the, I guess, translation or conversion process of existing dataset into something that a vector database could use? Because you mentioned it was an array of floats was the natural vector datatype. 
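To make the "closest vectors to a given query" idea concrete, here is a minimal brute-force sketch in Python, with hand-written toy vectors standing in for real model embeddings. It is not Pinecone's implementation, and production systems use approximate indexes rather than scanning every record, but the scoring step is the same cosine-similarity notion of closeness described above.

import numpy as np

def cosine_similarity(a, b):
    # 1.0 means the vectors point the same way; values near 0 mean unrelated.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest(query, corpus, top_k=2):
    # Brute-force scan: score every stored vector against the query,
    # then return the top_k closest record ids.
    scored = [(doc_id, cosine_similarity(query, vec)) for doc_id, vec in corpus.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]

# Toy 4-dimensional "embeddings"; real models produce hundreds or thousands of dimensions.
corpus = {
    "soda":      np.array([0.9, 0.1, 0.0, 0.2]),
    "coca-cola": np.array([0.8, 0.2, 0.1, 0.3]),
    "laptop":    np.array([0.0, 0.9, 0.8, 0.1]),
}

query = np.array([0.85, 0.15, 0.05, 0.25])  # stands in for an embedded query such as "fizzy drink"
print(nearest(query, corpus))               # the two soft-drink vectors score highest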
I don't think I've ever seen even the most arcane markdown implementation that expected people to wind up writing in arrays of floats. What does that look like? How do you wind up, I guess, internalizing or ingesting existing bodies of text for your example use case?Ram: Yeah, this is a very great question. This used to be a very hard problem and what has happened over the last several years in deep-learning literature, as well as in deep-learning as a field itself, is that there have been these large, publicly trained models, examples will be OpenAI, examples will be the models that are available in Hugging Face like Cohere, and a large number of these companies have come forward with very well trained models through which you can pass pieces of text and get these vectors. So, you no longer have to actually train these sort of models, you don't have to really have the expertise to deeply figured out how to take pieces of text and build these embedding models. What you can do is just take a stock model, if you're familiar with OpenAI, you can just go to OpenAIs homepage and pick a model that works for you, Hugging Face models, and so on. There's a lot of literature to help you do this.Sophisticated customers can also do something called fine-tuning, which is built on top of these models to fine-tune for their use cases. The technology is out there already, there's a lot of documentation available. Even Pinecone's website has plenty of documentation to do this. Customers of Pinecone do this [unintelligible 00:07:45], which is they take piece of text, run them through either these pre-trained models or through fine-tuned models, get the series of floats which represent them, vector embeddings, and then send it to us. So, that's the workflow. The workflow is basically a machine-learning pipeline that either takes a pre-trained model, passes them through these pieces of text or images or what have you, or actually has a fine-tuning step in it.Corey: Is that ingest process something that not only benefits from but also requires the use of a GPU or something similar to that to wind up doing the in-depth, very specific type of expensive math for data ingestion?Ram: Yes, very often these run on GPUs. Sometimes, depending on budget, you may have compressed models or smaller models that run on CPUs, but most often they do run on GPUs, most often, we actually find people make just API calls to services that do this for them. So, very often, people are actually not deploying these GPU models themselves, they are maybe making a call to Hugging Face's service, or to OpenAI's service, and so on. And by the way, these companies also democratized this quite a bit. It was much, much harder to do this before they came around.Corey: Oh, yeah. I mean, I'm reminded of the old XKCD comic from years ago, which was, “Okay, I want to give you a picture. And I want you to tell me it was taken within the boundaries of a national park.” Like, “Sure. Easy enough. Geolocation information is attached. It'll take me two hours.” “Cool. And I also want you to tell me if it's a picture of a bird.” “Okay, that'll take five years and a research team.”And sure enough, now we can basically do that. The future is now and it's kind of wild to see that unfolding in a human perceivable timespan on these things. But I guess my question now is, so that is what a vector database does? What does Pinecone specifically do? 
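For the ingestion workflow just described, here is a small sketch of the pattern (embed the corpus, embed the query with the same model, compare vectors). The use of the open-source sentence-transformers library and the specific model name are illustrative assumptions, not anything Pinecone requires; a hosted API such as OpenAI or Cohere plays the same role, and the final step of upserting the vectors into a vector database is left out.

# pip install sentence-transformers
from sentence_transformers import SentenceTransformer
import numpy as np

# Any pre-trained embedding model can fill this role; hosted embedding APIs
# do the same job without running a model locally.
model = SentenceTransformer("all-MiniLM-L6-v2")

documents = ["soda on sale this week", "coca-cola six pack", "mechanical keyboard"]
doc_vectors = model.encode(documents, normalize_embeddings=True)

query_vector = model.encode(["fizzy drinks"], normalize_embeddings=True)[0]

# With normalized vectors, a dot product is the cosine similarity.
scores = doc_vectors @ query_vector
print(documents[int(np.argmax(scores))])  # expect one of the soft-drink documents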
It turns out that as much as I wish it were otherwise, not a lot of companies are founded on, “Well, we have this really neat technology, so we're just going to be here, well, in a foundational sense to wind up ensuring the uptake of that technology.” No, no, there's usually a monetization model in there somewhere. Where does Pinecone start, where does it stop, and how does it differentiate itself from typical vector databases? If such a thing could be said to exist yet.Ram: Such a thing doesn't exist yet. We were the first vector database, so in a sense, building this infrastructure, scaling it, and making it easy for people to operate it in a SaaS fashion is our primary core product offering. On top of that, this very recently started also enabling people who have who actually have raw text to not just be able to get value from these vector search engines and so on, but also be able to take advantage of traditional what we call keyword search or sparse retrieval and do a combined search better, in Pinecone. So, there's value-add on top of this that we do, but I would say the core of it is building a SaaS managed platform that allows people to actually easily store as data, scale it, query it in a way that's very hands off and doesn't require a lot of tuning or operational burden on their side. This is, like, our core value proposition.Corey: Got it. There's something to be said for making something accessible when previously it had only really been available to people who completed the Hello World tutorial—which generally resembled a doctorate at Berkeley or Waterloo or somewhere else—and turn it into something that's fundamentally, click the button. Where on that, I guess, a spectrum of evolution do you find that Pinecone is today?Ram: Yeah. So, you know, prior to Pinecone, we didn't really have this notion of a vector database. For several years, we've had libraries that are really good that you can pre-train on your embeddings, generate this thing called an index, and then you can search over that index. There is still a lot of work to be done even to deploy that and scale it and operate it in production and so on. Even that was not being, kind of, offered as a managed service before.What Pinecone does which is novel, is you no longer have to have this pre-training be done by somebody, you no longer have to worry about when to retrain your indexes, what to do when you have new data, what to do when there is deletions, updates, and the usual data management operations. You can just think of this is, like, a database that you just throw your data in. It does all the right things for you, you just worry about querying. This has never existed before, right? This is—it's not even like we are trying to make the operational part of something easier. It is that we are offering something that hasn't existed before, at the same time, making it operationally simple.So, we're solving two problems, which is we building a better database that hasn't existed before. So, if you really had this sort of data management problems and you wanted to build an index that was fresh that you didn't have to super manually tune for your own use cases, that simply couldn't have been done before. 
But at the same time, we are doing all of this in a cloud-native fashion; it's easy for you to just operate and not worry about.Corey: You've said that this hasn't really been done before, but this does sound like it is more than passingly familiar specifically to the idea of nearest neighbor search, which has been around since the '70s in a bunch of different ways. So, how is it different? And let me of course, ask my follow-up to that right now: why is this even an interesting problem to start exploring?Ram: This is a great question. First of all, nearest neighbor search is one of the oldest forms of machine learning. It's been known for decades. There's a lot of literature out there, there are a lot of great libraries as I mentioned in the passing before. All of these problems have primarily focused on static corpuses. So basically, you have a set of some amount of data, you want to create an index out of it, and you want to query it.A lot of literature has focused on this problem. Even there, once you go from small number of dimensions to large number of dimensions, things become computationally far more challenging. So, traditional nearest neighbor search actually doesn't scale very well. What do I mean by large number of dimensions? Today, deep-learning models that produce image representations typically operate in 2048 dimensions of photos [unintelligible 00:13:38] dimensions. Some of the OpenAI models are even 10,000 dimensional and above. So, these are very, very large dimensions.Most of the literature prior to maybe even less than ten years back has focused on less than ten dimensions. So, it's like a scale apart in dealing with small dimensional data versus large dimensional data. But even as of a couple of years back, there hasn't been enough, if any, focus on what happens when your data rapidly evolves. For example, what happens when people add new data? What happens if people delete some data? What happens if your vectors get updated? These aren't just theoretical problems; they happen all the time. Customers of ours face this all the time.In fact, the classic example is in recommendation systems where user preferences change all the time, right, and you want to adapt to that, which means your user vectors change constantly. When even these sort of things change constantly, you want your index to reflect it because you want your queries to catch on to the most recent data. [unintelligible 00:14:33] have to reflect the recency of your data. This is a solved problem for traditional databases. Relational databases are great at solving this problem. A lot of work has been done for decades to solve this problem really well.This is a fundamentally hard problem for vector databases and that's one of the core focus areas [unintelligible 00:14:48] painful. Another problem that is hard for these sort of databases is simple things like filtering. For example, you have a corpus of say product images and you want to only look at images that maybe are for the Fall shopping line, right? Seems like a very natural query. Again, databases have known and solved this problem for many, many years.The moment you do nearest neighbor search with these sort of constraints, it's a hard problem. So, it's just the fact that nearest neighbor search and lots of research in this area has simply not focused on what happens to that, so those are of techniques when combined with data management challenges, filtering, and all the traditional challenges of a database. 
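The filtering example ("only images from the Fall shopping line") is easy to express for a brute-force scan; the sketch below, on toy data, simply applies the metadata constraint first and then ranks whatever survives. As Ram points out, doing the same thing efficiently inside a large approximate nearest-neighbor index, while the data keeps changing, is the genuinely hard part, and this sketch does not attempt that.

import numpy as np

# Each record carries a vector plus metadata, mirroring the shopping-line example.
records = [
    {"id": "img-1", "season": "fall",   "vec": np.array([0.9, 0.1, 0.0])},
    {"id": "img-2", "season": "spring", "vec": np.array([0.8, 0.2, 0.1])},
    {"id": "img-3", "season": "fall",   "vec": np.array([0.1, 0.9, 0.3])},
]

def filtered_search(query, season, top_k=1):
    # Pre-filter on metadata, then rank the remaining vectors by cosine similarity.
    candidates = [r for r in records if r["season"] == season]
    def score(r):
        return float(np.dot(query, r["vec"]) / (np.linalg.norm(query) * np.linalg.norm(r["vec"])))
    return [r["id"] for r in sorted(candidates, key=score, reverse=True)[:top_k]]

query = np.array([0.85, 0.15, 0.05])
print(filtered_search(query, season="fall"))  # ['img-1']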
So, when you start doing that you enter a very novel area to begin with.Corey: This episode is sponsored in part by our friends at Redis, the company behind the incredibly popular open-source database. If you're tired of managing open-source Redis on your own, or if you are looking to go beyond just caching and unlocking your data's full potential, these folks have you covered. Redis Enterprise is the go-to managed Redis service that allows you to reimagine how your geo-distributed applications process, deliver and store data. To learn more from the experts in Redis how to be real-time, right now, from anywhere, visit snark.cloud/redis. That's snark dot cloud slash R-E-D-I-S.Corey: So, where's this space going, I guess is sort of the dangerous but inevitable question I have to ask. Because whenever you talk to someone who is involved in a very early stage of what is potentially a transformative idea, it's almost indistinguishable from someone who is whatever the polite term for being wrapped around their own axle is, in a technological sense. It's almost a form of reverse Schneier's Law of anyone can create an encryption algorithm that they themselves cannot break. So, the possibility that this may come back to bite us in the future if it turns out that this is not potentially the revelation that you see it as, where do you see the future of this going?Ram: Really great question. The way I think about it is, and the reason why I keep going back to databases and these sort of ideas is, we have a really great way to deal with structured data and structured queries, right? This is the evolution of the last maybe 40, 50 years is to come up with relational databases, come up with SQL engines, come up with scalable ways of running structured queries on large amounts of data. What I feel like this sort of technology does is it takes it to the next level, which is you can actually ask unstructured questions on unstructured data, right? So, even the couple of examples we just talked about, doing near duplicate detection of images, that's a very unstructured question. What does it even mean to say that two images are nearly duplicate of each other? I couldn't even phrase it as kind of a concrete thing. I certainly cannot write a SQL statement for it, but I cannot even phrase it properly.With these sort of technologies, with the vector embeddings, with deep learning and so on, you can actually mathematically phrase it, right? The mathematical phrasing is very simple once you have the right representation that understands your image as a vector. Two images are nearly duplicate if they are close enough in the space of vectors. Suddenly you've taken a problem that was even hard to express, let alone compute, made it precise to express, precise to compute. This is going to happen not just for images, not just for semantic search, it's going to happen for all sorts of unstructured data, whether it's time series, where it's anomaly detection, whether it's security analytics, and so on.I actually think that fundamentally, a lot of fields are going to get disrupted by this sort of way of thinking about things. We are just scratching the surface here with semantic search, in my opinion.Corey: What is I guess your barometer for success? I mean, if I could take a very cynical point of view on this, it's, “Oh, well, whenever there's a managed vector database offering from AWS.” They'll probably call it Amazon Basics Vector or something like that. 
Well, that is a—it used to be a snarky observation that, "Oh, we're not competing, we're just validating their market." Lately, with some of their competitive database offerings, there's a lot more truth to that than I suspect AWS would like.Their offerings are nowhere near as robust as what they pretend to be competing against. How far away do you think we are from the larger cloud providers starting to say, "Ah, we got the sense there was money in here, so we're launching an entire service around this?"Ram: Yeah. I mean, this is a—first of all, this is a great question. There's always something that's constantly, things that any innovator or disrupter has to be thinking about, especially these days. I would say that having a multi-year head start in the use cases, in thinking about how this system should even look, what sort of use cases should it [unintelligible 00:19:34], what the operating points for the [unintelligible 00:19:37] database even look like, and how to build something that's cloud-native and scalable, is very hard to replicate. Meaning if you look at what we have already done and kind of tried to base the architecture of that, you're probably already a couple of years behind us in terms of just where we are at, right, not just in the architecture, but also in the use cases in where this is evolving forward.That said, I think it is, for all of these companies—and I would put—for example, Snowflake is a great example of this, which is Snowflake needn't have existed if Redshift had done a phenomenal job of being cloud-native, right, and kind of done that before Snowflake did it. In hindsight, it seems like it's obvious, but when Snowflake did this, it wasn't obvious that that's where everything was headed. And Snowflake built something that's very technologically innovative, in a sense that it's even now hard to replicate. Plus, it takes a long time to replicate something like that. I think that's where we are at.If Pinecone does its job really well and if we simply execute efficiently, it's very hard to replicate that. So, I'm not super worried about cloud providers, to be honest, in this space, I'm more worried about our execution.
That's right, vector databases.” And off, you went to Pinecone. What did you see?Ram: So, first of all, in was some way or the other, I have been involved in machine learning and systems and the intersection of these two for maybe the last decade-and-a-half. So, it's always been something, like, in the in between the two and that's been personally exciting to me. So, I'm kind of very excited by trying to think about new type of databases, new type of data platforms that really leverages machine learning and data. This has been personally exciting to me. I obviously learned very different things from different companies.I would say that Yahoo was just the learning in cloud to begin with because prior to joining Yahoo, I wasn't familiar with Silicon Valley cloud companies at that scale and Yahoo is a big company and there's a lot to learn from there. It was also my first introduction to Hadoop, Spark, and even machine learning where I really got into machine learning at scale, in online advertising and areas like that, which was a massive scale. And I got into that in Yahoo, and it was personally exciting to me because there's very few opportunities where you can work on machine learning at that scale, right?Databricks was very exciting to me because it was an earlier-stage company than I had been at before. Extremely well run and I learned a lot from Databricks, just the team, the culture, the focus on innovation, and the focus on product thinking. I joined Databricks as a product manager. I hadn't played the product manager hat before that, so it was very much a learning experience for me and I think I learned from some of the best in that area. And even at Pinecone, I carry that forward, which is think about how my learnings at Databricks informs how we should be thinking about products at Pinecone, and so on. So, I think I learned—if I had to pick one company I learned a lot from, I would say, it's Databricks. The most [unintelligible 00:23:50].Corey: I would also like to point out, normally when people say, “Oh, the one company I've learned the most from,” and they pick one of them out of their history, it's invariably the most recent one, but you left there in 2018—Ram: Yeah.Corey: —then went to go spend the next three years over at Splunk, where you were a Senior Principal, Scientist, a Senior Director and Head of Machine-Learning, and then you decided, okay, that's enough hard work. You're going to do something easier and be the VP of Engineering, which is just wild at a company of that scale.Ram: Yeah. At Splunk, I learned a lot about management. I think managing large teams, managing multiple different teams, while working on very different areas is something I learned at Splunk. You know, I was at this point in my career when I was right around trying to start my own company. Basically, I was at a point where I'd taken enough learnings and I really wanted to do something myself.That's when Edo and I—you know, the CEO of Pinecone—and I started talking. And we had worked together for many years, and we started working together at Yahoo. We kept in touch with each other. And we started talking about the sort of problems that I was excited about working on and then I came to realize what he was working on and what Pinecone was doing. And we thought it was a very good fit for the two of us to work together.So, that is kind of how it happened. It sort of happened by chance, as many things do in Silicon Valley, where a lot of things just happen by network and chance. 
That's what happened in my case. I was just thinking of starting my own company at the time when just a chance encounter with Edo led me to Pinecone.Corey: It feels from my admittedly uninformed perspective, that a lot of what you're doing right now in the vector database area, it feels on some level, like it follows the trajectory of machine learning, in that for a long time, the only people really excited about it were either sci-fi authors or folks who had trouble explaining it to someone without a degree in higher math. And then it turned into—a couple of big stories from the mid-2010s stick out at me when we've been people were trying to sell this to me in a variety of different ways. One of them was, “Oh, yeah, if you're a giant credit card processing company and trying to detect fraud with this kind of transaction volume—” it's, yeah, there are maybe three companies in the world that fall into that exact category. The other was WeWork where they did a lot of computer vision work. And they used this to determine that at certain times of day there was congestion in certain parts of the buildings and that this was best addressed by hiring a second barista. Which distilled down to, “Wait a minute, you're telling me that you spent how much money on machine-learning and advanced analyses and data scientists and the rest have figured out that people like to drink coffee in the morning?” Like, that is a little on the ridiculous side.Now, I think that it is past the time for skepticism around machine learning when you can go to a website and type in a description of something and it paints a picture of the thing you just described. Or you can show it a picture and it describes what is in that picture fairly accurately. At this point, the only people who are skeptics, from my position on this, seem to be holding out for some sort of either next-generation miracle or are just being bloody-minded. Do you think that there's a tipping point for vector search where it's going to become blindingly obvious to, if not the mass market, at least more run-of-the-mill, more prosaic level of engineer that haven't specialized in this?Ram: Yeah. It's already, frankly, started happening. So, two years back, I wouldn't have suspected this fast of an adoption for this new of technology from this varied number of use cases. I just wouldn't have suspected it because I, you know, I still thought, it's going to take some time for this field to mature and, kind of, everybody to really start taking advantage of this. This has happened much faster than even I assumed.So, to some extent, it's already happening. A lot of it is because the barrier to entry is quite low right now, right? So, it's very easy and cost-effective for people to create these embeddings. There is a lot of documentation out there, things are getting easier and easier, day by day. Some of it is by Pinecone itself, by a lot of work we do. Some of it is by, like, companies that I mentioned before who are building better and better models, making it easier and easier for people to take these machine-learning models and use them without having to even fine-tune anything.And as technologies like Pinecone really mature and dramatically become cost-effective, the barrier to entry is very low. So, what we tend to see people do, it's not so much about confidence in this new technology; it is connecting something simple that I need this sort of value out of, and find the least critical path or the simplest way to get going on this sort of technology. 
And as long as it can make that barrier to entry very small and make this cost-effective and easy for people to explore, this is going to start exploding. And that's what we are seeing. And a lot of Pinecone's focus has been on ease-of-use, in simplicity in connecting the zero-to-one journey for precisely this reason. Because not only do we strongly believe in the value of this technology, it's becoming more and more obvious to the broader community as well. The remaining work to be done is just the ease of use and making things cost-effective. And cost-effectiveness is also what the focus on a lot. Like, this technology can be even more cost-effective than it is today.Corey: I think that it is one of those never-mistaken ideas to wind up making something more accessible to folks than keeping it in a relatively rarefied environment. We take a look throughout the history of computing in general and cloud in particular, were formerly very hard things have largely been reduced down to click the button. Yes, yes, and then get yelled at because you haven't done infrastructure-as-code, but click the button is still possible. I feel like this is on that trendline based upon what you're saying.Ram: Absolutely. And the more we can do here, both Pinecone and the broader community, I think the better, the faster the adoption of this sort of technology is going to be.Corey: I really want to thank you for spending so much time talking me through what it is you folks are working on. If people want to learn more, where's the best place for them to go to find you?Ram: Pinecone.io. Our website has a ton of information about Pinecone, as well as a lot of standard documentation. We have a free tier as well where you can play around with small data sets, really get a feel for vector search. It's completely free. And you can reach me at Ram at Pinecone. I'm always happy to answer any questions. Once again, thanks so much for having me.Corey: Of course. I will put links to all of that in the show notes. This promoted guest episode is brought to us by our friends at Pinecone. Ram Sriharsha is their VP of Engineering and R&D. And I'm Cloud Economist Corey Quinn. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry, insulting comment that I will never read because the search on your podcast platform is broken because it's not using a vector database.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.
Randall Munroe visits Google to discuss his book "What If? 2: Additional Serious Scientific Answers to Absurd Hypothetical Questions." The millions of people around the world who read the first "What If?" book still have questions, and those questions are getting stranger. Thank goodness xkcd creator Randall Munroe is here to help. Planning to ride a fire pole from the moon back to Earth? The hardest part is sticking the landing. Hoping to cool the atmosphere by opening everyone's freezer door at the same time? Maybe it's time for a brief introduction to thermodynamics. Want to know what would happen if you rode a helicopter blade, built a billion-story building, made a lava lamp out of lava, or jumped on a geyser as it erupted? Okay, if you insist. But before you go on a cosmic road trip, feed the residents of New York City to a T. rex, or fill every church with bananas, be sure to consult this practical guide for impractical ideas. Unfazed by absurdity, Randall consults the latest research on everything from swing-set physics to airplane-catapult design to clearly and concisely answer his readers' questions. As he consistently demonstrates, you can learn a lot from examining how the world might work in very specific extreme circumstances. This book is filled with bonkers science, boundless curiosity, and Randall's signature stick-figure comics. Randall Munroe is the author of the number one New York Times bestsellers "How To", "What If?", and "Thing Explainer"; the science question-and-answer blog “What If?”; and the popular web comic xkcd. A former NASA roboticist, he left the agency in 2006 to draw comics on the internet full time. Visit http://g.co/TalksAtGoogle/WhatIf to watch the video.
For the answers to the rest of the weirdest questions you never thought to ask, the New York Times bestselling author is back with What If? 2: Additional Serious Scientific Answers to Absurd Hypothetical Questions. In conversation with Derek Thompson, a staff writer at The Atlantic, where he writes the “Work in Progress” newsletter, host of the weekly news podcast “Plain English,” and author of Hit Makers. This program was held on September 14, 2022.
Seth Fishman is one of the biggest movers and shakers in the world of sci-fi publishing, representing Cixin Liu, Ann Leckie, Mary Robinette Kowal, Becky Chambers, Mur Lafferty, Rivers Solomon, P Djeli Clark, and many others (plus huge names in lots of other genres too, like Randall Munroe, the creator of XKCD). We had a great time talking with him about what's happening behind-the-scenes in the world of sci-fi publishing, as well as getting some great tips on how to get published. We also wanted to let everyone know we're taking a short break for the holidays this year (and so Cody can take his honeymoon!) - we'll be back on December 6th to talk about Octavia Butler's Parable of the Sower, and then returning to our regular schedule every two weeks starting January 3rd. We'll be missing y'all till then!
898. Randall Munroe joined me this week to talk about his language-themed xkcd cartoons, his simple-language project Up Goer Five, his biggest pet peeve, his favorite words, and his new book "What If? 2." But I have to confess that my favorite part was his tidbits about the bee laws.| Transcript: https://grammar-girl.simplecast.com/episodes/randall-munroe-of-xkcd| Buy What If? 2.| Subscribe to the newsletter for regular updates.| Watch my LinkedIn Learning writing courses.| Buy the Peeve Wars card game. | Grammar Girl books. | HOST: Mignon Fogarty| VOICEMAIL: 833-214-GIRL (833-214-4475) or https://sayhi.chat/grammargirl| Grammar Girl is part of the Quick and Dirty Tips podcast network. | Audio engineer: Nathan Semes | Editor: Adam Cecil | Advertising Operations Specialist: Morgan Christianson | Marketing and Publicity Assistant: Davina Tomlin | Digital Operations Specialist: Holly Hutchings | Intern: Kamryn Lacy | Theme music by Catherine Rannus.| Grammar Girl Social Media Links: YouTube. TikTok. Twitter. Facebook. Instagram. LinkedIn.
Since 2005, Randall Munroe has used his webcomic XKCD to comment on the world around him and express his love for pop culture, math, and science. Now, with the release of his new book "What If? 2: More Serious Scientific Answers to Absurd Hypothetical Questions", the cartoonist is yet again using his physics expertise to answer the most interesting hypothetical questions with a dash of humor. He sits down with Recode's Peter Kafka to reflect on his career so far, and what he has planned ahead. Featuring: Randall Munroe (@xkcd), creator of XKCD and author | Host: Peter Kafka (@pkafka), Senior Editor at Recode | More to explore: Subscribe for free to Recode Media, Peter Kafka, one of the media industry's most acclaimed reporters, talks to business titans, journalists, comedians, and more to get their take on today's media landscape. About Recode by Vox: Recode by Vox helps you understand how tech is changing the world — and changing us. Learn more about your ad choices. Visit podcastchoices.com/adchoices
If you follow XKCD you know the author, Randall Munroe, applies science to absurd questions. One of those questions had to do with the possibility of soccer-ball-sized hail. Cover up with your steel umbrella and listen to find out if it's actually possible! XKCD: you should take a look! Images from Pixabay
Behind the Tech: XKCD is one of our favorite webcomics - and it started out as doodles in Randall Munroe's college notebooks. Munroe describes his work as “a webcomic of romance, sarcasm, math, and language.” In this episode, he joins Kevin to talk about how he got started, where his inspiration comes from and his latest book, What If? 2: Additional Serious Scientific Answers to Absurd Hypothetical Questions. Find out why a surprising number of cartoonists are physicists by training, explore the joy of seeking answers to seemingly impossible questions, and much more!
Watch the live stream: Watch on YouTube
About the show: Sponsored by Microsoft for Startups Founders Hub.

Brian #1: Uncommon Uses of Python in Commonly Used Libraries by Eugene Yan
- Specifically, using relative imports.
- Example from sklearn's base.py:
  from .utils.validation import check_X_y
  from .utils.validation import check_array
- "Relative imports ensure we search the current package (and import from it) before searching the rest of the PYTHONPATH."
- For relative imports, we have to use the from .something import thing form. We cannot use import .something, since later on in the code .something isn't valid.
- There's a good discussion of relative imports in PEP 328.

Michael #2: Skyplane Cloud Transfers
- Skyplane is a tool for blazingly fast bulk data transfers in the cloud. Skyplane manages parallelism, data partitioning, and network paths to optimize data transfers, and can also spin up VM instances to increase transfer throughput.
- You can use Skyplane to transfer data: between buckets within a cloud provider, between object stores across multiple cloud providers (experimental), and between local storage and cloud object stores.
- Skyplane takes several steps to ensure the correctness of transfers: checksums verify files exist and match sizes, and data transfers in Skyplane are encrypted end-to-end.
- Security: encrypted while in transit over TLS, plus config options.

Brian #3: 7 things I've learned building a modern TUI framework by Will McGugan
- Specifically, DictViews are amazing: they have set operations.
- Example of using items() to get views, then ^ for symmetric difference (done at the C level):
  # Get widgets which are new or changed
  print(render_map.items() ^ new_render_map.items())
- Lots of other great topics in the article: lru_cache is fast, Unicode art in addition to text in doc strings, the fractions module, a cool embedded video demo of some of the new CSS stuff in Textual, and Python's object allocator ASCII art.

Michael #4: ‘Unstoppable' Python
- Python popularity still soaring: ‘Unstoppable' Python once again ranked No. 1 in the August updates of both the Tiobe and Pypl indexes of programming language popularity.
- Python first took the top spot in the index last October, becoming the only language besides Java and C to hold the No. 1 position.
- “Python seems to be unstoppable,” said the Tiobe commentary accompanying the August index.
- In the alternative Pypl Popularity of Programming Language index, which assesses language popularity based on Google searches of programming language tutorials, Python is way out front.

Extras
- Brian: Matplotlib stylesheets can make your chart look awesome with one line of code. But it never occurred to me that I could write my own style sheet. Here's an article discussing creation of custom Matplotlib stylesheets: The Magic of Matplotlib Stylesheets (XKCD Plots).
- Michael: Back on 295 we talked about Flet. We now have a Talk Python episode on it (live and polished versions).

Joke: Rakes and AWS
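
A minimal, self-contained sketch of the dict-view set operation from Brian's Textual item above; the render-map names and values here are invented for illustration and are not taken from Textual itself:

  old_render_map = {"header": "rev1", "body": "rev1", "footer": "rev1"}
  new_render_map = {"header": "rev1", "body": "rev2", "sidebar": "rev1"}

  # dict.items() returns a set-like view (as long as the values are hashable),
  # so ^ (symmetric difference) yields every (key, value) pair present in only
  # one of the two maps -- i.e. widgets that were added, removed, or re-rendered.
  changed = old_render_map.items() ^ new_render_map.items()
  print(sorted(changed))
  # [('body', 'rev1'), ('body', 'rev2'), ('footer', 'rev1'), ('sidebar', 'rev1')]

The same trick works for spotting drift between any two dicts (config, state, caches) without writing an explicit comparison loop.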
In this episode of Kubernetes Bytes, Ryan and Bhavin talk to the SIG Storage COSI Co-Lead Sid Mani about the Container Object Storage Interface (COSI) project, as it enters the Alpha phase of the maturity cycle. The discussion dives into the need for a different Object Storage standard, how it works with Kubernetes, the vision of the community, and how people/vendors can contribute to the ecosystem. Show links:
Acorn Labs - https://venturebeat.com/programming-development/open-source-acorn-takes-a-new-approach-to-deploy-cloud-native-application-on-kubernetes/
Ghost Security Emerges from Stealth, Announces Initial $15M in Funding at $50M Valuation - https://ghost.security/blog/ghost-security-emerges-from-stealth-announces-initial-15m-in-funding?hsLang=en
Granulate launches a new free tool for optimizing K8s costs called gMaestro - https://sdtimes.com/kubernetes/granulate-launches-new-free-tool-for-optimizing-kubernetes-costs/
What's new with Kubernetes 1.25 - https://sysdig.com/blog/kubernetes-1-25-whats-new/
Kubernetes volumes for beginners - https://dev.to/iarchitsharma/kubernetes-volume-explained-for-beginners-3doj
Intro to eBPF - https://chrisshort.net/intro-to-ebpf
XKCD link - https://xkcd.com/927/
Kubernetes CSI 101 episode - https://anchor.fm/kubernetesbytes/episodes/Container-and-Kubernetes-Storage-101-e1647o1/a-a6cgu6a
Together, Matter and Thread are the new software and networking standards that promise to make all of your home automation and IoT gear work together, regardless of manufacturer. After Home Assistant's Paulus Schoutsen piqued our interest on the FOSS Pod, we decided to do a deep dive this week to demystify exactly what the two standards are and how they relate to one another, how they'll (hopefully) make things better, what they mean for your existing smart home equipment, and more.NOTESOur FOSS Pod ep about Home Assistant with maintainer Paulus Schoutsen: https://fosspod.content.town/episodes/home-assistant-with-paulus-schoutsenThe Connectivity Standards Alliance's Matter page: https://csa-iot.org/all-solutions/matter/The Verge's very helpful interview on Thread and Matter: https://www.theverge.com/23165855/thread-smart-home-protocol-matter-apple-google-interviewThe requisite XKCD: https://xkcd.com/927/Support the Pod! Contribute to the Tech Pod Patreon and get access to our booming Discord, your name in the credits, and other great benefits! You can support the show at: https://patreon.com/techpod
Chris talks about a small toy app he maintains on the side and working with a project called capybara_table. Steph is getting ready for maternity leave and wonders how you track velocity and know if you're working quickly enough? They answer a listener's question about where to get started testing a legacy app. This episode is brought to you by Airbrake (https://airbrake.io/?utm_campaign=Q3_2022%3A%20Bike%20Shed%20Podcast%20Ad&utm_source=Bike%20Shed&utm_medium=website). Visit Frictionless error monitoring and performance insight for your app stack. jnicklas/capybara_table: (https://github.com/jnicklas/capybara_table) Capybara selectors and matchers for working with HTML tables Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: CHRIS: Just gotta hold on. Fly this thing straight to the crash site. STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. CHRIS: And I'm Steph Viccari. STEPH: And together, we're here to share a bit of what we've learned along the way. I love that you rolled with that. [laughs] CHRIS: No, actually, it was the only thing I could do. I [laughs] was frozen into action is a weird way to describe it, but there we are. STEPH: I mentioned to you a while back that I've always wanted to do that. Today was the day. It happened. CHRIS: Today was the day. It wasn't even that long ago that you told me. I feel like you could have waited another week or two. I feel like maybe I was too prepared. But yeah, for anyone listening, you may be surprised to find out that I am not, in fact, Steph Viccari. STEPH: And they'll be surprised to find out that I actually am Chris Toomey. This is just a solo monologue. And you've done a great job of two voices [laughs] this whole time and been tricking everybody. CHRIS: It has been a struggle. But I'm glad to now get the proper recognition for the fact that I have actually [laughs] been both sides of this thing the whole time. STEPH: It's been a very impressive talent in how you've run both sides of the conversation. Well, on that note, [laughs] switching gears just a bit, what's new in your world? CHRIS: What's new in my world? Answering now as Chris Toomey. Let's see; I got two small updates, one a very positive update, one a less positive update. As is the correct order, I'm going to lead with the less positive thing. So I have a small toy app that I maintain on the side. I used to have a bunch of these little purpose-built singular apps, typically Rails app sort of things where I would play with a new technology, but it was some sort of like, oh, it's a tracker. It's a counter. We talked about breakable toys in the past. These were those, for me, serve different purposes, productivity things, or whatever. But at some point, I was like, this is too much work, so I consolidated them all. And I kept like, there was a handful of features that I liked, smashed them all together into one Rails app that I maintain. And that's just like my Rails app. It turns out it's useful to be able to program the internet. So I was like, cool, I'll do that for myself. I have this little app that I maintain. It's got like a journal in it and other things. I think I've talked about the journal in the past. But I don't actually take that good care of it. I haven't added any features in a while. It mostly just does what it's supposed to, but it had...entropy had gotten the better of it. 
And so, I had a very small feature that I wanted to add. It was actually just a Rake task that should run in the background on a schedule. And if something is out of order, then it should send me an email. Basically, just an update of like, you need to do something. It seemed like such a simple task. And then, oh goodness, the failure modes that I fell into. First, I was on Heroku-18. Heroku is currently on their Heroku-22 stack. 18 being the year, so it was like 2018, and then there's a 2020 stack, and then the 2022. That's the current one. So I was two stacks behind, and they were yelling at me about that. So I was like, okay, but whatever. Can I ignore that for a little while? Turns out no, because I couldn't even get the app to boot locally, something about some gems or some I think Webpacker was broken locally. So I was trying to fix things, finally got that to work. But then I couldn't get it to build on CircleCI because Node needed Python, Python 2 specifically, not Python 3, in order to build Node dependencies, particularly LibSass, I want to say, or node-sass. So node-sass needed Python 2, which I believe is end of life-d, to build a CSS authoring tool. And I kind of took a step back at that moment, and I was like, what did we do, everybody? What is going on here? And thankfully, I feel like there was more sort of unification of tools and simplification of the build tool space and whatnot. But I patched it, and I fixed some things, then finally I got it working. But then Memcache wasn't working, and I had to de-provision that and reprovision something. The amount of little...like, each thing that I fixed broke something else. I was like, the only thing I can do at this point is just burn the entire app down and rebuild it. Thankfully, I found a working version of things. But I think at some point, I've got to roll up my sleeves some weekend and do the full Rails, Ruby, everything upgrade, just get back to fresh. But my goodness, it was rough. STEPH: I feel like this is one of those reasons where we've talked in the past about you want to do something, and you keep putting it off. And it's like, if I had just sat down and done it, I could have knocked it out. Like, oh, it only took me like 5-10 minutes. But then there's this where you get excited, and then you want to dive in. And then suddenly, you do spend an hour or however long, and you're just focused on trying to get to the point where you can break ground and start building. I think that's the resistance that we're often fighting when we think about, oh, I'm going to keep delaying this because I don't know how long it's going to take. CHRIS: There's something that I see in certain programming communities, which is sort of a beginner-friendliness or a beginner's mindset or a welcomingness to beginners. I see it, particularly in the Svelte world, where they have a strong focus on being able to pick something up and run with it immediately. The entire tutorial is built as there's the tutorial on the one side, like the text, and then on the right side is an interactive REPL. And you're just playing with the Svelte REPL and poking around. And it's so tangible and immediate. And they're working on a similar thing now for SvelteKit, which is the meta-framework that does server-side rendering and all the fancy stuff. But I love the idea that that is so core to how the Svelte community works. 
And I'll be honest that other times, I've looked at it, and I've been like, I don't care as much about the first run experience; I care much more about the long-term maintainability of something. But it turns out that I think those two are more coupled than I had initially...like, how easy is it for a beginner to get started is closely related to or is, you know, the flip side of how easy is it for me to maintain that over time, to find the documentation, to not have a weird builder that no one else has ever seen. There's that wonderful XKCD where it's like, what's the saddest thing on the internet? Seeing the question that you have asked by one other person on Stack Overflow and no answers four years ago. It's like, yeah, that's painful. You actually want to be part of the boring, mundane, everybody's getting the same errors, and we have solutions to them. So I really appreciate when frameworks and communities care a lot about both that first run experience but also the maintainability, the error messages, the how okay is it for this system to segfault? Because it turns out segfaults prints some funny characters to your terminal. And so, like the range from human-friendly error message all the way through to binary character dump, I'm interested in folks that care about that space. But yeah, so that's just a bit of griping. I got through it. I made things work. I appreciate, again, the efforts that people are putting in to make that sad situation that I experienced not as common. But to highlight something that's really great and wonderful that I've been working with, there is a project called capybara_table. capybara_table is the gem name. And it is just this delightful little set of matchers that you can use within a Capybara, particularly within feature spec. So if you have a table, you can now make an assertion that's like, expect the table to have table row. And then you can basically pass it a hash of the column name and the value, but you can pass it any of the columns that you want. And you can pass it...basically, it reads exactly like the user would read it. And then, if there's an error, if it actually doesn't find it, if it misses the assertion, it will actually print out a little ASCII table for you, which is so nice. It's like, here's the table row that I saw. It didn't have what you were looking for, friend, sorry about that. And it's just so expressive. It forces accessibility because it basically looks at the semantic structure of a table. And if your table is not properly semantically structured, if you're not using TDs and TRs, and all that kind of stuff, then it will not find it. And so it's another one of those cases where testing can be a really useful constraint from the usability and accessibility of your application. And so, just in every way, I found this project works so well. Error messages are great. It forces you into a better way of building applications. It is just a wonderful little tool that I found. STEPH: That's awesome. I've definitely seen other thoughtboters when working in codebases that then they'll add really nice helper methods around that for like checking does this data exist in the table? And so I'm used to seeing that type of approach or taking that type of approach myself. But the ASCII table printout is lovely. That's so...yeah, that's just a nice cherry on top. I will have to lock that one away and use that in the future. CHRIS: Yeah, really, just such a delightful thing. 
And again, in contrast to the troubles of my weekend, it was very nice to have this one tool that was just like, oh, here's an error, and it's so easy to follow, and yeah. So it's good that there are good things in the world. But speaking of good things, what's new in your world? I hope good things. And I hope you're not about to be like, everything's terrible. But what's up with you? [laughter] STEPH: Everything's on fire. No, I do have some good things. So the good thing is that I'm preparing for...I have maternity leave that's coming up. So I am going to take maternity leave in about four-ish weeks. I know the date, but I'm saying the ish because I don't know when people are listening. [laughs] So I'm taking maternity leave coming up soon. I'm very excited, a little panicked mostly about baby preparedness, because, oh my goodness, it is such an overwhelming world, and what everyone thinks you should or shouldn't have and things that you need to do. So I've been ramping up heavily in that area. And then also planning for when I'm gone and then what that's going to look like for the team, and for clients, and for making sure I've got work wrapped up nicely. So that's a big project. It's just something that's on my mind, something that I am working through and making plans for. On the weird side, I ran into something because I'm still in test migration world. That is one of like, this is my mountain. This is my Everest. I am determined to get all of these tests. Thank you to everyone who has listened to me, especially you, listen to me talk about this test migration path I've been on and the journey that it's been. This is the goal that I have in mind that I really want to get done. CHRIS: I know that when you said, "Especially you," you were talking to me, Chris Toomey. But I want to imagine that every listener out there is just like, aww, you're welcome, Steph. So I'm going to pretend for my own sake that that's what you meant by, especially you. It's especially every one of you out there in the audience. STEPH: Yes, I love either version. And good point, because you're right, I'm looking at you. So I can say especially you since you've been on this journey with me, but everybody listening has been on this journey with me. So I've got a number of files left that I'm working through. And one of the funky things that I ran into, well, it's really not funky; it was a little bit more of an educational rabbit hole for me because it's something that I hadn't considered. So migrating over a controller test over from Test::Unit to then RSpec, there are a number of controller tests that issue requests or they call the same controller method multiple times. And at first, I didn't think too much about it. I was like, okay, well, I'm just going to move this over to RSpec, and everything is going to be fine. But based on the way a lot of the information is getting set around logging in a user and then performing an action, and then trying to log in a different user, and then perform another action that was causing mayhem. Because then the second user was never getting logged in because the first user wasn't getting logged out. And it was causing enough problems that Joël and I both sat back, and we're like, this should really be a request spec, because that way, we're going through the full Rails routing. We're going through more of the sessions that get set, and then we can emulate that full request and response cycle. And that was something that I just hadn't, I guess, I hadn't done before. 
I've never written a controller spec where then I was making multiple calls. And so it took a little while for me to realize, like, oh, yeah, controller specs are really just unit test. And they're not going to emulate, give us the full lifecycle that a request spec does. And it's something that I've always known, but I've never actually felt that pain point to then push me over to like, hey, move this to a request spec. So that was kind of a nice reminder to go through to be like, this is why we have controller specs. You can unit test a specific action; it is just hitting that controller method. And then, if you want to do something that simulates more of a user flow, then go ahead and move over to the request spec land. CHRIS: I don't know what the current status is, but am I remembering correctly that the controller specs aren't really a thing anymore and that you're supposed to just use request specs? And then there's features specs. I feel like I'm conflating...there's like controller requests and feature, but feature maybe doesn't...no, system, that's what I'm thinking of. So request specs, I think, are supposed to be the way that you do controller-like things anymore. And the true controller spec unit level thing doesn't exist anymore. It can still be done but isn't recommended or common. Does that sound true to you, or am I making stuff up? STEPH: No, that sounds true to me. So I think controller specs are something that you can still do and still access. But they are very much at that unit layer focus of a test versus request specs are now more encouraged. Request specs have also been around for a while, but they used to be incredibly slow. I think it was more around Rails 5 that then they received a big increase in performance. And so that's when RSpec and Rails were like, hey, we've improved request specs. They test more of the framework. So if you're going to test these actions, we recommend going for request specs, but controller specs are still there. I think for smaller things that you may want to test, like perhaps you want to test that an endpoint returns a particular status that shows that you're not authorized or forbidden, something that's very specific, I think I would still reach for a controller spec in that case. CHRIS: I feel like I have that slight inclination to the unit spec level thing. But I've been caught enough by different things. Like, there was a case where CSRF wasn't working. Like, we made some switch in the application, and suddenly CSRF was broken, and I was like, well, that's bad. And the request spec would have caught it, but the controller spec wouldn't. And there's lots of the middleware stack and all of the before actions. There is so much hidden complexity in there that I think I'm increasingly of the opinion, although I was definitely resistant to it at first, but like, yeah, maybe just go the request spec route and just like, sure. And they'll be a little more costly, but I think it's worth that trade-off because it's the stuff that you're not thinking about that is probably the stuff that you're going to break. It's not the stuff that you're like, definitely, if true, then do that. Like, that's the easier stuff to get right. But it's the sneaky stuff that you want your tests to tell you when you did something wrong. And that's where they're going to sneak in. STEPH: I agree. 
And yeah, by going with the request specs, then you're really leaning into more of an integration test since you are testing more of that request/response lifecycle, and you're not as likely to get caught up on the sneaky stuff that you mentioned. So yeah, overall, it was just one of those nice reminders of I know I use request specs. I know there's a reason that I favor them. But it was one of those like; this is why we lean into request specs. And here's a really good use case of where something had been finagled to work as a controller test but really rightfully lived in more of an integration request spec. MIDROLL AD: Debugging errors can be a developer's worst nightmare...but it doesn't have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help you cut your debugging time in half. So why do developers love Airbrake? Well, it has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking! Airbrake's debugging tool catches all your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted. In addition to stellar error monitoring, Airbrake's lightweight APM enables developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction. Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality. Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps and includes modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back. Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. So head on over to airbrake.io/try/bikeshed to create your FREE developer account today! STEPH: Changing gears just a bit, I have something that I'd love to chat with you about. It came up while I was having a conversation with another thoughtboter as we were discussing how do you track velocity and know if you're working quickly enough? So since we often change projects about every six months, there's the question of how do I adapt to this team? Or maybe I'm still newish to thoughtbot or to a team; how do I know that I am producing the amount of work that the client or the team expects of me and then also still balancing that and making sure that I'm working at a sustainable pace? And I think that's such a wonderful, thoughtful question. And I have some initial thoughts around it as to how someone could track velocity. I also think there are two layers to this; there could be are we looking to track an individual's velocity, or are we looking to track team velocity? I think there are a couple of different ways to look at this question. But I'm curious, what are your thoughts around tracking velocity? CHRIS: Ooh, interesting. I have never found a formal method that worked in this space, no metric, no analysis, no tool, no technique that really could boil this down and tell a truth, a useful truth about, quote, unquote, "Velocity." I think the question of individual velocity is really interesting. 
There's the case of an individual who joins a team who's mostly working to try and support others on the team, so doing a lot of pairing, doing a lot of other things. And their individual velocity, the actual output of lines of code, let's say, is very low, but they are helping the overall team move faster. And so I think you'll see some of that. There was an episode a while back where we talked about heuristics of a team that's moving reasonably well. And I threw out the like; I don't know, like a pull request a day sort of thing feels like the only arbitrary number that I feel comfortable throwing out there in the world. And ideally, these pull requests are relatively small, individual deployable things. But any other version of it, like, are we thinking lines of code? That doesn't make sense. Is it tickets? Well, it depends on how you size your tickets. And I think it's really hard. And I think it does boil down to it's sort of a feeling. Do we feel like we're moving at a comfortable clip? Do I feel like I'm roughly keeping pace with the rest of the team, especially given seniority and who's been on the team longer? And all of those sorts of things. So I think it's incredibly difficult to ask about an individual. I have, I think, some more pointed thoughts around as a team how we would think about it and communicate about velocity. But I'm interested what came to mind for you when you thought about it, particularly for the individual side or for the team if you want to go in that direction. STEPH: Yeah, most of my initial thoughts were more around the individual because I think that's where this person was coming from because they were more interested in, like, how do I know that I'm producing as much as the team would expect of me? But I think there's also the really interesting element of tracking a team's velocity as well. For the individual, I think it depends a lot on that particular team and their goals and what pace they're moving at. So when I do join a new team, I will look around to see, okay, well, what's the cadence? What's the standard bar for when someone picks up a ticket and then is able to push it through? How much cruft are we working with in the codebase? Because then that will change the team's expectations of yes, we know that we have a lot of legacy code that we're working with, and so it does take us longer to get through things. And that is totally fine because we are looking more to optimize our sustainability and improving the code as we go versus just trying to get new features in. I think there's also an important cultural aspect. So some teams may, unfortunately, work a lot of extra hours. And that's something that I won't bend for. I'm still going to stick to my sustainable hours. But that's something that I keep in mind that just if some other people are working a lot of evenings or just working extra hours to keep that in mind that if they do have a higher velocity to not include that in my calculation as heavily. I also really liked how you highlighted that certain individuals often their velocity is unblocking others. So it's less about the specific code or features or tickets that they're producing, but it's how many people can they help? And then they're increasing the velocity of those individuals. And then the other metrics that unfortunately can be gamified, but it's still something to look at is like, how many hours are you spending on a particular feature, the tickets? But I like that phrasing that you used earlier of what's your progress? 
So if someone comes to daily sync and they mention that they're working on the same thing and we're on like day three, or four, but they haven't given an update around, like, oh, I have this new thing that I'm focused on, or this new area that I'm exploring, that's when I'll start to have alarm bells go off. And I'm like, okay, you've been working on the same thing. I can't quite tell if you've made progress. It sounds like you're still in the depths of the original thing that you were on a couple of days ago. So at that point, I'm going to want to check in to see how you're doing. But yeah, I think that's why this question fascinates me so much is because I don't think there's one answer that fits for everybody. There's not a way to tell one person to say, "Hey, this is your output that you should be producing, and this applies to all teams." It's really going to vary from team to team as to what that looks like. I remember there was one team that I joined that initially; I panicked because I noticed that their team was moving at a slower rate in terms of the number of tickets and PRs and stuff that were getting pushed up, reviewed, and then merged. That was moving at a slower pace than I was used to with previous clients. And I just thought, oh, what's going on? What's slowing us down? Like, why aren't we moving faster? And I actually realized it's just because they were working at a really sustainable pace. They showed up to the office. This was back in the day when I used to go to an office, and people showed up at like 9:00 a.m. and then 5:00 o'clock; it was a ghost town, and people were gone. So they were doing really solid, great work, but they were sticking to very sustainable hours. Versus, a previous team that I had been on had more of like a rushed feeling, and so there was more output for it. And that was a really nice reset for me to watch this team and see them do such great work in a sustainable fashion and be like, oh, yeah, not everything has to be a fire, not everything has to be rushed. I think the biggest thing that I'd look at is if velocity is being called into question, so if someone is concerned that someone's not producing enough or if the team is not producing enough, the first place I'm going to look is what's our priorities and see are we prioritizing correctly? Or are people getting pulled into a lot of work that's not supporting the priorities, and then that's why suddenly it feels like we're not producing at the level that we need to? I feel like that's the common disconnect between how much work we're getting done versus then what's actually causing people or product managers, or management stress. And so reevaluating to make sure that they're on the same page is where I would look first before then thinking, oh, someone's not working hard enough. CHRIS: Yeah, I definitely resonate with all of that. That was a mini masterclass that you just gave right there in all of those different facets. The one other thing that comes to mind for me is the question is often about velocity or speed or how fast can we go. But I increasingly am of the opinion that it's less about the actual speed. So it's less about like, if you think about it in terms of the average pace, the average number of features that we're going through, I'm more interested in the standard deviation. So some days you pick up a ticket, and it takes you a day; some days you pick up a ticket, and suddenly, seven days later, you're still working on it. 
And both at the individual level and at the team level, I'm really interested in decreasing that standard deviation and making it so that we are more consistently delivering whatever amount of output it is but very consistently doing that. And that really helps with our ability to estimate overall bodies of work with our ability for others to know and for us to be able to sort of uphold expectations. Versus if randomly someone might pick up a piece of code or might pick up a ticket that happens to hit a landmine in the code, it's like, yeah, we've been meaning to refactor that for a while. And it turns out that thing that you thought would be super easy is really hard because we've been kicking the can on this refactoring of the fundamental data model. Sorry about that. But today's your day; you lose. Those are the sort of things that I see can be really problematic. And then similarly, on an individual side, maybe there's some stuff that you can work on that is super easy for you. But then there's other stuff that you kind of hit a wall. And I think the dangerous mode to get into is just going internal and not really communicating about that, and struggling and trying to get there on your own rather than asking for help. And it can be very difficult to ask for help in those sorts of situations. But ideally, if you're focusing on I want to be delivering in that same pace, you probably might need some help in that situation. And I think having a team that really...what you're talking about of like, if I notice someone saying the same thing at daily sync for a couple of days in a row, I will typically reach out in a very friendly, collegial way, hey, do you want someone else to take a look at that with you? Because ideally, we want to unblock those situations. And then if we do have a team that is pretty consistently delivering whatever overall velocity but it's very consistent at that velocity, it's not like 3 one day and then 0, and then 12, and then 2; it's more of like, 6,5,6,5 sort of thing, to pick random numbers out of the air, then I feel so much more able to grow that, to increase that. If the question comes to me of like, hey, we're looking at the budget for the next quarter; do we think we want to hire another developer? I think I can answer that much more accurately at that point and say what do I think that additional individual would be able to do on the team. Versus if development is kind of this sporadic thing all over the place, then it's so much harder to understand what someone new joining that team would be able to do. So it's really the slow is smooth, smooth is fast adage that I've talked about in the past that really captured my mind a while back that just continues to feel true to me. And then yeah, I can work with that so much better than occasional days of wild productivity and then weeks of sadness in the swamp of refactoring. So it's a different way to think about the question, but it is where my mind initially went when I read this question. STEPH: I'm going to start using that description for when I'm refactoring. I'm in the refactoring swamp. That's where I'm spending my time. [laughs] Talking about this particular question is helping me realize that I do think less in terms of like what is my output in the strict terms of tickets, and PRs, and things like that. But I do think more about my progress and how can I constantly show progress, not just to the world but show it to myself. 
So if there are tickets that then maybe the ticket was scoped too big at first and I've definitely made some really solid progress, maybe I'm able to ship something or at least identified some other work that could be broken out, then I'm going to do that. Because then I want everybody to know, like, hey, this is the progress that was made here. And I may even be able to make myself feel good and move something over to the done column. So there's that aspect of the work that I focus on more heavily. And I feel like that also gives us more opportunities to then iterate on what's the goal? Like, we're not looking to just churn out work. That's not the point. But we really want to focus on meaningful work to get done. So if we're constantly giving an update on this as the progress that I've made in this direction, that gives people more opportunities to then respond to that progress and say, "Oh, actually, I think the work was supposed to do this," or "I have questions about some of the things that you've uncovered." So it's less about just getting something done. But it's still about making sure that we're working on the right thing. CHRIS: Yeah, it doesn't matter how fast we're going if we're going in the wrong direction, so another critical aspect. You can be that person on the team who actually doesn't ship much code at all. Just make sure that we don't ship the wrong code, and you will be a critical member of that team. But shifting gears just a little bit, we have another listener question here that I'd love to get into. This one is about testing a legacy app. So reading this question, it starts off with a very nice note to us, Steph. "I want to start by saying thanks for putting out great content week after week." We are very happy to do so." So a question for you two. I just took over a legacy Rails app. It's about 12 years old, and it's a bit of a mess. There was some testing in place, but it was completely broken and hadn't been touched in over seven years. So I decided to just delete it all. My question is, where do I even start with testing? There are so many callbacks on the models and so many controller hooks that I feel like I somehow need to have a factory for every model in our repo. I need to get testing in place ASAP because that is how I develop. But we are also still on Ruby 2 and Rails 4.0. So we desperately have to upgrade. Thanks in advance for any advice." So Steph, I actually replied in an email to this kind listener who sent this. And so, I definitely have some thoughts, but I'm interested in where would you start with this. STEPH: Legacy code, I wouldn't know anything about working in legacy code. [laughs] This is a fabulous question. And yeah, the response that you provided is incredible. So I'm very excited for you to share the message that you replied with. So I'm going to try not to steal any of those because they're wonderful. But to add to that list that is soon to come, often where I start with applications like these where I need some testing in place because, as this person mentioned, that's how they work. And then also, at that point, you're just scared to ship anything because you just don't know what's going to break. So one area that you could start with is what's your rollback strategy? 
So if you don't have any tests in place and you send something out into the world, then what's your plan to then be able to either roll back to a safe point or perhaps it's using feature flags for anything new that you're adding so that way you can quickly turn something on and off. But having a strategy there, I think, will help alleviate some of that stress of I need to immediately add tests. It's like, yes, that's wonderful, but that's going to take time. So until you can actually write those tests, then let's figure out a plan to mitigate some of that pain. So that's where I would initially start. And then, as for adding the test, typically, you start with testing as you go. So I would add tests for the code that I'm adding that I'm working on because that's where I'm going to have the most context. And I'm going to start very high. So I might have really slow tests that test everything that is going to be feature level, integration level specs because I'm at the point that I'm just trying to document the most crucial user flows. And then once I have some of those in place, then even if they are slow, at least I'm like, okay, I know that the most crucial user flows are protected and are still working with this change that I'm making. And in a recent episode, we were talking about how to get to know a Rails app. You highlighted a really good way to get to know those crucial user flows or the most common user flows by using something like New Relic and then seeing what are the paths that people are using. Maybe there's a product manager or just someone that you're taking the app over that could also give you some help in letting you know what's the most crucial features that users are relying on day to day and then prioritizing writing tests for those particular flows. So then, at this point, you've got a rollback strategy. And then you've also highlighted what are your most crucial user flows, and then you've added some really high level probably slow tests. Something that I've also done in the past and seen others do at thoughtbot when working on a legacy project or just working on a project, it wasn't even legacy, but it just didn't have any test coverage because the team that had built it before hadn't added test coverage. We would often duplicate a lot of the tests as well. So you would have some integration tests that, yes, frankly, were very similar to others, which felt like a bad choice. But there was just some slight variation where a user-provided some different input or clicked on some small different field or something else happened. But we found that it was better to have that duplication in the test coverage with those small variations versus spending too much time in finessing those tests. Because then we could always go back and start to improve those tests as we went. So it really depends. Are you in fire mode, and maybe you need to duplicate some stuff? Or are you in a state where you can be more considerate with your tests, and you don't need to just get something in place right away? Those are some of the initial thoughts I have. I'm very excited for the thoughts that you're about to share. So I'm going to turn it over to you. CHRIS: It's sneaky in this case. You have advanced notice of what I'm about to say. But yeah, this is a super interesting topic and one of those scary places to find yourself in. 
Very similar to you, the first thing that I recommended was feature specs, starting at that very high level, particularly as the listener wrote in and saying there are a lot of model callbacks and controller callbacks. And before filters and all of this, it's very indirect how this application works. And so, really, it's only when the whole thing is integrated together that you're going to have a reasonable sense of what's going on. And so trying to write those high-level feature specs, having a handful of them that give you some confidence when you're deploying that those core workflows are still working as expected. Beyond that, the other things that I talked about one was observability. As an aside, I didn't mention feature flags or anything like that. And I really loved that that was something you highlighted as a different way to get to confidence, so both feature flags and rollbacks. Testing at the end of the day, the goal is to have confidence that we're deploying software that works, and a different way to get that is feature flags and rollbacks. So I really love that you highlighted that. Something that goes really well hand in hand with those is observability. This has been a thing that I've been exploring more and more and just having some tooling that at runtime will tell you if your application is behaving as expected or is not. So these can be APM-type tools, but it can also be things like Sentry or Honeybadger error monitoring, those sorts of things. And in a system like this, I wouldn't be surprised if maybe there was an existing error monitoring tool, but it had just kind of decayed over time and now just has perhaps thousands of different entries in it that have been ignored and whatnot. On more than one occasion, I've declared Sentry bankruptcy working with clients and just saying like, listen; this thing can't tell us any truths anymore. So let's burn it down and restart it. So I would recommend that and having that as a tool such that much as tests are really wonderful before the code gets out there into the wild; it turns out it's only when users start using it that the real stuff happens. And so, having observability, having tooling in place that will tell you when something breaks is equally critical in my mind. One of the other things I said, and this is probably the spiciest take on my list, is questioning the trade-off space that you're in. Is this an application that actually has a relatively low defect rate that users use and are quite happy with, and expect that level of performance and correctness, and all of those sorts of things, and so you, frankly, need to be careful with it? Or, is it potentially something that has a handful of bugs and that users are used to a certain lower fidelity experience, let's call it? And can you take advantage of that if that happens to be true? Like, I would be very careful to break something that has never been broken before that there's no expectation of that. But if we can get away with moving fast and breaking things for a little while just to try and get ourselves out of the spot that we're in, I would at least want to consider that trade-off space. Because caution slows you down, it means that your progress is going to be limited. And so, if we're able to reduce the caution filter just a little bit and move a little bit more rapidly, then ideally, we can get out of this place that we're in a little more quickly. 
Again, I think that's a really subtle one and one that you'd have to get buy-in from product managers and probably be very explicit in the conversations and sort of that trade-off space. But it is something that I would want to explore if I found myself in this sort of situation. The last thing that I highlighted was the fact that the versions of Ruby and Rails that were listed in the question are, I think, both end of life at this point. And so from a security perspective, that is just a giant glaring warning sign in the corner because the day that your app gets hacked, well, that's a bad day. So testing, unfortunately, I think that's the main way that you're going to get by on that as you're going through upgrades. You can deploy a new version of the application and see what happens and see if your observability can get you there. But really, testing is what you want to do. So that's where building out that testing is all the more critical so that you can perform those security upgrades because they are now truly critical to get done. And so it gives sort of more than a nice to have, more than this makes me feel comfortable. It is pretty much a necessity if you want to go through that, and you absolutely need to go through the security upgrades because otherwise, you're going to get hacked. There are just automated scanners out there. They're going to find you. You don't need to be a high vulnerability target to get taken down on the internet these days. So if it hasn't happened yet, it's going to. And I think that's an easy business case to sell is, I guess, the way that I would frame it. So those were some of my thoughts. STEPH: You bring up a really good point about needing to focus on the security upgrades. And I'm thinking that through a little bit further in regards to what trade-offs would I make? Would I wait till I have tests in place to then start the upgrades, or would I start the upgrades now but just know I'm going to spend more time manual testing on staging? Or maybe I'm solo on the project. If I have a product manager or someone else that can also help the testing with me, I think I would go for that latter approach where I would start the upgrades today and then just do more manual testing of those crucial flows and then have that rollback strategy. And as you mentioned, it's a trade-off in terms of, like, how important is it that we don't break anything? CHRIS: I think similar to the thing that both of us hit on early on is like, have some feature specs that just kick the whole application as one connected piece of code. Have that in place for the security upgrade, testing. But I agree, I wouldn't want to hold off on that because I think that's probably the scariest part of all of this. But yeah, it is, again, trade-offs. As always, it depends. But I think those are my thoughts. Anything else you want to add, Steph? STEPH: I think those are fabulous thoughts. I think you covered it all. CHRIS: Sounds good. Well, in that case, should we wrap up? STEPH: Let's wrap up. CHRIS: The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. 
STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeeee!!!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Episode Notes Episode summary Margaret talks with Elle, an anarchist and security professional, about different threat modeling approaches and analyzing different kinds of threats. They explore physical threats, digital security, communications, surveillance, and general OpSec mentalities for how to navigate the panopticon and do stuff in the world without people knowing about it...if you're in Czarist Russia, of course. Guest Info Elle can be found on twitter @ellearmageddon. Host and Publisher The host Margaret Killjoy can be found on twitter @magpiekilljoy or instagram at @margaretkilljoy. This show is published by Strangers in A Tangled Wilderness. We can be found at www.tangledwilderness.org, or on Twitter @TangledWild and Instagram @Tangled_Wilderness. You can support the show on Patreon at www.patreon.com/strangersinatangledwilderness. Show Links Transcript Live Like the World is Dying: Elle on Threat Modeling Margaret 00:15 Hello, and welcome to Live Like The World Is Dying, your podcast for what feels like the end times. I'm your host, Margaret Killjoy. And with me at the exact moment is my dog, who has just jumped up to try and talk into the microphone and bite my arm. And, I use 'she' and 'they' pronouns. And this week, I'm going to be talking to my friend Elle, who is a, an anarchist security professional. And we're going to be talking about threat modeling. And we're going to be talking about how to figure out what people are trying to do to you and who's trying to do it and how to deal with different people trying to do different things. Like, what is the threat model around the fact that while I'm trying to record a podcast, my dog is biting my arm? And I am currently choosing to respond by trying to play it for humor and leaving it in rather than cutting it out and re-recording. This podcast is a proud member of the Channel Zero network of anarchist podcasts. And here's a jingle from another show on the network. Jingle Margaret 02:00 Okay, if you could introduce yourself, I guess, with your name and your pronouns, and then maybe what you do as relates to the stuff that we're going to be talking about today. Elle 02:10 Yeah, cool. Hi, I'm Elle. My pronouns are they/them. I am a queer, autistic, anarchist security practitioner. I do security for a living now that I've spent over the last decade, working with activist groups and NGOs, just kind of anybody who's got an interesting threat model to help them figure out what they can do to make themselves a little a little safer and a little more secure. Margaret 02:43 So that word threat model. That's actually kind of what I want to have you on today to talk about is, it's this word that we we hear a lot, and sometimes we throw into sentences when we want to sound really smart, or maybe I do that. But what does it mean, what is threat modeling? And why is it relevant? Elle 03:02 Yeah, I actually, I really love that question. Because I think that we a lot of people do use the term threat modeling without really knowing what they mean by it. And so to me, threat modeling is having an understanding of your own life in your own context, and who poses a realistic risk to you, and what you can do to keep yourself safe from them. 
So whether that's, you know, protecting communications that you have from, you know, state surveillance, or whether it's keeping yourself safe from an abusive ex, your threat model is going to vary based on your own life experiences and what you need to protect yourself from and who those people actually are and what they're capable of doing. Margaret 03:52 Are you trying to say there's not like one solution to all problems that we would just apply? Elle 03:58 You know, I love... Margaret 03:58 I don't understand. Elle 04:00 I know that everybody really, really loves the phrase "Use signal. Use TOR," and you know, thinks that that is the solution to all of life's problems. But it actually turns out that, no, you do have to have both an idea of what it is that you're trying to protect, whether it's yourself or something like your communications and who you're trying to protect it from, and how they can how they can actually start working towards gaining access to whatever it is that you're trying to defend. Margaret 04:31 One of the things that when I think about threat modeling that I think about is this idea of...because the levels of security that you take for something often limit your ability to accomplish different things. Like in Dungeons and Dragons, if you were plate armor, you're less able to be a dexterous rogue and stealth around. And so I think about threat modeling, maybe as like learning to balance....I'm kind of asking this, am I correct in this? Balancing what you're trying to accomplish with who's trying to stop you? Because like, you could just use TOR, for everything. And then also like use links the little like Lynx [misspoke "Tails"] USB keychain and never use a regular computer and never communicate with anyone and then never accomplish anything. But, it seems like that might not work. Elle 05:17 Yeah, I mean, the idea, the idea is to prevent whoever your adversaries are from keeping you from doing whatever you're trying to accomplish. Right? So if the security precautions that you're taking to prevent your adversaries from preventing you from doing a thing are also preventing you from doing the thing, then it doesn't matter, because your adversaries have just won, right? So there, there definitely is a need, you know, to be aware of risks that you're taking and decide which ones make sense, which ones don't make sense. And kind of look at it from from a dynamic of "Okay, is this something that is in my, you know, acceptable risk model? Is this a risk I'm willing to take? Are there things that I can do to, you know, do harm reduction and minimize the risk? Or at least like, make it less? Where are those trade offs? What, what is the maximum amount of safety or security that I can do for myself, while still achieving whatever it is that I'm trying to achieve?" Margaret 06:26 Do you actually ever like, chart it out on like, an X,Y axis where you get like, this is the point where you start getting diminishing returns? I'm just imagining it. I've never done that. Elle 06:37 In, in the abstract, yes, because that's part of how autism brain works for me. But in a, like actually taking pen to paper context, not really. But that's, you know, at least partially, because of that's something that autism brain just does for me. So I think it could actually be a super reasonable thing to do, for people whose brains don't auto filter that for them. 
But but I'm, I guess, lucky enough to be neurodivergent, and have like, you know, like, we always we joke in tech, "It's not a bug, it's a feature." And I feel like, you know, autism is kind of both sometimes. In some cases, it's totally a bug and and others, it's absolutely a feature. And this is one of the areas where it happens to be a feature, at least for me. Margaret 07:35 That makes sense. I, I kind of view my ADHD as a feature, in that, it allows me to hyper focus on topics and then move on and then not come back to them. Or also, which is what I do now for work with podcasting, and a lot of my writing. It makes it hard to write long books, I gotta admit, Elle 07:56 Yeah, I work with a bunch of people with varying neurotypes. And it's really interesting, like, at least at least in my own team, I think that you know, the, the folks who are more towards the autism spectrum disorder side of of the house are more focused on things like application security, and kind of things that require sort of sustained hyper focus. And then folks with ADHD make just absolutely amazing, like incident responders and do really, really well in interrupt-driven or interrupt-heavy contexts, Margaret 08:38 Or sprinters. Elle 08:40 It's wild to me, because I'm just like, yes, this makes perfect sense. And obviously, like, these different tasks are better suited to different neurotypes. But I've also never worked with a manager who actually thought about things in that way before. Margaret 08:53 Right. Elle 08:54 And so it's actually kind of cool to be to be in a position where I can be like, "Hey, like, does this sound interesting to you? Would you rather focus on this kind of work?" And kind of get that that with people. Margaret 09:06 That makes sense that's.... I, I'm glad that you're able to do that. I'm glad that people that you work with are able to have that you know, experience because it is it's hard to it's hard to work within....obviously the topic of today is...to working in the workplace as a neurodivergent person, but it I mean it affects so many of us you know, like almost whatever you do for work the the different ways your brain work are always struggling against it. So. Elle 09:32 Yeah, I don't know. It just it makes sense to me to like do your best to structure your life in a way that is more conducive to your neurotype. Margaret 09:44 Yeah. Elle 09:45 You know, if you can. Margaret 09:49 I don't even realize exactly how ADHD I was until I tried to work within a normal workforce. I built my entire life around, not needing to live in one place or do one thing for sustained periods of time. But okay, but back to the threat modeling. Margaret 10:07 The first time I heard of, I don't know if it's the first time I heard of threat modeling or not, I don't actually know when I first started hearing that word. But the first time I heard about you, in the context of it was a couple years back, you had some kind of maybe it was tweets or something about how people were assuming that they should use, for example, the more activist-focused email service Rise Up, versus whether they should just use Gmail. And I believe that you were making the case that for a lot of things, Gmail would actually be safer, because even though they don't care about you, they have a lot more resources to throw at the problem of keeping governments from reading their emails. That might be a terrible paraphrasing of what you said. But this, this is how I was introduced to this concept of threat modeling. 
If you wanted to talk about that example, and tell me how I got it all wrong. Elle 10:07 Yeah. Elle 10:58 Yeah. Um, so you didn't actually get it all wrong. And I think that the thing that I would add to that is that if you are engaging in some form of hypersensitive communication, email is not the mechanism that you want to do that. And so when I say things like, "Oh, you know, it probably actually makes sense to use Gmail instead of Rise Up," I mean, you know, contexts where you're maybe communicating with a lawyer and your communications are privileged, right?it's a lot harder to crack Gmail security than it is to crack something like Rise Up security, just by virtue of the volume of resources available to each of those organizations. And so where you specifically have this case where, you know, there's, there's some degree of legal protection for whatever that means, making sure that you're not leveraging something where your communications can be accessed without your knowledge or consent by a third party, and then used in a way that is conducive to parallel construction. Margaret 12:19 So what is parallel construction? Elle 12:20 Parallel construction is a legal term where you obtain information in a way that is not admissible in court, and then use that information to reconstruct a timeline or reconstruct a mechanism of access to get to that information in an admissible way. Margaret 12:39 So like every cop show Elle 12:41 Right, so like, with parallel construction around emails, for example, if you're emailing back and forth with your lawyer, and your lawyer is like, "Alright, like, be straight with me. Because I need to know if you've actually done this crime so that I can understand how best to defend you." And you're like, "Yeah, dude, I totally did that crime," which you should never admit to in writing anyway, because, again, email is not the format that you want to have this conversation in. But like, if you're gonna admit to having done crimes in email, for some reason, how easy it is for someone else to access that admission is important. Because if somebody can access this email admission of you having done the crimes where you're, you know, describing in detail, what crimes you did, when with who, then it starts, like, it gets a lot easier to be like, "Oh, well, obviously, we need to subpoena this person's phone records. And we should see, you know, we should use geolocation tracking of their device to figure out who they were in proximity to and who else was involved in this," and it can, it can be really easy to like, establish a timeline and get kind of the roadmap to all of the evidence that they would need to, to put you in jail. So it's, it's probably worth kind of thinking about how easy it is to access that that information. And again, don't don't admit to doing crimes in email, email is not the format that you want to use for admitting to having done crimes. But if you're going to, it's probably worth making sure that, you know, the the email providers that you are choosing are equipped with both robust security controls, and probably also like a really good legal team. Right? So if...like Rise Up isn't going to comply with the subpoena to the like, to the best of their ability, they're not going to do that, but it's a lot easier to sue Rise Up than it is to sue Google. Margaret 14:51 Right. 
Elle 14:51 And it's a lot easier to to break Rise Up's security mechanisms than it is to break Google's, just by virtue of how much time and effort each of those entities is able to commit to securing email. Please don't commit to doing crimes in email, just please just don't. Don't do it in writing. Don't do it. Margaret 15:15 Okay, let me change my evening plans. Hold on let me finish sending this email.. Elle 15:23 No! Margaret 15:25 Well, I mean, I guess like the one of the reasons that I thought so much about that example, and why it kind of stuck with me years later was just thinking about what people decide they're safe, because they did some basic security stuff. And I don't know if that counts under threat modeling. But it's like something I think about a lot is about people being like, "I don't understand, we left our cell phones at home and went on a walk in the woods," which is one of the safest ways anyone could possibly have a conversation. "How could anyone possibly have known this thing?" And I'm like, wait, you, you told someone you know, or like, like, not to make people more paranoid, but like... Elle 16:06 Or maybe, maybe you left your cell phone at home, but kept your smartwatch on you, because you wanted to close, you know, you wanted to get your steps for the day while you were having this conversation, right? Margaret 16:19 Because otherwise, does it even count if I'm not wearing my [smartwatch]. Elle 16:21 Right, exactly. And like, we joke, and we laugh, but like, it is actually something that people don't think about. And like, maybe you left your phones at home, and you went for a walk in the woods, but you took public transit together to get there and were captured on a bunch of surveillance cameras. Like there's, there's a lot of, especially if you've actually been targeted for surveillance, which is very rare, because it's very resource intensive. But you know, there there are alternate ways to track people. And it does depend on things like whether or not you've got additional tech on you, whether or not you were captured on cameras. And you know, whether whether or not your voices were picked up by ShotSpotter, as you were walking to wherever the woods were like, there's just there's we live in a panopticon. I don't say that so that people are paranoid about it, I say it because it's a lot easier to think about, where, when and how you want to phrase things. Margaret 17:27 Yeah. Elle 17:28 In a way that you know, still facilitates communications still facilitates achieving whatever it is that you're trying to accomplish, but sets you sets you up to be as safe as possible in doing it. And I think that especially in anarchist circles, just... and honestly also in security circles, there's a lot of of like, dogmatic adherence to security ritual, that may or may not actually make sense based on both, you, who your actual adversaries are, and what their realistic capabilities are. Margaret 18:06 And what they're trying to actually accomplish I feel like is...Okay, one of the threat models that I like...I encourage people sometimes to carry firearms, right in very specific contexts. And it feels like a security... Oh, you had a good word for it that you just used...ritual of security theater, I don't remember...a firearm often feels like that, Elle 18:30 Right. Margaret 18:31 In a way where you're like," Oh, I'm safe now, right, because I'm carrying a firearm." And, for example, I didn't carry a firearm for a very long time. 
Because for a long time, my threat model, the people who messed with me, were cops. And if a cop is going to mess with me, I do not want to have a firearm on me, because it will potentially escalate a situation in a very bad way. Whereas when I came out and started, you know, when I started getting harassed more for being a scary transwoman, and less for being an anarchist, or a hitchhiker, or whatever, you know, now my threat model is transphobes, who wants to do me harm. And in a civilian-civilian context, I prefer I feel safer. And I believe I am safer in most situations armed in that case. But every time I leave the house, I have to think about "What is my threat model?" And then in a similar way, sorry, it's just me thinking about the threat model of firearms, but it's the main example that I think of, is that often people's threat model in terms of firearms and safety as themselves, right? And so you just actually need to do the soul searching where you're like,"What's more likely to happen to me today? Am I likely to get really sad, or am I likely to get attacked by fascists?" Elle 19:57 Yeah. And I think that there is there's an additional question, especially when you're talking about arming yourself, whether it's firearms, or carrying a knife, or whatever, because like, I don't own any firearms, but I do carry a knife a lot of the time. And so like some questions, some additional questions that you have to ask yourself are, "How confident am I in my own ability to use this to harm another person?" Because if you're going to hesitate, you're gonna get fucked up. Margaret 20:28 Yeah. Elle 20:28 Like, if you are carrying a weapon, and you pull it out and hesitate in using it, it's gonna get taken away from you, and it's going to be used against you. So that's actually one of the biggest questions that I would say people should be asking themselves when developing a threat model around arming themselves is, "Will I actually use this? How confident am I?" if you're not confident, then it's okay to leave it at home. It's okay to practice more. It's okay to like develop that familiarity before you start using it as an EDC. Sorry an Every Day Carry. And then the you know, the other question is, "How likely am I to get arrested here?" I carry, I carry a knife that I absolutely do know how to use most of the time when I leave the house. But when I'm going to go to a demonstration, because the way that I usually engage in protests or in demonstrations is in an emergency medical response capacity, I carry a medic kit instead. And my medic kit is a clean bag that does not have any sharp objects in it. It doesn't have anything that you know could be construed as a weapon it doesn't have...it doesn't...I don't even have weed gummies which are totally like recreationally legal here, right? I won't even put weed in the medic kit. It's it is very much a... Margaret 21:52 Well, if you got a federally arrested you'd be in trouble with that maybe. Elle 21:55 Yeah, sure, I guess. But, like the medic bag is very...nothing goes in this kit ever that I wouldn't want to get arrested carrying. And so there's like EMT shears in there. Margaret 22:12 Right. Elle 22:13 But that's that's it in terms of like... Margaret 22:16 Those are scary you know...the blunted tips. Elle 22:21 I know, the blunted tips and the like safety, whatever on them. It's just...it's it is something to think about is "Where am I going...What...Who am I likely to encounter? And like what are the trade offs here?" 
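For anyone who wants to work through the questions Elle just listed on paper, here is a minimal sketch of one way to write a personal threat model down as data and rank it. It is purely illustrative: the entries, the field names, and the crude likelihood-times-impact ranking are hypothetical examples, not anything prescribed in the episode.

```python
# A toy threat-modeling worksheet: what you're protecting, who realistically
# threatens it, what they can actually do, and what the mitigation costs you.
# Every entry below is a hypothetical example, not a recommendation.
from dataclasses import dataclass


@dataclass
class Threat:
    asset: str        # what you're trying to protect
    adversary: str    # who realistically poses the risk
    capability: str   # what they can actually do
    likelihood: int   # 1 (rare) .. 5 (expected)
    impact: int       # 1 (annoying) .. 5 (catastrophic)
    mitigation: str   # what you could do about it
    cost: str         # what that mitigation costs you day to day

    def priority(self) -> int:
        # Crude ranking so the worksheet sorts itself; adjust to taste.
        return self.likelihood * self.impact


threats = [
    Threat("phone contents", "street harasser", "grab an unlocked phone",
           3, 3, "short auto-lock, nothing sensitive on the home screen", "minor hassle"),
    Threat("location history", "abusive ex", "shoulder-surf passwords, check shared accounts",
           4, 5, "new accounts, a lock they can't watch you enter", "setup time"),
    Threat("protest photos", "local police", "subpoena cloud backups",
           2, 4, "leave the phone at home for the action", "no camera, harder to coordinate"),
]

# Highest-priority threats first: these are the ones worth planning around.
for t in sorted(threats, key=Threat.priority, reverse=True):
    print(f"[{t.priority():>2}] {t.asset} vs. {t.adversary}: {t.mitigation} (cost: {t.cost})")
```

The point of the cost column is the one Elle keeps returning to: a mitigation only counts if it still lets you do the thing you set out to do.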
Margaret 22:37 I remember once going to a demonstration a very long time ago where our like, big plan was to get in through all of the crazy militarized downtown in this one city and, and the big plan is we're gonna set up a Food Not Bombs inside the security line of the police, you know. And so we picked one person, I think I was the sacrificial person, who had to carry a knife, because we had to get the folding tables that we're gonna put the food on off of the top of the minivan. And we had to do it very quickly, and they were tied on. And so I think I brought the knife and then left it in the car and the car sped off. And then we fed people and they had spent ten million dollars protecting the city from 30 people feeding people Food Not Bombs. Elle 23:20 Amazing. Margaret 23:22 But, but yeah, I mean, whereas every other day in my life, especially back then when I was a hitchhiker, I absolutely carried a knife. Elle 23:30 Yeah. Margaret 23:31 You know, for multiple purposes. Yeah, okay, so then it feels like...I like rooting it in the self defense stuff because I think about that a lot and for me it maybe then makes sense to sort of build up and out from there as to say like...you know, if someone's threat model is my ex-partner's new partner is trying to hack me or my abusive ex is trying to hack me or something, that's just such a different threat model than... Elle 24:04 Yeah, it is. Margaret 24:05 Than the local police are trying to get me versus the federal police are trying to get me versus a foreign country is trying to get me you know, and I and it feels like sometimes those things are like contradictory to each other about what isn't isn't the best maybe. Elle 24:19 They are, because each of those each of those entities is going to have different mechanisms for getting to you and so you know, an abusive partner or abusive ex is more likely to have physical access to you, and your devices, than you know, a foreign entity is, right? Because there's there's proximity to think about, and so you know, you might want to have....Actually the....Okay, so the abusive ex versus the cops, right. A lot of us now have have phones where the mechanism for accessing them is either a password, or some kind of biometric identifier. So like a fingerprint, or you know, face ID or whatever. And there's this very dogmatic adherence to "Oh, well, passwords are better." But passwords might actually not be better. Because if somebody has regular proximity to you, they may be able to watch you enter your password and get enough information to guess it. And if you're, if you're not using a biometric identifier, in those use cases, then what can happen is they can guess your password, or watch, you type it in enough time so that they get a good feeling for what it is. And they can then access your phone without your knowledge while you're sleeping. Right? Margaret 25:46 Right. Elle 25:47 And sometimes just knowing whether or not your your adversary has access to your phone is actually a really useful thing. Because you know how much information they do or don't have. Margaret 26:01 Yeah. No that's... Elle 26:03 And so it really is just about about trade offs and harm reduction. Margaret 26:08 That never would have occurred to me before. I mean, it would occur to me if someone's trying to break into my devices, but I have also fallen into the all Biometrics is bad, right? 
Because it's the password, you can't change because the police can compel you to open things with biometrics, but they can't necessarily compel you...is more complicated to be compelled to enter a password. Elle 26:31 I mean, like, it's only as complicated as a baton. Margaret 26:34 Yeah, there's that XKCD comic about this. Have you seen it? Elle 26:37 Yes. Yes, I have. And it is it is an accurate....We like in security, we call it you know, the Rubber Hose method, right? It we.... Margaret 26:46 The implication here for anyone hasn't read it is that they can beat you up and get you to give them their [password]. Elle 26:50 Right people, people will usually if they're hit enough times give up their password. So you know, I would say yeah, you should disable biometric locks, if you're going to go out to a demonstration, right? Which is something that I do. I actually do disable face ID if I'm taking my phone to a demo. But it...you may want to use it as your everyday mechanism, especially if you're living in a situation where knowing whether or not your abuser has access to your device is likely to make a difference in whether you have enough time to escape. Margaret 27:30 Right. These axioms or these these beliefs we all have about this as the way to do security,the you know...I mean, it's funny, because you brought up earlier like use Signal use Tor, I am a big advocate of like, I just use Signal for all my communication, but I also don't talk about crime pretty much it in general anyway. You know. So it's more like just like bonus that it can't be read. I don't know. Elle 27:57 Yeah. I mean, again, it depends, right? Because Signal...Signal has gotten way more usable. I've been, I've been using Signal for a decade, you know, since it was still Redphone and TextSecure. And in the early days, I used to joke that it was so secure, sometimes your intended recipients don't even get the messages. Margaret 28:21 That's how I feel about GPG or PGP or whatever the fuck. Elle 28:24 Oh, those those.... Margaret 28:27 Sorry, didn't mean to derail you. Elle 28:27 Let's not even get started there. But so like Signal again, has gotten much better, and is way more reliable in terms of delivery than it used to be. But I used to, I used to say like, "Hey, if it's if it's really, really critical that your message reach your recipient, Signal actually might not be the way to do it." Because if you need if you if you're trying to send a time sensitive message with you know guarantee that it actually gets received, because Signal used to be, you know, kind of sketchy on or unreliable on on delivery, it might not have been the best choice at the time. One of the other things that I think that people, you know, think...don't think about necessarily is that Signal is still widely viewed as a specific security tool. And that's, that's good in a lot of cases. But if you live somewhere, for example, like Belarus, where it's not generally considered legal to encrypt things, then the presence of Signal on your device is enough in and of itself to get you thrown in prison. Margaret 29:53 Right. Elle 29:53 And so sometimes having a mechanism like, you know, Facebook secret messages might seem like a really, really sketchy thing to do. But if your threat model is you can't have security tools on your phone, but you still want to be able to send encrypted messages or ephemeral messages, then that actually might be the best way to kind of fly under the radar. 
So yeah, it again just really comes down to thinking about what it is that you're trying to protect? From who? And under what circumstances? Margaret 30:32 Yeah, I know, I like this. I mean, obviously, of course, you've thought about this thing that you think about. I'm like, I'm just like, kind of like, blown away thinking about these things. Although, okay, one of these, like security things that I kind of want to push back on, and actually, this is a little bit sketchy to push back on, the knife thing. To go back to a knife. I am. I have talked to a lot of people who have gotten themselves out of very bad situations by drawing a weapon without then using it, which is illegal. It is totally illegal. Elle 31:03 Yes Margaret 31:03 I would never advocate that anyone threaten anyone with a weapon. But, I know people who have committed this crime in order to...even I mean, sometimes it's in situations where it'd be legal to stab somebody,like... Elle 31:16 Sure. Margaret 31:16 One of the strangest laws in the United States is that, theoretically, if I fear for my life, I can draw a gun.... And not if I fear for my life, if I am, if my life is literally being threatened, physically, if I'm being attacked, I can I can legally draw a firearm and shoot someone, I can legally pull a knife and stab someone to defend myself. I cannot pull a gun and say "Back the fuck off." And not only is it illegal, but it also is a security axiom, I guess that you would never want to do that. Because as you pointed out, if you hesitate now the person has the advantage, they have more information than they used to. But I still know a lot of hitchhikers who have gotten out of really bad situations by saying, "Let me the fuck out of the car." Elle 32:05 Sure. Margaret 32:06 Ya know?. Elle 32:06 Absolutely. It's not....Sometimes escalating tactically can be a de-escalation. Right? Margaret 32:17 Right. Elle 32:18 Sometimes pulling out a weapon or revealing that you have one is enough to make you no longer worth attacking. But you never know how someone's going to respond when you do that, right? Margaret 32:33 Totally Elle 32:33 So you never know whether it's going to cause them to go "Oh shit, I don't want to get stabbed or I don't want to get shot," and stop or whether it's going to trigger you know a more aggressive response. So it doesn't mean that you know, you, if you pull a weapon you have to use it. Margaret 32:52 Right. Elle 32:53 But if you're going to carry one then you do need to be confident that you will use it. Margaret 32:58 No, that that I do agree with that. Absolutely. Elle 33:00 And I think that is an important distinction, and I you know I also think that...not 'I think', using a gun and using a knife are two very different things. For a lot of people, pulling the trigger on a gun is going to be easier than stabbing someone. Margaret 33:20 Yeah that's true. Elle 33:21 Because of the proximity to the person and because of how deeply personal stabbing someone actually is versus how detached you can be and still pull the trigger. Margaret 33:35 Yeah. Elle 33:36 Like I would...it sounds...it feels weird to say but I would actually advocate most people carry a gun instead of a knife for that reason, and also because if you're, if you're worried about being physically attacked, you know you have more range of distance where you can use something like a gun than you do with a knife. 
You have to be, you have to be in close quarters to to effectively use a knife unless you're like really good at throwing them for some reason and even I wouldn't, cause if you miss...now your adversary has a knife. Margaret 34:14 I know yeah. Unless you miss by a lot. I mean actually I guess if you hit they have a knife now too. Elle 34:22 True. Margaret 34:23 I have never really considered whether or not throwing knives are effective self-defense weapons and I don't want to opine too hard on this show. Elle 34:31 I advise against it. Margaret 34:32 Yeah. Okay, so to go back to threat modeling about more operational security type stuff. You're clearly not saying these are best practices, but you're instead it seems like you're advocating of "This as the means by which you might determine your best practices." Elle 34:49 Yes. Margaret 34:49 Do you have a...do you have a a tool or do you have like a like, "Hey, here's some steps you can take." I mean, we all know you've said like, "Think about your enemy," and such like that, but Is there a more...Can you can you walk me through that? Elle 35:04 I mean, like, gosh, it really depends on who your adversary is, right? Elle 35:10 Like, if you're if you're thinking about an abusive partner, that's obviously going to vary based on things like, you know, is your abusive partner, someone who has access to weapons? Are they someone who is really tech savvy? Or are they not. At...The things that you have to think about are going to just depend on the skills and tools that they have access to? Is your abusive partner or your abusive ex a cop? Because that changes some things. Margaret 35:10 Yeah, fair enough. Margaret 35:20 Yeah. Elle 35:27 So like, most people, if they actually have a real and present kind of persistent threat in their life, also have a pretty good idea of what that threat is capable of, or what that threat actor or is capable of. And so it, it's it, I think, it winds up being fairly easy to start thinking about things in terms of like, "Okay, how is this person going to come after me? How, what, what tools do they have? What skills do they have? What ability do they have to kind of attack me or harm me?" But I think that, you know, as we start getting away from that really, really, personal threat model of like the intimate partner violence threat model, for example, and start thinking about more abstract threat models, like "I'm an anarchist living in a state," because no state is particularly fond of us. Margaret 36:50 Whaaaat?! Elle 36:51 I know it's wild, because like, you know, we just want to abolish the State and States, like want to not be abolished, and I just don't understand how, how they would dislike us for any reason.. Margaret 37:03 Yeah, it's like when I meet someone new, and I'm like, "Hey, have you ever thought about being abolished?" They're usually like, "Yeah, totally have a beer." Elle 37:10 Right. No, it's... Margaret 37:11 Yes. Elle 37:11 For sure. Um, but when it comes to when it comes to thinking about, you know, the anarchist threat model, I think that a lot of us have this idea of like, "Oh, the FBI is spying on me personally." And the likelihood of the FBI specifically spying on 'you' personally is like, actually pretty slim. But... Margaret 37:34 Me? Elle 37:35 Well... Margaret 37:37 No, no, I want to go back to thinking about it's slim, it's totally slim. 
Elle 37:41 Look...But like, there's there is a lot like, we know that, you know, State surveillance dragnet exists, right, we know that, you know, plaintext text messages, for example, are likely to be caught both by, you know, Cell Site Simulators, which are in really, really popular use by law enforcement agencies. Margaret 38:08 Which is something that sets up and pretends to be a cell tower. So it takes all the data that is transmitted over it. And it's sometimes set up at demonstrations. Elle 38:16 Yes. So they, they both kind of convince your phone into thinking that they are the nearest cell tower, and then actually pass your communications on to the next, like the nearest cell tower. So your communications do go through, they're just being logged by this entity in the middle. That's, you know, not great. But using something... Margaret 38:38 Unless you're the Feds. Elle 38:39 I mean, even if you... Margaret 38:41 You just have to think about it from their point of view. Hahah. Elle 38:42 Even if you are the Feds, that's actually too much data for you to do anything useful with, you know? Margaret 38:50 Okay, I'll stop interrupting you. Haha. Elle 38:51 Like, it's just...but if you're if you are a person who is a person of interest who's in this group, where a cell site simulator has been deployed or whatever, then then that you know, is something that you do have to be concerned about and you know, even if you're not a person of interest if you're like texting your friend about like, "All right, we do crime in 15 minutes," like I don't know, it's maybe not a great idea. Don't write it down if you're doing crime. Don't do crime. But more importantly don't don't create evidence that you're planning to do crime, because now you've done two crimes which is the crime itself and conspiracy to commit a crime. Margaret 39:31 Be straight. Follow the law. That's the motto here. Elle 39:35 Yes. Oh, sorry. I just like I don't know, autism brain involuntarily pictured, like an alternate universe in which in where which I am straight, and law abiding. And I'm just I'm very... Margaret 39:52 Sounds terrible. I'm sorry. Elle 39:53 Right. Sounds like a very boring.... Margaret 39:55 Sorry to put that image in your head. Elle 39:56 I mean, I would never break laws. Margaret 39:58 No. Elle 39:59 Ever Never ever. I have not broken any laws I will not break any laws. No, I think that... Margaret 40:08 The new "In Minecraft" is "In Czarist Russia." Instead of saying "In Minecraft," because it's totally blown. It's only okay to commit crimes "In Czarist Russia." Elle 40:19 Interesting. Margaret 40:23 All right. We don't have to go with that. I don't know why I got really goofy. Elle 40:27 I might be too Eastern European Jewish for that one. Margaret 40:31 Oh God. Oh, my God, now I just feel terrible. Elle 40:34 It's It's fine. It's fine. Margaret 40:36 Well, that was barely a crime by east... Elle 40:40 I mean it wasn't necessarily a crime, but like my family actually emigrated to the US during the first set of pogroms. Margaret 40:51 Yeah. Elle 40:52 So like, pre-Bolshevik Revolution. Margaret 40:57 Yeah. Elle 40:59 But yeah, anyway. Margaret 41:02 Okay, well, I meant taking crimes like, I basically think that, you know, attacking the authorities in Czarist Russia is a more acceptable action is what I'm trying to say, I really don't have to try and sell you on this plan. Elle 41:16 I'm willing to trust your judgment here. Margaret 41:19 That's a terrible plan, but I appreciate you, okay. 
Either way, we shouldn't text people about the crimes that we're doing. Elle 41:26 We should not text people about the crimes that we're planning on doing. But, if you are going to try to coordinate timelines, you might want to do that using some form of encrypted messenger so that whatever is logged by a cell site simulator, if it is in existence is not possible by the people who are then retrieving those logs. And you know, and another reason to use encrypted messengers, where you can is that you don't necessarily want your cell provider to have that unencrypted message block. And so if you're sending SMS, then your cell, your cell provider, as the processor of that data has access to an unencrypted or plain text version of whatever text message you're sending, where if you're using something like Signal or WhatsApp, or Wicker, or Wire or any of the other, like, multitude of encrypted messengers that you could theoretically be using, then it's it's also not going directly through your your provider, which I think is an interesting distinction. Because, you know, we we know, from, I mean, we kind of sort of already knew, but we know for a fact, from the Snowden Papers, that cell providers will absolutely turn over your data to the government if they're asked for it. And so minimizing the amount of data that they have about you to turn over to the government is generally a good practice. Especially if you can do it in a way that isn't going to be a bunch of red flags. Margaret 43:05 Right, like being in Belarus and using Signal. Elle 43:08 Right. Exactly. Margaret 43:10 Okay. Also, there's the Russian General who used an unencrypted phone where he then got geo located and blowed up. Elle 43:23 Yeah. Margaret 43:24 Also bad threat modeling on that that guy's part, it seems like Elle 43:28 I it, it certainly seems to...that person certainly seems to have made several poor life choices, not the least of which was being a General in the Russian army. Margaret 43:41 Yeah, yeah. That, that tracks. So one of the things that we talked about, while we were talking about having this conversation, our pre-conversation conversation was about...I think you brought up this idea that something that feels secret, doesn't mean it is, and Elle 43:59 Yeah! Margaret 44:00 I'm wondering if you had more thoughts about that concept? It's not a very good prompt. Elle 44:05 So like, it's it's a totally reasonable prompt, we say a lot that, you know, security and safety are a feeling. And I think that that actually is true for a lot of us. But there's this idea that, Oh, if you use coded language, for example, then like, you can't get caught. I don't actually think that's true, because we tend to use coded language that's like, pretty easily understandable by other people. Because the purpose of communicating is to communicate. Margaret 44:42 Yeah. Elle 44:43 And so usually, if you're like, code language is easy enough to be understood by whoever it is you're trying to communicate with, like, someone else can probably figure it the fuck out too. Especially if you're like, "Hey, man, did you bring the cupcakes," and your friend is like, "Yeah!" And then an explosion goes off shortly thereafter, right? It's like, "Oh, by cupcakes, they meant dynamite." 
So I, you know, I think that rather than then kind of like relying on this, you know, idea of how spies work or how, how anarchists communicated secretly, you know, pre-WTO, it's, it's worth thinking about how the surveillance landscape has adapted over time, and thinking a little bit more about what it means to engage in, in the modern panopticon, or the contemporary panopticon, because those capabilities have changed over time. And things like burner phones are a completely different prospect now than they used to be. Actually... Margaret 45:47 In that they're easier or worse? Elle 45:49 Oh, they're so much harder to obtain now. Margaret 45:51 Yeah, okay. Elle 45:52 It's it is so much easier to correlate devices that have been used in proximity to each other than it used to be. And it's so much easier to, you know, capture people on surveillance cameras than it used to be. I actually wrote a piece for Crimethinc about this some years ago, that that I think kind of still holds up in terms of how difficult it really, really is to procure a burner phone. And in order to do to do that safely, you would have to pay cash somewhere that couldn't capture you on camera doing it, and then make sure that it was never turned on in proximity with your own phone anywhere. And you would have to make sure that it only communicated with other burner phones, because the second it communicates with a phone that's associated to another person, there's a connection between your like theoretical burner phone and that person. And so you can be kind of triangulated back to, especially if you've communicated with multiple people. It just it is so hard to actually obtain a device that is not in any way affiliated with your identity or the identity of any of your comrades. But, we have to start thinking about alternative mechanisms for synchronous communication. Margaret 47:18 Okay. Elle 47:18 And, realistically speaking, taking a walk in the woods is still going to be the best way to do it. Another reasonable way to go about having a conversation that needs to remain private is actually to go somewhere that is too loud and too crowded to...for anyone to reasonably overhear or to have your communication recorded. So using using the kind of like, signal to noise ratio in your favor. Margaret 47:51 Yeah. Elle 47:52 To help drown out your own signal can be really, really useful. And I think that that's also true of things like using Gmail, right? The signal to noise ratio, if you're not using a tool that's specifically for activists can be very helpful, because there is just so much more traffic happening, that it's easier to blend in. Margaret 48:18 I mean, that's one reason why I mean, years ago, people were saying that's why non-activists should use GPG, the encrypted email service that is terrible, was to attempt to try and be like, if you only ever use it, for the stuff you don't want to be known, then it like flags it as "This stuff you don't want to be known." And so that was like, kind of an argument for my early adoption of Signal, because I don't break laws, was, you know, just be like," Oh, here's more people using Signal," it's more regularized, and, you know, my my family talks on Signal and like, it helps that like, you know, there's a lot of different very normal legal professions that someone might have that require encrypted communication. Yeah, you know, like accountants, lawyers. But go ahead. 
Elle 49:06 No, no, I was gonna say that, like, it's, it's very common in my field of work for people to prefer to use Signal to communicate, especially if there is, you know, a diversity of phone operating systems in the mix. Margaret 49:21 Oh, yeah, totally. I mean, it's actually now it's more convenient. You know, when I when I'm on my like, family's SMS loop, it's like, I constantly get messages to say, like, "Brother liked such and such comment," and then it's like, three texts of that comment and...anyway, but okay, one of the things that you're talking about, "Security as a feeling," right? That actually gets to something that's like, there is a value in like, like, part of the reason to carry a knife is to feel better. Like, and so part of like, like anti-anxiety, like anxiety is my biggest threat most most days, personally. Right? Elle 50:00 Have you ever considered a career in the security field, because I, my, my, my former manager, like the person who hired me into the role that I'm in right now was like, "What made you get into security?" when I was interviewing, and I was just like, "Well, I had all this anxiety lying around. And I figured, you know, since nobody will give me a job that I can afford to sustain myself on without a degree, in any other field, I may as well take all this anxiety and like, sell it as a service." Margaret 50:33 Yeah, I started a prepper podcast. It's what you're listening to right now. Everyone who's listening. Yeah, exactly. Well, there's a value in that. But then, but you're talking about the Panopticon stuff, and the like, maybe being in too crowded of an environment. And it's, and this gets into something where everyone is really going to have to answer it differently. There's a couple of layers to this, but like, the reason that I just like, my profile picture on twitter is my face. I use my name, right? Elle 51:03 Same. Margaret 51:04 And, yeah, and I, and I just don't sweat it, because I'm like, "Look, I've been at this long enough that they know who I am. And it's just fine. It's just is." One day, it won't be fine. And then we have other problems. Right? Elle 51:18 Right. Margaret 51:19 And, and, and I'm not saying that everyone as they get better security practice will suddenly start being public like it... You know, it, it really depends on what you're trying to accomplish. Like, a lot of the reasons to not be public on social media is just because it's a fucking pain in the ass. Like, socially, you know? Elle 51:36 Yeah. Margaret 51:36 But I don't know, I just wonder if you have any thoughts about just like, the degree to which sometimes it's like, "Oh, well, I just, I carry a phone to an action because I know, I'm not up to anything." But then you get into this, like, then you're non-normalizing... don't know, it gets complicated. And I'm curious about your thoughts on that kind of stuff. Elle 51:56 So like, for me, for me personally, I am very public about who I am. What I'm about, like, what my politics are. I'm extremely open about it. Partially, because I don't think that, like I think that there is value in de-stigmatizing anarchism. Margaret 52:20 Yes. Elle 52:20 I think there is value in being someone who is just a normal fucking human being. And also anarchist. Margaret 52:29 Yeah. Elle 52:30 And I think that, you know, I...not even I think. I know, I know that, through being exactly myself and being open about who I am, and not being super worried about the labels that other people apply to themselves. 
And instead, kind of talking about, talking about anarchism, both from a place of how it overlaps with Judaism, because it does in a lot of really interesting ways, but also just how it informs my decision making processes. I've been able to expose people who would not necessarily have had any, like, concept of anarchism, or the power dynamics that we're interested in equalizing to people who just wouldn't have wouldn't have even thought about it, or would have thought that anarchists are like this big, scary, whatever. And, like, there, there are obviously a multitude of tendencies within anarchism, and no anarchist speaks for anybody but themselves, because that's how it works. But, it's one of the things that's been really interesting to me is that in the security field, one of the new buzzwords is Zero Trust. And the idea is that you don't want to give any piece of technology kind of the sole ability to to be the linchpin in your security, right? So you want to build redundancy, you want to make sure that no single thing is charged with being the gatekeeper for all of your security. And I think that that concept actually also applies to power. And so I...when I'm trying to talk about anarchism in a context where it makes sense to security people, I sometimes talk about it as like a Zero Trust mechanism for organizing a society. Margaret 54:21 Yeah. Elle 54:21 Where you just you...No person is trustworthy enough to hold power over another person. And, so like, I'm really open about it, but the flip side of that is that, you know, I also am a fucking anarchist, and I go to demonstrations, and sometimes I get arrested or whatever. And so I'm not super worried about the government knowing who I am because they know exactly who I am. But I don't share things like my place of work on the internet because I've gotten death threats from white nationalists. And I don't super want white nationalists like sending death threats into my place of work because It's really annoying to deal with. Margaret 55:02 Yeah. Elle 55:03 And so you know, there's...it really comes down to how you think about compartmentalizing information. And which pieces of yourself you want public and private and and how, how you kind of maintain consistency in those things. Margaret 55:21 Yeah. Elle 55:22 Like people will use the same...people will like be out and anarchists on Twitter, but use the same Twitter handle as their LinkedIn URL where they're talking about their job and have their legal name. And it's just like, "Buddy, what are you doing?" Margaret 55:37 Yeah. Elle 55:38 So you do have to think about how pieces of data can be correlated and tied back to you. And what story it is that you're you're presenting, and it is hard and you are going to fuck it up. Like people people are going to fuck it up. Compartmentalization is super hard. Maintaining operational security is extremely hard. But it is so worth thinking about. And even if you do fuck it up, you know, that doesn't mean that it's the end of the world, it might mean that you have to take some extra steps to mitigate that risk elsewhere. Margaret 56:11 The reason I like this whole framework that you're building is that I tend to operate under this conception that clandestinity is a trap. I don't want to I don't want to speak this....I say it as if it's a true statement across all and it's not it. I'm sure there's absolute reasons in different places at different times. 
But in general, when I look at like social movements, they, once they move to "Now we're just clandestine." That's when everyone dies. And, again, not universally, Elle 56:40 Yeah, but I mean, okay, so this is where I'm gonna get like really off the wall. Right? Margaret 56:46 All right. We're an hour in. It's the perfect time. Elle 56:50 I know, right? People may or may not know who Allen Dulles is. But Allen Margaret 56:54 Not unless they named an airport after him. Elle 56:56 They did. Margaret 56:57 Oh, then I do know who he is. Elle 56:59 Allen Dulles is one of the people who founded the CIA. And he released this pamphlet called "73 Points On Spycraft." And it's a really short read. It's really interesting, I guess. But the primary point is that if you are actually trying to be clandestine, and be successful about it, you want to be as mundane as possible. Margaret 57:22 Yep. Elle 57:23 And in our modern world with the Panopticon being what it is, the easiest way to be clandestine, is actually to be super open. So that if you are trying to hide something, if there is something that you do want to keep secret, there's enough information out there about you, that you're not super worth digging into. Margaret 57:46 Oh, yeah. Cuz they think they already know you. Elle 57:48 Exactly. So if, if that is what your threat model is, then the best way to go about keeping a secret is to flood as many other things out there as possible. So that it's just it's hard to find anything, but whatever it is that you're flooding. Margaret 58:04 Oh, it's like I used to, to get people off my back about my dead name, I would like tell one person in a scene, a fake dead name, and be like, "But you can't tell anyone." Elle 58:15 Right. Margaret 58:16 And then everyone would stop asking about my dead name, because they all thought they knew it, because that person immediately told everyone, Elle 58:22 Right. Margaret 58:23 Yeah. Elle 58:24 It's, it's going back to that same using the noise to hide your signal concept, that it...the same, the same kind of concepts and themes kind of play out over and over and over again. And all security really is is finding ways to do harm reduction for yourself, finding ways to minimize the risk that you're undertaking just enough that that you can operate in whatever it is that you're trying to do. Margaret 58:53 No, I sometimes I like, ask questions. And then I am like, Okay, well don't have an immediate follow up, because I just need to like, think about it. Instead of being like, "I know immediately what to say about that." But okay, so, but with clandestinity in general in this this concept...I also think that this is true on a kind of movement level in a way that I I worry about sometimes not necessarily....Hmm, what am I trying to say? Because I also really hate telling people what to do. It's like kind of my thing I don't like telling people what to do. But there's a certain level... Elle 59:25 Really? Margaret 59:25 Yeah, you'd be shocked to know, Elle 59:27 You? Don't like telling people what to do? Margaret 59:31 Besides telling people not to tell me what to do. That's one of my favorite things to tell people. But, there's a certain amount of. Margaret 59:38 Oh, that's true, like different conceptions of freedom. Elle 59:38 But that's not telling people what to do, that's telling people what not to do. Elle 59:44 It's actually setting a boundary as opposed to dictating a behavior. 
Margaret 59:48 But I've been in enough relationships where I've learned that setting boundaries is the same as telling people to do. This is a funny joke. Elle 59:55 Ohh co-dependency. Margaret 59:58 But all right, there's a quote from a guy whose name I totally space who was an old revolutionist, who wasn't very good at his job. And his quote was, "Those who make half a revolution dig their own graves." And I think he like, I think it proved true for him. If I remember correctly, I think he died in jail after kind of making half a revolution with some friends. I think he got like arrested for pamphleteering or something, Elle 1:00:20 Jesus. Margaret 1:00:21 It was a couple hundred years ago. And but there's this but then if you look forward in history that like revolutionists, who survive are the ones who win. Sometimes, sometimes the revolutionists win, and then their comrades turn on them and murder them. But, I think overall, the survival rate of a revolution is better when you win is my theory. And and so there's this this concept where there's a tension, and I don't have an answer to it. And I want people to actually think about it instead of assuming, where the difference between videotaping a cop car on fire and not is more complicated than people want you to know. Because, if you want there to be more cop cars on fire, which I do not unless we're in Czarist Russia, in which case, you're in an autocracy, and it's okay to set the cop cars on fire, but I'm clearly not talking about that, or the modern world. But, you're gonna have to film it on your cell phone in order for people to fucking know that it's happening. Sure. And and that works absolutely against your best interest. Like, on an individual level, and even a your friends' level. Elle 1:01:25 So like, here's the thing, being in proximity to a burning cop car is not in and of itself a crime. Margaret 1:01:33 Right. Elle 1:01:34 So there's, there's nothing wrong with filming a cop car on fire. Margaret 1:01:41 But there's that video... Margaret 1:01:41 Right. Elle 1:01:41 There is something wrong with filming someone setting a cop car on fire. And there's something extremely wrong with taking a selfie while setting a cop car on fire. And don't do that, because you shouldn't do crime. Obviously, right? Elle 1:01:42 But there's Layers there...No, go ahead. Margaret 1:02:03 Okay, well, there's the video that came out of Russia recently, where someone filmed themselves throwing Molotovs at a recruitment center. And one of the first comments I see is like, "Wow, this person has terrible OpSec." And that's true, right? Like this person is not looking at how to maximize their lack of chance of going to jail, which is probably the way to maximize that in non Czarist Russia... re-Czarist Russia, is to not throw anything burning at buildings. That's the way to not go to jail. Elle 1:02:35 Right. Margaret 1:02:35 And then if you want to throw the thing at the... and if all you care about is setting this object on fire, then don't film yourself. Elle 1:02:41 Right. Margaret 1:02:41 But if you want more people to know that this is a thing that some people believe is a worthwhile thing to do, you might need to film yourself doing it now that person well didn't speak. Elle 1:02:53 Well no. Margaret 1:02:56 Okay. Elle 1:02:56 You may not need to film yourself doing it. Right? Because what what you can do is if, for example, for some reason, you are going to set something on fire. Margaret 1:03:09 Right, in Russia. 
Elle 1:03:09 Perhaps what you might want to do is first get the thing to be in a state where it is on fire, and then begin filming the thing once it is in a burning state. Margaret 1:03:25 Conflagration. Yeah. Elle 1:03:25 Right? And that can that can do a few things, including A) you're not inherently self-incriminating. And, you know, if if there are enough people around to provide some form of cover, like for example, if there are 1000s of other people's cell phones also in proximity, it might even create some degree of plausible deniability for you because what fucking dipshit films themself doing crimes. So it's, you know, there's, there's, there's some timing things, right. And the idea is to get it...if you are a person who believes that cop cars look best on fire... Margaret 1:04:10 Buy a cop car, and then you set it on fire. And then you film it. Elle 1:04:15 I mean, you know, you know, you just you opportunistically film whenever a cop car happens to be on fire in your proximity. Margaret 1:04:23 Oh, yeah. Which might have been set on fire by the person who owned it. There's no reason to know one way or not. Elle 1:04:27 Maybe the police set the cop car on fire you know? You never know. There's no way to there....You don't have to you don't have to speculate about how the cop car came to be on fire. You can just film a burning cop car. And so the you know, I think that the line to walk there is just making sure there's no humans in your footage of things that you consider to be art. Margaret 1:04:29 Yeah. No, it it makes sense. And I guess it's like because people very, very validly have been very critical about the ways that media or people who are independently media or whatever, like people filming shit like this, right? But But I think then to say that like, therefore no, no cop cars that are on fire should ever be filmed versus the position you're presenting, which is only cop cars that are already on fire might deserve to be filmed, which is the kind of the long-standing like film the broken window, not the window breaker and things like that. But... Elle 1:05:29 I think and I think also there's, you know, there's a distinction to be made between filming yourself setting a cop car on fire, and filming someone else setting a cop car on fire, because there's a consent element, right? Margaret 1:05:34 Totally. Totally. Elle 1:05:47 You shouldn't like...Don't do crime. Nobody should do crime. But if you are going to do crime, do it on purpose. Right? Margaret 1:05:55 Fair enough. Elle 1:05:55 Like that's, that's what civil disobedience is. Civil disobedience is doing crime for the purpose of getting caught to make a point. That's what it is. And if you if you really feel that strongly about doing a crime to make a point, and you want everyone to know that you're doing a crime to make a point, then that's, that's a risk calculation that you yourself need to make for yourself. But you can't make that calculation for anybody else. Margaret 1:06:25 I think that's a great way to sum it up. Elle 1:06:27 So unless your friend is like, "Yo, I'm gonna set this cop car on fire. Like, get the camera ready, hold my beer." You probably shouldn't be filming them. Margaret 1:06:38 See you in 30 years. Elle 1:06:39 Right? You probably shouldn't be filming them setting the cop car on fire either. Margaret 1:06:43 No. No. Elle 1:06:44 And also, that's a shitty friend because they've just implicated you in conspiracy, right? Margaret 1:06:49 Yeah. Elle 1:06:50 Friends don't implicate friends. 
Margaret 1:06:53 It's a good, it's a good rule. Yeah, yeah. All right. Well, I that's not entirely where I immediately expected to go with Threat Modeling. But I feel like we've covered an awful lot. Is there something? Is there something...Do you have any, like final thoughts about Threat Modeling, and as relates to the stuff that we've been talking about? Elle 1:07:18 I think that you know, the thing that I do really want to drive home. And that honestly does come back to your point about clandestinity being a trap is that, again, the purpose of threat modeling is to first understand, you know, what risks you're trying to protect against, and then figure out how to do what you're accomplishing in a way that minimizes risk. But the important piece is still doing whatever it is that you're trying to accomplish, whether that's movement building, or something else. And so there there is, there is a calculation that needs to be made in terms of what level of risk is acceptable to you. But if if, ultimately, your risk threshold is preventing you from accomplishing whatever you're trying to accomplish, then it's time to take a step back, recalculate and figure out whether or not you actually want to accomplish the thing, and what level of risk is worth taking. Because I think that, you know, again, if if you're, if your security mechanisms are preventing you from doing the thing that you're you set out to try to do, then your adversaries are already winning, and something probably needs to shift. Margaret 1:08:39 I really like that line. And so I feel like that's a decent spot, place to end on. Do. Do you have anything that you'd like to shout out? People can follow you on the internet? Or they shouldn't follow you on the internet? What? What do you what do you want to advocate for here? Elle 1:08:53 If you follow me on the internet, I'm so sorry. That's really all I can say. I'm, I am on the internet. I am a tire fire. I'm probably fairly easy to find based on my name, my pronouns and the things that I've said here today, and I can't recommend following my Twitter. Margaret 1:09:17 I won't put in the show notes then. Elle 1:09:19 I mean, you're welcome to but I can't advocate in good conscience for anyone to pay attention to anything that I have to say. Margaret 1:09:27 Okay, so go back and don't listen to the last hour everyone. Elle 1:09:31 I mean, I'm not going to tell you what to do. Margaret 1:09:34 I am that's my favorite thing to do. Elle 1:09:36 I mean, you know, this is just like my opinion, you know? There are no leaders. We're all the leaders. I don't know. Do do do what you think is right. Margaret 1:09:55 Agreed. All right. Well, thank you so much. Elle 1:09:59 Thank you. I really appreciate it. Margaret 1:10:07 Thank you so much for listening. If you enjoyed this podcast, you should tell people about it by whatever means occurs to you to tell people about it, which might be the internet, it might even be in person, it might be by taking a walk, leaving your cell phones behind, and then getting in deep into the woods and saying," I like the following podcast." And then the other person will be like, "Really, I thought we were gonna make out or maybe do some crimes." But, instead you have told them about the podcast. And I'm recording this at the same time as I record the intro, and now the
Sean Moriarity, the author of Genetic Algorithms in Elixir, lays out machine learning in the Elixir space. We talk about where it is today and where it's going in the future. Sean talks more about his book, how that led to working with José Valim, which then led to the creation of Nx. He fills us in on recent ML events with Google and Facebook and shows us how Elixir fits into the bigger picture. It's a fast-developing area, and Sean helps us follow the important points even if we aren't doing ML ourselves… because our teams may still need it.
Show Notes online - http://podcast.thinkingelixir.com/102 (http://podcast.thinkingelixir.com/102)
Elixir Community News
- https://github.com/phoenixframework/phoenix_live_view/blob/v0.17.10/CHANGELOG.md (https://github.com/phoenixframework/phoenix_live_view/blob/v0.17.10/CHANGELOG.md) – Phoenix LiveView gets a minor release v0.17.10 with formatting improvements
- https://www.rakeroutes.com/2022/05/18/let-s-write-an-elixir-livebook-smart-cell (https://www.rakeroutes.com/2022/05/18/let-s-write-an-elixir-livebook-smart-cell) – Creating custom Livebook Smart Cells
- https://twitter.com/evadne/status/1527651328188723209 (https://twitter.com/evadne/status/1527651328188723209) – Etso was updated to work with the latest Ecto
- https://github.com/evadne/etso (https://github.com/evadne/etso) – Etso library
Do you have some Elixir news to share? Tell us at @ThinkingElixir (https://twitter.com/ThinkingElixir) or email at show@thinkingelixir.com (mailto:show@thinkingelixir.com)
Discussion Resources
- https://pragprog.com/titles/smgaelixir/genetic-algorithms-in-elixir/ (https://pragprog.com/titles/smgaelixir/genetic-algorithms-in-elixir/) – Genetic Algorithms in Elixir
- https://github.com/elixir-nx/nx (https://github.com/elixir-nx/nx) – Numerical Elixir (Nx)
- https://github.com/elixir-nx/axon (https://github.com/elixir-nx/axon) – Nx-powered Neural Networks for Elixir
- https://pragprog.com/titles/smgaelixir/genetic-algorithms-in-elixir/ (https://pragprog.com/titles/smgaelixir/genetic-algorithms-in-elixir/) – Book - Genetic Algorithms in Elixir
- https://scala-lang.org/ (https://scala-lang.org/)
- https://www.quora.com/ (https://www.quora.com/)
- https://pragprog.com/titles/elixir16/programming-elixir-1-6/ (https://pragprog.com/titles/elixir16/programming-elixir-1-6/)
- https://pragprog.com/titles/phoenix14/programming-phoenix-1-4/ (https://pragprog.com/titles/phoenix14/programming-phoenix-1-4/)
- https://www.linkedin.com/in/briancardarella/ (https://www.linkedin.com/in/briancardarella/)
- https://dockyard.com/ (https://dockyard.com/)
- https://dockyard.com/blog/authors/sean-moriarity (https://dockyard.com/blog/authors/sean-moriarity) – Sean's blog posts on the Dockyard blog
- https://numpy.org/ (https://numpy.org/)
- https://llvm.org/ (https://llvm.org/)
- https://en.wikipedia.org/wiki/Softmax_function (https://en.wikipedia.org/wiki/Softmax_function) – softmax function (a short NumPy sketch follows these notes)
- https://en.wikipedia.org/wiki/Natural_language_processing (https://en.wikipedia.org/wiki/Natural_language_processing)
- https://xkcd.com/1897/ (https://xkcd.com/1897/) – XKCD comic
- https://www.image-net.org/ (https://www.image-net.org/)
- https://www.deeplearningbook.org/ (https://www.deeplearningbook.org/)
- https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html (https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html)
- https://erlef.org/wg/machine-learning (https://erlef.org/wg/machine-learning) – Erlang Ecosystem Foundation machine learning working group
Guest Information
- https://twitter.com/sean_moriarity (https://twitter.com/sean_moriarity) – on Twitter
- https://github.com/seanmor5/ (https://github.com/seanmor5/) – on Github
- https://seanmoriarity.com (https://seanmoriarity.com) – Blog
Find us online
- Message the show - @ThinkingElixir (https://twitter.com/ThinkingElixir)
- Email the show - show@thinkingelixir.com (mailto:show@thinkingelixir.com)
- Mark Ericksen - @brainlid (https://twitter.com/brainlid)
- David Bernheisel - @bernheisel (https://twitter.com/bernheisel)
- Cade Ward - @cadebward (https://twitter.com/cadebward)
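Since the resource list above links both NumPy and the softmax function, here is a minimal illustration of the kind of array code being discussed. This sketch is not from the episode, and it is plain Python/NumPy rather than Elixir; treat the values as placeholders. Nx and Axon express the same whole-tensor style natively in Elixir, which is roughly the niche they fill.

    import numpy as np

    def softmax(x):
        """Numerically stable softmax: shift by the max before exponentiating."""
        shifted = x - np.max(x)   # prevents overflow on large logits
        exps = np.exp(shifted)
        return exps / np.sum(exps)

    logits = np.array([1.0, 2.0, 3.0])
    print(softmax(logits))        # roughly [0.09, 0.24, 0.67]; the outputs sum to 1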
Opinions That Don't Matter podcast episode 108
00:00 Looking for the dog that bit Roxy
03:20 Kati has a wonderful community; after releasing a personal video, there were so many nice comments
05:50 Songwriting & Keith Richards
06:45 Rick Rubin was a fantastic guest on the Lex Fridman Podcast https://open.spotify.com/episode/41N7H6fGT7repvu8OP84su?si=c23b8edc336c43f5
16:17 Sean's Italian is reviewed, an update on caffeine consumption & a sandwich review
19:18 We're about to have a new baby in the family.
21:42 Christina P. and Chase O'Donnell - Stand-up comedy tickets: https://christinaponline.com/tour-dates
21:57 Looking at your phone in the morning… The attack on the subway in Brooklyn
26:57 Where is Morena from?
AUDIENCE LETTERS
30:00 Do good and talk about it …and dinner with an Italian family - Christoph, our Ambassador of R&R
37:00 A story about showers in France…
44:17 Ride into the Danger Zone....... Top Gun - Erin, the AWESOME Toronto contributor
47:00 Response to ep. 95 & a Norwegian lesson! - Christina
01:03:00 Finally following up & A Romantic Tale from Venice - Hannah, Aussie in Canada
01:11:33 Sean the Paperboy consolidated routes by any means necessary.
01:13:13 How to develop b&w film with coffee at home (+ short health update) https://youtu.be/hLjVntJIU5Q - Matt
Love for Cheez Whiz!
01:23:00 On password selection & the excellent web comic XKCD https://xkcd.com/936/#podcast (a small sketch of the comic's passphrase math follows below) - Science Ben
---
Send in a voice message: https://anchor.fm/otdm/message
Support this podcast: https://anchor.fm/otdm/support
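The last timestamped item above links xkcd 936, whose point is that a passphrase of several words drawn uniformly at random from a large list has more guessing entropy than a short, heavily mangled password. As a rough sketch only (the tiny word list below is a made-up placeholder rather than a real diceware list, and none of this comes from the episode), the generator and the arithmetic look like this:

    import math
    import secrets

    # Placeholder word list for illustration; a real diceware-style list has ~7,776 words.
    WORDS = ["correct", "horse", "battery", "staple", "orange", "window", "cloud", "pencil"]

    def passphrase(n_words=4, wordlist=WORDS):
        # secrets.choice draws from a cryptographically secure random source.
        return " ".join(secrets.choice(wordlist) for _ in range(n_words))

    def entropy_bits(n_words, list_size):
        # Each uniformly chosen word contributes log2(list_size) bits.
        return n_words * math.log2(list_size)

    print(passphrase())
    print(round(entropy_bits(4, 7776), 1))  # ~51.7 bits; the comic's ~44 bits assumes a ~2,048-word list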