Podcast appearances and mentions of Larry Wall

American computer programmer and author

  • 22 podcasts
  • 24 episodes
  • 50m average duration
  • infrequent episodes
  • latest episode: Apr 13, 2022


Latest podcast episodes about Larry Wall

The Nonlinear Library
EA - Useful Vices for Wicked Problems by Holden Karnofsky


Apr 13, 2022 · 23:12


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Useful Vices for Wicked Problems, published by Holden Karnofsky on April 12, 2022 on The Effective Altruism Forum. Cross-posted from Cold Takes. I've claimed that the best way to learn is by writing about important topics. (Examples I've worked on include: which charity to donate to, whether life has gotten better over time, whether civilization is declining, whether AI could make this the most important century of all time for humanity.) But I've also said this can be "hard, taxing, exhausting and a bit of a mental health gauntlet," because: When trying to write about these sorts of topics, I often find myself needing to constantly revise my goals, and there's no clear way to know whether I'm making progress. That is: trying to write about a topic that I'm learning about is generally a wicked problem. I constantly find myself in situations like "I was trying to write up why I think X, but I realized that X isn't quite right, and now I don't know what to write." and "I either have to write something obvious and useless or look into a million more things to write something interesting." and "I'm a week past my self-imposed deadline, and it feels like I have a week to go, but maybe it's actually 12 weeks - that's what happened last time." Overall, this is the kind of work where I can't seem to tell how progress is going, or stay on a schedule. This post goes through some tips I've collected over the years for dealing with these sorts of challenges - both working on them myself, and working with teammates and seeing what works for them. A lot of what matters for doing this sort of work is coming at it with open-mindedness, self-criticality, attention to detail, and other virtues. 
But a running theme of this work is that it can be deadly to approach with too much virtue: holding oneself to self-imposed deadlines, trying for too much rigor on every subtopic, and otherwise trying to "do everything right, as planned and on time" can drive a person nuts. So this post is focused on a less obvious aspect of what helps with wicked problems, which is useful vices - antidotes to the kind of thoroughness and conscientiousness that lead to unreachable standards, and make wicked problems impossible. I've organized my tips under the following vices, borrowing from Larry Wall and extending his framework a bit: Laziness. When some key question is hard to resolve, often the best move is to just ... not resolve it, and change the thesis of your writeup instead (and change how rigorous you're trying to make it). For example, switching from "These are the best charities" to "These are the charities that are best by the following imperfect criteria." Impatience. One of the most crucial tools for this sort of work is interrupting oneself. I could be reading through study after study on some charitable activity (like building wells), when stepping back to ask "Wait, why does this matter for the larger goal again?" could be what I most need to do. Hubris. Whatever I was originally arguing ("Charity X is the best"), I'm probably going to realize at some point that I can't actually defend it. This can be demoralizing, even crisis-inducing. I recommend trying to build an unshakable conviction that one has something useful to say, even when one has completely lost track of what that something might be. Self-preservation. When you're falling behind, it can be tempting to make a "heroic" effort at superhuman productivity. When a problem seems impossible, it can be tempting to fix your steely gaze on it and DO IT ANYWAY.
I recommend the opposite: instead of rising to the challenge, shrink from it and fight another day (when you'll solve some problem other than the one you thought you were going for). Overall, it's tempting to try to "boil the ocean" and thoroughly examine every aspect of a topic o...

The History of Computing
awk && Regular Expressions For Finding Text


Mar 4, 2022 · 8:40


Programming was once all about math. And life was good. Then came strings, or those icky non-numbery things. Then we had to process those strings. And much of that is looking for patterns that wouldn't be needed with integers, or plain numbers. For example, a space in a string of text. Let's say we want to print hello world to the screen in bash. That would be the echo command, followed by “Hello World!” Now let's say we ran that without the quotes: the shell would see the space and split the text into two separate arguments. Echo happens to print both anyway, but many other commands would treat the second word as a new operand, or as the next option or verb, depending on the command being used. Unix was started in 1969 at Bell Labs. Part of that work was the Thompson shell, the first Unix shell, which shipped in 1971. And C was written in 1972. These make up the ancestral underpinnings of the modern Linux, BSD, Android, Chrome, iPhone, and Mac operating systems. A lot of the work the team at Bell Labs was doing was shifting from pure statistical and mathematical operations, used to connect phones and do R&D faster, to more general computing applications. That meant going from math to those annoying stringy things. Unix was an early operating system, and that shell gave people new abilities to interact with the computer. People called files funny things. There was text in those files. And so text manipulation became a thing. Lee McMahon developed sed in 1974, which was great for finding patterns and doing basic substitutions. Another team at Bell Labs, which included Canadian programmer Alfred Aho, Peter Weinberger, and Brian Kernighan, had more advanced needs. Take their last-name initials and we get awk. Awk is a programming language they developed in 1977 for data processing, or more specifically for text manipulation. Marc Rochkind had been working on a version management tool for code at Bell that involved some text manipulation, and it served as a good starting point for awk.
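The word splitting described above can be made visible in a couple of lines of shell; `count_args` is a hypothetical helper used only to show how many arguments the shell actually passes:

```shell
# The shell splits unquoted text on spaces before the command ever runs.
# count_args is a hypothetical helper that reports its argument count.
count_args() { echo $#; }

count_args Hello World!     # the shell passes two separate arguments
count_args "Hello World!"   # quoting keeps it as one argument
```

The first call prints 2 and the second prints 1; echo happens to rejoin its arguments with spaces, which is why the unquoted version still appears to work.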
It's meant to be concise: given some input, produce the desired output. A nice, short, efficient scripting language for people who didn't want to go out and learn C to do some basic tasks. AWK is a programming language with its own interpreter, so there's no need to compile AWK scripts into executable programs. Sed and awk are both written to be used as one-line programs, or more if needed. Building in an implicit loop and implicit variables made it simple to write short but powerful programs around regular expressions. Think of an awk program as a series of pairs: a pattern followed by an action to take in curly brackets. It can be dangerous to call if the pattern is too wide open, especially when piping information. For example, run ls -al at the root of a volume, pipe that to awk to print $1 or some other position, then pipe that into xargs to rm, and a systems administrator could have a really rough day. Those $1, $2, and so on represent the positions of words on each line. So they could be directories. Think about this, though. In a world before relational databases, when we were looking to query the 3rd column in a file with information separated by some delimiter, piping those positions represented a simple way to effectively join tables of information into a text file or screen output. Or to find files on a computer that match a pattern for whatever reason. Awk began powerful. Over time, improvements have enabled it to be used in increasingly complicated scenarios, especially when it comes to pattern matching with regular expressions. Various coding styles for input and output have been added as well, which can be changed depending on the need at hand. Awk is also important because it influenced other languages. It became part of IEEE Standard 1003.1 and is now part of the POSIX standard. And after a few years, Larry Wall came up with some improvements, and along came Perl. But awk's syntax has always been one of the most succinct and usable regular expression engines.
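The pattern-and-action structure described above can be sketched with two one-liners; the sample input lines below are hypothetical stand-ins for `ls -al` and `/etc/passwd` style output:

```shell
# No pattern: the action runs on every line, and $1 is the first field.
printf 'drwxr-xr-x 5 root wheel\n-rw-r--r-- 1 me staff\n' | awk '{ print $1 }'

# With a pattern: the action only fires on matching lines.
# -F: sets the field delimiter, so $3 is the third :-separated column.
printf 'alice:x:1000\nroot:x:0\n' | awk -F: '$3 == 0 { print $1 }'
```

The first command prints the permission strings, which is exactly why piping `ls -al` through awk and into `xargs rm` goes so wrong: the first column isn't a filename at all. The second prints only the lines whose third field is 0.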
Part of that is the wildcard, piping, and file redirection techniques borrowed from the original shells. The AWK creators wrote a book called The AWK Programming Language for Addison-Wesley in 1988. Aho would go on to develop influential algorithms, write compilers, and write books (some of which were about compilers). Weinberger continued to do work at Bell before becoming the Chief Technology Officer of the hedge fund Renaissance Technologies alongside former code breaker and mathematician James Simons and Robert Mercer. His face earned much love from his coworkers at Bell during the advent of digital photography, and hopefully some day we'll see it on the Google Search page, given he now works there. Brian Kernighan was a contributor to early Multics and then Unix work, as well as C. In fact, an important C implementation, K&R C, stands for Kernighan and Ritchie C. He coauthored The C Programming Language and has written a number of other books, most recently on the Go programming language. He also developed a number of influential algorithms, as well as some other programming languages, including AMPL. His 1978 description of how to manage memory when working with those pesky strings we discussed earlier went on to give us the Hello World example we use for pretty much all introductions to programming languages today. He worked on ARPA projects at Stanford, helped with emacs, and now teaches computer science at Princeton, where he can help shape the minds of future generations of programmers and language creators.

The History of Computing
Perl, Larry Wall, and Camels


Nov 21, 2021 · 15:00


Perl was started by Larry Wall in 1987. Unisys had just released the 2200 series, and only a few years earlier had stopped using the name UNIVAC for any of their mainframes. They had merged with Burroughs the year before to form Unisys. The 2200 was a continuation of the 36-bit UNIVAC 1107, which went all the way back to 1962. Wall was one of the 100,000 employees who helped bring in over ten and a half billion dollars in revenues, making Unisys the second largest computing company in the world at the time. They merged just in time for the mainframe market to start contracting. Wall had grown up in LA and Washington and went to grad school at the University of California at Berkeley. He went to the Jet Propulsion Laboratory after grad school and then landed at System Development Corporation, which had spun out of the SAGE air defense project in 1955 and merged into Burroughs in 1986, becoming Unisys Defense Systems. The Cold War had been good to Burroughs after SDC built the timesharing components of the AN/FSQ-32 and the JOVIAL programming language. But changes were coming. Unix System V had been released in 1983, and by 1986 there was a rivalry with BSD, which had been spun out of UC Berkeley, where Wall went to school. And by then AT&T had built up the Unix System Development Laboratory, so Unix was no longer just a system for academics. Wall had some complicated text manipulation to program on these new Unix systems, and as many of us have run into, when we exceed a certain amount of code, awk becomes unwieldy, both from the sheer amount of hard-to-read code and from a runtime perspective. Others were running into the same thing, and so he got started on a new language he named Practical Extraction and Report Language, or Perl for short. Or maybe it stands for Pathologically Eclectic Rubbish Lister. Only Wall could know. The rise of personal computers gave way to the rise of newsgroups, and NNTP went to the IETF to become an Internet Draft in RFC 977.
People were posting tools to this new medium, and Wall posted his little Perl project to comp.sources.unix in 1988, quickly iterating to Perl 2, where he added the language's own form of regular expressions. This is when Perl became one of the best programming languages for text processing and regular expressions available at the time. Another quick iteration came when more and more people were trying to write arbitrary data into objects with the rise of byte-oriented binary streams. This allowed us not only to read data from text streams terminated by newline characters, but to read and write any old characters we wanted to. And so the era of socket-based client-server technologies was upon us. And yet Perl would become even more influential in the next wave of technology as it matured alongside the web. In the meantime, adoption was increasing, and the only real resource to learn Perl was the manual, or man, page. So Wall worked with Randal Schwartz to write Programming Perl for O'Reilly press in 1991. O'Reilly has always put animals on the front of their books, and this one came with a camel on it. It became known as “the pink camel” because the art was pink; later the art was blue, and it became just “the Camel Book”. The book became the primary reference for Perl programmers, and by then the web was on the rise. Yet Perl was still mostly a programming language for text manipulation. Then again, most of what we did as programmers at the time was text manipulation. Linux came around in 1991 as well. Those working on these projects probably had no clue what kind of storm was coming with the web, created in 1990, Linux, written in 1991, PHP in 1994, and MySQL in 1995. It was an era of new languages to support new ways of programming. But this is about Perl - whose fate is somewhat intertwined. Perl 4 came in 1993. It was modular, so you could pull in external libraries of code. And so CPAN came along that year as well.
It's a repository of modules written in Perl, dropped into a location on the file system that was set at the time perl was compiled, like /usr/lib/perl5. CPAN covers far more than the core distribution; there are now over a quarter million packages available, with mirrors on every continent except Antarctica. The second edition of the Camel Book coincided with the release of Perl 5 and was published in 1996. The changes to the language had slowed down for a bit, but Perl 5 saw the addition of packages, objects, and references, and the authors added Tom Christiansen to help with the ever-growing Camel Book. Perl 5 also brought the extension system we think of today - somewhat based on the module system in Linux. That meant we could load the base perl into memory and call those extensions. Meanwhile, the web had been on the rise, and one aspect of the power of the web was that while front-ends were stateless, cookies had come along to maintain user state. Given the variety of systems HTML was able to talk to, Gisle Aas and others started working on ways to embed Perl into pages, and mod_perl came along in 1996. Ken Coar chaired a working group in 1997 to formalize the concept of the Common Gateway Interface. Here we'd have a common way to call external programs from web servers. The era of web interactivity was upon us. Pages that were constructed on the fly could call scripts. And much of what was being done was text manipulation. One of the powerful aspects of Perl was that you didn't have to compile. It was interpreted and yet dynamic. This meant a source control system could push changes to a site without uploading a new jar - as had to be done with a language like Java. And yet object-oriented programming is weird in Perl. We bless an object and then invoke its methods with arrow syntax, which is how Perl locates subroutines. That got fixed in Perl 6, but maybe 20 years too late to adopt the dot notation used in Java and Python.
Perl 5.6 was released in 2000, and the team rewrote the Camel Book from the ground up for the 3rd edition, adding Jon Orwant to the team. This is also when they began the design process for Perl 6. By then the web was huge, and those mod_perl servlets or CGI scripts were, along with PHP and other ways of developing interactive sites, becoming common. And because of CGI, we didn't have to give the web server daemons access to too many local resources and could swap languages in and out. There are more modern ways now, but nearly every site needed CGI enabled back then. Perl wasn't just used in web programming. I've piped a lot of shell scripts out to perl over the years and used perl to do complicated regular expressions. Linux, Mac OS X, and other variants that followed Unix System V supported using perl in scripting and as an interpreter for stand-alone scripts. But I do that less and less these days as well. The rapid rise of the web meant that a lot of languages slowed in their development. There was too much going on, too much code being developed, too few developers to work on the open source or open standards for a project like Perl. Or is it that Python came along and represented a different approach, with modules created in Python to do much of what Perl had done before? Perl saw small, slow changes. Python moved much more quickly. More modules came faster, and object-oriented programming techniques hadn't needed to be retrofitted into the language. As the 2010s came to a close, machine learning was on the rise, and many more modules were being developed for Python than for Perl. Either way, the fourth edition of the Camel Book came in 2012, when Unicode and multi-threading were added to Perl, now with brian d foy as a co-author. And yet Perl 6 sat in an "it's coming so soon," "it's right around the corner," "it's imminent" state for over a decade. Then 2019 saw Perl 6 finally released. It was renamed to Raku, given how big a change was involved.
They'd opened up requests for comments all the way back in 2000. The aim was to remove what they considered historical warts, which the rest of us might call technical debt. Rather than a camel, they gave it a mascot called Camelia, the Raku Bug. Thing is, Perl had a solid 10% market share for languages around 20 years ago. It was a niche language maybe, but that popularity has slowly fizzled out, and appears to be on a short resurgence with the introduction of Perl 6 - but one that might just be temporary. One aspect I've always loved about programming is that the second we're done with anything, we think of it as technical debt. Maybe the language or server matures. Maybe the business logic matures. Maybe it's just our own skills. This means we're always rebuilding little pieces of our code - constantly refining as we go. If we're looking at Perl 6 today, we have to decide whether we want to try to do something in Python 3 or another language - or try to just update Perl. If Perl isn't being used in very many micro-services, then given the compliance requirements to use each tool in our stack, it becomes somewhat costly to think of improving our craft with Perl rather than looking to use solutions that are possibly more expensive at runtime but less expensive to maintain. I hope Perl 6 grows and thrives and is everything we wanted it to be back in the early 2000s. It helped so much in an era, and we owe the team that built it and all those modules so much. I'll certainly be watching adoption with fingers crossed that it doesn't fade away. Especially since I still have a few Perl-based Lambda functions out there that I'd have to rewrite. And I'd like to keep using Perl for them!

Oxide and Friends
Dijkstra's Tweetstorm


Oct 19, 2021 · 86:51


Oxide and Friends Twitter Space: October 18th, 2021 - Dijkstra's Tweetstorm

We've been holding a Twitter Space weekly on Mondays at 5p for about an hour. Even though it's not (yet?) a feature of Twitter Spaces, we have been recording them all; here is the recording for our Twitter Space for October 18th, 2021.

In addition to Bryan Cantrill and Adam Leventhal, speakers on October 18th included Edwin Peer, Dan Cross, Ryan Zezeski, Tom Lyon, Aaron Goldman, Simeon Miteff, MattSci, Nate, raycar5, night, and Drew Vogel. (Did we miss your name and/or get it wrong? Drop a PR!)

Some of the topics we hit on, in the order that we hit them:

- Dijkstra's 1975 "How do we tell truths that might hurt?" (EWD 498):
  - PL/I "belongs more to the problem set than to the solution set"
  - "The use of COBOL cripples the mind; its teaching should, therefore, be regarded as a criminal offence"
  - "APL is a mistake, carried through to perfection. It is the language of the future for the programming techniques of the past: it creates a new generation of coding bums"
- [@3:08](https://youtu.be/D-Uzo7M-ioQ?t=188) Languages affect the way you think; "It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration."
- [@4:33](https://youtu.be/D-Uzo7M-ioQ?t=273) Adam's Perl story; the Camel Book, not to be confused with OCaml; "You needed books to learn how to do things"; CGI
- [@9:04](https://youtu.be/D-Uzo7M-ioQ?t=544) Adam meets Larry Wall
- [@11:59](https://youtu.be/D-Uzo7M-ioQ?t=719) Meeting Dennis Ritchie; "We were very excited; too excited some would say…"
- [@15:04](https://youtu.be/D-Uzo7M-ioQ?t=904) Effects of learning languages, goals of a language, impediments to learning; Roger Hui of APL and J fame, RIP
- Accessibility as a language value; Microsoft Pascal, Turbo Pascal, Scratch, LabVIEW
- [@25:31](https://youtu.be/D-Uzo7M-ioQ?t=1531) Nate's experience; languages have different audiences
- [@27:18](https://youtu.be/D-Uzo7M-ioQ?t=1638) Human languages; the Esperanto con-lang; tonal languages; learning new and different programming languages
- [@37:06](https://youtu.be/D-Uzo7M-ioQ?t=2226) Adam's early JavaScript (tweet), circa 1996
- [@44:10](https://youtu.be/D-Uzo7M-ioQ?t=2650) Learning from books, sitting down and learning by typing out examples; how do you learn to program in a language?; Zed Shaw on learning programming through spaced repetition (blog); rigid advice on how to learn; ALGOL 68, planned successor to ALGOL 60; ALGOL 60 was, according to Tony Hoare, "an improvement on nearly all of its successors"
- [@50:41](https://youtu.be/D-Uzo7M-ioQ?t=3041) Where does Rust belong in the progression of languages someone learns?; "Rust is what happens when you've got 25 years of experience with C++, and you remove most of the rough edges and make it safer"; "Everyone needs to learn enough C to appreciate what it is and what it isn't"
- [@52:45](https://youtu.be/D-Uzo7M-ioQ?t=3165) "I wish I had learned Rust instead of C++"
- [@53:35](https://youtu.be/D-Uzo7M-ioQ?t=3215) Adam: Brown revisits intro curriculum, teaching Scheme, ML, then Java; Adam learning Rust back in 2015 (tweet), "First Rust Program Pain (So you can avoid it…)"; Tom: there's a tension in learning between the people who hate magic and want to know how everything works in great detail, versus the people who just want to see something useful done - it's hard to satisfy both
- [@1:00:02](https://youtu.be/D-Uzo7M-ioQ?t=3602) Bryan coming to Rust; the "Learn Rust with entirely too many linked lists" guide; Rob Pike interview: "Its concurrency is rooted in CSP, but evolved through a series of languages done at Bell Labs in the 1980s and 1990s, such as Newsqueak, Alef, and Limbo."
- [@1:03:01](https://youtu.be/D-Uzo7M-ioQ?t=3781) Debugging Erlang processes; Ryan on runtime vs. language; tuning runtimes; Go and Rust
- [@1:06:42](https://youtu.be/D-Uzo7M-ioQ?t=4002) Rust is its own build system; Bryan's 2018 "Falling in love with Rust" post; Lisp macros, Clean, Logo, Scratch
- [@1:11:27](https://youtu.be/D-Uzo7M-ioQ?t=4287) "The use of anthropomorphic terminology when dealing with computing systems is a symptom of professional immaturity."
- [@1:12:09](https://youtu.be/D-Uzo7M-ioQ?t=4329) Oxide bringup updates; I2C (Inter-Integrated Circuit); SPI (Serial Peripheral Interface); iCE40

If we got something wrong or missed something, please file a PR! Our next Twitter space will likely be on Monday at 5p Pacific Time; stay tuned to our Twitter feeds for details. We'd love to have you join us, as we always love to hear from new speakers!

Byte Papo
B0011: Larry Wall


Aug 17, 2021 · 8:00


In this episode, we present the story of Larry Wall, creator of the Perl programming language, linguist, former employee of NASA's Jet Propulsion Laboratory, and one-time aspiring missionary to Africa. Set aside a byte of minutes from your day and join us for the segment Uma Byta História! Want to get to know us better and access more content? Visit bytepapo.com! _____ Host: Mateus Mendelson Biographical research: Hugo Tadashi Audio editing: Randi Maldonado

Thoughty Auti - The Autism & Mental Health Podcast
Neurotribes - The History & Future Of Autism w/ Steve Silberman


Jul 11, 2021 · 67:50


What journey did Steve go on when writing Neurotribes? How did the concept of autism first develop? What is the Geek Syndrome? In this episode of the Thoughty Auti Podcast, Thomas Henley is joined by Steve Silberman to talk about the history and future of autism - Steve is a multi-award-winning writer, former writer for Wired, and an Ally Of The Year award winner for his contributions to the autistic community! They start off the podcast talking about two authors Steve had involvement with during the development stages of their books - Alex Riley in A Cure For Darkness & Dara McAnulty in Diary Of A Young Naturalist - and the background to these standout writers. With the daunting reality of COVID-19, Steve highlights the sly endorsements of eugenics during the pandemic, further exploring the effect COVID has had on autistic people. In a welcome discussion, the two talk about the recent topic of self-diagnosis and Steve's recent award... the Samuel Johnson Prize. Steve started his research into autism following a tech-industry cruise across the Alaskan Panhandle where he met the renowned Larry Wall, the inventor of the innovative programming language Perl. In an investigative piece, ‘The Geek Syndrome', Steve looked past the ramblings of scared parents around the ‘epidemic of autism' in Silicon Valley, pressing on to look at the strengths demonstrated by autistic tech entrepreneurs. Steve became obsessed with autism and its complex history, pouring his heart, soul and money into the formation of Neurotribes. Highlighting the misconceptions around the conceptualisation of Asperger's Syndrome and the little-known past of Asperger's Jewish colleagues, the two delve into the developments that may lie in the future. Thank you so much if you've listened through all the episodes; it's been an amazing season and I'm definitely looking forward to bringing you an even better season... very soon!
If you have an exciting or interesting story and want to appear on the next season, please contact me at: aspergersgrowth@gmail.com Steve's Links:- Twitter - https://twitter.com/stevesilberman?s=21 Buy Neurotribes - https://www.amazon.co.uk/NeuroTribes-Legacy-Autism-Smarter-Differently/dp/1760113638 ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ Website - https://www.thomashenley.co.uk ♫ THOUGHTY AUTI PODCAST Get it on Spotify free here - https://open.spotify.com/show/6vjXgCB7Q3FwtQ2YqPjnEV FOLLOW ME On Social Media ♥ - ☼ Facebook - Aspergers Growth ☼ Twitter - @aspergersgrowth ☼ Instagram - @aspergersgrowth Support via Patreon! - https://www.patreon.com/aspergersgrowth

BSD Now
367: Changing jail datasets


Sep 10, 2020 · 45:28


A 35 Year Old Bug in Patch, Sandbox for FreeBSD, Changing from one dataset to another within a jail, You don’t need tmux or screen for ZFS, HardenedBSD August 2020 Status Report and Call for Donations, and more. NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/) Headlines A 35 Year Old Bug in Patch (http://bsdimp.blogspot.com/2020/08/a-35-year-old-bug-in-patch-found-in.html) Larry Wall posted patch 1.3 to mod.sources on May 8, 1985. A number of versions followed over the years. It's been a faithful ally for a long, long time. I've never had a problem with patch until I embarked on the 2.11BSD restoration project. In going over the logs very carefully, I've discovered a bug that bites this effort twice. It's quite interesting to use 27 year old patches to find this bug while restoring a 29 year old OS... Sandbox for FreeBSD (https://www.relkom.sk/en/fbsd_sandbox.shtml) A sandbox is software that artificially limits access to specific resources on the target according to the assigned policy. The sandbox installs hooks into the kernel syscalls and other subsystems in order to intercept the events triggered by the application. From the application's point of view, it is working as usual, but when it wants to access, for instance, /dev/kmem, the sandbox software decides, based on the assigned sandbox scheme, whether to grant or deny access. In our case, the sandbox is a kernel module which uses the MAC (Mandatory Access Control) Framework developed by the TrustedBSD team. All necessary hooks were introduced to the FreeBSD kernel. Source Code (https://gitlab.com/relkom/sandbox) Documentation (https://www.relkom.sk/en/fbsd_sandbox_docs.shtml) News Roundup Changing from one dataset to another within a jail (https://dan.langille.org/2020/08/16/changing-from-one-dataset-to-another-within-a-freebsd-iocage-jail/) ZFS has the ability to share itself within a jail. That gives the jail some autonomy, and I like that.
I’ve written briefly about that, specifically for iocage. More recently, I started using a ZFS snapshot for cache clearing. The purpose of this post is to document the existing configuration of the production FreshPorts webserver and outline the plan for how to modify it for more ZFS-snapshot-based cache clearing. You don’t need tmux or screen for ZFS (https://rubenerd.com/you-dont-need-tmux-or-screen-for-zfs/) Back in January I mentioned how to add redundancy to a ZFS pool by adding a mirrored drive. Someone with a private account on Twitter asked me why FreeBSD—and NetBSD!—doesn’t ship with a tmux or screen equivalent in base in order to daemonise the process and let it run in the background. ZFS already does this for its internal commands. HardenedBSD August 2020 Status Report and Call for Donations (https://hardenedbsd.org/article/shawn-webb/2020-08-15/hardenedbsd-august-2020-status-report-and-call-donations) This last month has largely been a quiet one. I've restarted work on porting five-year-old work from the Code Pointer Integrity (CPI) project into HardenedBSD. Chiefly, I've started forward-porting the libc and rtld bits from the CPI project and now need to look at llvm compiler/linker enhancements. We need to be able to apply SafeStack to shared objects, not just application binaries. This forward-porting work I'm doing is to support that effort. The infrastructure has settled and is now churning normally and happily. We're still working out bandwidth issues. We hope to have a new fiber line run by the end of September. As part of this status report, I'm issuing a formal call for donations. I'm aiming for $4,000.00 USD for a newer self-hosted Gitea server. I hope to purchase the new server before the end of 2020. Important parts of Unix's history happened before readline support was common (https://utcc.utoronto.ca/~cks/space/blog/unix/TimeBeforeReadline) Unix and things that run on Unix have been around for a long time now.
In particular, GNU Readline was first released in 1989 (as was Bash), which is long enough ago for it (or lookalikes) to become pretty much pervasive, especially in Unix shells. Today it's easy to think of readline support as something that's always been there. But of course this isn't the case. Unix in its modern form dates from V7 in 1979 and 4.2 BSD in 1983, so a lot of Unix was developed before readline and was to some degree shaped by the lack of it. Tarsnap This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. Feedback/Questions Mason - mailserver (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/367/feedback/Mason%20-%20mailserver.md) casey - freebsd on decline (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/367/feedback/casey%20-%20freebsd%20on%20decline.md) denis - postgres (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/367/feedback/denis%20-%20postgres.md) *** Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv) ***

Founder Confessionals by Impact Founder
Pivoting During COVID 19 with Larry Wall Jr.

Founder Confessionals by Impact Founder

Play Episode Listen Later Jun 22, 2020 49:21


Larry Wall Jr. is the Founder and CEO of MTreatment, a company that helps you access all of your healthcare data through an easy and interactive portal.

In this conversation, Kristin and Larry discuss entrepreneurship during the pandemic and what entrepreneurs are doing to pivot. Larry also shares his passion for the healthcare system to become less fragmented and how his company is helping people navigate the fragmented healthcare system. You can connect with Larry on LinkedIn: https://www.linkedin.com/in/lawrence-wall-jr-39a0aa9

This episode is brought to you by Founders First System. Founders First System is a simple framework of metrics and disciplines that helps keep founders healthy, happy, and productive as they build their companies and change the world. For June and July 2020, Founders First System is offering Impact Founder subscribers a highly discounted membership to Peak Ability, a program for founders who've committed to track and improve their health and happiness. Join Peak Ability today at https://community.foundersfirstsystem.com/groups/2284899?utm_source=manual. You can also join the free Founders First community at https://community.foundersfirstsystem.com/ or by searching for Founders First Community in the app store on your smartphone.

About Impact Founder
Impact Founder is an independent social impact media company with a solution. We tell the untold stories, through in-depth inquiries delving into the failures and triumphs of entrepreneurs. Visit https://www.impactfounder.com/ to share your story of entrepreneurship.

Python Podcast
Javascript Frontends

Python Podcast

Play Episode Listen Later Apr 23, 2020 105:23


Since we have, for various reasons, started looking into JavaScript frontends a bit, today we talk about this topic in general terms, and about how to talk from there to backends, which are usually implemented in Python.

Show notes
Our email for questions, suggestions & comments: hallo@python-podcast.de
Lost & Found: PyData Deep Dive, Meta-Podcast
Audio hard/software: headsets from Beyerdynamic: DT 297, DT 797; Superlux HMC 660 X and how to use it; connecting the HMC 660 X via a jack plug; an audio interface with native 12 V phantom power: Zoom H6; Ultraschall; REAPER; Studio Link / Beta; Zencastr
Video conferencing software: Zoom; Microsoft Teams; self-hosting possible: Jitsi, BigBlueButton; Pythoncamp; Google Meet; Whereby; FaceTime
News from the scene: A Language Creators' Conversation: Guido van Rossum, James Gosling, Larry Wall & Anders Hejlsberg; Django 1.11 EOL; Pytest troubles; Pyenv windows
JavaScript frontends: perhaps the place to organize a study group: Vue-JS-Cologne; vue; react; angular; jQuery; History API; REST / GraphQL; Relay / Apollo / axios; ASGI; single page application; redux; DRF serializer; monorepo; Jacob Kaplan-Moss - Assets in Django without losing your hair - PyCon 2019; WhiteNoise; django-storages; webpack; Parcel; FastAPI / Starlette
Public tag on konektom

CodeArmy
Ser un Programador Perezoso realmente es Malo?

CodeArmy

Play Episode Listen Later Apr 10, 2020 52:12


In this episode we give you a short introduction to the virtues of a good programmer.

Technically Religious
S1E38: End of Season Wrap-Up

Technically Religious

Play Episode Listen Later Dec 31, 2019 40:55


In our last episode of the season Josh and Leon look back at the stories that most stood out and the data that shows how we performed; and then look ahead to what next year will bring. Stick with us as we highlight some of the greatest moments of season one, and chart a course into season 2. Listen or read the transcript below.

Josh: 00:00 Welcome to our podcast where we talk about the interesting, frustrating and inspiring experiences we have as people with strongly held religious views working in corporate IT. We're not here to preach or teach you our religion. We're here to explore ways we make our career as IT professionals mesh - or at least not conflict - with our religious life. This is Technically Religious.
Leon: 00:23 It's our last episode of the year. And so we're going to do what every major Hollywood production does.
Josh: 00:27 Take a vacation to Hawaii and bring the film crew so we can expense it?
Leon: 00:31 Uh, no.
Josh: 00:32 And then do a retrospective episode so that we don't have to actually create that much!
Leon: 00:36 Okay, so you're half right. Actually, maybe a third, right? Because we're still going to do a full episode.
Josh: 00:40 And no Hawaii?
Leon: 00:42 No Hawaii. So let's dive right in. I'm Leon Adato.
Josh: 00:47 And I'm Josh Biggley. And while we normally start the show with a shameless self promotion today we're going to do an end of the year economy size version. Like we shopped at Costco,
Leon: 00:57 Right, exactly. For all this stuff that we need for the end of year, all our parties and everything like that. Right. So instead of introducing just the two of us, we're going to introduce everyone who's been on the podcast this year. So here we go! Um, Josh, kick it off.
Josh: 01:11 All right, so, uh, Josh Biggley, Tech Ops Strategy Consultant. Now with New Relic. You can find me on the Twitters @jbiggley. I am officially as of this last week officially. ex-Mormon.
Leon: 01:20 Do I say congratulations?
Josh: 01:22 I think so. Maybe there's a Hallmark card for it. I don't know, but yeah, no, we officially resigned this week. It came through a Thursday, Wednesday. I don't remember. Uh, yeah, so that's it. We're done.
Leon: 01:33 Okay. All right. And, uh, I'm Leon Adato. I'm a Head Geek at SolarWinds. You can find me on the Twitters @LeonAdato. I also pontificate on technical and religious things at https://www.Adatosystems.com. I am still Orthodox Jewish. I am not ex anything. Uh, and in the show notes, just so you know, we're going to list out everybody that we talk about in the next few minutes along with all of their social media connections and the episodes they appear in so you can look them up. We're just going to go back and forth on this one. So I'm going to kick it off. Doug Johnson was on our show. He's the CTO of WaveRFID.
Josh: 02:08 Destiny Bertucci is the product manager at SolarWinds... uh, "A" product manager. They have lots of them. You can find her on the Twitters @Dez_sayz,
Leon: 02:17 And also a program manager at SolarWinds, Kate Asaff.
Josh: 02:21 All right. And Roddie Hasan, Technical Solutions Architect at Cisco.
Leon: 02:25 Al Rasheed, who's a contractor and virtualization admin. Extra-ordinaire.
Josh: 02:28 Indeed. Xtrordinair, Mike Wise, president of Blockchain Wisdom. I see. I see what he did there.
Leon: 02:35 Yeah, yeah. Blockchain wisdom, Wise-dom, right, whatever. Okay. Keith Townsend, who is CEO of CTO Advisor.
Josh: 02:43 Yechiel Kalmenson is a software engineer at Pivotal. Yay.
Leon: 02:47 Yay. I'm so glad that you got to say his name again. Cory Adler, who's lead developer at Park Place.
Josh: 02:53 Rabbi Ben Greenberg is developer advocate at Vonage.
Leon: 02:57 Steven Hunt or "Phteven" as we like to call him, Steven Hunt, who is senior director of product management at DataCore Software.
Josh: 03:04 All right. Leon, you're going to have to help me here because I know I'm going to mis-pronounce this name.
Leon: 03:08 Go for it. It's a hard "H".
It's a hard H.
Josh: 03:11 Hame? Chame?
Leon: 03:11 Chaim (Cha-yim).
Josh: 03:11 Okay. Chaim Weiss, a front-end Angular developer at DecisionLink there. I feel like we probably should have done that a little different and not made the guy who does not, um, you know, speak,
Leon: 03:25 No, I think we did it exactly right.
Josh: 03:29 You are a scoundrel.
Leon: 03:30 I am. So, Hey, you can have me say all the hard, uh, Mormon names.
Josh: 03:37 Definitely. Oh, we need to insert some of those. All right, let's talk about numbers cause I mean, I, I, I'm a number geek. I love numbers. You called me out today on Twitter, uh, because I was complaining about a certain hundred billion dollar investment account that a certain former, uh, church that I have or a church that I formerly belonged to, might have. And I was comparing it to the Bill and Melinda Gates Foundation. Um, our numbers don't have nearly as many zeros.
Leon: 04:02 No, not nearly as much. Um, and the numbers we're talking about are not financial. The numbers that we're going to talk about is just, uh, who's been listening to the episode. So, uh, I think I mentioned the top of the show. This is our last episode. It's number 38 for the year. We got a late start in the year, but we've been almost every week. So 38 episodes, uh, and yay. And you can find us on a variety of platforms you can find us on. I'm just going to do this in one breath. iTunes, Spotify, Google play, Stitcher, Pocket Casts, Podbean, YouTube, PlayerFM, iHeartRadio. And of course you can listen directly from the website at https://www.technicallyreligious.com.
Josh: 04:37 Wow, congratulations. That was well done.
Leon: 04:39 Thank you.
Josh: 04:41 All right, so, um, let's talk about who's listening. I mean, or maybe how many people are listening. So as of this recording or prior to this recording, um, we've had 2100... Over 2100 listens and downloads. OVER 21... Does that mean like 2101 or we.
Leon: 04:57 It's anything between 2101 and a billion.
Josh: 05:00 Sweet.
Leon: 05:01 But you have to figure that if it was anything close to say 3000, we probably would have said it.
Josh: 05:05 That that is true. So over 2100 listens and because we like math, that's about 50 listeners per episode. Thanks mom. Appreciate it.
Leon: 05:14 Right. It's, yeah, it's not necessarily listeners, it's just people who've listened. So yes, it could have been both of our moms clicking the podcast repeatedly. Hopefully that's not the case. And in those 2100 listens, the results are that the top five episodes for the year based on the listen count. Uh, our number one episode is also our number one episode, "Religious Synergy". Podcast episode number one is first with 89 listeners.
Josh: 05:42 That's going way back, way back. Tied actually for number one, but not the first episode, was episode 12, "Fixing the World One Error Message at a Time." That was a good episode.
Leon: 05:55 It really was. There were some amazing aha moments for me in that one. Uh, number three is episode 17, "Pivoting Our Career on the Tip of a Torah Scroll," which is where I was talking with Cory Adler, Rabbi Ben Greenberg, and Yechiel Kalmenson about their respective transitions from the rabbinate, from rabbinic life or just Yeshiva life, into becoming programmers, which was kind of a weird, interesting pivot in and of itself. And that had 76 listeners.
Josh: 06:25 Following up to... I mean, that really riveting discussion. I mean, honestly, it, it, it was very interesting to me is this whole idea of a possible imposter syndrome, which apparently I'm imposing on you by making you listen to this episode? I don't know. Um, episode 11 was "Imposter Syndrome" with 71 listeners. Um, I would encourage others to listen to it because it's still very, very relevant.
Leon: 06:51 Yeah. Yeah, there was, again, that was another one where I think we had a few aha moments both in, in ourselves.
Like, "Oh, that's right. That's it. You know, that's a good way to look at it. That's an interesting way to..." You know, some, and some ways to deal with imposter syndrome, which I think in IT is definitely a thing. Um, and the last of the top five is episode three. So going again, way back, "Being a Light Unto the Nations During a Sev One Call," I think the "sev one call" was what got people's attention. Um, and that had 68 listeners.
Josh: 07:20 I want to point out that this is the first time in my entire career that I have not been on call.
Leon: 07:26 Wow.
Josh: 07:27 Right. I realized that my very first, I mean maybe my second week at New Relic, I was like, Oh my goodness, I'm not on call anymore. I, no one's going to call me when there's a Sev One. It was weird.
Leon: 07:38 Yeah. That's a, that's a, and that's something we're going to talk about in the coming year. One of the episodes is how we have to, uh, almost rewire our brain for different, um, positive feedback loops when we change, when we significantly change our role. And that was something that actually, uh, Charity Majors talked about on Twitter about a month ago is going from developer to CEO / CTO, and then back to developer and how it's just a completely different positive reinforcement model and what that's like, what that does and we'll talk about that. But yeah, it's, it's really weird when you make the transition. Um, as far as numbers, I also want to talk about where people are listening from. Uh, I will say "obviously"... Obviously the, the largest number of our listeners, uh, come from the United States: about, uh, 1,586 or 82% of our listeners from the U.S. But that's not everything. It's, you know, it's not all about the U.S., as many people not in the U.S. remind us.
Josh: 08:33 I mean, Canada's pretty far down the list. I mean, the UK came in at number two at 104. So thanks Jez (Marsh) for listening to all of our episodes. Three times. Is that the way it works?
Leon: 08:44 Yeah, something like that. That was the numbers, right? Three again, you know, a couple of our UK listeners just kept on clicking. Um, interestingly, number three position is Israel with 73 listens. So I can think of a few people, Ben Greenberg being one of them, but Sharone Zeitzman and a few others and Aaron Wolf, uh, are people I know there, but who knows where those are. The, you know, 70 clicks came from.
Josh: 09:06 Are you asking your son to click every week as well?
Leon: 09:09 He actually is in Yeshiva. He doesn't have access.
Josh: 09:11 Oh, interesting. So you're not, you're not gaming. All right. I get it, you're not gaming the system. I appreciate that. Um, so number four, Germany, um, I don't know anyone in Germany. Well... Nope, no.
Leon: 09:22 Well Sasha Giese, another Head Geek. He's in Germany. Well, actually he's in Cork, but I don't know what kind of, how he VPNs things. So he's either the United, the UK folks or he's the Germany folks. Who knows. Um, let's see. Number five position is Finland with 38 listeners. And then we get to...
Josh: 09:39 Canada!!
Leon: 09:39 Oh, Canada,
Josh: 09:42 28. Um, yeah. Yay. VPN. I'll tell him and I say, okay, so Canadians need to up your game.
Leon: 09:50 Puerto Rico comes in next with 8 listens or 8 listeners. It's hard to tell.
Josh: 09:55 Austria?
Leon: 09:55 Austria.
Josh: 09:55 People listen from Austria?
Leon: 09:59 They listened to us from Australia.
Josh: 10:00 Five people in Austria. Yay. Austria.
Leon: 10:02 Right? And Australia, not to be confused with Austria. Uh, also five listens and number 10:
Josh: 10:07 Uh, Czech Republic number four. All right, with four. I don't know what about in the Czech Republic either.
Leon: 10:13 So I know a lot of, uh, SolarWinds developers are in the Czech Republic. So that could be, that could be it. So thank you. There's, there's more stats than that.
I mean, you know, it, it goes down all the way to Vietnam and the Philippines, and they are the ones with one listen each. I don't know who it is, whoever the person is from Belgium. Thank you for listening. Same thing for France and Japan. But, uh, we appreciate all the people who are listening.
Josh: 10:36 Our Bahamas listeners, all two of you, if you'd like us to come and visit, we'd be more than happy to do that, especially during the cold winter months. So I mean, just get ahold of us. We'll arrange, we'll arrange flights.
Leon: 10:47 And, and uh, the two listeners from Switzerland, um, I apologize for everything I might say about Switzerland. I didn't have a delightful time when I was there in 2000. Uh, and I kind of take it out on you sometimes, so thank you for listening. Anyway. All right, so where are people, is this, that's weird geographically, but how are people listening? I know I listed out the type, the platforms that we, uh, promote on, but actually people are listening in a variety of different ways. What are some of them?
Josh: 11:15 So browser, uh, 370, that's almost 20% of you are listening in the browser, which means, Hey, you're listening to us at work. Great. Now get back to work and do your job, right?
Leon: 11:23 Well, they can, they can listen while they work. It's okay. All right.
Josh: 11:26 Whistle while they work?
Leon: 11:27 No, listen, listen.
Josh: 11:30 Oh. I thought we were promoting Disney+ all of a sudden.
Leon: 11:31 No we are not promoting Disney+. We are not going to do that. Um, the next, uh, platform or agent that's being used is Overcast, which is interesting. Uh, 235 listens came from, um, over the Overcast platform,
Josh: 11:44 uh, Apple podcasts coming in at 168.
Leon: 11:47 So I'm willing to bet that that's Destiny and Kate who are both Apple fanatics and they are just clicking repeatedly.
Josh: 11:53 That's nice. Yay. Thank you. Thank you for clicking repeatedly. We appreciate that. OKhttp. I don't even know what that is.
Leon: 12:00 It's an interesting little platform that some people are using and it's number four on the list. So 165 listens. PocketCasts is 133 listens.
Josh: 12:10 My preferred platform, actually, Podcast Addict, at 124.
Leon: 12:14 Spotify, which actually is how I like to listen to a lot of stuff. Spotify has 96 listens,
Josh: 12:19 The PodBean app, 94 listens.
Leon: 12:22 Right. And that's actually how we're hosting. We'll talk about that in a minute. iTunes. So, I'm not sure exactly the differentiation between the Apple podcast and iTunes, but iTunes is at 72 listens. And in the number 10 spot:
Josh: 12:33 Google podcasts where I started listening to a lot of podcasts, 70 listens, and then, I mean the list is pretty long after that, but there's a lot of diversity out there.
Leon: 12:42 Yeah. It's not just like one, one, one, one, one, you know, all the way down after that. I, you know, there's, there's a bunch of them, PlayerFM and Bullhorn and, and CFnetwork and things like that. So...
Josh: 12:51 WatchOS?
Leon: 12:52 Yeah, watchOS, people listening to it on their watch, now. It's, you know, I mean, you know, and you've got, you know, iHeartRadio, Facebook app, um, you know, Twitter app. People are listening to us in a lot of different ways, which is kind of interesting. So, so what do these numbers tell us? Okay, so those are the numbers, but what are we getting from this?
Josh: 13:08 Um, people in the US like to listen to us on their watches. That would be a connection that you could possibly draw, but probably not accurate. I, the first thing is, you know, we have a long way to go. I think that 2000 listens in the better part of a year, 50 listens per episode. If you just divide it mathematically, um, there's, there's a lot more growth that we can do. So if you're listening and you think, "Oh, you know, it'd be so much easier to listen to this if you just..." Blah, blah, please let us know.
Um, you know, we want to make this interesting and listen-able, whether you are listening to it live or meaning, you know, from a podcast platform or you're reading it through a transcript or what have you, please let us know what we can do to make the podcast more consumable for you or your friends or family or coworkers.
Josh: 13:56 If that suggestion is that I don't participate anymore as well, to make it more, uh, listen-able, I mean, let Leon know and he'll let me down gently.
Leon: 14:05 Right? And vice versa, vice versa. I could see it going either way.
Josh: 14:09 Definitely.
Leon: 14:11 So, so, right. And I think also the numbers are interesting in terms of the ways that people are listening. And I think that tells us something a little bit about where we might want to advertise or promote along the way. That, you know, that Overcast was really a surprise for me. I did not expect that. It's not on the list of things that I had targeted. Um, and yet there it is. You know, people were listening to it, so that might tell us where we want to reach out to people.
Josh: 14:33 And it's funny too because both you and I participate a fair bit on Twitter and LinkedIn and we've been known to, I mean, both retweet and post about our podcast on those two platforms. I mean, I'm, I'm surprised because I would've expected more people to be listening via one of those platforms like Twitter, you know, in-tweet listening. So...
Leon: Yeah, it is interesting. And maybe that's something we need to find a way to enable more of. I dunno. I dunno. Um, you know, that's, so we're going to, we're going to dig through those numbers, um, and see what else we can find. Again, if you see something in those numbers that we didn't, let us know. The next thing I want to do is go relatively quickly through some behind the scenes. We've had... I've had some folks ask, "Well, how exactly do you make the podcast?" Um, either because they're interested in doing one of their own or because they just, you know, are interested in that stuff. So, uh, the behind the scenes stuff: first of all, we use a variety of microphones because we have guests from all over the place. So since Josh and I are, are the two primary voices you're going to hear, I use a Blue Yeti microphone, um, which I love.
Josh: 15:37 Yeah. And I use a Jabra Pro 930, which I use both for work and for the podcast. I think the takeaway here is you don't have to go and drop a hundred or 200 or more on a specialized, uh, microphone if you're just going to be doing a podcast from home. And if you're going to have more than one guest, it gets really awkward when people want to hug up against my face to talk into my mic.
Leon: 16:02 Yeah. At least it leads to some awkward questions, you know, in the house,
Josh: 16:05 right? Yeah. So you know why, why do you have Leon's whiskers on your sweater?
Leon: 16:13 Right, exactly. So yeah, you don't need a lot. Now again, I, I'm really enjoying the Blue Yeti. Um, Destiny turned me on to it, uh, when we first started doing, you know, talk about podcasts and doing them, and it was really a worthwhile investment for me, but I wholly support what Josh was saying: you can get good quality sound out of a, a variety of low end, low cost microphones. To record the podcast we use Cast, which you can...
Josh: 16:40 OK. Hold on a second, can I just, can I point out how awesome it is that a bunch of D&D geeks use a platform called "Cast" to record this show?
Leon: 16:49 Yes. Okay. It is kind of cool and yes, I do. I do have a little bit of nerdery in my head. And I say, "Okay, I'm going to cast now... HOYYYY!" Oh, you'll find Cast at http://tryca.st. Um, so you can find that there and it's really economical. It's 10 bucks a month for, I think it's 20 hours of recording. So for a home podcast you can fit the time that you... And you can export individual tracks or you can export a premixed version or whatever. It gives you a lot of nice granular controls and they even serve as a hosting platform, but we're not using it. And speaking of exporting, I export individual tracks for each voice and then I'll do the audio editing in Audacity, a free tool. It does everything that I need it to do. And if the sound is horrible, it's my fault because I'm, it's me using Audacity. If the sound is amazing and you love it, it's purely because Audacity is an amazing tool to use.
Josh: 17:50 Wait... we edit this show?
Leon: 17:51 We do. I tried to take out a lot of the ums and ahs and every once in a while we really mess up and we have to go back or something like that. I edit that out. Most of the time. I think episode 11... the unedited version ended up getting posted, but we didn't say anything terribly embarrassing in that one.
Josh: 18:07 We usually say all sorts of terribly embarrassing things that we publish well,
Leon: 18:11 Right, right. The embarrassing stuff is the best part.
Josh: 18:16 Um, so we, uh, we as an ep, as a podcast, we try to be very inclusive and accessible. And, uh, for our listeners who don't actually listen, who are hearing impaired, we use Temi, uh, for doing transcription. And I mean, that's, that's something that I picked up from you, uh, about halfway through this year. And I've really enjoyed that experience. And today as we were prepping for the show, I realized that doing the transcription isn't just for people who are hearing impaired. It's also very much for us. Because we post all of those transcriptions, and I was looking for a particular episode, something that we had said in those, these past 37 episodes, and I was able to go and search on http://technicallyreligious.com and just find it, boom. Just like that.
Leon: 19:03 Right. So that, that is a, a secondary benefit that I like.
Of course I said that we needed to do transcribing because I have a lot of friends who are Deaf or hard of hearing. I also have a lot of friends for whom English is not their first language. And so having the transcript works really well. Uh, and yes, it makes it very searchable. We can go back and find where we said something really easily. You don't have to listen to hours and hours of, uh, of recordings just to see "now, where was it that Doug talked about being the worst person to invite to a Christmas party..." Or whatever, which was hysterical by the way. Um, so yeah, it, it's, it comes in really handy and a little bit of extra work. Um, we host on PodBean, I mentioned that earlier. So that's where the episode gets uploaded to when it's finally done. And PodBean pushes things out to just about everything else. It pushes out to iTunes, Spotify, YouTube, um, a whole mess of platforms. And then I manually repost it to http://technicallyreligious.com and uh, that does the promotion, the actual promotion of the episode out to Twitter, Facebook, um, and LinkedIn.
Josh: 20:06 Interesting. And then I think that it's important that our listeners know that we invest between three and five hours per episode. Well, we've certainly gone longer. Some of our episodes and the prep, the recording and then the dissecting, I mean we're probably up around 8, sometimes 10 hours for a particular set of episodes. You know, those two-part-ers that we've done, you know, they've run really long, but yeah, three to five hours a week, uh, on top of our full time gigs as uh, husbands and fathers, uh, and jobs. Apparently we have to have jobs in order to make money and feed ourselves. So yeah, it's a labor of love.
Leon: 20:43 Yeah. My family is much, they're much more uh, solicitous of my saying "I want to go record a podcast"
Josh: 20:48 when they've eaten, you know, regular. Yes. Yeah. They're totally accepting of that. Right?
Leon: 20:53 Yeah. It makes things easier. And you know, the, I think the message there is that if, if you feel the itch to do a podcast, it's accessible. It's relatively easy to do. It requires more or less some free or cheap software. I told you that Cast is $10 a month. Um, Temi, one of the reasons why I like it is that it is 10 cents a minute for the transcribing. So, you know, a 30 minute episode is $3. Nice. It's really, really affordable to do so, you know, the costs are relatively low, um, between that and hosting on, um, PodBean. So it's really accessible to do. You know, don't think that there's a barrier to entry, that that money or even level of effort is a very true entry. And that means also that you can take a shot at it, make some mistakes, figure it out. I fully ascribe to Ira Glass' story that he did about, uh, the gap: that when you first start to do something, there's this gap between what you see in your head in terms of quality and how it comes out initially, that it's not, it may not be what you envision it can be, but you have to keep at it. You have to keep trying because ultimately you'll get there, because it's your, your sensibility of, and your vision. That really is what's carrying you through. Not necessarily your technical acumen at the start. That comes later. So that, just, you know, it's just a little encouragement. If you think you want to do this, absolutely try. Reach out to us on the side, either on social media or email or whatever and say, "Hey, I just need some help getting started." Or "Can you walk me through the basics of this or that," you know, we would love to help see another fledgling podcast get off the ground.
Josh: 22:28 This is why I had four children. The first three. I'm like, all right, that's uh, uh, obviously I've really messed up. And the fourth one, or maybe I should have a fifth. I dunno,
Leon: 22:38 Who knows? Well, okay. So I, I routinely and publicly refer to my oldest daughter as my 'pancake kid'.
You know, when you're making pancakes and, uh, you make the first one and it's like overdone on one side and kind of squishy on the other and misshapen and kind of, you know, that's, and the rest of them come out perfectly circular and golden brown and cooked all the way through because the griddle's finally up to the right temperature and everything. But the first pancake, that first pancake comes out and it's just a little weird. And my daughter is the pancake kid. So, uh, moving on from pancake children and how the sausage gets made, having made the sausage, I think we both have some moments in some episodes that were our favorites. And I'd like to start off, uh, I got a little bit nostalgic, um, about this. So my top favorite moment was actually when we had Al Rasheed on and you and Al ended up getting into this 80's music nostalgia showdown where every other comment was, you know, an oblique reference to some song that was, you know, top 40 radio at some point during the decade. It was, by the end of the episode, it was just. It was wonderful and awful and cringe-worthy and delightful all at the same time. And I just sat there with my jaw hanging open, laughing constantly. I had to mute myself. It was amazing.
Josh: 23:59 Wow. I mean, Cher would say, if we, "if I could turn back time..."
Leon: 24:05 See? See? It was like this, it was like this for 35 minutes straight. It was nothing but this. Okay. So that was one. The second one was, and we talked about this, uh, earlier with the top episodes, "Fixing the World One Error Message at a Time." There were just some amazing overlaps that came out during that episode. You know, where we saw that, you know, the pair programming may have had its roots, whether it knows it or not, in the idea of chevruta, or partner style learning in Yeshiva. That, you know, that was just a total like, Oh my goodness. Like again, an aha moment for me. So that was a really interesting one as we were talking about it. And finally, not a specific episode, but just every episode that, that we were together, and that's most of them: the time that I got to spend with you, Josh, you know, as we planned out the show, sort of 30, 40 minutes of prep time before we record, and we just had a chance to catch up on our lives and our families and things like that and really share it. And that's something that the audience is never going to necessarily hear. We weren't recording and it's just, you know, it was just personal banter between us. But you know, uh, we worked together for a very brief time, you know, at the same company, but then we worked together, you know, on the same tools and the same projects far longer than that. And this was, this really just gave us a chance to deepen that friendship. And I really value that. And to that end, the episode that is, that is titled "Failure to Launch," for me, was really a very personal moment. It was a really hard moment for me where my son was going through a hard time. And as a parent, when you see your kid struggling, it just tears you apart. And both the prep and actually the execution of that episode I think was for me, a testament to our friendship, you know, in audio, like in a podcast. That was, that was you being really supportive of me and helping me think through and talk through those moments. And um, you shared a lot of yourself in that episode also. And, and I think that was sort of emblematic of the, again, the secondary benefit of the podcast. The first benefit is just being able to share these ideas and stories with the public. But the secondary benefit for me was just how much friendship we were able to build and share throughout the, this last year.
Josh: 26:22 And I, I have to remind the audience that your son, he stayed in Israel, right. And he's doing absolutely fantastic.
So that time for you and I to commiserate, for... to be a virtual shoulder, um, to, you know, snuggle your head on, and yeah.

Leon: 26:40 That's how the whiskers got there! Angela, if you're listening, that's, that's how it happened.

Josh: 26:45 That is absolutely how it happened.

Leon: 26:47 Don't think anything else.

Josh: 26:49 No, I agree that those, those are the things that you don't really, you don't really value until suddenly they happen. And you realize that for the past year we've spent more time together than probably most of my friends. It's just weird. I mean, life is busy and you squeeze friendships in between other things, but this was something that we carved out every week. So, I mean, I got to spend 90 minutes to 120 minutes a week just chatting with you, on top of the chatting we did in social media and whatnot. So, 100% super powerful. Um, I often say, uh, you know, my best friend in the world, um, doesn't live anywhere near me. Uh, he lives in Cleveland, so that's great. So I,

Leon: 27:34 And that's the amazing part about the internet in general. But yeah, this podcast has helped. Okay. So those were, those were my favorites. Josh, what are yours? I've got the tissues out.

Josh: 27:41 Yeah, you got 'em? All right. So my first one was recently outing... um, I'm making you out yourself and your ongoing feud with Adam Sandler.

Leon: 27:52 Sorry, Adam. It goes all the way back to college. Uh, couldn't stand you. I'm sure you're a much better person now, but you were impossible to deal with back then.

Josh: 28:01 I mean, we were all, we were all impossible to deal with at that age. I'm just going to point that out. There's a reason that we send our kids to college. Just saying. There's also a reason that some animals eat their young. Also saying that.

Leon: 28:13 Oh, right. Medea was merely misunderstood.
She was just having a bad day that many mothers can commiserate with.

Josh: 28:22 Uh, also, I enjoy at least once an episode, sometimes more, reminding you that, um, you did abandon me after four days to take a role as a Head Geek at SolarWinds.

Leon: 28:37 Mea culpa, mea culpa, mea maxima culpa! I'm so sorry. Yes, I know. I know.

Josh: 28:42 And I think that that will probably go on my tombstone. Um, "Do you remember when Leon left me?" Or something.

Leon: 28:52 Again, hard to explain to your family why that's on your tombstone.

Josh: 28:55 It's going to be a big tombstone. And don't, don't worry. Um, and I think to your "Failure to Launch" episode, um, one of the moments that, not when it happened, but in retrospect, was sharing with the world that I suffer from depression, uh, and that it's okay. Um, and we talked about that later on; we talked about the power of reaching out to people, um, who say, "Look, I suffer from depression and it's okay to suffer from depression." And people who know me, uh, and who know me well, will know that sometimes it's very situational. But to tell the entire world, or at least 2,100 people, or 2,100 listens, um, that I suffered from depression... it was fine. It really was.

Leon: 29:41 Yeah, it really came out okay. And that actually arose from a previous episode. So the episode we're talking about is "Fight the Stigma," and in the previous episode it was just in passing, and to the listener it was very, you know, noncommittal. It was just, "...and I suffered from depression," et cetera, et cetera. Actually, that was the "Failure to Launch" episode that you mentioned it in. And afterward, after we'd stopped recording, I said, "Wow, that seemed so easy for you. Was it a big deal?" And you said, "Yeah, it was a huge deal. Like, my heart was beating in my chest!" And it really didn't seem like it, but it was a big admission.
We said, "We need to explore this a little bit more. We need to go into it." And it was really brave. I know that that's terrible, like, "Oh wow, you're such an inspiration." Like, don't turn you into that. But it hopefully made a difference in other people who are listening. And it was really a big thing for us who were doing the recording.

Josh: 30:35 Yeah. And I will say that, uh, in addition to that depression admission, this podcast has really been a part of my transition away from Mormonism. I mean, we started talking about this podcast a year before we actually started the podcast. So I was, you know, I was kind of in the throes of it. But, I mean, 30 to 60 minutes a week of being able to hear the perspectives of other people who, um, may or may not, um, share our religious views, or former religious views in my case, was really powerful for me and helped me process through my transition away from Mormonism a lot faster than most people. I've, you know, in the community, I've seen people that are going on decades of trying to transition away from Mormonism. And I did it in under two years.

Leon: 31:28 Right. And I think part of that, and this is one of the foundational ideas behind the "Tales from the TAMO Cloud" series that we've started to do, is to talk about people's journeys. Um, you know, both their technical journeys and also their religious journeys. Uh, and to make sure that the listeners understand that life is a journey. I know that's really cliche. That there's a place where you are today that is different from where you stood before. That the house that you grew up in and the traditions in that house are valid, and they are a thing, but that may not be what you do now. You may be doing what you may think of as more or less or different.
And that's normal. That we have multiple voices on here who say, "I started off like this, and then I was this, and now I am this, and this is how I got from here to there." And the "this" in that conversation could be, "I started off on help desk, and then I was a storage engineer, and now I'm working as a, you know, customer advocate." Or it could be, "I started off as, you know, Protestant, and then I was disillusioned and I was nothing, and now I'm, you know, a born-again evangelical Christian," or whatever. And people, you know... that those transitions are normal and healthy and not an admission of failure. It's an admission of life.

Josh: 32:50 You forgot to include my transition from working in technology and despising sales to now working in presales and being part of the sales cycle. I mean, I've literally gone to the dark side. It's,

Leon: 33:04 You really have, and you're probably going to have to talk about that at some point. Yeah. After Star Wars is out for a while, so we're not spoiling anything for anyone.

Josh: 33:11 Exactly. Right. Uh, I will also point out that it is moments like this that are so powerful for me. I quote you, Leon, in real life. Um, so often that I'm pretty sure people are convinced I am considering converting to Judaism.

Leon: 33:28 I know that you got that comment, especially when you were still involved in the church and you were running Sunday teaching programs, and you'd say... and I think the class would say, "And what does your friend Leon think about that?"

Josh: 33:42 It really was hilarious. It would be like, "...so I have a friend," and they'd be like, "...and his name is Leon."

Leon: 33:48 Right.

Josh: 33:49 It was fantastic. Um, and then I think, no, I know, that my all-time favorite tagline of this past season came from, uh, episode 30, "When Good People Make Bad Choices," and it involved, um, melons.

Leon: 34:06 I'll play the clip.

Josh: 34:07 That's, oh, wonderful.
I think that's better than me reading it, because yes, play the clip.

Josh: 34:13 In the Bible, Matthew records, "...by their fruits, you shall know them."

Doug: 34:17 So ironically, we're not supposed to be judges, but we're supposed to be fruit inspectors.

Josh: 34:23 Doug, are you looking at my melons?

Leon: 34:26 I cannot be having this conversation.

Josh: 34:28 I don't know why we played that clip.

Leon: 34:32 Because we have no shame. Um, yeah... just talking about that clip took up a good solid five to 10 minutes of solid laughter, of us just trying to do that. And that represents some of the joy. So those were some of our favorite moments. If you have some of your favorite moments, uh, please share them with us on social media. We're on Twitter, Facebook, uh, there's, you know, posts again on LinkedIn. You can share it in the comments area on the website, anywhere that you want to. Um, all right, so I want to transition over to looking ahead. We looked back a little bit, um, so in the coming year, what are we thinking Technically Religious is going to move into? And that idea of constantly improving... I'll start off by saying that we're really gonna work on improving the production quality. I think we have some room to grow, that we can get better. I'm getting better at, again, editing the audio and getting better sound levels and things like that, and that's going to continue. I also want to make sure that we make the time that we're talking as clear as possible. So getting the ums and ahs and those vocal tics out of the way. I think that transcripts are getting better and faster, and so they're getting easier to do, and we're going to keep on doing that, especially for our deaf and hard of hearing listeners. But anybody who's consuming the transcripts, please let us know if there's something we can do to make it easier for you.
And the last piece I'm going to unveil is that we are going to have intro and outro music along with the intro text, so stay tuned for that. We'll have a big unveiling of that.

Josh: 36:03 Does it involve kazoos?

Leon: 36:04 It probably does not actually involve kazoos.

Josh: 36:06 That's disappointing.

Leon: 36:06 I, okay, so we're still working on it. Maybe we can work in some kazoos. It's going to have a lot of sound. It's gonna have a lot of sounds.

Josh: 36:13 A lot of sounds. Okay, good. I'm okay with that. Are we also going to leverage Elon Musk's Starlink satellite system in order to broadcast?

Leon: 36:23 If you can make that happen, I'm fully on board with that, but that's news to me. But I, yeah, I'm all for it. Slightly less ambitious than Elon Musk's Starlink system would be getting some other guests in, and maybe some higher-profile guests. Uh, somebody mentioned earlier that Larry Wall has a very interesting religious point of view, and also he is the progenitor of the Perl programming language, which I have an undying love for. This is a hill I'm willing to die on: Perl is still valid and useful. So someone said, "Hey, you should get him on the show." So I am actively pursuing that, and a few other guests whose names you might recognize even if you don't know me or Josh or the circles that we run in.

Josh: 37:04 I just want to say that Charity Majors is high on my list this year. Unfortunately, I missed having a chance to chat with Charity last week while I was in San Francisco. Charity, I'm so sorry. I realized as I was wrapping up my week that I didn't reach out, 'cause I'm a terrible person.

Leon: 37:21 That's right, because you're terrible. That's what it was. Not that you were busy learning the ropes of a completely new job and juggling several responsibilities and things like that. No, no. Just because you're a bad person.

Josh: 37:33 Yes, absolutely. Absolutely. So to make it up to you, we will invite you onto the show.
We'd love to talk about this journey. And then to make it up to you for inviting you onto the show, uh, we will also get together next time I'm in San Francisco.

Leon: 37:50 Same, same. Since you took time to get... So I met Charity when we were both at DevOpsDays Tel Aviv. So Charity, we do not both have to fly literally around the globe to see each other, and we can get to hang out next time. So, so there's that. Um, we're going to have some more TAMO interviews. If you are interested in being part of the show, either you want to do a "Tales from the TAMO Cloud" interview or just be part of any conversation, we would love to speak to you if you want to be a guest. If you think that you want to try your hand at editing, I will be happy to give up the reins to either the audio or transcription editing responsibilities. Um, let me know; again, reach out on social media. And also promotion. Uh, I want this year to be more about getting Technically Religious promoted better and more, so that we can have more readers, more input, more fun, more, more goodness. And that leads to something that's sorta up your alley, Josh.

Josh: 38:48 Well, I was gonna say, if someone happens to have $100 billion lying around and would like to sponsor the show, we would be,

Leon: 38:58 Yeah, we wouldn't use all 100 billion, would we?

Josh: 39:00 No. I mean, we would leave at least, at least a billion or so.

Leon: 39:04 Oh, okay. Yeah. I mean, 'cause we're not greedy.

Josh: 39:07 99 billion? We can totally make this happen on, on 99 billion. In all honesty, if you are interested in sponsoring the show... and we've dropped a number of names of, uh, vendors, uh, during this episode, and not intentionally; we really do appreciate the technology that allows us to deliver the show. But if you're interested in a sponsorship, please reach out to us.
We'd be more than happy to talk about you, your products, um, and to also accept your money.

Leon: 39:32 So that's, I think that's a good wrap-up. I think that's a good look back at 2019, season one. Uh, the next episode you hear will be the official start of season two of Technically Religious. Do we have a cliffhanger? Is there some sort of... are you going to be poised over me with a knife, or,

Leon: 39:48 Right. Is this so... Josh, I have to tell you something really important. I'm...

Josh: 39:54 And we fade to black. No, no, no. We're not going to do that. I was waiting with bated breath. I was going to put it in my Any.do so that I can remember to listen to the next episode.

Leon: 40:03 Yes. Uh, so just to wrap up: to everyone who's listening, uh, both Josh and I, and everyone else who's been part of the show, uh, thank you deeply. We hope that you're going to keep listening as we kick off season two, and that you will share the Technically Religious podcast with your friends, your family, and your coworkers. And while, as you listen to this episode, it is probably somewhat belated, we'd like to wish you:

Josh: 40:25 A Merry Christmas.

Leon: 40:26 Or happy Christmas, if you're in Britain. Also a Chag Chanukah Sameach.

Josh: 40:30 A happy Kwanzaa.

Leon: 40:31 A joyful winter solstice.

Josh: 40:33 Festivus... for the rest of us!

Leon: 40:37 Thanks for making time for us this week. To hear more of Technically Religious, visit our website, http://technicallyreligious.com, where you can find our other episodes, leave us ideas for future discussions, and connect with us on social media.

Leon: 40:49 You really want to end the year with a Festivus joke?

Josh: 40:51 Well, since we can't be in Hawaii.

Software Defined Talk
Episode 201: The 10 pillar strategy

Oct 18, 2019 · 78:46


"This week there has been a lot of confusion on social media" around Meetup charging more, along with the launch of a rival service at LinkedIn. Yeah, we get deep into LinkedIn talk! Then we dig into what exactly a GitLab is. Mood board: Hard to come down on a definitive opinion on carrots. We can educate Coté. Learning from each other, the more you know! There's a lot of kube shit. Non-subjugating windows. Anyone can do a hyphen, you have to go out of your way to do an em-dash. What's a 'fixie'? HOT LINKEDIN ETHICS DEBATE. You're probably using the Twitter webpage and following the "Suggested Follows." There have been reports in social media. It's the chaos monkey for business models. We can Armchair Product Management this thing. I've been asked if crocodiles are considered "pescatarian-friendly." It's kind of like the yaml version of the Rational dream. The whole rest of the world was putting together best of breed tools. If you say so, Grammerly. "Monty-python simulation" Stay out of the room, Mr. AI! It's paper size A-somebullshit. We're puttin' the Plan column back on! Space carpets. Hey, Coté got off his ass and finally revved back up his newsletter (https://buttondown.email/cote). People love it! Subscribe (https://buttondown.email/cote) and tell all your friends to subscribe (https://buttondown.email/cote).
Latest issues:

Relevant to your interests

Meetups
- Meetup wants to charge users $2 just to RSVP for events — and some are furious (https://www.theverge.com/2019/10/15/20893343/meetup-users-furious-new-rsvp-payment-test)
- LinkedIn Launches Events to Facilitate Professional Meet-Ups (https://www.socialmediatoday.com/news/linkedin-launches-events-to-facilitate-professional-meet-ups/565171/)
- freeCodeCamp is building an open source alternative to Meetup (https://twitter.com/ossia/status/1183845054449930241)

GitLab
- Blood money is fine with us, says GitLab: Vetting non-evil customers is 'time consuming, potentially distracting' (https://www.theregister.co.uk/2019/10/16/gitlab_employees_gagged/)
- GitLab reset --hard bad1dea: Biz U-turns, unbans office political chat, will vet customers (https://www.theregister.co.uk/2019/10/17/gitlab_reverse_ferret/)

AWS
- AWS Promotional Credits for Open Source Projects | Amazon Web Services (https://aws.amazon.com/blogs/opensource/aws-promotional-credits-open-source-projects/)
- Amazon migrates more than 100 consumer services from Oracle to AWS databases (https://techcrunch.com/2019/10/15/amazon-migrates-more-than-100-consumer-services-from-oracle-to-aws-databases/)
- Migration Complete – Amazon's Consumer Business Just Turned off its Final Oracle Database | Amazon Web Services (https://aws.amazon.com/es/blogs/aws/migration-complete-amazons-consumer-business-just-turned-off-its-final-oracle-database/)

Security
- Google teams up with Yubico to build a USB-C Titan Security Key (https://www.engadget.com/2019/10/14/google-yubico-usb-c-titan-security-key/)
- Thoma Bravo makes $3.9 billion offer to acquire security firm Sophos (https://techcrunch.com/2019/10/14/thoma-bravo-makes-3-9-billion-offer-to-acquire-security-firm-sophos/)
- Potential bypass of Runas user restrictions (https://www.sudo.ws/alerts/minus_1_uid.html)
- Sudo Flaw Lets Linux Users Run Commands As Root Even When They're Restricted (https://thehackernews.com/2019/10/linux-sudo-run-as-root-flaw.html)

Kube Corner
- Microsoft launches new open-source projects around Kubernetes and microservices (https://techcrunch.com/2019/10/16/microsoft-launches-new-open-source-projects-around-kubernetes-and-microservices/)
- Red Hat Flexes OpenShift Kubernetes Muscles (https://www.lightreading.com/cloud/network/red-hat-flexes-openshift-kubernetes-muscles/d/d-id/754862)
- MuleSoft Announces Anypoint Service Mesh, Extending the Power of Anypoint Platform to Any Microservice | MuleSoft (https://www.mulesoft.com/press-center/october-2019-release-anypoint-service-mesh)

- Facebook Can Be Forced to Delete Content Worldwide, E.U.'s Top Court Rules (https://www.nytimes.com/2019/10/03/technology/facebook-europe.html?utm_source=Memberful&utm_campaign=e1340d4a90-daily_update_2019_10_07&utm_medium=email&utm_term=0_d4c7fece27-e1340d4a90-111265207)
- Ahead of Zuckerberg testimony, new setbacks for Libra (https://www.axios.com/mark-zuckerberg-libra-facebook-congressional-testimony-6665e91a-e520-4cda-92c7-50783393cd17.html?utm_source=newsletter&utm_medium=email&utm_campaign=newsletter_axioslogin&stream=top)
- Inside Mozilla's 18-month effort to market without Facebook (https://digiday.com/marketing/after-mozilla-stopped-spending-on-facebook-the-company-increased-its-focus-on-offline-marketing/)
- Tim Cook's Company-Wide Memo on HKmap.live Doesn't Add Up (https://daringfireball.net/linked/2019/10/10/cook-hkmap-live-email)
- Open Source Gerrymandering (https://www.aniszczyk.org/2019/10/08/open-source-gerrymandering/)
- Larry Wall has approved renaming Perl 6 to Raku (https://twitter.com/ripienaar/status/1182794059297050624)
- Docker Desktop asset, fiscal stress prompt acquisition buzz (https://searchitoperations.techtarget.com/news/252471956/Docker-Desktop-asset-fiscal-stress-prompt-acquisition-buzz)
- Building China's Comac C919 airplane involved a lot of hacking, report says (https://www.zdnet.com/article/building-chinas-comac-c919-airplane-involved-a-lot-of-hacking-report-says/)
- Headless CMS company Strapi raises $4 million (https://techcrunch.com/2019/10/15/headless-cms-company-strapi-raises-4-million/)
- Why Richard Stallman doesn't matter (https://maffulli.net/2019/10/17/why-richard-stallman-doesnt-matter/)
- IBM stock falls on revenue miss (https://www.cnbc.com/2019/10/16/ibm-earnings-q3-2019.html)
- IBM Reports Messy Results @themotleyfool #stocks $IBM (https://www.fool.com/investing/2019/10/16/ibm-reports-messy-results.aspx?Cid=UheJXN)

Nonsense
- The Best Burritos in San Francisco (https://www.seriouseats.com/places/best-burritos-in-san-francisco?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+seriouseatsfeaturesvideos+%28Serious+Eats%29)

Sponsors
- SolarWinds: To try it FREE for 14 days, just go to https://loggly.com/sdt. If it logs, it can log to Loggly.
- PagerDuty: To see how companies like GE, Vodafone, Box and American Eagle Outfitters rely on PagerDuty to continuously improve their digital operations, visit https://pagerduty.com.

Conferences, et al.
- Nov 2nd - EmacsConf 2019 (https://emacsconf.org/2019/)
- Nov 3rd to 7th - Gartner Symposium, Barcelona. Coté has a €625 discount code if you ask him for it.
- December 2019, a city near you: The 2019 SpringOne Tours are posted (http://springonetour.io/): Toronto Dec 2nd (https://springonetour.io/2019/toronto).
- December 12-13, 2019 - Kubernetes Forum Sydney (https://events.linuxfoundation.org/events/kubernetes-summit-sydney-2019/)
- Discount off KubeCon North America, which is November 18 – 21 in San Diego. Use code KCNASFTPOD19 for a 10% discount.
- NO-SSH-JJ wants you to go to DeliveryConf (https://www.deliveryconf.com/) in Seattle on Jan 21st & 22nd. Use promo code SDT10 to get 10% off. JJ wants you to read about the DeliveryConf format too (https://www.deliveryconf.com/format).

SDT news & hype
- Join us in Slack (http://www.softwaredefinedtalk.com/slack).
- Send your postal address to stickers@softwaredefinedtalk.com and we will send you free laptop stickers!
- Follow us on Twitter (https://twitter.com/softwaredeftalk), Instagram (https://www.instagram.com/softwaredefinedtalk/) or LinkedIn (https://www.linkedin.com/company/software-defined-talk/)
- Listen to the Software Defined Interviews Podcast (https://www.softwaredefinedinterviews.com/).
- Check out the back catalog (http://cote.coffee/howtotech/).
- Brandon built the Quick Concall iPhone App (https://itunes.apple.com/us/app/quick-concall/id1399948033?mt=8) and he wants you to buy it for $0.99.
- Use the code SDT to get $20 off Coté's book, Digital WTF (https://leanpub.com/digitalwtf/c/sdt), so $5 total.

Recommendations
- Brandon: Epson WorkForce ES-400 Duplex Document Scanner (https://epson.com/For-Home/Scanners/Document-Scanners/WorkForce-ES-400-Duplex-Document-Scanner/p/B11B226201).
- Matt: Anti-pick: The Dead Don't Die (https://www.imdb.com/title/tt8695030/). This Must Be The Gig (https://consequenceofsound.net/thismustbethegig/) podcast: Mike Patton (https://consequenceofsound.net/2019/09/this-must-be-the-gig-mike-patton/).
- Coté: The Fifth Season (https://www.goodreads.com/book/show/19161852-the-fifth-season).

Linux Headlines
2019-10-14

Oct 14, 2019 · 2:48


Perl 6 is renamed, AWS goes metal with ARM, OnionShare just got a big upgrade, and Google has a new security dongle. Plus IBM gives away $50k, and more.

The Frontside Podcast
Svelte and Reactivity with Rich Harris

Sep 4, 2019 · 52:11


Rich Harris talks about Svelte and Reactivity. Rich Harris: Graphics Editor on The New York Times investigations team. Resources: Svelte. Please join us in these conversations! If you or someone you know would be a perfect guest, please get in touch with us at contact@frontside.io. Our goal is to get people thinking on the platform level, which includes tooling, internationalization, state management, routing, upgrade, and the data layer. This show was produced by Mandy Moore, aka @therubyrep of DevReps, LLC.

Transcript:

CHARLES: Hello and welcome to The Frontside Podcast, a place where we talk about user interfaces and everything that you need to know to build them right.

TARAS: It's actually really nice, Rich, and I'm really, really happy to have a chance to actually chat with you about this, because Svelte is a really fun piece of technology. In many ways, it's interesting to see our technology evolve and our industry evolve through innovation, real innovation. I think Svelte 3 has really been kind of that next thought-provoking technology that kind of makes you think about different ways that we can approach problems in our space. So, really excited to chat with you about this stuff.

RICH: Well, thank you. Excited to be here.

TARAS: I think quite a lot of people know, Rich, about your history, like how you got into what you're doing now. But I'm not sure if Charles is aware, so could you give us a little bit of a lowdown on where you come from in terms of your technical background and such?

RICH: Sure. I'll give you the 30-second life history. I started out as a reporter at a financial news organization. I had a philosophy degree and didn't know what else to do with it, so I went into journalism. This was around the time of the great recession. And within a few weeks of me joining this company, I watched half of my colleagues get laid off, and it's like, "Shit, I need to make myself more employable."
And so gradually I sort of took on more and more technical responsibilities until I was writing JavaScript as part of my day job. Then from there, all these opportunities kind of opened up. And the big thing that I had in mind was building interactive pieces of journalism, data-driven, personalized, all of that sort of thing, which were being built at places like the New York Times, and The Guardian, and the BBC. That was the reason that I really wanted to get into JavaScript. And that's guided my career path ever since.

CHARLES: It's interesting that D3 and all that did come out of journalism.

RICH: It's not a coincidence, because when you're working under extreme time pressure and you're not building things with a view to maintaining them over a long period of time, you just need to build something and get it shipped immediately. But it needs to be built in a way that is going to work across a whole range of devices. We've got native apps, we've got [inaudible], we've got our own website. And in order to do all that, you need to have tools that really guide you into the pit of success. And D3 is a perfect example of that. And a lot of people have come into JavaScript through D3.

CHARLES: And so, are you still working for the same company?

RICH: No. That's ancient history at this point.

CHARLES: Because I'm wondering, are you actually getting to use these tools that you've been building to do the types of visualizations and stuff that we've been talking about?

RICH: Very much so. I moved to The Guardian some years ago, and then from there moved to Guardian US, which has an office in New York. And it was there that I started working on Svelte. I then moved to the New York Times, and I'm still working on Svelte. I've used it a number of times to build things at the New York Times, and people have built things with it too. And so, yeah, it's very much informed by the demands of building high-performance interactive applications on a very tight deadline.
CHARLES: Okay, cool. I'm an avid reader of both The Guardian and the New York Times, so I've probably used a bunch of these visualizations. I had no idea what was driving them. I just assumed it was all D3.

RICH: There is a lot of D3. Mike Bostock, the creator of D3, was a linchpin at the graphics department for many years. Unfortunately we didn't overlap; he left the Times before I joined the Times, but his presence is still very much felt in the department. And a lot of people who are entering the industry are still becoming data viz practitioners by learning from D3 examples. It's been a hugely influential thing in our industry.

TARAS: How long is a typical project? How long would it take to put together a visualization for an article that we typically see?

RICH: It varies wildly. The graphics desk is about 50 strong, and they will turn around things within a day. Like when Notre-Dame burnt down a couple of months ago, my colleagues turned around this interactive scroll-driven WebGL 3D reconstruction of how the fire spread through the cathedral in less than 24 hours, which was absolutely mind-blowing. But at the same time, there are projects that will take months. I work on the investigations team at the Times, and so I'm working with people who are investigating stories for the best part of a year or sometimes more, and I'm building graphics for those. So those are two very different timescales, but you need to be able to accommodate all of those different possibilities.

CHARLES: So, what does the software development practice look like? I mean, because it sounds like some of this stuff, are you just throwing it together? I guess what I mean by that is, on the projects that we typically work on, three months is kind of a minimum that you would expect. So, you go into it knowing we need to make sure we've got good collaboration practices around source control and continuous integration and testing and all this stuff.
But I mean, you're talking about compressing that entire process into a matter of hours. So what, do you just throw that right out the window? What do you say? "We're just doing a live version of this"?

RICH: Our collaboration processes consist of sitting near each other, and when the time calls for it, getting in the same room as each other and just hammering stuff out on the laptop together. There's no time for messing around with continuous integration and writing tests. No one writes tests in news graphics; it's just not a thing.

CHARLES: Right. But then for those projects that stretch into three months, I imagine there are some... do you run into quality concerns or things like that where you do have to take into account some of those practices? I'm just so curious, because the difference between two hours and two months is several orders of magnitude in the complexity of what you're developing.

RICH: It is. Although I haven't worked on a news project yet that has involved tests. And I know that's a shocking admission to a lot of people who have a development background, but it's just not part of the culture. And I guess the main difference between the codebase for a two-hour project and a two-month project is that the two-month project will strive to have some reasonable components. And that's, I think, the main thing that I've been able to get out of working on the kinds of projects that I do: instead of just throwing code at the page until it works, we actually have a bit of time to extract out common functionality and make components that can be used in subsequent interactives. So, things like scroll-driven storytelling, that's much easier for me now than it was when I first built a scroll-driven storytelling component a couple of years ago.

CHARLES: Yeah.
That was actually literally my next question: how do you bridge that, given that you've got this kind of frothy experimentation, but you are being, it sounds like, very deliberate about extracting those tools and extracting those common components? And how do you find the time to even do that?

RICH: Well, this is where the component-driven mindset comes in really handy, I think. I think that five or 10 years ago, when people thought in terms of libraries and scripts, there wasn't that good unit of reusability that wasn't sort of all-encompassing; a component is just the right level of atomicity, or whatever the word is. It makes sense to have things that are reusable but also very easy to tweak and manipulate and adapt to your current situation. And so, I think that the advent of component-oriented development is actually quite big for those of us working in this space. And it hasn't really caught on yet to a huge degree, because like I say, a lot of people are still coming at this with a kind of D3, script-based mindset, because the news industry, for some interesting historical reasons, is slightly out of step with mainstream web development in some ways. We don't use things like Babel a lot, for example.

CHARLES: That makes sense, right? I mean, the online paper is not like a React application, it's not like the application is all-encompassing, so you really need to have a light footprint, I would imagine, because it really is a script. What you're doing is scripting in the truest sense of the word, where you essentially have a whole bunch of content and then you just need to kind of --

RICH: Yeah. And the light footprint that you mentioned is key, because like most news sites, we have analytics on the page and we have ads and we have comments and all of these things that involve JavaScript. And by the time our code loads, all of this other stuff is already fighting for the main thread.
And so, we need to get in there as fast as we can and do our work with a minimum of fuss. We don't have the capacity to be loading big frameworks and messing about on the page. So that again is one of these downward pressures that kind of enforces a certain type of tool to come out of the news business. TARAS: A lot of the tooling that's available, especially with the fatter, bigger frameworks -- the tools that you get with those frameworks benefit you over the long term. So if you have a long running project, you experience the benefit of the weight of those abstractions over time, and it adds up significantly. But if you're working to ship something in a day, you want something that is just like a chisel. It does exactly what you want it to do, you apply it in exactly the right place, and you want the outcome to be precise. RICH: That's true. And I think a lot of people who have built large React apps, for example, or large Ember apps, sort of look at Svelte and think, "Well, maybe this isn't going to be applicable to my situation," because it has this bias towards being able to very quickly produce something. And I'm not convinced that that's true. I think that if you make something easier to get started with, then you're just making it easier. If you build something that is simple for beginners to use, then you're also building something simple for experts to use. And so, I don't necessarily see it as a tradeoff; I don't think we're trading long-term maintainability for short term production. But it is certainly a suspicion that I've encountered from people. TARAS: This is something that we've also encountered recently. It's been kind of a brewing discussion inside the Frontside about the fact that it seems that certain problems are actually better to rewrite than they are to maintain or refactor towards an end goal.
And we found this, especially as the tools that we create have gotten more precise and more refined and simplified and lighter: it is actually easier to rewrite those things five times than it is to refactor them one time to a particular place that we want them to be. And it's interesting, I find this to be very recent; this idea has been blossoming in my mind only recently. I didn't observe this in the past. CHARLES: Do you mean in the sense that if a tool is focused enough and a tool is simple enough, then refactoring is tantamount to a rewrite if you're talking about 200 or 300 lines of code? Is that what you mean? TARAS: Yeah. If you're sitting down to make a change or you have something in mind, it is actually easy to say, "Let's just start from scratch and then we're going to get to exactly the same place in the same amount of time." But this kind of mantra of not rewriting makes me question whether that's actually always the right answer. RICH: I definitely question that conventional wisdom at all levels, as well. I started a bundler called Rollup, as well as Svelte more recently. And Rollup was the second JavaScript bundler that I wrote, because the first one that I wrote wasn't quite capable of doing the things that I wanted. And it was easier to just start from scratch than to try and shift the existing user base of its predecessor over to this new way of doing things. Svelte 3 is a more or less complete rewrite. Svelte has had multiple, more or less complete, rewrites. Some of them weren't breaking changes. But Svelte itself was a rewrite of an earlier project that I'd started in 2013. And so in my career, I've benefited massively from learning from having built something. But then when the time comes and you realize that you can't change it in the ways that you need to change it, just rewrite it.
And I think that at the other end of the spectrum, the recent debate about micro frontends has largely missed this point. People think that the benefit of micro frontends is that people don't need to talk to each other, which is absolute nonsense. I think the benefit of this way of thinking about building applications is that it optimizes for this fact of life that we all agree is inevitable, which is that at some point, you're going to have to rewrite your code. And we spend so much energy trying to optimize for the stability of a codebase over the long term, and in the process, lock ourselves into architectural and technical decisions that don't necessarily make sense three or four years down the line. And I think as an industry, we would be a lot better placed if we all started thinking about how to optimize for rewrites. CHARLES: So for those of us who aren't familiar, what is the debate surrounding micro frontends? This is actually something I've heard a lot about, but I've actually never heard what micro frontends are. RICH: Yeah. I mean, to be clear, I don't really have a dog in this fight because I'm not building products, but the nub of it is that typically if you're building a website that maybe has an admin page, maybe it has a settings page, maybe it has product pages, whatever -- traditionally, these would all be parts of a single monolithic application. The micro frontend approach is to say, "Well, this team is going to own the settings page. This team is going to own the product page." And they can use whatever technologies they want to bring that about. And the detractors sort of attack a straw man version of this: "You're going to have different styles on every page. You're going to have to load Vue on one page. You're going to have to load React on the other page. It's going to be a terrible user experience," when actually its proponents aren't suggesting that at all.
They're suggesting that people from these different teams coordinate a lot more, but are free to deviate from some kind of grand master architectural plan when it's not suitable for a given task. And done right, I think it means that you have a lot more agility as an engineering organization than you would if you're building this monolithic app where someone can't say, "Oh, we should use this new tool for this thing. We should use microstates when the rest of the organization is using Google docs." It's not possible. And so, you get locked into the decisions of a previous generation. CHARLES: Right. No, it makes sense. It's funny because my first reaction is like, "Oh my goodness, that's a potential for disaster." The klaxon's going to go off in your head. But then you think, really, the work is how do you actually manage it so it doesn't become a disaster. And if you can figure that out, then yeah, there is a lot of potential. RICH: Yeah. People always try and solve social problems with technology. You solve social problems with social solutions. CHARLES: Right. And you have to imagine it too, it depends on the application, right? I think the Amazon website is developed that way, where they have different teams that are responsible even down to little content boxes that are up on the toolbar. And it shows, right? It looks kind of slapped together. But they don't mind if it looks like there's slight variation in the different ways that things behave. For their business to work, they need to be showing the right thing at the right time, and that's the overriding concern. So having it look very beautiful and very coherent isn't necessarily a thing. Spotify is used as another example of this.
I didn't know if it was called micro frontends, but I know that they've got a similar type of thing, but there, clearly, the experience and having it look coherent is more important. And so, they make it work somehow. And then like you're saying, it probably involves groups of people talking to other groups of people about the priorities. So yeah, it doesn't sound to me like adopting micro frontends guarantees one particular set of outcomes. It really is context dependent on what you make of it. RICH: Totally. TARAS: I'm curious though, so with Svelte, essentially for your reactivity engine, you have to compile to get that reactive behavior. RICH: Yeah. TARAS: How does that play with other tools when you actually integrate it together? I've never worked with Svelte on a large project, so I can't imagine what it looks like at scale. I was wondering if you've seen those kinds of use cases and whether there are any side effects from that. RICH: As you say, the reactivity within a component only applies to the local state within that component, or to state that is passed in as a prop from a parent component. But we also have this concept called a store. And a store is just an object that represents a specific value, and you import it from svelte/store. And there are three types of store that you get out of the box: a writable, a readable and a derived. And a writable is just const count = writable(0), and then you can update it and you can set it using methods on that store. Inside your markup, or in fact inside the script block in the component, you can reference the value of that store just by prefacing it with a dollar sign. And the compiler sees that and says, "Okay, we need to subscribe to this store's value and then assign it and apply the reactivity." And that is the primary way of having state that exists outside the component hierarchy.
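As a minimal sketch of what Rich describes (illustrative, not taken from the episode), a writable store and the dollar-sign subscription in a Svelte 3 component look roughly like this:

```svelte
<script>
  import { writable } from 'svelte/store';

  // a writable store; set() and update() notify all subscribers
  const count = writable(0);
</script>

<!-- the $ prefix tells the compiler to subscribe to the store,
     and to unsubscribe when the component is destroyed -->
<button on:click={() => count.update(n => n + 1)}>
  Clicked {$count} times
</button>
```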
Now, I mentioned that writable, readable, and derived are the built in stores that you get, but you can actually implement your own stores. You just need to implement this very simple contract. And so, it's entirely possible to use that API to wrap any state management solution you have. So you can wrap redux, you can wrap microstates, you can wrap state, you can wrap whatever it is, whatever your preferred state management solution is, you can adapt it to use with Svelte. And it's very idiomatic and streamlined. It takes care of unsubscriptions when the component is unmounted; all of that stuff is just done for you. CHARLES: Digging a little bit deeper into the question of integration, how difficult would it be to take wholesale components that were implemented in Svelte and integrate them with some other component framework like React? RICH: If the component is a leaf node, then it's fairly straightforward. There is a project called react-svelte which is -- I say project, it's like 20 lines of code -- and I don't think it's been [inaudible] for Svelte 3, which I should probably do. But that allows you to use a Svelte component in the context of a React application, just using the component API the same way that you would [inaudible] or whatever. You can do that inside a React component. Or you could compile the Svelte component to a web component. And this is one of the great benefits of being a compiler: you can target different things. You can generate a regular JavaScript class and you've got an interactive application. Or you can target a server side rendering component, which will just generate some html for some given state, which can then later be hydrated on the client. Or you can target a web component, which you can use like any other element in the context of any framework at all.
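A sketch of the web component route (the tag name here is made up, and this assumes compiling with Svelte's custom element option enabled):

```svelte
<!-- Counter.svelte -->
<svelte:options tag="my-counter"/>

<script>
  export let count = 0;
</script>

<button on:click={() => count += 1}>
  Count: {count}
</button>

<!-- Once compiled with the customElement compiler option, it can be
     used from plain HTML, or inside any framework:
     <my-counter count="5"></my-counter> -->
```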
And because it's a compiler, because it's discarding all of the bits of the framework that you're not using, it's not like you're bundling an entire framework to go along with your component. And I should mention, while I'm talking about being able to target different outputs: via a NativeScript project, you can target iOS and Android the same way. Where it gets a little bit more complicated is if it's not a leaf node. If you want to have a React app that contains a Svelte component that has React [inaudible], then things start to get a little bit more unwieldy, I think. It's probably technically possible, but I don't know that I would recommend it. But the point is that it is definitely possible to incrementally adopt Svelte inside an existing application, should that be what you need to do. CHARLES: You said there's a NativeScript project, but it sounds to me like you shouldn't necessarily need NativeScript, right? If you're a compiler, you can actually target Android and you could target iOS directly instead of having NativeScript as an intermediary, right? RICH: Yes, if we had the time to do the work. I think the big thing there would be getting styles to work, because Svelte components have styles, and a regular style tag is just CSS -- you can't just throw CSS into a native app. CHARLES: Right. Sometimes, I feel like it'd be a lot cooler if you could. [Laughter] RICH: NativeScript really is doing a lot of heavy lifting. Basically what it's doing is providing a fake dom. Svelte targets that dom instead of the real dom, and then NativeScript turns that into the native instructions. CHARLES: Okay. And you can do that because you're a compiler. TARAS: Compilers have been on our radar for some time, but I'm curious, what is your process for figuring out what it should compile to? Like, how do you arrive at the final compiled output?
Have you manually written that code and then said, "I'm going to now change this to be dynamically generated"? Or how do you figure out what the output should be? RICH: That's pretty much it. Certainly, when the project started, it was a case of: I'm going to think like a compiler, I'm going to hand convert this declarative component code into some framework-less JavaScript. And then once that's done, sort of work backwards and figure out how a compiler would generate that code. And in the process, you do learn certain things about what the points of reusability are, which things should be abstracted out into a shared internal helper library, and what things should be generated inline. The whole process is designed to produce output that is easy for a human to understand and reason about. It's not like what you would imagine compiled [inaudible] to be like; it's not completely inscrutable. It's designed -- even to the level of being well formatted -- to be something that someone can look at and understand what the compiler was thinking at that moment. And there's definitely ways that we could change and improve it. There are some places where there's more duplication than we need to have. There are some places where we should be using classes instead of closures for performance and memory benefits. But these are all things that, once you've got that base, having gone through that process, you can begin to iterate on. CHARLES: I'm always curious about when is the proper time to move to a compiler, because when you're doing everything at runtime, there's more flexibility there. But at what point do you decide, "You know what? I know that these pathways are so well worn that I'm going to lay down pavement. I'm going to write a compiler." What was the decision process in your mind about, "Okay, now it's time"? Because I think that that's maybe not a thought that occurs to most of us. It's like, "I had to write a compiler for this."
Is this something that people should do more often? RICH: The [inaudible] of 'this should be a compiler' is one that is worth having at the back of your head. I think there are a lot of opportunities, not just in the UI framework space but in general: is there some way that we can take this work that is currently happening at runtime and shift it into a step that only happens once? That obviously benefits users, and very often we find that it benefits developers as well. I don't think there was a point at which I said, "Oh, this stuff that's happening at runtime should be happening at compile time." The actual origin of it felt more like a brain worm that someone else infected me with. Judgment is a very well known figure in the JavaScript world. He had been working on this exact idea but hadn't taken it to the point where he was ready to open source it. But he had shared his findings and the general idea, and I was just immediately smitten with this concept of getting rid of the framework runtime. At the time, the big conversation happening in the JavaScript community was about the fact that we're shipping too much JavaScript and it's affecting startup performance. And so the initial thought was, "Well, maybe we can solve that problem by just not having the runtime." And so, that was the starting point with Svelte. Over time, I've come to realize that that is maybe not the main benefit; that is just one of the benefits that you get from this approach. You also get much faster update performance because you don't have to do this fairly expensive virtual dom diffing process. Lately, I've come to think that the biggest win is that you can write a lot less code. If you're a compiler, then you're not hemmed in by the constraints of the language, so you can almost invent your own language. And if you can do that, then you can do the same things that you have been doing with an API in the language itself.
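A toy, framework-free illustration of that shift from runtime to compile time (the names `$$invalidate` and `flush` are invented for this sketch; Svelte's real output differs): a compiler can rewrite a plain assignment like `count += 1` into a call that records which variable changed, so the runtime only updates what depends on it, with no diffing.

```javascript
// Hand-written sketch of what a compiler might emit for a component
// with state `count` and a label derived from it. Purely illustrative.
function createComponent() {
  let count = 0;
  let label = '';
  const dirty = new Set();

  // the compiler rewrites `count += 1` into $$invalidate('count', count += 1)
  function $$invalidate(name, value) {
    dirty.add(name);
    return value;
  }

  // recompute only what depends on the dirty variables, then reset
  function flush() {
    if (dirty.has('count')) {
      label = `Clicked ${count} times`;
    }
    dirty.clear();
  }

  return {
    increment() {
      $$invalidate('count', count += 1);
      flush();
    },
    get label() {
      flush();
      return label;
    }
  };
}

const component = createComponent();
component.increment();
component.increment();
// component.label is now "Clicked 2 times"
```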
And that's the basis of our system of reactivity, for example. We can build these apps that are smaller and, by extension, less bug prone and more maintainable. I just want to quickly address the point you made about flexibility. This is a theoretical downside of being a compiler. We're throwing away the constraint that the code needs to be something that runs in the browser, but we're adding a constraint, which is that the code needs to be statically analyzable. And in theory, that results in a loss of flexibility. In practice, we haven't found that to affect the things that we can build. And I think that a lot of times when people have this conversation, they're focusing on the academic concept of flexibility. But what matters is: what can you build? How easy is it to build a certain thing? And so if, empirically, you find that you're not restricted in the things that you can build, and you can build the same things much faster, then that academic notion of flexibility doesn't, to my mind, have any real value. CHARLES: Hearing you talk reminded me of a quote that I heard early in my career and that has always stuck with me. I came into programming through Perl. Perl was my first language, and Perl is a very weird language. Among other things, you can actually change the way that Perl parses code. You can write Perl that makes Perl not throw, if that makes any sense. And when asked about this feature, Larry Wall, the guy who came up with Perl, said something like, "You program in Perl, but really what you're doing is programming Perl with a set of semantics that you've negotiated with the compiler." And that was kind of a funny way of saying, "You get to extend the compiler yourself." Here's the default set of things that you can do with our compiler, but if you want to tweak it or add to it or modify it, you can do that. And so, you can utilize the same functionality that makes it powerful in the first place.
You can kind of inject that whole mode of operation into the entire workflow. Does that make sense? That's a long way of saying: have you thought about, and is it possible to, extend the Svelte compiler as part of a customization or as part of the Svelte programming experience? RICH: We have a very rudimentary version of that, which is preprocessing. There's an API that comes with Svelte called preprocess. And the idea there is that you can pass in some code and it will do some very basic work: it will extract your styles, it will extract your script and it will extract your markup, and then it will give you the opportunity to replace those things with something else. So for example, you could write some futuristic JavaScript and then compile it with Babel before it gets passed to the Svelte compiler, which uses acorn and therefore needs to be handed standard JavaScript so that it can construct an abstract syntax tree. As a more extreme version of that, people can use [inaudible] to write their markup instead of html. You can use Sass and Less and things like that. Generally, I don't recommend that people do, because it adds these moving parts and it leads to a lot of bug reports of people just trying to figure out how to get these different moving parts to operate together. And it means that your editor plugins can't understand what's inside your style tag all of a sudden, and stuff like that. So, it definitely adds some complexity, but it is possible. At the other end, at a slightly more extreme level, we have talked about making the code generation part pluggable, so that, for example, the default renderer and the SSR renderer are just two examples of something that plugs into the compiler that says, "Here is the component, here's the abstract syntax tree, here's some metadata about which values are in scope" -- all of this stuff -- "now go away and generate some code from this."
We haven't done that so far, partly because there hasn't been a great demand for it, but also because it's really complicated. As soon as you turn something into a plugin platform, you magnify the number of connection points and the number of ways that things could go wrong by an order of magnitude. And so, we've been a little bit wary of doing that, but it is something that we've talked about, primarily in the context of being able to do new and interesting things like target WebGL directly or target the command line. There are renderers for React that let you build command line apps using React components, and we've talked about, maybe we should be able to do that. Native is another example. The NativeScript integration, as you say, could be replaced with the compiler doing that work directly, but for that to work presently, all of that logic would need to sit in core. And it would be nice if that could be just another extension to the compiler. We're talking about a lot of engineering effort, and there's higher priority items on our to do list at the moment. So, it's filed under 'one day'. CHARLES: Right. What are those high priority items? RICH: The biggest thing I think at the moment is TypeScript integration. Surprisingly, this is probably the number one feature request: people want to be able to write TypeScript inside their Svelte components, and they want to get TypeScript when they import a Svelte component into something else. They want to be able to get completion [inaudible] and type checking and all the rest of it. A couple of years ago, that would've been more or less unthinkable, but now it's table stakes: you have to have first-class TypeScript support. CHARLES: Yeah, TypeScript is as popular as Babel these days, right? RICH: Yeah, I think so. I don't need to be sold on the benefits. I've been using TypeScript a lot myself.
Svelte is written in TypeScript, but actually being able to write it inside your components is something that would involve hacking around in the TypeScript compiler API in a way that I don't know if any of us on the team actually knows how to do. So, we just need to spend some time and do that. But obviously when you've got an open source project, you need to deal with the bugs that arise and stuff first. So, it's difficult to find time to do a big project like that. CHARLES: So, devil's advocate here: if the compiler was open for extension, couldn't TypeScript support be just another plugin? RICH: It could, but then you could end up with a situation where there's multiple competing TypeScript plugins, no one's sure which ones to use, and they all have slightly different characteristics. I always think it's better if these things that are common feature requests, that a lot of people would benefit from, are built into the project themselves. I really lean towards the batteries included way of developing, and I think this is something that we've drifted away from in the frontend world over the last few years; we've drifted away from batteries included towards do it yourself. CHARLES: Assemble the entire thing. Step one, open the box and pour the thousand Lego pieces onto the floor. RICH: Yeah, but it's worse than that, because at least with a Lego set, you get the Lego pieces. It's like if you had the Lego manual showing you how to build something, but you were then responsible for going out and getting the Lego pieces. That's frontend development, and I don't like it. CHARLES: Right. Yeah. I don't like that either. But still, there's a lot of people advocating directly that you really ought to be doing everything completely and totally yourself. RICH: Yes. CHARLES: And a lot of software development shops still operate that way. RICH: Yeah.
I find that the people advocating for that position the most loudly tend to be the maintainers of the projects in question. The whole small modules philosophy exists primarily for the benefit of library authors and framework authors, not for the benefit of developers, much less users. And the fact that the people who are building libraries and frameworks tend to have the loudest megaphones means that that mindset, that philosophy, is taken as a best practice for the industry as a whole. And I think it's a mistake to think that way. TARAS: There is also, I think, a sliding scale here: as you get more experience, you slide towards wanting granular control, and then once you've got a lot of experience, you're like, "Okay, I don't want this control anymore." And then you get into, "I'm now responsible for tools that my team uses," and now you're back to wanting that control, because you want things to be able to click together. So your interest in that control can change over time depending on your experience level and your position in the organization. So yeah, there's definitely different motivating factors. One of the things that we've been thinking a lot about is designing tools that are composable and granular at the individual module level, but combine together into a system for consumption by regular people. Finding those primitives that will just click together when you know how to click them together, but that, when you're consuming them, feel like a holistic whole while at the same time not being monolithic. That's a lot of things to figure out and a lot of things to manage over time, but that's exactly the kind of thing we've been thinking about a lot.
RICH: I think that's what distinguishes the good projects that are going to have a long lifespan from the projects that are maybe interesting but don't have a long shelf life: whether they're designed in such a way that permits that kind of cohesion and innovation tradeoff, if you think of it as a tradeoff. Anyone can build the fastest thing or the smallest thing or the whatever-it-is thing. But building these things in a way that feels like it was designed holistically but is also flexible enough to be used with everything else that you use, that's the real design challenge. CHARLES: It's hard to know where to draw that line. Maybe one good example of this -- and these are actually two projects that I'm not particularly a fan of, but I think they do a good job of operating this way, so I guess in that sense I can be even more honest about it. I don't particularly care for Redux or observables, but in one of our last React projects, we had to choose between using Redux-Saga and Redux-Observable. Redux-Observable worked very well for us. And I think one of the reasons is because they both had to exist, had to co-exist, as their own projects. Redux exists as its own entity and Observables exist as their own kind of whole ecosystem. And so, they put a lot of thought into what is the natural way in which these two primitives compose together. As opposed to Saga, which -- and I don't want to disparage the project, because I think it actually is a really good project with a lot of really good ideas -- is more just bolted on to Redux. It doesn't exist outside of the ecosystem of Redux, so its ideas can't flourish outside and figure out how it interfaces with other things. The true primitive is still unrevealed there. Whereas I feel like with Redux-Observable you actually have two really, really true primitives.
Now, they're not necessarily my favorite primitives, but they are very refined, very much "these do exactly what they are meant to do". And so when you find how they connect together, that experience is also really good, and the primitive that arises there I think ends up being better. Is that an example of what you guys are talking about? RICH: Maybe. [Laughs] TARAS: No, I think so. I mean, it's distilling to the essence, the core of what you're trying to do, and then being able to combine it together. That's been kind of the thing that we've been working on at the Frontside. But also within this context, it makes me think of: how does a compiler fit into that? How does that work with the compiler? It's just like, when you add the compiler element, my mind just goes poof! [Laughter] CHARLES: Yeah, exactly. That's why I keep coming back to, how do you -- and maybe you just have to go through the experience -- but it feels like maybe there's this cycle of: you build up the framework, and then once it's well understood, you throw the framework away in favor of just wiring it straight in there with the compiler, and then you iterate on that process. Is that fair to say? RICH: Kind of, yeah. At the moment, I'm working on a project -- I referred a moment ago to being able to target WebGL directly. At the moment, the approach that I'm taking to building WebGL apps is to have WebGL components inside Svelte, in this project called SvelteGL. And we've used it a couple of times at the Times. It's not really production ready yet, but I think it has some promise. But it's also slightly inefficient: it needs to have all of the shader code available for whichever path you're going to take. Whatever characteristics your materials have, you need to have all of the shader code. And if we're smart about it, then the compiler could know ahead of time which bits of shader code it needed to include.
At the moment, it just doesn't have a way of figuring that out. And so that would be an example of paving those cow paths: if you do try and do everything within the compiler universe, it does restrict your freedom of movement. It's true. And to qualify my earlier statements about how the small modules philosophy is to the benefit of authors over developers: it has actually enabled this huge flourishing of innovation, particularly in the React world. We've got this plethora of different state management solutions and CSS-in-JS solutions. And while I, as a developer, probably don't want to deal with that -- I just want there to be a single correct answer -- it's definitely been to the advantage of the ecosystem as a whole to have all of this experimentation. Then, in the wild, there are projects like Svelte that can take advantage of it. We can say, "Oh well, having observed all of this, this is the right way to solve this problem." And so, we can bake that in and take advantage of the research that other people have done. And I think we have made contributions of our own, but there is a lot of stuff in Svelte, like the fact that data generally flows one way instead of having [inaudible] everywhere -- things like that are the result of having seen everyone make mistakes in the past and learning from them. So, there are tradeoffs all around. TARAS: On the topic of data flow here and there, one thing that I've been struggling to compute is the impact of that, as opposed to something where you have one directional data flow, because conceptually a two-way binding system seems really simple: you set a property and it just propagates through stuff. But we don't really have any way of assessing what is the true impact of that computation.
Like, what is the cost of that propagation? I think it's almost easier to see the cost of that computation if you have one-directional data flow, because you know that essentially everything between the moment that you invoke a transition and computing the next state, that is the cost of your computation, where you don't have that way of computing the result in a two-way binding system. Something like Ember's run loop, or MobX, or zones, or Vue's reactivity system. All these systems make it really difficult to understand what the real cost of setting state is. And that's something that I personally find difficult, because you have this clarity about the one-directional data flow and what it takes to compute the next state; it's almost like that cost is tangible, where when you're thinking about mutation of objects and tracking their changes, that cost is almost immeasurable. It just seems like a blob of changes that have to propagate. I don't know. That's just something that I've been thinking about a lot, especially with the work that we'll be doing with microstates, because as you're figuring out what the next state is, you know exactly what operations are performed, in a way that might not be the case with a system that tracks changes, like what you'd have with zones or with Ember's run loop, or Vue. RICH: I would agree with that. The times that I've found it beneficial to deviate from the top-down ideology is when you have things like form elements and you want to bind to the values of those form elements. You want to use them in some other computation. And when you do all that by having props going in and then events going out, and then you intercept the event and then you set the prop, you're basically articulating what the compiler can articulate for you more effectively anyway. And so conceptually, we have two-way bindings within Svelte, but mechanically everything is top down, if that makes sense.
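The one-directional model Taras describes can be sketched in a few lines of JavaScript. The names here are made up for illustration; this is not microstates' actual API. The point is that the entire cost of an update is the cost of one visible transition call, rather than a web of observers:

```javascript
// Hypothetical sketch: a minimal one-directional store. Every state change
// goes through a single transition function, so the cost of computing the
// next state is exactly the cost of that one call.
function createStore(initialState, transition) {
  let state = initialState;
  const listeners = [];
  return {
    get: () => state,
    subscribe: (fn) => listeners.push(fn),
    dispatch(action) {
      // The whole cost of this update is measurable right here:
      state = transition(state, action);
      for (const fn of listeners) fn(state);
    },
  };
}

// Usage: the next state is computed in one explicit step.
const store = createStore({ count: 0 }, (s, a) =>
  a.type === 'increment' ? { count: s.count + 1 } : s
);
store.dispatch({ type: 'increment' });
console.log(store.get().count); // 1
```

In a change-tracking system the equivalent work is scattered across observers, which is what makes its cost hard to see.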
CHARLES: Is it because you can analyze the tree top-down and basically understand when you can cheat? This might be really over-simplistic, but with the event, you're collecting the water and then you have to put it way up on top of the thing and it flows down. But if you can see the entire apparatus, you can say, "Actually, I've got this water and it's going to end up here, so I'm just going to cheat and put it over right there." Is that the type of thing that you're talking about, where you're effectively getting a two-way binding but you're skipping the ceremony? RICH: It's kind of writing the exact same code that you would write if you were doing it using events. But if you're writing it yourself, then maybe you would do something in a slightly inefficient way perhaps. For example, with some kinds of bindings, you have to be careful to avoid an infinite loop. If you have an event that triggers a state change, the state change could trigger the event again and you get this infinite loop. A compiler can guard against that. It can say this is a binding that could have that problem, so we're going to just keep track of whether the state changes as a result of the binding. And so, the compiler can sort of solve all of these really hairy problems that you'd face as a developer while also giving you the benefit in terms of being able to write much less code, and write code that expresses the relationship between these two things in a more semantic and declarative way, without the danger. TARAS: This is one of the reasons why I was so excited to talk to you about this stuff, Rich, because this stuff is really interesting. I mentioned that we might, so we have a little bit more time. So I just want to mention, because I think that you might find this interesting, the [inaudible], the stuff that we were talking about that I mentioned to you before.
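The guard Rich describes, tracking whether a state change was caused by the binding itself, can be sketched like this. This is a hypothetical illustration, not the code the Svelte compiler actually emits:

```javascript
// Hypothetical sketch of a compiler-generated guard for a two-way binding:
// the binding is mechanically top-down, and a flag prevents the state change
// from re-triggering the event that caused it.
function createBinding(onChange) {
  let updating = false;
  let value;
  return {
    // Called when the bound element fires its input event.
    fromElement(newValue) {
      if (updating) return;   // guard: we caused this event ourselves
      updating = true;
      value = newValue;
      onChange(newValue);     // state flows top-down from here
      updating = false;
    },
    get: () => value,
  };
}

let renders = 0;
const binding = createBinding((v) => {
  renders++;
  // A naive implementation might set the element's value here, firing the
  // event again; the guard turns that re-entry into a no-op instead of a loop.
  binding.fromElement(v);
});
binding.fromElement('hello');
console.log(binding.get(), renders); // 'hello' 1
```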
So, I want to let Charles talk about it briefly because it's interesting, because it essentially comes down to managing asynchrony as it ties to the life cycle of objects. Life cycles of objects and components are something we deal with on a regular basis. So, it's been an interesting exercise experimenting with that. Charles, do you want to give kind of a lowdown? CHARLES: Sure. It's definitely something that I'm very excited about. So, Taras gets to hear an earful pretty much every day. But the idea behind structured concurrency, I don't know if you're familiar with it. It's something that I read a fantastic -- so people have been using this for a while in the Ember community. Alex Matchneer, who's a friend and frequent guest on the podcast, created a library called ember-concurrency where he brought these ideas of structured concurrency to the Ember world. But it's actually very prevalent. There are C libraries and Python libraries. There's not a generic one for JavaScript yet, but the idea is really taking the same concepts of scope that you have with variables and with components, whether they be Ember components, Svelte components, React components or whatever: you have a tree of components, a hierarchy of parents and children, and you model every single asynchronous process as a tree rather than what we have now, which is kind of parallel linear stacks. Some tick happens in the event loop and you drill down and you either hit an exception or you go straight back up. The next tick of the event loop comes, you drill down some stack and then you go back up. A promise resolves, you do that stack. And so with structured concurrency, essentially every stack can have multiple children. And so, you can fork off multiple children. But if you have an error in any of these children, it's going to propagate up the entire tree. And so, it's essentially the same idea as components, except applied to concurrent processes.
And you can do some just really, really amazing things because you don't ever have to worry about some process going rogue and you don't have to worry about coordinating all these different event loops. And one of the things that I'm discovering is that I don't need event loops. I don't really use promises anymore. Like actually, I think it was while I was watching your talk when you're talking about Svelte 3, when you're like -- or maybe did you write a blog post about how we've got to stop saying that virtual DOMs are fast? RICH: Yes, I did. CHARLES: So I think it was that one. I was reading that one and it jibed with me because it's just like, why can't we just go and do the work? We've got the event, we can just do the work. And one of the things that I'm discovering is that with using structured concurrency with generators, I'm experiencing a very similar phenomenon where, if there's an error, the stack trace is like three lines long, because you're basically doing the work and you're executing all these stacks and you're pausing them with a generator. And then when an event happens, you just resume right where you left off. There's no like, we've got this event, let's push it into this event queue that's waiting behind these three event loops, and then we're draining these queues one at a time. It's like, nope, the event happens. You can just resume right where you were. You're in the middle of a function call, in the middle of like [inaudible] block. You just go without any ceremony, without any fuss. You just go straight to where you were, and the stack and the context and all the variables and everything is there, preserved exactly where you left it. So, it's really like you're just taking the book right off the shelf and going right to your bookmark and continuing along.
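A minimal, hypothetical sketch of the idea Charles is describing: generator-based tasks form a tree, and an error in any child unwinds the whole tree, just like an exception in an ordinary call stack. Real libraries such as ember-concurrency also handle cancellation and asynchronous resumption, which this toy version omits:

```javascript
// Hypothetical sketch of structured concurrency with generators: a task is
// a generator that can yield child tasks (also generators). Running the tree
// is just recursion on the call stack, so a failure in any child propagates
// up through every ancestor with a short, ordinary stack trace.
function run(task) {
  let result = task.next();
  while (!result.done) {
    run(result.value);         // each yielded value is a child task
    result = task.next();
  }
}

function* failingChild() {
  yield* [];                   // do some work...
  throw new Error('child failed');
}

function* parent() {
  yield (function* ok() {})(); // first child succeeds
  yield failingChild();        // second child throws...
}

let caught = null;
try {
  run(parent());               // ...and the error unwinds the whole tree
} catch (e) {
  caught = e.message;
}
console.log(caught); // 'child failed'
```

A fuller version would call `generator.return()` on still-suspended siblings to cancel them, which is the part that makes the tree truly "structured".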
Rather than when you've got things like the run loop in Ember or the zones in Angular, where you have all these mechanics to reconstruct the context of where you were, to make sure that you don't have some stale event listener. An event listener is created inside of a context, and you have to make sure that that context is either reconstructed or the event listener doesn't fire. All these problems just cease to exist when you take this approach. And so, if it's pertinent to this conversation, the surprising result for me was that if you're using essentially coroutines to manage your concurrency, you don't need event loops, you don't need buffers, you don't need any of this other stuff. You just use the JavaScript call stack. And that's enough. RICH: I'm not going to pretend to have fully understood everything you just said, but it does sound interesting. Svelte does have something not that dissimilar to Ember's run loop, because if you have two state changes right next to each other, X+=1, Y+=1, you want to have a single update resulting from those. So instead of instrumenting the code such that your components are updated immediately after X+=1, it waits until the end of the event loop and then flushes all of the pending changes simultaneously. So, what you're describing sounds quite wonderful and I hope to understand it better. You have also reminded me that Alex Matchneer implemented this idea in Svelte; it's called svelte-concurrency. And when he sent it to me, I was out in the woods somewhere and I couldn't take a look at it, and it went on my mental to-do list, and you just brought it to the top of that to-do list. So yeah, we have some common ground here, I think. CHARLES: All right. TARAS: This is a really, really fascinating conversation. Thank you, Rich, so much for joining us. CHARLES: Thank you for listening. If you or someone you know has something to say about building user interfaces that simply must be heard, please get in touch with us.
We can be found on Twitter at @thefrontside or over just plain old email at contact@frontside.io. Thanks and see you next time.

PODebug
#039 – O Bom, o Mau e o Feio: Três desenvolvedores em conflito

PODebug

Play Episode Listen Later Feb 17, 2019 60:15


This week our original team, starting from the concept by Larry Wall, author of the Perl language, who describes the three virtues of a programmer as laziness, impatience, and hubris, tries to identify the main characteristics that define a good developer. We analyze them from three main angles: the technical, the creative, and …

BSD Now
Episode 272: Detain the bhyve | BSD Now 272

BSD Now

Play Episode Listen Later Nov 15, 2018 68:39


Byproducts of reading OpenBSD's netcat code, learnings from porting your own projects to FreeBSD, OpenBSD's unveil(), NetBSD's Virtual Machine Monitor, what 'dependency' means in Unix init systems, jailing bhyve, and more. ##Headlines ###The byproducts of reading OpenBSD netcat code When I took part in a training course last year, I heard about netcat for the first time. During that class, the tutor showed some hacks and tricks for using netcat which appealed to me and motivated me to learn its guts. Fortunately, in the past two months I was not so busy, so I could spend my spare time diving into OpenBSD's netcat source code, and I got abundant byproducts from the process. (1) Brushing up socket programming. I wrote my first network application more than 10 years ago, and I have always thought the socket APIs are marvelous. Just ~10 functions (socket, bind, listen, accept…) with some IO multiplexing buddies (select, poll, epoll…) connect the whole world, wonderful! From that time, I developed a habit: when touching a new programming language, network programming is an essential exercise. Even though I don't write socket-related code now, reading netcat's socket code really refreshed my knowledge and taught me new things. (2) Writing a tutorial about netcat. I am a mediocre programmer and forget things when I don't use them for a long time. So I just take notes of what I think is useful. IMHO, this "tutorial" isn't really meant to teach others something; it is just a journal that I can refer to when I need it in the future. (3) Submitting patches to netcat. While reading the code, I also found bugs and some enhancements. Though these are trivial contributions to OpenBSD, I am still happy and enjoyed it. (4) Implementing a C++ encapsulation of libtls.
OpenBSD's netcat supports tls/ssl connections, but it makes you take full care of resource management (memory, sockets, etc.); otherwise a small mistake can lead to a resource leak, which is fatal for long-lived applications (in fact, the two bugs I reported to OpenBSD were both related to resource leaks). Therefore I developed a simple C++ library which wraps libtls, in the hope that it can free developers from this troublesome problem and let them put more energy into the application-logic part. Long story short, reading classical source code is a rewarding process, and you should consider trying it yourself. ###What I learned from porting my projects to FreeBSD Introduction I set up a local FreeBSD VirtualBox VM to test something, and it seems to work very well. Due to the novelty factor, I decided to get my software projects to build and pass their tests there. The Projects https://github.com/shlomif/shlomif-computer-settings/ (my dotfiles). https://web-cpan.shlomifish.org/latemp/ https://fc-solve.shlomifish.org/ https://www.shlomifish.org/open-source/projects/black-hole-solitaire-solver/ https://better-scm.shlomifish.org/source/ http://perl-begin.org/source/ https://www.shlomifish.org/meta/site-source/ Written using a mix of C, Perl 5, Python, Ruby, GNU Bash, XML, CMake, XSLT, XHTML5, XHTML1.1, Website META Language, JavaScript and more. They work fine on several Linux distributions and have Travis CI (https://en.wikipedia.org/wiki/TravisCI) using Ubuntu 14.04 hosts. Some pass builds and tests on AppVeyor/Win64. What I Learned: FreeBSD on VBox has become very reliable. Some executables on FreeBSD are in /usr/local/bin instead of /usr/bin. make on FreeBSD is not GNU make. m4 on FreeBSD is not compatible with GNU m4. Some CPAN modules fail to install using local-lib there. DocBook/XSL does not live under /usr/share/sgml. FreeBSD's grep does not have a "-P" flag by default. FreeBSD has no "nproc" command. Conclusion: It is easier to port a shell than a shell script.
— Larry Wall I ran into some cases where my scriptology was lacking and suboptimal, even for my own personal use, and fixed them. ##News Roundup ###OpenBSD’s unveil() One of the key aspects of hardening the user-space side of an operating system is to provide mechanisms for restricting which parts of the filesystem hierarchy a given process can access. Linux has a number of mechanisms of varying capability and complexity for this purpose, but other kernels have taken a different approach. Over the last few months, OpenBSD has inaugurated a new system call named unveil() for this type of hardening that differs significantly from the mechanisms found in Linux. The value of restricting access to the filesystem, from a security point of view, is fairly obvious. A compromised process cannot exfiltrate data that it cannot read, and it cannot corrupt files that it cannot write. Preventing unwanted access is, of course, the purpose of the permissions bits attached to every file, but permissions fall short in an important way: just because a particular user has access to a given file does not necessarily imply that every program run by that user should also have access to that file. There is no reason why your PDF viewer should be able to read your SSH keys, for example. Relying on just the permission bits makes it easy for a compromised process to access files that have nothing to do with that process’s actual job. In a Linux system, there are many ways of trying to restrict that access; that is one of the purposes behind the Linux security module (LSM) architecture, for example. The SELinux LSM uses a complex matrix of labels and roles to make access-control decisions. The AppArmor LSM, instead, uses a relatively simple table of permissible pathnames associated with each application; that approach was highly controversial when AppArmor was first merged, and is still looked down upon by some security developers. 
Mount namespaces can be used to create a special view of the filesystem hierarchy for a set of processes, rendering much of that hierarchy invisible and, thus, inaccessible. The seccomp mechanism can be used to make decisions on attempts by a process to access files, but that approach is complex and error-prone. Yet another approach can be seen in the Qubes OS distribution, which runs applications in virtual machines to strictly control what they can access. Compared to many of the options found in Linux, unveil() is an exercise in simplicity. This system call, introduced in July, has this prototype: int unveil(const char *path, const char *permissions); A process that has never called unveil() has full access to the filesystem hierarchy, modulo the usual file permissions and any restrictions that may have been applied by calling pledge(). Calling unveil() for the first time will “drop a veil” across the entire filesystem, rendering the whole thing invisible to the process, with one exception: the file or directory hierarchy starting at path will be accessible with the given permissions. The permissions string can contain any of “r” for read access, “w” for write, “x” for execute, and “c” for the ability to create or remove the path. Subsequent calls to unveil() will make other parts of the filesystem hierarchy accessible; the unveil() system call itself still has access to the entire hierarchy, so there is no problem with unveiling distinct subtrees that are, until the call is made, invisible to the process. If one unveil() call applies to a subtree of a hierarchy unveiled by another call, the permissions associated with the more specific call apply. Calling unveil() with both arguments as null will block any further calls, setting the current view of the filesystem in stone. Calls to unveil() can also be blocked using pledge(). 
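As a rough illustration of the semantics described above, here is a toy JavaScript model, purely for exposition. The real unveil() is a kernel-enforced C system call, and this sketch simplifies the permission handling (it treats 'x' and 'c' as plain letters and ignores pledge() interactions):

```javascript
// Toy model of unveil()'s path semantics: the first call drops a veil over
// everything, each call exposes one subtree with given permissions, the most
// specific matching rule wins, and unveil(null, null) locks the view.
function makeVeil() {
  const rules = [];   // { path, permissions }
  let locked = false;
  let veiled = false;
  return {
    unveil(path, permissions) {
      if (locked) throw new Error('unveil is locked');
      if (path === null && permissions === null) { locked = true; return; }
      veiled = true;
      rules.push({ path, permissions });
    },
    allowed(file, perm) {
      if (!veiled) return true;   // no veil yet: everything is visible
      // The most specific (longest) matching path prefix wins.
      const match = rules
        .filter((r) => file === r.path || file.startsWith(r.path + '/'))
        .sort((a, b) => b.path.length - a.path.length)[0];
      return !!match && match.permissions.includes(perm);
    },
  };
}

const v = makeVeil();
v.unveil('/home/user', 'r');
v.unveil('/home/user/tmp', 'rwc');
v.unveil(null, null);                                // lock the view
console.log(v.allowed('/home/user/doc.txt', 'r'));   // true
console.log(v.allowed('/home/user/doc.txt', 'w'));   // false
console.log(v.allowed('/home/user/tmp/x', 'w'));     // true (more specific rule)
console.log(v.allowed('/etc/passwd', 'r'));          // false (still veiled)
```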
Either way, once the view of the filesystem has been set up appropriately, it is possible to lock it so that the process cannot expand its access in the future should it be taken over and turn hostile. unveil() thus looks a bit like AppArmor, in that it is a path-based mechanism for restricting access to files. In either case, one must first study the program in question to gain a solid understanding of which files it needs to access before closing things down, or the program is likely to break. One significant difference (beyond the other sorts of behavior that AppArmor can control) is that AppArmor's permissions are stored in an external policy file, while unveil() calls are made by the application itself. That approach keeps the access rules tightly tied to the application and easy for the developers to modify, but it also makes it harder for system administrators to change them without having to rebuild the application from source. One can certainly aim a number of criticisms at unveil() — all of the complaints that have been leveled at path-based access control and more. But the simplicity of unveil() brings a certain kind of utility, as can be seen in the large number of OpenBSD applications that are being modified to use it. OpenBSD is gaining a base level of protection against unintended program behavior; while it is arguably possible to protect a Linux system to a much greater extent, the complexity of the mechanisms involved keeps that from happening in a lot of real-world deployments. There is a certain kind of virtue to simplicity in security mechanisms. ###NetBSD Virtual Machine Monitor (NVMM) The NVMM driver provides hardware-accelerated virtualization support on NetBSD. It is made of a machine-independent (MI) frontend, to which machine-dependent (MD) backends can be plugged. A virtualization API is provided in libnvmm, which allows you to easily create and manage virtual machines via NVMM.
Two additional components are shipped as demonstrators, toyvirt and smallkern: the former is a toy virtualizer that executes in a VM the 64-bit ELF binary given as an argument; the latter is an example of such a binary. Download The source code of NVMM, plus the associated tools, can be downloaded here. Technical details NVMM can support up to 128 virtual machines, each having a maximum of 256 VCPUs and 4GB of RAM. Each virtual machine is granted access to most of the CPU registers: the GPRs (obviously), the Segment Registers, the Control Registers, the Debug Registers, the FPU (x87 and SSE), and several MSRs. Events can be injected into the virtual machines to emulate device interrupts. A delay mechanism is used, and allows VMM software to schedule the interrupt right when the VCPU can receive it. NMIs can be injected as well, and use a similar mechanism. The host must always be x86_64, but the guest has no constraint on the mode, so it can be x86_32, PAE, real mode, and so on. The TSC of each VCPU is always re-based on the host CPU it is executing on, and is therefore guaranteed to increase regardless of the host CPU. However, it may not increase monotonically, because it is not possible to fully hide the host effects on the guest during #VMEXITs. When there are more VCPUs than the host TLB can deal with, NVMM uses a shared ASID, and flushes the shared-ASID VCPUs on each VM switch. The different intercepts are configured in such a way that they cover everything that needs to be emulated. In particular, the LAPIC can be emulated by VMM software, by intercepting reads/writes to the LAPIC page in memory, and monitoring changes to CR8 in the exit state.
###What 'dependency' means in Unix init systems is underspecified (utoronto.ca) I was reading Davin McCall's On the vagaries of init systems (via) when I ran across the following, about the relationship between various daemons (services, etc): I do not see any compelling reason for having ordering relationships without actual dependency, as both Nosh and Systemd provide for. In comparison, Dinit's dependencies also imply an ordering, which obviates the need to list a dependency twice in the service description. Well, this may be an easy one but it depends on what an init system means by 'dependency'. Let's consider syslog and the SSH daemon. I want the syslog daemon to be started before the SSH daemon is started, so that the SSH daemon can log things to it from the beginning. However, I very much do not want the SSH daemon to not be started (or to be shut down) if the syslog daemon fails to start or later fails. If syslog fails, I still want the SSH daemon to be there so that I can perhaps SSH into the machine and fix the problem. This is generally true of almost all daemons; I want them to start after syslog, so that they can syslog things, but I almost never want them to not be running if syslog failed. (And if for some reason syslog is not configured to start, I want enabling and starting, say, SSH, to also enable and start the syslog daemon.) In general, there are three different relationships between services that I tend to encounter: a hard requirement, where service B is useless or dangerous without service A. For instance, many NFS v2 and NFS v3 daemons basically don't function without the RPC portmapper alive and active. On any number of systems, firewall rules being in place are a hard requirement to start most network services; you would rather your network services not start at all than that they start without your defenses in place.
a want, where service B wants service A to be running before B starts up, and service A should be started even if it wouldn't otherwise be, but the failure of A still leaves B functional. Many daemons want the syslog daemon to be started before they start but will run without it, and often you want them to do so, so that at least some of the system works even if there is, say, a corrupt syslog configuration file that causes the daemon to error out on start. (But some environments want to hard-fail if they can't collect security-related logging information, so they might make rsyslogd a requirement instead of a want.) an ordering, where if service A is going to be started, B wants to start after it (or before it), but B isn't otherwise calling for A to be started. We have some of these in our systems, where we need NFS mounts done before cron starts and runs people's @reboot jobs but neither cron nor NFS mounts exactly or explicitly want each other. (The system as a whole wants both, but that's a different thing.) Given these different relationships and the implications for what the init system should do in different situations, talking about 'dependency' in init systems is kind of underspecified. What sort of dependency? What happens if one service doesn't start or fails later? My impression is that generally people pick a want relationship as the default meaning for init system 'dependency'. Usually this is fine; most services aren't actively dangerous if one of their declared dependencies fails to start, and it's generally harmless on any particular system to force a want instead of an ordering relationship because you're going to be starting everything anyway. (In my example, you might as well say that cron on the systems in question wants NFS mounts. There is no difference in practice; we already always want to do NFS mounts and start cron.)
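The three relationships can be sketched as a toy init graph. The names and structure here are hypothetical, purely to illustrate the distinction between a hard requirement ('requires'), a want ('wants'), and a pure ordering ('after'):

```javascript
// Toy init graph: 'requires' means B fails if A fails; 'wants' means A is
// pulled in and started for B, but A's failure is ignored; 'after' affects
// only ordering and does not pull A in at all.
const services = {
  syslog: { start: () => true, requires: [], wants: [], after: [] },
  sshd:   { start: () => true, requires: [], wants: ['syslog'], after: [] },
};

function startService(name, started = new Map()) {
  if (started.has(name)) return started.get(name);
  const svc = services[name];
  for (const dep of svc.requires) {
    if (!startService(dep, started)) {  // hard requirement failed
      started.set(name, false);
      return false;
    }
  }
  for (const dep of svc.wants) {
    startService(dep, started);         // start it, but ignore failure
  }
  // 'after' deps would only reorder services already being started; a full
  // implementation would topologically sort on them rather than recurse.
  started.set(name, svc.start());
  return started.get(name);
}

// Even if syslog fails to start, sshd still comes up, which is exactly the
// behavior described above for a 'want':
services.syslog.start = () => false;
console.log(startService('sshd')); // true
```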
###Jailing The bhyve Hypervisor As FreeBSD nears the final 12.0-RELEASE release engineering cycles, I'd like to take a moment to document a cool new feature coming in 12: jailed bhyve. You may notice that I use HardenedBSD instead of FreeBSD in this article. There is no functional difference in bhyve on HardenedBSD versus bhyve on FreeBSD. The only difference between HardenedBSD and FreeBSD is the additional security offered by HardenedBSD. The steps I outline here work for both FreeBSD and HardenedBSD. These are the bare minimum steps, no extra work needed for either FreeBSD or HardenedBSD. A Gentle History Lesson At work in my spare time, I'm helping develop a malware lab. Due to the nature of the beast, we would like to use bhyve on HardenedBSD. Starting with HardenedBSD 12, non-Cross-DSO CFI, SafeStack, Capsicum, ASLR, and strict W^X are all applied to bhyve, making it an extremely hardened hypervisor. So, the work to support jailed bhyve is sponsored by both HardenedBSD and my employer. We've also jointly worked on other bhyve hardening features, like protecting the VM's address space using guard pages (mmap(…, MAP_GUARD, …)). Further work is being done in a project called "malhyve." Only those modifications to bhyve/malhyve that make sense to upstream will be upstreamed. Initial Setup We will not go through the process of creating the jail's filesystem. That process is documented in the FreeBSD Handbook. For UEFI guests, you will need to install the uefi-edk2-bhyve package inside the jail. I network these jails with traditional jail networking. I have tested vnet jails with this setup, and that works fine, too. However, there is no real need to hook the jail up to any network so long as bhyve can access the tap device. In some cases, the VM might not need networking, in which case you can use a network-less VM in a network-less jail. By default, access to the kernel side of bhyve is disabled within jails.
We need to set allow.vmm in our jail.conf entry for the bhyve jail. We will use the following in our jail, so we will need to set up devfs(8) rules for them: A ZFS volume A null-modem device (nmdm(4)) UEFI GOP (no devfs rule, but IP assigned to the jail) A tap device Conclusion The bhyve hypervisor works great within a jail. When combined with HardenedBSD, bhyve is extremely hardened: PaX ASLR is fully applied due to compilation as a Position-Independent Executable (HardenedBSD enhancement) PaX NOEXEC is fully applied (strict W^X) (HardenedBSD enhancement) Non-Cross-DSO CFI is fully applied (HardenedBSD enhancement) Full RELRO (RELRO + BINDNOW) is fully applied (HardenedBSD enhancement) SafeStack is applied to the application (HardenedBSD enhancement) Jailed (FreeBSD feature written by HardenedBSD) Virtual memory protected with guard pages (FreeBSD feature written by HardenedBSD) Capsicum is fully applied (FreeBSD feature) Bad guys are going to have a hard time breaking out of the userland components of bhyve on HardenedBSD. :) ##Beastie Bits GhostBSD 18.10 has been released Project Trident RC3 has been released The OpenBSD Foundation receives the first Silver contribution from a single individual Monitoring pf logs gource NetBSD on the RISC-V is alive The X hole Announcing the pkgsrc-2018Q3 release (2018-10-05) NAT performance on EdgeRouter X and Lite with EdgeOS, OpenBSD, and OpenWRT UNIX (as we know it) might not have existed without Mrs. Thompson Free Pizza for your dev events Portland BSD Pizza Night: Nov 29th 7pm ##Feedback/Questions Dennis - Core developers leaving illumOS? Ben - Jumping from snapshot to snapshot Ias - Question about ZFS snapshots Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv

Kodsnack
Kodsnack 271 - Skriva sitt eget socker

Kodsnack

Play Episode Listen Later Jul 31, 2018 71:59


Fredrik and Kristoffer talk about weather data and the value of collected data. Old papers, languages designed without anyone involved ever having seen a computer, exciting things to livecode, and more. Kristoffer needs to do new things to get topics to give presentations about. We work our way all the way back to Lisp's infancy before working forward again, all the way to Go's garbage collection. To wrap up we talk, starting from Linus Torvalds' correspondence around Linux kernel development, about communicating well and badly, and discuss choosing to amplify or dampen the waves on the water, whether you are transmitting or reacting. A big thank you to Cloudnet, who sponsor our VPS! Do you have comments, questions or tips? We are @kodsnack, @tobiashieta, @iskrig, @itssotoday and @bjoreman on Twitter, have a page on Facebook, and can be reached by email at info@kodsnack.se if you want to write at more length. We read everything that is sent in. If you like Kodsnack, please do review us in iTunes! Links Uppsala central station Weather data seems to have been collected in Sweden since 1858 Linuxconf.au 2019 - the call for papers has now closed Lisp John McCarthy John McCarthy's first paper on Lisp Paper on the history of Lisp Kristoffer's C implementation of Lisp JS Party The episode with the IoT keynote Guy Steele Hal Abelson Gerald Sussman The lambda papers - papers on Scheme S-expressions M-expressions Donald Knuth TeX Bram Cohen BitTorrent Organizing programs without classes Self Prototype-based inheritance Our "review" of the ES6 class construct in JavaScript Elm Destroy all software Wat The halting problem Lambda calculus Alonzo Church DHH's first demo of Rails Noam Chomsky Chomsky's hierarchy Perl Idris Curry On Videos from this year's Curry On Larry Wall Perl 6 Larry Wall talks Perl 6 at Curry On Go keynote from ISMM (International Symposium on Memory Management) on Go's garbage collection Odd commits in the Linux kernel's Git history Martin - Grapefrukt Holedown - great fun!
Mr Driller Titles I dug holes in the sun A summer is just weather There must be something wrong with the meter Force them to eat cucumbers Out of things I have already done They knew someone who knew someone who had a computer available So he basically invents the if statement Implement a minimal Lisp live If you know what you are talking about, you can explain things very clearly Formulas and "easily seen" JavaScript is confusing enough as it is (That whole thing about) Syntactic sugar (When you can) Write your own sugar Shape the language into your own You sit and design the editor 100% of the time My first SUSECON A cluster running on a mainframe in Germany Implementing Lisp in Lisp Being offended is something I do

Philip Guo - podcasts and vlogs - pgbovine.net
PG Podcast - Episode 37 - Henry Zhu on humans of open-source software

Philip Guo - podcasts and vlogs - pgbovine.net

Play Episode Listen Later Jun 12, 2018


Twitter: https://twitter.com/pgbovineSupport with Patreon, PayPal, or credit/debit: http://pgbovine.net/support.htmhttp://pgbovine.net/PG-Podcast-37-Henry-Zhu.htm- [Things You Should Never Do, Part I - Netscape rewriting their entire codebase](https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/) by Joel Spolsky- [Google and HTTP](http://this.how/googleAndHttp/) by Dave Winer- [#SmooshGate FAQ](https://developers.google.com/web/updates/2018/03/smooshgate)- [Diligence, Patience, and Humility](https://www.oreilly.com/openbook/opensources/book/larry.html) by Larry Wall- [Roads and Bridges: The Unseen Labor Behind Our Digital Infrastructure](https://www.fordfoundation.org/library/reports-and-studies/roads-and-bridges-the-unseen-labor-behind-our-digital-infrastructure/) by Nadia Eghbal- [PG Podcast - Episode 35 - Audrey Boguchwal + Nadia Eghbal on sustainable online communities](http://pgbovine.net/PG-Podcast-35-Audrey-Boguchwal-and-Nadia-Eghbal.htm)Recorded: 2018-06-13 (2)

Soul Donkey Music
Episode 008

Soul Donkey Music

Feb 28, 2018 (77:16)


"Hubris itself will not let you be an artist." -- Larry Wall

Mapping The Journey
Episode 13: Interview with Damian Conway, designer of Perl 6 programming language

Mapping The Journey

Nov 9, 2017 (56:07)


Damian Conway is a computer scientist, a member of the Perl community, and the author of several books. He is perhaps best known for his contributions to CPAN, for his role in the design of Perl 6, and for his Perl programming training courses. He has won the Larry Wall Award for CPAN contributions three times, and he worked with Larry Wall on the design of Perl 6 for more than a decade.

科技最前沿,论天文物理 人工智能 数码编程 大数据等
133: Learn something new: who are the fathers of the world's popular programming languages?

科技最前沿,论天文物理 人工智能 数码编程 大数据等

May 21, 2017 (36:17)


W3School, 2017-05-18 20:31. As IT developers, we all work in different languages, but who created those languages? W3C中文网 (w3schools.wang) has collected and organized information about the creators of today's most popular programming languages. Before reading on, how many of them can you name? Let's learn something new.

C. Dennis Ritchie, the father of C and of UNIX, headed the systems software research department of the computer science research center at Bell Labs, then part of Lucent Technologies. In 1978 he and Brian W. Kernighan published the classic The C Programming Language, since translated into many languages and still one of the most authoritative C textbooks. In technical discussions he was often called dmr, his email address at Bell Labs. Revered as the father of C, "the invisible king", Ritchie laid foundations of computing and networking on which Steve Jobs and other IT giants stood. He died on October 12, 2011 (October 13 Beijing time) at the age of 70, in the same month as Jobs, yet received nothing like the worldwide mourning Jobs did.

Java. James Gosling (born May 19, 1955, in Canada) is a software expert and one of the co-creators of the Java programming language, generally recognized as the "father of Java". At twelve he was already designing electronic game machines and helping neighbors repair harvesters. As a student he worked on program development in the astronomy department, and in 1977 he received a bachelor's degree in computer science from the University of Calgary in Canada. In 1981 he wrote Gosling Emacs, an Emacs-like editor for Unix (written in C, with Mocklisp as its extension language). In 1983 he received a PhD in computer science from Carnegie Mellon University with the thesis "The Algebraic Manipulation of Constraints". After graduating he joined IBM and designed the NeWS system for IBM's first-generation workstation, but the work got little attention, and he later moved to Sun. In 1990 he began the "Green Project" with Patrick Naughton, Mike Sheridan and others; the language it produced, called Oak, was later renamed Java. At the end of 1994 Gosling demonstrated Java programs at the Technology, Education and Design conference held in Silicon Valley, and by 2000 Java had become the world's most popular computer language.

C++. Bjarne Stroustrup, born in Denmark in 1950, studied at Aarhus University in Denmark and then at Cambridge in England. He headed AT&T's large-scale programming research department; has been a member of AT&T, Bell Labs and the ACM; was College Chair Professor and "Distinguished Professor" of computer science at Texas A&M University; and is now a managing director in Morgan Stanley's information technology division, a visiting professor of computer science at Columbia University, a member of the US National Academy of Engineering, and a senior member of the IEEE, ACM and CHM. In 1979 he began developing a language then called "C with Classes", which later evolved into C++. The ANSI/ISO C++ standard was established in 1998, the same year he published the third edition of his classic The C++ Programming Language. Standardization marked the realization of the grand design Dr. Stroustrup had pursued for twenty years.

C#. Anders Hejlsberg (born December 1960), a Dane, was the principal author of Borland's Turbo Pascal compiler. After joining Microsoft, he led in turn Visual J++, .NET and C#. Born in Copenhagen, he studied engineering at the Technical University of Denmark but did not graduate. As a student he wrote programs for the Nascom microcomputer, including a "Blue Label" Pascal compiler for the Nascom-2 computer, which he later rewrote for the DOS era. At the time he ran a Danish company called Poly Data, where he wrote the core of the Compass Pascal compiler, later known as Poly Pascal. In 1986 he first met Philippe Kahn, the founder of Borland.

JavaScript. Brendan Eich (born 1961), an American programmer and entrepreneur, is the principal creator and architect of JavaScript and served as chief technology officer of Mozilla. Born in Sunnyvale, California, he initially majored in physics at Santa Clara University, switched to computer science in his junior year as his interests changed, and earned a bachelor's degree in mathematics and computer science. In 1986 he received a master's degree in computer science from the University of Illinois at Urbana-Champaign. He then worked at SGI for seven years, mainly on operating systems and networking, followed by three years at MicroUnity. Joining Netscape on April 4, 1995, he created JavaScript for the Netscape browser; it went on to become one of the most widely used scripting languages in browsers. In 1998 he helped found Mozilla.org, and after AOL decided to shut down Netscape in 2003, he helped establish the Mozilla Foundation.

Python. Python was created by Guido van Rossum. During Christmas 1989 in Amsterdam, to pass the dull holiday, he decided to develop a new script interpreter as a successor to the ABC language. He chose the name Python (meaning "big snake") because he was a fan of the comedy troupe Monty Python. Van Rossum received a master's degree in mathematics and computer science from the University of Amsterdam in 1982 and joined CWI the same year as a researcher. He created Python in 1989, while still at CWI (Centrum voor Wiskunde en Informatica, the Dutch national research institute for mathematics and computer science), and released the first public version in early 1991. He moved from the Netherlands to the United States in 1995, where he met his current wife. In early 2003, he and his family, including his son Orlijn, born in 2001, were living in the northern Virginia suburbs of Washington, DC; they later moved to Silicon Valley, where from 2005 he worked at Google, spending half his time on Python. He now works at Dropbox.

PHP. PHP was created in 1994 by Rasmus Lerdorf, initially as a simple set of tools written in Perl to maintain his personal home page: displaying his résumé and tracking page traffic. He later rewrote the tools in C, adding database access, and combined them with a form interpreter under the name PHP/FI, which could connect to databases and produce simple dynamic web pages. In 1995 he released the first version publicly as Personal Home Page Tools (PHP Tools), wrote some introductory documentation, and shipped PHP 1.0, which offered simple features such as a guestbook and a visitor counter. As more and more sites adopted PHP, users pressed for features such as loop statements and array variables. With new contributors joining the effort, Lerdorf publicly released PHP/FI on June 8, 1995, hoping the community would accelerate development and bug hunting. This release, named PHP 2, already had the outlines of PHP: Perl-like variable naming, form handling, and the ability to embed code in HTML for execution. Its syntax also resembled Perl, with more restrictions but greater simplicity and flexibility. PHP/FI added support for MySQL, which established PHP's position in dynamic web development; by the end of 1996 some 15,000 sites were using PHP/FI. In 1997 two Israeli programmers at Technion IIT, Zeev Suraski and Andi Gutmans, rewrote PHP's parser, forming the basis of PHP 3, and at this point the name was changed to PHP: Hypertext Preprocessor. After several months of testing, the team released PHP/FI 2 in November 1997, then began open testing of PHP 3, which was officially released in June 1998. After PHP 3 shipped, Suraski and Gutmans began rewriting PHP's core; the parser they released in 1999 was named the Zend Engine, and they founded Zend Technologies in Ramat Gan, Israel, to manage PHP's development.

Perl. Perl was originally designed by Larry Wall, who released it on December 18, 1987. Perl borrows features from C, sed, awk, shell scripting and many other programming languages; its most important features are built-in regular expressions and the huge third-party code library CPAN. Perl is called the "Practical Extraction and Report Language": an expansion, not merely an abbreviation. Larry Wall, Perl's creator, proposed that first expansion, but it soon gained a second as well, which is why "Perl" is not written in all capitals. There is no need to argue over which is correct; Larry endorses both.

Ruby. Ruby, a simple and agile object-oriented scripting language, was developed in the 1990s by Yukihiro Matsumoto of Japan and is released under the GPL and the Ruby License. Matsumoto, whom everyone calls Matz, is a professional programmer at the Japanese open-source company Netlab and one of Japan's best-known open-source evangelists. He has released many open-source products, including cmail, an Emacs-based mail client written entirely in Lisp. Ruby was his first piece of software to become famous outside Japan.

Go. Go was officially announced in November 2009 as an open-source project, implemented first on Linux and Mac OS X, with a Windows implementation added later. Veteran Google software engineer Rob Pike said, "Go gives me a level of development efficiency I have never experienced before." Like today's C++ or C, Pike explained, Go is a systems language: it allows rapid development while being a genuinely compiled language, and it was open-sourced "because we think it has become useful and powerful." Pike is a Unix pioneer who worked at Bell Labs with Ken Thompson and Dennis M. Ritchie on the earliest Unix, and he designed UTF-8. He once made a brief, good-natured appearance on David Letterman's late-night show, playing along with a comedian's antics. Impressively, Pike is also said to have won a silver medal in archery at the 1980 Olympics, and he is an accomplished amateur astronomer whose gamma-ray telescope design came close to being used by NASA on the Space Shuttle. A former Member of Technical Staff at AT&T Bell Labs, he now researches operating systems at Google.

Delphi. Delphi is a well-known rapid application development (RAD) tool for the Windows platform. Its predecessor was Borland Turbo Pascal, which flourished in the DOS era; the first version of Delphi was developed by the American company Borland in 1995, with Anders Hejlsberg as its chief architect. After several years of development, the product passed to Embarcadero. Delphi is an integrated development environment (IDE) built around Object Pascal, a descendant of traditional Pascal; through its graphical development environment, the IDE, the VCL component library, the compiler and built-in database connectivity, it forms an application development tool centered on object-oriented programming. Hejlsberg, the same Dane profiled under C# above, is thus the father of Turbo Pascal, Delphi, C# and TypeScript, and a founder of .NET.

Lua. Lua is a compact scripting language developed in 1993 by a research group at the Pontifical Catholic University of Rio de Janeiro in Brazil, consisting of Roberto Ierusalimschy, Waldemar Celes and Luiz Henrique de Figueiredo. It was designed to be embedded in applications, giving them flexible extension and customization capabilities. Written in standard C, Lua compiles and runs on virtually every operating system and platform. By design (a consequence of its positioning), Lua does not ship with a powerful standard library, so it is not well suited to building standalone applications. A parallel JIT project provides just-in-time compilation for Lua on specific platforms.

Objective-C. Brad Cox is a computer scientist with a doctorate in mathematical biology, known for his work in software engineering (particularly code reuse and software composition) and for Objective-C. In the early 1980s, at his company Stepstone, Cox invented Objective-C, basing it on a language called Smalltalk-80. Objective-C is built on top of C: it extends the C language with the ability to create and manipulate objects.

易语言 (Easy Language). 易语言 is a programming language created in China whose program code is written in Chinese; it prides itself on being "easy". Its creator is Wu Tao, and early versions were called the E language. The earliest release dates back to September 11, 2000; the language was born of an attempt to write programs in Chinese. Since 2000 it has grown to a considerable scale in both features and user numbers. Wu Tao taught himself programming from 1990 and, as one of China's earliest shareware authors, began developing shareware in 1994. In 1998, at the invitation of the Beijing Qianweitian company, he co-developed CCED2000 with them, producing a trial version in just half a year and following it with five or six upgraded releases. Although long practiced and highly fluent with development tools produced by foreign companies, Wu Tao remained troubled by them. He believed that one root cause holding back China's software industry was that Chinese programmers had no programming language of their own: some foreign languages had been localized, but incompletely, short of rebuilding them with a fully Chinese core. Many people wanted to learn to program so as to use computer resources flexibly and fully, but could not read English, especially technical computer English, and struggled to get over that threshold. So in early 2000 Wu Tao began developing China's first all-Chinese programming system, "易语言". Drawing on his rich experience in software development and project management, he completed the first version after a period of effort. 易语言 is especially suitable for students: learners are eager for knowledge, and the flowchart features in the software were to a large extent designed for that user group. We thank all of these people for giving us these excellent programming languages.

System Smarts - System Design with John Ackley
203: Perl and Systems with Larry Wall

System Smarts - System Design with John Ackley

Aug 24, 2016 (57:26)


Larry Wall is a linguist, computer scientist, and community organizer. Larry is the creator of the Perl programming language, as well as many other useful software tools. Perl's usefulness led to its popularity, which in turn led to a large community of developers and maintainers. We talk about the role of programming languages in systems, the evolution of Perl itself, and then explore the nature of the community that organized around Perl and its creator.

Rebuild
78: Have The Appropriate Amount Of Fun (Larry Wall)

Rebuild

Feb 4, 2015 (47:55)


Larry Wall joins me to talk about Perl 6.

Show Notes
- FOSDEM 2015 - Get ready to party!
- Perl 6
- Parrot VM
- Pugs - pugscode
- sorear/niecza
- Not Quite Perl
- MoarVM - A VM for NQP and Rakudo Perl 6
- rakudo.org | Rakudo Perl 6
- Curtis Poe: Perl 6 For Mere Mortals
- Perl 6 RFC Index
- Gradual Typing
- Inline::Perl5
- FOSDEM 2015 - Over 9000: The Future of JRuby
- The Wall Nuthouse
- #perl6 IRC channel
- Larry Wall (@TimToady) | Twitter
- Tim O'Reilly
- Rosetta Code

Roderick on the Line
Ep. 07: "The Compulsive Sherry Algorithm"

Roderick on the Line

Oct 26, 2011


Ep. 07: “The Compulsive Sherry Algorithm” - Roderick on the Line - Merlin Mann on Huffduffer The Problems: Sidling up to German Sex Tourists; Elephant 6 bands decamping to a new porch; more on John’s uncontrollable steaming; almost closing the thread on the Bruce Vilanch problem; FDA’s daily requirement of Femineseum; why John treasures his collection of Braille Playboys; pitching the pilot for DecencyBusters; a pledge of index cards to help deflect John’s photons; the inexcusable lack of a decent Grand Guignol magazine; the long menarche that preceded our heavy internet period; John’s studied reluctance to buying young boys; Merlin’s reflections on accepting a strong man’s syllabus; why so few teens today offer to make candy penis bang bang; grave concern for the Teutonic hitting-and-poo thing; why you never fuck with Leonard Bernstein; Merlin’s culpability for Florida’s many orphan towel-babies; how Harold Ramis’ heart broke and broke; why John’s compound may be neither decadent nor depraved; chronicling our mass exodus from wool; knowing when your sword deserves its own bathrobe; strategies for rebooting John’s complex legacy; the spelling error that created a frottage industry; Wilde’s femoral focus on rentboy stickling; some benefits of packing an improbably large crossbow; the surprising trouble with faking The Loco Eyes; the tactical defense strategy of misquoting Larry Wall; finding the proper cave for Cartoon Billy Barty; flying a rainbow flag of convenience; why every arsenal should make room for a mildly inconvenient rose bush; the uncanny effectiveness of John’s splintered pickets; and, finally learning what John’s been hiding behind that steel-reinforced door. 40% of Canadians Will Die At Some Point In Their Lives by tedSeverson Roderick on the Line Ep. 07 Show Notes